[
  {
    "path": ".github/workflows/add-to-devrel.yml",
    "content": "name: 'Add to DevRel Project'\n\non:\n  issues:\n    types:\n      - opened\n      - reopened\n  pull_request_target:\n    types:\n      - opened\n      - reopened\n\njobs:\n  add-to-project:\n    name: Add issue/PR to project\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/add-to-project@v1.0.0\n        with:\n          # add to DevRel Project #117\n          project-url: https://github.com/orgs/near/projects/117\n          github-token: ${{ secrets.PROJECT_GH_TOKEN }}\n"
  },
  {
    "path": ".github/workflows/lint.yml",
    "content": "name: Lint\n\non:\n  pull_request:\n    branches: [master, main]\n  merge_group:\n\nconcurrency:\n  group: ci-${{ github.ref }}-${{ github.workflow }}\n  cancel-in-progress: true\n\njobs:\n  markdown-lint:\n    name: markdown-lint\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v3\n        with:\n          fetch-depth: 0\n      # lint only changed files\n      - uses: tj-actions/changed-files@v46\n        id: changed-files\n        with:\n          files: \"**/*.md\"\n          separator: \",\"\n      - uses: DavidAnson/markdownlint-cli2-action@v19\n        if: steps.changed-files.outputs.any_changed == 'true'\n        with:\n          config: .markdownlint.json\n          globs: |\n            ${{ steps.changed-files.outputs.all_changed_files }}\n          separator: \",\"\n\n  markdown-link-check:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@master\n      - uses: gaurav-nelson/github-action-markdown-link-check@v1\n        with:\n          use-quiet-mode: \"yes\"\n          #        use-verbose-mode: 'yes'\n          config-file: \".mlc_config.json\"\n          folder-path: \"neps\"\n"
  },
  {
    "path": ".github/workflows/spellcheck.yml",
    "content": "name: spellchecker\n\non:\n  pull_request:\n    branches:\n      - master\n\njobs:\n  misspell:\n    name: runner / misspell\n    runs-on: ubuntu-latest\n    steps:\n      - name: Check out code.\n        uses: actions/checkout@v1\n      - name: misspell\n        id: check_for_typos\n        uses: reviewdog/action-misspell@v1\n        with:\n          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n          path: \"./specs\"\n          locale: \"US\"\n"
  },
  {
    "path": ".gitignore",
    "content": "/docs\n.idea\n.DS_Store\n.vscode\n"
  },
  {
    "path": ".markdownlint.json",
    "content": "{\n  \"default\": true,\n  \"MD001\": false,\n  \"MD013\": false,\n  \"MD024\": { \"siblings_only\": true },\n  \"MD025\": false,\n  \"MD033\": false,\n  \"MD034\": false,\n  \"MD040\": false,\n  \"MD041\": false,\n  \"MD046\": false,\n  \"whitespace\": false\n}\n"
  },
  {
    "path": ".mlc_config.json",
    "content": "{\n  \"ignorePatterns\": [\n    {\n      \"pattern\": \"^/\"\n    },\n    {\n      \"pattern\": \"^https://codepen.io\"\n    },\n    {\n      \"pattern\": \"^https://stackoverflow.com\"\n    },\n    {\n      \"pattern\": \"^https://www.researchgate.net\"\n    },\n    {\n      \"pattern\": \"^https://pages.near.org/papers/the-official-near-white-paper/\"\n    }\n  ],\n  \"timeout\": \"20s\",\n  \"retryOn429\": true,\n  \"retryCount\": 5,\n  \"fallbackRetryDelay\": \"30s\",\n  \"aliveStatusCodes\": [200, 206]\n}\n"
  },
  {
    "path": "CODEOWNERS",
    "content": "*           @near/nep-moderators\n"
  },
  {
    "path": "README.md",
    "content": "# NEAR Protocol Specifications and Standards\n\n[![project chat](https://img.shields.io/badge/zulip-join_chat-brightgreen.svg)](https://near.zulipchat.com/#narrow/stream/320497-nep-standards)\n\nThis repository hosts the current NEAR Protocol specification and standards.\nThis includes the core protocol specification, APIs, contract standards, processes, and workflows.\n\nChanges to the protocol specification and standards are called NEAR Enhancement Proposals (NEPs).\n\n## NEPs\n\n| NEP #                                                             | Title                                                             | Author                                            | Status     |\n| ----------------------------------------------------------------- | ----------------------------------------------------------------- | ------------------------------------------------- | ---------- |\n| [0001](https://github.com/near/NEPs/blob/master/neps/nep-0001.md) | NEP Purpose and Guidelines                                        | @ori-near @bowenwang1996 @austinbaggio @frol      | Living     |\n| [0021](https://github.com/near/NEPs/blob/master/neps/nep-0021.md) | Fungible Token Standard (Deprecated)                              | @evgenykuzyakov                                   | Deprecated |\n| [0141](https://github.com/near/NEPs/blob/master/neps/nep-0141.md) | Fungible Token Standard                                           | @evgenykuzyakov @oysterpack, @robert-zaremba      | Final      |\n| [0145](https://github.com/near/NEPs/blob/master/neps/nep-0145.md) | Storage Management                                                | @evgenykuzyakov                                   | Final      |\n| [0148](https://github.com/near/NEPs/blob/master/neps/nep-0148.md) | Fungible Token Metadata                                           | @robert-zaremba @evgenykuzyakov @oysterpack       | Final      |\n| [0171](https://github.com/near/NEPs/blob/master/neps/nep-0171.md) 
| Non Fungible Token Standard                                       | @mikedotexe @evgenykuzyakov @oysterpack           | Final      |\n| [0177](https://github.com/near/NEPs/blob/master/neps/nep-0177.md) | Non Fungible Token Metadata                                       | @chadoh @mikedotexe                               | Final      |\n| [0178](https://github.com/near/NEPs/blob/master/neps/nep-0178.md) | Non Fungible Token Approval Management                            | @chadoh @thor314                                  | Final      |\n| [0181](https://github.com/near/NEPs/blob/master/neps/nep-0181.md) | Non Fungible Token Enumeration                                    | @chadoh @thor314                                  | Final      |\n| [0199](https://github.com/near/NEPs/blob/master/neps/nep-0199.md) | Non Fungible Token Royalties and Payouts                          | @thor314 @mattlockyer                             | Final      |\n| [0245](https://github.com/near/NEPs/blob/master/neps/nep-0245.md) | Multi Token Standard                                              | @zcstarr @riqi @jriemann @marcos.sun              | Final      |\n| [0256](https://github.com/near/NEPs/blob/master/neps/nep-0256.md) | Non-Fungible Token Events                                         | @telezhnaya                                       | Final      |\n| [0264](https://github.com/near/NEPs/blob/master/neps/nep-0264.md) | Promise Gas Weights                                               | @austinabell                                      | Final      |\n| [0297](https://github.com/near/NEPs/blob/master/neps/nep-0297.md) | Events Standard                                                   | @telezhnaya                                       | Final      |\n| [0300](https://github.com/near/NEPs/blob/master/neps/nep-0300.md) | Fungible Token Events                                         | @telezhnaya                                       | Final      |\n| 
[0330](https://github.com/near/NEPs/blob/master/neps/nep-0330.md) | Source Metadata                                                   | @BenKurrek                                        | Final      |\n| [0364](https://github.com/near/NEPs/blob/master/neps/nep-0364.md) | Efficient signature verification and hashing precompile functions | @blasrodri                                        | Final      |\n| [0366](https://github.com/near/NEPs/blob/master/neps/nep-0366.md) | Meta Transactions                                                 | @ilblackdragon @e-uleyskiy @fadeevab              | Final      |\n| [0368](https://github.com/near/NEPs/blob/master/neps/nep-0368.md) | Bridge Wallets                  | @lewis-sqa                             | Final      |\n| [0393](https://github.com/near/NEPs/blob/master/neps/nep-0393.md) | Soulbound Token (SBT)                                             | @robert-zaremba                                   | Final      |\n| [0399](https://github.com/near/NEPs/blob/master/neps/nep-0399.md) | Flat Storage                                                      | @Longarithm @mzhangmzz                            | Final      |\n| [0408](https://github.com/near/NEPs/blob/master/neps/nep-0408.md) | Injected Wallet API                  | @MaximusHaximus @lewis-sqa                             | Final      |\n| [0413](https://github.com/near/NEPs/blob/master/neps/nep-0413.md) | Near Wallet API - support for signMessage method                  | @gagdiez @gutsyphilip                             | Final      |\n| [0418](https://github.com/near/NEPs/blob/master/neps/nep-0418.md) | Remove attached_deposit view panic                                | @austinabell                                      | Final      |\n| [0448](https://github.com/near/NEPs/blob/master/neps/nep-0448.md) | Zero-balance Accounts                                             | @bowenwang1996                                    | Final      |\n| 
[0452](https://github.com/near/NEPs/blob/master/neps/nep-0452.md) | Linkdrop Standard                                                 | @benkurrek @miyachi                               | Final      |\n| [0455](https://github.com/near/NEPs/blob/master/neps/nep-0455.md) | Parameter Compute Costs                                           | @akashin @jakmeier                                | Final      |\n| [0488](https://github.com/near/NEPs/blob/master/neps/nep-0488.md) | Host Functions for BLS12-381 Curve Operations                     | @olga24912                                        | Final      |\n| [0491](https://github.com/near/NEPs/blob/master/neps/nep-0491.md) | Non-Refundable Storage Staking                                    | @jakmeier                                         | Final      |\n| [0492](https://github.com/near/NEPs/blob/master/neps/nep-0492.md) | Restrict creation of Ethereum Addresses                           | @bowenwang1996                                    | Final      |\n| [0508](https://github.com/near/NEPs/blob/master/neps/nep-0508.md) | Resharding v2                                                     | @wacban @shreyan-gupta @walnut-the-cat            | Final      |\n| [0509](https://github.com/near/NEPs/blob/master/neps/nep-0509.md) | Stateless validation Stage 0                                      | @robin-near @pugachAG @Longarithm @walnut-the-cat | Final      |\n| [0514](https://github.com/near/NEPs/blob/master/neps/nep-0514.md) | Fewer Block Producer Seats in `testnet`                           | @nikurt                                           | Final      |\n| [0518](https://github.com/near/NEPs/blob/master/neps/nep-0518.md) | Web3-Compatible Wallets Support                                   | @alexauroradev @birchmd                           | Final      |\n| [0519](https://github.com/near/NEPs/blob/master/neps/nep-0519.md) | Yield Execution                                                   | @akhi3030 @saketh-are       
                       | Final      |\n| [0536](https://github.com/near/NEPs/blob/master/neps/nep-0536.md) | Reduce the number of gas refunds                                  | @evgenykuzyakov @bowenwang1996                    | Final      |\n| [0539](https://github.com/near/NEPs/blob/master/neps/nep-0539.md) | Cross-Shard Congestion Control                                    | @wacban @jakmeier                                 | Final      |\n| [0568](https://github.com/near/NEPs/blob/master/neps/nep-0568.md) | Resharding V3                                   | @staffik @Longarithm @Trisfald @marcelo-gonzalez @shreyan-gupta @wacban                                 | Final      |\n| [0584](https://github.com/near/NEPs/blob/master/neps/nep-0584.md) | Cross-shard bandwidth scheduler                                   | @jancionear                                       | Final      |\n| [0591](https://github.com/near/NEPs/blob/master/neps/nep-0591.md) | Global Contracts                                                  | @bowenwang1996 @pugachag @stedfn                  | Final      |\n\n## Specification\n\nThe NEAR specification is under active development.\nIt defines how any NEAR client should connect, produce blocks, reach consensus, process state transitions, use runtime APIs, and implement smart contract standards.\n\n## Standards & Processes\n\nStandards refer to various common interfaces and APIs that are used by smart contract developers on top of the NEAR Protocol.\nFor example, such standards include the SDK for Rust, the API for fungible tokens, and how to manage a user's social graph.\n\nProcesses include the release process for the spec and clients, and how standards are updated.\n\n### Contributing\n\n#### Expectations\n\nIdeas presented ultimately as NEPs will need to be driven by the author through the process. It's an exciting opportunity with a fair amount of responsibility from the contributor(s). Please put care into the details. 
NEPs that do not present convincing motivation, demonstrate understanding of the impact of the design, or are disingenuous about the drawbacks or alternatives tend to be poorly received. Again, by the time the NEP makes it to the pull request, it has a clear plan and path forward based on the discussions in the governance forum.\n\n#### Process\n\nSpec changes are ultimately done via pull requests to this repository (formalized process [here](neps/nep-0001.md)). In an effort to keep the pull request clean and readable, please follow these instructions to flesh out an idea.\n\n1. Sign up for the [governance site](https://gov.near.org/) and make a post to the appropriate section. For instance, during the ideation phase of a standard, one might start a new conversation in the [Development » Standards section](https://gov.near.org/c/dev/standards/29) or the [NEP Discussions Forum](https://github.com/near/NEPs/discussions).\n2. The forum has comment threading which allows the community and NEAR Collective to ideate, ask questions, wrestle with approaches, etc. If more immediate responses are desired, consider bringing the conversation to [Zulip](https://near.zulipchat.com/#narrow/stream/320497-nep-standards).\n3. When the governance conversations have reached a point where a clear plan is evident, create a pull request, using the instructions below.\n\n   - Clone this repository and create a branch with \"my-feature\".\n   - Update relevant content in the current specification that is affected by the proposal.\n   - Create a pull request, using [nep-0000-template.md](nep-0000-template.md) to describe the motivation and details of the new Contract or Protocol specification. In the document header, ensure the `Status` is marked as `Draft`, and any relevant discussion links are added to the `DiscussionsTo` section.\n     Use the pull request number padded with zeroes. 
For instance, the pull request `219` should be created as `neps/nep-0219.md`.\n   - Add your Draft standard to the `NEPs` section of this README.md. This helps advertise your standard via GitHub.\n   - Once complete, submit the pull request for editor review.\n\n   - The formalization dance begins:\n     - NEP Editors, who are unopinionated shepherds of the process, check document formatting, completeness, and adherence to [NEP-0001](neps/nep-0001.md) and approve the pull request.\n     - Once ready, the author updates the NEP status to `Review`, allowing further community participation to address any gaps or clarifications, normally as part of the Review PR.\n     - NEP Editors mark the NEP as `Last Call`, allowing a 14-day grace period for any final community feedback. Any unresolved showstoppers roll the state back to `Review`.\n     - NEP Editors mark the NEP as `Final`, marking the standard as complete. The standard should only be updated to correct errata and add non-normative clarifications.\n\nTip: build consensus and integrate feedback. NEPs that have broad support are much more likely to make progress than those that don't receive any comments. Feel free to reach out to the NEP assignee in particular to get help identifying stakeholders and obstacles.\n\n
  },
  {
    "path": "nep-0000-template.md",
    "content": "---\nNEP: 0\nTitle: NEP Template\nAuthors: Todd Codrington III <satoshi@fakenews.org>\nStatus: Approved\nDiscussionsTo: https://github.com/nearprotocol/neps/pull/0000\nType: Developer Tools\nVersion: 1.1.0\nCreated: 2022-03-03\nLastUpdated: 2023-03-07\n---\n\n[This is a NEP (NEAR Enhancement Proposal) template, as described in [NEP-0001](https://github.com/near/NEPs/blob/master/neps/nep-0001.md). Use this when creating a new NEP. The author should delete or replace all the comments or commented brackets when merging their NEP.]\n\n<!-- NEP Header Preamble\n\nEach NEP must begin with an RFC 822 style header preamble. The headers must appear in the following order:\n\nNEP: The NEP title in no more than 4-5 words.\n\nTitle: NEP title\n\nAuthor: List of author name(s) and optional contact info. Examples FirstName LastName <satoshi@fakenews.org>, FirstName LastName (@GitHubUserName)>\n\nStatus: The NEP status -- New | Approved | Deprecated.\n\nDiscussionsTo (Optional): URL of current canonical discussion thread, e.g. GitHub Pull Request link.\n\nType: The NEP type -- Protocol | Contract Standard | Wallet Standard | DevTools Standard.\n\nRequires (Optional): NEPs may have a Requires header, indicating the NEP numbers that this NEP depends on.\n\nReplaces (Optional): A newer NEP marked with a SupercededBy header must have a Replaces header containing the number of the NEP that it rendered obsolete.\n\nSupersededBy (Optional): NEPs may also have a SupersededBy header indicating that a NEP has been rendered obsolete by a later document; the value is the number of the NEP that replaces the current document.\n\nVersion: The version number. A new NEP should start with 1.0.0, and future NEP Extensions must follow Semantic Versioning.\n\nCreated: The Created header records the date that the NEP was assigned a number, should be in ISO 8601 yyyy-mm-dd format, e.g. 
2022-12-31.\n\nLastUpdated: The date the NEP was last updated; it should be in ISO 8601 yyyy-mm-dd format, e.g. 2022-12-31.\n\nSee example above -->\n\n## Summary\n\n[Provide a short human-readable (~200 words) description of the proposal. A reader should get from this section a high-level understanding of the issue this NEP is addressing.]\n\n## Motivation\n\n[Explain why this proposal is necessary, how it will benefit the NEAR protocol or community, and what problems it solves. Also describe why the existing protocol specification is inadequate to address the problem that this NEP solves, and what the potential use cases or outcomes are.]\n\n## Specification\n\n[Explain the proposal as if you were teaching it to another developer. This generally means describing the syntax and semantics, naming new concepts, and providing clear examples. The specification needs to include sufficient detail to allow interoperable implementations to be built by following only the provided specification. In cases where it is infeasible to specify all implementation details upfront, broadly describe what they are.]\n\n## Reference Implementation\n\n[This technical section is required for Protocol proposals but optional for other categories. A draft implementation should demonstrate a minimal implementation that assists in understanding or implementing this proposal. Explain the design in sufficient detail that:\n\n* Its interaction with other features is clear.\n* Where possible, include a Minimum Viable Interface subsection expressing the required behavior and types in a target programming language. (i.e. traits and structs for Rust, interfaces and classes for JavaScript, function signatures and structs for C, etc.)\n* It is reasonably clear how the feature would be implemented.\n* Corner cases are dissected by example.\n* For protocol changes: A link to a draft PR on nearcore that shows how it can be integrated into the current code. 
It should at least solve the key technical challenges.\n\nThe section should return to the examples given in the previous section, and explain more fully how the detailed proposal makes those examples work.]\n\n## Security Implications\n\n[Explicitly outline any security concerns in relation to the NEP, and potential ways to resolve or mitigate them. At the very least, well-known relevant threats must be covered, e.g. person-in-the-middle, double-spend, XSS, CSRF, etc.]\n\n## Alternatives\n\n[Explain any alternative designs that were considered and the rationale for not choosing them. Why is your design superior?]\n\n## Future possibilities\n\n[Describe any natural extensions and evolutions to the NEP proposal, and how they would impact the project. Use this section as a tool to help fully consider all possible interactions with the project in your proposal. This is also a good place to \"dump ideas\" that are out of scope for the NEP but otherwise related. Note that having something written down in the future-possibilities section is not a reason to accept the current or a future NEP. Such notes should be in the section on motivation or rationale in this or subsequent NEPs. The section merely provides additional information.]\n\n## Consequences\n\n[This section describes the consequences after applying the decision. All consequences should be summarized here, not just the \"positive\" ones. Record any concerns raised throughout the NEP discussion.]\n\n### Positive\n\n* p1\n\n### Neutral\n\n* n1\n\n### Negative\n\n* n1\n\n### Backwards Compatibility\n\n[All NEPs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The author must explain how they propose to deal with these incompatibilities. Submissions without a sufficient backwards compatibility treatise may be rejected outright.]\n\n## Unresolved Issues (Optional)\n\n[Explain any issues that warrant further discussion. 
Considerations\n\n* What parts of the design do you expect to resolve through the NEP process before this gets merged?\n* What parts of the design do you expect to resolve through the implementation of this feature before stabilization?\n* What related issues do you consider out of scope for this NEP that could be addressed in the future independently of the solution that comes out of this NEP?]\n\n## Changelog\n\n[The changelog section provides historical context for how the NEP developed over time. Initial NEP submission should start with version 1.0.0, and all subsequent NEP extensions must follow [Semantic Versioning](https://semver.org/). Every version should have the benefits and concerns raised during the review. The author does not need to fill out this section for the initial draft. Instead, the assigned reviewers (Subject Matter Experts) should create the first version during the first technical review. After the final public call, the author should then finalize the last version of the decision context.]\n\n### 1.0.0 - Initial Version\n\n> Placeholder for the context about when and who approved this NEP version.\n\n#### Benefits\n\n> List of benefits filled by the Subject Matter Experts while reviewing this version:\n\n* Benefit 1\n* Benefit 2\n\n#### Concerns\n\n> Template for Subject Matter Experts review for this version:\n> Status: New | Ongoing | Resolved\n\n|   # | Concern | Resolution | Status |\n| --: | :------ | :--------- | -----: |\n|   1 |         |            |        |\n|   2 |         |            |        |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/archive/0005-access-keys.md",
    "content": "- Proposal Code Name: access_keys\n- Start Date: 2019-07-08\n- NEP PR: [nearprotocol/neps#0000](https://github.com/near/NEPs/blob/master/nep-0000-template.md)\n- Issue(s): [nearprotocol/nearcore#687](https://github.com/nearprotocol/nearcore/issues/687)\n\n# Summary\n\nAccess keys provide limited access to an account.\nEach access key belongs to some account and identified by a unique (within the account) public key.\nOne account may have large number of access keys.\nAccess keys will replace original account-level public keys.\nAccess keys allow to act on behalf of the account by restricting allowed transactions with the access key permissions.\n\n# Motivation\n\nAccess keys give an ability to use dApps in a secure way without asking the user to sign every transaction in the wallet.\nBy issuing the access key once for the application, the application can now act on behalf of the user in a restricted environment.\nThis enables seamless experience for the user.\n\nAccess keys also enable a few other use-cases that are discussed in details below.\n\n# Guide-level explanation\n\nHere are proposed changes for the AccessKey and Account structs.  
\n\n```rust\n/// `account_id,public_key` is a key in the state\nstruct AccessKey {\n  /// The nonce for this access key.\n  /// It makes sense for nonce to not start from 0, in case the access key is recreated\n  /// with the same public key, to avoid replaying of old transactions.\n  pub nonce: Nonce,  // u64 \n  \n  /// Defines permissions for the AccessKey \n  pub permission: AccessKeyPermission,\n}\n\n/// Defines permissions for AccessKey \npub enum AccessKeyPermission {\n  /// Restricts AccessKey to only be used for function calls.\n  FunctionCall(FunctionCallPermission),\n\n  /// Gives full access to the account.\n  /// NOTE: It's used to replace account-level public keys.\n  FullAccess,\n}\n\npub struct FunctionCallPermission {\n  /// `Some` amount that can be spent for transaction fees by this access key from the account balance.\n  /// When used, both account balance and the allowance is decreased.\n  /// To change or increase the allowance, the access key can be replaced using SwapKey.\n  /// NOTE: If you reuse the public key, make sure to keep the nonce from the old AccessKey.\n  /// `None` means unlimited allowance.\n  pub allowance: Option<Balance>,  // u128\n\n  /// The AccountID of the receiver of the transaction. The access key will restrict transactions to\n  /// only this receiver.\n  pub receiver_id: AccountId,  // String\n  \n  /// If `Some`, the access key would be restricted to calling only the given method name.\n  /// `None` means it's restricted to calling the receiver_id contract, but any method name.   
\n  pub method_name: Option<String>,\n}\n\n/// NOTE: This change removes account-level nonce and public keys.\n/// Key is `account_id`\nstruct Account {\n  pub balance: Balance(u128),\n  pub code_hash: Hash,\n  /// Storage usage accounts for all access keys\n  pub storage_usage: StorageUsage(u64),\n  /// Last block index at which the storage was paid for.\n  pub storage_paid_at: BlockIndex(u64),\n}\n```\n\n### Examples\n\n#### AccessKey as account-level public key\n\nIf an AccessKey has full access to the account and the allowance set to the max value of u128, then\nit essentially acts as an account-level public key, which means we can remove account-level\npublic keys from the account struct and rely only on access keys.\n\nAn access key example from user `vasya.near` with full access:\n\n```rust\n/// vasya.near,a123bca2\nAccessKey {\n    nonce: 0,\n    permission: AccessKeyPermission::FullAccess,\n}\n```\n\n#### AccessKey for a dApp by a user\n\nThis is a simple example where a user wants to use some dApp. 
The user has to authorize this dApp within their wallet, so the dApp knows who the user is and can also issue simple function call transactions on behalf of this user.\n\nTo create such an AccessKey, the dApp generates a new key pair and passes the new public key to the user's wallet in a URL.\nThen the wallet asks the user to create a new AccessKey that points to the dApp.\nThe user has to explicitly confirm this in the wallet for the AccessKey to be created.\n\nThe new access key is restricted to be only used for the app’s contract_id, but is not restricted to any method name.\nThe user also sets the allowance to some reasonable amount, enough for the application to issue regular transactions.\nThe application might also hint the user about this desired allowance in some way.\n\nNow the app can issue function call transactions on behalf of the user’s account towards the app’s contract without requiring the user to sign each transaction.\n\nAn access key example for a chess app from user `vasya.near`:\n\n```rust\n/// vasya.near,c5d312f3\nAccessKey {\n    nonce: 0,\n    permission: AccessKeyPermission::FunctionCall(FunctionCallPermission {\n        // Since the access key is stored on the Chess app front-end, the user has\n        // limited the spending amount to some reasonable, but large enough number.\n        // NOTE: It needs to be multiplied by the decimals, e.g. 10^-18\n        allowance: Some(1_000_000_000),\n\n        // This access key restricts access to the `chess.app` contract.\n        receiver_id: \"chess.app\",\n\n        // Any method name on the `chess.app` contract can be called. 
\n        method_name: None,\n    }),\n}\n```\n\n#### AccessKey issued by a dApp\n\nThis is an example where the dApp wants to pay for the user, or doesn't want to go through the user's sign-in flow.\nFor whatever reason, the dApp decides to issue an access key directly from its own account.\n\nFor this to work, there should be one account with funds (that the dApp controls on the backend) which creates access keys for the users.\nThe difference from the example above is that there is only one account (the same for all users) that creates multiple access keys (one per user) towards one other contract (the app's contract).\nTo differentiate users, the contract has to use the public key of the access key instead of the sender's account ID.\n\nIf the access key needs to support the user's identity via an account ID, the contract can provide a public method that links the user's account ID with a given public key.\nOnce this is done, a user can request a new access key with the linked public key (sponsored by the app), but it is linked to the user's account ID.\n\nThere are some caveats with this approach:\n\n- The dApp is required to have a backend and to have some sybil resistance for users. 
It's needed to prevent abuse by bots.\n- Writing the contract is slightly more complicated, since the contract now needs to handle the mapping of public keys to account IDs.\n\nAn access key example for the chess app, paid by the chess app from the `chess.funds` account:\n\n```rust\n/// chess.funds,2bc2b3b\nAccessKey {\n    nonce: 0,\n    permission: AccessKeyPermission::FunctionCall(FunctionCallPermission {\n        // Since the access key is given to the user, the developer wants to limit\n        // the spending amount to some conservative number, since a user might try to drain it.\n        allowance: Some(5_000_000),\n\n        // This access key restricts access to the `chess.app` contract.\n        receiver_id: \"chess.app\",\n\n        // Any method name on the `chess.app` contract can be called (but some methods might just ignore this key).\n        method_name: None,\n    }),\n}\n```\n\n#### AccessKey through a proxy\n\nThis example demonstrates how to have more granular control on top of the built-in access key restrictions.\n\nLet's say a user wants to:\n\n- limit the number of calls the access key can make per minute\n- support multiple contracts with the same access key\n- select which method names can be called and which can't\n- transfer funds from the account up to a certain limit\n- stake from the account, but prevent withdrawing funds\n\nTo make this work, we need to have custom logic at every call.\nWe can achieve this by running a portion of smart contract code before any action.\nA user can deploy code on their account and restrict an access key to their account and to a method name, e.g. `proxy`.\nNow this access key will only be able to issue transactions on behalf of the user that go to the user's contract code and call the method `proxy`.\nThe `proxy` method can find out which access key is used by comparing public keys and verify the request before executing it.\n\nE.g. 
the access key should only be able to call `chess.app` at most 3 times per 20 blocks and transfer at most 1M tokens to `chess.app`.\nInternally, the `proxy` function can validate that this access key is being used, fetch its config, validate the passed arguments, and proxy the transaction.\nA `proxy` method might take the following arguments for a function call:\n\n```json\n{\n  \"action\": \"call\",\n  \"contractId\": \"chess.app\",\n  \"methodName\": \"move\",\n  \"args\": \"{...serialized args...}\",\n  \"amount\": 0\n}\n```\n\nIn this case the `action` is `call`, so the function checks that the `amount` is within the withdrawal limit, checks that the contract name is `chess.app`, and, if fewer than 3 calls were made within the last 20 blocks, issues an async call to `chess.app`.\nIn theory, the same `proxy` function can handle other actions, e.g. staking or vesting.\n\nThe benefit of having a proxy function on your own account is that it doesn't require an additional receipt, because the account's state and code are available at transaction verification time.\n\nAn example of an access key limited to the `proxy` function:\n\n```rust\n/// vasya.near,3bc2b3b\nAccessKey {\n    nonce: 0,\n    \n    permission: AccessKeyPermission::FunctionCall(FunctionCallPermission {\n        // The allowance can be large, since the user likely trusts the app.\n        allowance: Some(1_000_000_000),\n        \n        // This access key restricts access to the user's own account contract `vasya.near`.\n        // Most likely, the contract code can be deployed and upgraded directly from the wallet.\n        receiver_id: \"vasya.near\",\n\n        // The method is restricted to `proxy`, which does all the security checks.\n        method_name: Some(\"proxy\"),\n    }),\n}\n```\n\n# Reference-level explanation\n\n- Access keys are stored with the `account_id,public_key` key. 
Here `account_id` and `public_key` are the actual account ID and public key, and `,` is a separator.\nThey should be stored on the same shard as the account.\n- Access key storage rent should be accounted for and paid from the account directly, without affecting the allowance.\n- An access key's allowance can exceed the account balance.\n- To validate a transaction signed with the AccessKey, we need to first validate the signature, then fetch the Account and the AccessKey, validate that we have enough funds, and verify permissions.\n- Account creation should now create a full access permission access key, instead of public keys within the account.\n- The SwapKey transaction should just replace the old access key with the given new access key.\n\n### Technical changes\n\n#### `nonce` on the AccessKey level instead of the account level\n\nSince access keys can be used by different people or parties at the same time, we need to be able to have a separate nonce for each key instead of a single nonce at the account level.\nWith a single nonce on the account level, there is a high probability that 2 apps would use the same nonce for 2 different transactions, and one of these transactions would be rejected.\n\nPreviously we ordered transactions by nonce and rejected transactions with a duplicated or lower nonce.\nWith the access key nonce, we still need to order transactions by nonce, but now we need to group them by the `account_id,public_key` key instead of just `account_id`.\nTo prevent one access key from having priority over other access keys, we should order transactions by hash when determining which transactions should be added to the block.\n\nThe suggestion from @nearmax:\n\n\"\nWe need to spec out here how transactions from different access keys are going to be ordered with respect to each other. For example:\n3 access keys (A,B,C) issue 3 transactions each:\nA1, A2, A3; B1,B2,B3; C1, C2, C3;\nAll these transactions operate on the same state so they need to have an order. 
First transaction to execute is one of \\{A1,B1,C1} that has lowest hash, let's say it is B1. Second transaction to execute is one of \\{A1,B2,C1} with lowest hash, etc.\n\"\n\nWe should also restrict the nonce of the next transaction to be exactly the previous nonce incremented by 1.\nThis will help us with ordering transactions.\n\nTransaction ordering should be a separate topic, which should also cover security for transaction expiration and fork selection.\n\n#### `allowance` field\n\nAllowance is the amount of tokens the AccessKey can spend from the account balance.\nWhen some amount is spent, it's subtracted from both the allowance of the access key and the account balance.\nIf the user wants an unlimited allowance for this key, the `None` allowance option is available.\n\nNOTE: In the previous iteration of access keys, we used balance instead of the allowance.\nBut it required summing up all access key balances to get the total account balance.\nIt also prevented sharing of the account balance between access keys.\n\n#### Permissions\n\nAlmost all desired use-cases of access keys can be achieved by using the old permissions model.\nIt restricts access keys to only issuing function calls with no attached tokens.\nThe function calls are restricted to the selected `receiver_id` and potentially restricted to a single `method_name`.\nAnything non-trivial can be done by the contract that receives this call, e.g. 
through a `proxy` function.\n\nTo remove public keys from the account, we added a new permission that grants full access to the account and is not limited by the allowance.\n\n#### How is `storage_usage` computed?\n\nIf we use the protobuf size to compute the `storage_usage` value, then protobuf might compress the `u128` value, which would affect storage usage every time the `allowance` is modified.\n\nThe best option would be to change `storage_usage` only when the access key is created or removed, so that changes to the `allowance` value don't change the `storage_usage` value.\nFor this to work, we might need to update the storage computation formula for the access key, e.g. one that ignores the compressed size of the `allowance` and instead just relies on the 16 bytes of the `u128` size.\nEspecially because we currently don't use the proto size for the `storage_usage` of the account itself.\n\n# Drawbacks\n\nCurrently the permission model is quite limited: either function calls restricted to one (or any) method name, or a full access key.\nBut we may add more permissions in the future to handle this issue.\n\n# Rationale and alternatives\n\n## Alternatives\n\n#### More permissions directly on the access key\n\nFor example, we could have multiple method names, multiple contract_id/method_name pairs, or different transaction types (e.g. only allow staking transactions).\n\nThis can be achieved with a contract and a dedicated function that performs this control. To keep the runtime simple and secure, we should avoid doing more checks, since they are not accounted for in fees.\n\nIt can also be achieved if we refactor SignedTransaction to only use method_names instead of oneof body types.\n\n#### Balance instead of allowance\n\nAllowance enables sharing of a single account balance with multiple access keys. E.g. 
if you use 5 apps, you can give the full allowance to each app instead of splitting the balance into 5 parts.\n\nIt's also easier to work with than access key balances.\n\nPreviously we had a balance owner on the AccessKey, so the dApp could sponsor users. But the same can be achieved by dApps creating access keys from their own account, effectively paying for all transactions.\n\n#### Not exposing `nonce` on each AccessKey\n\nIf you use 2 applications at the same time, e.g. a mobile app and a desktop wallet, you might run into a `nonce` collision at the account level, which would cancel one of the transactions. It would happen more frequently with more apps being used.\n\nAs for runtime handling of multiple nonces per account, we need to think through and verify the security a little more.\n\n#### `receiver_id` being an `Option<AccountId>`\n\nIn the previous design, the `receiver_id` was called `contract_id` and was an optional field. But this didn't remove the requirement for a receiver when it was `None`. Instead, the access key was assumed to point to the owner's account.\nWe can potentially use `None` to mean an unlimited key, and require the user to explicitly specify their own `account_id` if they want to use a proxy function.\n\n# Unresolved questions\n\n#### Transaction ordering and nonce restrictions\n\nThis question is still unresolved: whether we should restrict the transaction nonce to be exactly the previous nonce incremented by 1 or not.\nIt's not a blocking change, but it would make sense to do it together with other SignedTransaction security features, such as a minimum hash of a block header and block expiration.\n\n#### Permissions\n\nIt's not clear whether a single pair of `receiver_id`/`method_name` is enough to cover all use-cases at the moment.\nE.g. if I want to use my account that already has some code on it, e.g. a vesting locked account. 
I can't deploy new code on it, so I can't use a `proxy` method.\n\n# Future possibilities\n\nFor all use-cases to work, we need to add the missing runtime methods that are currently only possible with `SignedTransaction`, e.g. staking, account creation, public/access key management, and code deployment.\n\nNext we might consider refactoring stake out of `Account` and also refactoring `SignedTransaction` to support text-based method names instead of enums.\n\nWe should also think about storing the same code (by hash) only once instead of storing it for each account, especially if we adopt the `proxy` model.\n"
  },
  {
    "path": "neps/archive/0006-bindings.md",
    "content": "- Proposal Name: `wasm_bindings`\n- Start Date: 2019-07-22\n- NEP PR: [nearprotocol/neps#0000](https://github.com/near/NEPs/blob/master/nep-0000-template.md)\n\n# Summary\n\nWasm bindings, a.k.a. imports, are functions that the runtime (a.k.a. host) exposes to the Wasm code (a.k.a. guest) running on the virtual machine.\nThese functions are arguably the most difficult thing to change in our entire ecosystem once we have contracts running on our blockchain,\nsince once the bindings change the old smart contracts will not be able to run on the new nodes.\nAdditionally, we need a highly detailed specification of the bindings to be able to write unit tests for our contracts;\ncurrently we only allow integration tests, since we cannot have\na precise mock of the host in smart contract unit tests, e.g. we don't know how to mock the range iterator (what does it do\nwhen given an empty or inverted range?).\n\nIn this proposal we give a detailed specification of the functions that we will be relying on for many months to come.\n\n## Motivation\n\nThe current imports have the following issues:\n\n- **Trie API.** The behavior of the trie API is currently unspecified. Many things are unclear: what happens when we try\niterating over an empty range, what happens if we try accessing a non-existent key, etc. Having a trie API specification\nis important for being able to create a testing framework for Rust and AssemblyScript smart contracts, since in unit\ntests the contracts will be running on a mocked implementation of the host;\n- **Promise API.** Recently we have discussed changes to our promise mechanics. The schema does not need to change,\nbut the specification now needs to be clarified;\n- `data_read` currently has mixed functionality -- it can be used both for reading data from the trie and for reading data from\nthe context. 
In the former case it expects pointers to be passed as arguments; in the latter it expects an enum. It achieves\nthis juxtaposition by casting the pointer type into the enum when needed;\n- **Economics API.** The functions that provide access to balance and such might need to be added or removed since we\nnow consider splitting the attached balance into two.\n\n# Specification\n\n## Registers\n\nRegisters allow a host function to return data into a buffer located inside the host, as opposed to a buffer\nlocated on the guest. A special operation can be used to copy the content of the buffer into the guest. Memory pointers\ncan then be used to point either to the memory on the guest or the memory on the host, see below. Benefits:\n\n- We can have functions that return values that are not necessarily used, e.g. inserting a key-value pair into the trie can\nalso return the preempted old value, which might not necessarily be used. Previously, if we returned something we\nwould have to pass the blob from the host into the guest, even if it was not used;\n- We can pass blobs of data between host functions without going through the guest, e.g. we can remove a value\nfrom the storage and insert it under a different key;\n- It makes the API cleaner, because we don't need to pass `buffer_len` and `buffer_ptr` as arguments to other functions;\n- It allows merging certain functions together, see `storage_iter_next`;\n- This is consistent with other APIs that were created for high performance, e.g. allegedly Ewasm has implemented\nSNARK-like computations in Wasm by exposing a bignum library through a stack-like interface to the guest. The guest\ncan then manipulate the stack of 256-bit numbers that is located on the host.\n\n#### Host → host blob passing\n\nThe registers can be used to pass blobs between host functions. 
For any function that\ntakes a pair of arguments `*_len: u64, *_ptr: u64` this pair points to a region of memory either on the guest or\nthe host:\n\n- If `*_len != u64::MAX` it points to the memory on the guest;\n- If `*_len == u64::MAX` it points to the memory under the register `*_ptr` on the host.\n\nFor example:\n`storage_write(u64::MAX, 0, u64::MAX, 1, 2)` -- inserts a key-value pair into storage, where the key is read from register 0,\nthe value is read from register 1, and the result is saved to register 2.\n\nNote, if some function takes `register_id` then it means this function can copy some data into this register. If\n`register_id == u64::MAX` then the copying does not happen. This allows some micro-optimizations in the future.\n\nNote, we allow multiple registers on the host, identified with a `u64` number. The guest does not have to use them in\norder and can for instance save some blob in register `5000` and another value in register `1`.\n\n#### Specification\n\n##### read_register\n\n```rust\nread_register(register_id: u64, ptr: u64)\n```\n\nWrites the entire content from the register `register_id` into the memory of the guest starting with `ptr`.\n\n###### Panics\n\n- If the content extends outside the memory allocated to the guest. 
In Wasmer, it returns the `MemoryAccessViolation` error message;\n- If `register_id` points to an unused register, panics with the `InvalidRegisterId` error message.\n\n###### Undefined Behavior\n\n- If the content of the register extends outside the preallocated memory on the host side, or the pointer points to a\nwrong location, this function will overwrite memory that it is not supposed to overwrite, causing undefined behavior.\n\n---\n\n##### register_len\n\n```rust\nregister_len(register_id: u64) -> u64\n```\n\nReturns the size of the blob stored in the given register.\n\n###### Normal operation\n\n- If register is used, then returns the size, which can potentially be zero;\n- If register is not used, returns `u64::MAX`.\n\n## Trie API\n\nHere we provide a specification of the trie API. After this NEP is merged, the cases where our current implementation does\nnot follow the specification are considered to be bugs that need to be fixed.\n\n---\n\n##### storage_write\n\n```rust\nstorage_write(key_len: u64, key_ptr: u64, value_len: u64, value_ptr: u64, register_id: u64) -> u64\n```\n\nWrites a key-value pair into storage.\n\n###### Normal operation\n\n- If key is not in use it inserts the key-value pair and does not modify the register;\n- If key is in use it inserts the key-value pair and copies the old value into the `register_id`.\n\n###### Returns\n\n- If key was not used returns `0`;\n- If key was used returns `1`.\n\n###### Panics\n\n- If `key_len + key_ptr` or `value_len + value_ptr` exceeds the memory container or points to an unused register it panics\nwith `MemoryAccessViolation`. (When we say that something panics with the given error we mean that we use the Wasmer API to\ncreate this error and terminate the execution of the VM. 
For mocks of the host that would only cause a non-name panic.)\n- If returning the preempted value into the registers exceed the memory container it panics with `MemoryAccessViolation`;\n\n###### Current bugs\n\n- `External::storage_set` trait can return an error which is then converted to a generic non-descriptive\n  `StorageUpdateError`, [here](https://github.com/nearprotocol/nearcore/blob/942bd7bdbba5fb3403e5c2f1ee3c08963947d0c6/runtime/wasm/src/runtime.rs#L210)\n  however the actual implementation does not return error at all, [see](https://github.com/nearprotocol/nearcore/blob/4773873b3cd680936bf206cebd56bdc3701ddca9/runtime/runtime/src/ext.rs#L95);\n- Does not return into the registers.\n\n---\n\n##### storage_read\n\n```rust\nstorage_read(key_len: u64, key_ptr: u64, register_id: u64) -> u64\n```\n\nReads the value stored under the given key.\n\n###### Normal operation\n\n- If key is used copies the content of the value into the `register_id`, even if the content is zero bytes;\n- If key is not present then does not modify the register.\n\n###### Returns\n\n- If key was not present returns `0`;\n- If key was present returns `1`.\n\n###### Panics\n\n- If `key_len + key_ptr` exceeds the memory container or points to an unused register it panics with `MemoryAccessViolation`;\n- If returning the preempted value into the registers exceed the memory container it panics with `MemoryAccessViolation`;\n\n###### Current bugs\n\n- This function currently does not exist.\n\n---\n\n##### storage_remove\n\n```rust\nstorage_remove(key_len: u64, key_ptr: u64, register_id: u64) -> u64\n```\n\nRemoves the value stored under the given key.\n\n###### Normal operation\n\nVery similar to `storage_read`:\n\n- If key is used, removes the key-value from the trie and copies the content of the value into the `register_id`, even if the content is zero bytes.\n- If key is not present then does not modify the register.\n\n###### Returns\n\n- If key was not present returns `0`;\n- If key 
was present returns `1`.\n\n###### Panics\n\n- If `key_len + key_ptr` exceeds the memory container or points to an unused register it panics with `MemoryAccessViolation`;\n- If the registers exceed the memory limit panics with `MemoryAccessViolation`;\n- If returning the preempted value into the registers exceed the memory container it panics with `MemoryAccessViolation`;\n\n\n###### Current bugs\n\n- Does not return into the registers.\n\n---\n\n##### storage_has_key\n\n```rust\nstorage_has_key(key_len: u64, key_ptr: u64) -> u64\n```\n\nChecks if there is a key-value pair.\n\n###### Normal operation\n\n- If key is used returns `1`, even if the value is zero bytes;\n- Otherwise returns `0`.\n\n###### Panics\n\n- If `key_len + key_ptr` exceeds the memory container it panics with `MemoryAccessViolation`;\n\n---\n\n#### storage_iter_prefix\n\n```rust\nstorage_iter_prefix(prefix_len: u64, prefix_ptr: u64) -> u64\n```\n\nCreates an iterator object inside the host.\nReturns the identifier that uniquely differentiates the given iterator from other iterators that can be simultaneously\ncreated.\n\n###### Normal operation\n\n- It iterates over the keys that have the provided prefix. The order of iteration is defined by the lexicographic\norder of the bytes in the keys. 
If there are no keys, it creates an empty iterator, see below on empty iterators;\n\n###### Panics\n\n- If `prefix_len + prefix_ptr` exceeds the memory container it panics with `MemoryAccessViolation`;\n\n---\n\n#### storage_iter_range\n\n```rust\nstorage_iter_range(start_len: u64, start_ptr: u64, end_len: u64, end_ptr: u64) -> u64\n```\n\nSimilarly to `storage_iter_prefix`\ncreates an iterator object inside the host.\n\n###### Normal operation\n\nUnless lexicographically `start < end`, it creates an empty iterator.\nIterates over all key-values such that keys are between `start` and `end`, where `start` is inclusive and `end` is exclusive.\n\nNote, this definition allows for `start` or `end` keys to not actually exist on the given trie.\n\n###### Panics\n\n- If `start_len + start_ptr` or `end_len + end_ptr` exceeds the memory container or points to an unused register it panics with `MemoryAccessViolation`;\n\n---\n\n##### storage_iter_next\n\n```rust\nstorage_iter_next(iterator_id: u64, key_register_id: u64, value_register_id: u64) -> u64\n```\n\nAdvances iterator and saves the next key and value in the register.\n\n###### Normal operation\n\n- If iterator is not empty (after calling next it points to a key-value), copies the key into `key_register_id` and value into `value_register_id` and returns `1`;\n- If iterator is empty returns `0`.\n\nThis allows us to iterate over the keys that have zero bytes stored in values.\n\n###### Panics\n\n- If `key_register_id == value_register_id` panics with `MemoryAccessViolation`;\n- If the registers exceed the memory limit panics with `MemoryAccessViolation`;\n- If `iterator_id` does not correspond to an existing iterator panics with  `InvalidIteratorId`\n- If between the creation of the iterator and calling `storage_iter_next` any modification to storage was done through\n  `storage_write` or `storage_remove` the iterator is invalidated and the error message is `IteratorWasInvalidated`.\n\n###### Current bugs\n\n- Not 
implemented, currently we have `storage_iter_next` and `data_read` + `DATA_TYPE_STORAGE_ITER` that together fulfill\nthe purpose, but have unspecified behavior.\n\n## Context API\n\nContext API mostly provides read-only functions that access current information about the blockchain, the accounts\n(that originally initiated the chain of cross-contract calls, the immediate contract that called the current one, the account of the current contract),\nother important information like storage usage.\n\nMany of the below functions are currently implemented through `data_read` which allows to read generic context data.\nHowever, there is no reason to have `data_read` instead of the specific functions:\n\n- `data_read` does not solve forward compatibility. If later we want to add another context function, e.g. `executed_operations`\nwe can just declare it as a new function, instead of encoding it as `DATA_TYPE_EXECUTED_OPERATIONS = 42` which is passed\nas the first argument to `data_read`;\n- `data_read` does not help with renaming. If later we decide to rename `signer_account_id` to `originator_id` then one could\nargue that contracts that rely on `data_read` would not break, while contracts relying on `signer_account_id()` would. 
However,\na name change often means a change of the semantics, which means the contracts using this function are no longer safe to\nexecute anyway.\n\nHowever, there is one reason not to have `data_read` -- it makes the API more human-like, which is a general direction Wasm APIs, like WASI, are moving towards.\n\n---\n\n##### current_account_id\n\n```rust\ncurrent_account_id(register_id: u64)\n```\n\nSaves the account id of the current contract that we execute into the register.\n\n###### Panics\n\n- If the registers exceed the memory limit panics with `MemoryAccessViolation`;\n\n---\n\n##### signer_account_id\n\n```rust\nsigner_account_id(register_id: u64)\n```\n\nAll contract calls are a result of some transaction that was signed by some account using\nsome access key and submitted into a memory pool (either through the wallet using RPC or by a node itself). This function returns the id of that account.\n\n###### Normal operation\n\n- Saves the bytes of the signer account id into the register.\n\n###### Panics\n\n- If the registers exceed the memory limit panics with `MemoryAccessViolation`;\n\n###### Current bugs\n\n- Currently we conflate `originator_id` and `sender_id` in our code base.\n\n---\n\n##### signer_account_pk\n\n```rust\nsigner_account_pk(register_id: u64)\n```\n\nSaves the public key of the access key that was used by the signer into the register.\nIn rare situations a smart contract might want to know the exact access key that was used to send the original transaction,\ne.g. 
to increase the allowance or manipulate the public key.\n\n###### Panics\n\n- If the registers exceed the memory limit panics with `MemoryAccessViolation`;\n\n###### Current bugs\n\n- Not implemented.\n\n---\n\n#### predecessor_account_id\n\n```rust\npredecessor_account_id(register_id: u64)\n```\n\nAll contract calls are a result of a receipt; this receipt might be created by a transaction\nthat does a function invocation on the contract, or by another contract as a result of a cross-contract call.\n\n###### Normal operation\n\n- Saves the bytes of the predecessor account id into the register.\n\n###### Panics\n\n- If the registers exceed the memory limit panics with `MemoryAccessViolation`;\n\n###### Current bugs\n\n- Not implemented.\n\n---\n\n#### input\n\n```rust\ninput(register_id: u64)\n```\n\nReads the input to the contract call into the register. The input is expected to be in JSON format.\n\n###### Normal operation\n\n- If input is provided, saves the bytes (potentially zero) of input into the register.\n- If input is not provided, does not modify the register.\n\n###### Returns\n\n- If input was not provided returns `0`;\n- If input was provided returns `1`; if input is zero bytes, returns `1` too.\n\n###### Panics\n\n- If the registers exceed the memory limit panics with `MemoryAccessViolation`;\n\n###### Current bugs\n\n- Implemented as part of `data_read`. However, there is no reason to have one unified function, like `data_read`, that can\nbe used to read all of the context data.\n\n---\n\n#### block_index\n\n```rust\nblock_index() -> u64\n```\n\nReturns the current block index.\n\n---\n\n#### storage_usage\n\n```rust\nstorage_usage() -> u64\n```\n\nReturns the number of bytes used by the contract if it was saved to the trie as of the\ninvocation. 
This includes:\n\n- The data written with `storage_*` functions during the current and previous executions;\n- The bytes needed to store the account protobuf and the access keys of the given account.\n\n## Economics API\n\nAccounts own a certain balance, and each transaction and each receipt has a certain amount of balance and prepaid gas\nattached to it.\nDuring the contract execution, the contract has access to the following `u128` values:\n\n- `account_balance` -- the balance attached to the given account. This includes the `attached_deposit` that was attached\n  to the transaction;\n- `attached_deposit` -- the balance that was attached to the call, which will be immediately deposited before\n  the contract execution starts;\n- `prepaid_gas` -- the tokens attached to the call that can be used to pay for the gas;\n- `used_gas` -- the gas that was already burnt during the contract execution and attached to promises (cannot exceed `prepaid_gas`);\n\nIf contract execution fails, `prepaid_gas - used_gas` is refunded back to `signer_account_id` and `attached_deposit`\nis refunded back to `predecessor_account_id`.\n\nThe following spec is the same for all functions:\n\n```rust\naccount_balance(balance_ptr: u64)\nattached_deposit(balance_ptr: u64)\n```\n\n-- writes the value into the `u128` variable pointed to by `balance_ptr`.\n\n###### Panics\n\n- If `balance_ptr + 16` points outside the memory of the guest with `MemoryAccessViolation`;\n\n###### Current bugs\n\n- Use a different name;\n\n---\n\n```rust\nprepaid_gas() -> u64\nused_gas() -> u64\n```\n\n## Math\n\n#### random_seed\n\n```rust\nrandom_seed(register_id: u64)\n```\n\nReturns a random seed that can be used for pseudo-random number generation in a deterministic way.\n\n###### Panics\n\n- If the size of the registers exceeds the set limit, panics with `MemoryAccessViolation`;\n\n---\n\n#### sha256\n\n```rust\nsha256(value_len: u64, value_ptr: u64, register_id: u64)\n```\n\nHashes the given sequence of bytes using sha256 and returns it 
into `register_id`.\n\n###### Panics\n\n- If `value_len + value_ptr` points outside the memory or the registers use more memory than the limit with `MemoryAccessViolation`.\n\n###### Current bugs\n\n- The current name `hash` is not specific about which hash is being used.\n- We have `hash32` that largely duplicates the mechanics of `hash` because it returns only the first 4 bytes.\n\n---\n\n#### check_ethash\n\n```rust\ncheck_ethash(block_number_ptr: u64,\n             header_hash_ptr: u64,\n             nonce: u64,\n             mix_hash_ptr: u64,\n             difficulty_ptr: u64) -> u64\n```\n\n-- verifies the hash of the header that we created using [Ethash](https://en.wikipedia.org/wiki/Ethash). Parameters are:\n\n- `block_number` -- `u256`/`[u64; 4]`, number of the block on the Ethereum blockchain. We use the pointer to the slice of 32 bytes on guest memory;\n- `header_hash` -- `h256`/`[u8; 32]`, hash of the header on the Ethereum blockchain. We use the pointer to the slice of 32 bytes on guest memory;\n- `nonce` -- `u64`/`h64`/`[u8; 8]`, nonce that was used to find the correct hash, passed as `u64` without pointers;\n- `mix_hash` -- `h256`/`[u8; 32]`, a special hash that avoids griefing attacks. We use the pointer to the slice of 32 bytes on guest memory;\n- `difficulty` -- `u256`/`[u64; 4]`, the difficulty of mining the block. 
We use the pointer to the slice of 32 bytes on guest memory;\n\n###### Returns\n\n- `1` if the Ethash is valid;\n- `0` otherwise.\n\n###### Panics\n\n- If `block_number_ptr + 32` or `header_hash_ptr + 32` or `mix_hash_ptr + 32` or `difficulty_ptr + 32` point outside the memory or registers use more memory than the limit with `MemoryAccessViolation`.\n\n###### Current bugs\n\n- `block_number` and `difficulty` are currently exposed as `u64` which are casted to `u256` which breaks Ethereum compatibility;\n- Currently, we also pass the length together with `header_hash_ptr` and `mix_hash_ptr` which is not necessary since\nwe know their length.\n\n## Promises API\n\n```rust\npromise_create(account_id_len: u64,\n               account_id_ptr: u64,\n               method_name_len: u64,\n               method_name_ptr: u64,\n               arguments_len: u64,\n               arguments_ptr: u64,\n               amount_ptr: u64,\n               gas: u64) -> u64\n```\n\nCreates a promise that will execute a method on account with given arguments and attaches the given amount.\n`amount_ptr` point to slices of bytes representing `u128`.\n\n###### Panics\n\n- If `account_id_len + account_id_ptr` or `method_name_len + method_name_ptr` or `arguments_len + arguments_ptr`\nor `amount_ptr + 16` points outside the memory of the guest or host, with `MemoryAccessViolation`.\n\n###### Returns\n\n- Index of the new promise that uniquely identifies it within the current execution of the method.\n\n---\n\n#### promise_then\n\n```rust\npromise_then(promise_idx: u64,\n             account_id_len: u64,\n             account_id_ptr: u64,\n             method_name_len: u64,\n             method_name_ptr: u64,\n             arguments_len: u64,\n             arguments_ptr: u64,\n             amount_ptr: u64,\n             gas: u64) -> u64\n```\n\nAttaches the callback that is executed after promise pointed by `promise_idx` is complete.\n\n###### Panics\n\n- If `promise_idx` does not correspond to 
an existing promise panics with `InvalidPromiseIndex`.\n- If `account_id_len + account_id_ptr` or `method_name_len + method_name_ptr` or `arguments_len + arguments_ptr`\nor `amount_ptr + 16` points outside the memory of the guest or host, with `MemoryAccessViolation`.\n\n###### Returns\n\n- Index of the new promise that uniquely identifies it within the current execution of the method.\n\n---\n\n#### promise_and\n\n```rust\npromise_and(promise_idx_ptr: u64, promise_idx_count: u64) -> u64\n```\n\nCreates a new promise which completes when all promises passed as arguments complete. Cannot be used with registers.\n`promise_idx_ptr` points to an array of `u64` elements, with `promise_idx_count` denoting the number of elements.\nThe array contains indices of promises that need to be waited on jointly.\n\n###### Panics\n\n- If `promise_idx_ptr + 8 * promise_idx_count` extends outside the guest memory with `MemoryAccessViolation`;\n- If any of the promises in the array do not correspond to existing promises panics with `InvalidPromiseIndex`.\n\n###### Returns\n\n- Index of the new promise that uniquely identifies it within the current execution of the method.\n\n---\n\n#### promise_results_count\n\n```rust\npromise_results_count() -> u64\n```\n\nIf the current function is invoked by a callback we can access the execution results of the promises that\ncaused the callback. This function returns the number of complete and incomplete callbacks.\n\nNote, we are only going to have incomplete callbacks once we have the `promise_or` combinator.\n\n###### Normal execution\n\n- If there is only one callback `promise_results_count()` returns `1`;\n- If there are multiple callbacks (e.g. 
created through `promise_and`) `promise_results_count()` returns their number.\n- If the function was not called through a callback `promise_results_count()` returns `0`.\n\n\n---\n\n#### promise_result\n\n```rust\npromise_result(result_idx: u64, register_id: u64) -> u64\n```\n\nIf the current function is invoked by a callback we can access the execution results of the promises that\ncaused the callback. This function returns the result in blob format and places it into the register.\n\n###### Normal execution\n\n- If the promise result is complete and successful, copies its blob into the register;\n- If the promise result is complete and failed, or incomplete, leaves the register unused;\n\n###### Returns\n\n- If promise result is not complete returns `0`;\n- If promise result is complete and successful returns `1`;\n- If promise result is complete and failed returns `2`.\n\n###### Panics\n\n- If `result_idx` does not correspond to an existing result panics with `InvalidResultIndex`.\n- If copying the blob exhausts the memory limit it panics with `MemoryAccessViolation`.\n\n###### Current bugs\n\n- We currently have two separate functions to check for result completion and copy it.\n\n---\n\n#### promise_return\n\n```rust\npromise_return(promise_idx: u64)\n```\n\nWhen promise `promise_idx` finishes executing, its result is considered to be the result of the current function.\n\n###### Panics\n\n- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.\n\n###### Current bugs\n\n- The current name `return_promise` is inconsistent with the naming convention of the Promise API.\n\n## Miscellaneous API\n\n#### value_return\n\n```rust\nvalue_return(value_len: u64, value_ptr: u64)\n```\n\nSets the blob of data as the return value of the contract.\n\n##### Panics\n\n- If `value_len + value_ptr` exceeds the memory container or points to an unused register it panics with `MemoryAccessViolation`;\n\n---\n\n#### panic\n\n```rust\npanic()\n```\n\nTerminates the execution 
of the program with panic `GuestPanic`.\n\n---\n\n#### log_utf8\n\n```rust\nlog_utf8(len: u64, ptr: u64)\n```\n\nLogs the UTF-8 encoded string. See https://stackoverflow.com/a/5923961 that explains\nthat null termination is not defined through encoding.\n\n###### Normal behavior\n\nIf `len == u64::MAX` then treats the string as null-terminated with character `'\\0'`;\n\n###### Panics\n\n- If string extends outside the memory of the guest with `MemoryAccessViolation`;\n\n---\n\n#### log_utf16\n\n```rust\nlog_utf16(len: u64, ptr: u64)\n```\n\nLogs the UTF-16 encoded string. `len` is the number of bytes in the string.\n\n###### Normal behavior\n\nIf `len == u64::MAX` then treats the string as null-terminated with two-byte sequence of `0x00 0x00`.\n\n###### Panics\n\n- If string extends outside the memory of the guest with `MemoryAccessViolation`;\n\n---\n\n#### abort\n\n```rust\nabort(msg_ptr: u32, filename_ptr: u32, line: u32, col: u32)\n```\n\nSpecial import kept for compatibility with AssemblyScript contracts. Not called by smart contracts directly, but instead\ncalled by the code generated by AssemblyScript.\n\n\n# Future Improvements\n\nIn the future we can have some of the registers to be on the guest.\nFor instance a guest can tell the host that it has some pre-allocated memory that it wants to be used for the register,\ne.g.\n\n```rust\nset_guest_register(register_id: u64, register_ptr: u64, max_register_size: u64)\n```\n\nwill assign `register_id` to a span of memory on the guest. Host then would also know the size of that buffer on guest\nand can throw a panic if there is an attempted copying that exceeds the guest register size.\n"
  },
  {
    "path": "neps/archive/0008-transaction-refactoring.md",
    "content": "- Proposal Name: Batched Transactions\n- Start Date: 2019-07-22\n- NEP PR: [nearprotocol/neps#0008](https://github.com/nearprotocol/neps/pull/8)\n\n# Summary\n\nRefactor signed transactions and receipts to support batched atomic transactions and data dependency.\n\n# Motivation\n\nIt simplifies account creation, by supporting batching of multiple transactions together instead of\ncreating more complicated transaction types.\n\nFor example, we want to create a new account with some account balance and one or many access keys, deploy a contract code on it and run an initialization method to restrict access keys permissions for a `proxy` function.\n\nTo be able to do this now, we need to have a `CreateAccount` transaction with all the parameters of a new account.\nThen we need to handle it in one operation in a runtime code, which might have duplicated code for executing some WASM code with the rollback conditions.\n\nAlternative to this is to execute multiple simple transactions in a batch within the same block.\nIt has to be done in a row without any commits to the state until the entire batch is completed.\nWe propose to support this type of transaction batching to simplify the runtime.\n\nCurrently callbacks are handled differently from async calls, this NEP simplifies data dependencies and callbacks by unifying them.\n\n# Guide-level explanation\n\n### New transaction and receipts\n\nPreviously, in the runtime to produce a block we first executed new signed transactions and then executed received receipts. It resulted in duplicated code that might be shared across similar actions, e.g. 
function calls for async calls, callbacks and self-calls.\nIt also increased the complexity of the runtime implementation.\n\nThis NEP proposes changing it by first converting all signed transactions into receipts and then either execute them immediately before received receipts, or put them into the list of the new receipts to be routed.\nTo achieve this, NEP introduces a new message `Action` that represents one of atomic actions, e.g. a function call.\n`TransactionBody` is now called just `Transaction`. It contains the list of actions that needs to be performed in a single batch and the information shared across these actions.\n\n`Transaction` contains the following fields\n\n- `signer_id` is an account ID of the transaction signer.\n- `public_key` is a public key used to identify the access key and to sign the transaction.\n- `nonce` is used to deduplicate and order transactions (per access key).\n- `receiver_id` is the account ID of the destination of this transaction. It's where the generated receipt will be routed for execution.\n- `action` is the list of actions to perform.\n\nAn `Action` can be of the following:\n\n- `CreateAccount` creates a new account with the `receiver_id` account ID. The action fails if the account already exists. `CreateAccount` also grants permission for all subsequent batched action for the newly created account. For example, permission to deploy code on the new account. Permission details are described in the reference section below.\n- `DeployContract` deploys given binary wasm code on the account. Either the `receiver_id` equals to the `signer_id`, or the batch of actions has started with `CreateAccount`, which granted that permission.\n- `FunctionCall` executes a function call on the last deployed contract. The action fails if the account or the code doesn't exist. E.g. if the previous action was `DeployContract`, then the code to execute will be the new deployed contract. 
`FunctionCall` has `method_name` and `args` to identify the method and arguments to call. It also has `gas` and the `deposit`. `gas` is a prepaid amount of gas for this call (the price of gas is determined when a signed transaction is converted to a receipt). `deposit` is the attached deposit balance of NEAR tokens that the contract can spend, e.g. 10 tokens to pay for a crypto-corgi.\n- `Transfer` transfers the given `deposit` balance of tokens from the predecessor to the receiver.\n- `Stake` stakes the new total `stake` balance with the given `public_key`. The difference in stake is taken from the account's balance (if the new stake is greater than the current one) at the moment when this action is executed, so it's not prepaid. There is no particular reason to stake on behalf of a newly created account, so we may disallow it.\n- `DeleteKey` deletes an old `AccessKey` identified by the given `public_key` from the account. Fails if the access key with the given public key doesn't exist. All subsequent batched actions will continue to execute, even if the public key that authorized that transaction was removed.\n- `AddKey` adds a new `AccessKey` identified by a given `public_key` to the account. Fails if an access key with the given public key already exists. We removed `SwapKeyTransaction`, because it can be replaced with 2 batched actions: delete an old key and add a new key.\n- `DeleteAccount` deletes the `receiver_id` account if the account doesn't have enough balance to pay the rent, or the `receiver_id` is the `predecessor_id`. Sends the remaining balance to the `beneficiary_id` account.\n\nThe new `Receipt` contains the shared information and either one of the receipt actions or a list of actions:\n\n- `predecessor_id` the account ID of the immediate previous sender (predecessor) of this receipt. It can be different from the `signer_id` in some cases, e.g. 
for promises.\n- `receiver_id` the account ID of the current account, on which we need to perform action(s).\n- `receipt_id` is a unique ID of this receipt (previously called `nonce`). It's generated from either the signed transaction or the parent receipt.\n- `receipt` can be one of 2 types:\n  - `ActionReceipt` is used to perform some actions on the receiver.\n  - `DataReceipt` is used when some data needs to be passed from the predecessor to the receiver, e.g. an execution result.\n\nTo support promises and callbacks we introduce a concept of cross-shard data sharing with dependencies. Each `ActionReceipt` may have a list of input `data_id`. The execution will not start until all required inputs are received. Once the execution completes and if there is `output_data_id`, it produces a `DataReceipt` that will be routed to the `output_receiver_id`.\n\n`ActionReceipt` contains the following fields:\n\n- `signer_id` the account ID of the signer, who signed the transaction.\n- `signer_public_key` the public key that the signer used to sign the original signed transaction.\n- `output_data_id` is the data ID used to create the `DataReceipt`. If it's absent, then the `DataReceipt` is not created.\n- `output_receiver_id` is the account ID of the data receiver. It's needed to route the `DataReceipt`. It's absent if the DataReceipt is not needed.\n- `input_data_id` is the list of data IDs that are required for the execution of the `ActionReceipt`. If some of the data IDs are not available when the receipt is received, then the `ActionReceipt` is postponed until all data is available. Once the last `DataReceipt` for the required input data arrives, the action receipt execution is triggered.\n- `action` is the list of actions to execute. The execution doesn't need to validate permissions of the actions, but needs to fail in some cases. E.g. 
when the receiver's account doesn't exist and the action acts on the account, or when the action is a function call and the code is not present.\n\n`DataReceipt` contains the following fields:\n\n- `data_id` is the data ID to be used as an input.\n- `success` is true if the `ActionReceipt` that generated this `DataReceipt` finished the execution without any failures.\n- `data` is the binary data that is returned from the last action of the `ActionReceipt`. Right now, it's empty for all actions except for function calls. For function calls the data is the result of the code execution. But in the future we might introduce non-contract state reads.\n\nData should be stored at the same shard as the receiver's account, even if the receiver's account doesn't exist.\n\n### Refunds\n\nIn case an `ActionReceipt` execution fails the runtime can generate a refund.\nWe've removed `refund_account_id` from receipts, because the account IDs for refunds can be determined from the `signer_id` and `predecessor_id` in the `ActionReceipt`.\nAll unused gas and action fees (also measured in gas) are always refunded back to the `signer_id`, because fees are always prepaid by the signer. The gas is converted into tokens using the `gas_price`.\nThe deposit balances from `FunctionCall` and `Transfer` are refunded back to the `predecessor_id`, because they were deducted from predecessor's account balance.\nIt's also important to note that the account ID of predecessor for refund receipts is `system`.\nIt's done to prevent refund loops, e.g. when the account to receive the refund was deleted before the refund arrives. 
In this case the refund is burned.\n\nIf the function call action with the attached `deposit` fails in the middle of the execution, then 2 refund receipts can be generated, one for the unused gas and one for the deposits.\nThe runtime should combine them into one receipt if `signer_id` and `predecessor_id` is the same.\n\nExample of a receipt for a refund of `42000` atto-tokens to `vasya.near`:\n\n```json\n{\n    \"predecessor_id\": \"system\",\n    \"receiver_id\": \"vasya.near\",\n    \"receipt_id\": ...,\n\n    \"action\": {\n        \"signer_id\": \"vasya.near\",\n        \"signer_public_key\": ...,\n\n        \"gas_price\": \"3\",\n\n        \"output_data_id\": null,\n        \"output_receiver_id\": null,\n\n        \"input_data_id\": [],\n\n        \"action\": [\n            {\n                \"transfer\": {\n                    \"deposit\": \"42000\"\n                }\n            }\n        ]\n    }\n}\n```\n\n### Examples\n\n#### Account Creation\n\nTo create a new account we can create a new `Transaction`:\n\n```json\n{\n    \"signer_id\": \"vasya.near\",\n    \"public_key\": ...,\n    \"nonce\": 42,\n    \"receiver_id\": \"vitalik.vasya.near\",\n\n    \"action\": [\n        {\n            \"create_account\": {\n            }\n        },\n        {\n            \"transfer\": {\n                \"deposit\": \"19231293123\"\n            }\n        },\n        {\n            \"deploy_contract\": {\n                \"code\": ...\n            }\n        },\n        {\n            \"add_key\": {\n                \"public_key\": ...,\n                \"access_key\": ...\n            }\n        },\n        {\n            \"function_call\": {\n                \"method_name\": \"init\",\n                \"args\": ...,\n                \"gas\": 20000,\n                \"deposit\": \"0\"\n            }\n        }\n    ]\n}\n```\n\nThis transaction is sent from `vasya.near` signed with a `public_key`.\nThe receiver is `vitalik.vasya.near`, which is a new account 
id.\nThe transaction contains a batch of actions.\nFirst we create the account, then we transfer a few tokens to the newly created account, then we deploy code on the new account, add a new access key with some given public key, and as a final action initialize the deployed code by calling a method `init` with some arguments.\n\nFor this transaction to work `vasya.near` needs to have enough balance on the account to cover gas and deposits for all actions at once.\nEvery action has an associated gas fee, while `transfer` and `function_call` actions additionally need balance for deposits and gas (for executions and promises).\n\nOnce we validated and subtracted the total amount from the `vasya.near` account, this transaction is transformed into a `Receipt`:\n\n```json\n{\n    \"predecessor_id\": \"vasya.near\",\n    \"receiver_id\": \"vitalik.vasya.near\",\n    \"receipt_id\": ...,\n\n    \"action\": {\n        \"signer_id\": \"vasya.near\",\n        \"signer_public_key\": ...,\n\n        \"gas_price\": \"3\",\n\n        \"output_data_id\": null,\n        \"output_receiver_id\": null,\n\n        \"input_data_id\": [],\n\n        \"action\": [...]\n    }\n}\n```\n\nIn this example the gas price at the moment when the transaction was processed was 3 per gas.\nThis receipt will be sent to `vitalik.vasya.near`'s shard to be executed.\nIn case the `vitalik.vasya.near` account already exists, the execution will fail and some amount of prepaid fees will be refunded back to `vasya.near`.\nIf the account creation receipt succeeds, it wouldn't create a `DataReceipt`, because `output_data_id` is `null`.\nBut it will generate a refund receipt for the unused portion of the prepaid function call `gas`.\n\n#### Deploy code example\n\nDeploying code with initialization is pretty similar to creating an account, except you can't deploy code on someone else's account. 
So the transaction's `receiver_id` has to be the same as the `signer_id`.\n\n#### Simple promise with callback\n\nLet's say the transaction contained a single action which is a function call to `a.contract.near`.\nIt created a new promise `b.contract.near` and added a callback to itself.\nOnce the execution completes it will result in the following new receipts:\n\nThe receipt for the new promise towards `b.contract.near`\n\n```json\n{\n    \"predecessor_id\": \"a.contract.near\",\n    \"receiver_id\": \"b.contract.near\",\n    \"receipt_id\": ...,\n\n    \"action\": {\n        \"signer_id\": \"vasya.near\",\n        \"signer_public_key\": ...,\n\n        \"gas_price\": \"3\",\n\n        \"output_data_id\": \"data_123_1\",\n        \"output_receiver_id\": \"a.contract.near\",\n\n        \"input_data_id\": [],\n\n        \"action\": [\n            {\n                \"function_call\": {\n                    \"method_name\": \"sum\",\n                    \"args\": ...,\n                    \"gas\": 10000,\n                    \"deposit\": \"0\"\n                }\n            }\n        ]\n    }\n}\n```\n\nInteresting details:\n\n- `signer_id` is still `vasya.near`, because it's the account that initialized the transaction, but not the creator of the promise.\n- `output_data_id` contains some unique data ID. 
In this example we used `data_123_1`.\n- `output_receiver_id` indicates where to route the result of the execution.\n\n\nThe other receipt is for the callback which will stay in the same shard.\n\n```json\n{\n    \"predecessor_id\": \"a.contract.near\",\n    \"receiver_id\": \"a.contract.near\",\n    \"receipt_id\": ...,\n\n    \"action\": {\n        \"signer_id\": \"vasya.near\",\n        \"signer_public_key\": ...,\n\n        \"gas_price\": \"3\",\n\n        \"output_data_id\": null,\n        \"output_receiver_id\": null,\n\n        \"input_data_id\": [\"data_123_1\"],\n\n        \"action\": [\n            {\n                \"function_call\": {\n                    \"method_name\": \"process_sum\",\n                    \"args\": ...,\n                    \"gas\": 10000,\n                    \"deposit\": \"0\"\n                }\n            }\n        ]\n    }\n}\n```\n\nIt looks very similar to the new promise, but instead of `output_data_id` it has an `input_data_id`.\nThis action receipt will be postponed until the other receipt is routed, executed and generated a data receipt.\n\nOnce the new promise receipt is successfully executed, it will generate the following receipt:\n\n```json\n{\n    \"predecessor_id\": \"b.contract.near\",\n    \"receiver_id\": \"a.contract.near\",\n    \"receipt_id\": ...,\n\n    \"data\": {\n        \"data_id\": \"data_123_1\",\n        \"success\": true,\n        \"data\": ...\n    }\n}\n```\n\nIt contains the data ID `data_123_1` and routed to the `a.contract.near`.\n\nLet's say the callback receipt was processed and postponed, then this data receipt will trigger execution of the callback receipt, because the all input data is now available.\n\n#### Remote callback with 2 joined promises, with a callback on itself\n\nLet's say `a.contract.near` wants to call `b.contract.near` and `c.contract.near`, and send the result to `d.contract.near` for joining before processing the result on itself.\nIt will generate 2 receipts for new 
promises, 1 receipt for the remote callback and 1 receipt for the callback on itself.\n\nPart of the receipt (#1) for the promise towards `b.contract.near`:\n\n```\n...\n\"output_data_id\": \"data_123_b\",\n\"output_receiver_id\": \"d.contract.near\",\n\n\"input_data_id\": [],\n...\n```\n\nPart of the receipt (#2) for the promise towards `c.contract.near`:\n\n```\n...\n\"output_data_id\": \"data_321_c\",\n\"output_receiver_id\": \"d.contract.near\",\n\n\"input_data_id\": [],\n...\n```\n\nThe receipt (#3) for the remote callback that has to be executed on `d.contract.near` with data from `b.contract.near` and `c.contract.near`:\n\n```json\n{\n    \"predecessor_id\": \"a.contract.near\",\n    \"receiver_id\": \"d.contract.near\",\n    \"receipt_id\": ...,\n\n    \"action\": {\n        \"signer_id\": \"vasya.near\",\n        \"signer_public_key\": ...,\n\n        \"gas_price\": \"3\",\n\n        \"output_data_id\": \"bla_543\",\n        \"output_receiver_id\": \"a.contract.near\",\n\n        \"input_data_id\": [\"data_123_b\", \"data_321_c\"],\n\n        \"action\": [\n            {\n                \"function_call\": {\n                    \"method_name\": \"join_data\",\n                    \"args\": ...,\n                    \"gas\": 10000,\n                    \"deposit\": \"0\"\n                }\n            }\n        ]\n    }\n}\n```\n\nIt also has the `output_data_id` and `output_receiver_id` that are specified back towards `a.contract.near`.\n\nAnd finally the part of the receipt (#4) for the local callback on `a.contract.near`:\n\n```\n...\n\"output_data_id\": null,\n\"output_receiver_id\": null,\n\n\"input_data_id\": [\"bla_543\"],\n...\n```\n\nFor all of this to execute the first 3 receipts need to go to the corresponding shards and be processed.\nIf for some reason the data arrives before the corresponding action receipt, then this data will be held there until the action receipt arrives.\nAn example for this is if the receipt #3 is delayed for some 
reason, while the receipt #2 was processed and generated a data receipt towards `d.contract.near` which arrived before #3.\n\nAlso if any of the function calls fail, the receipt is still going to generate a new `DataReceipt` because it has `output_data_id` and `output_receiver_id`. Here is an example of a `DataReceipt` for a failed execution:\n\n```json\n{\n    \"predecessor_id\": \"b.contract.near\",\n    \"receiver_id\": \"d.contract.near\",\n    \"receipt_id\": ...,\n\n    \"data\": {\n        \"data_id\": \"data_123_b\",\n        \"success\": false,\n        \"data\": null\n    }\n}\n```\n\n#### Swap Key example\n\nSince there is no swap key action, we can just batch 2 actions together: one for adding a new key and one for deleting the old key. The actual order is not important if the public keys are different, but if the public key is the same then you need to first delete the old key and only after this add a new key.\n\n\n# Reference-level explanation\n\n\n### Updated protobufs\n\n##### public_key.proto\n\n```proto\nsyntax = \"proto3\";\n\nmessage PublicKey {\n    enum KeyType {\n        ED25519 = 0;\n    }\n    KeyType key_type = 1;\n    bytes data = 2;\n}\n```\n\n##### signed_transaction.proto\n\n```proto\nsyntax = \"proto3\";\n\nimport \"access_key.proto\";\nimport \"public_key.proto\";\nimport \"uint128.proto\";\n\nmessage Action {\n    message CreateAccount {\n        // empty\n    }\n\n    message DeployContract {\n        // Binary wasm code\n        bytes code = 1;\n    }\n\n    message FunctionCall {\n        string method_name = 1;\n        bytes args = 2;\n        uint64 gas = 3;\n        Uint128 deposit = 4;\n    }\n\n    message Transfer {\n        Uint128 deposit = 1;\n    }\n\n    message Stake {\n        // New total stake\n        Uint128 stake = 1;\n        PublicKey public_key = 2;\n    }\n\n    message AddKey {\n        PublicKey public_key = 1;\n        AccessKey access_key = 2;\n    }\n\n    message DeleteKey {\n        PublicKey public_key 
= 1;\n    }\n\n    message DeleteAccount {\n        // The account ID which would receive the remaining funds.\n        string beneficiary_id = 1;\n    }\n\n    oneof action {\n        CreateAccount create_account = 1;\n        DeployContract deploy_contract = 2;\n        FunctionCall function_call = 3;\n        Transfer transfer = 4;\n        Stake stake = 5;\n        AddKey add_key = 6;\n        DeleteKey delete_key = 7;\n        DeleteAccount delete_account = 8;\n    }\n}\n\nmessage Transaction {\n    string signer_id = 1;\n    PublicKey public_key = 2;\n    uint64 nonce = 3;\n    string receiver_id = 4;\n\n    repeated Action actions = 5;\n}\n\nmessage SignedTransaction {\n    bytes signature = 1;\n\n    Transaction transaction = 2;\n}\n\n```\n\n##### receipt.proto\n\n```proto\nsyntax = \"proto3\";\n\nimport \"public_key.proto\";\nimport \"signed_transaction.proto\";\nimport \"uint128.proto\";\nimport \"wrappers.proto\";\n\nmessage DataReceipt {\n    bytes data_id = 1;\n    google.protobuf.BytesValue data = 2;\n}\n\nmessage ActionReceipt {\n    message DataReceiver {\n        bytes data_id = 1;\n        string receiver_id = 2;\n    }\n\n    string signer_id = 1;\n    PublicKey signer_public_key = 2;\n\n    // The price of gas is determined when the original SignedTransaction is\n    // converted into the Receipt. It's used for refunds.\n    Uint128 gas_price = 3;\n\n    // List of data receivers where to route the output data\n    // (e.g. 
result of execution)\n    repeated DataReceiver output_data_receivers = 4;\n\n    // Ordered list of data IDs to provide as input results.\n    repeated bytes input_data_ids = 5;\n\n    repeated Action actions = 6;\n}\n\nmessage Receipt {\n    string predecessor_id = 1;\n    string receiver_id = 2;\n    bytes receipt_id = 3;\n\n    oneof receipt {\n        ActionReceipt action = 4;\n        DataReceipt data = 5;\n    }\n}\n\n```\n\n### Validation and Permissions\n\nTo validate `SignedTransaction` we need to do the following:\n\n- verify the transaction hash against the signature and the given public key\n- verify `signer_id` is a valid account ID\n- verify `receiver_id` is a valid account ID\n- fetch the account for the given `signer_id`\n- fetch the access key for the given `signer_id` and `public_key`\n- verify the access key `nonce`\n- get the current price of gas\n- compute the total required balance for the transaction, including action fees (in gas), deposits and prepaid gas.\n- verify the account balance is larger than the required balance.\n- verify the actions are allowed by the access key permissions, e.g. if the access key only allows function calls, then we need to verify the receiver, method name and allowance.\n\nBefore we convert a `Transaction` to a new `ActionReceipt`, we don't need to validate permissions of the actions or their order. 
It's checked during `ActionReceipt` execution.\n\n`ActionReceipt` doesn't need to be validated before we start executing it.\nThe actions in the `ActionReceipt` are executed in the given order.\nEach action has to check for validity before execution.\n\nSince `CreateAccount` gives permissions to perform actions on the new account, as if it's your account, we introduce a temporary variable `actor_id`.\nAt the beginning of the execution `actor_id` is set to the value of `predecessor_id`.\n\nValidation rules for actions:\n\n- `CreateAccount`\n  - check the account `receiver_id` doesn't exist\n- `DeployContract`, `Stake`, `AddKey`, `DeleteKey`\n  - check the account `receiver_id` exists\n  - check `actor_id` equals `receiver_id`\n- `FunctionCall`, `Transfer`\n  - check the account `receiver_id` exists\n\nWhen `CreateAccount` completes, the `actor_id` changes to `receiver_id`.\nNOTE: When we implement the `DeleteAccount` action, its completion will change `actor_id` back to `predecessor_id`.\n\nOnce validated, each action might still do some additional checks, e.g. `FunctionCall` might check that the code exists and `method_name` is valid.\n\n### `DataReceipt` generation rules\n\nIf `ActionReceipt` doesn't have `output_data_id` and `output_receiver_id`, then `DataReceipt` is not generated.\nOtherwise, `DataReceipt` depends on the last action of `ActionReceipt`. There are 4 different outcomes:\n\n1. Last action is invalid, failed or the execution stopped on some previous action.\n    - `DataReceipt` is generated\n    - `data_id` is set to the value of `output_data_id` from the `ActionReceipt`\n    - `success` is set to `false`\n    - `data` is set to `null`\n2. Last action is valid and finished successfully, but it's not a `FunctionCall`. Or a `FunctionCall` that returned no value.\n    - `DataReceipt` is generated\n    - `data_id` is set to the value of `output_data_id` from the `ActionReceipt`\n    - `success` is set to `true`\n    - `data` is set to `null`\n3. 
Last action is `FunctionCall`, and the result of the execution is some value.\n    - `DataReceipt` is generated\n    - `data_id` is set to the value of `output_data_id` from the `ActionReceipt`\n    - `success` is set to `true`\n    - `data` is set to the bytes of the returned value\n4. Last action is `FunctionCall`, and the result of the execution is a promise ID\n    - `DataReceipt` is NOT generated, because we don't have the value for the execution.\n    - Instead we should modify the `ActionReceipt` generated for the returned promise ID.\n    - In this receipt the `output_data_id` should be set to the `output_data_id` of the action receipt that we just finished executing.\n    - `output_receiver_id` is set the same way as `output_data_id` described above.\n\n#### Example for the case #4\n\nA user called contract `a.app`, which called `b.app` and expects a callback to `a.app`. So `a.app` generated 2 receipts:\nTowards `b.app`:\n\n```\n...\n\"receiver_id\": \"b.app\",\n...\n\"output_data_id\": \"data_a\",\n\"output_receiver_id\": \"a.app\",\n\n\"input_data_id\": [],\n...\n```\n\nTowards itself:\n\n```\n...\n\"receiver_id\": \"a.app\",\n...\n\"output_data_id\": null,\n\"output_receiver_id\": null,\n\n\"input_data_id\": [\"data_a\"],\n...\n```\n\nNow let's say `b.app` doesn't actually do the work, but it's just a middleman that charges some fees before redirecting the work to the actual contract `c.app`.\nIn this case `b.app` creates a new promise by calling `c.app` and returns it instead of data.\nThis triggers the case #4, so it doesn't generate the data receipt yet; instead it creates an action receipt which would look like this:\n\n```\n...\n\"receiver_id\": \"c.app\",\n...\n\"output_data_id\": \"data_a\",\n\"output_receiver_id\": \"a.app\",\n\n\"input_data_id\": [],\n...\n```\n\nOnce it completes, it would send a data receipt to `a.app` (unless `c.app` is a middleman as well).\n\nBut let's say `b.app` doesn't want to reveal it's a middleman.\nIn this 
case it would call `c.app`, but instead of returning data directly to `a.app`, `b.app` wants to wrap the result into some nice wrapper.\nThen instead of returning the promise to `c.app`, `b.app` would attach a callback to itself and return the promise ID of that callback. Here is how it would look:\nTowards `c.app`:\n\n```\n...\n\"receiver_id\": \"c.app\",\n...\n\"output_data_id\": \"data_b\",\n\"output_receiver_id\": \"b.app\",\n\n\"input_data_id\": [],\n...\n```\n\nSo when the callback receipt is first generated, it looks like this:\n\n```\n...\n\"receiver_id\": \"b.app\",\n...\n\"output_data_id\": null,\n\"output_receiver_id\": null,\n\n\"input_data_id\": [\"data_b\"],\n...\n```\n\nBut once its promise ID is returned with `promise_return`, it is updated to return data towards `a.app`:\n\n```\n...\n\"receiver_id\": \"b.app\",\n...\n\"output_data_id\": \"data_a\",\n\"output_receiver_id\": \"a.app\",\n\n\"input_data_id\": [\"data_b\"],\n...\n```\n\n### Data storage\n\nWe should maintain the following persistent maps per account (`receiver_id`):\n\n- Received data: `data_id -> (success, data)`\n- Postponed receipts: `receipt_id -> Receipt`\n- Pending input data: `data_id -> receipt_id`\n\nWhen `ActionReceipt` is received, the runtime iterates through the list of `input_data_id`.\nIf `input_data_id` is not present in the received data map, then a pair `(input_data_id, receipt_id)` is added to the pending input data map and the receipt is marked as postponed.\nAt the end of the iteration if the receipt is marked as postponed, then it's added to the map of postponed receipts keyed by `receipt_id`.\nIf all `input_data_id`s are available in the received data, then `ActionReceipt` is executed.\n\nWhen `DataReceipt` is received, a pair `(data_id, (success, data))` is added to the received data map.\nThen the runtime checks if `data_id` is present in the pending input data.\nIf it's present, then `data_id` is removed from the pending input data and the corresponding 
`ActionReceipt` is checked again (see above).\n\nNOTE: We can optimize by not storing `data_id` in the received data map when the pending input data entry is present and it is the final input data item in the receipt.\n\nWhen `ActionReceipt` is executed, the runtime deletes all `input_data_id` from the received data map.\nThe `receipt_id` is deleted from the postponed receipts map (if present).\n\n### TODO Receipt execution\n\n- input data is available to all function calls in the batched actions\n- TODODO\n\n# Future possibilities\n\n- We can add an `or`-based data selector, so data storage can be affected.\n"
  },
  {
    "path": "neps/archive/0013-system-methods.md",
    "content": "- Proposal Name: System methods in runtime API\n- Start Date: 2019-09-03\n- NEP PR: [nearprotocol/neps#0013](https://github.com/nearprotocol/neps/pull/0013)\n\n# Summary\n\nAdds new ability for contracts to perform some system functions:\n\n- create new accounts (with possible code deploy and initialization)\n- deploy new code (or redeploying code for upgrades)\n- batched function calls\n- transfer money\n- stake\n- add key\n- delete key\n- delete account\n\n# Motivation\n\nContracts should have the ability to create new accounts, transfer money without calling code and\nstake. It will enable full functionality of contract-based accounts.\n\n# Reference\n\nWe introduce additional promise APIs to support batched actions.\n\nFirstly, we enable ability to create empty promises without any action. They act similarly to\ntraditional promises, but don't contain function call action.\n\nSecondly, we add API to append individual actions to promises. For example we can create\na promise with a function_call first using `promise_create` and then attach a transfer action on top\nof this promise. So the transfer will only deposit tokens if the function call succeeds. Another example\nis how we create accounts now using batched actions. To create a new account, we create a transaction with\nthe following actions: `create_account`, `transfer`, `add_key`. 
It creates a new account, deposits some funds to it, and then adds a new key.\n\nFor more examples see NEP#8: https://github.com/nearprotocol/NEPs/pull/8/files?short_path=15b6752#diff-15b6752ec7d78e7b85b8c7de4a19cbd4\n\n**NOTE: The existing promise API is a special case of the batched promise API.**\n\n- Calling `promise_batch_create` and then `promise_batch_action_function_call` will produce the same promise as calling `promise_create` directly.\n- Calling `promise_batch_then` and then `promise_batch_action_function_call` will produce the same promise as calling `promise_then` directly.\n\n## Promises API\n\n#### promise_batch_create\n\n```rust\npromise_batch_create(account_id_len: u64, account_id_ptr: u64) -> u64\n```\n\nCreates a new promise towards the given `account_id` without any actions attached to it.\n\n###### Panics\n\n- If `account_id_len + account_id_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.\n\n###### Returns\n\n- Index of the new promise that uniquely identifies it within the current execution of the method.\n\n---\n\n#### promise_batch_then\n\n```rust\npromise_batch_then(promise_idx: u64, account_id_len: u64, account_id_ptr: u64) -> u64\n```\n\nAttaches a new empty promise that is executed after the promise pointed by `promise_idx` is complete.\n\n###### Panics\n\n- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.\n- If `account_id_len + account_id_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.\n\n###### Returns\n\n- Index of the new promise that uniquely identifies it within the current execution of the method.\n\n---\n\n#### promise_batch_action_create_account\n\n```rust\npromise_batch_action_create_account(promise_idx: u64)\n```\n\nAppends `CreateAccount` action to the batch of actions for the given promise pointed by `promise_idx`.\nDetails for the action: 
https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R48\n\n###### Panics\n\n- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.\n- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.\n\n---\n\n#### promise_batch_action_deploy_contract\n\n```rust\npromise_batch_action_deploy_contract(promise_idx: u64, code_len: u64, code_ptr: u64)\n```\n\nAppends `DeployContract` action to the batch of actions for the given promise pointed by `promise_idx`.\nDetails for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R49\n\n###### Panics\n\n- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.\n- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.\n- If `code_len + code_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.\n\n---\n\n#### promise_batch_action_function_call\n\n```rust\npromise_batch_action_function_call(promise_idx: u64,\n                                   method_name_len: u64,\n                                   method_name_ptr: u64,\n                                   arguments_len: u64,\n                                   arguments_ptr: u64,\n                                   amount_ptr: u64,\n                                   gas: u64)\n```\n\nAppends `FunctionCall` action to the batch of actions for the given promise pointed by `promise_idx`.\nDetails for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R50\n\n*NOTE: Calling `promise_batch_create` and then `promise_batch_action_function_call` will produce the same promise as calling `promise_create` directly.*\n\n###### Panics\n\n- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.\n- If the promise pointed by the `promise_idx` is an 
ephemeral promise created by `promise_and`.\n- If `method_name_len + method_name_ptr` or `arguments_len + arguments_ptr`\nor `amount_ptr + 16` points outside the memory of the guest or host, with `MemoryAccessViolation`.\n\n---\n\n#### promise_batch_action_transfer\n\n```rust\npromise_batch_action_transfer(promise_idx: u64, amount_ptr: u64)\n```\n\nAppends `Transfer` action to the batch of actions for the given promise pointed by `promise_idx`.\nDetails for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R51\n\n###### Panics\n\n- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.\n- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.\n- If `amount_ptr + 16` points outside the memory of the guest or host, with `MemoryAccessViolation`.\n\n---\n\n#### promise_batch_action_stake\n\n```rust\npromise_batch_action_stake(promise_idx: u64,\n                           amount_ptr: u64,\n                           public_key_len: u64,\n                           public_key_ptr: u64)\n```\n\nAppends `Stake` action to the batch of actions for the given promise pointed by `promise_idx`.\nDetails for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R52\n\n###### Panics\n\n- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.\n- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.\n- If the given public key is not a valid public key (e.g. 
wrong length) `InvalidPublicKey`.\n- If `amount_ptr + 16` or `public_key_len + public_key_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.\n\n---\n\n#### promise_batch_action_add_key_with_full_access\n\n```rust\npromise_batch_action_add_key_with_full_access(promise_idx: u64,\n                                              public_key_len: u64,\n                                              public_key_ptr: u64,\n                                              nonce: u64)\n```\n\nAppends `AddKey` action to the batch of actions for the given promise pointed by `promise_idx`.\nDetails for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R54\nThe access key will have `FullAccess` permission, details: [0005-access-keys.md#guide-level-explanation](0005-access-keys.md#guide-level-explanation)\n\n###### Panics\n\n- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.\n- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.\n- If the given public key is not a valid public key (e.g. 
wrong length) `InvalidPublicKey`.\n- If `public_key_len + public_key_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.\n\n---\n\n#### promise_batch_action_add_key_with_function_call\n\n```rust\npromise_batch_action_add_key_with_function_call(promise_idx: u64,\n                                                public_key_len: u64,\n                                                public_key_ptr: u64,\n                                                nonce: u64,\n                                                allowance_ptr: u64,\n                                                receiver_id_len: u64,\n                                                receiver_id_ptr: u64,\n                                                method_names_len: u64,\n                                                method_names_ptr: u64)\n```\n\nAppends `AddKey` action to the batch of actions for the given promise pointed by `promise_idx`.\nDetails for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R54\nThe access key will have `FunctionCall` permission, details: [0005-access-keys.md#guide-level-explanation](0005-access-keys.md#guide-level-explanation)\n\n- If the `allowance` value (not the pointer) is `0`, the allowance is set to `None` (which means unlimited allowance). A positive value represents a `Some(...)` allowance.\n- Given `method_names` is a `utf-8` string with `,` used as a separator. The VM will split the given string into a vector of strings.\n\n###### Panics\n\n- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.\n- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.\n- If the given public key is not a valid public key (e.g. 
wrong length) `InvalidPublicKey`.\n- if `method_names` is not a valid `utf-8` string, fails with `BadUTF8`.\n- If `public_key_len + public_key_ptr`, `allowance_ptr + 16`, `receiver_id_len + receiver_id_ptr` or\n`method_names_len + method_names_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.\n\n---\n\n#### promise_batch_action_delete_key\n\n```rust\npromise_batch_action_delete_key(promise_idx: u64,\n                                public_key_len: u64,\n                                public_key_ptr: u64)\n```\n\nAppends `DeleteKey` action to the batch of actions for the given promise pointed by `promise_idx`.\nDetails for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R55\n\n###### Panics\n\n- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.\n- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.\n- If the given public key is not a valid public key (e.g. wrong length) `InvalidPublicKey`.\n- If `public_key_len + public_key_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.\n\n---\n\n#### promise_batch_action_delete_account\n\n```rust\npromise_batch_action_delete_account(promise_idx: u64,\n                                    beneficiary_id_len: u64,\n                                    beneficiary_id_ptr: u64)\n```\n\nAppends `DeleteAccount` action to the batch of actions for the given promise pointed by `promise_idx`.\nAction is used to delete an account. It can be performed on a newly created account, on your own account or an account with\ninsufficient funds to pay rent. 
Takes `beneficiary_id` to indicate where to send the remaining funds.\n\n###### Panics\n\n- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.\n- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.\n- If `beneficiary_id_len + beneficiary_id_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.\n\n---\n"
  },
  {
    "path": "neps/archive/0017-execution-outcome.md",
    "content": "- Proposal Name: Execution Outcome\n- Start Date: 2019-09-23\n- NEP PR: [nearprotocol/neps#0017](https://github.com/nearprotocol/neps/pull/17)\n- Issue(s): https://github.com/nearprotocol/nearcore/issues/1307\n\n# Summary\n\nRefactor current TransactionResult/TransactionLog/FinalTransactionResult to improve naming, deduplicate results and provide\nresults resolution by the front-end for async-calls.\n\n# Motivation\n\nRight now the contract calls 2 promises and doesn't return a value, the front-end will return one of the promises results as an execution result. It's because we return the last result from final transaction result. With the current API, it's impossible to know what is the actual result of the contract execution.\n\n# Guide-level explanation\n\nHere is the proposed Rust structures. Highlights:\n\n- Rename `TransactionResult` to `ExecutionOutcome` since it's used for transactions and receipts\n- Rename `TransactionStatus` and merge it with result into `ExecutionResult`.\n- In case of success `ExecutionStatus` can either be a value of a receipt_id. This helps to resolve the\n  actual returned value by the transaction from async calls, e.g. `A->B->A->C` should return result from `C`.\n  Also in distinguish result in case of forks, e.g. `A` calls `B` and calls `C`, but returns a result from `B`.\n  Currently there is no way to know.\n- Rename `TransactionLog` to `ExecutionOutcomeWithId` which is `ExecutionOutcome` with receipt_id\n  or transaction hash. 
Probably needs a better name.\n- Rename `FinalTransactionResult` to `FinalExecutionOutcome`.\n- Update `FinalTransactionStatus` to `FinalExecutionStatus`.\n- Provide the final resolved result directly, so the front-end doesn't need to traverse the receipt tree.\n  We may also expose the error directly in the execution result.\n- Split the final outcome into transaction and receipts.\n\n### NEW\n\n- The `FinalExecutionStatus` contains the early result even if some dependent receipts are not yet executed. Most function call\ntransactions contain 2 receipts. The 1st receipt is the execution, the 2nd is the refund. Before this change, the transaction was\nnot resolved until the 2nd receipt was executed. After this change, the `FinalExecutionOutcome` will have\n`FinalExecutionStatus::SuccessValue(\"\")` after the execution of the 1st receipt, while the 2nd receipt execution outcome status is still `Pending`.\nThis helps to get the transaction result on the front-end faster without waiting for all refunds.\n\n```rust\npub struct ExecutionOutcome {\n    /// Execution status. Contains the result in case of successful execution.\n    pub status: ExecutionStatus,\n    /// Logs from this transaction or receipt.\n    pub logs: Vec<LogEntry>,\n    /// Receipt IDs generated by this transaction or receipt.\n    pub receipt_ids: Vec<CryptoHash>,\n    /// The amount of the gas burnt by the given transaction or receipt.\n    pub gas_burnt: Gas,\n}\n\n/// The status of execution for a transaction or a receipt.\npub enum ExecutionStatus {\n    /// The execution is pending.\n    Pending,\n    /// The execution has failed.\n    Failure,\n    /// The final action succeeded and returned some value or an empty vec.\n    SuccessValue(Vec<u8>),\n    /// The final action of the receipt returned a promise or the signed transaction was converted\n    /// to a receipt. 
Contains the receipt_id of the generated receipt.\n    SuccessReceiptId(CryptoHash),\n}\n\n// TODO: Need a better name\npub struct ExecutionOutcomeWithId {\n    /// The transaction hash or the receipt ID.\n    pub id: CryptoHash,\n    pub outcome: ExecutionOutcome,\n}\n\n#[derive(Serialize, Deserialize, PartialEq, Eq, Debug, Clone)]\npub enum FinalExecutionStatus {\n    /// The execution has not yet started.\n    NotStarted,\n    /// The execution has started and is still going.\n    Started,\n    /// The execution has failed.\n    Failure,\n    /// The execution has succeeded and returned some value or an empty vec in base64.\n    SuccessValue(String),\n}\n\npub struct FinalExecutionOutcome {\n    /// Execution status. Contains the result in case of successful execution.\n    pub status: FinalExecutionStatus,\n    /// The execution outcome of the signed transaction.\n    pub transaction: ExecutionOutcomeWithId,\n    /// The execution outcome of receipts.\n    pub receipts: Vec<ExecutionOutcomeWithId>,\n}\n```\n"
  },
  {
    "path": "neps/archive/0018-view-change-method.md",
    "content": "- Proposal Name: Improve view/change methods in contracts\n- Start Date: 2019-09-26\n- NEP PR: [nearprotocol/neps#0000](https://github.com/nearprotocol/neps/pull/18)\n\n# Summary\n\nCurrently the separation between view methods and change methods on the contract level is not very well defined and causes\nquite a bit of confusion among developers. We propose in the NEP to elucidate the difference between view methods\nand change methods and how they should be used. In short, we would like to restrict view methods from accessing certain\ncontext variables and do not distinguish between view and change methods on the contract level. Developers have the option\nto differentiate between the two in frontend or through near-shell.\n\n# Motivation\n\nFrom the feedback we received it seems that developers are confused by the results they get from view calls, which are\nmainly caused by the fact that some binding methods such as `signer_account_id`, `current_account_id`, `attached_deposit`\ndo not make sense in a view call. \nTo avoid such confusion and create better developer experience, it is better if those context variables\nare prohibited in view calls.\n\n# Guide-level explanation\n\nAmong binding methods that we expose from nearcore, some do make sense in a view call, such as `block_index`,\nwhile the majority does not. 
\nHere we explicitly list the methods that are not allowed in a view call and, in case they are invoked, the contract will panic with\n`<method_name> is not allowed in view calls`.\n\nThe following methods are prohibited:\n\n- `signer_account_id`\n- `signer_account_pk`\n- `predecessor_account_id`\n- `attached_deposit`\n- `prepaid_gas`\n- `used_gas`\n- `promise_create`\n- `promise_then`\n- `promise_and`\n- `promise_batch_create`\n- `promise_batch_then`\n- `promise_batch_action_create_account`\n- `promise_batch_action_deploy_contract`\n- `promise_batch_action_function_call`\n- `promise_batch_action_transfer`\n- `promise_batch_action_stake`\n- `promise_batch_action_add_key_with_full_access`\n- `promise_batch_action_add_key_with_function_call`\n- `promise_batch_action_delete_key`\n- `promise_batch_action_delete_account`\n- `promise_results_count`\n- `promise_result`\n- `promise_return`\n\nFrom the developer perspective, if they want to call view functions from the command line on some contract, they would just\ncall `near view <contractName> <methodName> [args]`. If they are building an app and want to call a view function from the\nfrontend, they should follow the same pattern as we have right now, specifying `viewMethods` and `changeMethods` in\n`loadContract`.\n\n# Reference-level explanation\n\nTo implement this NEP, we need to change how binding methods are handled in runtime. More specifically, we can rename\n`free_of_charge` to `is_view` and use that to indicate whether we are processing a view call. In addition, we can add\na variant `ProhibitedInView(String)` to `HostError` so that if `is_view` is true,\nthen all access to the prohibited\nmethods will error with `HostError::ProhibitedInView(<method_name>)`.\n\n# Drawbacks\n\nIn terms of not allowing context variables, I don't see any drawback as those variables do not have a proper meaning\nin view functions. 
For alternatives, see the section below.\n\n# Rationale and alternatives\n\nThis design is very simple and requires very little change to the existing infrastructure. An alternative solution is\nto distinguish between view methods and change methods on the contract level. One way to do it is through decorators, as\ndescribed [here](https://github.com/nearprotocol/NEPs/pull/3). However, enforcing such a distinction on the contract level\nrequires much more work and is not currently feasible for Rust contracts.\n\n# Unresolved questions\n\n# Future possibilities\n"
  },
  {
    "path": "neps/archive/0033-economics.md",
    "content": "- Proposal Name: NEAR economics specs\n- Start Date: 2020-02-23\n- NEP PR: [nearprotocol/neps#0000](https://github.com/nearprotocol/NEPs/pull/33)\n- Issue(s): link to relevant issues in relevant repos (not required).\n\n# Summary\n\n\nAdding economics specification for NEAR Protocol based on the NEAR whitepaper - https://pages.near.org/papers/the-official-near-white-paper/#economics\n\n# Motivation\n\n\nCurrently, the specification is defined by the implementation in https://github.com/near/nearcore. This codifies all the parameters and formulas and defines main concepts.\n\n# Guide-level explanation\n\n\nThe goal is to build a set of specs about NEAR token economics, for analysts and adopters, to simplify their understanding of the protocol and its game-theoretical dynamics.\nThis initial release will be oriented to validators and staking in general.\n\n# Reference-level explanation\n\n\nThis part of the documentation is self-contained. It may provide material for third-party research papers, and spreadsheet analysis.\n\n# Drawbacks\n\n\nWe might just put this in the NEAR docs.\n\n# Rationale and alternatives\n\n\n# Unresolved questions\n\n\n# Future possibilities\n\n\nThis is an open document which may be used by NEAR's community to pull request a new economic policy. Having a formal document also for non-technical aspects opens new opportunities for the governance.\n"
  },
  {
    "path": "neps/archive/0040-split-states.md",
    "content": "- Proposal Name: Splitting States for Simple Nightshade\n- Start Date: 2021-07-19\n- NEP PR: [near/NEPs#241](https://github.com/near/NEPs/pull/241)\n- Issue(s): [near/NEPs#225](https://github.com/near/NEPs/issues/225) [near/nearcore#4419](https://github.com/near/nearcore/issues/4419)\n\n# Summary\n\nThis proposal proposes a way to split each shard in the blockchain into multiple shards.\n\nCurrently, the near blockchain only has one shard and it needs to be split into eight shards for Simple Nightshade.\n\n# Motivation\n\nTo enable sharding, specifically, phase 0 of Simple Nightshade, we need to find a way to split the current one shard state into eight shards.\n\n# Guide-level explanation \n\nThe proposal assumes that all validators track all shards and that challenges are not enabled.\n\nSuppose the new sharding assignment comes into effect at epoch T.\n\nState migration is done at epoch T-1, when the validators for epoch T are catching up states for the next epoch.\nAt the beginning of epoch T-1, they run state sync for the current shards if needed.\nFrom the existing states, they build states for the new shards, then apply changes to the new states when they process the blocks in epoch T-1.\n This whole process runs off-chain as the new states will be not included in blocks at epoch T-1.\nAt the beginning of epoch T, the new validators start to build blocks based on the new state roots.\n\nThe change involves three parts.\n\n## Dynamic Shards\n\nThe first issue to address in splitting shards is the assumption that the current implementation of chain and runtime makes that the number of shards never changes.\nThis in turn involves two parts, how the validators know when and how sharding changes happen and how they store states of shards from different epochs during the transition.\nThe former is a protocol change and the latter only affects validators' internal states.\n\n### Protocol Change\n\nSharding config for an epoch will be encapsulated in 
a struct `ShardLayout`, which not only contains the number of shards, but also layout information to decide which account ids should be mapped to which shards.\nThe `ShardLayout` information will be stored as part of `EpochConfig`.\nRight now, `EpochConfig` is stored in `EpochManager` and remains static across epochs.\nThat will be changed in the new implementation so that `EpochConfig` can be changed according to protocol versions, similar to how `RuntimeConfig` is implemented right now.\n\nThe switch to Simple Nightshade will be implemented as a protocol upgrade.\n`EpochManager` creates a new `EpochConfig` for each epoch from the protocol version of the epoch.\nWhen the protocol version is large enough and the `SimpleNightShade` feature is enabled, the `EpochConfig` will use the `ShardLayout` of Simple Nightshade, otherwise it uses the genesis `ShardLayout`.\nSince the protocol version and the shard information of epoch T will be determined at the end of epoch T-2, the validators will have time to prepare states of the new shards during epoch T-1.\n\nAlthough not ideal, the `ShardLayout` for Simple Nightshade will be added as part of the genesis config in the code.\nThe genesis config file itself will not be changed, but the field will be set to a default value we specify in the code.\nThis process is as hacky as it sounds, but currently we have no better way to account for changing protocol config.\nCompletely solving this issue is a hard problem by itself, so we do not try to solve it in this NEP.\n\nWe will discuss how the sharding transition will be managed in the next section.\n\n### State Change\n\nIn epoch T-1, the validators need to maintain two versions of states for all shards, one for the current epoch, one that is split for the next epoch.\nCurrently, shards are identified by their `shard_id`, which is a number ranging from `0` to `NUM_SHARDS-1`. `shard_id` is also used as part of the indexing keys by which trie nodes are stored in 
the database.\nHowever, when shards may change across epochs, `shard_id` can no longer be used to uniquely identify states because new shards and old shards will share the same `shard_id`s under this representation.\n\nTo solve this issue, the new proposal creates a new struct `ShardUId` as a unique identifier to reference shards across epochs.\n`ShardUId` will only be used for storing and managing states, for example, in `Trie`-related structures.\nIn most other places in the code, it is clear which epoch the referenced shard belongs to, and `ShardId` is enough to identify the shard.\nThere will be no change at the protocol level since `ShardId` will continue to be used in protocol-level specs.\n\n`ShardUId` contains a version number and the corresponding `shard_id`.\n\n```rust\npub struct ShardUId {\n    version: u32,\n    shard_id: u32,\n}\n```\n\nThe version number is different between different shard layouts, to ensure `ShardUId`s for shards from different epochs are different.\n`EpochManager` will be responsible for managing shard versions and `ShardUId` across epochs.\n\n## Build New States\n\nCurrently, when receiving the first block of every epoch, validators start downloading states to prepare for the next epoch.\nWe can modify this existing process to make the validators build states for the new shards after they finish downloading states for the existing shards.\nTo build the new states, the validator iterates through all accounts in the current states and adds them to the new states one by one.\n\n## Update States\n\nSimilar to how validators usually catch up for the next epoch, the new states are updated as new blocks are processed.\nThe difference is that in epoch T-1, chunks are still sharded by the current sharding assignment, but the validators need to perform updates on the new states.\nWe cannot simply split transactions and receipts to the new shards and process updates on each new shard separately.\nIf we do so, since each shard processes 
transactions and receipts with their own gas limits, some receipts may be delayed in the new states but not in the current states, or the other way around.\nThat will lead to inconsistencies between the orderings by which transactions and receipts are applied to the current and new states.\n\nFor example, for simplicity, assume there is only one shard A in epoch T-1 and there will be two shards B and C in epoch T.\nTo process a block in epoch T-1, shard A needs to process receipts 0, 1, ..., 99 while in the new sharding assignment receipts 0, 2, ..., 98 belong to shard B and receipts 1, 3, ..., 99 belong to shard C.\nAssume in shard A, the gas limit is hit after receipt 89 is processed, so receipts 90 to 99 are delayed.\nTo achieve the same processing result, shard B must process receipts 0, 2, ..., 88 and delay 90, 92, ..., 98, and shard C must process receipts 1, 3, ..., 89 and delay receipts 91, 93, ..., 99.\nHowever, shards B and C have their own gas limits, so which receipts will be processed and which will be delayed cannot be guaranteed.\n\nWhether a receipt is processed in a block or delayed can affect the execution result of this receipt because transactions are charged and local receipts are processed before delayed receipts are processed.\nFor example, let's assume Alice's account has 0N now and Bob sends a transaction T1 to transfer 5N to Alice.\nThe transaction has been converted to a receipt R at block i-1 and sent to Alice's shard at block i.\nLet's say Alice signs another transaction T2 to send 1N to Charlie and that transaction is included in block i+1.\nWhether transaction T2 succeeds depends on whether receipt R is processed or delayed in block i.\nIf R is processed in block i, Alice's account will have 5N before block i+1 and T2 will succeed, while if R is delayed in block i, Alice's account will have 0N and T2 will be declined.\n\nTherefore, the validators must still process transactions and receipts based on the current sharding assignment.\nAfter the processing is 
finished, they can take the generated state changes to apply to the new states.\n\n# Reference-level explanation\n\n## Protocol-Level Shard Representation\n\n### `ShardLayout`\n\n```rust\npub enum ShardLayout {\n    V0(ShardLayoutV0),\n    V1(ShardLayoutV1),\n}\n```\n\n`ShardLayout` is a versioned struct that contains all information needed to decide which accounts belong to which shards. Note that `ShardLayout` only contains information at the protocol level, so it uses `ShardOrd` instead of `ShardId`.\n\nThe API contains the following two functions.\n\n#### `get_split_shards`\n\n```rust\npub fn get_split_shards(&self, parent_shard_id: ShardId) -> Option<&Vec<ShardId>>\n```\n\nreturns the children shards of shard `parent_shard_id` (we will explain parent-children shards shortly). Note that `parent_shard_id` is a shard from the last `ShardLayout`, not from `self`. The returned `ShardId`s represent shards in the current shard layout.\nThis information is needed for constructing states for the new shards.\n\nWe only allow adding new shards that are split from the existing shards. If shards B and C are split from shard A, we call shard A the parent shard of shards B and C.\nFor example, if epoch T-1 has a shard layout `shard_layout0` with two shards with `shard_ord` 0 and 1 and each of them will be split into two shards in `shard_layout1` in epoch T, then `shard_layout1.get_split_shards(0)` returns `[0,1]` and `shard_layout1.get_split_shards(1)` returns `[2,3]`.\n\n#### `version`\n\n```rust\npub fn version(&self) -> ShardVersion\n```\n\nreturns the version number of this shard layout. This version number is used to create `ShardUId` for shards in this `ShardLayout`. 
The version numbers must be different for all shard layouts used in the blockchain.\n\n#### `account_id_to_shard_id`\n\n```rust\npub fn account_id_to_shard_id(account_id: &AccountId, shard_layout: ShardLayout) -> ShardId\n```\n\nmaps an account id to a shard id given a shard layout.\n\n#### `ShardLayoutV0`\n\n```rust\npub struct ShardLayoutV0 {\n    /// map accounts evenly across all shards\n    num_shards: NumShards,\n}\n```\n\nA shard layout that maps accounts evenly across all shards by computing the hash of the account id modulo the number of shards. This is added to capture the current `account_id_to_shard_id` algorithm, to keep backward compatibility for some existing tests. `parent_shards` for `ShardLayoutV0` is always `None` and `version` is always `0`.\n\n#### `ShardLayoutV1`\n\n```rust\npub struct ShardLayoutV1 {\n    /// num_shards = fixed_shards.len() + boundary_accounts.len() + 1\n    /// Each account in this vector, together with all of its subaccounts, maps to the shard at the same position in this array.\n    fixed_shards: Vec<AccountId>,\n    /// The rest of the accounts are divided by boundary_accounts into ranges; each range is mapped to a shard\n    boundary_accounts: Vec<AccountId>,\n    /// Parent shards for the shards, useful for constructing states for the shards.\n    /// None for the genesis shard layout\n    parent_shards: Option<Vec<ShardId>>,\n    /// Version of the shard layout, useful to uniquely identify the shard layout\n    version: ShardVersion,\n}\n```\n\nA shard layout that consists of some fixed shards, each of which is mapped to a fixed account, and other shards that are mapped to ranges of accounts. 
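To make the account-to-shard mapping concrete, here is a simplified sketch of `ShardLayoutV1`-style assignment (this is not the nearcore implementation; the subaccount check, the account names, and the boundary accounts are simplified and hypothetical):

```rust
// Sketch of ShardLayoutV1 account assignment: fixed shards come first,
// then the remaining accounts are split into ranges by boundary_accounts.
// (Simplified: real account-id parsing in nearcore is more involved.)
fn account_id_to_shard_id(
    account_id: &str,
    fixed_shards: &[&str],
    boundary_accounts: &[&str],
) -> u64 {
    // An account maps to a fixed shard if it is the fixed account itself
    // or one of its subaccounts (e.g. "bridge.aurora" under "aurora").
    for (shard_id, fixed) in fixed_shards.iter().enumerate() {
        if account_id == *fixed || account_id.ends_with(&format!(".{}", fixed)) {
            return shard_id as u64;
        }
    }
    // Otherwise, find the range the account falls into; ranges are
    // delimited by the (sorted) boundary accounts.
    let mut shard_id = fixed_shards.len() as u64;
    for boundary in boundary_accounts {
        if account_id < *boundary {
            return shard_id;
        }
        shard_id += 1;
    }
    shard_id
}

fn main() {
    let fixed = ["aurora"];
    let boundaries = ["h", "p"];
    // num_shards = fixed.len() + boundary_accounts.len() + 1 = 4
    assert_eq!(account_id_to_shard_id("aurora", &fixed, &boundaries), 0);
    assert_eq!(account_id_to_shard_id("bridge.aurora", &fixed, &boundaries), 0);
    assert_eq!(account_id_to_shard_id("alice.near", &fixed, &boundaries), 1);
    assert_eq!(account_id_to_shard_id("kate.near", &fixed, &boundaries), 2);
    assert_eq!(account_id_to_shard_id("zoe.near", &fixed, &boundaries), 3);
}
```

With one fixed shard and two boundary accounts, this yields four shards in total, matching the `num_shards` formula in the struct's comment.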
This will be the `ShardLayout` used by Simple Nightshade.\n\n### `EpochConfig`\n\n`EpochConfig` will contain the shard layout for the given epoch.\n\n```rust\npub struct EpochConfig {\n    // existing fields\n    ...\n    /// Shard layout of this epoch, may change from epoch to epoch\n    pub shard_layout: ShardLayout,\n}\n```\n\n### `AllEpochConfig`\n\n`AllEpochConfig` stores a mapping from protocol versions to `EpochConfig`s. The `EpochConfig` for a particular epoch can be retrieved from `AllEpochConfig`, given the protocol version of the epoch. For the Simple Nightshade migration, it only needs to contain two configs. `AllEpochConfig` will be stored inside `EpochManager` to be used to construct `EpochConfig` for different epochs.\n\n```rust\npub struct AllEpochConfig {\n    genesis_epoch_config: Arc<EpochConfig>,\n    simple_nightshade_epoch_config: Arc<EpochConfig>,\n}\n```\n\n#### `for_protocol_version`\n\n```rust\npub fn for_protocol_version(&self, protocol_version: ProtocolVersion) -> &Arc<EpochConfig>\n```\n\nreturns the `EpochConfig` for the given protocol version. `EpochManager` will call this function for every new epoch.\n\n### `EpochManager`\n\n`EpochManager` will be responsible for managing `ShardLayout` across epochs. As we mentioned, `EpochManager` stores an instance of `AllEpochConfig`, so it can return the `ShardLayout` for each epoch.\n\n#### `get_shard_layout`\n\n```rust\npub fn get_shard_layout(&mut self, epoch_id: &EpochId) -> Result<&ShardLayout, EpochError>\n```\n\n## Internal Shard Representation in Validators' State\n\n### `ShardUId`\n\n`ShardUId` is a unique identifier that a validator uses internally to identify shards from all epochs. 
It only exists inside a validator's internal state and can be different among validators; thus, it should never be exposed to outside APIs.\n\n```rust\npub struct ShardUId {\n    pub version: ShardVersion,\n    pub shard_id: u32,\n}\n```\n\n`version` in `ShardUId` comes from the version of the `ShardLayout` that this shard belongs to. This way, different shards from different shard layouts will have different `ShardUId`s.\n\n### Database storage\n\nThe following database columns are stored with `ShardId` as part of the database key; there, `ShardId` will be replaced by `ShardUId`:\n\n- ColState\n- ColChunkExtra\n- ColTrieChanges\n\n#### `TrieCachingStorage`\n\nTrie storage will construct the database key from the `ShardUId` and the hash of the trie node.\n\n##### `get_shard_uid_and_hash_from_key`\n\n```rust\nfn get_shard_uid_and_hash_from_key(key: &[u8]) -> Result<(ShardUId, CryptoHash), std::io::Error>\n```\n\n##### `get_key_from_shard_uid_and_hash`\n\n```rust\nfn get_key_from_shard_uid_and_hash(shard_uid: ShardUId, hash: &CryptoHash) -> [u8; 40]\n```\n\n## Build New States\n\nThe following method in `Chain` will be added or modified to split a shard's current state into multiple states.\n\n### `build_state_for_split_shards`\n\n```rust\npub fn build_state_for_split_shards(&mut self, sync_hash: &CryptoHash, shard_id: ShardId) -> Result<(), Error>\n```\n\nbuilds states for the new shards that the shard `shard_id` will be split into.\nAfter this function is finished, the states for the new shards should be ready in `ShardTries` to be accessed.\n\n### `run_catchup`\n\n```rust\npub fn run_catchup(...) {\n    ...\n    match state_sync.run(\n        ...\n    )? 
{\n        StateSyncResult::Unchanged => {}\n        StateSyncResult::Changed(fetch_block) => {...}\n        StateSyncResult::Completed => {\n            // build states for new shards if shards will change and we will track some of the new shards\n            if self.runtime_adapter.will_shards_change_next_epoch(epoch_id) {\n                let mut parent_shards = HashSet::new();\n                let (new_shards, mapping_to_parent_shards) = self.runtime_adapter.get_shards_next_epoch(epoch_id);\n                for shard_id in new_shards {\n                    if self.runtime_adapter.will_care_about_shard(None, &sync_hash, shard_id, true) {\n                        parent_shards.insert(mapping_to_parent_shards.get(shard_id)?);\n                    }\n                }\n                for shard_id in parent_shards {\n                    self.build_state_for_split_shards(&sync_hash, shard_id);\n                }\n            }\n            ...\n        }\n    }\n    ...\n}\n```\n\n## Update States\n\n### `split_state_changes`\n\n```rust\nfn split_state_changes(shard_id: ShardId, state_changes: &Vec<RawStateChangesWithTrieKey>) -> HashMap<ShardId, Vec<RawStateChangesWithTrieKey>>\n```\n\nsplits the state changes made to a current shard into the changes that should be applied to the new shards. Note that this function call can take a long time. To avoid blocking the client actor from processing and producing blocks for the current epoch, it should be called from a separate thread. Unfortunately, as of now, catching up states and catching up blocks are both run in the client actor. They should be moved to a separate actor. However, that can be a separate project, although this NEP will depend on that project. 
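The grouping that `split_state_changes` performs can be sketched as follows (hypothetical, heavily simplified types; in nearcore the account id would be extracted from each change's `TrieKey`, and the shard assignment would come from the new `ShardLayout`):

```rust
use std::collections::HashMap;

// Simplified stand-in for RawStateChangesWithTrieKey (hypothetical).
#[derive(Debug, Clone, PartialEq)]
struct RawStateChange {
    account_id: String,
    value: Option<Vec<u8>>, // None means the key is deleted
}

// Route every state change to the new shard that will own its account.
fn split_state_changes(
    changes: &[RawStateChange],
    account_to_new_shard: impl Fn(&str) -> u64,
) -> HashMap<u64, Vec<RawStateChange>> {
    let mut result: HashMap<u64, Vec<RawStateChange>> = HashMap::new();
    for change in changes {
        let shard_id = account_to_new_shard(&change.account_id);
        result.entry(shard_id).or_default().push(change.clone());
    }
    result
}

fn main() {
    let changes = vec![
        RawStateChange { account_id: "alice.near".to_string(), value: Some(vec![1]) },
        RawStateChange { account_id: "zoe.near".to_string(), value: None },
    ];
    // Hypothetical new layout: accounts below "m" go to shard 0, the rest to shard 1.
    let split = split_state_changes(&changes, |account| if account < "m" { 0 } else { 1 });
    assert_eq!(split[&0].len(), 1);
    assert_eq!(split[&1].len(), 1);
}
```

The cost of this pass is linear in the number of state changes, but the changes themselves can be numerous, which is why the prose above insists on running it off the client actor's thread.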
In fact, the issue has already been discussed in [#3201](https://github.com/near/nearcore/issues/3201).\n\n### `apply_chunks`\n\n`apply_chunks` will be modified so that states of the new shards will be updated when processing chunks.\nIn `apply_chunks`, after processing each chunk, the state changes in `apply_results` are sorted into changes to the new shards.\nAt the end, we apply these changes to the new shards.\n\n```rust\nfn apply_chunks(...) -> Result<(), Error> {\n    ...\n    for (shard_id, (chunk_header, prev_chunk_header)) in\n        (block.chunks().iter().zip(prev_block.chunks().iter())).enumerate()\n    {\n        ...\n        let apply_result = ...;\n        // split state changes to the new shards\n        let changes_to_new_shards = self.split_state_changes(trie_changes);\n        // apply changes_to_new_shards to the new shards\n        for (new_shard_id, new_state_changes) in changes_to_new_shards {\n            // locate the state for the new shard\n            let trie = self.get_trie_for_shard(new_shard_id);\n            let chunk_extra =\n                self.chain_store_update.get_chunk_extra(&prev_block.hash(), new_shard_id)?.clone();\n            let mut state_update = TrieUpdate::new(trie.clone(), *chunk_extra.state_root());\n\n            // update the state\n            for state_change in new_state_changes {\n                state_update.set(state_change.trie_key, state_change.value);\n            }\n            state_update.commit(StateChangeCause::Resharding);\n            let (trie_changes, state_changes) = state_update.finalize()?;\n\n            // save the TrieChanges and ChunkExtra\n            self.chain_store_update.save_trie_changes(WrappedTrieChanges::new(\n                self.tries,\n                new_shard_id,\n                trie_changes,\n                state_changes,\n                *block.hash(),\n            ));\n            self.chain_store_update.save_chunk_extra(\n                &block.hash(),\n                
new_shard_id,\n                ChunkExtra::new(&trie_changes.new_root, CryptoHash::default(), Vec::new(), 0, 0, 0),\n            );\n        }\n    }\n    ...\n}\n```\n\n## Garbage Collection\n\nThe old states need to be garbage collected after the resharding finishes. The garbage collection algorithm today won't automatically handle that. (#TODO: why?)\n\nAlthough we need to handle garbage collection eventually, it is not a pressing issue. Thus, we leave the discussion out of this NEP for now and will add a detailed plan later.\n\n# Drawbacks\n\nThe drawback of this approach is that it will not work when challenges are enabled, since challenges to the transition to the new states will be too large to construct or verify.\nThus, most of the change will likely be a one-time change that only works for the Simple Nightshade transition, although the part of the change involving `ShardId` may be reused in the future.\n\n# Rationale and alternatives\n\n- Why is this design the best in the space of possible designs?\n  - It is the best because its implementation is the simplest.\n    Considering that we want to launch Simple Nightshade as soon as possible by Q4 2021 and we will not enable challenges any time soon, this is the best option we have.\n- What other designs have been considered and what is the rationale for not choosing them?\n  - We have considered other designs that change states incrementally and keep state roots on chain to make them compatible with challenges.\n    However, the implementations of those approaches are overly complicated and do not fit into our timeline for launching Simple Nightshade.\n- What is the impact of not doing this?\n  - The impact will be the delay of launching Simple Nightshade, or no launch at all.\n\n# Unresolved questions\n\n- What parts of the design do you expect to resolve through the NEP process before this gets merged?\n  - Garbage collection\n  - State Sync?\n- What parts of the design do you expect to resolve through the implementation 
of this feature before stabilization?\n  - There might be small changes in the detailed implementations or specifications of some of the functions described above, but the overall structure will not be changed.\n- What related issues do you consider out of scope for this NEP that could be addressed in the future independently of the solution that comes out of this NEP?\n  - One issue that is related to this NEP but will be resolved independently is how trie nodes are stored in the database.\n    Right now, the database key is a combination of `shard_id` and the node hash.\n    Part of the change proposed in this NEP regarding `ShardId` is because of this.\n    Plans to store only the node hash as the key are being discussed [here](https://github.com/near/nearcore/issues/4527), but this will happen after the Simple Nightshade migration, since completely solving the issue will take some careful design and we want to prioritize launching Simple Nightshade for now.\n  - Another issue that is not part of this NEP but must be solved for this NEP to work is to move expensive computation related to state sync / catch up into a separate actor [#3201](https://github.com/near/nearcore/issues/3201).\n  - Lastly, we should also build a better mechanism to deal with changing protocol configs. The current approach of putting protocol config changes in the genesis config and changing how the genesis config file is parsed is not a long-term solution.\n\n# Future possibilities\n\n## Extension\n\nIn the future, when challenges are enabled, resharding and state upgrade should be implemented on-chain.\n\n## Affected Projects\n\n- \n\n## Pre-mortem\n\n- Building and catching up new states takes longer than one epoch to finish.\n- Protocol version switched back to pre-Simple-Nightshade\n- Validators cannot track shards properly after resharding\n- Genesis State\n- Must load the correct `shard_version`\n- ShardTracker?\n"
  },
  {
    "path": "neps/archive/README.md",
    "content": "# Proposals\n\nThis section contains the NEAR Enhancement Proposals (NEPs) that cover a fully developed concept for NEAR. Before an idea is turned into a proposal, it will be fleshed out and discussed on the [NEAR Governance Forum](https://gov.near.org).\n\nThese subcategories are great places to start such a discussion:\n\n- [Standards](https://gov.near.org/c/dev/standards/29) — examples might include new protocol standards, token standards, etc.\n- [Proposals](https://gov.near.org/c/dev/proposals/68) — ecosystem proposals that may touch tooling, node experience, wallet usage, and so on.\n\nOnce an idea has been thoroughly discussed and vetted, a pull request should be made according to the instructions at the [NEP repository](https://github.com/near/NEPs).\n\nThe proposals shown in this section have been merged and exist to offer as much information as possible, including historical motivations, drawbacks, approaches, future concerns, etc.\n\nOnce a proposal has been fully implemented, it can be added as a specification, but it will remain a proposal until that time.\n"
  },
  {
    "path": "neps/nep-0001.md",
    "content": "---\nNEP: 1\nTitle: NEP Purpose and Guidelines\nAuthors: Bowen W. <bowen@near.org>, Austin Baggio <austin.baggio@near.org>, Ori A. <ori@near.org>, Vlad F. <frol@near.org>, Guillermo G. <guillermo@near.dev>\nStatus: Approved\nDiscussionsTo: https://github.com/near/NEPs/pull/333, https://github.com/near/NEPs/pull/619\nType: Developer Tools\nVersion: 2.0.0\nCreated: 2022-03-03\nLast Updated: 2025-08-04\n---\n\n## Summary\n\nNEAR Enhancement Proposals (NEPs) are design documents that describe standards for the NEAR platform, including core protocol specifications, contract standards, and wallet APIs. Each NEP provides concise technical specifications and the rationale behind the proposed enhancement.\n\nEach NEP is championed by a community member, who builds consensus within the community and shepherds the NEP from ideation to completion. The NEP process is designed to be open and transparent, allowing anyone in the NEAR community to propose, discuss, and review ideas for improving the NEAR ecosystem.\n\nAll NEPs are stored as text files in a [versioned repository](https://github.com/near/NEPs), allowing for easy historical tracking.\n\n## Motivation\n\nThe purpose of the NEP process is to give the community a way to propose, discuss, and document changes that impact the whole NEAR ecosystem in a structured manner. Given the complexity and number of participants involved across the ecosystem, a well-defined process helps ensure transparency, security, and stability.\n\n## NEP Types\n\nThere are three kinds of NEPs:\n\n1. A **Protocol** NEP describes a new feature of the NEAR protocol (e.g. [NEP-264](https://github.com/near/NEPs/blob/master/neps/nep-0264.md), [NEP-366](https://github.com/near/NEPs/blob/master/neps/nep-0366.md))\n2. A **Contract Standards** NEP specifies NEAR smart contract interfaces for a reusable concept in the NEAR ecosystem (e.g. 
[NEP-141](https://github.com/near/NEPs/blob/master/neps/nep-0141.md), [NEP-171](https://github.com/near/NEPs/blob/master/neps/nep-0171.md))\n3. A **Wallet Standards** NEP specifies ecosystem-wide APIs for Wallet implementations (e.g. [NEP-413](https://github.com/near/NEPs/blob/master/neps/nep-0413.md))\n\n## Submit a NEP\n\nEach NEP must have a champion who proposes a new idea, shepherds the discussions in the appropriate forums to build community consensus, proposes the NEP, and helps it progress toward completion.\n\n### Start with ideation\n\nEveryone in the community is welcome to propose, discuss, and review ideas to improve the NEAR protocol and standards. The NEP process begins with a new idea for the NEAR ecosystem.\n\nBefore submitting a new NEP, check publicly whether your idea is original and relevant to the NEAR community. This saves time and avoids proposing something already discussed or unsuitable for most users.\n\n- **Check prior proposals:** Many ideas for changing NEAR come up frequently. Please search the [issues](https://github.com/near/NEPs/issues) and NEPs in this repo before proposing something new.\n- **Share the idea:** Submit a new [issue](https://github.com/near/NEPs/issues) explaining the problem you want to tackle, and your proposed solution.\n- **Get feedback:** Share the issue with the appropriate community group:\n        - Wallet Group: https://nearbuilders.com/tg-wallet\n        - Protocol: https://near.zulipchat.com/\n        - Contract Standards: https://t.me/NEAR_Tools_Community_Group\n\n### Submit a NEP Draft\n\nFollowing the above initial discussions, the author willing to champion the NEP should submit the NEP Draft in the form of a `Draft Pull Request`:\n\n1. Fork the [NEPs repository](https://github.com/near/NEPs).\n2. Copy `nep-0000-template.md` to `neps/nep-xxxx.md` (do **not** assign a NEP number yet).\n3. Fill in the NEP following the NEP template guidelines. For the Header Preamble, make sure to set the status as “Draft.”\n4. 
Push this to your GitHub fork and submit a pull request.\n5. Now that your NEP has an open pull request, use the pull request number to update your `0000` prefix. For example, if the PR is 305, the NEP should be `neps/nep-0305.md`.\n6. Push this to your GitHub fork and submit a pull request. Mention the @near/nep-moderators in the comment and turn the PR into a \"Ready for Review\" state once you believe the NEP is ready for review.\n\n## NEP Lifecycle\n\nThe NEP process begins when an author submits a [NEP draft](#submit-a-nep-draft). The NEP lifecycle consists of three stages: draft, review, and voting, with two possible outcomes: approval or rejection.\n\nThroughout the process, various roles play a critical part in moving the proposal forward. Most of the activity happens asynchronously on the NEP within GitHub, where all the roles can communicate and collaborate on revisions and improvements to the proposal.\n\n![NEP Process](https://user-images.githubusercontent.com/110252255/201413632-f72743d6-593e-4747-9409-f56bc38de17b.png)\n\n### NEP Stages\n\n- **Draft:** The first formally tracked stage of a new NEP. This process begins once an author submits a draft proposal and the NEP moderator merges it into the NEP repo when properly formatted.\n- **Review:** A NEP moderator marks a NEP as ready for Subject Matter Experts Review. If the NEP is not approved within two months, it is automatically rejected.\n- **Voting:** This is the final voting period for a NEP. The working group will vote on whether to accept or reject the NEP. This period is limited to two weeks. 
If during this period necessary normative changes are required, the NEP will revert to Review.\n\nThe moderator, when moving a NEP to the review stage, should update the pull request description to include the\nreview summary, for example:\n\n```markdown\n---\n\n## NEP Status _(Updated by NEP moderators)_\n\nSME reviews:\n\n- [ ] Role1: @github-handle\n- [ ] Role2: @github-handle\n\nContract Standards WG voting indications (❔ | :+1: | :-1: ):\n\n- ❔ @github-handle\n- ❔ ...\n\n<Other> voting indications:\n\n- ❔\n- ❔\n```\n\n### NEP Outcomes\n\n- **Approved:** If the working group votes to approve, they will move the NEP to Approved. Once approved, Standards NEPs exist in a state of finality and should only be updated to correct errata and add non-normative clarifications.\n- **Rejected:** If the working group votes to reject, they will move the NEP to Rejected.\n\n### NEP Roles and Responsibilities\n\n![author](https://user-images.githubusercontent.com/110252255/181816534-2f92b073-79e2-4e8d-b5b9-b10824958acd.png)\n**Author**<br />\n_Anyone can participate_\n\nThe NEP author (or champion) is responsible for creating a NEP draft that follows the guidelines. They drive the NEP forward by actively participating in discussions and incorporating feedback. During the voting stage, they may present the NEP to the working group and community, and provide a final implementation with thorough testing and documentation once approved.\n\n![Moderator](https://user-images.githubusercontent.com/110252255/181816650-b1610c0e-6d32-4d2a-a34e-877c702139bd.png)\n**Moderator**<br />\n_Assigned by the working group_\n\nThe moderator is responsible for facilitating the process and validating that the NEP follows the guidelines. They do not assess the technical feasibility or write any part of the proposal. They provide comments if revisions are necessary and ensure that all roles are working together to progress the NEP forward. 
They also schedule and facilitate public voting calls.\n\n![Reviewer](https://user-images.githubusercontent.com/110252255/181816664-a9485ea6-e774-4999-b11d-dc8be6b08f87.png)\n**NEP Reviewer** (Subject Matter Experts)<br />\n_Assigned by the working group_\n\nThe reviewer is responsible for reviewing the technical feasibility of a NEP and giving feedback to the author. While they do not have voting power, they play a critical role in providing their voting recommendations along with a summary of the benefits and concerns that were raised in the discussion. Their inputs help everyone involved make a transparent and informed decision.\n\n![Approver](https://user-images.githubusercontent.com/110252255/181816752-521dd147-f56f-4c5c-84de-567b109f21d6.png)\n**Approver** (Working Groups)<br />\n_Selected by the Dev Gov DAO in the bootstrapping phase_\n\nThe working group is a selected committee of 3-7 recognized experts who are responsible for coordinating the public review and making decisions on a NEP in a fair and timely manner. There are multiple working groups, each one focusing on a specific ecosystem area, such as the Protocol or Wallet Standards. They assign reviewers to proposals, provide feedback to the author, and attend public calls to vote to approve or reject the NEP.\n\n### NEP Communication\n\nNEP discussions should happen asynchronously within the NEP’s public thread. This allows for broad participation and ensures transparency.\n\nHowever, if a discussion becomes circular and could benefit from a synchronous conversation, any participants on a given NEP can suggest that the moderator schedules an ad hoc meeting. For example, if a reviewer and author have multiple rounds of comments, they may request a call. The moderator can help coordinate the call and post the registration link on the NEP. 
The person who requested the call should designate a note-taker to post a summary on the NEP after the call.\n\nWhen a NEP gets to the final voting stage, the moderator will schedule a public working group meeting to discuss the NEP with the author and formalize the decision. The moderator will first coordinate a time with the author and working group members, and then post the meeting time and registration link on the NEP at least one week in advance.\n\nAll participants in the NEP process should maintain a professional and respectful code of conduct in all interactions. This includes communicating clearly and promptly and refraining from disrespectful or offensive language.\n\n### NEP Playbook\n\n1. Once an author [submits a NEP draft](#submit-a-nep-draft), the NEP moderators will review their pull request (PR) for structure, formatting, and other errors. Approval criteria are:\n    - The content is complete and technically sound. The moderators do not consider whether the NEP is likely or not to get accepted.\n    - The title accurately reflects the content.\n    - The language, spelling, grammar, sentence structure, and code style are correct and conformant.\n2. If the NEP is not ready for approval, the moderators will send it back to the author with specific instructions in the PR. The moderators must complete the review within one week.\n3. Once the moderators agree that the PR is ready for review, they will ask the approvers (working group members) to nominate a team of at least two reviewers (subject matter experts) to review the NEP. At least one working group member must explicitly tag the reviewers and comment: `\"As a working group member, I'd like to nominate @SME-username and @SME-username as the Subject Matter Experts to review this NEP.\"` If the assigned reviewers feel that they lack the relevant expertise to fully review the NEP, they can ask the working group to re-assign the reviewers for the NEP.\n4. 
The reviewers must finish the technical review within one week. Technical Review Guidelines:\n    - First, review the technical details of the proposals and assess their merit. If you have feedback, explicitly tag the author and comment: `\"As the assigned Reviewer, I request from @author-username to [ask clarifying questions, request changes, or provide suggestions that are actionable.].\"` It may take a couple of iterations to resolve any open comments.\n    - Second, once the reviewer believes that the NEP is close to the voting stage, explicitly tag the @near/nep-moderators and comment with your technical summary. The Technical Summary must include:\n        - A recommendation for the working group: `\"As the assigned reviewer, I do not have any feedback for the author. I recommend moving this NEP forward and for the working group to [accept or reject] it based on [provide reasoning, including a sense of importance or urgency of this NEP].\"` Please note that this is the reviewer's personal recommendation.\n        - A summary of benefits that surfaced in previous discussions. This should include a concise list of all the benefits that others raised, not just the ones that the reviewer personally agrees with.\n        - A summary of concerns or blockers, along with their current status and resolution. Again, this should reflect the collective view of all commenters, not just the reviewer's perspective.\n5. The NEP author can make revisions and request further reviews from the reviewers. However, if a proposal is in the review stage for more than two months, the moderator will automatically reject it. To reopen the proposal, the author must restart the NEP process again.\n6. Once both reviewers complete their technical summary, the moderators will notify the approvers (working group members) that the NEP is in the final comment period. The approvers must fully review the NEP within one week. Approver guidelines:\n    - First, read the NEP thoroughly. 
If you have feedback, explicitly tag the author and comment: `\"As a working group member, I request from @author-username to [ask clarifying questions, request changes, or provide actionable suggestions.].\"`\n    - Second, once the approver believes the NEP is close to the voting stage, explicitly comment with your voting indication: `\"As a working group member, I lean towards [approving OR rejecting] this NEP based on [provide reasoning].\"`\n7. Once all the approvers have indicated their votes, the moderator will review the voting indications for a 2/3 majority:\n    - If the votes lean toward rejection: The moderator will summarize the feedback and close the NEP.\n    - If the votes lean toward approval: The moderator will schedule a public call (see [NEP Communication](#nep-communication)) for the author to present the NEP and for the working group members to formalize the voting decision. If the working group members agree that the NEP is overall beneficial for the NEAR ecosystem and vote to approve it, then the proposal is considered accepted. After the call, the moderator will summarize the decision on the NEP.\n8. The NEP author or other assignees will complete action items from the call. For example, the author will finalize the \"Changelog\" section on the NEP, which summarizes the benefits and concerns for future reference.\n\n### Transferring NEP Ownership\n\nWhile a NEP is worked on, it occasionally becomes necessary to transfer ownership of a NEP to a new author. In general, it is preferable to retain the original author as a co-author of the transferred NEP, but that is up to the original author. A good reason to transfer ownership is that the original author no longer has the time or interest in updating it or following through with the NEP process. A bad reason to transfer ownership is that the author does not agree with the direction of the NEP. 
One aim of the NEP process is to try to build consensus around a NEP, but if that is not possible, an author can submit a competing NEP.\n\nIf you are interested in assuming ownership of a NEP, you can do this via a pull request. Fork the NEP repository, modify the owner, and submit a pull request. In the PR description, tag the original author and provide a summary of the work that was previously done. Also clearly state the intent of the fork and the relationship of the new PR to the old one. For example: \"Forked to address the remaining review comments in NEP \\# since the original author does not have time to address them.\"\n\n## What does a successful NEP look like?\n\nEach NEP should be written in markdown format and follow the [NEP-0000 template](https://github.com/near/NEPs/blob/master/nep-0000-template.md) and include all the appropriate sections, which will make it easier for the NEP reviewers and community members to understand and provide feedback. The most successful NEPs are those that go through collective iteration, with authors who actively seek feedback and support from the community. Ultimately, a successful NEP is one that addresses a specific problem or need within the NEAR ecosystem, is well-researched, and has the support of the community and ecosystem experts.\n\n### Auxiliary Files\n\nImages, diagrams, and auxiliary files should be included in a subdirectory of the assets folder for that NEP as follows: assets/nep-N (where N is to be replaced with the NEP number). 
When linking to an image in the NEP, use relative links such as `../assets/nep-1/image.png`\n\n### Style Guide\n\n#### NEP numbers\n\nWhen referring to a NEP by number, it should be written in the hyphenated form NEP-X where X is the NEP's assigned number.\n\n#### RFC 2119\n\nNEPs are encouraged to follow [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt) for terminology and to insert the following at the beginning of the Specification section:\n\nThe keywords \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).\n\n## NEP Maintenance\n\nGenerally, NEPs are not modifiable after reaching their final state. However, there are occasions when updating a NEP is necessary, such as when discovering a security vulnerability or identifying misalignment with a widely-used implementation. In such cases, an author may submit a NEP extension in a pull request with the proposed changes to an existing NEP document.\n\nA NEP extension has a higher chance of approval if it introduces clear benefits to existing implementors and does not introduce breaking changes.\n\nIf an author believes that a new extension meets the criteria for its own separate NEP, it is better to submit a new NEP than to modify an existing one. Just make sure to specify any dependencies on certain NEPs.\n\n## References\n\nThe content of this document was derived heavily from the PEP, BIP, Rust RFC, and EIP standards bootstrap documents:\n\n- Klock, F et al. Rust: RFC-0002: RFC Process. https://github.com/rust-lang/rfcs/blob/master/text/0002-rfc-process.md\n- Taaki, A. et al. Bitcoin Improvement Proposal: BIP:1, BIP Purpose and Guidelines. https://github.com/bitcoin/bips/blob/master/bip-0001.mediawiki\n- Warsaw, B. et al. Python Enhancement Proposal: PEP Purpose and Guidelines. 
https://github.com/python/peps/blob/main/peps/pep-0001.rst\n- Becze, M. et al. Ethereum Improvement Proposal EIP1: EIP Purpose and Guidelines. https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1.md\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0021.md",
    "content": "---\nNEP: 21\nTitle: Fungible Token Standard\nAuthor: Evgeny Kuzyakov <ek@near.org>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/pull/21\nType: Standards Track\nCategory: Contract\nCreated: 29-Oct-2019\nSupersededBy: 141\n---\n\n## Summary\n\nA standard interface for fungible tokens allowing for ownership, escrow, and transfer, specifically targeting third-party marketplace integration.\n\n## Motivation\n\nNEAR Protocol uses an asynchronous sharded Runtime. This means the following:\n\n- Storage for different contracts and accounts can be located on different shards.\n- Two contracts can be executed at the same time in different shards.\n\nWhile this increases the transaction throughput linearly with the number of shards, it also creates some challenges for cross-contract development.\nFor example, if one contract wants to query some information from the state of another contract (e.g. the current balance), by the time the first contract receives the balance, the real balance can change.\nThis means that in an async system, a contract can't rely on the state of another contract and assume it's not going to change.\n\nInstead, the contract can rely on a temporary partial lock of the state with a callback to act or unlock, but this requires careful engineering to avoid deadlocks.\n\n## Rationale and alternatives\n\nIn this standard we're trying to avoid enforcing locks, since most actions can still be completed without locks by transferring ownership to an escrow account.\n\nPrior art:\n\n- ERC-20 standard\n- NEP#4 NEAR NFT standard: nearprotocol/neps#4\n- For the latest lock proposals, see Safes (#26)\n\n## Specification\n\nWe should be able to do the following:\n\n- Initialize contract once. 
The given total supply will be owned by the given account ID.\n- Get the total supply.\n- Transfer tokens to a new user.\n- Set a given allowance for an escrow account ID.\n  - Escrow will be able to transfer up to this allowance from your account.\n- Get the current balance for a given account ID.\n- Transfer tokens from one user to another.\n- Get the current allowance for an escrow account on behalf of the balance owner. This should only be used in the UI, since a contract shouldn't rely on this temporary information.\n\nThere are a few concepts in the scenarios above:\n\n- **Total supply**. The total number of tokens in circulation.\n- **Balance owner**. An account ID that owns some amount of tokens.\n- **Balance**. Some amount of tokens.\n- **Transfer**. An action that moves some amount from one account to another account.\n- **Escrow**. A different account from the balance owner that has permission to use some amount of tokens.\n- **Allowance**. The amount of tokens an escrow account can use on behalf of the account owner.\n\nNote that precision is not part of the default standard, since it's not required to perform actions. The minimum\nvalue is always 1 token.\n\n### Simple transfer\n\nAlice wants to send 5 wBTC tokens to Bob.\nAssumptions:\n\n- The wBTC token contract is `wbtc`.\n- Alice's account is `alice`.\n- Bob's account is `bob`.\n- The precision on the wBTC contract is `10^8`.\n- The 5 tokens is `5 * 10^8`, or `500000000` as a number.\n\n#### High-level explanation\n\nAlice needs to issue one transaction to the wBTC contract to transfer 5 tokens (multiplied by precision) to Bob.\n\n#### Technical calls\n\n1. 
`alice` calls `wbtc::transfer({\"new_owner_id\": \"bob\", \"amount\": \"500000000\"})`.\n\n### Token deposit to a contract\n\nAlice wants to deposit 1000 DAI tokens to a compound interest contract to earn extra tokens.\nAssumptions:\n\n- The DAI token contract is `dai`.\n- Alice's account is `alice`.\n- The compound interest contract is `compound`.\n- The precision on the DAI contract is `10^18`.\n- The 1000 tokens is `1000 * 10^18`, or `1000000000000000000000` as a number.\n- The compound contract can work with multiple token types.\n\n#### High-level explanation\n\nAlice needs to issue 2 transactions. The first one is to `dai`, to set an allowance for `compound` to be able to withdraw tokens from `alice`.\nThe second transaction is to `compound`, to start the deposit process. Compound will check that the DAI tokens are supported and will try to withdraw the desired amount of DAI from `alice`.\n\n- If the transfer succeeds, `compound` can increase the local ownership for `alice` by 1000 DAI.\n- If the transfer fails, `compound` doesn't need to do anything in this example, but it could notify `alice` of the unsuccessful transfer.\n\n#### Technical calls\n\n1. `alice` calls `dai::set_allowance({\"escrow_account_id\": \"compound\", \"allowance\": \"1000000000000000000000\"})`.\n1. `alice` calls `compound::deposit({\"token_contract\": \"dai\", \"amount\": \"1000000000000000000000\"})`. During the `deposit` call, `compound` does the following:\n   1. makes an async call `dai::transfer_from({\"owner_id\": \"alice\", \"new_owner_id\": \"compound\", \"amount\": \"1000000000000000000000\"})`.\n   1. attaches a callback `compound::on_transfer({\"owner_id\": \"alice\", \"token_contract\": \"dai\", \"amount\": \"1000000000000000000000\"})`.\n\n### Multi-token swap on DEX\n\nCharlie wants to exchange his wLTC for wBTC on a decentralized exchange (DEX) contract. 
Alex wants to buy wLTC and has 80 wBTC.\nAssumptions\n\n- The wLTC token contract is `wltc`.\n- The wBTC token contract is `wbtc`.\n- The DEX contract is `dex`.\n- Charlie's account is `charlie`.\n- Alex's account is `alex`.\n- The precision on both token contracts is `10^8`.\n- The 9001 wLTC tokens Alex wants is `9001 * 10^8`, or `900100000000` as a number.\n- The 80 wBTC tokens is `80 * 10^8`, or `8000000000` as a number.\n- Charlie has 1000000 wLTC tokens, which is `1000000 * 10^8`, or `100000000000000` as a number.\n- The DEX contract already has an open order from `alex` to sell 80 wBTC tokens for 9001 wLTC.\n- Without a Safes implementation, the DEX has to act as an escrow and hold the funds of both users before it can perform an exchange.\n\n#### High-level explanation\n\nLet's first set up the open order by Alex on the DEX. It's similar to the `Token deposit to a contract` example above.\n\n- Alex sets an allowance on wBTC to DEX.\n- Alex calls deposit on DEX for wBTC.\n- Alex calls DEX to make a new sell order.\n\nThen Charlie comes and decides to fulfill the order by selling his wLTC to Alex on the DEX:\n\n- Charlie sets the allowance on wLTC to DEX.\n- Charlie calls deposit on DEX for wLTC.\n- Charlie calls DEX to take the order from Alex.\n\nWhen called, DEX makes 2 async transfer calls to exchange the corresponding tokens:\n\n- DEX calls wLTC to transfer tokens from DEX to Alex.\n- DEX calls wBTC to transfer tokens from DEX to Charlie.\n\n#### Technical calls\n\n1. `alex` calls `wbtc::set_allowance({\"escrow_account_id\": \"dex\", \"allowance\": \"8000000000\"})`.\n1. `alex` calls `dex::deposit({\"token\": \"wbtc\", \"amount\": \"8000000000\"})`.\n   1. `dex` calls `wbtc::transfer_from({\"owner_id\": \"alex\", \"new_owner_id\": \"dex\", \"amount\": \"8000000000\"})`\n1. `alex` calls `dex::trade({\"have\": \"wbtc\", \"have_amount\": \"8000000000\", \"want\": \"wltc\", \"want_amount\": \"900100000000\"})`.\n1. 
`charlie` calls `wltc::set_allowance({\"escrow_account_id\": \"dex\", \"allowance\": \"100000000000000\"})`.\n1. `charlie` calls `dex::deposit({\"token\": \"wltc\", \"amount\": \"100000000000000\"})`.\n   1. `dex` calls `wltc::transfer_from({\"owner_id\": \"charlie\", \"new_owner_id\": \"dex\", \"amount\": \"100000000000000\"})`\n1. `charlie` calls `dex::trade({\"have\": \"wltc\", \"have_amount\": \"900100000000\", \"want\": \"wbtc\", \"want_amount\": \"8000000000\"})`.\n   - `dex` calls `wbtc::transfer({\"new_owner_id\": \"charlie\", \"amount\": \"8000000000\"})`\n   - `dex` calls `wltc::transfer({\"new_owner_id\": \"alex\", \"amount\": \"900100000000\"})`\n\n## Reference Implementation\n\nThe full implementation in Rust can be found here: https://github.com/near/near-sdk-rs/blob/master/examples/fungible-token/ft/src/lib.rs\n\nNOTES:\n\n- All amounts, balances, and allowances are limited by `U128` (max value `2**128 - 1`).\n- The token standard uses JSON for serialization of arguments and results.\n- Amounts in arguments and results are serialized as Base-10 strings, e.g. `\"100\"`. 
This is done to avoid\n  the JSON limitation of a max integer value of `2**53`.\n\nInterface:\n\n```rust\n/******************/\n/* CHANGE METHODS */\n/******************/\n\n/// Sets the `allowance` for `escrow_account_id` on the account of the caller of this contract\n/// (`predecessor_id`) who is the balance owner.\npub fn set_allowance(&mut self, escrow_account_id: AccountId, allowance: U128);\n\n/// Transfers the `amount` of tokens from `owner_id` to `new_owner_id`.\n/// Requirements:\n/// * `amount` should be a positive integer.\n/// * `owner_id` should have a balance on the account greater than or equal to the transfer `amount`.\n/// * If this function is called by an escrow account (`owner_id != predecessor_account_id`),\n///   then the allowance of the caller of the function (`predecessor_account_id`) on\n///   the account of `owner_id` should be greater than or equal to the transfer `amount`.\npub fn transfer_from(&mut self, owner_id: AccountId, new_owner_id: AccountId, amount: U128);\n\n\n/// Transfers `amount` of tokens from the caller of the contract (`predecessor_id`) to\n/// `new_owner_id`.\n/// Acts the same way as `transfer_from` with `owner_id` equal to the caller of the contract\n/// (`predecessor_id`).\npub fn transfer(&mut self, new_owner_id: AccountId, amount: U128);\n\n/****************/\n/* VIEW METHODS */\n/****************/\n\n/// Returns the total supply of tokens.\npub fn get_total_supply(&self) -> U128;\n\n/// Returns the balance of the `owner_id` account.\npub fn get_balance(&self, owner_id: AccountId) -> U128;\n\n/// Returns the current allowance of `escrow_account_id` for the account of `owner_id`.\n///\n/// NOTE: Other contracts should not rely on this information, because by the moment a contract\n/// receives this information, the allowance may already have been changed by the owner.\n/// So this method should only be used on the front-end to see the current allowance.\npub fn get_allowance(&self, owner_id: AccountId, escrow_account_id: AccountId) -> 
U128;\n```\n\n## Drawbacks\n\n- The current interface doesn't include minting, precision (decimals), or naming. These should be added as extensions, e.g. a Precision extension.\n- It's not possible to exchange tokens without transferring them to escrow first.\n- It's not possible to transfer tokens to a contract with a single transaction without setting the allowance first.\n  This could be made possible by introducing a `transfer_with` function that transfers tokens and calls the escrow contract. It would need to handle the result of the execution, and contracts would have to be aware of this API.\n\n## Future possibilities\n\n- Support for multiple token types\n- Minting and burning\n- Precision, naming, and a short token name.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0141.md",
    "content": "---\nNEP: 141\nTitle: Fungible Token Standard\nAuthor: Evgeny Kuzyakov <ek@near.org>, Robert Zaremba <@robert-zaremba>, @oysterpack\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/issues/141\nType: Standards Track\nCategory: Contract\nCreated: 03-Mar-2022\nReplaces: 21\nRequires: 297\n---\n\n## Summary\n\nA standard interface for fungible tokens that allows for a normal transfer as well as a transfer and method call in a single transaction. The [storage standard][Storage Management] addresses the needs (and security) of storage staking.\nThe [fungible token metadata standard][FT Metadata] provides the fields needed for ergonomics across dApps and marketplaces.\n\n## Motivation\n\nNEAR Protocol uses an asynchronous, sharded runtime. This means the following:\n\n- Storage for different contracts and accounts can be located on different shards.\n- Two contracts can be executed at the same time in different shards.\n\nWhile this increases the transaction throughput linearly with the number of shards, it also creates some challenges for cross-contract development. For example, if one contract wants to query some information from the state of another contract (e.g. the current balance), by the time the first contract receives the balance, the real balance can change. In such an async system, a contract can't rely on the state of another contract and assume it's not going to change.\n\nInstead, the contract can rely on a temporary partial lock of the state with a callback to act or unlock, but this requires careful engineering to avoid deadlocks. In this standard we're trying to avoid enforcing locks. A typical approach to this problem is to include an escrow system with allowances. This approach was initially developed for [NEP-21](https://github.com/near/NEPs/pull/21), which is similar to the Ethereum ERC-20 standard. There are a few issues with using an escrow as the only avenue to pay for a service with a fungible token. 
This frequently requires more than one transaction for common scenarios where fungible tokens are given as payment with the expectation that a method will subsequently be called.\n\nFor example, an oracle contract might be paid in fungible tokens. A client contract that wishes to use the oracle must either increase the escrow allowance before each request to the oracle contract, or allocate a large allowance that covers multiple calls. Both have drawbacks, and ultimately it would be ideal to be able to send fungible tokens and call a method in a single transaction. This concern is addressed in the `ft_transfer_call` method. The power of this comes from the receiver contract working in concert with the fungible token contract in a secure way. That is, if the receiver contract abides by the standard, a single transaction may transfer and call a method.\n\nNote: there is no reason why an escrow system cannot be included in a fungible token's implementation, but it is simply not necessary in the core standard. Escrow logic should be moved to a separate contract to handle that functionality. One reason for this is that the [Rainbow Bridge](https://near.org/blog/eth-near-rainbow-bridge/) will be transferring fungible tokens from Ethereum to NEAR, where the token locker (a factory) will be using the fungible token core standard.\n\nPrior art:\n\n- [ERC-20 standard](https://eips.ethereum.org/EIPS/eip-20)\n- NEP#4 NEAR NFT standard: [near/neps#4](https://github.com/near/neps/pull/4)\n\nLearn about NEP-141:\n\n- [Figment Learning Pathway](https://web.archive.org/web/20220621055335/https://learn.figment.io/tutorials/stake-fungible-token)\n\n## Specification\n\n### Guide-level explanation\n\nWe should be able to do the following:\n\n- Initialize contract once. 
The given total supply will be owned by the given account ID.\n- Get the total supply.\n- Transfer tokens to a new user.\n- Transfer tokens from one user to another.\n- Transfer tokens to a contract, have the receiver contract call a method and \"return\" any fungible tokens not used.\n- Remove state for the key/value pair corresponding with a user's account, withdrawing a nominal balance of Ⓝ that was used for storage.\n\nThere are a few concepts in the scenarios above:\n\n- **Total supply**: the total number of tokens in circulation.\n- **Balance owner**: an account ID that owns some amount of tokens.\n- **Balance**: an amount of tokens.\n- **Transfer**: an action that moves some amount from one account to another account, either an externally owned account or a contract account.\n- **Transfer and call**: an action that moves some amount from one account to a contract account where the receiver calls a method.\n- **Storage amount**: the amount of storage used for an account to be \"registered\" in the fungible token. This amount is denominated in Ⓝ, not bytes, and represents the [storage staked](https://docs.near.org/docs/concepts/storage-staking).\n\nNote that precision (the number of decimal places supported by a given token) is not part of this core standard, since it's not required to perform actions. The minimum value is always 1 token. 
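The indivisible-unit arithmetic used throughout the scenarios below (a human-readable amount multiplied by `10^decimals`, then serialized as a Base-10 string because amounts may exceed JSON's safe integer range) can be sketched off-chain as follows. This is purely illustrative and not part of the standard; the helper name `to_indivisible` is hypothetical:

```python
# Illustrative helper (hypothetical name, not part of NEP-141):
# convert a human-readable token amount into the Base-10 string of
# indivisible units that fungible token methods accept as `amount`.
def to_indivisible(amount: int, decimals: int) -> str:
    units = amount * 10 ** decimals
    # The standard limits amounts to U128.
    assert 0 <= units < 2 ** 128, "amount does not fit in U128"
    return str(units)

# 5 wBTC with 8 decimals -> "500000000", matching the simple-transfer example.
```

For example, `to_indivisible(1000, 18)` yields the `"1000000000000000000000"` string used in the DAI deposit scenario.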
See the [Fungible Token Metadata Standard][FT Metadata] to learn how to support precision/decimals in a standardized way.\n\nGiven that multiple users will use a Fungible Token contract, and their activity will result in an increased [storage staking](https://docs.near.org/docs/concepts/storage-staking) burden for the contract's account, this standard is designed to interoperate nicely with [the Account Storage standard][Storage Management] for storage deposits and refunds.\n\n### Example scenarios\n\n#### Simple transfer\n\nAlice wants to send 5 wBTC tokens to Bob.\nAssumptions\n\n- The wBTC token contract is `wbtc`.\n- Alice's account is `alice`.\n- Bob's account is `bob`.\n- The precision (\"decimals\" in the metadata standard) on the wBTC contract is `10^8`.\n- The 5 tokens is `5 * 10^8`, or `500000000` as a number.\n\n##### High-level explanation\n\nAlice needs to issue one transaction to the wBTC contract to transfer 5 tokens (multiplied by precision) to Bob.\n\n##### Technical calls\n\n1. `alice` calls `wbtc::ft_transfer({\"receiver_id\": \"bob\", \"amount\": \"500000000\"})`.\n\n#### Token deposit to a contract\n\nAlice wants to deposit 1000 DAI tokens to a compound interest contract to earn extra tokens.\n\n##### Assumptions\n\n- The DAI token contract is `dai`.\n- Alice's account is `alice`.\n- The compound interest contract is `compound`.\n- The precision (\"decimals\" in the metadata standard) on the DAI contract is `10^18`.\n- The 1000 tokens is `1000 * 10^18`, or `1000000000000000000000` as a number.\n- The compound contract can work with multiple token types.\n\n<details>\n<summary>For this example, you may expand this section to see how a previous fungible token standard using escrows would deal with the scenario.</summary>\n\n##### High-level explanation (NEP-21 standard)\n\nAlice needs to issue 2 transactions. 
The first one is to `dai`, to set an allowance for `compound` to be able to withdraw tokens from `alice`.\nThe second transaction is to `compound`, to start the deposit process. Compound will check that the DAI tokens are supported and will try to withdraw the desired amount of DAI from `alice`.\n\n- If the transfer succeeds, `compound` can increase the local ownership for `alice` by 1000 DAI.\n- If the transfer fails, `compound` doesn't need to do anything in this example, but it could notify `alice` of the unsuccessful transfer.\n\n##### Technical calls (NEP-21 standard)\n\n1. `alice` calls `dai::set_allowance({\"escrow_account_id\": \"compound\", \"allowance\": \"1000000000000000000000\"})`.\n2. `alice` calls `compound::deposit({\"token_contract\": \"dai\", \"amount\": \"1000000000000000000000\"})`. During the `deposit` call, `compound` does the following:\n   1. makes an async call `dai::transfer_from({\"owner_id\": \"alice\", \"new_owner_id\": \"compound\", \"amount\": \"1000000000000000000000\"})`.\n   2. attaches a callback `compound::on_transfer({\"owner_id\": \"alice\", \"token_contract\": \"dai\", \"amount\": \"1000000000000000000000\"})`.\n\n</details>\n\n##### High-level explanation\n\nAlice needs to issue 1 transaction, as opposed to 2 with a typical escrow workflow.\n\n##### Technical calls\n\n1. `alice` calls `dai::ft_transfer_call({\"receiver_id\": \"compound\", \"amount\": \"1000000000000000000000\", \"msg\": \"invest\"})`. During the `ft_transfer_call` call, `dai` does the following:\n   1. makes an async call `compound::ft_on_transfer({\"sender_id\": \"alice\", \"amount\": \"1000000000000000000000\", \"msg\": \"invest\"})`.\n   2. attaches a callback `dai::ft_resolve_transfer({\"sender_id\": \"alice\", \"receiver_id\": \"compound\", \"amount\": \"1000000000000000000000\"})`.\n   3. `compound` finishes investing, using all attached fungible tokens `compound::invest({…})`, then returns the value of the tokens that weren't used or needed. 
In this case, Alice asked for the tokens to be invested, so it will return 0. (In some cases a method may not need to use all the fungible tokens, and would return the remainder.)\n   4. the `dai::ft_resolve_transfer` function receives the success/failure of the promise. If successful, it will contain the unused tokens. Then the `dai` contract uses simple arithmetic (not needed in this case) and updates the balance for Alice.\n\n#### Swapping one token for another via an Automated Market Maker (AMM) like Uniswap\n\nAlice wants to swap 5 wrapped NEAR (wNEAR) for BNNA tokens at the current market rate, with less than 2% slippage.\n\n##### Assumptions\n\n- The wNEAR token contract is `wnear`.\n- Alice's account is `alice`.\n- The AMM's contract is `amm`.\n- BNNA's contract is `bnna`.\n- The precision (\"decimals\" in the metadata standard) on the wNEAR contract is `10^24`.\n- The 5 tokens is `5 * 10^24`, or `5000000000000000000000000` as a number.\n\n##### High-level explanation\n\nAlice needs to issue one transaction to the wNEAR contract to transfer 5 tokens (multiplied by precision) to `amm`, specifying her desired action (swap), her destination token (BNNA) & minimum slippage (<2%) in `msg`.\n\nAlice will probably make this call via a UI that knows how to construct `msg` in a way the `amm` contract will understand. However, it's possible that the `amm` contract itself may provide view functions which take desired action, destination token, & slippage as input and return data ready to pass to `msg` for `ft_transfer_call`. For the sake of this example, let's say `amm` implements a view function called `ft_data_to_msg`.\n\nAlice needs to attach one yoctoNEAR. This will result in her seeing a confirmation page in her preferred NEAR wallet. NEAR wallet implementations will (eventually) attempt to provide useful information in this confirmation page, so receiver contracts should follow a strong convention in how they format `msg`. 
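As an illustration only (the standard deliberately leaves `msg` free-form, and no convention is normative), the hypothetical `"swap:bnna,2"` payload from this example could be decoded by the receiver contract along these lines:

```python
# Illustrative sketch only: decode the hypothetical "action:token,slippage"
# msg format used in this AMM example. Real receiver contracts define
# their own msg conventions; nothing here is normative.
def parse_swap_msg(msg: str) -> dict:
    action, rest = msg.split(":", 1)
    destination_token, min_slip = rest.split(",", 1)
    return {
        "action": action,
        "destination_token": destination_token,
        "min_slip": int(min_slip),
    }
```

Here `parse_swap_msg("swap:bnna,2")` recovers the action, destination token, and slippage bound that `ft_data_to_msg` packed into the string.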
We will update this documentation with a recommendation, as community consensus emerges.\n\nAltogether then, Alice may take two steps, though the first may be a background detail of the app she uses.\n\n##### Technical calls\n\n1. View `amm::ft_data_to_msg({ action: \"swap\", destination_token: \"bnna\", min_slip: 2 })`. Using [NEAR CLI](https://docs.near.org/docs/tools/near-cli):\n\n   ```shell\n   near view amm ft_data_to_msg \\\n     '{\"action\": \"swap\", \"destination_token\": \"bnna\", \"min_slip\": 2}'\n   ```\n\n   Then Alice (or the app she uses) will hold onto the result and use it in the next step. Let's say this result is `\"swap:bnna,2\"`.\n\n2. Call `wnear::ft_transfer_call`. Using NEAR CLI:\n\n   ```shell\n   near call wnear ft_transfer_call \\\n     '{\"receiver_id\": \"amm\", \"amount\": \"5000000000000000000000000\", \"msg\": \"swap:bnna,2\"}' \\\n     --accountId alice --depositYocto 1\n   ```\n\n   During the `ft_transfer_call` call, `wnear` does the following:\n\n   1. Decreases the balance of `alice` and increases the balance of `amm` by 5000000000000000000000000.\n   2. Makes an async call `amm::ft_on_transfer({\"sender_id\": \"alice\", \"amount\": \"5000000000000000000000000\", \"msg\": \"swap:bnna,2\"})`.\n   3. Attaches a callback `wnear::ft_resolve_transfer({\"sender_id\": \"alice\", \"receiver_id\": \"amm\", \"amount\": \"5000000000000000000000000\"})`.\n   4. `amm` finishes the swap, either successfully swapping all 5 wNEAR within the desired slippage, or failing.\n   5. The `wnear::ft_resolve_transfer` function receives the success/failure of the promise. 
Assuming `amm` implements all-or-nothing transfers (as in, it will not transfer less than the specified amount in order to fulfill the slippage requirements), `wnear` will do nothing at this point if the swap succeeded, or it will decrease the balance of `amm` and increase the balance of `alice` by 5000000000000000000000000.\n\n### Reference-level explanation\n\nNOTES:\n\n- All amounts, balances, and allowances are limited by `U128` (max value `2**128 - 1`).\n- The token standard uses JSON for serialization of arguments and results.\n- Amounts in arguments and results are serialized as Base-10 strings, e.g. `\"100\"`. This is done to avoid the JSON limitation of a max integer value of `2**53`.\n- The contract must track the change in storage when adding to and removing from collections. This is not included in this core fungible token standard but instead in the [Storage Standard][Storage Management].\n- To prevent the deployed contract from being modified or deleted, it should not have any access keys on its account.\n\n#### Interface\n\n##### ft_transfer\n\nSimple transfer to a receiver.\n\nRequirements:\n\n- Caller of the method must attach a deposit of 1 yoctoⓃ for security purposes\n- Caller must have a balance greater than or equal to the `amount` being requested\n\nArguments:\n\n- `receiver_id`: the valid NEAR account receiving the fungible tokens.\n- `amount`: the number of tokens to transfer, wrapped in quotes and treated\n  like a string, although the number will be stored as an unsigned integer\n  with 128 bits.\n- `memo` (optional): for use cases that may benefit from indexing or\n  providing information for a transfer.\n\n```ts\nfunction ft_transfer(\n  receiver_id: string,\n  amount: string,\n  memo: string | null\n): void;\n```\n\n##### ft_transfer_call\n\nTransfer tokens and call a method on a receiver contract. 
A successful\nworkflow will end in a success execution outcome to the callback on the same\ncontract at the method `ft_resolve_transfer`.\nYou can think of this as being similar to attaching native NEAR tokens to a\nfunction call. It allows you to attach any Fungible Token in a call to a\nreceiver contract.\n\nRequirements:\n\n- Caller of the method must attach a deposit of 1 yoctoⓃ for security\n  purposes\n- Caller must have a balance greater than or equal to the `amount` being requested\n- The receiving contract must implement `ft_on_transfer` according to the\n  standard. If it does not, the FT contract's `ft_resolve_transfer` MUST deal\n  with the resulting failed cross-contract call and roll back the transfer.\n- Contract MUST implement the behavior described in `ft_resolve_transfer`\n\nArguments:\n\n- `receiver_id`: the valid NEAR account receiving the fungible tokens.\n- `amount`: the number of tokens to transfer, wrapped in quotes and treated\n  like a string, although the number will be stored as an unsigned integer\n  with 128 bits.\n- `memo` (optional): for use cases that may benefit from indexing or\n  providing information for a transfer.\n- `msg`: specifies information needed by the receiving contract in\n  order to properly handle the transfer. Can indicate both a function to call and the parameters to pass to that function.\n\n```ts\nfunction ft_transfer_call(\n  receiver_id: string,\n  amount: string,\n  memo: string | null,\n  msg: string\n): Promise;\n```\n\n##### ft_on_transfer\n\nThis function is implemented on the receiving contract.\nAs mentioned, the `msg` argument contains information necessary for the receiving contract to know how to process the request. This may include method names and/or arguments.\nReturns a value, or a promise which resolves with a value. The value is the\nnumber of unused tokens in string form. 
For instance, if `amount` is 10 but only 9 are\nneeded, it will return \"1\".\n\n```ts\nfunction ft_on_transfer(sender_id: string, amount: string, msg: string): string;\n```\n\n#### View Methods\n\n##### ft_total_supply\n\nReturns the total supply of fungible tokens as a string representing the value as an unsigned 128-bit integer.\n\n```ts\nfunction ft_total_supply(): string;\n```\n\n##### ft_balance_of\n\nReturns the balance of an account in string form representing a value as an unsigned 128-bit integer. If the account doesn't exist, it must return `\"0\"`.\n\n```ts\nfunction ft_balance_of(account_id: string): string;\n```\n\n##### ft_resolve_transfer\n\nThe following behavior is required, but contract authors may name this function something other than the conventional `ft_resolve_transfer` used here.\n\nFinalize an `ft_transfer_call` chain of cross-contract calls.\n\nThe `ft_transfer_call` process:\n\n1. Sender calls `ft_transfer_call` on the FT contract\n2. FT contract transfers `amount` tokens from sender to receiver\n3. FT contract calls `ft_on_transfer` on the receiver contract\n4. [receiver contract may make other cross-contract calls]\n5. FT contract resolves the promise chain with `ft_resolve_transfer`, and may refund the sender some or all of the original `amount`\n\nRequirements:\n\n- Contract MUST forbid calls to this function by any account except self\n- If the promise chain failed, contract MUST revert the token transfer\n- If the promise chain resolves with a non-zero amount given as a string,\n  contract MUST return this amount of tokens to `sender_id`\n\nArguments:\n\n- `sender_id`: the sender of `ft_transfer_call`\n- `receiver_id`: the `receiver_id` argument given to `ft_transfer_call`\n- `amount`: the `amount` argument given to `ft_transfer_call`\n\nReturns a string representation of an unsigned 128-bit\ninteger indicating how many total tokens were spent by `sender_id`. 
Example: if sender\ncalls `ft_transfer_call({ \"amount\": \"100\" })`, but `receiver_id` only uses\n80, `ft_on_transfer` will resolve with `\"20\"`, and `ft_resolve_transfer`\nwill return `\"80\"`.\n\n```ts\nfunction ft_resolve_transfer(\n  sender_id: string,\n  receiver_id: string,\n  amount: string\n): string;\n```\n\n### Events\n\nStandard interfaces for FT contract actions that extend [NEP-297](nep-0297.md).\n\nNEAR and third-party applications need to track `mint`, `transfer`, and `burn` events for all FT-driven apps consistently.\nThis extension addresses that.\n\nKeep in mind that applications, including NEAR Wallet, could require implementing additional methods, such as [`ft_metadata`][FT Metadata], to display the FTs correctly.\n\n### Event Interface\n\nFungible Token Events MUST have `standard` set to `\"nep141\"`, standard version set to `\"1.0.0\"`, an `event` value of one of `ft_mint`, `ft_burn`, or `ft_transfer`, and `data` of one of the following relevant types: `FtMintLog[] | FtTransferLog[] | FtBurnLog[]`:\n\n```ts\ninterface FtEventLogData {\n  standard: \"nep141\";\n  version: \"1.0.0\";\n  event: \"ft_mint\" | \"ft_burn\" | \"ft_transfer\";\n  data: FtMintLog[] | FtTransferLog[] | FtBurnLog[];\n}\n```\n\n```ts\n// An event log to capture tokens minting\n// Arguments\n// * `owner_id`: \"account.near\"\n// * `amount`: the number of tokens to mint, wrapped in quotes and treated\n//   like a string, although the number will be stored as an unsigned integer\n//   with 128 bits.\n// * `memo`: optional message\ninterface FtMintLog {\n  owner_id: string;\n  amount: string;\n  memo?: string;\n}\n\n// An event log to capture tokens burning\n// Arguments\n// * `owner_id`: owner of tokens to burn\n// * `amount`: the number of tokens to burn, wrapped in quotes and treated\n//   like a string, although the number will be stored as an unsigned integer\n//   with 128 bits.\n// * `memo`: optional message\ninterface FtBurnLog {\n  owner_id: string;\n  amount: 
string;\n  memo?: string;\n}\n\n// An event log to capture tokens transfer\n// Arguments\n// * `old_owner_id`: \"owner.near\"\n// * `new_owner_id`: \"receiver.near\"\n// * `amount`: the number of tokens to transfer, wrapped in quotes and treated\n//   like a string, although the number will be stored as an unsigned integer\n//   with 128 bits.\n// * `memo`: optional message\ninterface FtTransferLog {\n  old_owner_id: string;\n  new_owner_id: string;\n  amount: string;\n  memo?: string;\n}\n```\n\n### Event Examples\n\nBatch mint:\n\n```js\nEVENT_JSON:{\n    \"standard\": \"nep141\",\n    \"version\": \"1.0.0\",\n    \"event\": \"ft_mint\",\n    \"data\": [\n        {\"owner_id\": \"foundation.near\", \"amount\": \"500\"}\n    ]\n}\n```\n\nBatch transfer:\n\n```js\nEVENT_JSON:{\n    \"standard\": \"nep141\",\n    \"version\": \"1.0.0\",\n    \"event\": \"ft_transfer\",\n    \"data\": [\n        {\"old_owner_id\": \"from.near\", \"new_owner_id\": \"to.near\", \"amount\": \"42\", \"memo\": \"hi hello bonjour\"},\n        {\"old_owner_id\": \"user1.near\", \"new_owner_id\": \"user2.near\", \"amount\": \"7500\"}\n    ]\n}\n```\n\nBatch burn:\n\n```js\nEVENT_JSON:{\n    \"standard\": \"nep141\",\n    \"version\": \"1.0.0\",\n    \"event\": \"ft_burn\",\n    \"data\": [\n        {\"owner_id\": \"foundation.near\", \"amount\": \"100\"}\n    ]\n}\n```\n\n### Further Event Methods\n\nNote that the example events above cover two different kinds of events:\n\n1. Events that are not specified in the FT Standard (`ft_mint`, `ft_burn`)\n2. An event that is covered in the [FT Core Standard][FT Core]. 
(`ft_transfer`)\n\nPlease feel free to open pull requests for extending the events standard detailed here as needs arise.\n\n## Reference Implementation\n\nThe `near-contract-standards` cargo package of the [Near Rust SDK](https://github.com/near/near-sdk-rs) contains the following implementations of NEP-141:\n\n- [Minimum Viable Interface](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/fungible_token/core.rs)\n- The [Core Fungible Token Implementation](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/fungible_token/core_impl.rs)\n- [Optional Fungible Token Events](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/fungible_token/events.rs)\n- [Core Fungible Token tests](https://github.com/near/near-sdk-rs/blob/master/examples/fungible-token/tests/workspaces.rs)\n\n## Drawbacks\n\n- The `msg` argument to `ft_transfer_call` is freeform, which may necessitate conventions.\n- The paradigm of an escrow system may be familiar to developers and end users, and education on properly handling this in another contract may be needed.\n\n## Future possibilities\n\n- Support for multiple token types\n- Minting and burning\n\n## History\n\nSee also the discussions:\n\n- [Fungible token core](https://github.com/near/NEPs/discussions/146#discussioncomment-298943)\n- [Fungible token metadata](https://github.com/near/NEPs/discussions/148)\n- [Storage standard](https://github.com/near/NEPs/discussions/145)\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n\n[Storage Management]: https://github.com/near/NEPs/blob/master/neps/nep-0145.md\n[FT Metadata]: https://github.com/near/NEPs/blob/master/neps/nep-0148.md\n[FT Core]: https://github.com/near/NEPs/blob/master/neps/nep-0141.md\n"
  },
  {
    "path": "neps/nep-0145.md",
    "content": "---\nNEP: 145\nTitle: Storage Management\nAuthor: Evgeny Kuzyakov <ek@near.org>, @oysterpack\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/discussions/145\nType: Standards Track\nCategory: Contract\nCreated: 03-Mar-2022\n---\n\n## Summary\n\nNEAR uses [storage staking] which means that a contract account must have sufficient balance to cover all storage added over time. This standard provides a uniform way to pass storage costs onto users.\n\n## Motivation\n\nIt allows accounts and contracts to:\n\n1. Check an account's storage balance.\n2. Determine the minimum storage needed to add account information such that the account can interact as expected with a contract.\n3. Add storage balance for an account; either one's own or another.\n4. Withdraw some storage deposit by removing associated account data from the contract and then making a call to remove unused deposit.\n5. Unregister an account to recover full storage balance.\n\n[storage staking]: https://docs.near.org/concepts/storage/storage-staking\n\n## Rationale and alternatives\n\nPrior art:\n\n- A previous fungible token standard ([NEP-21](https://github.com/near/NEPs/pull/21)) highlighting how [storage was paid](https://github.com/near/near-sdk-rs/blob/1d3535bd131b68f97a216e643ad1cba19e16dddf/examples/fungible-token/src/lib.rs#L92-L113) for when increasing the allowance of an escrow system.\n\n### Example scenarios\n\nTo show the flexibility and power of this standard, let's walk through two example contracts.\n\n1. A simple Fungible Token contract which uses Storage Management in \"registration only\" mode, where the contract only adds storage on a user's first interaction.\n   1. Account registers self\n   2. Account registers another\n   3. Unnecessary attempt to re-register\n   4. Force-closure of account\n   5. Graceful closure of account\n2. A social media contract, where users can add more data to the contract over time.\n   1. 
Account registers self with more than minimum required\n   2. Unnecessary attempt to re-register using `registration_only` param\n   3. Attempting to take action which exceeds paid-for storage; increasing storage deposit\n   4. Removing storage and reclaiming excess deposit\n\n### Example 1: Fungible Token Contract\n\nImagine a [fungible token][FT Core] contract deployed at `ft`. Let's say this contract saves all user balances to a Map data structure internally, and adding a key for a new user requires 0.00235Ⓝ. This contract therefore uses the Storage Management standard to pass this cost onto users, so that a new user must effectively pay a registration fee of 0.00235Ⓝ, or 2350000000000000000000 yoctoⓃ ([yocto](https://www.metricconversion.us/prefixes.htm) = 10<sup>-24</sup>), to interact with this contract.\n\nFor this contract, `storage_balance_bounds` will be:\n\n```json\n{\n  \"min\": \"2350000000000000000000\",\n  \"max\": \"2350000000000000000000\"\n}\n```\n\nThis means a user must deposit 0.00235Ⓝ to interact with this contract, and that attempts to deposit more than this will have no effect (attached deposits will be immediately refunded).\n\nLet's follow two users, Alice with account `alice` and Bob with account `bob`, as they interact with `ft` through the following scenarios:\n\n1. Alice registers herself\n2. Alice registers Bob\n3. Alice tries to register Bob again\n4. Alice force-closes her account\n5. Bob gracefully closes his account\n\n#### 1. Account pays own registration fee\n\n##### High-level explanation\n\n1. Alice checks if she is registered with the `ft` contract.\n2. Alice determines the needed registration fee to register with the `ft` contract.\n3. Alice issues a transaction to deposit Ⓝ for her account.\n\n##### Technical calls\n\n1.  Alice queries a view-only method to determine if she already has storage on this contract with `ft::storage_balance_of({\"account_id\": \"alice\"})`. 
Using [NEAR CLI](https://docs.near.org/tools/near-cli) to make this view call, the command would be:\n\n    ```shell\n    near view ft storage_balance_of '{\"account_id\": \"alice\"}'\n    ```\n\n    The response:\n\n    ```shell\n    null\n    ```\n\n2.  Alice uses [NEAR CLI](https://docs.near.org/tools/near-cli) to make a view call.\n\n    ```shell\n    near view ft storage_balance_bounds\n    ```\n\n    As mentioned above, this will show that `min` and `max` are both 2350000000000000000000 yoctoⓃ.\n\n3.  Alice converts this yoctoⓃ amount to 0.00235 Ⓝ, then calls `ft::storage_deposit` with this attached deposit. Using NEAR CLI:\n\n    ```shell\n    near call ft storage_deposit '' --accountId alice --amount 0.00235\n    ```\n\n    The result:\n\n    ```json\n    {\n      \"total\": \"2350000000000000000000\",\n      \"available\": \"0\"\n    }\n    ```\n\n#### 2. Account pays for another account's storage\n\nAlice wishes to eventually send `ft` tokens to Bob, who is not registered. She decides to pay for Bob's storage.\n\n##### High-level explanation\n\nAlice issues a transaction to deposit Ⓝ for Bob's account.\n\n##### Technical calls\n\nAlice calls `ft::storage_deposit({\"account_id\": \"bob\"})` with an attached deposit of 0.00235 Ⓝ. Using NEAR CLI the command would be:\n\n```shell\nnear call ft storage_deposit '{\"account_id\": \"bob\"}' --accountId alice --amount 0.00235\n```\n\nThe result:\n\n```json\n{\n  \"total\": \"2350000000000000000000\",\n  \"available\": \"0\"\n}\n```\n\n#### 3. 
Unnecessary attempt to register already-registered account\n\nAlice accidentally makes the same call again, and even misses a leading zero in her deposit amount.\n\n```shell\nnear call ft storage_deposit '{\"account_id\": \"bob\"}' --accountId alice --amount 0.0235\n```\n\nThe result:\n\n```json\n{\n  \"total\": \"2350000000000000000000\",\n  \"available\": \"0\"\n}\n```\n\nAdditionally, Alice will be refunded the 0.0235Ⓝ she attached, because `storage_balance_bounds.max` specifies that Bob's account cannot have a total balance larger than 0.00235Ⓝ.\n\n#### 4. Account force-closes registration\n\nAlice decides she doesn't care about her `ft` tokens and wants to forcibly recover her registration fee. If the contract permits this operation, her remaining `ft` tokens will either be burned or transferred to another account, which she may or may not have the ability to specify prior to force-closing.\n\n##### High-level explanation\n\nAlice issues a transaction to unregister her account and recover the Ⓝ from her registration fee. She must attach 1 yoctoⓃ, expressed in Ⓝ as `.000000000000000000000001`.\n\n##### Technical calls\n\nAlice calls `ft::storage_unregister({\"force\": true})` with a 1 yoctoⓃ deposit. Using NEAR CLI the command would be:\n\n```shell\nnear call ft storage_unregister '{ \"force\": true }' --accountId alice --depositYocto 1\n```\n\nThe result:\n\n```shell\ntrue\n```\n\n#### 5. Account gracefully closes registration\n\nBob wants to close his account, but has a non-zero balance of `ft` tokens.\n\n##### High-level explanation\n\n1. Bob tries to gracefully close his account, calling `storage_unregister()` without specifying `force=true`. This results in an intelligible error that tells him why his account can't yet be unregistered gracefully.\n2. Bob sends all of his `ft` tokens to a friend.\n3. Bob retries gracefully closing his account. It works.\n\n##### Technical calls\n\n1.  Bob calls `ft::storage_unregister()` with a 1 yoctoⓃ deposit. 
Using NEAR CLI the command would be:\n\n    ```shell\n    near call ft storage_unregister '' --accountId bob --depositYocto 1\n    ```\n\n    It fails with a message like \"Cannot gracefully close account with positive remaining balance; bob has balance N\"\n\n2.  Bob transfers his tokens to a friend using `ft_transfer` from the [Fungible Token Core][FT Core] standard.\n\n3.  Bob tries the call from Step 1 again. It works.\n\n### Example 2: Social Media Contract\n\nImagine a social media smart contract which passes storage costs onto users for posts and follower data. Let's say this contract is deployed at account `social`. Like the Fungible Token contract example above, the `storage_balance_bounds.min` is 0.00235, because this contract will likewise add a newly-registered user to an internal Map. However, this contract sets no `storage_balance_bounds.max`, since users can add more data to the contract over time and must cover the cost for this storage.\n\nSo for this contract, `storage_balance_bounds` will return:\n\n```json\n{\n  \"min\": \"2350000000000000000000\",\n  \"max\": null\n}\n```\n\nLet's follow a user, Alice with account `alice`, as she interacts with `social` through the following scenarios:\n\n1. Registration\n2. Unnecessary attempt to re-register using `registration_only` param\n3. Attempting to take action which exceeds paid-for storage; increasing storage deposit\n4. Removing storage and reclaiming excess deposit\n\n#### 1. Account registers with `social`\n\n##### High-level explanation\n\nAlice issues a transaction to deposit Ⓝ for her account. 
While the `storage_balance_bounds.min` for this contract is 0.00235Ⓝ, the frontend she uses suggests adding 0.1Ⓝ, so that she can immediately start adding data to the app, rather than _only_ registering.\n\n##### Technical calls\n\nUsing NEAR CLI:\n\n```shell\nnear call social storage_deposit ''  --accountId alice --amount 0.1\n```\n\nThe result:\n\n```json\n{\n  \"total\": \"100000000000000000000000\",\n  \"available\": \"97650000000000000000000\"\n}\n```\n\nHere we see that she has deposited 0.1Ⓝ and that 0.00235 of it has been used to register her account, and is therefore locked by the contract. The rest is available to facilitate interaction with the contract, but could also be withdrawn by Alice by using `storage_withdraw`.\n\n#### 2. Unnecessary attempt to re-register using `registration_only` param\n\n##### High-level explanation\n\nAlice can't remember if she already registered and re-sends the call, using the `registration_only` param to ensure she doesn't attach another 0.1Ⓝ.\n\n##### Technical calls\n\nUsing NEAR CLI:\n\n```shell\nnear call social storage_deposit '{\"registration_only\": true}' --accountId alice --amount 0.1\n```\n\nThe result:\n\n```json\n{\n  \"total\": \"100000000000000000000000\",\n  \"available\": \"97650000000000000000000\"\n}\n```\n\nAdditionally, Alice will be refunded the extra 0.1Ⓝ that she just attached. This makes it easy for other contracts to always attempt to register users while performing batch transactions without worrying about errors or lost deposits.\n\nNote that if Alice had not included `registration_only`, she would have ended up with a `total` of 0.2Ⓝ.\n\n#### 3. Account increases storage deposit\n\nAssumption: `social` has a `post` function which allows creating a new post with free-form text. Alice has used almost all of her available storage balance. 
She attempts to call `post` with a large amount of text, and the transaction aborts because she needs to pay for more storage first.\n\nNote that applications will probably want to avoid this situation in the first place by prompting users to top up storage deposits sufficiently before available balance runs out.\n\n##### High-level explanation\n\n1. Alice issues a transaction, let's say `social.post`, and it fails with an intelligible error message to tell her that she has an insufficient storage balance to cover the cost of the operation\n2. Alice issues a transaction to increase her storage balance\n3. Alice retries the initial transaction and it succeeds\n\n##### Technical calls\n\n1.  This is outside the scope of this spec, but let's say Alice calls `near call social post '{ \"text\": \"very long message\" }'`, and that this fails with a message saying something like \"Insufficient storage deposit for transaction. Please call `storage_deposit` and attach at least 0.1 NEAR, then try again.\"\n\n2.  Alice deposits the proper amount in a transaction by calling `social::storage_deposit` with the attached deposit of '0.1'. Using NEAR CLI:\n\n    ```shell\n    near call social storage_deposit '' --accountId alice --amount 0.1\n    ```\n\n    The result:\n\n    ```json\n    {\n      \"total\": \"200000000000000000000000\",\n      \"available\": \"100100000000000000000000\"\n    }\n    ```\n\n3.  Alice tries the initial `near call social post` call again. It works.\n\n#### 4. Removing storage and reclaiming excess deposit\n\nAssumption: Alice has more deposited than she is using.\n\n##### High-level explanation\n\n1. Alice views her storage balance and sees that she has extra.\n2. Alice withdraws her excess deposit.\n\n##### Technical calls\n\n1.  Alice queries `social::storage_balance_of({ \"account_id\": \"alice\" })`. 
With NEAR CLI:\n\n    ```shell\n    near view social storage_balance_of '{\"account_id\": \"alice\"}'\n    ```\n\n    Response:\n\n    ```json\n    {\n      \"total\": \"200000000000000000000000\",\n      \"available\": \"100100000000000000000000\"\n    }\n    ```\n\n2.  Alice calls `storage_withdraw` with a 1 yoctoⓃ deposit. NEAR CLI command:\n\n    ```shell\n    near call social storage_withdraw '{\"amount\": \"100100000000000000000000\"}' \\\n        --accountId alice --depositYocto 1\n    ```\n\n    Result:\n\n    ```json\n    {\n      \"total\": \"99900000000000000000000\",\n      \"available\": \"0\"\n    }\n    ```\n\n## Specification\n\nNOTES:\n\n- All amounts, balances, and allowances are limited by `U128` (max value 2<sup>128</sup> - 1).\n- This storage standard uses JSON for serialization of arguments and results.\n- Amounts in arguments and results are serialized as Base-10 strings, e.g. `\"100\"`. This is done to avoid the JSON limitation of a max integer value of 2<sup>53</sup>.\n- To prevent the deployed contract from being modified or deleted, it should not have any access keys on its account.\n\n### Interface\n\n```ts\n// The structure that will be returned for the methods:\n// * `storage_deposit`\n// * `storage_withdraw`\n// * `storage_balance_of`\n// The `total` and `available` values are string representations of unsigned\n// 128-bit integers showing the balance of a specific account in yoctoⓃ.\ntype StorageBalance = {\n  total: string;\n  available: string;\n};\n\n// The below structure will be returned for the method `storage_balance_bounds`.\n// Both `min` and `max` are string representations of unsigned 128-bit integers.\n//\n// `min` is the amount of tokens required to start using this contract at all\n// (eg to register with the contract). 
If a new contract user attaches `min`\n// NEAR to a `storage_deposit` call, subsequent calls to `storage_balance_of`\n// for this user must show their `total` equal to `min` and `available=0`.\n//\n// A contract may implement `max` equal to `min` if it only charges for initial\n// registration, and does not adjust per-user storage over time. A contract\n// which implements `max` must refund deposits that would increase a user's\n// storage balance beyond this amount.\ntype StorageBalanceBounds = {\n  min: string;\n  max: string | null;\n};\n\n/******************/\n/* CHANGE METHODS */\n/******************/\n// Payable method that receives an attached deposit of Ⓝ for a given account.\n//\n// If `account_id` is omitted, the deposit MUST go toward the predecessor\n// account. If provided, the deposit MUST go toward that account. If the\n// account is invalid, the contract MUST panic.\n//\n// If `registration_only=true`, the contract MUST refund any deposit above the\n// minimum balance if the account wasn't registered, and MUST refund the full\n// deposit if it was already registered.\n//\n// Any portion of `storage_balance_of.total` + `attached_deposit` in excess of\n// `storage_balance_bounds.max` must be refunded to the predecessor account.\n//\n// Returns the StorageBalance structure showing updated balances.\nfunction storage_deposit(\n  account_id: string | null,\n  registration_only: boolean | null\n): StorageBalance {}\n\n// Withdraw specified amount of available Ⓝ for predecessor account.\n//\n// This method is safe to call. It MUST NOT remove data.\n//\n// `amount` is sent as a string representing an unsigned 128-bit integer. If\n// omitted, contract MUST refund full `available` balance. 
If `amount` exceeds\n// predecessor account's available balance, contract MUST panic.\n//\n// If predecessor account not registered, contract MUST panic.\n//\n// MUST require exactly 1 yoctoNEAR attached balance to prevent restricted\n// function-call access-key call (UX wallet security)\n//\n// Returns the StorageBalance structure showing updated balances.\nfunction storage_withdraw(amount: string | null): StorageBalance {}\n\n// Unregisters the predecessor account and returns the storage NEAR deposit.\n//\n// If the predecessor account is not registered, the function MUST return\n// `false` without panic.\n//\n// If `force=true` the function SHOULD ignore existing account data, such as\n// non-zero balances on an FT contract (that is, it should burn such balances),\n// and close the account. Contract MAY panic if it doesn't support forced\n// unregistration, or if it can't force unregister for the particular situation\n// (example: too much data to delete at once).\n//\n// If `force=false` or `force` is omitted, the contract MUST panic if caller\n// has existing account data, such as a positive registered balance (eg token\n// holdings).\n//\n// MUST require exactly 1 yoctoNEAR attached balance to prevent restricted\n// function-call access-key call (UX wallet security)\n//\n// Returns `true` iff the account was successfully unregistered.\n// Returns `false` iff account was not registered before.\nfunction storage_unregister(force: boolean | null): boolean {}\n\n/****************/\n/* VIEW METHODS */\n/****************/\n// Returns minimum and maximum allowed balance amounts to interact with this\n// contract. See StorageBalanceBounds.\nfunction storage_balance_bounds(): StorageBalanceBounds {}\n\n// Returns the StorageBalance structure of the valid `account_id`\n// provided. 
Must panic if `account_id` is invalid.\n//\n// If `account_id` is not registered, must return `null`.\nfunction storage_balance_of(account_id: string): StorageBalance | null {}\n```\n\n## Reference Implementation\n\nThe `near-contract-standards` cargo package of the [Near Rust SDK](https://github.com/near/near-sdk-rs) contains the following implementations of NEP-145:\n\n- [Minimum Viable Interface](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/storage_management/mod.rs#L20)\n- [Implementation](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/storage_management/mod.rs)\n\n## Drawbacks\n\n- The idea may confuse contract developers at first until they understand how a system with storage staking works.\n- Some folks in the community would rather see the storage deposit only done for the sender. That is, no one else should be able to add storage for another user. This stance wasn't adopted in this standard, but others may have similar concerns in the future.\n\n## Future possibilities\n\n- Ideally, contracts will update available balance for all accounts every time the NEAR blockchain's configured storage-cost-per-byte is reduced. That they _must_ do so is not enforced by this current standard.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n\n[FT Core]: https://github.com/near/NEPs/blob/master/neps/nep-0141.md\n"
  },
  {
    "path": "neps/nep-0148.md",
    "content": "---\nNEP: 148\nTitle: Fungible Token Metadata\nAuthor: Robert Zaremba <robert-zaremba>, Evgeny Kuzyakov <ek@near.org>, @oysterpack\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/discussions/148\nType: Standards Track\nCategory: Contract\nCreated: 03-Mar-2022\nRequires: 141\n---\n\n## Summary\n\nAn interface for a fungible token's metadata. The goal is to keep the metadata future-proof as well as lightweight. This will be important to dApps needing additional information about an FT's properties, and broadly compatible with other tokens standards such that the [NEAR Rainbow Bridge](https://near.org/blog/eth-near-rainbow-bridge/) can move tokens between chains.\n\n## Motivation\n\nCustom fungible tokens play a major role in decentralized applications today. FTs can contain custom properties to differentiate themselves from other tokens or contracts in the ecosystem. In NEAR, many common properties can be stored right on-chain. Other properties are best stored off-chain or in a decentralized storage platform, in order to save on storage costs and allow rapid community experimentation.\n\n## Rationale and alternatives\n\nAs blockchain technology advances, it becomes increasingly important to provide backwards compatibility and a concept of a spec. This standard encompasses all of these concerns.\n\nPrior art:\n\n- [EIP-1046](https://eips.ethereum.org/EIPS/eip-1046)\n- [OpenZeppelin's ERC-721 Metadata standard](https://docs.openzeppelin.com/contracts/5.x/api/token/ERC721#IERC721Metadata) also helped, although it's for non-fungible tokens.\n\n## Specification\n\nA fungible token smart contract allows for discoverable properties. Some properties can be determined by other contracts on-chain, or return in view method calls. 
Others can only be determined by an oracle system to be used on-chain, or by a frontend with the ability to access a linked reference file.\n\n### Example scenario\n\n#### Token provides metadata upon deploy and initialization\n\nAlice deploys a wBTC fungible token contract.\n\n##### Assumptions\n\n- The wBTC token contract is `wbtc`.\n- Alice's account is `alice`.\n- The precision (\"decimals\" in this metadata standard) on the wBTC contract is `10^8`.\n\n##### High-level explanation\n\nAlice issues a transaction to deploy and initialize the fungible token contract, providing arguments to the initialization function that set metadata fields.\n\n##### Technical calls\n\n1. `alice` deploys a contract and calls `wbtc::new` with all metadata. If this deploy and initialization were done using [NEAR CLI](https://docs.near.org/tools/near-cli), the command would be:\n\n```shell\nnear deploy wbtc --wasmFile res/ft.wasm --initFunction new --initArgs '{\n   \"owner_id\": \"wbtc\",\n   \"total_supply\": \"100000000000000\",\n   \"metadata\": {\n     \"spec\": \"ft-1.0.0\",\n     \"name\": \"Wrapped Bitcoin\",\n     \"symbol\": \"WBTC\",\n     \"icon\": \"data:image/svg+xml,%3C…\",\n     \"reference\": \"https://example.com/wbtc.json\",\n     \"reference_hash\": \"AK3YRHqKhCJNmKfV6SrutnlWW/icN5J8NUPtKsNXR1M=\",\n     \"decimals\": 8\n   }}' --accountId alice\n```\n\n## Reference-level explanation\n\nA fungible token contract implementing the metadata standard shall contain a function named `ft_metadata`.\n\n```ts\nfunction ft_metadata(): FungibleTokenMetadata {}\n```\n\n##### Interface\n\n```ts\ntype FungibleTokenMetadata = {\n  spec: string;\n  name: string;\n  symbol: string;\n  icon: string | null;\n  reference: string | null;\n  reference_hash: string | null;\n  decimals: number;\n};\n```\n\nAn implementing contract MUST include the following fields on-chain\n\n- `spec`: a string. 
Should be `ft-1.0.0` to indicate that a Fungible Token contract adheres to the current versions of this Metadata and the [Fungible Token Core][FT Core] specs. This will allow consumers of the Fungible Token to know if they support the features of a given contract.\n- `name`: the human-readable name of the token.\n- `symbol`: the abbreviation, like wETH or AMPL.\n- `decimals`: used in frontends to show the proper significant digits of a token. This concept is explained well in this [OpenZeppelin post](https://docs.openzeppelin.com/contracts/3.x/erc20#a-note-on-decimals).\n\nAn implementing contract MAY include the following fields on-chain\n\n- `icon`: a small image associated with this token. Must be a [data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URIs), to help consumers display it quickly while protecting user data. Recommendation: use [optimized SVG](https://codepen.io/tigt/post/optimizing-svgs-in-data-uris), which can result in high-resolution images with only 100s of bytes of [storage cost](https://docs.near.org/concepts/storage/storage-staking). (Note that these storage costs are incurred to the token owner/deployer, but that querying these icons is a very cheap & cacheable read operation for all consumers of the contract and the RPC nodes that serve the data.) Recommendation: create icons that will work well with both light-mode and dark-mode websites by either using middle-tone color schemes, or by [embedding `media` queries in the SVG](https://timkadlec.com/2013/04/media-queries-within-svg/).\n- `reference`: a link to a valid JSON file containing various keys offering supplementary details on the token. Example: `/ipfs/QmdmQXB2mzChmMeKY47C43LxUdg1NDJ5MWcKMKxDu7RgQm`, `https://example.com/token.json`, etc. 
If the information given in the referenced document conflicts with the on-chain attributes, the values in `reference` shall be considered the source of truth.\n- `reference_hash`: the base64-encoded sha256 hash of the JSON file pointed to by the `reference` field. This is to guard against off-chain tampering.\n\n## Reference Implementation\n\nThe `near-contract-standards` cargo package of the [Near Rust SDK](https://github.com/near/near-sdk-rs) contains the following implementations of NEP-148:\n\n- [Implementation](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/fungible_token/metadata.rs)\n\n## Drawbacks\n\n- It could be argued that `symbol` and even `name` could belong as key/values in the `reference` JSON object.\n- Enforcement of `icon` to be a data URL rather than a link to an HTTP endpoint that could contain privacy-violating code cannot be done on deploy or update of contract metadata, and must be done on the consumer/app side when displaying token data.\n- If the on-chain icon uses a data URL or is not set but the document given by `reference` contains a privacy-violating `icon` URL, consumers & apps of this data should not naïvely display the `reference` version, but should prefer the safe version. This is technically a violation of the \"`reference` setting wins\" policy described above.\n\n## Future possibilities\n\n- Detailed conventions that may be enforced for versions.\n- A fleshed-out schema for what the `reference` object should contain.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n\n[FT Core]: https://github.com/near/NEPs/blob/master/neps/nep-0141.md\n"
  },
  {
    "path": "neps/nep-0171.md",
    "content": "---\nNEP: 171\nTitle: Non Fungible Token Standard\nAuthor: Mike Purvis <mike@near.org>, Evgeny Kuzyakov <ek@near.org>, @oysterpack\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/discussions/171\nType: Standards Track\nCategory: Contract\nVersion: 1.2.0\nCreated: 03-Mar-2022\nUpdated: 13-Mar-2023\nRequires: 297\n---\n\n## Summary\n\nA standard interface for non-fungible tokens (NFTs). That is, tokens which each have a unique ID.\n\n## Motivation\n\nIn the three years since [ERC-721] was ratified by the Ethereum community, Non-Fungible Tokens have proven themselves as an incredible new opportunity across a wide array of disciplines: collectibles, art, gaming, finance, virtual reality, real estate, and more.\n\nThis standard builds off the lessons learned in this early experimentation, and pushes the possibilities further by harnessing unique attributes of the NEAR blockchain:\n\n- an asynchronous, sharded runtime, meaning that two contracts can be executed at the same time in different shards\n- a [storage staking] model that separates [gas] fees from the storage demands of the network, enabling greater on-chain storage (see [Metadata] extension) and ultra-low transaction fees\n\nGiven these attributes, this NFT standard can accomplish with one user interaction things for which other blockchains need two or three. Most noteworthy is `nft_transfer_call`, by which a user can essentially attach a token to a call to a separate contract. An example scenario:\n\n- An [Exquisite Corpse](https://en.wikipedia.org/wiki/Exquisite_corpse) contract allows three drawings to be submitted, one for each section of a final composition, to be minted as its own NFT and sold on a marketplace, splitting royalties amongst the original artists.\n- Alice draws the top third and submits it, Bob the middle third, and Carol follows up with the bottom third. 
Since they each use `nft_transfer_call` both to transfer their NFT to the Exquisite Corpse contract and to call a `submit` method on it, the call from Carol can automatically kick off minting a composite NFT from the three submissions, as well as listing this composite NFT in a marketplace.\n- When Dan attempts to also call `nft_transfer_call` to submit an unneeded top third of the drawing, the Exquisite Corpse contract can throw an error, and the transfer will be rolled back so that Dan maintains ownership of his NFT.\n\nWhile this is already flexible and powerful enough to handle all sorts of existing and new use-cases, apps such as marketplaces may still benefit from the [Approval Management] extension.\n\nPrior art:\n\n- [ERC-721]\n- [EIP-1155 for multi-tokens](https://eips.ethereum.org/EIPS/eip-1155#non-fungible-tokens)\n- [NEAR's Fungible Token Standard][FT Core], which first pioneered the \"transfer and call\" technique\n\n## Rationale and alternatives\n\n- Why is this design the best in the space of possible designs?\n- What other designs have been considered and what is the rationale for not choosing them?\n- What is the impact of not doing this?\n\n## Specification\n\n**NOTES**:\n\n- All amounts, balances, and allowances are limited by `U128` (max value `2**128 - 1`).\n- The token standard uses JSON for serialization of arguments and results.\n- Amounts in arguments and results are serialized as Base-10 strings, e.g. `\"100\"`. This is done to avoid the JSON limitation of a max integer value of `2**53`.\n- The contract must track the change in storage when adding to and removing from collections. This is not included in this core non-fungible token standard but instead in the [Storage Standard][Storage Management].\n- To prevent the deployed contract from being modified or deleted, it should not have any access keys on its account.\n\n### NFT Interface\n\n```ts\n// The base structure that will be returned for a token. 
If the contract is using\n// extensions such as Approval Management or Metadata, other\n// attributes may be included in this structure.\ntype Token = {\n   token_id: string,\n   owner_id: string,\n }\n\n/******************/\n/* CHANGE METHODS */\n/******************/\n\n// Simple transfer. Transfer a given `token_id` from current owner to\n// `receiver_id`.\n//\n// Requirements:\n// * Caller of the method must attach a deposit of 1 yoctoⓃ for security purposes\n// * Contract MUST panic if called by someone other than token owner or,\n//   if using Approval Management, one of the approved accounts\n// * `approval_id` is for use with Approval Management extension, see\n//   that document for full explanation.\n// * If using Approval Management, contract MUST nullify approved accounts on\n//   successful transfer.\n//\n// Arguments:\n// * `receiver_id`: the valid NEAR account receiving the token\n// * `token_id`: the token to transfer\n// * `approval_id`: expected approval ID. A number smaller than\n//    2^53, and therefore representable as JSON. See Approval Management\n//    standard for full explanation.\n// * `memo` (optional): for use cases that may benefit from indexing or\n//    providing information for a transfer\nfunction nft_transfer(\n  receiver_id: string,\n  token_id: string,\n  approval_id: number|null,\n  memo: string|null,\n) {}\n\n// Transfer token and call a method on a receiver contract. A successful\n// workflow will end in a success execution outcome to the callback on the NFT\n// contract at the method `nft_resolve_transfer`, which returns `true` if the\n// token was transferred from the sender's account.\n//\n// You can think of this as being similar to attaching native NEAR tokens to a\n// function call. 
It allows you to attach any Non-Fungible Token in a call to a\n// receiver contract.\n//\n// Requirements:\n// * Caller of the method must attach a deposit of 1 yoctoⓃ for security\n//   purposes\n// * Contract MUST panic if called by someone other than token owner or,\n//   if using Approval Management, one of the approved accounts\n// * The receiving contract must implement `nft_on_transfer` according to the\n//   standard. If it does not, NFT contract's `nft_resolve_transfer` MUST deal\n//   with the resulting failed cross-contract call and roll back the transfer.\n// * Contract MUST implement the behavior described in `nft_resolve_transfer`\n// * `approval_id` is for use with Approval Management extension, see\n//   that document for full explanation.\n// * If using Approval Management, contract MUST nullify approved accounts on\n//   successful transfer.\n//\n// Arguments:\n// * `receiver_id`: the valid NEAR account receiving the token.\n// * `token_id`: the token to send.\n// * `approval_id`: expected approval ID. A number smaller than\n//    2^53, and therefore representable as JSON. See Approval Management\n//    standard for full explanation.\n// * `memo` (optional): for use cases that may benefit from indexing or\n//    providing information for a transfer.\n// * `msg`: specifies information needed by the receiving contract in\n//    order to properly handle the transfer. 
Can indicate both a function to\n//    call and the parameters to pass to that function.\nfunction nft_transfer_call(\n  receiver_id: string,\n  token_id: string,\n  approval_id: number|null,\n  memo: string|null,\n  msg: string,\n): Promise {}\n\n\n/****************/\n/* VIEW METHODS */\n/****************/\n\n// Returns the token with the given `token_id` or `null` if no such token.\nfunction nft_token(token_id: string): Token|null {}\n```\n\nThe following behavior is required, but contract authors may name this function something other than the conventional `nft_resolve_transfer` used here.\n\n```ts\n// Finalize an `nft_transfer_call` chain of cross-contract calls.\n//\n// The `nft_transfer_call` process:\n//\n// 1. Sender calls `nft_transfer_call` on NFT contract\n// 2. NFT contract transfers token from sender to receiver\n// 3. NFT contract calls `nft_on_transfer` on receiver contract\n// 4+. [receiver contract may make other cross-contract calls]\n// N. NFT contract resolves promise chain with `nft_resolve_transfer`, and may\n//    transfer token back to sender\n//\n// Requirements:\n// * Contract MUST forbid calls to this function by any account except self\n// * If promise chain failed, contract MUST revert token transfer\n// * If promise chain resolves with `true`, contract MUST return token to\n//   `owner_id`\n//\n// Arguments:\n// * `owner_id`: the original owner of the NFT.\n// * `receiver_id`: the `receiver_id` argument given to `nft_transfer_call`\n// * `token_id`: the `token_id` argument given to `nft_transfer_call`\n// * `approved_account_ids`: if using Approval Management, contract MUST provide\n//   record of original approved accounts in this argument, and restore these\n//   approved accounts and their approval IDs in case of revert.\n//\n// Returns true if token was successfully transferred to `receiver_id`.\nfunction nft_resolve_transfer(\n  owner_id: string,\n  receiver_id: string,\n  token_id: string,\n  approved_account_ids: 
null|Record<string, number>,\n): boolean {}\n```\n\n### Receiver Interface\n\nContracts which want to make use of `nft_transfer_call` must implement the following:\n\n```ts\n// Take some action after receiving a non-fungible token\n//\n// Requirements:\n// * Contract MUST restrict calls to this function to a set of whitelisted NFT\n//   contracts\n//\n// Arguments:\n// * `sender_id`: the sender of `nft_transfer_call`\n// * `previous_owner_id`: the account that owned the NFT prior to it being\n//   transferred to this contract, which can differ from `sender_id` if using\n//   Approval Management extension\n// * `token_id`: the `token_id` argument given to `nft_transfer_call`\n// * `msg`: information necessary for this contract to know how to process the\n//   request. This may include method names and/or arguments.\n//\n// Returns true if token should be returned to `sender_id`\nfunction nft_on_transfer(\n  sender_id: string,\n  previous_owner_id: string,\n  token_id: string,\n  msg: string,\n): Promise<boolean>;\n```\n\n### Events\n\nNEAR and third-party applications need to track mint, transfer, burn, metadata update, and contract metadata update events for all NFT-driven apps consistently.\nThis extension addresses that.\n\nKeep in mind that applications, including NEAR Wallet, could require implementing additional methods to display the NFTs correctly, such as [`nft_metadata`][Metadata] and [`nft_tokens_for_owner`][NFT Enumeration].\n\n#### Events Interface\n\nNon-Fungible Token Events MUST have `standard` set to `\"nep171\"`, `version` set to `\"1.2.0\"`, `event` set to one of `nft_mint`, `nft_burn`, `nft_transfer`, `nft_metadata_update`, or `contract_metadata_update`, and `data` set to the corresponding one of the following types: `NftMintLog[] | NftTransferLog[] | NftBurnLog[] | NftMetadataUpdateLog[] | NftContractMetadataUpdateLog[]`:\n\n```ts\ninterface NftEventLogData {\n    standard: \"nep171\",\n    version: \"1.2.0\",\n    event: \"nft_mint\" | 
\"nft_burn\" | \"nft_transfer\" | \"nft_metadata_update\" | \"contract_metadata_update\",\n    data: NftMintLog[] | NftTransferLog[] | NftBurnLog[] | NftMetadataUpdateLog[] | NftContractMetadataUpdateLog[],\n}\n```\n\n```ts\n// An event log to capture token minting\n// Arguments\n// * `owner_id`: \"account.near\"\n// * `token_ids`: [\"1\", \"abc\"]\n// * `memo`: optional message\ninterface NftMintLog {\n    owner_id: string,\n    token_ids: string[],\n    memo?: string\n}\n\n// An event log to capture token burning\n// Arguments\n// * `owner_id`: owner of tokens to burn\n// * `authorized_id`: approved account_id to burn, if applicable\n// * `token_ids`: [\"1\",\"2\"]\n// * `memo`: optional message\ninterface NftBurnLog {\n    owner_id: string,\n    authorized_id?: string,\n    token_ids: string[],\n    memo?: string\n}\n\n// An event log to capture token transfer\n// Arguments\n// * `authorized_id`: approved account_id to transfer, if applicable\n// * `old_owner_id`: \"owner.near\"\n// * `new_owner_id`: \"receiver.near\"\n// * `token_ids`: [\"1\", \"12345abc\"]\n// * `memo`: optional message\ninterface NftTransferLog {\n    authorized_id?: string,\n    old_owner_id: string,\n    new_owner_id: string,\n    token_ids: string[],\n    memo?: string\n}\n\n// An event log to capture token metadata updating\n// Arguments\n// * `token_ids`: [\"1\", \"abc\"]\n// * `memo`: optional message\ninterface NftMetadataUpdateLog {\n    token_ids: string[],\n    memo?: string\n}\n\n// An event log to capture contract metadata updates. Note that the updated contract metadata is not included in the log, as it could easily exceed the 16KB log size limit. 
Listeners can query `nft_metadata` to get the updated contract metadata.\n// Arguments\n// * `memo`: optional message\ninterface NftContractMetadataUpdateLog {\n    memo?: string\n}\n```\n\n#### Examples\n\nSingle owner batch minting (pretty-formatted for readability purposes):\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep171\",\n  \"version\": \"1.2.0\",\n  \"event\": \"nft_mint\",\n  \"data\": [\n    {\"owner_id\": \"foundation.near\", \"token_ids\": [\"aurora\", \"proximitylabs\"]}\n  ]\n}\n```\n\nDifferent owners batch minting:\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep171\",\n  \"version\": \"1.2.0\",\n  \"event\": \"nft_mint\",\n  \"data\": [\n    {\"owner_id\": \"foundation.near\", \"token_ids\": [\"aurora\", \"proximitylabs\"]},\n    {\"owner_id\": \"user1.near\", \"token_ids\": [\"meme\"]}\n  ]\n}\n```\n\nDifferent events (separate log entries):\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep171\",\n  \"version\": \"1.2.0\",\n  \"event\": \"nft_burn\",\n  \"data\": [\n    {\"owner_id\": \"foundation.near\", \"token_ids\": [\"aurora\", \"proximitylabs\"]}\n  ]\n}\n```\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep171\",\n  \"version\": \"1.2.0\",\n  \"event\": \"nft_transfer\",\n  \"data\": [\n    {\"old_owner_id\": \"user1.near\", \"new_owner_id\": \"user2.near\", \"token_ids\": [\"meme\"], \"memo\": \"have fun!\"}\n  ]\n}\n```\n\nAuthorized ID:\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep171\",\n  \"version\": \"1.2.0\",\n  \"event\": \"nft_burn\",\n  \"data\": [\n    {\"owner_id\": \"owner.near\", \"token_ids\": [\"goodbye\", \"aurevoir\"], \"authorized_id\": \"thirdparty.near\"}\n  ]\n}\n```\n\nNFT Metadata Update:\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep171\",\n  \"version\": \"1.2.0\",\n  \"event\": \"nft_metadata_update\",\n  \"data\": [\n    {\"token_ids\": [\"1\", \"2\"]}\n  ]\n}\n```\n\nContract metadata update:\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep171\",\n  \"version\": \"1.2.0\",\n  \"event\": \"contract_metadata_update\",\n  
\"data\": []\n}\n```\n\n#### Events for Other NFT Methods\n\nNote that the example events above cover two different kinds of events:\n\n1. Events that do not have a dedicated trigger function in the NFT Standard (`nft_mint`, `nft_metadata_update`, `nft_burn`, `contract_metadata_update`)\n2. An event that has a relevant trigger function [NFT Core Standard](https://nomicon.io/Standards/NonFungibleToken/Core.html#nft-interface) (`nft_transfer`)\n\nThis event standard also applies beyond the events highlighted here, where future events follow the same convention of as the second type. For instance, if an NFT contract uses the [approval management standard](https://nomicon.io/Standards/NonFungibleToken/ApprovalManagement.html), it may emit an event for `nft_approve` if that's deemed as important by the developer community.\n\nPlease feel free to open pull requests for extending the events standard detailed here as needs arise.\n\n\n## Reference Implementation\n\n[Minimum Viable Interface](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/non_fungible_token/core/mod.rs)\n\n[NFT Implementation](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/non_fungible_token/core/core_impl.rs)\n\n\n## Changelog\n\n### 1.0.0 - Initial version\n\nThis NEP had several pre-1.0.0 iterations that led to the following errata updates to this NEP:\n\n- **2022-02-03**: updated `Token` struct field names. `id` was changed to `token_id`. This is to be consistent with current implementations of the standard and the rust SDK docs.\n- **2021-12-20**: updated `nft_resolve_transfer` argument `approved_account_ids` to be type `null|Record<string, number>` instead of `null|string[]`. This gives contracts a way to restore the original approved accounts and their approval IDs. 
More information can be found in [this](https://github.com/near/NEPs/issues/301) discussion.\n- **2021-07-16**: updated `nft_transfer_call` argument `approval_id` to be type `number|null` instead of `string|null`. As stated, approval IDs are not expected to exceed the JSON limit of 2^53.\n\n### 1.1.0 - Add `contract_metadata_update` Event\n\nThe extension NEP-0423 that added Contract Metadata Update Event Kind to this NEP-0171 was approved by Contract Standards Working Group members (@frol, @abacabadabacaba, @mfornet) on January 13, 2023 ([meeting recording](https://youtu.be/pBLN9UyE6AA)).\n\n#### Benefits\n\n- This new event type will help indexers to invalidate their cached values reliably and efficiently\n- This NEP extension only introduces an additional event type, so there is no breaking change to the original NEP\n\n#### Concerns\n\n| # | Concern | Resolution | Status |\n| - | - | - | - |\n| 1 | Old NFT contracts do not emit JSON Events at all; more recent NFT contracts will only emit mint/burn/transfer events, so when it comes to legacy contracts support, we won’t benefit from this new event type and only further fragment the implementations | Legacy contracts usage will die out eventually and new contracts will support new features in a non-breaking way | Resolved |\n| 2 | There is a need to have a similar event type for individual NFT updates | It is outside of the scope of this NEP extension. 
Feel free to create a follow-up proposal | Resolved |\n\n### 1.2.0 - Add `nft_metadata_update` Event\n\nThe extension NEP-0469 that added Token Metadata Update Event Kind to this NEP-0171 was approved by Contract Standards Working Group members (@frol, @abacabadabacaba, @mfornet, @fadeevab, @robert-zaremba) on April 21, 2023 ([meeting recording](https://youtu.be/KOIT8XDQNjM)).\n\n#### Benefits\n\n- Apps that cache indexed NFTs will benefit from having this new event type as they won't need custom logic to track potentially changing NFTs or refetch all NFTs on every transaction to the NFT contract\n- It plays well with the `contract_metadata_update` event introduced in version 1.1.0 of this NEP\n- This NEP extension only introduces an additional event type, so there is no breaking change to the original NEP\n\n#### Concerns\n\n| # | Concern | Resolution | Status |\n| - | - | - | - |\n| 1 | Ecosystem will be split where legacy contracts won't emit these new events, so legacy support will still be needed | In the future, there will be fewer legacy contracts and eventually apps will have support for this type of event | Resolved |\n| 2 | `nft_update` event name is ambiguous | It was decided to use the `nft_metadata_update` name instead | Resolved |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n\n[ERC-721]: https://eips.ethereum.org/EIPS/eip-721\n[storage staking]: https://docs.near.org/concepts/storage/storage-staking\n[gas]: https://docs.near.org/concepts/basics/transactions/gas\n[Metadata]: https://github.com/near/NEPs/blob/master/neps/nep-0177.md\n[Approval Management]: https://github.com/near/NEPs/blob/master/neps/nep-0178.md\n[FT core]: https://github.com/near/NEPs/blob/master/neps/nep-0141.md\n[Storage Management]: https://github.com/near/NEPs/blob/master/neps/nep-0145.md\n[NFT Enumeration]: https://github.com/near/NEPs/blob/master/neps/nep-0181.md\n"
  },
  {
    "path": "neps/nep-0177.md",
    "content": "---\nNEP: 177\nTitle: Non Fungible Token Metadata\nAuthor: Chad Ostrowski <@chadoh>, Mike Purvis <mike@near.org>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/discussions/177\nType: Standards Track\nCategory: Contract\nCreated: 03-Mar-2022\nRequires: 171\n---\n\n## Summary\n\nAn interface for a non-fungible token's metadata. The goal is to keep the metadata future-proof as well as lightweight. This will be important to dApps needing additional information about an NFT's properties, and broadly compatible with other token standards such that the [NEAR Rainbow Bridge](https://near.org/blog/eth-near-rainbow-bridge/) can move tokens between chains.\n\n## Motivation\n\nThe primary value of non-fungible tokens comes from their metadata. While the [core standard][NFT Core] provides the minimum interface that can be considered a non-fungible token, most artists, developers, and dApps will want to associate more data with each NFT, and will want a predictable way to interact with any NFT's metadata.\n\n## Rationale and alternatives\n\nNEAR's unique [storage staking](https://docs.near.org/concepts/storage/storage-staking) approach makes it feasible to store more data on-chain than other blockchains. This standard leverages this strength for common metadata attributes, and provides a standard way to link to additional offchain data to support rapid community experimentation.\n\nThis standard also provides a `spec` version. This makes it easy for consumers of NFTs, such as marketplaces, to know if they support all the features of a given token.\n\nPrior art:\n\n- NEAR's [Fungible Token Metadata Standard][FT Metadata]\n- Discussion about NEAR's complete NFT standard: #171\n\n## Specification\n\n## Interface\n\nMetadata applies at both the contract level (`NFTContractMetadata`) and the token level (`TokenMetadata`). 
The relevant metadata for each:\n\n```ts\ntype NFTContractMetadata = {\n  spec: string, // required, essentially a version like \"nft-1.0.0\"\n  name: string, // required, ex. \"Mochi Rising — Digital Edition\" or \"Metaverse 3\"\n  symbol: string, // required, ex. \"MOCHI\"\n  icon: string|null, // Data URL\n  base_uri: string|null, // Centralized gateway known to have reliable access to decentralized storage assets referenced by `reference` or `media` URLs\n  reference: string|null, // URL to a JSON file with more info\n  reference_hash: string|null, // Base64-encoded sha256 hash of JSON from reference field. Required if `reference` is included.\n}\n\ntype TokenMetadata = {\n  title: string|null, // ex. \"Arch Nemesis: Mail Carrier\" or \"Parcel #5055\"\n  description: string|null, // free-form description\n  media: string|null, // URL to associated media, preferably to decentralized, content-addressed storage\n  media_hash: string|null, // Base64-encoded sha256 hash of content referenced by the `media` field. Required if `media` is included.\n  copies: number|null, // number of copies of this set of metadata in existence when token was minted.\n  issued_at: number|null, // When token was issued or minted, Unix epoch in milliseconds\n  expires_at: number|null, // When token expires, Unix epoch in milliseconds\n  starts_at: number|null, // When token starts being valid, Unix epoch in milliseconds\n  updated_at: number|null, // When token was last updated, Unix epoch in milliseconds\n  extra: string|null, // anything extra the NFT wants to store on-chain. Can be stringified JSON.\n  reference: string|null, // URL to an off-chain JSON file with more info.\n  reference_hash: string|null // Base64-encoded sha256 hash of JSON from reference field. 
Required if `reference` is included.\n}\n```\n\nA new function MUST be supported on the NFT contract:\n\n```ts\nfunction nft_metadata(): NFTContractMetadata {}\n```\n\nA new attribute MUST be added to each `Token` struct:\n\n```diff\n type Token = {\n   token_id: string,\n   owner_id: string,\n+  metadata: TokenMetadata,\n }\n```\n\n### An implementing contract MUST include the following fields on-chain\n\n- `spec`: a string that MUST be formatted `nft-1.0.0` to indicate that a Non-Fungible Token contract adheres to the current versions of this Metadata spec. This will allow consumers of the Non-Fungible Token to know if they support the features of a given contract.\n- `name`: the human-readable name of the contract.\n- `symbol`: the abbreviated symbol of the contract, like MOCHI or MV3\n- `base_uri`: Centralized gateway known to have reliable access to decentralized storage assets referenced by `reference` or `media` URLs. Can be used by other frontends for initial retrieval of assets, even if these frontends then replicate the data to their own decentralized nodes, which they are encouraged to do.\n\n### An implementing contract MAY include the following fields on-chain\n\nFor `NFTContractMetadata`:\n\n- `icon`: a small image associated with this contract. Encouraged to be a [data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URIs), to help consumers display it quickly while protecting user data. Recommendation: use [optimized SVG](https://codepen.io/tigt/post/optimizing-svgs-in-data-uris), which can result in high-resolution images with only 100s of bytes of [storage cost](https://docs.near.org/concepts/storage/storage-staking). (Note that these storage costs are incurred to the contract deployer, but that querying these icons is a very cheap & cacheable read operation for all consumers of the contract and the RPC nodes that serve the data.) 
Recommendation: create icons that will work well with both light-mode and dark-mode websites by either using middle-tone color schemes, or by [embedding `media` queries in the SVG](https://timkadlec.com/2013/04/media-queries-within-svg/).\n- `reference`: a link to a valid JSON file containing various keys offering supplementary details on the token. Example: `/ipfs/QmdmQXB2mzChmMeKY47C43LxUdg1NDJ5MWcKMKxDu7RgQm`, `https://example.com/token.json`, etc. If the information given in this document conflicts with the on-chain attributes, the values in `reference` shall be considered the source of truth.\n- `reference_hash`: the base64-encoded sha256 hash of the JSON file contained in the `reference` field. This is to guard against off-chain tampering.\n\nFor `TokenMetadata`:\n\n- `title`: The title of this specific token.\n- `description`: A longer description of the token.\n- `media`: URL to associated media. Preferably to decentralized, content-addressed storage.\n- `media_hash`: the base64-encoded sha256 hash of content referenced by the `media` field. This is to guard against off-chain tampering.\n- `copies`: The number of tokens with this set of metadata or `media` known to exist at time of minting.\n- `issued_at`: Unix epoch in milliseconds when token was issued or minted\n- `expires_at`: Unix epoch in milliseconds when token expires\n- `starts_at`: Unix epoch in milliseconds when token starts being valid\n- `updated_at`: Unix epoch in milliseconds when token was last updated\n- `extra`: anything extra the NFT wants to store on-chain. Can be stringified JSON.\n- `reference`: URL to an off-chain JSON file with more info.\n- `reference_hash`: Base64-encoded sha256 hash of JSON from reference field. 
Required if `reference` is included.\n\n### No incurred cost for core NFT behavior\n\nContracts should be implemented in a way that avoids extra gas fees for serialization & deserialization of metadata for calls to `nft_*` methods other than `nft_metadata` or `nft_token`. See `near-contract-standards` [implementation using `LazyOption`](https://github.com/near/near-sdk-rs/blob/c2771af7fdfe01a4e8414046752ee16fb0d29d39/examples/fungible-token/ft/src/lib.rs#L71) as a reference example.\n\n\n## Reference Implementation\n\nSee the `near-contract-standards` [metadata module](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/non_fungible_token/metadata.rs), which implements the `NFTContractMetadata` and `TokenMetadata` types and the `nft_metadata` view method described above.\n\n## Drawbacks\n\n- When this NFT contract is created and initialized, the storage use per-token will be higher than an NFT Core version. Frontends can account for this by adding extra deposit when minting. 
This could be done by padding with a reasonable amount, or by the frontend using the [RPC call detailed here](https://docs.near.org/docs/develop/front-end/rpc#genesis-config) that gets the genesis configuration to determine precisely how much deposit is needed.\n- Enforcing the convention that `icon` be a data URL rather than a link to an HTTP endpoint (which could contain privacy-violating code) cannot be done on deploy or update of contract metadata; it must be done on the consumer/app side when displaying token data.\n- If the on-chain `icon` uses a data URL or is not set but the document given by `reference` contains a privacy-violating `icon` URL, consumers & apps of this data should not naïvely display the `reference` version, but should prefer the safe version. This is technically a violation of the \"`reference` setting wins\" policy described above.\n\n## Future possibilities\n\n- Detailed conventions that may be enforced for versions.\n- A fleshed-out schema for what the `reference` object should contain.\n\n## Errata\n\n- **2022-02-03**: updated `Token` struct field names. `id` was changed to `token_id`. This is to be consistent with current implementations of the standard and the rust SDK docs.\n\nThe first version (`1.0.0`) had confusing language regarding the fields:\n\n- `issued_at`\n- `expires_at`\n- `starts_at`\n- `updated_at`\n\nIt gave those fields the type `string|null` but it was unclear whether it should be a Unix epoch in milliseconds or [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html). 
Upon revisiting this, epoch milliseconds was determined to be the most efficient choice, as it reduces computation on the smart contract and can be derived trivially from the block timestamp.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n\n[NFT Core]: https://github.com/near/NEPs/blob/master/neps/nep-0171.md\n[FT Metadata]: https://github.com/near/NEPs/blob/master/neps/nep-0148.md\n"
  },
  {
    "path": "neps/nep-0178.md",
    "content": "---\nNEP: 178\nTitle: Non Fungible Token Approval Management\nAuthor: Chad Ostrowski <@chadoh>, Thor <@thor314>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/discussions/178\nType: Standards Track\nCategory: Contract\nCreated: 03-Mar-2022\nRequires: 171\n---\n\n## Summary\n\nA system for allowing a set of users or contracts to transfer specific Non-Fungible Tokens on behalf of an owner. Similar to approval management systems in standards like [ERC-721].\n\n## Motivation\n\nPeople familiar with [ERC-721] may expect to need an approval management system for basic transfers, where a simple transfer from Alice to Bob requires that Alice first _approve_ Bob to spend one of her tokens, after which Bob can call `transfer_from` to actually transfer the token to himself.\n\n## Rationale and alternatives\n\nNEAR's [core Non-Fungible Token standard][NFT Core] includes good support for safe atomic transfers without such complexity. It even provides \"transfer and call\" functionality (`nft_transfer_call`) which allows a specific token to be \"attached\" to a call to a separate contract. For many Non-Fungible Token workflows, these options may circumvent the need for a full-blown Approval Management system.\n\nHowever, some Non-Fungible Token developers, marketplaces, dApps, or artists may require greater control. This standard provides a uniform interface allowing token owners to approve other NEAR accounts, whether individuals or contracts, to transfer specific tokens on the owner's behalf.\n\nPrior art:\n\n- Ethereum's [ERC-721]\n- [NEP-4](https://github.com/near/NEPs/pull/4), NEAR's old NFT standard that does not include approved_account_ids per token ID\n\n## Specification\n\n## Example Scenarios\n\nLet's consider some examples. 
Our cast of characters & apps:\n\n- Alice: has account `alice` with no contract deployed to it\n- Bob: has account `bob` with no contract deployed to it\n- NFT: a contract with account `nft`, implementing only the [Core NFT standard][NFT Core] with this Approval Management extension\n- Market: a contract with account `market` which sells tokens from `nft` as well as other NFT contracts\n- Bazaar: similar to Market, but implemented differently (spoiler alert: has no `nft_on_approve` function!), has account `bazaar`\n\nAlice and Bob are already [registered][Storage Management] with NFT, Market, and Bazaar, and Alice owns a token on the NFT contract with ID=`\"1\"`.\n\nLet's examine the technical calls through the following scenarios:\n\n1. [Simple approval](#1-simple-approval): Alice approves Bob to transfer her token.\n2. [Approval with cross-contract call (XCC)](#2-approval-with-cross-contract-call): Alice approves Market to transfer one of her tokens and passes `msg` so that NFT will call `nft_on_approve` on Market's contract.\n3. [Approval with XCC, edge case](#3-approval-with-cross-contract-call-edge-case): Alice approves Bazaar and passes `msg` again, but what's this? Bazaar doesn't implement `nft_on_approve`, so Alice sees an error in the transaction result. Not to worry, though, she checks `nft_is_approved` and sees that she did successfully approve Bazaar, despite the error.\n4. [Approval IDs](#4-approval-ids): Bob buys Alice's token via Market.\n5. [Approval IDs, edge case](#5-approval-ids-edge-case): Bob transfers same token back to Alice, Alice re-approves Market & Bazaar. Bazaar has an outdated cache. Bob tries to buy from Bazaar at the old price.\n6. [Revoke one](#6-revoke-one): Alice revokes Market's approval for this token.\n7. [Revoke all](#7-revoke-all): Alice revokes all approval for this token.\n\n### 1. Simple Approval\n\nAlice approves Bob to transfer her token.\n\n##### High-level explanation\n\n1. Alice approves Bob\n2. 
Alice queries the token to verify\n3. Alice verifies a different way\n\n##### Technical calls\n\n1.  Alice calls `nft::nft_approve({ \"token_id\": \"1\", \"account_id\": \"bob\" })`. She attaches 1 yoctoⓃ (.000000000000000000000001Ⓝ). Using [NEAR CLI](https://docs.near.org/tools/near-cli) to make this call, the command would be:\n\n    ```shell\n    near call nft nft_approve \\\n      '{ \"token_id\": \"1\", \"account_id\": \"bob\" }' \\\n      --accountId alice --depositYocto 1\n    ```\n\n    The response:\n\n    ```shell\n    ''\n    ```\n\n2.  Alice calls view method `nft_token`:\n\n    ```shell\n    near view nft nft_token '{ \"token_id\": \"1\" }'\n    ```\n\n    The response:\n\n    ```json\n    {\n      \"token_id\": \"1\",\n      \"owner_id\": \"alice\",\n      \"approved_account_ids\": {\n        \"bob\": 1\n      }\n    }\n    ```\n\n3.  Alice calls view method `nft_is_approved`:\n\n    ```shell\n    near view nft nft_is_approved '{ \"token_id\": \"1\", \"approved_account_id\": \"bob\" }'\n    ```\n\n    The response:\n\n    ```shell\n    true\n    ```\n\n### 2. Approval with cross-contract call\n\nAlice approves Market to transfer one of her tokens and passes `msg` so that NFT will call `nft_on_approve` on Market's contract. She probably does this via Market's frontend app which would know how to construct `msg` in a useful way.\n\n##### High-level explanation\n\n1. Alice calls `nft_approve` to approve `market` to transfer her token, and passes a `msg`\n2. Since `msg` is included, `nft` will schedule a cross-contract call to `market`\n3. Market can do whatever it wants with this info, such as listing the token for sale at a given price. The result of this operation is returned as the promise outcome to the original `nft_approve` call.\n\n##### Technical calls\n\n1.  
Using near-cli:\n\n    ```shell\n    near call nft nft_approve '{\n          \"token_id\": \"1\",\n          \"account_id\": \"market\",\n          \"msg\": \"{\\\"action\\\": \\\"list\\\", \\\"price\\\": \\\"100\\\", \\\"token\\\": \\\"nDAI\\\" }\"\n        }' --accountId alice --depositYocto 1\n    ```\n\n    At this point, near-cli will hang until the cross-contract call chain fully resolves, which would also be true if Alice used a Market frontend using [near-api](https://docs.near.org/tools/near-api). Alice's part is done, though. The rest happens behind the scenes.\n\n2.  `nft` schedules a call to `nft_on_approve` on `market`. Using near-cli notation for easy cross-reference with the above, this would look like:\n\n    ```shell\n    near call market nft_on_approve '{\n      \"token_id\": \"1\",\n      \"owner_id\": \"alice\",\n      \"approval_id\": 2,\n      \"msg\": \"{\\\"action\\\": \\\"list\\\", \\\"price\\\": \\\"100\\\", \\\"token\\\": \\\"nDAI\\\" }\"\n      }' --accountId nft\n    ```\n\n3.  `market` now knows that it can sell Alice's token for 100 [nDAI](https://explorer.mainnet.near.org/accounts/6b175474e89094c44da98b954eedeac495271d0f.factory.bridge.near), and that when it transfers it to a buyer using `nft_transfer`, it can pass along the given `approval_id` to ensure that Alice hasn't changed her mind. It can schedule any further cross-contract calls it wants, and if it returns these promises correctly, Alice's initial near-cli call will resolve with the outcome from the final step in the chain. If Alice actually made this call from a Market frontend, the frontend can use this return value for something useful.\n\n### 3. Approval with cross-contract call, edge case\n\nAlice approves Bazaar and passes `msg` again. Maybe she actually does this via near-cli, rather than using Bazaar's frontend, because what's this? 
Bazaar doesn't implement `nft_on_approve`, so Alice sees an error in the transaction result.\n\nNot to worry, though, she checks `nft_is_approved` and sees that she did successfully approve Bazaar, despite the error. She will have to find a new way to list her token for sale in Bazaar, rather than using the same `msg` shortcut that worked for Market.\n\n##### High-level explanation\n\n1. Alice calls `nft_approve` to approve `bazaar` to transfer her token, and passes a `msg`.\n2. Since `msg` is included, `nft` will schedule a cross-contract call to `bazaar`.\n3. Bazaar doesn't implement `nft_on_approve`, so this call results in an error. The approval still worked, but Alice sees an error in her near-cli output.\n4. Alice checks if `bazaar` is approved, and sees that it is, despite the error.\n\n##### Technical calls\n\n1.  Using near-cli:\n\n    ```shell\n    near call nft nft_approve '{\n      \"token_id\": \"1\",\n      \"account_id\": \"bazaar\",\n      \"msg\": \"{\\\"action\\\": \\\"list\\\", \\\"price\\\": \\\"100\\\", \\\"token\\\": \\\"nDAI\\\" }\"\n      }' --accountId alice --depositYocto 1\n    ```\n\n2.  `nft` schedules a call to `nft_on_approve` on `bazaar`. Using near-cli notation for easy cross-reference with the above, this would look like:\n\n    ```shell\n    near call bazaar nft_on_approve '{\n      \"token_id\": \"1\",\n      \"owner_id\": \"alice\",\n      \"approval_id\": 3,\n      \"msg\": \"{\\\"action\\\": \\\"list\\\", \\\"price\\\": \\\"100\\\", \\\"token\\\": \\\"nDAI\\\" }\"\n      }' --accountId nft\n    ```\n\n3.  💥 `bazaar` doesn't implement this method, so the call results in an error. Alice sees this error in the output from near-cli.\n\n4.  Alice checks if the approval itself worked, despite the error on the cross-contract call:\n\n    ```shell\n    near view nft nft_is_approved \\\n      '{ \"token_id\": \"1\", \"approved_account_id\": \"bazaar\" }'\n    ```\n\n    The response:\n\n    ```shell\n    true\n    ```\n\n### 4. 
Approval IDs\n\nBob buys Alice's token via Market. Bob probably does this via Market's frontend, which will likely initiate the transfer via a call to `ft_transfer_call` on the nDAI contract to transfer 100 nDAI to `market`. Like the NFT standard's \"transfer and call\" function, [Fungible Token][FT Core]'s `ft_transfer_call` takes a `msg` which `market` can use to pass along information it will need to pay Alice and actually transfer the NFT. The actual transfer of the NFT is the only part we care about here.\n\n##### High-level explanation\n\n1. Bob signs some transaction which results in the `market` contract calling `nft_transfer` on the `nft` contract, as described above. To be trustworthy and pass security audits, `market` needs to pass along `approval_id` so that it knows it has up-to-date information.\n\n##### Technical calls\n\nUsing near-cli notation for consistency:\n\n```shell\nnear call nft nft_transfer '{\n  \"receiver_id\": \"bob\",\n  \"token_id\": \"1\",\n  \"approval_id\": 2\n}' --accountId market --depositYocto 1\n```\n\n### 5. Approval IDs, edge case\n\nBob transfers the same token back to Alice, Alice re-approves Market & Bazaar, listing her token at a higher price than before. Bazaar is somehow unaware of these changes, and still stores `approval_id: 3` internally along with Alice's old price. Bob tries to buy from Bazaar at the old price. Like the previous example, this probably starts with a call to a different contract, which eventually results in a call by `bazaar` to `nft_transfer` on the `nft` contract. Let's consider a possible scenario from that point.\n\n##### High-level explanation\n\nBob signs some transaction which results in the `bazaar` contract calling `nft_transfer` on the `nft` contract, as described above. To be trustworthy and pass security audits, `bazaar` needs to pass along `approval_id` so that it knows it has up-to-date information. It does not have up-to-date information, so the call fails. 
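The approval-ID comparison that protects Alice here can be sketched in isolation. This is an illustrative model only; the function and variable names are assumptions, not part of the standard:

```typescript
// Minimal model of the check an NFT contract performs during nft_transfer
// when the caller supplies an approval_id. All names here are illustrative.
type ApprovedAccountIds = Record<string, number>;

function assertTransferApproved(
  approvals: ApprovedAccountIds,
  senderId: string,
  approvalId?: number
): void {
  const current = approvals[senderId];
  if (current === undefined) {
    throw new Error('sender is not an approved account for this token');
  }
  // A supplied approval_id must match the one currently stored on the token.
  if (approvalId !== undefined && approvalId !== current) {
    throw new Error('stale approval_id: token state has changed');
  }
}

// After Alice's re-approval the token stores a fresh approval_id (say 4) for
// bazaar, but bazaar's cache still says 3, so its transfer attempt is rejected:
const approvals: ApprovedAccountIds = { bazaar: 4 };
let rejected = false;
try {
  assertTransferApproved(approvals, 'bazaar', 3);
} catch {
  rejected = true;
}
```

Because the check panics before any state changes, the whole transfer fails atomically, which is what makes the refund behavior described next possible.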
If the initial `nft_transfer` call is part of a call chain originating from a call to `ft_transfer_call` on a fungible token, Bob's payment will be refunded and no assets will change hands.\n\n##### Technical calls\n\nUsing near-cli notation for consistency:\n\n```shell\nnear call nft nft_transfer '{\n  \"receiver_id\": \"bob\",\n  \"token_id\": \"1\",\n  \"approval_id\": 3\n}' --accountId bazaar --depositYocto 1\n```\n\n### 6. Revoke one\n\nAlice revokes Market's approval for this token.\n\n##### Technical calls\n\nUsing near-cli:\n\n```shell\nnear call nft nft_revoke '{\n  \"account_id\": \"market\",\n  \"token_id\": \"1\"\n}' --accountId alice --depositYocto 1\n```\n\nNote that `market` will not get a cross-contract call in this case. The implementers of the Market app should implement [cron](https://en.wikipedia.org/wiki/Cron)-type functionality to intermittently check that Market still has the access they expect.\n\n### 7. Revoke all\n\nAlice revokes all approval for this token.\n\n##### Technical calls\n\nUsing near-cli:\n\n```shell\nnear call nft nft_revoke_all '{\n  \"token_id\": \"1\"\n}' --accountId alice --depositYocto 1\n```\n\nAgain, note that no previous approvers will get cross-contract calls in this case.\n\n## Reference-level explanation\n\nThe `Token` structure returned by `nft_token` must include an `approved_account_ids` field, which is a map of account IDs to approval IDs. 
Using TypeScript's [Record type](https://www.typescriptlang.org/docs/handbook/utility-types.html#recordkeystype) notation:\n\n```diff\ntype Token = {\n   token_id: string,\n   owner_id: string,\n+  approved_account_ids: Record<string, number>,\n }\n```\n\nExample token data:\n\n```json\n{\n  \"token_id\": \"1\",\n  \"owner_id\": \"alice.near\",\n  \"approved_account_ids\": {\n    \"bob.near\": 1,\n    \"carol.near\": 2\n  }\n}\n```\n\n### What is an \"approval ID\"?\n\nThis is a unique number given to each approval that allows well-intentioned marketplaces or other 3rd-party NFT resellers to avoid a race condition. The race condition occurs when:\n\n1. A token is listed in two marketplaces, which are both saved to the token as approved accounts.\n2. One marketplace sells the token, which clears the approved accounts.\n3. The new owner sells back to the original owner.\n4. The original owner approves the token for the second marketplace again to list at a new price. But for some reason the second marketplace still lists the token at the previous price and is unaware of the transfers happening.\n5. The second marketplace, operating from old information, attempts to again sell the token at the old price.\n\nNote that while this describes an honest mistake, the possibility of such a bug can also be taken advantage of by malicious parties via [front-running](https://users.encs.concordia.ca/~clark/papers/2019_wtsc_front.pdf).\n\nTo avoid this possibility, the NFT contract generates a unique approval ID each time it approves an account. 
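As a sketch of that requirement (class and method names are assumed for illustration, not normative), the per-token bookkeeping might look like:

```typescript
// Illustrative per-token approval state: every approval, including a
// re-approval of the same account, consumes a fresh, monotonically
// increasing approval ID, so any cached ID goes stale on re-approval.
class TokenApprovals {
  approvedAccountIds: Record<string, number> = {};
  private nextApprovalId = 1;

  approve(accountId: string): number {
    const approvalId = this.nextApprovalId++;
    this.approvedAccountIds[accountId] = approvalId;
    return approvalId;
  }
}

const token = new TokenApprovals();
token.approve('marketplace_1.near'); // first approval gets ID 1
token.approve('marketplace_2.near'); // next approval gets ID 2
token.approve('marketplace_2.near'); // re-approval gets ID 3; cached ID 2 is now stale
```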
Then when calling `nft_transfer` or `nft_transfer_call`, the approved account passes `approval_id` with this value to make sure the underlying state of the token hasn't changed from what the approved account expects.\n\nKeeping with the example above, say the initial approval of the second marketplace generated the following `approved_account_ids` data:\n\n```json\n{\n  \"token_id\": \"1\",\n  \"owner_id\": \"alice.near\",\n  \"approved_account_ids\": {\n    \"marketplace_1.near\": 1,\n    \"marketplace_2.near\": 2\n  }\n}\n```\n\nBut after the transfers and re-approval described above, the token might have `approved_account_ids` as:\n\n```json\n{\n  \"token_id\": \"1\",\n  \"owner_id\": \"alice.near\",\n  \"approved_account_ids\": {\n    \"marketplace_2.near\": 3\n  }\n}\n```\n\nThe marketplace then tries to call `nft_transfer`, passing outdated information:\n\n```bash\n# oops!\nnear call nft-contract.near nft_transfer '{ \"approval_id\": 2 }'\n```\n\n### Interface\n\nThe NFT contract must implement the following methods:\n\n```ts\n/******************/\n/* CHANGE METHODS */\n/******************/\n\n// Add an approved account for a specific token.\n//\n// Requirements\n// * Caller of the method must attach a deposit of at least 1 yoctoⓃ for\n//   security purposes\n// * Contract MAY require caller to attach larger deposit, to cover cost of\n//   storing approver data\n// * Contract MUST panic if called by someone other than token owner\n// * Contract MUST panic if addition would cause `nft_revoke_all` to exceed\n//   single-block gas limit. See below for more info.\n// * Contract MUST increment approval ID even if re-approving an account\n// * If successfully approved or if the account had already been approved, and\n//   if `msg` is present, contract MUST call `nft_on_approve` on `account_id`. 
See\n//   `nft_on_approve` description below for details.\n//\n// Arguments:\n// * `token_id`: the token for which to add an approval\n// * `account_id`: the account to add to `approved_account_ids`\n// * `msg`: optional string to be passed to `nft_on_approve`\n//\n// Returns void, if no `msg` given. Otherwise, returns promise call to\n// `nft_on_approve`, which can resolve with whatever it wants.\nfunction nft_approve(\n  token_id: TokenId,\n  account_id: string,\n  msg: string | null\n): void | Promise<any> {}\n\n// Revoke an approved account for a specific token.\n//\n// Requirements\n// * Caller of the method must attach a deposit of 1 yoctoⓃ for security\n//   purposes\n// * If contract requires >1yN deposit on `nft_approve`, contract\n//   MUST refund associated storage deposit when owner revokes approval\n// * Contract MUST panic if called by someone other than token owner\n//\n// Arguments:\n// * `token_id`: the token for which to revoke an approval\n// * `account_id`: the account to remove from `approved_account_ids`\nfunction nft_revoke(token_id: string, account_id: string) {}\n\n// Revoke all approved accounts for a specific token.\n//\n// Requirements\n// * Caller of the method must attach a deposit of 1 yoctoⓃ for security\n//   purposes\n// * If contract requires >1yN deposit on `nft_approve`, contract\n//   MUST refund all associated storage deposit when owner revokes approved_account_ids\n// * Contract MUST panic if called by someone other than token owner\n//\n// Arguments:\n// * `token_id`: the token with approved_account_ids to revoke\nfunction nft_revoke_all(token_id: string) {}\n\n/****************/\n/* VIEW METHODS */\n/****************/\n\n// Check if a token is approved for transfer by a given account, optionally\n// checking an approval_id\n//\n// Arguments:\n// * `token_id`: the token for which to check an approval\n// * `approved_account_id`: the account to check the existence of in `approved_account_ids`\n// * `approval_id`: an optional 
approval ID to check against current approval ID for given account\n//\n// Returns:\n// if `approval_id` given, `true` if `approved_account_id` is approved with given `approval_id`\n// otherwise, `true` if `approved_account_id` is in list of approved accounts\nfunction nft_is_approved(\n  token_id: string,\n  approved_account_id: string,\n  approval_id: number | null\n): boolean {}\n```\n\n### Why must `nft_approve` panic if `nft_revoke_all` would fail later?\n\nIn the description of `nft_approve` above, it states:\n\n> Contract MUST panic if addition would cause `nft_revoke_all` to exceed\n> single-block gas limit.\n\nWhat does this mean?\n\nFirst, it's useful to understand what we mean by \"single-block gas limit\". This refers to the [hard cap on gas per block at the protocol layer](https://docs.near.org/docs/concepts/gas#thinking-in-gas). This number will increase over time.\n\nRemoving data from a contract uses gas, so if an NFT had a large enough number of approved_account_ids, `nft_revoke_all` would fail, because calling it would exceed the maximum gas.\n\nContracts must prevent this by capping the number of approved_account_ids for a given token. However, it is up to contract authors to determine a sensible cap for their contract (and the single block gas limit at the time they deploy). Since contract implementations can vary, some implementations will be able to support a larger number of approved_account_ids than others, even with the same maximum gas per block.\n\nContract authors may choose to set a cap of something small and safe like 10 approved_account_ids, or they could dynamically calculate whether a new approval would break future calls to `nft_revoke_all`. 
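A guard of the simpler, fixed-cap kind can be sketched as follows. The constant and function names are assumptions for illustration, not required by the standard:

```typescript
// Illustrative check an NFT contract might run at the top of nft_approve.
// MAX_APPROVALS_PER_TOKEN is a contract-chosen constant, not mandated here.
const MAX_APPROVALS_PER_TOKEN = 10;

function assertCanAddApproval(
  approvedAccountIds: Record<string, number>,
  accountId: string
): void {
  // Re-approving an existing account does not grow the map, so it is always
  // safe; only genuinely new accounts count toward the cap.
  const isNewAccount = !(accountId in approvedAccountIds);
  const count = Object.keys(approvedAccountIds).length;
  if (isNewAccount && count >= MAX_APPROVALS_PER_TOKEN) {
    throw new Error('too many approvals: nft_revoke_all could exceed the gas limit');
  }
}

// A token already at the cap panics on the next distinct account:
const full: Record<string, number> = {};
for (let i = 0; i < MAX_APPROVALS_PER_TOKEN; i++) {
  full[`account_${i}.near`] = i + 1;
}
let panicked = false;
try {
  assertCanAddApproval(full, 'one_too_many.near');
} catch {
  panicked = true;
}
```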
But every contract MUST ensure that they never break the functionality of `nft_revoke_all`.\n\n### Approved Account Contract Interface\n\nIf a contract that gets approved to transfer NFTs wants to, it can implement `nft_on_approve` to update its own state when granted approval for a token:\n\n```ts\n// Respond to notification that contract has been granted approval for a token.\n//\n// Notes\n// * Contract knows the token contract ID from `predecessor_account_id`\n//\n// Arguments:\n// * `token_id`: the token to which this contract has been granted approval\n// * `owner_id`: the owner of the token\n// * `approval_id`: the approval ID stored by NFT contract for this approval.\n//   Expected to be a number within the 2^53 limit representable by JSON.\n// * `msg`: specifies information needed by the approved contract in order to\n//    handle the approval. Can indicate both a function to call and the\n//    parameters to pass to that function.\nfunction nft_on_approve(\n  token_id: TokenId,\n  owner_id: string,\n  approval_id: number,\n  msg: string\n) {}\n```\n\nNote that the NFT contract will fire-and-forget this call, ignoring any return values or errors generated. This means that even if the approved account does not have a contract or does not implement `nft_on_approve`, the approval will still work correctly from the point of view of the NFT contract.\n\nFurther note that there is no parallel `nft_on_revoke` when revoking either a single approval or when revoking all. This is partially because scheduling many `nft_on_revoke` calls when revoking all approved_account_ids could incur prohibitive [gas fees](https://docs.near.org/docs/concepts/gas). Apps and contracts which cache NFT approved_account_ids can therefore not rely on having up-to-date information, and should periodically refresh their caches. 
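A marketplace-side sketch of such a cache refresh; here `isApproved` stands in for a view call to `nft_is_approved`, and the cache shape is an assumption for illustration:

```typescript
// Illustrative cache refresh for an app that stores listings keyed by the
// approval it was granted. Listings whose approval_id is no longer current
// on the NFT contract are dropped.
type CachedListing = { tokenId: string; approvalId: number };

function pruneStaleListings(
  listings: CachedListing[],
  isApproved: (tokenId: string, approvalId: number) => boolean
): CachedListing[] {
  return listings.filter((l) => isApproved(l.tokenId, l.approvalId));
}

// Pretend the NFT contract currently stores approval_id 4 for token '1':
const onChain: Record<string, number> = { '1': 4 };
const fresh = pruneStaleListings(
  [
    { tokenId: '1', approvalId: 4 }, // still current
    { tokenId: '1', approvalId: 3 }, // revoked or re-approved: stale
  ],
  (tokenId, approvalId) => onChain[tokenId] === approvalId
);
```

Running such a sweep on a timer, or lazily before each sale, keeps the cache from serving the kind of stale data shown in scenario 5.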
Since this will be the necessary reality for dealing with `nft_revoke_all`, there is no reason to complicate `nft_revoke` with an `nft_on_revoke` call.\n\n### No incurred cost for core NFT behavior\n\nNFT contracts should be implemented in a way to avoid extra gas fees for serialization & deserialization of `approved_account_ids` for calls to `nft_*` methods other than `nft_token`. See `near-contract-standards` [implementation of `ft_metadata` using `LazyOption`](https://github.com/near/near-sdk-rs/blob/c2771af7fdfe01a4e8414046752ee16fb0d29d39/examples/fungible-token/ft/src/lib.rs#L71) as a reference example.\n\n## Reference Implementation\n\n[NFT Approval Receiver Interface](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/non_fungible_token/approval/approval_receiver.rs)\n\n[NFT Approval Implementation](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/non_fungible_token/approval/approval_impl.rs)\n\n## Errata\n\n- **2022-02-03**: updated `Token` struct field names. `id` was changed to `token_id` and `approvals` was changed to `approved_account_ids`. This is to be consistent with current implementations of the standard and the rust SDK docs.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n\n[ERC-721]: https://eips.ethereum.org/EIPS/eip-721\n[NFT Core]: https://github.com/near/NEPs/blob/master/neps/nep-0171.md\n[Storage Management]: https://github.com/near/NEPs/blob/master/neps/nep-0145.md\n[FT Core]: https://github.com/near/NEPs/blob/master/neps/nep-0141.md\n"
  },
  {
    "path": "neps/nep-0181.md",
    "content": "---\nNEP: 181\nTitle: Non Fungible Token Enumeration\nAuthor: Chad Ostrowski <@chadoh>, Thor <@thor314>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/discussions/181\nType: Standards Track\nCategory: Contract\nCreated: 03-Mar-2022\nRequires: 171\n---\n\n## Summary\n\nStandard interfaces for counting & fetching tokens, for an entire NFT contract or for a given owner.\n\n## Motivation\n\nApps such as marketplaces and wallets need a way to show all tokens owned by a given account and to show statistics about all tokens for a given contract. This extension provides a standard way to do so.\n\n## Rationale and alternatives\n\nWhile some NFT contracts may forego this extension to save [storage] costs, this requires apps to have custom off-chain indexing layers. This makes it harder for apps to integrate with such NFTs. Apps which integrate only with NFTs that use the Enumeration extension do not even need a server-side component at all, since they can retrieve all information they need directly from the blockchain.\n\nPrior art:\n\n- [ERC-721]'s enumeration extension\n\n## Specification\n\nThe contract must implement the following view methods:\n\n```ts\n// Returns the total supply of non-fungible tokens as a string representing an\n// unsigned 128-bit integer to avoid JSON number limit of 2^53; and \"0\" if there are no tokens.\nfunction nft_total_supply(): string {}\n\n// Get a list of all tokens\n//\n// Arguments:\n// * `from_index`: a string representing an unsigned 128-bit integer,\n//    representing the starting index of tokens to return\n// * `limit`: the maximum number of tokens to return\n//\n// Returns an array of Token objects, as described in Core standard, and an empty array if there are no tokens\nfunction nft_tokens(\n  from_index: string|null, // default: \"0\"\n  limit: number|null, // default: unlimited (could fail due to gas limit)\n): Token[] {}\n\n// Get number of tokens owned by a given account\n//\n// Arguments:\n// * 
`account_id`: a valid NEAR account\n//\n// Returns the number of non-fungible tokens owned by given `account_id` as\n// a string representing the value as an unsigned 128-bit integer to avoid JSON\n// number limit of 2^53; and \"0\" if there are no tokens.\nfunction nft_supply_for_owner(\n  account_id: string,\n): string {}\n\n// Get list of all tokens owned by a given account\n//\n// Arguments:\n// * `account_id`: a valid NEAR account\n// * `from_index`: a string representing an unsigned 128-bit integer,\n//    representing the starting index of tokens to return\n// * `limit`: the maximum number of tokens to return\n//\n// Returns a paginated list of all tokens owned by this account, and an empty array if there are no tokens\nfunction nft_tokens_for_owner(\n  account_id: string,\n  from_index: string|null, // default: 0\n  limit: number|null, // default: unlimited (could fail due to gas limit)\n): Token[] {}\n```\n\n## Notes\n\nAt the time of this writing, the specialized collections in the `near-sdk` Rust crate are iterable, but not all of them have implemented an `iter_from` solution. There may be efficiency gains for large collections and contract developers are encouraged to test their data structures with a large amount of entries.\n\n## Reference Implementation\n\n[Minimum Viable Interface](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/non_fungible_token/enumeration/mod.rs)\n\n[Implementation](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/non_fungible_token/enumeration/enumeration_impl.rs)\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n\n[ERC-721]: https://eips.ethereum.org/EIPS/eip-721\n[storage]: https://docs.near.org/concepts/storage/storage-staking\n"
  },
  {
    "path": "neps/nep-0199.md",
    "content": "---\nNEP: 199\nTitle: Non Fungible Token Royalties and Payouts\nAuthor: Thor <@thor314>, Matt Lockyer <@mattlockyer>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/discussions/199\nType: Standards Track\nCategory: Contract\nCreated: 03-Mar-2022\nRequires: 171, 178\n---\n\n## Summary\n\nAn interface allowing non-fungible token contracts to request that financial contracts pay-out multiple receivers, enabling flexible royalty implementations.\n\n## Motivation\n\nCurrently, NFTs on NEAR support the field `owner_id`, but lack flexibility for ownership and payout mechanics with more complexity, including but not limited to royalties. Financial contracts, such as marketplaces, auction houses, and NFT loan contracts would benefit from a standard interface on NFT producer contracts for querying whom to pay out, and how much to pay.\n\nTherefore, the core goal of this standard is to define a set of methods for financial contracts to call, without specifying how NFT contracts define the divide of payout mechanics, and a standard `Payout` response structure.\n\n## Specification\n\nThis Payout extension standard adds two methods to NFT contracts:\n\n- a view method: `nft_payout`, accepting a `token_id` and some `balance`, returning the `Payout` mapping for the given token.\n- a call method: `nft_transfer_payout`, accepting all the arguments of`nft_transfer`, plus a field for some `Balance` that calculates the `Payout`, calls `nft_transfer`, and returns the `Payout` mapping.\n\nFinancial contracts MUST validate several invariants on the returned\n`Payout`:\n\n1. The returned `Payout` MUST be no longer than the given maximum length (`max_len_payout` parameter) if provided. Payouts of excessive length can become prohibitively gas-expensive. Financial contracts can specify the maximum length of payout the contract is willing to respect with the `max_len_payout` field on `nft_transfer_payout`.\n2. 
The balances MUST add up to less than or equal to the `balance` argument in `nft_transfer_payout`. If the balance adds up to less than the `balance` argument, the financial contract MAY claim the remainder for itself.\n3. The sum of the balances MUST NOT overflow. This is technically identical to 2, but financial contracts should be prepared to handle this possibility.\n\nFinancial contracts MAY specify their own maximum length payout to respect.\nAt minimum, financial contracts MUST NOT set their maximum length below 10.\n\nIf the Payout contains any addresses that do not exist, the financial contract MAY keep those wasted payout funds.\n\nFinancial contracts MAY take a cut of the NFT sale price as commission, subtracting their cut from the total token sale price, and calling `nft_transfer_payout` with the remainder.\n\n## Example Flow\n\n```text\n ┌─────────────────────────────────────────────────┐\n │Token Owner approves marketplace for token_id \"0\"│\n ├─────────────────────────────────────────────────┘\n │  nft_approve(\"0\",market.near,<SaleArgs>)\n ▼\n ┌───────────────────────────────────────────────┐\n │Marketplace sells token to user.near for 10N   │\n ├───────────────────────────────────────────────┘\n │  nft_transfer_payout(user.near,\"0\",0,\"10000000\",5)\n ▼\n ┌───────────────────────────────────────────────┐\n │NFT contract returns Payout data               │\n ├───────────────────────────────────────────────┘\n │  Payout(<who_gets_paid_and_how_much>)\n ▼\n ┌───────────────────────────────────────────────┐\n │Market validates and pays out addresses        │\n └───────────────────────────────────────────────┘\n```\n\n## Reference Implementation\n\n```rust\n/// A mapping of NEAR accounts to the amount each should be paid out, in\n/// the event of a token-sale. The payout mapping MUST be shorter than the\n/// maximum length specified by the financial contract obtaining this\n/// payout data. 
Any mapping of length 10 or less MUST be accepted by\n/// financial contracts, so 10 is a safe upper limit.\n#[derive(Serialize, Deserialize)]\n#[serde(crate = \"near_sdk::serde\")]\npub struct Payout {\n  pub payout: HashMap<AccountId, U128>,\n}\n\npub trait Payouts {\n  /// Given a `token_id` and NEAR-denominated balance, return the `Payout`\n  /// struct for the given token. Panic if the length of the payout exceeds\n  /// `max_len_payout`.\n  fn nft_payout(&self, token_id: String, balance: U128, max_len_payout: Option<u32>) -> Payout;\n  /// Given a `token_id` and NEAR-denominated balance, transfer the token\n  /// and return the `Payout` struct for the given token. Panic if the\n  /// length of the payout exceeds `max_len_payout`.\n  #[payable]\n  fn nft_transfer_payout(\n    &mut self,\n    receiver_id: AccountId,\n    token_id: String,\n    approval_id: Option<u64>,\n    memo: Option<String>,\n    balance: U128,\n    max_len_payout: Option<u32>,\n  ) -> Payout {\n    assert_one_yocto();\n    let payout = self.nft_payout(token_id.clone(), balance, max_len_payout);\n    self.nft_transfer(receiver_id, token_id, approval_id, memo);\n    payout\n  }\n}\n```\n\n## Fallback on error\n\nIn the case where either the `max_len_payout` causes a panic, or a malformed `Payout` is returned, the caller contract should transfer all funds to the original token owner selling the token.\n\n## Potential pitfalls\n\nThe payout must include all accounts that should receive funds. Thus it is a mistake to assume that the original token owner will receive funds if they are not included in the payout.\n\nNFT and financial contracts vary in implementation. This means that some extra CPU cycles may occur in one NFT contract and not another. Furthermore, a financial contract may accept fungible tokens, native NEAR, or another entity as payment. Transferring native NEAR tokens is less expensive in gas than sending fungible tokens. 
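The invariants listed under Specification can be sketched from the financial contract's side. This is an illustrative check only; the `bigint` yocto amounts and the function name are assumptions:

```typescript
// Illustrative validation a marketplace might run on a returned Payout
// before distributing funds. Amounts are yocto values held as bigint, so
// summation cannot silently overflow (invariant 3).
type PayoutMap = Record<string, bigint>;

function validatePayout(payout: PayoutMap, balance: bigint, maxLen: number): void {
  const entries = Object.entries(payout);
  // Invariant 1: respect the maximum payout length.
  if (entries.length > maxLen) {
    throw new Error('payout longer than max_len_payout');
  }
  let total = 0n;
  for (const [, amount] of entries) {
    total += amount;
  }
  // Invariant 2: payouts must sum to no more than the sale balance.
  if (total > balance) {
    throw new Error('payout exceeds sale balance');
  }
}

// A 10 NEAR sale split 9.5 to the seller and 0.5 royalty passes validation:
validatePayout(
  {
    'alice.near': 9_500_000_000_000_000_000_000_000n,
    'artist.near': 500_000_000_000_000_000_000_000n,
  },
  10_000_000_000_000_000_000_000_000n,
  10
);
```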
For these reasons, the maximum length of payouts may vary according to the customization of the smart contracts.\n\n## Drawbacks\n\nThere is an introduction of trust that the contract calling `nft_transfer_payout` will indeed pay out to all intended parties. However, since the calling contract will typically be something like a marketplace used by end users, malicious actors might be found out more easily and might have less incentive.\nThere is an assumption that NFT contracts will understand the limits of gas and not allow for a number of payouts that cannot be achieved.\n\n## Future possibilities\n\nIn the future, the NFT contract itself may be able to place an NFT transfer in a state that is \"pending transfer\" until all payouts have been awarded. This would keep all the information inside the NFT and remove trust.\n\n## Errata\n\n- Version `2.1.0` adds a memo parameter to `nft_transfer_payout`, which previously forced implementers of `2.0.0` to pass `None` to the inner `nft_transfer`. Also refactors `max_len_payout` to be an option type.\n- Version `2.0.0` contains the intended `approval_id` of `u64` instead of the stringified `U64` version. This was an oversight, but since the standard was live for a few months before noticing, the team thought it best to bump the major version.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0245/ApprovalManagement.md",
    "content": "# Multi Token Standard Approval Management\n\n:::caution\nThis is part of the proposed spec [NEP-245](https://github.com/near/NEPs/blob/master/neps/nep-0245.md) and is subject to change.\n:::\n\n\nVersion `1.0.0`\n\n## Summary\n\nA system for allowing a set of users or contracts to transfer specific tokens on behalf of an owner. Similar to approval management systems in standards like [ERC-721] and [ERC-1155].\n\n  [ERC-721]: https://eips.ethereum.org/EIPS/eip-721\n  [ERC-1155]: https://eips.ethereum.org/EIPS/eip-1155\n\n## Motivation\n\nPeople familiar with [ERC-721] may expect to need an approval management system for basic transfers, where a simple transfer from Alice to Bob requires that Alice first _approve_ Bob to spend one of her tokens, after which Bob can call `transfer_from` to actually transfer the token to himself.\n\nNEAR's [core Multi Token standard](https://github.com/near/NEPs/blob/master/neps/nep-0245.md) includes good support for safe atomic transfers without such complexity. It even provides \"transfer and call\" functionality (`mt_transfer_call`) which allows  specific tokens to be \"attached\" to a call to a separate contract. For many token workflows, these options may circumvent the need for a full-blown Approval Management system.\n\nHowever, some Multi Token developers, marketplaces, dApps, or artists may require greater control. This standard provides a uniform interface allowing token owners to approve other NEAR accounts, whether individuals or contracts, to transfer specific tokens on the owner's behalf.\n\nPrior art:\n\n- Ethereum's [ERC-721]\n- Ethereum's [ERC-1155]\n\n## Example Scenarios\n\nLet's consider some examples. 
Our cast of characters & apps:\n\n- Alice: has account `alice` with no contract deployed to it\n- Bob: has account `bob` with no contract deployed to it\n- MT: a contract with account `mt`, implementing only the [Multi Token Standard](https://github.com/near/NEPs/blob/master/neps/nep-0245.md) with this Approval Management extension\n- Market: a contract with account `market` which sells tokens from `mt` as well as other token contracts\n- Bazaar: similar to Market, but implemented differently (spoiler alert: has no `mt_on_approve` function!), has account `bazaar`\n\nAlice and Bob are already [registered](https://github.com/near/NEPs/blob/master/neps/nep-0145.md) with MT, Market, and Bazaar, and Alice owns a token on the MT contract with ID=`\"1\"` and a fungible-style token with ID=`\"2\"` and amount `\"100\"`.\n\nLet's examine the technical calls through the following scenarios:\n\n1. [Simple approval](#1-simple-approval): Alice approves Bob to transfer her token.\n2. [Approval with cross-contract call (XCC)](#2-approval-with-cross-contract-call): Alice approves Market to transfer one of her tokens and passes `msg` so that MT will call `mt_on_approve` on Market's contract.\n3. [Approval with XCC, edge case](#3-approval-with-cross-contract-call-edge-case): Alice approves Bazaar and passes `msg` again, but what's this? Bazaar doesn't implement `mt_on_approve`, so Alice sees an error in the transaction result. Not to worry, though, she checks `mt_is_approved` and sees that she did successfully approve Bazaar, despite the error.\n4. [Approval IDs](#4-approval-ids): Bob buys Alice's token via Market.\n5. [Approval IDs, edge case](#5-approval-ids-edge-case): Bob transfers the same token back to Alice, Alice re-approves Market & Bazaar. Bazaar has an outdated cache. Bob tries to buy from Bazaar at the old price.\n6. [Revoke one](#6-revoke-one): Alice revokes Market's approval for this token.\n7. 
[Revoke all](#7-revoke-all): Alice revokes all approval for these tokens.\n\n### 1. Simple Approval\n\nAlice approves Bob to transfer her tokens.\n\n#### High-level explanation\n\n1. Alice approves Bob\n2. Alice queries the token to verify\n\n#### Technical calls\n\n1. Alice calls `mt::mt_approve({ \"token_ids\": [\"1\",\"2\"], \"amounts\": [\"1\",\"100\"], \"account_id\": \"bob\" })`. She attaches 1 yoctoⓃ (.000000000000000000000001Ⓝ). Using [NEAR CLI](https://docs.near.org/tools/near-cli) to make this call, the command would be:\n\n    ```bash\n    near call mt mt_approve \\\n        '{ \"token_ids\": [\"1\",\"2\"], \"amounts\": [\"1\",\"100\"], \"account_id\": \"bob\" }' \\\n        --accountId alice --amount .000000000000000000000001\n    ```\n\n    The response:\n\n    ```bash\n    ''\n    ```\n\n2. Alice calls view method `mt_is_approved`:\n\n    ```bash\n    near view mt mt_is_approved \\\n        '{ \"token_ids\": [\"1\", \"2\"], \"amounts\": [\"1\",\"100\"], \"approved_account_id\": \"bob\" }'\n    ```\n\n    The response:\n\n    ```bash\n    true\n    ```\n\n### 2. Approval with cross-contract call\n\nAlice approves Market to transfer some of her tokens and passes `msg` so that MT will call `mt_on_approve` on Market's contract. She probably does this via Market's frontend app, which would know how to construct `msg` in a useful way.\n\n#### High-level explanation\n\n1. Alice calls `mt_approve` to approve `market` to transfer her tokens, and passes a `msg`\n2. Since `msg` is included, `mt` will schedule a cross-contract call to `market`\n3. Market can do whatever it wants with this info, such as listing the token for sale at a given price. The result of this operation is returned as the promise outcome to the original `mt_approve` call.\n\n#### Technical calls\n\n1. 
Using near-cli:\n\n    ```bash\n    near call mt mt_approve '{\n        \"token_ids\": [\"1\",\"2\"],\n        \"amounts\": [\"1\", \"100\"],\n        \"account_id\": \"market\",\n        \"msg\": \"{\\\"action\\\": \\\"list\\\", \\\"price\\\": [\\\"100\\\",\\\"50\\\"], \\\"token\\\": \\\"nDAI\\\" }\"\n    }' --accountId alice --amount .000000000000000000000001\n    ```\n\n    At this point, near-cli will hang until the cross-contract call chain fully resolves, which would also be true if Alice used a Market frontend using [near-api](https://docs.near.org/tools/near-api). Alice's part is done, though. The rest happens behind the scenes.\n\n2. `mt` schedules a call to `mt_on_approve` on `market`. Using near-cli notation for easy cross-reference with the above, this would look like:\n\n    ```bash\n    near call market mt_on_approve '{\n        \"token_ids\": [\"1\",\"2\"],\n        \"amounts\": [\"1\",\"100\"],\n        \"owner_id\": \"alice\",\n        \"approval_ids\": [4, 5],\n        \"msg\": \"{\\\"action\\\": \\\"list\\\", \\\"price\\\": [\\\"100\\\",\\\"50\\\"], \\\"token\\\": \\\"nDAI\\\" }\"\n    }' --accountId mt\n    ```\n\n3. `market` now knows that it can sell Alice's tokens for 100 [nDAI](https://explorer.mainnet.near.org/accounts/6b175474e89094c44da98b954eedeac495271d0f.factory.bridge.near) and 50 [nDAI](https://explorer.mainnet.near.org/accounts/6b175474e89094c44da98b954eedeac495271d0f.factory.bridge.near), and that when it transfers them to a buyer using `mt_batch_transfer`, it can pass along the given `approval_ids` to ensure that Alice hasn't changed her mind. It can schedule any further cross-contract calls it wants, and if it returns these promises correctly, Alice's initial near-cli call will resolve with the outcome from the final step in the chain. If Alice actually made this call from a Market frontend, the frontend can use this return value for something useful.\n\n### 3. 
Approval with cross-contract call, edge case\n\nAlice approves Bazaar and passes `msg` again. Maybe she actually does this via near-cli, rather than using Bazaar's frontend. But what's this? Bazaar doesn't implement `mt_on_approve`, so Alice sees an error in the transaction result.\n\nNot to worry, though, she checks `mt_is_approved` and sees that she did successfully approve Bazaar, despite the error. She will have to find a new way to list her token for sale in Bazaar, rather than using the same `msg` shortcut that worked for Market.\n\n#### High-level explanation\n\n1. Alice calls `mt_approve` to approve `bazaar` to transfer her token, and passes a `msg`.\n2. Since `msg` is included, `mt` will schedule a cross-contract call to `bazaar`.\n3. Bazaar doesn't implement `mt_on_approve`, so this call results in an error. The approval still worked, but Alice sees an error in her near-cli output.\n4. Alice checks if `bazaar` is approved, and sees that it is, despite the error.\n\n#### Technical calls\n\n1. Using near-cli:\n\n    ```bash\n    near call mt mt_approve '{\n        \"token_ids\": [\"1\"],\n        \"amounts\": [\"1000\"],\n        \"account_id\": \"bazaar\",\n        \"msg\": \"{\\\"action\\\": \\\"list\\\", \\\"price\\\": \\\"100\\\", \\\"token\\\": \\\"nDAI\\\" }\"\n    }' --accountId alice --amount .000000000000000000000001\n    ```\n\n2. `mt` schedules a call to `mt_on_approve` on `bazaar`. Using near-cli notation for easy cross-reference with the above, this would look like:\n\n    ```bash\n    near call bazaar mt_on_approve '{\n        \"token_ids\": [\"1\"],\n        \"amounts\": [\"1000\"],\n        \"owner_id\": \"alice\",\n        \"approval_ids\": [3],\n        \"msg\": \"{\\\"action\\\": \\\"list\\\", \\\"price\\\": \\\"100\\\", \\\"token\\\": \\\"nDAI\\\" }\"\n    }' --accountId mt\n    ```\n\n3. 💥 `bazaar` doesn't implement this method, so the call results in an error. Alice sees this error in the output from near-cli.\n\n4. 
Alice checks if the approval itself worked, despite the error on the cross-contract call:\n\n    ```bash\n    near view mt mt_is_approved \\\n        '{ \"token_ids\": [\"1\"], \"amounts\": [\"1000\"], \"approved_account_id\": \"bazaar\" }'\n    ```\n\n    The response:\n\n    ```bash\n    true\n    ```\n\n### 4. Approval IDs\n\nBob buys Alice's token via Market. Bob probably does this via Market's frontend, which will probably initiate the transfer via a call to `ft_transfer_call` on the nDAI contract to transfer 100 nDAI to `market`. Like the MT standard's \"transfer and call\" function, [Fungible Token](https://github.com/near/NEPs/blob/master/neps/nep-0141.md)'s `ft_transfer_call` takes a `msg` which `market` can use to pass along information it will need to pay Alice and actually transfer the MT. The actual transfer of the MT is the only part we care about here.\n\n#### High-level explanation\n\n1. Bob signs some transaction which results in the `market` contract calling `mt_transfer` on the `mt` contract, as described above. To be trustworthy and pass security audits, `market` needs to pass along `approval_id` so that it knows it has up-to-date information.\n\n#### Technical calls\n\nUsing near-cli notation for consistency:\n\n```bash\nnear call mt mt_transfer '{\n    \"receiver_id\": \"bob\",\n    \"token_id\": \"1\",\n    \"amount\": \"1\",\n    \"approval_id\": 2\n}' --accountId market --amount .000000000000000000000001\n```\n\n### 5. Approval IDs, edge case\n\nBob transfers the same token back to Alice, and Alice re-approves Market & Bazaar, listing her token at a higher price than before. Bazaar is somehow unaware of these changes, and still stores `approval_id: 3` internally along with Alice's old price. Bob tries to buy from Bazaar at the old price. Like the previous example, this probably starts with a call to a different contract, which eventually results in `bazaar` calling `mt_transfer` on the `mt` contract. 
Let's consider a possible scenario from that point.\n\n#### High-level explanation\n\nBob signs some transaction which results in the `bazaar` contract calling `mt_transfer` on the `mt` contract, as described above. To be trustworthy and pass security audits, `bazaar` needs to pass along `approval_id` so that it knows it has up-to-date information. It does not have up-to-date information, so the call fails. If the initial `mt_transfer` call is part of a call chain originating from a call to `ft_transfer_call` on a fungible token, Bob's payment will be refunded and no assets will change hands.\n\n#### Technical calls\n\nUsing near-cli notation for consistency:\n\n```bash\nnear call mt mt_transfer '{\n    \"receiver_id\": \"bob\",\n    \"token_id\": \"1\",\n    \"amount\": \"1\",\n    \"approval_id\": 3\n}' --accountId bazaar --amount .000000000000000000000001\n```\n\n### 6. Revoke one\n\nAlice revokes Market's approval for this token.\n\n#### Technical calls\n\nUsing near-cli:\n\n```bash\nnear call mt mt_revoke '{\n    \"account_id\": \"market\",\n    \"token_ids\": [\"1\"]\n}' --accountId alice --amount .000000000000000000000001\n```\n\nNote that `market` will not get a cross-contract call in this case. The implementors of the Market app should implement [cron](https://en.wikipedia.org/wiki/Cron)-type functionality to intermittently check that Market still has the access they expect.\n\n### 7. 
Revoke all\n\nAlice revokes all approval for these tokens.\n\n#### Technical calls\n\nUsing near-cli:\n\n```bash\nnear call mt mt_revoke_all '{\n  \"token_ids\": [\"1\", \"2\"]\n}' --accountId alice --amount .000000000000000000000001\n```\n\nAgain, note that no previous approvers will get cross-contract calls in this case.\n\n## Reference-level explanation\n\nThe `TokenApproval` structure returned by `mt_token_approvals` contains an `approval_owner_id` field, the account that granted the approvals, and an `approved_account_ids` field, a map of approved account IDs to `Approval` records. The `amount` field in each `Approval`, though wrapped in quotes and treated like a string, is stored as a 128-bit unsigned integer. Using TypeScript's [Record type](https://www.typescriptlang.org/docs/handbook/utility-types.html#recordkeystype) notation:\n\n```diff\n+ type Approval = {\n+   amount: string\n+   approval_id: number\n+ }\n+\n+ type TokenApproval = {\n+   approval_owner_id: string,\n+   approved_account_ids: Record<string, Approval>,\n+ };\n```\n\nExample token approval data:\n\n```json\n[{\n  \"approval_owner_id\": \"alice.near\",\n  \"approved_account_ids\": {\n    \"bob.near\": {\n      \"amount\": \"100\",\n      \"approval_id\": 1\n    },\n    \"carol.near\": {\n      \"amount\": \"2\",\n      \"approval_id\": 2\n    }\n  }\n}]\n```\n\n### What is an \"approval ID\"?\n\nThis is a unique number given to each approval that allows well-intentioned marketplaces or other 3rd-party MT resellers to avoid a race condition. The race condition occurs when:\n\n1. A token is listed in two marketplaces, which are both saved to the token as approved accounts.\n2. One marketplace sells the token, which clears the approved accounts.\n3. The new owner sells back to the original owner.\n4. The original owner approves the token for the second marketplace again to list at a new price. 
But for some reason the second marketplace still lists the token at the previous price and is unaware of the transfers happening.\n5. The second marketplace, operating from old information, attempts to again sell the token at the old price.\n\nNote that while this describes an honest mistake, the possibility of such a bug can also be taken advantage of by malicious parties via [front-running](https://users.encs.concordia.ca/~clark/papers/2019_wtsc_front.pdf).\n\nTo avoid this possibility, the MT contract generates a unique approval ID each time it approves an account. Then when calling `mt_transfer`, `mt_transfer_call`, `mt_batch_transfer`, or `mt_batch_transfer_call`, the approved account passes `approval_id` or `approval_ids` with this value to make sure the underlying state of the token(s) hasn't changed from what the approved account expects.\n\nKeeping with the example above, say the initial approval of the second marketplace generated the following `approved_account_ids` data:\n\n```json\n{\n  \"approval_owner_id\": \"alice.near\",\n  \"approved_account_ids\": {\n    \"marketplace_1.near\": {\n      \"approval_id\": 1,\n      \"amount\": \"100\"\n    },\n    \"marketplace_2.near\": {\n      \"approval_id\": 2,\n      \"amount\": \"50\"\n    }\n  }\n}\n```\n\nBut after the transfers and re-approval described above, the token might have `approved_account_ids` as:\n\n```json\n{\n  \"approval_owner_id\": \"alice.near\",\n  \"approved_account_ids\": {\n    \"marketplace_2.near\": {\n      \"approval_id\": 3,\n      \"amount\": \"50\"\n    }\n  }\n}\n```\n\nThe marketplace then tries to call `mt_transfer`, passing outdated information:\n\n```bash\n# oops!\nnear call mt-contract.near mt_transfer '{ \"receiver_id\": \"someacct\", \"token_id\": \"1\", \"amount\": \"50\", \"approval_id\": 2 }'\n```\n\n### Interface\n\nThe MT contract must implement the following methods:\n\n```ts\n/******************/\n/* CHANGE METHODS */\n/******************/\n\n// Add an approved account for a specific set of 
tokens.\n//\n// Requirements\n// * Caller of the method must attach a deposit of at least 1 yoctoⓃ for\n//   security purposes\n// * Contract MAY require caller to attach larger deposit, to cover cost of\n//   storing approver data\n// * Contract MUST panic if called by someone other than token owner\n// * Contract MUST panic if addition would cause `mt_revoke_all` to exceed\n//   single-block gas limit. See below for more info.\n// * Contract MUST increment approval ID even if re-approving an account\n// * If successfully approved, or if the account had already been approved, and if `msg` is\n//   present, contract MUST call `mt_on_approve` on `account_id`. See\n//   `mt_on_approve` description below for details.\n//\n// Arguments:\n// * `token_ids`: the token IDs for which to add an approval\n// * `account_id`: the account to add to `approved_account_ids`\n// * `amounts`: the number of tokens to approve for transfer, passed as an\n//    array of strings, although the numbers are stored as 128-bit unsigned integers\n// * `msg`: optional string to be passed to `mt_on_approve`\n//\n// Returns void, if no `msg` given. 
Otherwise, returns promise call to\n// `mt_on_approve`, which can resolve with whatever it wants.\nfunction mt_approve(\n  token_ids: [string],\n  amounts: [string],\n  account_id: string,\n  msg: string|null,\n): void|Promise<any> {}\n\n// Revoke an approved account for a specific set of tokens.\n//\n// Requirements\n// * Caller of the method must attach a deposit of 1 yoctoⓃ for security\n//   purposes\n// * If contract requires >1yN deposit on `mt_approve`, contract\n//   MUST refund associated storage deposit when owner revokes approval\n// * Contract MUST panic if called by someone other than token owner\n//\n// Arguments:\n// * `token_ids`: the tokens for which to revoke approved_account_ids\n// * `account_id`: the account to remove from `approved_account_ids`\nfunction mt_revoke(\n  token_ids: [string],\n  account_id: string\n) {}\n\n// Revoke all approved accounts for a specific set of tokens.\n//\n// Requirements\n// * Caller of the method must attach a deposit of 1 yoctoⓃ for security\n//   purposes\n// * If contract requires >1yN deposit on `mt_approve`, contract\n//   MUST refund all associated storage deposit when owner revokes approved_account_ids\n// * Contract MUST panic if called by someone other than token owner\n//\n// Arguments:\n// * `token_ids`: the token IDs with approved_account_ids to revoke\nfunction mt_revoke_all(token_ids: [string]) {}\n\n/****************/\n/* VIEW METHODS */\n/****************/\n\n// Check if tokens are approved for transfer by a given account, optionally\n// checking an approval_id\n//\n// Requirements:\n// * Contract MUST panic if `approval_ids` is not null and the length of\n//   `approval_ids` is not equal to the length of `token_ids`\n//\n// Arguments:\n// * `token_ids`: the tokens for which to check an approval\n// * `approved_account_id`: the account to check the existence of in `approved_account_ids`\n// * `amounts`: specify the positionally corresponding amount for the `token_id`\n//    that at least must be approved. 
The number of tokens, passed as an array of\n//    strings, although the numbers are stored as 128-bit unsigned integers.\n// * `approval_ids`: an optional array of approval IDs to check against\n//    current approval IDs for given account and `token_ids`.\n//\n// Returns:\n// * if `approval_ids` is given: `true` if `approved_account_id` is approved with the\n//   given approval IDs and at least the specified amounts are approved\n// * otherwise: `true` if `approved_account_id` is in the list of approved accounts\n//   and at least the specified amounts are approved\n// * `false` in all other cases\nfunction mt_is_approved(\n  token_ids: [string],\n  approved_account_id: string,\n  amounts: [string],\n  approval_ids: number[]|null\n): boolean {}\n\n// Get the list of approvals for a given token_id and account_id\n//\n// Arguments:\n// * `token_id`: the token for which to check an approval\n// * `account_id`: the account to retrieve approvals for\n//\n// Returns a TokenApproval object, as described in Approval Management standard\nfunction mt_token_approval(\n  token_id: string,\n  account_id: string,\n): TokenApproval {}\n\n// Get a list of all approvals for a given token_id\n//\n// Arguments:\n// * `token_id`: the token for which to retrieve approvals\n// * `from_index`: a string representing an unsigned 128-bit integer,\n//    representing the starting index of approvals to return\n// * `limit`: the maximum number of approvals to return\n//\n// Returns an array of TokenApproval objects, as described in Approval Management standard, and an empty array if there are no approvals\nfunction mt_token_approvals(\n  token_id: string,\n  from_index: string|null, // default: \"0\"\n  limit: number|null,\n): TokenApproval[] {}\n```\n\n### Why must `mt_approve` panic if `mt_revoke_all` would fail later?\n\nIn the description of `mt_approve` above, it states:\n\n    Contract MUST panic if addition would cause `mt_revoke_all` to exceed\n    single-block gas 
limit.\n\nWhat does this mean?\n\nFirst, it's useful to understand what we mean by \"single-block gas limit\". This refers to the [hard cap on gas per block at the protocol layer](https://docs.near.org/docs/concepts/gas#thinking-in-gas). This number will increase over time.\n\nRemoving data from a contract uses gas, so if an MT contract had a large enough number of approvals, `mt_revoke_all` would fail, because calling it would exceed the maximum gas.\n\nContracts must prevent this by capping the number of approvals for a given token. However, it is up to contract authors to determine a sensible cap for their contract (and the single-block gas limit at the time they deploy). Since contract implementations can vary, some implementations will be able to support a larger number of approvals than others, even with the same maximum gas per block.\n\nContract authors may choose to set a cap of something small and safe like 10 approvals, or they could dynamically calculate whether a new approval would break future calls to `mt_revoke_all`. But every contract MUST ensure that it never breaks the functionality of `mt_revoke_all`.\n\n### Approved Account Contract Interface\n\nIf a contract that gets approved to transfer MTs wants to, it can implement `mt_on_approve` to update its own state when granted approval for a token:\n\n```ts\n// Respond to notification that contract has been granted approval for a token.\n//\n// Notes\n// * Contract knows the token contract ID from `predecessor_account_id`\n//\n// Arguments:\n// * `token_ids`: the token_ids to which this contract has been granted approval\n// * `amounts`: the positionally corresponding amount for the token_id\n//    that must be approved. 
The number of tokens, passed as an array of\n//    strings, although the numbers are stored as 128-bit unsigned integers.\n// * `owner_id`: the owner of the token\n// * `approval_ids`: the approval IDs stored by the MT contract for these approvals.\n//    Each is expected to be a number within the 2^53 limit representable by JSON.\n// * `msg`: specifies information needed by the approved contract in order to\n//    handle the approval. Can indicate both a function to call and the\n//    parameters to pass to that function.\nfunction mt_on_approve(\n  token_ids: [TokenId],\n  amounts: [string],\n  owner_id: string,\n  approval_ids: [number],\n  msg: string,\n) {}\n```\n\nNote that the MT contract will fire-and-forget this call, ignoring any return values or errors generated. This means that even if the approved account does not have a contract or does not implement `mt_on_approve`, the approval will still work correctly from the point of view of the MT contract.\n\nFurther note that there is no parallel `mt_on_revoke` when revoking either a single approval or when revoking all. This is partially because scheduling many `mt_on_revoke` calls when revoking all approvals could incur prohibitive [gas fees](https://docs.near.org/docs/concepts/gas). Apps and contracts which cache MT approvals can therefore not rely on having up-to-date information, and should periodically refresh their caches. Since this will be the necessary reality for dealing with `mt_revoke_all`, there is no reason to complicate `mt_revoke` with an `mt_on_revoke` call.\n\n### No incurred cost for core MT behavior\n\nMT contracts should be implemented in a way that avoids extra gas fees for serialization & deserialization of `approved_account_ids` for calls to `mt_*` methods other than `mt_tokens`. 
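The separation this implies can be sketched in plain TypeScript. This is illustrative only — `MtState` and its `balances`/`approvals` containers are hypothetical names, and an on-chain contract would use the NEAR SDK's lazily deserialized collections rather than in-memory maps — but it shows the key idea: approval records live apart from core balances, so a balance read never loads approval data.

```typescript
// Illustrative sketch only: hypothetical names, not part of the standard or the NEAR SDK.
type Approval = { amount: string; approval_id: number };

class MtState {
  // Hot path: consulted by every core mt_* method.
  private balances = new Map<string, Map<string, string>>(); // token_id -> owner -> amount

  // Cold path: only approval-related methods ever load this.
  private approvals = new Map<string, Map<string, Approval>>(); // token_id -> account -> Approval

  setBalance(tokenId: string, owner: string, amount: string): void {
    if (!this.balances.has(tokenId)) this.balances.set(tokenId, new Map());
    this.balances.get(tokenId)!.set(owner, amount);
  }

  // Core query: never touches `approvals`, so no approval data is deserialized.
  balanceOf(tokenId: string, owner: string): string {
    return this.balances.get(tokenId)?.get(owner) ?? "0";
  }

  approve(tokenId: string, accountId: string, amount: string, approvalId: number): void {
    if (!this.approvals.has(tokenId)) this.approvals.set(tokenId, new Map());
    this.approvals.get(tokenId)!.set(accountId, { amount, approval_id: approvalId });
  }

  isApproved(tokenId: string, accountId: string, approvalId?: number): boolean {
    const approval = this.approvals.get(tokenId)?.get(accountId);
    if (!approval) return false;
    return approvalId === undefined || approval.approval_id === approvalId;
  }
}
```

With this layout, a stale approval ID (as in the marketplace race above) is rejected by `isApproved` while balance reads remain untouched by approval bookkeeping.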
See `near-contract-standards` [implementation of `ft_metadata` using `LazyOption`](https://github.com/near/near-sdk-rs/blob/c2771af7fdfe01a4e8414046752ee16fb0d29d39/examples/fungible-token/ft/src/lib.rs#L71) as a reference example.\n"
  },
  {
    "path": "neps/nep-0245/Enumeration.md",
    "content": "# Multi Token Enumeration\n\n:::caution\nThis is part of the proposed spec [NEP-245](https://github.com/near/NEPs/blob/master/neps/nep-0245.md) and is subject to change.\n:::\n\nVersion `1.0.0`\n\n## Summary\n\nStandard interfaces for counting & fetching tokens, for an entire Multi Token contract or for a given owner.\n\n## Motivation\n\nApps such as marketplaces and wallets need a way to show all tokens owned by a given account and to show statistics about all tokens for a given contract. This extension provides a standard way to do so.\n\nWhile some Multi Token contracts may forego this extension to save [storage] costs, this requires apps to have custom off-chain indexing layers. This makes it harder for apps to integrate with such Multi Token contracts. Apps which integrate only with Multi Token Standards that use the Enumeration extension do not even need a server-side component at all, since they can retrieve all information they need directly from the blockchain.\n\nPrior art:\n\n- [ERC-721]'s enumeration extension\n- [Non Fungible Token Standard's](https://github.com/near/NEPs/blob/master/neps/nep-0181.md) enumeration extension\n\n## Interface\n\nThe contract must implement the following view methods. If the metadata extension is implemented, each returned token includes an optional metadata field holding the base token metadata ID and the `token_metadata` object, which represents the token-specific metadata.\n\n```ts\n// Get a list of all tokens\n//\n// Arguments:\n// * `from_index`: a string representing an unsigned 128-bit integer,\n//    representing the starting index of tokens to return\n// * `limit`: the maximum number of tokens to return\n//\n// Returns an array of `Token` objects, as described in the Core standard,\n// and an empty array if there are no tokens\nfunction mt_tokens(\n  from_index: string|null, // default: \"0\"\n  limit: number|null, // default: unlimited (could fail due to gas limit)\n): Token[] {}\n\n// Get a list of all tokens owned by a given account\n//\n// Arguments:\n// * `account_id`: a valid NEAR account\n// * `from_index`: a string representing an unsigned 128-bit integer,\n//    representing the starting index of tokens to return\n// * `limit`: the maximum number of tokens to return\n//\n// Returns a paginated list of all tokens owned by this account, and an empty array if there are no tokens\nfunction mt_tokens_for_owner(\n  account_id: string,\n  from_index: string|null, // default: \"0\"\n  limit: number|null, // default: unlimited (could fail due to gas limit)\n): Token[] {}\n```\n\nThe contract must implement the following view methods if using metadata extension:\n\n```ts\n// Get a list of all base metadata for the contract\n//\n// Arguments:\n// * `from_index`: a string representing an unsigned 128-bit integer,\n//    representing the starting index of tokens to return\n// * `limit`: the maximum number of tokens to return\n//\n// Returns an array of `MTBaseTokenMetadata` objects, as described in the Metadata standard, and an empty array if there are no tokens\nfunction mt_tokens_base_metadata_all(\n  from_index: string | null,\n  limit: number | null\n): MTBaseTokenMetadata[] {}\n```\n\n## Notes\n\nAt the time of this writing, the specialized collections in the `near-sdk` Rust crate are iterable, but not all of them have 
implemented an `iter_from` solution. There may be efficiency gains for large collections, and contract developers are encouraged to test their data structures with a large number of entries.\n\n  [ERC-721]: https://eips.ethereum.org/EIPS/eip-721\n  [storage]: https://docs.near.org/concepts/storage/storage-staking\n"
  },
  {
    "path": "neps/nep-0245/Events.md",
    "content": "# Multi Token Event\n\n:::caution\nThis is part of the proposed spec [NEP-245](https://github.com/near/NEPs/blob/master/neps/nep-0245.md) and is subject to change.\n:::\n\nVersion `1.0.0`\n\n## Summary\n\nStandard interfaces for Multi Token Contract actions.\nExtension of [NEP-297](https://github.com/near/NEPs/blob/master/neps/nep-0297.md)\n\n## Motivation\n\nNEAR and third-party applications need to track `mint`, `burn`, and `transfer` events for all MT-driven apps consistently. This extension addresses that.\n\nNote that applications, including NEAR Wallet, could require implementing additional methods to display tokens correctly such as [`mt_metadata`](Metadata.md) and [`mt_tokens_for_owner`](Enumeration.md).\n\n## Interface\n\nMulti Token Events MUST have `standard` set to `\"nep245\"`, `version` set to `\"1.0.0\"`, `event` set to one of `mt_mint`, `mt_burn`, or `mt_transfer`, and `data` of the corresponding type: `MtMintLog[] | MtBurnLog[] | MtTransferLog[]`:\n\n```ts\ninterface MtEventLogData {\n  EVENT_JSON: {\n    standard: \"nep245\",\n    version: \"1.0.0\",\n    event: MtEvent,\n    data: MtMintLog[] | MtBurnLog[] | MtTransferLog[]\n  }\n}\n```\n\n```ts\n// Minting event log. Emitted when a token is minted/created.\n// Requirements\n// * Contract MUST emit event when minting a token\n// Fields\n// * `token_ids` and `amounts` MUST be the same length\n// * `owner_id`: the account receiving the minted tokens\n// * `token_ids`: the tokens minted\n// * `amounts`: the number of tokens minted, passed as strings,\n//    although the numbers are stored as 128-bit unsigned integers\n// * `memo`: optional message\ninterface MtMintLog {\n    owner_id: string,\n    token_ids: string[],\n    amounts: string[],\n    memo?: string\n}\n\n// Burning event log. 
Emitted when a token is burned.\n// Requirements\n// * Contract MUST emit event when burning a token\n// Fields\n// * `token_ids` and `amounts` MUST be the same length\n// * `owner_id`: the account whose token(s) are being burned\n// * `authorized_id`: approved account_id to burn, if applicable\n// * `token_ids`: the tokens being burned\n// * `amounts`: the number of tokens burned, passed as strings,\n//    although the numbers are stored as 128-bit unsigned integers\n// * `memo`: optional message\ninterface MtBurnLog {\n    owner_id: string,\n    authorized_id?: string,\n    token_ids: string[],\n    amounts: string[],\n    memo?: string\n}\n\n// Transfer event log. Emitted when a token is transferred.\n// Requirements\n// * Contract MUST emit event when transferring a token\n// Fields\n// * `authorized_id`: approved account_id to transfer\n// * `old_owner_id`: the account sending the tokens \"sender.near\"\n// * `new_owner_id`: the account receiving the tokens \"receiver.near\"\n// * `token_ids`: the tokens to transfer\n// * `amounts`: the number of tokens to transfer, passed as strings,\n//    although the numbers are stored as 128-bit unsigned integers\n// * `memo`: optional message\ninterface MtTransferLog {\n    authorized_id?: string,\n    old_owner_id: string,\n    new_owner_id: string,\n    token_ids: string[],\n    amounts: string[],\n    memo?: string\n}\n```\n\n## Examples\n\nSingle owner minting (pretty-formatted for readability purposes):\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep245\",\n  \"version\": \"1.0.0\",\n  \"event\": \"mt_mint\",\n  \"data\": [\n    {\"owner_id\": \"foundation.near\", \"token_ids\": [\"aurora\", \"proximitylabs_ft\"], \"amounts\":[\"1\", \"100\"]}\n  ]\n}\n```\n\nDifferent owners minting:\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep245\",\n  \"version\": \"1.0.0\",\n  \"event\": \"mt_mint\",\n  \"data\": [\n    {\"owner_id\": 
\"foundation.near\", \"token_ids\": [\"aurora\", \"proximitylabs_ft\"], \"amounts\":[\"1\",\"100\"]},\n    {\"owner_id\": \"user1.near\", \"token_ids\": [\"meme\"], \"amounts\": [\"1\"]}\n  ]\n}\n```\n\nDifferent events (separate log entries):\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep245\",\n  \"version\": \"1.0.0\",\n  \"event\": \"mt_burn\",\n  \"data\": [\n    {\"owner_id\": \"foundation.near\", \"token_ids\": [\"aurora\", \"proximitylabs_ft\"], \"amounts\": [\"1\",\"100\"]}\n  ]\n}\n```\n\nAuthorized id:\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep245\",\n  \"version\": \"1.0.0\",\n  \"event\": \"mt_burn\",\n  \"data\": [\n    {\"owner_id\": \"foundation.near\", \"token_ids\": [\"aurora_alpha\", \"proximitylabs_ft\"], \"amounts\": [\"1\",\"100\"], \"authorized_id\": \"thirdparty.near\" }\n  ]\n}\n```\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep245\",\n  \"version\": \"1.0.0\",\n  \"event\": \"mt_transfer\",\n  \"data\": [\n    {\"old_owner_id\": \"user1.near\", \"new_owner_id\": \"user2.near\", \"token_ids\": [\"meme\"], \"amounts\":[\"1\"], \"memo\": \"have fun!\"}\n  ]\n}\n\nEVENT_JSON:{\n  \"standard\": \"nep245\",\n  \"version\": \"1.0.0\",\n  \"event\": \"mt_transfer\",\n  \"data\": [\n    {\"old_owner_id\": \"user2.near\", \"new_owner_id\": \"user3.near\", \"token_ids\": [\"meme\"], \"amounts\":[\"1\"], \"authorized_id\": \"thirdparty.near\", \"memo\": \"have fun!\"}\n  ]\n}\n```\n\n## Further methods\n\nNote that the example events above cover two different kinds of events:\n\n1. Events that are not specified in the MT Standard (`mt_mint`, `mt_burn`)\n2. An event that is covered in the [Multi Token Core Standard](https://github.com/near/NEPs/blob/master/neps/nep-0245.md) (`mt_transfer`)\n\nThis event standard also applies beyond the three events highlighted here, where future events follow the same convention as the second type. 
For instance, if an MT contract uses the [approval management standard](ApprovalManagement.md), it may emit an event for `mt_approve` if that's deemed important by the developer community.\n\nPlease feel free to open pull requests for extending the events standard detailed here as needs arise.\n\n## Drawbacks\n\nThere is a known limitation of 16kB on captured log strings. Because `token_ids` may vary in length across different apps, the number of events that can be emitted within a single log may vary as well.\n"
  },
  {
    "path": "neps/nep-0245/Metadata.md",
    "content": "# Multi Token Metadata\n\n:::caution\nThis is part of the proposed spec [NEP-245](https://github.com/near/NEPs/blob/master/neps/nep-0245.md) and is subject to change.\n:::\n\nVersion `1.0.0`\n\n## Summary\n\nAn interface for a multi token's metadata. The goal is to keep the metadata future-proof as well as lightweight. This will be important to dApps needing additional information about multi token properties, and broadly compatible with other token standards such that the [NEAR Rainbow Bridge](https://near.org/blog/eth-near-rainbow-bridge/) can move tokens between chains.\n\n## Motivation\n\nThe primary value of tokens comes from their metadata. While the [core standard](https://github.com/near/NEPs/blob/master/neps/nep-0245.md) provides the minimum interface that can be considered a multi token, most artists, developers, and dApps will want to associate more data with each token, and will want a predictable way to interact with any MT's metadata.\n\nNEAR's unique [storage staking](https://docs.near.org/concepts/storage/storage-staking) approach makes it feasible to store more data on-chain than other blockchains. This standard leverages this strength for common metadata attributes, and provides a standard way to link to additional offchain data to support rapid community experimentation.\n\nThis standard also provides a `spec` version. This makes it easy for consumers of Multi Tokens, such as marketplaces, to know if they support all the features of a given token.\n\nPrior art:\n\n- NEAR's [Fungible Token Metadata Standard](https://github.com/near/NEPs/blob/master/neps/nep-0148.md)\n- NEAR's [Non-Fungible Token Metadata Standard](https://github.com/near/NEPs/blob/master/neps/nep-0177.md)\n- Discussion about NEAR's complete NFT standard: #171\n- Discussion about NEAR's complete Multi Token standard: #245\n\n## Interface\n\nMetadata applies at both the class level (`MTBaseTokenMetadata`) and the specific instance level (`MTTokenMetadata`). 
The relevant metadata for each:\n\n```ts\ntype MTContractMetadata = {\n  spec: string, // required, essentially a version like \"mt-1.0.0\"\n  name: string, // required, ex. \"Zoink's Digital Sword Collection\"\n}\n\ntype MTBaseTokenMetadata = {\n  name: string, // required, ex. \"Silver Swords\" or \"Metaverse 3\"\n  id: string, // required, a unique identifier for the metadata\n  symbol: string|null, // required, ex. \"MOCHI\"\n  icon: string|null, // Data URL\n  decimals: string|null, // number of decimals for the token; useful for FT-related tokens\n  base_uri: string|null, // Centralized gateway known to have reliable access to decentralized storage assets referenced by `reference` or `media` URLs\n  reference: string|null, // URL to a JSON file with more info\n  copies: number|null, // number of copies of this set of metadata in existence when token was minted.\n  reference_hash: string|null, // Base64-encoded sha256 hash of JSON from reference field. Required if `reference` is included.\n}\n\ntype MTTokenMetadata = {\n  title: string|null, // ex. \"Arch Nemesis: Mail Carrier\" or \"Parcel #5055\"\n  description: string|null, // free-form description\n  media: string|null, // URL to associated media, preferably to decentralized, content-addressed storage\n  media_hash: string|null, // Base64-encoded sha256 hash of content referenced by the `media` field. Required if `media` is included.\n  issued_at: string|null, // When token was issued or minted, Unix epoch in milliseconds\n  expires_at: string|null, // When token expires, Unix epoch in milliseconds\n  starts_at: string|null, // When token starts being valid, Unix epoch in milliseconds\n  updated_at: string|null, // When token was last updated, Unix epoch in milliseconds\n  extra: string|null, // Anything extra the MT wants to store on-chain. 
Can be stringified JSON.\n  reference: string|null, // URL to an off-chain JSON file with more info.\n  reference_hash: string|null // Base64-encoded sha256 hash of JSON from reference field. Required if `reference` is included.\n}\n\ntype MTTokenMetadataAll = {\n  base: MTBaseTokenMetadata\n  token: MTTokenMetadata\n}\n```\n\nA new set of functions MUST be supported on the MT contract:\n\n```ts\n// Returns the top-level contract metadata\nfunction mt_metadata_contract(): MTContractMetadata {}\nfunction mt_metadata_token_all(token_ids: string[]): MTTokenMetadataAll[]\nfunction mt_metadata_token_by_token_id(token_ids: string[]): MTTokenMetadata[]\nfunction mt_metadata_base_by_token_id(token_ids: string[]): MTBaseTokenMetadata[]\nfunction mt_metadata_base_by_metadata_id(base_metadata_ids: string[]): MTBaseTokenMetadata[]\n```\n\nA new attribute MUST be added to each `Token` struct:\n\n```diff\n type Token = {\n   token_id: string,\n+  token_metadata?: MTTokenMetadata,\n+  base_metadata_id: string,\n }\n```\n\n### An implementing contract MUST include the following fields on-chain\n\nFor `MTContractMetadata`:\n\n- `spec`: a string that MUST be formatted `mt-1.0.0` to indicate that a Multi Token contract adheres to the current version of this Metadata spec. This will allow consumers of the Multi Token to know if they support the features of a given contract.\n- `name`: the human-readable name of the contract.\n\n### An implementing contract MUST include the following fields on-chain\n\nFor `MTBaseTokenMetadata`:\n\n- `name`: the human-readable name of the Token.\n- `base_uri`: Centralized gateway known to have reliable access to decentralized storage assets referenced by `reference` or `media` URLs. 
Can be used by other frontends for initial retrieval of assets, even if these frontends then replicate the data to their own decentralized nodes, which they are encouraged to do.\n\n### An implementing contract MAY include the following fields on-chain\n\nFor `MTBaseTokenMetadata`:\n\n- `symbol`: the abbreviated symbol of the contract, like MOCHI or MV3\n- `icon`: a small image associated with this contract. Encouraged to be a [data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URIs), to help consumers display it quickly while protecting user data. Recommendation: use [optimized SVG](https://codepen.io/tigt/post/optimizing-svgs-in-data-uris), which can result in high-resolution images with only 100s of bytes of [storage cost](https://docs.near.org/concepts/storage/storage-staking). (Note that these storage costs are incurred to the contract deployer, but that querying these icons is a very cheap & cacheable read operation for all consumers of the contract and the RPC nodes that serve the data.) Recommendation: create icons that will work well with both light-mode and dark-mode websites by either using middle-tone color schemes, or by [embedding `media` queries in the SVG](https://timkadlec.com/2013/04/media-queries-within-svg/).\n- `reference`: a link to a valid JSON file containing various keys offering supplementary details on the token. Example: `/ipfs/QmdmQXB2mzChmMeKY47C43LxUdg1NDJ5MWcKMKxDu7RgQm`, etc. If the information given in this document conflicts with the on-chain attributes, the values in `reference` shall be considered the source of truth.\n- `reference_hash`: the base64-encoded sha256 hash of the JSON file contained in the `reference` field. This is to guard against off-chain tampering.\n- `copies`: The number of tokens with this set of metadata or `media` known to exist at time of minting. 
The current supply is a more accurate reflection.\n\nFor `MTTokenMetadata`:\n\n- `title`: The title of this specific token.\n- `description`: A longer description of the token.\n- `media`: URL to associated media. Preferably to decentralized, content-addressed storage.\n- `media_hash`: the base64-encoded sha256 hash of content referenced by the `media` field. This is to guard against off-chain tampering.\n- `copies`: The number of tokens with this set of metadata or `media` known to exist at time of minting.\n- `issued_at`: Unix epoch in milliseconds when token was issued or minted (an unsigned 32-bit integer would suffice until the year 2106)\n- `expires_at`: Unix epoch in milliseconds when token expires\n- `starts_at`: Unix epoch in milliseconds when token starts being valid\n- `updated_at`: Unix epoch in milliseconds when token was last updated\n- `extra`: anything extra the MT wants to store on-chain. Can be stringified JSON.\n- `reference`: URL to an off-chain JSON file with more info.\n- `reference_hash`: Base64-encoded sha256 hash of JSON from reference field. Required if `reference` is included.\n\nFor `MTTokenMetadataAll`:\n\n- `base`: The base metadata that corresponds to `MTBaseTokenMetadata` for the token.\n- `token`: The token specific metadata that corresponds to `MTTokenMetadata`.\n\n### No incurred cost for core MT behavior\n\nContracts should be implemented in a way to avoid extra gas fees for serialization & deserialization of metadata for calls to `mt_*` methods other than `mt_metadata*` or `mt_tokens`. See `near-contract-standards` [implementation using `LazyOption`](https://github.com/near/near-sdk-rs/blob/c2771af7fdfe01a4e8414046752ee16fb0d29d39/examples/fungible-token/ft/src/lib.rs#L71) as a reference example.\n\n## Drawbacks\n\n- When this MT contract is created and initialized, the storage use per-token will be higher than an MT Core version. Frontends can account for this by adding extra deposit when minting. 
This could be done by padding with a reasonable amount, or by the frontend using the [RPC call detailed here](https://docs.near.org/docs/develop/front-end/rpc#genesis-config), which fetches the genesis configuration and determines precisely how much deposit is needed.\n- The convention of `icon` being a data URL rather than a link to an HTTP endpoint (which could contain privacy-violating code) cannot be enforced on deploy or update of contract metadata, and must instead be enforced on the consumer/app side when displaying token data.\n- If the on-chain icon uses a data URL or is not set but the document given by `reference` contains a privacy-violating `icon` URL, consumers & apps of this data should not naïvely display the `reference` version, but should prefer the safe version. This is technically a violation of the \"`reference` setting wins\" policy described above.\n\n## Future possibilities\n\n- Detailed conventions that may be enforced for versions.\n- A fleshed out schema for what the `reference` object should contain.\n"
  },
  {
    "path": "neps/nep-0245.md",
"content": "---\nNEP: 245\nTitle: Multi Token Standard\nAuthor: Zane Starr <zane@ships.gold>, @riqi, @jriemann, @marcos.sun\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/discussions/246\nType: Standards Track\nCategory: Contract\nCreated: 03-Mar-2022\nRequires: 297\n---\n\n## Summary\n\nA standard interface for multi token contracts that supports fungible, semi-fungible, non-fungible, and tokens of any type, allowing for ownership, transfer, and batch transfer of tokens regardless of specific type.\n\n### Extensions\n\n- [Approval Management](nep-0245/ApprovalManagement.md)\n- [Enumeration](nep-0245/Enumeration.md)\n- [Events](nep-0245/Events.md)\n- [Metadata](nep-0245/Metadata.md)\n\n## Motivation\n\nIn the three years since [ERC-1155] was ratified by the Ethereum Community, Multi Token based contracts have proven themselves valuable assets. Many blockchain projects emulate this standard for representing multiple token asset classes in a single contract. The ability to reduce transaction overhead for marketplaces, video games, DAOs, and exchanges is appealing to the blockchain ecosystem and simplifies transactions for developers.\n\nHaving a single contract represent NFTs, FTs, and tokens that sit in between greatly improves efficiency. The standard also introduced the ability to make batch requests with multiple asset classes, reducing complexity. This standard allows operations that currently require _many_ transactions to be completed in a single transaction that can transfer not only NFTs and FTs, but any tokens that are a part of the same token contract.\n\nWith this standard, we have sought to take advantage of the ability of the NEAR blockchain to scale. 
Its sharded runtime and [storage staking] model, which decouples [gas] fees from storage demand, enable ultra low transaction fees and greater on-chain storage (see the [Metadata][MT Metadata] extension).\n\nIt is also noteworthy that, like the [NFT] standard, the Multi Token standard implements `mt_transfer_call`,\nwhich allows a user to attach many tokens to a call to a separate contract. Additionally, this standard includes an optional [Approval Management] extension. The extension allows marketplaces to trade on behalf of a user, providing additional flexibility for dApps.\n\nPrior art:\n\n- [ERC-721]\n- [ERC-1155]\n- [NEAR Fungible Token Standard][FT Core], which first pioneered the \"transfer and call\" technique\n- [NEAR Non-Fungible Token Standard][NFT Core]\n\n## Rationale and alternatives\n\nWhy have another standard? Aren't fungible and non-fungible tokens enough? The current fungible token and non-fungible token standards do not support representing many FT tokens in a single contract, nor the flexibility to define different token types with different behavior in a single contract. This makes it difficult to interoperate with other major blockchain networks, such as Ethereum, whose standards allow many different FT tokens to be represented in a single contract.\n\nThe standard here introduces a few concepts that evolve the original [ERC-1155] standard to have more utility, while maintaining the original flexibility of the standard. So keeping that in mind, we are defining this as a new token type. It combines two main features of FT and NFT. It allows us to represent many token types in a single contract, and it's possible to store the amount for each token.\n\nThe decision to not use FT and NFT as explicit token types was taken to allow the community to define their own standards and meanings through metadata. 
As standards evolve on other networks, this specification allows tokens across networks to be represented accurately, without restricting behavior to any preset definition.\n\nThe general issue with this approach is defining what metadata means and how it is interpreted. We have chosen to follow the pattern that is currently in use on Ethereum in the [ERC-1155] standard. That pattern relies on the community to create extensions, or to signal how they want metadata represented, for their use case.\n\nOne of the areas that has broad sweeping implications from the [ERC-1155] standard is the lack of direct access to metadata. With Near's sharding we are able to have a [Metadata Extension][MT Metadata] for the standard that exists on chain, so developers and users are not required to use an indexer to understand how to interact with or interpret the tokens they receive via token identifiers.\n\nAnother extension that we made was to provide an explicit ability for developers and users to group or link together series of NFTs/FTs or any combination of tokens. This provides additional flexibility that the [ERC-1155] standard only has loose guidelines on. This was chosen to make it easy for consumers to understand the relationship between tokens within the contract.\n\nTo recap, we chose to create this standard to improve interoperability, developer ease of use, and to extend token representability beyond what was available directly in the FT or NFT standards. We believe this to be another tool in the developer's toolkit. It makes it possible to represent many types of tokens and to enable exchanges of many tokens within a single `transaction`.\n\n## Specification\n\n**NOTES**:\n\n- All amounts, balances and allowances are limited by `U128` (max value `2**128 - 1`).\n- The token standard uses JSON for serialization of arguments and results.\n- Amounts in arguments and results are serialized as Base-10 strings, e.g. 
`\"100\"`. This is done to avoid the JSON limitation of a max integer value of `2**53`.\n- The contract must track the change in storage when adding to and removing from collections. This is not included in this core multi token standard but instead in the [Storage Standard][Storage Management].\n- To prevent the deployed contract from being modified or deleted, it should not have any access keys on its account.\n\n### MT Interface\n\n```ts\n// The base structure that will be returned for a token. If the contract is using\n// extensions such as Approval Management, Enumeration, or Metadata, other\n// attributes may be included in this structure.\ntype Token = {\n  token_id: string,\n  owner_id: string | null\n}\n\n/******************/\n/* CHANGE METHODS */\n/******************/\n\n// Simple transfer. Transfer a given `token_id` from current owner to\n// `receiver_id`.\n//\n// Requirements\n// * Caller of the method must attach a deposit of 1 yoctoⓃ for security purposes\n// * Caller must have greater than or equal to the `amount` being requested\n// * Contract MUST panic if called by someone other than token owner or,\n//   if using Approval Management, one of the approved accounts\n// * `approval_id` is for use with Approval Management extension, see\n//   that document for full explanation.\n// * If using Approval Management, contract MUST nullify approved accounts on\n//   successful transfer.\n//\n// Arguments:\n// * `receiver_id`: the valid NEAR account receiving the token\n// * `token_id`: the token to transfer\n// * `amount`: the number of tokens to transfer, wrapped in quotes and treated\n//    like a string, although the number will be stored as an unsigned integer\n//    with 128 bits.\n// * `approval` (optional): is a tuple of [`owner_id`,`approval_id`].\n//   `owner_id` is the valid Near account that owns the tokens.\n//   `approval_id` is the expected approval ID. A number smaller than\n//    2^53, and therefore representable as JSON. 
See Approval Management\n//    standard for full explanation.\n// * `memo` (optional): for use cases that may benefit from indexing or\n//    providing information for a transfer\n\n\nfunction mt_transfer(\n  receiver_id: string,\n  token_id: string,\n  amount: string,\n  approval: [owner_id: string, approval_id: number]|null,\n  memo: string|null,\n) {}\n\n// Simple batch transfer. Transfer the given `token_ids` from current owner to\n// `receiver_id`.\n//\n// Requirements\n// * Caller of the method must attach a deposit of 1 yoctoⓃ for security purposes\n// * Caller must have greater than or equal to the `amounts` being requested for the given `token_ids`\n// * Contract MUST panic if called by someone other than token owner or,\n//   if using Approval Management, one of the approved accounts\n// * `approval_id` is for use with Approval Management extension, see\n//   that document for full explanation.\n// * If using Approval Management, contract MUST nullify approved accounts on\n//   successful transfer.\n// * Contract MUST panic if the length of `token_ids` does not equal the length of `amounts`\n// * Contract MUST panic if `approvals` is not `null` and its length does not equal the length of `token_ids`\n//\n// Arguments:\n// * `receiver_id`: the valid NEAR account receiving the token\n// * `token_ids`: the tokens to transfer\n// * `amounts`: the number of tokens to transfer, wrapped in quotes and treated\n//    like an array of strings, although the numbers will be stored as an array of unsigned integers\n//    with 128 bits.\n// * `approvals` (optional): is an array of expected `approval` per `token_ids`.\n//    If a `token_id` does not have a corresponding `approval` then the entry in the array\n//    must be marked null.\n//   `approval` is a tuple of [`owner_id`,`approval_id`].\n//   `owner_id` is the valid Near account that owns the tokens.\n//   `approval_id` is the expected approval ID. 
A number smaller than\n//    2^53, and therefore representable as JSON. See Approval Management\n//    standard for full explanation.\n// * `memo` (optional): for use cases that may benefit from indexing or\n//    providing information for a transfer\n\n\nfunction mt_batch_transfer(\n  receiver_id: string,\n  token_ids: string[],\n  amounts: string[],\n  approvals: ([owner_id: string, approval_id: number]| null)[]| null,\n  memo: string|null,\n) {}\n\n\n// Transfer token and call a method on a receiver contract. A successful\n// workflow will end in a success execution outcome to the callback on the MT\n// contract at the method `mt_resolve_transfer`.\n//\n// You can think of this as being similar to attaching native NEAR tokens to a\n// function call. It allows you to attach any Multi Token in a call to a\n// receiver contract.\n//\n// Requirements:\n// * Caller of the method must attach a deposit of 1 yoctoⓃ for security\n//   purposes\n// * Caller must have greater than or equal to the `amount` being requested\n// * Contract MUST panic if called by someone other than token owner or,\n//   if using Approval Management, one of the approved accounts\n// * The receiving contract must implement `mt_on_transfer` according to the\n//   standard. 
If it does not, MT contract's `mt_resolve_transfer` MUST deal\n//   with the resulting failed cross-contract call and roll back the transfer.\n// * Contract MUST implement the behavior described in `mt_resolve_transfer`\n// * `approval_id` is for use with Approval Management extension, see\n//   that document for full explanation.\n// * If using Approval Management, contract MUST nullify approved accounts on\n//   successful transfer.\n//\n// Arguments:\n// * `receiver_id`: the valid NEAR account receiving the token.\n// * `token_id`: the token to send.\n// * `amount`: the number of tokens to transfer, wrapped in quotes and treated\n//    like a string, although the number will be stored as an unsigned integer\n//    with 128 bits.\n// * `owner_id`: the valid NEAR account that owns the token\n// * `approval` (optional): is a tuple of [`owner_id`,`approval_id`].\n//   `owner_id` is the valid Near account that owns the tokens.\n//   `approval_id` is the expected approval ID. A number smaller than\n//    2^53, and therefore representable as JSON. See Approval Management\n//    standard for full explanation.\n// * `memo` (optional): for use cases that may benefit from indexing or\n//    providing information for a transfer.\n// * `msg`: specifies information needed by the receiving contract in\n//    order to properly handle the transfer. Can indicate both a function to\n//    call and the parameters to pass to that function.\n\n\nfunction mt_transfer_call(\n  receiver_id: string,\n  token_id: string,\n  amount: string,\n  approval: [owner_id: string, approval_id: number]|null,\n  memo: string|null,\n  msg: string,\n): Promise {}\n\n\n\n// Transfer tokens and call a method on a receiver contract. A successful\n// workflow will end in a success execution outcome to the callback on the MT\n// contract at the method `mt_resolve_transfer`.\n//\n// You can think of this as being similar to attaching native NEAR tokens to a\n// function call. 
It allows you to attach any Multi Token in a call to a\n// receiver contract.\n//\n// Requirements:\n// * Caller of the method must attach a deposit of 1 yoctoⓃ for security\n//   purposes\n// * Caller must have greater than or equal to the `amount` being requested\n// * Contract MUST panic if called by someone other than token owner or,\n//   if using Approval Management, one of the approved accounts\n// * The receiving contract must implement `mt_on_transfer` according to the\n//   standard. If it does not, MT contract's `mt_resolve_transfer` MUST deal\n//   with the resulting failed cross-contract call and roll back the transfer.\n// * Contract MUST implement the behavior described in `mt_resolve_transfer`\n// * `approval_id` is for use with Approval Management extension, see\n//   that document for full explanation.\n// * If using Approval Management, contract MUST nullify approved accounts on\n//   successful transfer.\n// * Contract MUST panic if the length of `token_ids` does not equal the length of `amounts`\n// * Contract MUST panic if `approvals` is not `null` and its length does not equal the length of `token_ids`\n//\n// Arguments:\n// * `receiver_id`: the valid NEAR account receiving the token.\n// * `token_ids`: the tokens to transfer\n// * `amounts`: the number of tokens to transfer, wrapped in quotes and treated\n//    like an array of strings, although the numbers will be stored as an array of\n//    unsigned integers with 128 bits.\n// * `approvals` (optional): is an array of expected `approval` per `token_ids`.\n//    If a `token_id` does not have a corresponding `approval` then the entry in the array\n//    must be marked null.\n//    `approval` is a tuple of [`owner_id`,`approval_id`].\n//   `owner_id` is the valid Near account that owns the tokens.\n//   `approval_id` is the expected approval ID. A number smaller than\n//    2^53, and therefore representable as JSON. 
See Approval Management\n//    standard for full explanation.\n// * `memo` (optional): for use cases that may benefit from indexing or\n//    providing information for a transfer.\n// * `msg`: specifies information needed by the receiving contract in\n//    order to properly handle the transfer. Can indicate both a function to\n//    call and the parameters to pass to that function.\n\n\nfunction mt_batch_transfer_call(\n  receiver_id: string,\n  token_ids: string[],\n  amounts: string[],\n  approvals: ([owner_id: string, approval_id: number]|null)[] | null,\n  memo: string|null,\n  msg: string,\n): Promise {}\n\n/****************/\n/* VIEW METHODS */\n/****************/\n\n\n// Returns the tokens with the given `token_ids`, or `null` if no such token exists.\nfunction mt_token(token_ids: string[]): (Token | null)[]\n\n// Returns the balance of an account for the given `token_id`.\n// The balance, though wrapped in quotes and treated like a string,\n// is stored as an unsigned integer with 128 bits.\n// Arguments:\n// * `account_id`: the NEAR account that owns the token.\n// * `token_id`: the token to retrieve the balance from\nfunction mt_balance_of(account_id: string, token_id: string): string\n\n// Returns the balances of an account for the given `token_ids`.\n// The balances, though wrapped in quotes and treated like strings,\n// are stored as unsigned integers with 128 bits.\n// Arguments:\n// * `account_id`: the NEAR account that owns the tokens.\n// * `token_ids`: the tokens to retrieve the balance from\nfunction mt_batch_balance_of(account_id: string, token_ids: string[]): string[]\n\n// Returns the token supply with the given `token_id` or `null` if no such token exists.\n// The supply, though wrapped in quotes and treated like a string, is stored\n// as an unsigned integer with 128 bits.\nfunction mt_supply(token_id: string): string | null\n\n// Returns the token supplies with the given `token_ids`, a string value is 
returned or `null`\n// if no such token exists. The supplies, though wrapped in quotes and treated like strings,\n// are stored as unsigned integers with 128 bits.\nfunction mt_batch_supply(token_ids: string[]): (string | null)[]\n```\n\nThe following behavior is required, but contract authors may name this function something other than the conventional `mt_resolve_transfer` used here.\n\n```ts\n// Finalize an `mt_transfer_call` or `mt_batch_transfer_call` chain of cross-contract calls. Generically\n// referred to as `mt_transfer_call` as it applies to `mt_batch_transfer_call` as well.\n//\n// The `mt_transfer_call` process:\n//\n// 1. Sender calls `mt_transfer_call` on MT contract\n// 2. MT contract transfers token from sender to receiver\n// 3. MT contract calls `mt_on_transfer` on receiver contract\n// 4+. [receiver contract may make other cross-contract calls]\n// N. MT contract resolves promise chain with `mt_resolve_transfer`, and may\n//    transfer token back to sender\n//\n// Requirements:\n// * Contract MUST forbid calls to this function by any account except self\n// * If promise chain failed, contract MUST revert token transfer\n// * If promise chain resolves with `true`, contract MUST return token to\n//   `sender_id`\n//\n// Arguments:\n// * `sender_id`: the sender of `mt_transfer_call`\n// * `receiver_id`: the `receiver_id` argument given to `mt_transfer_call`\n// * `token_ids`: the `token_ids` argument given to `mt_transfer_call`\n// * `amounts`: the `amounts` argument given to `mt_transfer_call`\n// * `approvals (optional)`: if using Approval Management, contract MUST provide\n//   set of original approvals in this argument, and restore the\n//   approved accounts in case of revert.\n//   `approvals` is an array of expected `approval_list` per `token_ids`.\n//   If a `token_id` does not have a corresponding `approvals_list` then the entry in the\n//   array must be marked null.\n//   `approvals_list` is an array of triplets of 
[`owner_id`,`approval_id`,`amount`].\n//   `owner_id` is the valid Near account that owns the tokens.\n//   `approval_id` is the expected approval ID. A number smaller than\n//    2^53, and therefore representable as JSON. See Approval Management\n//    standard for full explanation.\n//   `amount`: the number of tokens to transfer, wrapped in quotes and treated\n//    like a string, although the number will be stored as an unsigned integer\n//    with 128 bits.\n//\n// Returns total amount spent by the `receiver_id`, corresponding to each `token_id`.\n// The amounts returned, though wrapped in quotes and treated like strings,\n// are stored as unsigned integers with 128 bits.\n// Example: if sender_id calls `mt_transfer_call({ \"amounts\": [\"100\"], \"token_ids\": [\"55\"], \"receiver_id\": \"games\" })`,\n// but `receiver_id` only uses 80, `mt_on_transfer` will resolve with `[\"20\"]`, and `mt_resolve_transfer`\n// will return `[\"80\"]`.\n\n\nfunction mt_resolve_transfer(\n  sender_id: string,\n  receiver_id: string,\n  token_ids: string[],\n  amounts: string[],\n  approvals: (null | [owner_id: string, approval_id: number, amount: string][])[] | null\n): string[] {}\n```\n\n### Receiver Interface\n\nContracts which want to make use of `mt_transfer_call` and `mt_batch_transfer_call` must implement the following:\n\n```ts\n// Take some action after receiving a multi token\n//\n// Requirements:\n// * Contract MUST restrict calls to this function to a set of whitelisted\n//   contracts\n// * Contract MUST panic if `token_ids` length does not equal `amounts`\n//   length\n// * Contract MUST panic if `previous_owner_ids` length does not equal `token_ids`\n//   length\n//\n// Arguments:\n// * `sender_id`: the sender of `mt_transfer_call`\n// * `previous_owner_ids`: the accounts that owned the tokens prior to being\n//   transferred to this contract, which can differ from `sender_id` if using\n//   Approval Management extension\n// * `token_ids`: the `token_ids` 
argument given to `mt_transfer_call`\n// * `amounts`: the `amounts` argument given to `mt_transfer_call`\n// * `msg`: information necessary for this contract to know how to process the\n//   request. This may include method names and/or arguments.\n//\n// Returns the number of unused tokens in string form. For instance, if `amounts`\n// is `[\"10\"]` but only 9 are needed, it will return `[\"1\"]`. The amounts returned,\n// though wrapped in quotes and treated like strings, are stored as\n// unsigned integers with 128 bits.\n\n\nfunction mt_on_transfer(\n  sender_id: string,\n  previous_owner_ids: string[],\n  token_ids: string[],\n  amounts: string[],\n  msg: string,\n): Promise<string[]>;\n```\n\n## Events\n\nNEAR and third-party applications need to track\n`mint`, `burn`, `transfer` events for all MT-driven apps consistently. [This extension][MT Events] addresses that.\n\nNote that applications, including NEAR Wallet, could require implementing additional methods to display tokens correctly such as [`mt_metadata`][MT Metadata] and [`mt_tokens_for_owner`][MT Enumeration].\n\n### Events Interface\n\n[Multi Token Events][MT Events] MUST have `standard` set to `\"nep245\"`, standard version set to `\"1.0.0\"`, `event` set to one of `mt_mint`, `mt_burn`, or `mt_transfer`, and `data` must be of one of the following relevant types: `MtMintLog[] | MtBurnLog[] | MtTransferLog[]`:\n\n```ts\ninterface MtEventLogData {\n  EVENT_JSON: {\n    standard: \"nep245\",\n    version: \"1.0.0\",\n    event: MtEvent,\n    data: MtMintLog[] | MtBurnLog[] | MtTransferLog[]\n  }\n}\n```\n\n```ts\n// Minting event log. 
Emitted when a token is minted/created.\n// Requirements\n// * Contract MUST emit event when minting a token\n// Fields\n// * `token_ids` and `amounts` MUST be the same length\n// * `owner_id`: the account receiving the minted token\n// * `token_ids`: the tokens minted\n// * `amounts`: the number of tokens minted, wrapped in quotes and treated\n//    like strings, although the numbers will be stored as an array of\n//    unsigned 128-bit integers.\n// * `memo`: optional message\ninterface MtMintLog {\n    owner_id: string,\n    token_ids: string[],\n    amounts: string[],\n    memo?: string\n}\n\n// Burning event log. Emitted when a token is burned.\n// Requirements\n// * Contract MUST emit event when burning a token\n// Fields\n// * `token_ids` and `amounts` MUST be the same length\n// * `owner_id`: the account whose token(s) are being burned\n// * `authorized_id`: approved account_id to burn, if applicable\n// * `token_ids`: the tokens being burned\n// * `amounts`: the number of tokens burned, wrapped in quotes and treated\n//    like strings, although the numbers will be stored as an array of\n//    unsigned 128-bit integers.\n// * `memo`: optional message\ninterface MtBurnLog {\n    owner_id: string,\n    authorized_id?: string,\n    token_ids: string[],\n    amounts: string[],\n    memo?: string\n}\n\n// Transfer event log. 
Emitted when a token is transferred.\n// Requirements\n// * Contract MUST emit event when transferring a token\n// Fields\n// * `authorized_id`: approved account_id to transfer, if applicable\n// * `old_owner_id`: the account sending the tokens, e.g. \"sender.near\"\n// * `new_owner_id`: the account receiving the tokens, e.g. \"receiver.near\"\n// * `token_ids`: the tokens to transfer\n// * `amounts`: the number of tokens to transfer, wrapped in quotes and treated\n//    like strings, although the numbers will be stored as an array of\n//    unsigned 128-bit integers.\n// * `memo`: optional message\ninterface MtTransferLog {\n    authorized_id?: string,\n    old_owner_id: string,\n    new_owner_id: string,\n    token_ids: string[],\n    amounts: string[],\n    memo?: string\n}\n```\n\n## Examples\n\nSingle owner minting (pretty-formatted for readability purposes):\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep245\",\n  \"version\": \"1.0.0\",\n  \"event\": \"mt_mint\",\n  \"data\": [\n    {\"owner_id\": \"foundation.near\", \"token_ids\": [\"aurora\", \"proximitylabs_ft\"], \"amounts\":[\"1\", \"100\"]}\n  ]\n}\n```\n\nDifferent owners minting:\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep245\",\n  \"version\": \"1.0.0\",\n  \"event\": \"mt_mint\",\n  \"data\": [\n    {\"owner_id\": \"foundation.near\", \"token_ids\": [\"aurora\", \"proximitylabs_ft\"], \"amounts\":[\"1\",\"100\"]},\n    {\"owner_id\": \"user1.near\", \"token_ids\": [\"meme\"], \"amounts\": [\"1\"]}\n  ]\n}\n```\n\nDifferent events (separate log entries):\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep245\",\n  \"version\": \"1.0.0\",\n  \"event\": \"mt_burn\",\n  \"data\": [\n    {\"owner_id\": \"foundation.near\", \"token_ids\": [\"aurora\", \"proximitylabs_ft\"], \"amounts\": [\"1\",\"100\"]}\n  ]\n}\n```\n\nAuthorized id:\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep245\",\n  \"version\": \"1.0.0\",\n  \"event\": \"mt_burn\",\n  \"data\": [\n    {\"owner_id\": \"foundation.near\", \"token_ids\": [\"aurora_alpha\", \"proximitylabs_ft\"], \"amounts\": 
[\"1\",\"100\"], \"authorized_id\": \"thirdparty.near\"}\n  ]\n}\n```\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep245\",\n  \"version\": \"1.0.0\",\n  \"event\": \"mt_transfer\",\n  \"data\": [\n    {\"old_owner_id\": \"user1.near\", \"new_owner_id\": \"user2.near\", \"token_ids\": [\"meme\"], \"amounts\":[\"1\"], \"memo\": \"have fun!\"}\n  ]\n}\n\nEVENT_JSON:{\n  \"standard\": \"nep245\",\n  \"version\": \"1.0.0\",\n  \"event\": \"mt_transfer\",\n  \"data\": [\n    {\"old_owner_id\": \"user2.near\", \"new_owner_id\": \"user3.near\", \"token_ids\": [\"meme\"], \"amounts\":[\"1\"], \"authorized_id\": \"thirdparty.near\", \"memo\": \"have fun!\"}\n  ]\n}\n```\n\n## Further Event Methods\n\nNote that the example events above cover two different kinds of events:\n\n1. Events that are not specified in the MT Standard (`mt_mint`, `mt_burn`)\n2. An event that is covered in the [Multi Token Core Standard](https://nomicon.io/Standards/Tokens/MultiToken/Core#mt-interface) (`mt_transfer`)\n\nThis event standard also applies beyond the three events highlighted here, where future events follow the same convention as the second type. 
For instance, if an MT contract uses the [approval management standard][MT Approval Management], it may emit an event for `mt_approve` if that is deemed important by the developer community.\n\nPlease feel free to open pull requests for extending the events standard detailed here as needs arise.\n\n## Reference Implementation\n\n[Minimum Viable Interface](https://github.com/jriemann/near-sdk-rs/blob/multi-token-reference-impl/near-contract-standards/src/multi_token/core/mod.rs)\n\n[MT Implementation](https://github.com/jriemann/near-sdk-rs/blob/multi-token-reference-impl/near-contract-standards/src/multi_token/core/core_impl.rs)\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n\n[ERC-721]: https://eips.ethereum.org/EIPS/eip-721\n[ERC-1155]: https://eips.ethereum.org/EIPS/eip-1155\n[storage staking]: https://docs.near.org/concepts/storage/storage-staking\n[gas]: https://docs.near.org/concepts/basics/transactions/gas\n[NFT Core]: https://github.com/near/NEPs/blob/master/neps/nep-0171.md\n[FT Core]: https://github.com/near/NEPs/blob/master/neps/nep-0141.md\n[Storage Management]: https://github.com/near/NEPs/blob/master/neps/nep-0145.md\n[MT Approval Management]: nep-0245/ApprovalManagement.md\n[MT Enumeration]: nep-0245/Enumeration.md\n[MT Events]: nep-0245/Events.md\n[MT Metadata]: nep-0245/Metadata.md\n"
  },
  {
    "path": "neps/nep-0256.md",
    "content": "---\nNEP: 256\nTitle: Non-Fungible Token Events\nAuthor: Olga Telezhnaya <olga@near.org>, @evergreen-trading-systems\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/pull/256, https://github.com/near/NEPs/issues/254\nType: Standards Track\nCategory: Contract\nCreated: 8-Sep-2021\n---\n\n# Events\n\nVersion `1.1.0`\n\n## Summary\n\nStandard interface for NFT contract actions based on [NEP-297](https://github.com/near/NEPs/blob/master/neps/nep-0297.md).\n\n## Motivation\n\nNEAR and third-party applications need to track `mint`, `transfer`, `burn`, and `contract_metadata_update` events for all NFT-driven apps consistently.\nThis extension addresses that.\n\nKeep in mind that applications, including NEAR Wallet, could require implementing additional methods to display the NFTs correctly, such as [`nft_metadata`](https://github.com/near/NEPs/blob/master/neps/nep-0177.md) and [`nft_tokens_for_owner`](https://github.com/near/NEPs/blob/master/neps/nep-0181.md).\n\n## Interface\n\nNon-Fungible Token Events MUST have `standard` set to `\"nep171\"`, standard version set to `\"1.1.0\"`, `event` value set to one of `nft_mint`, `nft_burn`, `nft_transfer`, or `contract_metadata_update`, and `data` of one of the following relevant types: `NftMintLog[] | NftTransferLog[] | NftBurnLog[] | NftContractMetadataUpdateLog[]`:\n\n```ts\ninterface NftEventLogData {\n    standard: \"nep171\",\n    version: \"1.1.0\",\n    event: \"nft_mint\" | \"nft_burn\" | \"nft_transfer\" | \"contract_metadata_update\",\n    data: NftMintLog[] | NftTransferLog[] | NftBurnLog[] | NftContractMetadataUpdateLog[],\n}\n```\n\n```ts\n// An event log to capture token minting\n// Arguments\n// * `owner_id`: \"account.near\"\n// * `token_ids`: [\"1\", \"abc\"]\n// * `memo`: optional message\ninterface NftMintLog {\n    owner_id: string,\n    token_ids: string[],\n    memo?: string\n}\n\n// An event log to capture token burning\n// Arguments\n// * `owner_id`: owner of tokens to burn\n// 
* `authorized_id`: approved account_id to burn, if applicable\n// * `token_ids`: [\"1\",\"2\"]\n// * `memo`: optional message\ninterface NftBurnLog {\n    owner_id: string,\n    authorized_id?: string,\n    token_ids: string[],\n    memo?: string\n}\n\n// An event log to capture token transfer\n// Arguments\n// * `authorized_id`: approved account_id to transfer, if applicable\n// * `old_owner_id`: \"owner.near\"\n// * `new_owner_id`: \"receiver.near\"\n// * `token_ids`: [\"1\", \"12345abc\"]\n// * `memo`: optional message\ninterface NftTransferLog {\n    authorized_id?: string,\n    old_owner_id: string,\n    new_owner_id: string,\n    token_ids: string[],\n    memo?: string\n}\n\n// An event log to capture contract metadata updates. Note that the updated contract metadata is not included in the log, as it could easily exceed the 16KB log size limit. Listeners can query `nft_metadata` to get the updated contract metadata.\n// Arguments\n// * `memo`: optional message\ninterface NftContractMetadataUpdateLog {\n    memo?: string\n}\n```\n\n## Examples\n\nSingle owner batch minting (pretty-formatted for readability purposes):\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep171\",\n  \"version\": \"1.1.0\",\n  \"event\": \"nft_mint\",\n  \"data\": [\n    {\"owner_id\": \"foundation.near\", \"token_ids\": [\"aurora\", \"proximitylabs\"]}\n  ]\n}\n```\n\nDifferent owners batch minting:\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep171\",\n  \"version\": \"1.1.0\",\n  \"event\": \"nft_mint\",\n  \"data\": [\n    {\"owner_id\": \"foundation.near\", \"token_ids\": [\"aurora\", \"proximitylabs\"]},\n    {\"owner_id\": \"user1.near\", \"token_ids\": [\"meme\"]}\n  ]\n}\n```\n\nDifferent events (separate log entries):\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep171\",\n  \"version\": \"1.1.0\",\n  \"event\": \"nft_burn\",\n  \"data\": [\n    {\"owner_id\": \"foundation.near\", \"token_ids\": [\"aurora\", \"proximitylabs\"]}\n  ]\n}\n```\n\n```js\nEVENT_JSON:{\n  \"standard\": 
\"nep171\",\n  \"version\": \"1.1.0\",\n  \"event\": \"nft_transfer\",\n  \"data\": [\n    {\"old_owner_id\": \"user1.near\", \"new_owner_id\": \"user2.near\", \"token_ids\": [\"meme\"], \"memo\": \"have fun!\"}\n  ]\n}\n```\n\nAuthorized id:\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep171\",\n  \"version\": \"1.1.0\",\n  \"event\": \"nft_burn\",\n  \"data\": [\n    {\"owner_id\": \"owner.near\", \"token_ids\": [\"goodbye\", \"aurevoir\"], \"authorized_id\": \"thirdparty.near\"}\n  ]\n}\n```\n\nContract metadata update:\n\n```js\nEVENT_JSON:{\n  \"standard\": \"nep171\",\n  \"version\": \"1.1.0\",\n  \"event\": \"contract_metadata_update\",\n  \"data\": []\n}\n```\n\n## Events for Other NFT Methods\n\nNote that the example events above cover two different kinds of events:\n\n1. Events that do not have a dedicated trigger function in the NFT Standard (`nft_mint`, `nft_burn`, `contract_metadata_update`)\n2. An event that has a relevant trigger function in the [NFT Core Standard](https://github.com/near/NEPs/blob/master/neps/nep-0171.md#nft-interface) (`nft_transfer`)\n\nThis event standard also applies beyond the events highlighted here, where future events follow the same convention as the second type. For instance, if an NFT contract uses the [approval management standard](https://github.com/near/NEPs/blob/master/neps/nep-0178.md), it may emit an event for `nft_approve` if that's deemed important by the developer community.\n\nPlease feel free to open pull requests for extending the events standard detailed here as needs arise.\n"
  },
  {
    "path": "neps/nep-0264.md",
    "content": "---\nNEP: 264\nTitle: Utilization of unspent gas for promise function calls\nAuthors: Austin Abell <austinabell8@gmail.com>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/pull/264\nType: Protocol\nVersion: 1.0.0\nCreated: 2021-09-30\nLastUpdated: 2022-05-26\n---\n\n# Summary\n\nThis proposal is to introduce a new host function on the NEAR runtime that allows for scheduling cross-contract function calls using a percentage/weight of the remaining gas in addition to the statically defined amount. This will enable async promise execution to use the remaining gas more efficiently by utilizing unspent gas from the current transaction.\n\n# Motivation\n\nWe are proposing this to be able to utilize gas more efficiently but also to improve the devX of cross-contract calls. Currently, developers must guess how much gas will remain after the current transaction finishes and if this value is too little, the transaction will fail, and if it is too large, gas will be wasted. Therefore, these cross-contract calls need a reasonable default of splitting unused gas efficiently for basic cases without sacrificing the ability to configure the gas amount attached at a granular level. Currently, gas is allocated very inefficiently, requiring more prepaid gas or failed transactions when the allocations are imprecise.\n\n# Guide-level explanation\n\nThis host function is similar to [`promise_batch_action_function_call`](https://github.com/near/nearcore/blob/7d15bbc996282c8ae8f15b8f49d110fc901b84d8/runtime/near-vm-logic/src/logic.rs#L1526), except with an additional parameter that lets you specify how much of the excess gas should be attached to the function call. This parameter is a weight value that determines how much of the excess gas is attached to each function.\n\nSo, for example, if there is 40 gas leftover and three function calls that select weights of 1, 5, and 2, the runtime will add 5, 25, and 10 gas to each function call. 
A developer can specify whether they want to attach a fixed amount of gas, a weight of remaining gas, or both. If at least one function call uses a weight of remaining gas, then all excess gas will be attached to future calls. This proposal allows developers the ability to utilize prepaid gas more efficiently than currently possible.\n\n# Reference-level explanation\n\nThis host function would need to be implemented in `nearcore` and parallel [`promise_batch_action_function_call`](https://github.com/near/nearcore/blob/7d15bbc996282c8ae8f15b8f49d110fc901b84d8/runtime/near-vm-logic/src/logic.rs#L1526). Most details of these functions will be consistent, except that there will be additional bookkeeping for keeping track of which functions specified a weight for unused gas. This will not affect or replace any existing host functions, but this will likely require a slightly higher gas cost than the original `promise_batch_action_function_call` host function due to this additional overhead.\n\nThis host function definition would look like this (as a Rust consumer):\n\n```rust\n    /// Appends `FunctionCall` action to the batch of actions for the given promise pointed by\n    /// `promise_idx`. This function allows not specifying a specific gas value and allowing the\n    /// runtime to assign remaining gas based on a weight.\n    ///\n    /// # Gas\n    ///\n    /// Gas can be specified using a static amount, a weight of remaining prepaid gas, or a mixture\n    /// of both. To omit a static gas amount, `0` can be passed for the `gas` parameter.\n    /// To omit assigning remaining gas, `0` can be passed as the `gas_weight` parameter.\n    ///\n    /// The gas weight parameter works as the following:\n    ///\n    /// All unused prepaid gas from the current function call is split among all function calls\n    /// which supply this gas weight. 
The amount attached to each respective call depends on the\n    /// value of the weight.\n    ///\n    /// For example, if 40 gas is left over from the current method call and three functions specify\n    /// the weights 1, 5, 2, then 5, 25, 10 gas will be added to each function call respectively,\n    /// using up all remaining available gas. Any remaining gas will be allocated to the last\n    /// function call.\n    ///\n    /// # Errors\n    ///\n    /// <...Omitted previous errors as they do not change>\n    /// - If `0` is passed for both `gas` and `gas_weight` parameters\n    pub fn promise_batch_action_function_call_weight(\n        promise_index: u64,\n        method_name_len: u64,\n        method_name_ptr: u64,\n        arguments_len: u64,\n        arguments_ptr: u64,\n        amount_ptr: u64,\n        gas: u64,\n        gas_weight: u64,\n    );\n```\n\nThe only difference from the existing API is `gas_weight` added as another parameter, as an unsigned 64-bit integer.\n\nAs for calculations, the remaining gas at the end of the transaction can be floor divided by the sum of all the weights tracked. Then, after getting this value, attach that value multiplied by the weight to each function call action.\n\nFor example, if there are three weights, `a`, `b`, `c`:\n\n```rust\nweight_sum = a + b + c\na_gas += remaining_gas * a / weight_sum\nb_gas += remaining_gas * b / weight_sum\nc_gas += remaining_gas * c / weight_sum\n```\n\nAny remaining gas that is not allocated to any of these function calls will be attached to the last function call scheduled.\n\n### SDK changes\n\nThis protocol change will allow cross-contract calls to provide a fixed amount of gas and/or adjust the weight of unused gas to use. If neither is provided, it will default to using a weight of 1 for each and no static amount of gas. 
If no function modifies this weight, the runtime will split the unused gas evenly among all function calls.\n\nCurrently, the API for a cross-contract call looks like:\n\n```rust\nlet contract_account_id: AccountId = todo!();\next::some_method(/* parameters */, contract_account_id, 0 /* deposit amount */, 5_000_000_000_000 /* static amount of gas to attach */)\n```\n\nWhen the intended API should not require thinking about how much gas to attach by default, the API will look something like what's shown in [this PR](https://github.com/near/near-sdk-rs/pull/742), which can look like the following:\n\n```rust\ncross_contract::ext(contract_account_id)\n \t// Optional config\n\t.with_attached_deposit(1 /* default deposit of 0 */)\n \t.with_static_gas(Gas(5_000_000_000_000) /* default of 0 */)\n \t.with_unused_gas_weight(2 /* default 1 */)\n\n \t// Then call any method to schedule the function call\n \t.some_method(/* parameters */)\n```\n\nAt a basic level, a developer has only to include the parameters for the function call and specify the account id of the contract being called. Currently, only the amount can be optional because there is no way to set a reasonable default for the amount of gas to use for each function call.\n\n# Drawbacks\n\n- Complexity in refactoring to handle assigning remaining gas at the end of a transaction\n- Complexity in extra calculations for assigning gas will make the host function slightly more expensive than the base one. It is not easy to create an API on the SDK level that can decide which host function to call if dynamic gas assigning is needed or not. 
If both are used, the size of the wasm binary is trivially larger by including both host functions\n- Adds another host function to the runtime, which can probably never be removed\n- Can be confusing to have both static gas and dynamic unused gas and convey what is happening internally to a developer\n- If we start utilizing all prepaid gas, this will likely lead to a higher percentage of prepaid gas usage. This could be an unexpected pattern for users and require them to think about how much gas they are attaching to make sure they only attach what they are willing to spend\n  - Since we are currently refunding a lot of unused gas, this could be a hidden negative side effect\n  - Keep in mind that it will also be positive because transactions will generally succeed more often due to gas being used more efficiently\n\n# Rationale and alternatives\n\nAlternative 1 (fraction parameters):\nThe primary alternative is using a numerator and denominator to represent a fraction instead of a weight. This alternative would be equivalent to the one listed above except for two additional u64 parameters instead of just the one for weight. 
I'll list the tradeoffs as pros and cons:\n\nPros:\n\n- Can under-utilize the gas for the current transaction to limit gas allowed for certain functions\n- This could take responsibility away from DApp users because they would not have to worry about attaching too much prepaid gas\n- Thinking in terms of fractions may be more intuitive for some developers\n- Might future-proof better if we ever need this ability, since we want to minimize the number of host functions created\n\nCons:\n\n- More complicated logic/edge cases to handle to make sure the percentages don't sum to greater than 100% (or adjusting if they do)\n- Precision loss from dividing integers may lead to unexpected results\n  - To get closer to expected, we could use floats for the division, but this gets messy\n- API for specifying a fraction would be messy (need to specify two values rather than just optionally one)\n- There isn't a good default for this, unless there is a special value that indicates a pool of function calls that will split the remaining gas equally, but this defeats the purpose of this alternative completely\n- Slightly larger API (only one extra u64, can probably safely ignore this point)\n\nAlternative 2 (handle within contract/SDK):\nThe other alternative is to handle all of this logic on the contract side, as seen in [this PR](https://github.com/near/near-sdk-rs/pull/523). This is much less feasible/accurate because there is only so much information available within the runtime, and gas costs and internal functionality may not always be the same. 
As discussed on [the respective issue](https://github.com/near/near-sdk-rs/issues/526), this alternative seems to be very infeasible.\n\nPros:\n\n- No protocol change is needed\n- Can still have improved API as with protocol change\n\nCons:\n\n- Additional bloat to every contract, even ones that don't use the pattern (~5kb in PoC, even with simple estimation logic)\n- Still inaccurate gas estimations, because at the point of calculation, we cannot know how much gas will be used for assigning gas values as well as gas consumed after the transaction ends\n  - This leads to either underutilizing or having transactions fail when using too much gas if trying to estimate how much gas will be left\n- Prone to breaking existing contracts on protocol changes that affect gas usage or logic of runtime\n\n# Unresolved questions\n\nWhat needs to be addressed before this gets merged:\n\n- ~~How much refactoring exactly is needed to handle this pattern?~~\n  - ~~Can we keep a queue of receipt and action indices with their respective weights and update their gas values after the current method is executed? Is there a cleaner way to handle this while keeping order?~~\n- ~~Do we want to attach the gas lost due to precision on division to any function?~~\n\n- The remaining gas is now attached to the last function call\n\nWhat would be addressed in future independently of the solution:\n\n- How many users would expect the ability to refund part of the gas after the initial transaction? (is this worth considering the API difference of using fractions rather than weights)\n- Will weights be an intuitive experience for developers?\n\n# Future possibilities\n\nThe future change that would extend from this being implemented is a much cleaner API for the SDKs. 
As mentioned previously in the alternatives section, the API changes from [the changes tested on the SDK](https://github.com/near/near-sdk-rs/pull/523) will remain, but without the overhead from implementing this on the contract level. Thus, not only can this be implemented in Rust, but it will also allow a consistent API for existing and future SDK languages to build on.\n\nThe primary benefit for SDKs is that it removes the need to specify gas when making cross-contract calls explicitly. Currently, there is no easy way of knowing how many function calls will be made to split prepaid gas without a decent amount of overhead. Even if the developer does this, it's impossible to know how much gas will remain after the transaction from inside the contract. Having this host function available will simplify the DevX for contract developers and make the contracts use gas more efficiently.\n"
  },
  {
    "path": "neps/nep-0297.md",
    "content": "---\nNEP: 297\nTitle: Events\nAuthor: Olga Telezhnaya <olga@near.org>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/issues/297\nType: Standards Track\nCategory: Contract\nCreated: 03-Mar-2022\n---\n\n## Summary\n\nEvents format is a standard interface for tracking contract activity.\nThis document is a meta-part of other standards, such as [NEP-141](https://github.com/near/NEPs/issues/141) or [NEP-171](https://github.com/near/NEPs/discussions/171).\n\n## Motivation\n\nApps usually perform many similar actions.\nEach app may have its own way of performing these actions, introducing inconsistency in capturing these events.\n\nNEAR and third-party applications need to track these and similar events consistently.\nIf not, tracking state across many apps becomes infeasible.\nEvents address this issue, providing other applications with the needed standardized data.\n\nInitial discussion is [here](https://github.com/near/NEPs/issues/254).\n\n## Rationale and alternatives\n\n- Why is this design the best in the space of possible designs?\n- What other designs have been considered and what is the rationale for not choosing them?\n- What is the impact of not doing this?\n\n## Specification\n\nMany apps use different interfaces that represent the same action.\nThis interface standardizes that process by introducing event logs.\n\nEvents use the standard logs capability of NEAR.\nEvents are log entries that start with the `EVENT_JSON:` prefix followed by a single valid JSON string.\nJSON string may have any number of space characters in the beginning, the middle, or the end of the string.\nIt's guaranteed that space characters do not break its parsing.\nAll the examples below are pretty-formatted for better readability.\n\nJSON string should have the following interface:\n\n```ts\n// Interface to capture data about an event\n// Arguments\n// * `standard`: name of standard, e.g. nep171\n// * `version`: e.g. 
1.0.0\n// * `event`: type of the event, e.g. nft_mint\n// * `data`: associated event data. Strictly typed for each set {standard, version, event} inside corresponding NEP\ninterface EventLogData {\n  standard: string;\n  version: string;\n  event: string;\n  data?: unknown;\n}\n```\n\nThus, to emit an event, you only need to log a string following the rules above. Here is a bare-bones example using Rust SDK `near_sdk::log!` macro (security note: prefer using `serde_json` or alternatives to serialize the JSON string to avoid potential injections and corrupted events; note also that literal braces in the format string must be escaped as `{{` and `}}`):\n\n```rust\nuse near_sdk::log;\n\n// ...\nlog!(\n    r#\"EVENT_JSON:{{\"standard\": \"nepXXX\", \"version\": \"1.0.0\", \"event\": \"YYY\", \"data\": {{\"token_id\": \"{}\"}}}}\"#,\n    token_id\n);\n// ...\n```\n\n#### Valid event logs\n\n```js\nEVENT_JSON:{\n    \"standard\": \"nepXXX\",\n    \"version\": \"1.0.0\",\n    \"event\": \"xyz_is_triggered\"\n}\n```\n\n```js\nEVENT_JSON:{\n    \"standard\": \"nepXXX\",\n    \"version\": \"1.0.0\",\n    \"event\": \"xyz_is_triggered\",\n    \"data\": {\n        \"triggered_by\": \"foundation.near\"\n    }\n}\n```\n\n#### Invalid event logs\n\n- Two events in a single log entry (instead, call `log` for each individual event)\n\n```js\nEVENT_JSON:{\n    \"standard\": \"nepXXX\",\n    \"version\": \"1.0.0\",\n    \"event\": \"abc_is_triggered\"\n}\nEVENT_JSON:{\n    \"standard\": \"nepXXX\",\n    \"version\": \"1.0.0\",\n    \"event\": \"xyz_is_triggered\"\n}\n```\n\n- Invalid JSON data\n\n```js\nEVENT_JSON:invalid json\n```\n\n- Missing required fields `standard`, `version`, or `event`\n\n```js\nEVENT_JSON:{\n    \"standard\": \"nepXXX\",\n    \"event\": \"xyz_is_triggered\",\n    \"data\": {\n        \"triggered_by\": \"foundation.near\"\n    }\n}\n```\n\n## Reference Implementation\n\n[Fungible Token Events Implementation](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/fungible_token/events.rs)\n\n[Non-Fungible Token Events 
Implementation](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/non_fungible_token/events.rs)\n\n## Drawbacks\n\nThere is a known limitation of 16kb strings when capturing logs.\nThis impacts the amount of events that can be processed.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0300.md",
    "content": "---\nNEP: 300\nTitle: Fungible Token Events\nAuthor: Olga Telezhnaya <olga@near.org>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/pull/300, https://github.com/near/NEPs/issues/271\nType: Standards Track\nCategory: Contract\nCreated: 15-Dec-2021\n---\n\n# Fungible Token Event\n\nVersion `1.0.0`\n\n## Summary\n\nStandard interfaces for FT contract actions.\nExtension of [NEP-297](https://github.com/near/NEPs/blob/master/neps/nep-0297.md)\n\n## Motivation\n\nNEAR and third-party applications need to track `mint`, `transfer`, `burn` events for all FT-driven apps consistently.\nThis extension addresses that.\n\nKeep in mind that applications, including NEAR Wallet, could require implementing additional methods, such as [`ft_metadata`](https://github.com/near/NEPs/blob/master/neps/nep-0148.md), to display the FTs correctly.\n\n## Interface\n\nFungible Token Events MUST have `standard` set to `\"nep141\"`, standard version set to `\"1.0.0\"`, `event` value is one of `ft_mint`, `ft_burn`, `ft_transfer`, and `data` must be of one of the following relevant types: `FtMintLog[] | FtTransferLog[] | FtBurnLog[]`:\n\n```ts\ninterface FtEventLogData {\n    standard: \"nep141\",\n    version: \"1.0.0\",\n    event: \"ft_mint\" | \"ft_burn\" | \"ft_transfer\",\n    data: FtMintLog[] | FtTransferLog[] | FtBurnLog[],\n}\n```\n\n```ts\n// An event log to capture tokens minting\n// Arguments\n// * `owner_id`: \"account.near\"\n// * `amount`: the number of tokens to mint, wrapped in quotes and treated\n//   like a string, although the number will be stored as an unsigned integer\n//   with 128 bits.\n// * `memo`: optional message\ninterface FtMintLog {\n    owner_id: string,\n    amount: string,\n    memo?: string\n}\n\n// An event log to capture tokens burning\n// Arguments\n// * `owner_id`: owner of tokens to burn\n// * `amount`: the number of tokens to burn, wrapped in quotes and treated\n//   like a string, although the number will be stored as an 
unsigned integer\n//   with 128 bits.\n// * `memo`: optional message\ninterface FtBurnLog {\n    owner_id: string,\n    amount: string,\n    memo?: string\n}\n\n// An event log to capture token transfers\n// Arguments\n// * `old_owner_id`: \"owner.near\"\n// * `new_owner_id`: \"receiver.near\"\n// * `amount`: the number of tokens to transfer, wrapped in quotes and treated\n//   like a string, although the number will be stored as an unsigned integer\n//   with 128 bits.\n// * `memo`: optional message\ninterface FtTransferLog {\n    old_owner_id: string,\n    new_owner_id: string,\n    amount: string,\n    memo?: string\n}\n```\n\n## Examples\n\nBatch mint:\n\n```js\nEVENT_JSON:{\n    \"standard\": \"nep141\",\n    \"version\": \"1.0.0\",\n    \"event\": \"ft_mint\",\n    \"data\": [\n        {\"owner_id\": \"foundation.near\", \"amount\": \"500\"}\n    ]\n}\n```\n\nBatch transfer:\n\n```js\nEVENT_JSON:{\n    \"standard\": \"nep141\",\n    \"version\": \"1.0.0\",\n    \"event\": \"ft_transfer\",\n    \"data\": [\n        {\"old_owner_id\": \"from.near\", \"new_owner_id\": \"to.near\", \"amount\": \"42\", \"memo\": \"hi hello bonjour\"},\n        {\"old_owner_id\": \"user1.near\", \"new_owner_id\": \"user2.near\", \"amount\": \"7500\"}\n    ]\n}\n```\n\nBatch burn:\n\n```js\nEVENT_JSON:{\n    \"standard\": \"nep141\",\n    \"version\": \"1.0.0\",\n    \"event\": \"ft_burn\",\n    \"data\": [\n        {\"owner_id\": \"foundation.near\", \"amount\": \"100\"}\n    ]\n}\n```\n\n## Further methods\n\nNote that the example events above cover two different kinds of events:\n\n1. Events that are not specified in the FT Standard (`ft_mint`, `ft_burn`)\n2. An event that is covered in the [FT Core Standard](https://github.com/near/NEPs/blob/master/neps/nep-0141.md) (`ft_transfer`)\n\nPlease feel free to open pull requests for extending the events standard detailed here as needs arise.\n"
  },
  {
    "path": "neps/nep-0330.md",
    "content": "---\nNEP: 330\nTitle: Source Metadata\nAuthor: Ben Kurrek <ben.kurrek@near.org>, Osman Abdelnasir <osman@near.org>, Andrey Gruzdev <@canvi>, Alexey Zenin <@alexthebuildr>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/discussions/329\nType: Standards Track\nCategory: Contract\nVersion: 1.2.0\nCreated: 27-Feb-2022\nUpdated: 19-Feb-2023\n---\n\n## Summary\n\nThe contract source metadata represents a standardized interface designed to facilitate the auditing and inspection of source code associated with a deployed smart contract. Adoption of this standard remains discretionary; however, it is strongly advocated for developers who maintain an open-source approach to their contracts. This initiative promotes greater accountability and transparency within the ecosystem, encouraging best practices in contract development and deployment.\n\n## Motivation\n\nThe incorporation of metadata facilitates the discovery and validation of deployed source code, thereby significantly reducing the requisite level of trust during code integration or interaction processes.\n\nThe absence of an accepted protocol for identifying the source code or author contact details of a deployed smart contract presents a challenge. Establishing a standardized framework for accessing the source code of any given smart contract would foster a culture of transparency and collaborative engagement.\n\nMoreover, the current landscape does not offer a straightforward mechanism to verify the authenticity of a smart contract's deployed source code against its deployed version. To address this issue, it is imperative that metadata includes specific details that enable contract verification through reproducible builds.\n\nFurthermore, it is desirable for users and dApps to possess the capability to interpret this metadata, thereby identifying executable methods and generating UIs that facilitate such functionalities. 
This also extends to acquiring comprehensive insights into potential future modifications by the contract or its developers, enhancing overall system transparency and user trust.\n\nThe initial discussion can be found [here](https://github.com/near/NEPs/discussions/329).\n\n## Rationale and alternatives\n\nThere is a lot of information that can be held about a contract. Ultimately, we wanted to limit it to the fewest fields possible while still achieving our goal. This decision was made to avoid bloating contracts with unnecessary storage and to keep the standard simple and understandable.\n\n## Specification\n\nSuccessful implementations of this standard will introduce a new `ContractSourceMetadata` struct that will hold all the necessary information to be queried. This struct will be kept on the contract level.\n\nThe metadata will include the following optional fields:\n\n- `version`: a string that references the specific commit ID or a tag of the code currently deployed on-chain. Examples: `\"v0.8.1\"`, `\"a80bc29\"`.\n- `link`: a URL to the currently deployed code. It must include a version or tag if using a GitHub or GitLab link. Examples: \"https://github.com/near/near-cli-rs/releases/tag/v0.8.1\", \"https://github.com/near/cargo-near-new-project-template/tree/9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420\" or an IPFS CID.\n- `standards`: a list of objects (see type definition below) that enumerates the NEPs supported by the contract. If this extension is supported, it is advised to also include NEP-330 version 1.1.0 in the list (`{standard: \"nep330\", version: \"1.1.0\"}`).\n- `build_info`: a build details object (see type definition below) that contains all the necessary information about how the contract was built, making it possible for others to reproduce the same WASM of this contract.\n\n```ts\ntype ContractSourceMetadata = {\n    version: string|null, // optional, commit hash being used for the currently deployed WASM. 
If the contract is not open-sourced, this could also be a numbering system for internal organization / tracking such as \"1.0.0\" and \"2.1.0\".\n    link: string|null, // optional, link to open source code such as a Github repository or a CID to somewhere on IPFS, e.g., \"https://github.com/near/cargo-near-new-project-template/tree/9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420\"\n    standards: Standard[]|null, // optional, standards and extensions implemented in the currently deployed WASM, e.g., [{standard: \"nep330\", version: \"1.1.0\"},{standard: \"nep141\", version: \"1.0.0\"}].\n    build_info: BuildInfo|null, // optional, details that are required for contract WASM reproducibility.\n}\n\ntype Standard = {\n    standard: string, // standard name, e.g., \"nep141\"\n    version: string, // semantic version number of the Standard, e.g., \"1.0.0\"\n}\n\ntype BuildInfo = {\n    build_environment: string, // reference to a reproducible build environment docker image, e.g., \"docker.io/sourcescan/cargo-near@sha256:bf488476d9c4e49e36862bbdef2c595f88d34a295fd551cc65dc291553849471\" or something else pointing to the build environment.\n    source_code_snapshot: string, // reference to the source code snapshot that was used to build the contract, e.g., \"git+https://github.com/near/cargo-near-new-project-template.git#9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420\" or \"ipfs://<ipfs-hash>\".\n    contract_path: string, // relative path to contract crate within the source code, e.g., \"contracts/contract-one\". 
Often, it is the root of the repository, so it can be set to an empty string.\n    build_command: string[], // the exact command that was used to build the contract, with all the flags, e.g., [\"cargo\", \"near\", \"build\", \"--no-abi\"].\n    output_wasm_path: string|null, // absolute path inside the build environment where the resulting `*.wasm` file was put during the build\n}\n```\n\nIn order to view this information, contracts must include a getter that returns the struct:\n\n```ts\nfunction contract_source_metadata(): ContractSourceMetadata {}\n```\n\n### Ensuring WASM Reproducibility\n\n#### Build Environment Docker Image\n\nWhen using a Docker image as a reference, it's important to specify the digest of the image to ensure reproducibility, since a tag could be reassigned to a different image.\n\n#### Paths Inside Docker Image\n\nDuring the build, paths from the source of the build as well as the location of the cargo registry could be saved into the WASM, which affects reproducibility. Therefore, we need to ensure that everyone uses the same paths inside the Docker image. We propose using the following paths:\n\n- `/home/near/code` - mounted volume from the host system containing the source code.\n- `/home/near/.cargo` - Cargo registry.\n\n#### Cargo.lock\n\nIt is important to have `Cargo.lock` inside the source code snapshot to ensure reproducibility. Example: https://github.com/near/core-contracts.\n\n## Reference Implementation\n\nAs an example, consider a contract located at the root path of the repository, which was deployed using the `cargo near deploy --no-abi` command and the build environment Docker image `sourcescan/cargo-near@sha256:bf488476d9c4e49e36862bbdef2c595f88d34a295fd551cc65dc291553849471`. Its latest commit hash is `9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420`, and its open-source code can be found at `https://github.com/near/cargo-near-new-project-template`. 
This contract would then include a struct with the following fields:\n\n```ts\ntype ContractSourceMetadata = {\n    version: \"1.0.0\",\n    link: \"https://github.com/near/cargo-near-new-project-template/tree/9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420\",\n    standards: [\n        {\n            standard: \"nep330\",\n            version: \"1.3.0\"\n        }\n    ],\n    build_info: {\n        build_environment: \"docker.io/sourcescan/cargo-near@sha256:bf488476d9c4e49e36862bbdef2c595f88d34a295fd551cc65dc291553849471\",\n        source_code_snapshot: \"git+https://github.com/near/cargo-near-new-project-template?rev=9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420\",\n        contract_path: \"\",\n        build_command: [\"cargo\", \"near\", \"deploy\", \"--no-abi\"],\n        output_wasm_path: \"/home/near/code/target/near/cargo_near_new_project_name.wasm\"\n    }\n}\n```\n\nCalling the view function `contract_source_metadata`, the contract would return:\n\n```json\n{\n    \"version\": \"1.0.0\",\n    \"link\": \"https://github.com/near/cargo-near-new-project-template/tree/9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420\",\n    \"standards\": [\n        {\n            \"standard\": \"nep330\",\n            \"version\": \"1.3.0\"\n        }\n    ],\n    \"build_info\": {\n        \"build_environment\": \"docker.io/sourcescan/cargo-near@sha256:bf488476d9c4e49e36862bbdef2c595f88d34a295fd551cc65dc291553849471\",\n        \"source_code_snapshot\": \"git+https://github.com/near/cargo-near-new-project-template?rev=9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420\",\n        \"contract_path\": \"\",\n        \"build_command\": [\"cargo\", \"near\", \"deploy\", \"--no-abi\"],\n        \"output_wasm_path\": \"/home/near/code/target/near/cargo_near_new_project_name.wasm\"\n    }\n}\n```\n\nThis could be used by SourceScan to reproduce the same WASM using the build details and to verify the on-chain WASM code with the reproduced one.\n\nAn example implementation can be seen below.\n\n```rust\n/// 
Simple Implementation\n#[near_bindgen]\npub struct Contract {\n    pub contract_metadata: ContractSourceMetadata\n}\n\n/// NEP supported by the contract.\npub struct Standard {\n    pub standard: String,\n    pub version: String\n}\n\n/// BuildInfo structure\npub struct BuildInfo {\n    pub build_environment: String,\n    pub source_code_snapshot: String,\n    pub contract_path: String,\n    pub build_command: Vec<String>,\n    pub output_wasm_path: Option<String>,\n}\n\n/// Contract metadata structure\npub struct ContractSourceMetadata {\n    pub version: Option<String>,\n    pub link: Option<String>,\n    pub standards: Option<Vec<Standard>>,\n    pub build_info: Option<BuildInfo>,\n}\n\n/// Minimum Viable Interface\npub trait ContractSourceMetadataTrait {\n    fn contract_source_metadata(&self) -> ContractSourceMetadata;\n}\n\n/// Implementation of the view function\n#[near_bindgen]\nimpl ContractSourceMetadataTrait for Contract {\n    fn contract_source_metadata(&self) -> ContractSourceMetadata {\n        self.contract_metadata.clone()\n    }\n}\n```\n\n## Future possibilities\n\n- By having a standard outlining metadata for an arbitrary contract, any information that pertains to the contract level can be added based on requests from the developer community.\n\n## Decision Context\n\n### 1.0.0 - Initial Version\n\nThe initial version of NEP-330 was approved by @jlogelin on Mar 29, 2022.\n\n### 1.1.0 - Contract Metadata Extension\n\nThe extension NEP-351 that added Contract Metadata to this NEP-330 was approved by Contract Standards Working Group members on January 17, 2023 ([meeting recording](https://youtu.be/pBLN9UyE6AA)).\n\n#### Benefits\n\n- Unlocks NEP extensions that otherwise would be hard to integrate into the tooling as it would be guess-based (e.g. 
see \"interface detection\" concerns in the Non-transferrable NFT NEP)\n- Standardization enables composability as it makes it easier to interact with contracts when you can programmatically check compatibility\n- This NEP extension introduces an optional field, so there is no breaking change to the original NEP\n\n#### Concerns\n\n| # | Concern | Resolution | Status |\n| - | - | - | - |\n| 1 | Integer field as a standard reference is limiting as third-party projects may want to introduce their own standards without pushing them through the NEP process | Author accepted the proposed string-value standard reference (e.g. “nep123” instead of just 123, and allow “xyz001” as previously it was not possible to express it) | Resolved |\n| 2 | NEP-330 and NEP-351 should be included in the list of the supported NEPs | There seems to be a general agreement that it is a good default, so the NEP was updated | Resolved |\n| 3 | A JSON Event could be beneficial, so tooling can react to the changes in the supported standards | It is outside the scope of this NEP. Also, the list of supported standards only changes with contract re-deployment, so tooling can track DEPLOY_CODE events and check the list of supported standards when new code is deployed | Won’t fix |\n\n### 1.2.0 - Build Details Extension\n\nThe NEP extension adds build details to the contract metadata, containing the necessary information about how the contract was built. This makes it possible for others to reproduce the same WASM of this contract. The idea first appeared in the [cargo-near SourceScan integration thread](https://github.com/near/cargo-near/issues/131).\n\n### 1.3.0 - Add `output_wasm_path` field to Build Details Extension\n\n1. This field is required in order to verify the build in a way that is agnostic of the \n   specific language/toolchain the contract is implemented with.\n2. 
The field's type is `Option<String>` (Rust semantics), which allows backward compatibility with 1.2.0 contract metadata\n   when parsing, since the field may be absent.\n3. If the field is present, the contract metadata is considered to be at least version 1.3.0.\n4. A valid value for the field is a Unix path to a `.wasm` file that is a subpath of \n   `/home/near/code` (mentioned in [Paths Inside Docker Image](#paths-inside-docker-image)).\n\n#### Benefits\n\n- This NEP extension gives developers the capability to save all the required build details, making it possible to reproduce the same WASM code in the future. This ensures greater consistency in contracts and the ability to verify source code. With the assistance of tools like SourceScan and cargo-near, the development process on NEAR becomes significantly easier.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0364.md",
    "content": "---\nNEP: 364\nTitle: Efficient signature verification and hashing precompile functions\nAuthor: Blas Rodriguez Irizar <rodrigblas@gmail.com>\nStatus: Final\nDiscussionsTo: https://github.com/nearprotocol/neps/pull/364\nType: Runtime Spec\nCategory: Contract\nCreated: 15-Jun-2022\n---\n\n## Summary\n\nThis NEP proposes adding to the NEAR runtime a pre-compiled\nfunction for verifying signatures, which can help IBC-compatible light clients run on-chain.\n\n## Motivation\n\nSignature verification and hashing are ubiquitous operations in light clients,\nespecially in PoS consensus mechanisms. Based on Polkadot's consensus mechanism,\nthere will be a need for verification of ~200 signatures every minute\n(Polkadot’s authority set is ~300 signers and it may be increased in the future: https://polkadot.polkassembly.io/referendum/16).\n\nTherefore, a mechanism to perform these operations cost-effectively in terms\nof gas and speed would be highly beneficial. Currently, NEAR does not have any native signature verification toolbox.\nThis implies that a light client operating inside NEAR will have to import a library\ncompiled to WASM, as mentioned on [Zulip](https://near.zulipchat.com/#narrow/stream/295302-general/topic/light_client).\n\nPolkadot uses [three different cryptographic schemes](https://wiki.polkadot.network/docs/learn-keys)\nfor its keys/accounts, which also translates into different signature types. However, for this NEP the focus is on:\n\n- The vanilla ed25519 implementation, which uses Schnorr signatures.\n\n## Rationale and alternatives\n\nAdd signature verification functions into the runtime as host functions:\n\n- An ED25519 signature verification function using the `ed25519_dalek` crate, added to the NEAR runtime as a pre-compiled function.\n\nBenchmarks were run using an on-chain signature verifier smart contract importing the aforementioned functions from\nwidely used crypto Rust crates. 
The biggest pitfall of these functions running wasm code instead of native\nis performance and gas cost. Our [benchmarks](https://github.com/blasrodri/near-test) show the following results:\n\n```log\nnear call sitoula-test.testnet verify_ed25519 '{\"signature_p1\": [145,193,203,18,114,227,14,117,33,213,121,66,130,14,25,4,36,120,46,142,226,215,7,66,122,112,97,30,249,135,61,165], \"signature_p2\": [221,249,252,23,105,40,56,70,31,152,236,141,154,122,207,20,75,118,79,90,168,6,221,122,213,29,126,196,216,104,191,6], \"msg\": [107,97,106,100,108,102,107,106,97,108,107,102,106,97,107,108,102,106,100,107,108,97,100,106,102,107,108,106,97,100,115,107], \"iterations\": 10}' --accountId sitoula-test.testnet --gas 300000000000000\n# transaction id DZMuFHisupKW42w3giWxTRw5nhBviPu4YZLgKZ6cK4Uq\n```\n\nWith `iterations = 130` **all these calls return ExecutionError**: `'Exceeded the maximum amount of gas allowed to burn per contract.'`\nWith iterations = 50 these are the results:\n\n```text\ned25519: tx id 6DcJYfkp9fGxDGtQLZ2m6PEDBwKHXpk7Lf5VgDYLi9vB (299 Tgas)\n```\n\n- Performance in wall clock time when you compile the signature validation library directly from rust to native.\n  Here are the results on an AMD Ryzen 9 5900X 12-Core Processor machine:\n\n```text\n# 10k signature verifications\ned25519: took 387ms\n```\n\n- Performance in wall clock time when you compile the library into wasm first and then use the single-pass compiler in Wasmer 1 to then compile to native.\n\n```text\ned25519: took 9926ms\n```\n\nAs an extra data point, when passing `--enable-simd` instead of `--singlepass`\n\n```text\ned25519: took 3085ms\n```\n\nSteps to reproduce:\ncommit: `31cf97fb2e155d238308f062c4b92bae716ac19f` in `https://github.com/blasrodri/near-test`\n\n```sh\n# wasi singlepass\ncargo wasi build --bin benches --release\nwasmer compile --singlepass ./target/wasm32-wasi/release/benches.wasi.wasm -o benches_singlepass\nwasmer run ./benches_singlepass\n```\n\n```sh\n# rust native\ncargo 
run --bin benches --release\n```\n\nOverall, the difference between the two versions (native vs wasi + singlepass) is:\n\n```text\ned25519: 25.64x slower\n```\n\n### What is the impact of not doing this?\n\nThe costs of running IBC-compatible trustless bridges would be very high. Moreover, because signature verification\nis such an expensive operation, contracts would be forced to split the process of batch signature verification\nacross multiple transactions.\n\n### Why is this design the best in the space of possible designs?\n\nAdding existing, proven and vetted crypto crates into the runtime is a safe workaround. It boosts performance\nby 20-25x according to our benchmarks. This will both reduce operating costs significantly and\nenable the contract to verify all the signatures in one transaction, which will simplify the contract design.\n\n### What other designs have been considered and what is the rationale for not choosing them?\n\nOne possible alternative would be to improve the runtime implementation so that it can compile WASM code to sufficiently\nfast machine code. 
Even if it is not as fast as LLVM-produced native code, it could still be acceptable for\nthese types of use cases (CPU-intensive functions) and would avoid the need to add host functions.\nThe effort of adding such a feature would be significantly higher than adding these host functions one by one.\nOn the other hand, it would decrease the need to include more host functions in the future.\n\nAnother alternative is to deal with the high cost of computing/verifying these signatures in some other manner.\nDecreasing the overall cost of gas and increasing the limits of gas available to attach to a contract could be a possibility.\nIntroducing such a modification for some contracts but not others would be rather arbitrary\nand not straightforward to implement, but it is an alternative nevertheless.\n\n## Specification\n\nThis NEP aims to introduce the following host function:\n\n```rust\nextern \"C\"{\n\n/// Verify an ED25519 signature given a message and a public key.\n/// Ed25519 is a public-key signature system with several attractive features.\n///\n/// Proof of Stake validator sets can contain different signature schemes.\n/// Ed25519 is one of the most used ones across blockchains, hence the importance of adding it.\n/// For further reference, visit: https://ed25519.cr.yp.to\n/// # Returns\n/// - 1 if the signature was properly verified\n/// - 0 if the signature failed to be verified\n///\n/// # Cost\n///\n/// Each input can either be in memory or in a register. Set the length of the input to `u64::MAX`\n/// to declare that the input is a register number and not a pointer.\n/// Each input has a gas cost input_cost(num_bytes) that depends on whether it is from memory\n/// or from a register. It is either read_memory_base + num_bytes * read_memory_byte in the\n/// former case or read_register_base + num_bytes * read_register_byte in the latter. 
This function\n/// is labeled as `input_cost` below.\n///\n/// `input_cost(num_bytes_signature) + input_cost(num_bytes_message) + input_cost(num_bytes_public_key) +\n///  ed25519_verify_base + ed25519_verify_byte * num_bytes_message`\n///\n/// # Errors\n///\n/// If the signature size is not equal to 64 bytes, or the public key length is not equal to 32 bytes, contract execution is terminated with an error.\n  fn ed25519_verify(\n    sig_len: u64,\n    sig_ptr: u64,\n    msg_len: u64,\n    msg_ptr: u64,\n    pub_key_len: u64,\n    pub_key_ptr: u64,\n  ) -> u64;\n}\n```\n\nA possible `rust-sdk` implementation could look like this:\n\n```rust\npub fn ed25519_verify(sig: &ed25519::Signature, msg: &[u8], pub_key: &ed25519::Public) -> bool;\n```\n\nOnce this NEP is approved and integrated, these functions will be available in the `near_sdk` crate in the\n`env` module.\n\n[This blog post](https://hdevalence.ca/blog/2020-10-04-its-25519am) describes the various ways in which the existing Ed25519 implementations differ in practice. The behavior that this proposal uses, which is shared by Go `crypto/ed25519`, Rust `ed25519-dalek` (using the `verify` function with the `legacy_compatibility` feature turned off) and several others, makes the following decisions:\n\n- The encoding of the values `R` and `s` must be canonical, while the encoding of `A` doesn't need to be.\n- The verification equation is `R = [s]B − [k]A`.\n- No additional checks are performed. 
In particular, the points outside of the order-l subgroup are accepted, as are the points in the torsion subgroup.\n\nNote that this implementation only refers to the `verify` function in the\ncrate `ed25519-dalek` and **not** `verify_strict` or `verify_batch`.\n\n## Security Implications (Optional)\n\nWe have chosen this crate because it is already integrated into `nearcore`.\n\n## Unresolved Issues (Optional)\n\n- What parts of the design do you expect to resolve through the NEP process before this gets merged?\n  Both the function signatures and crates are up for discussion.\n\n## Future possibilities\n\nI currently do not envision any extension in this regard.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0366.md",
    "content": "---\nNEP: 366\nTitle: Meta Transactions\nAuthor: Illia Polosukhin <ilblackdragon@gmail.com>, Egor Uleyskiy (egor.ulieiskii@gmail.com), Alexander Fadeev (fadeevab.com@gmail.com)\nStatus: Final\nDiscussionsTo: https://github.com/nearprotocol/neps/pull/366\nType: Protocol Track\nCategory: Runtime\nVersion: 1.1.0\nCreated: 19-Oct-2022\nLastUpdated: 03-Aug-2023\n---\n\n## Summary\n\nIn-protocol meta transactions allow a third-party account to initiate a transaction and pay its fees on behalf of another account.\n\n## Motivation\n\nNEAR has been designed with simplicity of onboarding in mind. One of the large hurdles right now is that after creating an implicit or even a named account, the user does not have NEAR to pay the gas fees needed to interact with apps.\n\nFor example, apps that pay users for doing work (like NEARCrowd or Sweatcoin) or free-to-play games.\n\n[Aurora Plus](https://aurora.plus) has shown the viability of relayers that can offer a number of free transactions and a subscription model. 
This shifts the complexity of dealing with fees from the user space to the infrastructure.\n\n## Rationale and alternatives\n\nThe proposed design here provides the easiest way for users and developers to onboard and to pay for user transactions.\n\nAn alternative is to have a proxy contract deployed on the user account.\nThis design has severe limitations, as it requires the user to deploy such a contract and incur additional costs for storage.\n\n## Specification\n\n- **User** (Sender) is the one who is going to send the `DelegateAction` to the Receiver via the Relayer.\n- **Relayer** is the one who publishes the `DelegateAction` to the protocol.\n- The **User** and the **Relayer** don't trust each other.\n\nThe main flow of the meta transaction will be as follows:\n\n- User specifies `sender_id` (the user's account id), `receiver_id` (the receiver's account id) and other information (see `DelegateAction` format).\n- User signs `DelegateAction` specifying the set of actions that they need to be executed.\n- User forms `SignedDelegateAction` with the `DelegateAction` and the signature.\n- User forms `DelegateActionMessage` with the `SignedDelegateAction`.\n- User sends `DelegateActionMessage` data to the relayer.\n- Relayer verifies the actions specified in `DelegateAction`: the total cost and whether the user included a reward for the relayer.\n- Relayer forms a `Transaction` with `receiver_id` equal to `delegate_action.sender_id` and `actions: [SignedDelegateAction { ... }]`, then signs it with its key. Note that such transactions can contain other actions toward the user's account (for example, calling a function).\n- This transaction is processed normally. 
A `Receipt` is created with a copy of the actions in the transaction.\n- When processing a `SignedDelegateAction`, a number of checks are done (see below), mainly a check to ensure that the `signature` matches the user account's key.\n- When a `Receipt` with a valid `SignedDelegateAction` in its actions arrives at the user's account, it gets executed. Execution means creation of a new `Receipt` with `receiver_id: AccountId` and `actions: Action` matching `receiver_id` and `actions` in the `DelegateAction`.\n- The new `Receipt` looks like a normal receipt that could have originated from the user's account, with `predecessor_id` equal to the user's account, `signer_id` equal to the relayer's account, and `signer_public_key` equal to the relayer's public key.\n\n## Diagram\n\n![Delegate Action Diagram](assets/nep-0366/NEP-DelegateAction.png)\n\n## Limitations\n\n- If the User account exists, then deposit and gas are refunded as usual: gas is refunded to the Relayer, and the deposit is refunded to the User.\n- If the User account doesn't exist, then gas is refunded to the Relayer and the deposit is burnt.\n- A `DelegateAction`'s actions mustn't contain another `DelegateAction` (nested delegate actions are not allowed).\n\n### DelegateAction\n\nDelegate actions allow an account to initiate a batch of actions on behalf of a receiving account, allowing proxy actions. 
This can be used to implement meta transactions.\n\n```rust\npub struct DelegateAction {\n    /// Signer of the delegated actions\n    sender_id: AccountId,\n    /// Receiver of the delegated actions.\n    receiver_id: AccountId,\n    /// List of actions to be executed.\n    actions: Vec<Action>,\n    /// Nonce to ensure that the same delegate action is not sent twice by a relayer; it must match the nonce of the given account's `public_key`.\n    /// After this action is processed, the nonce is incremented.\n    nonce: Nonce,\n    /// The maximal height of the block in the blockchain below which the given DelegateAction is valid.\n    max_block_height: BlockHeight,\n    /// Public key that is used to sign this delegated action.\n    public_key: PublicKey,\n}\n```\n\n```rust\npub struct SignedDelegateAction {\n    delegate_action: DelegateAction,\n    /// Signature of the `DelegateAction`.\n    signature: Signature,\n}\n```\n\nSupporting batches of `actions` means `DelegateAction` can be used to initiate complex steps like creating new accounts, transferring funds, deploying contracts, and executing an initialization function all within the same transaction.\n\n##### Validation\n\n1. Validate that the `DelegateAction` doesn't contain a nested `DelegateAction` in its actions.\n2. To ensure that a `DelegateAction` is correct, on receipt the following signature verification is performed: `verify_signature(hash(delegate_action), delegate_action.public_key, signature)`.\n3. Verify `transaction.receiver_id` matches `delegate_action.sender_id`.\n4. Verify `delegate_action.max_block_height`. The `max_block_height` must be greater than the current block height (at the `DelegateAction` processing time).\n5. Verify `delegate_action.sender_id` owns `delegate_action.public_key`.\n6. 
Verify `delegate_action.nonce > sender.access_key.nonce`.\n\nA `message` is formed in the following format:\n\n```rust\nstruct DelegateActionMessage {\n    signed_delegate_action: SignedDelegateAction\n}\n```\n\nThe following security concerns are addressed by this format:\n\n- `sender_id` is included to ensure that the relayer sets the correct `transaction.receiver_id`.\n- `max_block_height` is included to ensure that the `DelegateAction` isn't expired.\n- `nonce` is included to ensure that the `DelegateAction` can't be replayed again.\n- `public_key` and `sender_id` are needed to verify the signature against the right account, work across rotating keys, and fetch the correct `nonce`.\n\nThe permissions are verified based on the variant of `public_key`:\n\n- `AccessKeyPermission::FullAccess`: all actions are allowed.\n- `AccessKeyPermission::FunctionCall`: only a single `FunctionCall` action is allowed in `actions`.\n  - `DelegateAction.receiver_id` must match the `account[public_key].receiver_id`\n  - `DelegateAction.actions[0].method_name` must be in the `account[public_key].method_names`\n\n##### Outcomes\n\n- If the `signature` matches the receiver account's `public_key`, a new receipt is created from this account with a set of `ActionReceipt { receiver_id, action }` for each action in `actions`.\n\n##### Recommendations\n\n- Because the User doesn't trust the Relayer, the User should verify that the Relayer has submitted the `DelegateAction` and check the execution result.\n\n### Errors\n\n- If the Sender's account doesn't exist\n\n```rust\n/// Happens when TX receiver_id doesn't exist\nAccountDoesNotExist\n```\n\n- If the `signature` does not match the data and the `public_key` of the given key, then the following error will be returned\n\n```rust\n/// Signature does not match the provided actions and given signer public key.\nDelegateActionInvalidSignature\n```\n\n- If the `sender_id` doesn't match the `tx.receiver_id`\n\n```rust\n/// Receiver of the transaction 
doesn't match Sender of the delegate action\nDelegateActionSenderDoesNotMatchTxReceiver\n```\n\n- If the current block height is equal to or greater than `max_block_height`\n\n```rust\n/// Delegate action has expired\nDelegateActionExpired\n```\n\n- If the `public_key` does not exist for the Sender account\n\n```rust\n/// The given public key doesn't exist for Sender account\nDelegateActionAccessKeyError\n```\n\n- If the `nonce` isn't greater than the nonce of the `public_key` for the `sender_id`\n\n```rust\n/// Nonce must be greater than sender[public_key].nonce\nDelegateActionInvalidNonce\n```\n\n- If `nonce` is too large\n\n```rust\n/// DelegateAction nonce is larger than the upper bound given by the block height (block_height * 1e6)\nDelegateActionNonceTooLarge\n```\n\n- If the list of Transaction actions contains more than one `DelegateAction`\n\n```rust\n/// There should be only one DelegateAction\nDelegateActionMustBeOnlyOne\n```\n\nSee the [DelegateAction specification](https://nomicon.io/RuntimeSpec/Actions#DelegateAction) for details.\n\n## Security Implications\n\nDelegate actions do not override `signer_public_key`, leaving it as that of the original signer that initiated the transaction (e.g. the relayer in the meta transaction case). Although it is possible to override the `signer_public_key` in the context with the one from the `DelegateAction`, there is no clear value in that.\n\nSee the **_Validation_** section in the [DelegateAction specification](https://nomicon.io/RuntimeSpec/Actions#DelegateAction) for security considerations around what the user signs and the validation of actions with different permissions.\n\n## Drawbacks\n\n- Increases the complexity of NEAR's transactional model.\n- Meta transactions take an extra block to execute, as they first need to be included by the originating account, then routed to the delegate account, and only after that to the real destination.\n- The User can't call functions from different contracts in the same `DelegateAction`. 
This is because `DelegateAction` has only one receiver for all inner actions.\n- The Relayer must verify most of the parameters before submitting a `DelegateAction`, making sure that one of the function calls is the reward action. Either way, this remains a risk for the Relayer in general.\n- The User must not trust the Relayer’s response and should check execution errors on the blockchain.\n\n## Future possibilities\n\nSupporting ZK proofs instead of just signatures can allow for anonymous transactions, which pay fees to relayers anonymously.\n\n## Changelog\n\n### 1.1.0 - Adjust errors to reflect deployed reality (03-Aug-2023)\n\n- Remove the error variant `DelegateActionCantContainNestedOne` because this would already fail in the parsing stage.\n- Rename the error variant `DelegateActionSenderDoesNotMatchReceiver` to `DelegateActionSenderDoesNotMatchTxReceiver` to reflect published types in [near_primitives](https://docs.rs/near-primitives/0.17.0/near_primitives/errors/enum.ActionErrorKind.html#variant.DelegateActionSenderDoesNotMatchTxReceiver).\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n
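The rules behind `DelegateActionSenderDoesNotMatchTxReceiver`, `DelegateActionExpired`, `DelegateActionInvalidNonce` and `DelegateActionNonceTooLarge` can also be checked client-side by a relayer before it wraps a `DelegateAction` into a transaction. This is a minimal TypeScript sketch; the names (`DelegateActionLike`, `checkDelegateAction`) are illustrative assumptions, not part of the NEP, and passing these pre-checks does not replace on-chain validation (the signature and permission checks still happen in the runtime):

```typescript
// Illustrative relayer-side pre-checks mirroring the error conditions listed
// in the Errors section. All names here are hypothetical, not part of the NEP.
interface DelegateActionLike {
  senderId: string;       // delegate_action.sender_id
  nonce: number;          // delegate_action.nonce
  maxBlockHeight: number; // delegate_action.max_block_height
}

function checkDelegateAction(
  da: DelegateActionLike,
  txReceiverId: string,       // receiver_id of the wrapping transaction
  currentBlockHeight: number,
  currentKeyNonce: number     // current nonce of the sender's access key
): string | null {
  // The wrapping transaction must be addressed to the delegate action's sender.
  if (da.senderId !== txReceiverId) {
    return "DelegateActionSenderDoesNotMatchTxReceiver";
  }
  // The current block must be strictly below max_block_height.
  if (currentBlockHeight >= da.maxBlockHeight) {
    return "DelegateActionExpired";
  }
  // The nonce must be strictly greater than the access key's current nonce.
  if (da.nonce <= currentKeyNonce) {
    return "DelegateActionInvalidNonce";
  }
  // The nonce upper bound is block_height * 1e6.
  if (da.nonce > currentBlockHeight * 1_000_000) {
    return "DelegateActionNonceTooLarge";
  }
  return null; // all pre-checks passed
}
```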
  },
  {
    "path": "neps/nep-0368.md",
"content": "---\nNEP: 368\nTitle: Bridge Wallets\nAuthor: lewis-sqa <@lewis-sqa>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/pull/368\nType: Standards Track\nCategory: Wallet\nCreated: 1-Jul-2022\n---\n\n# Bridge Wallets\n\n## Summary\n\nStandard interface for bridge wallets.\n\n## Motivation\n\nBridge wallets such as [WalletConnect](https://docs.walletconnect.com/2.0/) and [Nightly Connect](https://connect.nightly.app/) are powerful messaging layers for communicating with various blockchains. Since they lack an opinion on how payloads are structured, dApps and wallets cannot communicate universally without a standard.\n\n## Rationale and alternatives\n\nAt its most basic, a wallet manages key pairs which are used to sign messages. The signed messages are typically then submitted by the wallet to the blockchain. This standard aims to define an API (based on our learning from [Wallet Selector](https://github.com/near/wallet-selector)) that achieves this requirement through a number of methods compatible with a relay architecture.\n\nThere have been many iterations of this standard to help inform what we consider the best approach right now for NEAR. You can find more relevant content in the Injected Wallet Standard.\n\n## Specification\n\nBridge wallets use a relay architecture to forward signing requests between dApps and wallets. Requests are typically relayed using a messaging protocol such as WebSockets or by polling a REST API. The concept of a `session` wraps this connection between a dApp and a wallet. When the session is established, the wallet user typically selects which accounts the dApp should be aware of. As the user interacts with the dApp and performs actions that require signing, messages are relayed from the dApp to the wallet, signed by the wallet, and submitted to the blockchain on behalf of the requesting dApp. 
This relay architecture decouples the 'signing' context (wallet) and the 'requesting' context (dApp), which enables signing to be performed on an entirely different device than the one the dApp browser is running on.\n\nTo establish a session, the dApp must first pair with the wallet. Pairing often includes a QR code to improve UX. Once both clients are paired, a request to initialize a session is made. During this phase, the wallet user is prompted to select one or more accounts (previously imported) to be visible to the session before approving the request.\n\nOnce a session has been created, the dApp can make requests to sign transactions using either [`signTransaction`](#signtransaction) or [`signTransactions`](#signtransactions). These methods accept encoded [Transactions](https://nomicon.io/RuntimeSpec/Transactions) created with `near-api-js`. Since transactions must know the public key that will be used as the `signerId`, a call to [`getAccounts`](#getaccounts) is required to retrieve a list of the accounts visible to the session along with their associated public keys. Requests to both [`signTransaction`](#signtransaction) and [`signTransactions`](#signtransactions) require explicit approval from the user since [`FullAccess`](https://nomicon.io/DataStructures/AccessKey) keys are used.\n\nFor dApps that regularly sign gas-only transactions, limited [`FunctionCall`](https://nomicon.io/DataStructures/AccessKey#accesskeypermissionfunctioncall) access keys can be added to or deleted from one or more accounts by using the [`signIn`](#signin) and [`signOut`](#signout) methods. 
While the same functionality could be achieved with [`signTransactions`](#signtransactions) requests containing actions that add specific access keys with particular permissions to a specific account, by using `signIn` the wallet receives a direct intention that a user wishes to sign in/out of a dApp's smart contract. This lets the wallet present a cleaner UI and implement convenient behavior, such as 'sign out' automatically deleting the associated limited access key that was created when the user first signed in.\n\nAlthough intentionally similar to the Injected Wallet Standard, this standard focuses on the transport layer instead of the high-level abstractions found in injected wallets. Below are the key differences between the standards:\n\n- [Transactions](https://nomicon.io/RuntimeSpec/Transactions) passed to `signTransaction` and `signTransactions` must be encoded.\n- The results of `signTransaction` and `signTransactions` are encoded [SignedTransaction](https://nomicon.io/RuntimeSpec/Transactions#signed-transaction) models.\n- Accounts contain only a string representation of public keys.\n\n### Methods\n\n#### `signTransaction`\n\nSign a transaction. This request should require explicit approval from the user.\n\n```ts\nimport { transactions } from \"near-api-js\";\n\ninterface SignTransactionParams {\n  // Encoded Transaction via transactions.Transaction.encode().\n  transaction: Uint8Array;\n}\n\n// Encoded SignedTransaction via transactions.SignedTransaction.encode().\ntype SignTransactionResponse = Uint8Array;\n```\n\n#### `signTransactions`\n\nSign a list of transactions. 
This request should require explicit approval from the user.\n\n```ts\nimport { transactions } from \"near-api-js\";\n\ninterface SignTransactionsParams {\n  // Encoded Transaction via transactions.Transaction.encode().\n  transactions: Array<Uint8Array>;\n}\n\n// Encoded SignedTransaction via transactions.SignedTransaction.encode().\ntype SignTransactionsResponse = Array<Uint8Array>;\n```\n\n#### `signIn`\n\nFor dApps that often sign gas-only transactions, `FunctionCall` access keys can be created for one or more accounts to greatly improve the UX. While this could be achieved with `signTransactions`, `signIn` conveys a direct intention that a user wishes to sign in to a dApp's smart contract.\n\n```ts\nimport { transactions } from \"near-api-js\";\n\ninterface Account {\n  accountId: string;\n  publicKey: string;\n}\n\ninterface SignInParams {\n  permission: transactions.FunctionCallPermission;\n  accounts: Array<Account>;\n}\n\ntype SignInResponse = null;\n```\n\n#### `signOut`\n\nDelete one or more `FunctionCall` access keys created with `signIn`. While this could be achieved with `signTransactions`, `signOut` conveys a direct intention that a user wishes to sign out from a dApp's smart contract.\n\n```ts\ninterface Account {\n  accountId: string;\n  publicKey: string;\n}\n\ninterface SignOutParams {\n  accounts: Array<Account>;\n}\n\ntype SignOutResponse = null;\n```\n\n#### `getAccounts`\n\nRetrieve all accounts visible to the session. `publicKey` references the underlying `FullAccess` key linked to each account.\n\n```ts\ninterface Account {\n  accountId: string;\n  publicKey: string;\n}\n\ninterface GetAccountsParams {}\n\ntype GetAccountsResponse = Array<Account>;\n```\n\n## Flows\n\n### Connect\n\n1. dApp initiates pairing via QR modal.\n2. wallet establishes pairing and prompts selection of accounts for new session.\n3. wallet responds with session (id and accounts).\n4. dApp stores reference to session.\n\n### Sign in (optional)\n\n1. 
dApp generates a key pair for one or more accounts in the session.\n2. dApp makes `signIn` request with `permission` and `accounts`.\n3. wallet receives request and executes a transaction containing an `AddKey` Action for each account.\n4. wallet responds with `null`.\n5. dApp stores the newly generated key pairs securely.\n\n### Sign out (optional)\n\n1. dApp makes `signOut` request with `accounts`.\n2. wallet receives request and executes a transaction containing a `DeleteKey` Action for each account.\n3. wallet responds with `null`.\n4. dApp clears stored key pairs.\n\n### Sign transaction\n\n1. dApp makes `signTransaction` request.\n2. wallet prompts approval of transaction.\n3. wallet signs the transaction.\n4. wallet responds with `Uint8Array`.\n5. dApp decodes signed transaction.\n6. dApp sends signed transaction.\n\n### Sign transactions\n\n1. dApp makes `signTransactions` request.\n2. wallet prompts approval of transactions.\n3. wallet signs the transactions.\n4. wallet responds with `Array<Uint8Array>`.\n5. dApp decodes signed transactions.\n6. dApp sends signed transactions.\n"
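The request/response shapes above can be captured in a small typed wrapper on the dApp side. In the following TypeScript sketch, only the method names and parameter/result shapes follow this standard; the `BridgeRelay` transport interface and the in-memory wallet stub are assumptions made for illustration:

```typescript
// Hypothetical typed wrapper for the bridge methods defined above. Only the
// method names and request/response shapes come from this standard.
interface Account {
  accountId: string;
  publicKey: string;
}

// Per-method parameter and result shapes, mirroring the Methods section.
type BridgeMethods = {
  getAccounts: { params: {}; result: Array<Account> };
  signTransaction: { params: { transaction: Uint8Array }; result: Uint8Array };
  signTransactions: { params: { transactions: Array<Uint8Array> }; result: Array<Uint8Array> };
};

// The relay transport is out of scope for the standard; this interface is an
// assumption for the example.
interface BridgeRelay {
  request<M extends keyof BridgeMethods>(
    method: M,
    params: BridgeMethods[M]["params"]
  ): Promise<BridgeMethods[M]["result"]>;
}

// In-memory stand-in for a wallet at the other end of the relay, used only to
// exercise the types; a real wallet would prompt the user and sign.
function stubWallet(accounts: Array<Account>): BridgeRelay {
  return {
    async request(method: any, params: any): Promise<any> {
      switch (method) {
        case "getAccounts":
          return accounts;
        case "signTransaction":
          // A real wallet decodes, prompts, signs, and returns an encoded
          // SignedTransaction; here we just echo the bytes.
          return params.transaction;
        case "signTransactions":
          return params.transactions;
        default:
          throw new Error(`unknown method: ${method}`);
      }
    },
  };
}
```

A dApp would call `relay.request("getAccounts", {})` first to learn the visible accounts and their public keys, then build transactions and pass their encoded bytes to `signTransaction`.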
  },
  {
    "path": "neps/nep-0393.md",
"content": "---\nNEP: 393\nTitle: Soulbound Token\nAuthors: Robert Zaremba <@robert-zaremba>\nStatus: Final\nDiscussionsTo:\nType: Standards Track\nCategory: Contract\nCreated: 12-Sep-2022\nRequires:\n---\n\n## Summary\n\nSoulbound Token (SBT) is a form of a non-fungible token which represents an aspect of an account: _soul_. [Transferability](#transferability) is limited only to the cases of recoverability and _soul transfer_. The latter must coordinate with a registry to transfer all SBTs from one account to another and _ban_ the source account.\n\nSBTs are well suited for carrying proof-of-attendance, proof-of-unique-human \"stamps\" and other similar credibility-carriers.\n\n## Motivation\n\nRecent [Decentralized Society](https://www.bankless.com/decentralized-society-desoc-explained) trends open a new area of Web3 research to model various aspects of what characterizes humans. Economic and governance value is generated by humans and their relationships. SBTs can represent the commitments, credentials, and affiliations of “Souls” that encode the trust networks of the real economy to establish provenance and reputation.\n\n> More importantly, SBTs enable other applications of increasing ambition, such as community wallet recovery, Sybil-resistant governance, mechanisms for decentralization, and novel markets with decomposable, shared rights. We call this richer, pluralistic ecosystem “Decentralized Society” (DeSoc)—a co-determined sociality, where Souls and communities come together bottom-up, as emergent properties of each other to co-create plural network goods and intelligences, at a range of scales.\n\nCreating strong primitives is necessary to model new innovative systems and decentralized societies. 
Examples include reputation protocols, non-transferrable certificates, non-transferrable rights, undercollateralized lending, proof-of-personhood, proof-of-attendance, proof-of-skill, one-person-one-vote, fair airdrops & ICOs, universal basic income, non-KYC identity systems, Human DAOs and methods for Sybil attack resistance.\n\nWe propose an SBT standard to model the protocols described above.\n\n_Verifiable Credentials_ (VC) could be seen as a subset of SBT. However, there is an important distinction: VCs require a set of claims and privacy protocols. It would make more sense to model VCs in relation to the W3C DID standard. SBT is different: it doesn't require a [resolver](https://www.w3.org/TR/did-core/#dfn-did-resolvers) nor a [method](https://www.w3.org/TR/did-core/#dfn-did-methods) registry. For SBT, we need something more elastic than VC.\n\n## Specification\n\nThe main requirement for Soulbound Tokens is to bind an account to a human. A **Soul** is an account with SBTs, which are used to define account identity.\nOften a non-transferrable NFT (an NFT with a no-op transfer function) is used to implement SBTs. However, such a model is shortsighted. Transferability is required to allow users to either recover their SBTs or merge the accounts they own. At the same time, we need to limit transferability to assure that an SBT is kept bound to the same _soul_.\nWe also need an efficient way to make composed ownership queries (for example: check if an account owns SBTs of classes C1, C2 and C3 issued by issuers I1, I2 and I3 respectively) - this is needed to model the emergent properties discussed above.\n\nWe introduce a **soul transfer**: the ability for a user to move ALL SBT tokens from one account to another in a [semi atomic](#soul-transfer) way, while keeping the SBTs bound to the same _soul_. This happens when a user needs to merge their accounts (e.g. they started with a few different accounts but later decide to merge them to increase an account's reputation). 
Soul transfer is different from a token transfer. The standard forbids traditional token transfer functionality, where a user can transfer individual tokens. That being said, a registry can have extension functions for more advanced scenarios, which could require a governance approval.\n\nThe SBT standard separates the token issuer concept from the token registry in order to meet the requirements listed above.\nIn the following sections we discuss the functionality of an issuer and a registry.\n\n### SBT Registry\n\nThe traditional token model on the NEAR blockchain assumes that each token has its own balance book and implements the authorization and issuance mechanism in the same smart contract. Such a model prevents an atomic _soul transfer_ in the current NEAR runtime. When token balances are kept separately in each SBT smart contract, synchronizing transfer calls to all such contracts to assure atomicity is not possible at scale. We need an additional contract, the `SBT Registry`, to provide atomic transfer of all user SBTs and an efficient way to ban accounts in relation to the Ban event discussed below.\nThis, along with efficient cross-issuer queries, is the main reason the SBT standard separates the token registry and token issuance concerns.\n\nAn issuer is an entity which issues new tokens and potentially can update the tokens (for example execute renewal). All standard modification options are discussed in the sections below.\n\nA registry is a smart contract where issuers register the tokens. The registry provides a balance book of all associated SBTs. The registry must ensure that each issuer has its own \"sandbox\" and issuers won't overwrite each other. A registry provides an efficient way to query multiple tokens for a single user. 
This will allow implementation of use cases such as:\n\n- SBT based identities (main use case of the `i-am-human` protocol);\n- SBT classes;\n- decentralized societies.\n\n```mermaid\ngraph TB\n    Issuer1--uses--> Registry\n    Issuer2--uses--> Registry\n    Issuer3--uses--> Registry\n```\n\nWe can have multiple competing registries (with different purposes or different management schemes). An SBT issuer SHOULD opt-in to a registry before being able to use the registry. Registries may develop different opt-in mechanisms (they could differ by the approval mechanism, be fully permissioned, etc.). One SBT smart contract can opt-in to:\n\n- many registries: it MUST relay all state change functions to all registries.\n- or to no registry. We should think about it as a single token registry, and it MUST strictly implement all SBT Registry query functions by itself. The contract address must be part of the arguments, and it must check that it equals the deployed account address (`require!(ctr == env::current_account_id())`). It also MUST emit related events by itself.\n\nWe recommend that each issuer use only one registry to avoid complex reconciliation and assure a single source of truth.\n\nThe registry fills a central and important role. But it is **not centralized**, as anyone can create their own registry and SBT issuers can choose which registry to use. It's also not too powerful, as almost all of the power (mint, revoke, burn, recover, etc.) still remains with the SBT issuer and not with the registry.\n\n#### Issuer authorization\n\nA registry can limit which issuers can use the registry to mint SBTs by implementing custom issuer whitelist methods (for example a simple access control list managed by a DAO) or keep it fully open (allowing any issuer to mint within the registry).\n\nExample: an `SBT_1 Issuer` wants to mint tokens using the `SBT_Registry`. 
The `SBT_Registry` has a DAO which votes on adding a new issuer:\n\n```mermaid\nsequenceDiagram\n    actor Issuer1 as SBT_1 Issuer\n    actor DAO\n\n    participant SBT_Registry\n\n    Note over Issuer1,DAO: Issuer1 connects with the DAO<br>to be whitelisted.\n\n    Issuer1-->>DAO: request whitelist\n    DAO->>SBT_Registry: whitelist(SBT_1 Issuer)\n```\n\n#### Personal Identifiable Information\n\nIssuers must not include any PII in any SBT.\n\n### Account Ban\n\n`Ban` is an event emitted by a registry signaling that an account is banned and can't own any SBT. The registry must return zero for every SBT supply query of a banned account. Operations which trigger a soul transfer must emit Ban.\n\nA registry can emit a `Ban` for use cases not discussed in this standard. Handling it depends on the registry governance. One example is to use social governance to identify fake accounts (like bots) - in that case the registry should emit `Ban` to block a scam soul and prevent future transfers.\nNOTE: an SBT Issuer can have its own list of blocked accounts or an allow-list of accounts.\n\n### Minting\n\nMinting is done by the issuer calling the `registry.sbt_mint(tokens_to_mint)` method. The standard doesn't specify how a registry authorizes an issuer. A classical approach is a whitelist of issuers: any whitelisted issuer can mint any amount of new tokens. The registry must keep the balances and assign token IDs to newly minted tokens.\n\nExample: Alice has two accounts: `alice1` and `alice2` which she used to mint tokens. She is getting tokens from 2 issuers that use the same registry. 
Alice uses her `alice1` account to interact with `SBT_1 Issuer` and receives an SBT with token ID = 238:\n\n```mermaid\nsequenceDiagram\n    actor Alice\n    actor Issuer1 as SBT_1 Issuer\n\n    participant SBT1 as SBT_1 Contract\n    participant SBT_Registry\n\n    Issuer1->>SBT1: sbt_mint(alice1, metadata)\n    activate SBT1\n    SBT1-)SBT_Registry: sbt_mint([[alice1, [metadata]]])\n    SBT_Registry->>SBT_Registry: emit Mint(SBT_1_Contract, alice1, [238])\n    SBT_Registry-)SBT1: [238]\n    deactivate SBT1\n\n    Note over Alice,SBT_Registry: now Alice can query registry to check her SBT\n\n    Alice-->>SBT_Registry: sbt(SBT_1_Contract, 238)\n    SBT_Registry-->>Alice: {token: 238, owner: alice1, metadata}\n```\n\nWith `SBT_2 Issuer`, Alice uses her `alice2` account. Note that `SBT_2 Contract` has a different mint function (it can mint many tokens at once), and validates a proof prior to requesting the registry to mint the tokens.\n\n```mermaid\nsequenceDiagram\n    actor Alice\n    actor Issuer2 as SBT_2 Issuer\n\n    participant SBT2 as SBT_2 Contract\n    participant SBT_Registry\n\n    Issuer2->>SBT2: sbt_mint_multi([[alice2, metadata2], [alice2, metadata3]], proof)\n    activate SBT2\n    SBT2-)SBT_Registry: sbt_mint([[alice2, [metadata2, metadata3]]])\n    SBT_Registry->>SBT_Registry: emit Mint(SBT_2_Contract, alice2, [7991, 7992])\n    SBT_Registry-)SBT2: [7991, 7992]\n    deactivate SBT2\n\n    Note over Alice,SBT_Registry: Alice queries one of her new tokens\n    Alice-->>SBT_Registry: sbt(SBT_2_Contract, 7991)\n    SBT_Registry-->>Alice: {token: 7991, owner: alice2, metadata: metadata2}\n```\n\n### Transferability\n\nSafeguards are set against misuse of SBT transfers to keep the _soul bound_ property. 
SBT transfer from one account to another should be strictly limited to:\n\n- **revocation** allows the issuer to invalidate or burn an SBT in case a token issuance should be reverted (for example the recipient is a Sybil account, that is, an account controlled by an entity trying to create a false appearance);\n- **recoverability** in case a user's private key is compromised due to extortion, loss, etc. Users cannot recover an SBT by themselves. Users must connect with the issuer to request recovery or use a more advanced mechanism (like social recoverability). The recovery function adds an economic cost that deters account trading: a user should always be able to recover their SBTs and move to another, non-banned account.\n- **soul transfer** - moving all SBT tokens from a source account (issued by all issuers) to a destination account. During such a transfer, the SBT registry emits `SoulTransfer` and `Ban` events. The latter signals that the account can't host nor receive any SBT in the future, effectively burning the identity of the source account. This creates an inherent cost for the source account: its identity can't be used any more. A registry can have extension functions for more advanced scenarios, which could require a governance mechanism. The `SoulTransfer` event can also trigger similar actions in other registries (specification for this is out of the scope of this NEP).\n\nThis becomes especially important for proof-of-human stamps that can only be issued once per user.\n\n#### Revocation\n\nAn issuer can revoke SBTs by calling `registry.sbt_revoke(tokens_to_revoke, burn)`. Examples: when a related certificate or membership should be revoked, when an issuer finds out that there was an abuse or a scam, etc. The registry, when receiving an `sbt_revoke` request from an issuer, must always emit the `Revoke` event. The registry must only accept revoke requests from a valid issuer, and only revoke tokens from that issuer. 
If `burn=true` is set in the request, then the token should be burned and a `Burn` event must be emitted. Otherwise (when `burn=false`) the registry must update the token metadata and set the expire date to a time in the past. The registry must not ban the account nor emit a `Ban` event when revoking a token. That would create an attack vector where a malicious issuer could ban arbitrary accounts by revoking their tokens.\n\n#### Recoverability\n\nThe standard defines issuer recoverability. At minimum, the standard registry exposes an `sbt_recover` method, which allows an issuer to reassign a token it issued from one account to another.\n\nSBT recovery MUST NOT trigger a `SoulTransfer` nor a `Ban` event: a malicious issuer could otherwise compromise the system by faking a token recovery and taking over all other SBTs from a user. Only the owner of the account can make a Soul Transfer transaction and merge 2 accounts they own.\n\n#### Recoverability within an SBT Registry\n\nAn SBT registry can define its own mechanism to atomically recover all tokens related to one account and execute a soul transfer to another account, without going one by one through each SBT issuer (sometimes that might not even be possible). Below we list a few ideas a registry can use to implement recovery:\n\n- KYC based recovery\n- Social recovery\n\n![Social Recovery, Image via “Decentralized Society”](https://bankless.ghost.io/content/images/public/images/cdb1fc23-6179-44f0-9bfe-e5e5831492f7_1399x680.png)\n\nSBT Registry based recovery is not part of this specification.\n\n#### Soul Transfer\n\nThe basic use case is described above. The registry MUST provide a permissionless method to allow any user to execute a soul transfer. It is an essential part of the standard, but the exact interface of the method is not part of the standard, because registries may adopt different mechanisms and require different arguments.\n\nSoul transfers must be _semi atomic_. 
That is, the holder account must be non-operational (in terms of SBT supply) until the soul transfer is completed. Given the nature of the NEAR blockchain, where transactions are limited by gas, big registries may need to implement the Soul Transfer operation in stages. Source and destination accounts should act as non-soul accounts while the soul transfer operation is ongoing. For example, in the first call, the contract can lock the account and do a maximum of X transfers. If the list of SBTs to be transferred has not been exhausted, the contract should keep the account locked and remember the last transferred SBT. Subsequent calls by the same user will resume the operation until the list is exhausted.\nA Soul Transfer must emit the `SoulTransfer` event.\n\nExample: Alice has two accounts: `alice1` and `alice2` which she used to mint tokens (see [mint diagram](#minting)). She decides to merge the accounts by doing a Soul Transfer.\n\n```mermaid\nsequenceDiagram\n    actor Alice\n    participant SBT_Registry\n\n    Alice->>SBT_Registry: sbt_soul_transfer(alice1) --accountId alice2\n\n    Alice-->>+SBT_Registry: sbt_tokens_by_owner(alice2)\n    SBT_Registry-->>-Alice: []\n\n    Alice-->>+SBT_Registry: sbt_tokens_by_owner(alice1)\n    SBT_Registry-->>-Alice: [[SBT_1_Contract, [238]], [SBT_2_Contract, [7991, 7992]]]\n```\n\nImplementation Notes:\n\n- There is a risk of conflict. The standard requires that one account can't have more than one SBT of the same (issuer, class) pair.\n- When both `alice1` and `alice2` have an SBT of the same (issuer, class) pair, the transfer should fail. One of the accounts should burn the conflicting tokens to be able to continue the soul transfer.\n- Soul transfer may require extra confirmation before executing a transfer. 
For example, if `alice1` wants to do a soul transfer to `alice2`, the contract may require `alice2`'s approval before continuing the transfer.\n- Other techniques may be used to enforce that the source account will be deleted.\n\n### Renewal\n\nSoulbound tokens can have an _expire date_. It is useful for tokens which are related to real world certificates with an expire time, or social mechanisms (e.g. community membership). Such tokens SHOULD have an option to be renewable. Examples include mandatory renewal with a set frequency to check that the owner is still alive, or renewing membership in a DAO that uses SBTs as membership gating.\nThe registry defines an `sbt_renew` method allowing issuers to update the token expire date. The issuer can set the _expire date_ in the past. This is useful if an issuer wants to invalidate the token without removing it.\n\n### Burning tokens\n\nThe registry MAY expose a mechanism to allow an account to burn an unwanted token. The exact mechanism is not part of the standard and it will depend on the registry implementation. We only define a standard `Burn` event, which must be emitted each time a token is removed from existence. Some registries may forbid accounts to burn their tokens in order to preserve specific claims. Ultimately, NEAR is a public blockchain, and even if a token is burned, its trace will be preserved.\n\n### Token Class (multitoken approach)\n\nSBT tokens can't be fractionized. Also, by definition, there should be only one SBT per token class per user. Examples: a user should not be able to receive multiple badges of the same class, or multiple proofs of attendance for the same event.\nHowever, we identify a need to support token classes (aka a multitoken interface) in a single contract:\n\n- badges: one contract. 
Each badge will have a class (community lead, OG...), and each token will belong to a specific class;\n- certificates: one issuer can create certificates of different classes (eg a school department can create diplomas for each major and each graduation year).\n\nWe also see a trend in the NFT community and demand for marketplaces to support multi token contracts.\n\n- In the Ethereum community many projects are using the [ERC-1155 Multi Token Standard](https://eips.ethereum.org/EIPS/eip-1155). NFT projects are using it for fractional ownership: each token id can have many fungible fractions.\n- NEAR [NEP-245](https://github.com/near/NEPs/blob/master/neps/nep-0245.md) has elaborated a similar interface for both bridge compatibility with EVM chains as well as flexibility to define different token types with different behavior in a single contract. The [DevGovGigs Board](https://near.social/#/mob.near/widget/MainPage.Post.Page?accountId=devgovgigs.near&blockHeight=87938945) recently also shows growing interest in moving NEP-245 adoption forward.\n- [NEP-454](https://github.com/near/NEPs/pull/454) proposes royalties support for multi token contracts.\n\nWe propose that the SBT Standard support the multi-token idea from the get-go. This won't increase the complexity of the contract (in the traditional case, where one contract only issues tokens of a single class, the `class` argument is simply ignored in the state, and in the functions it's required to be of a constant value, eg `1`) but will unify the interface.\nIt's up to the smart contract design how token classes are managed. A smart contract can expose an admin function (example: `sbt_new_class() -> ClassId`) or hard code the pre-registered classes.\n\nFinally, we require that each token ID is unique within the smart contract. This will allow us to query a token by token ID alone, without knowing its class.\n\n## Smart contract interface\n\nFor the Token ID type we propose `u64` rather than `U128`. 
`u64` capacity is more than 1e19. If we mint 10'000 SBTs per second, it will take us 58'494'241 years to fill the capacity.\nToday, the JS integer limit is `2^53-1 ~ 9e15`. Similarly, when minting 10'000 SBTs per second, it will take us 28'561 years to reach the limit. So, we don't need u128 nor the String type. However, if for some reason we need u64 support for JS, we can always add another set of methods which return String, making it compatible with the NFT standard (which uses `U128`, serialized as a string). Also, it's worth noting that in 28'000 years JS (if it still exists) will be completely different.\nThe number of combinations for a single issuer is in fact much higher: the token standard uses classes. So technically that makes the number of all possible combinations for a single issuer equal to `(2^64)^2 ~ 1e38`. For \"today's\" JS it is `(2^53-1)^2 ~ 1e31`.\n\nToken IDs MUST be created in a sequence to make sure the ID space is not exhausted locally (eg if a registry decided to introduce segments, it could get into a trap where one of the segments fills up very quickly).\n\n```rust\n// TokenId and ClassId must be positive (0 is not a valid ID)\npub type TokenId = u64;\npub type ClassId = u64;\n\npub struct Token {\n    pub token: TokenId,\n    pub owner: AccountId,\n    pub metadata: TokenMetadata,\n}\n```\n\nThe Soulbound Token follows the NFT [NEP-171](https://github.com/near/NEPs/blob/master/neps/nep-0171.md) interface, with a few differences:\n\n- token ID is `u64` (as discussed above).\n- token class is `u64`; it is required when minting and is part of the token metadata.\n- `TokenMetadata` doesn't have `title`, `description`, `media`, `media_hash`, `copies`, `extra`, `starts_at` nor `updated_at`. All those attributes except `updated_at` can be part of the document stored at `reference`. 
`updated_at` can be tracked easily by indexers.\n- We don't have traditional transferability.\n- We propose to use more targeted events, to better reflect the event nature. Moreover, events are emitted by the registry, so we need to include the issuer contract address in the event.\n\nAll time-related attributes are defined in milliseconds (as per NEP-171).\n\n```rust\n/// IssuerMetadata defines contract wide attributes, which describe the whole contract.\n/// Must be provided by the Issuer contract. See the `SBTIssuer` trait.\npub struct IssuerMetadata {\n    /// Version with namespace, example: \"sbt-1.0.0\". Required.\n    pub spec: String,\n    /// Issuer Name, required, ex. \"Mosaics\"\n    pub name: String,\n    /// Issuer symbol which can be used as a token symbol, eg Ⓝ, ₿, BTC, MOSAIC ...\n    pub symbol: String,\n    /// Icon content (SVG) or a link to an icon. If it doesn't start with a scheme (eg: https://)\n    /// then `base_uri` should be prepended.\n    pub icon: Option<String>,\n    /// URI prefix which will be prepended to other links which don't start with a scheme\n    /// (eg: ipfs:// or https:// ...).\n    pub base_uri: Option<String>,\n    /// JSON or a URL to a JSON file with more info. If it doesn't start with a scheme\n    /// (eg: https://) then base_uri should be prepended.\n    pub reference: Option<String>,\n    /// Base64-encoded sha256 hash of JSON from the reference field. Required if `reference` is included.\n    pub reference_hash: Option<Base64VecU8>,\n}\n\n/// ClassMetadata defines SBT class wide attributes, which are shared by and default for all SBTs of\n/// the given class. Must be provided by the Issuer contract. See the `SBTIssuer` trait.\npub struct ClassMetadata {\n    /// Issuer class name. Required to be non-empty.\n    pub name: String,\n    /// If defined, should be used instead of `IssuerMetadata::symbol`.\n    pub symbol: Option<String>,\n    /// A URL to an icon. 
To protect fellow developers from unintentionally triggering any\n    /// SSRF vulnerabilities in URL parsers, we don't allow setting raw image bytes here.\n    /// If it doesn't start with a scheme (e.g. https://) then `IssuerMetadata::base_uri`\n    /// should be prepended.\n    pub icon: Option<String>,\n    /// JSON or a URL to a JSON file with more info. If it doesn't start with a scheme\n    /// (e.g. https://) then `base_uri` should be prepended.\n    pub reference: Option<String>,\n    /// Base64-encoded sha256 hash of the JSON from the reference field. Required if `reference` is included.\n    pub reference_hash: Option<Base64VecU8>,\n}\n\n/// TokenMetadata defines attributes for each SBT token.\npub struct TokenMetadata {\n    pub class: ClassId, // token class. Required. Must be non-zero.\n    pub issued_at: Option<u64>, // When the token was issued or minted, Unix time in milliseconds\n    pub expires_at: Option<u64>, // When the token expires, Unix time in milliseconds\n    /// JSON or a URL to a JSON file with more info. If it doesn't start with a scheme\n    /// (e.g. https://) then `base_uri` should be prepended.\n    pub reference: Option<String>,\n    /// Base64-encoded sha256 hash of the JSON from the reference field. 
Required if `reference` is included.\n    pub reference_hash: Option<Base64VecU8>,\n}\n\n\ntrait SBTRegistry {\n    /**********\n     * QUERIES\n     **********/\n\n    /// Get the information about a specific token ID issued by the `issuer` SBT contract.\n    fn sbt(&self, issuer: AccountId, token: TokenId) -> Option<Token>;\n\n    /// Get the information about a list of token IDs issued by the `issuer` SBT contract.\n    /// If a token ID is not found, `None` is set at the corresponding return index.\n    fn sbts(&self, issuer: AccountId, tokens: Vec<TokenId>) -> Vec<Option<Token>>;\n\n    /// Query the class ID for each token ID issued by the SBT `issuer`.\n    /// If a token ID is not found, `None` is set at the corresponding return index.\n    fn sbt_classes(&self, issuer: AccountId, tokens: Vec<TokenId>) -> Vec<Option<ClassId>>;\n\n    /// Returns the total amount of tokens issued by the `issuer` SBT contract, including expired\n    /// tokens. If a revoke removes a token, it must not be included in the supply.\n    fn sbt_supply(&self, issuer: AccountId) -> u64;\n\n    /// Returns the total amount of tokens of a given class minted by `issuer`. See `sbt_supply` for\n    /// information about revoked tokens.\n    fn sbt_supply_by_class(&self, issuer: AccountId, class: ClassId) -> u64;\n\n    /// Returns the total supply of SBTs for a given owner. See `sbt_supply` for information about\n    /// revoked tokens.\n    /// If `class` is specified, returns only the owner's supply of the given class (either 0 or 1).\n    fn sbt_supply_by_owner(\n        &self,\n        account: AccountId,\n        issuer: AccountId,\n        class: Option<ClassId>,\n    ) -> u64;\n\n    /// Query SBT tokens issued by a given contract.\n    /// `limit` specifies the upper limit of how many tokens we want to return.\n    /// If `from_token` is not specified, then `from_token` should be assumed\n    /// to be the first valid token ID. 
If `with_expired` is set to `true` then all the tokens are returned,\n    /// including expired ones; otherwise only non-expired tokens are returned.\n    fn sbt_tokens(\n        &self,\n        issuer: AccountId,\n        from_token: Option<u64>,\n        limit: Option<u32>,\n        with_expired: bool,\n    ) -> Vec<Token>;\n\n    /// Query SBT tokens by owner.\n    /// `limit` specifies the upper limit of how many tokens we want to return.\n    /// If `from_class` is not specified, then `from_class` should be assumed to be the first\n    /// valid class ID. If `with_expired` is set to `true` then all the tokens are returned,\n    /// including expired ones; otherwise only non-expired tokens are returned.\n    /// Returns a list of pairs: `(issuer address, list of owned tokens)`.\n    fn sbt_tokens_by_owner(\n        &self,\n        account: AccountId,\n        issuer: Option<AccountId>,\n        from_class: Option<u64>,\n        limit: Option<u32>,\n        with_expired: bool,\n    ) -> Vec<(AccountId, Vec<OwnedToken>)>;\n\n    /// Checks whether an `account` was banned by the registry.\n    fn is_banned(&self, account: AccountId) -> bool;\n\n    /*************\n     * Transactions\n     *************/\n\n    /// Creates new, unique tokens and assigns them to the specified owners.\n    /// `token_spec` is a vector of pairs: owner AccountId and a list of TokenMetadata.\n    /// Each TokenMetadata must have a non-zero `class`.\n    /// Must be called by an SBT contract.\n    /// Must emit `Mint` event.\n    /// Must provide enough NEAR to cover registry storage cost.\n    // #[payable]\n    fn sbt_mint(&mut self, token_spec: Vec<(AccountId, Vec<TokenMetadata>)>) -> Vec<TokenId>;\n\n    /// sbt_recover reassigns all tokens issued by the caller, from the old owner to a new owner.\n    /// Must be called by a valid SBT issuer.\n    /// Must emit `Recover` event once all the tokens have been recovered.\n    /// Requires attaching enough NEAR to cover the storage growth.\n    /// Returns the 
amount of tokens recovered and a boolean: `true` if the whole\n    /// process has finished, `false` when the process has not finished and should be\n    /// continued by a subsequent call. The caller must keep calling `sbt_recover` until `true`\n    /// is returned.\n    // #[payable]\n    fn sbt_recover(&mut self, from: AccountId, to: AccountId) -> (u32, bool);\n\n    /// sbt_renew will update the expiration time of the provided tokens.\n    /// `expires_at` is a Unix timestamp (in milliseconds).\n    /// Must be called by an SBT contract.\n    /// Must emit `Renew` event.\n    fn sbt_renew(&mut self, tokens: Vec<TokenId>, expires_at: u64);\n\n    /// Revokes SBTs by burning the tokens or updating their expiration time.\n    /// Must be called by an SBT contract.\n    /// Must emit `Revoke` event.\n    /// Must also emit `Burn` event if the SBT tokens are burned (removed).\n    fn sbt_revoke(&mut self, tokens: Vec<TokenId>, burn: bool);\n\n    /// Similar to `sbt_revoke`. Allows an SBT issuer to revoke all tokens of a holder, either by\n    /// burning or updating their expiration time. 
When an owner has many tokens from the issuer,\n    /// the issuer may need to call this function multiple times, until all tokens are revoked.\n    /// Returns `true` if all the tokens were revoked, `false` otherwise.\n    /// If `false` is returned, the issuer must call the method again until `true` is returned.\n    /// Must be called by an SBT contract.\n    /// Must emit `Revoke` event.\n    /// Must also emit `Burn` event if the SBT tokens are burned (removed).\n    fn sbt_revoke_by_owner(&mut self, owner: AccountId, burn: bool) -> bool;\n\n    /// Allows an issuer to update the token metadata reference and reference_hash.\n    /// * `updates` is a list of triples: (token ID, reference, reference base64-encoded sha256 hash).\n    /// Must emit `token_reference` event.\n    /// Panics if any of the token IDs don't exist.\n    fn sbt_update_token_references(\n        &mut self,\n        updates: Vec<(TokenId, Option<String>, Option<Base64VecU8>)>,\n    );\n}\n```\n\nExample **Soul Transfer** interface:\n\n```rust\n    /// Atomically transfers all SBT tokens from one account to another account.\n    /// The caller must be an SBT holder and the `recipient` must not be a banned account.\n    /// Returns the amount of tokens transferred and a boolean: `true` if the whole\n    /// process has finished, `false` when the process has not finished and should be\n    /// continued by a subsequent call.\n    /// Emits `Ban` event for the caller at the beginning of the process.\n    /// Emits `SoulTransfer` event only once all the tokens from the caller were transferred\n    /// and at least one token was transferred (the caller had at least 1 SBT).\n    /// + The caller must keep calling `sbt_soul_transfer` until `true` is returned.\n    /// + If the caller does not have any tokens, nothing will be transferred, the caller\n    ///   will be banned and `Ban` event will be emitted.\n    #[payable]\n    fn sbt_soul_transfer(\n        &mut self,\n        recipient: AccountId,\n    ) -> (u32, bool);\n```\n\n### SBT 
Issuer interface\n\nSBTIssuer is the minimum required interface to be implemented by an issuer. Other methods, such as a mint function which requests the registry to proceed with token minting, are specific to an Issuer implementation (similarly, mint is not part of the FT standard).\n\nThe issuer must provide the Issuer metadata object. Optionally, the Issuer can also provide a metadata object for each token class.\nIssuer level (contract) metadata must provide information common to all tokens and all classes defined by the issuer. Class level metadata must provide information common to all tokens of a given class. Information should be deduplicated whenever possible.\nExample: The issuer can set a default icon for all tokens (SBT) using `IssuerMetadata::icon` and additionally it can customize an icon for a particular class of tokens via `ClassMetadata::icon`.\n\n```rust\npub trait SBTIssuer {\n    /// Returns contract metadata.\n    fn sbt_metadata(&self) -> IssuerMetadata;\n    /// Returns SBT class metadata, or `None` if the class is not found.\n    fn sbt_class_metadata(&self, class: ClassId) -> Option<ClassMetadata>;\n}\n```\n\nSBT issuer smart contracts may implement the NFT query interface to make them compatible with NFT tools. In that case, the contract should proxy the calls to the related registry. Note that we use the `U64` type rather than `U128`. 
However, an SBT issuer must not emit NFT related events.\n\n```rust\ntrait SBTNFT {\n    fn nft_total_supply(&self) -> U64;\n    // here we index by token id instead of by class id (as done in `sbt_tokens_by_owner`)\n    fn nft_tokens_for_owner(&self, account_id: AccountId, from_index: Option<U64>, limit: Option<u64>) -> Vec<Token>;\n    fn nft_supply_for_owner(&self, account_id: AccountId) -> U64;\n}\n```\n\n### Events\n\nEvent design principles:\n\n- Events don't need to repeat all function arguments - these are easy to retrieve by an indexer (events are consumed by indexers anyway).\n- Events must include the fields necessary to identify the subject matter related to the use case.\n- When possible, events should contain aggregated data, with respect to the standard function related to the event.\n\n```typescript\n// only valid integer numbers (without rounding errors).\ntype u64 = number;\n\ntype Nep393Event = {\n  standard: \"nep393\";\n  version: \"1.0.0\";\n  event: \"mint\" | \"recover\" | \"renew\" | \"revoke\" | \"burn\" | \"ban\" | \"soul_transfer\" | \"token_reference\";\n  data: Mint | Recover | Renew | Revoke | Burn | Ban[] | SoulTransfer | TokenReference;\n};\n\n/// An event emitted by the Registry when new SBTs are created.\ntype Mint = {\n  issuer: AccountId;  // SBT Contract minting the tokens\n  tokens: [AccountId, u64[]][];  // list of pairs (token owner, TokenId[])\n};\n\n/// An event emitted when a recovery process succeeds in reassigning SBTs, usually due to account\n/// access loss. This action is usually requested by the owner, but executed by an issuer,\n/// and doesn't trigger Soul Transfer. 
Registry reassigns all tokens assigned to `old_owner`\n/// that were issued ONLY by the `issuer` SBT Contract (hence we don't need to enumerate the\n/// token IDs).\n/// Must be emitted by an SBT registry.\ntype Recover = {\n  issuer: AccountId;  // SBT Contract recovering the tokens\n  old_owner: AccountId;  // current holder of the SBT\n  new_owner: AccountId;  // destination account.\n};\n\n/// An event emitted when existing tokens are renewed.\n/// Must be emitted by an SBT registry.\ntype Renew = {\n  issuer: AccountId;  // SBT Contract renewing the tokens\n  tokens: u64[];  // list of token ids.\n};\n\n/// An event emitted when existing tokens are revoked.\n/// Revoked tokens will continue to be listed by the registry but they should not be listed in\n/// a wallet. See also the `Burn` event.\n/// Must be emitted by an SBT registry.\ntype Revoke = {\n  issuer: AccountId;  // SBT Contract revoking the tokens\n  tokens: u64[];  // list of token ids.\n};\n\n/// An event emitted when existing tokens are burned and removed from the registry.\n/// Must be emitted by an SBT registry.\ntype Burn = {\n  issuer: AccountId;  // SBT Contract burning the tokens\n  tokens: u64[];  // list of token ids.\n};\n\n/// An event emitted when an account is banned within the emitting registry.\n/// Registry must add the `account` to a list of accounts that are not allowed to get any SBT\n/// in the future.\n/// Must be emitted by an SBT registry.\ntype Ban = AccountId;\n\n/// An event emitted when a soul transfer is happening: all SBTs owned by `from` are transferred\n/// to `to`, and the `from` account is banned (can't receive any new SBT).\n/// Must be emitted by an SBT registry.\n/// Registry MUST also emit `Ban` whenever the soul transfer happens.\ntype SoulTransfer = {\n  from: AccountId;\n  to: AccountId;\n};\n\n/// An event emitted when an issuer updates the token metadata reference of existing SBTs.\n/// Must be emitted by an SBT registry.\ntype TokenReference = {\n  issuer: AccountId;  // Issuer 
account\n  tokens: u64[];  // list of token ids.\n};\n```\n\nWhenever a recovery is made in a way that an existing SBT is burned, the `Burn` event MUST be emitted. If `Revoke` burns tokens, then the `Burn` event MUST be emitted instead of `Revoke`.\n\n### Example SBT Contract functions\n\nAlthough the transaction functions below are not part of the SBT smart contract standard (depending on a use case, they may have different parameters), we present here an example interface for SBT issuance and we also provide a reference implementation.\nThese functions must relay calls to an SBT registry, which will emit the appropriate events.\nWe recommend that all functions related to an event take an optional `memo: Option<String>` argument for accounting purposes.\n\n```rust\ntrait SBT {\n    /// Must provide enough NEAR to cover registry storage cost.\n    // #[payable]\n    fn sbt_mint(\n        &mut self,\n        account: AccountId,\n        metadata: TokenMetadata,\n        memo: Option<String>,\n    ) -> TokenId;\n\n    /// Creates new, unique tokens and assigns them to the specified owners.\n    /// `token_spec` is a vector of pairs: owner AccountId and TokenMetadata.\n    /// Must provide enough NEAR to cover registry storage cost.\n    // #[payable]\n    fn sbt_mint_multi(\n        &mut self,\n        token_spec: Vec<(AccountId, TokenMetadata)>,\n        memo: Option<String>,\n    ) -> Vec<TokenId>;\n\n    // #[payable]\n    fn sbt_recover(&mut self, from: AccountId, to: AccountId, memo: Option<String>);\n\n    fn sbt_renew(&mut self, tokens: Vec<TokenId>, expires_at: u64, memo: Option<String>);\n\n    fn sbt_revoke(&mut self, tokens: Vec<TokenId>, memo: Option<String>) -> bool;\n}\n```\n\n## Reference Implementation\n\n- Common [type definitions](https://github.com/near-ndc/i-am-human/tree/master/contracts/sbt) (events, traits).\n- [I Am 
Human](https://github.com/near-ndc/i-am-human) registry and issuers.\n\n## Consequences\n\nBeing fully compatible with the NFT standard is desirable. However, given the requirements related to _soul transfer_, we didn't find a satisfactory solution. We also decided to use `u64` as a Token ID, diverging further from the NFT NEP-171 standard.\n\nGiven that our requirements are much stricter, we had to reconsider the level of compatibility with NEP-171 NFT.\nThere are many examples where NFT standards are improperly implemented. Adding another standard with different functionality but identical naming would cause lots of problems and misclassifications between NFT and SBT.\n\n### Positive\n\n- Template and set of guidelines for creating SBT tokens.\n- Ability to create SBT aggregators.\n- An SBT standard with a recoverability mechanism provides a unified model for multiple primitives such as non-KYC identities, badges, certificates etc...\n- SBTs can be further used for \"lego\" protocols, like: Proof of Humanity (discussed for NDC Governance), undercollateralized lending, role based authentication systems, innovative economic and social applications...\n- Standard recoverability mechanism.\n- SBTs are considered a basic primitive for Decentralized Societies.\n- A new way to implement Sybil attack resistance.\n\n### Neutral\n\n- The API partially follows the NEP-171 (NFT) standard. The proposed design is to have a native SBT API and make it possible for issuer contracts to support NFT based queries if needed (such a contract will be limited to issuing SBTs with a single `ClassId`).\n\n### Negative\n\n- New set of events to be handled by the indexer and wallets.\n- Complexity of integration with a registry: all SBT related transactions must go through the Registry.\n\n### Privacy Notes\n\n> Blockchain-based systems are public by default. Any relationship that is recorded on-chain is immediately visible not just to the participants, but also to anyone in the entire world. 
Some privacy can be retained by having multiple pseudonyms: a family Soul, a medical Soul, a professional Soul, a political Soul each carrying different SBTs. But done naively, it could be very easy to correlate these Souls to each other.\n> The consequences of this lack of privacy are serious. Indeed, without explicit measures taken to protect privacy, the “naive” vision of simply putting all SBTs on-chain may well make too much information public for many applications.\n\n-- Decentralized Society\n\nThere are multiple ways an identity can be doxxed using on-chain data. SBTs indeed provide more data about an account.\nThe standard allows for a few anonymization methods:\n\n- not providing any data in the token metadata (reference...), or encrypting the reference.\n- anonymizing issuers (the standard allows many issuers for the same entity) and mixing them with different class IDs. These are just numbers.\n\nPerfect privacy can only be achieved with solid ZKPs, not off-chain walls.\n\nImplementations must not store any personal information on-chain.\n\n## Changelog\n\n### v1.0.0\n\nThe Contract Standards Working Group members approved this NEP on June 30, 2023 ([meeting recording](https://youtu.be/S1An5CDG154)).\n\n#### Benefits\n\n- SBTs, like any other kind of token, are an essential primitive to represent real-world use cases. 
This standard provides a model and a guideline for developers to build SBT based solutions.\n- Token standards are key for composability.\n- Wallets and tools need a common interface to operate on tokens.\n\n#### Concerns\n\n| #   | Concern | Resolution | Status   |\n| --- | ------- | ---------- | -------- |\n| 1   | [Robert] Should we emit NEP-171 Mint and NEP-171 Burn from the SBT contract (in addition to SBT native events emitted by the registry)? If the events are emitted by the registry, then we need new events to include the contract address. | Don't emit NFT events. SBT is not NFT. 
Support: @alexastrum | resolved |\n| 2   | [Robert] Remove `memo` in events. The `memo` is already part of the transaction, and should not be needed to identify transactions. Processes looking for events can easily track the transaction through the event and recover the `memo` if needed. | Removed; consequently also removed from registry transactions. Support: @alexastrum | resolved |\n| 3   | [Token Spam](https://github.com/near/NEPs/pull/393/#discussion_r1163938750) | We have a `Burn` event. Added an example `sbt_burn` function, but keeping it out of the required interface. The event should be enough. | resolved |\n| 4   | [Multiple registries](https://github.com/near/NEPs/pull/393/#discussion_r1163951624). Registry source of truth [comment](https://github.com/near/NEPs/pull/393/#issuecomment-1531766643) | This is a part of the design: a permissionless approach. 
[Justification for registry](https://github.com/near/NEPs/pull/393/#issuecomment-1540621077)                                                                                                                                                                                                                  | resolved |\n| 5   | [Robert] Approve the proposed multi-token                                                                                                                                                                                                  | Support: @alexastrum                                                                                                                                                                                                                                                                                                                                                 | resolved |\n| 6   | [Robert] Use of milliseconds as a time unit.                                                                                                                                                                                               | Use milliseconds.                                                                                                                                                                                                                                                                                                                                                    | resolved |\n| 7   | Should a `burn` function be part of a standard or a recommendation?                                                                                                                                                                        | We already have the Burn event. A call method should not be part of the standard interface (similarly to FT and NFT).                                                                                          
| resolved |\n| 8   | [Robert] Don't include `sbt_soul_transfer` in the standard interface, [comment](https://github.com/near/NEPs/pull/393#issuecomment-1506969996). | Moved outside of the required interface. | resolved |\n| 9   | [Privacy](https://github.com/near/NEPs/pull/393/#issuecomment-1504309947) | Concerns have been addressed: [comment-1](https://github.com/near/NEPs/pull/393/#issuecomment-1504485420) and [comment-2](https://github.com/near/NEPs/pull/393/#issuecomment-1505958549) | resolved |\n| 10  | @frol [suggested](https://github.com/near/NEPs/pull/393/#discussion_r1247879778) to use a struct in `sbt_recover` and `sbt_soul_transfer`. | The motivation is to use the pair `(number, bool)` rather than following a common Iterator Pattern. Rust uses the `Option` type for that, which works perfectly for languages with a native Option type, but creates a \"null\" problem for everything else. Another common way to implement an Iterator is the presented pair, which doesn't require an extra type definition and reduces code size. 
| new      |\n\n### v1.1.0\n\nIn v1.0.0 we defined the Issuer (an entity authorized to mint SBTs in the registry) and the SBT Class. We also defined Issuer Metadata and Token Metadata, but we didn't provide an interface for class metadata. This was implemented in the reference implementation (in one of the subsequent revisions), but was not backported to the NEP. This update:\n\n- Fixes the name of the issuer interface from `SBTContract` to `SBTIssuer`. The original name was wrong and we overlooked it in reviews. We talk everywhere about the issuer entity and issuer contract (even the header is SBT Issuer interface).\n- Renames `ContractMetadata` to `IssuerMetadata`.\n- Adds the `ClassMetadata` struct and the `sbt_class_metadata` function to the `SBTIssuer` interface.\n\nReference implementation: [ContractMetadata, ClassMetadata, TokenMetadata](https://github.com/near-ndc/i-am-human/blob/registry/v1.8.0/contracts/sbt/src/metadata.rs#L18) and [SBTIssuer interface](https://github.com/near-ndc/i-am-human/blob/registry/v1.8.0/contracts/sbt/src/lib.rs#L49).\n\n#### Benefits\n\n- Improves the documentation and meaning of the issuer entity.\n- Adds the missing `ClassMetadata`.\n- Improves issuer, class and token metadata documentation.\n\n## Copyright\n\n[Creative Commons Attribution 4.0 International Public License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)\n"
  },
  {
    "path": "neps/nep-0399.md",
    "content": "---\nNEP: 399\nTitle: Flat Storage\nAuthor: Aleksandr Logunov <alex.logunov@near.org> Min Zhang <min@near.org>\nStatus: Final\nDiscussionsTo: https://github.com/nearprotocol/neps/pull/0399\nType: Protocol Track\nCategory: Storage\nCreated: 30-Sep-2022\n---\n\n## Summary\n\nThis NEP proposes the idea of Flat Storage, which stores a flattened map of key/value pairs of the current\nblockchain state on disk. Note that original Trie (persistent merkelized trie) is not removed, but Flat Storage\nallows to make storage reads faster, make storage fees more predictable and potentially decrease them.\n\n## Motivation\n\nCurrently, the blockchain state is stored in our storage only in the format of persistent merkelized tries.\nAlthough it is needed to compute state roots and prove the validity of states, reading from it requires a\ntraversal from the trie root to the leaf node that contains the key value pair, which could mean up to\n2 \\* key_length disk accesses in the worst case.\n\nIn addition, we charge receipts by the number of trie nodes they touched (TTN cost). Note that the number\nof touched trie node does not always equal to the key length, it depends on the internal trie structure.\nBased on some feedback from contract developers collected in the past, they are interested in predictable fees,\nbut TTN costs are annoying to predict and can lead to unexpected excess of the gas limit. They are also a burden\nfor NEAR Protocol client implementations, i.e. nearcore, as exact TTN number must be computed deterministically\nby all clients. 
This prevents storage optimizations that use strategies other than what nearcore uses today.\n\nWith Flat Storage, the number of disk reads is reduced from a worst case of 2 \\* key_length to exactly 2, storage read gas\nfees are simplified by getting rid of the TTN cost, and they can potentially be reduced further because fewer disk reads\nare needed.\n\n## Rationale and alternatives\n\nQ: Why is this design the best in the space of possible designs?\n\nA: The space of possible designs is quite big here; let's show some meaningful examples.\n\nThe most straightforward one is just to increase TTN, align it with the biggest possible value, or alternatively increase\nthe base fees for storage reads and writes. However, technically the biggest TTN could be 4096 as of today. And we\nstrongly avoid increasing fees, because it may break existing contract calls, not to mention that it would greatly\nreduce the capacity of NEAR blocks, because for current mainnet use cases the depth is usually below 20.\n\nWe also considered changing the tree type from Trie to AVL, B-tree, etc. to make the number of traversed nodes more stable and\npredictable. But when we approached the AVL idea, the implementation turned out to be tricky, so we didn't go\nmuch further than a POC here: https://github.com/near/nearcore/discussions/4815. Also, for most key-value pairs the\ntree depth would actually increase - for example, if you have 1M keys, the depth is always 20 - and it would cause increased\nfees as well. The size of an intermediate node also increases, because we need to store a key there to decide whether\nwe should go to the left or right child.\n\nA separate idea is to get rid of the global state root completely: https://github.com/near/NEPs/discussions/425. Instead,\nwe could track the latest account state in the block where it was changed. 
But after a closer look, it brings us to\nsimilar questions - if some key was untouched for a long time, it becomes harder to find the exact block with the latest\nvalue for it, and we need some tree-shaped structure again. Because ideas like that could also be extremely invasive,\nwe stopped considering them at this point.\n\nOther ideas around storage include exploring databases other than RocksDB, or moving State to a separate database.\nWe can tweak the State representation, e.g. starting the trie key with the account id to achieve data locality within\nthe same account. However, Flat Storage's main goal is to speed up reads and make their costs predictable, and these ideas\nare orthogonal to that, although they can still improve storage in other ways.\n\nQ: What other designs have been considered and what is the rationale for not choosing them?\n\nA: There were several ideas on the Flat Storage implementation. One of the questions is whether we need a global flat storage\nfor all shards or separate flat storages. Due to how sharding works, we have to make flat storages separate, because in\nthe future a node may have to catch up on a new shard while already tracking old shards, and the flat storage heads (see\nSpecification) must be different for these shards.\n\nFlat storage deltas are another tricky part of the design, but we cannot avoid them, because at certain points in time\ndifferent nodes can disagree on what the current chain head is, and they have to support reads for some subset of the latest\nblocks with decent speed. We can't really fall back to the Trie in such cases because storage reads from it are much slower.\n\nAnother implementation detail is where to put the flat storage head. 
The planned implementation doesn't rely on that\nsignificantly and can be changed, but for the MVP we assume flat storage head = chain final head as the simplest solution.\n\nQ: What is the impact of not doing this?\n\nA: Storage reads will remain inefficiently implemented and cost more than they should, and the gas fees will remain\ndifficult for contract developers to predict.\n\n## Specification\n\nThe key idea of Flat Storage is to store a direct mapping from trie keys to values in the DB.\nHere the values of this mapping can be either the value corresponding to the trie key itself,\nor the value ref, a hash that points to the address of the value. If the value itself is stored,\nonly one disk read is needed to look up a value from flat storage; otherwise two disk reads are needed if the value\nref is stored. We will discuss in the following section whether we use values or value refs.\nFor the purpose of high-level discussion, it suffices to say that with Flat Storage,\nat most two disk reads are needed to perform a storage read.\n\nThe simple design above won't work because there could be forks in the chain. In the following case, FlatStorage\nmust support key-value lookups for states of the blocks on both forks.\n\n```text\n        /  Block B1 - Block B2 - ...\nblock A\n        \\  Block C1 - Block C2 - ...\n```\n\nThe handling of forks will be the main consideration of the following design. More specifically,\nthe design should satisfy the following requirements:\n\n1. It should support concurrent block processing. Blocks on different forks are processed\n   concurrently in the nearcore Client code, the struct whose responsibilities include receiving blocks from the network,\n   scheduling applying chunks and writing the results of that to disk. The flat storage API must be aligned with that.\n2. In case of long forks, block processing time should not be too much longer than the average case.\n   We don’t want this case to be exploitable. 
It is acceptable that block processing time is 200ms longer,\n   which may slow down block production, but probably won’t cause missing blocks and chunks.\n   10s delays are not acceptable and may lead to more forks and instability in the network.\n3. The design must be able to decrease storage access cost in all cases,\n   since we are going to change the storage read fees based on flat storage.\n   We can't conditionally enable Flat Storage for some blocks and disable it for others, because\n   the fees we charge must be consistent.\n\nThe mapping of key-value pairs FlatStorage stores on disk matches the state at some block.\nWe call this block the head of flat storage, or the flat head. During block processing,\nthe flat head is set to the last final block. The Doomslug consensus algorithm\nguarantees that if a block is final, all future final blocks must be descendants of this block.\nIn other words, any block that is not built on top of the last final block can be discarded because it\nwill never be finalized. As a result, if we use the last final block as the flat head, any block\nFlatStorage needs to process is a descendant of the flat head.\n\nTo support key-value lookups for other blocks that are not the flat head, FlatStorage will\nstore key-value changes (deltas) per block for these blocks.\nWe call these deltas FlatStorageDelta (FSD). Let’s say the flat storage head is at block h,\nand we are applying transactions based on block h’. Since h is the last final block,\nh is an ancestor of h'. To access the state at block h', we need the FSDs of all blocks between h and h'.\nNote that all these FSDs must be stored in memory; otherwise, the access of FSDs will trigger\nmore disk reads and we will have to set the storage key read fee higher.\n\n### FSD size estimation\n\nWe prefer to store deltas in memory, because a memory read is much faster than a disk read, and even a single extra RocksDB\naccess requires increasing storage fees, which is not desirable. 
To reduce delta size, we will store hashes of trie keys\ninstead of the keys themselves, because deltas are read-only. Now let's carefully estimate the FSD size.\n\nWe can do so using protocol fees as of today. Assume that flat state stores a mapping from keys to value refs.\nThe maximal key length is ~2 KiB, which is the limit of contract data key size. During wasm execution, we pay\n`wasm_storage_write_base` = 64 Ggas per call. The entry size is 68 B for the key hash and value ref.\nThen the total size of keys changed in a block is at most\n`chunk_gas_limit / gas_per_entry * entry_size * num_shards = (1300 Tgas / 64 Ggas) * 68 B * 4 ~= 5.5 MiB`.\n\nAssuming that we can increase RAM requirements by 1 GiB, we can afford to store deltas for 100-200 blocks\nsimultaneously.\n\nNote that if we store a value instead of a value ref, the size of FSDs can potentially be much larger.\nBecause the value limit is 4 MiB, we can’t apply the previous argument about base cost.\nSince `wasm_storage_write_value_byte` = 31 Mgas, the values' contribution to FSD size can be estimated as\n`(1300 Tgas / storage_write_value_byte * num_shards)`, or ~167 MiB. The same estimation for trie keys gives 54 MiB.\nThe advantage of storing values instead of value refs is that it saves one disk read if the key has been\nmodified in the recent blocks. It may be beneficial if we get many transactions or receipts touching the same\ntrie keys in consecutive blocks, but it is hard to estimate the value of such benefits without more data.\nWe may store only short values (\"inlining\"), but this idea is orthogonal and can be applied separately.\n\n### Protocol changes\n\nFlat Storage itself doesn't change the protocol. We only change impacted storage costs to reflect the changes in performance. 
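As a sanity check, the FSD size estimate above can be reproduced with a short standalone calculation (a sketch; the constants are the protocol parameters quoted in the "FSD size estimation" section):

```rust
// Sketch reproducing the FSD size estimate from the "FSD size estimation"
// section. Constants are the protocol parameters quoted there.
fn main() {
    let chunk_gas_limit: u64 = 1_300 * 10u64.pow(12); // 1300 Tgas per block
    let gas_per_entry: u64 = 64 * 10u64.pow(9); // wasm_storage_write_base = 64 Ggas
    let entry_size: u64 = 68; // bytes: key hash + value ref
    let num_shards: u64 = 4;

    // Upper bound on the number of changed entries per shard per block.
    let max_entries = chunk_gas_limit / gas_per_entry;
    let fsd_bytes = max_entries * entry_size * num_shards;
    println!("entries per shard: {max_entries}"); // 20312
    println!("max FSD size per block: {:.2} MB", fsd_bytes as f64 / 1e6); // 5.52 MB
}
```

With a 1 GiB RAM budget, this bound is consistent with keeping deltas for 100-200 blocks in memory, as stated above.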
Below we describe reads and writes separately.\n\n#### Storage Reads\n\nThe latest proposal for shipping storage reads is [here](https://github.com/near/nearcore/issues/8006#issuecomment-1473718509).\nIt solves several issues with costs, but the major impact of flat storage is that essentially for reads\n`wasm_touching_trie_node` and `wasm_read_cached_trie_node` are reduced to 0. The reason is that before we had to cover the costs\nof reading nodes from memory or disk, and with flat storage we make only 2 DB reads.\n\nThe latest up-to-date gas and compute costs can be found in the nearcore repo.\n\n#### Storage Writes\n\nStorage writes are charged similarly to reads and include TTN as well, because updating the leaf trie\nnode which stores the value for the trie key requires updating all trie nodes on the path leading to the leaf node.\nAll writes are committed at once in one db transaction at the end of block processing, outside of the runtime after\nall receipts in a block are executed. However, at the time of execution, the runtime needs to calculate the cost,\nwhich means it needs to know how many trie nodes the write affects, so the runtime will issue a read for every write\nto calculate the TTN cost for the write. Such reads cannot be replaced by a read in FlatStorage because FlatStorage does\nnot provide the path to the trie node.\n\nThere are multiple proposals on how storage writes can work with FlatStorage.\n\n- Keep it the same. The cost of writes remains the same. Note that this can increase the cost for writes in\n  some cases, for example, if a contract first reads from a key and then writes to the same key in the same chunk.\n  Without FlatStorage, the key will be cached in the chunk cache after the read, so the write will cost less.\n  With FlatStorage, the read will go through FlatStorage, the write will not find the key in the chunk cache and\n  it will cost more.\n- Remove the TTN cost from storage write fees. 
Currently, there are two ideas in this direction.\n  - Charge based on the maximum depth of a contract’s state, instead of per touched trie node.\n  - Charge based on key length only.\n    Both of the above ideas would allow us to get rid of trie traversal (\"reads-for-writes\") from the critical path of\n    block execution. However,\n    it is unclear at this point what the new cost would look like and whether further optimizations are needed\n    to bring down the cost for writes in the new cost model.\n\nSee https://gov.near.org/t/storage-write-optimizations/30083 for more details.\n\nWhile storage writes are not fully implemented yet, we may increase the compute cost parameter for storage writes, introduced\nin https://github.com/near/NEPs/pull/455, as an intermediate solution.\n\n### Migration Plan\n\nThere are two main questions regarding how to enable FlatStorage.\n\n1. Whether there should be a database migration. The main challenge of enabling FlatStorage will be to build the flat state\n   column, which requires iterating over the entire state. Estimations showed that it takes 10 hours to build the\n   flat state for archival nodes and 5 hours for rpc and validator nodes with 8 threads. The main concern is that if\n   it takes too long for archival nodes to migrate,\n   they may have a hard time catching up later since the block processing speed of archival nodes is not very fast.\n\n   Alternatively, we can build the flat state in a background process while the node is running. This provides a better\n   experience for both archival and validator nodes since the migration process is transparent to them. It would require\n   more implementation effort from our side.\n\n   We currently proceed with background migration using 8 threads.\n\n2. Whether there should be a protocol upgrade. The enabling of FlatStorage itself does not require a protocol upgrade, since\n   it is an internal storage implementation that doesn't change the protocol level. 
However, a protocol upgrade is needed\n   if we want to adjust fees based on the storage performance with FlatStorage. These two changes can happen in one release,\n   or we can release them separately. We propose that the enabling of FlatStorage and the protocol upgrade\n   to adjust fees should happen in separate releases to reduce the risk. The period between the two releases can be\n   used to test the stability and performance of FlatStorage. Because it is not a protocol change, it is easy to roll back\n   the change in case any issue arises.\n\n## Reference Implementation\n\nFlatStorage will implement the following structs.\n\n`FlatStorageChunkView`: interface for getting a value or value reference from flat storage for a\nspecific shard, block hash and trie key. In the current logic we plan to make it part of `Trie`,\nand all trie reads will be directed to this object. Though we could work with chunk hashes, we don't,\nbecause block hashes are easier to navigate.\n\n`FlatStorage`: API for interacting with flat storage for a fixed shard, including updating the head,\nadding new deltas and creating `FlatStorageChunkView`s. It owns the data needed to serve reads,\nfor example, all block deltas that are stored in flat storage and the flat\nstorage head. `FlatStorageChunkView` can access `FlatStorage` to get the list of\ndeltas it needs to apply on top of the state of the current flat head in order to\ncompute the state of a target block.\n\n`FlatStorageManager`: owns flat storages for all shards, is stored in `NightshadeRuntime`, and accepts\nupdates from the `Chain` side, caused by successful processing of a chunk or block.\n\n`FlatStorageCreator`: handles creation of the flat storage structs or initiates background creation (aka the migration\nprocess) if flat storage data is not present in the DB yet.\n\n`FlatStateDelta`: a HashMap that contains state changes introduced in a chunk. 
They can be applied\non top of the state at the flat storage head to compute the state at another block.\n\nThe reason for having separate flat storages is that there are two modes of block processing,\nnormal block processing and block catchups.\nSince they are performed on different ranges of blocks, flat storage needs to be able to support\ndifferent ranges of blocks on different shards. Therefore, we separate the flat storage objects\nused for different shards.\n\n### DB columns\n\n`DBCol::FlatState` stores a mapping from trie keys to the values corresponding to the trie keys,\nbased on the state of the block at the flat storage head.\n\n- _Rows_: trie key (`Vec<u8>`)\n- _Column type_: `ValueRef`\n\n`DBCol::FlatStateDeltas` stores all existing FSDs as a mapping from `(shard_id, block_hash, trie_key)` to the `ValueRef`.\nTo read the whole delta, we read all values for the given key prefix. This delta stores all state changes introduced in the\ngiven shard of the given block.\n\n- _Rows_: `{ shard_id, block_hash, trie_key }`\n- _Column type_: `ValueRef`\n\nNote that the `FlatStateDelta`s needed are stored in memory, so during block processing this column won't be used\nat all. This column is only used to load deltas into memory at `FlatStorage` initialization time when the node starts.\n\n`DBCol::FlatStateMetadata` stores miscellaneous data about the flat storage layout, including the current flat storage\nhead, current creation status and info about delta existence. We don't specify the exact format here because it is under\ndiscussion and can be tweaked until release.\n\nSimilarly, the flat head is also stored in `FlatStorage` in memory, so this column is only used to initialize\n`FlatStorage` when the node starts.\n\n### `FlatStateDelta`\n\n`FlatStateDelta` stores a mapping from trie keys to value refs. 
If the value is `None`, it means the key is deleted\nin the block.\n\n```rust\npub struct FlatStateDelta(HashMap<Vec<u8>, Option<ValueRef>>);\n```\n\n```rust\npub fn from_state_changes(changes: &[RawStateChangesWithTrieKey]) -> FlatStateDelta\n```\n\nConverts raw state changes to a flat state delta. The raw state changes will be returned as part of the result of\n`Runtime::apply_transactions`. They will be converted to `FlatStateDelta` to be added\nto `FlatStorage` during `Chain::postprocess_block` or `Chain::catch_up_postprocess`.\n\n### `FlatStorageChunkView`\n\n`FlatStorageChunkView` will be created for a shard `shard_id` and a block `block_hash`, and it can perform\nkey-value lookups for the state of shard `shard_id` after block `block_hash` is applied.\n\n```rust\npub struct FlatStorageChunkView {\n    /// Used to access flat state stored at the head of flat storage.\n    store: Store,\n    /// The block for which key-value pairs of its state will be retrieved. The flat state\n    /// will reflect the state AFTER the block is applied.\n    block_hash: CryptoHash,\n    /// Stores the state of the flat storage.\n    flat_storage: FlatStorage,\n}\n```\n\n`FlatStorageChunkView` will provide the following interface.\n\n```rust\npub fn get_ref(\n    &self,\n    key: &[u8],\n) -> Result<Option<ValueRef>, StorageError>\n```\n\nReturns the value or value reference corresponding to the given `key`\nfor the state that this `FlatStorageChunkView` object represents, i.e., the state after\nblock `self.block_hash` is applied.\n\n### `FlatStorageManager`\n\n`FlatStorageManager` will be stored as part of `ShardTries` and `NightshadeRuntime`. 
Similar to how `ShardTries` is used to\nconstruct new `Trie` objects given a state root and a shard id, `FlatStorageManager` is used to construct\na new `FlatStorageChunkView` object given a block hash and a shard id.\n\n```rust\npub fn new_flat_storage_chunk_view(\n    &self,\n    shard_id: ShardId,\n    block_hash: Option<CryptoHash>,\n) -> FlatStorageChunkView\n```\n\nCreates a new `FlatStorageChunkView` to be used for performing key-value lookups on the state of shard `shard_id`\nafter block `block_hash` is applied.\n\n```rust\npub fn get_flat_storage(\n    &self,\n    shard_id: ShardId,\n) -> Result<FlatStorage, FlatStorageError>\n```\n\nReturns the `FlatStorage` for the shard `shard_id`. This function is needed because even though\n`FlatStorage` is part of `NightshadeRuntime`, `Chain` also needs access to `FlatStorage` to update the flat head.\nWe will also create a function with the same name in `NightshadeRuntime` that calls this function to give `Chain` access\nto `FlatStorage`.\n\n```rust\npub fn remove_flat_storage(\n    &self,\n    shard_id: ShardId,\n) -> Result<FlatStorage, FlatStorageError>\n```\n\nRemoves the flat storage for a shard if we stopped tracking it.\n\n### `FlatStorage`\n\n`FlatStorage` is created per shard. It provides information about which blocks the flat storage\non the given shard currently supports and what block deltas need to be applied on top of the stored\nflat state on disk to get the state of the target block.\n\n```rust\nfn get_blocks_to_head(\n    &self,\n    target_block_hash: &CryptoHash,\n) -> Result<Vec<CryptoHash>, FlatStorageError>\n```\n\nReturns the list of deltas between blocks `target_block_hash` (inclusive) and flat head (exclusive).\nReturns an error if `target_block_hash` is not a direct descendant of the current flat head.\nThis function will be used in `FlatStorageChunkView::get_ref`. 
Note that we can't call it once and store the result while applying a\nchunk, because in parallel to that some block can be processed and the flat head can be updated.\n\n```rust\nfn update_flat_head(&self, new_head: &CryptoHash) -> Result<(), FlatStorageError>\n```\n\nUpdates the head of the flat storage, including updating the flat head in memory and on disk,\nupdating the flat state on disk to reflect the state at the new head, and garbage-collecting the `FlatStateDelta`s that\nare no longer needed from memory and from disk.\n\n```rust\nfn add_block(\n    &self,\n    block_hash: &CryptoHash,\n    delta: FlatStateDelta,\n) -> Result<StoreUpdate, FlatStorageError>\n```\n\nAdds `delta` to `FlatStorage` and returns a `StoreUpdate` object that includes the DB transaction to be committed to persist\nthat change.\n\n```rust\nfn get_ref(\n    &self,\n    block_hash: &CryptoHash,\n    key: &[u8],\n) -> Result<Option<ValueRef>, FlatStorageError>\n```\n\nReturns the `ValueRef` from the flat storage state on top of `block_hash`. Returns `None` if the key is not present, or an error if\nthe block is not supported.\n\n### Thread Safety\n\nWe should note that the implementation of `FlatStorage` must be thread safe because it can\nbe concurrently accessed by multiple threads. A node can process multiple blocks at the same time\nif they are on different forks, and chunks from these blocks can trigger storage reads in parallel.\nTherefore, `FlatStorage` will be guarded by a `RwLock` so its access can be shared safely:\n\n```rust\npub struct FlatStorage(Arc<RwLock<FlatStorageInner>>);\n```\n\n## Drawbacks\n\nThe main drawback is that we need to control the total size of state updates in blocks after the current final head.\nWhile under current testnet/mainnet load the number of blocks above the final head doesn't exceed 5 in 99.99% of cases, we still have to\nconsider extreme cases, because the Doomslug consensus doesn't give guarantees / an upper limit on that. 
If we don't consider\nthis at all and there is no finality for a long time, validator nodes can crash because of too many FSDs in memory, and\nthe chain slows down and stalls, which can have a negative impact on user/validator experience and reputation. For now, we\nclaim that we support enough deltas in memory for the chain to be finalized, and the proper discussions are likely to happen\nin NEPs like https://github.com/near/NEPs/pull/460.\n\nThe risk of DB corruption slightly increases, and it becomes harder to replay blocks on chain. While `Trie` entries are\nessentially immutable (in fact, the value for each key is\nunique, because the key is a value hash), `FlatStorage` is read-modify-write, because values for the same `TrieKey` can be\ncompletely different. We believe that such a flat mapping is reasonable to maintain anyway, e.g. for the newly discovered state\nsync idea. But if some change was applied incorrectly, we may have to recompute the whole flat storage, and for block\nhashes before the flat head we can't access flat storage at all.\n\nThough Flat Storage significantly reduces the number of storage reads, we have to keep it up-to-date, which results in 1\nextra disk write for each changed key, and 1 auxiliary disk write + removal for each FSD. Disk requirements also slightly\nincrease. We think this is acceptable, because the actual disk writes are executed in the background and are not a bottleneck\nfor block processing. For storage writes in general, Flat Storage is even a net improvement, because it removes the\nnecessity to traverse changed nodes during write execution (\"reads-for-writes\"), and we can apply optimizations\nthere (see the \"Storage Writes\" section).\n\nImplementing FlatStorage will require a lot of engineering effort and introduce code that will make the codebase more\ncomplicated. In particular, we had to extend the `RuntimeAdapter` API with flat storage-related methods after thorough\nconsideration. 
We are confident that FlatStorage will bring a lot of performance benefit, but we can only measure the exact\nimprovement after the implementation. We may find that the benefit FlatStorage brings is not\nworth the effort, but it is very unlikely.\n\nIt will make state rollback harder in the future when we enable challenges in phase 2 of sharding.\nWhen a challenge is accepted and the state needs to be rolled back to a previous block, the entire flat state needs to\nbe rebuilt, which could take a long time. Alternatively, we could postpone garbage collection of deltas and add support\nfor applying them backwards.\n\nSpeaking of new sharding phases, once nodes are no longer tracking all shards, Flat Storage must have support for adding\nor removing state for some specific shard. Adding a new shard is a tricky but natural extension of the catchup process. Our\ncurrent approach for removal is to iterate over all entries in `DBCol::FlatState` and find out for each trie key to\nwhich shard it belongs. We would be happy to assume that each shard is represented by a set of\ncontiguous ranges in `DBCol::FlatState` and make removals simpler, but this is still under discussion.\n\nLast but not least, resharding is not supported by the current implementation yet.\n\n## Future possibilities\n\nFlat Storage maintains all state keys in sorted order, which seems beneficial. We are currently investigating the opportunity to\nspeed up state sync: instead of traversing a state part in the Trie, we can extract the range of keys and values from Flat Storage\nand build the range of Trie nodes based on it. 
It is well known that reading Trie nodes is a bottleneck for state sync as\nwell.\n\n## Changelog\n\n### 1.0.0 - Initial Version\n\nThe NEP was approved by Protocol Working Group members on March 16, 2023 ([meeting recording](https://www.youtube.com/watch?v=4VxRoKwLXIs)):\n\n- [Bowen's vote](https://github.com/near/NEPs/pull/399#issuecomment-1467010125)\n- [Marcelo's vote](https://github.com/near/NEPs/pull/399#pullrequestreview-1341069564)\n- [Marcin's vote](https://github.com/near/NEPs/pull/399#issuecomment-1465977749)\n\n#### Benefits\n\n- The proposal makes serving reads more efficient; making the NEAR protocol cheaper to use and increasing the capacity of the network;\n- The proposal makes estimating gas costs for a transaction easier as the fees for reading are no longer a function of the trie structure whose shape the smart contract developer does not know ahead of time and can continuously change.\n- The proposal should open doors to enabling future efficiency gains in the protocol and further simplifying gas fee estimations.\n- 'Secondary' index over the state data - which would allow further optimisations in the future.\n\n#### Concerns\n\n| #   | Concern                                        | Resolution                                                                                                                                                                                                                                                                 | Status                                                                                |\n| --- | ---------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- |\n| 
1   | The cache requires additional database storage | There is an upper bound on how much additional storage is needed. The costs for the additional disk storage should be negligible                                                                                                                                           | Not an issue                                                                          |\n| 2   | Additional implementation complexity           | Given the benefits of the proposal, I believe the complexity is justified                                                                                                                                                                                                  | not an issue                                                                          |\n| 3   | Additional memory requirement                  | Most node operators are already operating over-provisioned machines which can handle the additional memory requirement. The minimum requirements should be raised but it appears that minimum requirements are already not enough to operate a node                        | This is a concern but it is not specific to this project                              |\n| 4   | Slowing down the read-update-write workload    | This is common pattern in smart contracts so indeed a concern. However, there are future plans on how to address this by serving writes from the flat storage as well which will also reduce the fees of serving writes and make further improvements to the NEAR protocol | This is a concern but hopefully will be addressed in future iterations of the project |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0408.md",
    "content": "---\nNEP: 408\nTitle: Injected Wallet API\nAuthor: Daryl Collins <@MaximusHaximus>, @lewis-sqa\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/pull/408, https://github.com/near/NEPs/pull/370\nType: Standards Track\nCategory: Wallet\nCreated: 10-Oct-2022\n---\n\n# Injected Wallets\n\n## Summary\n\nStandard interface for injected wallets.\n\n## Motivation\n\ndApps are finding it increasingly difficult to support the ever expanding choice of wallets due to their wildly different implementations. While projects such as [Wallet Selector](https://github.com/near/wallet-selector) attempt to mask this problem, it's clear the ecosystem requires a standard that will not only benefit dApps but make it easier for established wallets to support NEAR.\n\n## Rationale and alternatives\n\nAt its most basic, a wallet contains key pairs required to sign messages. This standard aims to define an API (based on our learning from [Wallet Selector](https://github.com/near/wallet-selector)) that achieves this requirement through a number of methods exposed on the `window` object.\n\nThe introduction of this standard makes it possible for `near-api-js` to become wallet-agnostic and eventually move away from the high amount of coupling with NEAR Wallet. It simplifies projects such as [Wallet Selector](https://github.com/near/wallet-selector) that must implement various abstractions to normalise the different APIs before it can display a modal for selecting a wallet.\n\nThis standard takes a different approach to a wallet API than other blockchains such as [Ethereum's JSON-RPC Methods](https://docs.metamask.io/guide/rpc-api.html#ethereum-json-rpc-methods). Mainly, it rejects the `request` abstraction that feels unnecessary and only adds to the complexity both in terms of implementation and types. 
Instead, it exposes various methods directly on the top-level object, which also improves discoverability.\n\nThere have been many iterations of this standard to help inform what we consider the \"best\" approach right now for NEAR. Below is a summary of the key design choices:\n\n### Single account vs. multiple accounts\n\nAlmost every wallet implementation in NEAR used a single account model until we began integrating with [WalletConnect](https://walletconnect.com/). In WalletConnect, sessions can contain any number of accounts that can be modified by the dApp or wallet. The decision to use a multiple account model was influenced by the following reasons:\n\n- It future-proofs the API even if wallets (such as MetaMask) only support a single \"active\" account.\n- Other blockchains such as [Ethereum](https://docs.metamask.io/guide/rpc-api.html#eth-requestaccounts) implement this model.\n- Access to multiple accounts allows dApps more freedom to improve UX as users can seamlessly switch between accounts.\n- Aligns with WalletConnect via the [Bridge Wallet Standard](https://github.com/near/NEPs/blob/master/neps/nep-0368.md).\n\n### Storage of key pairs for FunctionCall access keys in dApp context vs. wallet context\n\n- NEAR's unique concept of `FunctionCall` access keys allows for the concept of 'signing in' to a dApp using your wallet. 
'Signing In' to a dApp is accomplished by adding a `FunctionCall` type access key that the dApp owns to the account that the user is logging in as.\n- Once a user has 'signed in' to a dApp, the dApp can then use the keypair that it owns to execute transactions without having to prompt the user to route and approve those transactions through their wallet.\n- `FunctionCall` access keys have a limited quota that can only be used to pay for gas fees (typically 0.25 NEAR) and can further be restricted to only be allowed to call _specific methods_ on one **specific** smart contract.\n- This allows for an ideal user experience for dApps that require small gas-only transactions regularly while in use. Those transactions can be done without interrupting the user experience by requiring them to be approved through their wallet. A great example of this is evident in gaming use-cases -- take a gaming dApp where some interactions the user makes must write to the blockchain as they do common actions in the game world. Without the 'sign in' concept that provides the dApp with its own limited usage key, the user might be constantly interrupted by needing to approve transactions on their wallet as they perform common actions. If a player has their account secured with a Ledger, the gameplay experience would be constantly interrupted by prompts to approve transactions on their Ledger device! With the 'sign in' concept, the user will only intermittently need to approve transactions to re-sign-in, when the quota that they approved for gas usage during their last login has been used up.\n- Generally, it is recommended to only keep `FullAccess` keys in wallet scope and hidden from the dApp consumer. `FunctionCall` type keys should be generated and owned by the dApp, and requested to be added using the `signIn` method. 
They should **not** be 'hidden' inside the wallet in the way that `FullAccess` type keys are.\n\n## Specification\n\nInjected wallets are typically browser extensions that implement the `Wallet` API (see below). References to the currently available wallets are tracked on the `window` object. To avoid namespace collisions and easily detect when they're available, wallets must mount under their own key of the object `window.near` (e.g. `window.near.sender`).\n**NOTE: Do not replace the entire `window.near` object with your wallet implementation, or add any objects as properties of the `window.near` object that do not conform to the Injected Wallet Standard**\n\nAt the core of a wallet are [`signTransaction`](#signtransaction) and [`signTransactions`](#signtransactions). These methods, when given a [`TransactionOptions`](#wallet-api) instance, will prompt the user to sign with a key pair previously imported (with the assumption it has [`FullAccess`](https://nomicon.io/DataStructures/AccessKey) permission).\n\nIn most cases, a dApp will need a reference to an account and associated public key to construct a [`Transaction`](https://nomicon.io/RuntimeSpec/Transactions). The [`connect`](#connect) method helps solve this issue by prompting the user to select one or more accounts they would like to make visible to the dApp. When at least one account is visible, the wallet considers the dApp [`connected`](#connected) and they can access a list of [`accounts`](#accounts) containing an `accountId` and `publicKey`.\n\nFor dApps that often sign gas-only transactions, [`FunctionCall`](https://nomicon.io/DataStructures/AccessKey#accesskeypermissionfunctioncall) access keys can be added/deleted for one or more accounts using the [`signIn`](#signin) and [`signOut`](#signout) methods. 
While this functionality could be achieved with [`signTransactions`](#signtransactions), these dedicated methods signal the direct intention of a user signing in to (or out of) a dApp's smart contract.\n\n### Wallet API\n\nBelow is the entire API for injected wallets. It makes use of `near-api-js` to enable interoperability with dApps that already use it for constructing transactions and communicating with RPC endpoints.\n\n```ts\nimport { transactions, utils } from \"near-api-js\";\n\ninterface Account {\n  accountId: string;\n  publicKey: utils.PublicKey;\n}\n\ninterface Network {\n  networkId: string;\n  nodeUrl: string;\n}\n\ninterface SignInParams {\n  permission: transactions.FunctionCallPermission;\n  account: Account;\n}\n\ninterface SignInMultiParams {\n  permissions: Array<transactions.FunctionCallPermission>;\n  account: Account;\n}\n\ninterface SignOutParams {\n  accounts: Array<Account>;\n}\n\ninterface TransactionOptions {\n  receiverId: string;\n  actions: Array<transactions.Action>;\n  signerId?: string;\n}\n\ninterface SignTransactionParams {\n  transaction: TransactionOptions;\n}\n\ninterface SignTransactionsParams {\n  transactions: Array<TransactionOptions>;\n}\n\ninterface Events {\n  accountsChanged: { accounts: Array<Account> };\n}\n\ninterface ConnectParams {\n  networkId: string;\n}\n\ntype Unsubscribe = () => void;\n\ninterface Wallet {\n  id: string;\n  connected: boolean;\n  network: Network;\n  accounts: Array<Account>;\n\n  supportsNetwork(networkId: string): Promise<boolean>;\n  connect(params: ConnectParams): Promise<Array<Account>>;\n  signIn(params: SignInParams): Promise<void>;\n  signInMulti(params: SignInMultiParams): Promise<void>;\n  signOut(params: SignOutParams): Promise<void>;\n  signTransaction(\n    params: SignTransactionParams\n  ): Promise<transactions.SignedTransaction>;\n  signTransactions(\n    params: SignTransactionsParams\n  ): Promise<Array<transactions.SignedTransaction>>;\n  disconnect(): Promise<void>;\n  on<EventName 
extends keyof Events>(\n    event: EventName,\n    callback: (params: Events[EventName]) => void\n  ): Unsubscribe;\n  off<EventName extends keyof Events>(\n    event: EventName,\n    callback?: () => void\n  ): void;\n}\n```\n\n#### Properties\n\n##### `id`\n\nRetrieve the wallet's unique identifier.\n\n```ts\nconst { id } = window.near.wallet;\n\nconsole.log(id); // \"wallet\"\n```\n\n##### `connected`\n\nDetermine whether we're already connected to the wallet and have visibility of at least one account.\n\n```ts\nconst { connected } = window.near.wallet;\n\nconsole.log(connected); // true\n```\n\n##### `network`\n\nRetrieve the currently selected network.\n\n```ts\nconst { network } = window.near.wallet;\n\nconsole.log(network); // { networkId: \"testnet\", nodeUrl: \"https://rpc.testnet.near.org\" }\n```\n\n##### `accounts`\n\nRetrieve all accounts visible to the dApp.\n\n```ts\nconst { accounts } = window.near.wallet;\n\nconsole.log(accounts); // [{ accountId: \"test.testnet\", publicKey: PublicKey }]\n```\n\n#### Methods\n\n##### `connect`\n\nRequest visibility for one or more accounts from the wallet. This should explicitly prompt the user to select from their list of imported accounts. dApps can use the `accounts` property once connected to retrieve the list of visible accounts.\n\n> Note: Calling this method when already connected will allow users to modify their selection, triggering the 'accountsChanged' event.\n\n```ts\nconst accounts = await window.near.wallet.connect({ networkId: \"testnet\" });\n```\n\n##### `signTransaction`\n\nSign a transaction. 
This request should require explicit approval from the user.\n\n```ts\nimport { transactions, providers, utils } from \"near-api-js\";\n\n// Retrieve accounts (assuming already connected) and current network.\nconst { network, accounts } = window.near.wallet;\n\n// Set up an RPC provider to broadcast the signed transaction.\nconst provider = new providers.JsonRpcProvider({ url: network.nodeUrl });\n\nconst signedTx = await window.near.wallet.signTransaction({\n  transaction: {\n    signerId: accounts[0].accountId,\n    receiverId: \"guest-book.testnet\",\n    actions: [\n      transactions.functionCall(\n        \"addMessage\",\n        { text: \"Hello World!\" },\n        utils.format.parseNearAmount(\"0.00000000003\"),\n        utils.format.parseNearAmount(\"0.01\")\n      ),\n    ],\n  },\n});\n// Send the transaction to the blockchain.\nawait provider.sendTransaction(signedTx);\n```\n\n##### `signTransactions`\n\nSign a list of transactions. This request should require explicit approval from the user.\n\n```ts\nimport { transactions, providers, utils } from \"near-api-js\";\n\n// Retrieve accounts (assuming already connected) and current network.\nconst { network, accounts } = window.near.wallet;\n\n// Set up an RPC provider to broadcast the signed transactions.\nconst provider = new providers.JsonRpcProvider({ url: network.nodeUrl });\n\nconst signedTxs = await window.near.wallet.signTransactions({\n  transactions: [\n    {\n      signerId: accounts[0].accountId,\n      receiverId: \"guest-book.testnet\",\n      actions: [\n        transactions.functionCall(\n          \"addMessage\",\n          { text: \"Hello World! (1/2)\" },\n          utils.format.parseNearAmount(\"0.00000000003\"),\n          utils.format.parseNearAmount(\"0.01\")\n        ),\n      ],\n    },\n    {\n      signerId: accounts[0].accountId,\n      receiverId: \"guest-book.testnet\",\n      actions: [\n        transactions.functionCall(\n          \"addMessage\",\n          { text: \"Hello World! 
(2/2)\" },\n          utils.format.parseNearAmount(\"0.00000000003\"),\n          utils.format.parseNearAmount(\"0.01\")\n        ),\n      ],\n    },\n  ],\n});\n\nfor (let i = 0; i < signedTxs.length; i += 1) {\n  const signedTx = signedTxs[i];\n\n  // Send the transaction to the blockchain.\n  await provider.sendTransaction(signedTx);\n}\n```\n\n##### `disconnect`\n\nRemove visibility of all accounts from the wallet.\n\n```ts\nawait window.near.wallet.disconnect();\n```\n\n##### `signIn`\n\nAdd one `FunctionCall` access key for one or more accounts. This request should require explicit approval from the user.\n\n```ts\nimport { utils } from \"near-api-js\";\n\n// Retrieve the list of accounts we have visibility of.\nconst { accounts } = window.near.wallet;\n\n// Request FunctionCall access to the 'guest-book.testnet' smart contract for each account.\nawait window.near.wallet.signIn({\n  permission: {\n    receiverId: \"guest-book.testnet\",\n    methodNames: [],\n  },\n  account: {\n    accountId: accounts[0].accountId,\n    publicKey: utils.KeyPair.fromRandom(\"ed25519\").getPublicKey(),\n  },\n});\n```\n\n##### `signInMulti`\n\nAdd multiple `FunctionCall` access keys for one or more accounts. 
This request should require explicit approval from the user.\n\n```ts\nimport { utils } from \"near-api-js\";\n\n// Retrieve the list of accounts we have visibility of.\nconst { accounts } = window.near.wallet;\n\n// Request FunctionCall access to the 'guest-book.testnet' and 'guest-book2.testnet' smart contracts for the first visible account.\nawait window.near.wallet.signInMulti({\n  permissions: [\n    {\n      receiverId: \"guest-book.testnet\",\n      methodNames: [],\n    },\n    {\n      receiverId: \"guest-book2.testnet\",\n      methodNames: [],\n    },\n  ],\n  account: {\n    accountId: accounts[0].accountId,\n    publicKey: utils.KeyPair.fromRandom(\"ed25519\").getPublicKey(),\n  },\n});\n```\n\n##### Benefits\n\nMulti-contract dApps are increasingly common in the ecosystem; this NEP optimizes their UX by avoiding multiple redirects.\n\n##### Concerns\n\n- The currently available keystores will have to catch up in order to support multiple keys per account\n- We should add the new method to the Wallet interface for clarity in the NEP doc\n\n##### `signOut`\n\nDelete `FunctionCall` access key(s) for one or more accounts. 
This request should require explicit approval from the user.\n\n```ts\nimport { utils, keyStores } from \"near-api-js\";\n\n// Setup keystore to retrieve locally stored FunctionCall access keys.\nconst keystore = new keyStores.BrowserLocalStorageKeyStore();\n\n// Retrieve accounts (assuming already connected) and current network.\nconst { network, accounts } = window.near.wallet;\n\n// Remove FunctionCall access (previously granted via signIn) for each account.\nawait window.near.wallet.signOut({\n  accounts: await Promise.all(\n    accounts.map(async ({ accountId }) => {\n      const keyPair = await keystore.getKey(network.networkId, accountId);\n\n      return {\n        accountId,\n        publicKey: keyPair.getPublicKey(),\n      };\n    })\n  ),\n});\n```\n\n#### Events\n\n##### `accountsChanged`\n\nTriggered whenever accounts are updated (e.g. calling `connect` or `disconnect`).\n\n```ts\nwindow.near.wallet.on(\"accountsChanged\", ({ accounts }) => {\n  console.log(\"Accounts Changed\", accounts);\n});\n```\n"
  },
  {
    "path": "neps/nep-0413.md",
    "content": "---\nNEP: 413\nTitle: Near Wallet API - support for signMessage method\nAuthor: Philip Obosi <philip@near.org>, Guillermo Gallardo <guillermo@near.org>\nStatus: Final\n# DiscussionsTo:\nType: Standards Track\nCategory: Wallet\nCreated: 25-Oct-2022\n---\n\n## Summary\n\nA standardized Wallet API method, namely `signMessage`, that allows users to sign a message for a specific recipient using their NEAR account.\n\n## Motivation\n\nNEAR users want to create messages destined to a specific recipient using their accounts. This has multiple applications, one of them being authentication in third-party services.\n\nCurrently, there is no standardized way for wallets to sign a message destined to a specific recipient.\n\n## Rationale and Alternatives\n\nUsers want to sign messages for a specific recipient without incurring GAS fees or compromising their account's security. This means that the message being signed:\n\n1. Must be signed off-chain, with no transactions being involved.\n2. Must include the recipient's name and a nonce.\n3. Cannot represent a valid transaction.\n4. Must be signed using a Full Access Key.\n5. Should be simple to produce/verify, and transmitted securely.\n\n### Why Off-Chain?\n\nSo that the user does not incur GAS fees, and the signed message does not get broadcast to a public network.\n\n### Why The Message MUST NOT be a Transaction? How To Ensure This?\n\nAn attacker could make the user inadvertently sign a valid transaction which, once signed, could be submitted to the network for execution.\n\n#### How to Ensure the Message is not a Transaction\n\nIn NEAR, transactions are encoded in Borsh before being signed. 
The first attribute of a transaction is a `signerId: string`, which is encoded as: (1) 4 bytes representing the string's length, (2) N bytes representing the string itself.\n\nBy prepending the prefix tag $2^{31} + 413$ we can ensure both that (1) the whole message is an invalid transaction (since the string would be too long to be a valid signer account id), and (2) that this NEP is ready for a potential future protocol update, in which non-consensus messages are tagged using $2^{31}$ + NEP-number.\n\n### Why The Message Needs to Include a Receiver and Nonce?\n\nTo stop a malicious app from requesting the user to sign a message for them, only to relay it to a third-party. Including the recipient and making sure the user knows about it should mitigate these kinds of attacks.\n\nMeanwhile, including a nonce helps to mitigate replay attacks, in which an attacker can delay or re-send a signed message.\n\n### Why using a FullAccess Key? Why Not Simply Creating a [FunctionCall Key](https://docs.near.org/protocol/access-keys) for Signing?\n\nThe most common flow for NEAR user authentication into a Web3 frontend involves the creation of a [FunctionCall Key](https://docs.near.org/protocol/access-keys).\n\nOne might feel tempted to reproduce such a process here, for example, by creating a key that can only be used to call a non-existing method in the user's account. This is a bad idea because:\n\n1. The user would need to spend gas creating a new key.\n2. Any third-party can ask the user to create a `FunctionCall Key`, thus opening an attack vector.\n\nUsing a FullAccess key allows us to be sure that the challenge was signed by the user (since nobody should have access to their `FullAccess Key`), while keeping the constraint of not spending gas in the process (because no new key needs to be created).\n\n### Why The Input Needs to Include a State?\n\nIncluding a state helps to mitigate [CSRF attacks](https://auth0.com/docs/secure/attack-protection/state-parameters). 
This way, if a message needs to be signed for authentication purposes, the auth service can keep a state to make sure the auth request comes from the right author.\n\n### How to Return the Signed Message in a Safe Way\n\nSending the signed message in a query string to an arbitrary URL (even within the correct domain) is not secure as the data can be leaked (e.g. through headers). Using URL fragments instead will improve security, since [URL fragments are not included in the `Referer`](https://greenbytes.de/tech/webdav/rfc2616.html#header.referer).\n\n### NEAR Signatures\n\nNEAR transaction signatures are not plain Ed25519 signatures but Ed25519 signatures of a SHA-256 hash (see [near/nearcore#2835](https://github.com/near/nearcore/issues/2835)). Any protocol that signs anything with NEAR account keys should use the same signature format.\n\n## Specification\n\nWallets must implement a `signMessage` method, which takes a `message` destined to a specific `recipient` and transforms it into a verifiable signature.\n\n### Input Interface\n\n`signMessage` must implement the following input interface:\n\n```jsx\ninterface SignMessageParams {\n  message: string; // The message to be transmitted.\n  recipient: string; // The recipient to whom the message is destined (e.g. \"alice.near\" or \"myapp.com\").\n  nonce: [u8; 32]; // A nonce that uniquely identifies this instance of the message, denoted as a 32-byte array (a fixed `Buffer` in JS/TS).\n  callbackUrl?: string; // Optional, applicable to browser wallets (e.g. MyNearWallet). The URL to call after the signing process. Defaults to `window.location.href`.\n  state?: string; // Optional, applicable to browser wallets (e.g. MyNearWallet). 
A state for authentication purposes.\n}\n```\n\n### Structure\n\n`signMessage` must embed the input `message`, `recipient` and `nonce` into the following predefined structure:\n\n```rust\nstruct Payload {\n  message: string; // The same message passed in `SignMessageParams.message`\n  nonce: [u8; 32]; // The same nonce passed in `SignMessageParams.nonce`\n  recipient: string; // The same recipient passed in `SignMessageParams.recipient`\n  callbackUrl?: string // The same callbackUrl passed in `SignMessageParams.callbackUrl`\n}\n```\n\n### Signature\n\nIn order to create a signature, `signMessage` must:\n\n1. Create a `Payload` object.\n2. Convert the `payload` into its [Borsh Representation](https://borsh.io).\n3. Prepend the 4-byte Borsh representation of $2^{31}+413$, as the [prefix tag](https://github.com/near/NEPs/pull/461).\n4. Compute the `SHA256` hash of the concatenation serialized-tag + serialized-payload.\n5. Sign the resulting `SHA256` hash from step 4 using a **full-access** key.\n\n> If the wallet does not hold any `full-access` keys, then it must return an error.\n\n### Example\n\nAssuming that the `signMessage` method was invoked, and that:\n\n- The input `message` is `\"hi\"`\n- The input `nonce` is `[0,...,31]`\n- The input `recipient` is `\"myapp.com\"`\n- The callbackUrl is `\"myapp.com/callback\"`\n- The wallet stores a full-access private key\n\nThe wallet must construct and sign the following `SHA256` hash:\n\n```jsx\n// 2**31 + 413 == 2147484061\nsha256.hash(Borsh.serialize<u32>(2147484061) + Borsh.serialize(Payload{message:\"hi\", nonce:[0,...,31], recipient:\"myapp.com\", callbackUrl: \"myapp.com/callback\"}))\n```\n\n### Output Interface\n\n`signMessage` must return an object containing the **base64** representation of the `signature`, and all the data necessary to verify such a signature.\n\n```jsx\ninterface SignedMessage {\n  accountId: string; // The account name to which the publicKey corresponds as plain text (e.g. 
\"alice.near\")\n  publicKey: string; // The public counterpart of the key used to sign, expressed as a string with format \"<key-type>:<base58-key-bytes>\" (e.g. \"ed25519:6TupyNrcHGTt5XRLmHTc2KGaiSbjhQi1KHtCXTgbcr4Y\")\n  signature: string; // The base64 representation of the signature.\n  state?: string; // Optional, applicable to browser wallets (e.g. MyNearWallet). The same state passed in SignMessageParams.\n}\n```\n\n### Returning the signature\n\n#### Web Wallets\n\nWeb Wallets, such as [MyNearWallet](https://mynearwallet.com), should directly return the `SignedMessage` to the `SignMessageParams.callbackUrl`, passing the `accountId`, `publicKey`, `signature` and the state as URL fragments. This is: `<callbackUrl>#accountId=<accountId>&publicKey=<publicKey>&signature=<signature>&state=<state>`.\n\nIf the signing process fails, then the wallet must return an error message and the state as URL fragments: `<callbackUrl>#error=<error-message-string>&state=<state>`.\n\n#### Other Wallets\n\nNon-web Wallets, such as [Ledger](https://www.ledger.com), can directly return the `SignedMessage` (preferably as a JSON object) and raise an error on failure.\n\n## References\n\nA full example on how to implement the `signMessage` method can be [found here](https://github.com/gagdiez/near-login/blob/1650e25080ab2e8a8c508638a9ba9e9732e76036/server/tests/wallet.ts#L60-L77).\n\n## Drawbacks\n\nAccounts that do not hold a FullAccess Key will not be able to sign these kinds of messages. However, this is a necessary tradeoff for security, since any third-party can ask the user to create a FunctionCall key.\n\nAt the time of writing this NEP, the NEAR Ledger app is unable to sign these kinds of messages, since it can currently only sign plain transactions. This, however, can be overcome by updating the NEAR Ledger app implementation in the near future.\n\nNon-expert subjects could use this standard to authenticate users in an insecure way. 
We urge anyone implementing an authentication service to read about [CSRF attacks](https://auth0.com/docs/secure/attack-protection/state-parameters) and to make use of the `state` field.\n\n## Decision Context\n\n### 1.0.0 - Initial Version\n\nThe Wallet Standards Working Group members approved this NEP on January 17, 2023 ([meeting recording](https://youtu.be/Y6z7lUJSUuA)).\n\n### 1.1.0 - First Revision\n\nImportant security concerns were raised by a community member, driving us to change the proposed implementation.\n\n### Benefits\n\n- Makes it possible to authenticate users without having to add new access keys. This will improve UX, save money and will not increase the on-chain storage of the users' accounts.\n- Makes it possible to authorize through JWT in Web2 services using the NEAR account.\n- Removes delays in adding transactions to the blockchain and makes the experience of using projects like NEAR Social better.\n\n### Concerns\n\n| # | Concern | Resolution | Status |\n| --- | --- | --- | --- |\n| 1 
 | Implementing the signMessage standard will divide wallets into those that quickly add support for it and those that take significantly longer. In this case, some services may not work correctly for some users | (1) Be careful when adding signMessage functionality alongside legacy flows, and ensure that alternative authorization methods remain possible, for example by adding publicKey. (2) Oblige wallets to implement the standard within specific deadlines to keep their support in the wallet selector | Resolved |\n| 2 | A large number of off-chain transactions will reduce activity in the blockchain and may negatively affect NEAR rate and attractiveness to third-party developers | There seems to be a general agreement that it is a good default | Resolved |\n| 3 | `receiver` terminology can be misleading and confusing when existing functionality is taken into consideration (`signTransaction`) | It was recommended for the community to vote for a new name, and the NEP was updated changing `receiver` to `recipient` | Resolved |\n| 4 | The NEP should emphasize that `nonce` and `receiver` should be clearly displayed to the user in the signing requests by wallets to achieve the desired security from these params being included | We strongly recommend the wallet to clearly display all the elements that compose the message being signed. However, this pertains to the wallet's UI and UX, and not to the method's specification, thus the NEP was not changed. 
                                             | Resolved |\n| 5   | NEP-408 (Injected Wallet API) should be extended with this new `signMessage` method                                                                                                                                        | It is not a blocker for this NEP, but a follow-up NEP-extension proposal is welcome.                                                                                                                                                                                             | Resolved |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0418.md",
    "content": "---\nNEP: 418\nTitle: Remove attached_deposit view panic\nAuthor: Austin Abell <austin.abell@near.org>\nStatus: Final\nDiscussionsTo: https://github.com/nearprotocol/neps/pull/418\nType: Standards Track\nCategory: Tools\nVersion: 1.0.0\nCreated: 18-Oct-2022\nUpdated: 27-Jan-2023\n---\n\n## Summary\n\nThis proposal is to switch the behavior of the `attached_deposit` host function on the runtime from panicking in view contexts to returning 0. This results in a better devX: a no-attached-deposit assertion can simply be applied to every function without the runtime aborting in view contexts. Scoping such an assertion to transactions only is impossible, because a transaction can be sent to any method.\n\n## Motivation\n\nThis will allow contract SDK frameworks to add the `attached_deposit == 0` assertion for every function on a contract by default. This behavior matches the Solidity/Eth payable modifier and will ensure that funds aren't sent accidentally to a contract in more cases than currently possible.\n\nThis can't be done at a contract level because there is no way of checking if a function call is within a view context to call `attached_deposit` conditionally. This means that there is no way of restricting the sending of funds to functions intended to be view-only, because the abort from within `attached_deposit` can't be caught and ignored from inside the contract.\n\nInitial discussion: https://near.zulipchat.com/#narrow/stream/295306-pagoda.2Fcontract-runtime/topic/attached_deposit.20view.20error\n\n## Rationale and alternatives\n\nThe rationale for writing `0u128` to the pointer (`u64`) passed into `attached_deposit` is that it is the least breaking change.\n\nThe alternative of returning some special value, say `u128::MAX`, would cause some unintended side effects for view calls using `attached_deposit`. 
For example, if `attached_deposit` is called within a function, older versions of a contract that do not check the special value will return a result assuming that the attached deposit is `u128::MAX`. This is not a large concern since it would just be a view call, but it might be a bad UX in some edge cases, where returning 0 wouldn't be an issue.\n\n## Specification\n\nThe error inside `attached_deposit` for view calls will be removed, and for all view calls, `0u128` will be set at the pointer passed in.\n\n## Reference Implementation\n\nCurrently, the implementation for `attached_deposit` is as follows:\n\n```rust\npub fn attached_deposit(&mut self, balance_ptr: u64) -> Result<()> {\n    self.gas_counter.pay_base(base)?;\n\n    if self.context.is_view() {\n        return Err(HostError::ProhibitedInView {\n            method_name: \"attached_deposit\".to_string(),\n        }\n        .into());\n    }\n    self.memory_set_u128(balance_ptr, self.context.attached_deposit)\n}\n```\n\nThe change would simply remove the `is_view` check so that no error is thrown:\n\n```rust\npub fn attached_deposit(&mut self, balance_ptr: u64) -> Result<()> {\n    self.gas_counter.pay_base(base)?;\n\n    self.memory_set_u128(balance_ptr, self.context.attached_deposit)\n}\n```\n\nThis assumes that `self.context.attached_deposit` is always set to 0 for view calls. This can be asserted or, to be safe, the implementation can check `self.context.is_view()` and set `0u128` explicitly.\n\n## Security Implications\n\nThis only affects view calls, so it will not change anything that is persisted on-chain. It can only have a negative side effect if a contract is under the assumption that `attached_deposit` will panic in view contexts. 
The possibility that this is done _and_ has some value connected with a view call result off-chain seems extremely unlikely.\n\n## Drawbacks\n\nThis is a breaking change to the functionality of `attached_deposit`: it affects the behavior of function calls in view contexts that use `attached_deposit` and no other prohibited host functions.\n\n## Future possibilities\n\n- The Rust SDK, as well as other SDKs, can add the `attached_deposit() == 0` check by default to all methods for safety of use.\n- Potentially, other host functions can be allowed where reasonable values can be inferred. For example, `prepaid_gas` and `used_gas` could return 0.\n\n## Decision Context\n\n### 1.0.0 - Initial Version\n\nThe initial version of NEP-418 was approved by Tools Working Group members on January 19, 2023 ([meeting recording](https://youtu.be/poVmblmc3L4)).\n\n#### Benefits\n\n- This will allow contract SDK frameworks to add the `attached_deposit == 0` assertion for every function on a contract by default.\n- This behavior matches the Solidity/Eth payable modifier and will ensure that funds aren't sent accidentally to a contract in more cases than currently possible.\n- Given that there is no way of checking if a function call is within a view context to call `attached_deposit` conditionally, this NEP only changes a small surface of the API instead of introducing a new host function.\n\n#### Concerns\n\n| # | Concern | Resolution | Status |\n| - | - | - | - |\n| 1 | Proposal potentially triggers the protocol version change | It does not trigger the protocol version change. The current update could be considered a client-breaking change. | Resolved |\n| 2 | The contract can assume that `attached_deposit` will panic in view contexts. | The possibility that this is done _and_ has some value connected with a view call result off-chain seems extremely unlikely. | Won't Fix |\n| 3 | Can we assume that in all view calls, the `attached_deposit` in the VMContext is always zero? 
| Yes, there is no way to set `attached_deposit` in a view call context | Resolved |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0448.md",
    "content": "---\nNEP: 448\nTitle: Zero-balance Accounts\nAuthor: Bowen Wang <bowen@near.org>\nStatus: Final\nDiscussionsTo: https://github.com/nearprotocol/neps/pull/448\nType: Protocol Track\nCreated: 10-Jan-2023\n---\n\n## Summary\n\nA major blocker to a good new user onboarding experience is that users have to acquire NEAR tokens to pay for\ntheir account. With the implementation of [NEP-366](https://github.com/near/NEPs/pull/366), users don't necessarily have\nto first acquire NEAR tokens in order to pay transaction fees, but they still have to pay for the storage of their account.\nTo address this problem, we propose allowing each account free storage for the account itself and a limited number of access keys,\nand accounting for the cost of that storage in the gas cost of the account creation transaction.\n\n## Motivation\n\nIdeally a new user should be able to onboard onto NEAR through any application built on top of NEAR, without having to\nunderstand that the application is running on top of a blockchain. The ideal flow is as follows: a user hears about an interesting\napplication from their friends or some forum and decides to give it a try. The user opens the application in their\nbrowser and directly starts using it without worrying about registration. Under the hood, a keypair is generated for the\nuser and the application creates an account for the user and pays for transaction fees through meta transactions. Later on,\nthe user may find other applications that they are also interested in and give them a try as well. At some point, the user\ngraduates from the onboarding experience by acquiring NEAR tokens, either by earning them or because they like some experience\nso much that they would like to pay for it explicitly. 
Overall we want to have two full access keys for recovery purposes\nand two function call access keys so that users can use two apps before graduating from the onboarding experience.\n\n## Rationale and alternatives\n\nThere are a few alternative ideas:\n\n- Completely disregard storage staking and do not change the account creation cost. This makes the implementation even\n  simpler. However, there may be a risk of a spamming attack given that the cost of creating an account is around 0.2Tgas.\n  In addition, with the current design, it is easy to further reduce the cost. Going the other way is more difficult.\n- Do not change how storage staking is calculated when converting to gas cost. This means that account creation cost would\n  be around 60Tgas, which is both high in gas (meaning that throughput is limited and some contracts are more likely to break)\n  and more costly for users (around 0.006N per account creation).\n\n## Specification\n\nThere are two main changes to the protocol:\n\n- Account creation cost needs to be increased. For every account, at creation time, 770 bytes of storage are reserved\n  for the account itself + four full access keys + two function call access keys. For function call access keys,\n  the \"free\" ones cannot use `method_names` in order to minimize the storage requirement for an account.\n  The number of bytes is calculated as follows:\n  - An account takes 100 bytes due to `storage_usage_config.num_bytes_account`\n  - A full access key takes 42 bytes and there is an additional 40 bytes required due to `storage_usage_config.num_extra_bytes_record`\n  - A function call access key takes 131 bytes and there is an additional 40 bytes required due to `storage_usage_config.num_extra_bytes_record`\n  - Therefore the total number of bytes is `100 + (131 + 40) * 2 + (42 + 40) * 4 = 770`.\n\n  The cost of these bytes is paid through the transaction fee. 
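As a quick sanity check, the byte accounting above can be reproduced in a few lines of Rust. This is an illustrative sketch only: the constant names below are invented here to mirror the config fields quoted in the bullet points, and are not protocol code.

```rust
// Per-record sizes quoted in the bullet points above, in bytes (assumed values).
const NUM_BYTES_ACCOUNT: u64 = 100;
const NUM_EXTRA_BYTES_RECORD: u64 = 40;
const FULL_ACCESS_KEY_BYTES: u64 = 42;
const FUNCTION_CALL_KEY_BYTES: u64 = 131;

/// Storage reserved for a zero-balance account: the account record,
/// two function call access keys and four full access keys.
fn zero_balance_account_bytes() -> u64 {
    NUM_BYTES_ACCOUNT
        + 2 * (FUNCTION_CALL_KEY_BYTES + NUM_EXTRA_BYTES_RECORD)
        + 4 * (FULL_ACCESS_KEY_BYTES + NUM_EXTRA_BYTES_RECORD)
}

fn main() {
    // 100 + 2 * 171 + 4 * 82 = 770
    assert_eq!(zero_balance_account_bytes(), 770);
}
```
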
Note that there is already [discussion](https://github.com/near/NEPs/issues/415)\n  around the storage cost of NEAR and whether it is reasonable. While this proposal does not attempt to change the entire\n  storage staking mechanism, the cost of storage is reduced by 10x when converting to gas. A [discussion](https://gov.near.org/t/storage-staking-price/399)\n  from a while ago mentioned this idea, and the concerns raised there turned out not to be real concerns. No one is deleting\n  data from storage in practice and the storage staking mechanism does not really serve its purpose. That conversion means\n  we increase the account creation cost to 7.7Tgas from 0.2Tgas.\n\n- The storage staking check will not be applied if an account has <= 4 full access keys and <= 2 function call access keys\n  and does not have a contract deployed. If an account accrues more than 4 full access keys or more than 2 function call access keys,\n  however, it must pay for the storage of everything, including those 6 keys. This makes the implementation simpler and less error-prone.\n\n## Reference Implementation (Required for Protocol Working Group proposals, optional for other categories)\n\nDetails of the changes described in the section above:\n\n- Change `create_account_cost` to\n\n```json\n\"create_account_cost\": {\n  \"send_sir\": 3850000000000,\n  \"send_not_sir\": 3850000000000,\n  \"execution\": 3850000000000\n},\n```\n\n- Change the implementation of `get_insufficient_storage_stake` to check whether an account is a zero-balance account.\n  Note that even though the intent, as described in the section above, is to limit the number of full access keys to 4\n  and the number of function call access keys to 2, for ease of implementation, it makes sense to limit the size of\n  `storage_usage` on an account to 770 bytes, because `storage_usage` is already stored under `Account` and it does not\n  require any additional storage reads. 
More specifically, the check looks roughly as follows:\n\n```rust\n/// Returns true if an account is a zero balance account\nfn check_for_zero_balance_account(account: &Account) -> bool {\n    account.storage_usage <= 770 // 4 full access keys and 2 function call access keys\n}\n```\n\n## Drawbacks (Optional)\n\n- Reduction of the storage cost when converting the storage cost of zero-balance accounts to gas cost may be a concern. But\n  I argue that the current storage cost is too high. A calculation shows that the current storage cost is around 36,000 times\n  higher than S3 storage cost. In addition, when a user accrues any contract data or more keys than a zero-balance account allows,\n  they have to pay for the storage cost of everything combined. In that sense, a user would pay slightly more than what\n  they pay today when their account is no longer a zero-balance account.\n\n## Unresolved Issues (Optional)\n\n## Future possibilities\n\n- We may change the number of keys allowed for zero-balance accounts in the future.\n- A more radical thought: we could separate out zero-balance accounts into their own trie and manage them separately. 
This\n  may allow more customization on how we want zero-balance accounts to be treated.\n\n## Decision Context\n\n### 1.0.0 - Initial Version\n\nThe initial version of NEP-448 was approved by Protocol Working Group members on February 9, 2023 ([meeting recording](https://youtu.be/ktgWXjNTU_A)).\n\n#### Benefits\n\n- Users can now onboard without having to acquire NEAR tokens\n- Together with [meta transactions](https://github.com/near/NEPs/pull/366), this allows a user to start interacting with an app on NEAR directly from their device without any additional steps\n- Solves the problem of meta transactions for implicit accounts\n\n#### Concerns\n\n| #   | Concern                                                                                      | Resolution                                                                                                                                                       | Status   |\n| --- | -------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- |\n| 1   | The number of full access keys allowed is too small                                          | Could be done in a future iteration.                                                                                                                             | Resolved |\n| 2   | No incentive for people to remove zero-balance accounts.                                      | Very few people actually delete their account anyway.                                                                                                            
| Resolved |\n| 3   | UX of requiring a balance after a user graduates from a zero-balance account to a regular account | The experience of graduating from a zero-balance account should be handled on the product side                                                                     | Resolved |\n| 4   | Increase of account creation cost may break some existing contracts                          | A thorough investigation has been done and it turns out that we only need to change the contract that is deployed on `near` slightly                             | Resolved |\n| 5   | Account creation speed is slower due to increased cost                                       | Unlikely to be a concern, especially given that the number of shards is expected to grow in the future                                                           | Resolved |\n| 6   | Cost of transfers to implicit accounts increases                                              | Unlikely to break anything at the moment, and could be addressed in the future in a different NEP (see https://github.com/near/NEPs/issues/462 for more details) | Resolved |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0452.md",
    "content": "---\nNEP: 452\nTitle: Linkdrop Standard\nAuthor: Ben Kurrek <ben.kurrek@near.org>, Ken Miyachi <ken.miyachi@near.foundation>\nStatus: Final\nDiscussionsTo: https://gov.near.org/t/official-linkdrop-standard/32463/1\nType: Standards Track\nCategory: Contract\nVersion: 1.0.0\nCreated: 24-Jan-2023\nUpdated: 19-Apr-2023\n---\n\n## Summary\n\nA standard interface for linkdrops that support $NEAR, fungible tokens, non-fungible tokens, and is extensible to support new types in the future.\n\nLinkdrops are a simple way to send assets to someone by providing them with a link. This link can be embedded into a QR code, sent via email, text or any other means. Within the link, there is a private key that allows the holder to call a method that can create an account and send it assets. Alternatively, if the holder has an account, the assets can be sent there as well.\n\nBy definition, anyone with an access key can interact with the blockchain and since there is a private key embedded in the link, this removes the need for the end-user to have a wallet.\n\n## Motivation\n\nLinkdrops are an extremely powerful tool that enable seamless onboarding and instant crypto experiences with the click of a link. The original [near-linkdrop](https://github.com/near/near-linkdrop) contract provides a minimal interface allowing users to embed $NEAR within an access key and create a simple Web2 style link that can then be used as a means of onboarding. This simple $NEAR linkdrop is not enough as many artists, developers, event coordinators, and applications want to drop more digital assets such as NFTs, FTs, tickets etc.\n\nAs linkdrop implementations start to push the boundaries of what’s possible, new data structures, methods, and interfaces are being developed. There needs to be a standard data model and interface put into place to ensure assets can be claimed independent of the contract they came from. 
If not, integrating any application with linkdrops will require customized solutions, which would become cumbersome for the developer and deteriorate the user onboarding experience. The linkdrop standard addresses these issues by providing a simple and extensible standard data model and interface.\n\nThe initial discussion can be found [here](https://gov.near.org/t/official-linkdrop-standard/32463/1).\n\n## Specification\n\n### Example Scenarios\n\n_Pre-requisite Steps_: Linkdrop creation:\nThe linkdrop creator that has an account with some $NEAR:\n\n- creates a keypair locally (`pubKey1`, `privKey1`). (The keypair is not written to chain at this time)\n- calls a method on a contract that implements the linkdrop standard in order to create the drop. The `pubKey1` and desired $NEAR amount are both passed in as arguments.\n- The contract maps the `pubKey1` to the desired balance for the linkdrop (`KeyInfo` record).\n- The contract then adds the `pubKey1` as a function call access key with the ability to call `claim` and `create_account_and_claim`. This means that anyone with the `privKey1` (see above), can sign a transaction on behalf of the contract (signer id set to contract id) with a function call to call one of the mentioned functions to claim the assets.\n\n#### Claiming a linkdrop without a NEAR Account\n\nA user with _no_ account can claim the assets associated with an existing public key, already registered in the linkdrop contract:\n\n- generates a new keypair (`pubKey2`, `privKey2`) locally. (This new keypair is not written to chain)\n- chooses a new account ID such as benji.near.\n- calls `create_account_and_claim`. 
The transaction is signed on behalf of the linkdrop contract (`signer_id` is set to the contract address) using `privKey1`.\n  - the args of this function call will contain both `pubKey2` (which will be used to create a full access key for the new account) and the account ID itself.\n  - the linkdrop contract will delete the access key associated with `pubKey1` so that it cannot be used again.\n  - the linkdrop contract will create the new account and transfer the funds to it alongside any other assets.\n- the user will be able to sign transactions on behalf of the new account using `privKey2`.\n\n#### Claiming a linkdrop with a NEAR Account\n\nA user with an _existing_ account can claim the assets with an existing public key, already registered in the linkdrop contract:\n\n- calls `claim`. The transaction is signed on behalf of the linkdrop contract (`signer_id` is set to the contract address) using `privKey1`.\n  - the args of this function call will simply contain the user's existing account ID.\n  - the linkdrop contract will delete the access key associated with `pubKey1` so that it cannot be used again.\n  - the linkdrop contract will transfer the funds to that account alongside any other assets.\n\n```ts\n/// Information about a specific public key.\ntype KeyInfo = {\n   /// How much Gas should be attached when the key is used to call `claim` or `create_account_and_claim`.\n   /// It is up to the smart contract developer to calculate the required gas (which can be done either automatically on the contract or on the client-side).\n   required_gas: string,\n\n   /// yoctoNEAR$ amount that will be sent to the account that claims the linkdrop (either new or existing)\n   /// when the key is successfully used.\n   yoctonear: string,\n\n   /// If using the NFT standard extension, a set of NFTData can be linked to the public key\n   /// indicating that all those assets will be sent to the account that claims the linkdrop (either new or\n   /// existing) when the 
key is successfully used.\n   nft_list: NFTData[] | null,\n\n   /// If using the FT standard extension, a set of FTData can be linked to the public key\n   /// indicating that all those assets will be sent to the account that claims the linkdrop (either new or\n   /// existing) when the key is successfully used.\n   ft_list: FTData[] | null\n\n   /// ... other types can be introduced and the standard is easily extendable.\n}\n\n\n/// Data outlining a specific Non-Fungible Token that should be sent to the claiming account\n/// (either new or existing) when a key is successfully used.\ntype NFTData = {\n   /// the id of the token to transfer\n   token_id: string,\n\n   /// The valid NEAR account indicating the Non-Fungible Token contract.\n   contract_id: string\n}\n\n\n/// Data outlining Fungible Tokens that should be sent to the claiming account\n/// (either new or existing) when a key is successfully used.\ntype FTData = {\n   /// The number of tokens to transfer, wrapped in quotes and treated\n   /// like a string, although the number will be stored as an unsigned integer\n   /// with 128 bits.\n   amount: string,\n\n   /// The valid NEAR account indicating the Fungible Token contract.\n   contract_id: string\n}\n\n/****************/\n/* VIEW METHODS */\n/****************/\n\n/// Allows you to query for the amount of $NEAR tokens contained in a linkdrop corresponding to a given public key.\n///\n/// Requirements:\n/// * Panics if the key does not exist.\n///\n/// Arguments:\n/// * `key` the public counterpart of the key used to sign, expressed as a string with format \"<key-type>:<base58-key-bytes>\" (e.g. \"ed25519:6TupyNrcHGTt5XRLmHTc2KGaiSbjhQi1KHtCXTgbcr4Y\")\n///\n/// Returns a string representing the $yoctoNEAR amount associated with a given public key\nfunction get_key_balance(key: string) -> string;\n\n/// Allows you to query for the `KeyInfo` corresponding to a given public key. 
This method is preferred over `get_key_balance` as it provides more information about the key.\n///\n/// Requirements:\n/// * Panics if the key does not exist.\n///\n/// Arguments:\n/// * `key` the public counterpart of the key used to sign, expressed as a string with format \"<key-type>:<base58-key-bytes>\" (e.g. \"ed25519:6TupyNrcHGTt5XRLmHTc2KGaiSbjhQi1KHtCXTgbcr4Y\")\n///\n/// Returns `KeyInfo` associated with a given public key\nfunction get_key_information(key: string) -> KeyInfo;\n\n/******************/\n/* CHANGE METHODS */\n/******************/\n\n/// Transfer all assets linked to the signer’s public key to an `account_id`.\n/// If the transfer fails for whatever reason, it is up to the smart contract developer to\n/// choose what should happen. For example, the contract can choose to keep the assets\n/// or send them back to the original linkdrop creator.\n///\n/// Requirements:\n/// * The predecessor account *MUST* be the current contract ID.\n/// * The `account_id` MUST be an *initialized* NEAR account.\n/// * The assets being sent *MUST* be associated with the signer’s public key.\n/// * The assets *MUST* be sent to the `account_id` passed in.\n///\n/// Arguments:\n/// * `account_id` the account that should receive the linkdrop assets.\n///\n/// Returns `true` if the claim was successful meaning all assets were sent to the `account_id`.\nfunction claim(account_id: string) -> Promise<boolean>;\n\n/// Creates a new NEAR account and transfers all assets linked to the signer’s public key to\n/// the *newly created account*. If the transfer fails for whatever reason, it is up to the\n/// smart contract developer to choose what should happen. 
For example, the contract can\n/// choose to keep the assets or return them to the original linkdrop creator.\n///\n/// Requirements:\n/// * The predecessor account *MUST* be the current contract ID.\n/// * The assets being sent *MUST* be associated with the signer’s public key.\n/// * The assets *MUST* be sent to the `new_account_id` passed in.\n/// * The newly created account *MUST* have a new access key added to its account (either\n///   full or limited access) in the same receipt that the account was created in.\n/// * The public key must be provided in binary format with base58 string serialization and a human-readable curve prefix.\n///   The key types currently supported are secp256k1 and ed25519. Ed25519 public keys accepted are 32 bytes\n///   and secp256k1 keys are in the uncompressed 64-byte format.\n///\n/// Arguments:\n/// * `new_account_id`: the valid NEAR account which is being created and should\n///   receive the linkdrop assets\n/// * `new_public_key`: the valid public key that should be used for the access key added to the newly created account (serialized with borsh).\n///\n/// Returns `true` if the claim was successful, meaning the `new_account_id` was created and all assets were sent to it.\nfunction create_account_and_claim(new_account_id: string, new_public_key: string) -> Promise<boolean>;\n```\n\n## Reference Implementation\n\nBelow are some references for linkdrop contracts.\n\n- [Link Drop Contract](https://github.com/near/near-linkdrop)\n\n- [Keypom Contract](https://github.com/keypom/keypom)\n\n## Security Implications\n\n1. Linkdrop Creation\n   Linkdrop creation involves creating keypairs that, when used, have access to assets such as $NEAR, FTs, NFTs, etc. These keys should be limited-access keys restricted to specific functionality. For example, they should only have permission to call `claim` and `create_account_and_claim`. 
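   As an illustrative sketch (the `KeyPermission` type below is hypothetical and not part of this standard), such a restriction can be modeled as a whitelist containing only the two claim entry points:

   ```rust
   /// Hypothetical model of a function-call access key's method restrictions;
   /// not part of the linkdrop standard, for illustration only.
   struct KeyPermission {
       allowed_methods: Vec<String>,
   }

   impl KeyPermission {
       /// A linkdrop key should only be able to call the two claim entry points.
       fn linkdrop_key() -> Self {
           KeyPermission {
               allowed_methods: vec![
                   "claim".to_string(),
                   "create_account_and_claim".to_string(),
               ],
           }
       }

       /// Returns true only for the whitelisted claim methods.
       fn allows(&self, method: &str) -> bool {
           self.allowed_methods.iter().any(|m| m == method)
       }
   }
   ```

   Any call outside the whitelist (for example an owner-only method) would be rejected before it reaches the contract logic.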
Since the keys allow the holder to sign transactions on behalf of the linkdrop contract, without the proper security measures, they could be used in a malicious manner (for example executing private methods or owner-only functions).\n\n   Another important security implication of linkdrop creation is to ensure that only one key is mapped to a set of assets at any given time. Externally, assets such as FTs, and NFTs belong to the overall linkdrop contract account rather than a specific access key. It is important to ensure that specific keys can only claim assets that they are mapped to.\n\n2. Linkdrop Key Management\n   Key management is a critical safety component of linkdrops. The linkdrop contract should implement a key management strategy for keys such that a reentrancy attack does not occur. For example, one strategy may be to \"lock\" or mark a key as \"in transfer\" such that it cannot be used again until the transfer is complete.\n\n3. Asset Refunds & Failed Claims\n   Given that linkdrops could contain multiple different assets such as NFTs, or fungible tokens, sending assets might happen across multiple blocks. If the claim was unsuccessful (such as passing in an invalid account ID), it is important to ensure that all state is properly managed and assets are optionally refunded depending on the linkdrop contract's implementation.\n\n4. Fungible Tokens & Future Data\n   Fungible token contracts require that anyone receiving tokens must be registered. For this reason, it is important to ensure that storage for accounts claiming linkdrops is paid for. This concept can be extended to any future data types that may be added. You must ensure that all the pre-requisite conditions have been met for the asset that is being transferred.\n\n5. 
Tokens Properly Sent to Linkdrop Contract\n   Since the linkdrop contract facilitates the transfer of assets including NFTs and FTs, it is important to ensure that those tokens have been properly sent to the linkdrop contract prior to claiming. In addition, since all the tokens are in a shared pool, you must ensure that a claim cannot withdraw assets that do not belong to the key being used to claim.\n\nIt is also important to note that not every linkdrop is valid. Drops can expire, funds can be lazily sent to the contract (as seen in the case of fungible and non-fungible tokens), and the supply can be limited.\n\n## Alternatives\n\n#### Why is this design the best in the space of possible designs?\n\nThis design allows for flexibility and extensibility of the standard while providing a set of criteria that cover the majority of current linkdrop use cases. The design was heavily inspired by current, functional NEPs such as the Fungible Token and Non-Fungible Token standards.\n\n#### What other designs have been considered and what is the rationale for not choosing them?\n\nA generic data struct that all drop types needed to inherit from. This struct contained a name and some metadata in the form of stringified JSON. This made it easily extensible for any new types down the road. The rationale for not choosing this design was both simplicity and flexibility. Having one data struct requires keys to be of one type only. In reality, there can be many at once. In addition, having a generic, open-ended metadata field could lead to many interpretations and different designs. We chose to use a `KeyInfo` struct that is easily extensible and can cover all use cases by having optional vectors of different data types. The proposed standard is simple, supports drops with multiple assets, is backwards compatible with all previous linkdrops, and can be extended very easily.\n\nA standard linkdrop creation interface. 
A standardized linkdrop creation interface would provide data models and functions to ensure linkdrops were created and stored in a specific format. The rationale for not choosing this design was that it was too restrictive. Standardizing linkdrop creation adds complexity and reduces flexibility by restricting the process by which linkdrop creators create linkdrops, and potentially limiting linkdrop functionality. The functionality of linkdrop creation, such as refunding of assets, access keys, and batch creation, should be chosen by the linkdrop creator and live within the linkdrop creator platform. Further, linkdrop creation is often not displayed to end users and there is not an inherent value proposition for a standardized linkdrop creation interface from a client perspective.\n\n#### What is the impact of not doing this?\n\nThe impact of not doing this is creating a fragmented ecosystem of linkdrops, increasing the friction for user onboarding. Linkdrop claim pages (e.g. wallet providers) would have to implement custom integrations for every linkdrop provider platform. Inherently this would lead to a bad user experience when new users are onboarding and interacting with linkdrops in general.\n\n## Future possibilities\n\n- Linkdrop creation interface\n\n- Bulk linkdrop management (create, update, claim)\n\n- Function call data types (allowing for funder-defined functions to be executed when a linkdrop is claimed)\n\n- Optional configurations added to `KeyInfo`, which can include multi-use keys, time-based claiming, etc.\n\n- Standard process for how links connect to claim pages (i.e. a standardized URL such as an app’s `baseUrl/contractId=[LINKDROP_CONTRACT]&secretKey=[SECRET_KEY]`)\n\n- Standard for deleting keys and refunding assets.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0455.md",
    "content": "---\nNEP: 455\nTitle: Parameter Compute Costs\nAuthor: Andrei Kashin <andrei.kashin@near.org>, Jakob Meier <jakob@near.org>\nStatus: Final\nDiscussionsTo: https://github.com/nearprotocol/neps/pull/455\nType: Protocol Track\nCategory: Runtime\nCreated: 26-Jan-2023\n---\n\n## Summary\n\nIntroduce compute costs decoupled from gas costs for individual parameters to safely limit the compute time it takes to process the chunk while avoiding adding breaking changes for contracts.\n\n## Motivation\n\nFor NEAR blockchain stability, we need to ensure that blocks are produced regularly and in a timely manner.\n\nThe chunk gas limit is used to ensure that the time it takes to validate a chunk is strictly bounded by limiting the total gas cost of operations included in the chunk.\nThis process relies on accurate estimates of gas costs for individual operations.\n\nUnderestimating these costs leads to *undercharging* which can increase the chunk validation time and slow down the chunk production.\n\nAs a concrete example, in the past we undercharged contract deployment.\nThe responsible team has implemented a number of optimizations but a gas increase was still necessary.\n[Meta-pool](https://github.com/Narwallets/meta-pool/issues/21) and [Sputnik-DAO](https://github.com/near-daos/sputnik-dao-contract/issues/135) were affected by this change, among others.\nFinding all affected parties and reaching out to them before implementing the change took a lot of effort, prolonging the period where the network was exposed to the attack.\n\nAnother motivating example is the upcoming incremental deployment of Flat Storage, where during one of the intermediate stages we expect the storage operations to be undercharged.\nSee the explanation in the next section for more details.\n\n## Rationale\n\nSeparating compute costs from gas costs will allow us to safely limit the compute usage for processing the chunk while still keeping the gas prices the same and thus not breaking 
existing contracts.\n\nAn important challenge with undercharging is that it is not possible to disclose undercharging issues widely, because they could be used to increase the chunk production time, thereby impacting the stability of the network.\nAdjusting the compute cost for an undercharged parameter eliminates the security concern and allows us to publicly discuss the ways to solve the undercharging (optimizing the implementation or the smart contract, or increasing the gas cost).\n\nThis design is easy to implement, simple to reason about, and provides a clear way to address existing undercharging issues.\nIf we don't address the undercharging problems, we increase the risks that they will be exploited.\n\nSpecifically for Flat Storage deployment, we [plan](https://github.com/near/nearcore/issues/8006) to stop charging TTN (touching trie node) gas costs; however, the intermediate implementation (read-only Flat Storage) will still incur these costs during writes, introducing undercharging.\nSetting temporarily high compute costs for writes will ensure that this undercharging does not lead to long chunk processing times.\n\n## Alternatives\n\n### Increase the gas costs for undercharged operations\n\nWe could increase the gas costs for the operations that are undercharged to match the computational time it takes to process them according to the rule 1ms = 1TGas.\n\nPros:\n\n- Does not require any new code or design work (but still requires a protocol version bump)\n- Security implications are well-understood\n\nCons:\n\n- Can break contracts that rely on current gas costs, in particular steeply increasing operating costs for the most active users of the blockchain (aurora and sweat)\n- Doing this safely and responsibly requires prior consent from the affected parties, which is hard to do without disclosing security-sensitive information about undercharging in public\n\nIn the case of flat storage specifically, using this approach will result in a large increase in storage write costs (breaking many 
contracts) to enable safe deployment of read-only flat storage and later a correction of storage write costs when flat storage for writes is rolled out.\nWith compute costs, we will be able to roll out the read-only flat storage with minimal impact on deployed contracts.\n\n### Adjust the gas chunk limit\n\nWe could continuously measure the chunk production time in nearcore clients and compare it to the gas burnt.\nIf the chunk producer observes undercharging, it decreases the limit.\nIf there is overcharging, the limit can be increased, up to at most 1000 Tgas.\nTo make such an adjustment more predictable under spiky load, we also [limit](https://nomicon.io/Economics/Economic#transaction-fees) the magnitude of change of the gas limit to 0.1% per block.\n\nPros:\n\n- Prevents moderate undercharging from stalling the network\n- No protocol change necessary (as this feature is already [a part of the protocol](https://nomicon.io/Economics/Economic#transaction-fees)), so we could easily experiment and revert if it does not work well\n\nCons:\n\n- Very broad granularity --- undercharging in one parameter affects all users, even those that never use the undercharged parts\n- Dependence on validator hardware --- someone running overspecced hardware will continuously want to increase the limit, while others might run underspecced hardware and continuously want to decrease the limit\n- Malicious undercharging attacks are unlikely to be prevented by this --- a single 10x undercharged receipt still needs to be processed using the old limit.\nAdjusting 0.1% per block means the limit can only change by a maximum of 1.1x over 100 chunks and up to 2.7x over 1000 chunks\n- Conflicts with transaction and receipt limit --- A transaction or receipt can (today) use up to 300Tgas.\nThe effective limit per chunk is `gas_limit` + 300Tgas, since receipts are added to a chunk until one exceeds the limit and the last receipt is not removed.\nThus a gas limit of 0gas only reduces the effective limit 
from 1300Tgas to 300Tgas, which means a single 10x undercharged receipt can still result in a chunk with compute usage of 3 seconds (equivalent to 3000TGas)\n\n### Allow skipping chunks in the chain\n\nSlow chunk production in one shard can introduce additional user-visible latency in all shards, as the nodes expect regular and timely chunk production during normal operation.\nIf processing the chunk takes much longer than 1.3s, it can cause the corresponding block and possibly more consecutive blocks to be skipped.\n\nWe could extend the protocol to produce empty chunks for some of the shards within the block (effectively skipping them) when processing the chunk takes longer than expected.\nThis will still ensure regular block production, at the cost of lower network throughput in that shard.\nThe chunk should still be included in a later block to avoid stalling the affected shard.\n\nPros:\n\n- Fast and automatic adaptation to the blockchain workload\n\nCons:\n\n- For the purpose of slashing, it is hard to distinguish situations when an honest block producer skips a chunk due to slowness from situations when the block producer is offline or is maliciously stalling block production. We need some mechanism (e.g. on-chain voting) for nodes to agree that the chunk was skipped legitimately due to slowness, as otherwise we introduce new attack vectors to stall the network\n\n## Specification\n\n- **Chunk Compute Usage** -- total compute time spent on processing the chunk\n\n- **Chunk Compute Limit** -- upper bound for the compute time spent on processing the chunk\n\n- **Parameter Compute Cost** -- the numeric value in seconds corresponding to the compute time that it takes to include an operation in the chunk\n\nToday, gas has two somewhat orthogonal roles:\n\n1. Gas is money. It is used to avoid spam by charging users\n2. Gas is CPU time. 
It defines how many transactions fit in a chunk so that validators can apply the chunk within a second\n\nThe idea is to decouple these two by introducing parameter compute costs.\nEach gas parameter still has a gas cost that determines what users have to pay.\nBut when filling a chunk with transactions, the parameter compute cost is used to estimate CPU time.\n\nIdeally, all compute costs should match the corresponding gas costs.\nBut when we discover undercharging issues, we can set a higher compute cost (this would require a protocol upgrade).\nThe stability concern is then resolved when the compute cost becomes active.\n\nThe ratio between compute cost and gas cost can be thought of as an undercharging factor.\nIf a gas cost is 2 times too low to guarantee stability, the compute cost will be twice the gas cost.\nA chunk will be full 2 times faster when gas for this parameter is burned.\nThis deterministically throttles the throughput to match what validators can actually handle.\n\nCompute costs influence the gas price adjustment logic described in https://nomicon.io/Economics/Economic#transaction-fees.\nSpecifically, we're now using compute usage instead of gas usage in the formula to make sure that the gas price increases if chunk processing time is close to the limit.\n\nCompute costs **do not** count towards the transaction/receipt gas limit of 300TGas, as that might break existing contracts by pushing their method calls over this limit.\n\nCompute costs are static for each protocol version.\n\n### Using Compute Costs\n\nCompute costs that differ from gas costs are only a temporary solution.\nWhenever we introduce a compute cost, we as the community can discuss this publicly and find a solution to the specific problem together.\n\nFor any active compute cost, a tracking GitHub issue in [`nearcore`](https://github.com/near/nearcore) should be created, tracking work towards resolving the undercharging. 
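The accounting described above can be sketched as follows (struct and field names are illustrative, not the nearcore implementation): gas tracks what users pay, while compute usage is what fills the chunk and is checked against the chunk compute limit.

```rust
/// Illustrative counter tracking both gas (what users pay) and
/// compute (what fills the chunk); not the nearcore implementation.
struct Counter {
    gas_used: u64,
    compute_used: u64,
    compute_limit: u64,
}

impl Counter {
    /// Charge one parameter: users pay `gas_cost`, while the chunk fills
    /// according to `compute_cost` (greater than `gas_cost` when the
    /// parameter is undercharged). Returns false once the chunk compute
    /// limit would be exceeded, i.e. the chunk is full.
    fn charge(&mut self, gas_cost: u64, compute_cost: u64) -> bool {
        if self.compute_used + compute_cost > self.compute_limit {
            return false;
        }
        self.gas_used += gas_cost;
        self.compute_used += compute_cost;
        true
    }
}
```

With a compute cost twice the gas cost, the counter hits the compute limit twice as fast as it would on gas alone, which is exactly the "chunk fills 2 times faster" behavior described above.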
The reference to this issue should be added to this NEP.\n\nIn the best case, we find technical optimizations that allow us to decrease the compute cost to match the existing gas cost.\n\nIn other cases, the only solution is to increase the gas cost.\nBut the dApp developers who are affected by this change should have a chance to voice their opinion, suggest alternatives, and implement necessary changes before the gas cost is increased.\n\n## Reference Implementation\n\nThe compute cost is a numeric value represented as `u64` in time units.\nValue 1 corresponds to `10^-15` seconds or 1fs (femtosecond) to match the gas costs scale.\n\nBy default, the parameter compute cost matches the corresponding gas cost.\n\nCompute costs should be applicable to all gas parameters, specifically including:\n\n- [`ExtCosts`](https://github.com/near/nearcore/blob/6e08a41084c632010b1d4c42132ad58ecf1398a2/core/primitives-core/src/config.rs#L377)\n- [`ActionCosts`](https://github.com/near/nearcore/blob/6e08a41084c632010b1d4c42132ad58ecf1398a2/core/primitives-core/src/config.rs#L456)\n\nChanges necessary to support `ExtCosts`:\n\n1. Track compute usage in [`GasCounter`](https://github.com/near/nearcore/blob/51670e593a3741342a1abc40bb65e29ba0e1b026/runtime/near-vm-logic/src/gas_counter.rs#L47) struct\n2. Track compute usage in [`VMOutcome`](https://github.com/near/nearcore/blob/056c62183e31e64cd6cacfc923a357775bc2b5c9/runtime/near-vm-logic/src/logic.rs#L2868) struct (alongside `burnt_gas` and `used_gas`)\n3. Store compute usage in [`ActionResult`](https://github.com/near/nearcore/blob/6d2f3fcdd8512e0071847b9d2ca10fb0268f469e/runtime/runtime/src/lib.rs#L129) and aggregate it across multiple actions by modifying [`ActionResult::merge`](https://github.com/near/nearcore/blob/6d2f3fcdd8512e0071847b9d2ca10fb0268f469e/runtime/runtime/src/lib.rs#L141)\n4. 
Store compute costs in [`ExecutionOutcome`](https://github.com/near/nearcore/blob/578983c8df9cc36508da2fb4a205c852e92b211a/runtime/runtime/src/lib.rs#L266) and [aggregate them across all transactions](https://github.com/near/nearcore/blob/578983c8df9cc36508da2fb4a205c852e92b211a/runtime/runtime/src/lib.rs#L1279)\n5. Enforce the chunk compute limit when the chunk is [applied](https://github.com/near/nearcore/blob/6d2f3fcdd8512e0071847b9d2ca10fb0268f469e/runtime/runtime/src/lib.rs#L1325)\n\nAdditional changes necessary to support `ActionCosts`:\n\n1. Return compute costs from [`total_send_fees`](https://github.com/near/nearcore/blob/578983c8df9cc36508da2fb4a205c852e92b211a/runtime/runtime/src/config.rs#L71)\n2. Store aggregate compute cost in [`TransactionCost`](https://github.com/near/nearcore/blob/578983c8df9cc36508da2fb4a205c852e92b211a/runtime/runtime/src/config.rs#L22) struct\n3. Propagate compute costs to [`VerificationResult`](https://github.com/near/nearcore/blob/578983c8df9cc36508da2fb4a205c852e92b211a/runtime/runtime/src/verifier.rs#L330)\n\nAdditionally, the gas price computation will need to be adjusted in [`compute_new_gas_price`](https://github.com/near/nearcore/blob/578983c8df9cc36508da2fb4a205c852e92b211a/core/primitives/src/block.rs#L328) to use compute cost instead of gas cost.\n\n## Security Implications\n\nChanges in compute costs will be publicly known and might reveal an undercharging that can be used as an attack target.\nIn practice, it is not trivial to exploit the undercharging unless you know the exact shape of the workload that realizes it.\nAlso, after the compute cost is deployed, the undercharging should no longer be a threat to network stability.\n\n## Drawbacks\n\n- Changing compute costs requires a protocol version bump (and a new binary release), limiting their use to undercharging problems that we're aware of\n\n- Updating compute costs is a manual process and requires deliberately looking for potential 
underchargings\n\n- The compute cost would not have a full effect on the last receipt in the chunk, decreasing its effectiveness in dealing with undercharging.\nThis is because 1) a transaction or receipt today can use up to 300TGas and 2) receipts are added to a chunk until one exceeds the limit and the last receipt is not removed.\nTherefore, a single receipt with 300TGas filled with undercharged operations with a factor of K can lead to overshooting the chunk compute limit by (K - 1) * 300TGas\n\n- Underchargings can still be exploited to lower the throughput of the network at an unfair price and increase the waiting times for other users.\nThis is inevitable for any proposal that doesn't change the gas costs and must be resolved by improving performance or increasing the gas costs\n\n- Even without malicious intent, the effective peak throughput of the network will decrease when the chunks include undercharged operations (as the stopping condition based on compute costs for filling the chunk becomes stricter).\nMost of the time, this is not a problem, as the network is operating below capacity.\nThe effects will also be softened by the fact that undercharged operations comprise only a fraction of the workload.\nFor example, the planned increase for the TTN compute cost alongside the Flat Storage MVP is less critical because you cannot fill a receipt with only TTN costs; you will always have other storage costs and ~5TGas of overhead to even start a function call.\nSo even with a 10x difference between gas and compute costs, the DoS only becomes 5x cheaper instead of 10x\n\n## Unresolved Issues\n\n## Future possibilities\n\nWe can also think about compute costs smaller than gas costs.\nFor example, if we charge gas instead of token balance for extra storage bytes as proposed in [NEP-448](https://github.com/near/NEPs/pull/448), it would make sense to set the compute cost to 0 for the part that covers on-chain storage if the throttling due to increased gas cost becomes 
problematic.\nOtherwise, the throughput would be throttled unnecessarily.\n\nA further option would be to change compute costs dynamically without a protocol upgrade when block production has become too slow.\nThis would be a catch-all, self-healing solution that requires zero intervention from anyone.\nThe network would simply throttle throughput when block time remains too high for long enough.\nPursuing this approach would require additional design work:\n\n- On-chain voting to agree on new cost values, given that inputs to the adjustment process are not deterministic (measurements of the wall-clock time it takes to process a receipt on a particular validator)\n- Ensuring that dynamic adjustment is done in a safe way that does not lead to erratic behavior of costs (and as a result unpredictable network throughput).\nHaving some experience manually operating this mechanism would be valuable before introducing automation\n\nIt would also require addressing the challenges described in https://github.com/near/nearcore/issues/8032#issuecomment-1362564330.\n\nThe idea of introducing a chunk limit for compute resource usage naturally extends to other resource types, for example RAM usage, disk IOPS, and [Background CPU Usage](https://github.com/near/nearcore/issues/7625).\nThis would allow us to align the pricing model with cloud offerings familiar to many users, while still using gas as a common denominator to simplify UX.\n\n## Changelog\n\n### 1.0.0 - Initial Version\n\nThis NEP was approved by Protocol Working Group members on March 16, 2023 ([meeting recording](https://www.youtube.com/watch?v=4VxRoKwLXIs)):\n\n- [Bowen's vote](https://github.com/near/NEPs/pull/455#issuecomment-1467023424)\n- [Marcelo's vote](https://github.com/near/NEPs/pull/455#pullrequestreview-1340887413)\n- [Marcin's vote](https://github.com/near/NEPs/pull/455#issuecomment-1471882639)\n\n### 1.0.1 - Storage Related Compute Costs\n\nAdd five compute cost values for protocol version 61 and above.\n\n- 
wasm_touching_trie_node\n- wasm_storage_write_base\n- wasm_storage_remove_base\n- wasm_storage_read_base\n- wasm_storage_has_key_base\n\nFor the exact values, please refer to the table at the bottom.\n\nThe intention behind these increased compute costs is to address the issue of\nstorage accesses taking longer than their gas costs account for, particularly in\ncases where RocksDB, the underlying storage system, is too slow. These values\nhave been chosen to ensure that validators with recommended hardware can meet\nthe required timing constraints.\n([Analysis Report](https://github.com/near/nearcore/issues/8006))\n\nThe protocol team at Pagoda is actively working on optimizing the nearcore\nclient storage implementation. This should eventually allow the compute\ncost parameters to be lowered again.\n\nProgress on this work is tracked here: https://github.com/near/nearcore/issues/8938.\n\n#### Benefits\n\n- Among the alternatives, this is the easiest to implement.\n- It allows us to publicly discuss undercharging issues before they are fixed.\n\n#### Concerns\n\nNo concerns that need to be addressed. The drawbacks listed in this NEP are minor compared to the benefits that it will bring. 
And implementing this NEP is strictly better than what we have today.\n\n## Copyright\n\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n\n\n## References\n\n- https://gov.near.org/t/proposal-gas-weights-to-fight-instability-to-due-to-undercharging/30919\n- https://github.com/near/nearcore/issues/8032\n\n## Live Compute Costs Tracking\n\nParameter Name | Compute / Gas factor | First version | Last version | Tracking issue |\n-------------- | -------------------- | ------------- | ------------ | -------------- |\nwasm_touching_trie_node       |  6.83 |            61 |        *TBD* | [nearcore#8938](https://github.com/near/nearcore/issues/8938)\nwasm_storage_write_base       |  3.12 |            61 |        *TBD* | [nearcore#8938](https://github.com/near/nearcore/issues/8938)\nwasm_storage_remove_base      |  3.74 |            61 |        *TBD* | [nearcore#8938](https://github.com/near/nearcore/issues/8938)\nwasm_storage_read_base        |  3.55 |            61 |        *TBD* | [nearcore#8938](https://github.com/near/nearcore/issues/8938)\nwasm_storage_has_key_base     |  3.70 |            61 |        *TBD* | [nearcore#8938](https://github.com/near/nearcore/issues/8938)\n"
  },
  {
    "path": "neps/nep-0488.md",
    "content": "---\nNEP: 488\nTitle: Host Functions for BLS12-381 Curve Operations\nAuthors: Olga Kuniavskaia <olga.kunyavskaya@aurora.dev>\nStatus: Final\nDiscussionsTo: https://github.com/nearprotocol/neps/pull/488\nType: Runtime Spec\nVersion: 0.0.1\nCreated: 2023-07-17\nLastUpdated: 2023-11-21\n---\n\n## Summary\n\nThis NEP introduces host functions to perform operations on the BLS12-381 elliptic curve. It is a minimal set of functions needed to efficiently verify BLS signatures and zkSNARKs.\n\n## Motivation\n\nThe primary aim of this NEP is to enable fast and efficient verification of BLS signatures and zkSNARKs based on the BLS12-381[^1],[^11],[^52] elliptic curve on NEAR.\n\nTo efficiently verify zkSNARKs[^19], host functions for operations on the BN254\nelliptic curve (also known as Alt-BN128)[^9], [^12] have already been implemented on NEAR[^10].\nFor instance, the Zeropool[^20] project utilizes these host functions for verifying zkSNARKs on NEAR.\nHowever, recent research shows that the BN254 security level is lower than 100-bit[^13] and it is not recommended for use.\nBLS12-381, on the other hand, offers over 120 bits of security[^8] and is widely used[^2],[^3],[^4],[^5],[^6],[^7]  as a robust alternative.\nSupporting operations for BLS12-381 elliptic curve will significantly enhance the security of projects similar to Zeropool.\n\nAnother crucial objective is the verification of BLS signatures.\nInitially, host functions for BN254 on NEAR were designed for zkSNARK verification and\nare insufficient for BLS signature verification.\nHowever, even if these host functions were sufficient for BLS signature verification on the BN254 elliptic curve, this would not be enough for compatibility with other projects.\nIn particular, projects such as ZCash[^2], Ethereum[^3], Tezos[^5], and Filecoin[^6] incorporate BLS12-381 specifically within their protocols.\nIf we aim for compatibility with these projects, we must also utilize this elliptic curve.\nFor 
instance, to create a trustless bridge[^17] between Ethereum and NEAR,\nwe must efficiently verify BLS signatures based on BLS12-381, as these are the signatures employed within Ethereum's protocol.\n\nIn this NEP, we propose to add the following host functions:\n\n- ***bls12381_p1_sum —*** computes the sum of signed points from $E(F_p)$ elliptic curve. This function is useful for aggregating public keys or signatures in the BLS signature scheme. It can be employed for simple addition in $E(F_p)$. It is kept separate from the `multiexp` function due to gas cost considerations.\n- ***bls12381_p2_sum —*** computes the sum of signed points from $E'(F_{p^2})$ elliptic curve. This function is useful for aggregating signatures or public keys in the BLS signature scheme.\n- ***bls12381_g1_multiexp —*** calculates $\\sum p_i s_i$ for points $p_i \\in G_1 \\subset E(F_p)$ and scalars $s_i$. This operation can be used to multiply a group element by a scalar.\n- ***bls12381_g2_multiexp —*** calculates $\\sum p_i s_i$ for points $p_i \\in G_2 \\subset E'(F_{p^2})$ and scalars $s_i$.\n- ***bls12381_map_fp_to_g1 —*** maps base field elements into $G_1$ points. It does not perform the mapping of byte strings into field elements.\n- ***bls12381_map_fp2_to_g2 —*** maps extension field elements into $G_2$ points.  This function does not perform the mapping of byte strings into extension field elements, which would be needed to efficiently map a message into a group element. We are not implementing the `hash_to_field`[^60] function because the latter can be executed within a contract and various hashing algorithms can be used within this function.\n- ***bls12381_p1_decompress —*** decompresses points from $E(F_p)$ provided in a compressed form. Certain protocols offer points on the curve in a compressed form (e.g., the light client updates in Ethereum 2.0), and decompression is a time-consuming operation. 
All the other functions in this NEP only accept decompressed points for simplicity and optimized gas consumption.\n- ***bls12381_p2_decompress —*** decompresses points from $E'(F_{p^2})$ provided in a compressed form.\n- ***bls12381_pairing_check —*** verifies that $\\prod e(p_i, q_i) = 1$, where $e$ is a pairing operation and $p_i \\in G_1 \\land q_i \\in G_2$. This function is used to verify BLS signatures or zkSNARKs.\n\nFunctions required for verifying BLS signatures[^59]:\n\n- bls12381_p1_sum\n- bls12381_p2_sum\n- bls12381_map_fp2_to_g2\n- bls12381_p1_decompress\n- bls12381_p2_decompress\n- bls12381_pairing_check\n\nFunctions required for verifying zkSNARKs:\n\n- bls12381_p1_sum\n- bls12381_g1_multiexp\n- bls12381_pairing_check\n\nBoth zkSNARKs and BLS signatures can be implemented alternatively by swapping $G_1$ and $G_2$.\nTherefore, all functions have been implemented for both $G_1$ and $G_2$.\n\nAn analogous proposal, EIP-2537[^15], exists in Ethereum.\nThe functions here have been designed with compatibility\nwith that Ethereum proposal in mind. 
This design approach aims\nto ensure future ease in supporting corresponding precompiles for Aurora[^24].\n\n## Specification\n\n### BLS12-381 Curve Specification\n\n#### Elliptic Curve\n\n**The field $F_p$** for some *prime* $p$ is a set of integer\nelements $\\textbraceleft 0, 1, \\ldots, p - 1 \\textbraceright$ with two\noperations: multiplication $\\cdot$ and addition $+$.\nThese operations involve standard integer multiplication and addition,\nfollowed by computing the remainder modulo $p$.\n\n**The elliptic curve $E(F_p)$** is the set of all pairs $(x, y)$ with coordinates in $F_p$ satisfying:\n\n$$\ny^2 \\equiv x^3 + Ax + B \\mod p\n$$\n\ntogether with an imaginary point at infinity $\\mathcal{O}$, where: $A, B \\in F_p$, $p$ is a prime $> 3$, and $4A^3 + 27B^2 \\not \\equiv 0 \\mod p$\n\nIn the case of BLS12-381, the equation is $y^2 \\equiv x^3 + 4 \\mod p$[^15],[^51],[^14],[^11]\n\n**Parameters for our case:**\n\n- $A = 0$\n- $B = 4$\n- $p = \\mathtt{0x1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab}$\n\nLet $P \\in E(F_p)$ have coordinates $(x, y)$; define **$-P$** as the point on the curve with coordinates $(x, -y)$.\n\n**The addition operation for Elliptic Curve** is a function $+\\colon E(F_p) \\times E(F_p) \\rightarrow E(F_p)$ defined by the following rules: let $P$ and $Q \\in E(F_p)$\n\n- if $P \\ne Q$ and $P \\ne -Q$\n  - draw a line passing through $P$ and $Q$. This line intersects the curve at a third point $R$.\n  - reflect the point $R$ across the $x$-axis by changing the sign of the $y$-coordinate. The resulting point is $P+Q$.\n- if $P=Q$\n  - draw the tangent line to the elliptic curve at $P$. 
The line will intersect the curve at the second point $R$.\n  - reflect the point $R$ across the $x$-axis the same way to get point $2P$\n- $P = -Q$\n  - $P + Q = P + (-P) = \\mathcal{O}$ — the point on infinity\n- $Q = \\mathcal{O}$\n  - $P + Q = P + \\mathcal{O} = P$\n\nWith the addition operation, Elliptic Curve forms a **group**.\n\n#### Subgroups\n\n**Subgroup** H is a subset of the group G with the following properties:\n\n- $\\forall h_1, h_2 \\in H\\colon h_1 + h_2 \\in H$\n- $0 \\in H$\n- $\\forall h \\in H \\colon -h \\in H$\n\nNotation: $H \\subseteq G$\n\nGroup/subgroup **order** is the number of elements in group/subgroup.\n\nNotation: |G|  or #G, where G represents the group.\n\nFor some technical reason (related to the `pairing` operation which we will define later),\nwe will not operate over the entire $E(F_p)$,\nbut only over the two subgroups $G_1$ and $G_2$\nhaving the same **order** $r$.\n$G_1$ is a subset of $E(F_p)$,\nwhile $G_2$ is a subgroup of another group that we will define later.\nThe value of $r$ should be a prime number and $G_1 \\ne G_2$\n\nFor the BLS12-381 Elliptic Curve, **the order r** of $G_1$  and $G_2$[^15],[^51] is given by:\n\n- $r = \\mathtt{0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001}$\n\n#### Field extension\n\n**The field extension $F_{p^k}$ of $F_{p}$** is a set comprising all polynomials of degree < k and coefficients from $F_p$, along with defined operations of multiplication ($\\cdot$) and addition ($+$).\n\n$$\na_{k - 1}x^{k - 1} + \\ldots + a_1x + a_0 = A(x) \\in F_{p^k} \\vert a_i \\in F_p\n$$\n\nThe addition operation ($+$) is defined as regular polynomial addition:\n\n$$\nA(x) + B(x) = C(x)\n$$\n\n$$\n\\sum a_i x^i + \\sum b_i x^i = \\sum c_i x^i\n$$\n\n$$\nc_i = (a_i + b_i) \\mod p\n$$\n\n\nThe multiplication $\\cdot$ is defined as regular polynomial multiplication modulo $M(x)$,\nwhere $M(x)$ is an irreducible polynomial of degree $k$ with coefficients from $F_p$.\n\n$$\nC(x) = A(x) 
\\cdot B(x)\\mod M(x)\n$$\n\nNotation: $F_{p^k} = F_{p}[x] / M(x)$\n\nIn BLS12-381, we will require $F_{p^{12}}$.\nWe'll construct this field not directly as an extension from $F_p$,\nbut rather through a stepwise process. First, we'll build $F_{p^2}$\nas a quadratic extension of the field $F_p$.\nSecond, we'll establish $F_{p^6}$ as a cubic extension of $F_{p^2}$.\nFinally, we'll create $F_{p^{12}}$ as a quadratic extension of the\nfield $F_{p^6}$.\n\nTo define these fields, we'll need to set up three irreducible polynomials[^51]:\n\n- $F_{p^2} = F_p[u] / (u^2 + 1)$\n- $F_{p^6} = F_{p^2}[v] / (v^3 - u - 1)$\n- $F_{p^{12}} = F_{p^6}[w] / (w^2 - v)$\n\nThe second subgroup we'll utilize has order r and\nresides within the same elliptic curve but with elements from $F_{p^{12}}$.\nSpecifically, $G_2 \\subset E(F_{p^{12}})$, where $E: y^2 = x^3 + 4$\n\n#### Twist\n\nStoring elements from $E(F_{p^{12}})$ consumes a significant amount of memory.\nThe twist operation transforms the original curve $E(F_{p^{12}})$ into another curve within a different space,\ndenoted as $E'(F_{p^2})$. 
It is crucial that this new curve also includes a $G'_2$ subgroup with order $r$\nso that we can easily transform it back to the original $G_2$.\n\nWe want to have $\\psi \\colon E'(F_{p^2}) \\rightarrow E(F_{p^{12}})$, such that\n\n- $\\forall a, b \\in E'(F_{p^2}) \\colon \\psi(a + b) = \\psi(a) + \\psi(b)$\n- $\\forall a, b \\in E'(F_{p^2}) \\colon \\psi(a) = \\psi(b) \\Rightarrow a = b$\n\nThis is referred to as an injective group homomorphism.\n\nFor BLS12-381, $E'$ is defined as[^51]:\n\n$$\nE'\\colon y^2 = x^3 + 4(u + 1)\n$$\n\nIn most cases, we will be working with points from $G_2' \\subset E'(F_{p^2})$ and will simply use the notation $G_2$ for this subgroup.\n\n#### Generators\n\nIf there exists an element $g$ in the group $G$ such that $\\textbraceleft g, 2 \\cdot g, 3 \\cdot g, \\ldots, |G|g \\textbraceright = G$, the group $G$ is called a ***cyclic group*** and $g$ is termed a ***generator***.\n\n$G_1$ and $G_2$ are cyclic subgroups with the following generators[^15],[^51]:\n\n$G_1$:\n\n- $x = \\mathtt{0x17f1d3a73197d7942695638c4fa9ac0fc3688c4f9774b905a14e3a3f171bac586c55e83ff97a1aeffb3af00adb22c6bb}$\n- $y = \\mathtt{0x08b3f481e3aaa0f1a09e30ed741d8ae4fcf5e095d5d00af600db18cb2c04b3edd03cc744a2888ae40caa232946c5e7e1}$\n\nFor $(x', y') \\in G_2 \\subset E'(F_{p^2}):$\n$$x' = x_0 + x_1u$$\n\n$$y' = y_0 + y_1u$$\n\n$G_2$:\n\n- $x_0 = \\mathtt{0x024aa2b2f08f0a91260805272dc51051c6e47ad4fa403b02b4510b647ae3d1770bac0326a805bbefd48056c8c121bdb8}$\n- $x_1 = \\mathtt{0x13e02b6052719f607dacd3a088274f65596bd0d09920b61ab5da61bbdc7f5049334cf11213945d57e5ac7d055d042b7e}$\n- $y_0 = \\mathtt{0x0ce5d527727d6e118cc9cdc6da2e351aadfd9baa8cbdd3a76d429a695160d12c923ac9cc3baca289e193548608b82801}$\n- $y_1 = \\mathtt{0x0606c4a02ea734cc32acd2b02bc28b99cb3e287e85a763af267492ab572e99ab3f370d275cec1da1aaa9075ff05f79be}$\n\n\n**Cofactor** is the ratio of the size of the entire group $G$ to the size of the subgroup $H$:\n\n$$\n|G|/|H|\n$$\n\nCofactor $G_1\\colon h = 
|E(F_p)|/r$[^51]\n\n$$h = \\mathtt{0x396c8c005555e1568c00aaab0000aaab}$$\n\nCofactor $G_2\\colon h' = |E'(F_{p^2})|/r$[^51]\n\n$$h' = \\mathtt{0x5d543a95414e7f1091d50792876a202cd91de4547085abaa68a205b2e5a7ddfa628f1cb4d9e82ef21537e293a6691ae1616ec6e786f0c70cf1c38e31c7238e5}$$\n\n#### Pairing\n\nPairing is a necessary operation for the verification of BLS signatures and certain zkSNARKs. It performs the operation $e\\colon G_1 \\times G_2 \\rightarrow G_T$, where $G_T \\subset F_{p^{12}}$.\n\nThe main properties of the pairing operation are:\n\n- $e(P, Q + R) = e(P, Q) \\cdot e(P, R)$\n- $e(P + S, R) = e(P, R)\\cdot e(S, R)$\n\nTo compute this function, we utilize an algorithm called the Miller loop.\nFor an effective implementation of this algorithm,\nwe require a key parameter for the BLS curve, denoted as $x$:\n\n$$ x = -\\mathtt{0xd201000000010000}$$\n\nThis parameter can be found in the following sources:\n\n- [^15] section specification, pairing parameters, Miller loop scalar\n- [^51] section 4.2.1 Parameter t\n- [^14] section BLS12-381, parameter u\n- [^11] section Curve equation and parameters, parameter x\n\n#### Summary\n\nThe parameters for the BLS12-381 curve are as follows:\n\nBase field modulus: $p = \\mathtt{0x1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab}$\n\n$$\nE\\colon y^2 \\equiv x^3 + 4\n$$\n\n$$\nE'\\colon y^2 \\equiv x^3 + 4(u + 1)\n$$\n\nMain subgroup order: $r = \\mathtt{0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001}$\n\n$$\nF_{p^2} = F_p[u] / (u^2 + 1)\n$$\n\n$$\nF_{p^6} = F_{p^2}[v] / (v^3 - u - 1)\n$$\n\n$$\nF_{p^{12}} = F_{p^6}[w] / (w^2 - v)\n$$\n\nGenerator for $G_1$:\n\n- $x = \\mathtt{0x17f1d3a73197d7942695638c4fa9ac0fc3688c4f9774b905a14e3a3f171bac586c55e83ff97a1aeffb3af00adb22c6bb}$\n- $y = \\mathtt{0x08b3f481e3aaa0f1a09e30ed741d8ae4fcf5e095d5d00af600db18cb2c04b3edd03cc744a2888ae40caa232946c5e7e1}$\n\nGenerator for $G_2$:\n\n- $x_0 = 
\\mathtt{0x024aa2b2f08f0a91260805272dc51051c6e47ad4fa403b02b4510b647ae3d1770bac0326a805bbefd48056c8c121bdb8}$\n- $x_1 = \\mathtt{0x13e02b6052719f607dacd3a088274f65596bd0d09920b61ab5da61bbdc7f5049334cf11213945d57e5ac7d055d042b7e}$\n- $y_0 = \\mathtt{0x0ce5d527727d6e118cc9cdc6da2e351aadfd9baa8cbdd3a76d429a695160d12c923ac9cc3baca289e193548608b82801}$\n- $y_1 = \\mathtt{0x0606c4a02ea734cc32acd2b02bc28b99cb3e287e85a763af267492ab572e99ab3f370d275cec1da1aaa9075ff05f79be}$\n\n\nCofactor for $G_1$:\n$$h = \\mathtt{0x396c8c005555e1568c00aaab0000aaab}$$\n\nCofactor for $G_2$:\n$$h' = \\mathtt{0x5d543a95414e7f1091d50792876a202cd91de4547085abaa68a205b2e5a7ddfa628f1cb4d9e82ef21537e293a6691ae1616ec6e786f0c70cf1c38e31c7238e5}$$\n\nKey  BLS12-381 parameter used in Miller Loop:\n$$x = -\\mathtt{0xd201000000010000}$$\n\nAll parameters were sourced from [^15], [^51], and [^14], and they remain consistent across these sources.\n\n### Map to curve specification\n\nThis section delineates the functionality of the `bls12381_map_fp_to_g1` and `bls12381_map_fp2_to_g2` functions,\noperating in accordance with the RFC9380 specification \"Hashing to Elliptic Curves\"[^62].\n\nThese functions map field elements in $F_p$ or $F_{p^2}$\nto their corresponding subgroups: $G_1 \\subset E(F_p)$ or $G_2 \\subset E'(F_{p^2})$.\n`bls12381_map_fp_to_g1`/`bls12381_map_fp2_to_g2` combine the functionalities\nof `map_to_curve` and `clear_cofactor` from RFC9380[^63].\n\n```text\nfn bls12381_map_fp_to_g1(u):\n    let Q = map_to_curve(u);\n    return clear_cofactor(Q);\n```\n\nWe choose not to implement the `hash_to_field` function as a host function due to potential changes in hashing methods.\nAdditionally, executing this function within the contract consumes approximately 2 TGas, which is acceptable for our goals.\n\nSpecific implementation parameters for `bls12381_map_fp_to_g1` and `bls12381_map_fp2_to_g2` can be found in RFC9380\nunder sections 8.8.1[^64] and 8.8.2[^65], respectively.\n\n### Curve points 
encoding\n\n#### General comments\n\nThe encoding rules for curve points and field elements align with the standards established in zkcrypto[^53] and\nthe implementation in the milagro lib[^29].\n\nFor elements from $F_p$ the first three bits will always be $0$, because the first byte of $p$ equals $1$. As a result,\nwe can use these bits to encode extra information: the encoding format, the point at infinity, and the points' sign.\nRead more in sections: Uncompressed/compressed points on curve $E(F_p)$ / $E'(F_{p^2})$.\n\n#### Sign\n\nThe sign of a point on the elliptic curve is represented as a u8 type in Rust, with two possible values: 0 for a positive sign and 1 for a negative sign. Any other u8 value is considered invalid and should be treated as incorrect.\n\n#### Scalar\n\nA scalar value is encoded as little-endian [u8; 32]. All possible byte combinations are allowed.\n\n#### Fields elements $F_p$\n\nValues from $F_p$ are encoded as big-endian [u8; 48]. Only values less than p are permitted. If the value is equal to or greater than p, an error should be returned.\n\n#### Extension fields elements $F_{p^2}$\n\nAn element $q \\in F_{p^{2}}$ can be expressed as $q = c_0 + c_1 v$, where $c_0, c_1 \\in F_p$.\nAn element from $F_{p^2}$ is encoded in [u8; 96] as the byte concatenation of $c_1$ and $c_0$. 
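The three spare high bits described in the general comments above can be sketched as bit masks on the first byte of an encoding. The mask values (0x80, 0x40, 0x20) match the encodings given in the following sections; the helper names are illustrative, not part of the proposed host function API.

```rust
// Flag bits in the first byte of a BLS12-381 point encoding (zkcrypto
// convention). Helper names are hypothetical, for illustration only.
const COMPRESSED_FLAG: u8 = 0x80; // highest bit: compressed encoding
const INFINITY_FLAG: u8 = 0x40;   // second-highest bit: point at infinity
const SIGN_FLAG: u8 = 0x20;       // third-highest bit: negative y (compressed form)

/// Encode the point at infinity in compressed E(F_p) form: flag bits set,
/// all other bits zero.
fn compressed_g1_infinity() -> [u8; 48] {
    let mut out = [0u8; 48];
    out[0] |= COMPRESSED_FLAG | INFINITY_FLAG;
    out
}

/// Read back (compressed, infinity, negative-y) from the first byte.
fn flags(first_byte: u8) -> (bool, bool, bool) {
    (
        first_byte & COMPRESSED_FLAG != 0,
        first_byte & INFINITY_FLAG != 0,
        first_byte & SIGN_FLAG != 0,
    )
}

fn main() {
    let enc = compressed_g1_infinity();
    assert_eq!(enc[0], 0xc0); // 0x80 | 0x40
    assert_eq!(flags(enc[0]), (true, true, false));
    println!("flags = {:?}", flags(enc[0]));
}
```

This layout works because the first byte of a big-endian $F_p$ element is always below 0x20 (the top byte of $p$ is 0x1a), so the three high bits are free for metadata.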
The encoding for $c_1$ and $c_0$ follows the rule described in the previous section.\n\n#### Uncompressed points on curve $E(F_p)$\n\nPoints on the curve are represented by affine coordinates: $(x: F_p, y: F_p)$.\nElements from $E(F_p)$ are encoded in `[u8; 96]` as the byte concatenation of the x and y point coordinates, where $x, y \\in F_p$.\nThe encoding follows the rules outlined in the section “Fields elements $F_p$”.\n\n*The second-highest bit* within the encoding serves to signify a point at infinity.\nWhen this bit is set to 1, it designates an infinity point.\nIn this case, all other bits should be set to 0.\n\nEncoding the point at infinity:\n\n```rust\nlet mut x: [u8; 96] = [0; 96];\nx[0] = x[0] | 0x40;\n```\n\n#### Compressed points on curve $E(F_p)$\n\nThe points on the curve are represented by affine coordinates: $(x: F_p, y: F_p)$.\nElements from $E(F_p)$ in compressed form are encoded as `[u8; 48]`,\nwith big-endian encoded $x \\in F_p$.\nThe $y$ coordinate is determined by the formula: $y = \\pm \\sqrt{x^3 + 4}$.\n\n- The highest bit indicates that the point is encoded in compressed form and thus must always be set to 1.\n- The second-highest bit marks the point at infinity (if set to 1).\n  - For the point at infinity, all bits except the first two should be set to 0; other encodings should be considered incorrect.\n- To represent the sign of $y$, the third-highest bit in the x encoding is utilized.\n  - If the bit is 0, $y$ is positive; if 1, $y$ is negative. 
We'll consider the number positive by taking the smallest value between $y$ and $-y$, after reducing them to $[0, p)$.\n\nThe encoding for $x \\in F_p$ as `[u8; 48]` bytes follows the rules described in the section \"Fields elements $F_p$\".\n\nEncoding a point on $E(F_p)$ with a negative $y$ coordinate:\n\n```rust\nlet mut x: [u8; 48] = encodeFp(x);\nx[0] = x[0] | 0x80;\nx[0] = x[0] | 0x20;\n```\n\nEncoding the point at infinity:\n\n```rust\nlet mut x: [u8; 48] = [0; 48];\nx[0] = x[0] | 0x80;\nx[0] = x[0] | 0x40;\n```\n\n#### Uncompressed points on the twisted curve $E'(F_{p^2})$\n\nThe points on the curve are represented by affine coordinates: $(x: F_{p^2}, y: F_{p^2})$.\nElements from $E'(F_{p^2})$ are encoded in `[u8; 192]` as a concatenation of bytes representing the x and y coordinates, where $x, y \\in F_{p^2}$.\nThe encoding for $x$ and $y$ follows the rules detailed in the \"Extension fields elements $F_{p^2}$\" section.\n\n*The second-highest bit* within the encoding serves to signify a point at infinity.\nWhen this bit is set to 1, it designates an infinity point.\nIn this case, all other bits should be set to 0.\n\nEncoding the point at infinity:\n\n```rust\nlet mut x: [u8; 192] = [0; 192];\nx[0] = x[0] | 0x40;\n```\n\n#### Compressed points on twisted curve $E'(F_{p^2})$\n\nThe points on the curve are represented by affine coordinates: $(x: F_{p^2}, y: F_{p^2})$.\nElements from $E'(F_{p^2})$ in compressed form are encoded as `[u8; 96]`,\nwith big-endian encoded $x \\in F_{p^2}$.\nThe $y$ coordinate is determined using the formula: $y = \\pm \\sqrt{x^3 + 4(u + 1)}$.\n\n- The highest bit indicates that the point is encoded in compressed form and must be set to 1.\n- The second-highest bit marks the point at infinity (if set to 1).\n  - For the point at infinity, all bits except the first two should be set to 0; other encodings should be considered incorrect.\n- To represent the sign of $y$, the third-highest bit in the x encoding is utilized.\n  - If the bit 
is 0, $y$ is positive; if 1, $y$ is negative. We'll consider the number positive by taking the smallest value between $y$ and $-y$: first compare $c_1$, then $c_0$, after reduction to $[0, p)$.\n\nThe encoding of $x \\in F_{p^2}$ as `[u8; 96]` bytes follows the rules from the section “Extension fields elements $F_{p^2}$”.\n\nEncoding a point on $E'(F_{p^2})$ with a negative $y$ coordinate:\n\n```rust\nlet mut x: [u8; 96] = encodeFp2(x);\nx[0] = x[0] | 0x80;\nx[0] = x[0] | 0x20;\n```\n\nEncoding the point at infinity:\n\n```rust\nlet mut x: [u8; 96] = [0; 96];\nx[0] = x[0] | 0x80;\nx[0] = x[0] | 0x40;\n```\n\n#### ERROR_CODE\n\nValidating the input for the host functions within the contract can consume significant gas.\nFor instance, verifying whether a point belongs to the subgroup is gas-consuming.\nIf an error is returned by a NEAR host function, the entire execution is reverted.\nTo mitigate this, when the input verification is complex, the host function\nwill successfully complete its work but return an ERROR_CODE.\nThis enables users to handle error cases independently. It's important to note that host functions\nmay still terminate with an error when the error is straightforward to avoid (e.g., an incorrect input size).\n\nThe ERROR_CODE is a u64 and can hold the following values:\n\n- 0: No error, execution was successful. For the `bls12381_pairing_check` function, the pairing result equals the multiplicative identity.\n- 1: Execution finished with an error due to:\n  - Incorrect encoding (e.g., incorrectly set compression/decompression bit, coordinate >= p, etc.).\n  - A point not on the curve (where applicable).\n  - A point not in the expected subgroup (where applicable).\n- 2: Can be returned only in `bls12381_pairing_check`. 
No error, execution was successful, but the pairing result doesn't equal the multiplicative identity.

### Host functions

#### General comments for all functions

In all functions, the input is fetched from memory, beginning at `value_ptr` and extending to `value_ptr + value_len`.
If `value_len` is `u64::MAX`, the input will come from the register with id `value_ptr`.

Execution terminates with an error only if the input length is incorrect,
the input extends beyond memory bounds, or gas limits are reached.
Otherwise, execution completes successfully, providing the `ERROR_CODE`.

If the `ERROR_CODE` equals 0, the output data will be written to
the register with the `register_id` identifier.
Otherwise, nothing will be written to the register.

***Gas Estimation:***

The algorithms described above exhibit linear complexity in the number of elements. Gas consumption can be estimated using the following formula:

```rust
let k = input_bytes / item_size;
let gas_consumed = A + B * k;
```

Here, `A` and `B` denote empirically calculated constants unique to each algorithm.

For gas estimation, the benchmark vectors outlined in EIP-2537[^46] can be used where applicable.

***Error cases (execution is terminated):***

For all functions, execution will terminate in the following cases:

- The input length is not divisible by `item_size`.
- The input is beyond memory bounds.

#### bls12381_p1_sum

***Description:***

The function calculates the sum of signed elements on the BLS12-381 curve. It accepts an arbitrary number of pairs $(s_i, p_i)$, where $p_i \in E(F_p)$ represents a point on the elliptic curve, and $s_i \in \{0, 1\}$ signifies the point's sign. 
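As an illustration, a caller could assemble the input buffer for this function as sketched below; `build_p1_sum_input` is a hypothetical helper (not part of this specification), and the zeroed 96-byte arrays merely stand in for real uncompressed $E(F_p)$ encodings:

```rust
// A minimal sketch (not the host-side implementation) of laying out the
// 97-byte items for `bls12381_p1_sum`: one sign byte followed by a
// 96-byte uncompressed E(F_p) point, as described in this section.
fn build_p1_sum_input(items: &[(u8, [u8; 96])]) -> Vec<u8> {
    let mut input = Vec::with_capacity(items.len() * 97);
    for (sign, point) in items {
        assert!(*sign <= 1, "sign must be 0 or 1");
        input.push(*sign); // 1 byte: 0 => +p_i, 1 => -p_i
        input.extend_from_slice(point); // 96 bytes: uncompressed point
    }
    input
}

fn main() {
    let p = [0u8; 96]; // placeholder bytes, not a valid curve point
    let input = build_p1_sum_input(&[(0, p), (1, p)]);
    assert_eq!(input.len(), 2 * 97);
    assert_eq!(input[0], 0); // sign of the first pair
    assert_eq!(input[97], 1); // sign of the second pair
}
```

The resulting buffer is what would be passed via `value_ptr`/`value_len`.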
The output is a single point from $E(F_p)$ equivalent to $\sum (-1)^{s_i}p_i$.

The operations, including the $E(F_p)$ curve, points on the curve, multiplication by -1, and the addition operation, are detailed in the BLS12-381 Curve Specification section.

Note: This function accepts points from the entire curve and is not restricted to points in $G_1$.

***Input:***

The sequence of pairs $(s_i, p_i)$, where $p_i \in E(F_p)$ represents a point and $s_i \in \{0, 1\}$ denotes the sign. Each point is encoded in decompressed form as $(x\colon F_p, y\colon F_p)$, and the sign is encoded in one byte, taking only two allowed values: 0 or 1. The expected input size is 97*k bytes, interpreted as the byte concatenation of k slices, with each slice representing the point sign and the uncompressed point from $E(F_p)$. Further details are available in the Curve Points Encoding section.

***Output:***

The ERROR_CODE is returned.

- ERROR_CODE = 0: the input is correct
  - <ins>Output:</ins> 96 bytes represent one point $\in E(F_p)$ in its decompressed form. 
In case of an empty input, it outputs the point at infinity (refer to the Curve Points Encoding section for more details).
- ERROR_CODE = 1:
  - Points or signs are incorrectly encoded (refer to the Curve Points Encoding section).
  - Point is not on the curve.

***Test cases:***

<ins>Tests for the sum of two points</ins>

This section aims to verify the correctness of summing up two valid elements on the curve:

- Utilize points on the curve with known addition results for comparison, such as tests from EIP-2537[^47],[^48].
- Generate random points on the curve and verify the commutative property: P + Q = Q + P.
- Validate that the sum of random points from $G_1$ remains in $G_1$.
- Generate random points on the curve and use another library to cross-check the results.

Edge cases:

- Points not from $G_1$.
- $\mathcal{O} + \mathcal{O} = \mathcal{O}$.
- $P + \mathcal{O} = \mathcal{O} + P = P$.
- $P + (-P) = (-P) + P = \mathcal{O}$.
- P + P (tangent to the curve).
- The sum of two points P and (-(P + P)) (tangent to the curve at point P).

<ins>Tests for inversion</ins>

This section aims to validate the correctness of point inversion:

- Generate random points on the curve and verify $P - P = -P + P = \mathcal{O}$.
- Generate random points on the curve and verify -(-P) = P.
- Generate random points from $G_1$ and ensure that -P also belongs to $G_1$.
- Utilize an external implementation, generate random points on the curve, and compare results.

Edge cases:

- Points not from $G_1$.
- $-\mathcal{O}$

<ins>Tests for incorrect data</ins>

This section aims to validate the handling of incorrect input data:

- Incorrect input length.
- Incorrect sign value (not 0 or 1).
- Erroneous coding of field elements: one of the first three bits set incorrectly.
- Erroneous coding of field elements resulting in a correct element on the curve after reduction modulo p.
- Erroneous coding of field elements with an incorrect extra bit in the 
decompressed encoding.
- Point not on the curve.
- Incorrect encoding of the point at infinity.
- Input is beyond memory bounds.

<ins>Tests for the sum of an arbitrary amount of points</ins>

This section focuses on validating the summation functionality with an arbitrary number of points:

- Generate random points on the curve and verify that the sum of a random permutation matches.
- Generate random points on the curve and utilize another library to validate results.
- Create points and cross-check the outcome with the `multiexp` function.
- Generate random points from $G_1$ and confirm that the sum is also from $G_1$.

Edge cases:

- Empty input
- Sum with the maximum number of elements
- A single point

***Annotation:***

```rust
pub fn bls12381_p1_sum(&mut self,
                       value_len: u64,
                       value_ptr: u64,
                       register_id: u64) -> Result<u64>;
```

#### bls12381_p2_sum

***Description:***

The function computes the sum of the signed elements on the BLS12-381 curve. It accepts an arbitrary number of pairs $(s_i, p_i)$, where $p_i \in E'(F_{p^2})$ represents a point on the elliptic curve and $s_i \in \{0, 1\}$ is the point's sign. The output is a single point from $E'(F_{p^2})$ equal to $\sum (-1)^{s_i}p_i$.

The $E'(F_{p^2})$ curve, the points on the curve, the multiplication by -1, and the addition operation are all defined in the BLS12-381 Curve Specification section.

Note: The function accepts any points on the curve and is not limited to points in $G_2$.

***Input:***

The sequence of pairs $(s_i, p_i)$, where $p_i \in E'(F_{p^2})$ is a point and $s_i \in \{0, 1\}$ represents a sign.
Each point is encoded in decompressed form as $(x: F_{p^2}, y: F_{p^2})$, and the sign is encoded in one byte. 
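As a rough sketch (not the actual nearcore code), the host side could split such an input into (sign, point) items as follows; the 193-byte item size (1 sign byte plus a 192-byte uncompressed point) follows the layout described in this section:

```rust
// Illustrative splitting of the p2-sum input into (sign, point) slices.
const ITEM_SIZE: usize = 193; // 1 sign byte + 192-byte uncompressed point

fn split_items(input: &[u8]) -> Result<Vec<(u8, &[u8])>, String> {
    if input.len() % ITEM_SIZE != 0 {
        // Mirrors the "input length not divisible by item_size" error case.
        return Err("input length is not divisible by item size".into());
    }
    Ok(input
        .chunks_exact(ITEM_SIZE)
        .map(|item| (item[0], &item[1..])) // (sign byte, 192 point bytes)
        .collect())
}

fn main() {
    let input = vec![0u8; 2 * ITEM_SIZE];
    let items = split_items(&input).unwrap();
    assert_eq!(items.len(), 2);
    assert_eq!(items[0].1.len(), 192);
    assert!(split_items(&[0u8; 100]).is_err());
}
```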
The expected input size is 193*k bytes, interpreted as the byte concatenation of k slices,
each slice representing the point sign alongside the uncompressed point from $E'(F_{p^2})$.
More details are available in the Curve Points Encoding section.

***Output:***

The ERROR_CODE is returned.

- ERROR_CODE = 0: the input is correct
  - <ins>Output:</ins> 192 bytes represent one point $\in E'(F_{p^2})$ in its decompressed form. In case of an empty input, it outputs the point at infinity (refer to the Curve Points Encoding section for more details).
- ERROR_CODE = 1:
  - Points or signs are incorrectly encoded (refer to the Curve Points Encoding section).
  - Point is not on the curve.

***Test cases:***

The test cases are identical to those of `bls12381_p1_sum`, with the only alteration being the substitution of points from $G_1$ and $E(F_p)$ with points from $G_2$ and $E'(F_{p^2})$.

***Annotation:***

```rust
pub fn bls12381_p2_sum(&mut self,
                       value_len: u64,
                       value_ptr: u64,
                       register_id: u64) -> Result<u64>;
```

#### bls12381_g1_multiexp

***Description:***

The function accepts a list of pairs $(p_i, s_i)$, where $p_i \in G_1 \subset E(F_p)$ represents a point on the curve, and $s_i \in \mathbb{N}_0$ denotes a scalar. 
It calculates $\sum s_i \cdot p_i$.

The scalar multiplication operation means adding the point to itself a scalar number of times:

$$
s \cdot p = \underbrace{p + p + \ldots + p}_{s}
$$

The $E(F_p)$ curve, $G_1$ subgroup, points on the curve, and the addition operation are defined in the BLS12-381 Curve Specification section.

Please note:

- The function accepts only points from $G_1$.
- The scalar is an arbitrary unsigned integer and can exceed the group order.
- To enhance gas efficiency, Pippenger's algorithm[^25] can be utilized.

***Input:*** The sequence of pairs $(p_i, s_i)$, where $p_i \in G_1 \subset E(F_p)$ represents a point on the curve, and $s_i \in \mathbb{N}_0$ is a scalar. The expected input size is 128*k bytes, interpreted as the byte concatenation of k slices. Each slice comprises the concatenation of an uncompressed point from $G_1 \subset E(F_p)$ (96 bytes) and a scalar (32 bytes). Further details are available in the Curve Points Encoding section.

***Output:***

The ERROR_CODE is returned.

- ERROR_CODE = 0: the input is correct
  - <ins>Output:</ins> 96 bytes represent one point $\in G_1 \subset E(F_p)$ in its decompressed form. In case of an empty input, it outputs the point at infinity (refer to the Curve Points Encoding section for more details).
- ERROR_CODE = 1:
  - Points are incorrectly encoded (refer to the Curve Points Encoding section).
  - Point is not on the curve.
  - Point is not from $G_1$.

***Test cases:***

<ins>Tests for multiplication</ins>

- Tests with known answers for multiplication from EIP-2537[^47],[^48].
- Random small scalar n and point P:
  - Check results with the sum function: `P + P + P + .. 
+ P = n*P`.
  - Compare with results from another library.
- Random scalar n and point P:
  - Verify against results from another library.
  - Implement multiplication by using the sum function and the double-and-add algorithm[^61].

Edge cases:

- `group_order * P = 0`
- `(scalar + group_order) * P = scalar * P`
- `P + P + P .. + P = N*P`
- `0 * P = 0`
- `1 * P = P`
- The scalar is MAX_INT

<ins>Tests for sum of two points</ins>

These are identical test cases to those in the `bls12381_p1_sum` section, but only with points from the $G_1$ subgroup.

- Generate random points P and Q, then compare the results with the sum function.

<ins>Tests for the sum of an arbitrary amount of points</ins>

- Random number of points, random point values; compare results with the sum function.
- Empty input.
- Input of maximum size.

<ins>Tests for the multiexp of an arbitrary amount of points</ins>

- Tests with known answers from EIP-2537[^47],[^48]
- Random number of pairs with random scalars and points:
  - Check with results from another library.
  - Check with a raw implementation based on the sum function and the double-and-add algorithm.
- Empty input
- Maximum number of scalars and points.

<ins>Tests for error cases</ins>

- The same test cases as those in the `bls12381_p1_sum` section.
- Points not from $G_1$.

***Annotation:***

```rust
pub fn bls12381_g1_multiexp(
        &mut self,
        value_len: u64,
        value_ptr: u64,
        register_id: u64,
) -> Result<u64>;
```

#### bls12381_g2_multiexp

***Description:***

The function takes a list of pairs $(p_i, s_i)$ as input, where $p_i \in G_2 \subset E'(F_{p^2})$ represents a point on the curve, and $s_i \in \mathbb{N}_0$ denotes a scalar. 
The function computes $\sum s_i \cdot p_i$.

This scalar multiplication operation involves adding the point $p$ to itself a specified number of times:

$$
s \cdot p = \underbrace{p + p + \ldots + p}_{s}
$$

The $E'(F_{p^2})$ curve, $G_2$ subgroup, points on the curve, and the addition operation are defined in the BLS12-381 Curve Specification section.

Please note:

- The function accepts only points from $G_2$.
- The scalar is an arbitrary unsigned integer and can exceed the group order.
- To enhance gas efficiency, Pippenger's algorithm[^25] can be utilized.

***Input:*** The sequence of pairs $(p_i, s_i)$, where $p_i \in G_2 \subset E'(F_{p^2})$ is a point on the curve and $s_i \in \mathbb{N}_0$ is a scalar.

The expected input size is `224*k` bytes, interpreted as the byte concatenation of `k` slices. Each slice is the concatenation of an uncompressed point from $G_2 \subset E'(F_{p^2})$ (`192` bytes) and a scalar (`32` bytes). More details are in the Curve Points Encoding section.

***Output:***

The ERROR_CODE is returned.

- ERROR_CODE = 0: the input is correct
  - <ins>Output:</ins> 192 bytes represent one point $\in G_2 \subset E'(F_{p^2})$ in its decompressed form. 
In case of an empty input, it outputs the point at infinity (refer to the Curve Points Encoding section for more details).
- ERROR_CODE = 1:
  - Points are incorrectly encoded (refer to the Curve Points Encoding section).
  - Point is not on the curve.
  - Point is not in the $G_2$ subgroup.

***Test cases:***

The test cases are identical to those for `bls12381_g1_multiexp`, except that the points from $G_1$ and $E(F_p)$ are replaced with points from $G_2$ and $E'(F_{p^2})$.

***Annotation:***

```rust
pub fn bls12381_g2_multiexp(
        &mut self,
        value_len: u64,
        value_ptr: u64,
        register_id: u64,
) -> Result<u64>;
```

#### bls12381_map_fp_to_g1

***Description:***

This function takes as input a list of field elements $a_i \in F_p$ and maps them to $G_1 \subset E(F_p)$.
You can find the specification of this mapping function in the "Map to Curve Specification" section.
Importantly, this function does NOT perform the mapping of a byte string into $F_p$.
The implementation of the mapping to $F_p$ may vary and can be executed efficiently within the contract.

***Input:***

The function expects `48*k` bytes as input, representing a list of elements from $F_p$ (unsigned integers $< p$). Additional information is available in the Curve Points Encoding section.

***Output:***

The ERROR_CODE is returned.

- ERROR_CODE = 0: the input is correct
  - <ins>Output:</ins> `96*k` bytes representing a list of points $\in G_1 \subset E(F_p)$ in decompressed format. 
Further information is available in the Curve Points Encoding section.
- ERROR_CODE = 1: $a_i \ge p$.

***Test cases:***

<ins>Tests for general cases</ins>

- Validate the results for known answers from EIP-2537[^47],[^48].
- Generate a random element $a$ from $F_p$:
  - Verify the result using another library.
  - Check that the resulting point lies on the curve in $G_1$.
  - Compare the results for $a$ and $-a$; they should share the same x-coordinates and have opposite y-coordinates.

Edge cases:

- $a = 0$
- $a = p - 1$

<ins>Tests for an arbitrary number of elements</ins>

- Empty input
- Maximum number of points.
- Generate a random number of field elements and compare the result with another library.

<ins>Tests for error cases</ins>

- Input length is not divisible by 48.
- Input is beyond memory bounds.
- $a = p$
- Random number $\ge p$

***Annotation:***

```rust
pub fn bls12381_map_fp_to_g1(
        &mut self,
        value_len: u64,
        value_ptr: u64,
        register_id: u64,
) -> Result<u64>;
```

#### bls12381_map_fp2_to_g2

***Description:***

This function takes as input a list of elements $a_i \in F_{p^2}$ and maps them to $G_2 \subset E'(F_{p^2})$.
You can find the mapping function specification in the "Map to Curve Specification" section. It's important to note that this function does NOT map byte strings into $F_{p^2}$.
The implementation of the mapping to $F_{p^2}$ may vary and can be executed efficiently within the contract.

***Input:*** The function takes as input `96*k` bytes: the elements from $F_{p^2}$ (two unsigned integers $< p$ each). Additional details can be found in the Curve Points Encoding section.

***Output:***

The ERROR_CODE is returned.

- ERROR_CODE = 0: the input is correct
  - <ins>Output:</ins> `192*k` bytes representing a list of points $\in G_2 \subset E'(F_{p^2})$ in decompressed format. 
More details are in the Curve Points Encoding section.
- ERROR_CODE = 1: one of the values is not a valid extension field $F_{p^2}$ element.

***Test cases:***

<ins>Tests for general cases</ins>

- Validate the results for known answers from EIP-2537[^47],[^48]
- Generate a random element $a$ from $F_{p^2}$:
  - Verify the result with another library.
  - Check that the resulting point lies in $G_2$.
  - Compare results for $a$ and $-a$; they should have the same x-coordinates and opposite y-coordinates.

Edge cases:

- $a = (0, 0)$
- $a = (p - 1, p - 1)$

<ins>Tests for an arbitrary number of elements</ins>

- Empty input
- Maximum number of points.
- Generate a random number of field elements and compare the result with another library.

<ins>Tests for error cases</ins>

- Input length is not divisible by 96.
- Input is beyond memory bounds.
- $a = (0, p)$
- $a = (p, 0)$
- (random number $\ge p$, 0)
- (0, random number $\ge p$)

***Annotation:***

```rust
pub fn bls12381_map_fp2_to_g2(
        &mut self,
        value_len: u64,
        value_ptr: u64,
        register_id: u64,
) -> Result<u64>;
```

#### bls12381_pairing_check

***Description:***

The pairing function is a bilinear function $e\colon G_1 \times G_2 \rightarrow G_T$, where $G_T \subset F_{p^{12}}$,
which is used to verify BLS signatures/zkSNARKs.

This function takes as input the sequence of pairs $(p_i, q_i)$, where $p_i \in G_1 \subset E(F_{p})$ and $q_i \in G_2 \subset E'(F_{p^2})$ and validates:

$$
\prod e(p_i, q_i) = 1
$$

We don't need to calculate the pairing function itself, as the result would lie in a huge field, and in all known applications only this validation check is necessary.

***Input:*** A sequence of pairs $(p_i, q_i)$, where $p_i \in G_1 \subset E(F_{p})$ and $q_i \in G_2 \subset E'(F_{p^2})$. Each point is encoded in decompressed form. 
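For example, under the convention of public keys in $G_1$ and signatures in $G_2$ (one of the variants mentioned in this proposal), BLS signature verification supplies exactly two pairs as input: given $pk = sk \cdot g_1$ and $\sigma = sk \cdot H(m) \in G_2$, the verifier passes $(-pk, H(m))$ and $(g_1, \sigma)$, and the check succeeds by bilinearity:

$$
e(-pk, H(m)) \cdot e(g_1, \sigma) = e(g_1, H(m))^{-sk} \cdot e(g_1, H(m))^{sk} = 1
$$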
The expected input size is 288*k bytes, interpreted as the byte concatenation of k slices. Each slice comprises the concatenation of an uncompressed point from $G_1 \subset E(F_p)$ (occupying 96 bytes) and a point from $G_2 \subset E'(F_{p^2})$ (occupying 192 bytes). Additional details can be found in the Curve Points Encoding section.

***Output:***

The ERROR_CODE is returned.

- ERROR_CODE = 0: the input is correct, the pairing result equals the multiplicative identity.
- ERROR_CODE = 1:
  - Points encoded incorrectly (refer to the Curve Points Encoding section).
  - Point not on the curve.
  - Point not in $G_1/G_2$.
- ERROR_CODE = 2: the input is correct, the pairing result doesn't equal the multiplicative identity.

***Test cases:***

<ins>Tests for one pair</ins>

- Generate a random point $P \in G_1$: verify $e(P, \mathcal{O}) = 1$
- Generate a random point $Q \in G_2$: verify $e(\mathcal{O}, Q) = 1$
- Generate random points $P \in G_1$ and $Q \in G_2$ with $P, Q \ne \mathcal{O}$: verify $e(P, Q) \ne 1$

<ins>Tests for two pairs</ins>

- Generate random points $P \in G_1$, $Q \in G_2$ and random scalars $s_1, s_2$:
  - $e(P, Q) \cdot e(P, -Q) = 1$
  - $e(P, Q) \cdot e(-P, Q) = 1$
  - $e(s_1P, s_2Q) \cdot e(-s_2P, s_1Q) = 1$
  - $e(s_1P, s_2Q) \cdot e(s_2P, -s_1Q) = 1$

- $g_1 \in G_1$, $g_2 \in G_2$ are the generators defined in the "BLS12-381 Curve Specification" section, $r$ is the order of $G_1$ and $G_2$, and $p_1, p_2, q_1, q_2$ are randomly generated scalars:
  - if $p_1 \cdot q_1 + p_2 \cdot q_2 \not\equiv 0 \pmod{r}$, verify $e(p_1 g_1, q_1 g_2) \cdot e(p_2 g_1, q_2 g_2) \ne 1$
  - if $p_1 \cdot q_1 + p_2 \cdot q_2 \equiv 0 \pmod{r}$, verify $e(p_1 g_1, q_1 g_2) \cdot e(p_2 g_1, q_2 g_2) = 1$

<ins>Tests for an arbitrary number of pairs</ins>

- Empty input
- Test with the maximum number of pairs
- Tests using known answers from EIP-2537[^47],[^48]
- For all possible values 
of $n$, generate random scalars $p_1 \cdots p_n$ and $q_1 \cdots q_n$ such that $\sum p_i \cdot q_i \not\equiv 0 \pmod{r}$:
  - Verify $\prod e(p_i g_1, q_i g_2) \ne 1$
- For all possible values of $n$, generate random scalars $p_1 \cdots p_{n - 1}$ and $q_1 \cdots q_{n - 1}$:
  - Verify $(\prod e(p_i g_1, q_i g_2)) \cdot e(-(\sum p_i q_i) g_1, g_2) = 1$
  - Verify $(\prod e(p_i g_1, q_i g_2)) \cdot e(g_1, -(\sum p_i q_i) g_2) = 1$

<ins>Tests for error cases</ins>

- The first point is on the curve but not in $G_1$.
- The second point is on the curve but not in $G_2$.
- The input length is not divisible by 288.
- The first point is not on the curve.
- The second point is not on the curve.
- Input length exceeds the memory limit.
- Incorrect encoding of the point at infinity.
- Incorrect encoding of a curve point:
  - Incorrect decompression bit.
  - Coordinates greater than or equal to $p$.

***Annotation:***

```rust
pub fn bls12381_pairing_check(&mut self,
                              value_len: u64,
                              value_ptr: u64) -> Result<u64>;
```

#### bls12381_p1_decompress

***Description:*** The function decompresses compressed points from $E(F_p)$. It takes an arbitrary number of points $p_i \in E(F_p)$ in compressed format as input and outputs the same number of points from $E(F_p)$ in decompressed format. Further details about the decompressed and compressed formats are available in the Curve Points Encoding section.

***Input:*** A sequence of points $p_i \in E(F_p)$, with each point encoded in compressed form. The expected input size is 48*k bytes, interpreted as the byte concatenation of k slices. Each slice represents the compressed point from $E(F_p)$. 
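As a small sketch of what such a 48-byte slice looks like, the flag bits from the Curve Points Encoding section (0x80 for compressed form, 0x40 for the point at infinity, 0x20 for a negative $y$) can be exercised as follows; this is illustrative only, and the constant names are not part of the specification:

```rust
// Flag bits of the 48-byte compressed E(F_p) encoding, as described in
// the Curve Points Encoding section. The constant names are illustrative.
const COMPRESSED: u8 = 0x80; // highest bit: compressed form
const INFINITY: u8 = 0x40; // second-highest bit: point at infinity
const NEGATIVE_Y: u8 = 0x20; // third-highest bit: sign of y

// Builds the only valid compressed encoding of the point at infinity.
fn compressed_infinity() -> [u8; 48] {
    let mut bytes = [0u8; 48];
    bytes[0] |= COMPRESSED | INFINITY;
    bytes
}

fn main() {
    let inf = compressed_infinity();
    assert_eq!(inf[0], 0xc0);
    assert!(inf[0] & NEGATIVE_Y == 0);
    // For a valid infinity encoding, all remaining bits must be zero.
    assert!(inf[1..].iter().all(|&b| b == 0));
}
```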
Additional details can be found in the Curve Points Encoding section.

***Output:***

The ERROR_CODE is returned.

- ERROR_CODE = 0: the input is correct
  - <ins>Output:</ins> The sequence of points $p_i \in E(F_p)$, with each point encoded in decompressed form. The expected output is 96*k bytes, interpreted as the byte concatenation of k slices. Each slice represents the decompressed point from $E(F_p)$. `k` is the same as in the input. More details are available in the Curve Points Encoding section.
- ERROR_CODE = 1:
  - Points are incorrectly encoded (refer to the Curve Points Encoding section).
  - Point is not on the curve.

***Test cases:***

<ins>Tests for decompressing a single point</ins>

- Generate random points on the curve from $G_1$ and not from $G_1$:
  - Check that the uncompressed point lies on the curve.
  - Compare the result with another library.
- Generate random points with a negative y:
  - Take the inverse and compare the y-coordinate.
  - Compare the result with another library.
- Decompress the point at infinity.

<ins>Tests for decompression of an arbitrary number of points</ins>

- Empty input.
- Maximum number of points.
- Generate a random number of points on the curve and compare the result with another library.

<ins>Tests for error cases</ins>

- The input length is not divisible by 48.
- The input is beyond memory bounds.
- Point is not on the curve.
- Incorrect decompression bit.
- Incorrectly encoded point at infinity.
- Point with a coordinate greater than or equal to $p$.

***Annotation:***

```rust
pub fn bls12381_p1_decompress(&mut self,
                              value_len: u64,
                              value_ptr: u64,
                              register_id: u64) -> Result<u64>;
```

#### bls12381_p2_decompress

***Description:*** The function decompresses compressed points from $E'(F_{p^2})$. 
It takes an arbitrary number of points $p_i \in E'(F_{p^2})$ in compressed format as input and outputs the same number of points from $E'(F_{p^2})$ in decompressed format. For more information about the decompressed and compressed formats, refer to the Curve Points Encoding section.

***Input:*** A sequence of points $p_i \in E'(F_{p^2})$, with each point encoded in compressed form. The expected input size is `96*k` bytes, interpreted as the byte concatenation of k slices. Each slice represents the compressed point from $E'(F_{p^2})$. Additional details are available in the Curve Points Encoding section.

***Output:***

The ERROR_CODE is returned.

- ERROR_CODE = 0: the input is correct
  - <ins>Output:</ins> The sequence of points $p_i \in E'(F_{p^2})$, with each point encoded in decompressed form. The expected output is 192*k bytes, interpreted as the byte concatenation of k slices. `k` corresponds to the value specified in the input section. Each slice represents the decompressed point from $E'(F_{p^2})$. For more details, refer to the Curve Points Encoding section.
- ERROR_CODE = 1:
  - Points are incorrectly encoded (refer to the Curve Points Encoding section).
  - Point is not on the curve.

***Test cases:***

The same test cases as `bls12381_p1_decompress`, but with points from $G_2$, and the input length should be divisible by 96.

***Annotation:***

```rust
pub fn bls12381_p2_decompress(&mut self,
                              value_len: u64,
                              value_ptr: u64,
                              register_id: u64) -> Result<u64>;
```

## Reference Implementation

Since the primary integration target is nearcore, our main interest lies in Rust libraries. The current implementations of BLS12-381 in Rust are:

1. ***Milagro Library*** [^29].
2. ***BLST*** [^30][^31].
3. ***Matter Labs EIP-1962 implementation*** [^32]
4. ***zCash origin implementation*** [^33]
5. ***MCL Library*** [^34]
6. 
***FileCoin implementation*** [^35]
7. ***zkCrypto*** [^36]

To compile the list, we used links from EIP-2537[^43], the pairing-curves specification[^44], and an article containing benchmarks[^45]. This list might be incomplete, but it should encompass the primary BLS12-381 implementations.

In addition, there are implementations in other languages that are less relevant to us in this context but can serve as references.

1. C++, ETH2.0 Client, ***Chia library***[^37]
2. Haskell, ***Adjoint Lib***[^38]
3. Go, ***Go-Ethereum***[^39]
4. JavaScript, ***Noble JS***[^40]
5. Go, ***Matter Labs Go EIP-1962 implementation***[^41]
6. C++, ***Matter Labs C++ EIP-1962 implementation***[^42]

One of the possible libraries to use is the blst library[^30].
This library exhibits good performance[^45] and has undergone several audits[^55].
You can find a draft implementation in nearcore, which is based on this library, through this link[^54].

## Security Implications

The implementation's security depends on the security of the chosen library that supports operations on the BLS12-381 curve.

Within this NEP, a constant execution time for all operations isn't mandated. All the computations executed by a smart contract are entirely public anyway, so there would be no advantage to a constant-time algorithm.

BLS12-381 offers more bits of security compared to the already supported pairing-friendly curve BN254. Consequently, the security of projects requiring a pairing-friendly curve will be enhanced.

## Alternatives

In nearcore, host functions for another pairing-friendly curve, BN254, have already been implemented[^10]. Some projects[^20] might consider utilizing the supported curve as an alternative. However, recent research indicates that this curve provides less than 100 bits of security and is not recommended for use[^13]. 
Furthermore, projects involved in cross-chain interactions, like Rainbow Bridge, must use the same curve as the target protocol, which, in the case of Ethereum, is currently BLS12-381[^3]. Consequently, employing a different pairing-friendly curve is not a viable alternative.

An alternative approach involves creating a single straightforward host function in nearcore for BLS signature verification. This was the initially proposed solution[^26]. However, this solution lacks flexibility[^28] for several reasons: (1) projects may utilize different hash functions; (2) some projects might employ the $G_1$ subgroup for public keys, while others use $G_2$; (3) the specifications for Ethereum 2.0 remain in draft, subject to potential changes; (4) instead of a more varied and adaptable set of functions (inspired by EIP-2537's precompiles), we are left with a single large function; (5) there will be no support for zkSNARKs verification.

Another alternative is to perform BLS12-381 operations off-chain. In this scenario, applications utilizing the BLS curve will no longer maintain trustlessness.

## Future possibilities

In the future, there might be support for working with various curves beyond just BLS12-381. In Ethereum, prior to EIP-2537[^15], there was a proposal, EIP-1962[^27], to introduce pairing-friendly elliptic curves in a versatile format, accommodating not only BLS curves but numerous others as well. However, this proposal wasn't adopted due to its extensive scope and complexity. Implementing every conceivable curve might not be practical, but it remains a potential extension worth considering.

Another potential extension could involve supporting `hash_to_field` or `hash_to_curve` operations[^58]. Enabling their support would optimize gas usage for encoding messages into elements on the curve, which could be beneficial to BLS signatures. 
However, implementing the `hash_to_field` operation would require supporting multiple hashing algorithms simultaneously, and it doesn't demand a significant amount of gas when implemented within the contract. Therefore, these functions exceed the scope of this proposal.

Additionally, a potential expansion might encompass supporting not only affine coordinates but also other coordinate systems, such as homogeneous or Jacobian projective coordinates.

## Consequences

### Positive

- Projects currently utilizing BN254 will have the capability to transition to the BLS12-381 curve, thereby enhancing their security.
- Trustless cross-chain interactions with blockchains employing BLS12-381 in protocols (like Ethereum 2.0) will become feasible.

### Neutral

### Negative

- There emerges a dependency on a library that supports operations with BLS12-381 curves.
- We'll have to continually maintain operations with BLS12-381 curves, even if vulnerabilities are discovered and it becomes unsafe to use these curves.

### Backward Compatibility

There are no backward compatibility questions.

## Changelog

The previous NEP for supporting BLS signatures based on BLS12-381[^26].

[^1]: BLS 2002 [https://www.researchgate.net/publication/2894224_Constructing_Elliptic_Curves_with_Prescribed_Embedding_Degrees](https://www.researchgate.net/publication/2894224_Constructing_Elliptic_Curves_with_Prescribed_Embedding_Degrees)
[^2]: ZCash protocol: [https://zips.z.cash/protocol/protocol.pdf](https://zips.z.cash/protocol/protocol.pdf)
[^3]: Ethereum 2 specification: [https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/beacon-chain.md](https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/beacon-chain.md)
[^4]: Dfinity: [https://internetcomputer.org/docs/current/references/ic-interface-spec#certificate](https://internetcomputer.org/docs/current/references/ic-interface-spec#certificate)
[^5]: Tezos: 
[https://wiki.tezosagora.org/learn/futuredevelopments/layer2#zkchannels](https://web.archive.org/web/20210227221934/https://wiki.tezosagora.org/learn/futuredevelopments/layer2)\n[^6]: Filecoin: [https://spec.filecoin.io/](https://spec.filecoin.io/)\n[^7]: Specification of pairing friendly curves with a list of applications in the table: [https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-pairing-friendly-curves-09#name-adoption-status-of-pairing-](https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-pairing-friendly-curves-09#name-adoption-status-of-pairing-)\n[^8]: Specification of pairing friendly curves, the security level for BLS12-381: [https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-pairing-friendly-curves-09#section-4.2.1](https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-pairing-friendly-curves-09#section-4.2.1)\n[^9]: BN2005: [https://eprint.iacr.org/2005/133](https://eprint.iacr.org/2005/133)\n[^10]: NEP-98 for BN254 host functions on NEAR: [https://github.com/near/NEPs/issues/98](https://github.com/near/NEPs/issues/98)\n[^11]: BLS12-381 for the Rest of Us: [https://hackmd.io/@benjaminion/bls12-381](https://hackmd.io/@benjaminion/bls12-381)\n[^12]: BN254 for the Rest of Us: [https://hackmd.io/@jpw/bn254](https://hackmd.io/@jpw/bn254)\n[^13]: Some analytics of different curve security: [https://www.ietf.org/archive/id/draft-irtf-cfrg-pairing-friendly-curves-02.html#name-for-100-bits-of-security](https://www.ietf.org/archive/id/draft-irtf-cfrg-pairing-friendly-curves-02.html#name-for-100-bits-of-security)\n[^14]: ZCash Transfer from bn254 to bls12-381: [https://electriccoin.co/blog/new-snark-curve/](https://electriccoin.co/blog/new-snark-curve/)\n[^15]: EIP-2537 Precompiles for Ethereum for BLS12-381: [https://eips.ethereum.org/EIPS/eip-2537](https://eips.ethereum.org/EIPS/eip-2537)\n[^17]: Article about Rainbow Bridge [https://near.org/blog/eth-near-rainbow-bridge](https://near.org/blog/eth-near-rainbow-bridge)\n[^19]: Intro into zkSNARKs: 
[https://media.consensys.net/introduction-to-zksnarks-with-examples-3283b554fc3b](https://media.consensys.net/introduction-to-zksnarks-with-examples-3283b554fc3b)\n[^20]: Zeropool project: [https://zeropool.network/](https://zeropool.network/)\n[^24]: Precompiles on Aurora: [https://doc.aurora.dev/dev-reference/precompiles/](https://doc.aurora.dev/dev-reference/precompiles/)\n[^25]: Pippenger Algorithm: [https://github.com/wborgeaud/python-pippenger/blob/master/pippenger.pdf](https://github.com/wborgeaud/python-pippenger/blob/master/pippenger.pdf)\n[^26]: NEP-446 proposal for BLS-signature verification: [https://github.com/nearprotocol/neps/pull/446](https://github.com/nearprotocol/neps/pull/446)\n[^27]: EIP-1962 EC arithmetic and pairings with runtime definitions: [https://eips.ethereum.org/EIPS/eip-1962](https://eips.ethereum.org/EIPS/eip-1962)\n[^28]: Drawbacks of NEP-446: [https://github.com/near/NEPs/pull/446#pullrequestreview-1314601508](https://github.com/near/NEPs/pull/446#pullrequestreview-1314601508)\n[^29]: BLS12-381 Milagro: [https://github.com/sigp/incubator-milagro-crypto-rust/tree/057d238936c0cbbe3a59dfae6f2405db1090f474](https://github.com/sigp/incubator-milagro-crypto-rust/tree/057d238936c0cbbe3a59dfae6f2405db1090f474)\n[^30]: BLST: [https://github.com/supranational/blst](https://github.com/supranational/blst),\n[^31]: BLST EIP-2537 adaptation: [https://github.com/sean-sn/blst_eip2537](https://github.com/sean-sn/blst_eip2537)\n[^32]: EIP-1962 implementation matter labs Rust: https://github.com/matter-labs/eip1962\n[^33]: zCash origin rust implementation: [https://github.com/zcash/zcash/tree/master/src/rust/src](https://github.com/zcash/zcash/tree/master/src/rust/src)\n[^34]: MCL library: [https://github.com/herumi/bls](https://github.com/herumi/bls)\n[^35]: filecoin/bls-signature: [https://github.com/filecoin-project/bls-signatures](https://github.com/filecoin-project/bls-signatures)\n[^36]: zkCrypto: 
[https://github.com/zkcrypto/bls12_381](https://github.com/zkcrypto/bls12_381), [https://github.com/zkcrypto/pairing](https://github.com/zkcrypto/pairing)\n[^37]: BLS12-381 code bases for ETH2.0 client Chia library C++: [https://github.com/Chia-Network/bls-signatures](https://github.com/Chia-Network/bls-signatures)\n[^38]: Adjoint Lib: [https://github.com/sdiehl/pairing](https://github.com/sdiehl/pairing)\n[^39]: Ethereum Go implementation for EIP-2537: [https://github.com/ethereum/go-ethereum/tree/master/core/vm/testdata/precompiles](https://github.com/ethereum/go-ethereum/tree/master/core/vm/testdata/precompiles)\n[^40]: Noble JS implementation: [https://github.com/paulmillr/noble-bls12-381](https://github.com/paulmillr/noble-bls12-381)\n[^41]: EIP-1962 implementation matter labs Go: https://github.com/kilic/eip2537,\n[^42]: EIP-1962 implementation matter labs C++: https://github.com/matter-labs-archive/eip1962_cpp\n[^43]: EIP-2537 with links: [https://github.com/matter-labs-forks/EIPs/blob/bls12_381/EIPS/eip-2537.md](https://github.com/matter-labs-forks/EIPs/blob/bls12_381/EIPS/eip-2537.md)\n[^44]: Pairing-friendly curves specification, crypto libs: [https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-pairing-friendly-curves-09#name-cryptographic-libraries](https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-pairing-friendly-curves-09#name-cryptographic-libraries)\n[^45]: Comparing different libs for pairing-friendly curves: [https://hackmd.io/@gnark/eccbench](https://hackmd.io/@gnark/eccbench)\n[^46]: Bench vectors from EIP2537: [https://eips.ethereum.org/assets/eip-2537/bench_vectors](https://eips.ethereum.org/assets/eip-2537/bench_vectors)\n[^47]: Metter Labs tests for EIP2537: [https://github.com/matter-labs/eip1962/tree/master/src/test/test_vectors/eip2537](https://github.com/matter-labs/eip1962/tree/master/src/test/test_vectors/eip2537)\n[^48]: Tests from Go Ethereum implementation: 
[https://github.com/ethereum/go-ethereum/tree/master/core/vm/testdata/precompiles](https://github.com/ethereum/go-ethereum/tree/master/core/vm/testdata/precompiles)\n[^51]: draft-irtf-cfrg-pairing-friendly-curves-11 [https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-pairing-friendly-curves-11#name-bls-curves-for-the-128-bit-](https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-pairing-friendly-curves-11#name-bls-curves-for-the-128-bit-)\n[^52]: Paper with BLS12-381: [https://eprint.iacr.org/2019/403.pdf](https://eprint.iacr.org/2019/403.pdf)\n[^53]: Zkcrypto points encoding: [https://github.com/zkcrypto/pairing/blob/0.14.0/src/bls12_381/README.md](https://github.com/zkcrypto/pairing/blob/0.14.0/src/bls12_381/README.md)\n[^54]: Draft PR for BLS12-381 operations in nearcore: [https://github.com/near/nearcore/pull/9317](https://github.com/near/nearcore/pull/9317)\n[^55]: Audit for BLST library: [https://research.nccgroup.com/wp-content/uploads/2021/01/NCC_Group_EthereumFoundation_ETHF002_Report_2021-01-20_v1.0.pdf](https://research.nccgroup.com/wp-content/uploads/2021/01/NCC_Group_EthereumFoundation_ETHF002_Report_2021-01-20_v1.0.pdf)\n[^58]: hash_to_curve and hash_to_field function: [https://datatracker.ietf.org/doc/html/rfc9380#name-hash_to_field-implementatio](https://datatracker.ietf.org/doc/html/rfc9380#name-hash_to_field-implementatio)\n[^59]: Implementation of BLS-signature based on these host functions: [https://github.com/olga24912/bls-signature-verificaion-poc/blob/main/src/lib.rs](https://github.com/olga24912/bls-signature-verificaion-poc/blob/main/src/lib.rs)\n[^60]: hash_to_field specification: [https://datatracker.ietf.org/doc/html/rfc9380#name-hash_to_field-implementatio](https://datatracker.ietf.org/doc/html/rfc9380#name-hash_to_field-implementatio)\n[^61]: double-and-add algorithm: [https://en.wikipedia.org/wiki/Exponentiation_by_squaring](https://en.wikipedia.org/wiki/Exponentiation_by_squaring)\n[^62]: RFC 9380 Hashing to Elliptic Curves 
specification: [https://www.rfc-editor.org/rfc/rfc9380](https://www.rfc-editor.org/rfc/rfc9380)\n[^63]: map_to_curve and clear_cofactor functions: [https://datatracker.ietf.org/doc/html/rfc9380#name-encoding-byte-strings-to-el](https://datatracker.ietf.org/doc/html/rfc9380#name-encoding-byte-strings-to-el)\n[^64]: Specification of parameters for BLS12-381 G1: [https://datatracker.ietf.org/doc/html/rfc9380#name-bls12-381-g1](https://datatracker.ietf.org/doc/html/rfc9380#name-bls12-381-g1)\n[^65]: Specification of parameters for BLS12-381 G2: [https://datatracker.ietf.org/doc/html/rfc9380#name-bls12-381-g2](https://datatracker.ietf.org/doc/html/rfc9380#name-bls12-381-g2)\n"
  },
  {
    "path": "neps/nep-0491.md",
    "content": "---\nNEP: 491\nTitle: Non-Refundable Storage Staking\nAuthors: Jakob Meier <jakob@near.org>\nStatus: Final\nDiscussionsTo: https://gov.near.org/t/proposal-locking-account-storage-refunds-to-avoid-faucet-draining-attacks/34155\nType: Protocol Track\nVersion: 1.0.0\nCreated: 2023-07-24\nLastUpdated: 2023-07-26\n---\n\n## Summary\n\nNon-refundable storage allows to create accounts with arbitrary state for users,\nwithout being susceptible to refund abuse.\n\nThis is done by tracking non-refundable balance in a separate field of the\naccount. This balance is only useful for storage staking and otherwise can be\nconsidered burned.\n\n\n## Motivation\n\nCreating new accounts on chain costs a gas fee and a storage staking fee. The\nmore state is added to the account, the higher the storage staking fees. When\ndeploying a contract on the account, it can quickly go above 1 NEAR per account.\n\nSome business models are okay with paying that fee for users upfront, just to\nget them onboarded. However, if a business does that today, their users can\ndelete their new accounts and spend the tokens intended for storage staking in\nother ways. Since this is free for the user, they are financially incentivized\nto repeat this action for as long as the business has funds left in the faucet.\n\nThe protocol should allow to create accounts in a way that is not susceptible to\nsuch refund abuse. This would at least change the incentives such that creating\nfake users is no longer profitable.\n\nNon-refundable storage staking is a further improvement over\n[NEP-448](https://github.com/near/NEPs/pull/448) (Zero Balance Accounts) which\naddressed the same issue but is limited to 770 bytes per account. By lifting the\nlimit, sponsored accounts can be used in combination with smart contracts.\n\n## Specification\n\nUsers can opt-in to nonrefundable storage when creating new accounts. 
For that,\nwe use the new action `ReserveStorage`.\n\n```rust\npub enum Action {\n    ...\n    ReserveStorage(ReserveStorageAction),\n    ...\n}\n```\n\nTo create a named account today, the typical pattern is a transaction with\n`CreateAccount`, `Transfer`, and `AddKey`. To make the funds non-refundable, we\ncan use the action `ReserveStorage` like this:\n\n```json\n\"Actions\": {\n  \"CreateAccount\": {},\n  \"ReserveStorage\": { \"deposit\": \"1000000000000000000000000\" },\n  \"AddKey\": { \"public_key\": \"...\", \"access_key\": \"...\" }\n}\n```\n\nAdding a `Transfer` action allows the combination of non-refundable balance and\nrefundable balance. This allows the user to make calls where they need to attach\nbalance, for example an FT transfer, which requires 1 yoctoNEAR.\n\n```json\n\"Actions\": {\n  \"CreateAccount\": {},\n  \"ReserveStorage\": { \"deposit\": \"1000000000000000000000000\" },\n  \"Transfer\": { \"deposit\": \"100\" },\n  \"AddKey\": { \"public_key\": \"...\", \"access_key\": \"...\" }\n}\n```\n\nTo create implicit accounts, the current protocol requires a single `Transfer`\naction without further actions in the same transaction, and this has not changed\nwith this proposal:\n\n```json\n\"Actions\": {\n  \"Transfer\": { \"deposit\": \"0\" }\n}\n```\n\nIf a non-refundable transfer arrives at an account that already exists, it will\nfail, and the funds are returned to the predecessor.\n\nFinally, when querying an account for its balance, there will be an additional\nfield `nonrefundable` in the output. Wallets will need to decide how they want\nto show it. 
They could, for example, add a new field called \"non-refundable\nstorage credits\".\n\n```js\n// Account near\n{\n  \"amount\": \"68844924385676812880674962949\",\n  \"block_hash\": \"3d6SisRc5SuwrkJnLwQb3W5pWitZKCjGhiKZuc6tPpao\",\n  \"block_height\": 97314513,\n  \"code_hash\": \"Dmi6UTRYTT3eNirp8ndgDNh8kYk2T9SZ6PJZDUXB1VR3\",\n  \"locked\": \"0\",\n  \"storage_paid_at\": 0,\n  \"storage_usage\": 2511772,\n  \"formattedAmount\": \"68,844.924385676812880674962949\",\n  // this is new\n  \"nonrefundable\": \"0\"\n}\n```\n\n\n## Reference Implementation\n\nOn the protocol side, we need to add a new action:\n\n```rust\nenum Action {\n  CreateAccount(CreateAccountAction),\n  DeployContract(DeployContractAction),\n  FunctionCall(FunctionCallAction),\n  Transfer(TransferAction),\n  Stake(StakeAction),\n  AddKey(AddKeyAction),\n  DeleteKey(DeleteKeyAction),\n  DeleteAccount(DeleteAccountAction),\n  Delegate(super::delegate_action::SignedDelegateAction),\n  // this gets added in the end\n  ReserveStorage(ReserveStorageAction),\n}\n```\n\nand handle the new action in the `apply_action` call.\n\nFurther, we have to update the account metadata representation in the state\ntrie to track the non-refundable storage.\n\n```rust\npub struct Account {\n    amount: Balance,\n    locked: Balance,\n    // this field is new\n    nonrefundable: Balance,\n    code_hash: CryptoHash,\n    storage_usage: StorageUsage,\n    // the account version will be increased from 1 to 2\n    version: AccountVersion,\n}\n```\n\nThe field `nonrefundable` must be added to the normal `amount` and the `locked`\nbalance to calculate how much state the account is allowed to use. The new\nformula to check the storage balance therefore becomes:\n\n```rust\namount + locked + nonrefundable >= storage_usage * storage_amount_per_byte\n```\n\nFor old accounts that don't have the new field, the non-refundable balance is\nalways zero. Adding non-refundable balance later is not allowed. 
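For illustration, the balance check described above can be sketched as follows (a minimal sketch; the type and function names are hypothetical, not the actual nearcore ones):

```rust
// Minimal sketch of the storage balance check with the new field.
// Names are illustrative, not the actual nearcore types.
type Balance = u128;
type StorageUsage = u64;

struct Account {
    amount: Balance,
    locked: Balance,
    nonrefundable: Balance, // always zero for version 1 accounts
    storage_usage: StorageUsage,
}

// All three balance components count towards the storage allowance.
fn satisfies_storage_balance(account: &Account, storage_amount_per_byte: Balance) -> bool {
    account.amount + account.locked + account.nonrefundable
        >= account.storage_usage as Balance * storage_amount_per_byte
}
```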
If a transfer\nis made to an account that already existed before the receipt's actions are\napplied, execution must fail with\n`ActionErrorKind::OnlyReserveStorageOnAccountCreation{ account_id: AccountId }`.\n\nConceptually, these are all changes on the protocol level. However,\nunfortunately, the account version field is not currently serialized, hence not\nincluded in the on-chain state.\n\nTherefore, as the last change necessary for this NEP, we also introduce a new\nserialization format for new accounts.\n\n```rust\n// new serialization format for `struct Account`\n\n// new: prefix with a sentinel value to detect V1 accounts, they will have\n//      a real balance here which is smaller than u128::MAX\nwriter.serialize(u128::MAX)?;\n// new: include version number (u8) for accounts with version 2 or more\nwriter.serialize(version)?;\nwriter.serialize(amount)?;\nwriter.serialize(locked)?;\nwriter.serialize(code_hash)?;\nwriter.serialize(storage_usage)?;\n// new: this is the field we added, the type is u128 like other balances\nwriter.serialize(nonrefundable)?;\n```\n\nNote that we are not migrating old accounts. Accounts created as version 1 will\nremain at version 1.\n\nA proof of concept implementation for nearcore is available in this PR:\nhttps://github.com/near/nearcore/pull/9346\n\n\n## Security Implications\n\nWe were not able to come up with security relevant implications.\n\n## Alternatives\n\nThere are small variations in the implementation, and then there are completely\ndifferent ways to look at the problem. Let's start with the variations.\n\n### Variation: Allow adding nonrefundable balance to existing accounts\n\nInstead of failing when a non-refundable transfer arrives at an existing\naccount, we could add the balance to the existing non-refundable balance. This\nwould be more flexible to use. A business could easily add more funds for\nstorage even after account creation.\n\nThe problems are in the implementation details. 
It would allow adding\nnon-refundable storage to existing accounts, which would require some form of\nmigration of all accounts in the state trie. This is impractical, as we have\nto iterate over all existing accounts and re-merklize. That's infeasible within\na single block time, and stopping the chain would be disruptive.\n\nWe could maybe migrate lazily, i.e., read account version 1 and automatically\nconvert it to version 2. However, that would break the assumption that every\nlogical value in the merkle trie has a unique borsh representation, as there\nwould be an account version 1 and a version 2 borsh serialization that both map\nto the same logical version 2 value. This could lead to different\nrepresentations of the same chunk in memory, which might be used in attacks to\nforce a double-sign by innocent validators.\n\nIt is not 100% clear to me, the author, if this is a problem we could work\naround. However, the complications it would involve do not seem to be worth it,\ngiven that in the feature discussions nobody saw it as critical to add\nnon-refundable balance to existing accounts.\n\n### Variation: Allow refunds to original sponsor\n\nInstead of complete non-refundability, the tokens reserved for storage staking\ncould be returned to the account that originally created the account when the\naccount is deleted.\n\nThe community discussions ended with the conclusion that this feature would\nprobably not be used, and we should not implement it until there is real demand\nfor it.\n\n### Alternative: Don't use smart contracts on user accounts\n\nInstead of deploying contracts on the user account, one could build a similar\nsolution that uses zero balance accounts and a single master contract that\nperforms all required smart contract functionality. This master contract can\nimplement the\n[Storage Management](https://nomicon.io/Standards/StorageManagement) standard\nto limit storage usage per user.\n\nThis solution is not as flexible. 
The master account cannot make cross-contract\nfunction calls with the user id as the predecessor.\n\n### Alternative: Move away from storage staking\n\nWe could also abandon the concept of storage staking entirely. However, coming\nup with a scalable, sustainable solution that does not suffer from the same\nrefund problems is hard.\n\nOne proposed design is a combination of zero balance accounts and code sharing\nbetween contracts. Basically, if somehow the deployed code is stored in a way\nthat does not require storage staking by the users themselves, maybe the\nper-user state is small enough to fit in the 770-byte limit of zero balance\naccounts. (Questionable for non-trivial use cases.)\n\nThis alternative is much harder to design and implement. The proposal that has\ngotten the furthest so far is [Ephemeral\nStorage](https://github.com/near/NEPs/pull/485), which is pretty complicated and\ndoes not have community consensus yet. Nobody is currently working on moving it\nforward. While we could wait for that to eventually make progress, in the\nmeantime, the community is held back in their innovation because of the refund\nproblem.\n\n### Alternative: Using a proxy account\n\nAs suggested by [@mfornet](https://github.com/near/NEPs/pull/491#discussion_r1349496234),\nanother alternative is a proxy account approach, where the business creates an\naccount with a deployed contract that has a Regular mode (the user has a full\naccess key) and a Restricted mode (the user doesn't have a full access key and\ncannot delete the account).\n\nIn restricted mode, the user has a `FunctionCallKey`, which allows the user to\ncall methods of the contract that controls the `FullAccessKey` and allows the\nuser some functionality, but not all, e.g. not allowing account deletion. 
The\nuser in restricted mode could also upgrade the account by sending back the\ninitial amount of NEAR deposited by the account creator and attaching a new\n`FullAccessKey`.\n\nThe downside of this idea is additional complexity on the tooling side, because\nactions like adding access keys to the account need to be converted to function\ncalls instead of being direct actions. The complexity on the business side is\nthat it needs to include the proxy logic together with its business logic in the\nsame contract, increasing the complexity of development.\n\n### Alternative: Granular access key\n\nAnother suggestion is introducing a new key type, `GranularAccessKey`. This\nalternative includes a protocol change that introduces a new kind of access key\nwhich can have granular permissions set on it, e.g. not being able to delete an\naccount.\n\nThe business side gives this key to the user, and with this key comes a set of\npermissions defining what the user can do. The user can also call `Upgrade` and\nget a `FullAccessKey` by paying back the initial amount that funded the account\ncreation.\n\nThe drawback of this approach is that the business side would have to handle the\nlogic around `GranularAccessKey` and the `Upgrade` method, making the usage more\ncomplex.\n\n## Future possibilities\n\n- We might want to add the possibility to make non-refundable balance transfers\n  from within a smart contract. This would require changes to the WASM\n  smart-contract-to-host interface. 
Since removing anything from there is virtually\n  impossible, we shouldn't be too eager to add it there, but if there is\n  demand for it, we certainly can do it without much trouble.\n- We could later add the possibility to refund the non-refundable tokens to the\n  account that sent the tokens initially.\n- We could allow sending non-refundable balance to existing accounts.\n- If (cheap) code sharing between contracts is implemented in the future, this\n  proposal will most likely work well in combination with that. Per-user data\n  will still need to be paid for by the user, which could be sponsored as\n  non-refundable balance without running into refund abuse.\n\n\n## Consequences\n\n\n### Positive\n\n- Businesses can sponsor new user accounts without the user being able to steal\n  any tokens.\n\n### Neutral\n\n- Non-refundable tokens are removed from the circulating supply, i.e. burnt.\n\n### Negative\n\n- Understanding a user's balance becomes even more complicated than it already\n  is. Instead of only `amount` and `locked`, there will be a third component.\n- There is no incentive anymore to delete an account and its state when the\n  backing tokens are not refundable.\n\n\n### Backwards Compatibility\n\nWe believe this can be implemented with full backwards compatibility.\n\n## Unresolved Issues (Optional)\n\nAll of these issues already have a proposed solution above. But nevertheless,\nthese points are likely to be challenged or discussed:\n\n- Should we allow adding non-refundable balance to existing accounts? (proposal:\n  no)\n- Should we allow adding more non-refundable balance after account creation?\n  (proposal: no)\n- Should this NEP include a host function to send non-refundable balance from\n  smart contracts? (proposal: no)\n- How should a wallet display non-refundable balances? 
(proposal: up to wallet\n  providers, probably a new separate field)\n\n## Changelog\n\n### 1.0.0 - Initial Version\n\n> Placeholder for the context about when and who approved this NEP version.\n\n#### Benefits\n\n> List of benefits filled by the Subject Matter Experts while reviewing this\n> version:\n\n- Benefit 1\n- Benefit 2\n\n#### Concerns\n\n> Template for Subject Matter Experts review for this version: Status: New |\n> Ongoing | Resolved\n\n|   # | Concern | Resolution | Status |\n| --: | :------ | :--------- | -----: |\n|   1 |         |            |        |\n|   2 |         |            |        |\n\n## Copyright\n\nCopyright and related rights waived via\n[CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0492.md",
    "content": "---\nNEP: 492\nTitle: Restrict creation of Ethereum Addresses\nAuthors: Bowen Wang <bowen@near.org>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/pull/492\nType: Protocol\nVersion: 0.0.0\nCreated: 2023-07-27\nLastUpdated: 2023-07-27\n---\n\n## Summary\n\nThis proposal aims to restrict the creation of top level accounts (other than implicit accounts) on NEAR to both prevent loss of funds due to careless user behaviors and scams\nand create possibilities for future interopability solutions.\n\n## Motivation\n\nToday an [Ethereum address](https://ethereum.org/en/developers/docs/accounts/) such as \"0x32400084c286cf3e17e7b677ea9583e60a000324\" is a valid account on NEAR and because it is longer than 32 characters,\nanyone can create such an account. This has unfortunately caused a few incidents where users lose their funds due to either a scam or careless behaviors.\nFor example, when a user withdraw USDT from an exchange to their NEAR account, it is possible that they think they withdraw to Ethereum and therefore enter their Eth address.\nIf this address exists on NEAR, then the user would lose their fund. A malicious actor could exploit this can create known Eth smart contract addresses on NEAR to trick users to send tokens to those addresses. With the proliferation of BOS gateways, including Ethereum ones, such exploits may become more common as users switch between NEAR wallets and Ethereum wallets (mainly metamask).\n\nIn addition to prevent loss of funds for users, this change allows the possibility of Ethereum wallets supporting NEAR transactions, which could enable much more adoption of NEAR. The exact details of how that would be done is outside the scope of this proposal.\n\nThere are currently ~5000 Ethereum addresses already created on NEAR. It is also outside the scope of this proposal to discuss what to do with them. \n\n## Specification\n\nThe proposed change is quite simple. 
Only the protocol registrar account can create top-level accounts that are not implicit accounts.\n\n## Reference Implementation\n\nThe implementation roughly looks as follows:\n\n```rust\nfn action_create_account(...) {\n    ...\n    if account_id.is_top_level()\n        && !account_id.is_implicit()\n        && predecessor_id != &account_creation_config.registrar_account_id\n    {\n        // Top-level accounts that are not implicit can only be created by the registrar\n        result.result = Err(ActionErrorKind::CreateAccountOnlyByRegistrar {\n            account_id: account_id.clone(),\n            registrar_account_id: account_creation_config.registrar_account_id.clone(),\n            predecessor_id: predecessor_id.clone(),\n        }\n        .into());\n        return;\n    }\n    ...\n}\n```\n\n## Alternatives\n\nThere does not appear to be a good alternative for this problem.\n\n## Future possibilities\n\nEthereum wallets such as MetaMask could potentially support NEAR transactions through meta transactions.\n\n## Consequences\n\nIn the short term, no new top-level accounts (other than implicit accounts) can be created by anyone other than the registrar, but this change would not create any problem for users.\n\n### Backwards Compatibility\n\nFor Ethereum addresses specifically, there are ~5000 existing ones, but this proposal per se does not deal with existing accounts.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0508.md",
    "content": "---\nNEP: 508\nTitle: Resharding v2\nAuthors: Waclaw Banasik, Shreyan Gupta, Yoon Hong\nStatus: Final\nDiscussionsTo: https://github.com/near/nearcore/issues/8992\nType: Protocol\nVersion: 1.0.0\nCreated: 2023-09-19\nLastUpdated: 2023-11-14\n---\n\n## Summary\n\nThis proposal introduces a new implementation for resharding and a new shard layout for the production networks.\n\nIn essence, this NEP is an extension of [NEP-40](https://github.com/near/NEPs/blob/master/specs/Proposals/0040-split-states.md), which was focused on splitting one shard into multiple shards.\n\nWe are introducing resharding v2, which supports one shard splitting into two within one epoch at a pre-determined split boundary. The NEP includes performance improvement to make resharding feasible under the current state as well as actual resharding in mainnet and testnet (To be specific, splitting the largest shard into two).\n\nWhile the new approach addresses critical limitations left unsolved in NEP-40 and is expected to remain valid for foreseeable future, it does not serve all use cases, such as dynamic resharding.\n\n## Motivation\n\nCurrently, NEAR protocol has four shards. With more partners onboarding, we started seeing that some shards occasionally become over-crowded with respect to total state size and number of transactions. In addition, with state sync and stateless validation, validators will not need to track all shards and validator hardware requirements can be greatly reduced with smaller shard size. With future in-memory tries, it's also important to limit the size of individual shards.\n\n## Specification\n\n### High level assumptions\n\n* Flat storage is enabled.\n* Shard split boundary is predetermined and hardcoded. 
In other words, the necessity of shard splitting is decided manually.\n* For the time being, resharding as an event is only going to happen once, but we would still like to have the infrastructure in place to handle future resharding events with ease.\n* Merkle Patricia Trie is the underlying data structure for the protocol state.\n* An epoch is at least 6 hours long, giving resharding enough time to complete.\n\n### High level requirements\n\n* Resharding must be fast enough so that both state sync and resharding can happen within one epoch.\n* Resharding should work efficiently within the limits of the current hardware requirements for nodes.\n* Potential failures in resharding may require intervention from the node operator to recover.\n* No transaction or receipt may be lost during resharding.\n* Resharding must work regardless of the number of existing shards.\n* No apps, tools or code should hardcode the number of shards to 4.\n\n### Out of scope\n\n* Dynamic resharding\n  * automatically scheduling resharding based on shard usage/capacity\n  * automatically determining the shard layout\n* Merging shards or boundary adjustments\n* Shard reshuffling\n\n### Required protocol changes\n\nA new protocol version will be introduced specifying the new shard layout, which will be picked up by the resharding logic to split the shard.\n\n### Required state changes\n\n* For the duration of the resharding, the node will need to maintain a snapshot of the flat state and related columns. As the main database and the snapshot diverge, this will cause some storage overhead.\n* For the duration of the epoch before the new shard layout takes effect, the node will need to maintain the state and flat state of shards in the old and new layout at the same time. The State and FlatState columns will grow up to approximately 2x their size. The processing overhead should be minimal, as the chunks will still be executed only on the parent shards. 
There will be increased load on the database while applying changes to both the parent and the children shards.\n* The total storage overhead is estimated to be on the order of 100GB for mainnet RPC nodes and 2TB for mainnet archival nodes. For testnet the overhead is expected to be much smaller.\n\n### Resharding flow\n\n* The new shard layout will be agreed on offline by the protocol team and hardcoded in the reference implementation.\n  * The first resharding will be scheduled soon after this NEP is merged. The new shard layout boundary accounts will be: ```[\"aurora\", \"aurora-0\", \"kkuuue2akv_1630967379.near\", \"tge-lockup.sweat\"]```.\n  * Subsequent reshardings will be scheduled as needed, without further NEPs, unless significant changes are introduced.\n* In epoch T, past the protocol version upgrade date, nodes will vote to switch to the new protocol version. The new protocol version will contain the new shard layout.\n* In epoch T, in the last block of the epoch, the EpochConfig for epoch T+2 will be set. The EpochConfig for epoch T+2 will have the new shard layout.\n* In epoch T + 1, all nodes will perform the state split. The child shards will be kept up to date with the blockchain up until the epoch end first via catchup, and later as part of block postprocessing state application.\n* In epoch T + 2, the chain will switch to the new shard layout.\n\n## Reference Implementation\n\nThe implementation heavily re-uses the implementation from [NEP-40](https://github.com/near/NEPs/blob/master/specs/Proposals/0040-split-states.md). 
Below are listed the major differences and additions.\n\n### Code pointers to the proposed implementation\n\n* [new shard layout](https://github.com/near/nearcore/blob/c9836ab5b05c229da933d451fe8198d781f40509/core/primitives/src/shard_layout.rs#L161)\n* [the main logic for splitting states](https://github.com/near/nearcore/blob/c9836ab5b05c229da933d451fe8198d781f40509/chain/chain/src/resharding.rs#L280)\n* [the main logic for applying chunks to split states](https://github.com/near/nearcore/blob/c9836ab5b05c229da933d451fe8198d781f40509/chain/chain/src/update_shard.rs#L315)\n* [the main logic for garbage collecting state from parent shard](https://github.com/near/nearcore/blob/c9836ab5b05c229da933d451fe8198d781f40509/chain/chain/src/store.rs#L2335)\n\n### Flat Storage\n\nThe old implementation of resharding relied on iterating over the full trie state of the parent shard in order to build the state for the children shards. This implementation was suitable at the time but since then the state has grown considerably and this implementation is now too slow to fit within a single epoch. The new implementation relies on iterating through the flat storage in order to build the children shards quicker. Based on benchmarks, splitting the largest shard by using flat storage can take around 15 min without throttling and around 3 hours with throttling to maintain the block production rate.\n\nThe new implementation will also propagate the flat storage for the children shards and keep it up to date with the chain until the switch to the new shard layout in the next epoch. The old implementation didn't handle this case because the flat storage didn't exist back then.\n\nIn order to ensure consistent view of the flat storage while splitting the state the node will maintain a snapshot of the flat state and related columns as of the last block of the epoch prior to resharding. 
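The split-by-flat-storage idea can be sketched as follows (a simplified, hypothetical sketch; the real implementation streams trie key/value pairs and handles all state record types, not just account-keyed entries):

```rust
// Simplified sketch: route every flat storage entry of the parent shard
// to one of the two children based on the boundary account.
// Keys and types are illustrative, not the actual nearcore representation.
fn split_flat_state(
    parent_entries: Vec<(String, Vec<u8>)>, // (account id, value)
    boundary_account: &str,
) -> (Vec<(String, Vec<u8>)>, Vec<(String, Vec<u8>)>) {
    let (mut left_child, mut right_child) = (Vec::new(), Vec::new());
    for (account_id, value) in parent_entries {
        // Accounts strictly below the boundary go to the left child,
        // all others to the right child.
        if account_id.as_str() < boundary_account {
            left_child.push((account_id, value));
        } else {
            right_child.push((account_id, value));
        }
    }
    (left_child, right_child)
}
```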
The existing implementation of flat state snapshots used in State Sync will be used for this purpose.\n\n### Handling receipts, gas burnt and balance burnt\n\nWhen resharding, extra care should be taken when handling receipts in order to ensure that no receipts are lost or duplicated. The gas burnt and balance burnt also need to be correctly handled. The old resharding implementation for handling receipts, gas burnt and balance burnt relied on the fact that in the first resharding there was only a single parent shard to begin with. The new implementation will provide a more generic and robust way of reassigning the receipts, gas burnt, and balance burnt to the child shards that works for arbitrary splitting of shards, regardless of the previous shard layout.\n\n### New shard layout\n\nThe first release of resharding v2 will contain a new shard layout where one of the existing shards will be split into two smaller shards. Furthermore, additional reshardings can be scheduled in subsequent releases without additional NEPs, unless the need for one arises. A new shard layout can be determined and will be scheduled and executed with the next protocol upgrade. Resharding will typically happen by splitting one of the existing shards into two smaller shards. The new shard layout will be created by adding a new boundary account that will be determined by analysing the storage and gas usage metrics within the shard and selecting a point that divides the shard roughly in half in accordance with those metrics. Other metrics can also be used based on requirements.\n\n### Removal of Fixed shards\n\nFixed shards was a feature of the protocol that allowed assigning specific accounts and all of their recursive sub accounts to a predetermined shard. This feature was only used for testing and was never used in production. The fixed shards feature unfortunately breaks the contiguity of shards and is not compatible with the new resharding flow. 
A sub account of a fixed shard account can fall in the middle of the account range that belongs to a different shard. This property of fixed shards made it particularly hard to reason about and implement efficient resharding.\n\nFor example, in a shard layout with boundary accounts [`b`, `d`] the account space is cleanly divided into three shards, each spanning a contiguous range of account ids:\n\n* 0 - `:b`\n* 1 - `b:d`\n* 2 - `d:`\n\nNow if we add a fixed shard `f` to the same shard layout, we'll have 4 shards but none of them is contiguous. Accounts such as `aaa.f`, `ccc.f`, `eee.f` that would otherwise belong to shards 0, 1 and 2 respectively are now all assigned to the fixed shard and create holes in the shard account ranges.\n\nIt's also worth noting that there is no benefit to having accounts colocated in the same shard. Any transaction or receipt is treated the same way regardless of whether it crosses a shard boundary.\n\nThis was implemented ahead of this NEP and the fixed shards feature was **removed**.\n\n### Garbage collection\n\nIn epoch T+2, once resharding is completed, we can delete the trie state and the flat state related to the parent shard. In practice, this is handled as part of the garbage collection code. While garbage collecting the last block of epoch T+1, we go ahead and clear all the data associated with the parent shard from the trie cache, flat storage, and the RocksDB state associated with trie state and flat storage.\n\n### Transaction pool\n\nThe transaction pool is sharded, i.e. it groups transactions by the shard where each transaction should be converted to a receipt. The transaction pool was previously sharded by the ShardId. Unfortunately, the ShardId is insufficient to correctly identify a shard across a resharding event, as the set of ShardIds changes. 
The transaction pool was migrated to group transactions by ShardUId instead, and a transaction pool resharding was implemented to reassign transactions from the parent shard to the children shards right before the new shard layout takes effect. The ShardUId contains the version of the shard layout, which allows differentiating between shards in different shard layouts.\n\nThis was implemented ahead of this NEP and the transaction pool is now fully **migrated** to ShardUId.\n\n## Alternatives\n\n### Why is this design the best in the space of possible designs?\n\nThis design is simple, robust, safe, and meets all requirements.\n\n### What other designs have been considered and what is the rationale for not choosing them?\n\n#### Alternative implementations\n\n* Splitting the trie by iterating over the boundaries between children shards for each trie record type. This implementation has the potential to be faster but it is more complex and would take longer to implement. We opted for the much simpler one using flat storage, given it is already quite performant.\n* Changing the trie structure to have the account id first and the type of record later. This change would allow for much faster resharding by only iterating over the nodes on the boundary. This approach has two major drawbacks without providing too many benefits over the previous approach of splitting by each trie record type.\n  1) It would require a massive migration of the trie.\n  2) We would need to maintain the old and the new trie structure forever.\n* Changing the storage structure so that the storage key has the format `account_id.node_hash`. This structure would make it much easier to split the trie at the storage level because the children shards are simple sub-ranges of the parent shard. Unfortunately, we found that the migration would not be feasible.\n* Changing the storage structure so that the key is only the `node_hash`, dropping the ShardUId prefix. 
This is a feasible approach but it adds complexity to the garbage collection and data deletion, especially when nodes start tracking only one shard. We opted for the much simpler approach of keeping the existing scheme of prefixing storage entries by ShardUId.\n\n#### Other considerations\n\n* Dynamic Resharding - we have decided not to implement full dynamic resharding at this time. Instead, we hardcode the shard layout and schedule it manually. The reasons are as follows:\n  * We prefer an incremental process of introducing resharding to make sure that it is robust and reliable, as well as to give the community time to adjust.\n  * Each resharding increases the potential total load on the system. We don't want to allow it to grow until full sharding is in place and we can handle that increase.\n* Extended shard layout adjustments - we have decided to only implement shard splitting and not implement any other operations. The reasons are as follows:\n  * In this iteration we only want to perform splitting.\n  * The extended adjustments are currently not justified. Both merging and boundary moving may be useful in the future when traffic patterns change and some shards become underutilized. In the near future we only predict needing to reduce the size of the heaviest shards.\n\n### What is the impact of not doing this?\n\nWe need resharding in order to scale up the system. Without resharding, shards would eventually grow so big (in either storage or CPU usage) that a single node would not be able to handle them. Additionally, this clears the path to implementing in-memory tries, as we need to store the whole trie structure in limited RAM. In the future, smaller shard sizes would lead to faster syncing of shard data when nodes start tracking just one shard.\n\n## Integration with State Sync\n\nThere are two known issues in the integration of resharding and state sync:\n\n* When syncing the state for the first epoch where the new shard layout is used. 
In this case, the node would need to apply the last block of the previous epoch. It cannot be done on the children shards, as on chain the block was applied on the parent shards, and the trie-related gas costs would be different.\n* When generating proofs for incoming receipts. The proof for each of the children shards contains only the receipts of that shard, but it's generated on the parent shard layout and so may not be verified.\n\nIn this NEP we propose that resharding should be rolled out first, before any real dependency on state sync is added. We can then safely roll out the resharding logic and solve the above-mentioned issues separately. We believe at least some of the issues can be mitigated by the implementation of the new pre-state root and chunk execution design.\n\n## Integration with Stateless Validation\n\nStateless Validation requires that chunk producers provide proof of correctness of the transition function from one state root to another. The proof for the first block after the new shard layout takes effect will need to prove that the entire state split was correct as well as the state transition.\n\nIn this NEP we propose that resharding should be rolled out first, before stateless validation. We can then safely roll out the resharding logic and solve the above-mentioned issues separately. This issue was discussed with the stateless validation experts and we are cautiously optimistic that the integration will be possible. 
The most concerning part is the proof size, and we believe it should be small enough because resharding touches a relatively small number of trie nodes - on the order of the depth of the trie.\n\n## Future fast-followups\n\n### Resharding should work even when validators stop tracking all shards\n\nAs mentioned in the 'Integration with State Sync' section above, the initial release of resharding v2 will happen before the full implementation of state sync, and we plan to tackle the integration between resharding and state sync after the next shard split (this won't need a separate NEP, as the integration does not require a protocol change).\n\n### Resharding should work after stateless validation is enabled\n\nAs mentioned in the 'Integration with Stateless Validation' section above, the initial release of resharding v2 will happen before the full implementation of stateless validation, and we plan to tackle the integration between resharding and stateless validation after the next shard split (this may need a separate NEP depending on implementation details).\n\n## Future possibilities\n\n### Further reshardings\n\nThis NEP introduces both an implementation of resharding and an actual resharding to be done in the production networks. Further reshardings can also be performed in the future by adding a new shard layout and setting the shard layout for the desired protocol version in the `AllEpochConfig`.\n\n### Dynamic resharding\n\nAs noted above, dynamic resharding is out of scope for this NEP and should be implemented in the future. Dynamic resharding includes, but is not limited to, the following:\n\n* Automatic determination of the split boundary based on parameters like traffic, gas usage, state size, etc.\n* Automatic scheduling of resharding events\n\n### Extended shard layout adjustments\n\nIn this NEP we only propose supporting splitting shards. 
This operation should be more than sufficient for the near future, but eventually we may want to add support for more sophisticated adjustments such as:\n\n* Merging shards together\n* Moving the boundary account between two shards\n\n### Localization of resharding event to specific shard\n\nAs of today, at the RocksDB storage layer, we have the ShardUId, i.e. the ShardId along with the ShardVersion, as a prefix in the key of trie state and flat state. During a resharding event, we increment the ShardVersion by one, and effectively remap all the current parent shards to new child shards. This implies we can't use the same underlying key-value pairs in the store and instead would need to duplicate the values with the new ShardUId prefix, even if a shard is unaffected and not split.\n\nIn the future, we would like to potentially change the schema in a way such that only the shard that is splitting is impacted by a resharding event, so as to avoid additional work done by nodes tracking other shards.\n\n### Other useful features\n\n* Removal of shard uids and introduction of globally unique shard ids\n* Account colocation for low latency across account calls - in case we start considering a synchronous execution environment, colocating associated accounts (e.g. those with cross contract calls between them) in the same shard can increase efficiency\n* Shard purchase/reservation - when someone wants to secure the entirety of a single shard's limits (e.g. the state size limit), they can 'purchase/reserve' a shard so it is dedicated to them (similar to how Aurora is set up)\n\n## Consequences\n\n### Positive\n\n* Workload across shards will be more evenly distributed.\n* Required space to maintain state (either in memory or on persistent disk) will be smaller. 
This is useful for in-memory tries.\n* State sync overhead will be smaller with smaller state size.\n\n### Neutral\n\n* The number of shards will increase.\n* The underlying trie structure and data structure are not going to change.\n* Resharding will create a dependency on flat state snapshots.\n* The resharding process, as of now, is not fully automated. Analyzing shard data, determining the split boundary, and triggering an actual shard split all need to be manually curated and tracked.\n\n### Negative\n\n* During resharding, a node is expected to require more resources as it will first need to copy state data from the parent shard to the child shards, and then will have to apply trie and flat state changes twice, once for the parent shard and once for the child shards.\n* Increased potential for apps and tools to break without proper shard layout change handling.\n\n### Backwards Compatibility\n\nAny light clients, tooling or frameworks external to nearcore that have the current shard layout or the current number of shards hardcoded may break and will need to be adjusted in advance. The recommended fix is to query an RPC node for the shard layout of the relevant epoch and use that information in place of the previously hardcoded shard layout or number of shards. The shard layout can be queried by using the `EXPERIMENTAL_protocol_config` RPC endpoint and reading the `shard_layout` field from the result. A dedicated endpoint may be added in the future as well.\n\nWithin nearcore we do not expect anything to break with this change. Yet, shard splitting can introduce additional complexity for replayability. For instance, as the target shard of a receipt and the shard an account belongs to can change with shard splitting, shard splitting must be replayed along with transactions at the exact epoch boundary.\n\n## Changelog\n\n[The changelog section provides historical context for how the NEP developed over time. 
Initial NEP submission should start with version 1.0.0, and all subsequent NEP extensions must follow [Semantic Versioning](https://semver.org/). Every version should have the benefits and concerns raised during the review. The author does not need to fill out this section for the initial draft. Instead, the assigned reviewers (Subject Matter Experts) should create the first version during the first technical review. After the final public call, the author should then finalize the last version of the decision context.]\n\n### 1.0.0 - Initial Version\n\n> Placeholder for the context about when and who approved this NEP version.\n\n#### Benefits\n\n> List of benefits filled by the Subject Matter Experts while reviewing this version:\n\n* Benefit 1\n* Benefit 2\n\n#### Concerns\n\n> Template for Subject Matter Experts review for this version:\n> Status: New | Ongoing | Resolved\n\n|   # | Concern | Resolution | Status |\n| --: | :------ | :--------- | -----: |\n|   1 |         |            |        |\n|   2 |         |            |        |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0509.md",
"content": "---\nNEP: 509\nTitle: Stateless validation Stage 0\nAuthors: Robin Cheng, Anton Puhach, Alex Logunov, Yoon Hong\nStatus: Final\nDiscussionsTo: https://docs.google.com/document/d/1C-w4FNeXl8ZMd_Z_YxOf30XA1JM6eMDp5Nf3N-zzNWU/edit?usp=sharing, https://docs.google.com/document/d/1TzMENFGYjwc2g5A3Yf4zilvBwuYJufsUQJwRjXGb9Xc/edit?usp=sharing\nType: Protocol\nVersion: 1.0.1\nCreated: 2023-09-19\nLastUpdated: 2023-09-19\n---\n\n## Summary\n\nThe NEP proposes a solution to achieve phase 2 of sharding (where none of the validators needs to track all shards), with stateless validation, instead of the traditionally proposed approach of fraud proofs and state rollbacks.\n\nThe fundamental idea is that validators do not need to have state locally to validate chunks.\n\n* Under stateless validation, the responsibility of a chunk producer extends to packaging transactions and receipts and annotating them with state witnesses. This extended role will be called \"chunk proposers\".\n* The state witness of a chunk is defined to be a subset of the trie state, alongside its proof of inclusion in the trie, that is needed to execute a chunk. A state witness allows anyone to execute the chunk without having the state of its shard locally.\n* Then, at each block height, validators will be randomly assigned to a shard, to validate the state witness for that shard. Once a validator receives both a chunk and its state witness, it verifies the state transition of the chunk, signs a chunk endorsement and sends it to the block producer. 
This is similar to, but separate from, block approvals and consensus.\n* The block producer waits for sufficient chunk endorsements before including a chunk in the block it produces, or omits the chunk if not enough endorsements arrive in time.\n\n## Motivation\n\nAs phase 1 of sharding requires block producers to track all shards due to underlying security concerns, the team explored potential ways to achieve phase 2 of sharding, where none of the validators has to track all shards.\n\nThe early design of phase 2 relied on the security assumption that as long as there is one honest validator or fisherman tracking a shard, the shard is secure; by doing so, it naturally relied on the protocol's ability to handle challenges (when an honest validator or fisherman detects a malicious behavior and submits a proof of such), state rollbacks (when validators agree that the submitted challenge is valid), and slashing (to punish the malicious validator). While it sounds straightforward and simple on paper, the complex interactions between these abilities and the rest of the protocol led to concrete designs that were extremely complicated, involving several specific problems we still don't know how to solve.\n\nAs a result, the team sought alternative approaches and concluded that stateless validation is the most realistic and promising one; the stateless validation approach does not assume the existence of fishermen, does not rely on challenges, and never rolls back state. 
Instead, it relies on the assumption that a shard is secure if every single chunk in that shard is validated by a randomly sampled subset of all validators, so that only valid chunks are produced in the first place.\n\n## Specification\n\n### Assumptions\n\n* No more than 1/3 of validators (by stake) are corrupted.\n* In-memory trie is enabled - [REF](https://docs.google.com/document/d/1_X2z6CZbIsL68PiFvyrasjRdvKA_uucyIaDURziiH2U/edit?usp=sharing)\n* State sync is enabled (so that nodes can track different shards across epochs)\n* Merkle Patricia Trie continues to be the state trie implementation\n* Congestion Control is enabled - [NEP-539](https://github.com/near/NEPs/pull/539)\n\n### Design requirements\n\n* No validator needs to track all shards.\n* Security of the protocol must not degrade.\n  * Validator assignment for both chunk validation and block validation should not create any security vulnerabilities.\n* Block processing time should not take significantly more than what it takes today.\n* Any additional load on network and compute should not negatively affect existing functionalities of any node in the blockchain.\n  * The cost of additional network and compute should be acceptable.\n* Validator rewards should not be reduced.\n\n### Design before NEP-509\n\nThe current high-level chunk production flow, excluding details and edge cases, is as follows:\n\n* Block producer at height `H`, `BP(H)`, produces block `B(H)` with chunks accessible to it and distributes it.\n* Chunk producer for shard `S` at height `H+1`, `CP(S, H+1)`, produces chunk `C(S, H+1)` based on `B(H)` and distributes it.\n* `BP(H+1)` collects all chunks at height `H+1` until a certain timeout is reached.\n* `BP(H+1)` produces block `B(H+1)` with chunks `C(*, H+1)` accessible to it and distributes it.\n\nAnd the flow goes on for heights H+1, H+2, etc. 
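The per-height loop above can be sketched as a simplified, self-contained Python snippet (all names and structures here are illustrative only, not nearcore's actual API; network distribution, timeouts, and missing-chunk handling are omitted):

```python
# Simplified sketch of the pre-NEP-509 per-height flow. All names and
# structures are hypothetical; the real logic involves network distribution,
# timeouts, and missing-chunk handling.

def produce_chunk(shard, height, prev_block):
    # CP(S, H) produces chunk C(S, H) based on B(H-1).
    return {"shard": shard, "height": height, "prev_height": prev_block["height"]}

def produce_block(height, prev_block, chunks):
    # BP(H) seals whatever chunks it collected into block B(H).
    return {"height": height, "prev_height": prev_block["height"], "chunks": chunks}

def advance_one_height(prev_block, shards):
    height = prev_block["height"] + 1
    # In reality BP(H) waits until a timeout; here every chunk is "ready".
    chunks = [produce_chunk(s, height, prev_block) for s in shards]
    return produce_block(height, prev_block, chunks)

# The "induction base": the genesis block with default (empty) chunks.
genesis = {"height": 0, "prev_height": None, "chunks": []}
block_1 = advance_one_height(genesis, shards=[0, 1, 2, 3])
```

Note that no step in this loop validates chunks explicitly.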
The \"induction base\" is at genesis height, where the genesis block with default chunks is accessible to everyone, so chunk producers can start right away from genesis height + 1.\n\nOne can observe that there is no \"chunk validation\" step here. In fact, the validity of chunks is implicitly guaranteed by the **requirement for all block producers to track all shards**.\nTo achieve phase 2 of sharding, we want to drop this requirement. For that, we propose the following changes to the flow:\n\n### Design after NEP-509\n\n* The chunk producer, in addition to producing a chunk, produces a new `ChunkStateWitness` message. The `ChunkStateWitness` contains data which is enough to prove validity of the chunk's header that is being produced.\n  * `ChunkStateWitness` proves to anyone, including those who track only block data and no shards, that this chunk header is correct.\n  * `ChunkStateWitness` is not part of the chunk itself; it is distributed separately and is considered transient data.\n* The chunk producer distributes the `ChunkStateWitness` to a subset of **chunk validators** which are assigned for this shard. This is in addition to, and independent of, the existing chunk distribution logic (implemented by `ShardsManager`) today.\n  * Chunk Validator selection and assignment are described below.\n* A chunk validator, upon receiving a `ChunkStateWitness`, validates the state witness and determines if the chunk header is indeed correctly produced. If so, it sends a `ChunkEndorsement` to the current block producer.\n* As in the existing logic today, the block producer for this block waits until either all chunks are ready, or a timeout occurs, and then proposes a block containing whatever chunks are ready. Now, the notion of readiness here is expanded to also having more than 2/3 of chunk endorsements by stake.\n  * This means that if a chunk does not receive enough chunk endorsements by the timeout, it will not be included in the block. 
In other words, the block only contains chunks for which there is already a consensus of validity. **This is the key reason why we will no longer need fraud proofs / tracking all shards**.\n  * In the 2/3 fraction, the denominator is the total stake assigned to validate this shard, *not* the total stake of all validators.\n* The block producer, when producing the block, additionally includes the chunk endorsements (at least 2/3 needed for each chunk) in the block's body. The validity of the block is expanded to also require valid 2/3 chunk endorsements by stake for each chunk included in the block.\n  * If a block fails validation because it does not have the required chunk endorsements, it is considered a block validation failure for the purpose of Doomslug consensus, just like any other block validation failure. In other words, nodes will not apply the block on top of their blockchain, and (block) validators will not endorse the block.\n\nSo the high-level specification can be described as the list of changes in the validator roles and responsibilities:\n\n* Block producers:\n  * (Same as today) Produce blocks, (new) including waiting for chunk endorsements\n  * (Same as today) Maintain chunk parts (i.e. 
participate in data availability based on Reed-Solomon erasure encoding)\n  * (New) No longer required to track any shard\n  * (Same as today) Should have a high barrier of entry (required stake) for security reasons, to make block double signing harder.\n* Chunk producers:\n  * (Same as today) Produce chunks\n  * (New) Produce and distribute state witnesses to chunk validators\n  * (Same as today) Must track the shard they produce chunks for\n* Block validators:\n  * (Same as today) Validate blocks, (new) including verifying chunk endorsements\n  * (Same as today) Vote for blocks with endorsement or skip messages\n  * (New) No longer required to track any shard\n  * (Same as today) Must collectively have a majority of all the validator stake, for security reasons.\n  * (Same as today) Should have a high barrier of entry to keep `BlockHeader` size low, because it is proportional to the total byte size of block validator signatures.\n* (New) Chunk validators:\n  * Validate state witnesses and send chunk endorsements to block producers\n  * Not required to track any shard\n  * Must collectively have a majority of all the validator stake, to ensure the security of chunk validation.\n\nSee the Validator Structure Change section below for more details.\n\n### Out of scope\n\n* Resharding support.\n* Data size optimizations such as compression, for both chunk data and state witnesses, except basic optimizations that are practically necessary.\n* Separation of consensus and execution, where consensus runs independently from execution, and validators asynchronously perform state transitions after the transactions are proposed on the consensus layer, for the purpose of amortizing the computation and network transfer time.\n* ZK integration.\n* Underlying data structure change (e.g. 
verkle tree).\n\n## Reference Implementation\n\nHere we carefully describe the new structures and logic introduced, without going into too much technical detail.\n\n### Validator Structure Change\n\n#### Roles\n\nCurrently, there are two different types of validators. Their responsibilities are defined as in the following pseudocode:\n\n```python\nif index(validator) < 100:\n    roles(validator).append(\"block producer\")\nroles(validator).append(\"chunk producer\")\n```\n\nThe validators are ordered by non-increasing stake in the considered epoch. Here and below, by \"block production\" we mean both production and validation.\n\nWith stateless validation, this structure must change for several reasons:\n\n* Chunk production is the most resource consuming activity.\n* *Only* chunk production needs state in memory, while other responsibilities can be completed by acquiring a state witness.\n* Chunk production does not have to be performed by all validators.\n\nHence, to make the transition seamless, we change the role of nodes outside the top 100 to only validate chunks:\n\n```python\nif index(validator) < 100:\n    roles(validator).append(\"chunk producer\")\n    roles(validator).append(\"block producer\")\nroles(validator).append(\"chunk validator\")\n```\n\nThe more stake a validator has, the **heavier** the work it will be assigned. We expect that validators with higher stakes have more powerful hardware.\nWith stateless validation, the relative heaviness of the work changes. Compared to the current order \"block production\" > \"chunk production\", the new order is \"chunk production\" > \"block production\" > \"chunk validation\".\n\nShards are split equally among chunk producers: as on Mainnet on 12 Jun 2024 we have 6 shards, each shard would have ~16 chunk producers assigned.\n\nIn the future, with an increase in the number of shards, we can generalise the assignment by saying that each shard should have `X` chunk producers assigned, if we have at least `X * S` validators. 
In that case, the pseudocode for the role assignment would look as follows:\n\n```python\nif index(validator) < X * S:\n    roles(validator).append(\"chunk producer\")\nif index(validator) < 100:\n    roles(validator).append(\"block producer\")\nroles(validator).append(\"chunk validator\")\n```\n\n#### Rewards\n\nThe reward for each validator is defined as `total_epoch_reward * validator_relative_stake * work_quality_ratio`, where:\n\n* `total_epoch_reward` is selected so that the total inflation of the token is 2.5% per annum;\n* `validator_relative_stake = validator_stake / total_epoch_stake`;\n* `work_quality_ratio` is the measure of the work quality from 0 to 1.\n\nSo, the actual reward never exceeds the total reward, and when everyone does perfect work, they are equal.\nFor the context of this NEP, it is enough to assume that `work_quality_ratio = avg_{role}({role}_quality_ratio)`.\nSo, if a node is both a block and a chunk producer, we compute the quality for each role separately and then take their average.\n\nWhen an epoch is finalized, all block headers in it uniquely determine who was expected to produce each block and chunk.\nThus, if we define the quality ratio for a block producer as `produced_blocks/expected_blocks`, everyone is able to compute it.\nSimilarly, `produced_chunks/expected_chunks` is the quality ratio for a chunk producer.\nIt is more accurate to say `included_chunks/expected_chunks`, because the inclusion of a chunk in a block is the block producer's final decision, which defines success here.\n\nIdeally, we could compute the quality for a chunk validator as `produced_endorsements/expected_endorsements`. 
Unfortunately, we won't do it in Stage 0 because:\n\n* The mask of endorsements is not part of the block header, and adding it would be a significant change;\n* The block producer doesn't have to wait for all endorsements to be collected, so it could be unfair to say that an endorsement was not produced if the block producer just went ahead.\n\nSo, for now, we decided to compute the quality for a chunk validator as the ratio `included_chunks/expected_chunks`, where we iterate over the chunks the node was expected to validate.\nIt has clear drawbacks though:\n\n* chunk validators are not incentivized to validate the chunks, given they will be rewarded the same in either case;\n* if chunks are not produced at all, chunk validators will also be impacted.\n\nWe plan to address them in future releases.\n\n#### Kickouts\n\nIn addition to that, if a node's performance is too poor, we want a mechanism to kick it out of the validator list, to ensure healthy protocol performance and validator rotation.\nCurrently, we have a threshold for each role, and if for some role the `{role}_quality_ratio` is lower than the threshold, the node is kicked out.\n\nIf we write this in pseudocode:\n\n```python\nif validator is block producer and block_producer_quality_ratio < 0.8:\n    kick out validator\nif validator is chunk producer and chunk_producer_quality_ratio < 0.8:\n    kick out validator\n```\n\nFor chunk validators, we apply exactly the same formula. However, because:\n\n* the formula doesn't count endorsements explicitly\n* for chunk producers it just makes the chunk production condition stronger without adding value\n\nwe apply it to nodes which **only validate chunks**. 
So, we add this line:\n\n```python\nif validator is only chunk validator and chunk_validator_quality_ratio < 0.8:\n    kick out validator\n```\n\nAs we pointed out above, the current `chunk_validator_quality_ratio` formula is problematic.\nHere it causes an even bigger issue: if chunk producers don't produce chunks, chunk validators will be kicked out as well, which impacts network stability.\nThis is another reason to come up with a better formula.\n\n#### Shard assignment\n\nAs the chunk producer becomes the most important role, we need to ensure that every epoch has a significant number of healthy chunk producers.\nThis is a **significant difference** from the current logic, where chunk-only producers generally have low stake and their performance doesn't impact overall performance.\n\nThe most challenging part of becoming a chunk producer for a shard is downloading the most recent shard state within the previous epoch. This is called \"state sync\".\nUnfortunately, as of now, state sync is centralised on published snapshots, which is a major point of failure until we have decentralised state sync.\n\nBecause of that, we make an additional change: if a node was a chunk producer for some shard in the previous epoch, and it is a chunk producer for the current epoch, it will be assigned to the same shard.\nThis way, we minimise the number of required state syncs at each epoch.\n\nThe exact algorithm needs a thorough description to satisfy different edge cases, so we will just leave a link to the full explanation: https://github.com/near/nearcore/issues/11213#issuecomment-2111234940.\n\n### ChunkStateWitness\n\nThe full structure is described [here](https://github.com/near/nearcore/blob/b8f08d9ded5b7cbae9d73883785902b76e4626fc/core/primitives/src/stateless_validation.rs#L247).\nLet's construct it sequentially, explaining why every field is needed. 
Start from simple data:\n\n```rust\npub struct ChunkStateWitness {\n    pub chunk_producer: AccountId,\n    pub epoch_id: EpochId,\n    /// The chunk header which this witness is proving.\n    pub chunk_header: ShardChunkHeader,\n}\n```\n\nWhat is needed to prove `ShardChunkHeader`?\n\nThe key function we have in the codebase is [validate_chunk_with_chunk_extra_and_receipts_root](https://github.com/near/nearcore/blob/c2d80742187d9b8fc1bb672f16e3d5c144722742/chain/chain/src/validate.rs#L141).\nThe main arguments there are `prev_chunk_extra: &ChunkExtra`, which stands for the execution result of the previous chunk, and `chunk_header`.\nThe most important field of `ShardChunkHeader` is `prev_state_root` - consider the latest implementation, `ShardChunkHeaderInnerV3`. It stands for the state root resulting from updating the shard for the previous block, which means applying the previous chunk if there are no missing chunks.\nSo, the chunk validator needs some way to run transactions and receipts from the previous chunk. Let's call it a \"main state transition\" and add two more fields to the state witness:\n\n```rust\n    /// The base state and post-state-root of the main transition where we\n    /// apply transactions and receipts. Corresponds to the state transition\n    /// that takes us from the pre-state-root of the last new chunk of this\n    /// shard to the post-state-root of that same chunk.\n    pub main_state_transition: ChunkStateTransition,\n    /// The transactions to apply. These must be in the correct order in which\n    /// they are to be applied.\n    pub transactions: Vec<SignedTransaction>,\n```\n\nwhere\n\n```rust\n/// Represents the base state and the expected post-state-root of a chunk's state\n/// transition. 
The actual state transition itself is not included here.\npub struct ChunkStateTransition {\n    /// The block that contains the chunk; this identifies which part of the\n    /// state transition we're talking about.\n    pub block_hash: CryptoHash,\n    /// The partial state before the state transition. This includes whatever\n    /// initial state that is necessary to compute the state transition for this\n    /// chunk. It is a list of Merkle tree nodes.\n    pub base_state: PartialState,\n    /// The expected final state root after applying the state transition.\n    pub post_state_root: CryptoHash,\n}\n```\n\nFine, but where do we take the receipts from?\n\nReceipts are internal messages, resulting from transaction execution, sent between shards, and **by default** they are not signed by anyone.\n\nHowever, each receipt is an execution outcome of some transaction or other parent receipt, executed in some previous chunk.\nFor every chunk, we conveniently store `prev_outgoing_receipts_root`, which is a Merkle hash of all receipts sent to other shards resulting from the execution of this chunk. So, for every receipt, there is a proof of its generation in some parent chunk. 
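\nThe idea behind such receipt proofs can be illustrated with a generic binary Merkle tree (a conceptual sketch only; NEAR's actual tree construction and serialization differ):\n\n```python\nimport hashlib\n\ndef h(data: bytes) -> bytes:\n    return hashlib.sha256(data).digest()\n\ndef merkle_root(leaves):\n    level = [h(x) for x in leaves]\n    while len(level) > 1:\n        if len(level) % 2 == 1:\n            level.append(level[-1])  # duplicate the last node on odd levels\n        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]\n    return level[0]\n\ndef merkle_proof(leaves, index):\n    proof, level, i = [], [h(x) for x in leaves], index\n    while len(level) > 1:\n        if len(level) % 2 == 1:\n            level.append(level[-1])\n        proof.append((level[i ^ 1], (i ^ 1) < i))  # (sibling hash, is sibling on the left)\n        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]\n        i //= 2\n    return proof\n\ndef verify(root, leaf, proof):\n    node = h(leaf)\n    for sibling, is_left in proof:\n        node = h(sibling + node) if is_left else h(node + sibling)\n    return node == root\n\n# A chunk commits to the root of its outgoing receipts; any single receipt\n# can then be proven against that root.\nreceipts = [b'receipt-0', b'receipt-1', b'receipt-2']\nroot = merkle_root(receipts)\nassert verify(root, receipts[1], merkle_proof(receipts, 1))\n```\n\nHere the root plays the role of `prev_outgoing_receipts_root` and the path plays the role of a `ReceiptProof`.\n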
If there are no missing chunks, it's enough to consider the chunks from the previous block.\n\nSo we add another field:\n\n```rust\n    /// Non-strict superset of the receipts that must be applied, along with\n    /// information that allows these receipts to be verifiable against the\n    /// blockchain history.\n    pub source_receipt_proofs: HashMap<ChunkHash, ReceiptProof>,\n```\n\nWhat about missing chunks though?\n\nUnfortunately, the production and inclusion of any chunk **cannot be guaranteed**:\n\n* the chunk producer may go offline;\n* chunk validators may not generate 2/3 endorsements;\n* the block producer may not receive enough information to include the chunk.\n\nLet's handle this case as well.\nFirst, each chunk producer needs to prove not just the main state transition, but also all state transitions for the latest missing chunks:\n\n```rust\n    /// For each missing chunk after the last new chunk of the shard, we need\n    /// to carry out an implicit state transition. This is technically needed\n    /// to handle validator rewards distribution. This list contains one for each\n    /// such chunk, in forward chronological order.\n    ///\n    /// After these are applied as well, we should arrive at the pre-state-root\n    /// of the chunk that this witness is for.\n    pub implicit_transitions: Vec<ChunkStateTransition>,\n```\n\nThen, while our shard was missing chunks, other shards could still produce chunks, which could generate receipts targeting our shard. 
So, we need to extend `source_receipt_proofs`.\nThe field structure doesn't change, but we need to carefully pick the range of source chunks, so that the different subsets cover all source receipts without intersection.\n\nLet's say B2 is the block that contains the last new chunk of shard S before the chunk whose state transition we execute, and B1 is the block that contains the last new chunk of shard S before B2.\nThen, we define the set of blocks B as the contiguous subsequence of blocks from B1 (exclusive) to B2 (inclusive) in this chunk's chain (i.e. the linear chain that this chunk's parent block is on). Lastly, the source chunks are all chunks included in the blocks of B.\n\nThe last caveat is the **new** transactions introduced by the chunk with `chunk_header`. As the chunk header introduces `tx_root` for them, we need to check the validity of this field as well.\nIf we don't, a malicious chunk producer can include an invalid transaction, and if it gets its chunk endorsed, nodes which track the shard must either accept the invalid transaction or refuse to process the chunk; the latter means that the shard will get stuck.\n\nTo validate the new `tx_root`, we also need the partial Merkle state to validate senders' balances, access keys, nonces, etc., which leads to the last two fields:\n\n```rust\n    pub new_transactions: Vec<SignedTransaction>,\n    pub new_transactions_validation_state: PartialState,\n```\n\nThe logic to produce `ChunkStateWitness` is [here](https://github.com/near/nearcore/blob/b8f08d9ded5b7cbae9d73883785902b76e4626fc/chain/client/src/stateless_validation/state_witness_producer.rs#L79).\nIt requires some minor changes to the logic of applying chunks, related to generating `ChunkStateTransition::base_state`.\nThis is controlled by [this line](https://github.com/near/nearcore/blob/dc03a34101f77a17210873c4b5be28ef23443864/chain/chain/src/runtime/mod.rs#L977), which causes all trie nodes read while applying the chunk to be recorded in `TrieRecorder`.\nAfter applying the chunk, the recorder's contents 
are saved to `StateTransitionData`.\n\nThe validation logic is [here](https://github.com/near/nearcore/blob/b8f08d9ded5b7cbae9d73883785902b76e4626fc/chain/client/src/stateless_validation/chunk_validator/mod.rs#L85).\nFirst, it performs all validation steps that require access to `ChainStore`; `pre_validate_chunk_state_witness` is responsible for this. It is done separately because `ChainStore` is owned by a single thread.\nThen, it spawns a thread which runs the computation-heavy `validate_chunk_state_witness`, whose main purpose is to apply the chunk based on the received state transitions and verify that the execution results in the chunk header are correct.\nIf validation is successful, a `ChunkEndorsement` is sent.\n\n### ChunkEndorsement\n\nIt is basically a triple of `(ChunkHash, AccountId, Signature)`.\nReceiving this message means that the given chunk validator account endorsed the chunk with the given chunk hash.\nIdeally, a chunk validator would send the chunk endorsement just to the next block producer at the height for which the chunk was produced.\nHowever, the block at that height can be skipped, and block producers at heights h+1, h+2, ... will have to pick up the chunk.\nTo address that, we send `ChunkEndorsement` to all block producers at heights from `h` to `h+d-1`. 
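\nThis fan-out can be sketched as follows (a trivial illustration; the helper name is hypothetical):\n\n```python\ndef endorsement_target_heights(h: int, d: int) -> list:\n    # Send the endorsement to the block producers at heights h, h+1, ..., h+d-1,\n    # so the chunk can still be picked up if some blocks in between are skipped.\n    return list(range(h, h + d))\n\nassert endorsement_target_heights(100, 5) == [100, 101, 102, 103, 104]\n```\n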
We pick `d=5`, as more than 5 skipped blocks in a row are very unlikely to occur.\n\nOn the block producer side, chunk endorsements are collected and stored in `ChunkEndorsementTracker`.\nA small **caveat** is that *sometimes* a chunk endorsement may be received before the chunk header, which is required to check that the sender is indeed a validator of the chunk.\nSuch endorsements are stored as *pending*.\nWhen the chunk header is received, all pending endorsements are checked for validity and marked as *validated*.\nAll endorsements received after that are validated right away.\n\nFinally, when the block producer attempts to produce a block, in addition to checking chunk existence, it also checks that it has endorsements for that chunk hash from at least 2/3 of the chunk validator stake.\nTo make chunk inclusion verifiable, we introduce [another version](https://github.com/near/nearcore/blob/cf2caa3513f58da8be758d1c93b0900ffd5d51d2/core/primitives/src/block_body.rs#L30) of the block body, `BlockBodyV2`, which has a new field `chunk_endorsements`.\nIt is basically a `Vec<Vec<Option<Signature>>>` where the element with indices `(s, i)` contains the signature of the i-th chunk validator for shard s if the endorsement was included, and None otherwise.\nLastly, we add a condition to block validation: if a chunk for shard `s` was included in the block, then the block body must contain 2/3 endorsements for that shard.\n\nThis logic is triggered in `ChunkInclusionTracker` by the method [get_chunk_headers_ready_for_inclusion](https://github.com/near/nearcore/blob/6184e5dac45afb10a920cfa5532ce6b3c088deee/chain/client/src/chunk_inclusion_tracker.rs#L146) and a couple of similar ones. 
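\nThe endorsement-stake condition above can be sketched as follows (simplified; not nearcore's code):\n\n```python\ndef has_enough_endorsement_stake(endorsed_stakes: list, total_stake: int) -> bool:\n    # A chunk is ready for inclusion once collected endorsements cover at\n    # least 2/3 of the total stake of its assigned chunk validators.\n    return 3 * sum(endorsed_stakes) >= 2 * total_stake\n\n# Four equal-stake validators: three endorsements are enough, two are not.\nassert has_enough_endorsement_stake([10, 10, 10], 40)\nassert not has_enough_endorsement_stake([10, 10], 40)\n```\n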
Number of ready chunks is returned by [num_chunk_headers_ready_for_inclusion](https://github.com/near/nearcore/blob/6184e5dac45afb10a920cfa5532ce6b3c088deee/chain/client/src/chunk_inclusion_tracker.rs#L178).\n\n### Chunk validators selection\n\nChunk validators will be randomly assigned to validate shards, for each block (or as we may decide later, for multiple blocks in a row, if required for performance reasons). A chunk validator may be assigned multiple shards at once, if it has sufficient stake.\n\nEach chunk validator's stake is divided into \"mandates\". There are full and partial mandates. The number of mandates per shard is a fixed parameter and the amount of stake per mandate is dynamically computed based on this parameter and the actual stake distribution; any remaining amount smaller than a full mandate is a partial mandate. A chunk validator therefore has zero or more full mandates plus up to one partial mandate. The list of full mandates and the list of partial mandates are then separately shuffled and partitioned equally (as in, no more than one mandate in difference between any two shards) across the shards. Any mandate assigned to a shard means that the chunk validator who owns the mandate is assigned to validate that shard. 
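\nThe split of a validator's stake into mandates can be sketched as follows (an illustration with hypothetical names, given some already-computed mandate price):\n\n```python\ndef split_into_mandates(stake: int, mandate_price: int):\n    # A validator's stake yields stake // price full mandates plus at most\n    # one partial mandate holding the remainder.\n    full_mandates = stake // mandate_price\n    partial_mandate = stake % mandate_price\n    return full_mandates, partial_mandate\n\n# With a mandate price of 30, a stake of 100 gives 3 full mandates and one\n# partial mandate of weight 10.\nassert split_into_mandates(100, 30) == (3, 10)\n```\n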
Because a chunk validator may have multiple mandates, it may be assigned multiple shards to validate.\n\nFor Stage 0, we set the **target number of mandates per shard** to 68, which was a [result of the latest research](https://near.zulipchat.com/#narrow/stream/407237-core.2Fstateless-validation/topic/validator.20seat.20assignment/near/435252304).\nWith this number of mandates per shard and 6 shards, we predict the protocol to be secure for 40 years at 90% confidence.\nBased on the target number of mandates and the total chunk validator stake, [here](https://github.com/near/nearcore/blob/696190b150dd2347f9f042fa99b844b67c8001d8/core/primitives/src/validator_mandates/mod.rs#L76) we compute the price of a single full mandate for each new epoch using binary search.\nAll the mandates are stored in a new version of `EpochInfo`, `EpochInfoV4`, in the [validator_mandates](https://github.com/near/nearcore/blob/164b7a367623eb651914eeaf1cbf3579c107c22d/core/primitives/src/epoch_manager.rs#L775) field.\n\nAfter that, for each height in the epoch, [EpochInfo::sample_chunk_validators](https://github.com/near/nearcore/blob/164b7a367623eb651914eeaf1cbf3579c107c22d/core/primitives/src/epoch_manager.rs#L1224) is called to return `ChunkValidatorStakeAssignment`. It is a `Vec<Vec<(ValidatorId, Balance)>>` whose s-th element corresponds to the s-th shard in the epoch and contains the ids of all chunk validators for that height and shard, along with their total mandate stake assigned to that shard.\n`sample_chunk_validators` basically just shuffles `validator_mandates` among shards using a height-specific seed. If no more than 1/3 of validators are malicious, then by the Chernoff bound the probability that at least one shard is corrupted is small enough. 
**This is the reason why we can split validators among shards and still rely on the basic consensus assumption**.\n\nThis way, everyone tracking block headers can compute the chunk validator assignment for each height and shard.\n\n### Size limits\n\n`ChunkStateWitness` is a relatively large message. Given the large number of receivers, its size must be strictly limited.\nIf the `ChunkStateWitness` for some state transition gets so uncontrollably large that it can never be handled by the majority of validators, then its shard gets stuck.\n\nWe try to limit the size of the `ChunkStateWitness` to 16 MiB. All the limits are described [in this section](https://github.com/near/nearcore/blob/b34db1e2281fbfe1d99a36b4a90df3fc7f5d00cb/docs/misc/state_witness_size_limits.md).\nAdditionally, we have a limit on the currently stored partial state witnesses and chunk endorsements, because malicious chunk validators can spam these as well.\n\n## State witness size limits\n\nA number of new limits will be introduced in order to keep the size of `ChunkStateWitness` reasonable.\n`ChunkStateWitness` contains all the incoming transactions and receipts that will be processed during chunk application, and in theory a single receipt could be tens of megabytes in size. Distributing a `ChunkStateWitness` this large to all chunk validators would be troublesome, so we limit the size and number of transactions, receipts, etc. The limits aim to keep the total uncompressed size of `ChunkStateWitness` under 16 MiB.\n\nThere are two types of size limits:\n\n* Hard limit - The size must be below this limit, anything else is considered invalid. This is usually used in the context of having limits for a single item.\n* Soft limit - Things are added until the limit is exceeded, after that things stop being added. The last added thing is allowed to slightly exceed the limit. 
This is used in the context of having limits for a list of items.\n\nThe limits are:\n\n* `max_transaction_size - 1.5 MiB`\n  * All transactions must be below 1.5 MiB, otherwise they'll be considered invalid and rejected.\n  * Previously was 4 MiB, now reduced to 1.5 MiB.\n* `max_receipt_size - 4 MiB`\n  * All receipts must be below 4 MiB, otherwise they'll be considered invalid and rejected.\n  * Previously there was no limit on receipt size. Set to 4 MiB, might be reduced to 1.5 MiB in the future to match the transaction limit.\n* `combined_transactions_size_limit - 4 MiB`\n  * Hard limit on the total size of transactions from this and the previous chunk. `ChunkStateWitness` contains transactions from two chunks, and this limit applies to the sum of their sizes.\n* `new_transactions_validation_state_size_soft_limit - 500 KiB`\n  * Validating new transactions generates storage proof (recorded trie nodes), which has to be limited. Once transaction validation generates more storage proof than this limit, the chunk producer stops adding new transactions to the chunk.\n* `per_receipt_storage_proof_size_limit - 4 MB`\n  * Executing a receipt generates storage proof. A single receipt is allowed to generate at most 4 MB of storage proof. This is a hard limit; receipts which generate more than that will fail.\n* `main_storage_proof_size_soft_limit - 3 MB`\n  * This is a limit on the total size of storage proof generated by receipts in one chunk. Once receipts generate more storage proof than this limit, the chunk producer stops processing receipts and moves the rest to the delayed queue.\n  * It's a soft limit, which means that the total size of storage proof could reach 7 MB (2.99 MB + one receipt which generates 4 MB of storage proof).\n  * Due to implementation details it's hard to find the exact amount of storage proof generated by a receipt, so an upper-bound estimation is used instead. 
This upper bound assumes that every removal generates an additional 2000 bytes of storage proof, so receipts which perform a lot of trie removals might be limited more aggressively than theoretically necessary.\n* `outgoing_receipts_usual_size_limit - 100 KiB`\n  * Limit on the size of outgoing receipts to another shard. Needed to keep the size of `source_receipt_proofs` small.\n  * On most block heights a shard isn't allowed to send receipts larger than 100 KiB to another shard.\n* `outgoing_receipts_big_size_limit - 4.5 MiB`\n  * On every block height there's one special \"allowed shard\" which is allowed to send larger receipts, up to 4.5 MiB in total.\n  * A receiving shard will receive receipts from `num_shards - 1` shards using the usual limit and one shard using the big limit.\n  * The \"allowed shard\" is the same shard as in cross-shard congestion control. It's chosen in a round-robin fashion: at height 1 the special shard is 0, at height 2 it's 1, and so on.\n\nIn total that gives 4 MiB + 500 KiB + 7 MB + 5 * 100 KiB + 4.5 MiB ~= 16 MiB of maximum witness size. Possibly a little more on missing chunks.\n\n### New limits breaking contracts\n\nThe new limits will break some existing contracts (for example, all transactions larger than 1.5 MiB). This is sad, but it's necessary. Stateless validation uses much more network bandwidth than the previous approach, as it has to send over all the state accessed by each chunk application. Because network bandwidth is limited, stateless validation can't support some operations that were allowed in the previous design.\n\nIn the past year (31,536,000 blocks) there were only 679 transactions bigger than 1.5 MiB, sent between 164 unique (sender -> receiver) pairs.\nOnly 0.002% of blocks contain such transactions, so the hope is that the breakage will be minimal. 
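\nThe quoted fraction is easy to double-check (treating each oversized transaction as landing in its own block, which makes this an upper bound):\n\n```python\nblocks_per_year = 31_536_000  # one block per second for a year\noversized_txs = 679\nfraction_percent = oversized_txs / blocks_per_year * 100\n# Roughly 0.002% of blocks, matching the figure above.\nassert round(fraction_percent, 3) == 0.002\n```\n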
Contracts generally shouldn't require more than 1.5MiB of WASM.\n\nThe full list of transactions from the past year which would fail with the new limit is available here: https://gist.github.com/jancionear/4cf373aff5301a5905a5f685ff24ed6f\nContract developers can take a look at this list and see if their contract will be affected.\n\n### Validating the limits\n\nChunk validators have to verify that chunk producer respected all of the limits while producing the chunk. This means that validators also have to keep track of recorded storage proof by recording all trie accesses and they have to enforce the limits.\nIf it turns out that some limits weren't respected, the validators will generate a different result of chunk application and they won't endorse the chunk.\n\n### Missing chunks\n\nWhen a shard is missing some chunks, the following chunk on that shard will receive receipts from multiple blocks. This could lead to large `source_receipt_proofs` so a mechanism is added to reduce the impact. If there are two or more missing chunks in a row,\nthe shard is considered fully congested and no new receipts will be sent to it (unless it's the `allowed_shard` to avoid deadlocks).\n\n## ChunkStateWitness distribution\n\nFor chunk production, the chunk producer is required to distribute the chunk state witness to all the chunk validators. The chunk validators then validate the chunk and send the chunk endorsement to the block producer. Chunk state witness distribution is on a latency critical path.\n\nAs we saw in the section above, the maximum size of the state witness can be ~16 MiB. If the chunk producer were to send the chunk state witness to all the chunk validators it would add a massive bandwidth requirement for the chunk producer. To ease and distribute the network requirements across all the chunk producers, we have a distribution mechanism similar to what we have for chunks in the shards manager. 
We divide the chunk state witness into a number of parts, let the chunk validators distribute the parts among themselves, and later reconstruct the chunk state witness.\n\n### Distribution mechanism\n\nA chunk producer divides the state witness into a set of `N` parts, where `N` is the number of chunk validators. The parts, or partial witnesses, are represented as [PartialEncodedStateWitness](https://github.com/near/nearcore/blob/66d3b134343d9f35f6e0b437ebbdbef3e4aa1de3/core/primitives/src/stateless_validation.rs#L40). Each chunk validator is the owner of one part. The chunk producer uses the [PartialEncodedStateWitnessMessage](https://github.com/near/nearcore/blob/66d3b134343d9f35f6e0b437ebbdbef3e4aa1de3/chain/network/src/state_witness.rs#L11) to send each part to its respective owner. The chunk validator part owners, on receiving the `PartialEncodedStateWitnessMessage`, forward this part to all other chunk validators via the [PartialEncodedStateWitnessForwardMessage](https://github.com/near/nearcore/blob/66d3b134343d9f35f6e0b437ebbdbef3e4aa1de3/chain/network/src/state_witness.rs#L15). Each validator then uses the partial witnesses received to reconstruct the full chunk state witness.\n\nWe have a separate [PartialWitnessActor](https://github.com/near/nearcore/blob/66d3b134343d9f35f6e0b437ebbdbef3e4aa1de3/chain/client/src/stateless_validation/partial_witness/partial_witness_actor.rs#L32) actor/module that is responsible for dividing the state witness into parts, distributing the parts, handling both the partial encoded state witness message and the forward message, validating and storing the parts, reconstructing the state witness from the parts, and sending it to the chunk validation module.\n\n### Building redundancy using Reed Solomon Erasure encoding\n\nDuring the distribution mechanism, it's possible that some of the chunk validators are malicious, offline, or have a high network latency. 
Since chunk witness distribution is on the critical path for block production, we safeguard the distribution mechanism by building in redundancy using Reed Solomon Erasure encoding.\n\nWith Reed Solomon Erasure encoding, we can divide the chunk state witness into `N` total parts with `D` data parts. We can reconstruct the whole state witness as long as we have `D` of the `N` parts. The ratio of data parts `r = D/N` is a parameter we can tune.\n\nWhile reducing `r`, i.e. reducing the number of data parts required to reconstruct the state witness, does allow for a more robust distribution mechanism, it comes at the cost of bloating the overall size of the parts we need to distribute. If `S` is the size of the state witness, then after Reed Solomon encoding the total size `S'` of all parts becomes `S' = S/r`, or `S' = S * N / D`.\n\nFor the first release of stateless validation, we've kept the ratio at `0.6`, meaning that ~2/3rds of all chunk validators need to be online for the chunk state witness distribution mechanism to work smoothly.\n\nOne thing to note here is that the 2/3rds redundancy and upkeep requirement applies to the *number* of chunk validators, not to the *stake* of the chunk validators.\n\n### PartialEncodedStateWitness structure\n\nThe partial encoded state witness has the following fields:\n\n* `(epoch_id, shard_id, height_created)`: These are the three fields that together uniquely determine the chunk associated with the partial witness. Since the chunk and chunk header distribution mechanism is independent of the partial witness, we rely on this triplet to uniquely identify which chunk a part is associated with.\n* `part_ord`: The index or id of the part in the array of partial witnesses.\n* `part`: The data associated with the part.\n* `encoded_length`: The total length of the state witness. This is required in the Reed Solomon decoding process to reconstruct the state witness.\n* `signature`: Each part is signed by the chunk producer. 
This way the validity of the partial witness can be verified by the chunk validators receiving the parts.\n\nThe `PartialEncodedStateWitnessTracker` module is responsible for the storage and decoding of partial witnesses. This module has an LRU cache that stores all the partial witnesses with the `(epoch_id, shard_id, height_created)` triplet as the key. We reconstruct the state witness as soon as we have `D` of the `N` parts and forward the state witness to the validation module.\n\n### Network tradeoffs\n\nTo get a sense of the network requirements for validators with and without the partial state witness distribution mechanism, we can do some quick back-of-the-envelope calculations. Let `N` be the number of chunk validators, `S` be the size of the chunk state witness, and `r` be the ratio of data parts to total parts for Reed Solomon Erasure encoding.\n\nWithout the partial state witness distribution, each chunk producer would have to send the state witness to all chunk validators, which would require a bandwidth `B` of `B = N * S`. For the worst case of ~16 validators and ~16 MiB of state witness size, this can be a burst requirement of 2 Gbps.\n\nPartial state witness distribution takes this load off the chunk producer and distributes it evenly among all the chunk validators. However, we get an additional factor of `1/r` of extra data being transferred for redundancy. Each partial witness has a size of `P = S' / N`, or `P = S / r / N`. The chunk producer and each validator need a bandwidth `B` of `B = P * N`, or `B = S / r`, to forward their owned part to all `N` chunk validators. For the worst case of ~16 MiB of state witness size and an encoding ratio of `0.6`, this works out to be ~214 Mbps, which is much more reasonable.\n\n### Future work\n\nIn the Reed Solomon Erasure encoding section we discussed that the chunk state distribution mechanism relies on 2/3rds of the *number* of chunk validators being available/non-malicious, and not 2/3rds of the *total stake* of the chunk validators. 
This can cause a potential issue: it's possible for more than 1/3rd of the chunk validators, with small enough stake, to be unavailable and cause chunk production to stall. We would like to address this problem in the future.\n\n## Validator Role Change\n\nCurrently, there are two different types of validators and their responsibilities are as follows:\n\n|  | Top ~50% validators | Remaining validators (Chunk-only producers) |\n|-----|:-----:|:----:|\n| block production | Y | N |\n| chunk production | Y | Y |\n| block validation | Y | N |\n\n### Protocol upgrade\n\nA good property of the approach taken is that the protocol upgrade happens almost seamlessly.\n\nIf the (main transition, implicit transitions) fully belong to the protocol version before the upgrade to stateless validation, chunk validator endorsements are not distributed and chunk validators are not sampled, but the protocol is safe because of all-shards tracking, as we described in \"High-level flow\".\n\nIf at least some transition belongs to the protocol version after the upgrade, the chunk header height also belongs to an epoch after the upgrade, so it has chunk validators assigned and the requirement of 2/3 endorsements is enabled.\n\nOne minor subtlety is that state transition proofs have to be generated and saved one epoch in advance, so we won't have to re-apply chunks to generate proofs once stateless validation is enabled. But the protocol version of a new epoch is defined by the finalization of the **previous previous epoch**, so this is fine.\n\nIt also assumes that each epoch has at least two chunks, but if this is not the case, the chain is experiencing a major disruption of a kind that has never happened before.\n\n## Security Implications\n\nBlock validators are no longer required to track any shard, which means they don't have to validate state transitions proposed by the chunks in the block. 
Instead, they trust the chunk endorsements included in the block to certify the validity of the state transitions.\nThis makes the correctness of the chunk validator selection algorithm critical for the security of the whole protocol; it is probabilistic by nature, unlike the current stricter requirement of 2/3 non-malicious validators.\n\nIt is also worth mentioning that a large state witness size makes witness distribution slow, which could result in a missing chunk because the block producer won't get chunk endorsements in time. This design tries to address that by meticulously limiting the max witness size (see [this doc](https://github.com/near/nearcore/blob/master/docs/misc/state_witness_size_limits.md)).\n\n## Alternatives\n\nThe only real alternative that was considered is the original Nightshade proposal. The full overview of the differences can be found in the revised Nightshade whitepaper at https://near.org/papers/nightshade.\n\n## Future possibilities\n\n* Integration with ZK, allowing us to get rid of large state witness distribution. If we treat the state witness as a proof and ZK-ify it, anyone can validate that the state witness indeed proves the new chunk header with much lower effort. The complexity of actual proof generation and computation indeed increases, but it can be distributed among chunk producers, and we can have a separate concept of finality while allowing generic users to query optimistic chunks.\n* Integration with resharding to further increase the number of shards and the total throughput.\n* Sharding of non-validating nodes and services. There are a number of services that may benefit from tracking only a subset of shards. Some examples include the RPC, archival and read-RPC nodes.\n\n## Consequences\n\n### Positive\n\n* The validator nodes will need to track at most one shard.\n* The state will be held in memory, making chunk application much faster.\n* The disk space hardware requirement will decrease. 
The top 100 nodes will need to store at most 2 shards at a time and the remaining nodes will not need to store any shards.\n* Thanks to the above, in the future, it will be possible to reduce the gas costs and by doing so increase the throughput of the system.\n\n### Neutral\n\n* The current approach to resharding will need to be revised to support generating state witness.\n* The security assumptions will change. The responsibility will be moved from block producers to chunk validators and the security will become probabilistic.\n\n### Negative\n\n* The network bandwidth and memory hardware requirements will increase.\n  * The top 100 validators will need to store up to 2 shards in memory and participate in state witness distribution.\n  * The remaining validators will need to participate in state witness distribution.\n* Additional limits will be put on the size of transactions, receipts and, more generally, cross shard communication.\n* The dependency on cloud state sync will increase the centralization of the blockchain. This will be resolved separately by the decentralized state sync.\n\n### Backwards Compatibility\n\n[All NEPs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. Author must explain a proposes to deal with these incompatibilities. Submissions without a sufficient backwards compatibility treatise may be rejected outright.]\n\n## Unresolved Issues (Optional)\n\n[Explain any issues that warrant further discussion. 
Considerations\n\n* What parts of the design do you expect to resolve through the NEP process before this gets merged?\n* What parts of the design do you expect to resolve through the implementation of this feature before stabilization?\n* What related issues do you consider out of scope for this NEP that could be addressed in the future independently of the solution that comes out of this NEP?]\n\n## Changelog\n\n[The changelog section provides historical context for how the NEP developed over time. Initial NEP submission should start with version 1.0.0, and all subsequent NEP extensions must follow [Semantic Versioning](https://semver.org/). Every version should have the benefits and concerns raised during the review. The author does not need to fill out this section for the initial draft. Instead, the assigned reviewers (Subject Matter Experts) should create the first version during the first technical review. After the final public call, the author should then finalize the last version of the decision context.]\n\n### 1.0.0 - Initial Version\n\n> Placeholder for the context about when and who approved this NEP version.\n\n### 1.0.1 - Fix: Protocol Rewards Halving\n\nAs of October 29 2025, the Protocol Rewards have been halved from 5% to 2.5%.\n\n#### Benefits\n\n> List of benefits filled by the Subject Matter Experts while reviewing this version:\n\n* Benefit 1\n* Benefit 2\n\n#### Concerns\n\n> Template for Subject Matter Experts review for this version:\n> Status: New | Ongoing | Resolved\n\n|   # | Concern | Resolution | Status |\n| --: | :------ | :--------- | -----: |\n|   1 |         |            |        |\n|   2 |         |            |        |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0514.md",
    "content": "---\nNEP: 514\nTitle: Reducing the number of Block Producer Seats in `testnet`\nAuthors: Nikolay Kurtov <nikolay.kurtov@near.org>\nStatus: Final\nDiscussionsTo: https://github.com/nearprotocol/neps/pull/514\nType: Protocol\nVersion: 1.0.0\nCreated: 2023-10-25\nLastUpdated: 2023-10-25\n---\n\n## Summary\n\nThis proposal aims to adjust the number of block producer seats on `testnet` in\norder to ensure a positive number of chunk-only producers present in `testnet`\nat all times.\n\n## Motivation\n\nThe problem is that important code paths are not exercised in `testnet`. This\nmakes `mainnet` releases more risky than they have to be, and greatly slows\ndown development of features related to chunk-only producers, such as State\nSync.\n\nThat is because `testnet` has fewer validating nodes than the number of block\nproducer seats configured.\n\nThe number of validating nodes on `testnet` is somewhere in the range of\n[26, 46], which means that all validating nodes are block producers and none of\nthem are chunk-only producers. [Grafana](https://nearinc.grafana.net/goto/7Kh81P7IR?orgId=1).\n\n`testnet` configuration is currently the following:\n\n* `\"num_block_producer_seats\": 100,`\n* `\"num_block_producer_seats_per_shard\": [ 100, 100, 100, 100 ],`\n* `\"num_chunk_only_producer_seats\": 200,`\n\nIt's evident that the 100 block producer seats significantly outnumber the\nvalidating nodes in `testnet`.\n\nAn alternative solution to the problem stated above can be the following:\n\n1. Encourage the community to run more `testnet` validating nodes\n1. Release owners or developers of features start a lot of validating nodes to\n1. ensure `testnet` gets some chunk-only producing nodes.\n1. Exercise the unique code paths in a separate chain, a-la `localnet`.\n\nLet's consider each of these options.\n\n### More community nodes\n\nThis would be the ideal perfect situation. 
More nodes joining will make\n`testnet` more similar to `mainnet`, which will have various positive effects\nfor protocol developers and dApp developers.\n\nHowever, this option is expensive, because running a validating node costs\nmoney, and most community members can't afford to spend that amount of money for\nthe good of the network.\n\n### More protocol developer nodes\n\nWhile this option may seem viable, it poses significant financial challenges for\nprotocol development. The associated computational expenses are exorbitantly\nhigh, making it an impractical choice for sustainable development.\n\n### Test in separate chains\n\nThat is the current solution, and it has significant drawbacks:\n\n* Separate chains are short-lived and may miss events critical to the unique\n  code paths of chunk-only producers\n* Separate chains need special attention to be configured in a way that\n  accommodates chunk-only producers. Most test cases are not concerned about\n  them, and don't exercise the unique code paths.\n* Separate chains can't process real transaction traffic. The traffic must\n  either be synthetic or \"inspired\" by real traffic.\n* Each such test has a significant cost of running multiple nodes, in some\n  cases, tens of nodes.\n\n## Specification\n\nThe proposal suggests altering the number of block producer seats to ensure that\na portion of the `testnet` validating nodes become chunk-only producers.\n\nThe desired `testnet` configuration is the following:\n\n* `\"num_block_producer_seats\": 20,`\n* `\"num_block_producer_seats_per_shard\": [ 20, 20, 20, 20 ],`\n* `\"num_chunk_only_producer_seats\": 100,`\n\nI suggest implementing the change for all networks that are not `mainnet` and\nhave `use_production_config` in the genesis file. 
`use_production_config` is a\nsneaky parameter in `GenesisConfig` that lets protocol upgrades change the\nnetwork's `GenesisConfig`.\n\nI don't have a solid argument for lowering the number of chunk producer seats,\nbut that reflects the reality that we don't expect a lot of nodes joining\n`testnet`. It also makes it easier to test the case of too many validating nodes\nwilling to join a network.\n\n## Reference Implementation\n\n[#9563](https://github.com/near/nearcore/pull/9563)\n\nIf `use_production_config` is set, check whether `chain_id` is eligible, then change\nthe configuration as specified above.\n\n## Security Implications\n\nThe block production in `testnet` becomes more centralized. It's not a new\nconcern as 50% of stake is already owned by nodes operated by the protocol\ndevelopers.\n\n## Alternatives\n\nSee above.\n\n## Future possibilities\n\nAdjust the number of block and chunk producer seats as the number of `testnet`\nvalidating nodes evolves.\n\n## Consequences\n\n### Positive\n\n* Chunk-only production gets tested in `testnet`\n* Development of State Sync and other features related to chunk-only producers accelerates\n\n### Neutral\n\n* `testnet` block production becomes more centralized\n\n### Negative\n\n* Any?\n\n### Backwards Compatibility\n\nDuring the protocol upgrade, some nodes will become chunk-only producers.\n\nThe piece of code that updates the `testnet` configuration values will need to be\nkept in the codebase in case somebody wants to generate `EpochInfo` compatible\nwith the protocol versions containing the implementation of this NEP.\n\n## Changelog\n\n### 1.0.0 - Initial Version\n\nThe Protocol Working Group members approved this NEP on Oct 26, 2023.\n\n[Zulip link](https://near.zulipchat.com/#narrow/stream/297873-pagoda.2Fnode/topic/How.20to.20test.20a.20chunk-only.20producer.20node.20in.20testnet.3F/near/396090090)\n\n#### Benefits\n\nSee [Consequences](#consequences).\n\n#### Concerns\n\nSee 
[Consequences](#consequences).\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0518.md",
    "content": "---\nNEP: 518\nTitle: Web3-Compatible Wallets Support\nAuthors: Aleksandr Shevchenko <alex.shevchenko@aurora.dev>, Michael Birch <michael.birch@aurora.dev>\nStatus: New\nDiscussionsTo: https://github.com/near/NEPs/issues/518\nType: Protocol\nVersion: 1.0.0\nCreated: 2023-11-15\nLastUpdated: 2024-07-22\n---\n\n## Summary\n\nThis NEP describes the protocol changes needed to support the usage of Ethereum-compatible wallets (Web3 wallets), for example Metamask, on Near native applications. That is to say, with this protocol change all Metamask users can become Near users without installing any additional software; from their perspective Near will appear as just another network they can choose from (similar to Aurora today).\n\nThis is accomplished through two key protocol changes:\n\n1. Ethereum-like addresses (i.e. account IDs of the form `^0x[a-f0-9]{40}$`) are implicit accounts on Near (i.e. can be created via a `Transfer` action). We call these \"eth-implicit accounts\".\n2. Unlike the current implicit accounts (64-character hex-encoded), eth-implicit accounts do not have any access keys added to them on creation. Instead, these accounts will have a special contract deployed to them automatically called the \"wallet contract\". This wallet contract enables the owner of the Ethereum address corresponding to the eth-implicit account ID to sign transactions with their Ethereum private key, thus providing similar functionality to the default access key of 64-character implicit accounts.\n\nThe nature of this NEP requires the reader to know some concepts from the Ethereum ecosystem. However, since this is a document for readers only familiar with the Near network, we include appendices with definitions and descriptions of the Ethereum concepts needed to understand this proposal. 
Terms in bold, for example **EOA**, are defined in the glossary (Appendix A).\n\nThe protocol changes described here are a part of the overall Web3-Compatible Wallets Support solution. The full solution (including the protocol changes described here) is detailed in the original [NEP-518 issue description](https://github.com/near/NEPs/issues/518).\n\n## Motivation\n\nCurrently, the Ethereum ecosystem is a leading force in the smart contract blockchain space, boasting a large user base and extensive installations of Ethereum-compatible tooling and wallets. However, a significant challenge arises due to the incompatibility of these tools and wallets with NEAR Protocol. This incompatibility necessitates a complete onboarding process for users to interact with NEAR contracts and accounts, leading to confusion, decreased adoption, and the marginalization of NEAR Protocol.\n\nImplementing Web3 wallet support in NEAR Protocol, with an emphasis on user experience continuity, would significantly benefit the entire NEAR Ecosystem.\n\n## Specification\n\n### Eth-implicit accounts\n\n**Definition**: An eth-implicit account is a top-level Near account with ID of the form `^0x[a-f0-9]{40}$` (i.e. 42 characters with `0x` as a prefix followed by 40 characters of hex-encoded data which represents a 20-byte address).\n\nEth-implicit accounts, as the name suggests, are implicit accounts on Near. This means if the target account ID does not exist during a `Transfer` action then it MUST be automatically created. This includes being created even if the amount being transferred is zero (per the prior [NEP on zero balance accounts](https://github.com/near/NEPs/blob/master/neps/nep-0448.md)). Eth-implicit accounts represent an Ethereum **EOA** and therefore are controlled via the Ethereum private key corresponding to the address contained in the account ID (see Appendix B for a description of how 20-byte addresses are derived from a private key in the Ethereum ecosystem). 
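As an aside, the account-ID pattern in the definition above can be checked mechanically. This is an illustrative sketch, not part of the NEP; the function name is ours:

```python
import re

# Pattern for eth-implicit account IDs, taken verbatim from the definition above.
ETH_IMPLICIT_RE = re.compile(r"^0x[a-f0-9]{40}$")

def is_eth_implicit(account_id: str) -> bool:
    """Return True if `account_id` has the eth-implicit form (0x + 40 lowercase hex chars)."""
    return ETH_IMPLICIT_RE.fullmatch(account_id) is not None
```

Note that the pattern only admits lowercase hex, consistent with Near account-ID rules; `alice.near` or an uppercase-hex address does not match.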
To enable this control, eth-implicit accounts all have a smart contract deployed to them called the wallet contract (specification in the next section).\n\nWhen an eth-implicit account is created the runtime MUST set the contract code equal to specific \"magic bytes\". These bytes come from a UTF-8 encoded string which is equal to the constant `near` appended with the base-58 encoding of the sha2 hash of the wallet contract code. This constant allows the contract runtime to look up the full contract code without needing it to be stored multiple times in the state. As well as being more efficient for the protocol, setting the code equal to a hash of the contract instead of the contract itself keeps the storage requirements of a new eth-implicit account small enough to be a zero balance account.\n\nThe magic bytes depend on the Near network chain id because the wallet contract (and therefore its hash) depends on the Near chain id. The magic bytes (UTF-8 encoded) for each Near chain id are listed below:\n\n- `mainnet`: `near83PPBGX9KNgC2TRJgX7mvZfFPx92bFkdYvZNARQjRt8G`\n- `testnet`: `near3Za8tfLX6nKa2k4u2Aq5CRrM7EmTVSL9EERxymfnSFKd`\n- any other id (e.g. `localnet`): `near2dQzuvePVCmkXwe1oF3AgY9pZvqtDtq43nFHph928CU4`\n\nWhen the runtime is executing a `FunctionCall` action on an account with these magic bytes as code then it MUST act as if the wallet contract code were stored there instead (i.e. the wallet contract Wasm module ends up being executed).\n\n### Wallet contract\n\nThis smart contract is automatically deployed to all eth-implicit accounts (see prior section). The purpose of this contract is to accept transactions encoded in an Ethereum style and create Near actions which are executed in subsequent receipts. In this way, the owner of the Ethereum private key associated with the eth-implicit account (the address contained in its account ID) controls what actions the account takes. 
Thus that Ethereum key effectively becomes the only access key for the account, emulating the behavior of an Ethereum **EOA**.\n\n#### API\n\nThe wallet contract has two public functions:\n\n- `get_nonce` is a view function which takes no arguments and returns a 64-bit number (encoded as a base-10 string).\n- `rlp_execute` is the main entry point for executing user transactions. It takes two inputs (encoded as a JSON object): `target` is an account ID (i.e. string) which indicates the account that is supposed to be the target of the Near action; and `tx_bytes_b64` is a string which is the base-64 encoding of the raw bytes of an Ethereum-like transaction. The process by which a Near action is derived from the Ethereum transaction is described below.\n\nThe wallet contract has two state variables: the nonce, a 64-bit number; and a boolean flag indicating if a transaction is currently in progress. As with nonce values on Near access keys, the purpose of the wallet contract nonce is to prevent replaying the same Ethereum transaction more than once. The boolean flag prevents multiple transactions from being in-flight at the same time. The reason this is needed is because of the asynchronous nature of Near as compared with the synchronous nature of the **EVM**. On Ethereum if two transactions are sent (they must have sequential nonces per the Ethereum standard) all actions of the first will happen before all actions of the second. However, on Near there is no guarantee of the order of execution for receipts in different shards. Therefore, the only way to ensure that all actions from the first transaction are executed before all the actions of the second transaction is to prevent the second transaction from starting its execution until after the first one entirely finishes.\n\n#### Details of `rlp_execute`\n\nThis function is named after the **RLP** standard in Ethereum. 
In particular, the `tx_bytes_b64` argument is parsed into bytes from base-64; then the bytes are parsed into structured data assuming it is RLP encoded; then the structured data is parsed into an Ethereum transaction. Ethereum transactions can have multiple different forms since the Ethereum protocol has evolved over time (there are \"legacy\" transactions, [EIP-1559](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1559.md) type transactions, and [EIP-2930](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2930.md) transactions). All these different forms are supported by the wallet contract (they are distinguished based on the \"type byte\" which starts the encoding as per [EIP-2718](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2718.md)) and are ultimately all transformed into a common data structure with the following fields:\n\n- `from`: the address associated with the private key that signed the transaction.\n- `chain_id`: a numerical ID that is unique per **EVM**-chain. The Near chain ID values are discussed below.\n- `nonce`: the nonce associated with this transaction. It must be equal to the wallet contract's currently stored nonce for the transaction to be executed.\n- `gas_limit`: the maximum amount of **EVM** gas the user is willing to spend on this transaction.\n- `max_fee_per_gas`: the gas price the user is willing to pay. `gas_limit * max_fee_per_gas` gives the maximum amount of **Wei** the user is willing to pay for the transaction.\n- `to`: the address of the account the transaction is targeting. This could be another **EOA** in the case of a base token transfer or the address of a smart contract in the case of what Near would refer to as a function call. 
In the Ethereum standard this field is allowed to be empty to indicate a new contract is being created, however that is forbidden by the wallet contract because Near currently does not support **EVM** bytecode, so there is not a reasonable way to emulate an Ethereum contract deployment.\n- `value`: the amount of **Wei** attached to the transaction.\n- `data`: the raw bytes which will be sent as a payload to the target address. If the target address is a contract it will use these bytes as input.\n\nNote: some Ethereum transaction fields are intentionally omitted because they are unused by the wallet contract.\n\nThese fields are used to validate the transaction and derive Near actions that the wallet contract will create as receipts. The details of this process are described below.\n\n##### Ethereum transaction validation\n\nThe following validation conditions MUST pass for the wallet contract to accept a transaction.\n\n1. `from` address when formatted as hex-encoded with `0x` prefix MUST match the current account ID (i.e. the wallet contract's account ID).\n2. `chain_id` MUST match one of the following values depending on the Near chain the wallet contract is deployed to: mainnet -> 397; testnet -> 398; any other chain -> 399. The mainnet and testnet values are registered with the [official Ethereum ecosystem registry of chain IDs](https://github.com/ethereum-lists/chains).\n3. `nonce` MUST match the nonce value currently stored in the contract state.\n4. `to` address MUST either (a) be equal to `keccak256(target)[12,32]` (where `target` is the other argument passed to the `rlp_execute` function) or (b) when `to` is formatted as hex-encoded with `0x` prefix it MUST be equal to `target`. In case (b) there is an additional validation check that the `to` address is not registered in the \"Ethereum Translation Contract\" (ETC). The details of this check and why it is needed are discussed in Appendix C.\n5. 
`value` MUST be less than or equal to `(2**128 - 1) // 1_000_000`. This condition arises from the mismatch in decimal places between Ether and NEAR which is discussed in the definition of **Wei** in Appendix A. Essentially, we must ensure the `value` can be mapped into a valid amount of yoctoNEAR, which means `value * 1_000_000 <= u128::MAX`.\n\n##### Converting Ethereum transaction into Near actions\n\nEach Ethereum transaction is converted to a single Near action (batch transactions are not supported) based on the `data` field. Following the Solidity convention of the first four bytes of the data being a **method selector**, the wallet contract checks the first four bytes of the `data` to see if it is a known Near action. The **method selectors** for Near actions supported by the wallet contract are determined by mapping the actions to an equivalent Solidity function signature as follows:\n\n- `functionCall(string,string,bytes,uint64,uint32)`\n- `transfer(string,uint32)`\n- `addKey(uint8,bytes,uint64,bool,bool,uint128,string,string[])`\n- `deleteKey(uint8,bytes)`\n\nNote that the `uint32` fields in `functionCall` and `transfer` contain the amount of yoctoNEAR that cannot be included in the Ethereum transaction's `value` field due to the difference in decimal places (see **Wei** definition in Appendix A), therefore the value there is always less than `1_000_000` so it will easily fit in a 32-bit number. These type signatures then hash to the **method selectors**:\n\n- FunctionCall: `0x6179b707`\n- Transfer: `0x3ed64124`\n- AddKey: `0x753ce5ab`\n- DeleteKey: `0x3fc6d404`\n\nIf the first four bytes of the `data` field match one of these **method selectors** then the wallet contract will try to parse the remainder of the `data` into the corresponding type signature (assuming the data is Solidity ABI encoded). If this parsing succeeds then the resulting tuple of values can be converted to the corresponding Near action. 
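As an illustrative sketch (not from the NEP; the function name is ours), the `value` bound from validation rule 5 above and the Wei-to-yoctoNEAR mapping it protects can be written as:

```python
U128_MAX = 2**128 - 1

# 1 Ether = 10**18 Wei and 1 NEAR = 10**24 yoctoNEAR,
# so one Wei of `value` corresponds to 10**6 yoctoNEAR.
WEI_TO_YOCTO = 1_000_000

def wei_to_yoctonear(value_wei: int) -> int:
    """Map an Ethereum transaction `value` (in Wei) to yoctoNEAR, enforcing
    value * 1_000_000 <= u128::MAX as required by validation rule 5."""
    if value_wei > U128_MAX // WEI_TO_YOCTO:
        raise ValueError("`value` exceeds (2**128 - 1) // 1_000_000")
    return value_wei * WEI_TO_YOCTO
```

For example, a `value` of `10**18` Wei maps to `10**24` yoctoNEAR, and any `value` above `(2**128 - 1) // 1_000_000` is rejected.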
Some additional validation is done in this case, depending on the action:\n\n- FunctionCall/Transfer: `target` MUST equal the first `string` parameter (interpreted as the receiver ID), the `uint32` parameter value MUST be less than `1_000_000`.\n- AddKey/DeleteKey: the `uint8` parameter value MUST be 0 (corresponding to an ED25519 access key) or 1 (corresponding to a Secp256k1 access key), the `bytes` MUST be the appropriate length depending on the key type, `target` MUST equal the current account ID (since these actions can only act on the current account).\n\nAdditionally, the first `bool` value of `addKey` must be `false` because adding a full access key is currently not supported by the wallet contract. The reason for this is to prevent users from changing the contract code deployed to the eth-implicit account, as it could break the account's intended functionality. However, this restriction may be lifted in the future.\n\nIf the first four bytes of `data` do not match one of these known selectors then the contract tries another set of known **method selectors** which come from the Ethereum ERC-20 standard:\n\n- `balanceOf(address)` -> `0x70a08231`\n- `transfer(address,uint256)` -> `0xa9059cbb`\n- `totalSupply()` -> `0x18160ddd`\n\nThese **method selectors** are included because some Web3 wallets (for example MetaMask) allow a user to transfer tokens directly within the wallet interface. This interface produces an Ethereum transaction with Solidity ABI encoded data following the ERC-20 standard rather than the encoding of the Near actions outlined above. Therefore the wallet contract also knows how to parse these ERC-20 standard methods into Near actions so that the wallet interfaces still work according to the user's expectations. This feature of the wallet contract is called Ethereum Standards Emulation because it emulates the execution of an Ethereum standard. 
Currently ERC-20 is the only supported standard for emulation, but perhaps more will be added in the future.\n\nERC-20 is Ethereum's fungible token standard, thus these calls are mapped to the corresponding NEP-141 `FunctionCall` actions:\n\n- `balanceOf` -> `ft_balance_of`\n- `transfer` -> `ft_transfer`\n- `totalSupply` -> `ft_total_supply`\n\nNote: it is intentional that not all the ERC-20 functions are emulated (in particular related to `approve`/`allowance`) because there is no corresponding functionality in NEP-141. There is additional validation in the case of `transfer` that the amount is less than `u128::MAX` because the ERC-20 standard allows 256-bit amounts while the NEP-141 standard only allows 128-bit. The NEP-141 standard also has additional complexity that ERC-20 does not have because of the storage deposit requirement (a consequence of Near's storage staking). On Ethereum a user can transfer tokens to another account that has never held that kind of token before. On Near that is only possible if the user pays for the recipient's storage deposit first. Therefore, as part of the `transfer` emulation the wallet contract includes a call to `storage_balance_of` to check if a call to `storage_deposit` is also needed before calling `ft_transfer`.\n\nIf none of the known selectors match the first four bytes of `data` or the remainder of `data` fails to parse into the appropriate type signature then there is one more possible emulation that the wallet contract checks for. On Ethereum base token transfers are allowed to have arbitrary data included and some wallets use this feature as a sort of messaging protocol between addresses. Therefore, if the `data` is not processed and the `target` is another eth-implicit account, then we assume this is meant to emulate a base token transfer and thus a Near `Transfer` action is created. 
Otherwise, the wallet contract returns an error that the transaction could not be parsed.\n\n#### Interaction with Web3 relayers\n\nTypically users will not be constructing the `rlp_execute` action themselves because the target user group is those who only have a Web3 wallet like MetaMask, not a Near wallet to sign Near transactions. Therefore, the Near transactions will be constructed and sent to the Near network on a user's behalf by relayers. These relayers expose the Ethereum standard JSON RPC so that Web3 wallets know how to ask the relayer to send an Ethereum-like transaction and to query the status of that transaction. More details about relayers and the RPC they expose are found in the [NEP-518 issue description](https://github.com/near/NEPs/issues/518), but this is out of scope for this document because relayers operate separately from the Near protocol itself.\n\nThe relevant fact for the wallet contract specification is that relayers can ask their users to add a function call access key to their eth-implicit account which the relayer uses to call `rlp_execute`. By using an access key on the eth-implicit account itself, the relayer does not need to cover any gas costs for the user because the transaction originates from the wallet contract account itself. However, for this mechanism to be safe for users, relayers must be prevented from sending transactions to the wallet contract that the user did not intend. Otherwise relayers could maliciously burn the $NEAR of their users on excess calls to `rlp_execute` (even if those transactions return an error, gas is still spent in the process).\n\nFor this reason, the wallet contract separates possible errors in the `rlp_execute` input into two categories: user errors and relayer errors. User errors are errors that arise from data signed by the user's private key and therefore cannot be spoofed by the relayer. Relayer errors arise from input that should not have been sent by an honest relayer in the first place. 
These relayer errors include:\n\n- Invalid Ethereum transaction nonce: if the nonce check fails then the relayer is at fault because it should have checked the nonce before sending the transaction. This prevents a malicious relayer from sending the same user-signed transaction over and over to burn the user's $NEAR unnecessarily.\n- Invalid base-64 encoding in `tx_bytes_b64`: an honest relayer should only send valid arguments. If a relayer sends garbage input then it is faulty.\n- Invalid Ethereum transaction encoding: similar to the error above, but with the issue occurring in the RLP-encoding instead of in the base-64 encoding.\n- Invalid sender: if the address extracted from the signature on the Ethereum transaction does not match the wallet contract account ID then the relayer is faulty because it sent an incorrectly signed transaction.\n- Invalid target: if the `target` validation relative to the `to` field in the user's signed Ethereum transaction fails then the relayer is faulty because it tried to misdirect the transaction to a different account than the user intended.\n- Invalid chain id: similar to the invalid sender error, the relayer should only send transactions with a valid signature, including with the correct chain id.\n- Insufficient gas: if the relayer does not attach as much Near gas to the transaction as the user asked for in the `gas_limit` field of their signed Ethereum transaction then it is faulty. This prevents a malicious relayer from intentionally making user transactions fail by not attaching enough gas to complete the action.\n\nIf a relayer error happens then the wallet contract creates a callback to remove the relayer's access key. 
This prevents them from repeatedly sending incorrect input.\n\n## Reference implementation\n\nSummarizing the above, the protocol changes necessary for the Web3 wallets project include:\n\n- Creating Ethereum-like (0x) implicit accounts using `Transfer` action,\n- Automatically deploying the wallet contract to those 0x implicit accounts.\n\nThese protocol changes are implemented in nearcore ([eth-implicit accounts PR 10224](https://github.com/near/nearcore/pull/10224), [wallet contract implementation](https://github.com/near/nearcore/tree/1ab9b42c3d723604a214e685d8ed39f7d6434ae2/runtime/near-wallet-contract/implementation)) and have been stabilized in protocol version 70 ([PR 11765](https://github.com/near/nearcore/pull/11765)).\n\n## Security Implications\n\nThe wallet contract must uphold the invariant that only the owner of the private key can make the wallet contract create Near actions. The wallet contract has been audited and is believed to be secure.\n\n## Alternatives\n\nSee the \"Prior work\" section of the [original NEP-518 issue](https://github.com/near/NEPs/issues/518).\n\n## Future possibilities\n\nSee the \"Future Opportunities\" section of the [original NEP-518 issue](https://github.com/near/NEPs/issues/518).\n\n## Consequences\n\n### Positive\n\n- All Ethereum users can easily onboard to Near\n\n### Neutral\n\n- New implicit account type with a protocol-level smart contract deployed by default.\n\n### Backwards Compatibility\n\nAs pointed out in [PR 11606](https://github.com/near/nearcore/pull/11606) there are 5552 accounts on mainnet today with account IDs that would classify them as eth-implicit accounts. 
For backwards compatibility, these accounts will not be changed in any way (their access keys and contract code will be left in place) and therefore will in fact still be normal Near accounts as opposed to eth-implicit accounts because they have full access keys and possibly a contract different from the protocol-sanctioned wallet contract.\n\n## Appendix A - Glossary\n\nBelow is a list of Ethereum-related terms and their definitions.\n\n- **Ethereum Virtual Machine (EVM)**: the virtual machine used to execute smart contracts on the Ethereum blockchain. \"EVM-compatible\" is often used interchangeably with \"Ethereum compatible\".\n- **Externally owned account (EOA)**: An Ethereum account for which a user has the private key. Unlike Near, on Ethereum there is a distinction between contracts and user accounts. User accounts cannot have contract code and contract accounts cannot initiate a transaction.\n- **Method selector**: By convention in Solidity contracts, the first four bytes of the input to a smart contract determine which method is executed (unlike Near where a method is explicitly specified as part of the `FunctionCall` action). These bytes are obtained by taking the first four bytes of the keccak256 hash of the type signature of the function.\n- **Recursive Length Prefix (RLP) serialization**: An Ethereum ecosystem standard for encoding structured data as bytes. It plays a similar role to `borsh` in the Near ecosystem.\n- **Wei**: the smallest unit of the base token for Ethereum. It plays a similar role to yoctoNEAR in the Near ecosystem. An important difference between Wei and yoctoNEAR is that 1 Ether (the typical unit for the base token on Ethereum) is equal to 10^18 Wei, while 1 NEAR is equal to 10^24 yoctoNEAR. Phrased another way, Ether has 18 decimal places while NEAR has 24. 
This difference in precision creates minor complexities in the wallet contract.\n\n## Appendix B - How addresses are derived in Ethereum\n\nOn Ethereum all accounts are identified by a 20-byte address. The address of a user account is derived from a user's private key in the following way:\n\n1. Compute the user's public key from the private key (this step can be omitted if you only know the public key).\n2. Compute the keccak256 hash of the public key.\n3. Return the rightmost 20 bytes of this hash.\n\nThis is summarized by the following formula: `address = keccak256(public_key)[12,32]`.\n\n## Appendix C - Ethereum Translation Contract (ETC)\n\nThere is an additional contract which is tangentially related to the wallet contract. The [original NEP-518 issue](https://github.com/near/NEPs/issues/518) refers to it as the Ethereum Translation Contract (ETC), though perhaps a more descriptive name is the Ethereum address registrar. The implementation of this contract is not part of the protocol, however its account ID is because the account ID is hardcoded into the wallet contract. The reason is that the wallet contract occasionally needs the ETC to verify if the `target` argument to `rlp_execute` is properly set relative to the `to` field of the user's signed Ethereum transaction. The details of why ETC is needed and how it is used are described below.\n\nRecall that the user is signing an Ethereum transaction because the whole point of this project is to allow Web3 wallets like MetaMask to be used on Near. An Ethereum transaction specifies the target of a transaction using a 20-byte address because there are no named accounts on Ethereum. Therefore the user only signs over a 20-byte address to indicate their intent of what account is meant to receive this transaction. However, this is obviously insufficient information on Near because most accounts are named ones, not addresses. 
The purpose of the `target` argument to `rlp_execute` is to communicate the account ID of the receiver of the transaction and it must be consistent with the user's signed Ethereum transaction according to the validation conditions described in the \"Ethereum transaction validation\" section.\n\nMost of the time that means checking `to == keccak256(target)[12,32]` because the `target` will be some named Near account. However, it is possible for `target` to be another eth-implicit account; this is the case for \"emulated\" base token transfers (emulated Ethereum standards are discussed in the section \"Converting Ethereum transaction into Near actions\"). Thus, we must also allow the possibility that `to == target`. Yet, this poses a problem because it means `target` could be set incorrectly if it was meant to be a named account satisfying the hash condition instead. The ETC closes the loophole by providing a reverse lookup from 20-byte address to named Near accounts where the association comes from the hash condition.\n\nTo fully validate `target` in the case that `to == target` the wallet contract makes the following additional checks:\n\n- If the `data` field of the user's signed Ethereum transaction can be parsed into a Near action then confirm `target` matches the `receiver_id` of the corresponding action (this is statically known to be the current account ID in the case of `AddKey` and `DeleteKey`, and it is encoded with the action in the case of `FunctionCall` and `Transfer`).\n- If the `data` field can be parsed as an ERC-20 action then call the `lookup` method of the ETC to see if the `target` is registered. If it is registered then the `target` field is set incorrectly because the relayer should have set `target` equal to the named account returned from `ETC::lookup(to)`. This validity check ensures that emulated ERC-20 transactions are sent to the correct NEP-141 token account. 
If `target` is not registered then the transaction is interpreted as an emulated base token transfer with a message that happens to parse like an ERC-20 function call.\n\nNotably, for this security measure to be effective, all widely used NEP-141 token accounts will need to be registered with the ETC. The ETC has a public method `register` which permissionlessly allows anyone to add an account ID they think is important. This openness is not a feasible attack vector for the system because the one-way nature of the keccak256 hash function prevents an attacker from coming up with a Near account ID that corresponds to an address of their choosing.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n
  },
  {
    "path": "neps/nep-0519.md",
    "content": "---\nNEP: 519\nTitle: Yield Execution\nAuthors: Akhi Singhania <akhi3030@gmail.com>; Saketh Are <saketh@near.org>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/pull/519\nType: Protocol\nVersion: 0.0.0\nCreated: 2023-11-17\nLastUpdated: 2023-11-20\n---\n\n## Summary\n\nToday, when a smart contract is called by a user or another contract, it has no sensible way to delay responding to the caller until it has observed another future transaction.  This proposal introduces this possibility into the NEAR protocol.\n\n## Motivation\n\nThere exist situations where a smart contract on NEAR, when called, will only be able to provide an answer at some time in the future.  The callee needs a way to defer replying to the caller while the response is being prepared.\n\nExamples include a smart contract (`S`) that provides MPC signing capability.  It relies on indexers external to the NEAR protocol for computing the signatures.  The rough steps are:\n\n1. The signer contract provides a function `fn sign_payload(Payload, ...)`.\n2. When called, the contract defers replying to the caller.\n3. External indexers monitor the transactions on the contract; they observe the new signing request, compute a signature, and call another function `fn signature_available(Signature, ...)` on the signer contract.\n4. The signer contract validates the signature and replies to the original caller.\n\nToday, the NEAR protocol has no sensible way to defer replying to the caller in step 2 above.  This proposal adds the following two new host functions to the NEAR protocol:\n\n- `promise_yield_create`: allows setting up a continuation function that should only be executed after `promise_yield_resume` is invoked.  
Together with `promise_return` this allows delaying the reply to the caller;\n- `promise_yield_resume`: indicates to the protocol that the continuation to the yield may now be executed.\n\nIf these two host functions were available, then `promise_yield_create` would be used to implement step 2 above and `promise_yield_resume` would be used for step 3 of the motivating example above.\n\n## Specification\n\nThe proposal is to add the following host functions to the NEAR protocol:\n\n```rust\n/// Smart contracts can use this host function along with\n/// `promise_yield_resume()` to delay replying to their caller for up to 200\n/// blocks.  This host function allows the contract to provide a callback to the\n/// protocol that will be executed after either the contract calls\n/// `promise_yield_resume()` or 200 blocks have been executed.  The\n/// callback then has the opportunity to either reply to the caller or to delay\n/// replying again.\n///\n/// `method_name_len` and `method_name_ptr`: Identify the callback method that\n/// should be executed either after the contract calls `promise_yield_resume()`\n/// or after 200 blocks have been executed.\n///\n/// `arguments_len` and `arguments_ptr` provide an initial blob of arguments\n/// that will be passed to the callback.  These will be available via the\n/// `input` host function.\n///\n/// `gas`: Similar to the `gas` parameter in\n/// [promise_create](https://github.com/near/nearcore/blob/a908de36ab6f75eb130447a5788007e26d05f93e/runtime/near-vm-runner/src/logic/logic.rs#L1281),\n/// the `gas` parameter is a prepayment for the gas that would be used to\n/// execute the callback.\n///\n/// `gas_weight`: Similar to the `gas_weight` parameter in\n/// [promise_batch_action_function_call_weight](https://github.com/near/nearcore/blob/a908de36ab6f75eb130447a5788007e26d05f93e/runtime/near-vm-runner/src/logic/logic.rs#L1699),\n/// this improves the devX for the smart contract.  
It allows a contract to\n/// specify a portion of the remaining gas for executing the callback instead of\n/// specifying a precise amount.\n///\n/// `register_id`: Used to identify the register that will be filled with a\n/// unique resumption token. This token is used with `promise_yield_resume` to\n/// resolve the continuation receipt set up by this function.\n///\n/// Return value: u64: Similar to the\n/// [promise_create](https://github.com/near/nearcore/blob/a908de36ab6f75eb130447a5788007e26d05f93e/runtime/near-vm-runner/src/logic/logic.rs#L1281)\n/// host function, this function also creates a promise and returns an index to\n/// the promise.  This index can be used to create a chain of promises.\npub fn promise_yield_create(\n    method_name_len: u64,\n    method_name_ptr: u64,\n    arguments_len: u64,\n    arguments_ptr: u64,\n    gas: u64,\n    gas_weight: u64,\n    register_id: u64,\n) -> u64;\n\n/// See `promise_yield_create()` for more details.  This host function can be\n/// used to resolve the continuation that was set up by\n/// `promise_yield_create()`.  The contract calling this function must be the\n/// same contract that called `promise_yield_create()` earlier.  This host\n/// function cannot be called for the same resumption token twice or if the\n/// callback specified in `promise_yield_create()` has already executed.\n///\n/// `data_id_len` and `data_id_ptr`: Used to pass the unique resumption token\n/// that was returned to the smart contract in the `promise_yield_create()`\n/// function (via the register).\n///\n/// `payload_len` and `payload_ptr`: the smart contract can provide an\n/// additional optional blob of arguments that should be passed to the callback\n/// that will be resumed.  These are available via the `promise_result` host\n/// function.\n///\n/// This function can be called multiple times with the same data id.  
If it is\n/// called successfully multiple times, then the implementation guarantees that\n/// the yielded callback will execute with one of the successfully submitted\n/// payloads.  If submission was successful, then `1` is returned.  Otherwise\n/// (e.g. if the yield receipt has already timed out or the yielded callback has\n/// already been executed) `0` will be returned, indicating that this payload\n/// could not be submitted successfully.\npub fn promise_yield_resume(\n    data_id_len: u64,\n    data_id_ptr: u64,\n    payload_len: u64,\n    payload_ptr: u64,\n) -> u32;\n```\n\n## Reference Implementation\n\nThe reference implementation against the nearcore repository can be found in this [PR](https://github.com/near/nearcore/pull/10415).\n\n## Security Implications\n\nSome potential security issues have been identified and are covered below:\n\n- Smart contracts using this functionality have to be careful not to let just any party trigger a call to `promise_yield_resume`.  In the example above, it is possible that a malicious actor may pretend to be an external signer and call the `signature_available()` function with an incorrect signature.  Hence contracts should take precautions by only letting select callers call the function (for example, by using [this](https://github.com/aurora-is-near/near-plugins/blob/master/near-plugins/src/access_controllable.rs) service) and by validating the payload before acting upon it.\n- This mechanism introduces a new way to create delayed receipts in the protocol.  When the protocol is under conditions of congestion, this mechanism could be used to further aggravate the situation.  This is not deemed a serious issue, as existing mechanisms such as promises 
can also be used to further exacerbate the situation.\n\n## Alternatives\n\nTwo alternatives have been identified.\n\n### Self calls to delay replying\n\nIn the `fn sign_payload(Payload, ...)` function, instead of calling `yield`, the contract can keep calling itself in a loop until the external indexer replies with the signature.  This would work but would be very fragile and expensive.  The contract would have to pay for all the calls and function executions while it is waiting for the response.  The behaviour would also depend on network congestion: if the shard is not busy at all, some self calls could happen within the same block, meaning that the contract might not actually wait as long as it hoped; if the network is very busy, the call from the external indexer might be arbitrarily delayed.\n\n### Change the flow of calls\n\nThe general flow of cross contract calls in NEAR is that a contract `A` sends a request to another contract `B` to perform a service and `B` replies to `A` with the response.  This flow could be altered.  When a contract `A` calls `B` to perform a service, `B` could respond with a \"promise to call it later with the answer\".  Then, when the signature is eventually available, `B` can send `A` a request with the signature.\n\nThere are some problems with this approach though.  After the change in the flow of calls, `B` is now paying for gas for various executions that `A` should have been paying for.  Due to bugs or malicious intent, `B` could forget to call `A` with the signature.  
If `A` is calling `B` deep in a call tree and `B` replies to it without actually providing an answer, then `A` would need a mechanism to keep the call tree alive while it waits for `B` to call it with the signature, in effect running into the same problem that this NEP is attempting to solve.\n\n## Future possibilities\n\nOne potential future possibility is to allow contracts to specify how long the protocol should wait (up to a certain limit) for the contract to call `promise_yield_resume`.  If contracts specify a smaller value, they would potentially be charged a smaller gas fee.  This would make contracts more efficient.  This enhancement does lead to a more complex implementation and could even allow malicious contracts to more easily concentrate a lot of callbacks to occur at the same time, increasing congestion on the network.  Hence, we decided not to include this feature for the time being.\n\n## Consequences\n\n[This section describes the consequences, after applying the decision. All consequences should be summarized here, not just the \"positive\" ones. Record any concerns raised throughout the NEP discussion.]\n\n### Positive\n\n- p1\n\n### Neutral\n\n- n1\n\n### Negative\n\n- n1\n\n### Backwards Compatibility\n\nWe believe this can be implemented with full backwards compatibility.\n\n## Changelog\n\n### 1.0.0 - Initial Version\n\n> Placeholder for the context about when and who approved this NEP version.\n\n#### Benefits\n\n> List of benefits filled by the Subject Matter Experts while reviewing this version:\n\n- Benefit 1\n- Benefit 2\n\n#### Concerns\n\n> Template for Subject Matter Experts review for this version:\n> Status: New | Ongoing | Resolved\n\n|   # | Concern | Resolution | Status |\n| --: | :------ | :--------- | -----: |\n|   1 |         |            |        |\n|   2 |         |            |        |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n
  },
  {
    "path": "neps/nep-0536.md",
    "content": "---\nNEP: 536\nTitle: Reduce the number of gas refunds\nAuthors: Evgeny Kuzyakov <ek@fastnear.com>, Bowen Wang <bowen@near.org>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/pull/536\nType: Protocol\nVersion: 1.0.0\nCreated: 2024-03-12\nLastUpdated: 2024-03-12\n---\n\n## Summary\n\n[Gas refund](https://docs.near.org/concepts/basics/transactions/gas#attach-extra-gas-get-refunded) is a mechanism that allows users to get refunded for gas that is not used during the execution of a smart contract. Due to [pessimistic gas pricing](https://docs.near.org/concepts/basics/transactions/gas-advanced#pessimistic-gas-price-inflation), however, even transactions that do not involve function calls generate refunds because users need to pay at an inflated price up front and get refunded the difference. Gas refunds lead to nontrivial overhead in the runtime and other places, which hurts the performance of the protocol. This proposal aims to reduce the number of gas refunds and prepare for future changes that completely remove gas refunds.\n\n## Motivation\n\nRefund receipts create nontrivial overhead: they need to be merklized and sent across shards. In addition, the processing of refund receipts requires additional storage reads and writes, which is not optimal for the performance of the protocol. Moreover, when there is congestion, refund receipts may be delayed during execution. Whenever this happens, it requires two additional storage writes to store a gas refund receipt and two additional reads and writes when they are later processed, which incurs a significant overhead. To optimize the performance of the protocol under congestion, it is imperative that we reduce the number of refund receipts.\n\n## Specification\n\nPessimistic gas pricing is removed as a part of this change. This means that transactions that do not involve function calls will not generate gas refund receipts as a result. 
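To illustrate the removal of pessimistic pricing, the sketch below contrasts the old and new up-front charge. All numbers, and the 3%-per-hop inflation rate, are illustrative placeholders rather than protocol constants defined in this NEP:

```python
GAS_PRICE = 100_000_000  # yoctoNEAR per unit of gas; placeholder value

def old_charge(tx_gas: int, depth: int) -> int:
  # Old behavior: deduct at a pessimistically inflated gas price (sketched
  # here as 3% compounding per receipt hop), then refund the difference
  # later via a refund receipt.
  pessimistic_gas_price = int(GAS_PRICE * 1.03 ** depth)
  return tx_gas * pessimistic_gas_price

def new_charge(tx_gas: int) -> int:
  # New behavior: charge exactly transaction_gas_cost * gas_price, so a
  # transaction without function calls generates no refund receipt at all.
  return tx_gas * GAS_PRICE

tx_gas = 450_000_000_000  # placeholder gas cost of a simple transfer
refund_under_old_scheme = old_charge(tx_gas, depth=2) - new_charge(tx_gas)
assert refund_under_old_scheme > 0  # the old scheme always over-charges
```

The over-charge is exactly what previously had to travel back to the signer as a refund receipt; with exact up-front charging it never exists.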
For function calls, this proposal introduces a refund cost, charged per receipt, of\n\n```rust\nREFUND_FIXED_COST = action_receipt_creation + action_transfer + action_add_function_call_key\nrefund_cost = max(REFUND_FIXED_COST, 0.05 * gas_refund);\n```\n\nThe refund fixed cost includes consideration for implicit accounts (created on transfer) and refund for access key allowance, which requires an access key update. The refund cost is designed to discourage developers from attaching too much gas\nand creating unnecessary refunds. Some examples:\n\n* If the contract wants to refund 280Tgas, burning 5% of it would be about 14Tgas, which is a significant cost, and developers would be encouraged to optimize it on the frontend.\n* If the refund is 100Tgas, then 5% is 5Tgas, which is still significant and discourages developers from doing so.\n* If the refund is <10Tgas (a very common case for cross-contract call self-callbacks), the penalty would be just 500Ggas, which is less than the fixed refund cost, so only the fixed refund cost will be charged from gas to spawn the gas refund receipt. No UX will be broken for legacy cross-contract call contracts, so long as the frontend correctly estimates the required gas in the worst-case scenario.\n\n## Reference Implementation\n\nThe protocol changes are as follows:\n\n* When a transaction is converted to a receipt, there is no longer a `pessimistic_gas_price` multiplier when the signer balance is deducted. Instead, the signer is charged `transaction_gas_cost * gas_price`. If the transaction succeeds, then unless the transaction contains a function call action, it will not generate any refund. 
On the other hand, when a transaction with multiple actions fails, there is a gas refund for the remaining unexecuted actions, the same as how the protocol works today.\n* For function calls, if X gas is attached during the execution of a receipt and Y gas is used+burnt, then `max(0, X-Y-refund_cost)` is refunded at the original gas price where `refund_cost = max(REFUND_FIXED_COST, 0.05 * (X-Y))`. If the refund is 0, then no refund receipt is generated.\n* Tokens burnt on refund cost are counted towards `tx_balance_burnt`, and the part over `REFUND_FIXED_COST` is not counted towards the gas limit to avoid artificially limiting throughput.\n* Because refund cost is now separate, action costs do not need to account for refunds and therefore should be recalculated and reduced.\n\n## Security Implications\n\nThis change may lead to less correct charging for transactions when there is congestion. However, the entire gas price mechanism needs to be rethought anyway, and when only one or two shards are congested, the gas price wouldn't change, so there is no difference.\n\n## Alternatives\n\nOne alternative is to completely remove gas refunds by burning all prepaid gas. This idea was [discussed](https://github.com/near/NEPs/issues/107) before. However, it would be a very drastic change and could significantly damage the developer experience.\nThe approach outlined in this proposal has less impact on developer and user experience and may be extended to burning all prepaid gas in the future.\n\n## Future possibilities\n\n* Burning all prepaid gas is a natural extension to this proposal, which would completely get rid of gas refunds. This, however, would be a major change to the developer experience of NEAR and should be treated cautiously.\nAt the very least, developers should be able to easily estimate how much gas a function within a smart contract is going to consume during execution.\n\n## Consequences\n\n[This section describes the consequences, after applying the decision. 
All consequences should be summarized here, not just the \"positive\" ones. Record any concerns raised throughout the NEP discussion.]\n\n### Positive\n\n* p1\n\n### Neutral\n\n* n1\n\n### Negative\n\n* n1\n\n### Backwards Compatibility\n\nDevelopers may need to change the amount of gas they attach when they write client side code that interacts with smart contracts to avoid getting penalized. However, this is not too difficult to do.\n\n## Changelog\n\n[The changelog section provides historical context for how the NEP developed over time. Initial NEP submission should start with version 1.0.0, and all subsequent NEP extensions must follow [Semantic Versioning](https://semver.org/). Every version should have the benefits and concerns raised during the review. The author does not need to fill out this section for the initial draft. Instead, the assigned reviewers (Subject Matter Experts) should create the first version during the first technical review. After the final public call, the author should then finalize the last version of the decision context.]\n\n### 1.0.0 - Initial Version\n\n> Placeholder for the context about when and who approved this NEP version.\n\n#### Benefits\n\n> List of benefits filled by the Subject Matter Experts while reviewing this version:\n\n* Benefit 1\n* Benefit 2\n\n#### Concerns\n\n> Template for Subject Matter Experts review for this version:\n> Status: New | Ongoing | Resolved\n\n|   # | Concern | Resolution | Status |\n| --: | :------ | :--------- | -----: |\n|   1 |         |            |        |\n|   2 |         |            |        |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0539.md",
    "content": "---\nNEP: 539\nTitle: Cross-Shard Congestion Control\nAuthors: Waclaw Banasik <waclaw@near.org>, Jakob Meier <inbox@jakobmeier.ch>\nStatus: Final\nDiscussionsTo: https://github.com/nearprotocol/neps/pull/539\nType: Protocol\nVersion: 1.0.0\nCreated: 2024-03-21\nLastUpdated: 2024-05-15\n---\n\n## Summary\n\nWe propose to limit the transactions and outgoing receipts included in each\nchunk. Limits depend on the congestion level of the receiving shard and are to\nbe enforced by the protocol.\n\nCongestion is primarily tracked in terms of the total gas of all delayed receipts.\nChunk producers must ensure they stop accepting new transactions if the receiver\naccount is on a shard with a full delayed receipts queue.\n\nForwarding of receipts that are not directly produced by a transaction, namely\ncross-contract calls and delegated receipts, is limited by the receiver's overall\ncongestion level. This includes the gas of delayed receipts and the gas of\nreceipts that have not yet been forwarded due to congestion control\nrestrictions. Additionally, the memory consumption of both receipt types can\nalso cause congestion.\n\nThis proposal extends the local congestion control rules already in place. It\nkeeps the transaction pool size limit as is but replaces the old delayed receipt\ncount limit with limits on the gas and size of the receipts.\n\n## Motivation\n\nWe want to guarantee the Near Protocol blockchain operates stably even during\ncongestion.\n\nToday, when users send transactions at a higher rate than what the network can\nprocess, receipts accumulate without limits. This leads to unlimited memory\nconsumption on chunk producers' and validators' machines. Furthermore, the delay\nfor a transaction from when it is accepted to when it finishes execution becomes\nlarger and larger, as the receipts need to queue behind those already in the\nsystem.\n\nFirst attempts to limit the memory consumption have been added without protocol\nchanges. 
This is known as \"local congestion control\" and can be summarized in\ntwo rules:\n\n- Limit the transaction pool to 100 MB.\n  https://github.com/near/nearcore/pull/9172\n- Once we have accumulated more than 20k delayed receipts in a shard,\n  chunk-producers for that shard stop accepting more transactions.\n  https://github.com/near/nearcore/pull/9222\n\nBut these rules do not put any limits on what other shards would accept. For\nexample, when a particular contract is popular, the contract's shard will\neventually stop accepting new transactions, but all other shards will keep\naccepting more and more transactions that produce receipts for the popular\ncontract. Therefore the number of delayed receipts keeps growing indefinitely.\n\nCross-shard congestion control addresses this issue by stopping new transactions\nat the source and delaying receipt forwarding when the receiving shard has\nreached its congestion limit.\n\n## Specification\n\nThe proposal adds fields to chunk headers, adds a new trie column, changes the\ntransaction selection rules, and changes the chunk execution flow. After the\nconcepts section below, the next four sections specify each of these changes in\nmore detail.\n\n### Concepts\n\nBelow are high-level descriptions of important concepts to make the following\nsections a bit easier to digest.\n\n**Delayed receipts** are all receipts that were ready for execution in the\nprevious chunk but were delayed due to gas or compute limits. They are stored in\nthe delayed receipt queue, which itself is stored in the state of the trie.\nThere is exactly one delayed receipts queue per shard.\n\n**Outgoing receipts** are all newly produced receipts as a result of executing\nreceipts or when converting a transaction to non-local receipts. In the absence\nof congestion, they are all stored in the output of applying receipts, as a\nsimple list of receipts.\n\nThe **outgoing receipts buffer** is a data structure added by cross-shard\ncongestion control. 
Each shard has one instance of it in the state trie for\nevery other shard. We use this buffer to store outgoing receipts temporarily\nwhen reaching receipt forwarding limits.\n\n**Receipt limits** are measured in gas and size. Gas in this context refers to\nthe maximum amount of gas that could be burnt when applying the receipt. The\nreceipt size is how much space it takes in memory, measured in bytes.\n\nThe **congestion level** is an indicator between 0 and 1 that determines how\nstrong congestion prevention measures should be. The maximum measured\ncongestion is reached at congestion level 1. This value is defined separately\nfor each shard and computed as the maximum value of the following congestion\ntypes.\n\n- **Incoming congestion** increases linearly with the amount of gas in\n  delayed receipts.\n- **Outgoing congestion** increases linearly with the amount of gas in any of\n  the outgoing buffers of the shard.\n- **Memory congestion** increases linearly with the total size of all delayed\n  and buffered outgoing receipts.\n- **Missed chunk congestion** rises linearly with the number of missed chunks\n  since the last successful chunk, measured in block height difference.\n\n### Chunk header changes\n\nWe change the chunk header to include congestion information, adding four\nindicators.\n\n```rust\n/// sum of gas of all receipts in the delayed receipts queue, at the end of chunk execution\ndelayed_receipts_gas: u128,\n/// sum of gas of all receipts in all outgoing receipt buffers, at the end of chunk execution\nbuffered_receipts_gas: u128,\n/// sum of all receipt sizes in the delayed receipts queue and all outgoing buffers\nreceipt_bytes: u64,\n/// if the congestion level is 1.0, the only shard that can forward receipts to this shard in the next chunk\n/// not relevant if the congestion level is < 1.0\nallowed_shard: u16,\n```\n\nThe exact header structure and reasons for the particular integer sizes are\ndescribed in more detail in the 
[reference\nimplementation](#efficiently-computing-the-congestion-information) section.\nUsage of these fields is described in [Changes to chunk\nexecution](#changes-to-chunk-execution).\n\nThis adds 42 bytes to the chunk header, increasing it from 374 bytes up to 416\nbytes in the borsh representation (assuming no validator proposals are\nincluded). This in turn increases the block size by 42 bytes *per shard*, as all\nchunk headers are fully included in blocks.\n\nIncluding all this information in the chunk header enables efficient validation.\nUsing the previous chunk header (or alternatively, the state), combined with the\nlist of receipts applied and forwarded, a validator can check that the\ncongestion rules described in this NEP are fulfilled.\n\n### Changes to transaction selection\n\nWe change transaction selection to reject new transactions when the system is\ncongested, to reduce the total workload in the system.\n\nToday, transactions are taken from the chunk producer's pool until `tx_limit` is\nreached, where `tx_limit` is computed as follows.\n\n```python\n# old\ntx_limit = 500 Tgas if len(delayed_receipts) < 20_000 else 0\n```\n\nWe replace the formula so that the transaction limit depends on the\n`incoming_congestion` variable (between 0 and 1) computed in the previous chunk\nof the same shard:\n\n```python\n# new\nMIN_TX_GAS = 20 Tgas\nMAX_TX_GAS = 500 Tgas\ntx_limit = mix(MAX_TX_GAS, MIN_TX_GAS, incoming_congestion)\n```\n\nThis smoothly limits the acceptance of new work, to prioritize reducing the\nbacklog of delayed receipts.\n\nIn the pseudo code above, we borrow the [`mix`](https://docs.gl/sl4/mix)\nfunction from GLSL for linear interpolation.\n\n> `mix(x, y, a)`\n>\n> `mix` performs a linear interpolation between x and y using a to weight between\n> them. 
The return value is computed as $x \\times (1 - a) + y \\times a$.\n\nMore importantly, we add a more targeted rule to reject all transactions *targeting*\na shard with a congestion level above a certain threshold.\n\n```python\ndef filter_tx(tx):\n  REJECT_TX_CONGESTION_THRESHOLD = 0.25\n  if congestion_level(tx.receiver_shard_id) > REJECT_TX_CONGESTION_THRESHOLD:\n    tx.reject()\n  else:\n    tx.accept()\n```\n\nHere, `congestion_level()` is the maximum of incoming, outgoing, memory, and\nmissed chunk congestion.\n\nThis stops (some) new incoming work at the source, when a shard is using too\nmuch memory to store unprocessed receipts, or if there is already too much work\npiled up for that shard.\n\nChunk validators must validate that the two rules above are respected in a\nproduced chunk.\n\n### Changes to chunk execution\n\nWe change chunk execution to hold back receipt forwarding to congested shards.\nThis has two effects.\n\n1. It prevents the memory consumption of the congested shard from growing at the\n   expense of buffering these pending receipts on the outgoing shards.\n2. When user demand is consistently higher than what the system can handle, this\n   mechanism lets backpressure propagate shard-by-shard until it reaches the shards\n   responsible for accepting too many receipts and causes transaction\n   filtering to kick in.\n\nTo accomplish this, we add four new steps to chunk execution (enumerated as 1,\n2, 3, and 7 below) and modify how outgoing receipts are treated in the\ntransaction conversion step (4) and in the receipt execution step (5).\n\nThe new chunk execution then follows this order.\n\n1. 
(new) Read congestion information for *all* shards from the previous chunk headers.\n\n   ```rust\n   // congestion info for each shard, as computed in the last included chunk of the shard\n   {\n    delayed_receipts_gas: u128,\n    buffered_receipts_gas: u128,\n    receipt_bytes: u64,\n    allowed_shard: u16,\n    // extended congestion info, as computed from the latest block header\n    missed_chunks_count: u64\n   }\n   ```\n\n2. (new) Compute bandwidth limits to other shards based on the congestion information.\n   The formula is:\n\n    ```python\n    for receiver in other_shards:\n      MAX_CONGESTION_INCOMING_GAS = 20 Pgas\n      incoming_congestion = delayed_receipts_gas[receiver] / MAX_CONGESTION_INCOMING_GAS\n\n      MAX_CONGESTION_OUTGOING_GAS = 2 Pgas\n      outgoing_congestion = buffered_receipts_gas[receiver] / MAX_CONGESTION_OUTGOING_GAS\n\n      MAX_CONGESTION_MEMORY_CONSUMPTION = 1000 MB\n      memory_congestion = receipt_bytes[receiver] / MAX_CONGESTION_MEMORY_CONSUMPTION\n\n      MAX_CONGESTION_MISSED_CHUNKS = 10\n      missed_chunk_congestion = missed_chunks_count[receiver] / MAX_CONGESTION_MISSED_CHUNKS\n\n      congestion = max(incoming_congestion, outgoing_congestion, memory_congestion, missed_chunk_congestion)\n\n      if congestion >= 1.0:\n        # Maximum congestion: reduce to minimum speed\n        if current_shard == allowed_shard[receiver]:\n          outgoing_gas_limit[receiver] = 1 Pgas\n        else:\n          outgoing_gas_limit[receiver] = 0\n      else:\n        # Green or Amber\n        # linear interpolation based on congestion level\n        MIN_GAS_FORWARDING = 1 Pgas\n        MAX_GAS_FORWARDING = 300 Pgas\n        outgoing_gas_limit[receiver]\n          = mix(MAX_GAS_FORWARDING, MIN_GAS_FORWARDING, congestion)\n    ```\n\n3. 
(new) Drain receipts in the outgoing buffer from the previous round\n    - Subtract `receipt.gas()` from `outgoing_gas_limit[receipt.receiver]` for\n      each receipt drained.\n    - Keep receipts in the buffer if the gas limit would be negative.\n    - Subtract `receipt.gas()` from `outgoing_congestion` and `receipt.size()`\n      from `receipt_bytes` for the local shard for every forwarded receipt.\n    - Add the removed receipts to the outgoing receipts of the new chunk.\n4. Convert all transactions to receipts included in the chunk.\n    - Local receipts, which are receipts where the sender account id is equal to\n      the receiver id, are set aside as local receipts for step 5.\n    - Non-local receipts up to `outgoing_gas_limit[receipt.receiver]` for the\n      respective shard go to the outgoing receipts list of the chunk.\n    - (new) Non-local receipts above `outgoing_gas_limit[receipt.receiver]` for\n      the respective shard go to the outgoing receipts buffer.\n    - (new) For each receipt added to the outgoing buffer, add `receipt.gas()`\n      to `outgoing_congestion` and `receipt.size()` to `receipt_bytes` for\n      the local shard.\n5. Execute receipts in the order of `local`, `delayed`, `incoming`, `yield-resume time-out`.\n    - Don't stop before all receipts are executed or more than 1000 Tgas have\n      been burnt. Burnt gas includes the burnt gas from step 4.\n    - Outgoing receipts up to what is left in\n      `outgoing_gas_limit[receipt.receiver]` per shard (after step 3) go to the\n      outgoing receipts list of the chunk.\n    - (new) Outgoing receipts above `outgoing_gas_limit[receipt.receiver]`\n      go to the outgoing receipts buffer.\n    - (new) For each executed delayed receipt, remove `receipt.gas()` from\n      `incoming_congestion` and `receipt.size()` from `receipt_bytes`.\n6. 
Remaining local or incoming receipts are added to the end of the `delayed`\n   receipts queue.\n    - (new) For each receipt added to the delayed receipts queue, add\n      `receipt.gas()` to `incoming_congestion` and `receipt.size()` to\n      `receipt_bytes`.\n7. (new) Write own congestion information into the result, to be included in the\n   next chunk header.\n    - If the congestion level is >= 1.0, the `allowed_shard` can be chosen\n      freely by the chunk producer. Selecting its own shard means nobody can\n      send. The reference implementation uses round-robin assignment of all\n      other shards. Further optimization can be done without requiring protocol\n      changes.\n    - If the congestion level is < 1.0, the `allowed_shard` value does not\n      affect congestion control. But the chunk producer must set it to its own\n      shard in this case.\n\nIn the formula above, the receipt gas and the receipt size are defined as:\n\n```python\ndef gas(receipt):\n  return receipt.attached_gas + receipt.exec_gas\n\ndef size(receipt):\n  return len(borsh(receipt))\n```\n\n### Changes to Trie\n\nWe store the outgoing buffered receipts in the trie, similar to delayed receipts\nbut in their own separate column. 
But instead of a single queue per shard, we add\none queue for each other shard in the current sharding layout.\n\nWe add two trie columns:\n\n- `BUFFERED_RECEIPT_INDICES: u8 = 13;`\n- `BUFFERED_RECEIPT: u8 = 14;`\n\nThe `BUFFERED_RECEIPT_INDICES` column only has one value, which stores a\nborsh-serialized instance of `BufferedReceiptIndices`, defined as follows:\n\n```rust\npub struct BufferedReceiptIndices {\n    pub shard_buffer_indices: BTreeMap<ShardId, ShardBufferedReceiptIndices>,\n}\n\npub struct ShardBufferedReceiptIndices {\n    // First inclusive index in the queue.\n    pub first_index: u64,\n    // Exclusive end index of the queue.\n    pub next_available_index: u64,\n}\n```\n\nThe `BUFFERED_RECEIPT` column stores receipts keyed by\n`TrieKey::BufferedReceipt{ receiving_shard: ShardId, index: u64 }`.\n\nThe `BufferedReceiptIndices` map defines which queues exist, which changes\nduring resharding. For each existing queue, all receipts in the range\n`[first_index, next_available_index)` (inclusive start, exclusive end) must\nexist under the key with the corresponding shard.\n\n### Notes on parameter fine-tuning\n\nBelow are the reasons why each parameter is set to the specific value given above.\n\nFor more details, a spreadsheet with the full analysis can be found here:\n<https://docs.google.com/spreadsheets/d/1Vt_-sgMdX1ncYleikYY8uFID_aG9RaqJOqaVMLQ37tQ/edit#gid=0>\n\n#### Queue sizes\n\nThe parameters are chosen to strike a balance between guarantees for short\nqueues and utilization. 20 Pgas of delayed receipts means that incoming receipts\nhave to wait at most 20 chunks to be applied. And it can guarantee 100%\nutilization as long as the ratio between burnt and attached gas in receipts is\nabove 1 to 20.\n\nA shorter delayed queue would result in lower delays, but in our model\nsimulations we saw reduced utilization even in simple and balanced workloads.\n\nThe 1 GB of memory is a target value for the control algorithm to try and stay\nbelow. 
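Taken together, the queue limits above feed into a single congestion level per shard. The following is a minimal executable sketch of that computation, combining the four terms from the formula in the specification; the constants are the proposed protocol parameters and the function name is illustrative, not taken from the reference implementation:

```python
# Sketch of the per-shard congestion level. Constants are the proposed
# protocol parameters; gas is in units of gas, sizes in bytes.
MAX_CONGESTION_INCOMING_GAS = 20 * 10**15         # 20 Pgas
MAX_CONGESTION_OUTGOING_GAS = 2 * 10**15          # 2 Pgas
MAX_CONGESTION_MEMORY_CONSUMPTION = 1000 * 10**6  # 1000 MB
MAX_CONGESTION_MISSED_CHUNKS = 10

def congestion_level(delayed_gas, buffered_gas, receipt_bytes, missed_chunks):
    # Each term is the ratio of the current value to its limit; the
    # overall level is the maximum of the terms, capped at 1.0
    # (fully congested).
    level = max(
        delayed_gas / MAX_CONGESTION_INCOMING_GAS,
        buffered_gas / MAX_CONGESTION_OUTGOING_GAS,
        receipt_bytes / MAX_CONGESTION_MEMORY_CONSUMPTION,
        missed_chunks / MAX_CONGESTION_MISSED_CHUNKS,
    )
    return min(level, 1.0)

# A shard with 10 Pgas of delayed receipts and otherwise small queues
# sits at congestion level 0.5.
```

Under the interpolation in the specification, a level of 0.5 roughly halves the forwarding bandwidth between `MAX_GAS_FORWARDING` and `MIN_GAS_FORWARDING`.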
With receipts in the normal range of sizes seen in today's traffic, we\nshould never even get close to 1 GB. But the protocol allows a single receipt to\nbe multiple MBs. In those cases, a limit of 1 GB still gives us almost 100%\nutilization but prevents queues from growing larger than what validators can\nkeep in memory.\n\n#### Receipt forwarding limits\n\n`MIN_GAS_FORWARDING = 1 Pgas` and `MAX_GAS_FORWARDING = 300 Pgas` give us a\nlarge range to smooth out how much should be forwarded to other shards.\nDepending on the burnt-to-attached gas ratio of the workload, it will settle at\ndifferent values for each directed shard pair. This gives the algorithm\nadaptability to many workload patterns.\n\nFor the forwarding to work smoothly, we need a bit of an outgoing buffer queue.\nWe found in simulations that `MAX_CONGESTION_OUTGOING_GAS = 2 Pgas` is enough\nfor the forwarding limit to settle in the perfect spot before we are restricted\nby transaction rejection. Higher values did not yield better results but they do\nincrease delays in some cases, hence we propose 2 Pgas.\n\n#### Limiting new transactions\n\nThe remaining parameters work together to adapt how much new workload we accept\ninto the system, based on how congested the chain already is.\n\n`REJECT_TX_CONGESTION_THRESHOLD = 0.25` defines how quickly we start rejecting\nnew workload to a shard. Combined with the 20 Pgas limit on the delayed receipts\nqueue, we only reject new work if at least 5 Pgas of excess workload has already\nbeen sent to that shard.\n\nThis is more aggressive than other mechanisms simply because rejecting more\nworkload to known-to-be-congested shards is the most effective tool to prevent\nthe system from accumulating more transactions. 
The sooner we do it, the shorter\nthe delays experienced by users who got their transactions accepted.\n\n`MIN_TX_GAS = 20 Tgas` and `MAX_TX_GAS = 500 Tgas` give a large range to smooth\nout the split between gas spent on new transactions vs. delayed receipts. This\nonly looks at how many delayed receipts the local shard has, not at the\nreceiving shard. Depending on the workload, it will settle at different values.\n\nNote that hitting `REJECT_TX_CONGESTION_THRESHOLD`, which looks at the\ncongestion of the receiving shard, overrules this range and stops all\ntransactions to the congested shard when it is hit.\n\n`MIN_TX_GAS = 20 Tgas` guarantees that we can still accept a decent amount of\nnew transactions to shards that are not congested, even if the local shard\nitself is fully congested. This gives fairness properties under certain\nworkloads that we could not achieve with any of the other congestion control\nstrategies we tried. It is also useful for adding transaction priority in\n[NEP-541](https://github.com/near/NEPs/pull/541), as we can always auction off\nthe available space for new transactions without altering the congestion control\nalgorithm.\n\n## Reference Implementation\n\nA reference implementation is available in this PR against nearcore:\n<https://github.com/near/nearcore/pull/10918>\n\nHere are the most important details which are not already described in the\nspecification above but are defined in the reference implementation.\n\n### Efficiently computing the congestion information\n\nThe congestion information is computed based on the gas and size of the incoming\nqueue and the outgoing buffers. A naive implementation would just iterate over\nall of the receipts in the queue and buffers and sum up the relevant metrics.\nThis approach is slow and, in the context of stateless validation, would add too\nmuch to the state witness size. To prevent these issues, we consider two\nalternative optimizations. 
Both use the same principle of caching the previously\ncalculated metrics and updating them based on the changes to the incoming queue\nand outgoing buffers.\n\nAfter applying a chunk, we store detailed information of the shard in the chunk\nextra. Unlike the chunk header, this is only stored on the shard and not shared\nglobally.\n\nThe new fields in the chunk extra are included in `ChunkExtraV3`.\n\n```rust\npub struct ChunkExtraV3 {\n\n    // ...all fields from ChunkExtraV2\n\n    pub congestion_info: CongestionInfo,\n}\n\npub struct CongestionInfo {\n  delayed_receipts_gas: u128,\n  buffered_receipts_gas: u128,\n  receipt_bytes: u64,\n  allowed_shard: u16,\n}\n```\n\nThis implementation allows us to efficiently update the `CongestionInfo` during\nchunk application by starting with the information of the previous chunk and\napplying only the changes.\n\nRegarding integer sizes, `delayed_receipts_gas` and `buffered_receipts_gas` use\n128-bit unsigned integers because 64-bit would not always be enough. `u64::MAX`\nwould only be enough to store `18_446 Pgas`. This translates to roughly 5 hours\nof work, assuming 1 Pgas per second. While the proposed congestion control\nstrategy should prevent congestion from ever reaching such high levels, it is\nnot possible to rule it out completely.\n\nFor `receipt_bytes`, a `u64` is more than enough; we have other problems if we\nneed to store millions of terabytes of receipts.\n\nFor the id of the allowed shard, we chose a `u16`, which is large enough for\n65_535 shards.\n\n### Bootstrapping\n\nThe previous section explains how the gas and bytes information of unprocessed\nreceipts is computed based on what it was for the previous chunk. 
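This incremental bookkeeping can be sketched as follows; this is a simplified model with illustrative method names, not the reference implementation's types:

```python
# Simplified sketch: the cached totals are carried over from the previous
# chunk and adjusted as receipts enter or leave the queues, instead of
# re-scanning all receipts on every chunk application.
class CongestionInfo:
    def __init__(self, delayed_receipts_gas=0, buffered_receipts_gas=0,
                 receipt_bytes=0):
        self.delayed_receipts_gas = delayed_receipts_gas
        self.buffered_receipts_gas = buffered_receipts_gas
        self.receipt_bytes = receipt_bytes

    def delay_receipt(self, gas, size):
        # A receipt enters the delayed queue.
        self.delayed_receipts_gas += gas
        self.receipt_bytes += size

    def execute_delayed_receipt(self, gas, size):
        # A receipt leaves the delayed queue after execution.
        self.delayed_receipts_gas -= gas
        self.receipt_bytes -= size

    def buffer_receipt(self, gas, size):
        # A receipt enters an outgoing buffer.
        self.buffered_receipts_gas += gas
        self.receipt_bytes += size

    def forward_buffered_receipt(self, gas, size):
        # A receipt is forwarded out of an outgoing buffer.
        self.buffered_receipts_gas -= gas
        self.receipt_bytes -= size
```

Only the deltas for receipts that actually moved in this chunk are applied, which keeps both the computation and the state witness contribution small.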
But for the\nfirst chunk with this feature enabled, the information for the previous chunk is\nnot available.\n\nIn this specific case, we detect that the previous information is not available\nand therefore trigger an iteration over the existing queues to compute the\ncorrect values.\n\nThis computed `CongestionInfo` only applies locally. But the next value of it\nwill be shared in the chunk header, and other shards will start using it to limit\nthe transactions they accept and the receipts they forward.\n\nThe congestion info of other shards is assumed to be 0 for all values for the\nfirst block with the cross-shard congestion control feature enabled.\n\n### Missing Chunks\n\nWhen a chunk is missing, we use the congestion information of the last available\nchunk header for the shard. In practical terms, this simply means we take the\nchunk header available in the block, even if the included height is not the\nlatest.\n\nAdditionally, we include the number of missed chunks as part of the congestion\nformula, treating a shard with 10 or more missed chunks the same way as an\notherwise fully congested shard. 
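A sketch of how the missed-chunk count folds into the congestion signal, under the assumption that it is simply another capped ratio combined with the level from the last available chunk header (names are illustrative):

```python
MAX_CONGESTION_MISSED_CHUNKS = 10

def congestion_with_missed_chunks(last_chunk_congestion, missed_chunks_count):
    # Reuse the congestion level from the last available chunk header and
    # treat 10 or more missed chunks as full congestion (level 1.0).
    missed_term = missed_chunks_count / MAX_CONGESTION_MISSED_CHUNKS
    return min(max(last_chunk_congestion, missed_term), 1.0)
```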
This is to prevent sending even more receipts to a shard\nthat already struggles to produce chunks.\n\n### Validation Changes\n\nThe following fields in the chunk header must be validated:\n\n- `receipt_bytes`: must be equal to `receipt_bytes` of the previous chunk, plus\n  the bytes of all new receipts added to the delayed or buffered receipts\n  queues, minus the bytes of all receipts removed from them.\n- `delayed_receipts_gas` must be equal to `delayed_receipts_gas` of the previous\n  chunk, plus the gas of receipts added to the delayed receipts queue, minus the\n  gas of receipts removed from the delayed receipts queue.\n- `buffered_receipts_gas` must be equal to `buffered_receipts_gas` of the previous\n  chunk, plus the gas of receipts added to any of the outgoing receipts buffers, minus the\n  gas of all forwarded buffered receipts.\n- `allowed_shard` must be a valid shard id.\n- `allowed_shard` must be equal to the chunk's shard id if congestion is below 1.\n\nThe balance checker also needs to take into account balances stored in buffered receipts.\n\n## Security Implications\n\nWith cross-shard congestion control enabled, malicious users could try to find\npatterns that clog up the system. This could potentially lead to cheaper\ndenial-of-service attacks compared to today.\n\nIf such patterns exist, most likely today's implementation would suffer from\ndifferent problems, such as validators requiring unbounded amounts of memory.\nTherefore, we believe this feature is a massive step forward in terms of\nsecurity, all things considered.\n\n## Integration with state sync\n\nWhat we described in [Efficiently computing the congestion\ninformation](#efficiently-computing-the-congestion-information) creates a\ndependence on the previous block when processing a block. For a fully synced\nnode this requirement is always fulfilled because we keep at least 3 epochs of\nblocks. 
However, in state sync we start processing from an arbitrary place in\nthe chain without access to the full history.\n\nIn order to integrate the congestion control and state sync features, we will\nadd extra steps in state sync to download the blocks that may be needed in\norder to finalize state sync.\n\nThe blocks that are needed are the `sync hash` block, the `previous block` where\nstate sync creates a chunk extra in order to kick off block sync, and the `previous\nprevious block` that is now needed in order to process the `previous block`. On\ntop of that, we may need to download further blocks to ensure that every shard has\nat least one new chunk in the blocks leading up to the sync hash block.\n\n## Integration with resharding\n\nResharding is a process wherein we change the shard layout - the assignment of\naccount ids to shards. The centerpiece of resharding is moving the trie / state\nrecords from parent shards to children shards. It's important to preserve the\nability to perform resharding while adding other protocol features such as congestion\ncontrol. Below is a short description of how resharding and congestion control can be\nintegrated, in particular how to reshard the new trie columns - the outgoing buffers.\n\nFor simplicity, we'll only consider splitting a single parent shard into multiple\nchildren shards, which is currently the only supported operation.\n\nThe actual implementation of this integration will be done independently and\noutside of this effort.\n\nImportantly, resharding affects both the shard that is being split and all the\nother shards.\n\n#### Changes to the shard under resharding\n\nThe outgoing buffers of the parent shard can be split among the children by iterating\nall of the receipts in each buffer and inserting each into the appropriate child shard.\nThe assignment can in theory be arbitrary, e.g. all receipts can be reassigned to\na single shard. 
In practice, it would make sense to either split the receipts\nequally between the children or assign them based on the sender account id of the receipt.\n\nSpecial consideration should be given to refund receipts, whose sender account\nis \"system\" and may belong to neither the parent nor the children shards.\nAny assignment of such receipts is fine.\n\n#### Changes to the other shards\n\nThe other shards, that is, all shards that are not under resharding, have an\noutgoing buffer to the shard under resharding. This buffer should be split\ninto one outgoing buffer per child shard. The buffer can be split by iterating\nreceipts and reassigning each to either of the child shards. Each receipt can\nbe reassigned based on its receiver account id and the new shard layout.\n\n## Alternatives\n\nA wide range of alternatives has been discussed. It would be too much to list\nall suggested variations of all strategies. Instead, here is a list of different\ndirections that were explored, with a representative strategy for each of them.\n\n1. Use transaction fees and an open market to reduce the workload added to the\n   system.\n\n    - Problem 1: This does not prevent unbounded memory requirements of\n      validators, it just makes it more expensive.\n    - Problem 2: In a sharded environment like Near Protocol, it is hard to\n      implement this fairly. Because it's impossible to know where a transaction\n      burns most of its gas, the only simple solution would require all shards to\n      pay the price for the most congested shard.\n\n2. Set fixed limits for delayed receipts and drop receipts beyond that.\n\n    - Problem 1: Today, smart contracts rely on receipts never being lost. This\n      network-level failure mode would be completely new.\n    - Problem 2: We either need to allow resuming with external inputs, or\n      roll back changes on all shards to still have consistent states in smart\n      contracts. 
Both solutions mean we are doing extra work when\n      congested, inevitably reducing the available throughput for useful work at\n      the times when demand is largest.\n\n3. Stop accepting transactions globally when any shard has too long of a delayed\n   receipts queue. ([See this issue](https://github.com/near/nearcore/issues/9228).)\n    - Problem 1: This gives poor utilization in many workloads, as our model\n      simulations confirmed.\n    - Problem 2: A global stop conflicts with plans to add fee-based transaction\n      priorities, which should allow sending transactions even under heavy\n      congestion.\n\n4. Reduce newly accepted transactions solely based on gas in delayed queues,\n   without adding new buffers or queues to the system. Gas is tracked per shard\n   of the transaction signer. ([Zulip Discussion](https://near.zulipchat.com/#narrow/stream/295558-core/topic/congestion.20control/near/429973223) and [idea in code](https://github.com/near/nearcore/pull/10894).)\n\n    - Problem 1: This requires `N` gas numbers in each chunk header, or `N*N`\n      numbers per block, where `N` is the number of shards.\n    - Problem 2: We did not have time to simulate it properly. But on paper, it\n      seems each individual delayed queue can still grow indefinitely as the\n      number of shards in the system grows.\n\n5. Smartly track and limit buffer space across different shards. 
Only accept new\n   transactions if enough buffer space can be reserved ahead of time.\n\n    - Problem 1: Without knowing which shards a transaction touches and how\n      large receipts will be, we have to pessimistically reserve more space than\n      most receipts will actually need.\n    - Problem 2: If buffer space is shared globally, individual queues can still\n      grow really large, even indefinitely if we assume the number of shards\n      grows over time.\n    - Problem 3: If buffer space is on a per-shard basis, we run into deadlocks\n      when two shards have no more space left but both need to send to the other\n      shard to make progress.\n\n6. Require users to define which shards their transactions will touch and how\n   much gas is burnt in each. Then use this information for global scheduling\n   such that congestion is impossible.\n\n    - Problem 1: This requires lots of changes across the infrastructure stack.\n      It would take too long to implement as we are already facing congestion\n      problems today.\n    - Problem 2: This would have a strong impact on usability and it is unclear\n      if gas usage estimating services could close the gap to make it\n      acceptable.\n\n7. An alternative to what is described in [Efficiently computing the\n  congestion information](#efficiently-computing-the-congestion-information) would\n  be to store the total gas and total size of the incoming queue and the outgoing\n  receipts in the state alongside the respective queue or buffers. Those values would\n  be updated as receipts are added to or removed from the queue.\n    - Pro: In this case, the `CongestionInfo` struct can remain small and only\n      reflect the information needed by other shards. 
(3 bytes instead of 42\n      bytes)\n\n    ```rust\n    pub struct CongestionInfo {\n      allowed_shard: u16,\n      congestion_level: u8,\n    }\n    ```\n\n    - Con: Overall, it would result in more state changes per chunk, since the\n      congestion value needs to be read before applying receipts anyway. In\n      light of stateless validation, this would be worse for the state witness\n      size.\n\n## Future possibilities\n\nWhile this proposal treats all requests the same, it sets the base for a proper\ntransaction priority implementation. We co-designed this proposal with\n[NEP-541](https://github.com/near/NEPs/pull/541), which adds a transaction\npriority fee. On a very high level, the fee is used to auction off a part of the\navailable gas per chunk to the highest bidders.\n\nWe also expect that this proposal alone will not be the final solution for\ncongestion control. Rather, we just want to build a solid foundation in this NEP\nand allow future optimizations to take place on top of it.\n\nFor example, estimations of how much gas is burnt on each shard could help with\nbetter load distribution in some cases.\n\nWe also foresee that the round-robin scheduling of the shard allowed to forward\neven under full congestion is not perfect. It is a key feature to make deadlocks\nprovably impossible, since every shard is guaranteed to make a minimum amount of\nprogress after N rounds. But it could be beneficial to allocate more bandwidth to\nshards that actually have something to forward, or perhaps it would be better to\nstop forwarding anything for a while. The current proposal allows chunk producers\nto experiment with this without a protocol version change.\n\nLastly, a future optimization could do better transaction rejection for meta\ntransactions. 
Instead of looking only at the outer transaction receiver, we\ncould also look at the receiver of the delegate action, which is most likely\nwhere most gas is going to be burnt, and use this for transaction rejection.\n\n## Consequences\n\n### Positive\n\n- Accepted transactions have lower latencies under congestion compared to today.\n- Storage and memory requirements on validators for storing receipts are bounded.\n\n### Neutral\n\n- More transactions are rejected at the chunk producer level.\n\n### Negative\n\n- Users need to resend transactions more often.\n\n### Backwards Compatibility\n\nThere are no observable changes on the smart contract, wallet, or API level.\nThus, there are no backwards-compatibility concerns.\n\n## Unresolved Issues (Optional)\n\nThese congestion problems are out of scope for this proposal:\n\n- Malicious patterns can still cause queues to grow beyond the parameter limits.\n- There is no way to pay for higher priority.\n  ([NEP-541](https://github.com/near/NEPs/pull/541) adds it.)\n\nWe also considered adding postponed receipts to `receipt_bytes`. But at this\npoint, it seems better not to include them, to avoid further complications with\npotential deadlocks, since postponed receipts can only be executed when incoming\nreceipts are allowed to come in.\n\nFollowing the same logic, yielded receipts are also excluded from the size\nlimits, as they require incoming receipts to resume.\n\nA solution that also addresses the memory space of postponed and yielded receipts\ncould be added in future proposals but is not considered necessary for this\nfirst iteration of cross-shard congestion control.\n\n## Changelog\n\n[The changelog section provides historical context for how the NEP developed over time. Initial NEP submission should start with version 1.0.0, and all subsequent NEP extensions must follow [Semantic Versioning](https://semver.org/). Every version should have the benefits and concerns raised during the review. 
The author does not need to fill out this section for the initial draft. Instead, the assigned reviewers (Subject Matter Experts) should create the first version during the first technical review. After the final public call, the author should then finalize the last version of the decision context.]\n\n### 1.0.0 - Initial Version\n\n> Placeholder for the context about when and who approved this NEP version.\n\n#### Benefits\n\n> List of benefits filled by the Subject Matter Experts while reviewing this version:\n\n- Benefit 1\n- Benefit 2\n\n#### Concerns\n\n> Template for Subject Matter Experts review for this version:\n> Status: New | Ongoing | Resolved\n\n|   # | Concern | Resolution | Status |\n| --: | :------ | :--------- | -----: |\n|   1 |         |            |        |\n|   2 |         |            |        |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0568.md",
    "content": "---\nNEP: 568\nTitle: Resharding V3\nAuthors: Adam Chudas, Aleksandr Logunov, Andrea Spurio, Marcelo Diop-Gonzalez, Shreyan Gupta, Waclaw Banasik\nStatus: Final\nDiscussionsTo: https://github.com/near/nearcore/issues/11881\nType: Protocol\nVersion: 1.0.0\nCreated: 2024-10-24\nLastUpdated: 2024-10-24\n---\n\n## Summary\n\nThis proposal introduces a new resharding implementation and shard layout for production networks.\n\nThe primary objective of Resharding V3 is to increase chain capacity by splitting overutilized shards. A secondary aim is to lay the groundwork for supporting Dynamic Resharding, Instant Resharding, and Shard Merging in future updates.\n\n## Motivation\n\nThe sharded architecture of the NEAR Protocol is a cornerstone of its design, enabling parallel and distributed execution that significantly boosts overall throughput. Resharding plays a pivotal role in this system, allowing the network to adjust the number of shards to accommodate growth. By increasing the number of shards, resharding ensures the network can scale seamlessly, alleviating existing congestion, managing rising traffic demands, and welcoming new participants. This adaptability is essential for maintaining the protocol's performance, reliability, and capacity to support a thriving, ever-expanding ecosystem.\n\nResharding V3 is a significantly redesigned approach, addressing limitations of the previous versions, [Resharding V1][NEP-040] and [Resharding V2][NEP-508]. The earlier solutions became obsolete due to major protocol changes since Resharding V2, including the introduction of Stateless Validation, Single Shard Tracking, and Mem-Trie.\n\n## Specification\n\nResharding will be scheduled in advance by the NEAR developer team. The new shard layout will be hardcoded into the `neard` binary and linked to the protocol version. As the protocol upgrade progresses, resharding will be triggered during the post-processing phase of the last block of the epoch. 
At this point, the state of the parent shard will be split between two child shards. From the first block of the new protocol version onward, the chain will operate with the new shard layout.\n\nThere are two key dimensions to consider: state storage and protocol features, along with additional details.\n\n1. **State Storage**: Currently, the state of a shard is stored in three distinct formats: the state, the flat state, and the mem-trie. Each of these representations must be resharded. Logically, resharding is an almost instantaneous event that occurs before the first block under the new shard layout. However, in practice, some of this work may be deferred to post-processing, as long as the chain's view reflects a fully resharded state.\n\n2. **Protocol Features**: Several protocol features must integrate smoothly with the resharding process, including:\n\n    * **Stateless Validation**: Resharding must be validated and proven through stateless validation mechanisms.\n    * **State Sync**: Nodes must be able to synchronize the states of the child shards post-resharding.\n    * **Cross-Shard Traffic**: Receipts sent to the parent shard may need to be reassigned to one of the child shards.\n    * **Receipt Handling**: Delayed, postponed, buffered, and promise-yield receipts must be correctly distributed between the child shards.\n    * **ShardId Semantics**: The shard identifiers will become abstract identifiers where today they are numbers in the `0..num_shards` range.\n    * **Congestion Info**: `CongestionInfo` in the chunk header will be recalculated for the child shards at the resharding boundary. Proof must be compatible with Stateless Validation.\n\n### State Storage - MemTrie\n\nMemTrie is the in-memory representation of the trie that the runtime uses for all trie accesses. 
It is kept in sync with the Trie representation in the state.\n\nCurrently, it isn't mandatory for nodes to have the MemTrie feature enabled, but going forward with Resharding V3, all nodes will be required to have MemTrie enabled for resharding to happen successfully.\n\nFor resharding, we need an efficient way to split the MemTrie into two child tries based on the boundary account. This splitting happens at the epoch boundary when the new epoch is expected to have the two child shards. The requirements for MemTrie splitting are:\n\n* **Instantaneous Splitting**: MemTrie splitting needs to happen efficiently within the span of one block. The child tries need to be available for processing the next block in the new epoch.\n* **Compatibility with Stateless Validation**: We need to generate a proof that the MemTrie split proposed by the chunk producer is correct.\n* **State Witness Size Limits**: The proof generated for splitting the MemTrie needs to comply with the size limits of the state witness sent to all chunk validators. This prevents us from iterating through all trie keys for delayed receipts, etc.\n\nWith the Resharding V3 design, there's no protocol change to the structure of MemTries; however, implementation constraints required us to introduce the concept of a Frozen MemTrie. More details are in the [implementation](#state-storage---memtrie-1) section below.\n\nBased on these requirements, we developed an algorithm to efficiently split the parent trie into two child tries. Trie entries can be divided into three categories based on whether the trie keys have an `account_id` prefix and the total number of such trie keys. Splitting of these keys is handled differently.\n\n#### TrieKey with AccountID Prefix\n\nThis category includes most trie keys like `TrieKey::Account`, `TrieKey::ContractCode`, `TrieKey::PostponedReceipt`, etc. For these keys, we can efficiently split the trie based on the boundary account trie key. 
We only need to read all the intermediate nodes that form part of the split key. In the example below, if \"pass\" is the split key, we access all the nodes along the path of `root` ➔ `p` ➔ `a` ➔ `s` ➔ `s`, while not needing to touch other intermediate nodes like `o` ➔ `s` ➔ `t` in key \"post\". The accessed nodes form part of the state witness, as those are the only nodes needed by validators to verify that the resharding split is correct. This limits the size of the witness to effectively O(depth) of the trie for each trie key in this category.\n\n![Splitting Trie diagram](assets/nep-0568/NEP-SplitState.png)\n\n#### Singleton TrieKey\n\nThis category includes the trie keys `TrieKey::DelayedReceiptIndices`, `TrieKey::PromiseYieldIndices`, and `TrieKey::BufferedReceiptIndices`. These are just a single entry (or O(num_shard) entries) in the trie and are small enough to read and modify efficiently for the child tries.\n\n#### Indexed TrieKey\n\nThis category includes the trie keys `TrieKey::DelayedReceipt`, `TrieKey::PromiseYieldTimeout`, and `TrieKey::BufferedReceipt`. The number of entries for these keys can potentially be arbitrarily large, making it infeasible to iterate through all entries. In the pre-stateless validation world, where we didn't care about state witness size limits, for Resharding V2 we could iterate over all delayed receipts and split them into the respective child shards.\n\nFor Resharding V3, these are handled by one of two strategies:\n\n* **Duplication Across Child Shards**: `TrieKey::DelayedReceipt` and `TrieKey::PromiseYieldTimeout` are handled by duplicating entries across both child shards, as each entry could belong to either child shard. More details are in the [Delayed Receipts](#delayed-receipt-handling) and [Promise Yield](#promiseyield-receipt-handling) sections below.\n* **Assignment to Lower Index Child**: `TrieKey::BufferedReceipt` is independent of the `account_id` and can be sent to either of the child shards, but not both. 
We copy the buffered receipts and the associated metadata to the child shard with the lower index. More details are in the [Buffered Receipts](#buffered-receipt-handling) section below.\n\n### State Storage - Flat State\n\nFlat State is a collection of key-value pairs stored on disk, with each entry containing a reference to its `ShardId`. When splitting a shard, every item inside its Flat State must be correctly reassigned to one of the new child shards; however, due to technical limitations, such an operation cannot be completed instantaneously.\n\nFlat State's main purposes are allowing the creation of State Sync snapshots and the construction of Mem Tries. Fortunately, these two operations can be delayed until resharding is completed. Note also that with Mem Tries enabled, the chain can move forward even if the current status of Flat State is not in sync with the latest block.\n\nFor these reasons, the chosen strategy is to reshard Flat State in a long-running background task. The new shards' states must converge with their Mem Tries representation in a reasonable amount of time.\n\nSplitting a shard's Flat State is performed in multiple steps:\n\n1. A post-processing \"split\" task is created instantaneously during the last block of the old shard layout.\n2. The \"split\" task runs in parallel with the chain for a certain amount of time. Inside this routine, every key-value pair belonging to the shard being split (also called the parent shard) is copied into either the left or the right child Flat State. Entries linked to receipts are handled in a special way.\n3. Once the task is completed, the parent shard's Flat State is cleaned up. The child shards' Flat States have their state in sync with the last block of the old shard layout.\n4. Child shards must apply the delta changes from the first block of the new shard layout until the final block of the canonical chain. 
This operation is done in another background task to avoid slowdowns while processing blocks.\n5. Child shards' Flat States are now ready and can be used to take State Sync snapshots and to reload Mem Tries.\n\n### State Storage - State\n\nEach shard’s Trie is stored in the `State` column of the database, with keys prefixed by `ShardUId`, followed by a node's hash. This structure uniquely identifies each shard’s data. To avoid copying all entries under a new `ShardUId` during resharding, a mapping strategy allows child shards to access ancestor shard data without directly creating new entries.\n\nA naive approach to resharding would involve copying all `State` entries with a new `ShardUId` for a child shard, effectively duplicating the state. This method, while straightforward, is not feasible because copying a large state would take too much time. Resharding needs to appear complete between two blocks, so a direct copy would not allow the process to occur quickly enough.\n\nTo address this, Resharding V3 employs an efficient mapping strategy, using the `DBCol::ShardUIdMapping` column to link each child shard’s `ShardUId` to the closest ancestor’s `ShardUId` holding the relevant data. This allows child shards to access and update state data under the ancestor shard’s prefix without duplicating entries.\n\nInitially, `ShardUIdMapping` is empty, as existing shards map to themselves. During resharding, a mapping entry is added to `ShardUIdMapping`, pointing each child shard’s `ShardUId` to the appropriate ancestor. Mappings persist as long as any descendant shard references the ancestor’s data. 
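As an illustration, the lookup through this mapping can be sketched as follows. This is a simplified model: `ShardUId` is reduced to an integer and `resolve_storage_shard_uid` is an invented name, not nearcore's API.

```rust
use std::collections::HashMap;

// Simplified stand-in for nearcore's ShardUId (illustrative only).
type ShardUId = u64;

/// Resolve the ShardUId prefix under which a shard's State entries are
/// physically stored: follow the child -> ancestor mapping if an entry
/// exists; otherwise the shard stores data under its own id. This mirrors
/// the described behavior of the mapping column, with invented names.
fn resolve_storage_shard_uid(
    mapping: &HashMap<ShardUId, ShardUId>,
    shard_uid: ShardUId,
) -> ShardUId {
    *mapping.get(&shard_uid).unwrap_or(&shard_uid)
}
```

Reads and writes for a child shard then go through the resolved ancestor prefix, which is why no state needs to be copied at the resharding boundary.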
Once a node stops tracking all children and descendants of a shard, the entry for that shard can be removed, allowing its data to be garbage collected.

This mapping strategy enables efficient shard management during resharding events, supporting smooth transitions without altering storage structures directly.

#### Integration with Cold Storage (Archival Nodes)

Cold storage uses the same mapping strategy to manage shard state during resharding:

* When state data is migrated from hot to cold storage, it retains the parent shard’s `ShardUId` prefix, ensuring consistency with the mapping strategy.
* While copying data for the last block of the epoch where resharding occurred, the `DBCol::StateShardUIdMapping` column is copied into cold storage. This ensures that mappings are updated alongside the shard state data.
* These mappings are permanent in cold storage, aligning with its role in preserving historical state.

This approach minimizes complexity while maintaining consistency across hot and cold storage.

#### State cleanup

Since [Stateless Validation][NEP-509], tracking all shards is no longer required. Currently, shard cleanup (e.g., when a node stops tracking one shard and starts tracking another) is not implemented. With resharding, we also want to clean up the parent shard's state once we stop tracking all of its descendants. We propose a shard cleanup mechanism that also handles post-resharding cleanup.

When garbage collection removes the last block of an epoch from the canonical chain, we determine which shards were tracked during that epoch by examining the shards present in `TrieChanges` at that block. Similarly, we collect information on shards tracked in subsequent epochs, up to the present one. 
A shard State is removed only if:

* It was tracked in the old epoch (for which the last block has just been garbage collected).
* It was not tracked in later epochs, is not currently tracked, and will not be tracked in the next epoch.

To ensure compatibility with resharding, instead of checking tracked shards directly, we analyze the `ShardUId` prefixes they use. A parent shard's state is retained as long as it remains referenced in `DBCol::StateShardUIdMapping` by any descendant shard. Once all descendant shards are no longer tracked, we clean up the parent shard's state (along with its descendants) and remove all mappings to the parent from `DBCol::StateShardUIdMapping`.

#### Negative refcounts

Some trie keys, such as `TrieKey::DelayedReceipt`, are shared among child shards, but their corresponding State is not duplicated. The `DBCol::State` column uses reference counting, meaning that some data is stored only once, even if referenced by multiple child shards. As a result, removing the data can sometimes lead to negative refcounts.

To address this, we have modified the RocksDB `refcount_merge` behavior so that negative refcounts are clamped to zero. However, this is suboptimal, as it can lead to some State being leaked. Specifically, if two operations decrement the refcount for the same key, the RocksDB compaction process may merge them before they are applied, effectively canceling each other out. As a result, the key would never be removed from disk until state sync occurs.

This is a temporary solution, and we should follow up on it later.

### Stateless Validation

Since only a fraction of nodes track the split shard, it is necessary to prove the transition from the state root of the parent shard to the new state roots for the child shards to other validators. 
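A minimal sketch of the clamping behavior described under "Negative refcounts" above, and of the leak it can cause. The function name is ours; the real logic lives in the RocksDB merge operator.

```rust
/// Sketch of the modified refcount merge: deltas are combined left to
/// right and the running total is clamped at zero, so a merge can never
/// produce a negative refcount (names are ours, not RocksDB's).
fn merge_refcounts_clamped(deltas: &[i64]) -> i64 {
    deltas.iter().fold(0i64, |acc, d| (acc + d).max(0))
}
```

If two decrements for the same key are merged by compaction first, they clamp to zero instead of summing to minus two; when the original increment of two is merged later, the refcount ends up at two and the key is never deleted, which is exactly the leak described above.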
Without this proof, chunk producers for the split shard could collude and provide invalid state roots, potentially compromising the protocol, such as by minting tokens out of thin air.\n\nThe design ensures that generating and verifying this state transition is negligible in time compared to applying a chunk. As detailed in the [State Storage - MemTrie](#state-storage---memtrie) section, the generation and verification logic involves a constant number of trie lookups. Specifically, we implement the `retain_split_shard(boundary_account, RetainMode::{Left, Right})` method for the trie, which retains only the keys in the trie that belong to the left or right child shard. Internally, this method uses `retain_multi_range(intervals)`, where `intervals` is a vector of trie key intervals to retain. Each interval corresponds to a unique trie key type prefix byte (`Account`, `AccessKey`, etc.) and defines an interval from the empty key to the `boundary_account` key for the left shard, or from the `boundary_account` to infinity for the right shard.\n\nThe `retain_multi_range` method is recursive. Based on the current trie key prefix covered by the current node, it either:\n\n* Returns the node if the subtree is fully contained within an interval.\n* Returns an \"empty\" node if the subtree is outside all intervals.\n* Descends into all children and constructs a new node with children returned by recursive calls.\n\nThis implementation is agnostic to the trie storage used for retrieving nodes and applies to both MemTries and partial storage (state proof).\n\n* Calling it for MemTrie generates a proof and a new state root.\n* Calling it for partial storage generates a new state root. 
If the method does not fail with an error indicating that a node was not found in the proof, it means the proof was sufficient, and it remains to compare the generated state root with the one proposed by the chunk producer.\n\n### State Witness\n\nThe resharding state transition becomes one of the `implicit_transitions` in `ChunkStateWitness`. It must be validated between processing the last chunk (potentially missing) in the old epoch and the first chunk (potentially missing) in the new epoch. The `ChunkStateTransition` fields correspond to the resharding state transition: the `block_hash` stores the hash of the last block of the parent shard, the `base_state` stores the resharding proof, and the `post_state_root` stores the proposed state root.\n\nThis results in **two** state transitions corresponding to the same block hash. On the chunk producer side, the first transition is stored for the `(block_hash, parent_shard_uid)` pair, and the second one is stored for the `(block_hash, child_shard_uid)` pair.\n\nThe chunk validator, having all the blocks, identifies whether the implicit transition corresponds to applying a missing chunk or resharding independently. This is implemented in `get_state_witness_block_range`, which iterates from `state_witness.chunk_header.prev_block_hash()` to the block that includes the last chunk for the (parent) shard, if it exists.\n\nThen, in `validate_chunk_state_witness`, if the implicit transition corresponds to resharding, the chunk validator calls `retain_split_shard` and proves the state transition from the parent to the child shard.\n\n### State Sync\n\nChanges to the state sync protocol are not typically considered protocol changes requiring a version bump, as they concern downloading state that is not present locally rather than the rules for executing blocks and chunks. 
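Returning to the retain operation from the Stateless Validation section, the per-column intervals fed to `retain_multi_range` can be sketched as follows. This is a simplified model under assumed encodings: the prefix bytes and key layout of real `TrieKey`s differ.

```rust
/// Which child a retain operation is building (from the proposal).
enum RetainMode {
    Left,
    Right,
}

/// Sketch of the interval retained for one account-prefixed trie column:
/// the left child keeps [prefix, prefix ++ boundary_account) and the
/// right child keeps [prefix ++ boundary_account, next prefix).
/// The prefix byte here is illustrative, not the real TrieKey encoding.
fn retain_interval(prefix: u8, boundary_account: &str, mode: RetainMode) -> (Vec<u8>, Vec<u8>) {
    let mut boundary_key = vec![prefix];
    boundary_key.extend_from_slice(boundary_account.as_bytes());
    match mode {
        RetainMode::Left => (vec![prefix], boundary_key),
        RetainMode::Right => (boundary_key, vec![prefix + 1]),
    }
}
```

One such interval is built per trie key type (`Account`, `AccessKey`, etc.), and the recursive retain then keeps, drops, or descends into subtrees depending on how they overlap these intervals.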
However, it is helpful to outline some planned changes to state sync related to resharding.

When nodes sync state (either because they have fallen far behind the chain or because they will become a chunk producer for a new shard in a future epoch), they first identify a point in the chain to sync to. They then download the tries corresponding to that point in the chain and apply all chunks from that point until they are caught up. Currently, the tries downloaded initially correspond to the `prev_state_root` field of the last new chunk before the first block of the current epoch. This means the state downloaded is from some point in the previous epoch.

The proposed change is to move the initial state download point to one in the current epoch rather than the previous one. This keeps shard IDs consistent throughout the state sync logic, simplifies the resharding implementation, and reduces the size of the state to be downloaded. Suppose the previous epoch's shard `S` was split into shards `S'` and `S''` in the current epoch, and a chunk producer that was not tracking shard `S` or any of its children in the current epoch will become a chunk producer for `S'` in the next epoch. With the old state sync algorithm, that chunk producer would download the pre-split state for shard `S`. Then, when it is done, it would need to perform the resharding that all other nodes had already done. While this is not a correctness issue, downloading only the state for shard `S'` simplifies the implementation and reduces the amount of data transferred, since the state of `S'` is much smaller.

### Cross-Shard Traffic

When the shard layout changes, it is crucial to handle cross-shard traffic correctly, especially in the presence of missing chunks. Care must be taken to ensure that no receipt is lost or duplicated. 
There are two important receipt types that need to be considered: outgoing receipts and incoming receipts.\n\n*Note: This proposal reuses the approach taken by Resharding V2.*\n\n#### Outgoing Receipts\n\nEach new chunk in a shard contains a list of outgoing receipts generated during the processing of the previous chunk in that shard.\n\nIn cases where chunks are missing at the resharding boundary, both child shards could theoretically include the outgoing receipts from their shared ancestor chunk. However, this naive approach would lead to the duplication of receipts, which must be avoided.\n\nThe proposed solution is to reassign the outgoing receipts from the parent chunk to only one of the child shards. Specifically, the child shard with the lower shard ID will claim all outgoing receipts from the parent, while the other child will receive none. This ensures that all receipts are processed exactly once.\n\n#### Incoming Receipts\n\nTo process a chunk in a shard, it is necessary to gather all outgoing receipts from other shards that are targeted at this shard. These receipts must then be included as incoming receipts.\n\nIn the presence of missing chunks, the new chunk must collect receipts from all previous blocks, spanning the period since the last new chunk in this shard. This range may cross the resharding boundary.\n\nWhen this occurs, the chunk must also consider receipts that were previously targeted at its parent shard. However, it must filter these receipts to include only those where the recipient lies within the current shard, discarding those where the recipient belongs to the sibling shard in the new shard layout. This filtering process ensures that every receipt is processed exactly once and in the correct shard.\n\n### Delayed Receipt Handling\n\nThe delayed receipts queue contains all incoming receipts that could not be executed as part of a block due to resource constraints like compute cost, gas limits, etc. 
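The filtering described under "Incoming Receipts" above can be sketched as follows. The split rule is simplified to a plain string comparison with the boundary account; names are ours, not nearcore's.

```rust
/// Keep only the receipts (previously targeted at the parent shard) whose
/// recipient falls in this child's account range: the left child owns
/// accounts below `boundary_account` in this simplified model.
fn filter_receipts_for_child(
    receipts: Vec<(String, u64)>, // (recipient account, receipt payload)
    boundary_account: &str,
    is_left_child: bool,
) -> Vec<(String, u64)> {
    receipts
        .into_iter()
        .filter(|(recipient, _)| (recipient.as_str() < boundary_account) == is_left_child)
        .collect()
}
```

Because each recipient lies on exactly one side of the boundary, the two children's filtered sets partition the parent's incoming receipts, which is what guarantees exactly-once processing.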
The entries in the delayed receipt queue can belong to any of the accounts within the shard. During a resharding event, we ideally need to split the delayed receipts across both child shards according to the associated `account_id` with the receipt.\n\nThe singleton trie key `DelayedReceiptIndices` holds the `start_index` and `end_index` associated with the delayed receipt entries for the shard. The trie key `DelayedReceipt { index }` contains the actual delayed receipt associated with some `account_id`. These are processed in a FIFO queue order during chunk execution.\n\nNote that the delayed receipt trie keys do not have the `account_id` prefix. In Resharding V2, we followed the trivial solution of iterating through all the delayed receipt queue entries and assigning them to the appropriate child shard. However, due to constraints on the state witness size limits and instant resharding, this approach is no longer feasible for Resharding V3.\n\nFor Resharding V3, we decided to handle the resharding by duplicating the entries of the delayed receipt queue across both child shards. This is beneficial from the perspective of state witness size and instant resharding, as we only need to access the delayed receipt queue root entry in the trie. However, it breaks the assumption that all delayed receipts in a shard belong to the accounts within that shard.\n\nTo resolve this, with the new protocol version, we changed the implementation of the runtime to discard executing delayed receipts that don't belong to the `account_id` on that shard.\n\nNote that no delayed receipts are lost during resharding, as all receipts get executed exactly once based on which of the child shards the associated `account_id` belongs to.\n\n### PromiseYield Receipt Handling\n\nPromise Yield was introduced as part of NEP-519 to enable deferring replies to the caller function while the response is being prepared. 
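The duplicated-queue behavior described under "Delayed Receipt Handling" can be modeled with a small sketch; the types and method are invented, and the ownership rule is again reduced to a boundary comparison.

```rust
use std::collections::HashMap;

/// Minimal model of a duplicated delayed-receipt queue on one child shard.
/// Entries are dequeued FIFO via start_index/end_index; a dequeued receipt
/// executes only if its account belongs to this shard, and is silently
/// discarded otherwise (it executes on the sibling instead).
struct DelayedQueue {
    start_index: u64,
    end_index: u64,
    receipts: HashMap<u64, String>, // index -> receipt's account_id
}

impl DelayedQueue {
    /// Returns None when the queue is empty; Some(Some(account)) when the
    /// receipt should execute here; Some(None) when it is discarded.
    fn pop(&mut self, boundary_account: &str, is_left_child: bool) -> Option<Option<String>> {
        if self.start_index >= self.end_index {
            return None;
        }
        let account = self.receipts.remove(&self.start_index)?;
        self.start_index += 1;
        if (account.as_str() < boundary_account) == is_left_child {
            Some(Some(account))
        } else {
            Some(None)
        }
    }
}
```

Every receipt is thus executed exactly once across the two children, even though both queues contain every entry.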
As part of the Promise Yield implementation, it introduced three new trie keys: `PromiseYieldIndices`, `PromiseYieldTimeout`, and `PromiseYieldReceipt`.\n\n* `PromiseYieldIndices`: This is a singleton key that holds the `start_index` and `end_index` of the keys in `PromiseYieldTimeout`.\n* `PromiseYieldTimeout { index }`: Along with the `receiver_id` and `data_id`, this stores the `expires_at` block height until which we need to wait to receive a response.\n* `PromiseYieldReceipt { receiver_id, data_id }`: This is the receipt created by the account.\n\nAn account can call the `promise_yield_create` host function that increments the `PromiseYieldIndices` along with adding a new entry into the `PromiseYieldTimeout` and `PromiseYieldReceipt`.\n\nThe `PromiseYieldTimeout` is sorted by time of creation and has an increasing value of `expires_at` block height. In the runtime, we iterate over all the expired receipts and create a blank receipt to resolve the entry in `PromiseYieldReceipt`.\n\nThe account can call the `promise_yield_resume` host function multiple times, and if this is called before the expiry period, we use this to resolve the promise yield receipt. Note that the implementation allows for multiple resolution receipts to be created, including the expiry receipt, but only the first one is used for the actual resolution of the promise yield receipt.\n\nWe use this implementation quirk to facilitate the resharding implementation. 
The resharding strategy for the three trie keys is:\n\n* **Duplicate Across Both Child Shards**:\n  * `PromiseYieldIndices`\n  * `PromiseYieldTimeout { index }`\n* **Split Based on Prefix**:\n  * `PromiseYieldReceipt { receiver_id, data_id }`: Since this key has the `account_id` prefix, we can split the entries across both child shards based on the prefix.\n\nAfter duplication of the `PromiseYieldIndices` and `PromiseYieldTimeout`, when the entries of `PromiseYieldTimeout` eventually get dequeued at the expiry height, the following happens:\n\n* If the promise yield receipt associated with the dequeued entry **is not** part of the child trie, we create a timeout resolution receipt, and it gets ignored.\n* If the promise yield receipt associated with the dequeued entry **is** part of the child trie, the promise yield implementation continues to work as expected.\n\nThis means we don't have to make any special changes in the runtime for handling the resharding of promise yield receipts.\n\n### Buffered Receipt Handling\n\nBuffered Receipts were introduced as part of NEP-539 for cross-shard congestion control. As part of the implementation, it introduced two new trie keys: `BufferedReceiptIndices` and `BufferedReceipt`.\n\n* `BufferedReceiptIndices`: This is a singleton key that holds the `start_index` and `end_index` of the keys in `BufferedReceipt` for each `shard_id`.\n* `BufferedReceipt { receiving_shard, index }`: This holds the actual buffered receipt that needs to be sent to the `receiving_shard`.\n\nNote that the targets of the buffered receipts belong to external shards, and during a resharding event, we would need to handle both the set of buffered receipts in the parent shard and the set of buffered receipts in other shards that target the parent shard.\n\n#### Handling Buffered Receipts in Parent Shard\n\nSince buffered receipts target external shards, it is acceptable to assign buffered receipts to either of the child shards. 
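The timeout-dequeue behavior described under "PromiseYield Receipt Handling" can be sketched as follows; a `HashSet` stands in for the child trie, and the function name is ours.

```rust
use std::collections::HashSet;

/// Sketch of what happens when a duplicated PromiseYieldTimeout entry
/// expires on a child shard: the timeout resolves a promise-yield receipt
/// only if that receipt is present in this child's trie; otherwise the
/// resolution receipt has nothing to act on and is effectively ignored.
fn on_timeout_expired(
    yield_receipts: &mut HashSet<(String, String)>, // (receiver_id, data_id)
    receiver_id: &str,
    data_id: &str,
) -> bool {
    // true = receipt resolved on this child; false = timeout ignored here
    yield_receipts.remove(&(receiver_id.to_string(), data_id.to_string()))
}
```

This is why duplicating `PromiseYieldTimeout` is safe: the duplicate expiry on the wrong child resolves nothing, while the copy on the owning child behaves exactly as before resharding.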
For simplicity, we assign all the buffered receipts to the child shard with the lower index, i.e., copy `BufferedReceiptIndices` and `BufferedReceipt` to the child shard with the lower index while keeping `BufferedReceiptIndices` empty for the child shard with the higher index.\n\n#### Handling Buffered Receipts that Target Parent Shard\n\nThis scenario is slightly more complex. At the boundary of resharding, we may have buffered receipts created before the resharding event targeting the parent shard. At the same time, we may also have buffered receipts generated after the resharding event that directly target the child shard. The receipts from both the parent and child buffered receipts queue need to be appropriately sent to the child shard depending on the `account_id`, while respecting the outgoing limits calculated by the bandwidth scheduler and congestion control.\n\nThe flow of handling buffered receipts before Resharding V3 is as follows:\n\n1. Calculate `outgoing_limit` for each shard.\n2. For each shard, try to forward as many in-order receipts as possible from the buffer while respecting `outgoing_limit`.\n3. Apply chunk and `try_forward` newly generated receipts. The newly generated receipts are forwarded if we have enough limit; otherwise, they are put in the buffered queue.\n\nThe solution for Resharding V3 is to first try draining the parent queue before moving on to draining the child queue. The modified flow would look like this:\n\n1. Calculate `outgoing_limit` for both child shards using congestion info from the parent.\n2. Forwarding receipts:\n    * First, try to forward as many in-order receipts as possible from the parent shard buffer. Stop either when we drain the parent buffer or when we exhaust the `outgoing_limit` of either of the child shards.\n    * Next, try to forward as many in-order receipts as possible from the child shard buffer.\n3. 
Applying chunk and `try_forward` newly generated receipts remains the same.\n\nThe minor downside to this approach is that we don't have guarantees between the order of receipt generation and the order of receipt forwarding, but that's already the case today with buffered receipts.\n\n### Congestion Control\n\nAlong with having buffered receipts, each chunk also publishes a `CongestionInfo` to the chunk header that contains information about the congestion of the shard during block processing.\n\n```rust\npub struct CongestionInfoV1 {\n    /// Sum of gas in currently delayed receipts.\n    pub delayed_receipts_gas: u128,\n    /// Sum of gas in currently buffered receipts.\n    pub buffered_receipts_gas: u128,\n    /// Size of borsh serialized receipts stored in state because they\n    /// were delayed, buffered, postponed, or yielded.\n    pub receipt_bytes: u64,\n    /// If fully congested, only this shard can forward receipts.\n    pub allowed_shard: u16,\n}\n```\n\nAfter a resharding event, it is essential to properly initialize the congestion info for the child shards. Here is how each field is handled:\n\n#### `delayed_receipts_gas`\n\nSince the resharding strategy for delayed receipts is to duplicate them across both child shards, we simply copy the value of `delayed_receipts_gas` to both shards.\n\n#### `buffered_receipts_gas`\n\nGiven that the strategy for buffered receipts is to assign all buffered receipts to the lower index child, we copy the `buffered_receipts_gas` from the parent to the lower index child and set `buffered_receipts_gas` to zero for the higher index child.\n\n#### `receipt_bytes`\n\nThis field is more complex as it includes information from both delayed receipts and buffered receipts. To calculate this field accurately, we need to know the distribution of `receipt_bytes` across both delayed receipts and buffered receipts. The current solution is to store metadata about the total `receipt_bytes` for buffered receipts in the trie. 
This way, we have the following:

* For the child with the lower index, `receipt_bytes` is the sum of both delayed receipts bytes and buffered receipts bytes, hence `receipt_bytes = parent.receipt_bytes`.
* For the child with the higher index, `receipt_bytes` is just the bytes from delayed receipts, hence `receipt_bytes = parent.receipt_bytes - parent.buffered_receipt_bytes`.

#### `allowed_shard`

This field is calculated using a round-robin mechanism, which can be independently determined for both child shards. Since we are changing the [ShardId semantics](#shardid-semantics), we need to update the implementation to use `ShardIndex` instead of `ShardId`, where `ShardIndex` is simply the mapping of each `shard_id` to a contiguous index in `[0, num_shards)`.

### ShardId Semantics

Currently, shard IDs are represented as numbers within the range `[0, n)`, where `n` is the total number of shards. These shard IDs are sorted in the same order as the account ID ranges assigned to them. While this approach is straightforward, it complicates resharding operations, particularly when splitting a shard in the middle of the range. Such a split requires reindexing all subsequent shards with higher IDs, adding complexity to the process.

In this NEP, we propose updating the ShardId semantics to allow for arbitrary identifiers. Although ShardIds will remain integers, they will no longer be restricted to the `[0, n)` range, and they may appear in any order. The only requirement is that each ShardId must be unique. In practice, during resharding, the ID of a parent shard will be removed from the ShardLayout, and the new child shards will be assigned unique IDs: `max(shard_ids) + 1` and `max(shard_ids) + 2`.

## Reference Implementation

### Overview
<!-- markdownlint-disable MD029 -->

1. 
Any node tracking a shard must determine if it should split the shard in the last block before the epoch where resharding should happen.\n\n```pseudocode\nshould_split_shard(block, shard_id):\n    shard_layout = epoch_manager.shard_layout(block.epoch_id())\n    next_shard_layout = epoch_manager.shard_layout(block.next_epoch_id())\n    if epoch_manager.is_next_block_epoch_start(block) && \n        shard_layout != next_shard_layout &&\n        next_shard_layout.shard_split_map().contains(shard_id):\n        return Some(next_shard_layout.split_shard_event(shard_id))\n    return None\n```\n\n2. This logic is triggered during block post-processing, which means that the block is valid and is being persisted to disk.\n\n```pseudocode\non chain.postprocess_block(block):\n    next_shard_layout = epoch_manager.shard_layout(block.next_epoch_id())\n    if let Some(split_shard_event) = should_split_shard(block, shard_id):\n        resharding_manager.split_shard(split_shard_event)\n```\n\n3. The event triggers changes in all state storage components.\n\n```pseudocode\non resharding_manager.split_shard(split_shard_event, next_shard_layout):\n    set State mapping\n    start FlatState resharding\n    process MemTrie resharding:\n        freeze MemTrie, create HybridMemTries\n        for each child shard:\n            mem_tries[parent_shard].retain_split_shard(boundary_account)\n```\n\n4. `retain_split_shard` leaves only keys in the trie that belong to the left or right child shard. It retains trie key intervals for the left or right child as described above. Simultaneously, the proof is generated. In the end, we get a new state root, hybrid MemTrie corresponding to the child shard, and the proof. Proof is saved as state transition for pair `(block, new_shard_uid)`.\n\n5. The proof is sent as one of the implicit transitions in `ChunkStateWitness`.\n\n6. 
On the chunk validation path, the chunk validator determines if resharding is part of the state transition using the same `should_split_shard` condition.\n\n7. It calls `Trie(state_transition_proof).retain_split_shard(boundary_account)`, which should succeed if the proof is sufficient and generates a new state root.\n\n8. Finally, it checks that the new state root matches the state root proposed in `ChunkStateWitness`. If the whole `ChunkStateWitness` is valid, then the chunk validator sends an endorsement, which also endorses the resharding.\n\n### State Storage - MemTrie\n\nThe current implementation of MemTrie uses a memory pool (`STArena`) to allocate and deallocate nodes, with internal pointers in this pool referencing child nodes. Unlike the State representation of the Trie, MemTries do not work with node hashes but with internal memory pointers directly. Additionally, MemTries are not thread-safe, and one MemTrie exists per shard.\n\nAs described in the [MemTrie](#state-storage---memtrie) section above, we need an efficient way to split the MemTrie into two child MemTries within the span of one block. The challenge lies in the current implementation of MemTrie, which is not thread-safe and cannot be shared across two shards.\n\nA naive approach to creating two MemTries for the child shards would involve iterating through all entries of the parent MemTrie and populating these values into the child MemTries. However, this method is prohibitively time-consuming.\n\nThe solution to this problem is to introduce the concept of a Frozen MemTrie (with a `FrozenArena`), which is a cloneable, read-only, thread-safe snapshot of a MemTrie. By calling the `freeze` method on an existing MemTrie, we convert it into a Frozen MemTrie. 
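A toy model of why freezing and cloning are cheap follows. The types are invented miniatures: nearcore's `STArena`/`FrozenArena` manage trie nodes, not raw bytes.

```rust
use std::sync::Arc;

// Invented miniature of a mutable node arena.
struct Arena {
    nodes: Vec<u8>,
}

/// A frozen arena is a shared, read-only snapshot; cloning it only bumps
/// a reference count, so handing one to each child shard is cheap.
#[derive(Clone)]
struct FrozenArena {
    nodes: Arc<Vec<u8>>,
}

impl Arena {
    /// Freezing consumes the mutable arena, as in the proposal: no further
    /// allocation or deallocation is possible through it afterwards.
    fn freeze(self) -> FrozenArena {
        FrozenArena { nodes: Arc::new(self.nodes) }
    }
}
```

The reference-counted snapshot is also what makes the frozen trie thread-safe to share between the two child shards.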
This process consumes the original MemTrie, making it no longer available for node allocation and deallocation.\n\nAlong with `FrozenArena`, we also introduce a `HybridArena`, which effectively combines a base `FrozenArena` with a top layer of `STArena` that supports allocating and deallocating new nodes into the MemTrie. Newly allocated nodes can reference nodes in the `FrozenArena`. This Hybrid MemTrie serves as a temporary MemTrie while the flat storage is being constructed in the background.\n\nWhile Frozen MemTries facilitate instant resharding, they come at the cost of memory consumption. Once a MemTrie is frozen, it continues to consume the same amount of memory as it did at the time of freezing, as it does not support memory deallocation. If a node tracks only one of the child shards, a Frozen MemTrie would continue to use the same amount of memory as the parent trie. Therefore, Hybrid MemTries are only a temporary solution, and we rebuild the MemTrie for the children after resharding is completed.\n\nAdditionally, a node would need to support twice the memory footprint of a single trie. After resharding, there would be two copies of the trie in memory: one from the temporary Hybrid MemTrie used for block production and another from the background MemTrie under construction. Once the background MemTrie is fully constructed and caught up with the latest block, we perform an in-place swap of the Hybrid MemTrie with the new child MemTrie and deallocate the memory from the Hybrid MemTrie.\n\nDuring a resharding event at the epoch boundary, when we need to split the parent shard into two child shards, we follow these steps:\n\n1. **Freeze the Parent MemTrie**: Create a read-only frozen arena representing a snapshot of the state at the time of freezing (after post-processing the last block of the epoch). The parent MemTrie is no longer required in runtime going forward.\n2. 
**Clone the Frozen MemTrie**: Clone the Frozen MemTrie cheaply for both child MemTries to use. This does not clone the parent arena's memory but merely increases the reference count.
3. **Create Hybrid MemTries for Each Child**: Create a new MemTrie with `HybridArena` for each child. The base of the MemTrie is the read-only `FrozenArena`, while all new node allocations occur in a dedicated `STArena` memory pool for each child MemTrie. This temporary MemTrie is used while Flat Storage is being built in the background.
4. **Rebuild MemTrie**: Once Flat State resharding is completed, we use the child's Flat State to load a fresh MemTrie and catch up to the latest block.
5. **Swap and Clean Up**: After the new child MemTrie has caught up to the latest block, we perform an in-place swap in the client and discard the Hybrid MemTrie.

![Hybrid MemTrie diagram](assets/nep-0568/NEP-HybridMemTrie.png)

### State Storage - Flat State

Resharding the Flat State is a time-consuming operation that runs in parallel with block processing for several block heights. Therefore, several important aspects must be considered during implementation:

* **Flat State's Status Persistence**: Flat State's status should be resilient to application crashes.
* **Correct Block Height**: The parent shard's Flat State should be split at the correct block height.
* **Convergence with Mem Trie**: New shards' Flat States should eventually converge to the same representation the chain uses to process blocks (MemTries).
* **Chain Forks Handling**: Resharding should work correctly in the presence of chain forks.
* **Retired Shards Cleanup**: Retired shards should be cleaned up.

Note that the Flat States of the newly created shards will not be available until resharding is completed. 
This is acceptable because the temporary MemTries are built instantly and can satisfy all block processing needs.\n\nThe main component responsible for carrying out resharding on Flat State is the [FlatStorageResharder](https://github.com/near/nearcore/blob/f4e9dd5d6e07089dfc789221ded8ec83bfe5f6e8/chain/chain/src/flat_storage_resharder.rs#L68).\n\n#### Flat State's Status Persistence\n\nEvery shard's Flat State has a status associated with it and stored in the database, called `FlatStorageStatus`. We propose extending the existing object by adding a new enum variant named `FlatStorageStatus::Resharding`. This approach has two benefits. First, the progress of any Flat State resharding is persisted to disk, making the operation resilient to a node crash or restart. Second, resuming resharding on node restart shares the same code path as Flat State creation (see `FlatStorageShardCreator`), reducing code duplication.\n\n`FlatStorageStatus` is updated at every committable step of resharding. The commit points are as follows:\n\n* Beginning of resharding, at the last block of the old shard layout.\n* Scheduling of the \"split parent shard\" task.\n* Execution, cancellation, or failure of the \"split parent shard\" task.\n* Execution or failure of any \"child catchup\" task.\n\n#### Splitting a Shard's Flat State\n\nWhen the shard layout changes at the end of an epoch, we identify a **resharding block** corresponding to the last block of the current epoch. A task to split the parent shard's Flat State is scheduled to occur after the resharding block becomes final. The finality condition is necessary to avoid splitting on a block that might be excluded from the canonical chain, which would lock the node into an erroneous state.\n\nInside the split task, we iterate over the Flat State and copy each element into either child. 
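The copy loop just described can be sketched as follows. Keys are simplified to bare account ids; the real task also duplicates shard-wide keys (delayed receipt indices and the like, per the assignment rules) and processes entries in batches.

```rust
/// Sketch of the split task: every key/value of the parent's Flat State
/// is routed to exactly one child by comparing the account id embedded in
/// the key against the boundary account (simplified model, invented name).
fn split_flat_state(
    parent: &[(String, u64)],
    boundary_account: &str,
) -> (Vec<(String, u64)>, Vec<(String, u64)>) {
    let mut left = Vec::new();
    let mut right = Vec::new();
    for (key, value) in parent {
        if key.as_str() < boundary_account {
            left.push((key.clone(), *value));
        } else {
            right.push((key.clone(), *value));
        }
    }
    (left, right)
}
```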
This routine is performed in batches to lessen the performance impact on the node.\n\nFinally, if the split completes successfully, the parent shard's Flat State is removed from the database, and the child shards' Flat States enter a catch-up phase.\n\nOne current technical limitation is that, upon a node crash or restart, the \"split parent shard\" task will start copying all elements again from the beginning.\n\nA reference implementation of splitting a Flat State can be found in [FlatStorageResharder::split_shard_task](https://github.com/near/nearcore/blob/fecce019f0355cf89b63b066ca206a3cdbbdffff/chain/chain/src/flat_storage_resharder.rs#L295).\n\n#### Assigning Values from Parent to Child Shards\n\nKey-value pairs in the parent shard's Flat State are inherited by children according to the following rules:\n\n**Elements inherited by the child shard tracking the `account_id` contained in the key:**\n\n* `ACCOUNT`\n* `CONTRACT_DATA`\n* `CONTRACT_CODE`\n* `ACCESS_KEY`\n* `RECEIVED_DATA`\n* `POSTPONED_RECEIPT_ID`\n* `PENDING_DATA_COUNT`\n* `POSTPONED_RECEIPT`\n* `PROMISE_YIELD_RECEIPT`\n\n**Elements inherited by both children:**\n\n* `DELAYED_RECEIPT_OR_INDICES`\n* `PROMISE_YIELD_INDICES`\n* `PROMISE_YIELD_TIMEOUT`\n* `BANDWIDTH_SCHEDULER_STATE`\n\n**Elements inherited only by the lowest index child:**\n\n* `BUFFERED_RECEIPT_INDICES`\n* `BUFFERED_RECEIPT`\n\n#### Bringing Child Shards Up to Date with the Chain's Head\n\nChild shards' Flat States build a complete view of their content at the height of the resharding block sometime during the new epoch after resharding. At that point, many new blocks have already been processed, and these will most likely contain updates for the new shards. A catch-up step is necessary to apply all Flat State deltas accumulated until now.\n\nThis phase of resharding does not require extra steps to handle chain forks. The catch-up task does not start until the parent shard splitting is done, ensuring the resharding block is final. 
Additionally, Flat State deltas can handle forks automatically.\n\nThe catch-up task commits batches of Flat State deltas to the database. If the application crashes or restarts, the task will resume from where it left off.\n\nOnce all Flat State deltas are applied, the child shard's status is changed to `Ready`, and cleanup of Flat State delta leftovers is performed.\n\nA reference implementation of the catch-up task can be found in [FlatStorageResharder::shard_catchup_task](https://github.com/near/nearcore/blob/fecce019f0355cf89b63b066ca206a3cdbbdffff/chain/chain/src/flat_storage_resharder.rs#L564).\n\n#### Failure of Flat State Resharding\n\nIn the current proposal, any failure during Flat State resharding is considered non-recoverable. `neard` will attempt resharding again on restart, but no automatic recovery is implemented.\n\n### State Storage - State Mapping\n\nTo enable efficient shard state management during resharding, Resharding V3 uses the `DBCol::ShardUIdMapping` column. This mapping allows child shards to reference ancestor shard data, avoiding the need for immediate duplication of state entries.\n\n#### Mapping Application in Adapters\n\nThe core of the mapping logic is applied in `TrieStoreAdapter` and `TrieStoreUpdateAdapter`, which act as layers over the general `Store` interface. Here’s a breakdown of the key functions involved:\n\n* **Key Resolution**:\n\n  The `get_key_from_shard_uid_and_hash` function is central to determining the correct `ShardUId` for state access. 
At a high level, operations use the child shard's `ShardUId`, but within this function, the `DBCol::ShardUIdMapping` column is checked to determine if an ancestor `ShardUId` should be used instead.\n\n  ```rust\n  fn get_key_from_shard_uid_and_hash(\n      store: &Store,\n      shard_uid: ShardUId,\n      hash: &CryptoHash,\n  ) -> [u8; 40] {\n      let mapped_shard_uid = store\n          .get_ser::<ShardUId>(DBCol::StateShardUIdMapping, &shard_uid.to_bytes())\n          .expect(\"get_key_from_shard_uid_and_hash() failed\")\n          .unwrap_or(shard_uid);\n      let mut key = [0; 40];\n      key[0..8].copy_from_slice(&mapped_shard_uid.to_bytes());\n      key[8..].copy_from_slice(hash.as_ref());\n      key\n  }\n  ```\n\n  This function first attempts to retrieve a mapped ancestor `ShardUId` from `DBCol::ShardUIdMapping`. If no mapping exists, it defaults to the provided child `ShardUId`. This resolved `ShardUId` is then combined with the `node_hash` to form the final key used in `State` column operations.\n\n* **State Access Operations**:\n\n  The `TrieStoreAdapter` and `TrieStoreUpdateAdapter` use `get_key_from_shard_uid_and_hash` to correctly resolve the key for both reads and writes. 
Example methods include:\n\n  ```rust\n  // In TrieStoreAdapter\n  pub fn get(&self, shard_uid: ShardUId, hash: &CryptoHash) -> Result<Arc<[u8]>, StorageError> {\n      let key = get_key_from_shard_uid_and_hash(self.store, shard_uid, hash);\n      self.store.get(DBCol::State, &key)\n  }\n\n  // In TrieStoreUpdateAdapter\n  pub fn increment_refcount_by(\n      &mut self,\n      shard_uid: ShardUId,\n      hash: &CryptoHash,\n      data: &[u8],\n      increment: NonZero<u32>,\n  ) {\n      let key = get_key_from_shard_uid_and_hash(self.store, shard_uid, hash);\n      self.store_update.increment_refcount_by(DBCol::State, key.as_ref(), data, increment);\n  }\n  ```\n\n  The `get` function retrieves data using the resolved `ShardUId` and key, while `increment_refcount_by` manages reference counts, ensuring correct tracking even when accessing data under an ancestor shard.\n\n#### Mapping Retention and Cleanup\n\nMappings in `DBCol::ShardUIdMapping` persist as long as any descendant relies on an ancestor’s data. To manage this, the `set_shard_uid_mapping` function in `TrieStoreUpdateAdapter` adds a new mapping during resharding:\n\n```rust\nfn set_shard_uid_mapping(&mut self, child_shard_uid: ShardUId, parent_shard_uid: ShardUId) {\n    let mapped_parent_shard_uid = store\n        .get_ser::<ShardUId>(DBCol::StateShardUIdMapping, &parent_shard_uid.to_bytes())\n        .expect(\"set_shard_uid_mapping() failed\")\n        .unwrap_or(parent_shard_uid);\n    self.store_update.set(\n        DBCol::StateShardUIdMapping,\n        child_shard_uid.to_bytes().as_ref(),\n        &borsh::to_vec(&mapped_parent_shard_uid).expect(\"Borsh serialize cannot fail\"),\n    )\n}\n```\n\nWhen a node stops tracking all descendants of a shard, garbage collection will eventually clear the last block of the last epoch in which the last descendant was tracked. 
The descendant will then appear in the result of:\n\n```rust\nfn get_potential_shards_for_cleanup(..., last_block_of_gced_epoch) -> Result<Vec<ShardUId>> {\n    let mut tracked_shards = vec![];\n    for shard_uid in shard_layout.shard_uids() {\n        if chain_store_update\n            .store()\n            .exists(DBCol::TrieChanges, &get_block_shard_uid(&last_block_of_gced_epoch, &shard_uid))?\n        {\n            tracked_shards.push(shard_uid);\n        }\n    }\n    Ok(tracked_shards)\n}\n```\n\nThen, `gc_state()` is called, mapping the descendant `ShardUId` to the parent `ShardUId`, making the parent shard a candidate for cleanup. We then detect that since `gced_epoch`, the parent `ShardUId` has not been used as a database key prefix. As a result, we can safely remove the state under this prefix (including the parent and all descendants) along with the associated entries from `DBCol::StateShardUIdMapping`.\n\n```rust\nfn gc_state(potential_shards_for_cleanup, gced_epoch, shard_tracker, store_update) {\n    let mut potential_shards_to_cleanup: HashSet<ShardUId> = potential_shards_for_cleanup\n        .iter()\n        .map(|shard_uid| get_shard_uid_mapping(&store, *shard_uid))\n        .collect();\n\n    for epoch in gced_epoch + 1..current_epoch {\n        let shard_layout = get_shard_layout(epoch);\n        let last_block_of_epoch = get_last_block_of_epoch(epoch);\n        for shard_uid in shard_layout.shard_uids() {\n            if !store\n                .exists(DBCol::TrieChanges, &get_block_shard_uid(last_block_of_epoch, &shard_uid))?\n            {\n                continue;\n            }\n            let mapped_shard_uid = get_shard_uid_mapping(&store, shard_uid);\n            potential_shards_to_cleanup.remove(&mapped_shard_uid);\n        }\n    }\n\n    for shard_uid in shard_tracker.get_shards_tracks_this_or_next_epoch() {\n        let mapped_shard_uid = get_shard_uid_mapping(&store, shard_uid);\n        potential_shards_to_cleanup.remove(&mapped_shard_uid);\n    }\n    let shards_to_cleanup = potential_shards_to_cleanup;\n\n    for kv in store.iter_ser::<ShardUId>(DBCol::StateShardUIdMapping) {\n        let (child_shard_uid, parent_shard_uid) = kv?;\n        if shards_to_cleanup.contains(&parent_shard_uid) {\n            store_update.delete(DBCol::StateShardUIdMapping, &child_shard_uid);\n        }\n    }\n    for shard_uid_prefix in shards_to_cleanup {\n        store_update.delete_shard_uid_prefixed_state(shard_uid_prefix);\n    }\n}\n```\n\nFor archival nodes, mappings are retained permanently to ensure access to the historical state of all shards.\n\n### State Sync\n\nThe state sync algorithm defines a `sync_hash` used in many parts of the implementation. This is always the first block of the current epoch, which the node should be aware of once it has synced headers to the current point in the chain. A node performing state sync first makes a request for a `ShardStateSyncResponseHeader` corresponding to that `sync_hash` and the Shard ID of the shard it's interested in. Among other things, this header includes the last new chunk before `sync_hash` in the shard and a `StateRootNode` with a hash equal to that chunk's `prev_state_root` field. Then the node downloads the nodes of the trie with that `StateRootNode` as its root. Afterwards, it applies new chunks in the shard until it's caught up.\n\nAs described above, the state we download is the state in the shard after applying the second-to-last new chunk before `sync_hash`, which belongs to the previous epoch (since `sync_hash` is the first block of the new epoch). To move the point in the chain of the initial state download to the current epoch, we could either move the `sync_hash` forward or change the state sync protocol (perhaps changing the meaning of the `sync_hash` and the fields of the `ShardStateSyncResponseHeader`, or somehow changing these structures more significantly). 
The former is an easier first implementation, as it would not require any changes to the state sync protocol other than to the expected `sync_hash`. We would just need to move the `sync_hash` to a point far enough along in the chain so that the `StateRootNode` in the `ShardStateSyncResponseHeader` refers to the state in the current epoch. Currently, we plan on implementing it that way, but we may revisit making more extensive changes to the state sync protocol later.\n\n## Security Implications\n\n### Fork Handling\n\nIn theory, it's possible that more than one candidate block finishes the last epoch with the old shard layout. For previous implementations, it didn't matter because the resharding decision was made at the beginning of the previous epoch. Now, the decision is made at the epoch boundary, so the new implementation handles this case as well.\n\n### Proof Validation\n\nWith single shard tracking, nodes can't independently validate new state roots after resharding because they don't have the state of the shard being split. That's why we generate resharding proofs, whose generation and validation may be a new weak point. However, `retain_split_shard` is equivalent to a constant number of lookups in the trie, so its overhead is negligible. Even if the proof is invalid, it will only imply that `retain_split_shard` fails early, similar to other state transitions.\n\n## Alternatives\n\nIn the solution space that would keep the blockchain stateful, we also considered an alternative to handle resharding through the mechanism of `Receipts`. 
The workflow would be to:\n\n* Create an empty `target_shard`.\n* Require `source_shard` chunk producers to create special `ReshardingReceipt(source_shard, target_shard, data)` where `data` would be an interval of key-value pairs in `source_shard` along with the proof.\n* Then, `target_shard` trackers and validators would process that receipt, validate the proof, and insert the key-value pairs into the new shard.\n\nHowever, `data` would occupy most of the state witness capacity and introduce the overhead of proving every single interval in `source_shard`. Moreover, the approach to sync the target shard \"dynamically\" also requires some form of catch-up, which makes it much less feasible than the chosen approach.\n\nAnother question is whether we should tie resharding to epoch boundaries. This would allow us to move from the resharding decision to completion much faster. But for that, we would need to:\n\n* Agree if we should reshard in the middle of the epoch or allow \"fast epoch completion,\" which has to be implemented.\n* Keep chunk producers tracking \"spare shards\" ready to receive items from split shards.\n* On resharding event, implement a specific form of state sync, on which source and target chunk producers would agree on new state roots offline.\n* Then, new state roots would be validated by chunk validators in the same fashion.\n\nWhile it is much closer to Dynamic Resharding (below), it requires many more changes to the protocol. The considered idea works well as an intermediate step toward that, if needed.\n\n## Future Possibilities\n\n* **Dynamic Resharding**: In this proposal, resharding is scheduled in advance and hardcoded within the `neard` binary. In the future, we aim to enable the chain to dynamically trigger and execute resharding autonomously, allowing it to adjust capacity automatically based on demand.\n* **Fast Dynamic Resharding**: In the Dynamic Resharding extension, the new shard layout is configured for the second upcoming epoch. 
This means that a full epoch must pass before the chain transitions to the updated shard layout. In the future, our goal is to accelerate this process by finalizing the previous epoch more quickly, allowing the chain to adopt the new layout as soon as possible.\n* **Shard Merging**: In this proposal, the only allowed resharding operation is shard splitting. In the future, we aim to enable shard merging, allowing underutilized shards to be combined with neighboring shards. This would allow the chain to free up resources and reallocate them where they are most needed.\n\n## Consequences\n\n### Positive\n\n* The protocol can execute resharding even while only a fraction of nodes track the split shard.\n* State for new shard layouts is computed in a matter of minutes instead of hours, greatly increasing ecosystem stability during resharding. As before, from the point of view of NEAR users, it is instantaneous.\n\n### Neutral\n\nN/A\n\n### Negative\n\n* The storage components need to handle the additional complexity of controlling the shard layout change.\n\n### Backwards Compatibility\n\nResharding is backwards compatible with existing protocol logic.\n\n## Unresolved Issues (Optional)\n\n```text\n[Explain any issues that warrant further discussion. Considerations\n\n* What parts of the design do you expect to resolve through the NEP process before this gets merged?\n* What parts of the design do you expect to resolve through the implementation of this feature before stabilization?\n* What related issues do you consider out of scope for this NEP that could be addressed in the future independently of the solution that comes out of this NEP?]\n```\n\n## Changelog\n\n```text\n[The changelog section provides historical context for how the NEP developed over time. Initial NEP submission should start with version 1.0.0, and all subsequent NEP extensions must follow [Semantic Versioning](https://semver.org/). 
Every version should have the benefits and concerns raised during the review. The author does not need to fill out this section for the initial draft. Instead, the assigned reviewers (Subject Matter Experts) should create the first version during the first technical review. After the final public call, the author should then finalize the last version of the decision context.]\n```\n\n### 1.0.0 - Initial Version\n\n> Placeholder for the context about when and who approved this NEP version.\n\n#### Benefits\n\n> List of benefits filled by the Subject Matter Experts while reviewing this version:\n\n* Benefit 1\n* Benefit 2\n\n#### Concerns\n\n> Template for Subject Matter Experts review for this version:\n> Status: New | Ongoing | Resolved\n\n|    # | Concern | Resolution | Status |\n| ---: | :------ | :--------- | -----: |\n|    1 |         |            |        |\n|    2 |         |            |        |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n\n\n<!-- links --> \n\n[NEP-040]: https://github.com/near/NEPs/blob/master/specs/Proposals/0040-split-states.md\n[NEP-508]: https://github.com/near/NEPs/blob/master/neps/nep-0508.md\n[NEP-509]: https://github.com/near/NEPs/blob/master/neps/nep-0509.md\n"
  },
  {
    "path": "neps/nep-0584.md",
    "content": "---\nNEP: 584\nTitle: Cross-shard bandwidth scheduler\nAuthors: Jan Malinowski <jan.ciolek@nearone.org>\nStatus: Final\nDiscussionsTo: https://github.com/near/NEPs/pull/584\nType: Protocol\nVersion: 1.0.0\nCreated: 2025-01-13\nLastUpdated: 2025-01-13\n---\n\n## Summary\n\nBandwidth scheduler decides how many bytes of receipts a shard is allowed to send to different\nshards at every block height. Chunk application produces outgoing receipts that will be sent to\nother shards. Bandwidth scheduler looks at how many receipts every shard wants to send to other\nshards and decides how much can be sent between each pair of shards. It makes sure that every shard\nreceives and sends a reasonable amount of data at every height. Sending or receiving too much data\ncould cause performance problems and witness size issues. We have an existing mechanism to limit how\nmuch is sent between shards, but it's very rudimentary and inefficient. Bandwidth scheduler is a\nbetter solution to the problem.\n\n## Motivation\n\n### Why do we need cross-shard bandwidth limits?\n\nNEAR is a sharded blockchain - every shard is expected to do a limited amount of work at every\nheight. Scaling is mostly achieved by adding more shards. This also means that we cannot expect a\nshard to send or receive more than X MB of data at every height. Without cross-shard bandwidth\nlimits there could be a situation where this isn't respected - a shard could be forced to send or\nreceive a ton of data at a single height. There could be a situation where all of the shards decide\nto send a ton of data to a single receiver shard, or a situation where one sender shard generates a\nlot of outgoing receipts to other shards. 
This problem gets worse as the number of shards increases:\nwith 6 shards it isn't that bad, but 50 shards sending receipts to a single shard would definitely\noverwhelm the receiver.\n\nThere are two kinds of problems that can happen when too much data is being sent:\n\n- Nodes might not be able to transfer all of the receipts in time and chunk producers might not have\n  the data needed to produce a chunk, causing chunk misses.\n- With stateless validation all of the incoming receipts are kept inside `ChunkStateWitness`. The\n  protocol is very sensitive to the size of `ChunkStateWitness` - when `ChunkStateWitness` becomes\n  too large, the nodes are not able to distribute it in time and there are chunk misses; in extreme\n  cases a shard can even stall. We have to make sure that the size of incoming receipts is limited\n  to avoid witness size issues and attacks. There are plans to make the protocol more resilient to\n  large witness size, but that is still work in progress.\n\nWe need some kind of cross-shard bandwidth limits to avoid these problems.\n\n### Existing solution\n\nThere is already a rudimentary solution in place, added together with stateless validation in\n[NEP-509](https://github.com/near/NEPs/blob/master/neps/nep-0509.md) to limit witness size.\n\nIn this solution each shard is usually allowed to send 100KiB (`outgoing_receipts_usual_size_limit`)\nof receipts to another shard, but there's one special shard that is allowed to send 4.5MiB\n(`outgoing_receipts_big_size_limit`). The special allowed shard is switched on every height in a\nround robin fashion. If a shard wants to send less than 100KiB it can just do it, but for larger\ntransfers the sender needs to wait until it's the allowed shard to send the receipts. A node can\nonly send more than 100KiB on its turn. 
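\n\nAs a rough illustration, the round-robin rule above can be sketched as follows. This is a simplified model with hypothetical names (`outgoing_size_limit` and the height-based rotation are illustrative assumptions, not the exact nearcore logic):\n\n```rust\n/// Simplified sketch of the pre-scheduler limits: at every height exactly one\n/// sender shard is \"allowed\" and gets the big limit; all others get the usual one.\nconst USUAL_SIZE_LIMIT: u64 = 100 * 1024; // outgoing_receipts_usual_size_limit, 100KiB\nconst BIG_SIZE_LIMIT: u64 = 4608 * 1024; // outgoing_receipts_big_size_limit, 4.5MiB\n\nfn outgoing_size_limit(sender_shard: u64, height: u64, num_shards: u64) -> u64 {\n    if sender_shard == height % num_shards { BIG_SIZE_LIMIT } else { USUAL_SIZE_LIMIT }\n}\n```\n\nUnder this model a sender with a large receipt may wait up to `num_shards - 1` heights for its turn, which is the latency problem discussed below.\n\n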
See the PR for a more detailed description of the solution:\nhttps://github.com/near/nearcore/pull/11492\n\nThis solution was simple enough to be implemented before stateless validation launch, but there are\nissues with this approach:\n\n- Small throughput. If we take two shards - `1` and `2`, then `1` is able to send at most 5MiB of\n  data to `2` every 6 blocks (assuming 6 shards). That's only 800KiB / height, even though in theory\n  NEAR could support 5MiB / height (assuming that other shards aren't sending much). That's a lot of\n  unused throughput that we can't make use of because of the overly restrictive limits. There are\n  some use cases that could make use of higher throughput, e.g. NEAR DA, although to be fair last I\n  heard NEAR DA was moving to a design that doesn't require a lot of cross-shard bandwidth.\n- High latency and bad scalability. A big receipt has to wait for up to `num_shards` heights before\n  it can be sent. This is much higher than it could be; with bandwidth scheduler a receipt never has\n  to wait more than one height (assuming that other shards aren't sending much). Even worse is that\n  the more shards there are, the higher the latency. With 60 shards a receipt might need to wait for\n  60 blocks before it is processed. This solution doesn't scale at all.\n\nBandwidth scheduler addresses the pain points of the current solution, enabling higher throughput and\nscalability.\n\n## Specification\n\nThe main source of wasted bandwidth in the current algorithm is that assigning bandwidth doesn't\ntake into account the needs of individual shards. When shard `1` needs to send 500KiB and shard `2`\nneeds to send 20KiB, the algorithm can assign all of the bandwidth to shard `2` even though it\ndoesn't really need it; it just happened to be the allowed shard at this height. This is wasteful;\nit would be much better if the algorithm could see how much bandwidth each shard needs and give to\neach according to their needs. 
This is the general idea behind the new solution: each shard requests\nbandwidth according to its needs and bandwidth scheduler divides the bandwidth between everyone that\nrequested it. The bandwidth scheduler would be able to see that shard `1` needs 500KiB of bandwidth\nand it'd give all the bandwidth to `1`.\n\nThe flow will look like this:\n\n- A chunk is applied and produces outgoing receipts to other shards.\n- The shard calculates the current limits and sends as many receipts as it's allowed to.\n- The receipts that can't be sent due to limits are buffered (saved to state); they will be sent at\n  a later height.\n- The shard calculates how much bandwidth it needs to send the buffered receipts and creates a\n  `BandwidthRequest` with this information (there's one `BandwidthRequest` for each pair of shards).\n- The list of `BandwidthRequest` from this shard is included in the chunk header and distributed to\n  other nodes.\n- When the next chunk is applied it gathers all the `BandwidthRequests` from chunk headers at the\n  previous height(s) and uses `BandwidthScheduler` to calculate the current bandwidth limits in a\n  deterministic way. The same calculation is performed on all shards and all shards arrive at the\n  same bandwidth limits.\n- The chunk is applied and produces outgoing receipts; receipts are sent until they hit the limits\n  set by `BandwidthScheduler`.\n\n![Diagram where bandwidth scheduler requests bandwidth and then sends receipts](assets/nep-0584/basic-flow.png)\n\nDetails will be explained in the following sections.\n\n### `BandwidthRequest`\n\nA `BandwidthRequest` describes the receipts that a shard would like to send to another shard. A\nshard looks at its queue of buffered receipts to another shard and generates a `BandwidthRequest`\nwhich describes how much bandwidth the shard would like to have. 
In the simplest version a\n`BandwidthRequest` could be a single integer containing the total size of buffered receipts, but\nthere is a problem with this simple representation - it doesn't say anything about the size of\nindividual receipts. Let's say that two shards want to send 4MB of data each to another shard, but\nthe incoming limit is 5MB. Should we assign 2.5MB of bandwidth to each of the sender shards? That\nwould work if the shards want to send a lot of small receipts, but it wouldn't work when each shard\nwants to send a single 4MB receipt. A shard can't send a part of the 4MB receipt, it's either the\nwhole receipt or nothing. The scheduler should assign 2.5MB/2.5MB of bandwidth when the receipts are\nsmall and 4MB/0MB when they're large. The simple version doesn't have enough information for the\nscheduler to make the right decision, so we'll use a richer representation.\n\nThe richer representation is a list of possible bandwidth grants that make sense for this pair of\nshards. For example when the outgoing buffer contains a single `4MB` receipt, the only bandwidth\ngrant that makes sense is `4MB`. Granting more or less bandwidth wouldn't change how many receipts\ncan be sent. In that case the bandwidth request would contain a single possible grant: `4MB`. It\ntells the bandwidth scheduler that for this pair of shards it can either grant `4MB` or nothing at\nall, other options don't really make sense. On the other hand if the outgoing buffer contains 4000\nsmall receipts, 1kB each, there are many possible bandwidth grants that make sense. With so many\nsmall receipts the scheduler could grant 1kB, 2kB, 3kB, ..., 4000kB and each of those options would\nresult in a different number of receipts being sent. However having 4000 options in the request\nwould make the request pretty large. To deal with this we'll specify a list of 40 predefined options\nthat can be requested in a bandwidth request. 
An option is requested when granting this much\nbandwidth would result in more receipts being sent.\n\nLet's take a look at an example. Let's say that the predefined list of values that can be requested\nis:\n\n```rust\n[100kB, 200kB, 300kB, ..., 3900kB, 4MB]\n```\n\nAnd the outgoing receipts buffer has receipts with these sizes (receipts will be sent from left to\nright):\n\n```rust\n[20kB, 150kB, 60kB, 400kB, 1MB, 50kB, 300kB]\n```\n\nThe cumulative sum (sum from 0 to i) of sizes is:\n\n```rust\n[20kB, 170kB, 230kB, 630kB, 1630kB, 1680kB, 1980kB]\n```\n\nThe bandwidth grant options in the generated `BandwidthRequest` will be:\n\n```rust\n[100kB, 200kB, 300kB, 700kB, 1700kB, 2000kB]\n```\n\nExplanation:\n\n- Granting 100kB of bandwidth will allow sending the first receipt.\n- Granting 200kB of bandwidth will allow sending the first two receipts.\n- Granting 300kB will allow sending the first three receipts.\n- Granting 400kB would give the same result as 300kB, so it's not included in the options.\n- Granting 700kB would allow sending the first four receipts.\n- And so on.\n\nConceptually a `BandwidthRequest` looks like this:\n\n```rust\nstruct BandwidthRequest {\n    /// Requesting bandwidth to this shard\n    to_shard: ShardId,\n    /// Please grant me one of the options listed here.\n    possible_bandwidth_grants: Vec<u64>\n}\n```\n\nA list of such requests will be included in the chunk header:\n\n```rust\nstruct ChunkHeader {\n    bandwidth_requests: Vec<BandwidthRequest>\n}\n```\n\nWith this representation the `BandwidthRequest` struct could be quite big, up to about 320 bytes.\nWe will use a more efficient representation to bring its size down to only 7 bytes. First, we can\nuse `u16` instead of `u64` for the `ShardId`; NEAR currently has only 6 shards and it'll take a while\nto reach 65536. There's no need to handle 10**18 shards. Second, we can use a bitmap for the\n`possible_bandwidth_grants` field. 
The list of predefined options that can be requested will be\ncomputed deterministically on all nodes. The bitmap will have 40 bits; the `n-th` bit is `1` when\nthe `n-th` value from the predefined list is requested.\n\nSo the actual representation of a `BandwidthRequest` looks something like this:\n\n```rust\nstruct BandwidthRequest {\n    to_shard: u16,\n    requested_values_bitmap: [u8; 5]\n}\n```\n\nIt's important to keep the size of `BandwidthRequest` small because bandwidth requests are included\nin the chunk header, and the chunk header shouldn't be too large.\n\n### Base bandwidth\n\nIn current mainnet traffic most of the time the size of outgoing receipts is small, under 50kB. It'd\nbe nice if a shard was able to send them out without having to make a bandwidth request. It'd lower\nthe latency (no need to wait for a grant) and make chunk headers smaller. That's why there's a\nconcept of `base_bandwidth`. Bandwidth scheduler grants `base_bandwidth` of bandwidth for each pair\nof shards by default. This means that a shard doesn't need to make a request when it has less than\n`base_bandwidth` of receipts; it can just send them out immediately. Actual bandwidth grants based\non bandwidth requests happen after granting the base bandwidth.\n\nOn current mainnet (with 6 shards) the base bandwidth is 61_139 (61kB).\n\n`base_bandwidth` is automatically calculated based on `max_shard_bandwidth`, `max_single_grant` and\nthe number of shards. It gets smaller as the number of shards increases. 
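\n\nConcretely, the derivation can be sketched with a small helper (hypothetical function name; the constants come from the `BandwidthSchedulerParams` section below):\n\n```rust\n/// Sketch of the base bandwidth derivation: reserve a maximum-size grant on one\n/// link, split the remaining bandwidth evenly across the other links, and cap\n/// the result at 100kB.\nfn calculate_base_bandwidth(\n    max_shard_bandwidth: u64,\n    max_single_grant: u64,\n    num_shards: u64,\n) -> u64 {\n    std::cmp::min(100_000, (max_shard_bandwidth - max_single_grant) / (num_shards - 1))\n}\n```\n\nFor the mainnet values (`max_shard_bandwidth = 4_500_000`, `max_single_grant = 4_194_304`, 6 shards) this yields the 61_139 figure quoted above.\n\n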
See the next section for details.\n\n### `BandwidthSchedulerParams`\n\nThe `BandwidthSchedulerParams` struct keeps parameters used throughout the bandwidth scheduler\nalgorithm:\n\n```rust\npub type Bandwidth = u64;\n\n/// Parameters used in the bandwidth scheduler algorithm.\npub struct BandwidthSchedulerParams {\n    /// This much bandwidth is granted by default.\n    /// base_bandwidth = (max_shard_bandwidth - max_single_grant) / (num_shards - 1)\n    pub base_bandwidth: Bandwidth,\n    /// The maximum amount of data that a shard can send or receive at a single height.\n    pub max_shard_bandwidth: Bandwidth,\n    /// The maximum amount of bandwidth that can be granted on a single link.\n    /// Should be at least as big as `max_receipt_size`.\n    pub max_single_grant: Bandwidth,\n    /// Maximum size of a single receipt.\n    pub max_receipt_size: Bandwidth,\n    /// Maximum bandwidth allowance that a link can accumulate.\n    pub max_allowance: Bandwidth,\n}\n```\n\nThe values are:\n\n```rust\nmax_shard_bandwidth = 4_500_000;\nmax_single_grant = 4_194_304;\nmax_allowance = 4_500_000;\nmax_receipt_size = 4_194_304;\nbase_bandwidth = min(100_000, (max_shard_bandwidth - max_single_grant) / (num_shards - 1)) = 61_139\n```\n\nA shard must be able to send out `max_single_grant` on one link and `base_bandwidth` on all other\nlinks without exceeding `max_shard_bandwidth`. So it must hold that:\n\n```rust\nbase_bandwidth * (num_shards - 1) + max_single_grant <= max_shard_bandwidth\n```\n\nThat's why base bandwidth is calculated by taking the bandwidth that would remain available after\ngranting `max_single_grant` on one link and dividing it equally between the other links.\n\nThere's also a limit which makes sure that `base_bandwidth` stays under 100kB, even when the number\nof shards is low. 
There are some tests which have a low number of shards, and having a lower base\nbandwidth allows us to fully test the bandwidth scheduler in those tests.\n\n### `BandwidthRequestValues`\n\nThe `BandwidthRequestValues` struct represents the predefined list of values that can be requested\nin a `BandwidthRequest`:\n\n```rust\npub struct BandwidthRequestValues {\n    pub values: [Bandwidth; 40],\n}\n```\n\nThe values are calculated using a linear interpolation between `base_bandwidth` and\n`max_single_grant`, like this:\n\n```rust\nvalues[-1] = base_bandwidth // (here -1 is the imaginary element before 0, not the last element)\nvalues[values.len() - 1] = max_single_grant\nvalues[i] = linear interpolation between values[-1] and values[values.len() - 1]\n```\n\nThe exact code is:\n\n```rust\n/// Performs linear interpolation between min and max.\n/// interpolate(100, 200, 0, 10) = 100\n/// interpolate(100, 200, 5, 10) = 150\n/// interpolate(100, 200, 10, 10) = 200\nfn interpolate(min: u64, max: u64, i: u64, n: u64) -> u64 {\n    min + (max - min) * i / n\n}\n\nlet values_len: u64 =\n    values.len().try_into().expect(\"Converting usize to u64 shouldn't fail\");\nfor i in 0..values.len() {\n    let i_u64: u64 = i.try_into().expect(\"Converting usize to u64 shouldn't fail\");\n\n    values[i] = interpolate(\n        params.base_bandwidth,\n        params.max_single_grant,\n        i_u64 + 1,\n        values_len,\n    );\n}\n```\n\nThe final `BandwidthRequestValues` on current mainnet (6 shards) look like this:\n\n```rust\n[\n    164468, 267797, 371126, 474455, 577784, 681113, 784442, 887772, 991101, 1094430,\n    1197759, 1301088, 1404417, 1507746, 1611075, 1714405, 1817734, 1921063, 2024392,\n    2127721, 2231050, 2334379, 2437708, 2541038, 2644367, 2747696, 2851025, 2954354,\n    3057683, 3161012, 3264341, 3367671, 3471000, 3574329, 3677658, 3780987, 3884316,\n    3987645, 4090974, 4194304\n]\n```\n\n### Generating bandwidth requests\n\nTo generate a bandwidth request 
the sender shard has to look at the receipts stored in the outgoing
buffer to another shard and pick bandwidth grant options that make sense. In this context "makes
sense" means that having this much bandwidth would cause the sender to send more receipts than the
previously requested option would allow, as described in the `BandwidthRequest` section.

The simplest implementation would be to actually walk through the list of outgoing receipts
(starting from the ones that will be sent the soonest) and request values that allow sending more
receipts.

```rust
/// Generate a bitmap of bandwidth requests based on the size of receipts stored in the outgoing buffer.
/// Returns a bitmap with requests.
fn make_request_bitmap_slow(
    buffered_receipt_sizes: Vec<u64>,
    bandwidth_request_values: &BandwidthRequestValues,
) -> BandwidthRequestBitmap {
    let mut requested_values_bitmap = BandwidthRequestBitmap::new(); // [u8; 5]

    let mut total_size = 0;
    let values = &bandwidth_request_values.values;
    for receipt_size in buffered_receipt_sizes {
        total_size += receipt_size;

        for i in 0..values.len() {
            if values[i] >= total_size {
                requested_values_bitmap.set_bit(i, true);
                break;
            }
        }
    }

    requested_values_bitmap
}
```

But this is very inefficient. Walking over all buffered receipts could take a lot of time and it'd
require reading a lot of state from the Trie, which would make the `ChunkStateWitness` very large.

We need a more efficient algorithm. To achieve this we will add some additional metadata about the
outgoing buffer, which keeps coarse information about the receipt sizes. We will group consecutive
receipts into `ReceiptGroups`. A single `ReceiptGroup` aims to have total size and gas under some
threshold. If adding a new receipt to the group would cause it to exceed the threshold, a new group
is started. 
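
This grouping rule can be sketched as a standalone function. The sketch below is a simplified model over plain `u64` sizes; `group_receipt_sizes` is a hypothetical helper, not part of the actual nearcore implementation:

```rust
/// Hypothetical sketch of the grouping rule (not the nearcore implementation).
/// Consecutive receipt sizes are merged into groups: a receipt joins the last
/// group if the group stays within `size_upper_bound`, otherwise a new group
/// is started. A single receipt larger than the bound forms its own group.
pub fn group_receipt_sizes(receipt_sizes: &[u64], size_upper_bound: u64) -> Vec<u64> {
    let mut groups: Vec<u64> = Vec::new();
    for &size in receipt_sizes {
        match groups.last_mut() {
            // The receipt fits into the last group without exceeding the bound.
            Some(last) if *last + size <= size_upper_bound => *last += size,
            // No groups yet, or the receipt doesn't fit - start a new group.
            _ => groups.push(size),
        }
    }
    groups
}
```

For example, with a `100kB` bound, receipts of sizes `5kB, 30kB, 40kB, 120kB, 20kB` collapse into groups of `75kB, 120kB, 20kB`.
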
The threshold can only be exceeded when a single receipt has size or gas above the group
threshold.

The size threshold is set to 100kB; the gas threshold is currently infinite.

```rust
pub struct ReceiptGroupsConfig {
    /// All receipt groups aim to have a size below this threshold.
    /// A group can be larger than this if a single receipt has size larger than the limit.
    /// Set to 100kB
    pub size_upper_bound: ByteSize,
    /// All receipt groups aim to have gas below this threshold.
    /// Set to Gas::MAX
    pub gas_upper_bound: Gas,
}
```

A `ReceiptGroup` keeps only the total size and gas of receipts in this group:

```rust
pub struct ReceiptGroupV0 {
    /// Total size of receipts in this group.
    /// Should be no larger than `max_receipt_size`, otherwise the bandwidth
    /// scheduler will not be able to grant the bandwidth needed to send
    /// the receipts in this group.
    pub size: u64,
    /// Total gas of receipts in this group.
    pub gas: u128,
}
```

All the groups are kept inside a `ReceiptGroupsQueue`, which is a Trie queue similar to the delayed
receipts queue. `ReceiptGroupsQueue` additionally keeps information about the total size, gas and
number of receipts in the queue. 
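
A minimal in-memory model of such a queue can illustrate how the groups and totals evolve. This is a hypothetical sketch using `VecDeque` and tracking only sizes; the real `ReceiptGroupsQueue` is a persistent Trie queue and also tracks gas:

```rust
use std::collections::VecDeque;

/// Hypothetical in-memory model of a receipt groups queue (sketch only,
/// not the nearcore implementation). Tracks the size of each group plus
/// aggregate totals, loosely mirroring ReceiptGroupsQueueDataV0.
pub struct GroupsQueueModel {
    pub groups: VecDeque<u64>, // total size of each group
    pub size_upper_bound: u64,
    pub total_size: u64,
    pub total_receipts_num: u64,
}

impl GroupsQueueModel {
    pub fn new(size_upper_bound: u64) -> Self {
        Self { groups: VecDeque::new(), size_upper_bound, total_size: 0, total_receipts_num: 0 }
    }

    /// A newly buffered receipt joins the last group, or starts a new group
    /// if that would push the last group over the size bound.
    pub fn on_receipt_buffered(&mut self, receipt_size: u64) {
        self.total_size += receipt_size;
        self.total_receipts_num += 1;
        match self.groups.back_mut() {
            Some(last) if *last + receipt_size <= self.size_upper_bound => {
                *last += receipt_size;
            }
            _ => self.groups.push_back(receipt_size),
        }
    }

    /// A forwarded receipt is removed from the first group;
    /// the group is dropped once it becomes empty.
    pub fn on_receipt_forwarded(&mut self, receipt_size: u64) {
        self.total_size -= receipt_size;
        self.total_receipts_num -= 1;
        let first = self.groups.front_mut().expect("queue must not be empty");
        *first -= receipt_size;
        if *first == 0 {
            self.groups.pop_front();
        }
    }
}
```

Buffered receipts only ever join the group at the back, and forwarded receipts only ever leave the group at the front, which is what makes a simple queue sufficient.
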
There's one `ReceiptGroupsQueue` per outgoing buffer.

```rust
pub struct ReceiptGroupsQueue {
    /// Corresponds to receipts stored in the outgoing buffer to this shard.
    receiver_shard: ShardId,
    /// Persistent data, stored in the trie.
    data: ReceiptGroupsQueueDataV0,
}

pub struct ReceiptGroupsQueueDataV0 {
    /// Indices of the receipt groups TrieQueue.
    pub indices: TrieQueueIndices,
    /// Total size of all receipts in the queue.
    pub total_size: u64,
    /// Total gas of all receipts in the queue.
    pub total_gas: u128,
    /// Total number of receipts in the queue.
    pub total_receipts_num: u64,
}
```

When a new receipt is added to the outgoing buffer, we try to add it to the last group in the
`ReceiptGroupsQueue`. If there are no groups or adding the receipt would cause the last group to go
over the threshold, a new group is created. When a receipt is removed from the outgoing buffer, we
remove the receipt from the first group in the `ReceiptGroupsQueue` and remove the group if there
are no more receipts in it.

To generate a bandwidth request, we will walk over the receipt groups and request the values that
will allow sending more receipts. Just like in `make_request_bitmap_slow`, only using
`receipt_group_sizes` instead of `buffered_receipt_sizes`.

It's important to note that `size_upper_bound` is less than the difference between two consecutive
values in `BandwidthRequestValues`. 
Thanks to this the requests are just as good as they would be
if they were generated directly using individual receipt sizes.

#### Example

Let's say that there are five buffered receipts with sizes:

```rust
5kB, 30kB, 40kB, 120kB, 20kB
```

They would be grouped into groups of at most 100kB, like this:

```rust
(5kB, 30kB, 40kB), (120kB), (20kB)
```

So the resulting groups would be:

```rust
75kB, 120kB, 20kB
```

And the bandwidth request will be produced by walking over the group sizes `75kB`, `120kB`, `20kB`,
not the individual receipt sizes.

Now let's say that the first receipt with size `5kB` is forwarded. In that case it would be removed
from the first group, and the groups would look like this:

```rust
(30kB, 40kB), (120kB), (20kB)
```

When a new receipt is buffered, it's added to the last group. Let's add a `50kB` receipt; after that
the groups would look like this:

```rust
(30kB, 40kB), (120kB), (20kB, 50kB)
```

When adding a new receipt would cause a group to go over the threshold, a new group is started. So
if we added another 50kB receipt, the groups would become:

```rust
(30kB, 40kB), (120kB), (20kB, 50kB), (50kB)
```

#### Trie columns

Two new trie columns are added to keep the receipt groups.

- `BUFFERED_RECEIPT_GROUPS_QUEUE_DATA` - keeps `ReceiptGroupsQueueDataV0` for every outgoing buffer
- `BUFFERED_RECEIPT_GROUPS_QUEUE_ITEM` - keeps the individual `ReceiptGroup` items from receipt group queues

#### Protocol Upgrade

There's a bit of additional complexity around the protocol upgrade boundary. The receipt groups are
built for receipts that were buffered after protocol upgrade, but existing receipts that were
buffered before the upgrade won't have corresponding receipt groups. 
Eventually the old buffered
receipts will get sent out and we'll have full metadata for all receipts, but in the meantime we
won't be able to make a proper bandwidth request without having groups for all of the buffered
receipts. To deal with this we will pretend that there's one receipt with size `max_receipt_size` in
the buffer until the metadata is fully initialized. Requesting `max_receipt_size` is a safe bet -
it's enough to send out any buffered receipt. The effect will be similar to the previous approach -
one shard will be granted most of the bandwidth (exactly `max_receipt_size`), while the others will be
waiting for their turn to be the "allowed shard". Once all of the old buffered receipts are sent out
we can start making proper requests using the receipt groups.

### `BandwidthScheduler`

`BandwidthScheduler` is an algorithm which looks at all of the `BandwidthRequests` submitted by
shards and grants some bandwidth on every link (pair of shards). A shard can send only as much data
as the grant allows; the remaining receipts stay in the buffer.

The bandwidth scheduler tries to ensure that:

- Every shard sends out at most `max_shard_bandwidth` bytes of receipts at every height.
- Every shard receives at most `max_shard_bandwidth` bytes of receipts at every height.
- The bandwidth is assigned in a fair way. At full load every link (pair of shards) sends and
  receives the same amount of bandwidth on average; there are no favorites.
- Bandwidth utilization is high.

The algorithm works in 4 stages:

1) Give out a fair share of allowance to every link.
2) Grant base bandwidth on every link. Decrease allowance by granted bandwidth.
3) Process bandwidth requests. Order all bandwidth requests by the link's allowance. Take the
   request with the highest allowance and try to grant the first proposed value. Check if it's
   possible to grant the value without violating any restrictions. 
If yes, grant the bandwidth and
   decrease the allowance accordingly. Then remove the granted value from the request and put it
   back into the queue with the new allowance. If no, remove the request from the queue; it will not be
   fulfilled. Requests with the same allowance are processed in a random order.
4) Distribute remaining bandwidth. If there's some bandwidth left after granting base bandwidth and
   processing all requests, distribute it over all links in a fair manner to improve bandwidth
   utilization.

#### Allowance

There is a concept of "allowance" - every link (pair of sender and receiver shards) has an
allowance. Allowance is a way to ensure fairness. Every link receives a fair amount of allowance on
every height. When bandwidth is granted on a link, the link's allowance is decreased by the granted
amount. Requests on links with higher allowance have priority over requests on links with lower
allowance. Links that send more than their fair share are deprioritized, which keeps things fair.
It's a similar idea to the [Token Bucket](https://en.wikipedia.org/wiki/Token_bucket). Link
allowances are persisted in the state trie, as they're used to track fairness across multiple
heights.

An intuitive way to think about allowance is that it keeps track of how much each link sent recently
and lowers the priority of links that recently sent a lot of receipts, which gives others a fair chance.

Imagine a situation where one link wants to send a 2MB receipt at every height, and other links want
to send a ton of small receipts to the same shard. Without allowance, the link with 2MB receipts
would always get 2MB of bandwidth assigned, and other links would get less than that, which would be
unfair. 
Thanks to allowance, the scheduler will grant some bandwidth to the 2MB link, but then it
will decrease the allowance on that link, which will deprioritize it and other links will get their
fair share.

When multiple requests have the same allowance, they are processed in random order. The randomness
is deterministic: the scheduler uses `ChaCha20Rng` seeded with the previous block hash, and requests
with equal allowance are shuffled using this random generator.

```rust
ChaCha20Rng::from_seed(prev_block_hash.0)
...
requests.shuffle(&mut self.rng);
```

The fair share of allowance that is given out on every height is:

```rust
fair_link_bandwidth = max_shard_bandwidth / num_shards
```

The reasoning is that in an ideal, fair world, every link would send the same amount of bandwidth.
There would be `max_shard_bandwidth / num_shards` sent on every link, fully saturating all senders and
receivers. Allowance measures the deviation from this perfect world.

Link allowance never gets larger than `max_allowance` (currently 4.5MB). When a link's allowance
reaches `max_allowance` we stop adding allowance there until the link uses up some of the
accumulated allowance. Without `max_allowance` a link that sends very little for a long time could
accumulate an enormous amount of allowance and it could have priority over other links for a very
long time. Capping the allowance at some value keeps the allowance fresh; information from the
latest blocks should be what matters most.

#### Example

<details>
<summary>
Here's an example, click to expand.
</summary>

Let's say that there are 3 shards. 
The `BandwidthSchedulerParams` look like this:

```rust
BandwidthSchedulerParams {
    base_bandwidth: 100_000,
    max_shard_bandwidth: 4_500_000,
    max_single_grant: 4_194_304,
    max_receipt_size: 4_194_304,
    max_allowance: 4_500_000,
}
```

And `BandwidthRequestValues` are:

```rust
[
    210000, 320000, 430000, 540000, 650000, 760000, 870000, 980000, 1090000, 1200000,
    1310000, 1420000, 1530000, 1640000, 1750000, 1860000, 1970000, 2080000, 2190000,
    2300000, 2410000, 2520000, 2630000, 2740000, 2850000, 2960000, 3070000, 3180000,
    3290000, 3400000, 3510000, 3620000, 3730000, 3840000, 3950000, 4060000, 4194304
]
```

(Note that these values are slightly different from the ones that would be generated by linear
interpolation, but for the sake of the example let's say that they look like this; the slight
difference doesn't really matter and it's easier to work with round numbers.)

Shard 2 is fully congested and only the allowed shard (shard 1) is allowed to send receipts to it.

The outgoing buffers look like this - shards want to send receipts with these sizes:

- 0->1 [3.9MB]
- 1->1 [200kB, 200kB, 200kB]
- 1->2 [2MB]
- 2->2 [500kB]

Bandwidth requests request values from the predefined list (`BandwidthRequestValues`); in this
example the requests would be:

- 0->1 [3950kB]
- 1->1 [210kB, 430kB, 650kB]
- 1->2 [2.08MB]
- 2->2 [540kB]

Every link has some allowance; in this example let's say that all links start with the same allowance (4MB):

| Link | Allowance |
| ---- | --------- |
| 0->0 | 4MB       |
| 0->1 | 4MB       |
| ...  | 4MB       |

All shards start with sender and receiver budgets set to `max_shard_bandwidth` (4.5MB). 
Budgets
describe how much more a shard can send or receive:

![State of links before scheduler runs](assets/nep-0584/scheduler_example_1.png)

The first step of the algorithm is to give out a fair share of allowance on every link.
In an ideally fair world every link would send the same amount of data at every height, so the fair share of allowance is:

```rust
fair_link_bandwidth = max_shard_bandwidth / num_shards = 4.5MB/3 = 1.5MB
```

So every link receives `1.5MB` of allowance. But allowance can't get larger than `max_allowance`, which is set to `4.5MB`, so the allowance is set to `4.5MB` on all links:

| Link | Allowance |
| ---- | --------- |
| 0->0 | 4.5MB     |
| 0->1 | 4.5MB     |
| ...  | 4.5MB     |

The next step is to grant base bandwidth. Every (allowed) link is granted `base_bandwidth = 100kB`:

![State of links after granting base bandwidth](assets/nep-0584/scheduler_example_2.png)

This grant is subtracted from the link's allowance; we assume that all of the granted base bandwidth
will be used for sending receipts. So the allowances change to:

| Link | Allowance |
| ---- | --------- |
| 0->2 | 4.5MB     |
| 2->2 | 4.5MB     |
| 0->0 | 4.4MB     |
| 0->1 | 4.4MB     |
| ...  | 4.4MB     |

The next step is to process the bandwidth requests. Requests are processed in the order of
decreasing link allowance, so the first one to be processed is `(2->2 [540kB])`.

This request can't be granted because the link `(2->2)` is not allowed. The request is rejected.

The remaining requests have the same link allowance, so they'll be processed in random order.

Let's first process the request `(0->1 [3950kB])`. Sender and receiver have enough budget to grant
this much bandwidth and the link is allowed, so the bandwidth is granted. The grant on `(0->1)` is
increased from `100kB` to `3950kB`. Allowance on `(0->1)` is reduced by `3850kB`:

| Link | Allowance |
| ---- | --------- |
| 0->1 | 550kB     |
| ...  | ...       
|

![State of links after granting 3950kB on link from 0 to 1](assets/nep-0584/scheduler_example_3.png)

Then let's process `(1->1 [210kB, 430kB, 650kB])`. Can we increase the grant on `(1->1)` to 210kB?
Yes, let's do that. The bandwidth is granted and the allowance for `(1->1)` is decreased. The `210kB`
option is removed from the request and the request is reinserted into the priority queue with the
lower allowance.

![State of links after granting 210kB on link from 1 to 1](assets/nep-0584/scheduler_example_4.png)

| Link | Allowance |
| ---- | --------- |
| 0->1 | 550kB     |
| 1->1 | 4290kB    |
| ...  | ...       |

Now let's process `(1->2 [2.08MB])`. The bandwidth can be granted without any issues.

![State of links after granting 2.08MB on link from 1 to 2](assets/nep-0584/scheduler_example_5.png)

| Link | Allowance |
| ---- | --------- |
| 0->1 | 550kB     |
| 1->1 | 4290kB    |
| 1->2 | 2420kB    |
| ...  | ...       |

Then `(1->1 [430kB, 650kB])` is taken back out of the priority queue. Is it ok to increase the grant
on `(1->1)` to 430kB? Yes, do it. Then the `430kB` option is removed from the request, and the request is requeued.

![State of links after granting 430kB on link from 1 to 1](assets/nep-0584/scheduler_example_6.png)

| Link | Allowance |
| ---- | --------- |
| 0->1 | 550kB     |
| 1->1 | 4070kB    |
| 1->2 | 2420kB    |
| ...  | ...       
|

Finally `(1->1 [650kB])` is taken out of the queue, but the request can't be granted because it
would exceed the incoming limit for shard 1.

The final grants are:

| Link | Granted Bandwidth |
| ---- | ----------------- |
| 0->0 | 100kB             |
| 0->1 | 3950kB            |
| 0->2 | 0B                |
| 1->0 | 100kB             |
| 1->1 | 430kB             |
| 1->2 | 2080kB            |
| 2->0 | 100kB             |
| 2->1 | 100kB             |
| 2->2 | 0B                |

Notice how the big receipt sent on `(0->1)` and smaller receipts sent on `(1->1)` compete for the
incoming budget of shard 1. Let's imagine a scenario where `(0->1)` always sends `3.9MB` receipts
and `(1->1)` always sends many `200kB` receipts. Without allowance we would grant the first value
from both bandwidth requests, which would mean that `(0->1)` always gets to send the `3.9MB` receipt
and `(1->1)` gets to send a few `200kB` receipts. This isn't fair; much more data would be sent on
the `(0->1)` link. With allowance the priority for `(0->1)` sharply drops after granting `3.9MB` and
`(1->1)` has the space to send a fair amount of receipts.

---

</details>

All shards run the `BandwidthScheduler` algorithm with the same inputs and calculate the same
bandwidth grants. The scheduler has to be run at every height, even on missing chunks, to ensure
that scheduler state stays identical on all shards. `nearcore` has existing infrastructure
(`apply_old_chunk`) to run things on missing chunks; there are implicit state transitions that are
used for distributing validator rewards. The scheduler reuses this infrastructure to run the algorithm
and modify the state on every height.

### `BandwidthSchedulerState`

`BandwidthScheduler` keeps some persistent state that is modified with each run. The state is stored
in the shard state trie. 
Each shard has identical `BandwidthSchedulerState` stored in the trie: all
shards run the same algorithm with the same inputs and state and arrive at an identical new state that
is saved to the trie.

`BandwidthSchedulerState` contains the current allowance for every pair of shards. Allowance is used to
ensure fairness across many heights, so it has to be persisted across heights.

```rust
pub enum BandwidthSchedulerState {
    V1(BandwidthSchedulerStateV1),
}

pub struct BandwidthSchedulerStateV1 {
    /// Allowance for every pair of (sender, receiver). Used in the scheduler algorithm.
    /// Bandwidth scheduler updates the allowances on every run.
    pub link_allowances: Vec<LinkAllowance>,
    /// Sanity check hash to assert that all shards run bandwidth scheduler in the exact same way.
    /// Hash of previous scheduler state and (some) scheduler inputs.
    pub sanity_check_hash: CryptoHash,
}

pub struct LinkAllowance {
    /// Sender shard
    pub sender: ShardId,
    /// Receiver shard
    pub receiver: ShardId,
    /// Link allowance, determines priority for granting bandwidth.
    pub allowance: Bandwidth,
}
```

There's also `sanity_check_hash`. It's not used in the algorithm; it's only used for a sanity check
to assert that scheduler state stays the same on all shards. It's calculated using the previous
`sanity_check_hash` and the current list of shards:

```rust
let mut sanity_check_bytes = Vec::new();
sanity_check_bytes.extend_from_slice(scheduler_state.sanity_check_hash.as_ref());
sanity_check_bytes.extend_from_slice(CryptoHash::hash_borsh(&all_shards).as_ref());
scheduler_state.sanity_check_hash = CryptoHash::hash_bytes(&sanity_check_bytes);
```

It would be nicer to hash all of the inputs to bandwidth scheduler, but that could require hashing
tens of kilobytes of data, which could take a bit of CPU time, so it's not done. 
The sanity check
still checks that all shards ran the algorithm the same number of times and with the same shards.

A new trie column is introduced to keep the scheduler state:

```rust
pub const BANDWIDTH_SCHEDULER_STATE: u8 = 15;
```

### Congestion control

Bandwidth scheduler limits only the size of outgoing receipts; the gas is limited by congestion
control. It's important to make sure that these two are integrated properly. Situations where one
limit allows sending receipts, but the other doesn't, could lead to liveness issues. To avoid
liveness problems, the scheduler checks which shards are fully congested, and doesn't grant any
bandwidth on links to these shards (except for the allowed sender shard). This prevents situations
where the scheduler would grant bandwidth on some link, but no receipts would be sent because of
congestion. There is a guarantee that for every bandwidth grant, the shard will be able to send at
least one receipt, which is enough to ensure liveness. There can still be unlucky coincidences where
the scheduler grants a lot of bandwidth on a link, but the shard can send only a few receipts
because of the gas limit enforced by congestion control. This is not ideal; in the future we might
consider merging these two algorithms into one better algorithm, but it is good enough for now.

### Missing chunks

When a chunk is missing, the incoming receipts that were aimed at this chunk are redirected to the
first non-missing chunk on this shard. The non-missing chunk will be forced to consume incoming
receipts meant for two chunks, or even more if there were multiple missing chunks in a row. This is
dangerous because the size of receipts sent to multiple chunks could be bigger than one chunk can
handle. 
We need to make sure that `BandwidthScheduler` is aware of this problem and stops sending\ndata when there are missing chunks on the target shard.\n\n`BandwidthScheduler` can see when another chunk is missing and it can refrain from sending new\nreceipts until the old ones have been processed. When a chunk is applied, it has access to the block\nthat contains this chunk. It can take a look at other shards and see if their chunks are missing in\nthe current block or not. If a chunk is missing, then the previously sent receipts haven't been\nprocessed and the scheduler won't send new ones.\n\nIn the code this condition looks like this:\n\n```rust\nfn calculate_is_link_allowed(\n    sender_index: ShardIndex,\n    receiver_index: ShardIndex,\n    shards_status: &ShardIndexMap<ShardStatus>,\n) -> bool {\n    let Some(receiver_status) = shards_status.get(&receiver_index) else {\n        // Receiver shard status unknown - don't send anything on the link, just to be safe.\n        return false;\n    };\n\n    if receiver_status.last_chunk_missing {\n        // The chunk was missing, receipts sent previously were not processed.\n        // Don't send anything to avoid accumulation of incoming receipts on the receiver shard.\n        return false;\n    }\n    // ...\n}\n```\n\nIt's forbidden to send receipts on a link if the last chunk on the receiver shard is missing.\n\n![Diagram showing how scheduler behaves with one missing chunk](assets/nep-0584/one-missing-chunk.png)\n![Diagram showing how scheduler behaves with two missing chunks](assets/nep-0584/two-missing-chunks.png)\n\nSadly this condition isn't enough to ensure that a chunk never receives more than\n`max_shard_bandwidth` of receipts. This is because receipts sent from a chunk aren't included as\nincoming receipts until the next non-missing chunk on the sender shard appears. A chunk producer\ncan't include incoming receipts until it has the `prev_outgoing_receipts_root` to prove the incoming\nreceipts against. 
Because of this there can be a situation where the bandwidth scheduler allows
sending some receipts, but they don't arrive immediately because chunks are missing on the sender
shard. In the meantime other shards might send other receipts and in the end the receiver can
receive receipts sent from multiple shards, which could add up to more than `max_shard_bandwidth`.

![Diagram showing that a chunk might receive more than max_shard_bandwidth](assets/nep-0584/missing-chunk-problem.png)

This is still an improvement over the previous solution, which allowed sending receipts to shards
with missing chunks for up to `max_congestion_missed_chunks = 5` chunks. In the worst case
scenario a single chunk might receive `num_shards * max_shard_bandwidth` of receipts at once, but
it's highly unlikely to happen. A lot of missing chunks and receipts would have to align for that
to happen. To trigger it an attacker would need to have precise control over missing chunks on all
shards, which they shouldn't have. A future version of bandwidth scheduler might solve this problem
fully, for example by looking at how much was granted but not received and refusing to send more,
but it's out of scope for the initial version of the bandwidth scheduler. The whole environment
might change soon (SPICE might remove the concept of missing chunks altogether), so for now we can
live with this problem.

### Resharding

During resharding the list of existing shards changes. Bandwidth scheduler assumes that sender and
receiver shards are from the same layout, but this is not true for one height at the resharding
boundary, where senders are from the old layout but receivers are from the new one. Ideally the
bandwidth scheduler would make sure that bandwidth is properly granted when the sets of senders and
receivers are different, but this is not implemented for now. The grants will be slightly wrong (but
still within limits) on the resharding boundary. 
They will be wrong for only one block height; after
resharding the senders and receivers will be the same again and the scheduler will work properly.
The amount of work needed to properly support resharding exceeds the benefits; we can live
with a slight hiccup for one height at the resharding boundary.

To properly handle resharding we would have to:

- Use different ShardLayouts for sender shards and receiver shards
- Interpret bandwidth requests using the `BandwidthSchedulerParams` that they were created with
- Make sure that `BandwidthSchedulerParams` are correct on the resharding boundary

It's doable, but it's out of scope for the initial version of the bandwidth scheduler.

There's one additional complication with generating bandwidth requests. When a parent shard is split
into two children, the parent disappears from the current `ShardLayout`, but other shards might
still have buffered receipts aimed at the parent shard. Bandwidth scheduler will not grant any
bandwidth to send receipts to a shard that doesn't exist, which would prevent these buffered
receipts from being sent; they'd be stuck in the buffer forever. To deal with that we have to do two
things. The first is to redirect receipts aimed at the parent to the proper child shard. When we
try to forward a receipt from the buffer aimed at the parent, we will determine which child the
receipt should go to and forward it to this child, using bandwidth limits meant for sending receipts
to the child shard. The second is to generate bandwidth requests using both the parent buffer and
the child buffer. A shard can't send any receipts from the parent buffer without a bandwidth grant,
so we have to somehow include the parent buffer in the bandwidth requests, even though the parent
doesn't exist in the current `ShardLayout`. This is done by merging (conceptually) parent and child
buffers when generating a bandwidth request. 
First we walk over receipt groups in the parent buffer,
then the receipt groups in the child buffer. This way the bandwidth grants to the child will include
receipts aimed at the parent.

### Distributing remaining bandwidth

After the bandwidth scheduler processes all of the bandwidth requests, there's usually some leftover
budget for sending and receiving data between shards. It'd be wasteful to not use the remaining
bandwidth, so the scheduler will distribute it between all the links. Granting extra bandwidth helps
to lower latency: it might allow a shard to send out a new receipt without having to make a
bandwidth request.

The algorithm for distributing remaining bandwidth works as follows:

1) Calculate how much more each shard could send and receive; call it `bandwidth_left`.
2) Calculate how many active links there are to each sender and receiver. A link is active if
   receipts can be sent on it, i.e. it's not forbidden because of congestion or missing chunks. Call
   this number `links_num`.
3) Order all senders and receivers by `average_link_bandwidth = bandwidth_left/links_num`, in
   increasing order. Ignore shards that don't have any bandwidth or links.
4) Walk over all senders (as ordered in (3)), for each sender walk over all receivers (as ordered in
   (3)) and try to grant some bandwidth on this link.
5) Grant `min(sender.bandwidth_left / sender.links_num, receiver.bandwidth_left /
   receiver.links_num)` on the link.
6) Decrease `sender.links_num` and `receiver.links_num` by one.

The algorithm is a bit tricky, but the intuition behind it is that if a shard can send 3MB more, and
there are 3 active links connected to this shard, then it should send about 1MB on every one of
these links. But it has to respect how much each of the receiver shards can receive. If one of them
can receive only 500kB we can't grant 1MB on this link. 
That's why the algorithm takes the minimum
of how much the sender should send and how much the receiver should receive. Sender and receiver
negotiate the highest amount of data that can be sent between them. Shards are ordered by
`average_link_bandwidth` to ensure high utilization - it gives the guarantee that all shards
processed later will be able to send/receive at least as much as the shard being processed now.

<details>
<summary>
Here's an example, click to expand.
</summary>

Let's say there are three shards, and each shard could send and receive a bit more data (called the remaining budget):

| Shard | Sending budget | Receiving budget |
| ----- | -------------- | ---------------- |
| 0     | 300kB          | 700kB            |
| 1     | 4.5MB          | 100kB            |
| 2     | 1.5MB          | 4.5MB            |

Shard 2 is fully congested, which means that only the allowed shard (shard 1) can send receipts to it.

![State of links before distributing remaining bandwidth](assets/nep-0584/distribute_remaining_example_1.png)

First let's calculate how many active links there are to each shard.
Active links are links that are not forbidden:

| Shard | Active sending links | Active receiving links |
| ----- | -------------------- | ---------------------- |
| 0     | 2                    | 3                      |
| 1     | 3                    | 3                      |
| 2     | 2                    | 1                      |

Average link bandwidth is calculated as the budget divided by the number of active links.

| Shard | Average sending bandwidth | Average receiving bandwidth |
| ----- | ------------------------- | --------------------------- |
| 0     | 300kB/2 = 150kB           | 700kB/3 = 233kB             |
| 1     | 4.5MB/3 = 1.5MB           | 100kB/3 = 33kB              |
| 2     | 1.5MB/2 = 750kB           | 4.5MB/1 = 4.5MB             |

Now let's order senders and receivers by their average link 
bandwidth\n\n| Sender shard | Average link bandwidth |\n| ------------ | ---------------------- |\n| 0            | 150kB                  |\n| 2            | 750kB                  |\n| 1            | 1.5MB                  |\n\n| Receiver shard | Average link bandwidth |\n| -------------- | ---------------------- |\n| 1              | 33kB                   |\n| 0              | 233kB                  |\n| 2              | 4.5MB                  |\n\nAnd now let's distribute the bandwidth: process senders in the order of increasing average link\nbandwidth, and for every sender process the receivers in the same order:\n\nLink (0 -> 1): Sender proposes 300kB/2 = 150kB. Receiver proposes 100kB/3 = 33kB. Grant 33kB\n\n![State of links after granting 33kB on link from 0 to 1](assets/nep-0584/distribute_remaining_example_2.png)\n\nLink (0 -> 0): Sender proposes 267kB/1 = 267kB. Receiver proposes 700kB/3 = 233kB. Grant 233kB\n\n![State of links after granting 233kB on link from 0 to 0](assets/nep-0584/distribute_remaining_example_3.png)\n\nLink (0 -> 2): This link is not allowed. Nothing is granted.\n\nLink (2 -> 1): Sender proposes 1.5MB/2 = 750kB. Receiver proposes 66kB/2 = 33kB. Grant 33kB\n\n![State of links after granting 33kB on link from 2 to 1](assets/nep-0584/distribute_remaining_example_4.png)\n\nLink (2 -> 0): Sender proposes 1467kB/1 = 1467kB. Receiver proposes 467kB/2 = 233kB. Grant 233kB\n\n![State of links after granting 233kB on link from 2 to 0](assets/nep-0584/distribute_remaining_example_5.png)\n\nLink (2 -> 2): This link is not allowed. Nothing is granted.\n\nLink (1 -> 1): Sender proposes 4.5MB/3 = 1.5MB. Receiver proposes 33kB/1 = 33kB. Grant 33kB\n\n![State of links after granting 33kB on link from 1 to 1](assets/nep-0584/distribute_remaining_example_6.png)\n\nLink (1 -> 0): Sender proposes 4467kB/2 = 2233kB. Receiver proposes 233kB/1 = 233kB. 
Grant 233kB\n\n![State of links after granting 233kB on link from 1 to 0](assets/nep-0584/distribute_remaining_example_7.png)\n\nLink (1 -> 2): Sender proposes 4234kB/1 = 4234kB. Receiver proposes 4.5MB. Grant 4234kB\n\n![State of links after granting 4234kB on link from 1 to 2](assets/nep-0584/distribute_remaining_example_8.png)\n\nAnd all of the bandwidth has been distributed fairly and efficiently.\n\n---\n\n</details>\n\nWhen all links are allowed the algorithm achieves very high bandwidth utilization (99% of the\ntheoretical maximum). When some links are not allowed the problem becomes much harder; it starts\nbeing similar to the maximum flow problem. The algorithm still achieves okay utilization (75% of the\ntheoretical maximum), and I think this is good enough. In this case we want a fast heuristic, not a\nslow algorithm that will solve the max flow problem perfectly.\n\nThe algorithm is safe because it never grants more than `min(sender.bandwidth_left,\nreceiver.bandwidth_left)`, so it'll never go over the limits. The utilization and fairness are good,\nbut I don't have a good proof for that, just an intuitive understanding. The algorithm is fast,\nbehaves well in practice and is provably safe, and I think that is good enough.\n\nFor the exact implementation, see https://github.com/near/nearcore/pull/12682\n\n### One block delay\n\nThere's a one block delay between requesting bandwidth and receiving a grant. This is not ideal:\nmost large receipts will have to be buffered and sent out at the next height; it'd be nicer if we\ncould quickly negotiate bandwidth and send them immediately.\n\nIt is a hard problem to solve - a shard doesn't know what other shards want to send, so it needs to\ncontact them and negotiate. 
Maybe it'd be possible to negotiate it off-chain in between blocks, but\nthat would be much more complex - we would have to make sure that the negotiation happens quickly\neven when latency between nodes is high and ensure that everything is fair and secure. The idea is\nexplored further in the `Option D` section, but for now I think we can go with a solution that is\nsimpler and should be good enough, even though it has a one block delay.\n\nAt first glance it might seem that the delay prevents us from using 100% of the bandwidth - a big\nreceipt takes 2 blocks to reach the other shard, doesn't that mean that we get only 50% of the\ntheoretical throughput? Not really: the delay increases latency, but it doesn't affect throughput.\nAn application that wants to utilize 100% of bandwidth can submit the receipts and they'll be queued\nand sent over utilizing 100% of the bandwidth, just with a one block delay. There's no 50% problem.\nAs an example, one can imagine a contract that wants to send 4MB of data to another shard at every\nheight. The contract will produce a 4MB receipt at every height, the shard will generate a 4MB\n`BandwidthRequest` at every height, and the bandwidth scheduler will grant the shard 4MB of\nbandwidth at every height (assuming no requests from other shards). At the first height the 4MB will\nbe buffered, but for all the following heights the shard will have the 4MB grant and it'll be able\nto send 4MB of data to the other shard. We can utilize 100% of the bandwidth despite the delay; we\njust have to make sure that we can buffer ~10MB of receipts in the outgoing queue.\n\n### Performance\n\nThe complexity of the bandwidth scheduler algorithm is `O(num_shards^2 + num_requests *\nlog(num_requests))`, which in the worst case is equal to `O(num_shards^2 * log(num_shards))`. It's\nhard (impossible?) to avoid the `num_shards^2` because the scheduler has to consider every pair of\nshards. 
The `log(num_requests)` comes from sorting by allowance.\n\nThe scheduler works quickly for a low number of shards, but the time needed to run it grows quickly as\nthe number of shards increases. Here's a benchmark of the worst-case scenario performance, measured\non a typical `n2d-standard-8` GCP VM with an AMD EPYC 7B13 CPU:\n\n| Number of shards | Time      |\n| ---------------- | ----------|\n| 6                | 0.13 ms   |\n| 10               | 0.19 ms   |\n| 32               | 1.85 ms   |\n| 64               | 5.80 ms   |\n| 128              | 23.98 ms  |\n| 256              | 97.44 ms  |\n| 512              | 385.97 ms |\n\nIt's important to note that this is worst-case performance, with all shards wanting to send a ton of\nsmall receipts to other shards. Usually the number of bandwidth requests will be lower and the\nscheduler will work quicker than that.\n\nThe current version of the scheduler should work fine up to 50-100 shards; after that we'll probably\nneed some modifications. A quick solution would be to randomly choose half of the shards at every\nheight and only grant bandwidth between them, which would cut `num_shards` in half. There's also some\npotential for parallelization: the bandwidth grants could be calculated in parallel with the application\nof the action receipts. I think we can worry about it when we reach 100 shards; with this many\nshards the environment and typical patterns will probably change a lot, and we can analyze them and\nmodify the scheduler accordingly.\n\n### Byzantine fault tolerance\n\nThe bandwidth scheduler needs to be resistant to malicious actors. All validators have to check that\nbandwidth requests are produced and processed correctly, and reject chunks where this isn't the\ncase.\n\nThe bandwidth scheduler is run during chunk application: all chunk validators run the same bandwidth\nscheduler with the same inputs and generate the same results. 
Any discrepancy in the scheduler would\nresult in a different state after chunk application, which would cause the validation to fail.\n\nThe produced bandwidth requests are stored in the chunk header, along with other things produced\nwhen applying a chunk. Chunk validators apply the chunk and verify that the produced data matches the\ndata in the chunk header; they will not endorse a chunk if it doesn't match. The data from the previous\nchunk header (like previous bandwidth requests) can be trusted because the previous chunk headers\nwere endorsed by chunk validators at the previous height.\n\nThe logic is pretty much identical to the one used to validate CongestionInfo for congestion control.\n\n### Testing\n\nThe bandwidth scheduler is pretty complex, and it's a bit hard to reason about how things really flow,\nso it's important to test it well. There are a bunch of tests which run some kind of workload and\ncheck whether the parameters look alright. The two main parameters are:\n\n- Utilization - are receipts sent as fast as theoretically possible? Utilization should be close to\n  100% for small receipts. With big receipts it should be at least 50%. (If all receipts have size\n  `max_shard_bandwidth / 2 + 1` we can only send one such receipt per height, and we get ~50%\n  utilization)\n- Fairness - is every link sending the same number of bytes on average? As long as all outgoing\n  buffers are full, all links should send about the same amount of data on average; there should be\n  no favorites.\n\nThe scheduler algorithm was tested in two ways:\n\n- On a blockchain simulator, which simulates a few shards sending receipts between each other, along\n  with missing chunks and blocks. It doesn't take into account other mechanisms like congestion\n  control. It was used to test utilization and fairness in various scenarios, without interference\n  from other congestion mechanisms. 
A simulator makes it possible to quickly run a test over thousands of\n  blocks, which would take minutes in actual `nearcore`.\n- In testloop, which runs the actual blockchain code in a deterministic way. The tests are slower,\n  but test the actual code that will run in the real world. They also make it possible to test the interaction with\n  other mechanisms like congestion control.\n\nThe simulator tests went well. Utilization and fairness were good; the only issue that these tests\nfound is that a chunk might sometimes receive more than `max_shard_bandwidth` because of missing\nchunks, which is a known issue with the design.\n\nThe testloop tests were a bit below expectations. It seems like there are other mechanisms that\nprevent us from reaching full cross-shard bandwidth utilization. It was hard to reach a state where\nall of the outgoing buffers were full and the scheduler could go at full speed. I plan to add more\nobservability, which should shed more light on what exactly is going on there. Still, the test results\nwere okay; the scheduler works reasonably well. 
Testing in testloop also helped find some bugs\nin the congestion control integration.\n\n## Reference Implementation\n\nHere are the PRs which implement this NEP in `nearcore`:\n\n- https://github.com/near/nearcore/pull/12234: wiring for bandwidth scheduler\n- https://github.com/near/nearcore/pull/12307: Do bandwidth scheduler header upgrade the same way as for congestion control\n- https://github.com/near/nearcore/pull/12333: implement the BandwidthRequest struct\n- https://github.com/near/nearcore/pull/12464: generate bandwidth requests based on receipts in outgoing buffers\n- https://github.com/near/nearcore/pull/12511: use u16 for shard id in bandwidth requests\n- https://github.com/near/nearcore/pull/12533: bandwidth scheduler\n- https://github.com/near/nearcore/pull/12682: distribute remaining bandwidth\n- https://github.com/near/nearcore/pull/12694: make BandwidthSchedulerState versioned\n- https://github.com/near/nearcore/pull/12719: slightly increase base_bandwidth\n- https://github.com/near/nearcore/pull/12728: include parent's receipts in bandwidth requests\n- https://github.com/near/nearcore/pull/12747: Remove BandwidthRequestValues which can never be granted\n\n## Security Implications\n\n- Risk of too many incoming receipts to a chunk. There are certain corner cases in which a\n  chunk could end up with more than `max_shard_bandwidth` of incoming receipts, up to `num_shards *\n  max_shard_bandwidth` in the absolute worst case. This was already a problem before the bandwidth\n  scheduler; the scheduler slightly increases protection against such situations, but they could\n  still happen. The consequence of too many receipts would be increased witness size, which could\n  cause missing chunks, and in extreme cases even chain stalls. 
The corner case is pretty hard to\n  trigger for an attacker; they would have to find a way to precisely cause multiple missing chunks,\n  which would be a vulnerability by itself.\n- The code is a bit complex, so there's a risk of a bug somewhere in the code. We trade complexity for\n  higher performance. The most likely bug to happen would be some kind of liveness issue, e.g.\n  receipts getting stuck in the buffer without ever being sent. The worst-case scenario would probably\n  be a panic somewhere in the code, which could cause a shard to stall.\n- Better protection against DoS attacks. Cross-shard throughput in the previous solution was pretty\n  low, which made it easier to run DoS attacks by generating a ton of cross-shard receipts.\n  Bandwidth scheduler significantly increases cross-shard throughput, which makes such attacks less\n  viable.\n\n## Alternatives\n\nThere were a few alternative designs that we considered:\n\n### Partial receipts\n\nA lot of problems come from the fact that receipts can be big (up to 4MB in the current protocol\nversion). It means that we have to choose which receipts can be sent and which ones must wait for\ntheir turn. It would be much easier if all of the receipts were small (say, under 1kB); we would be\nable to divide bandwidth in a more continuous way, e.g. just grant `max_shard_bandwidth /\nnum_shards` on every link. What if we split big receipts into parts? A 4MB receipt would be split\ninto 4000 partial receipts of 1kB each. Partial receipts would be sent to the receiver shard over\nmultiple block heights and saved to the receiver's state. Once all of the partial receipts are\navailable, the whole receipt would be rebuilt from the parts stored in the state.\n\nThis would work fine with stateful validation, but it isn't really viable for stateless validation. In\nstateless validation everything that is read from the state must be included in the\n`ChunkStateWitness`. 
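For concreteness, the splitting step of this rejected idea could look like the sketch below (hypothetical names; with 1kB parts a 4MB payload yields exactly 4096 parts, which the text above rounds to "4000"):

```rust
// Hypothetical sketch of the rejected "partial receipts" idea: split a big
// receipt payload into fixed-size parts that would be sent over multiple
// block heights and reassembled on the receiver shard.
const PART_SIZE: usize = 1024; // 1kB parts

fn split_into_parts(receipt_bytes: &[u8]) -> Vec<Vec<u8>> {
    receipt_bytes.chunks(PART_SIZE).map(|part| part.to_vec()).collect()
}

fn main() {
    // A 4MB receipt becomes 4096 parts of at most 1kB each.
    let receipt = vec![0u8; 4 * 1024 * 1024];
    let parts = split_into_parts(&receipt);
    assert_eq!(parts.len(), 4096);
    assert!(parts.iter().all(|p| p.len() <= PART_SIZE));
}
```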
Reconstructing a receipt from its parts would require adding all of the parts\nto the witness. This means that every receipt will be included in the witness twice - first as the\nincoming partial receipts, and then as the parts used to rebuild the whole receipt. This effectively\nhalves the witness bandwidth available for incoming receipts. And eventually we would include the\nwhole receipt in the witness anyway, so we might as well send the whole receipt immediately. There\ncould also be problems with fairness when choosing which receipts to rebuild - we might not be able\nto rebuild all of them, as that could cause the witness size to explode, so we'd need to have some sort\nof fair scheduler to choose which ones to include in the witness. At this point the problem becomes\neerily similar to the bandwidth scheduler, just with more complications.\n\n### Chunk producers choosing incoming receipts (also called `Option D` in some docs)\n\nBandwidth scheduler limits things from the sender side, but really it would be better to do that on\nthe receiver side, since the sender usually has limited information about the receiver's exact state.\nLet's allow the chunk producer to choose the incoming receipts that it includes in a chunk. All\nshards would produce outgoing receipts and publish some small metadata about them (e.g. this many\nreceipts, with this size, and this much gas). Chunk producers would read the metadata and fetch the\nreceipts as fast as they can. When it's time to produce a chunk, the chunk producer would include\nonly the incoming receipts that it was able to fetch so far; it wouldn't be forced to include all\nof the receipts that were sent. We would need some additional mechanism to limit how many receipts\nare sent but not included, but that's doable. It would make the bandwidth limits correspond\nexactly to the actual networking situation in the chain. 
It's similar to the approach that TCP flow\ncontrol uses: the sender sends something, and the receiver receives as fast as it can and sends acks for the\nthings it received. If the sender notices that it's sending data too fast, it slows down. This solution\nalso has the potential to eliminate the one-block delay that is present in the bandwidth scheduler. I feel\nlike this is the \"proper\" way that things should be done, but this approach was not chosen for two\nreasons:\n\n- It's much harder to implement than the bandwidth scheduler. The bandwidth scheduler can be added\n  relatively painlessly in the runtime; this approach would require a lot of delicate changes to\n  various subsystems. The bandwidth scheduler solves 80% of the problem for 50% of the effort.\n- There are various security considerations around giving the chunk producer more choice. A malicious\n  chunk producer could choose the receipts that it prefers, giving it extra power over what happens\n  on the blockchain, which could lead to security vulnerabilities.\n\n## Future possibilities\n\n### Better handling of missing chunks\n\nThe current way of handling missing chunks isn't as good as it could be. In some cases it's possible\nfor a shard to receive more than `max_shard_bandwidth` of receipts. It'd be good to improve the\nscheduler to guard against such situations. However, it's also possible that the concept of missing\nchunks will disappear if we move to SPICE or some other consensus mechanism.\n\n### Merging bandwidth scheduler with congestion control\n\nThe size of outgoing receipts is limited by the bandwidth scheduler, but their gas is limited by\ncongestion control. 
This is awkward; there could potentially be situations where the scheduler\ngrants a lot of bandwidth, but receipts can't be sent because of gas limits imposed by congestion\ncontrol.\n\nI think ideally they should be merged into one `CongestionScheduler` which would look at how much size\nand gas there is in every outgoing buffer and grant bandwidth/gas based on that. It could even\ndetect cycles and allow the receipts to progress in a smart way, which the current congestion\ncontrol can't do.\n\nBut that would be a big effort; for now we have two separate mechanisms for gas and size, which solves\nmost of the problems, even if it isn't ideal.\n\n### Don't put bandwidth requests in the chunk header\n\nThe chunk header should be as small as possible, and putting bandwidth requests there could add tens\nof bytes to each header. We could distribute the requests separately and include only their Merkle\nroot in the chunk header.\n\n### Optimize witness size when a chunk receives too many receipts\n\nTo deal with situations where there are too many incoming receipts to a chunk, we could add a new\nrule for chunk application: only the first 4MB of incoming receipts are processed; the rest of the\nincoming receipts will always be delayed. Thanks to this rule we would be able to do a trick: for\nthe first 4MB of incoming receipts, include the actual receipts in the witness; for the rest, include\nonly lightweight metadata that will be put in the delayed queue. Later, when the metadata is read\nfrom the delayed queue, we will include the actual receipts in the witness and they'll be executed.\n\nSo if there's 8MB of incoming receipts the chunk producer would include only the first 4MB in the\nwitness; for the rest there would be metadata that will be put in the delayed queue. 
At the next height\nthe metadata would be read from the delayed queue; the next chunk producer would include these\nreceipts in the witness and they'd be processed.\n\nThis would keep the witness size small, even when there are too many incoming receipts.\n\n## Consequences\n\n### Positive\n\n- Higher cross-shard throughput\n- Lower latency for big cross-shard receipts\n- Better scalability\n- Slightly better protection against large incoming receipts than the previous solution\n\n### Neutral\n\n- ?\n\n### Negative\n\n- More complexity in the runtime, higher potential for bugs\n- Additional compute when applying a chunk\n\n### Backwards Compatibility\n\nThe change is backwards-compatible. Everything that worked before the change will still work after\nit.\n\n## Unresolved Issues (Optional)\n\n[Explain any issues that warrant further discussion. Considerations\n\n- What parts of the design do you expect to resolve through the NEP process before this gets merged?\n- What parts of the design do you expect to resolve through the implementation of this feature\n  before stabilization?\n- What related issues do you consider out of scope for this NEP that could be addressed in the\n  future independently of the solution that comes out of this NEP?]\n\nMost of the issues are resolved; the change should be ready for stabilization.\n\n## Changelog\n\n[The changelog section provides historical context for how the NEP developed over time. Initial NEP\nsubmission should start with version 1.0.0, and all subsequent NEP extensions must follow [Semantic\nVersioning](https://semver.org/). Every version should have the benefits and concerns raised during\nthe review. The author does not need to fill out this section for the initial draft. Instead, the\nassigned reviewers (Subject Matter Experts) should create the first version during the first\ntechnical review. 
After the final public call, the author should then finalize the last version of\nthe decision context.]\n\n### 1.0.0 - Initial Version\n\n> Placeholder for the context about when and who approved this NEP version.\n\n#### Benefits\n\n> List of benefits filled by the Subject Matter Experts while reviewing this version:\n\n- Benefit 1\n- Benefit 2\n\n#### Concerns\n\n> Template for Subject Matter Experts review for this version: Status: New | Ongoing | Resolved\n\n|   # | Concern | Resolution | Status |\n| --: | :------ | :--------- | -----: |\n|   1 |         |            |        |\n|   2 |         |            |        |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0591.md",
    "content": "---\nNEP: 591\nTitle: Global Contracts\nAuthors: Bowen Wang <bowen@nearone.org>, Anton Puhach <anton@nearone.org>, Stefan Neamtu <stefan.neamtu@nearone.org>\nStatus: Final\nDiscussionsTo: https://github.com/nearprotocol/neps/pull/591\nType: Protocol\nReplaces: 491\nVersion: 1.0.0\nCreated: 2025-02-11\nLastUpdated: 2025-03-17\n---\n\n## Summary\n\nThis proposal introduces global contracts, a new mechanism that allows smart contracts to be deployed once and reused by any account without incurring high storage costs.\n\nCurrently, deploying the same contract multiple times on different accounts leads to significant storage fees.\nGlobal contracts solve this by making contract code available globally, allowing multiple accounts to reference it instead of storing their own copies.\n\nRather than requiring full storage costs for each deployment, accounts can simply link to an existing global contract, reducing redundancy and improving scalability. This approach optimizes storage, lowers costs, and ensures efficient contract distribution across the network.\n\n## Motivation\n\nA common use case on NEAR is to deploy the same smart contract many times on many different accounts. For example, a multisig contract is a frequently deployed contract.\nHowever, today each time such a contract is deployed, a user has to pay for its storage and the cost is quite high. For a 300kb contract the cost is 3N.\n\nWith the advent of chain signatures, the smart contract wallet use case will become more ubiquitous.\nAs a result, it is very desirable to be able to reuse already deployed contract without having to pay for the storage cost again.\n\nAdditionally, global contracts cover the underlying use case for [NEP-491](https://github.com/near/NEPs/pull/491): [#12818](https://github.com/near/nearcore/pull/12818).\nSome businesses onboard their users by creating an account on their behalf and deploying a contract to it. 
Such a use case is susceptible to refund abuse, where there is a financial incentive to repeatedly create and destroy an account, cashing in on the storage fee that was initially paid by the business to deploy the contract.\n\n## Specification\n\nA global contract can be deployed in two ways: either by its hash or by the owner's account id.\nContracts deployed by hash are effectively immutable and cannot be updated.\nWhen deployed by account id, the owner can redeploy the contract, updating it for all its users.\nUsers can use contracts deployed by hash if they prefer having control over contract updates. In order to update the contract, the user would have to explicitly switch to a different version of the contract deployed under a different hash.\nContracts deployed by account id should be used when the user trusts the contract developers to update the contract for them. For example, if user accounts are created specifically for some application to be onchain.\n\nWe introduce a new receipt action for deploying global contracts:\n\n```rust\nstruct DeployGlobalContractAction {\n    code: Vec<u8>,\n    deploy_mode: GlobalContractDeployMode,\n}\n\nenum GlobalContractDeployMode {\n    /// Contract is deployed under its code hash.\n    /// Users will be able to reference it by that hash.\n    /// This effectively makes the contract immutable.\n    CodeHash,\n    /// Contract is deployed under the owner account id.\n    /// Users will be able to reference it by that account id.\n    /// This allows the owner to update the contract for all its users.\n    AccountId,\n}\n```\n\nThe user pays for storage by burning NEAR tokens from their balance, depending on the deployed contract size.\nA global contract is not checked to be compilable wasm (just like in the case of a regular contract), so it is possible to deploy invalid wasm, and that still burns tokens.\n\nA new action is also added for using a previously deployed global contract:\n\n```rust\nstruct UseGlobalContractAction {\n    contract_identifier: 
GlobalContractIdentifier,\n}\n\nenum GlobalContractIdentifier {\n    CodeHash(CryptoHash),\n    AccountId(AccountId),\n}\n```\n\nThis action is similar to deploying a regular contract to an account, except the user does not cover the storage deposit.\nUsing a non-existent global contract (whether by hash or by account id) results in a `GlobalContractDoesNotExist` action error, so users have to wait for the global contract distribution to be completed in order to start using the contract.\n\n## Reference Implementation\n\n### Storage\n\nIn order to have global contracts available to users on all shards, we store a copy in each shard's trie.\nA new trie key is introduced for that:\n\n```rust\npub enum TrieKey {\n    ...\n    GlobalContractCode {\n        identifier: GlobalContractCodeIdentifier,\n    },\n}\n\npub enum GlobalContractCodeIdentifier {\n    CodeHash(CryptoHash),\n    AccountId(AccountId),\n}\n```\n\nThe value is contract code bytes, similar to `TrieKey::ContractCode`.\n\n### Distribution\n\nA global contract has to be distributed to all shards after being deployed.\nThis is implemented with a dedicated receipt type:\n\n```rust\nenum ReceiptEnum {\n    ...\n    GlobalContractDistribution(GlobalContractDistributionReceipt),\n}\n\nenum GlobalContractDistributionReceipt {\n    V1(GlobalContractDistributionReceiptV1),\n}\n\nstruct GlobalContractDistributionReceiptV1 {\n    id: GlobalContractIdentifier,\n    target_shard: ShardId,\n    already_delivered_shards: Vec<ShardId>,\n    code: Arc<[u8]>,\n}\n```\n\nA `GlobalContractDistribution` receipt is generated as a result of processing `DeployGlobalContractAction`.\nSuch a receipt contains the destination shard `target_shard` as well as the list of already visited shard ids `already_delivered_shards`.\nApplying a `GlobalContractDistribution` receipt updates the corresponding `TrieKey::GlobalContractCode` in the trie for the current shard.\nIt also generates a distribution receipt for the next shard in the current shard layout. 
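The forwarding step can be sketched as follows. This is a hypothetical helper (shard ids simplified to `u64`, function name made up), not the actual nearcore routing logic:

```rust
// Illustrative sketch: after applying a distribution receipt on the current
// shard, pick the next shard (if any) that hasn't been delivered to yet.
// If every shard is in `already_delivered`, distribution is complete and no
// further receipt is generated.
fn next_target_shard(all_shards: &[u64], already_delivered: &[u64]) -> Option<u64> {
    all_shards
        .iter()
        .copied()
        .find(|shard| !already_delivered.contains(shard))
}

fn main() {
    // Shards 0..=3; shards 0 and 1 already have the contract.
    assert_eq!(next_target_shard(&[0, 1, 2, 3], &[0, 1]), Some(2));
    // All shards delivered: distribution stops, no receipt stays in flight.
    assert_eq!(next_target_shard(&[0, 1, 2], &[0, 1, 2]), None);
}
```

Because the next receipt targets exactly one not-yet-delivered shard, at most one distribution receipt per deployment is in flight at any time, as the text describes.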
This process continues until `already_delivered_shards` contains all shards.\nThis way we ensure that at any point in time there is at most one `GlobalContractDistribution` receipt in flight for a given deployment, and that eventually it will reach all shards.\nSuch distribution also works well at the resharding boundary. If the receipt is applied with the old shard layout, then storage resharding will ensure the presence of the contract in both child shards. In case of application in the new layout, the distribution logic will take care of forwarding the receipt to the newly introduced child shards.\n\nNote that `GlobalContractDistribution` does not target any specific account (`system` is used as a placeholder) and `target_shard` is used for receipt routing.\n\n### Usage\n\nWe change the `Account` struct to make it possible to reference global contracts.\n`AccountV2` is introduced, changing the `code_hash: CryptoHash` field to the more generic `contract: AccountContract`:\n\n```rust\nenum AccountContract {\n    None,\n    Local(CryptoHash),\n    Global(CryptoHash),\n    GlobalByAccount(AccountId),\n}\n```\n\nApplying `UseGlobalContractAction` updates the user account's `contract` field accordingly.\n\n`FunctionCall` action processing is updated to respect global contracts. 
This includes updating the [contract preparation pipeline](https://github.com/near/nearcore/blob/fb95d7b7740d1fda9245afa498ce4e9ac145c8af/runtime/runtime/src/pipelining.rs#L24) as well as the [recording of the executed contract to be included in the state witness](https://github.com/near/nearcore/blob/fb95d7b7740d1fda9245afa498ce4e9ac145c8af/core/store/src/trie/update.rs#L338).\n\n### Costs\n\nFor global contract deployment we burn tokens for storage instead of locking them, as we do for regular contracts today.\nThe cost per byte of global contract code `global_contract_storage_amount_per_byte` is set as 10x the storage staking cost `storage_amount_per_byte`.\n\nAdditionally, we add action costs for the global contract related actions:\n\n* `action_deploy_global_contract` is exactly the same as `action_deploy_contract`\n* `action_deploy_global_contract_per_byte`:\n  * send costs are the same as `action_deploy_contract_per_byte`\n  * execution costs should cover distribution of the contract to all shards:\n    * this is pretty expensive for the network, so we want to charge a significant amount of gas for that\n    * we still want to be able to fit the max allowed contract size into a single chunk's space: `max_gas_burnt = 300_000_000_000_000`, `max_contract_size = 4_194_304`, so it should be at most `max_gas_burnt / max_contract_size = 71_525_573`\n    * we need to allow for some margin for other costs, so we can round it down to `70_000_000`\n\nUsing a global contract incurs two costs, as follows:\n\n* `action_use_global_contract`\n  * mirrors `action_deploy_contract`\n  * base cost for processing a usage action\n* `action_use_global_contract_per_identifier_byte`\n  * mirrors `action_deploy_contract_per_byte`, but based on the global contract identifier length\n  * introduced because the `AccountId` in `GlobalByAccount(AccountId)` variant can vary in length, unlike `Global(CryptoHash)` with a fixed size of 32 bytes\n\nReferencing a global contract locks tokens for storage in 
accordance with `storage_amount_per_byte` based on the global contract identifier length, calculated similarly to `action_use_global_contract_per_identifier_byte`.\n\nAlthough the cost structure is similar to that of single shard contract deployments, the overall fees are significantly lower because only a few bytes are stored for the reference. This is desirable, because referencing a global contract is not an expensive operation for the network.\n\n## Security Implications\n\nOne potential issue is the increasing infrastructure cost of global contracts as the number of shards grows.\nA global contract is effectively replicated on every shard, so with an increase in the number of shards, each global contract uses more storage.\nThis could potentially be addressed in the future by making deployment costs parametrized with the number of shards in the current epoch, but it still wouldn't address the issue for the already deployed contracts.\n\n## Alternatives\n\nIn [the original proposal](https://github.com/near/NEPs/issues/556) we considered storing global contracts in a separate global trie (managed at the block level) and introducing a dedicated distribution mechanism.\nWe decided not to proceed with this approach as it requires significantly more effort to implement and also introduces new critical dependencies for the protocol.\n\n## Future possibilities\n\nGlobal contracts can potentially be used as part of the sharded contracts effort. Sharded contract code should be available on all shards, so using global contracts for that might be a good choice.\n\n## Changelog\n\n[The changelog section provides historical context for how the NEP developed over time. Initial NEP submission should start with version 1.0.0, and all subsequent NEP extensions must follow [Semantic Versioning](https://semver.org/). Every version should have the benefits and concerns raised during the review. The author does not need to fill out this section for the initial draft. 
Instead, the assigned reviewers (Subject Matter Experts) should create the first version during the first technical review. After the final public call, the author should then finalize the last version of the decision context.]\n\n### 1.0.0 - Initial Version\n\n> Placeholder for the context about when and who approved this NEP version.\n\n#### Benefits\n\n> List of benefits filled by the Subject Matter Experts while reviewing this version:\n\n* Benefit 1\n* Benefit 2\n\n#### Concerns\n\n> Template for Subject Matter Experts review for this version:\n> Status: New | Ongoing | Resolved\n\n|   # | Concern | Resolution | Status |\n| --: | :------ | :--------- | -----: |\n|   1 |         |            |        |\n|   2 |         |            |        |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0611.md",
    "content": "---\nNEP: 611\nTitle: Pending Transaction Queue and Gas Keys\nAuthors: Robin Cheng <robin@nearone.org>, Darioush Jalali <darioush.jalali@nearone.org>\nStatus: Draft\nDiscussionsTo: https://github.com/near/NEPs/pull/611\nType: Protocol\nVersion: 1.2.2\nCreated: 2025-05-28\nLastUpdated: 2026-03-05\n---\n\n## Summary\n\nIn the near future of the Near blockchain, we foresee that via the SPICE project, transaction and receipt\nexecution will become decoupled from the blockchain itself; they will no longer run in lockstep.\nInstead, transactions will be included in the blocks first, and then execution will follow later.\n\nThis inherently introduces a problem that we must accept transactions before we know whether they are\nvalid. Today, when a chunk producer produces a chunk containing a transaction, it can verify using the\ncurrent shard state that the transaction has a valid signature and a valid nonce, and that the corresponding account has enough balance.\nBut as execution becomes asynchronous, we no longer have the current shard state to verify the\ntransactions against.\n\nThis NEP proposes a mechanism called the Pending Transaction Queue to solve this problem.\n\n## Motivation\n\n### Why is this worth solving?\n\nA potential for DoS attacks exists whenever the blockchain allows anyone to submit work without paying.\nInvalid transactions present such a vulnerability: if a transaction is included in a block (or more\nprecisely, a chunk of the block) but ends up being invalid because the sender does not have enough\nbalance, this transaction takes block space but cannot be charged against anyone.\n\nA very easy-to-perform attack exists if we do nothing to mitigate the problem:\n\n* The attacker creates two accounts, $A$ and $B$, with sufficiently many access keys each, and deploying\n  a specific contract for both accounts that provides a `send_near` function.\n* The attacker deposits 10 NEAR in $A$.\n* The attacker then performs the following, 
repeatedly:\n  * Submit a transaction to call $A$'s `send_near` function, instructing it to send the account's remaining\n    balance to $B$.\n  * Right after, the attacker floods the blockchain by signing and submitting many (arbitrary) transactions as $A$.\n    Because execution is asynchronous, the chunk producers think that there is still enough balance in\n    $A$'s account, so these transactions are accepted into chunks.\n  * Some blocks later, the execution catches up, and the `send_near` function drains $A$'s account.\n  * Execution of the subsequent transactions then fails because $A$ has insufficient balance.\n  * After this is done, the attacker repeats but with $A$ and $B$ swapped.\n\nThis attack can be carried out with a very simple script and requires no cost other than a single contract\ncall every few blocks, but it ends up filling up the blockchain, preventing legitimate transactions from\nbeing included. This is also very hard to defend against, because the attacker can simply create more\naccounts. Note that this attack pattern is not the only problematic one; instead of sending away the\nbalance, the attacker can also delete access keys or delete the whole account.\n\nAt a high level, this solution has two key ideas:\n\n1. Limiting the number of in-flight transactions for accounts with contracts: This preserves backwards compatibility for most use cases and also limits the amount of spam such accounts can include on chain.\n\n2. Allowing users with sophisticated issuance needs to prepay for gas and have many in-flight transactions. This NEP additionally includes a way to associate a single signing key and balance with an array of nonces, to provide a convenient way for parallel transaction issuance for such users.\n\n## Specification\n\nTo solve this problem, we present two critical components which work together to ensure that all accepted\ntransactions are valid. 
At a high level, they are:\n\n* **Access Key vs Gas Key**: We introduce a new type of access key, which we will call \"Gas Key\". A gas key can be funded with NEAR, and when issuing a transaction using a gas key, the transaction pays for gas from the gas key's balance, not from the account's.\n* **Pending Transaction Queue**: Chunk producers keep track of pending (accepted into a block, but not\n  executed) transactions, and ensure that transactions sent with access keys are limited in parallelism,\n  whereas transactions sent with gas keys are limited by the balance available in the gas key.\n\nWe will now specify how exactly they work.\n\n### Definition of \"Pending Transactions\"\n\nIn a model where execution follows but lags behind consensus, there are transactions which are accepted\ninto consensus and thus committed to be executed in the near future, but are not yet executed. This set\nof transactions is called the *pending transactions*. We always discuss this in the context of one shard.\n\nNote that there are two slightly different ways to treat this definition, depending on how exactly the\n\"execution head\" (how far the execution has caught up) is defined:\n\n* It can be defined locally as the progress of execution at a node, but this will be different between nodes.\n* It can be defined deterministically as the last block whose execution result is certified by consensus.\n\nFor the purpose of this NEP, we use the latter definition, so that the notion of pending transactions is\nconsistent across all nodes and the determination of what transactions are eligible to be included by\na chunk producer can be verified -- even though we do not plan to implement this verification right now.\n\nAnother note is that the notion of pending transactions is anchored at a specific chunk that is being produced. 
In case of forks, we use the block that the chunk is being produced on top of to compute the\nset of pending transactions.\n\nFinally, this NEP does *not* depend on the implementation of SPICE. In the context where SPICE is not\nyet in effect, we consider the pending transaction queue to always be empty (despite technically being\none chunk's worth of transactions due to execution lagging one block behind in the current implementation),\nbecause we can always verify the validity of all transactions at the moment of inclusion.\n\n### Access Key vs Gas Key\n\nWe introduce a new transaction version, `TransactionV1`, which can specify either an access key\nor a gas key. Any older transaction version is equivalent to a `TransactionV1` that specifies an access\nkey.\n\nNote: The nearcore reference implementation already includes a `TransactionV1` struct, which is unused.\nThis struct was intended for adding support for a `priority_fee`, though this was never activated and there is no plan to activate it, so it will be removed with this NEP instead of being added to the new transaction format. This prioritizes protocol simplicity over backwards compatibility for crates that may have taken a dependency on an inactive feature.\n\n#### Access Key Parallelism Restriction\n\nWe now restrict the ability to send multiple parallel pending transactions with access keys from\naccounts that also have a contract deployed.\n\nSpecifically, for any given account $A$ that has a contract deployed, the total number of access key transactions (across all access keys in the account) in the pending transaction queue whose sender is $A$\ncannot exceed $P_{\\mathrm {max}}$, a constant determined by the epoch; we propose $P_{\\mathrm {max}} = 4$.\n\nIn other words, with traditional access keys, one cannot send more than 4 transactions from an account\nwith a contract deployed, before they are executed. 
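As an illustration, the admission check a chunk producer could run for this limit might look like the following sketch (the names `PendingQueue`, `P_MAX`, and `may_accept_access_key_tx` are hypothetical, not nearcore API):

```rust
use std::collections::HashMap;

/// Hypothetical per-epoch constant; this NEP proposes 4.
const P_MAX: usize = 4;

/// Illustrative sketch of the bookkeeping a chunk producer could use
/// to enforce the access-key parallelism limit for contract accounts.
struct PendingQueue {
    /// Count of pending access-key transactions per sender account.
    pending_access_key_txs: HashMap<String, usize>,
}

impl PendingQueue {
    /// An access-key transaction from `sender` may only be accepted if
    /// the sender has no contract deployed, or fewer than `P_MAX`
    /// access-key transactions from it are already pending.
    fn may_accept_access_key_tx(&self, sender: &str, has_contract: bool) -> bool {
        let pending = self
            .pending_access_key_txs
            .get(sender)
            .copied()
            .unwrap_or(0);
        !has_contract || pending < P_MAX
    }
}
```

Note that gas key transactions are not counted here; they are constrained separately by the gas key's balance, as described later.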
If one wishes to send more transactions in parallel,\nthey would need to create a gas key.\n\nFrom a UX perspective, we pick the $P_{\\mathrm {max}}$ constant so that it is very likely that anyone\nexceeding this parallelism is a developer who needs to send parallel transactions to the contract. In\nsuch cases, we require them to use gas keys. This is not a pure limitation; as we will see later, a gas\nkey also simplifies the developer's implementation for sending parallel transactions.\n\nAccounts that do not have a contract deployed are not subject to any parallelism restriction.\n\n#### Gas Keys\n\nThis NEP adds Gas Keys, **conceptually** defined as the following:\n\n```rust\nstruct ConceptualGasKey {\n    public_key: PublicKey,\n    nonces: Vec<Nonce>,\n    balance: Balance,\n    permission: AccessKeyPermission,\n}\n```\n\nGas keys are managed using the standard key management actions (`AddKey`, `DeleteKey`) along with new transfer actions to move balance to/from gas keys:\n\n* `AddKey(PublicKey, AccessKeyPermission)` with a gas key permission: Creates a gas key with the given public key, permission, and number of nonces (specified in the permission).\n* `DeleteKey(PublicKey)`: Deletes a gas key or access key.\n* `TransferToGasKey(PublicKey, Balance)`: Deducts balance from the account and gives it to the gas key.\n* `WithdrawFromGasKey(PublicKey, Balance)`: Deducts the balance from the gas key and gives it to the account.\n\nSince gas keys are a kind of access key, they share a namespace. This means that the user is not allowed to create a gas key with the same public key as an existing access key. This is important for refund handling.\n\n### Gas Key Actions\n\n`AddKey` when adding a gas key is verified and executed as follows:\n\n* The account must already exist.\n* Check that the same key does not already exist as a gas key or access key.\n* The requested number of nonces must be less than or equal to `GasKey::MAX_NONCES` (currently suggested as 1024). 
This is to limit the number of trie operations for a single action.\n* Increases the storage usage of the associated account.\n* Check that if the permission is a `FunctionCallPermission`, the allowance is `None` (unlimited\n  allowance).\n* A new `GasKey` entry is added to the trie.\n* For each nonce ID (from 0 to the number of nonces minus 1), store the default nonce in the trie.\n* The default nonce is block height * 1e6, the same as the default nonce calculation for access keys.\n\n`TransferToGasKey` is verified and executed as follows:\n\n* The gas key must exist under the account.\n* The cost of this action is the amount to fund; it is then verified the same way as a `Transfer` action.\n* To apply this action, the balance on the gas key is increased by the same amount.\n\n`WithdrawFromGasKey` is verified and executed as follows:\n\n* The cost of this action is similar to the cost of `Transfer`; however, the deposit amount is not included.\n* The specified gas key must have sufficient balance to cover the withdrawal.\n* To apply this action, the balance on the gas key is decreased by the specified amount, which is credited to the account balance.\n\n`DeleteKey` when deleting a gas key is verified and executed as follows:\n\n* The gas key must exist under the account.\n* To apply this action, all relevant trie nodes are deleted.\n* Decreases the storage usage of the associated account.\n* The remaining balance left in the key is **burned**. 
This prevents an attack where an adversary floods the chain with transactions, then deletes the key to reclaim the balance before the transactions execute.\n  * To protect the user from accidental loss of funds, this action fails if the balance of the key is more than 1 NEAR.\n  * Note that this does not mean that all balance in a gas key is eventually burned; one can withdraw from\n    a gas key using the `WithdrawFromGasKey` action.\n* To deter creating many keys and then deleting them all at once (which could consume excessive computation in a single chunk), a per-nonce compute cost is charged during execution. Combined with the 1024 max limit on number of nonces, this provides sufficient protection against computational DoS.\n\n#### Modifications to existing actions\n\n`AddKey` is extended to support a new gas key permission type. Since gas keys and access keys share a namespace, `AddKey` will fail if the public key is already registered as either kind of key. See [Gas Key Actions](#gas-key-actions) for full validation and execution details when adding a gas key.\n\n`DeleteKey` can delete both access keys and gas keys. When deleting a gas key, any remaining balance is burned.\n\n`DeleteAccount` can succeed even if the account has associated gas keys.\n\n* If the sum of gas key balances is less than or equal to 1 NEAR, balances are burnt.\n* If the sum of gas key balances is greater than 1 NEAR, the action fails, to protect the user against accidental loss.\n\n\n#### Cost of Gas Key actions\n\n* Cost of `AddKey` when adding a gas key will be based on the cost of adding a new access key. In addition, appropriate fees will be charged per nonce to account for trie operations. These per-nonce fees are collected during the `AddKey` action.\n* Cost of `DeleteKey` when deleting a gas key will be based on the cost of deleting an access key. In addition, per-nonce compute costs are charged during execution to deter creating many keys and deleting them in the same chunk. 
This per-nonce compute cost, combined with the `GasKey::MAX_NONCES` limit on nonces, should provide sufficient protection against computational DoS attacks.\n* Cost of `TransferToGasKey` and `WithdrawFromGasKey` will be based on `Transfer`, as it is the most similar existing action. `WithdrawFromGasKey` and `TransferToGasKey` do additional trie modifications on sending and on execution. These will be accounted for based on trie operations.\n\nAs an alternative, we could use the [estimator](https://github.com/near/nearcore/blob/master/docs/architecture/gas/estimator.md) to calculate these fees. However, the estimator is known to be inaccurate and was ignored in favor of consistency with other fees in recent additions ([example](https://github.com/near/nearcore/issues/14160)).\n\n#### Gas Key Transactions\n\n`TransactionV1` reuses the fields of `TransactionV0`, except that it replaces the `nonce` field with an enum:\n\n```rust\nenum TransactionNonce {\n    Nonce { nonce: Nonce },\n    GasKeyNonce { nonce: Nonce, nonce_index: u16 }\n}\n```\n\nTransactions that specify a `GasKeyNonce` will use gas keys, and transactions that specify `Nonce` will use access keys.\n\nThe cost of a transaction (as set in [`calculate_tx_cost`](https://github.com/near/nearcore/blob/4fefbdf90c645506beb562ecf87e84f6387aef2f/runtime/runtime/src/config.rs#L330)) will be split into:\n\n* **Gas key cost**: `burnt_amount + remaining_gas_amount`. (This includes the gas burnt for converting this tx to a receipt and pre-paid gas for function calls). 
In case of `WithdrawFromGasKey`, this includes the amount to withdraw from the gas key as well.\n* **Deposit**: The cost of the transaction's actions.\n\nThe semantics for gas key transactions, at the moment of execution (conversion to receipts), are:\n\n* A gas key transaction is valid iff all of the following are true:\n  * The public key corresponds to a valid gas key;\n  * The gas key has enough balance to cover the **gas key cost**;\n  * The account balance can cover the **deposit**; this includes amounts included in `Transfer` actions;\n  * The nonce ID is less than the total number of nonces for the gas key, and the nonce is a valid nonce for that nonce\n    ID (per the same nonce checks as for access keys).\n* When converting the gas key transaction to a receipt, the same logic applies as for access key\n  transactions, except:\n  * The gas key cost is deducted from the gas key instead of the account. (Deposit cost is still deducted from the account.)\n  * The new nonce is written for the specific nonce ID under the gas key.\n\n##### What happens if the account cannot cover the deposit?\n\nTransaction processing is extended with a new failure case: `InvalidTxError::NotEnoughBalanceForDeposit`. A gas key transaction may be valid at submission time (the account has enough balance for the deposit), but by execution time the account may no longer have sufficient balance; for example, a contract call may drain it in between.\n\nIn this case, the transaction fails, but gas is still charged from the gas key balance. Otherwise, an attacker could intentionally drain their account balance between submission and execution to get free gas key transactions, reintroducing the spam issue this NEP intends to prevent. Note that with honest chunk producers, this new failure case can only occur for gas key transactions.\n\n*Note*: As of this writing, `nearcore` ignores failed transactions in processing. After `ProtocolFeature::InvalidTxGenerateOutcomes`, invalid transactions impact the outcome but not the state. 
Therefore, this NEP also introduces the change that failed transactions may update the state (specifically, to deduct gas fees from the gas key balance).\n\n#### Refunds from transactions originating from a gas key\n\nThis NEP suggests returning refunds to the same balance that paid for the transaction. Therefore [*balance refunds*](https://github.com/near/nearcore/blob/4fefbdf90c645506beb562ecf87e84f6387aef2f/core/primitives/src/receipt.rs#L555-L559) will be issued to the account balance, and [*gas refunds*](https://github.com/near/nearcore/blob/4fefbdf90c645506beb562ecf87e84f6387aef2f/core/primitives/src/receipt.rs#L596-L604) will be issued to the gas key.\n\nThis is compatible with the existing receipt refunds, which use `signer_id` to [refund access key allowances](https://github.com/near/nearcore/blob/4fefbdf90c645506beb562ecf87e84f6387aef2f/runtime/runtime/src/actions.rs#L418-L428).\n\nAs there is no overlap between gas keys and access keys, a gas refund can be issued to the gas key's balance (without creating ambiguity as to whether it should refund an access key's allowance instead).\n\n*Note*: We only allow `None` allowance for a `FunctionCallPermission` in gas keys.\n\nAdditionally, this is compatible with `refund_to`, which only [impacts](https://github.com/near/nearcore/blob/4fefbdf90c645506beb562ecf87e84f6387aef2f/core/primitives/src/receipt.rs#L764-L766) balance refunds.\n\n##### Refunds alternatives\n\n1. We could route balance and gas refunds to the account balance. This trades off user experience for simplicity of implementation and protocol: no specific changes to receipts would be needed; however, the user would have to \"top off\" the gas key balance more frequently.\n\n2. We could add `ActionV3` and `PromiseYieldV3`; however, this requires careful consideration of possible interactions with the `Delegate` action and `refund_to`.\n\n3. 
We could track the gas key for refunds by adding a `Receipt` variant; however, this seems a bit out of place (`Receipt` currently only tracks `predecessor_id`, whereas `ActionReceipt` tracks `signer_id` and `signer_public_key`, i.e., the access key).\n\n##### Interactions with VMContext\n\nVM execution currently [provides](https://github.com/near/nearcore/blob/136acc3a524575aae1300e26901e664adb521a6f/runtime/runtime/src/actions.rs#L74) the public key of the access key used to originate the transaction in `signer_account_pk`.\nWith gas keys, this may become the public key of a gas key or an access key.\n\nAs an alternative, we could provide more context to the VM, allowing contracts to distinguish gas keys and access keys. Doing so provides richer context to applications, trading off simplicity against future flexibility. Without concrete use cases or limitations, the additional VM changes seem unnecessary or best left for a future NEP.\n\n### Gas Key Pending Transaction Constraints\n\nUnlike access key transactions, gas key transactions are not limited in parallelism; they are only limited\nby the amount of gas these transactions consume. Specifically, for a gas key $G$, the sum of the **gas key costs** of\nthe transactions signed with $G$ in the pending transactions must not exceed the balance of $G$\n(according to the state that the pending transactions are based on).\n\nThis constraint should be good enough to cover cases of adding, funding, removing, or withdrawing from gas\nkeys as well. For adding and funding, we do need the execution to catch up before the newly available\nbalance can be used for pending transactions, which is suboptimal but correct.\nContract execution cannot create receipts that withdraw from gas keys, as we are intentionally not adding new promise-creating host functions for them; only gas key transactions can. 
This intention should be documented, for example where the host functions are defined, so they are not added accidentally in the future.\n\nFor deletion, balances in gas keys are not refunded, so although subsequent pending\ntransactions may end up failing, the gas committed to those transactions is already burnt, eliminating\nthe opportunity for the aforementioned attack.\n\n### Pending Transaction Queue\n\nWe now maintain a new data structure that stores the pending transactions, called the Pending\nTransaction Queue. Although it's conceptually a queue, it is stored as a collection indexed by\n(account ID, transaction type), and further indexed by the block hash. Furthermore, the pending\ntransaction queue is stored per shard, not as a single data structure. The contents of the queue\nare exactly the pending transactions according to the definition above.\n\nThe constraints are enforced as described above; we reiterate them precisely here. For each account, let\n$T_A$ be the set of transactions in the queue signed with any access key of this account, and let\n$T_G$\nbe the set of transactions in the queue signed with any gas key of this account. The following constraints\nmust hold true:\n\n* If the account has a contract deployed (as of the latest available state), then $|T_A| \\le P_{\\mathrm {max}}$.\n* If any transaction $t\\in T_A\\cup T_G$ contains a `DeployContract` action, then $T_A\\subseteq\\{t\\}$. In other words, deploying a contract cannot be done in parallel with any access key\n  transactions. 
The same applies to `UseGlobalContract`, `DeterministicStateInit`, and `Delegate` with inner deploy-like actions.\n* The sum of **total costs** of all transactions in $T_A$ plus the sum of **deposit costs** of all transactions in $T_G$ does not exceed the account balance.\n* For each gas key $g$, the sum of the **gas key costs** of all transactions signed with $g$ does not exceed the balance of $g$.\n* Importantly, these costs can be calculated from the gas price and the actions contained in the transaction (they do not depend on state or execution).\n\n![Transaction Flow Overview](assets/nep-0611/pending-tx-flow.svg)\n\nThe constraints are maintained at the time of chunk production: when producing a chunk, we only accept\ntransactions that would maintain these constraints. For limiting gas key transactions, we always query the\nbalance of the gas key from the executed state (as opposed to storing it in the queue).\n\nTo compute the pending transaction queues (one for each tracked shard) for a new block:\n\n* Start from the queue from the previous block;\n* Subtract the transactions that are included in each newly certified block;\n* Add new transactions included in this block.\n\nNote: The pending transaction queue is relevant for the SPICE project, where transactions may be pending for multiple blocks before execution. Without SPICE, as mentioned above, the pending transaction queue is considered empty because transactions are validated at inclusion time.\n\n### Trie storage\n\nTo facilitate the above design, gas keys must be stored in the trie. 
They are stored under the same `TrieKey` as a normal access key:\n\n```rust\n    AccessKey {\n        account_id: AccountId,\n        public_key: PublicKey,\n    } // uses col::ACCESS_KEY\n```\n\nThen, gas keys can be stored using newly added `AccessKeyPermission` variants:\n\n```rust\npub struct AccessKey {\n    pub nonce: Nonce,\n    pub permission: AccessKeyPermission,\n}\npub enum AccessKeyPermission {\n    // Already exists\n    FunctionCall(FunctionCallPermission),\n    FullAccess,\n\n    // Newly added\n    GasKeyFunctionCall(GasKeyInfo, FunctionCallPermission),\n    GasKeyFullAccess(GasKeyInfo),\n}\n```\n\nIndividual nonces are stored under the following `TrieKey` as `u64` values:\n\n```rust\n    GasKeyNonce {\n        account_id: AccountId,\n        public_key: PublicKey,\n        nonce_index: u16\n    } // also uses col::ACCESS_KEY\n```\n\n### RPC and `StateChanges` modifications\n\nGas keys are returned like other access keys by the `view_access_key_list` and `view_access_key` queries. They will appear with the newly introduced `AccessKeyPermission` enum variants that contain gas key info (i.e., balance and number of nonces).\n\nNew queries will be introduced for gas key nonces, such as `view_gas_key_nonces`.\n\nAs an alternative, we could gather and return all nonces as part of existing queries. However, this changes the return types of existing APIs and potentially makes them more expensive.\n\n### Impact to Existing Protocol without SPICE\n\nAs mentioned above, this NEP applies with or without SPICE. The impact on the existing protocol is purely\npositive:\n\n* Normal access keys are never restricted, as the set of pending transactions is always empty.\n* Gas keys allow programmatic transaction senders to more easily manage multiple nonces. 
Rather than\n  requiring multiple access keys to be created, they just need to create a single gas key.\n\nThe implementation of gas keys before SPICE also allows programmatic users to migrate to using gas keys,\npreparing them for when SPICE is launched.\n\n## Security Implications\n\n### DoS Attack Prevention\n\nThe primary security benefit of this NEP is preventing the DoS attack vector described in the Motivation section. By restricting access key parallelism for contract accounts and requiring gas keys to be pre-funded, we ensure that attackers cannot flood the blockchain with transactions that appear valid at inclusion time but become invalid during execution. All accepted transactions have committed resources that can be charged for gas consumption.\n\n### Balance Burning on Gas Key Deletion\n\nWhen a gas key is deleted, any remaining balance is burned rather than refunded. This prevents an attack where an adversary attempts to include non-fee paying transactions on chain by submitting many transactions yet reclaiming the balance on deletion.\n\n## Reference Implementation\n\n* [Initial implementation of gas key trie modifications](https://github.com/near/nearcore/pull/13687)\n* [Initial implementation of gas key actions](https://github.com/near/nearcore/pull/14532)\n* [Work in progress implementation of actions and refund receipts](https://github.com/near/nearcore/pull/14521)\n\n## Alternatives\n\nOne alternative to the gas key design is to introduce a `SenderReservedBalance`: under this model, the sender's account reserves a portion of its NEAR balance specifically for gas on future transactions.\n\nThe runtime would check that the sender has enough reserved funds to cover both the gas cost and the required minimum balance, thereby enforcing validity without requiring additional trie keys. While initially it seems simpler, this approach comes with significant and subtle drawbacks:\n\n* Contract execution can lead to deletion of access keys. 
This creates a problem for sharing a single reserved gas balance between multiple access keys.\n\n  The specific case to consider is when an `Account` does not have sufficient balance to pay for a transaction (i.e., `InvalidTxError::NotEnoughBalanceForDeposit`). This can occur if the contract transferred away its balance in a prior block. If the contract also programmatically deletes the signing key, the executor cannot distinguish between the `NotEnoughBalanceForDeposit` scenario (which should be charged gas) and an unauthorized transaction using an incorrect key (which should not be charged), based on the current state alone.\n\n* Modifies the semantics of the account's main balance, requiring checks that contract execution doesn't deplete the balance below `SenderReservedBalance`, potentially breaking existing assumptions about how balance operates.\n\nIn contrast, the gas key solution solves the deleted-key problem, does not modify the semantics of the existing account balance, and also provides an improved user experience for concurrent transaction issuance via `nonce_index`.\n\n### Storing the entire vector of nonces in a single trie key\n\nAs an alternative to having a separate trie key for each nonce, we could store all the nonces for a given gas key under a single trie key in a vector.\n\nWith the vector implementation, the number of trie reads will decrease from 2 to 1; however, the amount of data read from the value will be larger and depend on the number of nonces.\n\nThis means we would have to charge users using more nonces a higher fee not only during addition and deletion, but also when using the gas key to sign a transaction.\n\nThe larger amount of data read must also be included in state witnesses.\n\nAdditionally, in the future it may be easier to reason about processing transactions in parallel when the trie keys for different nonces do not overlap.\n\n### Alternate design for transaction parallelism 
(NEP-522)\n\n[NEP-522](https://github.com/near/NEPs/pull/522/files) describes an approach to transaction deduplication based on random nonces and tracking the state of recently seen transaction hashes in the trie.\n\nThis NEP takes an approach which does not require writing to the trie. Additionally, it does not require maintaining a data structure with historical blocks.\n\n### Alternate possibilities for actions\n\n* An earlier version of this NEP used separate `AddGasKey` and `DeleteGasKey` actions. In this alternative, it would have been possible to issue a refund of the remaining gas balance to the account on delete. However, using the existing key operations provides better UX, by allowing contracts to manage keys.\n\n### Alternative trie storage scheme\n\nAs an alternative, gas keys may be stored under a separate trie column using the following `TrieKey`:\n\n```rust\n    GasKeyNonce {\n        account_id: AccountId,\n        public_key: PublicKey,\n        nonce_index: Option<u16>\n    } // uses new column col::GAS_KEY\n```\n\nIn this scheme, the gas key data is stored with `nonce_index: None`, and the individual nonces are stored with `nonce_index: Some(index)`.\n\nThe advantage of this alternative is simplicity and a more additive implementation. The downside is that processing refunds and adding new access or gas keys would each incur an additional trie read (which also consumes space in the state witness).\n\nWhile we can consider adding keys relatively uncommon, gas refunds are fairly common. 
This change would require 3 instead of 2 trie accesses, which increases the impact of these receipts on state witness size by 50%.\n\n## Consequences\n\n### Positive\n\n* Enables implementing SPICE without the potential for users to create unbounded spam.\n* Enhanced nonce management enabling parallel transaction submission for sophisticated users.\n\n### Neutral\n\n* N/A\n\n### Negative\n\n* Adds complexity to the protocol and implementation.\n\n### Backwards Compatibility\n\n* Adding `TransactionV1` does not change the semantics for `TransactionV0`. Users can continue to submit their transactions as such.\n* Adding new keys to the trie may require modifications for downstream parsers and indexers.\n* Existing use cases may assume they can submit an unbounded number of in-flight transactions. As the changes only impact accounts with contracts, the impact of this is assumed to be low.\n* `VMContext` exposes the access key's public key to the contracts. Going forward, this could be a public key corresponding to either a gas key or an access key.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0616.md",
"content": "---\nNEP: 616\nTitle: Deterministic AccountIds\nAuthors: Arseny Mitin <mitinarseny@gmail.com>\nStatus: Approved\nDiscussionsTo: https://github.com/near/NEPs/issues/607#issuecomment-3067136865\nType: Protocol\nReplaces: 605\nVersion: 1.0.0\nCreated: 2025-07-24\nLastUpdated: 2025-10-06\n---\n\n## Summary\n\nWith [global contracts](https://github.com/near/NEPs/blob/master/neps/nep-0591.md)\nmaking code reuse cheaper, it's now more feasible to deploy multiple instances\nof the same contract code across different accounts. However, this creates\nchallenges around permissions, storage costs, refunds, and code verification.\n\nThis NEP addresses these concerns by introducing new\n[backwards-compatible](#backwards-compatibility) `AccountIds`, which are [deterministically derived](#deterministic-accountids) from contract\ninitialization code and state, enabling truly sharded contract designs.\n\n## Motivation\n\nWith global contracts introduced in [NEP-591](https://github.com/near/NEPs/blob/master/neps/nep-0591.md),\nit's now much cheaper to reuse the same code across different contracts in\nterms of storage costs. This opens the door to sharded contract designs where\neach instance needs to be deployed on a separate account.\n\nHowever, when designing sharded contracts, the following concerns naturally come up:\n\n* Who is allowed to perform different actions (deploy, call, delete, etc.) 
on\n  different sharded contract instances?\n* Who pays for storage costs of deploying new sharded contract instances?\n* How does that payment get refunded if the contract turns out to be already deployed?\n* How can a contract verify that the caller was executing the same code?\n\nThis NEP proposes enabling sharded contract designs by introducing\nfully backwards-compatible [deterministic AccountIds](#deterministic-accountids)\nand new protocol-level primitives required to deploy such contracts, addressing the\nconcerns above.\n\n## Specification\n\n### Deterministic AccountIds\n\nThe core idea is to have contract account ids **deterministically** derived\nfrom their initialization code and state.  \nLet's first define the [`StateInit`](https://github.com/mitinarseny/near-sdk-rs/blob/50eb6e68544b75288145f7f0d07068a482b9e3fc/near-sdk/src/state_init.rs#L15-L24)\nstruct:\n\n```rust\n/// Initialization state for a non-existing contract with a\n/// deterministic account id, according to NEP-616.\n#[derive(BorshSerialize, BorshDeserialize)]\npub enum StateInit {\n    V1(StateInitV1),\n}\n\n#[derive(BorshSerialize, BorshDeserialize)]\npub struct StateInitV1 {\n    /// Code to deploy\n    pub code: ContractCode,\n    /// Optional key/value pairs to populate storage with on first initialization\n    pub data: BTreeMap<Vec<u8>, Vec<u8>>,\n}\n\n/// Code to deploy for a non-existing contract.\n#[derive(BorshSerialize, BorshDeserialize)]\npub enum ContractCode {\n    /// Reference global contract's code by its hash\n    GlobalCodeHash(CryptoHash),\n    /// Reference global contract's code by its [`AccountId`]\n    GlobalAccountId(AccountId),\n}\n```\n\nNow, let's [derive](https://github.com/mitinarseny/near-sdk-rs/blob/50eb6e68544b75288145f7f0d07068a482b9e3fc/near-sdk/src/state_init.rs#L189-L203)\nthe `AccountId` deterministically as `\"0s\" .. 
hex(keccak256(state_init)[12..32])`:\n\n```rust\nimpl StateInit {\n    /// Derives [`AccountId`] deterministically, according to NEP-616.\n    pub fn derive_account_id(&self) -> AccountId {\n        let serialized = borsh::to_vec(self).unwrap_or_else(|_| unreachable!());\n        let hash = env::keccak256_array(&serialized);\n        format!(\"0s{}\", hex::encode(&hash[12..32]))\n            .parse()\n            .unwrap_or_else(|_| unreachable!())\n    }\n}\n```\n\nThis scheme is fully backwards-compatible with existing `AccountId` types. It\nlooks similar to existing implicit Eth addresses, but we intentionally use a\ndifferent prefix, so it's possible to distinguish between different kinds of\naccounts at runtime and apply [different rules](https://github.com/near/nearcore/blob/c5225bacaad88de3574656333bd312464a90fb6a/core/parameters/src/cost.rs#L651-L681)\nfor estimating gas and storage costs.\n\n#### Versioning\n\nIn order to provide for future protocol upgrades, [`StateInit`](#deterministic-accountids) should\nbe a versioned enum, while the version can be thought of as part of the account's initialization state.\nSo, different versions would result in different derived account ids.\n\n> Contract implementations would use specific `StateInit` versions for\n> [initialization](#stateinit-action) / [verification](#address-verification) of other contracts.\n> The information about which version to use can be hardcoded into the contract's code, stored\n> internally or forwarded in function call arguments, depending on its business logic.\n\n### Address Verification\n\nIf a contract needs to authenticate its predecessor and verify that it was executing\nthe [same code](#other-host-functions), then it can simply\n[verify](https://github.com/mitinarseny/near-sdk-rs/blob/50eb6e68544b75288145f7f0d07068a482b9e3fc/examples/sharded-fungible-token/wallet/src/lib.rs#L162-L167)\nthat it matches the `AccountId` [deterministically derived](#deterministic-accountids)\nfrom its expected initialization state:\n\n```rust\n// 
construct expected initialization state\nlet state_init = StateInit::V1(StateInitV1 {\n    code: env::current_contract_code(),\n    data: Self::init_state_for(params...),\n});\n\n// verify\nrequire!(env::predecessor_account_id() == state_init.derive_account_id(), \"not allowed\");\n```\n\nMoreover, contracts might store references to other global contracts, so that\nsuch verification logic [can](https://github.com/mitinarseny/near-sdk-rs/blob/d9950de34084a352bd58bbb0872e686fc7d40d04/examples/sharded-fungible-token/ft2sft/src/lib.rs#L131-L135)\nbe applied to all deterministic accounts, not just the predecessor, enabling\ninfinite composability between different deterministic and non-deterministic contracts.\n\n### `StateInit` action\n\nThe main issue with a sharded design is that there is no synchronous way for a\ncontract to check for the existence of another contract before calling its methods.\nSo, the only way to ensure its existence is to optionally deploy and initialize\nit before proceeding to actual function calls.\n\nHowever, since we don't know in advance whether the contract exists or needs to\nbe deployed first, we need to reserve some amount to cover storage costs of a\npotential deployment.\n\nLet's define a new [`StateInit` action](https://github.com/mitinarseny/near-sdk-rs/blob/50eb6e68544b75288145f7f0d07068a482b9e3fc/near-sdk/src/environment/env.rs#L1237-L1276)\nand a host function for it:\n\n```rust\n/// Optionally, deploy the contract with a deterministic account id and\n/// initialize its storage with [`StateInit`](crate::StateInit) according to\n/// [NEP-616](https://github.com/near/NEPs/pull/616).\n///\n/// Note that the `receiver_id` of the [`Promise`](crate::Promise) MUST be\n/// equal to [`state_init.derive_account_id()`](StateInit::derive_account_id).\n/// Otherwise, this promise will fail.\n///\n/// If non-zero, `amount` will be immediately subtracted from the current\n/// account's balance as a \"reserve\" for storage costs.\n///\n/// If the 
receiving contract is in `nonexist` or `uninit` state by the time\n/// this action gets executed:\n/// * The contract is deployed and initialized with [`state_init`](crate::StateInit)\n/// * [`state_init.storage_cost`](crate::StateInit::storage_cost) is\n///   subtracted from the attached `amount`. If the contract was in `uninit`\n///   state and had non-zero balance, then its balance is used first and only\n///   the missing part required for covering storage costs will be subtracted\n///   from the attached `amount`.\n/// * The contract state is marked as `active`.\n/// * The remaining amount is transferred back to the predecessor or\n///   [refund_to](promise_set_refund_to) if set.\n///\n/// If the contract was already in `active` state, then the full `amount` is\n/// refunded. See [`promise_set_refund_to`].\npub fn promise_batch_action_state_init(\n    promise_index: PromiseIndex,\n    state_init: &LazyStateInit, // may be already serialized\n    amount: NearToken,\n);\n```\n\n**Note** that [zero-balance account](https://github.com/near/NEPs/blob/master/neps/nep-0448.md)\nlimits do apply here: if the contract's [`StateInit`](#deterministic-accountids)\nrequires no more than 770 bytes of storage, then no NEAR tokens are required for\nstorage staking. Instead, storage costs will be compensated by higher gas costs.\n\n### Permissions\n\nSince the contract address is derived deterministically from its initialization\ncode and state, **anyone** can deploy such a contract via the\n[`StateInit` action](#stateinit-action) and pay for it.\n\nAll existing rules of the current account model apply to deterministic\naccounts, too. 
In particular, only the contract itself is allowed to perform\nthese actions on itself:\n\n* `CreateAccount`: for sub-accounts like `sub.0s123...`\n* `DeleteAccount`\n* `DeployGlobalContract`\n* `UseGlobalContract`\n* `DeployContract`, `AddKey`, `DeleteKey`: don't make sense and should be\n  discouraged by implementations, but are still allowed\n* `Stake`\n\n### Account State\n\nThere is nothing preventing accounts with deterministic ids from accepting\nincoming native transfers before they are deployed and initialized via the\n[`StateInit` action](#stateinit-action). This can be used to build some\nmechanics where one wants to sponsor the deployment without actually\ninitializing it.\n\nTo support this case, we say that each deterministic account can be in one of the\nfollowing states:\n\n* `nonexist`: there were no accepted receipts on this account, so it doesn't\n  have any data (or the contract was deleted). Initially all deterministic\n  accounts are in this state.\n* `uninit`: account has some data, which contains balance and meta info. An\n  account enters this state, for example, when it was in a `nonexist` state,\n  and another account sent a native transfer to it.\n* `active`: account has contract code deployed, persistent data and balance. An\n  account enters this state when it was in `nonexist` or `uninit` state and\n  there was an incoming [`StateInit` action](#stateinit-action). Note that,\n  to be able to deploy this account,\n  [`state_init.derive_account_id()`](#deterministic-accountids) must be equal\n  to its `AccountId`.\n\n### Refunds\n\nCurrently, the refund of the total attached deposit of a failed receipt is sent via a\nplain transfer to its predecessor. In a sharded design, the contract creating a\npromise is not necessarily the one who would like to receive a refund in case of\nfailure or unused amount (e.g. [`StateInit` action](#stateinit-action)). 
So, we\nneed a way to route these refunds to intended beneficiaries, which can be\nimplementation-specific and depend on the inner logic of a smart contract.\n\nUnfortunately, the Near protocol doesn't provide a way for contracts to execute\nsome code upon receiving a native transfer if it wasn't attached as a deposit to a\nfunction call. Moreover, when scheduling callbacks, the runtime doesn't\nguarantee that the refund for a failed receipt will come before callback\nexecution, since the refund is processed as a new independent receipt:\n\n```mermaid\n---\ntitle: \"Regular Refunds: can't forward in callback\"\n---\nsequenceDiagram\n    participant A as alice.near\n    participant B as bob.near\n    participant C as carol.near\n    A ->> B: Promise::new(\"bob.near\")<br/>.function_call(..., attached_deposit)\n\n    activate B\n    B -x+ C: Promise::new(\"carol.near\")<br/>.function_call(..., attached_deposit)\n    B -->> B: .callback()\n    deactivate B\n\n    Note over C: Failed\n\n    C -->>+ B: failed receipt\n    B -->>- A: refund???\n\n    C -)- B: refund: attached_deposit\n```\n\nTo remove this limitation, let's add an optional `refund_to` field to [`ActionReceipt`](https://github.com/near/nearcore/blob/685f92e3b9efafc966c9dafcb7815f172d4bb557/core/primitives/src/receipt.rs#L640-L659)\nand a host function for setting it:\n\n```rust\n/// Set a different [`AccountId`] instead of the current one to which refunds\n/// should go for all failed (e.g. [function_call](promise_batch_action_function_call_weight))\n/// or unused (e.g. 
[state_init](promise_batch_action_state_init)) deposits in\n/// the created receipt.\npub fn promise_set_refund_to(\n    promise_idx: PromiseIndex,\n    account_id: &AccountId,\n);\n```\n\nSo, when `refund_to` is used together with the [`StateInit` action](#stateinit-action), the flow can\nlook like:\n\n```mermaid\n---\ntitle: Custom Refunds via .refund_to(\"alice.near\")\n---\nsequenceDiagram\n    participant A as alice.near\n    participant B as bob.near\n    participant C as 0s123...\n    A ->> B: Promise::new(\"bob.near\")<br/>.function_call(..., attached_deposit)\n\n    activate B\n    B ->>+ C: Promise::new(\"0s123...\")<br/>.refund_to(\"alice.near\")<br/>.state_init(..., amount)<br/>.function_call(..., attached_deposit)\n    B -->> B: .callback()\n    deactivate B\n\n    alt uninit, success\n        activate B\n        C -->> B: success data receipt\n        deactivate B\n    else already init, success\n        activate B\n        C -->> B: success data receipt\n        deactivate B\n\n        C -) A: refund: state_init.amount\n    else failed\n        activate B\n        C -->> B: failed receipt\n        deactivate B\n\n        C -) A: refund: state_init.amount + attached_deposit\n    end\n    deactivate C\n```\n\nIt also makes sense to let smart contracts know at runtime which refund\naccount was set for the current receipt by its predecessor via `promise_set_refund_to()`,\nso that they can use it according to their business logic instead of duplicating\nit in `FunctionCall` arguments. 
So, we might need another host function for that:\n\n```rust\n/// Returns the [`AccountId`] that was set for the current receipt by its\n/// predecessor via [`promise_set_refund_to()`], or [`predecessor_account_id()`] otherwise.\npub fn refund_to_account_id() -> AccountId;\n```\n\nNote that setting `refund_to` to a non-existing named account id will result\nin burning the refunded NEAR tokens.\n> In fact, regular refunds to the predecessor also suffer from the same problem:\n> the predecessor is not always guaranteed to exist, as it could have been deleted.\n> You can [burn](https://github.com/near/nearcore/blob/685f92e3b9efafc966c9dafcb7815f172d4bb557/runtime/runtime/src/lib.rs#L901-L908)\n> NEAR today by using a `DeleteAccount` action with a non-existing `beneficiary_id`.\n\n### Other host functions\n\nWe would also need to implement a couple of other trivial host functions, which\nare required for contracts to function properly:\n\n```rust\n/// Returns the code deployed on the current contract being executed.\n/// For now, only references to globally deployed contracts are supported.\n///\n/// Note: the gas cost of this for globally deployed contracts should be\n/// relatively small, since it would only return `ContractCode::GlobalCodeHash(code_hash)`\n/// or `ContractCode::GlobalAccountId(account_id)`.\npub fn current_contract_code() -> ContractCode;\n\n/// If the current function is invoked by a callback, we can access the\n/// length of execution results of the promises that caused the callback.\n/// It can be used to prevent out-of-gas failures when reading a too-long\n/// execution result via [`promise_result()`].\npub fn promise_result_length(result_idx: u64) -> Result<u64, PromiseError>;\n\n// Currently constants, but better exposed as host functions,\n// so that changing the protocol config doesn't brick existing contracts.\npub fn storage_byte_cost() -> NearToken;\npub fn storage_num_bytes_account() -> StorageUsage;\npub fn storage_num_extra_bytes_record() -> 
StorageUsage;\n```\n\n## Reference Implementation\n\nHere is a [reference implementation](https://github.com/near/near-sdk-rs/pull/1376)\nof Sharded Fungible Token contracts with deterministic account ids, including \nFT<->sFT adaptors and optional governance functionality (useful for stablecoins),\nalong with the definition of [`StateInit`](#deterministic-accountids) structs and\nrequired [host functions](#stateinit-action).  \nAll changes are [backwards-compatible](#backwards-compatibility) with existing\ncontracts and account ids.\n\n## Security Implications\n\n### 3-stage Upgrades\n\n> This concern is valid for all global contracts introduced in [NEP-591](https://github.com/near/NEPs/blob/master/neps/nep-0591.md).\n> However, it's worth highlighting here for clarity.\n\nIf the upgrade of a globally deployed contract, which is used for some sharded\ninteractions, introduces a breaking change in its internal ABI (i.e. used for\nXCCs only between referencing contracts), then it can be done safely only via a\n3-stage upgrade process:\n\n1. Deploy a \"pre-upgrade\" version that still uses the old ABI but **can**\n  understand the new one.  \n  Wait for it to be fully distributed across shards.\n2. Deploy a \"do-upgrade\" version that starts to use the new ABI, but still **can**\n  understand the old one.  \n  Wait for it to be fully distributed across shards.\n3. Deploy a \"post-upgrade\" version that cleans up the legacy code and now\n  **only** uses and understands the new ABI.\n\nEven if the implementation always passes the contract version `v` as an argument\nfor all interactions between different instances of this global contract, it\nwould still not help the contract to understand how to handle the new\nversion, because the new code just hasn't reached its shard yet. 
The most\nthe contract can do upon receiving an \"unknown version\" is to panic, which can\nbreak others' assumptions about its standard behavior.\n\n## Alternatives\n\n### Sharded Contexts\n\nThere is an existing proposal described in\n[NEP-605](https://github.com/near/NEPs/pull/605) that follows a more OS-like\napproach by introducing the concept of non-root accounts and \"sharded contexts\".\n\nThe main advantage of the current proposal over \"sharded contexts\" is that there is\nno isolation between sharded and non-sharded contracts: any deterministic\naccount is allowed to freely interact with any non-deterministic one and vice\nversa. As a result, deterministic accounts enable truly sharded contract\ndesigns with infinite composability, while the \"sharded contexts\" approach suffers\nfrom high complexity, less composability, and inevitable bottlenecks for routing\ncalls between root/non-root accounts.\n\nHowever, non-root accounts are designed to live on the same shard as their\nroot accounts, which is a step towards [synchronous execution](https://github.com/near/NEPs/issues/497)\nbut only within the boundaries of a single root account. At the same time, this\ncan be seen as a limitation for future scaling and dynamic resharding. With the\ndeterministic accounts approach, this might be achievable via a\n[`closeTo`](#co-location) analog.\n\n### Prior Art\n\nThis proposal is highly inspired by [deterministic addresses](https://docs.ton.org/v3/documentation/smart-contracts/addresses#account-id)\nin the TON blockchain. 
However, we still needed to adapt their definitions to Near's\nspecifics:\n\n* Unlike TVM, Near doesn't support [message bouncing](https://docs.ton.org/v3/documentation/smart-contracts/transaction-fees/forward-fees#message-bouncing),\n  but instead allows scheduling callbacks, which gives more control over\n  handling of chained cross-contract calls.\n* Near doesn't provide a way for contracts to execute some code upon receiving\n  a native transfer if it wasn't attached as a deposit to a function call. See [refunds](#refunds).\n* TVM doesn't differentiate between gas and attached deposit, while in Near\n  they are not coupled, which removes some complexities.\n\n## Future possibilities\n\n### Sharded DeFi protocols\n\nHere is how deterministic accounts can be used to build a [DEX](https://docs.dedust.io/docs/swaps)\nor a [Lending Protocol](https://github.com/evaafi/contracts/blob/40a0e8bb32f88df8e09def536371192a824d1c3d/diagrams/liquidate-for-jetton.svg)\nwith a sharded design in mind.\n\n### Wallet Extensions\n\nIf we allow deterministic accounts to handle external messages by verifying\nsignatures and tracking nonces by themselves instead of relying on\n[access keys](https://nomicon.io/DataStructures/AccessKey), it would open\nthe door to upgradable [wallet implementations](https://docs.ton.org/v3/documentation/smart-contracts/contracts-specs/wallet-contracts)\nwith [arbitrary plugins](https://docs.ton.org/v3/documentation/smart-contracts/contracts-specs/wallet-contracts/#wallet-v5)\nsuch as 2FA, social recovery, gasless transactions, and [more](https://github.com/ton-blockchain/wallet-contract-v5/blob/main/README.md#suggested-extensions).  
\nThis can be seen as an alternative to [contract namespaces](https://gov.near.org/t/proposal-account-extensions-contract-namespaces/34227).\n\n### Co-location\n\nIt might be possible to implement some analog of [`closeTo`](https://docs.ton.org/v3/documentation/smart-contracts/tolk/tolk-vs-func/create-message#sharding-deploying-close-to-another-contract)\nfor deploying \"close to\" another contract based on shard depth (i.e. fixed\nprefix length).\n\n### Storage Rent vs Fixed Storage Staking\n\nWe can move from fixed storage staking fees to a more sustainable storage rent\napproach by adding a new [account state](#account-state):\n\n> * `frozen`: account cannot perform any operations, this state contains only\n> two hashes of the previous state (code and state respectively). When an\n> account's storage charge exceeds its balance, it goes into this state. To\n> unfreeze it, you can send an internal message with state_init and code which\n> store the hashes described earlier and some NEAR. It can be difficult to\n> recover it, so you should not allow this situation. There is a project to\n> unfreeze the account, which you can find [here](https://unfreezer.ton.org).\n\n## Consequences\n\n### Positive\n\n* Deterministic AccountIds enable truly **sharded** contract designs\n  with minimalistic implementations without sacrificing composability.\n\n* Proposed [deterministic accounts](#deterministic-accountids) are fully\n  [backwards-compatible](#backwards-compatibility) with existing ones.\n\n* Knowledge of `AccountId` (e.g. 
predecessor) combined with its expected\n  initialization state is enough to [verify](#address-verification) it was\n  executing the exact code.\n\n### Neutral\n\n* Deterministic accounts are allowed to freely interact directly with\n  non-deterministic ones and vice versa.\n\n* Existing contracts can be upgraded to start using the\n  [`StateInit` action](#stateinit-action) and, thus, gain the power to\n  automatically deploy non-existing deterministic accounts.\n\n* No step towards [synchronous execution](https://github.com/near/NEPs/issues/497).\n  Though latency might be improved if we add support for a [`closeTo`](#co-location) analog.\n\n* Indexers for sharded contracts might need to be [stateful](#stateful-indexers) in order\n  to keep track of newly created contracts.\n\n### Negative\n\n\\-\n\n### Backwards Compatibility\n\nProposed [deterministic AccountIds](#deterministic-accountids) are fully\n**backwards-compatible** with existing ones.\n\nDeterministic accounts are allowed to freely interact directly with\nnon-deterministic ones and vice versa.\n\nExisting contracts can be upgraded to start using the\n[`StateInit` action](#stateinit-action) and, thus, gain the power to\nautomatically deploy non-existing deterministic accounts.\n\n### Stateful Indexers\n\n> This concern is valid for any sharded contract design.\n> However, it's worth highlighting here for clarity.\n\nFor some sharded contracts with deterministic account ids, it doesn't make sense\nto emit events (e.g. `sft_transfer`) as it simply wouldn't bring any value for\nindexers. Even if they do emit these events, indexers still **need to track**\noutgoing [`StateInit` actions](#stateinit-action) to not-yet-existing sharded\ncontracts, which will emit these events in the future.  \n\nHowever, to properly track these cross-contract calls they would need to parse\nfunction names (e.g. 
[`sft_transfer()`](https://github.com/mitinarseny/near-sdk-rs/blob/50eb6e68544b75288145f7f0d07068a482b9e3fc/near-contract-standards/src/sharded_fungible_token/wallet.rs#L26-L55))\nand their args, while this information combined with the receipt status already\ncontains all the info necessary for indexing.\n\n## Changelog\n\n### 1.0.0 - Initial Version\n\n#### Benefits\n\n* Deterministic AccountIds enable truly **sharded** contract designs\n  with minimalistic implementations without sacrificing composability.\n* Proposed [deterministic accounts](#deterministic-accountids) are fully\n  [backwards-compatible](#backwards-compatibility) with existing ones.\n* Knowledge of `AccountId` (e.g. predecessor) combined with its expected\n  initialization state is enough to [verify](#address-verification) it was\n  executing the exact code.\n\n#### Concerns\n\n|   # | Concern | Resolution | Status |\n| --: | :------ | :--------- | -----: |\n|   1 | Deterministic AccountIds `0s123...` are not human-readable and can be seen as a downside when compared to developer-friendly Named AccountIds. | Human-readable account names are mostly appealing for UX and can be implemented via NFTs (similar to [ENS](https://ens.domains)). | Resolved |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0621.md",
    "content": "---\nNEP: 621\nTitle: Vault NEP\nAuthors: JY Chew <edwardchew97@gmail.com>, Lee Hoe Mun <leehoemun@gmail.com>, Wade <wz.lim.00@gmail.com>, Steve Kok <kokchoquan@gmail.com>\nStatus: Approved\nDiscussionsTo: https://github.com/nearprotocol/neps/pull/0000\nType: Contract Standard\nRequires: 141\nVersion: 1.0.0\nCreated: 2025-08-08\nLastUpdated: 2025-08-08\n---\n\n## Summary\n\nThis NEP proposes a standardized interface for implementing vault contracts on the NEAR Protocol, drawing inspiration from the ERC-4626 standard widely used on Ethereum. A vault contract allows users to deposit an underlying fungible token (FT) into the vault, in exchange for which the vault issues shares that represent proportional ownership of the vault’s assets.\n\nThe underlying asset could be any NEP-141 compliant fungible token, such as a stablecoin or yield-bearing token. When deposited, the vault mints new shares to the depositor based on the current exchange rate between the vault’s total assets and total shares in circulation. Conversely, when a user redeems shares, the vault burns those shares and returns the equivalent amount of the underlying asset to the user.\n\nThe issued shares themselves are also NEP-141 compliant fungible tokens, enabling them to be freely transferred between accounts or traded on decentralized exchanges (DEXs). 
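The proportional accounting described above (mint shares at the current assets/shares exchange rate, burn shares for the equivalent assets) can be made concrete with a small sketch. These are hypothetical standalone helpers (`shares_for_deposit`, `assets_for_redeem`), not part of the proposed interface; a real vault must additionally fix a rounding direction and an empty-vault bootstrap policy:

```rust
/// Shares minted for a deposit, pro rata to current holdings:
/// shares = assets * total_shares / total_assets.
/// Bootstrap assumption (illustrative): an empty vault mints shares 1:1.
/// Unchecked multiplication for brevity; real code should use wider math.
fn shares_for_deposit(assets: u128, total_assets: u128, total_shares: u128) -> u128 {
    if total_shares == 0 || total_assets == 0 {
        assets
    } else {
        assets * total_shares / total_assets
    }
}

/// Assets returned for redeeming shares, pro rata:
/// assets = shares * total_assets / total_shares.
fn assets_for_redeem(shares: u128, total_assets: u128, total_shares: u128) -> u128 {
    if total_shares == 0 {
        0
    } else {
        shares * total_assets / total_shares
    }
}

fn main() {
    // A vault holding 1,000 tokens against 500 outstanding shares:
    // depositing 100 tokens mints 50 shares, ...
    assert_eq!(shares_for_deposit(100, 1_000, 500), 50);
    // ... and redeeming those 50 shares returns 100 tokens.
    assert_eq!(assets_for_redeem(50, 1_000, 500), 100);
}
```

Production implementations additionally round in the vault's favor and guard the first deposit against exchange-rate (donation/inflation) manipulation, which this sketch deliberately omits.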
This compatibility allows vault shares to be integrated into broader DeFi ecosystems, enabling use cases such as collateral in lending protocols, liquidity provision, or composable yield strategies.\n\nBy standardizing the vault interface, this NEP aims to improve interoperability, reduce integration costs, and encourage consistent, secure practices for vault implementation across the NEAR ecosystem.\n\n## Motivation\n\nVault contracts are a fundamental building block in modern DeFi, enabling users to pool assets for yield generation, liquidity provision, or other strategies while receiving tokenized shares that represent their proportional ownership. However, without a standardized interface, each vault implementation on NEAR may expose different method names, return formats, and accounting mechanisms, creating unnecessary friction for developers, integrators, and auditors.\n\nA consistent vault standard, inspired by ERC-4626, would provide multiple benefits:\n\n-   Interoperability – Wallets, DEXs, lending protocols, and other DeFi applications can integrate with any compliant vault without custom logic for each implementation.\n\n-   Reduced Integration Costs – Developers and projects save time and resources by building once against the standard interface rather than creating one-off integrations.\n\n-   Ecosystem Growth – Standardized vaults make it easier for new projects to leverage existing liquidity and composability, accelerating adoption across the NEAR DeFi ecosystem.\n\nBy introducing this NEP, we aim to align vault design on NEAR with proven best practices from other blockchain ecosystems while optimizing for the unique features and requirements of NEP-141 fungible tokens.\n\n## Specification\n\n### Contract Interface\n\nThe contract should implement the `VaultCore` trait.\n\n```rust\n/// Specification for a fungible token vault that issues NEP-141 compliant shares.\n///\n/// A FungibleTokenVault accepts deposits of an underlying NEP-141 compliant 
asset\n/// and issues NEP-141 compliant \"shares\" in return. These shares can be transferred\n/// and traded like any other NEP-141 token.\n///\n/// This trait extends:\n/// - [`FungibleTokenCore`] to provide NEP-141 functionality for shares.\n/// - [`FungibleTokenReceiver`] to receive the underlying NEP-141 assets\npub trait VaultCore:\n    FungibleTokenCore + FungibleTokenReceiver\n{\n    // ----------------------------\n    // Asset Information\n    // ----------------------------\n\n    /// Returns the [`AccountId`] of the underlying asset token contract.\n    ///\n    /// ERC4626 original function name: asset()\n    ///\n    /// The underlying asset **must** be NEP-141 compliant.\n    /// Implementations should store this as an immutable configuration value.\n    fn asset_contract_id(&self) -> AccountId;\n\n    /// Returns the total amount of underlying assets represented by all shares in existence.\n    ///\n    /// ERC4626 original function name: totalAssets()\n    ///\n    /// **Important:**\n    /// - Represents the vault's *total managed value*, not just assets held in the contract.\n    /// - If assets are staked, lent, swapped, or deployed elsewhere, this should return\n    ///   an **estimated total equivalent value**.\n    /// - Must be denominated in the same units as [`Self::asset_contract_id`].\n    /// - If the vault applies any deposit or withdrawal fees, `total_asset_amount` must reflect the net value of\n    ///   all shares, i.e. 
the aggregate worth of the vault’s holdings after deducting all applicable fees.\n    fn total_asset_amount(&self) -> U128;\n\n    // ----------------------------\n    // Conversion Helpers\n    // ----------------------------\n\n    /// Converts an amount of underlying assets to the equivalent number of shares.\n    ///\n    /// ERC4626 original function name: convertToShares(uint256 assets)\n    ///\n    /// This is a **purely view-only estimation** that:\n    /// - Does not update state.\n    /// - Ignores user-specific constraints such as deposit limits or fees.\n    ///\n    /// # Panics / Fails\n    /// - The `convert_to_shares` method must never panic.\n    ///   It should always return the corresponding amount of shares for the given assets,\n    ///   or `0` if conversion is not possible.\n    ///\n    /// See also: [`Self::preview_deposit_shares`] for a version that accounts for limits and fees.\n    fn convert_to_shares(&self, asset_amount: U128) -> U128;\n\n    /// Converts an amount of shares to the equivalent amount of underlying assets.\n    ///\n    /// ERC4626 original function name: convertToAssets(uint256 shares)\n    ///\n    /// This is a **purely view-only estimation** that:\n    /// - Does not update state.\n    /// - Ignores withdrawal restrictions, fees, or penalties.\n    ///\n    /// # Panics / Fails\n    /// - The `convert_to_asset_amount` method must never panic.\n    ///   It should always return the corresponding amount of assets for the given shares,\n    ///   or `0` if conversion is not possible.\n    ///\n    /// See also: [`Self::preview_redeem_amount`] for a version that accounts for real-world constraints.\n    fn convert_to_asset_amount(&self, shares: U128) -> U128;\n\n    // ----------------------------\n    // Deposit / Redemption Limits\n    // ----------------------------\n\n    /// Returns the maximum amount of underlying assets that `receiver_id` can deposit.\n    ///\n    /// ERC4626 original function name: 
maxDeposit(address receiver)\n    ///\n    /// This may depend on:\n    /// - Vault capacity.\n    /// - User-specific limits.\n    /// - Current on-chain conditions.\n    ///\n    /// # Panics / Fails\n    /// - The `max_deposit_amount` method must never panic.\n    ///   It should return the maximum amount of assets that can be deposited for the given account,\n    ///   or `0` if deposits are not currently allowed.\n    ///\n    /// Implementations should return `U128::MAX` to signal \"unlimited\" deposits.\n    fn max_deposit_amount(&self, receiver_id: AccountId) -> U128;\n\n    /// Simulates depositing exactly `assets` into the vault and returns the number of shares\n    /// that would be minted to the receiver.\n    ///\n    /// ERC4626 original function name: previewDeposit(uint256 assets)\n    ///\n    /// Differs from [`Self::convert_to_shares`] by accounting for:\n    /// - Per-user deposit limits.\n    /// - Protocol-specific deposit fees.\n    ///\n    /// # Panics / Fails\n    /// - The `preview_deposit_shares` method must never panic.\n    ///   It should return the number of shares that would be minted for the given deposit amount,\n    ///   or `0` if deposits are not currently allowed.\n    ///\n    fn preview_deposit_shares(&self, asset_amount: U128) -> U128;\n\n    /// Returns the maximum number of shares that `receiver_id` can mint.\n    ///\n    /// ERC4626 original function name: maxMint(address receiver)\n    ///\n    /// This may depend on:\n    /// - Vault capacity.\n    /// - User-specific limits.\n    /// - Current on-chain conditions.\n    ///\n    /// # Panics / Fails\n    /// - The `max_mint_shares` method must never panic.\n    ///   It should return the maximum number of shares that can be minted for the given account,\n    ///   or `0` if minting is not currently allowed.\n    ///\n    /// Implementations should return `U128::MAX` to signal \"unlimited\" minting.\n    fn max_mint_shares(&self, receiver_id: AccountId) -> U128;\n\n    
/// Simulates minting exactly `shares` and returns the amount of underlying assets\n    /// that would be required.\n    ///\n    /// ERC4626 original function name: previewMint(uint256 shares)\n    ///\n    /// Differs from [`Self::convert_to_asset_amount`] by accounting for:\n    /// - Per-user minting limits.\n    /// - Protocol-specific minting fees.\n    ///\n    /// Useful for frontends to estimate the cost of minting shares.\n    ///\n    /// # Panics / Fails\n    /// - The `preview_asset_amount_required_to_mint_shares` method must never panic.\n    ///   It should return the amount of underlying assets required to mint the given shares,\n    ///   or `0` if minting is not currently allowed.\n    fn preview_asset_amount_required_to_mint_shares(&self, shares: U128) -> U128;\n\n    /// Returns the maximum number of shares that `owner_id` can redeem.\n    ///\n    /// ERC4626 original function name: maxRedeem(address owner)\n    ///\n    /// This may depend on:\n    /// - The owner's current share balance.\n    /// - Vault withdrawal restrictions.\n    /// - Lock-up periods or cooldowns.\n    ///\n    /// # Panics / Fails\n    /// - The `max_redeem_shares` method must never panic.\n    ///   It should return the maximum number of shares that can be redeemed by the given account,\n    ///   or `0` if redemption is not currently allowed.\n    ///\n    /// Implementations should return `0` if redemptions are currently disabled for the owner.\n    fn max_redeem_shares(&self, owner_id: AccountId) -> U128;\n\n    /// Returns the maximum amount of assets that `owner_id` can withdraw.\n    ///\n    /// ERC4626 original function name: maxWithdraw(address owner)\n    ///\n    /// This may depend on:\n    /// - The owner's share balance.\n    /// - Current vault liquidity.\n    /// - Withdrawal limits or cooldowns.\n    ///\n    /// # Panics / Fails\n    /// - The `max_withdraw_amount` method must never panic.\n    ///   It should return the maximum amount of assets that can be withdrawn by the given 
account,\n    ///   or `0` if withdrawals are not currently allowed.\n    ///\n    /// Implementations should return `0` if redemptions are currently disabled for the owner.\n    fn max_withdraw_amount(&self, owner_id: AccountId) -> U128;\n\n    // ----------------------------\n    // Redemption Operations\n    // ----------------------------\n\n    /// Redeems `shares` from the caller in exchange for the equivalent amount of underlying assets.\n    ///\n    /// ERC4626 original function name: redeem(uint256 shares, address receiver, address owner)\n    ///\n    /// - If `receiver_id` is `None`, defaults to sending assets to the caller.\n    /// - Burns the caller's shares.\n    /// - Returns the exact amount of assets redeemed.\n    ///\n    /// # Slippage\n    /// - To forcefully redeem shares without accounting for slippage, the user should set `min_amount_out` to `0`.\n    /// - To protect against slippage, the user should specify a reasonable `min_amount_out`.\n    ///\n    /// # Panics / Fails\n    /// - If the caller's share balance is insufficient.\n    /// - If withdrawal limits prevent the redemption.\n    ///\n    /// See also: [`Self::preview_redeem_amount`].\n    fn redeem(&mut self, shares: U128, min_amount_out: U128, receiver_id: Option<AccountId>) -> PromiseOrValue<U128>;\n\n    /// Simulates redeeming `shares` into assets without executing the redemption.\n    ///\n    /// ERC4626 original function name: previewRedeem(uint256 shares)\n    ///\n    /// Differs from [`Self::convert_to_asset_amount`] by factoring in:\n    /// - The caller's current share balance.\n    /// - Vault withdrawal limits.\n    /// - Applicable fees or penalties.\n    ///\n    /// # Panics / Fails\n    /// - The `preview_redeem_amount` method must never panic.\n    ///   It should return the amount of assets that would be received for redeeming the given shares,\n    ///   or `0` if redemption is not currently allowed.\n    ///\n    /// Useful for frontends to estimate 
redemption outcomes.\n    fn preview_redeem_amount(&self, shares: U128) -> U128;\n\n    /// Withdraws exactly `assets` worth of underlying tokens from the vault.\n    ///\n    /// ERC4626 original function name: withdraw(uint256 assets, address receiver, address owner)\n    ///\n    /// - If `receiver_id` is `None`, defaults to sending assets to the caller.\n    /// - Burns the required number of shares to fulfill the withdrawal.\n    ///\n    /// # Slippage\n    /// - To forcefully withdraw assets without accounting for slippage, the user can omit the `max_shares_deducted` parameter.\n    /// - To protect against slippage, the user may specify a `max_shares_deducted` parameter.\n    ///\n    /// # Panics / Fails\n    /// - If the caller's share balance cannot cover the withdrawal.\n    /// - If withdrawal limits or fees prevent the withdrawal.\n    ///\n    /// # Events\n    /// - The vault **must** emit a `VaultWithdraw` event upon a successful withdrawal.\n    /// - Since transactions on NEAR are non-atomic, the vault contract should follow one of these approaches:\n    ///   1. Emit a `VaultWithdraw` event when the fee is deducted, and if the withdrawal later fails, emit a `VaultDeposit` event\n    ///      to revert the funds; **or**\n    ///   2. 
Emit a `VaultWithdraw` event **only** if the withdrawal succeeds.\n    /// - In the reference implementation, the contract emits an `FtBurn` event when the user calls the `withdraw` method.\n    ///   If the withdrawal is successful, a `VaultWithdraw` event is emitted in the `resolve_withdraw` callback.\n    ///   If the withdrawal fails, an `FtMint` event is emitted instead to restore the user’s balance.\n    ///\n    /// See also: [`Self::preview_shares_deducted_for_withdraw`].\n    fn withdraw(&mut self, asset_amount: U128, max_shares_deducted: Option<U128>, receiver_id: Option<AccountId>) -> PromiseOrValue<U128>;\n\n    /// Simulates withdrawing exactly `assets` worth of tokens without executing.\n    ///\n    /// ERC4626 original function name: previewWithdraw(uint256 assets)\n    ///\n    /// Differs from [`Self::convert_to_shares`] by factoring in:\n    /// - The caller's current share balance.\n    /// - Vault withdrawal limits.\n    /// - Applicable fees or penalties.\n    ///\n    /// # Panics / Fails\n    /// - The `preview_shares_deducted_for_withdraw` method must never panic.\n    ///   It should return the number of shares required to withdraw the given amount of assets,\n    ///   or `0` if withdrawals are not currently allowed.\n    ///\n    /// Useful for frontends to preview required shares for a given withdrawal.\n    fn preview_shares_deducted_for_withdraw(&self, asset_amount: U128) -> U128;\n}\n```\n\n### Deposit and Mint\n\n```rust\npub trait FungibleTokenReceiver {\n    /// # Events\n    /// - The vault **must** emit a `VaultDeposit` event upon a successful deposit.\n    /// - The vault should also emit an `FtMint` event for the shares it issues.\n    fn ft_on_transfer(&mut self, sender_id: AccountId, amount: U128, msg: String) -> PromiseOrValue<U128>;\n}\n```\n\n-   According to the NEP-141 standard, fungible tokens transferred to a contract via `ft_transfer_call` invoke the `ft_on_transfer` callback on the receiving contract.\n-   In the context of ERC-4626, both the **`deposit`** 
and **`mint`** functions are implemented through this entrypoint on the vault contract.\n-   While NEP-141 does not prescribe how the `msg` parameter should be structured, our suggested convention is to pass a JSON-encoded message with the following schema:\n\n```rust\npub struct DepositMessage {\n    /// The minimum number of shares that must be received for this deposit to succeed. (Slippage control)\n    min_shares: Option<U128>,\n    /// The maximum number of shares that can be minted, used for `mint` operations.\n    max_shares: Option<U128>,\n    /// The account that should receive the minted shares.\n    receiver_id: Option<AccountId>,\n    /// Optional memo for logging or off-chain indexing.\n    memo: Option<String>,\n    /// If `true`, the transfer is treated as a donation and no shares are minted.\n    donate: Option<bool>,\n}\n```\n\n-   Implementers are free to define their own schema for `msg`, but the vault must correctly handle `ft_on_transfer` according to NEP-141:\n    -   Return `PromiseOrValue::Value(0)` if all tokens were accepted.\n    -   Return a non-zero `U128` to indicate the number of tokens to refund.\n\n### Events\n\n```rust\n/// Event emitted when a deposit is received by the vault.\n///\n/// This follows the proposed NEP vault standard, referencing the ERC-4626 pattern.\n/// Upon receiving assets, the vault mints and issues shares to the `owner_id`.\npub struct VaultDeposit {\n    /// The account that sends the deposit (payer of the assets).\n    pub sender_id: AccountId,\n\n    /// The account that receives the minted shares.\n    pub owner_id: AccountId,\n\n    /// Amount of underlying assets deposited into the vault.\n    pub asset_amount: U128,\n\n    /// Amount of shares minted and issued to `owner_id`.\n    pub shares: U128,\n\n    /// Optional memo provided by the sender for off-chain use.\n    pub memo: Option<String>,\n}\n\n/// Event emitted when shares are redeemed from the vault.\n///\n/// Upon redemption, the vault burns the 
shares from `owner_id`\n/// and transfers the equivalent assets to `receiver_id`.\npub struct VaultWithdraw {\n    /// The account that owns the shares being redeemed (burned).\n    pub owner_id: AccountId,\n\n    /// The account receiving the underlying assets.\n    pub receiver_id: AccountId,\n\n    /// Amount of shares redeemed (burned from the vault).\n    pub shares: U128,\n\n    /// Amount of underlying assets withdrawn from the vault.\n    pub asset_amount: U128,\n\n    /// Optional memo provided by the redeemer for off-chain use.\n    pub memo: Option<String>,\n}\n```\n\n## Reference Implementation\n\n[Example implementation](https://github.com/Meteor-Wallet/tokenized-vault-nep-implementation).\n\n## Security Implications\n\n### Exchange Rate Manipulation\n\nVaults allow dynamic exchange rates between shares and assets, calculated by dividing total vault assets by total issued shares. If the vault has a permissionless donation mechanism, this creates a vulnerability to inflation attacks, where attackers manipulate the rate by donating assets to inflate share values, potentially stealing funds from subsequent depositors. Vault deployers can protect against this attack by making an initial deposit of a non-trivial amount of the asset, such that price manipulation becomes infeasible. Vaults can also implement a virtual decimal offset for issued shares, making inflation attacks significantly less feasible; this approach is demonstrated in the reference implementation.\n\n### Cross-contract Calls\n\nRedeem and withdraw functions perform cross-contract calls to transfer fungible tokens, creating opportunities for reentrancy attacks and state manipulation during asynchronous execution. Vaults should implement reentrancy protection through careful state management, callback security, and rollback mechanisms for failed operations.\n\n### Rounding Direction Security\n\nVault calculations must consistently round in favor of the vault to prevent exploitation. 
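As an illustration, the two rounding directions can be sketched with integer arithmetic (the helper functions and example values below are hypothetical, not part of the standard):\n\n```rust\n/// Shares minted for a deposit of `assets`: round DOWN, so the depositor\n/// never receives shares worth more than the assets deposited.\nfn shares_for_deposit(assets: u128, total_assets: u128, total_shares: u128) -> u128 {\n    // Integer division floors the result.\n    assets * total_shares / total_assets\n}\n\n/// Shares burned to withdraw exactly `assets`: round UP, so the vault\n/// never releases assets worth more than the shares burned.\nfn shares_for_withdraw(assets: u128, total_assets: u128, total_shares: u128) -> u128 {\n    (assets * total_shares).div_ceil(total_assets)\n}\n\nfn main() {\n    // A vault holding 1_000 assets backed by 300 shares.\n    assert_eq!(shares_for_deposit(7, 1_000, 300), 2); // 2.1 shares rounds down\n    assert_eq!(shares_for_withdraw(7, 1_000, 300), 3); // 2.1 shares rounds up\n    // A production implementation would use checked or widening math here.\n}\n```\n\n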
When issuing shares for deposits or transferring assets for redemptions, round down; when calculating required shares or assets for specific amounts, round up. This asymmetric rounding prevents users from extracting value through repeated micro-transactions that exploit rounding errors and protects existing shareholders from value dilution.\n\n### Oracle and External Price Dependencies\n\nVaults that rely on external price oracles or cross-contract calls for exchange rate updates face additional security risks in Near's asynchronous environment. Oracle updates create temporal windows where vaults operate with stale pricing data, potentially allowing exploitation. Implementations should include staleness checks, prevent operations during oracle updates, implement proper callback security, and consider fallback pricing mechanisms for oracle failures.\n\n## Alternatives\n\n1. **Custom Per-Protocol Vault Implementations**\n    - While possible, this leads to fragmentation, increases integration costs, and reduces composability between protocols.\n2. **Direct ERC-4626 Clone Without NEAR Adjustments**\n    - Rejected because NEAR’s asynchronous execution model makes a one-to-one ERC-4626 clone inefficient. This proposal integrates share and vault logic in a single contract to avoid unnecessary cross-contract calls. Key differences from ERC-4626 include:\n        - **Deposit & Mint**:\n          Unlike ERC-4626, deposits and mints on NEAR must be handled through the\n          `ft_on_transfer` callback (as defined by NEP-141), rather than direct method calls.\n        - **Async Execution**:\n          Since NEAR contracts execute asynchronously, results from `preview*` and `max*`\n          view methods cannot be guaranteed to match the actual values during transaction\n          execution. 
Cross-contract calls may be processed in later blocks, leading to\n          differences between simulated values and actual outcomes.\n\n## Future possibilities\n\n### NEP-245 Multi Token Support\n\nFuture vault implementations could extend this standard to support NEP-245 Multi Token contracts as the underlying asset. We have also created a [minimal implementation](https://github.com/Meteor-Wallet/tokenized-vault-mt-nep-implementation).\n\n### Multi-Asset Vault Extensions\n\nFuture extensions could allow vaults to accept multiple assets for deposit and withdrawal. This would enable the standardization of LP vaults.\n\n### Asynchronous Vault Operations\n\nFuture vault standards could introduce asynchronous deposit and withdrawal patterns through `request_deposit` and `request_withdraw` functions. This would enable integration with cross-chain protocols and real-world asset protocols.\n\n## Consequences\n\n### Positive\n\n-   Enables a unified, predictable vault interface.\n\n-   Simplifies integration for wallets, DEXs, and aggregators.\n\n-   Improves security through consistent design and accounting.\n\n-   Encourages reuse of tooling, libraries, and audits.\n\n### Neutral\n\n-   Standard defines interface, not yield strategy — implementation remains flexible.\n\n-   Protocols may implement only relevant parts of the interface initially.\n\n### Negative\n\n-   Migration overhead for existing vault implementations to become compliant.\n\n### Backwards Compatibility\n\n-   No breaking changes to NEP-141 itself, but existing vault-like contracts that don’t conform will need to add or rename methods to comply.\n\n-   Share tokens must be NEP-141 compliant, meaning non-NEP-141 share implementations require migration.\n\n## Changelog\n\n### 1.0.0 - Initial Version\n\n> Placeholder for the context about when and who approved this NEP version.\n\n#### Benefits\n\n> List of benefits filled by the Subject Matter Experts while reviewing this version:\n\n-   Would benefit the 
ecosystem by introducing a standardized way to create token vaults. Indeed, multiple projects currently exist (e.g. Metapool, More Markets, etc.), but none of them follows a single standard, which makes it hard to interconnect with them.\n-   Benefit 2\n\n#### Concerns\n\nNo major concerns were raised, besides the need to be careful with the security implications described above.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0635.md",
"content": "---\nNEP: 635\nTitle: P-256 ECDSA Signature Verification Host Function\nAuthors: Bowen Wang <bowen@nearone.org>\nStatus: Draft\nDiscussionsTo: https://github.com/near/NEPs/pull/0635\nType: Runtime Spec\nCategory: Protocol\nVersion: 1.0.0\nCreated: 2026-01-24\nLast Updated: 2026-01-24\n---\n\n## Summary\n\nThis NEP proposes adding a host function, `p256_verify`, to verify ECDSA signatures over the P-256 curve (secp256r1, prime256v1). The host function exposes a native implementation in the runtime, enabling smart contracts to verify P-256 signatures at substantially lower gas cost than a WASM-based implementation.\n\n## Motivation\n\nP-256 verification is a hard requirement for multiple workloads on NEAR today, and the current WASM path is prohibitively expensive:\n\n- Near Intents relies on passkeys (WebAuthn), which use P-256 ECDSA. A native host function substantially reduces verification gas and enables higher throughput for intents.\n- TEE attestation verification depends on the `dcap-qvl` library, which uses P-256. The latest `dcap-qvl` does not fit within NEAR's gas limits when compiled to WASM, and a host function would reduce the cost of `dcap-qvl::verify::verify()` (see https://github.com/Phala-Network/dcap-qvl/issues/99).\n- Benchmarks show that compiling the crypto library to WASM directly consumes ~46 Tgas for verifying a 32-byte message, while the host function consumes ~0.45 Tgas, a reduction of over 100x.\n\n## Rationale and alternatives\n\nAdding a dedicated host function provides an immediate and predictable gas reduction for a widely used cryptographic primitive. The runtime already includes mature RustCrypto implementations, and the host function design is consistent with existing cryptography host functions such as `ed25519_verify`.\n\nAlternatives considered:\n\n- Improve WASM compilation performance or use new compiler backends. 
This is a large, cross-cutting effort and does not guarantee acceptable costs for P-256 verification in the near term.\n- Increase gas limits or allow special-case gas budgets. This adds policy complexity and does not address the root performance gap.\n- Require contracts to verify P-256 signatures off-chain. This shifts trust and reduces on-chain verifiability, which is undesirable for intents and TEE attestations.\n\n## Specification\n\nThe keywords \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).\n\n### Protocol feature\n\nThe host function MUST be guarded by the protocol feature flag `p256_verify`. When disabled, the import MUST be unavailable to contracts.\n\n### Host function\n\n```rust\nextern \"C\" {\n  /// Verify a P-256 ECDSA signature for a message and public key.\n  ///\n  /// - `signature` MUST be 64 bytes encoded as `r || s` (32 bytes each, big-endian).\n  /// - `public_key` MUST be a 33-byte compressed SEC1 encoding.\n  /// - `message` is the pre-hashed digest to verify (no hashing is performed by the host function).\n  ///   Callers MUST hash the signed data with an appropriate cryptographic hash function before\n  ///   calling `p256_verify` (e.g., WebAuthn uses SHA-256). 
If `message` is not exactly 32 bytes,\n  ///   it will be truncated or zero-padded to 32 bytes to match the P-256 field size.\n  ///\n  /// Returns:\n  /// - 1 if the signature verifies\n  /// - 0 if the signature does not verify or cannot be parsed\n  ///\n  /// # Errors\n  ///\n  /// - If `signature` length is not 64, the runtime MUST raise `P256VerifyInvalidInput`.\n  /// - If `public_key` length is not 33, the runtime MUST raise `P256VerifyInvalidInput`.\n  /// - If any pointer is out of bounds, the runtime MUST raise `MemoryAccessViolation`.\n  fn p256_verify(\n    sig_len: u64,\n    sig_ptr: u64,\n    msg_len: u64,\n    msg_ptr: u64,\n    public_key_len: u64,\n    public_key_ptr: u64,\n  ) -> u64;\n}\n```\n\nEach input can be in memory or in a register. If the length argument is set to `u64::MAX`, the corresponding pointer is interpreted as a register ID. The runtime MUST apply the standard input cost for reading memory or registers.\n\n### Gas cost\n\nThe gas cost MUST be computed as:\n\n`input_cost(num_bytes_signature) + input_cost(num_bytes_message) + input_cost(num_bytes_public_key) + p256_verify_base + p256_verify_byte * num_bytes_message`\n\n### SDK exposure\n\nOnce enabled, the runtime SHOULD expose a higher-level helper in `near-sdk` as:\n\n```rust\npub fn p256_verify(signature: [u8; 64], message: [u8; 32], public_key: [u8; 33]) -> bool;\n```\n\n## Reference-level specification\n\nThe reference implementation in `nearcore`:\n\n- Uses the RustCrypto `p256` crate - [audited here](https://reports.zksecurity.xyz/reports/near-p256/) - to parse `Signature::from_slice` and `VerifyingKey::from_sec1_bytes`.\n- Returns `0` for parsing failures or verification failures, and raises `P256VerifyInvalidInput` for invalid lengths.\n- Charges `p256_verify_base` once and `p256_verify_byte` per message byte, in addition to the standard input costs.\n\n## Security Implications (Optional)\n\nThis host function does not perform hashing or domain separation. 
Callers MUST ensure that the message passed to `p256_verify` follows the correct protocol (for example, WebAuthn uses SHA-256 over the signed data). The runtime only checks signature and key encoding constraints as specified above.\n\n## Unresolved Issues (Optional)\n\n- Whether to support uncompressed SEC1 public keys (65 bytes) in addition to compressed keys.\n- Whether to support DER-encoded ECDSA signatures in addition to raw `r || s` encoding.\n\n## Future possibilities\n\n- Batch verification for P-256 signatures if proven beneficial for intents or attestations.\n- Additional host functions for other curves used by standard protocols or TEEs.\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  },
  {
    "path": "neps/nep-0638.md",
"content": "---\nNEP: 638\nTitle: `chain_id()` host function\nAuthors: Arseny Mitin <mitinarseny@gmail.com>\nStatus: Draft\nType: Protocol\nVersion: 1.0.0\nCreated: 2026-02-02\nLastUpdated: 2026-02-02\n---\n\n## Summary\n\nThis NEP proposes adding a new `chain_id` host function, so that smart-contracts\ncan retrieve the identifier of the Near chain they are being executed on.\n\n## Motivation\n\nWith [NEP-616](./nep-0616.md) it's now possible to have account-abstracted\n[wallet-contracts](./nep-0616.md#wallet-extentions) deployed at [deterministic\nAccountIds](./nep-0616.md#deterministic-accountids). These wallet-contracts are\nthemselves responsible for signature verification, nonce tracking, and so on.\n\nWhile this enables true account abstraction, smart-contracts on Near currently\nstill lack knowledge of the `chain_id` they are being executed on. As a result,\nwallet-contracts do not have native protection against replaying messages\nbetween mainnet, testnet and other networks. The existing [workarounds](#alternatives)\ncome at the cost of poor UX.\n\nThis NEP removes this limitation by introducing a new `chain_id` host function, enabling\naccount abstraction without UX tradeoffs.\n\n## Specification\n\nThe keywords \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).\n\n### Host Function\n\n```rust\nextern \"C\" {\n    /// Get the current chain_id.\n    ///\n    /// Writes the current chain_id as a UTF-8 encoded string into register\n    /// `register_id`.\n    fn chain_id(register_id: u64);\n}\n```\n\n### Gas cost\n\nThe gas cost MUST be computed as:\n\n`base + write_register_base + write_register_byte * num_bytes`\n\n## Reference Implementation\n\nHaving `chain_id` 
[field](https://github.com/near/nearcore/blob/b8ab0f5d578617eb287030e77ab17be512a5f8ac/core/chain-configs/src/genesis_config.rs#L120-L122)\nalready present in the validator config, it's possible to make this value accessible\nby smart-contracts in the runtime.\n\n## Security Implications\n\nAs already [stated](https://github.com/near/nearcore/blob/b8ab0f5d578617eb287030e77ab17be512a5f8ac/core/chain-configs/src/genesis_config.rs#L120-L122)\nin the nearcore implementation, `chain_id` MUST be unique for every blockchain.\nOtherwise, messages can be replayed between networks with the same chain_id.\n\n## Alternatives\n\nAn alternative would be to encode a \"virtual\" `chain_id` as part of the\ninitialization state of the wallet-contract. For instance, TON's [wallet-v5](https://github.com/ton-blockchain/wallet-contract-v5/blob/main/README.md#known-security-issues) follows this approach, where clients [derive](https://github.com/ton-org/ton/blob/5deac43432fa5dfcd441f2f0100dc3f89f55bead/src/wallets/v5r1/WalletV5R1WalletId.ts#L11-L15)\nthe corresponding `wallet_id` using the \"virtual\" `chain_id`.\n\nHowever, this would force users to have their wallet-contracts deployed at\ndifferent [deterministic AccountIds](./nep-0616.md#deterministic-accountids)\non different Near chains even if they use the same public key. As a result, this\nworsens the UX for end users.\n\nHaving `chain_id` encoded in the signed payload enables EVM-like UX, where all\nwallet-contracts are deployed at the same AccountIds, regardless of the chain_id\nbeing used.\n\n## Future possibilities\n\nCurrently, [transactions](https://github.com/near/nearcore/blob/66a3dc3c2e79adb3bbe5bfd39e41ef7dcd723a95/core/primitives/src/transaction.rs#L82-L98)\non Near do not include a `chain_id`. So, replay protection is achieved solely by\nrelying on the unlikelihood of recent [block_hash](https://github.com/near/nearcore/blob/66a3dc3c2e79adb3bbe5bfd39e41ef7dcd723a95/core/primitives/src/transaction.rs#L94-L95)\ncollisions. 
It might make sense to add a `chain_id` field to the next version of the\n[Transaction](https://github.com/near/nearcore/blob/66a3dc3c2e79adb3bbe5bfd39e41ef7dcd723a95/core/primitives/src/transaction.rs#L109-L112)\nobject.\n\n## Consequences\n\n### Positive\n\n* Wallet-contracts (and other smart-contracts) can have native protection\n  against replaying messages between different Near chains without UX tradeoffs.\n\n### Neutral\n\n\\-\n\n### Negative\n\n\\-\n\n### Backwards Compatibility\n\nThis proposal is **backwards-compatible**: it only adds a new host function,\nso new contracts can opt in to it, while existing ones can be\nupgraded to use it if necessary.\n\n## Changelog\n\n### 1.0.0 - Initial Version\n\n#### Benefits\n\n* Native protection against replaying messages between different Near chains.\n* Better UX for end users: wallet-contracts are deployed at the same AccountIds\n  when using the same public key.\n\n#### Concerns\n\n|   # | Concern | Resolution | Status |\n| -: | :------------------------------------------------------------- | :--- | ---: |\n|  1 | Does consensus ensure all validators have the same `chain_id`? | Yes ([reasoning](https://github.com/near/NEPs/pull/638#issuecomment-3842705589)) | New |\n\n## Copyright\n\nCopyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).\n"
  }
]