Repository: near/NEPs
Branch: master
Commit: 7ea639f0ada3
Files: 67
Total size: 1.0 MB
Directory structure:
gitextract_xo_k9all/
├── .github/
│   └── workflows/
│       ├── add-to-devrel.yml
│       ├── lint.yml
│       └── spellcheck.yml
├── .gitignore
├── .markdownlint.json
├── .mlc_config.json
├── CODEOWNERS
├── README.md
├── nep-0000-template.md
└── neps/
    ├── archive/
    │   ├── 0005-access-keys.md
    │   ├── 0006-bindings.md
    │   ├── 0008-transaction-refactoring.md
    │   ├── 0013-system-methods.md
    │   ├── 0017-execution-outcome.md
    │   ├── 0018-view-change-method.md
    │   ├── 0033-economics.md
    │   ├── 0040-split-states.md
    │   └── README.md
    ├── nep-0001.md
    ├── nep-0021.md
    ├── nep-0141.md
    ├── nep-0145.md
    ├── nep-0148.md
    ├── nep-0171.md
    ├── nep-0177.md
    ├── nep-0178.md
    ├── nep-0181.md
    ├── nep-0199.md
    ├── nep-0245/
    │   ├── ApprovalManagement.md
    │   ├── Enumeration.md
    │   ├── Events.md
    │   └── Metadata.md
    ├── nep-0245.md
    ├── nep-0256.md
    ├── nep-0264.md
    ├── nep-0297.md
    ├── nep-0300.md
    ├── nep-0330.md
    ├── nep-0364.md
    ├── nep-0366.md
    ├── nep-0368.md
    ├── nep-0393.md
    ├── nep-0399.md
    ├── nep-0408.md
    ├── nep-0413.md
    ├── nep-0418.md
    ├── nep-0448.md
    ├── nep-0452.md
    ├── nep-0455.md
    ├── nep-0488.md
    ├── nep-0491.md
    ├── nep-0492.md
    ├── nep-0508.md
    ├── nep-0509.md
    ├── nep-0514.md
    ├── nep-0518.md
    ├── nep-0519.md
    ├── nep-0536.md
    ├── nep-0539.md
    ├── nep-0568.md
    ├── nep-0584.md
    ├── nep-0591.md
    ├── nep-0611.md
    ├── nep-0616.md
    ├── nep-0621.md
    ├── nep-0635.md
    └── nep-0638.md
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/workflows/add-to-devrel.yml
================================================
name: 'Add to DevRel Project'

on:
  issues:
    types:
      - opened
      - reopened
  pull_request_target:
    types:
      - opened
      - reopened

jobs:
  add-to-project:
    name: Add issue/PR to project
    runs-on: ubuntu-latest
    steps:
      - uses: actions/add-to-project@v1.0.0
        with:
          # add to DevRel Project #117
          project-url: https://github.com/orgs/near/projects/117
          github-token: ${{ secrets.PROJECT_GH_TOKEN }}
================================================
FILE: .github/workflows/lint.yml
================================================
name: Lint

on:
  pull_request:
    branches: [master, main]
  merge_group:

concurrency:
  group: ci-${{ github.ref }}-${{ github.workflow }}
  cancel-in-progress: true

jobs:
  markdown-lint:
    name: markdown-lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      # lint only changed files
      - uses: tj-actions/changed-files@v46
        id: changed-files
        with:
          files: "**/*.md"
          separator: ","
      - uses: DavidAnson/markdownlint-cli2-action@v19
        if: steps.changed-files.outputs.any_changed == 'true'
        with:
          config: .markdownlint.json
          globs: |
            ${{ steps.changed-files.outputs.all_changed_files }}
          separator: ","
  markdown-link-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - uses: gaurav-nelson/github-action-markdown-link-check@v1
        with:
          use-quiet-mode: "yes"
          # use-verbose-mode: 'yes'
          config-file: ".mlc_config.json"
          folder-path: "neps"
================================================
FILE: .github/workflows/spellcheck.yml
================================================
name: spellchecker

on:
  pull_request:
    branches:
      - master

jobs:
  misspell:
    name: runner / misspell
    runs-on: ubuntu-latest
    steps:
      - name: Check out code.
        uses: actions/checkout@v1
      - name: misspell
        id: check_for_typos
        uses: reviewdog/action-misspell@v1
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          path: "./specs"
          locale: "US"
================================================
FILE: .gitignore
================================================
/docs
.idea
.DS_Store
.vscode
================================================
FILE: .markdownlint.json
================================================
{
  "default": true,
  "MD001": false,
  "MD013": false,
  "MD024": { "siblings_only": true },
  "MD025": false,
  "MD033": false,
  "MD034": false,
  "MD040": false,
  "MD041": false,
  "MD046": false,
  "whitespace": false
}
================================================
FILE: .mlc_config.json
================================================
{
  "ignorePatterns": [
    {
      "pattern": "^/"
    },
    {
      "pattern": "^https://codepen.io"
    },
    {
      "pattern": "^https://stackoverflow.com"
    },
    {
      "pattern": "^https://www.researchgate.net"
    },
    {
      "pattern": "^https://pages.near.org/papers/the-official-near-white-paper/"
    }
  ],
  "timeout": "20s",
  "retryOn429": true,
  "retryCount": 5,
  "fallbackRetryDelay": "30s",
  "aliveStatusCodes": [200, 206]
}
================================================
FILE: CODEOWNERS
================================================
* @near/nep-moderators
================================================
FILE: README.md
================================================
# NEAR Protocol Specifications and Standards
[](https://near.zulipchat.com/#narrow/stream/320497-nep-standards)
This repository hosts the current NEAR Protocol specification and standards.
This includes the core protocol specification, APIs, contract standards, processes, and workflows.
Changes to the protocol specification and standards are called NEAR Enhancement Proposals (NEPs).
## NEPs
| NEP # | Title | Author | Status |
| ----------------------------------------------------------------- | ----------------------------------------------------------------- | ------------------------------------------------- | ---------- |
| [0001](https://github.com/near/NEPs/blob/master/neps/nep-0001.md) | NEP Purpose and Guidelines | @ori-near @bowenwang1996 @austinbaggio @frol | Living |
| [0021](https://github.com/near/NEPs/blob/master/neps/nep-0021.md) | Fungible Token Standard (Deprecated) | @evgenykuzyakov | Deprecated |
| [0141](https://github.com/near/NEPs/blob/master/neps/nep-0141.md) | Fungible Token Standard | @evgenykuzyakov @oysterpack, @robert-zaremba | Final |
| [0145](https://github.com/near/NEPs/blob/master/neps/nep-0145.md) | Storage Management | @evgenykuzyakov | Final |
| [0148](https://github.com/near/NEPs/blob/master/neps/nep-0148.md) | Fungible Token Metadata | @robert-zaremba @evgenykuzyakov @oysterpack | Final |
| [0171](https://github.com/near/NEPs/blob/master/neps/nep-0171.md) | Non Fungible Token Standard | @mikedotexe @evgenykuzyakov @oysterpack | Final |
| [0177](https://github.com/near/NEPs/blob/master/neps/nep-0177.md) | Non Fungible Token Metadata | @chadoh @mikedotexe | Final |
| [0178](https://github.com/near/NEPs/blob/master/neps/nep-0178.md) | Non Fungible Token Approval Management | @chadoh @thor314 | Final |
| [0181](https://github.com/near/NEPs/blob/master/neps/nep-0181.md) | Non Fungible Token Enumeration | @chadoh @thor314 | Final |
| [0199](https://github.com/near/NEPs/blob/master/neps/nep-0199.md) | Non Fungible Token Royalties and Payouts | @thor314 @mattlockyer | Final |
| [0245](https://github.com/near/NEPs/blob/master/neps/nep-0245.md) | Multi Token Standard | @zcstarr @riqi @jriemann @marcos.sun | Final |
| [0256](https://github.com/near/NEPs/blob/master/neps/nep-0256.md) | Non-Fungible Token Events | @telezhnaya | Final |
| [0264](https://github.com/near/NEPs/blob/master/neps/nep-0264.md) | Promise Gas Weights | @austinabell | Final |
| [0297](https://github.com/near/NEPs/blob/master/neps/nep-0297.md) | Events Standard | @telezhnaya | Final |
| [0300](https://github.com/near/NEPs/blob/master/neps/nep-0300.md) | Fungible Token Events | @telezhnaya | Final |
| [0330](https://github.com/near/NEPs/blob/master/neps/nep-0330.md) | Source Metadata | @BenKurrek | Final |
| [0364](https://github.com/near/NEPs/blob/master/neps/nep-0364.md) | Efficient signature verification and hashing precompile functions | @blasrodri | Final |
| [0366](https://github.com/near/NEPs/blob/master/neps/nep-0366.md) | Meta Transactions | @ilblackdragon @e-uleyskiy @fadeevab | Final |
| [0368](https://github.com/near/NEPs/blob/master/neps/nep-0368.md) | Bridge Wallets | @lewis-sqa | Final |
| [0393](https://github.com/near/NEPs/blob/master/neps/nep-0393.md) | Soul Bound Token (SBT) | @robert-zaremba | Final |
| [0399](https://github.com/near/NEPs/blob/master/neps/nep-0399.md) | Flat Storage | @Longarithm @mzhangmzz | Final |
| [0408](https://github.com/near/NEPs/blob/master/neps/nep-0408.md) | Injected Wallet API | @MaximusHaximus @lewis-sqa | Final |
| [0413](https://github.com/near/NEPs/blob/master/neps/nep-0413.md) | Near Wallet API - support for signMessage method | @gagdiez @gutsyphilip | Final |
| [0418](https://github.com/near/NEPs/blob/master/neps/nep-0418.md) | Remove attached_deposit view panic | @austinabell | Final |
| [0448](https://github.com/near/NEPs/blob/master/neps/nep-0448.md) | Zero-balance Accounts | @bowenwang1996 | Final |
| [0452](https://github.com/near/NEPs/blob/master/neps/nep-0452.md) | Linkdrop Standard | @benkurrek @miyachi | Final |
| [0455](https://github.com/near/NEPs/blob/master/neps/nep-0455.md) | Parameter Compute Costs | @akashin @jakmeier | Final |
| [0488](https://github.com/near/NEPs/blob/master/neps/nep-0488.md) | Host Functions for BLS12-381 Curve Operations | @olga24912 | Final |
| [0491](https://github.com/near/NEPs/blob/master/neps/nep-0491.md) | Non-Refundable Storage Staking | @jakmeier | Final |
| [0492](https://github.com/near/NEPs/blob/master/neps/nep-0492.md) | Restrict creation of Ethereum Addresses | @bowenwang1996 | Final |
| [0508](https://github.com/near/NEPs/blob/master/neps/nep-0508.md) | Resharding v2 | @wacban @shreyan-gupta @walnut-the-cat | Final |
| [0509](https://github.com/near/NEPs/blob/master/neps/nep-0509.md) | Stateless validation Stage 0 | @robin-near @pugachAG @Longarithm @walnut-the-cat | Final |
| [0514](https://github.com/near/NEPs/blob/master/neps/nep-0514.md) | Fewer Block Producer Seats in `testnet` | @nikurt | Final |
| [0518](https://github.com/near/NEPs/blob/master/neps/nep-0518.md) | Web3-Compatible Wallets Support | @alexauroradev @birchmd | Final |
| [0519](https://github.com/near/NEPs/blob/master/neps/nep-0519.md) | Yield Execution | @akhi3030 @saketh-are | Final |
| [0536](https://github.com/near/NEPs/blob/master/neps/nep-0536.md) | Reduce the number of gas refunds | @evgenykuzyakov @bowenwang1996 | Final |
| [0539](https://github.com/near/NEPs/blob/master/neps/nep-0539.md) | Cross-Shard Congestion Control | @wacban @jakmeier | Final |
| [0568](https://github.com/near/NEPs/blob/master/neps/nep-0568.md) | Resharding V3 | @staffik @Longarithm @Trisfald @marcelo-gonzalez @shreyan-gupta @wacban | Final |
| [0584](https://github.com/near/NEPs/blob/master/neps/nep-0584.md) | Cross-shard bandwidth scheduler | @jancionear | Final |
| [0591](https://github.com/near/NEPs/blob/master/neps/nep-0591.md) | Global Contracts | @bowenwang1996 @pugachag @stedfn | Final |
## Specification
NEAR Specification is under active development.
Specification defines how any NEAR client should be connecting, producing blocks, reaching consensus, processing state transitions, using runtime APIs, and implementing smart contract standards as well.
## Standards & Processes
Standards refer to various common interfaces and APIs that are used by smart contract developers on top of the NEAR Protocol.
For example, such standards include the SDK for Rust, the API for fungible tokens, or how to manage a user's social graph.
Processes include the release process for the spec and clients, and how standards are updated.
### Contributing
#### Expectations
Ideas presented ultimately as NEPs will need to be driven by the author through the process. It's an exciting opportunity with a fair amount of responsibility from the contributor(s). Please put care into the details. NEPs that do not present convincing motivation, demonstrate understanding of the impact of the design, or are disingenuous about the drawbacks or alternatives tend to be poorly received. Again, by the time the NEP makes it to the pull request, it has a clear plan and path forward based on the discussions in the governance forum.
#### Process
Spec changes are ultimately done via pull requests to this repository (formalized process [here](neps/nep-0001.md)). In an effort to keep the pull request clean and readable, please follow these instructions to flesh out an idea.
1. Sign up for the [governance site](https://gov.near.org/) and make a post to the appropriate section. For instance, during the ideation phase of a standard, one might start a new conversation in the [Development » Standards section](https://gov.near.org/c/dev/standards/29) or the [NEP Discussions Forum](https://github.com/near/NEPs/discussions).
2. The forum has comment threading which allows the community and NEAR Collective to ideate, ask questions, wrestle with approaches, etc. If more immediate responses are desired, consider bringing the conversation to [Zulip](https://near.zulipchat.com/#narrow/stream/320497-nep-standards).
3. When the governance conversations have reached a point where a clear plan is evident, create a pull request, using the instructions below.
- Clone this repository and create a branch named for your feature, e.g. "my-feature".
- Update relevant content in the current specification that is affected by the proposal.
- Create a Pull request, using [nep-0000-template.md](nep-0000-template.md) to describe motivation and details of the new Contract or Protocol specification. In the document header, ensure the `Status` is marked as `Draft`, and any relevant discussion links are added to the `DiscussionsTo` section.
Use the pull request number padded with zeroes. For instance, the pull request `219` should be created as `neps/nep-0219.md`.
- Add your Draft standard to the `NEPs` section of this README.md. This helps advertise your standard via GitHub.
- Once complete, submit the pull request for editor review.
- The formalization dance begins:
- NEP Editors, who are unopinionated shepherds of the process, check document formatting, completeness and adherence to [NEP-0001](neps/nep-0001.md) and approve the pull request.
- Once ready, the author updates the NEP status to `Review`, allowing further community participation to address any gaps or clarifications, normally as part of the Review PR.
- NEP Editors mark the NEP as `Last Call`, allowing a 14 day grace period for any final community feedback. Any unresolved show stoppers roll the state back to `Review`.
- NEP Editors mark the NEP as `Final`, marking the standard as complete. The standard should only be updated to correct errata and add non-normative clarifications.
Tip: build consensus and integrate feedback. NEPs that have broad support are much more likely to make progress than those that don't receive any comments. Feel free to reach out to the NEP assignee in particular to get help identifying stakeholders and obstacles.
================================================
FILE: nep-0000-template.md
================================================
---
NEP: 0
Title: NEP Template
Authors: Todd Codrington III <satoshi@fakenews.org>
Status: Approved
DiscussionsTo: https://github.com/nearprotocol/neps/pull/0000
Type: Developer Tools
Version: 1.1.0
Created: 2022-03-03
LastUpdated: 2023-03-07
---
[This is a NEP (NEAR Enhancement Proposal) template, as described in [NEP-0001](https://github.com/near/NEPs/blob/master/neps/nep-0001.md). Use this when creating a new NEP. The author should delete or replace all the comments or commented brackets when merging their NEP.]
<!-- NEP Header Preamble
Each NEP must begin with an RFC 822 style header preamble. The headers must appear in the following order:
NEP: The NEP title in no more than 4-5 words.
Title: NEP title
Author: List of author name(s) and optional contact info. Examples: FirstName LastName <satoshi@fakenews.org>, FirstName LastName (@GitHubUserName)
Status: The NEP status -- New | Approved | Deprecated.
DiscussionsTo (Optional): URL of current canonical discussion thread, e.g. GitHub Pull Request link.
Type: The NEP type -- Protocol | Contract Standard | Wallet Standard | DevTools Standard.
Requires (Optional): NEPs may have a Requires header, indicating the NEP numbers that this NEP depends on.
Replaces (Optional): A newer NEP marked with a SupersededBy header must have a Replaces header containing the number of the NEP that it rendered obsolete.
SupersededBy (Optional): NEPs may also have a SupersededBy header indicating that a NEP has been rendered obsolete by a later document; the value is the number of the NEP that replaces the current document.
Version: The version number. A new NEP should start with 1.0.0, and future NEP Extensions must follow Semantic Versioning.
Created: The Created header records the date that the NEP was assigned a number, should be in ISO 8601 yyyy-mm-dd format, e.g. 2022-12-31.
LastUpdated: The LastUpdated header records the date that the NEP was last modified, should be in ISO 8601 yyyy-mm-dd format, e.g. 2022-12-31.
See example above -->
## Summary
[Provide a short human-readable (~200 words) description of the proposal. A reader should get from this section a high-level understanding about the issue this NEP is addressing.]
## Motivation
[Explain why this proposal is necessary, how it will benefit the NEAR protocol or community, and what problems it solves. Also describe why the existing protocol specification is inadequate to address the problem that this NEP solves, and what potential use cases or outcomes.]
## Specification
[Explain the proposal as if you were teaching it to another developer. This generally means describing the syntax and semantics, naming new concepts, and providing clear examples. The specification needs to include sufficient detail to allow interoperable implementations to be built by following only the provided specification. In cases where it is infeasible to specify all implementation details upfront, broadly describe what they are.]
## Reference Implementation
[This technical section is required for Protocol proposals but optional for other categories. A draft implementation should demonstrate a minimal implementation that assists in understanding or implementing this proposal. Explain the design in sufficient detail that:
* Its interaction with other features is clear.
* Where possible, include a Minimum Viable Interface subsection expressing the required behavior and types in a target programming language. (ie. traits and structs for rust, interfaces and classes for javascript, function signatures and structs for c, etc.)
* It is reasonably clear how the feature would be implemented.
* Corner cases are dissected by example.
* For protocol changes: A link to a draft PR on nearcore that shows how it can be integrated in the current code. It should at least solve the key technical challenges.
The section should return to the examples given in the previous section, and explain more fully how the detailed proposal makes those examples work.]
## Security Implications
[Explicitly outline any security concerns in relation to the NEP, and potential ways to resolve or mitigate them. At the very least, well-known relevant threats must be covered, e.g. person-in-the-middle, double-spend, XSS, CSRF, etc.]
## Alternatives
[Explain any alternative designs that were considered and the rationale for not choosing them. Why is your design superior?]
## Future possibilities
[Describe any natural extensions and evolutions to the NEP proposal, and how they would impact the project. Use this section as a tool to help fully consider all possible interactions with the project in your proposal. This is also a good place to "dump ideas" if they are out of scope for the NEP but otherwise related. Note that having something written down in the future-possibilities section is not a reason to accept the current or a future NEP. Such notes should be in the section on motivation or rationale in this or subsequent NEPs. The section merely provides additional information.]
## Consequences
[This section describes the consequences, after applying the decision. All consequences should be summarized here, not just the "positive" ones. Record any concerns raised throughout the NEP discussion.]
### Positive
* p1
### Neutral
* n1
### Negative
* n1
### Backwards Compatibility
[All NEPs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The author must explain how they propose to deal with these incompatibilities. Submissions without a sufficient backwards compatibility treatise may be rejected outright.]
## Unresolved Issues (Optional)
[Explain any issues that warrant further discussion. Considerations
* What parts of the design do you expect to resolve through the NEP process before this gets merged?
* What parts of the design do you expect to resolve through the implementation of this feature before stabilization?
* What related issues do you consider out of scope for this NEP that could be addressed in the future independently of the solution that comes out of this NEP?]
## Changelog
[The changelog section provides historical context for how the NEP developed over time. Initial NEP submission should start with version 1.0.0, and all subsequent NEP extensions must follow [Semantic Versioning](https://semver.org/). Every version should have the benefits and concerns raised during the review. The author does not need to fill out this section for the initial draft. Instead, the assigned reviewers (Subject Matter Experts) should create the first version during the first technical review. After the final public call, the author should then finalize the last version of the decision context.]
### 1.0.0 - Initial Version
> Placeholder for the context about when and who approved this NEP version.
#### Benefits
> List of benefits filled by the Subject Matter Experts while reviewing this version:
* Benefit 1
* Benefit 2
#### Concerns
> Template for Subject Matter Experts review for this version:
> Status: New | Ongoing | Resolved
| # | Concern | Resolution | Status |
| --: | :------ | :--------- | -----: |
| 1 | | | |
| 2 | | | |
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
================================================
FILE: neps/archive/0005-access-keys.md
================================================
- Proposal Code Name: access_keys
- Start Date: 2019-07-08
- NEP PR: [nearprotocol/neps#0000](https://github.com/near/NEPs/blob/master/nep-0000-template.md)
- Issue(s): [nearprotocol/nearcore#687](https://github.com/nearprotocol/nearcore/issues/687)
# Summary
Access keys provide limited access to an account.
Each access key belongs to some account and is identified by a unique (within the account) public key.
One account may have a large number of access keys.
Access keys will replace the original account-level public keys.
Access keys allow acting on behalf of the account, with the allowed transactions restricted by the access key permissions.
# Motivation
Access keys give the ability to use dApps in a secure way without asking the user to sign every transaction in the wallet.
By issuing the access key once for the application, the application can then act on behalf of the user in a restricted environment.
This enables a seamless experience for the user.
Access keys also enable a few other use-cases that are discussed in detail below.
# Guide-level explanation
Here are proposed changes for the AccessKey and Account structs.
```rust
/// `account_id,public_key` is a key in the state
struct AccessKey {
    /// The nonce for this access key.
    /// It makes sense for nonce to not start from 0, in case the access key is recreated
    /// with the same public key, to avoid replaying of old transactions.
    pub nonce: Nonce, // u64
    /// Defines permissions for the AccessKey
    pub permission: AccessKeyPermission,
}

/// Defines permissions for AccessKey
pub enum AccessKeyPermission {
    /// Restricts AccessKey to only be used for function calls.
    FunctionCall(FunctionCallPermission),
    /// Gives full access to the account.
    /// NOTE: It's used to replace account-level public keys.
    FullAccess,
}

pub struct FunctionCallPermission {
    /// `Some` amount that can be spent for transaction fees by this access key from the account balance.
    /// When used, both the account balance and the allowance are decreased.
    /// To change or increase the allowance, the access key can be replaced using SwapKey.
    /// NOTE: If you reuse the public key, make sure to keep the nonce from the old AccessKey.
    /// `None` means unlimited allowance.
    pub allowance: Option<Balance>, // u128
    /// The AccountID of the receiver of the transaction. The access key will restrict transactions to
    /// only this receiver.
    pub receiver_id: AccountId, // String
    /// If `Some`, the access key would be restricted to calling only the given method name.
    /// `None` means it's restricted to calling the receiver_id contract, but any method name.
    pub method_name: Option<String>,
}

/// NOTE: This change removes account-level nonce and public keys.
/// Key is `account_id`
struct Account {
    pub balance: Balance, // u128
    pub code_hash: Hash,
    /// Storage usage accounts for all access keys
    pub storage_usage: StorageUsage, // u64
    /// Last block index at which the storage was paid for.
    pub storage_paid_at: BlockIndex, // u64
}
```
### Examples
#### AccessKey as account-level public key
If an AccessKey has full access to the account and the allowance set to the max value for u128, then
it essentially acts as an account-level public key, which means we can remove account-level
public keys from the account struct and rely only on access keys.
An access key example from user `vasya.near` with full access:
```rust
/// vasya.near,a123bca2
AccessKey {
    nonce: 0,
    permission: AccessKeyPermission::FullAccess,
}
```
#### AccessKey for a dApp by a user
This is a simple example where a user wants to use some dApp. The user has to authorize this dApp within their wallet, so the dApp knows who the user is, and also can issue simple function call transactions on behalf of this user.
To create such an AccessKey, a dApp generates a new key pair and passes the new public key to the user's wallet in a URL.
Then the wallet asks the user to create a new AccessKey that points to the dApp.
The user has to explicitly confirm this in the wallet for the AccessKey to be created.
The new access key is restricted to be used only for the app's contract_id, but is not restricted to any method name.
The user also selects the allowance to some reasonable amount, enough for the application to issue regular transactions.
The application might also hint the user about this desired allowance in some way.
Now the app can issue function call transactions on behalf of the user’s account towards the app’s contract without requiring the user to sign each transaction.
An access key example for chess app from user `vasya.near`:
```rust
/// vasya.near,c5d312f3
AccessKey {
    nonce: 0,
    permission: AccessKeyPermission::FunctionCall(FunctionCallPermission {
        // Since the access key is stored on the Chess app front-end, the user has
        // limited the spending amount to some reasonable, but large enough number.
        // NOTE: It needs to be scaled by the token decimals, e.g. 10^-18
        allowance: Some(1_000_000_000),
        // This access key restricts access to the `chess.app` contract.
        receiver_id: "chess.app",
        // Any method name on the `chess.app` contract can be called.
        method_name: None,
    }),
}
```
#### AccessKey issued by a dApp
This is an example where the dApp wants to pay for the user, or doesn't want to go through the user's sign-in flow.
For whatever reason, the dApp decided to issue an access key directly from its own account.
For this to work, there should be one account with funds (that the dApp controls on the backend) which creates access keys for the users.
The difference from the example above is that there is only one account (the same for all users) that creates multiple access keys (one per user) towards one other contract (the app's contract).
To differentiate users, the contract has to use the public key of the access key instead of the sender's account ID.
If the access key needs to support the user's identity via the account ID, the contract can provide a public method that links the user's account ID with a given public key.
Once this is done, a user can request a new access key with the linked public key (sponsored by the app), but it is linked to the user's account ID.
There are some caveats with this approach:
- The dApp is required to have a backend and to have some sybil resistance for users. It's needed to prevent abuse by bots.
- Writing the contract is slightly more complicated, since the contract now needs to handle mapping of the public keys to the account IDs.
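The key-to-account mapping described above could be sketched roughly as follows. This is illustrative only: the contract struct, method names, and in-memory map are assumptions for the example, and a real contract would use the NEAR SDK with persistent storage.

```rust
use std::collections::HashMap;

/// Sketch of the state a contract would keep when one funding account
/// issues a separate access key (public key) per user.
#[derive(Default)]
pub struct ChessContract {
    /// public key (as a string) -> the user's account ID
    key_owners: HashMap<String, String>,
}

impl ChessContract {
    /// Called by the user from their own account to claim a public key,
    /// linking their account ID to the sponsored access key.
    pub fn link_key(&mut self, signer_account_id: String, public_key: String) {
        self.key_owners.insert(public_key, signer_account_id);
    }

    /// Called via the sponsored access key: identify the user by the public
    /// key that signed the transaction, not by the sender's account ID.
    pub fn user_for_key(&self, signer_public_key: &str) -> Option<&String> {
        self.key_owners.get(signer_public_key)
    }
}
```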
An access key example for chess app paid by the chess app from `chess.funds` account:
```rust
/// chess.funds,2bc2b3b
AccessKey {
    nonce: 0,
    permission: AccessKeyPermission::FunctionCall(FunctionCallPermission {
        // Since the access key is given to the user, the developer wants to limit
        // the spending amount to some conservative number, since a user might try to drain it.
        allowance: Some(5_000_000),
        // This access key restricts access to the `chess.app` contract.
        receiver_id: "chess.app",
        // Any method name on the `chess.app` contract can be called (but some methods might just ignore this key).
        method_name: None,
    }),
}
```
#### AccessKey through a proxy
This example demonstrates how to have more granular control on top of the built-in access key restrictions.
Let's say a user wants to:
- limit the number of calls the access key can make per minute
- support multiple contracts with the same access key
- select which method names can be called and which can't
- transfer funds from the account up to a certain limit
- stake from the account, but prevent withdrawing funds
To make it work, we need custom logic to run at every call.
We can achieve this by running a portion of smart contract code before any action.
A user can deploy code on their account and restrict the access key to their account and to a method name, e.g. `proxy`.
Now this access key will only be able to issue transactions on behalf of the user that go to the user's contract code and call the method `proxy`.
The `proxy` method can find out which access key is used by comparing public keys and verify the request before executing it.
E.g. the access key should only be able to call `chess.app` at most 3 times per 20 blocks and can transfer at most 1M tokens to `chess.app`.
The `proxy` function internally can validate that this access key is used, fetch its config, validate the passed arguments, and proxy the transaction.
A `proxy` method might take the following arguments for a function call:
```json
{
  "action": "call",
  "contractId": "chess.app",
  "methodName": "move",
  "args": "{...serialized args...}",
  "amount": 0
}
```
In this case the `action` is `call`, so the function checks that the `amount` is within the withdrawal limit, checks that the contract name is `chess.app`, and, if fewer than 3 calls were made in the last 20 blocks, issues an async call to `chess.app`.
The same `proxy` function in theory can handle other actions, e.g. staking or vesting.
The benefit of having a proxy function on your own account is that it doesn't require an additional receipt, because the account's state and the code are available at the transaction verification time.
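The checks such a `proxy` method performs could look roughly like this. The config fields, rate-limit bookkeeping, and function names below are assumptions for illustration, not part of the proposal:

```rust
/// Hypothetical per-key config stored in the user's contract state.
pub struct ProxyConfig {
    pub allowed_contract: String,
    pub max_calls: usize,   // e.g. 3 calls...
    pub window_blocks: u64, // ...per 20 blocks
    pub max_transfer: u128, // e.g. at most 1M tokens
}

/// Hypothetical per-key mutable state: block heights of recent calls.
pub struct ProxyState {
    pub recent_call_blocks: Vec<u64>,
}

/// Validate one proxied call against the key's config: target contract,
/// withdrawal limit, and the sliding-window rate limit.
pub fn check_proxy_call(
    config: &ProxyConfig,
    state: &mut ProxyState,
    contract_id: &str,
    amount: u128,
    current_block: u64,
) -> Result<(), String> {
    if contract_id != config.allowed_contract {
        return Err("contract not allowed for this key".into());
    }
    if amount > config.max_transfer {
        return Err("amount exceeds the withdrawal limit".into());
    }
    // Keep only calls inside the sliding window, then enforce the rate limit.
    state
        .recent_call_blocks
        .retain(|&b| current_block - b < config.window_blocks);
    if state.recent_call_blocks.len() >= config.max_calls {
        return Err("rate limit exceeded".into());
    }
    state.recent_call_blocks.push(current_block);
    Ok(()) // the real method would now issue the async call to the target
}
```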
An example of an access key limited to `proxy` function:
```rust
/// vasya.near,3bc2b3b
AccessKey {
    nonce: 0,
    permission: AccessKeyPermission::FunctionCall(FunctionCallPermission {
        // Allowance can be large enough, since the user is likely trusting the app.
        allowance: Some(1_000_000_000),
        // This access key restricts access to the user's own account `vasya.near` contract.
        // Most likely, the contract code can be deployed and upgraded directly from the wallet.
        receiver_id: "vasya.near",
        // The method is restricted to `proxy`, which does all the security checks.
        method_name: Some("proxy"),
    }),
}
```
# Reference-level explanation
- Access keys are stored with the `account_id,public_key` key. Where `account_id` and `public_key` are actual Account ID and public keys, and `,` is a separator.
They should be stored on the same shard as the account.
- Access key storage rent should be accounted for and paid from the account directly without affecting the allowance.
- Access keys allowance can exceed the account balance.
- To validate a transaction signed with the AccessKey, we need to first validate the signature, then fetch the Account and the AccessKey, validate that we have enough funds and verify permissions.
- Account creation should now create a full access permission access key, instead of public keys within the account.
- SwapKey transaction should just replace the old access key with the given new access key.
### Technical changes
#### `nonce` on the AccessKey level instead of account level
Since access keys can be used by different people or parties at the same time, we need a
separate nonce for each key instead of a single nonce at the account level.
With a single nonce at the account level, there is a high probability that 2 apps would use the same nonce for 2 different transactions, and one of these transactions would be rejected.
Previously we were ordering transactions by nonce and rejecting transactions with a duplicated or lower nonce.
With the access key nonce, we still need to order transactions by nonce, but now we need to group them by `account_id,public_key` key instead of just account_id.
To prevent one access key from having a priority on other access keys, we should order transactions by hash when determining which transactions should be added to the block.
The suggestion from @nearmax:
"
We need to spec out here how transactions from different access keys are going to be ordered with respect to each other. For example:
3 access keys (A,B,C) issue 3 transactions each:
A1, A2, A3; B1,B2,B3; C1, C2, C3;
All these transactions operate on the same state so they need to have an order. First transaction to execute is one of \{A1,B1,C1} that has lowest hash, let's say it is B1. Second transaction to execute is one of \{A1,B2,C1} with lowest hash, etc.
"
We should also restrict the nonce of the next transaction to be exactly the previous nonce incremented by 1.
It will help us with ordering transactions.
The transaction ordering should be a separate topic which should also include security for transactions expiration and fork selection.
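The selection rule from the quoted comment could be sketched as follows. This is an illustration only: a `u64` stands in for the real transaction hash, and each access key's pending transactions are assumed to be already sorted by nonce.

```rust
struct Tx {
    key: &'static str, // access key the tx was signed with
    nonce: u64,
    hash: u64, // stand-in for the real transaction hash
}

// Repeatedly pick, among the front (lowest-nonce) transaction of each
// access key's queue, the one with the smallest hash.
fn order(mut queues: Vec<Vec<Tx>>) -> Vec<Tx> {
    let mut out = Vec::new();
    loop {
        let next = queues
            .iter()
            .enumerate()
            .filter(|(_, q)| !q.is_empty())
            .min_by_key(|(_, q)| q[0].hash)
            .map(|(i, _)| i);
        match next {
            Some(i) => out.push(queues[i].remove(0)),
            None => return out,
        }
    }
}
```

With queues A = [A1(hash 7), A2(hash 2)] and B = [B1(hash 3)], B1 executes first because its hash is lowest among the fronts, then A1 (A2 is not eligible until A1 is taken), then A2.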
#### `allowance` field
Allowance is the amount of tokens the AccessKey can spend from the account balance.
When some amount is spent, it's subtracted from both the allowance of the access key and from the account balance.
If the user wants unlimited allowance for a key, the allowance can be set to `None`.
NOTE: In the previous iteration of access keys, we used a balance instead of the allowance.
But it required summing up all access key balances to get the total account balance.
It also prevented sharing of the account balance between access keys.
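The charging rule described above can be sketched as a pure function (names are illustrative; `None` means unlimited allowance):

```rust
/// Charges `cost` against the account balance and, if limited, against the
/// access key's allowance. Returns the new (balance, allowance) on success.
fn charge(
    balance: u128,
    allowance: Option<u128>,
    cost: u128,
) -> Result<(u128, Option<u128>), &'static str> {
    if cost > balance {
        return Err("not enough balance");
    }
    let new_allowance = match allowance {
        None => None, // unlimited: never decremented
        Some(a) if cost > a => return Err("allowance exceeded"),
        Some(a) => Some(a - cost),
    };
    Ok((balance - cost, new_allowance))
}
```

Note that the allowance may exceed the account balance; the spend is bounded by whichever runs out first.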
#### Permissions
Almost all desired use-cases of access keys can be achieved by using the old permissions model.
It restricts access keys to only issue function calls with no attached tokens.
The function calls are restricted to the selected `receiver_id` and potentially restricted to a single `method_name`.
Anything non-trivial can be done by the contract that receives this call, e.g. through `proxy` function.
To remove public keys from the account, we added a new permission that grants full access to the account and is not limited by the allowance.
#### How is `storage_usage` computed?
If we use protobuf size to compute the `storage_usage` value, then protobuf might compress `u128` value and it would affect storage usage every time the `allowance` is modified.
The best option would be to change `storage_usage` only when the access key is created or removed,
so that changes to the `allowance` value don't change the `storage_usage` value.
For this to work, we might need to update the storage computation formula for the access key, e.g. the one that ignores the compressed size of the `allowance` and instead just relies on the 16 bytes of `u128` size.
Especially, because we currently don't use the proto size for the storage_usage for the account itself.
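The fixed-size rule can be sketched like this (the exact byte counts are assumptions for illustration; the key point is that the stored allowance value does not enter the computation):

```rust
/// Sketch of access-key storage accounting: the allowance always counts as
/// the full 16 bytes of a u128, regardless of its stored (compressed) size.
fn access_key_storage_usage(
    account_id_len: u64,
    public_key_len: u64,
    _allowance: Option<u128>, // deliberately ignored
) -> u64 {
    const NONCE_SIZE: u64 = 8; // u64 nonce
    const ALLOWANCE_SIZE: u64 = 16; // fixed u128 size
    account_id_len + public_key_len + NONCE_SIZE + ALLOWANCE_SIZE
}
```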
# Drawbacks
Currently the permission model is quite limited: either a function call restricted to a single method name (or any method name), or a full access key.
But we may add more permissions in the future in order to handle this issue.
# Rationale and alternatives
## Alternatives
#### More permissions directly on the access key
For example we can have multiple method names, multiple contract_id/method_name pairs or different transactions types (e.g. only allow staking transactions).
This can be achieved with a contract and a dedicated function that does this control. So to keep the runtime simple and secure we should avoid doing more checks there, since they are not accounted for in fees.
It can also be achieved if we refactor SignedTransaction to only use method_names instead of oneof body types.
#### Balance instead of allowance
Allowance enables sharing of a single account balance with multiple access keys. E.g. if you use 5 apps, you can give full allowance to each app instead of splitting balance into 5 parts.
It's also easier to work with than per-key balances.
Previously the AccessKey had a balance owner, so a dApp could sponsor users. But the same can be achieved by dApps creating access keys from their own account, effectively paying for all transactions.
#### Not exposing `nonce` on each AccessKey
If you use 2 applications at the same time, e.g. a mobile app and a desktop wallet, you might run into a `nonce` collision at the account level, which would cancel one of the transactions. It would happen more frequently with more apps being used.
As for the runtime handling multiple nonces per account, we need to think through and verify the security a little more.
#### `receiver_id` being an `Option<AccountId>`
In the previous design, the `receiver_id` was called `contract_id` and was an optional field. But `None` didn't remove the requirement for a receiver; instead, the access key was assumed to point to the owner's account.
We can potentially use `None` to mean an unlimited key, and require the user to explicitly specify their own account_id if they want to use a proxy function.
# Unresolved questions
#### Transactions ordering and nonce restrictions
This question is still unresolved: whether we should restrict the transaction nonce to be exactly the previous nonce plus 1 or leave it unrestricted.
It's not a blocking change, but it would make sense to do this change with other SignedTransaction security features such as minimum hash of a block header and block expiration.
#### Permissions
Not clear whether a single pair of `receiver_id`/`method_name` is enough to cover all use-cases at the moment.
E.g. if I want to use my account that already has some code on it, e.g. a vesting locked account, I can't deploy new code on it, so I can't use a `proxy` method.
# Future possibilities
For all use-cases to work we need to add all missing runtime methods that are currently only possible with `SignedTransaction`, e.g. staking, account creation, public/access key management and code deployment.
Next we might consider refactoring stake out of `Account` and also refactor `SignedTransaction` to support text based method names instead of enums.
We should also think about storing the same code (by hash) only once instead of storing it for each account. Especially if we adopt the `proxy` model.
================================================
FILE: neps/archive/0006-bindings.md
================================================
- Proposal Name: `wasm_bindings`
- Start Date: 2019-07-22
- NEP PR: [nearprotocol/neps#0000](https://github.com/near/NEPs/blob/master/nep-0000-template.md)
# Summary
Wasm bindings, a.k.a. imports, are functions that the runtime (a.k.a. host) exposes to the Wasm code (a.k.a. guest) running on the virtual machine.
These functions are arguably the most difficult thing to change in our entire ecosystem, after we have contracts running on our blockchain,
since once the bindings change the old smart contracts will not be able to run on the new nodes.
Additionally, we need a highly detailed specification of the bindings to be able to write unit tests for our contracts,
since currently we only allow integration tests. Writing unit tests is not possible today because we cannot have
a precise mock of the host in smart contract unit tests, e.g. we don't know how to mock the range iterator (what does it do
when given an empty or inverted range?).
In this proposal we give a detailed specification of the functions that we will be relying on for many months to come.
## Motivation
The current imports have the following issues:
- **Trie API.** The behavior of trie API is currently unspecified. Many things are unclear: what happens when we try
iterating over an empty range, what happens if we try accessing a non-existent key, etc. Having a trie API specification
is important for being able to create a testing framework for Rust and AssemblyScript smart contracts, since in unit
tests the contracts will be running on a mocked implementation of the host;
- **Promise API.** Recently we have discussed the changes to our promise mechanics. The schema does not need to change,
but the specification now needs to be clarified;
- `data_read` currently has mixed functionality -- it can be used both to read data from the trie and to read data from
the context. In the former it expects pointers to be passed as arguments, in the latter it expects an enum. It conflates
the two by casting the pointer type into the enum when needed;
- **Economics API.** The functions that provide access to balance and such might need to be added or removed since we
now consider splitting attached balance into two.
# Specification
## Registers
Registers allow a host function to return data into a buffer located inside the host, as opposed to a buffer
located on the guest. A special operation can then be used to copy the content of that buffer into the guest memory. Memory pointers
can then be used to point either to the memory on the guest or the memory on the host, see below. Benefits:
- We can have functions that return values that are not necessarily used, e.g. inserting key-value into a trie can
also return the preempted old value, which might not be necessarily used. Previously, if we returned something we
would have to pass the blob from host into the guest, even if it is not used;
- We can pass blobs of data between host functions without going through the guest, e.g. we can remove a value
from the storage and insert it under a different key;
- It makes API cleaner, because we don't need to pass `buffer_len` and `buffer_ptr` as arguments to other functions;
- It allows merging certain functions together, see `storage_iter_next`;
- This is consistent with other APIs that were created for high performance, e.g. allegedly Ewasm has implemented
SNARK-like computations in Wasm by exposing a bignum library through a stack-like interface to the guest. The guest
can then manipulate the stack of 256-bit numbers that is located on the host.
#### Host → host blob passing
The registers can be used to pass the blobs between host functions. For any function that
takes a pair of arguments `*_len: u64, *_ptr: u64` this pair is pointing to a region of memory either on the guest or
the host:
- If `*_len != u64::MAX` it points to the memory on the guest;
- If `*_len == u64::MAX` it points to the memory under the register `*_ptr` on the host.
For example:
`storage_write(u64::MAX, 0, u64::MAX, 1, 2)` -- insert key-value into storage, where key is read from register 0,
value is read from register 1, and result is saved to register 2.
Note, if some function takes `register_id` then it means this function can copy some data into this register. If
`register_id == u64::MAX` then the copying does not happen. This allows some micro-optimizations in the future.
Note, we allow multiple registers on the host, identified with `u64` number. The guest does not have to use them in
order and can for instance save some blob in register `5000` and another value in register `1`.
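The `*_len`/`*_ptr` convention above can be sketched as a single resolution function (a mock for illustration, not the real host code):

```rust
use std::collections::HashMap;

/// Resolves a (`*_len`, `*_ptr`) argument pair:
/// - `len != u64::MAX` -> a slice of guest memory at `ptr`;
/// - `len == u64::MAX` -> the contents of host register `ptr`.
/// Returns None if the slice is out of bounds or the register is unused.
fn resolve<'a>(
    len: u64,
    ptr: u64,
    guest_mem: &'a [u8],
    registers: &'a HashMap<u64, Vec<u8>>,
) -> Option<&'a [u8]> {
    if len != u64::MAX {
        guest_mem.get(ptr as usize..(ptr + len) as usize)
    } else {
        registers.get(&ptr).map(|v| v.as_slice())
    }
}
```

So `storage_write(u64::MAX, 0, u64::MAX, 1, 2)` resolves its key from register 0 and its value from register 1, never touching guest memory.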
#### Specification

##### read_register
```rust
read_register(register_id: u64, ptr: u64)
```
Writes the entire content from the register `register_id` into the memory of the guest starting with `ptr`.
###### Panics
- If the content extends outside the memory allocated to the guest. In Wasmer, it returns `MemoryAccessViolation` error message;
- If `register_id` is pointing to unused register returns `InvalidRegisterId` error message.
###### Undefined Behavior
- If the content of register extends outside the preallocated memory on the host side, or the pointer points to a
wrong location this function will overwrite memory that it is not supposed to overwrite causing an undefined behavior.
---
##### register_len
```rust
register_len(register_id: u64) -> u64
```
Returns the size of the blob stored in the given register.
###### Normal operation
- If register is used, then returns the size, which can potentially be zero;
- If register is not used, returns `u64::MAX`
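Since a stated goal of this NEP is enabling mocked hosts for unit tests, the register semantics above can be captured in a small mock (a sketch, not the real host implementation; errors are returned instead of terminating the VM):

```rust
use std::collections::HashMap;

/// Minimal host-side register mock for contract unit tests.
#[derive(Default)]
struct Registers {
    data: HashMap<u64, Vec<u8>>,
}

impl Registers {
    fn write(&mut self, register_id: u64, blob: &[u8]) {
        // Registers need not be used in order: ids 1 and 5000 are both fine.
        self.data.insert(register_id, blob.to_vec());
    }

    /// `register_len`: size of the blob, or u64::MAX if the register is unused.
    fn register_len(&self, register_id: u64) -> u64 {
        match self.data.get(&register_id) {
            Some(blob) => blob.len() as u64,
            None => u64::MAX,
        }
    }

    /// `read_register`: copies the register content into guest memory at `ptr`.
    fn read_register(
        &self,
        register_id: u64,
        guest: &mut [u8],
        ptr: usize,
    ) -> Result<(), &'static str> {
        let blob = self.data.get(&register_id).ok_or("InvalidRegisterId")?;
        let end = ptr.checked_add(blob.len()).ok_or("MemoryAccessViolation")?;
        if end > guest.len() {
            return Err("MemoryAccessViolation");
        }
        guest[ptr..end].copy_from_slice(blob);
        Ok(())
    }
}
```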
## Trie API
Here we provide a specification of trie API. After this NEP is merged, the cases where our current implementation does
not follow the specification are considered to be bugs that need to be fixed.
---
##### storage_write
```rust
storage_write(key_len: u64, key_ptr: u64, value_len: u64, value_ptr: u64, register_id: u64) -> u64
```
Writes key-value into storage.
###### Normal operation
- If key is not in use it inserts the key-value pair and does not modify the register;
- If key is in use it inserts the key-value and copies the old value into the `register_id`.
###### Returns
- If key was not used returns `0`;
- If key was used returns `1`.
###### Panics
- If `key_len + key_ptr` or `value_len + value_ptr` exceeds the memory container or points to an unused register it panics
with `MemoryAccessViolation`. (When we say that something panics with the given error we mean that we use Wasmer API to
create this error and terminate the execution of VM. For mocks of the host that would only cause a non-name panic.)
- If returning the preempted value into the registers exceeds the memory container it panics with `MemoryAccessViolation`;
###### Current bugs
- `External::storage_set` trait can return an error which is then converted to a generic non-descriptive
`StorageUpdateError`, [here](https://github.com/nearprotocol/nearcore/blob/942bd7bdbba5fb3403e5c2f1ee3c08963947d0c6/runtime/wasm/src/runtime.rs#L210)
however the actual implementation does not return error at all, [see](https://github.com/nearprotocol/nearcore/blob/4773873b3cd680936bf206cebd56bdc3701ddca9/runtime/runtime/src/ext.rs#L95);
- Does not return into the registers.
---
##### storage_read
```rust
storage_read(key_len: u64, key_ptr: u64, register_id: u64) -> u64
```
Reads the value stored under the given key.
###### Normal operation
- If key is used copies the content of the value into the `register_id`, even if the content is zero bytes;
- If key is not present then does not modify the register.
###### Returns
- If key was not present returns `0`;
- If key was present returns `1`.
###### Panics
- If `key_len + key_ptr` exceeds the memory container or points to an unused register it panics with `MemoryAccessViolation`;
- If returning the preempted value into the registers exceeds the memory container it panics with `MemoryAccessViolation`;
###### Current bugs
- This function currently does not exist.
---
##### storage_remove
```rust
storage_remove(key_len: u64, key_ptr: u64, register_id: u64) -> u64
```
Removes the value stored under the given key.
###### Normal operation
Very similar to `storage_read`:
- If key is used, removes the key-value from the trie and copies the content of the value into the `register_id`, even if the content is zero bytes.
- If key is not present then does not modify the register.
###### Returns
- If key was not present returns `0`;
- If key was present returns `1`.
###### Panics
- If `key_len + key_ptr` exceeds the memory container or points to an unused register it panics with `MemoryAccessViolation`;
- If the registers exceed the memory limit panics with `MemoryAccessViolation`;
- If returning the preempted value into the registers exceeds the memory container it panics with `MemoryAccessViolation`;
###### Current bugs
- Does not return into the registers.
---
##### storage_has_key
```rust
storage_has_key(key_len: u64, key_ptr: u64) -> u64
```
Checks if there is a key-value pair.
###### Normal operation
- If key is used returns `1`, even if the value is zero bytes;
- Otherwise returns `0`.
###### Panics
- If `key_len + key_ptr` exceeds the memory container it panics with `MemoryAccessViolation`;
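The return-value and register semantics of `storage_write`, `storage_read`, `storage_remove`, and `storage_has_key` specified above can be captured in a mock usable for unit tests (a sketch; the real host operates on a trie, not a `BTreeMap`):

```rust
use std::collections::{BTreeMap, HashMap};

/// Mocked trie following the return/register semantics of this NEP.
#[derive(Default)]
struct MockTrie {
    kv: BTreeMap<Vec<u8>, Vec<u8>>,
    registers: HashMap<u64, Vec<u8>>,
}

impl MockTrie {
    /// Returns 1 if the key was in use (old value copied into `register_id`),
    /// 0 otherwise (register untouched).
    fn storage_write(&mut self, key: &[u8], value: &[u8], register_id: u64) -> u64 {
        match self.kv.insert(key.to_vec(), value.to_vec()) {
            Some(old) => { self.registers.insert(register_id, old); 1 }
            None => 0,
        }
    }

    fn storage_read(&mut self, key: &[u8], register_id: u64) -> u64 {
        match self.kv.get(key) {
            Some(v) => { self.registers.insert(register_id, v.clone()); 1 }
            None => 0,
        }
    }

    fn storage_remove(&mut self, key: &[u8], register_id: u64) -> u64 {
        match self.kv.remove(key) {
            Some(old) => { self.registers.insert(register_id, old); 1 }
            None => 0,
        }
    }

    fn storage_has_key(&self, key: &[u8]) -> u64 {
        if self.kv.contains_key(key) { 1 } else { 0 }
    }
}
```

Note how zero-byte values are still "in use": `storage_has_key` returns 1 for them, which is exactly why the read functions signal presence via the return value rather than the blob length.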
---
#### storage_iter_prefix
```rust
storage_iter_prefix(prefix_len: u64, prefix_ptr: u64) -> u64
```
Creates an iterator object inside the host.
Returns the identifier that uniquely differentiates the given iterator from other iterators that can be simultaneously
created.
###### Normal operation
- It iterates over the keys that have the provided prefix. The order of iteration is defined by the lexicographic
order of the bytes in the keys. If there are no keys, it creates an empty iterator, see below on empty iterators;
###### Panics
- If `prefix_len + prefix_ptr` exceeds the memory container it panics with `MemoryAccessViolation`;
---
#### storage_iter_range
```rust
storage_iter_range(start_len: u64, start_ptr: u64, end_len: u64, end_ptr: u64) -> u64
```
Similarly to `storage_iter_prefix`
creates an iterator object inside the host.
###### Normal operation
Unless lexicographically `start < end`, it creates an empty iterator.
Iterates over all key-values such that keys are between `start` and `end`, where `start` is inclusive and `end` is exclusive.
Note, this definition allows for `start` or `end` keys to not actually exist on the given trie.
###### Panics
- If `start_len + start_ptr` or `end_len + end_ptr` exceeds the memory container or points to an unused register it panics with `MemoryAccessViolation`;
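These range semantics (inclusive `start`, exclusive `end`, empty unless `start < end` lexicographically, bounds that need not exist as keys) can be mimicked with `BTreeMap::range` in a test mock:

```rust
use std::collections::BTreeMap;

/// Keys in [start, end), lexicographic order; empty unless start < end.
fn iter_range(kv: &BTreeMap<Vec<u8>, Vec<u8>>, start: &[u8], end: &[u8]) -> Vec<Vec<u8>> {
    if start >= end {
        // Inverted or equal bounds produce an empty iterator.
        return Vec::new();
    }
    kv.range(start.to_vec()..end.to_vec())
        .map(|(k, _)| k.clone())
        .collect()
}
```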
---
##### storage_iter_next
```rust
storage_iter_next(iterator_id: u64, key_register_id: u64, value_register_id: u64) -> u64
```
Advances iterator and saves the next key and value in the register.
###### Normal operation
- If iterator is not empty (after calling next it points to a key-value), copies the key into `key_register_id` and value into `value_register_id` and returns `1`;
- If iterator is empty returns `0`.
This allows us to iterate over the keys that have zero bytes stored in values.
###### Panics
- If `key_register_id == value_register_id` panics with `MemoryAccessViolation`;
- If the registers exceed the memory limit panics with `MemoryAccessViolation`;
- If `iterator_id` does not correspond to an existing iterator panics with `InvalidIteratorId`
- If between the creation of the iterator and calling `storage_iter_next` any modification to storage was done through
`storage_write` or `storage_remove` the iterator is invalidated and the error message is `IteratorWasInvalidated`.
###### Current bugs
- Not implemented, currently we have `storage_iter_next` and `data_read` + `DATA_TYPE_STORAGE_ITER` that together fulfill
the purpose, but have unspecified behavior.
## Context API
Context API mostly provides read-only functions that access current information about the blockchain and the accounts
(the account that originally initiated the chain of cross-contract calls, the immediate contract that called the current one, the account of the current contract),
as well as other important information like storage usage.
Many of the below functions are currently implemented through `data_read`, which allows reading generic context data.
However, there is no reason to have `data_read` instead of specific functions:
- `data_read` does not solve forward compatibility. If later we want to add another context function, e.g. `executed_operations`
we can just declare it as a new function, instead of encoding it as `DATA_TYPE_EXECUTED_OPERATIONS = 42` which is passed
as the first argument to `data_read`;
- `data_read` does not help with renaming. If later we decide to rename `signer_account_id` to `originator_id` then one could
argue that contracts that rely on `data_read` would not break, while contracts relying on `signer_account_id()` would. However
the name change often means the change of the semantics, which means the contracts using this function are no longer safe to
execute anyway.
However, there is one reason not to have `data_read` -- specific functions make the API more human-readable, which is the general direction that Wasm APIs like WASI are moving towards.
---
##### current_account_id
```rust
current_account_id(register_id: u64)
```
Saves the account id of the current contract that we execute into the register.
###### Panics
- If the registers exceed the memory limit panics with `MemoryAccessViolation`;
---
##### signer_account_id
```rust
signer_account_id(register_id: u64)
```
All contract calls are a result of some transaction that was signed by some account using
some access key and submitted into a memory pool (either through the wallet using RPC or by a node itself). This function returns the id of that account.
###### Normal operation
- Saves the bytes of the signer account id into the register.
###### Panics
- If the registers exceed the memory limit panics with `MemoryAccessViolation`;
###### Current bugs
- Currently we conflate `originator_id` and `sender_id` in our code base.
---
##### signer_account_pk
```rust
signer_account_pk(register_id: u64)
```
Saves the public key of the access key that was used by the signer into the register.
In rare situations a smart contract might want to know the exact access key that was used to send the original transaction,
e.g. to increase the allowance or manipulate the public key.
###### Panics
- If the registers exceed the memory limit panics with `MemoryAccessViolation`;
###### Current bugs
- Not implemented.
---
#### predecessor_account_id
```rust
predecessor_account_id(register_id: u64)
```
All contract calls are a result of a receipt; this receipt might be created by a transaction
that does a function invocation on the contract, or by another contract as a result of a cross-contract call. This function returns the account id of the immediate predecessor that created the receipt.
###### Normal operation
- Saves the bytes of the predecessor account id into the register.
###### Panics
- If the registers exceed the memory limit panics with `MemoryAccessViolation`;
###### Current bugs
- Not implemented.
---
#### input
```rust
input(register_id: u64)
```
Reads input to the contract call into the register. Input is expected to be in JSON-format.
###### Normal operation
- If input is provided saves the bytes (potentially zero) of input into register.
- If input is not provided does not modify the register.
###### Returns
- If input was not provided returns `0`;
- If input was provided returns `1`; If input is zero bytes returns `1`, too.
###### Panics
- If the registers exceed the memory limit panics with `MemoryAccessViolation`;
###### Current bugs
- Implemented as part of `data_read`. However, there is no reason to have one unified function like `data_read` that can
be used to read all context data.
---
#### block_index
```rust
block_index() -> u64
```
Returns the current block index.
---
#### storage_usage
```rust
storage_usage() -> u64
```
Returns the number of bytes used by the contract if it was saved to the trie as of the
invocation. This includes:
- The data written with `storage_*` functions during current and previous execution;
- The bytes needed to store the account protobuf and the access keys of the given account.
## Economics API
Accounts own a certain balance, and each transaction and each receipt has a certain amount of balance and prepaid gas
attached to it.
During the contract execution, the contract has access to the following `u128` values:
- `account_balance` -- the balance attached to the given account. This includes the `attached_deposit` that was attached
to the transaction;
- `attached_deposit` -- the balance that was attached to the call that will be immediately deposited before
the contract execution starts;
- `prepaid_gas` -- the gas attached to the call that can be used to pay for the computation;
- `used_gas` -- the gas that was already burnt during the contract execution and attached to promises (cannot exceed `prepaid_gas`);
If contract execution fails, `prepaid_gas - used_gas` is refunded back to `signer_account_id` and `attached_deposit`
is refunded back to `predecessor_account_id`.
The following spec is the same for all functions:
```rust
account_balance(balance_ptr: u64)
attached_deposit(balance_ptr: u64)
```
-- writes the value into the `u128` variable pointed to by `balance_ptr`.
###### Panics
- If `balance_ptr + 16` points outside the memory of the guest with `MemoryAccessViolation`;
###### Current bugs
- Use a different name;
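Writing a `u128` through `balance_ptr` can be sketched as below. This is a mock for unit tests; little-endian byte order is an assumption here, not something this NEP specifies.

```rust
/// Sketch: writes a u128 balance into guest memory at `balance_ptr`.
/// Byte order (little-endian) is assumed for illustration.
fn write_balance(
    guest: &mut [u8],
    balance_ptr: usize,
    value: u128,
) -> Result<(), &'static str> {
    let end = balance_ptr.checked_add(16).ok_or("MemoryAccessViolation")?;
    if end > guest.len() {
        // balance_ptr + 16 points outside the guest memory.
        return Err("MemoryAccessViolation");
    }
    guest[balance_ptr..end].copy_from_slice(&value.to_le_bytes());
    Ok(())
}
```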
---
```rust
prepaid_gas() -> u64
used_gas() -> u64
```
## Math
#### random_seed
```rust
random_seed(register_id: u64)
```
Returns a random seed that can be used for pseudo-random number generation in a deterministic way.
###### Panics
- If the size of the register exceeds the set limit, panics with `MemoryAccessViolation`;
---
#### sha256
```rust
sha256(value_len: u64, value_ptr: u64, register_id: u64)
```
Hashes the given sequence of bytes using sha256 and saves the result into `register_id`.
###### Panics
- If `value_len + value_ptr` points outside the memory or the registers use more memory than the limit with `MemoryAccessViolation`.
###### Current bugs
- Current name `hash` is not specific to what hash is being used.
- We have `hash32` that largely duplicates the mechanics of `hash` because it returns the first 4 bytes only.
---
#### check_ethash
```rust
check_ethash(block_number_ptr: u64,
header_hash_ptr: u64,
nonce: u64,
mix_hash_ptr: u64,
difficulty_ptr: u64) -> u64
```
-- verifies hash of the header that we created using [Ethash](https://en.wikipedia.org/wiki/Ethash). Parameters are:
- `block_number` -- `u256`/`[u64; 4]`, number of the block on Ethereum blockchain. We use the pointer to the slice of 32 bytes on guest memory;
- `header_hash` -- `h256`/`[u8; 32]`, hash of the header on Ethereum blockchain. We use the pointer to the slice of 32 bytes on guest memory;
- `nonce` -- `u64`/`h64`/`[u8; 8]`, nonce that was used to find the correct hash, passed as `u64` without pointers;
- `mix_hash` -- `h256`/`[u8; 32]`, special hash that avoid griefing attack. We use the pointer to the slice of 32 bytes on guest memory;
- `difficulty` -- `u256`/`[u64; 4]`, the difficulty of mining the block. We use the pointer to the slice of 32 bytes on guest memory;
###### Returns
- `1` if the Ethash is valid;
- `0` otherwise.
###### Panics
- If `block_number_ptr + 32` or `header_hash_ptr + 32` or `mix_hash_ptr + 32` or `difficulty_ptr + 32` point outside the memory or registers use more memory than the limit with `MemoryAccessViolation`.
###### Current bugs
- `block_number` and `difficulty` are currently exposed as `u64` which are casted to `u256` which breaks Ethereum compatibility;
- Currently, we also pass the length together with `header_hash_ptr` and `mix_hash_ptr` which is not necessary since
we know their length.
## Promises API
```rust
promise_create(account_id_len: u64,
account_id_ptr: u64,
method_name_len: u64,
method_name_ptr: u64,
arguments_len: u64,
arguments_ptr: u64,
amount_ptr: u64,
gas: u64) -> u64
```
Creates a promise that will execute a method on the given account with the given arguments and attaches the given amount.
`amount_ptr` points to a slice of 16 bytes representing a `u128`.
###### Panics
- If `account_id_len + account_id_ptr` or `method_name_len + method_name_ptr` or `arguments_len + arguments_ptr`
or `amount_ptr + 16` points outside the memory of the guest or host, with `MemoryAccessViolation`.
###### Returns
- Index of the new promise that uniquely identifies it within the current execution of the method.
---
#### promise_then
```rust
promise_then(promise_idx: u64,
account_id_len: u64,
account_id_ptr: u64,
method_name_len: u64,
method_name_ptr: u64,
arguments_len: u64,
arguments_ptr: u64,
amount_ptr: u64,
gas: u64) -> u64
```
Attaches the callback that is executed after promise pointed by `promise_idx` is complete.
###### Panics
- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.
- If `account_id_len + account_id_ptr` or `method_name_len + method_name_ptr` or `arguments_len + arguments_ptr`
or `amount_ptr + 16` points outside the memory of the guest or host, with `MemoryAccessViolation`.
###### Returns
- Index of the new promise that uniquely identifies it within the current execution of the method.
---
#### promise_and
```rust
promise_and(promise_idx_ptr: u64, promise_idx_count: u64) -> u64
```
Creates a new promise which completes when all promises passed as arguments complete. Cannot be used with registers.
`promise_idx_ptr` points to an array of `u64` elements, with `promise_idx_count` denoting the number of elements.
The array contains indices of the promises that need to be waited on jointly.
###### Panics
- If `promise_idx_ptr + 8 * promise_idx_count` extends outside the guest memory with `MemoryAccessViolation`;
- If any of the promises in the array do not correspond to existing promises panics with `InvalidPromiseIndex`.
###### Returns
- Index of the new promise that uniquely identifies it within the current execution of the method.
---
#### promise_results_count
```rust
promise_results_count() -> u64
```
If the current function is invoked by a callback we can access the execution results of the promises that
caused the callback. This function returns the number of complete and incomplete callbacks.
Note, we are only going to have incomplete callbacks once we have `promise_or` combinator.
###### Normal execution
- If there is only one callback `promise_results_count()` returns `1`;
- If there are multiple callbacks (e.g. created through `promise_and`) `promise_results_count()` returns their number.
- If the function was called not through the callback `promise_results_count()` returns `0`.
---
#### promise_result
```rust
promise_result(result_idx: u64, register_id: u64) -> u64
```
If the current function is invoked by a callback we can access the execution results of the promises that
caused the callback. This function returns the result in blob format and places it into the register.
###### Normal execution
- If promise result is complete and successful copies its blob into the register;
- If promise result is complete and failed or incomplete keeps register unused;
###### Returns
- If promise result is not complete returns `0`;
- If promise result is complete and successful returns `1`;
- If promise result is complete and failed returns `2`.
###### Panics
- If `result_idx` does not correspond to an existing result panics with `InvalidResultIndex`.
- If copying the blob exhausts the memory limit it panics with `MemoryAccessViolation`.
###### Current bugs
- We currently have two separate functions to check for result completion and copy it.
---
#### promise_return
```rust
promise_return(promise_idx: u64)
```
When promise `promise_idx` finishes executing its result is considered to be the result of the current function.
###### Panics
- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.
###### Current bugs
- The current name `return_promise` is inconsistent with the naming convention of Promise API.
## Miscellaneous API
#### value_return
```rust
value_return(value_len: u64, value_ptr: u64)
```
Sets the blob of data as the return value of the contract.
##### Panics
- If `value_len + value_ptr` exceeds the memory container or points to an unused register it panics with `MemoryAccessViolation`;
---
```rust
panic()
```
Terminates the execution of the program with panic `GuestPanic`.
---
#### log_utf8
```rust
log_utf8(len: u64, ptr: u64)
```
Logs the UTF-8 encoded string. See https://stackoverflow.com/a/5923961, which explains
that null termination is not defined by the encoding.
###### Normal behavior
If `len == u64::MAX` then treats the string as null-terminated with character `'\0'`;
###### Panics
- If string extends outside the memory of the guest with `MemoryAccessViolation`;
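The `len == u64::MAX` sentinel convention can be sketched with a hypothetical helper that resolves the logged byte range in guest memory (the function name and panic handling are illustrative, not the actual host implementation):

```rust
/// Resolves the byte range to log: either an explicit length, or a
/// null-terminated scan when `len == u64::MAX`, per the spec above.
fn resolve_log_slice(memory: &[u8], len: u64, ptr: u64) -> &[u8] {
    let start = ptr as usize;
    if len == u64::MAX {
        // Null-terminated mode: scan for '\0' starting at `ptr`.
        let end = memory[start..]
            .iter()
            .position(|&b| b == 0)
            .map(|off| start + off)
            .expect("MemoryAccessViolation");
        &memory[start..end]
    } else {
        // Explicit-length mode: the whole range must fit in guest memory.
        let end = start.checked_add(len as usize).expect("MemoryAccessViolation");
        assert!(end <= memory.len(), "MemoryAccessViolation");
        &memory[start..end]
    }
}

fn main() {
    let memory = b"hello\0world";
    assert_eq!(resolve_log_slice(memory, u64::MAX, 0), b"hello"); // stops at '\0'
    assert_eq!(resolve_log_slice(memory, 5, 6), b"world");        // explicit length
}
```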
---
#### log_utf16
```rust
log_utf16(len: u64, ptr: u64)
```
Logs the UTF-16 encoded string. `len` is the number of bytes in the string.
###### Normal behavior
If `len == u64::MAX` then treats the string as null-terminated with two-byte sequence of `0x00 0x00`.
###### Panics
- If string extends outside the memory of the guest with `MemoryAccessViolation`;
---
#### abort
```rust
abort(msg_ptr: u32, filename_ptr: u32, line: u32, col: u32)
```
Special import kept for compatibility with AssemblyScript contracts. Not called by smart contracts directly, but instead
called by the code generated by AssemblyScript.
# Future Improvements
In the future, some of the registers could live on the guest.
For instance, a guest can tell the host that it has some pre-allocated memory that it wants to be used for a register,
e.g.
```rust
set_guest_register(register_id: u64, register_ptr: u64, max_register_size: u64)
```
will assign `register_id` to a span of memory on the guest. The host would then also know the size of that buffer on the guest
and can panic if an attempted copy exceeds the guest register size.
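A minimal sketch of the host-side bookkeeping this would require, assuming hypothetical `GuestRegister` and `Host` types (none of these names are from an actual implementation):

```rust
use std::collections::HashMap;

// Hypothetical host-side record of a guest-backed register.
struct GuestRegister {
    ptr: u64,
    max_size: u64,
}

#[derive(Default)]
struct Host {
    guest_registers: HashMap<u64, GuestRegister>,
}

impl Host {
    /// The proposed `set_guest_register`: remembers where the guest buffer
    /// lives and how large it is.
    fn set_guest_register(&mut self, register_id: u64, register_ptr: u64, max_register_size: u64) {
        self.guest_registers.insert(
            register_id,
            GuestRegister { ptr: register_ptr, max_size: max_register_size },
        );
    }

    /// Before copying `len` bytes into a guest-backed register, the host
    /// checks the declared capacity and panics on overflow.
    fn check_copy(&self, register_id: u64, len: u64) -> u64 {
        let reg = self.guest_registers.get(&register_id).expect("unknown register");
        assert!(len <= reg.max_size, "copy exceeds guest register size");
        reg.ptr
    }
}

fn main() {
    let mut host = Host::default();
    host.set_guest_register(1, 0x1000, 64);
    assert_eq!(host.check_copy(1, 64), 0x1000);
}
```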
================================================
FILE: neps/archive/0008-transaction-refactoring.md
================================================
- Proposal Name: Batched Transactions
- Start Date: 2019-07-22
- NEP PR: [nearprotocol/neps#0008](https://github.com/nearprotocol/neps/pull/8)
# Summary
Refactor signed transactions and receipts to support batched atomic transactions and data dependency.
# Motivation
It simplifies account creation by supporting batching of multiple transactions together instead of
creating more complicated transaction types.
For example, we want to create a new account with some account balance and one or many access keys, deploy a contract code on it and run an initialization method to restrict access keys permissions for a `proxy` function.
To be able to do this now, we need to have a `CreateAccount` transaction with all the parameters of a new account.
Then we need to handle it in one operation in a runtime code, which might have duplicated code for executing some WASM code with the rollback conditions.
An alternative to this is to execute multiple simple transactions in a batch within the same block.
They have to be executed in a row without any commits to the state until the entire batch is completed.
We propose to support this type of transaction batching to simplify the runtime.
Currently callbacks are handled differently from async calls, this NEP simplifies data dependencies and callbacks by unifying them.
# Guide-level explanation
### New transaction and receipts
Previously, in the runtime, to produce a block we first executed new signed transactions and then executed received receipts. This resulted in duplicated code that might otherwise be shared across similar actions, e.g. function calls for async calls, callbacks and self-calls.
It also increased the complexity of the runtime implementation.
This NEP proposes changing this by first converting all signed transactions into receipts, and then either executing them immediately before the received receipts, or putting them into the list of new receipts to be routed.
To achieve this, NEP introduces a new message `Action` that represents one of atomic actions, e.g. a function call.
`TransactionBody` is now called just `Transaction`. It contains the list of actions that need to be performed in a single batch and the information shared across these actions.
`Transaction` contains the following fields
- `signer_id` is an account ID of the transaction signer.
- `public_key` is a public key used to identify the access key and to sign the transaction.
- `nonce` is used to deduplicate and order transactions (per access key).
- `receiver_id` is the account ID of the destination of this transaction. It's where the generated receipt will be routed for execution.
- `action` is the list of actions to perform.
An `Action` can be of the following:
- `CreateAccount` creates a new account with the `receiver_id` account ID. The action fails if the account already exists. `CreateAccount` also grants permission for all subsequent batched actions on the newly created account, for example, permission to deploy code on the new account. Permission details are described in the reference section below.
- `DeployContract` deploys the given binary wasm code on the account. Either the `receiver_id` equals the `signer_id`, or the batch of actions has started with `CreateAccount`, which granted that permission.
- `FunctionCall` executes a function call on the last deployed contract. The action fails if the account or the code doesn't exist. E.g. if the previous action was `DeployContract`, then the code to execute will be the newly deployed contract. `FunctionCall` has `method_name` and `args` to identify the method and the arguments to call it with. It also has `gas` and `deposit`. `gas` is a prepaid amount of gas for this call (the price of gas is determined when a signed transaction is converted to a receipt). `deposit` is the attached deposit balance of NEAR tokens that the contract can spend, e.g. 10 tokens to pay for a crypto-corgi.
- `Transfer` transfers the given `deposit` balance of tokens from the predecessor to the receiver.
- `Stake` stakes the new total `stake` balance with the given `public_key`. The difference in stake is taken from the account's balance (if the new stake is greater than the current one) at the moment when this action is executed, so it's not prepaid. There is no particular reason to stake on behalf of a newly created account, so we may disallow it.
- `DeleteKey` deletes an old `AccessKey` identified by the given `public_key` from the account. Fails if an access key with the given public key doesn't exist. All subsequent batched actions will continue to execute, even if the public key that authorized the transaction was removed.
- `AddKey` adds a new given `AccessKey` identified by a new given `public_key` to the account. Fails if an access key with the given public key already exists. We removed `SwapKeyTransaction`, because it can be replaced with 2 batched actions - delete an old key and add a new key.
- `DeleteAccount` deletes `receiver_id` account if the account doesn't have enough balance to pay the rent, or the `receiver_id` is the `predecessor_id`. Sends the remaining balance to the `beneficiary_id` account.
The new `Receipt` contains the shared information and either one of the receipt actions or a list of actions:
- `predecessor_id` the account ID of the immediate previous sender (predecessor) of this receipt. It can be different from the `signer_id` in some cases, e.g. for promises.
- `receiver_id` the account ID of the current account, on which we need to perform action(s).
- `receipt_id` is a unique ID of this receipt (previously was called `nonce`). It's generated from either the signed transaction or the parent receipt.
- `receipt` can be one of 2 types:
- `ActionReceipt` is used to perform some actions on the receiver.
- `DataReceipt` is used when some data needs to be passed from the predecessor to the receiver, e.g. an execution result.
To support promises and callbacks we introduce a concept of cross-shard data sharing with dependencies. Each `ActionReceipt` may have a list of input `data_id`. The execution will not start until all required inputs are received. Once the execution completes and if there is `output_data_id`, it produces a `DataReceipt` that will be routed to the `output_receiver_id`.
`ActionReceipt` contains the following fields:
- `signer_id` the account ID of the signer, who signed the transaction.
- `signer_public_key` the public key that the signer used to sign the original signed transaction.
- `output_data_id` is the data ID to create DataReceipt. If it's absent, then the `DataReceipt` is not created.
- `output_receiver_id` is the account ID of the data receiver. It's needed to route `DataReceipt`. It's absent if the DataReceipt is not needed.
- `input_data_id` is the list of data IDs that are required for the execution of the `ActionReceipt`. If some of the data IDs are not available when the receipt is received, then the `ActionReceipt` is postponed until all data is available. Once the last `DataReceipt` for the required input data arrives, the action receipt execution is triggered.
- `action` is the list of actions to execute. The execution doesn't need to validate permissions of the actions, but it needs to fail in some cases, e.g. when the receiver's account doesn't exist and the action acts on the account, or when the action is a function call and the code is not present.
`DataReceipt` contains the following fields:
- `data_id` is the data ID to be used as an input.
- `success` is true if the `ActionReceipt` that generated this `DataReceipt` finished the execution without any failures.
- `data` is the binary data that is returned from the last action of the `ActionReceipt`. Right now, it's empty for all actions except for function calls. For function calls the data is the result of the code execution. But in the future we might introduce non-contract state reads.
Data should be stored at the same shard as the receiver's account, even if the receiver's account doesn't exist.
### Refunds
In case an `ActionReceipt` execution fails the runtime can generate a refund.
We've removed `refund_account_id` from receipts, because the account IDs for refunds can be determined from the `signer_id` and `predecessor_id` in the `ActionReceipt`.
All unused gas and action fees (also measured in gas) are always refunded back to the `signer_id`, because fees are always prepaid by the signer. The gas is converted into tokens using the `gas_price`.
The deposit balances from `FunctionCall` and `Transfer` are refunded back to the `predecessor_id`, because they were deducted from predecessor's account balance.
It's also important to note that the predecessor account ID for refund receipts is `system`.
This is done to prevent refund loops, e.g. when the account to receive the refund was deleted before the refund arrives. In that case the refund is burned.
If the function call action with the attached `deposit` fails in the middle of the execution, then 2 refund receipts can be generated, one for the unused gas and one for the deposits.
The runtime should combine them into one receipt if `signer_id` and `predecessor_id` are the same.
Example of a receipt for a refund of `42000` atto-tokens to `vasya.near`:
```json
{
"predecessor_id": "system",
"receiver_id": "vasya.near",
"receipt_id": ...,
"action": {
"signer_id": "vasya.near",
"signer_public_key": ...,
"gas_price": "3",
"output_data_id": null,
"output_receiver_id": null,
"input_data_id": [],
"action": [
{
"transfer": {
"deposit": "42000"
}
}
]
}
}
```
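The refund rules above can be sketched as follows. The `Refund` struct and `generate_refunds` function are illustrative stand-ins, not the actual runtime types:

```rust
// Illustrative sketch of the refund rules: unused gas goes back to the
// signer (converted to tokens at `gas_price`), deposits go back to the
// predecessor, and the two are combined when they target the same account.
struct Refund {
    receiver_id: String,
    amount: u128,
}

fn generate_refunds(
    signer_id: &str,
    predecessor_id: &str,
    unused_gas: u64,
    gas_price: u128,
    unspent_deposit: u128,
) -> Vec<Refund> {
    let gas_refund = unused_gas as u128 * gas_price; // gas is refunded in tokens
    if signer_id == predecessor_id {
        // Combine both refunds into a single receipt.
        vec![Refund { receiver_id: signer_id.to_string(), amount: gas_refund + unspent_deposit }]
    } else {
        vec![
            Refund { receiver_id: signer_id.to_string(), amount: gas_refund },
            Refund { receiver_id: predecessor_id.to_string(), amount: unspent_deposit },
        ]
    }
}

fn main() {
    // Same signer and predecessor: one combined refund receipt.
    let combined = generate_refunds("vasya.near", "vasya.near", 14000, 3, 42000);
    assert_eq!(combined.len(), 1);
    assert_eq!(combined[0].amount, 14000 * 3 + 42000);

    // Different accounts: gas goes to the signer, deposit to the predecessor.
    let split = generate_refunds("vasya.near", "a.contract.near", 100, 3, 500);
    assert_eq!(split.len(), 2);
}
```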
### Examples
#### Account Creation
To create a new account we can create a new `Transaction`:
```json
{
"signer_id": "vasya.near",
"public_key": ...,
"nonce": 42,
"receiver_id": "vitalik.vasya.near",
"action": [
{
"create_account": {
}
},
{
"transfer": {
"deposit": "19231293123"
}
},
{
"deploy_contract": {
"code": ...
}
},
{
"add_key": {
"public_key": ...,
"access_key": ...
}
},
{
"function_call": {
"method_name": "init",
"args": ...,
"gas": 20000,
"deposit": "0"
}
}
]
}
```
This transaction is sent from `vasya.near` signed with a `public_key`.
The receiver is `vitalik.vasya.near`, which is a new account id.
The transaction contains a batch of actions.
First we create the account, then we transfer a few tokens to the newly created account, then we deploy code on the new account, add a new access key with some given public key, and as a final action initializing the deployed code by calling a method `init` with some arguments.
For this transaction to work, `vasya.near` needs to have enough balance on its account to cover gas and deposits for all actions at once.
Every action has an associated gas fee, while `transfer` and `function_call` actions need additional balance for deposits and gas (for executions and promises).
Once we validated and subtracted the total amount from `vasya.near` account, this transaction is transformed into a `Receipt`:
```json
{
"predecessor_id": "vasya.near",
"receiver_id": "vitalik.vasya.near",
"receipt_id": ...,
"action": {
"signer_id": "vasya.near",
"signer_public_key": ...,
"gas_price": "3",
"output_data_id": null,
"output_receiver_id": null,
"input_data_id": [],
"action": [...]
}
}
```
In this example the gas price at the moment when the transaction was processed was 3 per gas.
This receipt will be sent to `vitalik.vasya.near`'s shard to be executed.
In case the `vitalik.vasya.near` account already exists, the execution will fail and some amount of the prepaid fees will be refunded back to `vasya.near`.
If the account creation receipt succeeds, it won't create a `DataReceipt`, because `output_data_id` is `null`.
But it will generate a refund receipt for the unused portion of prepaid function call `gas`.
#### Deploy code example
Deploying code with initialization is pretty similar to creating an account, except you can't deploy code on someone else's account. So the transaction's `receiver_id` has to be the same as the `signer_id`.
#### Simple promise with callback
Let's say the transaction contained a single action which is a function call to `a.contract.near`.
It created a new promise `b.contract.near` and added a callback to itself.
Once the execution completes it will result in the following new receipts:
The receipt for the new promise towards `b.contract.near`
```json
{
"predecessor_id": "a.contract.near",
"receiver_id": "b.contract.near",
"receipt_id": ...,
"action": {
"signer_id": "vasya.near",
"signer_public_key": ...,
"gas_price": "3",
"output_data_id": "data_123_1",
"output_receiver_id": "a.contract.near",
"input_data_id": [],
"action": [
{
"function_call": {
"method_name": "sum",
"args": ...,
"gas": 10000,
"deposit": "0"
}
}
]
}
}
```
Interesting details:
- `signer_id` is still `vasya.near`, because it's the account that initialized the transaction, but not the creator of the promise.
- `output_data_id` contains some unique data ID. In this example we used `data_123_1`.
- `output_receiver_id` indicates where to route the result of the execution.
The other receipt is for the callback which will stay in the same shard.
```json
{
"predecessor_id": "a.contract.near",
"receiver_id": "a.contract.near",
"receipt_id": ...,
"action": {
"signer_id": "vasya.near",
"signer_public_key": ...,
"gas_price": "3",
"output_data_id": null,
"output_receiver_id": null,
"input_data_id": ["data_123_1"],
"action": [
{
"function_call": {
"method_name": "process_sum",
"args": ...,
"gas": 10000,
"deposit": "0"
}
}
]
}
}
```
It looks very similar to the new promise, but instead of `output_data_id` it has an `input_data_id`.
This action receipt will be postponed until the other receipt is routed and executed and a data receipt is generated.
Once the new promise receipt is successfully executed, it will generate the following receipt:
```json
{
"predecessor_id": "b.contract.near",
"receiver_id": "a.contract.near",
"receipt_id": ...,
"data": {
"data_id": "data_123_1",
"success": true,
"data": ...
}
}
```
It contains the data ID `data_123_1` and is routed to `a.contract.near`.
Let's say the callback receipt was processed and postponed; then this data receipt will trigger execution of the callback receipt, because all input data is now available.
#### Remote callback with 2 joined promises, with a callback on itself
Let's say `a.contract.near` wants to call `b.contract.near` and `c.contract.near`, and send the result to `d.contract.near` for joining before processing the result on itself.
It will generate 2 receipts for new promises, 1 receipt for the remote callback and 1 receipt for the callback on itself.
Part of the receipt (#1) for the promise towards `b.contract.near`:
```
...
"output_data_id": "data_123_b",
"output_receiver_id": "d.contract.near",
"input_data_id": [],
...
```
Part of the receipt (#2) for the promise towards `c.contract.near`:
```
...
"output_data_id": "data_321_c",
"output_receiver_id": "d.contract.near",
"input_data_id": [],
...
```
The receipt (#3) for the remote callback that has to be executed on `d.contract.near` with data from `b.contract.near` and `c.contract.near`:
```json
{
"predecessor_id": "a.contract.near",
"receiver_id": "d.contract.near",
"receipt_id": ...,
"action": {
"signer_id": "vasya.near",
"signer_public_key": ...,
"gas_price": "3",
"output_data_id": "bla_543",
"output_receiver_id": "a.contract.near",
"input_data_id": ["data_123_b", "data_321_c"],
"action": [
{
"function_call": {
"method_name": "join_data",
"args": ...,
"gas": 10000,
"deposit": "0"
}
}
]
}
}
```
It also has the `output_data_id` and `output_receiver_id` that is specified back towards `a.contract.near`.
And finally the part of the receipt (#4) for the local callback on `a.contract.near`:
```
...
"output_data_id": null,
"output_receiver_id": null,
"input_data_id": ["bla_543"],
...
```
For all of this to execute, the first 3 receipts need to go to the corresponding shards and be processed.
If for some reason the data arrives before the corresponding action receipt, then this data will be held there until the action receipt arrives.
An example of this is if receipt #3 is delayed for some reason, while receipt #2 was processed and generated a data receipt towards `d.contract.near` which arrived before #3.
Also, if any of the function calls fail, the receipt is still going to generate a new `DataReceipt` because it has `output_data_id` and `output_receiver_id`. Here is an example of a `DataReceipt` for a failed execution:
```json
{
"predecessor_id": "b.contract.near",
"receiver_id": "d.contract.near",
"receipt_id": ...,
"data": {
"data_id": "data_123_b",
"success": false,
"data": null
}
}
```
#### Swap Key example
Since there is no swap-key action, we can just batch 2 actions together: one for adding a new key and one for deleting the old key. The actual order is not important if the public keys are different, but if the public key is the same, then you need to first delete the old key and only after that add the new key.
# Reference-level explanation
### Updated protobufs
##### public_key.proto
```proto
syntax = "proto3";
message PublicKey {
enum KeyType {
ED25519 = 0;
}
KeyType key_type = 1;
bytes data = 2;
}
```
##### signed_transaction.proto
```proto
syntax = "proto3";
import "access_key.proto";
import "public_key.proto";
import "uint128.proto";
message Action {
message CreateAccount {
// empty
}
message DeployContract {
// Binary wasm code
bytes code = 1;
}
message FunctionCall {
string method_name = 1;
bytes args = 2;
uint64 gas = 3;
Uint128 deposit = 4;
}
message Transfer {
Uint128 deposit = 1;
}
message Stake {
// New total stake
Uint128 stake = 1;
PublicKey public_key = 2;
}
message AddKey {
PublicKey public_key = 1;
AccessKey access_key = 2;
}
message DeleteKey {
PublicKey public_key = 1;
}
message DeleteAccount {
// The account ID which would receive the remaining funds.
string beneficiary_id = 1;
}
oneof action {
CreateAccount create_account = 1;
DeployContract deploy_contract = 2;
FunctionCall function_call = 3;
Transfer transfer = 4;
Stake stake = 5;
AddKey add_key = 6;
DeleteKey delete_key = 7;
DeleteAccount delete_account = 8;
}
}
message Transaction {
string signer_id = 1;
PublicKey public_key = 2;
uint64 nonce = 3;
string receiver_id = 4;
repeated Action actions = 5;
}
message SignedTransaction {
bytes signature = 1;
Transaction transaction = 2;
}
```
##### receipt.proto
```proto
syntax = "proto3";
import "public_key.proto";
import "signed_transaction.proto";
import "uint128.proto";
import "wrappers.proto";
message DataReceipt {
bytes data_id = 1;
google.protobuf.BytesValue data = 2;
}
message ActionReceipt {
message DataReceiver {
bytes data_id = 1;
string receiver_id = 2;
}
string signer_id = 1;
PublicKey signer_public_key = 2;
// The price of gas is determined when the original SignedTransaction is
// converted into the Receipt. It's used for refunds.
Uint128 gas_price = 3;
// List of data receivers where to route the output data
// (e.g. result of execution)
repeated DataReceiver output_data_receivers = 4;
// Ordered list of data ID to provide as input results.
repeated bytes input_data_ids = 5;
repeated Action actions = 6;
}
message Receipt {
string predecessor_id = 1;
string receiver_id = 2;
bytes receipt_id = 3;
oneof receipt {
ActionReceipt action = 4;
DataReceipt data = 5;
}
}
```
### Validation and Permissions
To validate `SignedTransaction` we need to do the following:
- verify transaction hash against signature and the given public key
- verify `signer_id` is a valid account ID
- verify `receiver_id` is a valid account ID
- fetch the account for the given `signer_id`
- fetch the access key for the given `signer_id` and `public_key`
- verify access key `nonce`
- get the current price of gas
- compute total required balance for the transaction, including action fees (in gas), deposits and prepaid gas.
- verify account balance is larger than required balance.
- verify actions are allowed by the access key permissions, e.g. if the access key only allows function call, then need to verify receiver, method name and allowance.
Before we convert a `Transaction` to a new `ActionReceipt`, we don't need to validate permissions of the actions or their order. It's checked during `ActionReceipt` execution.
`ActionReceipt` doesn't need to be validated before we start executing it.
The actions in the `ActionReceipt` are executed in given order.
Each action has to check for the validity before execution.
Since `CreateAccount` grants permission to perform actions on the new account as if it were your own account, we introduce a temporary variable `actor_id`.
At the beginning of the execution, `actor_id` is set to the value of `predecessor_id`.
Validation rules for actions:
- `CreateAccount`
- check the account `receiver_id` doesn't exist
- `DeployContract`, `Stake`, `AddKey`, `DeleteKey`
- check the account `receiver_id` exists
- check `actor_id` equals `receiver_id`
- `FunctionCall`, `Transfer`
- check the account `receiver_id` exists
When `CreateAccount` completes, the `actor_id` changes to `receiver_id`.
NOTE: When we implement `DeleteAccount` action, its completion will change `actor_id` back to `predecessor_id`.
Once validated, each action might still do some additional checks, e.g. `FunctionCall` might check that the code exists and `method_name` is valid.
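The validation rules above can be sketched as a single dispatch over the action kind. The `Action` enum and `validate_action` signature are simplified stand-ins for the runtime types:

```rust
// Minimal stand-in for the action kinds named in the validation rules.
enum Action {
    CreateAccount,
    DeployContract,
    Stake,
    AddKey,
    DeleteKey,
    FunctionCall,
    Transfer,
}

// Sketch of the per-action validity checks; `account_exists` refers to the
// `receiver_id` account.
fn validate_action(
    action: Action,
    account_exists: bool,
    actor_id: &str,
    receiver_id: &str,
) -> Result<(), String> {
    match action {
        Action::CreateAccount => {
            if account_exists {
                return Err("account already exists".into());
            }
        }
        Action::DeployContract | Action::Stake | Action::AddKey | Action::DeleteKey => {
            if !account_exists {
                return Err("account doesn't exist".into());
            }
            if actor_id != receiver_id {
                return Err("actor has no permission".into());
            }
        }
        Action::FunctionCall | Action::Transfer => {
            if !account_exists {
                return Err("account doesn't exist".into());
            }
        }
    }
    Ok(())
}

fn main() {
    // `actor_id` starts as the predecessor; after `CreateAccount` completes it
    // becomes the receiver, which is what authorizes the subsequent deploy.
    let mut actor_id = "vasya.near";
    let receiver_id = "vitalik.vasya.near";
    assert!(validate_action(Action::CreateAccount, false, actor_id, receiver_id).is_ok());
    actor_id = receiver_id;
    assert!(validate_action(Action::DeployContract, true, actor_id, receiver_id).is_ok());
}
```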
### `DataReceipt` generation rules
If `ActionReceipt` doesn't have `output_data_id` and `output_receiver_id`, then `DataReceipt` is not generated.
Otherwise, `DataReceipt` depends on the last action of `ActionReceipt`. There are 4 different outcomes:
1. Last action is invalid, failed or the execution stopped on some previous action.
- `DataReceipt` is generated
- `data_id` is set to the value of `output_data_id` from the `ActionReceipt`
- `success` is set to `false`
- `data` is set to `null`
2. Last action is valid and finished successfully, but it's not a `FunctionCall`. Or a `FunctionCall`, that returned no value.
- `DataReceipt` is generated
- `data_id` is set to the value of `output_data_id` from the `ActionReceipt`
- `success` is set to `true`
- `data` is set to `null`
3. Last action is `FunctionCall`, and the result of the execution is some value.
- `DataReceipt` is generated
- `data_id` is set to the value of `output_data_id` from the `ActionReceipt`
- `success` is set to `true`
- `data` is set to the bytes of the returned value
4. Last action is `FunctionCall`, and the result of the execution is a promise ID
- `DataReceipt` is NOT generated, because we don't have the value for the execution.
- Instead we should modify the `ActionReceipt` generated for the returned promise ID.
   - In this receipt the `output_data_id` should be set to the `output_data_id` of the action receipt that we just finished executing.
   - `output_receiver_id` is set the same way as `output_data_id` described above.
#### Example for the case #4
A user called contract `a.app`, which called `b.app` and expect a callback to `a.app`. So `a.app` generated 2 receipts:
Towards `b.app`:
```
...
"receiver_id": "b.app",
...
"output_data_id": "data_a",
"output_receiver_id": "a.app",
"input_data_id": [],
...
```
Towards itself:
```
...
"receiver_id": "a.app",
...
"output_data_id": null,
"output_receiver_id": null,
"input_data_id": ["data_a"],
...
```
Now let's say `b.app` doesn't actually do the work, but is just a middleman that charges some fees before redirecting the work to the actual contract `c.app`.
In this case `b.app` creates a new promise by calling `c.app` and returns it instead of data.
This triggers case #4, so it doesn't generate the data receipt yet; instead it creates an action receipt which would look like this:
```
...
"receiver_id": "c.app",
...
"output_data_id": "data_a",
"output_receiver_id": "a.app",
"input_data_id": [],
...
```
Once it completes, it would send a data receipt to `a.app` (unless `c.app` is a middleman as well).
But let's say `b.app` doesn't want to reveal that it's a middleman.
In this case it would call `c.app`, but instead of returning data directly to `a.app`, `b.app` wants to wrap the result in some nice wrapper.
Then instead of returning the promise towards `c.app`, `b.app` would attach a callback to itself and return the promise ID of that callback. Here is how it would look:
Towards `c.app`:
```
...
"receiver_id": "c.app",
...
"output_data_id": "data_b",
"output_receiver_id": "b.app",
"input_data_id": [],
...
```
So when the callback receipt is first generated, it looks like this:
```
...
"receiver_id": "b.app",
...
"output_data_id": null,
"output_receiver_id": null,
"input_data_id": ["data_b"],
...
```
But once its promise ID is returned with `promise_return`, it is updated to return data towards `a.app`:
```
...
"receiver_id": "b.app",
...
"output_data_id": "data_a",
"output_receiver_id": "a.app",
"input_data_id": ["data_b"],
...
```
### Data storage
We should maintain the following persistent maps per account (`receiver_id`)
- Received data: `data_id -> (success, data)`
- Postponed receipts: `receipt_id -> Receipt`
- Pending input data: `data_id -> receipt_id`
When `ActionReceipt` is received, the runtime iterates through the list of `input_data_id`.
If an `input_data_id` is not present in the received data map, then a pair `(input_data_id, receipt_id)` is added to the pending input data map and the receipt is marked as postponed.
At the end of the iteration, if the receipt is marked as postponed, it's added to the map of postponed receipts keyed by `receipt_id`.
If all `input_data_id`s are available in the received data, then `ActionReceipt` is executed.
When `DataReceipt` is received, a pair `(data_id, (success, data))` is added to the received data map.
Then the runtime checks if `data_id` is present in the pending input data.
If it's present, then `data_id` is removed from the pending input data and the corresponding `ActionReceipt` is checked again (see above).
NOTE: we can optimize by not storing `data_id` in the received data map when the pending input data is present and it was the final input data item in the receipt.
When `ActionReceipt` is executed, the runtime deletes all `input_data_id` from the received data map.
The `receipt_id` is deleted from the postponed receipts map (if present).
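The bookkeeping above can be sketched with the three maps as in-memory `HashMap`s. `ShardState` and the method names are simplified stand-ins for the runtime's persistent per-account storage:

```rust
use std::collections::HashMap;

// Minimal sketch of the three per-account maps described above.
#[derive(Default)]
struct ShardState {
    received_data: HashMap<String, (bool, Vec<u8>)>, // data_id -> (success, data)
    postponed_receipts: HashMap<String, Vec<String>>, // receipt_id -> input_data_ids
    pending_input_data: HashMap<String, String>,     // data_id -> receipt_id
}

impl ShardState {
    /// Returns the receipt ID if it can execute immediately; otherwise
    /// records the missing inputs and postpones the receipt.
    fn on_action_receipt(&mut self, receipt_id: &str, input_data_ids: &[String]) -> Option<String> {
        let missing: Vec<String> = input_data_ids
            .iter()
            .filter(|id| !self.received_data.contains_key(*id))
            .cloned()
            .collect();
        if missing.is_empty() {
            return Some(receipt_id.to_string()); // all inputs available: execute now
        }
        for id in &missing {
            self.pending_input_data.insert(id.clone(), receipt_id.to_string());
        }
        self.postponed_receipts.insert(receipt_id.to_string(), input_data_ids.to_vec());
        None
    }

    /// Returns the ID of a postponed receipt that this data unblocks, if any.
    fn on_data_receipt(&mut self, data_id: &str, success: bool, data: Vec<u8>) -> Option<String> {
        self.received_data.insert(data_id.to_string(), (success, data));
        let receipt_id = self.pending_input_data.remove(data_id)?;
        let inputs = self.postponed_receipts.get(&receipt_id)?.clone();
        if inputs.iter().all(|id| self.received_data.contains_key(id)) {
            self.postponed_receipts.remove(&receipt_id);
            Some(receipt_id) // last missing input arrived: trigger execution
        } else {
            None
        }
    }
}

fn main() {
    let mut state = ShardState::default();
    // The callback receipt arrives first and is postponed.
    assert_eq!(state.on_action_receipt("r1", &["data_123_1".to_string()]), None);
    // The data receipt arrives and triggers the postponed receipt.
    assert_eq!(state.on_data_receipt("data_123_1", true, vec![]), Some("r1".to_string()));
}
```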
### TODO Receipt execution
- input data is available to all function calls in the batched actions
- TODO
# Future possibilities
- We can add `or` based data selector, so data storage can be affected.
================================================
FILE: neps/archive/0013-system-methods.md
================================================
- Proposal Name: System methods in runtime API
- Start Date: 2019-09-03
- NEP PR: [nearprotocol/neps#0013](https://github.com/nearprotocol/neps/pull/0013)
# Summary
Adds new ability for contracts to perform some system functions:
- create new accounts (with possible code deploy and initialization)
- deploy new code (or redeploying code for upgrades)
- batched function calls
- transfer money
- stake
- add key
- delete key
- delete account
# Motivation
Contracts should have the ability to create new accounts, transfer money without calling code and
stake. It will enable full functionality of contract-based accounts.
# Reference
We introduce additional promise APIs to support batched actions.
Firstly, we enable the ability to create empty promises without any actions. They act similarly to
traditional promises, but don't contain a function call action.
Secondly, we add an API to append individual actions to promises. For example, we can create
a promise with a function call first using `promise_create` and then attach a transfer action on top
of this promise. So the transfer will only deposit tokens if the function call succeeds. Another example
is how we now create accounts using batched actions. To create a new account, we create a transaction with
the following actions: `create_account`, `transfer`, `add_key`. It creates a new account, deposits some funds on it, and then adds a new key.
For more examples see NEP#8: https://github.com/nearprotocol/NEPs/pull/8/files?short_path=15b6752#diff-15b6752ec7d78e7b85b8c7de4a19cbd4
**NOTE: The existing promise API is a special case of the batched promise API.**
- Calling `promise_batch_create` and then `promise_batch_action_function_call` will produce the same promise as calling `promise_create` directly.
- Calling `promise_batch_then` and then `promise_batch_action_function_call` will produce the same promise as calling `promise_then` directly.
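The equivalence noted above can be sketched with a mock host that records promises as `(account_id, actions)` pairs; `MockHost` and its string-based action encoding are illustrative only:

```rust
// Mock sketch: `promise_create` expressed in terms of the batched API.
#[derive(Default)]
struct MockHost {
    promises: Vec<(String, Vec<String>)>, // (account_id, actions)
}

impl MockHost {
    fn promise_batch_create(&mut self, account_id: &str) -> u64 {
        self.promises.push((account_id.to_string(), vec![]));
        (self.promises.len() - 1) as u64
    }

    fn promise_batch_action_function_call(&mut self, promise_idx: u64, method_name: &str) {
        self.promises[promise_idx as usize]
            .1
            .push(format!("function_call:{}", method_name));
    }

    /// The classic API as a special case of the batched one: an empty batch
    /// promise with a single `FunctionCall` action appended.
    fn promise_create(&mut self, account_id: &str, method_name: &str) -> u64 {
        let idx = self.promise_batch_create(account_id);
        self.promise_batch_action_function_call(idx, method_name);
        idx
    }
}

fn main() {
    let mut host = MockHost::default();
    let idx = host.promise_create("b.contract.near", "sum");
    assert_eq!(host.promises[idx as usize].1, vec!["function_call:sum".to_string()]);
}
```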
## Promises API
#### promise_batch_create
```rust
promise_batch_create(account_id_len: u64, account_id_ptr: u64) -> u64
```
Creates a new promise towards given `account_id` without any actions attached to it.
###### Panics
- If `account_id_len + account_id_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.
###### Returns
- Index of the new promise that uniquely identifies it within the current execution of the method.
---
#### promise_batch_then
```rust
promise_batch_then(promise_idx: u64, account_id_len: u64, account_id_ptr: u64) -> u64
```
Attaches a new empty promise that is executed after the promise pointed to by `promise_idx` completes.
###### Panics
- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.
- If `account_id_len + account_id_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.
###### Returns
- Index of the new promise that uniquely identifies it within the current execution of the method.
---
##### promise_batch_action_create_account
```rust
promise_batch_action_create_account(promise_idx: u64)
```
Appends `CreateAccount` action to the batch of actions for the given promise pointed by `promise_idx`.
Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R48
###### Panics
- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.
- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.
---
#### promise_batch_action_deploy_contract
```rust
promise_batch_action_deploy_contract(promise_idx: u64, code_len: u64, code_ptr: u64)
```
Appends `DeployContract` action to the batch of actions for the given promise pointed by `promise_idx`.
Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R49
###### Panics
- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.
- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.
- If `code_len + code_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.
---
#### promise_batch_action_function_call
```rust
promise_batch_action_function_call(promise_idx: u64,
method_name_len: u64,
method_name_ptr: u64,
arguments_len: u64,
arguments_ptr: u64,
amount_ptr: u64,
gas: u64)
```
Appends `FunctionCall` action to the batch of actions for the given promise pointed by `promise_idx`.
Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R50
*NOTE: Calling `promise_batch_create` and then `promise_batch_action_function_call` will produce the same promise as calling `promise_create` directly.*
###### Panics
- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.
- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.
- If `method_name_len + method_name_ptr`, `arguments_len + arguments_ptr`,
or `amount_ptr + 16` points outside the memory of the guest or host, with `MemoryAccessViolation`.
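The equivalence stated in the note above can be illustrated with a toy model. Everything here (the `Vm` struct, simplified signatures taking `&str` instead of pointer/length pairs) is a sketch for illustration, not the actual host interface:

```rust
// Toy model of the promise queue (not the nearcore VM): shows that
// `promise_batch_create` + `promise_batch_action_function_call` builds
// the same promise as a direct `promise_create`.
#[derive(Debug, PartialEq, Clone)]
enum Action {
    FunctionCall { method: String, args: Vec<u8>, amount: u128, gas: u64 },
}

#[derive(Debug, PartialEq, Clone)]
struct Promise {
    receiver: String,
    actions: Vec<Action>,
}

#[derive(Default)]
struct Vm {
    promises: Vec<Promise>,
}

impl Vm {
    fn promise_batch_create(&mut self, receiver: &str) -> u64 {
        self.promises.push(Promise { receiver: receiver.to_string(), actions: Vec::new() });
        (self.promises.len() - 1) as u64
    }
    fn promise_batch_action_function_call(&mut self, idx: u64, method: &str, args: &[u8], amount: u128, gas: u64) {
        self.promises[idx as usize].actions.push(Action::FunctionCall {
            method: method.to_string(),
            args: args.to_vec(),
            amount,
            gas,
        });
    }
    fn promise_create(&mut self, receiver: &str, method: &str, args: &[u8], amount: u128, gas: u64) -> u64 {
        // A direct call is just a batch with a single FunctionCall action.
        let idx = self.promise_batch_create(receiver);
        self.promise_batch_action_function_call(idx, method, args, amount, gas);
        idx
    }
}

fn main() {
    let mut vm = Vm::default();
    let a = vm.promise_create("alice.near", "ping", b"{}", 0, 10_000);
    let b = vm.promise_batch_create("alice.near");
    vm.promise_batch_action_function_call(b, "ping", b"{}", 0, 10_000);
    assert_eq!(vm.promises[a as usize], vm.promises[b as usize]);
    println!("equivalent");
}
```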
---
#### promise_batch_action_transfer
```rust
promise_batch_action_transfer(promise_idx: u64, amount_ptr: u64)
```
Appends `Transfer` action to the batch of actions for the given promise pointed by `promise_idx`.
Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R51
###### Panics
- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.
- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.
- If `amount_ptr + 16` points outside the memory of the guest or host, with `MemoryAccessViolation`.
---
#### promise_batch_action_stake
```rust
promise_batch_action_stake(promise_idx: u64,
amount_ptr: u64,
public_key_len: u64,
public_key_ptr: u64)
```
Appends `Stake` action to the batch of actions for the given promise pointed by `promise_idx`.
Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R52
###### Panics
- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.
- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.
- If the given public key is not a valid public key (e.g. wrong length) `InvalidPublicKey`.
- If `amount_ptr + 16` or `public_key_len + public_key_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.
---
#### promise_batch_action_add_key_with_full_access
```rust
promise_batch_action_add_key_with_full_access(promise_idx: u64,
public_key_len: u64,
public_key_ptr: u64,
nonce: u64)
```
Appends `AddKey` action to the batch of actions for the given promise pointed by `promise_idx`.
Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R54
The access key will have `FullAccess` permission, details: [0005-access-keys.md#guide-level-explanation](0005-access-keys.md#guide-level-explanation)
###### Panics
- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.
- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.
- If the given public key is not a valid public key (e.g. wrong length) `InvalidPublicKey`.
- If `public_key_len + public_key_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.
---
#### promise_batch_action_add_key_with_function_call
```rust
promise_batch_action_add_key_with_function_call(promise_idx: u64,
public_key_len: u64,
public_key_ptr: u64,
nonce: u64,
allowance_ptr: u64,
receiver_id_len: u64,
receiver_id_ptr: u64,
method_names_len: u64,
method_names_ptr: u64)
```
Appends `AddKey` action to the batch of actions for the given promise pointed by `promise_idx`.
Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R54
The access key will have `FunctionCall` permission, details: [0005-access-keys.md#guide-level-explanation](0005-access-keys.md#guide-level-explanation)
- If the `allowance` value (not the pointer) is `0`, the allowance is set to `None` (which means unlimited allowance). A positive value represents a `Some(...)` allowance.
- The given `method_names` is a `utf-8` string with `,` used as a separator. The VM will split the given string into a vector of strings.
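The splitting behavior can be sketched as follows; `split_method_names` is a hypothetical helper for illustration, not the actual nearcore function:

```rust
// Sketch of how the VM is assumed to handle `method_names`: validate UTF-8
// (invalid input maps to the BadUTF8 failure), then split on commas.
fn split_method_names(method_names: &[u8]) -> Result<Vec<String>, String> {
    let s = std::str::from_utf8(method_names).map_err(|_| "BadUTF8".to_string())?;
    Ok(s.split(',').map(str::to_string).collect())
}

fn main() {
    assert_eq!(
        split_method_names(b"get_status,set_status").unwrap(),
        vec!["get_status".to_string(), "set_status".to_string()]
    );
    // Invalid UTF-8 fails, matching the BadUTF8 panic condition.
    assert!(split_method_names(&[0xff, 0xfe]).is_err());
    println!("ok");
}
```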
###### Panics
- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.
- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.
- If the given public key is not a valid public key (e.g. wrong length) `InvalidPublicKey`.
- If `method_names` is not a valid `utf-8` string, fails with `BadUTF8`.
- If `public_key_len + public_key_ptr`, `allowance_ptr + 16`, `receiver_id_len + receiver_id_ptr` or
`method_names_len + method_names_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.
---
#### promise_batch_action_delete_key
```rust
promise_batch_action_delete_key(promise_idx: u64,
public_key_len: u64,
public_key_ptr: u64)
```
Appends `DeleteKey` action to the batch of actions for the given promise pointed by `promise_idx`.
Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R55
###### Panics
- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.
- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.
- If the given public key is not a valid public key (e.g. wrong length) `InvalidPublicKey`.
- If `public_key_len + public_key_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.
---
#### promise_batch_action_delete_account
```rust
promise_batch_action_delete_account(promise_idx: u64,
beneficiary_id_len: u64,
beneficiary_id_ptr: u64)
```
Appends `DeleteAccount` action to the batch of actions for the given promise pointed by `promise_idx`.
Action is used to delete an account. It can be performed on a newly created account, on your own account or an account with
insufficient funds to pay rent. Takes `beneficiary_id` to indicate where to send the remaining funds.
###### Panics
- If `promise_idx` does not correspond to an existing promise panics with `InvalidPromiseIndex`.
- If the promise pointed by the `promise_idx` is an ephemeral promise created by `promise_and`.
- If `beneficiary_id_len + beneficiary_id_ptr` points outside the memory of the guest or host, with `MemoryAccessViolation`.
---
================================================
FILE: neps/archive/0017-execution-outcome.md
================================================
- Proposal Name: Execution Outcome
- Start Date: 2019-09-23
- NEP PR: [nearprotocol/neps#0017](https://github.com/nearprotocol/neps/pull/17)
- Issue(s): https://github.com/nearprotocol/nearcore/issues/1307
# Summary
Refactor current TransactionResult/TransactionLog/FinalTransactionResult to improve naming, deduplicate results and provide
results resolution by the front-end for async-calls.
# Motivation
Right now, if a contract calls 2 promises and doesn't return a value, the front-end will return one of the promise results as the execution result. This is because we return the last result from the final transaction result. With the current API, it's impossible to know what the actual result of the contract execution is.
# Guide-level explanation
Here are the proposed Rust structures. Highlights:
- Rename `TransactionResult` to `ExecutionOutcome` since it's used for transactions and receipts
- Rename `TransactionStatus` and merge it with result into `ExecutionResult`.
- In case of success, `ExecutionStatus` can be either a value or a receipt_id. This helps to resolve the
actual value returned by the transaction from async calls, e.g. `A->B->A->C` should return the result from `C`.
It also helps to distinguish the result in case of forks, e.g. `A` calls `B` and calls `C`, but returns a result from `B`.
Currently there is no way to know.
- Rename `TransactionLog` to `ExecutionOutcomeWithId` which is `ExecutionOutcome` with receipt_id
or transaction hash. Probably needs a better name.
- Rename `FinalTransactionResult` to `FinalExecutionOutcome`.
- Update `FinalTransactionStatus` to `FinalExecutionStatus`.
- Provide final resolved returned result directly, so the front-end doesn't need to traverse the receipt tree.
We may also expose the error directly in the execution result.
- Split the final outcome into transaction and receipts.
### NEW
- The `FinalExecutionStatus` contains the early result even if some dependent receipts are not yet executed. Most function call
transactions contain 2 receipts. The 1st receipt is execution, the 2nd is the refund. Before this change, the transaction was
not resolved until the 2nd receipt was executed. After this change, the `FinalExecutionOutcome` will have
`FinalExecutionStatus::SuccessValue("")` after the execution of the 1st receipt, while the 2nd receipt execution outcome status is still `Pending`.
This helps to get the transaction result on the front-end faster without waiting for all refunds.
```rust
pub struct ExecutionOutcome {
/// Execution status. Contains the result in case of successful execution.
pub status: ExecutionStatus,
/// Logs from this transaction or receipt.
pub logs: Vec<LogEntry>,
/// Receipt IDs generated by this transaction or receipt.
pub receipt_ids: Vec<CryptoHash>,
/// The amount of the gas burnt by the given transaction or receipt.
pub gas_burnt: Gas,
}
/// The status of execution for a transaction or a receipt.
pub enum ExecutionStatus {
/// The execution is pending.
Pending,
/// The execution has failed.
Failure,
/// The final action succeeded and returned some value or an empty vec.
SuccessValue(Vec<u8>),
/// The final action of the receipt returned a promise or the signed transaction was converted
/// to a receipt. Contains the receipt_id of the generated receipt.
SuccessReceiptId(CryptoHash),
}
// TODO: Need a better name
pub struct ExecutionOutcomeWithId {
/// The transaction hash or the receipt ID.
pub id: CryptoHash,
pub outcome: ExecutionOutcome,
}
#[derive(Serialize, Deserialize, PartialEq, Eq, Debug, Clone)]
pub enum FinalExecutionStatus {
/// The execution has not yet started.
NotStarted,
/// The execution has started and still going.
Started,
/// The execution has failed.
Failure,
/// The execution has succeeded and returned some value or an empty vec in base64.
SuccessValue(String),
}
pub struct FinalExecutionOutcome {
/// Execution status. Contains the result in case of successful execution.
pub status: FinalExecutionStatus,
/// The execution outcome of the signed transaction.
pub transaction: ExecutionOutcomeWithId,
/// The execution outcome of receipts.
pub receipts: Vec<ExecutionOutcomeWithId>,
}
```
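The result resolution described above (following `SuccessReceiptId` links so that `A->B->A->C` returns the result from `C`) can be sketched like this, with `CryptoHash` simplified to `u64`; `resolve_final` is an illustrative helper, not part of the proposal's API:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
enum ExecutionStatus {
    Pending,
    Failure,
    SuccessValue(Vec<u8>),
    SuccessReceiptId(u64), // CryptoHash simplified to u64 for this sketch
}

// Follow SuccessReceiptId links until a terminal status is reached, so the
// front-end does not need to traverse the receipt tree itself.
fn resolve_final(start: u64, outcomes: &HashMap<u64, ExecutionStatus>) -> ExecutionStatus {
    let mut id = start;
    loop {
        match outcomes.get(&id) {
            Some(ExecutionStatus::SuccessReceiptId(next)) => id = *next,
            Some(status) => return status.clone(),
            None => return ExecutionStatus::Pending, // receipt not yet executed
        }
    }
}

fn main() {
    let mut outcomes = HashMap::new();
    outcomes.insert(1, ExecutionStatus::SuccessReceiptId(2));
    outcomes.insert(2, ExecutionStatus::SuccessReceiptId(3));
    outcomes.insert(3, ExecutionStatus::SuccessValue(b"done".to_vec()));
    assert_eq!(resolve_final(1, &outcomes), ExecutionStatus::SuccessValue(b"done".to_vec()));
    println!("resolved");
}
```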
================================================
FILE: neps/archive/0018-view-change-method.md
================================================
- Proposal Name: Improve view/change methods in contracts
- Start Date: 2019-09-26
- NEP PR: [nearprotocol/neps#0000](https://github.com/nearprotocol/neps/pull/18)
# Summary
Currently the separation between view methods and change methods on the contract level is not very well defined and causes
quite a bit of confusion among developers. We propose in this NEP to elucidate the difference between view methods
and change methods and how they should be used. In short, we would like to restrict view methods from accessing certain
context variables and do not distinguish between view and change methods on the contract level. Developers have the option
to differentiate between the two in frontend or through near-shell.
# Motivation
From the feedback we received it seems that developers are confused by the results they get from view calls, which are
mainly caused by the fact that some binding methods such as `signer_account_id`, `current_account_id`, `attached_deposit`
do not make sense in a view call.
To avoid such confusion and create better developer experience, it is better if those context variables
are prohibited in view calls.
# Guide-level explanation
Among binding methods that we expose from nearcore, some do make sense in a view call, such as `block_index`,
while the majority does not.
Here we explicitly list the methods that are not allowed in a view call; if they are invoked, the contract will panic with
`<method_name> is not allowed in view calls`.
The following methods are prohibited:
- `signer_account_id`
- `signer_account_pk`
- `predecessor_account_id`
- `attached_deposit`
- `prepaid_gas`
- `used_gas`
- `promise_create`
- `promise_then`
- `promise_and`
- `promise_batch_create`
- `promise_batch_then`
- `promise_batch_action_create_account`
- `promise_batch_action_deploy_contract`
- `promise_batch_action_function_call`
- `promise_batch_action_transfer`
- `promise_batch_action_stake`
- `promise_batch_action_add_key_with_full_access`
- `promise_batch_action_add_key_with_function_call`
- `promise_batch_action_delete_key`
- `promise_batch_action_delete_account`
- `promise_results_count`
- `promise_result`
- `promise_return`
From the developer perspective, if they want to call view functions from command line on some contract, they would just
call `near view <contractName> <methodName> [args]`. If they are building an app and want to call a view function from the
frontend, they should follow the same pattern as we have right now, specifying `viewMethods` and `changeMethods` in
`loadContract`.
# Reference-level explanation
To implement this NEP, we need to change how binding methods are handled in runtime. More specifically, we can rename
`free_of_charge` to `is_view` and use that to indicate whether we are processing a view call. In addition, we can add
a variant `ProhibitedInView(String)` to `HostError` so that if `is_view` is true, all access to the prohibited
methods will error with `HostError::ProhibitedInView(<method_name>)`.
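A minimal sketch of this guard, under stated assumptions (the `HostError` variant follows the NEP; the function name and signature are hypothetical, not nearcore's actual API):

```rust
#[derive(Debug, PartialEq)]
enum HostError {
    ProhibitedInView(String),
}

// Every prohibited binding method would first call a guard like this:
// in a view call (is_view == true) it errors instead of executing.
fn check_is_view(is_view: bool, method_name: &str) -> Result<(), HostError> {
    if is_view {
        Err(HostError::ProhibitedInView(method_name.to_string()))
    } else {
        Ok(())
    }
}

fn main() {
    // Change calls proceed as usual.
    assert!(check_is_view(false, "attached_deposit").is_ok());
    // View calls error with the method name, per the NEP.
    assert_eq!(
        check_is_view(true, "attached_deposit"),
        Err(HostError::ProhibitedInView("attached_deposit".to_string()))
    );
    println!("ok");
}
```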
# Drawbacks
In terms of not allowing context variables, I don't see any drawback as those variables do not have a proper meaning
in view functions. For alternatives, see the section below.
# Rationale and alternatives
This design is very simple and requires very little change to the existing infrastructure. An alternative solution is
to distinguish between view methods and change methods on the contract level. One way to do it is through decorators, as
described [here](https://github.com/nearprotocol/NEPs/pull/3). However, enforcing such distinction on the contract level
requires much more work and is not currently feasible for Rust contracts.
# Unresolved questions
# Future possibilities
================================================
FILE: neps/archive/0033-economics.md
================================================
- Proposal Name: NEAR economics specs
- Start Date: 2020-02-23
- NEP PR: [nearprotocol/neps#0000](https://github.com/nearprotocol/NEPs/pull/33)
- Issue(s): link to relevant issues in relevant repos (not required).
# Summary
Adding economics specification for NEAR Protocol based on the NEAR whitepaper - https://pages.near.org/papers/the-official-near-white-paper/#economics
# Motivation
Currently, the specification is defined by the implementation in https://github.com/near/nearcore. This codifies all the parameters and formulas and defines main concepts.
# Guide-level explanation
The goal is to build a set of specs about NEAR token economics, for analysts and adopters, to simplify their understanding of the protocol and its game-theoretical dynamics.
This initial release will be oriented to validators and staking in general.
# Reference-level explanation
This part of the documentation is self-contained. It may provide material for third-party research papers, and spreadsheet analysis.
# Drawbacks
We might just put this in the NEAR docs.
# Rationale and alternatives
# Unresolved questions
# Future possibilities
This is an open document which may be used by NEAR's community to pull request a new economic policy. Having a formal document also for non-technical aspects opens new opportunities for the governance.
================================================
FILE: neps/archive/0040-split-states.md
================================================
- Proposal Name: Splitting States for Simple Nightshade
- Start Date: 2021-07-19
- NEP PR: [near/NEPs#241](https://github.com/near/NEPs/pull/241)
- Issue(s): [near/NEPs#225](https://github.com/near/NEPs/issues/225) [near/nearcore#4419](https://github.com/near/nearcore/issues/4419)
# Summary
This proposal describes a way to split each shard in the blockchain into multiple shards.
Currently, the NEAR blockchain only has one shard, and it needs to be split into eight shards for Simple Nightshade.
# Motivation
To enable sharding, specifically, phase 0 of Simple Nightshade, we need to find a way to split the current one shard state into eight shards.
# Guide-level explanation
The proposal assumes that all validators track all shards and that challenges are not enabled.
Suppose the new sharding assignment comes into effect at epoch T.
State migration is done at epoch T-1, when the validators for epoch T are catching up states for the next epoch.
At the beginning of epoch T-1, they run state sync for the current shards if needed.
From the existing states, they build states for the new shards, then apply changes to the new states when they process the blocks in epoch T-1.
This whole process runs off-chain as the new states will not be included in blocks at epoch T-1.
At the beginning of epoch T, the new validators start to build blocks based on the new state roots.
The change involves three parts.
## Dynamic Shards
The first issue to address in splitting shards is the assumption, made by the current implementation of chain and runtime, that the number of shards never changes.
This in turn involves two parts, how the validators know when and how sharding changes happen and how they store states of shards from different epochs during the transition.
The former is a protocol change and the latter only affects validators' internal states.
### Protocol Change
Sharding config for an epoch will be encapsulated in a struct `ShardLayout`, which not only contains the number of shards, but also layout information to decide which account ids should be mapped to which shards.
The `ShardLayout` information will be stored as part of `EpochConfig`.
Right now, `EpochConfig` is stored in `EpochManager` and remains static across epochs.
That will be changed in the new implementation so that `EpochConfig` can be changed according to protocol versions, similar to how `RuntimeConfig` is implemented right now.
The switch to Simple Nightshade will be implemented as a protocol upgrade.
`EpochManager` creates a new `EpochConfig` for each epoch from the protocol version of the epoch.
When the protocol version is large enough and the `SimpleNightShade` feature is enabled, the `EpochConfig` will use the `ShardLayout` of Simple Nightshade; otherwise it uses the genesis `ShardLayout`.
Since the protocol version and the shard information of epoch T will be determined at the end of epoch T-2, the validators will have time to prepare for states of the new shards during epoch T-1.
Although not ideal, the `ShardLayout` for Simple Nightshade will be added as part of the genesis config in the code.
The genesis config file itself will not be changed, but the field will be set to a default value we specify in the code.
This process is as hacky as it sounds, but currently we have no better way to account for changing protocol config.
To completely solve this issue will be a hard problem by itself, thus we do not try to solve it in this NEP.
We will discuss how the sharding transition will be managed in the next section.
### State Change
In epoch T-1, the validators need to maintain two versions of states for all shards, one for the current epoch, one that is split for the next epoch.
Currently, shards are identified by their `shard_id`, which is a number ranging from `0` to `NUM_SHARDS-1`. `shard_id` is also used as part of the indexing keys by which trie nodes are stored in the database.
However, when shards may change across epochs, `shard_id` can no longer be used to uniquely identify states because new shards and old shards will share the same `shard_id`s under this representation.
To solve this issue, the new proposal creates a new struct `ShardUId` as a unique identifier to reference shards across epochs.
`ShardUId` will only be used for storing and managing states, for example, in `Trie` related structures.
In most other places in the code, it is clear which epoch the referenced shard belongs to, and `ShardId` is enough to identify the shard.
There will be no change in the protocol level since `ShardId` will continue to be used in protocol level specs.
`ShardUId` contains a version number and the corresponding `shard_id`.
```rust
pub struct ShardUId {
version: u32,
shard_id: u32,
}
```
The version number is different between different shard layouts, to ensure `ShardUId`s for shards from different epochs are different.
`EpochManager` will be responsible for managing shard versions and `ShardUId` across epochs.
## Build New States
Currently, when receiving the first block of every epoch, validators start downloading states to prepare for the next epoch.
We can modify this existing process to make the validators build states for the new shards after they finish downloading states for the existing shards.
To build the new states, the validator iterates through all accounts in the current states and adds them to the new states one by one.
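The account-by-account build can be sketched as below, modeling a shard state as a plain map. The function name, the map representation, and the account-to-shard closure are all simplifying assumptions for illustration, not the actual `Trie`-based implementation:

```rust
use std::collections::BTreeMap;

// Build the states of the child shards by iterating through all accounts in
// the current (parent) state and adding them to the new states one by one.
fn build_states_for_split_shards(
    current: &BTreeMap<String, u64>,
    account_to_shard: impl Fn(&str) -> u32,
) -> BTreeMap<u32, BTreeMap<String, u64>> {
    let mut new_states: BTreeMap<u32, BTreeMap<String, u64>> = BTreeMap::new();
    for (account, data) in current {
        new_states
            .entry(account_to_shard(account))
            .or_insert_with(BTreeMap::new)
            .insert(account.clone(), *data);
    }
    new_states
}

fn main() {
    let mut current = BTreeMap::new();
    current.insert("alice.near".to_string(), 10);
    current.insert("zoe.near".to_string(), 20);
    // Split the single parent shard into two children at boundary "m".
    let new_states = build_states_for_split_shards(&current, |a| if a < "m" { 0 } else { 1 });
    assert_eq!(new_states[&0]["alice.near"], 10);
    assert_eq!(new_states[&1]["zoe.near"], 20);
    println!("ok");
}
```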
## Update States
Similar to how validators usually catch up for the next epoch, the new states are updated as new blocks are processed.
The difference is that in epoch T-1, chunks are still sharded by the current sharding assignment, but the validators need to perform updates on the new states.
We cannot simply split transactions and receipts to the new shards and process updates on each new shard separately.
If we do so, since each shard processes transactions and receipts with their own gas limits, some receipts may be delayed in the new states but not in the current states, or the other way around.
That will lead to inconsistencies between the orderings by which transactions and receipts are applied to the current and new states.
For example, for simplicity, assume there is only one shard A in epoch T-1 and there will be two shards B and C in epoch T.
To process a block in epoch T-1, shard A needs to process receipts 0, 1, ..., 99 while in the new sharding assignment receipts 0, 2, ..., 98 belong to shard B and receipts 1, 3, ..., 99 belong to shard C.
Assume in shard A, the gas limit is hit after receipt 89 is processed, so receipts 90 to 99 are delayed.
To achieve the same processing result, shard B must process receipts 0, 2, ..., 88 and delay 90, 92, ..., 98 and shard C must process receipts 1, 3, ..., 89 and delay receipts 91, 93, ..., 99.
However, shard B and C have their own gas limits and which receipts will be processed and delayed cannot be guaranteed.
Whether a receipt is processed in a block or delayed can affect the execution result of this receipt because transactions are charged and local receipts are processed before delayed receipts are processed.
For example, let’s assume Alice’s account has 0N now and Bob sends a transaction T1 to transfer 5N to Alice.
The transaction has been converted to a receipt R at block i-1 and sent to Alice's shard at block i.
Let's say Alice signs another transaction T2 to send 1N to Charlie and that transaction is included in block i+1.
Whether transaction T2 succeeds depends on whether receipt R is processed or delayed in block i.
If R is processed in block i, Alice’s account will have 5N before block i+1 and T2 will succeed while if R is delayed in block i, Alice’s account will have 0N and T2 will be declined.
Therefore, the validators must still process transactions and receipts based on the current sharding assignment.
After the processing is finished, they can take the generated state changes to apply to the new states.
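This routing step can be sketched as follows. The key point is that the changes were produced by processing under the old sharding assignment and are only routed afterwards, so their relative order within each new shard is preserved; the function name and the simplified change representation are assumptions for illustration:

```rust
use std::collections::HashMap;

// Route already-generated state changes (produced under the OLD sharding
// assignment) to the new shards, preserving their original order.
fn split_state_changes(
    changes: Vec<(String, u64)>, // (account, new value) -- simplified
    account_to_new_shard: impl Fn(&str) -> u32,
) -> HashMap<u32, Vec<(String, u64)>> {
    let mut per_shard: HashMap<u32, Vec<(String, u64)>> = HashMap::new();
    for (account, value) in changes {
        per_shard.entry(account_to_new_shard(&account)).or_default().push((account, value));
    }
    per_shard
}

fn main() {
    let changes = vec![
        ("alice.near".to_string(), 5),
        ("zoe.near".to_string(), 7),
        ("alice.near".to_string(), 4), // later change to the same account
    ];
    let routed = split_state_changes(changes, |a| if a < "m" { 0 } else { 1 });
    // Order within a shard is preserved, so the last write wins as before.
    assert_eq!(routed[&0], vec![("alice.near".to_string(), 5), ("alice.near".to_string(), 4)]);
    assert_eq!(routed[&1], vec![("zoe.near".to_string(), 7)]);
    println!("ok");
}
```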
# Reference-level explanation
## Protocol-Level Shard Representation
### `ShardLayout`
```rust
pub enum ShardLayout {
V0(ShardLayoutV0),
V1(ShardLayoutV1),
}
```
ShardLayout is a versioned struct that contains all information needed to decide which accounts belong to which shards. Note that `ShardLayout` only contains information at the protocol level, so it uses `ShardOrd` instead of `ShardId`.
The API contains the following two functions.
#### `get_split_shards`
```rust
pub fn get_split_shards(&self, parent_shard_id: ShardId) -> Option<&Vec<ShardId>>
```
returns the children shards of shard `parent_shard_id` (we will explain parent-children shards shortly). Note that `parent_shard_id` is a shard from the last `ShardLayout`, not from `self`. The returned `ShardId`s represent shards in the current shard layout.
This information is needed for constructing states for the new shards.
We only allow adding new shards that are split from the existing shards. If shard B and C are split from shard A, we call shard A the parent shard of shard B and C.
For example, if epoch T-1 has a shard layout `shard_layout0` with two shards with `shard_ord` 0 and 1 and each of them will be split to two shards in `shard_layout1` in epoch T, then `shard_layout1.get_split_shards(0)` returns `[0,1]` and `shard_layout1.get_split_shards(1)` returns `[2,3]`.
#### `version`
```rust
pub fn version(&self) -> ShardVersion
```
returns the version number of this shard layout. This version number is used to create `ShardUId` for shards in this `ShardLayout`. The version numbers must be different for all shard layouts used in the blockchain.
#### `account_id_to_shard_id`
```rust
pub fn account_id_to_shard_id(account_id: &AccountId, shard_layout: ShardLayout) -> ShardId
```
maps an account id to a shard id, given a shard layout.
#### `ShardLayoutV0`
```rust
pub struct ShardLayoutV0 {
/// map accounts evenly across all shards
num_shards: NumShards,
}
```
A shard layout that maps accounts evenly across all shards -- by calculating the hash of the account id and taking it modulo the number of shards. This is added to capture the current `account_id_to_shard_id` algorithm, to keep backward compatibility for some existing tests. `parent_shards` for `ShardLayoutV0` is always `None` and `version` is always `0`.
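As a rough illustration of the hash-mod scheme (using Rust's `DefaultHasher` as a stand-in; the real implementation uses a different hash function, so actual shard assignments will differ):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// ShardLayoutV0 sketch: hash the account id, then take it modulo the
// number of shards. Illustrative only, not the nearcore algorithm.
fn account_id_to_shard_id_v0(account_id: &str, num_shards: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    account_id.hash(&mut hasher);
    hasher.finish() % num_shards
}

fn main() {
    let shard = account_id_to_shard_id_v0("alice.near", 8);
    // The result is always a valid shard id.
    assert!(shard < 8);
    // Deterministic: the same account always maps to the same shard.
    assert_eq!(shard, account_id_to_shard_id_v0("alice.near", 8));
    println!("ok");
}
```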
#### `ShardLayoutV1`
```rust
pub struct ShardLayoutV1 {
/// num_shards = fixed_shards.len() + boundary_accounts.len() + 1
/// Each account and all subaccounts map to the shard of position in this array.
fixed_shards: Vec<AccountId>,
/// The rest are divided by boundary_accounts to ranges, each range is mapped to a shard
boundary_accounts: Vec<AccountId>,
/// Parent shards for the shards, useful for constructing states for the shards.
/// None for the genesis shard layout
parent_shards: Option<Vec<ShardId>>,
/// Version of the shard layout, useful to uniquely identify the shard layout
version: ShardVersion,
}
```
A shard layout that consists of some fixed shards, each of which is mapped to a fixed account, and other shards which are mapped to ranges of accounts. This will be the `ShardLayout` used by Simple Nightshade.
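A hedged sketch of this mapping, with simplified types and a hypothetical helper name: fixed accounts (and their subaccounts) go to dedicated shards, and everyone else is bucketed by the sorted boundary accounts:

```rust
// ShardLayoutV1 sketch (not nearcore code): `fixed_shards` accounts and
// their subaccounts each get their own shard; remaining accounts fall into
// the lexicographic ranges delimited by `boundary_accounts`.
fn account_to_shard_v1(account: &str, fixed_shards: &[&str], boundary_accounts: &[&str]) -> usize {
    for (i, fixed) in fixed_shards.iter().enumerate() {
        if account == *fixed || account.ends_with(&format!(".{}", fixed)) {
            return i;
        }
    }
    // First boundary greater than the account decides the range.
    let range = boundary_accounts
        .iter()
        .position(|b| account < *b)
        .unwrap_or(boundary_accounts.len());
    fixed_shards.len() + range
}

fn main() {
    let fixed = ["aurora"];
    let boundaries = ["h", "p"]; // num_shards = 1 + 2 + 1 = 4
    assert_eq!(account_to_shard_v1("aurora", &fixed, &boundaries), 0);
    assert_eq!(account_to_shard_v1("sub.aurora", &fixed, &boundaries), 0);
    assert_eq!(account_to_shard_v1("alice.near", &fixed, &boundaries), 1);
    assert_eq!(account_to_shard_v1("mike.near", &fixed, &boundaries), 2);
    assert_eq!(account_to_shard_v1("zoe.near", &fixed, &boundaries), 3);
    println!("ok");
}
```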
### `EpochConfig`
`EpochConfig` will contain the shard layout for the given epoch.
```rust
pub struct EpochConfig {
// existing fields
...
/// Shard layout of this epoch, may change from epoch to epoch
pub shard_layout: ShardLayout,
}
```
### `AllEpochConfig`
`AllEpochConfig` stores a mapping from protocol versions to `EpochConfig`s. `EpochConfig` for a particular epoch can be retrieved from `AllEpochConfig`, given the protocol version of the epoch. For SimpleNightshade migration, it only needs to contain two configs. `AllEpochConfig` will be stored inside `EpochManager` to be used to construct `EpochConfig` for different epochs.
```rust
pub struct AllEpochConfig {
genesis_epoch_config: Arc<EpochConfig>,
simple_nightshade_epoch_config: Arc<EpochConfig>,
}
```
#### `for_protocol_version`
```rust
pub fn for_protocol_version(&self, protocol_version: ProtocolVersion) -> &Arc<EpochConfig>
```
returns `EpochConfig` according to the given protocol version. `EpochManager` will call this function for every new epoch.
### `EpochManager`
`EpochManager` will be responsible for managing `ShardLayout` across epochs. As we mentioned, `EpochManager` stores an instance of `AllEpochConfig`, so it can return the `ShardLayout` for each epoch.
#### `get_shard_layout`
```rust
pub fn get_shard_layout(&mut self, epoch_id: &EpochId) -> Result<&ShardLayout, EpochError>
```
## Internal Shard Representation in Validators' State
### `ShardUId`
`ShardUId` is a unique identifier that a validator uses internally to identify shards from all epochs. It only exists inside a validator's internal state and can be different among validators, thus it should never be exposed to outside APIs.
```rust
pub struct ShardUId {
pub version: ShardVersion,
pub shard_id: u32,
}
```
`version` in `ShardUId` comes from the version of `ShardLayout` that this shard belongs. This way, different shards from different shard layout will have different `ShardUId`s.
### Database storage
The following database columns are stored with `ShardId` as part of the database key; `ShardId` will be replaced by `ShardUId`:
- ColState
- ColChunkExtra
- ColTrieChanges
#### `TrieCachingStorage`
Trie storage will construct the database key from `ShardUId` and the hash of the trie node.
##### `get_shard_uid_and_hash_from_key`
```rust
fn get_shard_uid_and_hash_from_key(key: &[u8]) -> Result<(ShardUId, CryptoHash), std::io::Error>
```
##### `get_key_from_shard_uid_and_hash`
```rust
fn get_key_from_shard_uid_and_hash(shard_uid: ShardUId, hash: &CryptoHash) -> [u8; 40]
```
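A sketch of the presumed 40-byte key layout: 4-byte version, then 4-byte shard id, then the 32-byte trie-node hash. The exact byte order and field layout are assumptions, not the verified nearcore encoding:

```rust
// Round-trippable sketch of the ShardUId-prefixed trie storage key.
#[derive(Clone, Copy, PartialEq, Debug)]
struct ShardUId {
    version: u32,
    shard_id: u32,
}

fn get_key_from_shard_uid_and_hash(shard_uid: ShardUId, hash: &[u8; 32]) -> [u8; 40] {
    let mut key = [0u8; 40];
    key[..4].copy_from_slice(&shard_uid.version.to_le_bytes());
    key[4..8].copy_from_slice(&shard_uid.shard_id.to_le_bytes());
    key[8..].copy_from_slice(hash);
    key
}

fn get_shard_uid_and_hash_from_key(key: &[u8; 40]) -> (ShardUId, [u8; 32]) {
    let version = u32::from_le_bytes(key[..4].try_into().unwrap());
    let shard_id = u32::from_le_bytes(key[4..8].try_into().unwrap());
    let mut hash = [0u8; 32];
    hash.copy_from_slice(&key[8..]);
    (ShardUId { version, shard_id }, hash)
}

fn main() {
    let uid = ShardUId { version: 1, shard_id: 3 };
    let hash = [7u8; 32];
    let key = get_key_from_shard_uid_and_hash(uid, &hash);
    let (uid2, hash2) = get_shard_uid_and_hash_from_key(&key);
    // Different versions yield different keys for the same shard_id,
    // which is exactly why ShardUId disambiguates shards across epochs.
    assert_eq!(uid, uid2);
    assert_eq!(hash, hash2);
    println!("ok");
}
```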
## Build New States
The following method in `Chain` will be added or modified to split a shard's current state into multiple states.
### `build_state_for_split_shards`
```rust
pub fn build_state_for_split_shards(&mut self, sync_hash: &CryptoHash, shard_id: ShardId) -> Result<(), Error>
```
builds states for the new shards that the shard `shard_id` will be split into.
After this function is finished, the states for the new shards should be ready in `ShardTries` to be accessed.
### `run_catchup`
```rust
pub fn run_catchup(...) {
...
match state_sync.run(
...
)? {
StateSyncResult::Unchanged => {}
StateSyncResult::Changed(fetch_block) => {...}
StateSyncResult::Completed => {
// build states for new shards if shards will change and we will track some of the new shards
if self.runtime_adapter.will_shards_change_next_epoch(epoch_id) {
let mut parent_shards = HashSet::new();
let (new_shards, mapping_to_parent_shards) = self.runtime_adapter.get_shards_next_epoch(epoch_id);
for shard_id in new_shards {
if self.runtime_adapter.will_care_about_shard(None, &sync_hash, shard_id, true) {
parent_shards.insert(mapping_to_parent_shards.get(shard_id)?);
}
}
for shard_id in parent_shards {
self.split_shards(me, &sync_hash, shard_id);
}
}
...
}
}
...
}
```
## Update States
### `split_state_changes`
```rust
split_state_changes(shard_id: ShardId, state_changes: &Vec<RawStateChangesWithTrieKey>) -> HashMap<ShardId, Vec<RawStateChangesWithTrieKey>>
```
splits state changes to be made to a current shard into changes that should be applied to the new shards. Note that this function call can take a long time. To avoid blocking the client actor from processing and producing blocks for the current epoch, it should be called from a separate thread. Unfortunately, as of now, catching up states and catching up blocks are both run in the client actor. They should be moved to a separate actor. However, that can be a separate project, although this NEP will depend on that project. In fact, the issue has already been discussed in [#3201](https://github.com/near/nearcore/issues/3201).
### `apply_chunks`
`apply_chunks` will be modified so that states of the new shards will be updated when processing chunks.
In `apply_chunks`, after processing each chunk, the state changes in `apply_results` are sorted into changes to new shards.
At the end, we apply these changes to the new shards.
```rust
fn apply_chunks(...) -> Result<(), Error> {
...
for (shard_id, (chunk_header, prev_chunk_header)) in
(block.chunks().iter().zip(prev_block.chunks().iter())).enumerate()
{
...
let apply_result = ...;
// split states to new shards
let changes_to_new_shards = self.split_state_changes(trie_changes);
// apply changes_to_new_shards to the new shards
for (new_shard_id, new_state_changes) in changes_to_new_shards {
// locate the state for the new shard
let trie = self.get_trie_for_shard(new_shard_id);
let chunk_extra =
self.chain_store_update.get_chunk_extra(&prev_block.hash(), new_shard_id)?.clone();
let mut state_update = TrieUpdate::new(trie.clone(), *chunk_extra.state_root());
// update the state
for state_change in new_state_changes {
state_update.set(state_change.trie_key, state_change.value);
}
state_update.commit(StateChangeCause::Resharding);
let (trie_changes, state_changes) = state_update.finalize()?;
// save the TrieChanges and ChunkExtra
self.chain_store_update.save_trie_changes(WrappedTrieChanges::new(
self.tries,
new_shard_id,
trie_changes,
state_changes,
*block.hash(),
));
self.chain_store_update.save_chunk_extra(
&block.hash(),
new_shard_id,
ChunkExtra::new(&trie_changes.new_root, CryptoHash::default(), Vec::new(), 0, 0, 0),
);
}
}
...
}
```
## Garbage Collection
The old states need to be garbage collected after the resharding finishes. The garbage collection algorithm today won't automatically handle that. (#TODO: why?)
Although we need to handle garbage collection eventually, it is not a pressing issue. Thus, we leave the discussion out of this NEP for now and will add a detailed plan later.
# Drawbacks
The drawback of this approach is that it will not work when challenges are enabled since challenges to the transition to the new states will be too large to construct or verify.
Thus, most of the change will likely be a one time use that only works for the Simple Nightshade transition, although part of the change involving `ShardId` may be reused in the future.
# Rationale and alternatives
- Why is this design the best in the space of possible designs?
- It is the best because its implementation is the simplest.
Considering we want to launch Simple Nightshade as soon as possible by Q4 2021 and we will not enable challenges any time soon, this is the best option we have.
- What other designs have been considered and what is the rationale for not choosing them?
- We have considered other designs that change states incrementally and keep state roots on chain to make it compatible with challenges.
However, the implementations of those approaches are overly complicated and do not fit into our timeline for launching Simple Nightshade.
- What is the impact of not doing this?
- The impact will be the delay of launching Simple Nightshade, or no launch at all.
# Unresolved questions
- What parts of the design do you expect to resolve through the NEP process before this gets merged?
- Garbage collection
- State Sync?
- What parts of the design do you expect to resolve through the implementation of this feature before stabilization?
- There might be small changes in the detailed implementations or specifications of some of the functions described above, but the overall structure will not be changed.
- What related issues do you consider out of scope for this NEP that could be addressed in the future independently of the solution that comes out of this NEP?
- One issue that is related to this NEP but will be resolved independently is how trie nodes are stored in the database.
Right now, it is a combination of `shard_id` and the node hash.
Part of the change proposed in this NEP regarding `ShardId` is because of this.
Plans on how to only store the node hash as keys are being discussed [here](https://github.com/near/nearcore/issues/4527), but it will happen after the Simple Nightshade migration since completely solving the issue will take some careful design and we want to prioritize launching Simple Nightshade for now.
- Another issue that is not part of this NEP but must be solved for this NEP to work is to move expensive computation related to state sync / catch up into a separate actor [#3201](https://github.com/near/nearcore/issues/3201).
- Lastly, we should also build a better mechanism to deal with changing protocol config. The current way of putting changing protocol config in the genesis config and changing how the genesis config file is parsed is not a long term solution.
# Future possibilities
## Extension
In the future, when challenges are enabled, resharding and state upgrade should be implemented on-chain.
## Affected Projects
-
## Pre-mortem
- Building and catching up new states takes longer than one epoch to finish.
- Protocol version is switched back to pre-Simple Nightshade
- Validators cannot track shards properly after resharding
- Genesis State
- Must load the correct `shard_version`
- ShardTracker?
================================================
FILE: neps/archive/README.md
================================================
# Proposals
This section contains the NEAR Enhancement Proposals (NEPs) that cover a fleshed out concept for NEAR. Before an idea is turned into a proposal, it will be fleshed out and discussed on the [NEAR Governance Forum](https://gov.near.org).
These subcategories are great places to start such a discussion:
- [Standards](https://gov.near.org/c/dev/standards/29) — examples might include new protocol standards, token standards, etc.
- [Proposals](https://gov.near.org/c/dev/proposals/68) — ecosystem proposals that may touch tooling, node experience, wallet usage, and so on.
Once an idea has been thoroughly discussed and vetted, a pull request should be made according to the instructions at the [NEP repository](https://github.com/near/NEPs).
The proposals shown in this section have been merged and exist to offer as much information as possible including historical motivations, drawbacks, approaches, future concerns, etc.
Once a proposal has been fully implemented it can be added as a specification, but will remain a proposal until that time.
================================================
FILE: neps/nep-0001.md
================================================
---
NEP: 1
Title: NEP Purpose and Guidelines
Authors: Bowen W. <bowen@near.org>, Austin Baggio <austin.baggio@near.org>, Ori A. <ori@near.org>, Vlad F. <frol@near.org>, Guillermo G. <guillermo@near.dev>;
Status: Approved
DiscussionsTo: https://github.com/near/NEPs/pull/333, https://github.com/near/NEPs/pull/619
Type: Developer Tools
Version: 2.0.0
Created: 2022-03-03
Last Updated: 2025-08-04
---
## Summary
NEAR Enhancement Proposals (NEPs) are design documents that describe standards for the NEAR platform, including core protocol specifications, contract standards, and wallet APIs. Each NEP provides concise technical specifications and the rationale behind the proposed enhancement.
Each NEP is championed by a community member, who builds consensus within the community and shepherds the NEP from ideation to completion. The NEP process is designed to be open and transparent, allowing anyone in the NEAR community to propose, discuss, and review ideas for improving the NEAR ecosystem.
All NEPs are stored as text files in a [versioned repository](https://github.com/near/NEPs), allowing for easy historical tracking.
## Motivation
The purpose of the NEP process is to give the community a way to propose, discuss, and document changes that impact the whole NEAR ecosystem in a structured manner. Given the complexity and number of participants involved across the ecosystem, a well-defined process helps ensure transparency, security, and stability.
## NEP Types
There are three kinds of NEPs:
1. A **Protocol** NEP describes a new feature of the NEAR protocol (e.g. [NEP-264](https://github.com/near/NEPs/blob/master/neps/nep-0264.md), [NEP-366](https://github.com/near/NEPs/blob/master/neps/nep-0366.md))
2. A **Contract Standards** NEP specifies NEAR smart contract interfaces for a reusable concept in the NEAR ecosystem (e.g. [NEP-141](https://github.com/near/NEPs/blob/master/neps/nep-0141.md), [NEP-171](https://github.com/near/NEPs/blob/master/neps/nep-0171.md))
3. A **Wallet Standards** NEP specifies ecosystem-wide APIs for Wallet implementations (e.g. [NEP-413](https://github.com/near/NEPs/blob/master/neps/nep-0413.md))
## Submit a NEP
Each NEP must have a champion who proposes a new idea, shepherds the discussions in the appropriate forums to build community consensus, proposes the NEP, and helps it progress toward completion.
### Start with ideation
Everyone in the community is welcome to propose, discuss, and review ideas to improve the NEAR protocol and standards. The NEP process begins with a new idea for the NEAR ecosystem.
Before submitting a new NEP, publicly check if your idea is original and relevant to the NEAR community. This saves time and avoids proposing something already discussed or unsuitable for most users.
- **Check prior proposals:** Many ideas for changing NEAR come up frequently. Please search the [issues](https://github.com/near/NEPs/issues) and NEPs in this repo before proposing something new.
- **Share the idea:** Submit a new [issue](https://github.com/near/NEPs/issues) explaining the problem you want to tackle, and your proposed solution.
- **Get feedback:** Share the issue to the appropriate community group:
- Wallet Group: https://nearbuilders.com/tg-wallet
- Protocol: https://near.zulipchat.com/
- Contract Standards: https://t.me/NEAR_Tools_Community_Group
### Submit a NEP Draft
Following the above initial discussions, the author willing to champion the NEP should submit the NEP Draft in the form of a `Draft Pull Request`:
1. Fork the [NEPs repository](https://github.com/near/NEPs).
2. Copy `nep-0000-template.md` to `neps/nep-xxxx.md` (do **not** assign a NEP number yet).
3. Fill in the NEP following the NEP template guidelines. For the Header Preamble, make sure to set the status as “Draft.”
4. Push this to your GitHub fork and submit a pull request.
5. Now that your NEP has an open pull request, use the pull request number to update your `0000` prefix. For example, if the PR is 305, the NEP should be `neps/nep-0305.md`.
6. Push this to your GitHub fork and submit a pull request. Mention the @near/nep-moderators in the comment and turn the PR into a "Ready for Review" state once you believe the NEP is ready for review.
## NEP Lifecycle
The NEP process begins when an author submits a [NEP draft](#submit-a-nep-draft). The NEP lifecycle consists of three stages: draft, review, and voting, with two possible outcomes: approval or rejection.
Throughout the process, various roles play a critical part in moving the proposal forward. Most of the activity happens asynchronously on the NEP within GitHub, where all the roles can communicate and collaborate on revisions and improvements to the proposal.

### NEP Stages
- **Draft:** The first formally tracked stage of a new NEP. This process begins once an author submits a draft proposal and the NEP moderator merges it into the NEP repo when properly formatted.
- **Review:** A NEP moderator marks a NEP as ready for Subject Matter Experts Review. If the NEP is not approved within two months, it is automatically rejected.
- **Voting:** This is the final voting period for a NEP. The working group will vote on whether to accept or reject the NEP. This period is limited to two weeks. If during this period necessary normative changes are required, the NEP will revert to Review.
The moderator, when moving a NEP to the review stage, should update the Pull Request description to include a review summary, for example:
```markdown
---
## NEP Status _(Updated by NEP moderators)_
SME reviews:
- [ ] Role1: @github-handle
- [ ] Role2: @github-handle
Contract Standards WG voting indications (❔ | :+1: | :-1: ):
- ❔ @github-handle
- ❔ ...
<Other> voting indications:
- ❔
- ❔
```
### NEP Outcomes
- **Approved:** If the working group votes to approve, they will move the NEP to Approved. Once approved, Standards NEPs exist in a state of finality and should only be updated to correct errata and add non-normative clarifications.
- **Rejected:** If the working group votes to reject, they will move the NEP to Rejected.
### NEP Roles and Responsibilities

**Author**<br />
_Anyone can participate_
The NEP author (or champion) is responsible for creating a NEP draft that follows the guidelines. They drive the NEP forward by actively participating in discussions and incorporating feedback. During the voting stage, they may present the NEP to the working group and community, and provide a final implementation with thorough testing and documentation once approved.

**Moderator**<br />
_Assigned by the working group_
The moderator is responsible for facilitating the process and validating that the NEP follows the guidelines. They do not assess the technical feasibility or write any part of the proposal. They provide comments if revisions are necessary and ensure that all roles are working together to progress the NEP forward. They also schedule and facilitate public voting calls.

**NEP Reviewer** (Subject Matter Experts)<br />
_Assigned by the working group_
The reviewer is responsible for reviewing the technical feasibility of a NEP and giving feedback to the author. While they do not have voting power, they play a critical role in providing their voting recommendations along with a summary of the benefits and concerns that were raised in the discussion. Their inputs help everyone involved make a transparent and informed decision.

**Approver** (Working Groups)<br />
_Selected by the Dev Gov DAO in the bootstrapping phase_
The working group is a selected committee of 3-7 recognized experts who are responsible for coordinating the public review and making decisions on a NEP in a fair and timely manner. There are multiple working groups, each one focusing on a specific ecosystem area, such as the Protocol or Wallet Standards. They assign reviewers to proposals, provide feedback to the author, and attend public calls to vote to approve or reject the NEP.
### NEP Communication
NEP discussions should happen asynchronously within the NEP’s public thread. This allows for broad participation and ensures transparency.
However, if a discussion becomes circular and could benefit from a synchronous conversation, any participants on a given NEP can suggest that the moderator schedules an ad hoc meeting. For example, if a reviewer and author have multiple rounds of comments, they may request a call. The moderator can help coordinate the call and post the registration link on the NEP. The person who requested the call should designate a note-taker to post a summary on the NEP after the call.
When a NEP gets to the final voting stage, the moderator will schedule a public working group meeting to discuss the NEP with the author and formalize the decision. The moderator will first coordinate a time with the author and working group members, and then post the meeting time and registration link on the NEP at least one week in advance.
All participants in the NEP process should maintain a professional and respectful code of conduct in all interactions. This includes communicating clearly and promptly and refraining from disrespectful or offensive language.
### NEP Playbook
1. Once an author [submits a NEP draft](#submit-a-nep-draft), the NEP moderators will review their pull request (PR) for structure, formatting, and other errors. Approval criteria are:
- The content is complete and technically sound. The moderators do not consider whether the NEP is likely or not to get accepted.
- The title accurately reflects the content.
- The language, spelling, grammar, sentence structure, and code style are correct and conformant.
2. If the NEP is not ready for approval, the moderators will send it back to the author with specific instructions in the PR. The moderators must complete the review within one week.
3. Once the moderators agree that the PR is ready for review, they will ask the approvers (working group members) to nominate a team of at least two reviewers (subject matter experts) to review the NEP. At least one working group member must explicitly tag the reviewers and comment: `"As a working group member, I'd like to nominate @SME-username and @SME-username as the Subject Matter Experts to review this NEP."` If the assigned reviewers feel that they lack the relevant expertise to fully review the NEP, they can ask the working group to re-assign the reviewers for the NEP.
4. The reviewers must finish the technical review within one week. Technical Review Guidelines:
- First, review the technical details of the proposals and assess their merit. If you have feedback, explicitly tag the author and comment: `"As the assigned Reviewer, I request from @author-username to [ask clarifying questions, request changes, or provide suggestions that are actionable.]."` It may take a couple of iterations to resolve any open comments.
- Second, once the reviewer believes that the NEP is close to the voting stage, explicitly tag the @near/nep-moderators and comment with your technical summary. The Technical Summary must include:
- A recommendation for the working group: `"As the assigned reviewer, I do not have any feedback for the author. I recommend moving this NEP forward and for the working group to [accept or reject] it based on [provide reasoning, including a sense of importance or urgency of this NEP]."` Please note that this is the reviewer's personal recommendation.
- A summary of benefits that surfaced in previous discussions. This should include a concise list of all the benefits that others raised, not just the ones that the reviewer personally agrees with.
- A summary of concerns or blockers, along with their current status and resolution. Again, this should reflect the collective view of all commenters, not just the reviewer's perspective.
5. The NEP author can make revisions and request further reviews from the reviewers. However, if a proposal is in the review stage for more than two months, the moderator will automatically reject it. To reopen the proposal, the author must restart the NEP process again.
6. Once both reviewers complete their technical summary, the moderators will notify the approvers (working group members) that the NEP is in the final comment period. The approvers must fully review the NEP within one week. Approver guidelines:
- First, read the NEP thoroughly. If you have feedback, explicitly tag the author and comment: `"As a working group member, I request from @author-username to [ask clarifying questions, request changes, or provide actionable suggestions.]."`
- Second, once the approver believes the NEP is close to the voting stage, explicitly comment with your voting indication: `"As a working group member, I lean towards [approving OR rejecting] this NEP based on [provide reasoning]."`
7. Once all the approvers indicate their voting indication, the moderator will review the voting indication for a 2/3 majority:
- If the votes lean toward rejection: The moderator will summarize the feedback and close the NEP.
- If the votes lean toward approval: The moderator will schedule a public call (see [NEP Communication](#nep-communication)) for the author to present the NEP and for the working group members to formalize the voting decision. If the working group members agree that the NEP is overall beneficial for the NEAR ecosystem and vote to approve it, then the proposal is considered accepted. After the call, the moderator will summarize the decision on the NEP.
8. The NEP author or other assignees will complete action items from the call. For example, the author will finalize the "Changelog" section on the NEP, which summarizes the benefits and concerns for future reference.
### Transferring NEP Ownership
While a NEP is worked on, it occasionally becomes necessary to transfer ownership of NEPs to a new author. In general, it is preferable to retain the original author as a co-author of the transferred NEP, but that is up to the original author. A good reason to transfer ownership is that the original author no longer has the time or interest in updating it or following through with the NEP process. A bad reason to transfer ownership is that the author does not agree with the direction of the NEP. One aim of the NEP process is to try to build consensus around a NEP, but if that is not possible, an author can submit a competing NEP.
If you are interested in assuming ownership of a NEP, you can also do this via pull request. Fork the NEP repository, modify the owner, and submit a pull request. In the PR description, tag the original author and provide a summary of the work that was previously done. Also clearly state the intent of the fork and the relationship of the new PR to the old one. For example: "Forked to address the remaining review comments in NEP \# since the original author does not have time to address them."
## What does a successful NEP look like?
Each NEP should be written in markdown format and follow the [NEP-0000 template](https://github.com/near/NEPs/blob/master/nep-0000-template.md) and include all the appropriate sections, which will make it easier for the NEP reviewers and community members to understand and provide feedback. The most successful NEPs are those that go through collective iteration, with authors who actively seek feedback and support from the community. Ultimately, a successful NEP is one that addresses a specific problem or needs within the NEAR ecosystem, is well-researched, and has the support of the community and ecosystem experts.
### Auxiliary Files
Images, diagrams, and auxiliary files should be included in a subdirectory of the assets folder for that NEP as follows: assets/nep-N (where N is to be replaced with the NEP number). When linking to an image in the NEP, use relative links such as `../assets/nep-1/image.png`
### Style Guide
#### NEP numbers
When referring to a NEP by number, it should be written in the hyphenated form NEP-X where X is the NEP's assigned number.
#### RFC 2119
NEPs are encouraged to follow [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt) for terminology and to insert the following at the beginning of the Specification section:
The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).
## NEP Maintenance
Generally, NEPs are not modifiable after reaching their final state. However, there are occasions when updating a NEP is necessary, such as when discovering a security vulnerability or identifying misalignment with a widely-used implementation. In such cases, an author may submit a NEP extension in a pull request with the proposed changes to an existing NEP document.
A NEP extension has a higher chance of approval if it introduces clear benefits to existing implementors and does not introduce breaking changes.
If an author believes that a new extension meets the criteria for its own separate NEP, it is better to submit a new NEP than to modify an existing one. Just make sure to specify any dependencies on certain NEPs.
## References
The content of this document was derived heavily from the PEP, BIP, Rust RFC, and EIP standards bootstrap documents:
- Klock, F et al. Rust: RFC-0002: RFC Process. https://github.com/rust-lang/rfcs/blob/master/text/0002-rfc-process.md
- Taaki, A. et al. Bitcoin Improvement Proposal: BIP:1, BIP Purpose and Guidelines. https://github.com/bitcoin/bips/blob/master/bip-0001.mediawiki
- Warsaw, B. et al. Python Enhancement Proposal: PEP Purpose and Guidelines. https://github.com/python/peps/blob/main/peps/pep-0001.rst
- Becze, M. et al. Ethereum Improvement Proposal EIP1: EIP Purpose and Guidelines. https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1.md
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
================================================
FILE: neps/nep-0021.md
================================================
---
NEP: 21
Title: Fungible Token Standard
Author: Evgeny Kuzyakov <ek@near.org>
Status: Final
DiscussionsTo: https://github.com/near/NEPs/pull/21
Type: Standards Track
Category: Contract
Created: 29-Oct-2019
SupersededBy: 141
---
## Summary
A standard interface for fungible tokens allowing for ownership, escrow and transfer, specifically targeting third-party marketplace integration.
## Motivation
NEAR Protocol uses an asynchronous sharded Runtime. This means the following:
Storage for different contracts and accounts can be located on different shards.
Two contracts can be executed at the same time in different shards.
While this increases the transaction throughput linearly with the number of shards, it also creates some challenges for cross-contract development.
For example, if one contract wants to query some information from the state of another contract (e.g. current balance), by the time the first contract receives the balance, the real balance may have changed.
This means that in an async system, a contract can't rely on the state of another contract and assume it won't change.
Instead, the contract can rely on a temporary partial lock of the state with a callback to act or unlock, but this requires careful engineering to avoid deadlocks.
## Rationale and alternatives
In this standard we're trying to avoid enforcing locks, since most actions can still be completed without locks by transferring ownership to an escrow account.
Prior art:
- ERC-20 standard
- NEP#4 NEAR NFT standard: nearprotocol/neps#4
- For latest lock proposals see Safes (#26)
## Specification
We should be able to do the following:
- Initialize contract once. The given total supply will be owned by the given account ID.
- Get the total supply.
- Transfer tokens to a new user.
- Set a given allowance for an escrow account ID.
- Escrow will be able to transfer up to this allowance from your account.
- Get current balance for a given account ID.
- Transfer tokens from one user to another.
- Get the current allowance for an escrow account on behalf of the balance owner. This should only be used in the UI, since a contract shouldn't rely on this temporary information.
There are a few concepts in the scenarios above:
- **Total supply**. It's the total number of tokens in circulation.
- **Balance owner**. An account ID that owns some amount of tokens.
- **Balance**. Some amount of tokens.
- **Transfer**. Action that moves some amount from one account to another account.
- **Escrow**. A different account from the balance owner who has permission to use some amount of tokens.
- **Allowance**. The amount of tokens an escrow account can use on behalf of the account owner.
Note that the precision is not part of the default standard, since it's not required to perform actions. The minimum
value is always 1 token.
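For illustration, converting a whole-token amount into the integer representation a contract stores — where the `decimals` precision is per-token metadata, not part of this standard — is a single multiplication:

```rust
// Converts a whole-token amount into the contract's integer representation,
// given the token's precision (number of decimals). The helper name is
// illustrative; the standard itself only deals in the integer amounts.
fn to_token_units(whole_tokens: u128, decimals: u32) -> u128 {
    whole_tokens * 10u128.pow(decimals)
}
```

With wBTC's `10^8` precision, 5 tokens become `500000000`, matching the transfer example below.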
### Simple transfer
Alice wants to send 5 wBTC tokens to Bob.
Assumptions:
- The wBTC token contract is `wbtc`.
- Alice's account is `alice`.
- Bob's account is `bob`.
- The precision on wBTC contract is `10^8`.
- The 5 tokens is `5 * 10^8` or as a number is `500000000`.
#### High-level explanation
Alice needs to issue one transaction to wBTC contract to transfer 5 tokens (multiplied by precision) to Bob.
#### Technical calls
1. `alice` calls `wbtc::transfer({"new_owner_id": "bob", "amount": "500000000"})`.
### Token deposit to a contract
Alice wants to deposit 1000 DAI tokens to a compound interest contract to earn extra tokens.
Assumptions:
- The DAI token contract is `dai`.
- Alice's account is `alice`.
- The compound interest contract is `compound`.
- The precision on DAI contract is `10^18`.
- The 1000 tokens is `1000 * 10^18` or as a number is `1000000000000000000000`.
- The compound contract can work with multiple token types.
#### High-level explanation
Alice needs to issue 2 transactions. The first one to `dai` to set an allowance for `compound` to be able to withdraw tokens from `alice`.
The second transaction is to the `compound` to start the deposit process. Compound will check that the DAI tokens are supported and will try to withdraw the desired amount of DAI from `alice`.
- If transfer succeeded, `compound` can increase local ownership for `alice` to 1000 DAI
- If transfer fails, `compound` doesn't need to do anything in current example, but maybe can notify `alice` of unsuccessful transfer.
#### Technical calls
1. `alice` calls `dai::set_allowance({"escrow_account_id": "compound", "allowance": "1000000000000000000000"})`.
1. `alice` calls `compound::deposit({"token_contract": "dai", "amount": "1000000000000000000000"})`. During the `deposit` call, `compound` does the following:
1. makes async call `dai::transfer_from({"owner_id": "alice", "new_owner_id": "compound", "amount": "1000000000000000000000"})`.
1. attaches a callback `compound::on_transfer({"owner_id": "alice", "token_contract": "dai", "amount": "1000000000000000000000"})`.
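The balance and allowance bookkeeping that `dai` performs during `transfer_from` can be modeled off-chain as follows. This is a plain-Rust sketch with `String` account ids and in-memory maps, not the actual near-sdk contract:

```rust
use std::collections::HashMap;

// Minimal in-memory model of the balance/allowance rules in this standard.
struct FungibleToken {
    balances: HashMap<String, u128>,
    // (owner, escrow) -> allowance
    allowances: HashMap<(String, String), u128>,
}

impl FungibleToken {
    fn new(owner: &str, total_supply: u128) -> Self {
        let mut balances = HashMap::new();
        balances.insert(owner.to_string(), total_supply);
        FungibleToken { balances, allowances: HashMap::new() }
    }

    fn set_allowance(&mut self, owner: &str, escrow: &str, allowance: u128) {
        self.allowances.insert((owner.to_string(), escrow.to_string()), allowance);
    }

    // `caller` plays the role of predecessor_id: when it differs from
    // `owner`, the transfer spends the caller's allowance on owner's account.
    fn transfer_from(&mut self, caller: &str, owner: &str, new_owner: &str, amount: u128)
        -> Result<(), String>
    {
        if caller != owner {
            let key = (owner.to_string(), caller.to_string());
            let allowance = self.allowances.get(&key).copied().unwrap_or(0);
            if allowance < amount {
                return Err("not enough allowance".to_string());
            }
            self.allowances.insert(key, allowance - amount);
        }
        let balance = self.balances.get(owner).copied().unwrap_or(0);
        if balance < amount {
            return Err("not enough balance".to_string());
        }
        self.balances.insert(owner.to_string(), balance - amount);
        *self.balances.entry(new_owner.to_string()).or_insert(0) += amount;
        Ok(())
    }
}
```

Note how the allowance is decremented before the balance moves, so a second withdrawal by the escrow beyond the granted allowance fails even if the owner still has funds.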
### Multi-token swap on DEX
Charlie wants to exchange his wLTC to wBTC on decentralized exchange contract. Alex wants to buy wLTC and has 80 wBTC.
Assumptions
- The wLTC token contract is `wltc`.
- The wBTC token contract is `wbtc`.
- The DEX contract is `dex`.
- Charlie's account is `charlie`.
- Alex's account is `alex`.
- The precision on both tokens contract is `10^8`.
- The amount of 9001 wLTC tokens Alex wants is `9001 * 10^8` or as a number is `900100000000`.
- The 80 wBTC tokens is `80 * 10^8` or as a number is `8000000000`.
- Charlie has 1000000 wLTC tokens which is `1000000 * 10^8` or as a number is `100000000000000`
- Dex contract already has an open order to sell 80 wBTC tokens by `alex` towards 9001 wLTC.
- Without a Safes implementation, the DEX has to act as an escrow and hold the funds of both users before it can do an exchange.
#### High-level explanation
Let's first set up an open order by Alex on the DEX. It's similar to the `Token deposit to a contract` example above.
- Alex sets an allowance on wBTC to DEX.
- Alex calls deposit on DEX for wBTC.
- Alex calls DEX to make a new sell order.
Then Charlie comes and decides to fulfill the order by selling his wLTC to Alex on DEX.
Charlie calls the DEX:
- Charlie sets the allowance on wLTC to DEX.
- Charlie calls deposit on DEX for wLTC.
- Charlie then calls DEX to take the order from Alex.
When called, DEX makes two async transfer calls to exchange the corresponding tokens.
- DEX calls wLTC to transfer tokens from DEX to Alex.
- DEX calls wBTC to transfer tokens from DEX to Charlie.
#### Technical calls
1. `alex` calls `wbtc::set_allowance({"escrow_account_id": "dex", "allowance": "8000000000"})`.
1. `alex` calls `dex::deposit({"token": "wbtc", "amount": "8000000000"})`.
1. `dex` calls `wbtc::transfer_from({"owner_id": "alex", "new_owner_id": "dex", "amount": "8000000000"})`
1. `alex` calls `dex::trade({"have": "wbtc", "have_amount": "8000000000", "want": "wltc", "want_amount": "900100000000"})`.
1. `charlie` calls `wltc::set_allowance({"escrow_account_id": "dex", "allowance": "100000000000000"})`.
1. `charlie` calls `dex::deposit({"token": "wltc", "amount": "100000000000000"})`.
1. `dex` calls `wltc::transfer_from({"owner_id": "charlie", "new_owner_id": "dex", "amount": "100000000000000"})`
1. `charlie` calls `dex::trade({"have": "wltc", "have_amount": "900100000000", "want": "wbtc", "want_amount": "8000000000"})`.
- `dex` calls `wbtc::transfer({"new_owner_id": "charlie", "amount": "8000000000"})`
- `dex` calls `wltc::transfer({"new_owner_id": "alex", "amount": "900100000000"})`
## Reference Implementation
The full implementation in Rust can be found here: https://github.com/near/near-sdk-rs/blob/master/examples/fungible-token/ft/src/lib.rs
NOTES:
- All amounts, balances and allowances are limited by `U128` (max value `2**128 - 1`).
- Token standard uses JSON for serialization of arguments and results.
- Amounts in arguments and results are serialized as base-10 strings, e.g. `"100"`. This is done to avoid
the JSON limitation of a max integer value of `2**53`.
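The serialization note above can be demonstrated directly. This sketch (illustrative, not part of the NEP) shows why amounts travel as base-10 strings: JSON numbers are IEEE-754 doubles, so integers above `2**53 - 1` silently lose precision, while a string round-trips exactly and converts losslessly to `BigInt`:

```ts
// A u128-scale amount embedded as a bare JSON number loses precision:
// 10000000000000000001 is not representable as a double and rounds to 1e19.
const asNumber = JSON.parse('{"amount": 10000000000000000001}').amount;

// The same amount as a base-10 string survives parsing exactly and can be
// handled with BigInt arithmetic on either side of the contract boundary.
const asString = JSON.parse('{"amount": "10000000000000000001"}').amount;
const exact: bigint = BigInt(asString);
```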
Interface:
```rust
/******************/
/* CHANGE METHODS */
/******************/
/// Sets the `allowance` for `escrow_account_id` on the account of the caller of this contract
/// (`predecessor_id`) who is the balance owner.
pub fn set_allowance(&mut self, escrow_account_id: AccountId, allowance: U128);
/// Transfers the `amount` of tokens from `owner_id` to the `new_owner_id`.
/// Requirements:
/// * `amount` should be a positive integer.
/// * `owner_id` should have a balance greater than or equal to the transfer `amount`.
/// * If this function is called by an escrow account (`owner_id != predecessor_account_id`),
/// then the allowance of the caller of the function (`predecessor_account_id`) on
/// the account of `owner_id` should be greater than or equal to the transfer `amount`.
pub fn transfer_from(&mut self, owner_id: AccountId, new_owner_id: AccountId, amount: U128);
/// Transfers `amount` of tokens from the caller of the contract (`predecessor_id`) to
/// `new_owner_id`.
/// Acts the same way as `transfer_from` with `owner_id` equal to the caller of the contract
/// (`predecessor_id`).
pub fn transfer(&mut self, new_owner_id: AccountId, amount: U128);
/****************/
/* VIEW METHODS */
/****************/
/// Returns total supply of tokens.
pub fn get_total_supply(&self) -> U128;
/// Returns balance of the `owner_id` account.
pub fn get_balance(&self, owner_id: AccountId) -> U128;
/// Returns current allowance of `escrow_account_id` for the account of `owner_id`.
///
/// NOTE: Other contracts should not rely on this information, because by the moment a contract
/// receives this information, the allowance may already be changed by the owner.
/// So this method should only be used on the front-end to see the current allowance.
pub fn get_allowance(&self, owner_id: AccountId, escrow_account_id: AccountId) -> U128;
```
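The doc comments on `transfer_from` above boil down to a small predicate. Here is a minimal sketch in TypeScript (the names `allowances`, `setAllowance`, and `canTransferFrom` are invented for illustration; this is not the reference implementation) of the allowance rule an escrow caller must satisfy:

```ts
// Allowance bookkeeping keyed by "owner:escrow", mirroring set_allowance.
const allowances = new Map<string, bigint>();

function setAllowance(owner: string, escrow: string, allowance: bigint): void {
  allowances.set(`${owner}:${escrow}`, allowance);
}

// Returns whether transfer_from's requirements hold for this caller.
function canTransferFrom(owner: string, caller: string, amount: bigint): boolean {
  if (amount <= 0n) return false;        // amount must be a positive integer
  if (owner === caller) return true;     // owner moves its own tokens directly
  const allowance = allowances.get(`${owner}:${caller}`) ?? 0n;
  return allowance >= amount;            // escrow needs a sufficient allowance
}
```

A real contract would additionally check the owner's balance and then decrement both the balance and the allowance atomically.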
## Drawbacks
- The current interface doesn't include minting, precision (decimals), or naming. These should be added as extensions, e.g. a Precision extension.
- It's not possible to exchange tokens without transferring them to escrow first.
- It's not possible to transfer tokens to a contract with a single transaction without setting the allowance first.
It would become possible if we introduced a `transfer_with` function that transfers tokens and calls the escrow contract. It would need to handle the result of the execution, and contracts would have to be aware of this API.
## Future possibilities
- Support for multiple token types
- Minting and burning
- Precision, naming and short token name.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
================================================
FILE: neps/nep-0141.md
================================================
---
NEP: 141
Title: Fungible Token Standard
Author: Evgeny Kuzyakov <ek@near.org>, Robert Zaremba <@robert-zaremba>, @oysterpack
Status: Final
DiscussionsTo: https://github.com/near/NEPs/issues/141
Type: Standards Track
Category: Contract
Created: 03-Mar-2022
Replaces: 21
Requires: 297
---
## Summary
A standard interface for fungible tokens that allows for a normal transfer as well as a transfer and method call in a single transaction. The [storage standard][Storage Management] addresses the needs (and security) of storage staking.
The [fungible token metadata standard][FT Metadata] provides the fields needed for ergonomics across dApps and marketplaces.
## Motivation
NEAR Protocol uses an asynchronous, sharded runtime. This means the following:
- Storage for different contracts and accounts can be located on different shards.
- Two contracts can be executed at the same time in different shards.
While this increases the transaction throughput linearly with the number of shards, it also creates some challenges for cross-contract development. For example, if one contract wants to query some information from the state of another contract (e.g. current balance), by the time the first contract receives the balance the real balance can change. In such an async system, a contract can't rely on the state of another contract and assume it's not going to change.
Instead, the contract can rely on a temporary partial lock of the state with a callback to act or unlock, but this requires careful engineering to avoid deadlocks. In this standard we're trying to avoid enforcing locks. A typical approach to this problem is to include an escrow system with allowances. This approach was initially developed for [NEP-21](https://github.com/near/NEPs/pull/21), which is similar to the Ethereum ERC-20 standard. There are a few issues with using an escrow as the only avenue to pay for a service with a fungible token. This frequently requires more than one transaction for common scenarios where fungible tokens are given as payment with the expectation that a method will subsequently be called.
For example, an oracle contract might be paid in fungible tokens. A client contract that wishes to use the oracle must either increase the escrow allowance before each request to the oracle contract, or allocate a large allowance that covers multiple calls. Both have drawbacks and ultimately it would be ideal to be able to send fungible tokens and call a method in a single transaction. This concern is addressed in the `ft_transfer_call` method. The power of this comes from the receiver contract working in concert with the fungible token contract in a secure way. That is, if the receiver contract abides by the standard, a single transaction may transfer and call a method.
Note: there is no reason why an escrow system cannot be included in a fungible token's implementation, but it is simply not necessary in the core standard. Escrow logic should be moved to a separate contract to handle that functionality. One reason for this is because the [Rainbow Bridge](https://near.org/blog/eth-near-rainbow-bridge/) will be transferring fungible tokens from Ethereum to NEAR, where the token locker (a factory) will be using the fungible token core standard.
Prior art:
- [ERC-20 standard](https://eips.ethereum.org/EIPS/eip-20)
- NEP#4 NEAR NFT standard: [near/neps#4](https://github.com/near/neps/pull/4)
Learn about NEP-141:
- [Figment Learning Pathway](https://web.archive.org/web/20220621055335/https://learn.figment.io/tutorials/stake-fungible-token)
## Specification
### Guide-level explanation
We should be able to do the following:
- Initialize contract once. The given total supply will be owned by the given account ID.
- Get the total supply.
- Transfer tokens to a new user.
- Transfer tokens from one user to another.
- Transfer tokens to a contract, have the receiver contract call a method and "return" any fungible tokens not used.
- Remove state for the key/value pair corresponding with a user's account, withdrawing a nominal balance of Ⓝ that was used for storage.
There are a few concepts in the scenarios above:
- **Total supply**: the total number of tokens in circulation.
- **Balance owner**: an account ID that owns some amount of tokens.
- **Balance**: an amount of tokens.
- **Transfer**: an action that moves some amount from one account to another account, either an externally owned account or a contract account.
- **Transfer and call**: an action that moves some amount from one account to a contract account where the receiver calls a method.
- **Storage amount**: the amount of storage used for an account to be "registered" in the fungible token. This amount is denominated in Ⓝ, not bytes, and represents the [storage staked](https://docs.near.org/docs/concepts/storage-staking).
Note that precision (the number of decimal places supported by a given token) is not part of this core standard, since it's not required to perform actions. The minimum value is always 1 token. See the [Fungible Token Metadata Standard][FT Metadata] to learn how to support precision/decimals in a standardized way.
Given that multiple users will use a Fungible Token contract, and their activity will result in an increased [storage staking](https://docs.near.org/docs/concepts/storage-staking) burden for the contract's account, this standard is designed to interoperate nicely with [the Account Storage standard][Storage Management] for storage deposits and refunds.
### Example scenarios
#### Simple transfer
Alice wants to send 5 wBTC tokens to Bob.
Assumptions
- The wBTC token contract is `wbtc`.
- Alice's account is `alice`.
- Bob's account is `bob`.
- The precision ("decimals" in the metadata standard) on wBTC contract is `10^8`.
- The 5 tokens is `5 * 10^8` or as a number is `500000000`.
##### High-level explanation
Alice needs to issue one transaction to wBTC contract to transfer 5 tokens (multiplied by precision) to Bob.
##### Technical calls
1. `alice` calls `wbtc::ft_transfer({"receiver_id": "bob", "amount": "500000000"})`.
#### Token deposit to a contract
Alice wants to deposit 1000 DAI tokens to a compound interest contract to earn extra tokens.
##### Assumptions
- The DAI token contract is `dai`.
- Alice's account is `alice`.
- The compound interest contract is `compound`.
- The precision ("decimals" in the metadata standard) on DAI contract is `10^18`.
- The 1000 tokens is `1000 * 10^18` or as a number is `1000000000000000000000`.
- The compound contract can work with multiple token types.
<details>
<summary>For this example, you may expand this section to see how a previous fungible token standard using escrows would deal with the scenario.</summary>
##### High-level explanation (NEP-21 standard)
Alice needs to issue 2 transactions. The first one to `dai` to set an allowance for `compound` to be able to withdraw tokens from `alice`.
The second transaction is to the `compound` to start the deposit process. Compound will check that the DAI tokens are supported and will try to withdraw the desired amount of DAI from `alice`.
- If transfer succeeded, `compound` can increase local ownership for `alice` to 1000 DAI
- If transfer fails, `compound` doesn't need to do anything in current example, but maybe can notify `alice` of unsuccessful transfer.
##### Technical calls (NEP-21 standard)
1. `alice` calls `dai::set_allowance({"escrow_account_id": "compound", "allowance": "1000000000000000000000"})`.
2. `alice` calls `compound::deposit({"token_contract": "dai", "amount": "1000000000000000000000"})`. During the `deposit` call, `compound` does the following:
1. makes async call `dai::transfer_from({"owner_id": "alice", "new_owner_id": "compound", "amount": "1000000000000000000000"})`.
2. attaches a callback `compound::on_transfer({"owner_id": "alice", "token_contract": "dai", "amount": "1000000000000000000000"})`.
</details>
##### High-level explanation
Alice needs to issue 1 transaction, as opposed to 2 with a typical escrow workflow.
##### Technical calls
1. `alice` calls `dai::ft_transfer_call({"receiver_id": "compound", "amount": "1000000000000000000000", "msg": "invest"})`. During the `ft_transfer_call` call, `dai` does the following:
1. makes async call `compound::ft_on_transfer({"sender_id": "alice", "amount": "1000000000000000000000", "msg": "invest"})`.
2. attaches a callback `dai::ft_resolve_transfer({"sender_id": "alice", "receiver_id": "compound", "amount": "1000000000000000000000"})`.
3. compound finishes investing, using all attached fungible tokens `compound::invest({…})` then returns the value of the tokens that weren't used or needed. In this case, Alice asked for the tokens to be invested, so it will return 0. (In some cases a method may not need to use all the fungible tokens, and would return the remainder.)
4. the `dai::ft_resolve_transfer` function receives success/failure of the promise. If success, it will contain the unused tokens. Then the `dai` contract uses simple arithmetic (not needed in this case) and updates the balance for Alice.
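The "simple arithmetic" in step 4 can be sketched as follows. This is a hypothetical illustration (the `balances` map and `ftResolveTransfer` function are invented names, not the standard's API): after `ft_on_transfer` resolves, the token contract moves any unused portion back from the receiver to the sender.

```ts
// Balances after ft_transfer_call already credited the receiver:
// Alice sent 1000 DAI (10^21 in base units) to compound.
const balances = new Map<string, bigint>([
  ["alice", 0n],
  ["compound", 10n ** 21n],
]);

// Refund the unused portion reported by ft_on_transfer.
function ftResolveTransfer(sender: string, receiver: string, amount: bigint, unused: bigint): void {
  const refund = unused > amount ? amount : unused; // cannot refund more than was sent
  balances.set(receiver, (balances.get(receiver) ?? 0n) - refund);
  balances.set(sender, (balances.get(sender) ?? 0n) + refund);
}
```

In this scenario compound invests everything, so `unused` is `0n` and the balances are left unchanged.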
#### Swapping one token for another via an Automated Market Maker (AMM) like Uniswap
Alice wants to swap 5 wrapped NEAR (wNEAR) for BNNA tokens at current market rate, with less than 2% slippage.
##### Assumptions
- The wNEAR token contract is `wnear`.
- Alice's account is `alice`.
- The AMM's contract is `amm`.
- BNNA's contract is `bnna`.
- The precision ("decimals" in the metadata standard) on wNEAR contract is `10^24`.
- The 5 tokens is `5 * 10^24` or as a number is `5000000000000000000000000`.
##### High-level explanation
Alice needs to issue one transaction to wNEAR contract to transfer 5 tokens (multiplied by precision) to `amm`, specifying her desired action (swap), her destination token (BNNA) & minimum slippage (<2%) in `msg`.
Alice will probably make this call via a UI that knows how to construct `msg` in a way the `amm` contract will understand. However, it's possible that the `amm` contract itself may provide view functions which take desired action, destination token, & slippage as input and return data ready to pass to `msg` for `ft_transfer_call`. For the sake of this example, let's say `amm` implements a view function called `ft_data_to_msg`.
Alice needs to attach one yoctoNEAR. This will result in her seeing a confirmation page in her preferred NEAR wallet. NEAR wallet implementations will (eventually) attempt to provide useful information in this confirmation page, so receiver contracts should follow a strong convention in how they format `msg`. We will update this documentation with a recommendation, as community consensus emerges.
Altogether then, Alice may take two steps, though the first may be a background detail of the app she uses.
##### Technical calls
1. View `amm::ft_data_to_msg({ action: "swap", destination_token: "bnna", min_slip: 2 })`. Using [NEAR CLI](https://docs.near.org/docs/tools/near-cli):
```shell
near view amm ft_data_to_msg \
'{"action": "swap", "destination_token": "bnna", "min_slip": 2}'
```
Then Alice (or the app she uses) will hold onto the result and use it in the next step. Let's say this result is `"swap:bnna,2"`.
2. Call `wnear::ft_transfer_call`. Using NEAR CLI:
```shell
near call wnear ft_transfer_call \
'{"receiver_id": "amm", "amount": "5000000000000000000000000", "msg": "swap:bnna,2"}' \
--accountId alice --depositYocto 1
```
During the `ft_transfer_call` call, `wnear` does the following:
1. Decrease the balance of `alice` and increase the balance of `amm` by 5000000000000000000000000.
2. Makes async call `amm::ft_on_transfer({"sender_id": "alice", "amount": "5000000000000000000000000", "msg": "swap:bnna,2"})`.
3. Attaches a callback `wnear::ft_resolve_transfer({"sender_id": "alice", "receiver_id": "amm", "amount": "5000000000000000000000000"})`.
4. `amm` finishes the swap, either successfully swapping all 5 wNEAR within the desired slippage, or failing.
5. The `wnear::ft_resolve_transfer` function receives success/failure of the promise. Assuming `amm` implements all-or-nothing transfers (as in, it will not transfer less-than-the-specified amount in order to fulfill the slippage requirements), `wnear` will do nothing at this point if the swap succeeded, or it will decrease the balance of `amm` and increase the balance of `alice` by 5000000000000000000000000.
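The `"swap:bnna,2"` format is invented for this example, so any parser for it is likewise hypothetical. A sketch of what the `amm` contract's `ft_on_transfer` might do with such a `msg` (names and format are assumptions, not a standardized convention):

```ts
interface SwapRequest {
  action: string;            // e.g. "swap"
  destinationToken: string;  // e.g. "bnna"
  minSlipPercent: number;    // e.g. 2
}

// Parse the compact "action:token,slip" msg produced by ft_data_to_msg.
function parseMsg(msg: string): SwapRequest {
  const [action, rest] = msg.split(":");
  const [destinationToken, slip] = rest.split(",");
  return { action, destinationToken, minSlipPercent: Number(slip) };
}
```

A richer contract might accept JSON in `msg` instead; the standard deliberately leaves the format up to the receiver.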
### Reference-level explanation
NOTES:
- All amounts, balances and allowance are limited by `U128` (max value `2**128 - 1`).
- Token standard uses JSON for serialization of arguments and results.
- Amounts in arguments and results are serialized as base-10 strings, e.g. `"100"`. This is done to avoid the JSON limitation of a max integer value of `2**53`.
- The contract must track the change in storage when adding to and removing from collections. This is not included in this core fungible token standard but instead in the [Storage Standard][Storage Management].
- To prevent the deployed contract from being modified or deleted, it should not have any access keys on its account.
#### Interface
##### ft_transfer
Simple transfer to a receiver.
Requirements:
- Caller of the method must attach a deposit of 1 yoctoⓃ for security purposes
- Caller must have a balance greater than or equal to the `amount` being requested
Arguments:
- `receiver_id`: the valid NEAR account receiving the fungible tokens.
- `amount`: the number of tokens to transfer, wrapped in quotes and treated
like a string, although the number will be stored as an unsigned integer
with 128 bits.
- `memo` (optional): for use cases that may benefit from indexing or
providing information for a transfer.
```ts
function ft_transfer(
receiver_id: string,
amount: string,
memo: string | null
): void;
```
##### ft_transfer_call
Transfer tokens and call a method on a receiver contract. A successful
workflow will end in a success execution outcome to the callback on the same
contract at the method `ft_resolve_transfer`.
You can think of this as being similar to attaching native NEAR tokens to a
function call. It allows you to attach any Fungible Token in a call to a
receiver contract.
Requirements:
- Caller of the method must attach a deposit of 1 yoctoⓃ for security
purposes
- Caller must have a balance greater than or equal to the `amount` being requested
- The receiving contract must implement `ft_on_transfer` according to the
standard. If it does not, FT contract's `ft_resolve_transfer` MUST deal
with the resulting failed cross-contract call and roll back the transfer.
- Contract MUST implement the behavior described in `ft_resolve_transfer`
Arguments:
- `receiver_id`: the valid NEAR account receiving the fungible tokens.
- `amount`: the number of tokens to transfer, wrapped in quotes and treated
like a string, although the number will be stored as an unsigned integer
with 128 bits.
- `memo` (optional): for use cases that may benefit from indexing or
providing information for a transfer.
- `msg`: specifies information needed by the receiving contract in
order to properly handle the transfer. Can indicate both a function to call and the parameters to pass to that function.
```ts
function ft_transfer_call(
receiver_id: string,
amount: string,
memo: string | null,
msg: string
): Promise;
```
##### ft_on_transfer
This function is implemented on the receiving contract.
As mentioned, the `msg` argument contains information necessary for the receiving contract to know how to process the request. This may include method names and/or arguments.
Returns a value, or a promise which resolves with a value. The value is the
number of unused tokens in string form. For instance, if `amount` is 10 but only 9 are
needed, it will return "1".
```ts
function ft_on_transfer(sender_id: string, amount: string, msg: string): string;
```
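The oracle scenario from the Motivation section gives a concrete shape to this refund convention. Below is a hedged sketch of a receiver's `ft_on_transfer` (the fixed `FEE` and the refund-on-underpayment behavior are invented for illustration; a real receiver defines its own policy):

```ts
const FEE: bigint = 9n; // hypothetical per-request price, in token base units

function ft_on_transfer(sender_id: string, amount: string, msg: string): string {
  const attached = BigInt(amount);
  if (attached < FEE) {
    // This sketch simply refunds everything; a real contract might
    // instead panic so the whole transfer is rolled back.
    return amount;
  }
  // ...perform the service described by `msg` for `sender_id`...
  const unused = attached - FEE;
  return unused.toString(); // e.g. amount "10" with FEE 9 returns "1"
}
```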
### View Methods
##### ft_total_supply
Returns the total supply of fungible tokens as a string representing the value as an unsigned 128-bit integer.
```ts
function ft_total_supply(): string;
```
##### ft_balance_of
Returns the balance of an account in string form representing a value as an unsigned 128-bit integer. If the account doesn't exist, it must return `"0"`.
```ts
function ft_balance_of(account_id: string): string;
```
##### ft_resolve_transfer
The following behavior is required, but contract authors may name this function something other than the conventional `ft_resolve_transfer` used here.
Finalize an `ft_transfer_call` chain of cross-contract calls.
The `ft_transfer_call` process:
1. Sender calls `ft_transfer_call` on FT contract
2. FT contract transfers `amount` tokens from sender to receiver
3. FT contract calls `ft_on_transfer` on receiver contract
4. [receiver contract may make other cross-contract calls]
5. FT contract resolves promise chain with `ft_resolve_transfer`, and may refund sender some or all of original `amount`
Requirements:
- Contract MUST forbid calls to this function by any account except self
- If promise chain failed, contract MUST revert token transfer
- If promise chain resolves with a non-zero amount given as a string,
contract MUST return this amount of tokens to `sender_id`
Arguments:
- `sender_id`: the sender of `ft_transfer_call`
- `receiver_id`: the `receiver_id` argument given to `ft_transfer_call`
- `amount`: the `amount` argument given to `ft_transfer_call`
Returns a string representing an unsigned 128-bit integer: the total number
of tokens spent by `sender_id`. Example: if the sender
calls `ft_transfer_call({ "amount": "100" })`, but `receiver_id` only uses
80, `ft_on_transfer` will resolve with `"20"`, and `ft_resolve_transfer`
will return `"80"`.
```ts
function ft_resolve_transfer(
sender_id: string,
receiver_id: string,
amount: string
): string;
```
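The resolution rules above (revert on failure, refund the unused portion on success) reduce to one small function. A sketch with invented names, not the standard's required implementation:

```ts
// Outcome of the ft_on_transfer promise as seen by ft_resolve_transfer.
type PromiseResult = { ok: true; unused: bigint } | { ok: false };

// Returns the amount actually spent by the sender, i.e. the value
// ft_resolve_transfer reports back.
function spentAmount(amount: bigint, result: PromiseResult): bigint {
  if (!result.ok) return 0n; // promise chain failed: revert the full transfer
  const refund = result.unused > amount ? amount : result.unused;
  return amount - refund;    // tokens the receiver actually kept
}
```

With the document's example, `spentAmount(100n, { ok: true, unused: 20n })` yields `80n`, matching the `"80"` that `ft_resolve_transfer` returns.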
### Events
Standard interfaces for FT contract actions that extend [NEP-297](nep-0297.md)
NEAR and third-party applications need to track `mint`, `transfer`, `burn` events for all FT-driven apps consistently.
This extension addresses that.
Keep in mind that applications, including NEAR Wallet, could require implementing additional methods, such as [`ft_metadata`][FT Metadata], to display the FTs correctly.
### Event Interface
Fungible Token Events MUST have `standard` set to `"nep141"`, `version` set to `"1.0.0"`, `event` set to one of `ft_mint`, `ft_burn`, or `ft_transfer`, and `data` of the matching type: `FtMintLog[] | FtTransferLog[] | FtBurnLog[]`:
```ts
interface FtEventLogData {
standard: "nep141";
version: "1.0.0";
event: "ft_mint" | "ft_burn" | "ft_transfer";
data: FtMintLog[] | FtTransferLog[] | FtBurnLog[];
}
```
```ts
// An event log to capture tokens minting
// Arguments
// * `owner_id`: "account.near"
// * `amount`: the number of tokens to mint, wrapped in quotes and treated
// like a string, although the number will be stored as an unsigned integer
// with 128 bits.
// * `memo`: optional message
interface FtMintLog {
owner_id: string;
amount: string;
memo?: string;
}
// An event log to capture tokens burning
// Arguments
// * `owner_id`: owner of tokens to burn
// * `amount`: the number of tokens to burn, wrapped in quotes and treated
// like a string, although the number will be stored as an unsigned integer
// with 128 bits.
// * `memo`: optional message
interface FtBurnLog {
owner_id: string;
amount: string;
memo?: string;
}
// An event log to capture tokens transfer
// Arguments
// * `old_owner_id`: "owner.near"
// * `new_owner_id`: "receiver.near"
// * `amount`: the number of tokens to transfer, wrapped in quotes and treated
// like a string, although the number will be stored as an unsigned integer
// with 128 bits.
// * `memo`: optional message
interface FtTransferLog {
old_owner_id: string;
new_owner_id: string;
amount: string;
memo?: string;
}
```
### Event Examples
Batch mint:
```js
EVENT_JSON:{
"standard": "nep141",
"version": "1.0.0",
"event": "ft_mint",
"data": [
{"owner_id": "foundation.near", "amount": "500"}
]
}
```
Batch transfer:
```js
EVENT_JSON:{
"standard": "nep141",
"version": "1.0.0",
"event": "ft_transfer",
"data": [
{"old_owner_id": "from.near", "new_owner_id": "to.near", "amount": "42", "memo": "hi hello bonjour"},
{"old_owner_id": "user1.near", "new_owner_id": "user2.near", "amount": "7500"}
]
}
```
Batch burn:
```js
EVENT_JSON:{
"standard": "nep141",
"version": "1.0.0",
"event": "ft_burn",
"data": [
{"owner_id": "foundation.near", "amount": "100"},
]
}
```
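The event examples above share one mechanical step: the JSON payload is prefixed with the literal `EVENT_JSON:` and emitted as a single log line (per NEP-297). A sketch of that serialization (the `formatTransferEvent` helper is illustrative, not a standard API):

```ts
interface FtTransferLog {
  old_owner_id: string;
  new_owner_id: string;
  amount: string;
  memo?: string;
}

// Build the single log line a contract would emit for a (batch) transfer.
function formatTransferEvent(data: FtTransferLog[]): string {
  return "EVENT_JSON:" + JSON.stringify({
    standard: "nep141",
    version: "1.0.0",
    event: "ft_transfer",
    data,
  });
}
```

Indexers can then recognize the prefix, strip it, and `JSON.parse` the remainder.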
### Further Event Methods
Note that the examples above cover two different kinds of events:
1. Events that are not specified in the FT Standard (`ft_mint`, `ft_burn`)
2. An event that is covered by the [FT Core Standard][FT Core] (`ft_transfer`)
Please feel free to open pull requests for extending the events standard detailed here as needs arise.
## Reference Implementation
The `near-contract-standards` cargo package of the [Near Rust SDK](https://github.com/near/near-sdk-rs) contains the following implementations of NEP-141:
- [Minimum Viable Interface](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/fungible_token/core.rs)
- The [Core Fungible Token Implementation](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/fungible_token/core_impl.rs)
- [Optional Fungible Token Events](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/fungible_token/events.rs)
- [Core Fungible Token tests](https://github.com/near/near-sdk-rs/blob/master/examples/fungible-token/tests/workspaces.rs)
## Drawbacks
- The `msg` argument to `ft_transfer_call` is freeform, which may necessitate conventions.
- The paradigm of an escrow system may be familiar to developers and end users, and education on properly handling this in another contract may be needed.
## Future possibilities
- Support for multiple token types
- Minting and burning
## History
See also the discussions:
- [Fungible token core](https://github.com/near/NEPs/discussions/146#discussioncomment-298943)
- [Fungible token metadata](https://github.com/near/NEPs/discussions/148)
- [Storage standard](https://github.com/near/NEPs/discussions/145)
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
[Storage Management]: https://github.com/near/NEPs/blob/master/neps/nep-0145.md
[FT Metadata]: https://github.com/near/NEPs/blob/master/neps/nep-0148.md
[FT Core]: https://github.com/near/NEPs/blob/master/neps/nep-0141.md
================================================
FILE: neps/nep-0145.md
================================================
---
NEP: 145
Title: Storage Management
Author: Evgeny Kuzyakov <ek@near.org>, @oysterpack
Status: Final
DiscussionsTo: https://github.com/near/NEPs/discussions/145
Type: Standards Track
Category: Contract
Created: 03-Mar-2022
---
## Summary
NEAR uses [storage staking] which means that a contract account must have sufficient balance to cover all storage added over time. This standard provides a uniform way to pass storage costs onto users.
## Motivation
It allows accounts and contracts to:
1. Check an account's storage balance.
2. Determine the minimum storage needed to add account information such that the account can interact as expected with a contract.
3. Add storage balance for an account; either one's own or another.
4. Withdraw some storage deposit by removing associated account data from the contract and then making a call to remove unused deposit.
5. Unregister an account to recover full storage balance.
[storage staking]: https://docs.near.org/concepts/storage/storage-staking
## Rationale and alternatives
Prior art:
- A previous fungible token standard ([NEP-21](https://github.com/near/NEPs/pull/21)) highlighting how [storage was paid](https://github.com/near/near-sdk-rs/blob/1d3535bd131b68f97a216e643ad1cba19e16dddf/examples/fungible-token/src/lib.rs#L92-L113) for when increasing the allowance of an escrow system.
### Example scenarios
To show the flexibility and power of this standard, let's walk through two example contracts.
1. A simple Fungible Token contract which uses Storage Management in "registration only" mode, where the contract only adds storage on a user's first interaction.
1. Account registers self
2. Account registers another
3. Unnecessary attempt to re-register
4. Force-closure of account
5. Graceful closure of account
2. A social media contract, where users can add more data to the contract over time.
1. Account registers self with more than minimum required
2. Unnecessary attempt to re-register using `registration_only` param
3. Attempting to take action which exceeds paid-for storage; increasing storage deposit
4. Removing storage and reclaiming excess deposit
### Example 1: Fungible Token Contract
Imagine a [fungible token][FT Core] contract deployed at `ft`. Let's say this contract saves all user balances to a Map data structure internally, and adding a key for a new user requires 0.00235Ⓝ. This contract therefore uses the Storage Management standard to pass this cost onto users, so that a new user must effectively pay a registration fee to interact with this contract of 0.00235Ⓝ, or 2350000000000000000000 yoctoⓃ ([yocto](https://www.metricconversion.us/prefixes.htm) = 10<sup>-24</sup>).
For this contract, `storage_balance_bounds` will be:
```json
{
"min": "2350000000000000000000",
"max": "2350000000000000000000"
}
```
This means a user must deposit 0.00235Ⓝ to interact with this contract, and that attempts to deposit more than this will have no effect (attached deposits will be immediately refunded).
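The clamp-and-refund behavior described here is simple arithmetic. A sketch for a registration-only contract where `min` equals `max` (the `storageDeposit` helper and its return shape are invented for illustration):

```ts
// storage_balance_bounds for this hypothetical registration-only contract:
// min == max == 0.00235 N in yoctoN.
const BOUNDS = { min: 2350000000000000000000n, max: 2350000000000000000000n };

// Credit at most `max`; anything beyond it is refunded immediately.
function storageDeposit(attached: bigint): { credited: bigint; refund: bigint } {
  const credited = attached > BOUNDS.max ? BOUNDS.max : attached;
  return { credited, refund: attached - credited };
}
```

So attaching ten times the fee still registers the account for exactly `max`, with the surplus sent straight back.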
Let's follow two users, Alice with account `alice` and Bob with account `bob`, as they interact with `ft` through the following scenarios:
1. Alice registers herself
2. Alice registers Bob
3. Alice tries to register Bob again
4. Alice force-closes her account
5. Bob gracefully closes his account
#### 1. Account pays own registration fee
##### High-level explanation
1. Alice checks if she is registered with the `ft` contract.
2. Alice determines the needed registration fee to register with the `ft` contract.
3. Alice issues a transaction to deposit Ⓝ for her account.
##### Technical calls
1. Alice queries a view-only method to determine if she already has storage on this contract with `ft::storage_balance_of({"account_id": "alice"})`. Using [NEAR CLI](https://docs.near.org/tools/near-cli) to make this view call, the command would be:
```shell
near view ft storage_balance_of '{"account_id": "alice"}'
```
The response:
```shell
null
```
2. Alice uses [NEAR CLI](https://docs.near.org/docs/tools/near-cli) to make a view call.
```shell
near view ft storage_balance_bounds
```
As mentioned above, this will show that `min` and `max` are both 2350000000000000000000 yoctoⓃ.
3. Alice converts this yoctoⓃ amount to 0.00235 Ⓝ, then calls `ft::storage_deposit` with this attached deposit. Using NEAR CLI:
```shell
near call ft storage_deposit '' --accountId alice --amount 0.00235
```
The result:
```json
{
"total": "2350000000000000000000",
"available": "0"
}
```
#### 2. Account pays for another account's storage
Alice wishes to eventually send `ft` tokens to Bob who is not registered. She decides to pay for Bob's storage.
##### High-level explanation
Alice issues a transaction to deposit Ⓝ for Bob's account.
##### Technical calls
Alice calls `ft::storage_deposit({"account_id": "bob"})` with an attached deposit of 0.00235 Ⓝ. Using NEAR CLI the command would be:
```shell
near call ft storage_deposit '{"account_id": "bob"}' --accountId alice --amount 0.00235
```
The result:
```json
{
"total": "2350000000000000000000",
"available": "0"
}
```
#### 3. Unnecessary attempt to register already-registered account
Alice accidentally makes the same call again, this time dropping a zero from her deposit amount (0.0235 instead of 0.00235).
```shell
near call ft storage_deposit '{"account_id": "bob"}' --accountId alice --amount 0.0235
```
The result:
```json
{
"total": "2350000000000000000000",
"available": "0"
}
```
Additionally, Alice will be refunded the 0.0235Ⓝ she attached, because the `storage_deposit` call refunds any deposit beyond what is required, and Bob's storage is already fully paid for.
},
{
"path": "neps/nep-0141.md",
"chars": 22792,
"preview": "---\nNEP: 141\nTitle: Fungible Token Standard\nAuthor: Evgeny Kuzyakov <ek@near.org>, Robert Zaremba <@robert-zaremba>, @oy"
},
{
"path": "neps/nep-0145.md",
"chars": 18832,
"preview": "---\nNEP: 145\nTitle: Storage Management\nAuthor: Evgeny Kuzyakov <ek@near.org>, @oysterpack\nStatus: Final\nDiscussionsTo: h"
},
{
"path": "neps/nep-0148.md",
"chars": 6958,
"preview": "---\nNEP: 148\nTitle: Fungible Token Metadata\nAuthor: Robert Zaremba <robert-zaremba>, Evgeny Kuzyakov <ek@near.org>, @oys"
},
{
"path": "neps/nep-0171.md",
"chars": 19152,
"preview": "---\nNEP: 171\nTitle: Non Fungible Token Standard\nAuthor: Mike Purvis <mike@near.org>, Evgeny Kuzyakov <ek@near.org>, @oys"
},
{
"path": "neps/nep-0177.md",
"chars": 10551,
"preview": "---\nNEP: 177\nTitle: Non Fungible Token Metadata\nAuthor: Chad Ostrowski <@chadoh>, Mike Purvis <mike@near.org>\nStatus: Fi"
},
{
"path": "neps/nep-0178.md",
"chars": 22523,
"preview": "---\nNEP: 178\nTitle: Non Fungible Token Approval Management\nAuthor: Chad Ostrowski <@chadoh>, Thor <@thor314>\nStatus: Fin"
},
{
"path": "neps/nep-0181.md",
"chars": 3705,
"preview": "---\nNEP: 181\nTitle: Non Fungible Token Enumeration\nAuthor: Chad Ostrowski <@chadoh>, Thor <@thor314>\nStatus: Final\nDiscu"
},
{
"path": "neps/nep-0199.md",
"chars": 7063,
"preview": "---\nNEP: 199\nTitle: Non Fungible Token Royalties and Payouts\nAuthor: Thor <@thor314>, Matt Lockyer <@mattlockyer>\nStatus"
},
{
"path": "neps/nep-0245/ApprovalManagement.md",
"chars": 24650,
"preview": "# Multi Token Standard Approval Management\n\n:::caution\nThis is part of the proposed spec [NEP-245](https://github.com/ne"
},
{
"path": "neps/nep-0245/Enumeration.md",
"chars": 3524,
"preview": "# Multi Token Enumeration\n\n:::caution\nThis is part of the proposed spec [NEP-245](https://github.com/near/NEPs/blob/mast"
},
{
"path": "neps/nep-0245/Events.md",
"chars": 5852,
"preview": "# Multi Token Event\n\n:::caution\nThis is part of the proposed spec [NEP-245](https://github.com/near/NEPs/blob/master/nep"
},
{
"path": "neps/nep-0245/Metadata.md",
"chars": 10218,
"preview": "# Multi Token Metadata\n\n:::caution\nThis is part of the proposed spec [NEP-245](https://github.com/near/NEPs/blob/master/"
},
{
"path": "neps/nep-0245.md",
"chars": 27376,
"preview": "---\nNEP: 245\nTitle: Multi Token Standard\nAuthor: Zane Starr <zane@ships.gold>, @riqi, @jriemann, @marcos.sun\nStatus: Fin"
},
{
"path": "neps/nep-0256.md",
"chars": 5175,
"preview": "---\nNEP: 256\nTitle: Non-Fungible Token Events\nAuthor: Olga Telezhnaya <olga@near.org>, @evergreen-trading-systems\nStatus"
},
{
"path": "neps/nep-0264.md",
"chars": 12468,
"preview": "---\nNEP: 264\nTitle: Utilization of unspent gas for promise function calls\nAuthors: Austin Abell <austinabell8@gmail.com>"
},
{
"path": "neps/nep-0297.md",
"chars": 4082,
"preview": "---\nNEP: 297\nTitle: Events\nAuthor: Olga Telezhnaya <olga@near.org>\nStatus: Final\nDiscussionsTo: https://github.com/near/"
},
{
"path": "neps/nep-0300.md",
"chars": 3655,
"preview": "---\nNEP: 300\nTitle: Fungible Token Events\nAuthor: Olga Telezhnaya <olga@near.org>\nStatus: Final\nDiscussionsTo: https://g"
},
{
"path": "neps/nep-0330.md",
"chars": 13547,
"preview": "---\nNEP: 330\nTitle: Source Metadata\nAuthor: Ben Kurrek <ben.kurrek@near.org>, Osman Abdelnasir <osman@near.org>, Andrey "
},
{
"path": "neps/nep-0364.md",
"chars": 9009,
"preview": "---\nNEP: 364\nTitle: Efficient signature verification and hashing precompile functions\nAuthor: Blas Rodriguez Irizar <rod"
},
{
"path": "neps/nep-0366.md",
"chars": 10642,
"preview": "---\nNEP: 366\nTitle: Meta Transactions\nAuthor: Illia Polosukhin <ilblackdragon@gmail.com>, Egor Uleyskiy (egor.ulieiskii@"
},
{
"path": "neps/nep-0368.md",
"chars": 8098,
"preview": "---\nNEP: 368\nTitle: Bridge Wallets\nAuthor: lewis-sqa <@lewis-sqa>\nStatus: Final\nDiscussionsTo: https://github.com/near/N"
},
{
"path": "neps/nep-0393.md",
"chars": 50585,
"preview": "---\nNEP: 393\nTitle: Soulbound Token\nAuthors: Robert Zaremba <@robert-zaremba>\nStatus: Final\nDiscussionsTo:\nType: Standar"
},
{
"path": "neps/nep-0399.md",
"chars": 30333,
"preview": "---\nNEP: 399\nTitle: Flat Storage\nAuthor: Aleksandr Logunov <alex.logunov@near.org> Min Zhang <min@near.org>\nStatus: Fina"
},
{
"path": "neps/nep-0408.md",
"chars": 15334,
"preview": "---\nNEP: 408\nTitle: Injected Wallet API\nAuthor: Daryl Collins <@MaximusHaximus>, @lewis-sqa\nStatus: Final\nDiscussionsTo:"
},
{
"path": "neps/nep-0413.md",
"chars": 13772,
"preview": "---\nNEP: 413\nTitle: Near Wallet API - support for signMessage method\nAuthor: Philip Obosi <philip@near.org>, Guillermo G"
},
{
"path": "neps/nep-0418.md",
"chars": 5805,
"preview": "---\nNEP: 418\nTitle: Remove attached_deposit view panic\nAuthor: Austin Abell <austin.abell@near.org>\nStatus: Final\nDiscus"
},
{
"path": "neps/nep-0448.md",
"chars": 9573,
"preview": "---\nNEP: 448\nTitle: Zero-balance Accounts\nAuthor: Bowen Wang <bowen@near.org>\nStatus: Final\nDiscussionsTo: https://githu"
},
{
"path": "neps/nep-0452.md",
"chars": 16400,
"preview": "---\nNEP: 452\nTitle: Linkdrop Standard\nAuthor: Ben Kurrek <ben.kurrek@near.org>, Ken Miyachi <ken.miyachi@near.foundation"
},
{
"path": "neps/nep-0455.md",
"chars": 19732,
"preview": "---\nNEP: 455\nTitle: Parameter Compute Costs\nAuthor: Andrei Kashin <andrei.kashin@near.org>, Jakob Meier <jakob@near.org>"
},
{
"path": "neps/nep-0488.md",
"chars": 60814,
"preview": "---\nNEP: 488\nTitle: Host Functions for BLS12-381 Curve Operations\nAuthors: Olga Kuniavskaia <olga.kunyavskaya@aurora.dev"
},
{
"path": "neps/nep-0491.md",
"chars": 14774,
"preview": "---\nNEP: 491\nTitle: Non-Refundable Storage Staking\nAuthors: Jakob Meier <jakob@near.org>\nStatus: Final\nDiscussionsTo: ht"
},
{
"path": "neps/nep-0492.md",
"chars": 3401,
"preview": "---\nNEP: 492\nTitle: Restrict creation of Ethereum Addresses\nAuthors: Bowen Wang <bowen@near.org>\nStatus: Final\nDiscussio"
},
{
"path": "neps/nep-0508.md",
"chars": 22751,
"preview": "---\nNEP: 508\nTitle: Resharding v2\nAuthors: Waclaw Banasik, Shreyan Gupta, Yoon Hong\nStatus: Final\nDiscussionsTo: https:/"
},
{
"path": "neps/nep-0509.md",
"chars": 49097,
"preview": "---\nNEP: 509\nTitle: Stateless validation Stage 0\nAuthors: Robin Cheng, Anton Puhach, Alex Logunov, Yoon Hong\nStatus: Fin"
},
{
"path": "neps/nep-0514.md",
"chars": 5622,
"preview": "---\nNEP: 514\nTitle: Reducing the number of Block Producer Seats in `testnet`\nAuthors: Nikolay Kurtov <nikolay.kurtov@nea"
},
{
"path": "neps/nep-0518.md",
"chars": 28498,
"preview": "---\nNEP: 518\nTitle: Web3-Compatible Wallets Support\nAuthors: Aleksandr Shevchenko <alex.shevchenko@aurora.dev>, Michael "
},
{
"path": "neps/nep-0519.md",
"chars": 10664,
"preview": "---\nNEP: 519\nTitle: Yield Execution\nAuthors: Akhi Singhania <akhi3030@gmail.com>; Saketh Are <saketh@near.org>\nStatus: F"
},
{
"path": "neps/nep-0536.md",
"chars": 7157,
"preview": "---\nNEP: 536\nTitle: Reduce the number of gas refunds\nAuthors: Evgeny Kuzyakov <ek@fastnear.com>, Bowen Wang <bowen@near."
},
{
"path": "neps/nep-0539.md",
"chars": 35566,
"preview": "---\nNEP: 539\nTitle: Cross-Shard Congestion Control\nAuthors: Waclaw Banasik <waclaw@near.org>, Jakob Meier <inbox@jakobme"
},
{
"path": "neps/nep-0568.md",
"chars": 58936,
"preview": "---\nNEP: 568\nTitle: Resharding V3\nAuthors: Adam Chudas, Aleksandr Logunov, Andrea Spurio, Marcelo Diop-Gonzalez, Shreyan"
},
{
"path": "neps/nep-0584.md",
"chars": 69578,
"preview": "---\nNEP: 584\nTitle: Cross-shard bandwidth scheduler\nAuthors: Jan Malinowski <jan.ciolek@nearone.org>\nStatus: Final\nDiscu"
},
{
"path": "neps/nep-0591.md",
"chars": 11773,
"preview": "---\nNEP: 591\nTitle: Global Contracts\nAuthors: Bowen Wang <bowen@nearone.org>, Anton Puhach <anton@nearone.org>, Stefan N"
},
{
"path": "neps/nep-0611.md",
"chars": 31359,
"preview": "---\nNEP: 611\nTitle: Pending Transaction Queue and Gas Keys\nAuthors: Robin Cheng <robin@nearone.org>, Darioush Jalali <da"
},
{
"path": "neps/nep-0616.md",
"chars": 24997,
"preview": "---\nNEP: 616\nTitle: Deterministic AccountIds\nAuthors: Arseny Mitin <mitinarseny@gmail.com>\nStatus: Approved\nDiscussionsT"
},
{
"path": "neps/nep-0621.md",
"chars": 23873,
"preview": "---\nNEP: 621\nTitle: Vault NEP\nAuthors: JY Chew <edwardchew97@gmail.com>, Lee Hoe Mun <leehoemun@gmail.com>, Wade <wz.lim"
},
{
"path": "neps/nep-0635.md",
"chars": 5903,
"preview": "---\nNEP: 622\nTitle: P-256 ECDSA Signature Verification Host Function\nAuthors: Bowen Wang <bowen@nearone.org>\nStatus: Dra"
},
{
"path": "neps/nep-0638.md",
"chars": 5054,
"preview": "---\nNEP: 638\nTitle: `chain_id()` host function\nAuthors: Arseny Mitin <mitinarseny@gmail.com>\nStatus: Draft\nType: Protoco"
}
]
About this extraction
This page contains the full source code of the near/NEPs GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 67 files (1.0 MB), approximately 243.0k tokens. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.