Full Code of ton-org/docs for AI

Repository: ton-org/docs
Branch: main
Commit: 45160fd71f10
Files: 464
Total size: 5.8 MB

Directory structure:
gitextract_f543x511/

├── .cspell.jsonc
├── .editorconfig
├── .gitattributes
├── .github/
│   ├── dependabot.yml
│   ├── scripts/
│   │   ├── build_review_instructions.py
│   │   ├── build_review_payload.py
│   │   ├── common.mjs
│   │   ├── generate-v2-api-table.py
│   │   ├── generate-v3-api-table.py
│   │   ├── rewrite_review_links.py
│   │   └── tvm-instruction-gen.py
│   └── workflows/
│       ├── bouncer.yml
│       ├── commander.yml
│       ├── generate-api-tables.yml
│       ├── instructions.yml
│       ├── linter.yml
│       └── pitaya.yml
├── .gitignore
├── .husky/
│   └── pre-push
├── .prettierignore
├── .remarkignore
├── .remarkrc.mjs
├── CODEOWNERS
├── LICENSE-code
├── LICENSE-docs
├── README.md
├── contract-dev/
│   ├── blueprint/
│   │   ├── api.mdx
│   │   ├── benchmarks.mdx
│   │   ├── cli.mdx
│   │   ├── config.mdx
│   │   ├── coverage.mdx
│   │   ├── deploy.mdx
│   │   ├── develop.mdx
│   │   └── overview.mdx
│   ├── contract-sharding.mdx
│   ├── debug.mdx
│   ├── first-smart-contract.mdx
│   ├── gas.mdx
│   ├── ide/
│   │   ├── jetbrains.mdx
│   │   ├── overview.mdx
│   │   └── vscode.mdx
│   ├── on-chain-jetton-processing.mdx
│   ├── random.mdx
│   ├── security.mdx
│   ├── signing.mdx
│   ├── testing/
│   │   ├── overview.mdx
│   │   └── reference.mdx
│   ├── upgrades.mdx
│   ├── using-on-chain-libraries.mdx
│   ├── vanity.mdx
│   └── zero-knowledge.mdx
├── contribute/
│   ├── snippets/
│   │   ├── aside.mdx
│   │   ├── filetree.mdx
│   │   ├── image.mdx
│   │   └── overview.mdx
│   ├── style-guide-extended.mdx
│   └── style-guide.mdx
├── docs.json
├── ecosystem/
│   ├── ai/
│   │   └── mcp.mdx
│   ├── analytics.mdx
│   ├── api/
│   │   ├── overview.mdx
│   │   ├── price.mdx
│   │   └── toncenter/
│   │       ├── get-api-key.mdx
│   │       ├── introduction.mdx
│   │       ├── rate-limit.mdx
│   │       ├── smc-index/
│   │       │   ├── get-nominator-bookings-method.mdx
│   │       │   ├── get-nominator-earnings-method.mdx
│   │       │   ├── get-nominator-method.mdx
│   │       │   ├── get-pool-bookings-method.mdx
│   │       │   ├── get-pool-method.mdx
│   │       │   └── lifecheck-method.mdx
│   │       ├── smc-index.json
│   │       ├── v2/
│   │       │   ├── accounts/
│   │       │   │   ├── convert-raw-address-to-user-friendly-format.mdx
│   │       │   │   ├── convert-user-friendly-address-to-raw-format.mdx
│   │       │   │   ├── detect-all-address-formats.mdx
│   │       │   │   ├── get-account-balance-only.mdx
│   │       │   │   ├── get-account-lifecycle-state.mdx
│   │       │   │   ├── get-account-state-and-balance.mdx
│   │       │   │   ├── get-detailed-account-state-extended.mdx
│   │       │   │   ├── get-nft-or-jetton-metadata.mdx
│   │       │   │   ├── get-wallet-information.mdx
│   │       │   │   └── list-account-transactions.mdx
│   │       │   ├── blocks/
│   │       │   │   ├── get-block-header-metadata.mdx
│   │       │   │   ├── get-latest-consensus-block.mdx
│   │       │   │   ├── get-latest-masterchain-info.mdx
│   │       │   │   ├── get-masterchain-block-signatures.mdx
│   │       │   │   ├── get-outgoing-message-queue-sizes.mdx
│   │       │   │   ├── get-shard-block-proof.mdx
│   │       │   │   ├── get-shards-at-masterchain-seqno.mdx
│   │       │   │   ├── get-smart-contract-libraries.mdx
│   │       │   │   ├── list-block-transactions-extended-details.mdx
│   │       │   │   ├── list-block-transactions.mdx
│   │       │   │   └── look-up-block-by-height-lt-or-timestamp.mdx
│   │       │   ├── config/
│   │       │   │   ├── get-all-config-parameters.mdx
│   │       │   │   └── get-single-config-parameter.mdx
│   │       │   ├── json-rpc/
│   │       │   │   └── json-rpc-handler.mdx
│   │       │   ├── messages-and-transactions/
│   │       │   │   ├── estimate-transaction-fees.mdx
│   │       │   │   ├── send-external-message-and-return-hash.mdx
│   │       │   │   ├── send-external-message-boc.mdx
│   │       │   │   └── send-unpacked-external-query.mdx
│   │       │   ├── overview.mdx
│   │       │   ├── smart-contracts/
│   │       │   │   └── run-get-method-on-contract.mdx
│   │       │   └── transactions/
│   │       │       ├── locate-result-transaction-by-incoming-message.mdx
│   │       │       ├── locate-source-transaction-by-outgoing-message.mdx
│   │       │       └── locate-transaction-by-incoming-message.mdx
│   │       ├── v2-authentication.mdx
│   │       ├── v2-errors.mdx
│   │       ├── v2-tonlib-types.mdx
│   │       ├── v2.json
│   │       ├── v3/
│   │       │   ├── accounts/
│   │       │   │   ├── address-book.mdx
│   │       │   │   ├── get-account-states.mdx
│   │       │   │   ├── get-wallet-states.mdx
│   │       │   │   └── metadata.mdx
│   │       │   ├── actions-and-traces/
│   │       │   │   ├── get-actions.mdx
│   │       │   │   ├── get-pending-actions.mdx
│   │       │   │   ├── get-pending-traces.mdx
│   │       │   │   └── get-traces.mdx
│   │       │   ├── apiv2/
│   │       │   │   ├── estimate-fee.mdx
│   │       │   │   ├── get-address-information.mdx
│   │       │   │   ├── get-wallet-information.mdx
│   │       │   │   ├── run-get-method.mdx
│   │       │   │   └── send-message.mdx
│   │       │   ├── blockchain-data/
│   │       │   │   ├── get-adjacent-transactions.mdx
│   │       │   │   ├── get-blocks.mdx
│   │       │   │   ├── get-masterchain-block-shard-state-1.mdx
│   │       │   │   ├── get-masterchain-block-shard-state.mdx
│   │       │   │   ├── get-masterchain-info.mdx
│   │       │   │   ├── get-messages.mdx
│   │       │   │   ├── get-pending-transactions.mdx
│   │       │   │   ├── get-transactions-by-masterchain-block.mdx
│   │       │   │   ├── get-transactions-by-message.mdx
│   │       │   │   └── get-transactions.mdx
│   │       │   ├── dns/
│   │       │   │   └── get-dns-records.mdx
│   │       │   ├── jettons/
│   │       │   │   ├── get-jetton-burns.mdx
│   │       │   │   ├── get-jetton-masters.mdx
│   │       │   │   ├── get-jetton-transfers.mdx
│   │       │   │   └── get-jetton-wallets.mdx
│   │       │   ├── multisig/
│   │       │   │   ├── get-multisig-orders.mdx
│   │       │   │   └── get-multisig-wallets.mdx
│   │       │   ├── nfts/
│   │       │   │   ├── get-nft-collections.mdx
│   │       │   │   ├── get-nft-items.mdx
│   │       │   │   └── get-nft-transfers.mdx
│   │       │   ├── overview.mdx
│   │       │   ├── stats/
│   │       │   │   └── get-top-accounts-by-balance.mdx
│   │       │   ├── utils/
│   │       │   │   ├── decode-opcodes-and-bodies-1.mdx
│   │       │   │   └── decode-opcodes-and-bodies.mdx
│   │       │   └── vesting/
│   │       │       └── get-vesting-contracts.mdx
│   │       ├── v3-authentication.mdx
│   │       ├── v3-errors.mdx
│   │       ├── v3-pagination.mdx
│   │       └── v3.yaml
│   ├── appkit/
│   │   ├── init.mdx
│   │   ├── jettons.mdx
│   │   ├── overview.mdx
│   │   └── toncoin.mdx
│   ├── bridges.mdx
│   ├── explorers/
│   │   ├── overview.mdx
│   │   └── tonviewer.mdx
│   ├── nodes/
│   │   ├── cpp/
│   │   │   ├── integrating-with-prometheus.mdx
│   │   │   ├── mytonctrl/
│   │   │   │   ├── alerting.mdx
│   │   │   │   ├── backups.mdx
│   │   │   │   ├── btc-teleport.mdx
│   │   │   │   ├── collator.mdx
│   │   │   │   ├── core.mdx
│   │   │   │   ├── custom-overlays.mdx
│   │   │   │   ├── installer.mdx
│   │   │   │   ├── liquid-staking.mdx
│   │   │   │   ├── overview.mdx
│   │   │   │   ├── pools.mdx
│   │   │   │   ├── utilities.mdx
│   │   │   │   ├── validator.mdx
│   │   │   │   └── wallet.mdx
│   │   │   ├── run-validator.mdx
│   │   │   ├── setup-mylocalton.mdx
│   │   │   └── setup-mytonctrl.mdx
│   │   ├── overview.mdx
│   │   └── rust/
│   │       ├── architecture.mdx
│   │       ├── global-config.mdx
│   │       ├── logs-config.mdx
│   │       ├── metrics.mdx
│   │       ├── monitoring.mdx
│   │       ├── node-config-ref.mdx
│   │       ├── node-config.mdx
│   │       ├── probes.mdx
│   │       └── quick-start.mdx
│   ├── oracles/
│   │   ├── overview.mdx
│   │   ├── pyth.mdx
│   │   └── redstone.mdx
│   ├── sdks.mdx
│   ├── staking/
│   │   ├── liquid-staking.mdx
│   │   ├── nominator-pools.mdx
│   │   ├── overview.mdx
│   │   └── single-nominator.mdx
│   ├── status.mdx
│   ├── tma/
│   │   ├── analytics/
│   │   │   ├── analytics.mdx
│   │   │   ├── api-endpoints.mdx
│   │   │   ├── faq.mdx
│   │   │   ├── install-via-npm.mdx
│   │   │   ├── install-via-script.mdx
│   │   │   ├── managing-integration.mdx
│   │   │   ├── preparation.mdx
│   │   │   └── supported-events.mdx
│   │   ├── create-mini-app.mdx
│   │   ├── overview.mdx
│   │   └── telegram-ui/
│   │       ├── getting-started.mdx
│   │       ├── overview.mdx
│   │       ├── platform-and-palette.mdx
│   │       └── reference/
│   │           └── avatar.mdx
│   ├── ton-connect/
│   │   ├── dapp.mdx
│   │   ├── manifest.mdx
│   │   ├── message-lookup.mdx
│   │   ├── overview.mdx
│   │   └── wallet.mdx
│   ├── ton-pay/
│   │   ├── api-reference.mdx
│   │   ├── on-ramp.mdx
│   │   ├── overview.mdx
│   │   ├── payment-integration/
│   │   │   ├── payments-react.mdx
│   │   │   ├── payments-tonconnect.mdx
│   │   │   ├── status-info.mdx
│   │   │   └── transfer.mdx
│   │   ├── quick-start.mdx
│   │   ├── ui-integration/
│   │   │   ├── button-js.mdx
│   │   │   └── button-react.mdx
│   │   └── webhooks.mdx
│   ├── wallet-apps/
│   │   ├── addresses-workflow.mdx
│   │   ├── deep-links.mdx
│   │   ├── get-coins.mdx
│   │   ├── tonkeeper.mdx
│   │   └── web.mdx
│   └── walletkit/
│       ├── android/
│       │   ├── data.mdx
│       │   ├── events.mdx
│       │   ├── init.mdx
│       │   ├── installation.mdx
│       │   ├── transactions.mdx
│       │   ├── wallets.mdx
│       │   └── webview.mdx
│       ├── browser-extension.mdx
│       ├── ios/
│       │   ├── data.mdx
│       │   ├── events.mdx
│       │   ├── init.mdx
│       │   ├── installation.mdx
│       │   ├── transactions.mdx
│       │   ├── wallets.mdx
│       │   └── webview.mdx
│       ├── native-web.mdx
│       ├── overview.mdx
│       ├── qa-guide.mdx
│       └── web/
│           ├── connections.mdx
│           ├── events.mdx
│           ├── init.mdx
│           ├── jettons.mdx
│           ├── nfts.mdx
│           ├── toncoin.mdx
│           └── wallets.mdx
├── extra.css
├── extra.js
├── foundations/
│   ├── actions/
│   │   ├── change-library.mdx
│   │   ├── overview.mdx
│   │   ├── reserve.mdx
│   │   ├── send.mdx
│   │   └── set-code.mdx
│   ├── addresses/
│   │   ├── derive.mdx
│   │   ├── formats.mdx
│   │   ├── overview.mdx
│   │   └── serialize.mdx
│   ├── config.mdx
│   ├── consensus/
│   │   └── catchain-visualizer.mdx
│   ├── fees.mdx
│   ├── glossary.mdx
│   ├── limits.mdx
│   ├── messages/
│   │   ├── deploy.mdx
│   │   ├── external-in.mdx
│   │   ├── external-out.mdx
│   │   ├── internal.mdx
│   │   ├── modes.mdx
│   │   ├── ordinary-tx.mdx
│   │   └── overview.mdx
│   ├── phases.mdx
│   ├── precompiled.mdx
│   ├── proofs/
│   │   ├── overview.mdx
│   │   └── verifying-liteserver-proofs.mdx
│   ├── serialization/
│   │   ├── boc.mdx
│   │   ├── cells.mdx
│   │   ├── library.mdx
│   │   ├── merkle-update.mdx
│   │   ├── merkle.mdx
│   │   └── pruned.mdx
│   ├── services.mdx
│   ├── shards.mdx
│   ├── status.mdx
│   ├── system.mdx
│   ├── traces.mdx
│   └── whitepapers/
│       ├── catchain.mdx
│       ├── overview.mdx
│       ├── tblkch.mdx
│       ├── ton.mdx
│       └── tvm.mdx
├── from-ethereum.mdx
├── get-support.mdx
├── index.mdx
├── languages/
│   ├── fift/
│   │   ├── deep-dive.mdx
│   │   ├── fift-and-tvm-assembly.mdx
│   │   ├── multisig.mdx
│   │   ├── overview.mdx
│   │   └── whitepaper.mdx
│   ├── func/
│   │   ├── asm-functions.mdx
│   │   ├── built-ins.mdx
│   │   ├── changelog.mdx
│   │   ├── comments.mdx
│   │   ├── compiler-directives.mdx
│   │   ├── cookbook.mdx
│   │   ├── declarations-overview.mdx
│   │   ├── dictionaries.mdx
│   │   ├── expressions.mdx
│   │   ├── functions.mdx
│   │   ├── global-variables.mdx
│   │   ├── known-issues.mdx
│   │   ├── libraries.mdx
│   │   ├── literals.mdx
│   │   ├── operators.mdx
│   │   ├── overview.mdx
│   │   ├── special-functions.mdx
│   │   ├── statements.mdx
│   │   ├── stdlib.mdx
│   │   └── types.mdx
│   ├── tact.mdx
│   ├── tl-b/
│   │   ├── complex-and-non-trivial-examples.mdx
│   │   ├── overview.mdx
│   │   ├── simple-examples.mdx
│   │   ├── syntax-and-semantics.mdx
│   │   ├── tep-examples.mdx
│   │   └── tooling.mdx
│   └── tolk/
│       ├── basic-syntax.mdx
│       ├── changelog.mdx
│       ├── examples.mdx
│       ├── features/
│       │   ├── asm-functions.mdx
│       │   ├── auto-serialization.mdx
│       │   ├── compiler-optimizations.mdx
│       │   ├── contract-getters.mdx
│       │   ├── contract-storage.mdx
│       │   ├── jetton-payload.mdx
│       │   ├── lazy-loading.mdx
│       │   ├── message-handling.mdx
│       │   ├── message-sending.mdx
│       │   └── standard-library.mdx
│       ├── from-func/
│       │   ├── converter.mdx
│       │   ├── stdlib-comparison.mdx
│       │   └── tolk-vs-func.mdx
│       ├── idioms-conventions.mdx
│       ├── overview.mdx
│       ├── syntax/
│       │   ├── conditions-loops.mdx
│       │   ├── exceptions.mdx
│       │   ├── functions-methods.mdx
│       │   ├── imports.mdx
│       │   ├── mutability.mdx
│       │   ├── operators.mdx
│       │   ├── pattern-matching.mdx
│       │   ├── structures-fields.mdx
│       │   └── variables.mdx
│       └── types/
│           ├── address.mdx
│           ├── aliases.mdx
│           ├── booleans.mdx
│           ├── callables.mdx
│           ├── cells.mdx
│           ├── enums.mdx
│           ├── generics.mdx
│           ├── list-of-types.mdx
│           ├── maps.mdx
│           ├── nullable.mdx
│           ├── numbers.mdx
│           ├── overall-serialization.mdx
│           ├── overall-tvm-stack.mdx
│           ├── strings.mdx
│           ├── structures.mdx
│           ├── tensors.mdx
│           ├── tuples.mdx
│           ├── type-checks-and-casts.mdx
│           ├── unions.mdx
│           └── void-never.mdx
├── more-tutorials.mdx
├── old.mdx
├── package.json
├── payments/
│   ├── jettons.mdx
│   ├── overview.mdx
│   └── toncoin.mdx
├── resources/
│   ├── dictionaries/
│   │   ├── ban.txt
│   │   ├── custom.txt
│   │   ├── tvm-instructions.txt
│   │   └── two-letter-words-ban.txt
│   └── tvm/
│       └── cp0.txt
├── scripts/
│   ├── check-navigation.mjs
│   ├── check-redirects.mjs
│   ├── common.mjs
│   ├── docusaurus-sidebars-types.d.ts
│   └── stats.py
├── snippets/
│   ├── aside.jsx
│   ├── catchain-visualizer.jsx
│   ├── feePlayground.jsx
│   ├── fence-table.jsx
│   ├── filetree.jsx
│   ├── image.jsx
│   ├── stub.jsx
│   └── tvm-instruction-table.jsx
├── standard/
│   ├── tokens/
│   │   ├── airdrop.mdx
│   │   ├── jettons/
│   │   │   ├── api.mdx
│   │   │   ├── burn.mdx
│   │   │   ├── comparison.mdx
│   │   │   ├── find.mdx
│   │   │   ├── how-it-works.mdx
│   │   │   ├── mint.mdx
│   │   │   ├── mintless/
│   │   │   │   ├── deploy.mdx
│   │   │   │   └── overview.mdx
│   │   │   ├── overview.mdx
│   │   │   ├── supply-data.mdx
│   │   │   ├── transfer.mdx
│   │   │   └── wallet-data.mdx
│   │   ├── metadata.mdx
│   │   ├── nft/
│   │   │   ├── api.mdx
│   │   │   ├── comparison.mdx
│   │   │   ├── deploy.mdx
│   │   │   ├── how-it-works.mdx
│   │   │   ├── metadata.mdx
│   │   │   ├── nft-2.0.mdx
│   │   │   ├── overview.mdx
│   │   │   ├── reference.mdx
│   │   │   ├── sbt.mdx
│   │   │   ├── transfer.mdx
│   │   │   └── verify.mdx
│   │   └── overview.mdx
│   ├── vesting.mdx
│   └── wallets/
│       ├── comparison.mdx
│       ├── highload/
│       │   ├── overview.mdx
│       │   ├── v2/
│       │   │   └── specification.mdx
│       │   └── v3/
│       │       ├── create.mdx
│       │       ├── send-batch-transfers.mdx
│       │       ├── send-single-transfer.mdx
│       │       ├── specification.mdx
│       │       └── verify-is-processed.mdx
│       ├── history.mdx
│       ├── how-it-works.mdx
│       ├── interact.mdx
│       ├── lockup.mdx
│       ├── mnemonics.mdx
│       ├── performance.mdx
│       ├── preprocessed-v2/
│       │   ├── interact.mdx
│       │   └── specification.mdx
│       ├── restricted.mdx
│       ├── v4.mdx
│       ├── v5-api.mdx
│       └── v5.mdx
├── start-here.mdx
└── tvm/
    ├── builders-and-slices.mdx
    ├── continuations.mdx
    ├── exit-codes.mdx
    ├── gas.mdx
    ├── get-method.mdx
    ├── initialization.mdx
    ├── instructions.mdx
    ├── overview.mdx
    ├── registers.mdx
    └── tools/
        ├── retracer.mdx
        ├── ton-decompiler.mdx
        ├── tvm-explorer.mdx
        └── txtracer.mdx

================================================
FILE CONTENTS
================================================

================================================
FILE: .cspell.jsonc
================================================
// The .jsonc extension allows free use of comments and trailing commas.
// The file is named with a dot in front to discourage frequent editing —
// target dictionaries are located in the resources/dictionaries/ directory.
{
  "$schema": "https://raw.githubusercontent.com/streetsidesoftware/cspell/main/cspell.schema.json",
  "version": "0.2",
  "language": "en-US",
  "dictionaryDefinitions": [
    {
      // Allowed words
      "name": "main-list",
      "path": "resources/dictionaries/custom.txt",
      "addWords": true,
    },
    {
      // Banned words with no clear or correct replacements
      // For a few words with those, see the flagWords property later in this file
      "name": "deny-list",
      "path": "resources/dictionaries/ban.txt"
    },
    {
      "name": "2lw-deny-list",
      "path": "resources/dictionaries/two-letter-words-ban.txt"
    },
    {
      "name": "tvm-instructions",
      "path": "resources/dictionaries/tvm-instructions.txt"
    }
  ],
  "dictionaries": [
    "main-list",
    "deny-list",
    "2lw-deny-list",
    "tvm-instructions",
  ],
  "useGitignore": true,
  "files": [
    "**/*.{md,mdx}",
    "**/*.{js,jsx,mjs}",
  ],
  "minWordLength": 3,
  "overrides": [
    // Enable case sensitivity for Markdown and MDX files only
    {
      "filename": "**/*.{md,mdx}",
      "caseSensitive": true,
      // Known incorrect spellings and correct suggestions
      "flagWords": [
        "AccountChain->accountchain",
        "BaseChain->basechain",
        "boc->BoC",
        "BOC->BoC",
        "Github->GitHub",
        "id->ID",
        "Id->ID",
        "MasterChain->masterchain",
        "ShardChain->shardchain",
        "StateInit->`StateInit`",
        "TLB->TL-B",
        "Toncenter->TON Center",
        "toncoins->Toncoin",
        "Toncoins->Toncoin",
        "WorkChain->workchain",
        "zkProofs->ZK-proofs",
        "zkProof->ZK-proof",
      ],
    },
    // Do not check for banned words (denylists or flagWords) in certain files
    {
      "filename": "contribute/style-guide*.mdx",
      "ignoreWords": [
        "tos",
        "DOI",
        "boc",
        "BOC",
      ],
      "ignoreRegExpList": [
        "\\b[tT]on[a-zA-Z]+\\b", // ton or Ton-prefixed words
        "\\b[a-zA-Z]+Chain\\b", // Chain-suffixed words
      ],
      "dictionaries": [
        "!deny-list", // turns off the dictionary
        "!2lw-deny-list", // turns off the dictionary
      ]
    },
    {
      "filename": "languages/tolk/features/compiler-optimizations.mdx",
      "ignoreWords": [
        "fifting",
      ]
    },
    {
      "filename": "languages/tolk/from-func/tolk-vs-func.mdx",
      "ignoreWords": [
        "transpiles",
      ]
    },
    {
      "filename": "**/api/**/*.{json,yml,yaml}",
      "ignoreWords": [
        "smc",
      ],
      "dictionaries": [
        "!deny-list", // turns off the dictionary
        "!2lw-deny-list", // turns off the dictionary
      ]
    },
    {
      "filename": "**/*.{js,jsx,mjs}",
      "ignoreWords": [
        "Dests",
      ],
      "dictionaries": [
        "!deny-list", // turns off the dictionary
        "!2lw-deny-list", // turns off the dictionary
      ]
    }
  ],
  "ignorePaths": [
    // Some whitepapers
    "foundations/whitepapers/tblkch.mdx",
    "foundations/whitepapers/ton.mdx",
    "foundations/whitepapers/tvm.mdx",
    "languages/fift/whitepaper.mdx",
    "languages/tolk/features/standard-library.mdx",
    // Generated files
    "tvm/instructions.mdx",
    // Binaries
    "**/*.boc",
    // Code
    "**/*.fc",
    "**/*.fif",
    "**/*.fift",
    "**/*.func",
    "**/*.tact",
    "**/*.tasm",
    "**/*.tlb",
    "**/*.tolk",
    "**/*.py*",
    "**/*.{ts,tsx}",
    "**/*.css",
    // Miscellaneous
    "**/*.git*",
    "**/*.svg",
    "**/*.txt",
    "CODEOWNERS",
    "LICENSE-*",
    "snippets/tvm-instruction-table.jsx",
    "snippets/catchain-visualizer.jsx"
  ],
  "ignoreRegExpList": [
    //
    // Predefined patterns from:
    // https://github.com/streetsidesoftware/cspell/blob/main/packages/cspell-lib/src/lib/Settings/DefaultSettings.ts
    //
    "SpellCheckerDisable",
    "SpellCheckerIgnoreInDocSetting",
    "Urls",
    "Email",
    "RsaCert",
    "SshRsa",
    "Base64MultiLine",
    "Base64SingleLine",
    "CommitHash",
    "CommitHashLink",
    "CStyleHexValue",
    "CSSHexValue",
    "SHA",
    "HashStrings",
    "UnicodeRef",
    "UUID",
    "href",
    //
    // Custom patterns
    //
    "\\s*[^\\s]*?=[\"'\\{]", // arbitrary JSX attribute names
    "=\\s*\".*?\"", // string values of JSX attributes
    "=\\s*'.*?'", // string values of JSX attributes
    "(?<!\\\\)\\$(?:\\\\.|[^$\\\\])*?\\$", // inline math
    "/(?<!\\\\)\\$\\$[\\s\\S]*?\\$\\$/g", // block math
    "(?<!\\\\)``.*?``", // inline code with double backticks
    "(?<!\\\\)`.*?`", // inline code
    "/^([ \\t]*```).*([\\s\\S]*?)^\\1$/gmx", // block code
    "^import[ \\t].+$", // import ...
    "/^export[ \\t].+?(?=\\r?\\n\\r?\\n)/gms", // export ...
    "(?<!\\\\)\\{(?:[^{}]|\\{(?:[^{}]|\\{[^{}]*\\})*\\})*\\}", // jsx expressions in {}
  ],
}
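The `flagWords` entries above use cspell's `incorrect->suggestion` arrow notation. As an illustration only (the helper below, `parse_flag_word`, is hypothetical and not part of this repository), a minimal Python sketch of splitting such an entry into the misspelling and its suggested replacement:

```python
def parse_flag_word(entry: str) -> tuple[str, str]:
    """Split a cspell flagWords entry like "boc->BoC" into (wrong, suggestion)."""
    wrong, _, suggestion = entry.partition("->")
    return wrong, suggestion

# Suggestions may contain spaces or backticks, e.g. "Toncenter->TON Center".
print(parse_flag_word("Toncenter->TON Center"))  # ('Toncenter', 'TON Center')
```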


================================================
FILE: .editorconfig
================================================
root = true

[*]
charset = utf-8
end_of_line = lf
indent_style = space
indent_size = 2
insert_final_newline = true
trim_trailing_whitespace = true


================================================
FILE: .gitattributes
================================================
* text=auto eol=lf


================================================
FILE: .github/dependabot.yml
================================================
# https://docs.github.com/en/code-security/dependabot/working-with-dependabot/dependabot-options-reference

version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "04:00"
      timezone: "Etc/UTC"
    versioning-strategy: increase
    allow:
      - dependency-name: "cspell"
      - dependency-name: "husky"
      - dependency-name: "mint"
    ignore:
      - dependency-name: "remark*"
      - dependency-name: "unified*"
      - dependency-name: "unist*"
      - dependency-name: "mdast*"
      - dependency-name: "*string*"
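The `allow` and `ignore` lists above use glob-style dependency-name patterns such as `remark*`. As a rough sketch of the matching semantics (using Python's `fnmatch`; the `would_update` helper is illustrative and does not reproduce Dependabot's exact precedence rules), a dependency is updated only if it matches an allow entry and no ignore pattern:

```python
from fnmatch import fnmatch

ALLOW = ["cspell", "husky", "mint"]
IGNORE = ["remark*", "unified*", "unist*", "mdast*", "*string*"]

def would_update(name: str) -> bool:
    # An ignore pattern excludes the dependency outright;
    # otherwise it must match one of the allowed names.
    if any(fnmatch(name, pat) for pat in IGNORE):
        return False
    return any(fnmatch(name, pat) for pat in ALLOW)
```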


================================================
FILE: .github/scripts/build_review_instructions.py
================================================
#!/usr/bin/env python3
"""Generate reviewer instructions with embedded style guide."""

from __future__ import annotations

import os
import textwrap

def main() -> None:
    workspace = os.environ.get("GITHUB_WORKSPACE")
    if not workspace:
        raise SystemExit("GITHUB_WORKSPACE env var is required")

    style_path = os.path.join(workspace, "contribute", "style-guide-extended.mdx")
    try:
        with open(style_path, encoding="utf-8") as fh:
            style_content = fh.read().rstrip()
    except FileNotFoundError as exc:
        raise SystemExit(f"Style guide file not found: {style_path}") from exc

    style_block = f"<styleguide>\n{style_content}\n</styleguide>\n\n"

    body = textwrap.dedent(
        """Repository: TON Blockchain documentation

Scope and priorities:
1. Style-guide compliance is the first and absolute priority. Before reviewing, read the entire <styleguide> block. For every changed line in the diff, confirm it matches the guide. Any violation must be reported with the exact style-rule link.
2. Only after style compliance, check for obvious, provable, blocking errors not covered by the guide (e.g., an incorrect calculation or an unsafe, non‑runnable step) and report them with proof. If not certain from repo content alone, omit.

Review protocol:
- Inspect only content files touched by this PR: `.md`, `.mdx`, and `docs.json`.
- It is acceptable to report findings that originate in `docs.json` (e.g., broken or duplicate paths/slugs, invalid sidebar grouping, typos in titles). When the problem is in `docs.json`, cite its exact lines.
- Examine only the lines changed in this diff (use surrounding context as needed). Do not flag issues that exist solely in unchanged content.
- Report every issue you see in this diff; do not postpone or soften problems.
- Location links must be repo-relative paths such as pending/discover/web3-basics/glossary.mdx?plain=1#L10-L12 (no https:// prefix).
- When a style rule applies, cite it using contribute/style-guide-extended.mdx?plain=1#L<start>-L<end>. Only add the citation after running a verification command such as `rg "<term>" contribute/style-guide-extended.mdx` or `sed -n '<start>,<end>p'` and inspecting the output to confirm the line range.
- If no style rule applies (e.g., factual error, typo), explain the issue clearly without a style link.
- Keep findings direct, professional, and concise. Suggestions must describe the required fix.
- Code identifiers: if the issue is lack of code font, preserve the token’s original case and wrap it in backticks. Only change case when the style guide explicitly mandates a canonical case for that exact identifier and you cite the relevant line range.

HARD SCOPE WALL — CONTENT ONLY (MANDATORY):
- You MUST NEVER read, open, cite, or rely on any non‑content files. This includes but is not limited to CI configs (`.github/**`), workflows (`*.yml`), code (`*.ts`, `*.tsx`, `*.js`, `*.py`, `*.go`, etc.), configuration/manifests (`package.json`, `pnpm-lock.yaml`, `*.toml`, `*.yaml`), tests, scripts, or build tool files.
- Allowed inputs are limited to the changed `.md`/`.mdx` files, `docs.json`, and `contribute/style-guide-extended.mdx` (for rule citations).
- Do not search outside these allowed files. Do not run commands that read or display non‑content files. Treat them as inaccessible.

Context for `docs.json`:
- Purpose: defines the site navigation tree, groupings, and slug mapping used by the docs site (metadata that directly affects the rendered docs experience).
- Legit uses during review:
  • Findings may target `docs.json` when the issue is there (e.g., broken/duplicate slug, incorrect path, wrong ordering/grouping).
  • You may also use `docs.json` to verify that changed frontmatter `slug`/title or links in `.md`/`.mdx` remain valid.
  • Cite `docs.json` lines when it is the source of the problem; otherwise cite the offending `.md`/`.mdx` lines.
  • If an issue relates to both `docs.json` and `.md`/`.mdx`, report it only on `docs.json`.
- Do not speculate about Mintlify runtime behavior or external systems; rely solely on repository content.

Severity policy:
- Report only HIGH‑severity violations.
- Do not report MEDIUM or LOW items.
- HIGH includes, in this order of precedence:
  (a) style‑guide rules tagged [HIGH] or listed under “Global overrides (always [HIGH])” in contribute/style-guide-extended.mdx; then
  (b) obvious, non‑style blocking errors (e.g., incorrect calculations, non‑runnable commands, unsafe steps) that you can prove using repository content (diff lines, examples, reference tables).
- For (b), include minimal proof with each finding (a short calculation or exact snippet) and cite the repo path/lines.
- Do not assume or infer behavior. Only report (b) when you are 100% certain from the repo itself; if uncertain, omit.

Persistence and completeness:
- Persist until the review is fully handled end-to-end within this single run.
- Do not stop after a partial pass; continue until you have either reported all HIGH-severity issues you can find in scope or are confident there are none.
- Do not stop to ask any kind of follow-up questions.

Verbosity and structure:
- Follow the existing review output contract, do not invent alternative formats.
- It is acceptable for the overall review to be long when there are many findings, but keep each Description and Suggestion concise (ideally no more than two short paragraphs each) while still giving enough detail to implement the fix.
- Avoid meta-commentary about your own reasoning process or tool usage; focus solely on concrete findings, locations, and fixes.

Goal: deliver exhaustive, high-confidence feedback that brings these TON Docs changes into full style-guide compliance and factual correctness.
"""
    )

    link_rules = textwrap.dedent(
        """
        
        LINK FORMATTING — REQUIRED (overrides earlier bullets):
        - Style‑guide citations: use a compact Markdown link with a short label, e.g. [Style rule — <short title>](contribute/style-guide-extended.mdx?plain=1#L<start>-L<end>). Verify the exact line range first (e.g., `rg "<term>" contribute/style-guide-extended.mdx` or `sed -n '<start>,<end>p'`).
        - General code/location references: output a plain repo‑relative link on its own line, with no Markdown/backticks/prefix text so GitHub renders a rich preview. Example line:
          pending/discover/web3-basics/glossary.mdx?plain=1#L10-L12
        - Do not use https:// prefixes for repo‑relative links.
        """
    )

    print(style_block + body + link_rules)


if __name__ == "__main__":
    main()
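The instructions emitted above require repo-relative location links of the form `path.mdx?plain=1#L10-L12`, with no `https://` prefix. A small illustrative check of that shape (the `is_location_link` helper and its regex are assumptions for this sketch, not code from this repository):

```python
import re

# Repo-relative path, then ?plain=1 and a #L<start> or #L<start>-L<end> fragment.
LOCATION_RE = re.compile(r"^[\w./-]+\?plain=1#L\d+(?:-L\d+)?$")

def is_location_link(link: str) -> bool:
    """Return True for repo-relative links like "a/b.mdx?plain=1#L10-L12"."""
    if link.startswith(("http://", "https://")):
        return False
    return bool(LOCATION_RE.match(link))
```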


================================================
FILE: .github/scripts/build_review_payload.py
================================================
#!/usr/bin/env python3
"""
Build a GitHub Pull Request review payload from Pitaya results.

Inputs:
  - --run-dir: path to pitaya results/run_* directory (contains instances/)
  - --repo:    owner/repo for link rewriting (GITHUB_REPOSITORY)
  - --sha:     PR head SHA for absolute blob links (PR_HEAD_SHA)
  - --severities: comma-separated list of severities to include as inline comments (e.g., "HIGH" or "HIGH,MEDIUM,LOW")
  - --max-comments: hard cap for number of inline comments (default 40)

Output:
  JSON to stdout:
    {
      "body": "<composer summary with absolutized Location links>",
      "event": "COMMENT",
      "comments": [
        {"path":"...", "side":"RIGHT", "line":123, "start_line":120, "start_side":"RIGHT", "body":"..."}
      ]
    }
"""
from __future__ import annotations

import argparse
import json
import os
import re
from dataclasses import dataclass
from pathlib import Path
from typing import Dict, Iterable, List, Optional, Tuple

# ---------- Utilities ----------

def _read_json(path: Path) -> Optional[dict]:
    try:
        txt = path.read_text(encoding="utf-8", errors="replace")
        return json.loads(txt)
    except Exception:
        return None


def _iter_instance_jsons(run_dir: Path) -> Iterable[Tuple[Path, dict]]:
    inst = run_dir / "instances"
    if not inst.is_dir():
        return
    files = list(inst.rglob("*.json"))
    for p in files:
        data = _read_json(p)
        if isinstance(data, dict):
            yield p, data


def _role_of(obj: dict) -> Optional[str]:
    # Strategy stores role either at top-level or under metadata.pr_review.role
    role = obj.get("role")
    if isinstance(role, str) and role:
        return role
    md = obj.get("metadata")
    if isinstance(md, dict):
        prr = md.get("pr_review")
        if isinstance(prr, dict):
            r = prr.get("role")
            if isinstance(r, str):
                return r
    return None


def _final_message_of(obj: dict) -> Optional[str]:
    msg = obj.get("final_message")
    return msg if isinstance(msg, str) else None


def _metrics_of(obj: dict) -> Dict[str, object]:
    m = obj.get("metrics")
    return m if isinstance(m, dict) else {}


# ---------- Link rewriting (replicates rewrite_review_links.py) ----------

def _absolutize_location_links(body: str, repo: Optional[str], sha: Optional[str]) -> str:
    if not body or not repo:
        return body
    blob_prefix = f"https://github.com/{repo}/blob/"
    doc_blob_prefix = f"{blob_prefix}{sha or 'main'}/"
    style_blob_prefix = f"{blob_prefix}main/"
    style_rel = "contribute/style-guide-extended.mdx"

    def absolutize_path(path: str) -> str:
        if path.startswith("http://") or path.startswith("https://"):
            return path
        # Strip only an explicit "./" prefix: str.lstrip("./") removes any run
        # of '.' and '/' characters and would mangle dotfile paths.
        normalized = path[2:] if path.startswith("./") else path
        base = style_blob_prefix if normalized.startswith(style_rel) else doc_blob_prefix
        return f"{base}{normalized}"

    # 1) Fix explicit Location: lines when present
    lines: List[str] = []
    for line in body.splitlines():
        stripped = line.lstrip()
        indent_len = len(line) - len(stripped)
        for marker in ("- Location:", "Location:", "* Location:"):
            if stripped.startswith(marker):
                prefix, _, rest = stripped.partition(":")
                link = rest.strip()
                if link:
                    link = absolutize_path(link)
                    stripped = f"{prefix}: {link}"
                    line = " " * indent_len + stripped
                break
        lines.append(line)

    rewritten = "\n".join(lines)

    # 2) Convert any doc links like path/to/file.mdx?plain=1#L10-L20 anywhere in text
    #    Avoid variable-width lookbehinds; match optional scheme as a capture and skip when present.
    if repo:
        generic_pattern = re.compile(
            r"(?P<prefix>https?://)?(?P<path>[A-Za-z0-9_./\-]+\.(?:md|mdx|json))\?plain=1#L\d+(?:-L\d+)?"
        )

        def repl(match: re.Match[str]) -> str:
            if match.group("prefix"):
                # Already absolute; leave as-is
                return match.group(0)
            # Strip only an explicit "./" prefix; lstrip("./") would also eat
            # leading characters of dotfile paths.
            raw = match.group("path")
            p = raw[2:] if raw.startswith("./") else raw
            base = style_blob_prefix if p.startswith(style_rel) else doc_blob_prefix
            # Append the anchor part after the path
            suffix = match.group(0)[len(match.group("path")) :]
            return f"{base}{p}{suffix}"

        rewritten = generic_pattern.sub(repl, rewritten)

    style_pattern = re.compile(rf"{re.escape(style_rel)}\?plain=1#L\d+(?:-L\d+)?")

    def replace_style_links(text: str) -> str:
        result: list[str] = []
        last = 0
        for match in style_pattern.finditer(text):
            start, end = match.span()
            result.append(text[last:start])
            link = match.group(0)
            prefix_start = max(0, start - len(style_blob_prefix))
            if text[prefix_start:start] == style_blob_prefix:
                result.append(link)
            else:
                result.append(f"{style_blob_prefix}{link}")
            last = end
        result.append(text[last:])
        return "".join(result)

    rewritten = replace_style_links(rewritten)

    # Ensure doc blob URLs use PR head SHA (style guide stays on main)
    if sha:
        doc_prefix_regex = re.compile(rf"{re.escape(blob_prefix)}([^/]+)/([^\s)]+)")

        def fix_doc(match: re.Match[str]) -> str:
            base = match.group(1)
            remainder = match.group(2)
            target = "main" if remainder.startswith(style_rel) else sha
            if base == target:
                return match.group(0)
            return f"{blob_prefix}{target}/{remainder}"

        rewritten = doc_prefix_regex.sub(fix_doc, rewritten)

    return rewritten
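The core transformation `_absolutize_location_links` performs can be isolated into a small sketch: repo-relative `*.mdx?plain=1#L10-L20` references become blob URLs pinned to a commit, while already-absolute links pass through. This standalone version is illustrative only; it skips the style-guide special case and the Location-line handling:

```python
import re

# Minimal sketch of the link-absolutizing step: repo-relative doc links like
# "path/to/file.mdx?plain=1#L10-L20" become GitHub blob URLs pinned to a SHA.
# Links that already carry a scheme are left untouched.
DOC_LINK = re.compile(
    r"(?P<prefix>https?://)?(?P<path>[A-Za-z0-9_./\-]+\.(?:md|mdx))\?plain=1#L\d+(?:-L\d+)?"
)

def absolutize(text: str, repo: str, sha: str) -> str:
    def repl(m: re.Match) -> str:
        if m.group("prefix"):
            return m.group(0)  # already absolute
        anchor = m.group(0)[len(m.group("path")):]  # "?plain=1#L..-L.." suffix
        return f"https://github.com/{repo}/blob/{sha}/{m.group('path')}{anchor}"
    return DOC_LINK.sub(repl, text)
```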


def _build_from_sidecar(sidecar: dict, *, repo: str, sha: str, repo_root: Path) -> Tuple[str, str, List[Dict[str, object]]]:
    """Return (body, event, comments[]) from sidecar index.json. Event is always COMMENT."""
    body = str(sidecar.get("intro") or "").strip()
    # Force COMMENT-only behavior regardless of sidecar content
    event = "COMMENT"
    commit_id = str(sidecar.get("commit_id") or "").strip()
    if commit_id:
        sha = commit_id
    items = sidecar.get("selected_details") or []
    comments: List[Dict[str, object]] = []
    def sanitize_code_for_gh_suggestion(code: str) -> str:
        """Normalize a suggestion snippet for GitHub suggestions.
        - If a fenced block is present, extract its inner content.
        - Remove diff headers and treat leading '+' additions as plain text; drop '-' lines.
        """
        # Extract inner of first fenced block when present
        lang, inner = _extract_first_code_block(code)
        text = inner if inner is not None else code
        out: List[str] = []
        for ln in text.splitlines():
            if ln.startswith('--- ') or ln.startswith('+++ ') or ln.startswith('@@'):
                continue
            if ln.startswith('+') and not ln.startswith('++'):
                out.append(ln[1:])
                continue
            if ln.startswith('-') and not ln.startswith('--'):
                # Skip removed lines in GH suggestion body
                continue
            out.append(ln)
        return "\n".join(out).rstrip("\n")
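The diff handling inside `sanitize_code_for_gh_suggestion` can be exercised on its own; this trimmed sketch mirrors the loop above (fence extraction omitted):

```python
# Trimmed sketch of the suggestion sanitizer's diff handling: diff headers are
# dropped, "+" additions become plain lines, "-" removals are omitted entirely,
# and everything else passes through unchanged.
def strip_diff_markers(text: str) -> str:
    out = []
    for ln in text.splitlines():
        if ln.startswith(("--- ", "+++ ", "@@")):
            continue  # diff header noise
        if ln.startswith("+") and not ln.startswith("++"):
            out.append(ln[1:])  # keep the added line, minus the marker
        elif ln.startswith("-") and not ln.startswith("--"):
            continue  # removed lines do not belong in a GH suggestion body
        else:
            out.append(ln)
    return "\n".join(out).rstrip("\n")
```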

    for it in items:
        try:
            path = str(it.get("path") or "").strip()
            start = int(it.get("start") or 0)
            end = int(it.get("end") or 0)
            # severity is read again below when building the heading
            title = str(it.get("title") or "").strip()
            desc = str(it.get("desc") or "").strip()
            sugg = it.get("suggestion") or {}
            code = str(sugg.get("code") or "")
        except Exception:
            continue
        if not (path and start > 0 and end >= start and title):
            continue
        # Clamp to file length when available
        file_path = (repo_root / path).resolve()
        if file_path.is_file():
            try:
                line_count = sum(1 for _ in file_path.open("r", encoding="utf-8", errors="ignore"))
                if end > line_count:
                    end = line_count
                if start > line_count:
                    continue
            except Exception:
                pass
        # Build comment body with title + description and optional suggestion fence
        code = code.rstrip("\n")
        parts: List[str] = []
        # Prefer including severity in the heading when present in the sidecar
        sev = (it.get("severity") or "").strip().upper()
        if title:
            heading = f"### [{sev}] {title}" if sev else f"### {title}"
            parts.append(heading)
        if desc:
            parts.append("")
            parts.append(desc)
        # When replacement text is present, include a GitHub suggestion block.
        # Allow empty replacement (deletion) suggestions: GitHub treats an empty
        # block as "delete the selected lines". Since `code` was coerced to str
        # above, re-check the sidecar to distinguish a missing suggestion (no
        # block at all) from an explicitly empty one (a deletion).
        if isinstance((it.get("suggestion") or {}).get("code"), str):
            repl = sanitize_code_for_gh_suggestion(code)
            repl_lines = repl.splitlines()
            n_range = end - start + 1
            if (
                (n_range == 1 and len(repl_lines) == 1) or
                (n_range > 1 and len(repl_lines) == n_range) or
                (repl == "" and n_range >= 1)
            ):
                parts.append("")
                parts.append("```suggestion")
                if repl:
                    parts.append(repl)
                parts.append("```")
        # With no auto-fix block, rely on title/description and the CTA only.
        # Always include the feedback CTA
        parts.append("")
        parts.append("Please leave a reaction 👍/👎 to this suggestion to improve future reviews for everyone!")
        body_text = "\n".join(parts).strip()
        body_text = _absolutize_location_links(body_text, repo or None, sha or None)

        c: Dict[str, object] = {"path": path, "side": "RIGHT", "body": body_text}
        if start == end:
            c["line"] = end
        else:
            c["start_line"] = start
            c["line"] = end
            c["start_side"] = "RIGHT"
        comments.append(c)
    # Rewrite links in top-level body
    body = _absolutize_location_links(body, repo or None, sha or None)
    return body, event, comments


# ---------- Finding parsing ----------

_H_RE = re.compile(r"^###\s*\[(HIGH|MEDIUM|LOW)\]\s*(.+?)\s*$", re.IGNORECASE)
_LOC_RE = re.compile(
    r"^Location:\s*([^\s?#]+)(?:\?plain=1)?#L(?P<start>\d+)(?:-L(?P<end>\d+))?\s*$",
    re.IGNORECASE,
)


@dataclass
class Finding:
    severity: str
    title: str
    path: str
    start: int
    end: int
    desc: str
    suggestion_raw: str
    suggestion_replacement: Optional[str] = None
    uid: Optional[str] = None

    def key(self) -> Tuple[str, int, int, str]:
        t = re.sub(r"\W+", " ", self.title or "").strip().lower()
        return (self.path, self.start, self.end, t)


def _extract_first_code_block(text: str) -> Tuple[Optional[str], Optional[str]]:
    """
    Return (lang, content) for the first fenced code block in text.
    """
    m = re.search(r"```([a-zA-Z0-9_-]*)\s*\n([\s\S]*?)\n```", text)
    if not m:
        return None, None
    lang = (m.group(1) or "").strip().lower()
    content = m.group(2)
    return lang, content
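For reference, the fence pattern above behaves like this standalone helper (same regex, re-declared here for a quick check):

```python
import re

# Same pattern as _extract_first_code_block: capture the language tag and the
# inner content of the first fenced block, or (None, None) when absent.
FENCE = re.compile(r"```([a-zA-Z0-9_-]*)\s*\n([\s\S]*?)\n```")

def first_block(text: str):
    m = FENCE.search(text)
    if not m:
        return None, None
    return (m.group(1) or "").strip().lower(), m.group(2)
```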


_TRAILER_JSON_RE = re.compile(r"```json\s*(\{[\s\S]*?\})\s*```\s*$", re.IGNORECASE | re.MULTILINE)
# Remove any fenced code blocks (```lang ... ```), used when we can't submit a proper GH suggestion
_FENCED_BLOCK_RE = re.compile(r"```[a-zA-Z0-9_-]*\s*\n[\s\S]*?\n```", re.MULTILINE)

def _strip_trailing_json_trailer(text: str) -> str:
    """Remove a trailing fenced JSON block (validator trailer) from text."""
    return _TRAILER_JSON_RE.sub("", text).rstrip()


def _parse_findings(md: str) -> List[Finding]:
    lines = md.splitlines()
    i = 0
    items: List[Finding] = []

    while i < len(lines):
        m = _H_RE.match(lines[i])
        if not m:
            i += 1
            continue
        severity = m.group(1).upper()
        title = m.group(2).strip()
        i += 1

        # Expect blocks with Location:, Description:, Suggestion:
        loc_path = ""
        loc_start = 0
        loc_end = 0
        desc_lines: List[str] = []
        sugg_lines: List[str] = []

        # Scan until next heading or end
        section = "none"
        while i < len(lines) and not _H_RE.match(lines[i]):
            line = lines[i]
            if line.strip().lower().startswith("location:"):
                lm = _LOC_RE.match(line.strip())
                if lm:
                    loc_path = lm.group(1).strip()
                    loc_start = int(lm.group("start"))
                    loc_end = int(lm.group("end") or lm.group("start"))
                section = "location"
            elif line.strip().lower().startswith("description:"):
                section = "desc"
            elif line.strip().lower().startswith("suggestion:"):
                section = "sugg"
            else:
                if section == "desc":
                    desc_lines.append(line)
                elif section == "sugg":
                    sugg_lines.append(line)
            i += 1

        if not (loc_path and loc_start > 0 and loc_end >= loc_start):
            # Skip malformed entries
            continue
        desc = "\n".join(desc_lines).strip()
        sugg_raw = "\n".join(sugg_lines).strip()
        # Remove any trailing validator JSON trailer that might have been captured
        sugg_raw = _strip_trailing_json_trailer(sugg_raw)

        # Try to derive a GH suggestion replacement from the first non-diff code block
        replacement: Optional[str] = None
        lang, content = _extract_first_code_block(sugg_raw)
        if content:
            if lang and lang != "diff" and lang != "patch":
                replacement = content
            elif not lang:
                # Unspecified language — assume it's a replacement snippet
                replacement = content
            # else: diff/patch -> skip automated suggestion; keep raw in comment

        items.append(
            Finding(
                severity=severity,
                title=title,
                path=loc_path,
                start=loc_start,
                end=loc_end,
                desc=desc,
                suggestion_raw=sugg_raw,
                suggestion_replacement=replacement,
            )
        )
    return items
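A finding consumed by `_parse_findings` starts with a severity heading and a `Location:` line. The two patterns, re-declared standalone for a quick sanity check:

```python
import re

# Re-declarations of the two header patterns _parse_findings relies on,
# exercised against a minimal well-formed finding.
H_RE = re.compile(r"^###\s*\[(HIGH|MEDIUM|LOW)\]\s*(.+?)\s*$", re.IGNORECASE)
LOC_RE = re.compile(
    r"^Location:\s*([^\s?#]+)(?:\?plain=1)?#L(?P<start>\d+)(?:-L(?P<end>\d+))?\s*$",
    re.IGNORECASE,
)

# Example lines in the format the validator emits
sample_heading = "### [HIGH] Broken link"
sample_location = "Location: docs/page.mdx?plain=1#L10-L12"
```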


def _parse_trailer_findings(md: str) -> List[dict]:
    """Parse the fenced JSON trailer at the end and return .findings list when present."""
    m = re.search(r"```json\s*(\{[\s\S]*?\})\s*```\s*$", md, flags=re.IGNORECASE | re.MULTILINE)
    if not m:
        return []
    try:
        obj = json.loads(m.group(1))
        if isinstance(obj, dict):
            f = obj.get("findings")
            if isinstance(f, list):
                out = []
                for it in f:
                    if isinstance(it, dict):
                        out.append(it)
                return out
    except Exception:
        return []
    return []
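The validator trailer expected here is a fenced JSON object at the end of the message. A compact standalone equivalent (same regex; the defensive error handling of the real function is omitted for brevity):

```python
import json
import re

# The validator trailer: a fenced JSON object at the very end of the message,
# matched with the same pattern _parse_trailer_findings uses.
TRAILER = re.compile(r"```json\s*(\{[\s\S]*?\})\s*```\s*$", re.IGNORECASE | re.MULTILINE)

def trailer_findings(md: str) -> list:
    m = TRAILER.search(md)
    if not m:
        return []
    obj = json.loads(m.group(1))
    f = obj.get("findings")
    return f if isinstance(f, list) else []
```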


# Removed verdict aggregation logic: event selection is fixed to COMMENT.


# ---------- Main ----------

def main() -> None:
    ap = argparse.ArgumentParser()
    ap.add_argument("--run-dir", required=True, help="Pitaya results/run_* directory")
    ap.add_argument("--repo", default=os.environ.get("GITHUB_REPOSITORY") or "", help="owner/repo")
    ap.add_argument("--sha", default=os.environ.get("PR_HEAD_SHA") or "", help="PR head SHA")
    ap.add_argument("--severities", default=os.environ.get("INLINE_SEVERITIES") or "HIGH")
    ap.add_argument("--max-comments", type=int, default=int(os.environ.get("MAX_COMMENTS") or 40))
    args = ap.parse_args()

    run_dir = Path(args.run_dir)
    repo = args.repo.strip()
    sha = args.sha.strip()
    include_sevs = {s.strip().upper() for s in (args.severities or "HIGH").split(",") if s.strip()}

    # Prefer sidecar when present (new strategy contract)
    sidecar_path = run_dir / "review" / "index.json"
    if sidecar_path.exists():
        try:
            sidecar = json.loads(sidecar_path.read_text(encoding="utf-8", errors="replace"))
        except Exception as e:
            raise SystemExit(f"Failed to read sidecar {sidecar_path}: {e}")
        body, _event, comments = _build_from_sidecar(sidecar, repo=repo, sha=sha, repo_root=Path(os.environ.get("GITHUB_WORKSPACE") or "."))
        # Always submit a COMMENT review regardless of findings
        out = {
            "body": body or "No documentation issues detected.",
            "event": "COMMENT",
            "comments": comments,
            "commit_id": sidecar.get("commit_id") or sha or None,
        }
        print(json.dumps(out, ensure_ascii=False))
        return

    # Fallback: derive from instances when sidecar is absent
    files = list(_iter_instance_jsons(run_dir))
    if not files:
        raise SystemExit("No instance JSON files found in run dir and no sidecar present")

    composer_body: Optional[str] = None
    composer_metrics: Dict[str, object] = {}
    validator_messages: List[str] = []
    validator_trailer_findings: List[dict] = []
    metrics_list: List[Dict[str, object]] = []

    for path, obj in files:
        role = _role_of(obj) or ""
        fm = _final_message_of(obj)
        metrics = _metrics_of(obj)
        if role == "composer":
            if fm and not composer_body:
                composer_body = fm
            if metrics:
                composer_metrics.update(metrics)
        elif role == "validator":
            if fm:
                validator_messages.append(fm)
                # collect trailer findings if present
                validator_trailer_findings.extend(_parse_trailer_findings(fm))
            if metrics:
                metrics_list.append(metrics)
        else:
            # Heuristic: treat messages that end with a fenced JSON trailer as validator outputs
            if isinstance(fm, str) and re.search(r"```json\s*\{[\s\S]*\}\s*```\s*$", fm, re.IGNORECASE):
                validator_messages.append(fm)
                validator_trailer_findings.extend(_parse_trailer_findings(fm))
                if metrics:
                    metrics_list.append(metrics)

    # Removed verdict computation; not used for event selection.

    # Event will be set by simplified rule after building comments.

    # Derive selected finding IDs and a human body from composer output (new composer may return JSON)
    selected_ids: List[str] = []
    body = composer_body or ""
    composer_json = None
    try:
        composer_json = json.loads(body) if body.strip().startswith("{") else None
    except Exception:
        composer_json = None
    if isinstance(composer_json, dict) and ("intro" in composer_json or "selected_ids" in composer_json):
        intro = composer_json.get("intro")
        if isinstance(intro, str) and intro.strip():
            body = intro.strip()
        else:
            body = "Automated review summary"
        ids = composer_json.get("selected_ids")
        if isinstance(ids, list):
            seen_ids = set()
            for v in ids:
                if isinstance(v, str) and v not in seen_ids:
                    selected_ids.append(v)
                    seen_ids.add(v)
    # Absolutize links in whichever body was selected above
    body = _absolutize_location_links(body, repo if repo else None, sha if sha else None)
    if not body.strip():
        body = "No documentation issues detected."

    # Parse validator findings and deduplicate
    findings: List[Finding] = []
    for msg in validator_messages:
        parsed = _parse_findings(msg or "")
        # Attempt to attach UIDs from trailer by matching on (path,start,end,severity,title)
        if validator_trailer_findings:
            trailer_index: Dict[Tuple[str, int, int, str, str], str] = {}
            for it in validator_trailer_findings:
                path = str(it.get("path") or "").strip()
                start = int(it.get("start") or 0)
                end = int(it.get("end") or 0)
                sev = str(it.get("severity") or "").strip().upper()
                title = str(it.get("title") or "").strip()
                uid = str(it.get("uid") or "").strip()
                if path and start > 0 and end >= start and sev and title and uid:
                    trailer_index[(path, start, end, sev, title)] = uid
            for f in parsed:
                key = (f.path, f.start, f.end, f.severity.upper(), f.title)
                if key in trailer_index:
                    f.uid = trailer_index[key]
        findings.extend(parsed)

    # Build selected findings list (preserve order) when composer provided UIDs
    selected_findings: List[Finding] = []
    if selected_ids:
        # Index validator trailer findings by uid and tuple for robust matching
        trailer_by_uid: Dict[str, dict] = {}
        for it in validator_trailer_findings:
            uid = str(it.get("uid") or "").strip()
            if uid:
                trailer_by_uid[uid] = it
        # Index parsed findings for lookup by (path,start,end,sev,title)
        parsed_index: Dict[Tuple[str, int, int, str, str], Finding] = {}
        parsed_alt_index: Dict[Tuple[str, int, int, str], Finding] = {}
        for f in findings:
            parsed_index[(f.path, f.start, f.end, f.severity.upper(), f.title)] = f
            parsed_alt_index[(f.path, f.start, f.end, f.severity.upper())] = f
        for uid in selected_ids:
            fobj: Optional[Finding] = None
            t = trailer_by_uid.get(uid)
            if t:
                key = (
                    str(t.get("path") or "").strip(),
                    int(t.get("start") or 0),
                    int(t.get("end") or 0),
                    str(t.get("severity") or "").strip().upper(),
                    str(t.get("title") or "").strip(),
                )
                fobj = parsed_index.get(key)
                if not fobj:
                    key2 = (key[0], key[1], key[2], key[3])
                    fobj = parsed_alt_index.get(key2)
                if not fobj and key[0] and key[1] > 0 and key[2] >= key[1]:
                    # Create a minimal finding from trailer
                    fobj = Finding(
                        severity=key[3] or "HIGH",
                        title=key[4] or "Selected finding",
                        path=key[0],
                        start=key[1],
                        end=key[2],
                        desc="",
                        suggestion_raw="",
                    )
                    fobj.uid = uid
            else:
                # Fallback: search parsed findings by uid
                fobj = next((pf for pf in findings if pf.uid == uid), None)
            if fobj and fobj.severity in include_sevs:
                selected_findings.append(fobj)
        base_list = selected_findings
    else:
        # Filter by severities, then dedupe
        findings = [f for f in findings if f.severity in include_sevs]
        seen: set[Tuple[str, int, int, str]] = set()
        deduped: List[Finding] = []
        for f in findings:
            k = f.key()
            if k in seen:
                continue
            seen.add(k)
            deduped.append(f)
        base_list = deduped

    # Cap number of comments
    base_list = base_list[: max(0, int(args.max_comments))]

    # Build inline comments
    comments: List[Dict[str, object]] = []
    # Optional bounds check against workspace files to reduce 422 errors
    repo_root = Path(os.environ.get("GITHUB_WORKSPACE") or ".")
    for f in base_list:
        # Clamp line numbers to file length when possible
        file_path = (repo_root / f.path).resolve()
        if file_path.is_file():
            try:
                line_count = sum(1 for _ in file_path.open("r", encoding="utf-8", errors="ignore"))
                if f.end > line_count:
                    f.end = line_count
                if f.start > line_count:
                    # Skip invalid locations entirely
                    continue
            except Exception:
                pass
        # Compose comment body with optional suggestion
        parts: List[str] = []
        parts.append(f"### [{f.severity}] {f.title}")
        if f.desc.strip():
            parts.append("")
            parts.append(f.desc.strip())
        # Only submit commit suggestions when the replacement likely covers the full selected range
        submitted_suggestion = False
        if f.suggestion_replacement is not None:
            repl = f.suggestion_replacement.rstrip("\n")
            repl_lines = repl.splitlines()
            n_range = f.end - f.start + 1
            if (
                (n_range == 1 and len(repl_lines) == 1) or
                (n_range > 1 and len(repl_lines) == n_range) or
                (repl == "" and n_range >= 1)
            ):
                parts.append("")
                parts.append("```suggestion")
                if repl:
                    parts.append(repl)
                parts.append("```")
                submitted_suggestion = True
        if not submitted_suggestion and f.suggestion_raw.strip():
            # Detect deletion-only diffs and convert to empty GH suggestion
            raw = f.suggestion_raw
            lang, inner = _extract_first_code_block(raw)
            text = inner if inner is not None else raw
            lines = [ln.strip() for ln in text.splitlines()]
            has_add = any(ln.startswith('+') and not ln.startswith('++') for ln in lines)
            has_del = any(ln.startswith('-') and not ln.startswith('--') for ln in lines)
            if has_del and not has_add:
                parts.append("")
                parts.append("```suggestion")
                parts.append("```")
                submitted_suggestion = True
        if not submitted_suggestion and f.suggestion_raw.strip():
            parts.append("")
            # Do not include fenced blocks if we can't guarantee a commit suggestion
            cleaned = _TRAILER_JSON_RE.sub("", f.suggestion_raw.strip())
            cleaned = _FENCED_BLOCK_RE.sub("", cleaned).strip()
            if cleaned:
                parts.append(cleaned)
        # Always include the feedback CTA
        parts.append("")
        parts.append("Please leave a reaction 👍/👎 to this suggestion to improve future reviews for everyone!")
        body_text = "\n".join(parts).strip()
        # Rewrite style-guide references to clickable blob URLs
        body_text = _absolutize_location_links(body_text, repo if repo else None, sha if sha else None)

        c: Dict[str, object] = {
            "path": f.path,
            "side": "RIGHT",
            "body": body_text,
        }
        if f.start == f.end:
            c["line"] = f.end
        else:
            c["start_line"] = f.start
            c["line"] = f.end
            c["start_side"] = "RIGHT"
        comments.append(c)

    # Always submit a COMMENT review, never approve or request changes.
    event = "COMMENT"

    out = {
        "body": body,
        "event": event,
        "comments": comments,
        "commit_id": sha or None,
    }
    print(json.dumps(out, ensure_ascii=False))


if __name__ == "__main__":
    main()


================================================
FILE: .github/scripts/common.mjs
================================================
export async function hidePriorCommentsWithPrefix({
  github, // injected by GitHub
  context, // injected by GitHub
  exec, // injected by GitHub
  prefix = '',
  resolved = true,
  user = 'github-actions[bot]',
}) {
  const comments = await withRetry(() =>
    github.rest.issues.listComments({
      owner: context.repo.owner,
      repo: context.repo.repo,
      issue_number: context.issue.number,
    })
  );
  await exec.exec('sleep 0.5s');
  for (const comment of comments.data) {
    const commentData = await withRetry(() =>
      github.graphql(`
        query($nodeId: ID!) {
          node(id: $nodeId) {
            ... on IssueComment {
              isMinimized
            }
          }
        }
      `, { nodeId: comment.node_id })
    );
    await exec.exec('sleep 0.5s');
    const isHidden = commentData?.node?.isMinimized;
    if (isHidden) { continue; }
    if (
      comment.user.login === user &&
      comment.body.startsWith(prefix)
    ) {
      console.log('Comment node_id:', comment.node_id);
      const commentStatus = await withRetry(() =>
        github.graphql(`
          mutation($subjectId: ID!, $classifier: ReportedContentClassifiers!) {
            minimizeComment(input: {
              subjectId: $subjectId,
              classifier: $classifier
            }) {
              minimizedComment {
                isMinimized
                minimizedReason
              }
            }
          }
        `, {
          subjectId: comment.node_id,
          classifier: resolved ? 'RESOLVED' : 'OUTDATED',
        })
      );
      await exec.exec('sleep 0.5s');
      console.log(commentStatus);
    }
  }
}

export async function createComment({
  github, // injected by GitHub
  context, // injected by GitHub
  exec, // injected by GitHub
  body = '',
}) {
  await withRetry(() =>
    github.rest.issues.createComment({
      owner: context.repo.owner,
      repo: context.repo.repo,
      issue_number: context.issue.number,
      body: body,
    })
  );
  await exec.exec('sleep 0.2s');
}

/** @param fn {() => Promise<any>} */
async function withRetry(fn, maxRetries = 3, baseDelayMs = 1500) {
  let lastError;
  for (let attempt = 1; attempt <= maxRetries; attempt += 1) {
    try {
      return await fn();
    } catch (error) {
      // Don't retry on 4xx errors (client errors), only on 5xx or network issues
      if (error.status && error.status >= 400 && error.status < 500) {
        throw error;
      }
      lastError = error;

      // Exponential backoff
      const delay = baseDelayMs * Math.pow(2, attempt - 1);
      console.log(`Attempt ${attempt} failed, retrying in ${delay / 1000}s...`);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  // Exhausted all retries without success
  throw lastError;
}
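The retry policy above (fail fast on 4xx client errors, exponential backoff on everything else) can be sketched in Python, matching this repo's other scripts; the sketch mirrors the logic rather than reproducing the JavaScript API:

```python
import time

# Python sketch of the same retry policy as withRetry: client errors (4xx)
# fail fast, everything else retries with exponential backoff.
def with_retry(fn, max_retries=3, base_delay_s=1.5, sleep=time.sleep):
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            return fn()
        except Exception as error:
            status = getattr(error, "status", None)
            if status is not None and 400 <= status < 500:
                raise  # do not retry client errors
            last_error = error
            sleep(base_delay_s * 2 ** (attempt - 1))  # 1.5s, 3s, 6s, ...
    raise last_error
```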


================================================
FILE: .github/scripts/generate-v2-api-table.py
================================================
import json
import re
from pathlib import Path
from collections import defaultdict

# Define which specs to process and where to inject tables
SPECS = [
    {
        'spec_path': 'ecosystem/api/toncenter/v2.json',
        'mdx_path': 'ecosystem/api/toncenter/v2/overview.mdx',
        'marker': 'API_V2_ENDPOINTS',
        'link_base': '/ecosystem/api/toncenter/v2',
        'exclude_tags': ['rpc'],
        'include_jsonrpc': True,
    },
]

def load_openapi_spec(filepath: Path) -> dict:
    """Load the OpenAPI JSON file."""
    with open(filepath, 'r', encoding='utf-8') as f:
        return json.load(f)


def extract_endpoints(spec: dict, exclude_tags: list | None = None) -> list:
    """Extract endpoints from the OpenAPI spec."""
    exclude_tags = [t.lower() for t in (exclude_tags or [])]

    endpoints = []
    seen_paths = set()
    paths = spec.get('paths', {})

    for path, path_item in paths.items():
        for method in ['get', 'post', 'put', 'patch', 'delete']:
            if method not in path_item:
                continue

            operation = path_item[method]
            tags = operation.get('tags', ['Other'])
            tags_lower = [t.lower() for t in tags]

            # Skip if ALL tags are in exclude list
            if all(t in exclude_tags for t in tags_lower):
                continue

            # Use first non-excluded tag as category
            tag = next((t for t in tags if t.lower() not in exclude_tags), tags[0])

            # Avoid duplicates
            if path in seen_paths:
                continue
            seen_paths.add(path)

            endpoints.append({
                'path': path,
                'method': method.upper(),
                'tag': tag,
                'summary': operation.get('summary', ''),
                'operationId': operation.get('operationId', ''),
            })

    return endpoints


def generate_mintlify_link(endpoint: dict, base_path: str) -> str:
    """Generate Mintlify documentation link based on summary (slugified)."""
    tag = endpoint['tag'].lower().replace(' ', '-').replace('_', '-')
    summary = endpoint.get('summary', '')

    if summary:
        # Mintlify slugifies the summary for the URL
        # "Get account state and balance" -> "get-account-state-and-balance"
        slug = summary.lower()
        slug = re.sub(r'[^a-z0-9\s-]', '', slug)
        slug = re.sub(r'\s+', '-', slug)
        slug = re.sub(r'-+', '-', slug)
        slug = slug.strip('-')
        return f"{base_path}/{tag}/{slug}"

    operation_id = endpoint.get('operationId', '')
    if operation_id:
        clean_op_id = operation_id.replace('_get', '').replace('_post', '')
        slug = re.sub(r'([a-z])([A-Z])', r'\1-\2', clean_op_id).lower()
        return f"{base_path}/{tag}/{slug}"

    path_slug = endpoint['path'].split('/')[-1].lower()
    return f"{base_path}/{tag}/{path_slug}"
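The slug derivation this function assumes Mintlify performs, reduced to a standalone helper (the Mintlify behavior itself is an assumption documented in the comments above):

```python
import re

# Standalone version of the summary slugification generate_mintlify_link
# assumes: lowercase, drop punctuation, hyphenate whitespace, collapse runs.
def slugify(summary: str) -> str:
    slug = summary.lower()
    slug = re.sub(r"[^a-z0-9\s-]", "", slug)  # drop punctuation
    slug = re.sub(r"\s+", "-", slug)          # spaces -> hyphens
    slug = re.sub(r"-+", "-", slug)           # collapse hyphen runs
    return slug.strip("-")
```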


def generate_table(endpoints: list, link_base: str) -> str:
    """Generate markdown table from endpoints."""
    # Group by tag
    grouped = defaultdict(list)
    for ep in endpoints:
        grouped[ep['tag']].append(ep)

    # Custom sort order
    tag_order = ['accounts', 'blocks', 'transactions', 'send', 'run method', 'utils', 'configuration', 'json-rpc']

    def sort_key(tag):
        try:
            return tag_order.index(tag.lower())
        except ValueError:
            return len(tag_order)

    sorted_tags = sorted(grouped.keys(), key=sort_key)

    lines = [
        "| Category | Method | Description |",
        "| -------- | ------ | ----------- |",
    ]

    for tag in sorted_tags:
        for ep in grouped[tag]:
            method = ep['method']
            path = ep['path'].replace('/api/v2', '').replace('/api/v3', '')
            summary = ep['summary']
            link = generate_mintlify_link(ep, link_base)

            display_tag = tag.capitalize() if tag.islower() else tag
            method_display = f"[`{method} {path}`]({link})"

            lines.append(f"| **{display_tag}** | {method_display} | {summary} |")

    return '\n'.join(lines)


def process_spec(config: dict, repo_root: Path) -> str | None:
    """Process a single OpenAPI spec and generate table."""
    spec_path = repo_root / config['spec_path']

    if not spec_path.exists():
        print(f"Spec not found: {spec_path}")
        return None

    spec = load_openapi_spec(spec_path)
    if spec is None:
        return None

    endpoints = extract_endpoints(spec, config.get('exclude_tags', []))

    # Optionally add JSON-RPC endpoint
    if config.get('include_jsonrpc'):
        paths = spec.get('paths', {})
        for rpc_path in ['/api/v2/jsonRPC', '/api/v3/jsonRPC']:
            if rpc_path in paths:
                jsonrpc = paths[rpc_path].get('post', {})
                endpoints.append({
                    'path': rpc_path,
                    'method': 'POST',
                    'tag': 'JSON-RPC',
                    'summary': jsonrpc.get('summary', 'JSON-RPC endpoint'),
                    'operationId': jsonrpc.get('operationId', 'jsonRPC_post'),
                })

    return generate_table(endpoints, config['link_base'])


def inject_table_into_mdx(mdx_path: Path, marker: str, table: str) -> bool:
    """
    Inject generated table into MDX file between marker comments.

    Markers in MDX should look like:
    {/* BEGIN_AUTO_GENERATED: API_V2_ENDPOINTS */}
    {/* END_AUTO_GENERATED: API_V2_ENDPOINTS */}
    """
    if not mdx_path.exists():
        print(f"MDX not found: {mdx_path}")
        return False

    content = mdx_path.read_text()

    # Pattern to match the marker block (handles both empty and filled markers)
    pattern = rf'(\{{/\* BEGIN_AUTO_GENERATED: {marker} \*/\}})[ \t]*\n.*?(\{{/\* END_AUTO_GENERATED: {marker} \*/\}})'

    if not re.search(pattern, content, re.DOTALL):
        print(f"      Markers not found in {mdx_path}")
        print(f"      Add these markers where you want the table:")
        print(f"      {{/* BEGIN_AUTO_GENERATED: {marker} */}}")
        print(f"      {{/* END_AUTO_GENERATED: {marker} */}}")
        return False

    # Replace content between markers
    new_content = re.sub(
        pattern,
        rf'\1\n{table}\n\2',
        content,
        flags=re.DOTALL
    )

    if new_content != content:
        mdx_path.write_text(new_content)
        return True

    return False


def find_repo_root() -> Path:
    """Find the repository root (where docs.json is located)."""
    current = Path(__file__).resolve().parent

    for parent in [current] + list(current.parents):
        if (parent / 'docs.json').exists():
            return parent

    # Fallback: this script lives in .github/scripts, so the root is two levels up
    return current.parent.parent


def main():
    repo_root = find_repo_root()

    for config in SPECS:
        print(f"\nProcessing: {config['spec_path']}")

        table = process_spec(config, repo_root)
        if not table:
            continue

        mdx_path = repo_root / config['mdx_path']
        marker = config['marker']

        if inject_table_into_mdx(mdx_path, marker, table):
            print(f"  Updated {config['mdx_path']}")
        else:
            print(f"  No changes needed or markers missing")

    print("\n Done")


if __name__ == '__main__':
    main()


================================================
FILE: .github/scripts/generate-v3-api-table.py
================================================
import re
import sys
from pathlib import Path
from collections import defaultdict

try:
    import yaml
except ImportError:
    print("PyYAML not installed. Run: pip install pyyaml")
    sys.exit(1)

SPEC_PATH = 'ecosystem/api/toncenter/v3.yaml'
MDX_PATH = 'ecosystem/api/toncenter/v3/overview.mdx'
MARKER = 'API_V3_ENDPOINTS'
LINK_BASE = '/ecosystem/api/toncenter/v3'

# Tag display order
TAG_ORDER = [
    'accounts',
    'actions and traces',
    'blockchain data',
    'jettons',
    'nfts',
    'dns',
    'multisig',
    'vesting',
    'stats',
    'utils',
    'api/v2',
]

# Map tag slugs to Mintlify's actual URL slugs
TAG_SLUG_MAP = {
    'api-v2': 'apiv2',
}


def load_openapi_spec(filepath: Path) -> dict:
    """Load the OpenAPI YAML file."""
    with open(filepath, 'r', encoding='utf-8') as f:
        return yaml.safe_load(f)


def extract_endpoints(spec: dict) -> list:
    """Extract endpoints from the OpenAPI spec."""
    endpoints = []
    seen_paths = set()
    paths = spec.get('paths', {})

    for path, path_item in paths.items():
        for method in ['get', 'post', 'put', 'patch', 'delete']:
            if method not in path_item:
                continue

            operation = path_item[method]
            tags = operation.get('tags', ['Other'])
            tag = tags[0] if tags else 'Other'

            # Avoid duplicates
            if path in seen_paths:
                continue
            seen_paths.add(path)

            endpoints.append({
                'path': path,
                'method': method.upper(),
                'tag': tag,
                'summary': operation.get('summary', ''),
                'operationId': operation.get('operationId', ''),
            })

    return endpoints


def generate_mintlify_link(endpoint: dict) -> str:
    """Generate Mintlify documentation link based on summary"""
    tag = endpoint['tag'].lower().replace(' ', '-').replace('_', '-').replace('/', '-')

    # Apply tag slug mapping for Mintlify
    tag = TAG_SLUG_MAP.get(tag, tag)

    summary = endpoint.get('summary', '')

    if summary:
        slug = summary.lower()
        slug = re.sub(r'[^a-z0-9\s-]', '', slug)
        slug = re.sub(r'\s+', '-', slug)
        slug = re.sub(r'-+', '-', slug)
        slug = slug.strip('-')
        return f"{LINK_BASE}/{tag}/{slug}"

    operation_id = endpoint.get('operationId', '')
    if operation_id:
        clean_op_id = operation_id.replace('_get', '').replace('_post', '')
        slug = re.sub(r'([a-z])([A-Z])', r'\1-\2', clean_op_id).lower()
        return f"{LINK_BASE}/{tag}/{slug}"

    path_slug = endpoint['path'].split('/')[-1].lower()
    return f"{LINK_BASE}/{tag}/{path_slug}"


def generate_table(endpoints: list) -> str:
    """Generate markdown table from endpoints."""
    # Group by tag
    grouped = defaultdict(list)
    for ep in endpoints:
        grouped[ep['tag']].append(ep)

    def sort_key(tag):
        try:
            return TAG_ORDER.index(tag.lower())
        except ValueError:
            return len(TAG_ORDER)

    sorted_tags = sorted(grouped.keys(), key=sort_key)

    lines = [
        "| Category | Method | Description |",
        "| -------- | ------ | ----------- |",
    ]

    for tag in sorted_tags:
        for ep in grouped[tag]:
            method = ep['method']
            path = ep['path'].replace('/api/v3', '')
            summary = ep['summary']
            link = generate_mintlify_link(ep)

            # Handle tag display
            display_tag = tag
            if tag.lower() == 'api/v2':
                display_tag = 'Legacy (v2)'
            elif tag.islower():
                display_tag = tag.capitalize()

            method_display = f"[`{method} {path}`]({link})"

            lines.append(f"| **{display_tag}** | {method_display} | {summary} |")

    return '\n'.join(lines)


def inject_table_into_mdx(mdx_path: Path, table: str) -> bool:
    """Inject generated table into MDX file between marker comments."""
    if not mdx_path.exists():
        print(f"   MDX not found: {mdx_path}")
        return False

    content = mdx_path.read_text()

    # Pattern to match the marker block
    pattern = rf'(\{{/\* BEGIN_AUTO_GENERATED: {MARKER} \*/\}})[ \t]*\n.*?(\{{/\* END_AUTO_GENERATED: {MARKER} \*/\}})'

    if not re.search(pattern, content, re.DOTALL):
        print(f"      Markers not found in {mdx_path}")
        print(f"      Add these markers where you want the table:")
        print(f"      {{/* BEGIN_AUTO_GENERATED: {MARKER} */}}")
        print(f"      {{/* END_AUTO_GENERATED: {MARKER} */}}")
        return False

    new_content = re.sub(
        pattern,
        rf'\1\n{table}\n\2',
        content,
        flags=re.DOTALL
    )

    if new_content != content:
        mdx_path.write_text(new_content)
        return True

    return False


def find_repo_root() -> Path:
    """Find the repository root (where docs.json is located)."""
    current = Path(__file__).resolve().parent

    for parent in [current] + list(current.parents):
        if (parent / 'docs.json').exists():
            return parent

    # Fallback: this script lives in .github/scripts, so the root is two levels up
    return current.parent.parent


def main():
    repo_root = find_repo_root()

    spec_path = repo_root / SPEC_PATH
    mdx_path = repo_root / MDX_PATH

    print(f"\n Processing: {SPEC_PATH}")

    if not spec_path.exists():
        print(f"Spec not found: {spec_path}")
        return

    spec = load_openapi_spec(spec_path)
    endpoints = extract_endpoints(spec)

    print(f"    Found {len(endpoints)} endpoints")

    table = generate_table(endpoints)

    if inject_table_into_mdx(mdx_path, table):
        print(f"   Updated {MDX_PATH}")
    else:
        print(f"   No changes needed or markers missing")

    print("\n Done")


if __name__ == '__main__':
    main()


================================================
FILE: .github/scripts/rewrite_review_links.py
================================================
#!/usr/bin/env python3
"""Convert repo-relative doc links in the review body to absolute blob URLs."""

from __future__ import annotations

import os
import re
import sys


def main() -> None:
    text = sys.stdin.read()
    if not text:
        sys.stdout.write(text)
        return

    repo = os.environ.get("GITHUB_REPOSITORY")
    sha = os.environ.get("PR_HEAD_SHA")
    if not repo:
        sys.stdout.write(text)
        return

    blob_prefix = f"https://github.com/{repo}/blob/"
    doc_blob_prefix = f"{blob_prefix}{sha or 'main'}/"
    style_blob_prefix = f"{blob_prefix}main/"
    style_rel = "contribute/style-guide-extended.mdx"

    def absolutize_location(path: str) -> str:
        if path.startswith("http://") or path.startswith("https://"):
            return path
        # lstrip("./") strips any leading '.' and '/' characters (so "../foo"
        # would become "foo"); remove only the literal "./" prefix instead.
        normalized = path.removeprefix("./")
        base = style_blob_prefix if normalized.startswith(style_rel) else doc_blob_prefix
        return f"{base}{normalized}"

    lines: list[str] = []
    for line in text.splitlines():
        stripped = line.lstrip()
        indent_len = len(line) - len(stripped)
        for marker in ("- Location:", "Location:", "* Location:"):
            if stripped.startswith(marker):
                prefix, _, rest = stripped.partition(":")
                link = rest.strip()
                if link:
                    link = absolutize_location(link)
                    stripped = f"{prefix}: {link}"
                    line = " " * indent_len + stripped
                break
        lines.append(line)

    rewritten = "\n".join(lines)

    style_pattern = re.compile(rf"{re.escape(style_rel)}\?plain=1#L\d+(?:-L\d+)?")

    def replace_style_links(text: str) -> str:
        result: list[str] = []
        last = 0
        for match in style_pattern.finditer(text):
            start, end = match.span()
            result.append(text[last:start])
            link = match.group(0)
            prefix_start = max(0, start - len(style_blob_prefix))
            if text[prefix_start:start] == style_blob_prefix:
                result.append(link)
            else:
                result.append(f"{style_blob_prefix}{link.lstrip('./')}")
            last = end
        result.append(text[last:])
        return "".join(result)

    rewritten = replace_style_links(rewritten)

    # Ensure any doc blob URLs use the PR head SHA (style guide stays on main)
    if sha:
        doc_prefix_regex = re.compile(rf"{re.escape(blob_prefix)}([^/]+)/([^\s)]+)")

        def fix_doc(match: re.Match[str]) -> str:
            base = match.group(1)
            remainder = match.group(2)
            target = "main" if remainder.startswith(style_rel) else sha
            if base == target:
                return match.group(0)
            return f"{blob_prefix}{target}/{remainder}"

        rewritten = doc_prefix_regex.sub(fix_doc, rewritten)

    sys.stdout.write(rewritten)


if __name__ == "__main__":
    main()


================================================
FILE: .github/scripts/tvm-instruction-gen.py
================================================
import json
import os
import sys
import textwrap
import mistletoe

# This script lives in .github/scripts, so the workspace root is two levels up.
WORKSPACE_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir))
MDX_PATH = os.path.join(WORKSPACE_ROOT, "tvm", "instructions.mdx")

START_MARK = "{/* STATIC_START tvm_instructions */}"
END_MARK = "{/* STATIC_END tvm_instructions */}"


def humanize_category(key):
    if not key:
        return "Uncategorized"
    words = [p.capitalize() for p in key.replace("_", " ").split() if p]
    return " ".join(words) or "Uncategorized"


def render_alias(alias):
    # Hoisted out of the f-string: backslashes inside f-string expressions
    # are a SyntaxError on Python < 3.12.
    description = textwrap.indent(alias['description'].replace('\n', '<br />'), "  ")
    return f"""
- `{alias['mnemonic']}`<br />
{description}
""".strip()


def render_instruction(insn, aliases):
    doc = insn['doc']
    description = doc['description'].replace('\n', '<br />')
    aliases_block = '\n'.join(render_alias(alias) for alias in aliases)
    return f"""
#### `{doc['opcode']}` {insn['mnemonic']}

{description}<br />
**Category:** {humanize_category(doc['category'])} ({doc['category']})<br />

```fift Fift
{doc['fift']}
```

{'**Aliases**:' if aliases else ''}
{aliases_block}
""".strip()


def render_static_mdx(spec):
    return '\n\n'.join(
        render_instruction(
            insn,
            [alias for alias in spec['aliases'] if alias['alias_of'] == insn['mnemonic']],
        )
        for insn in spec['instructions']
    )


def inject_into_mdx(mdx_path, new_block):
    with open(mdx_path, "r", encoding="utf-8") as fh:
        src = fh.read()
    start_idx = src.find(START_MARK)
    end_find = src.find(END_MARK)
    # Check find() results before offsetting: adding len(END_MARK) first would
    # mask a -1 (marker not found) result.
    if start_idx == -1 or end_find == -1 or end_find <= start_idx:
        raise RuntimeError("Static markers not found or malformed in instructions.mdx")
    end_idx = end_find + len(END_MARK)

    # Preserve everything outside markers; replace inside with marker + newline + content + newline + end marker
    before = src[: start_idx + len(START_MARK)]
    after = src[end_idx:]

    # Hide the static block in the rendered page to avoid duplicating the
    # interactive table. Keeping it in the DOM still enables full-text search.
    wrapped_block = f"<div hidden>\n{new_block}\n</div>"
    replacement = f"{START_MARK}\n{wrapped_block}\n{END_MARK}"

    updated = before + replacement[len(START_MARK):] + after

    with open(mdx_path, "w", encoding="utf-8") as fh:
        fh.write(updated)


def generate(spec_input_path, spec_output_path, instructions_mdx_path):
    with open(spec_input_path, encoding='utf-8') as f:
        spec = json.load(f)
    static_block = render_static_mdx(spec)
    inject_into_mdx(instructions_mdx_path, static_block)
    update_doc_cp0(spec, spec_output_path)


def update_doc_cp0(spec, spec_output_path):
    for insn in spec['instructions']:
        doc = insn['doc']
        doc['description'] = mistletoe.markdown(doc['description'])
    for alias in spec['aliases']:
        alias['description'] = mistletoe.markdown(alias['description'])
    with open(spec_output_path, 'w', encoding='utf-8') as f:
        json.dump(spec, f, ensure_ascii=False, separators=(',', ':'))


if __name__ == "__main__":
    if len(sys.argv) != 4:
        print(f"Usage: {sys.argv[0]} <cp0-input-path> <cp0-output-path> <instructions-mdx-path>")
        sys.exit(1)
    generate(sys.argv[1], sys.argv[2], sys.argv[3])


================================================
FILE: .github/workflows/bouncer.yml
================================================
name: 🏀 Bouncer
# aka 🚪 Supervisor

env:
  # additions only
  MAX_ADDITIONS: 600
  # many target issues usually mean bigger pull requests
  MAX_ISSUES_PER_PR: 3

on:
  pull_request_target: # do NOT use actions/checkout!
    # any branches
    branches: ["**"]
    # on creation, on new commits, and description edits
    types: [opened, synchronize, edited]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}-bouncer
  cancel-in-progress: true

permissions:
  contents: read
  pull-requests: write

jobs:
  enforce-smaller-requests:
    name: "PR is manageable"
    runs-on: ubuntu-latest
    steps:
      - name: Check that the number of additions across filtered files is within the threshold
        id: stats
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const maxAdditions = Number(process.env.MAX_ADDITIONS ?? '600');
            await exec.exec('sleep 0.5s');
            const { data: files } = await github.rest.pulls.listFiles({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.payload.pull_request.number,
              per_page: 100,
            });
            const filtered = files.filter((f) =>
              f.filename.match(/\.mdx?$/) !== null &&
              !f.filename.startsWith('tvm/instructions.mdx') &&
              !f.filename.startsWith('snippets'),
            );
            const additions = filtered.reduce((acc, it) => acc + it.additions, 0);
            if (additions > maxAdditions) {
              core.setOutput('trigger', 'true');
            } else {
              core.setOutput('trigger', 'false');
            }

      - name: ${{ steps.stats.outputs.trigger == 'true' && 'An opened PR is too big to be reviewed at once!' || '...' }}
        if: github.event.action == 'opened' && steps.stats.outputs.trigger == 'true'
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            await exec.exec('sleep 0.5s');
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              body: [
                'Thank you for the contribution!',
                [
                  'Unfortunately, it is too large, with over ${{ env.MAX_ADDITIONS }} added lines,',
                  'excluding some generated or otherwise special files.',
                  'Thus, this pull request is challenging to review and iterate on.',
                ].join(' '),
                [
                  'Please split the PR into several smaller ones and consider',
                  'reverting any unrelated changes, writing less, or approaching',
                  'the problem in the issue from a different angle.',
                ].join(' '),
                [
                  'I look forward to your next submissions.',
                  'If you still intend to proceed as is, then you are at the mercy of the reviewers.',
                ].join(' '),
              ].join('\n\n'),
            });
            process.exit(1);

      - name: ${{ steps.stats.outputs.trigger == 'true' && 'Some change in the PR made it too big!' || '...' }}
        if: github.event.action != 'opened' && steps.stats.outputs.trigger == 'true'
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            core.setFailed([
              [
                'This pull request has gotten over ${{ env.MAX_ADDITIONS }} added lines,',
                'which can be challenging to review and iterate on.',
                'Please, decrease the size of this PR or consider splitting it into several smaller requests.'
              ].join(' '),
              [
                'Until then, the CI will be soft-marked as failed.',
                'If you still intend to proceed as is, then you are at the mercy of the reviewers.',
              ].join(' '),
            ].join('\n\n'));
            process.exit(1);

  enforce-better-descriptions:
    name: "Title and description"
    runs-on: ubuntu-latest
    steps:
      # pr title check
      - name: "Check that the title conforms to the simplified version of Conventional Commits"
        if: ${{ !cancelled() }}
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const title = context.payload.pull_request.title;
            const types = 'feat|fix|chore|refactor|test';
            const pattern = new RegExp(`^(revert: )?(${types})(?:\\/(${types}))?!?(\\([^\\)]+\\))?!?: [a-zA-Z].{1,200}`);
            const matches = title.match(pattern) !== null;
            if (!matches) {
              core.setFailed([
                'Title of this pull request does not conform to the simplified version of Conventional Commits used in the documentation',
                `Received: ${title}`,
                'Expected to find a type of: feat, fix, chore, refactor, or test, followed by the parts outlined here: https://www.conventionalcommits.org/en/v1.0.0/',
              ].join('\n'));
              process.exit(1);
            }

      # pr close issue limits
      - name: "Check that there is no more than ${{ env.MAX_ISSUES_PER_PR }} linked issues"
        if: ${{ !cancelled() && github.event.pull_request.user.login != 'dependabot[bot]' }}
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const maxIssuesAllowed = Number(process.env.MAX_ISSUES_PER_PR ?? '3');
            const body = context.payload.pull_request.body || '';
            const closePatterns = /\b(?:close[sd]?|fixes|fixed|fix|resolve[sd]?|towards):?\s+(?:https?:\/\/github\.com\/|[a-z0-9\-\_\/]*#\d+)/gi;
            const issueCount = [...body.matchAll(closePatterns)].length;
            if (issueCount > maxIssuesAllowed) {
              core.setFailed(`This pull request attempts to close ${issueCount} issues, while the maximum number allowed is ${maxIssuesAllowed}.`);
              process.exit(1);
            }
            const changelogPattern = /\bchange\s*log:?\s+https?:\/\/.*?\.mdx?/gi;
            const hasChangelog = body.match(changelogPattern) !== null;
            if (issueCount === 0 && !hasChangelog) {
              core.setFailed([
                'This pull request does not resolve any issues — no close patterns found in the description.',
                'Please, specify an issue by writing `Closes #that-issue-number` in the description of this PR.',
                'If there is no such issue, create a new one: https://github.com/ton-org/docs/issues/1366#issuecomment-3560650817',
                '\nIf this PR updates descriptions in accordance with a new release of a tool,',
                'provide a changelog by writing `Changelog https://....md` in the description of this PR.',
              ].join(' '));
              process.exit(1);
            }


================================================
FILE: .github/workflows/commander.yml
================================================
# Listens to new comments with /commands and acts accordingly
name: 📡 Commander

env:
  HUSKY: 0
  NODE_VERSION: 20

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}-commander
  cancel-in-progress: true

permissions:
  contents: read
  pull-requests: write

jobs:
  fmt:
    name: "Fix formatting"
    runs-on: ubuntu-latest
    if: |
      (
        github.event_name == 'pull_request_review_comment' ||
        (
          github.event_name == 'issue_comment' &&
          github.event.issue.pull_request != null
        )
      ) &&
      contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.comment.author_association) &&
      (startsWith(github.event.comment.body, '/fmt ') || github.event.comment.body == '/fmt')
    steps:
      # This is done cautiously to confirm whether the comment comes from a PR opened from a fork.
      # If so, all other steps are skipped and nothing important is run afterwards.
      - name: Gather PR context in env variables
        env:
          FROM_PR: ${{ github.event.pull_request.number }}
          FROM_ISSUE: ${{ github.event.issue.number }}
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const fs = require('node:fs');
            const prNumRaw = process.env.FROM_PR ?? process.env.FROM_ISSUE ?? '';
            const prNum = Number(prNumRaw);
            if (isNaN(prNum) || prNum <= 0 || prNum >= 1e20) {
              console.error(`PR number was not provided or is invalid: ${prNumRaw}`);
              process.exit(1);
            }
            core.exportVariable('PR_NUMBER', prNumRaw);
            const { data: pr } = await github.rest.pulls.get({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: prNum,
            });
            core.exportVariable('BASE_REF', pr.base.ref);
            core.exportVariable('HEAD_REF', pr.head.ref);
            const headRepo = pr.head.repo?.full_name ?? '';
            const thisRepo = `${context.repo.owner}/${context.repo.repo}`;
            if (headRepo === '' || headRepo !== thisRepo) {
              core.exportVariable('IS_FORK', 'true');
              core.notice('This job does not run in forks for a vast number of reasons. Please, apply the necessary fixes yourself.');
            } else {
              core.exportVariable('IS_FORK', 'false');
            }

      - name: Checkout the PR branch
        if: env.IS_FORK != 'true'
        uses: actions/checkout@v4
        with:
          ref: ${{ env.HEAD_REF }}
          fetch-depth: 0

      - name: Setup Node.js
        if: env.IS_FORK != 'true'
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"

      - name: Install dependencies
        if: env.IS_FORK != 'true'
        run: |
          corepack enable
          npm ci

      - name: Get changed MDX and Markdown files
        if: env.IS_FORK != 'true'
        id: changed-files
        uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47
        with:
          files: |
            **.md
            **.mdx
          separator: " "
          base_sha: ${{ env.BASE_REF }}

      - name: Apply formatting
        if: env.IS_FORK != 'true'
        id: fix-fmt
        env:
          ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const files = (process.env.ALL_CHANGED_FILES ?? '')
              .trim().split(' ').filter(Boolean).filter((it) => it.match(/\.mdx?$/) !== null);
            if (files.length === 0) {
              console.log('\nNo such files affected!');
              process.exit(0);
            }
            try {
              await exec.exec('npm', ['run', 'check:fmt:some', '--', ...files], {
                silent: true, // >/dev/null 2>&1
              });
              console.log('\nNo issues');
              core.setOutput('changes', 'false');
            } catch (_) {
              console.log('\nFound issues, fixing...');
              await exec.exec('npm', ['run', 'fmt:some', '--', ...files], {
                silent: true, // >/dev/null 2>&1
              });
              core.setOutput('changes', 'true');
            }

      - name: Commit changes, if any
        if: env.IS_FORK != 'true' && steps.fix-fmt.outputs.changes == 'true'
        uses: stefanzweifel/git-auto-commit-action@28e16e81777b558cc906c8750092100bbb34c5e3 # v7.0.0
        with:
          commit_message: "fix: formatting"
          branch: ${{ env.HEAD_REF }}


================================================
FILE: .github/workflows/generate-api-tables.yml
================================================
name: Generate API Tables

env:
  PYTHON_VERSION: "3.11"
  NODE_VERSION: "20"

on:
  push:
    paths:
      - 'ecosystem/api/toncenter/v2.json'
      - 'ecosystem/api/toncenter/v3.yaml'
    branches:
      - main

permissions:
  contents: write

jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"

      - name: Install dependencies
        run: |
          pip install pyyaml==6.0.3
          corepack enable
          npm ci

      - name: Generate tables
        run: |
          python3 .github/scripts/generate-v2-api-table.py
          python3 .github/scripts/generate-v3-api-table.py
          npm run fmt:some -- ecosystem/api/toncenter/v2/overview.mdx ecosystem/api/toncenter/v3/overview.mdx

      - name: Commit changes
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add ecosystem/api/toncenter/v2/overview.mdx ecosystem/api/toncenter/v3/overview.mdx
          git diff --staged --quiet || git commit -m "chore(bot): auto-generate API tables"
          git push


================================================
FILE: .github/workflows/instructions.yml
================================================
name: 🕘 Instructions update

on:
  schedule:
    - cron: '17 3 * * *'
  workflow_dispatch:
    inputs:
      source_branch:
        description: 'Branch in ton-org/tvm-spec to fetch cp0.json from'
        required: false
        default: 'master'
        type: string

permissions:
  contents: write
  pull-requests: write

jobs:
  fetch-and-release:
    if: ${{ github.event_name == 'workflow_dispatch' || github.repository == 'ton-org/docs' }}
    runs-on: ubuntu-latest
    env:
      SOURCE_BRANCH: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.source_branch || 'master' }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0  # needed for pushing later

      - name: Set up Git
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.x'

      - name: Install Python dependencies
        run: pip install mistletoe==1.5.0

      - name: Clone ton-org/tvm-spec
        run: git clone https://github.com/ton-org/tvm-spec && cd tvm-spec && git checkout $SOURCE_BRANCH

      - name: Update instructions.mdx and cp0.json
        # cp0.txt is a workaround: mintlify gives 404 for url /resources/tvm/cp0.json -_-
        run: python3 .github/scripts/tvm-instruction-gen.py tvm-spec/cp0.json resources/tvm/cp0.txt tvm/instructions.mdx

      - name: Check for changes
        id: git-diff
        run: |
          git add resources/tvm/cp0.txt tvm/instructions.mdx
          CHANGED_FILES=$(git diff --cached --name-only | tr '\n' ' ')
          echo "changed=$CHANGED_FILES" >> $GITHUB_OUTPUT

      - name: Create Pull Request if needed
        if: ${{ steps.git-diff.outputs.changed != '' }}
        id: cpr
        uses: peter-evans/create-pull-request@c5a7806660adbe173f04e3e038b0ccdcd758773c # v6
        with:
          commit-message: "feat: update TVM instructions list"
          title: "feat: update TVM instructions list"
          branch: "update-spec"
          add-paths: |
            resources/tvm/cp0.txt
            tvm/instructions.mdx
          token: ${{ secrets.GITHUB_TOKEN }}


================================================
FILE: .github/workflows/linter.yml
================================================
name: 💅 Linting suite

env:
  HUSKY: 0
  NODE_VERSION: 20

on:
  pull_request:
    branches: ["**"]
  workflow_dispatch:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}-linter
  cancel-in-progress: true

permissions:
  contents: read
  pull-requests: write

jobs:
  format-check:
    name: "Formatting"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"

      - name: Install dependencies
        run: |
          corepack enable
          npm ci

      - name: Get changed MDX and Markdown files
        id: changed-files
        uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47
        with:
          files: |
            **.md
            **.mdx
          separator: " "

      - name: Check formatting of MDX and Markdown files
        id: check-fmt
        env:
          ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const files = (process.env.ALL_CHANGED_FILES ?? '')
              .trim().split(' ').filter(Boolean).filter((it) => it.match(/\.mdx?$/) !== null);
            if (files.length === 0) {
              console.log('\nNo matching MDX or Markdown files affected!');
              process.exit(0);
            }
            console.log('\nChecking formatting of the following MDX and Markdown files affected by this PR:\n');
            for (const file of files) {
              console.log(`- ${file}`);
            }
            try {
              await exec.exec('npm', ['run', 'check:fmt:some', '--', ...files], {
                silent: true, // >/dev/null 2>&1
              });
            } catch (_) {
              // Comment right in the actions output
              console.log('\n\x1b[31mError:\x1b[0m Some files are not properly formatted!');
              console.log('1. Install necessary dependencies: \x1b[31mnpm ci\x1b[0m');
              console.log(`2. Run this command to fix the issues: \x1b[31mnpm run fmt:some -- ${files.join(' ')}\x1b[0m`);

              // Rethrow the exit code of the failed formatting check
              core.setFailed('Some files are not properly formatted!');
              process.exit(1);
            }

      - name: Hide prior PR comments and issue a new one in case of failure
        if: |
          (
            !cancelled() &&
            steps.changed-files.conclusion == 'success' &&
            github.event_name == 'pull_request' &&
            (
              github.event.pull_request.head.repo.fork == false ||
              github.event.pull_request.head.repo.full_name == github.repository
            )
          )
        env:
          ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
          SUCCESS: ${{ steps.check-fmt.conclusion == 'failure' && 'false' || 'true' }}
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const { hidePriorCommentsWithPrefix, createComment } = await import('${{ github.workspace }}/.github/scripts/common.mjs');
            const success = JSON.parse(process.env.SUCCESS ?? 'false');
            const files = (process.env.ALL_CHANGED_FILES ?? '')
              .trim().split(' ').filter(Boolean).filter((it) => it.match(/\.mdx?$/) !== null);
            const comment = [
              'To fix the **formatting** issues:\n',
              '1. Install necessary dependencies: `npm ci`',
              '2. Then, run this command:',
              '   ```shell',
              `   npm run fmt:some -- ${files.join(' ')}`,
              '   ```',
              '\nAlternatively, a maintainer can comment /fmt in this PR to auto-apply fixes in a new commit from the bot.',
            ].join('\n');
            const prefix = comment.slice(0, 30);
            await hidePriorCommentsWithPrefix({ github, context, exec, prefix, resolved: success });
            // Create a new PR comment in case of a new failure
            if (!success) {
              await createComment({ github, context, exec, body: comment });
            }

  spell-check:
    name: "Spelling"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        # The fetch-depth is not set to 0 to prevent the cspell-action
        # from misfiring on files that are in main but not on this PR branch

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"

      - name: Install dependencies
        run: |
          corepack enable
          npm ci

      - name: Run CSpell on changed files
        # This action also annotates the PR
        uses: streetsidesoftware/cspell-action@v7
        with:
          check_dot_files: explicit
          suggestions: true
          config: ".cspell.jsonc"

  link-check:
    name: "Links: broken, navigation, redirects"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"

      - name: Install dependencies
        run: |
          corepack enable
          npm ci

      # Broken

      - name: Check broken links
        if: ${{ !cancelled() }}
        run: npm run check:links

      # Navigation

      - name: Check uniqueness of navigation paths in docs.json
        if: ${{ !cancelled() }}
        run: npm run check:navigation -- unique

      - name: Check existence of navigation .mdx pages in docs.json
        if: ${{ !cancelled() }}
        run: npm run check:navigation -- exist

      - name: Check coverage of .mdx pages by docs.json
        if: ${{ !cancelled() }}
        run: npm run check:navigation -- cover

      # Redirects

      - name: Check uniqueness of redirect sources in docs.json
        if: ${{ !cancelled() }}
        run: npm run check:redirects -- unique

      - name: Check existence of redirect destinations in docs.json
        if: ${{ !cancelled() }}
        run: npm run check:redirects -- exist

      - name: Check redirects against the previous TON Documentation
        if: ${{ !cancelled() }}
        run: npm run check:redirects -- previous

      - name: Check redirects against the upstream docs.json structure
        if: ${{ !cancelled() }}
        run: npm run check:redirects -- upstream


================================================
FILE: .github/workflows/pitaya.yml
================================================
name: 🤖 AI review

on:
  pull_request:
    types: [opened, ready_for_review]
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  pull_request_target:
    types: [opened]

permissions:
  contents: read
  pull-requests: write
  issues: write

jobs:
  fork-pr-note:
    if: github.event_name == 'pull_request_target' && github.event.action == 'opened' && github.event.pull_request.head.repo.full_name != github.repository
    runs-on: ubuntu-latest
    steps:
      - name: Comment external PR use /review
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: |
          set -euo pipefail
          PR_NUMBER="${{ github.event.pull_request.number }}"
          API="https://api.github.com/repos/${{ github.repository }}/issues/${PR_NUMBER}/comments"
          BODY=$(cat <<'TXT'
          Skipping AI review because this PR is from a fork. A maintainer can start the review by commenting /review in this PR.
          TXT
          )
          jq -n --arg body "$BODY" '{body:$body}' > payload.json
          curl -sS -X POST "$API" \
            -H "Authorization: Bearer ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github+json" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            -H "Content-Type: application/json" \
            -d @payload.json >/dev/null

  pr-review:
    concurrency:
      group: pitaya-ai-review-${{ github.event.pull_request.number || github.event.issue.number || github.run_id }}
      cancel-in-progress: true
    # Run on:
    # - PR events when ready_for_review or opened as non-draft
    # - Issue comments only when it's a PR thread, command is /review, and commenter is trusted
    if: |
      (
        github.event_name == 'pull_request' &&
        ((github.event.action == 'ready_for_review') || (github.event.action == 'opened' && github.event.pull_request.draft == false)) &&
        github.event.pull_request.head.repo.full_name == github.repository
      ) ||
      (
        github.event_name == 'issue_comment' &&
        github.event.issue.pull_request != null &&
        (github.event.comment.body == '/review' || startsWith(github.event.comment.body, '/review ')) &&
        (
          github.event.comment.author_association == 'OWNER' ||
          github.event.comment.author_association == 'MEMBER' ||
          github.event.comment.author_association == 'COLLABORATOR'
        )
      ) ||
      (
        github.event_name == 'pull_request_review_comment' &&
        (github.event.comment.body == '/review' || startsWith(github.event.comment.body, '/review ')) &&
        (
          github.event.comment.author_association == 'OWNER' ||
          github.event.comment.author_association == 'MEMBER' ||
          github.event.comment.author_association == 'COLLABORATOR'
        )
      )
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: PR context
        env:
          GH_TOKEN: ${{ github.token }}
          PR_FROM_PR: ${{ github.event.pull_request.number }}
          PR_FROM_ISSUE: ${{ github.event.issue.number }}
        run: |
          set -euo pipefail
          PR_NUMBER="${PR_FROM_PR:-}"
          if [ -z "${PR_NUMBER:-}" ] || [ "$PR_NUMBER" = "null" ]; then
            PR_NUMBER="${PR_FROM_ISSUE:-}"
          fi
          if [ -z "${PR_NUMBER:-}" ] || [ "$PR_NUMBER" = "null" ]; then
            echo "PR number not provided." >&2
            exit 1
          fi
          echo "PR_NUMBER=$PR_NUMBER" >> $GITHUB_ENV
          gh api repos/${{ github.repository }}/pulls/${PR_NUMBER} > pr.json
          echo "BASE_REF=$(jq -r '.base.ref' pr.json)" >> $GITHUB_ENV
          echo "HEAD_REF=$(jq -r '.head.ref' pr.json)" >> $GITHUB_ENV
          BASE_REPO="${{ github.repository }}"
          HEAD_REPO="$(jq -r '.head.repo.full_name // ""' pr.json)"
          if [ -n "$HEAD_REPO" ] && [ "$HEAD_REPO" != "$BASE_REPO" ]; then
            echo "IS_FORK=true" >> $GITHUB_ENV
          else
            echo "IS_FORK=false" >> $GITHUB_ENV
          fi

      - name: React 👀 on PR
        env:
          GH_TOKEN: ${{ github.token }}
          REPO: ${{ github.repository }}
        run: |
          set -euo pipefail
          rid=""
          if ! rid=$(gh api \
            -X POST \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            "/repos/${REPO}/issues/${PR_NUMBER}/reactions" \
            -f content=eyes \
            --jq '.id // empty' 2>/dev/null); then
            echo "::warning::Failed to add 👀 reaction to PR ${PR_NUMBER}." >&2
          fi
          if [ -n "${rid:-}" ]; then
            echo "PR_REACTION_EYES_ID=$rid" >> "$GITHUB_ENV"
          fi

      - name: React 👀 on comment
        if: github.event_name == 'issue_comment'
        env:
          GH_TOKEN: ${{ github.token }}
          REPO: ${{ github.repository }}
          COMMENT_ID: ${{ github.event.comment.id }}
        run: |
          set -euo pipefail
          rid=""
          if ! rid=$(gh api \
            -X POST \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            "/repos/${REPO}/issues/comments/${COMMENT_ID}/reactions" \
            -f content=eyes \
            --jq '.id // empty' 2>/dev/null); then
            echo "::warning::Failed to add 👀 reaction to comment ${COMMENT_ID}." >&2
          fi
          if [ -n "${rid:-}" ]; then
            echo "ISSUE_COMMENT_REACTION_EYES_ID=$rid" >> "$GITHUB_ENV"
          fi

      - name: React 👀 on inline comment
        if: github.event_name == 'pull_request_review_comment'
        env:
          GH_TOKEN: ${{ github.token }}
          REPO: ${{ github.repository }}
          COMMENT_ID: ${{ github.event.comment.id }}
        run: |
          set -euo pipefail
          rid=""
          if ! rid=$(gh api \
            -X POST \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            "/repos/${REPO}/pulls/comments/${COMMENT_ID}/reactions" \
            -f content=eyes \
            --jq '.id // empty' 2>/dev/null); then
            echo "::warning::Failed to add 👀 reaction to review comment ${COMMENT_ID}." >&2
          fi
          if [ -n "${rid:-}" ]; then
            echo "REVIEW_COMMENT_REACTION_EYES_ID=$rid" >> "$GITHUB_ENV"
          fi

      - name: Checkout PR head
        run: |
          set -euo pipefail
          git fetch origin "pull/${PR_NUMBER}/head:pr_head"
          git checkout -B pr_head pr_head

      - name: Fetch branches
        run: git fetch origin "+refs/heads/*:refs/remotes/origin/*"

      - name: Ensure base branch
        run: |
          BASE_REF="${BASE_REF:-main}"
          if ! git show-ref --verify --quiet "refs/heads/${BASE_REF}"; then
            git branch --track "${BASE_REF}" "origin/${BASE_REF}" || true
          fi

      - name: Use repo scripts
        if: env.IS_FORK != 'true'
        run: |
          set -euo pipefail
          echo "USING_TRUSTED_CI_SCRIPTS=$GITHUB_WORKSPACE/.github/scripts" >> $GITHUB_ENV

      - name: Use base scripts for forks
        if: env.IS_FORK == 'true'
        run: |
          set -euo pipefail
          mkdir -p "$RUNNER_TEMP/ai-ci"
          git show "$BASE_REF":.github/scripts/build_review_instructions.py > "$RUNNER_TEMP/ai-ci/build_review_instructions.py"
          git show "$BASE_REF":.github/scripts/build_review_payload.py > "$RUNNER_TEMP/ai-ci/build_review_payload.py"
          echo "USING_TRUSTED_CI_SCRIPTS=$RUNNER_TEMP/ai-ci" >> $GITHUB_ENV

      - name: Detect docs changes
        run: |
          set -euo pipefail
          # Compare PR head against BASE_REF and look for docs changes
          CHANGED=$(git diff --name-only "$BASE_REF"...pr_head | grep -E '(\.(md|mdx)$|^docs\.json$)' || true)
          if [ -z "$CHANGED" ]; then
            echo "DOCS_CHANGED=false" >> $GITHUB_ENV
            echo "No docs (.md, .mdx, docs.json) changes detected; skipping AI review." >&2
          else
            echo "DOCS_CHANGED=true" >> $GITHUB_ENV
            echo "$CHANGED" | sed 's/^/- /' >&2
          fi

      - name: Comment no docs changes
        if: env.DOCS_CHANGED != 'true'
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: |
          set -euo pipefail
          API="https://api.github.com/repos/${{ github.repository }}/issues/${PR_NUMBER}/comments"
          BODY=$(cat <<'TXT'
          Skipping AI review because there are no docs changes (.md, .mdx, or docs.json).
          TXT
          )
          jq -n --arg body "$BODY" '{body:$body}' > payload.json
          curl -sS -X POST "$API" \
            -H "Authorization: Bearer ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github+json" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            -H "Content-Type: application/json" \
            -d @payload.json >/dev/null

      - name: Check secrets
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        env:
          OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
        run: |
          if [ -z "${OPENROUTER_API_KEY:-}" ]; then
            echo "OPENROUTER_API_KEY is not set. Add it to repository secrets." >&2
            exit 2
          fi

      - name: Setup Python
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        uses: actions/setup-python@v5
        with:
          python-version: "3.13"

      - name: Setup uv
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        uses: astral-sh/setup-uv@v3

      - name: Checkout Pitaya
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        uses: actions/checkout@v4
        with:
          repository: tact-lang/pitaya
          path: pitaya-src

      - name: Install Pitaya deps
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        working-directory: pitaya-src
        run: uv sync

      - name: Build agent image
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        run: docker build -t pitaya-agents:latest pitaya-src

      - name: Run Pitaya review
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        working-directory: pitaya-src
        env:
          OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
          OPENROUTER_BASE_URL: https://openrouter.ai/api/v1
        run: |
          REVIEW_INSTRUCTIONS=$(python3 "$USING_TRUSTED_CI_SCRIPTS/build_review_instructions.py")

          uv run pitaya "Review this pull request" \
            --repo "$GITHUB_WORKSPACE" \
            --base-branch pr_head \
            --strategy pr-review \
            -S reviewers=2 \
            -S ci_fail_policy=never \
            -S base_branch="$BASE_REF" \
            -S include_branches="pr_head,$BASE_REF" \
            -S review_instructions="$REVIEW_INSTRUCTIONS" \
            --plugin codex \
            --model "openai/gpt-5.1" \
            --no-tui \
            --verbose

      - name: Post review
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        working-directory: pitaya-src
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: |
          set -euo pipefail

          RUN_DIR="$(ls -td .pitaya/results/run_* 2>/dev/null | head -n1)"
          if [ -z "${RUN_DIR:-}" ] || [ ! -d "$RUN_DIR" ]; then
            echo "No results directory found" >&2
            exit 1
          fi

          # Sidecar must exist (selection may be empty when approving clean PRs)
          SIDECAR="$RUN_DIR/review/index.json"
          if [ ! -f "$SIDECAR" ]; then
            echo "Sidecar not found: $SIDECAR" >&2
            exit 1
          fi
          COMMIT_ID="$(jq -r '.commit_id // empty' "$SIDECAR")"
          if [ -z "$COMMIT_ID" ]; then
            echo "commit_id missing in sidecar; aborting." >&2
            exit 1
          fi

          # Build review payload (summary + inline comments)
          INLINE_SEVERITIES="${INLINE_SEVERITIES:-HIGH}"  # comma-separated; default HIGH only
          MAX_COMMENTS="${MAX_COMMENTS:-40}"
          python3 "$USING_TRUSTED_CI_SCRIPTS/build_review_payload.py" \
            --run-dir "$RUN_DIR" \
            --repo "${{ github.repository }}" \
            --sha "$COMMIT_ID" \
            --severities "${INLINE_SEVERITIES}" \
            --max-comments "${MAX_COMMENTS}" > review_payload.json

          API="https://api.github.com/repos/${{ github.repository }}/pulls/${PR_NUMBER}/reviews"

          COMMENTS=$(jq -r '.comments | length' review_payload.json)
          BODY_TEXT=$(jq -r '.body // empty' review_payload.json)
          if [ "${BODY_TEXT// }" = "" ]; then
            BODY_TEXT="No documentation issues detected."
            jq --arg body "$BODY_TEXT" '.body = $body' review_payload.json > review_payload.tmp && mv review_payload.tmp review_payload.json
          fi

          echo "Submitting PR review (comments: $COMMENTS)..."
          HTTP_CODE=$(curl -sS -o response.json -w "%{http_code}" -X POST "$API" \
               -H "Authorization: Bearer ${GITHUB_TOKEN}" \
               -H "Accept: application/vnd.github+json" \
               -H "X-GitHub-Api-Version: 2022-11-28" \
               -H "Content-Type: application/json" \
               -d @review_payload.json || true)

          echo "GitHub API HTTP: ${HTTP_CODE:-<none>}"
          if ! [[ "$HTTP_CODE" =~ ^[0-9]{3}$ ]] || [ "$HTTP_CODE" -lt 200 ] || [ "$HTTP_CODE" -ge 300 ]; then
            echo "Response body:"; cat response.json || true; echo

            # Attempt to submit inline comments individually so good ones still land.
            COMMENT_API_INLINE="https://api.github.com/repos/${{ github.repository }}/pulls/${PR_NUMBER}/comments"
            BODY_TEXT=$(jq -r '.body // ""' review_payload.json)
            COMMIT_FOR_COMMENTS=$(jq -r '.commit_id // ""' review_payload.json)
            GOOD=0; BAD=0
            BAD_SUMMARY_FILE=$(mktemp)
            : > "$BAD_SUMMARY_FILE"

            while IFS= read -r c; do
              TMP=$(mktemp)
              echo "$c" | jq --arg commit "$COMMIT_FOR_COMMENTS" '{
                body: .body,
                commit_id: ($commit // .commit_id // ""),
                path: .path
              } + (if has("line") then {line:.line, side:(.side//"RIGHT")} else {} end)
                + (if has("start_line") then {start_line:.start_line, start_side:(.start_side//"RIGHT")} else {} end)' > "$TMP"

              HTTP_COMMENT=$(curl -sS -o response_comment.json -w "%{http_code}" -X POST "$COMMENT_API_INLINE" \
                -H "Authorization: Bearer ${GITHUB_TOKEN}" \
                -H "Accept: application/vnd.github+json" \
                -H "X-GitHub-Api-Version: 2022-11-28" \
                -H "Content-Type: application/json" \
                -d @"$TMP" || true)

              if [[ "$HTTP_COMMENT" =~ ^2[0-9][0-9]$ ]]; then
                GOOD=$((GOOD+1))
              else
                BAD=$((BAD+1))
                PATH_LINE=$(echo "$c" | jq -r '"\(.path):L\(.start_line // .line // "?")-L\(.line // .start_line // "?")"')
                BODY_SNIP=$(echo "$c" | jq -r '.body')
                BODY_SNIP_FIRST6=$(printf "%s" "$BODY_SNIP" | head -n 6)
                BODY_SNIP_LINECOUNT=$(printf "%s\n" "$BODY_SNIP" | wc -l)
                {
                  echo "- ${PATH_LINE}"
                  printf "%s" "$BODY_SNIP_FIRST6" | sed 's/^/  /'
                  if [ "$BODY_SNIP_LINECOUNT" -gt 6 ]; then
                    echo "  …(truncated)"
                  fi
                  echo
                } >> "$BAD_SUMMARY_FILE"
              fi
              rm -f "$TMP" response_comment.json
            done < <(jq -c '.comments[]' review_payload.json)

            # Build fallback timeline comment containing intro + failed inline text (if any)
            COMMENT_API="https://api.github.com/repos/${{ github.repository }}/issues/${PR_NUMBER}/comments"
            FALLBACK_FILE=$(mktemp)
            {
              echo "$BODY_TEXT"
              echo
              echo "---"
              echo "Per-comment submission: ${GOOD} posted, ${BAD} failed."
              if [ "$BAD" -gt 0 ]; then
                echo
                echo "Unposted inline comments (raw text):"
                cat "$BAD_SUMMARY_FILE"
              fi
            } > "$FALLBACK_FILE"

            jq -n --arg body "$(cat "$FALLBACK_FILE")" '{body:$body}' > payload.json
            HTTP_CODE2=$(curl -sS -o response2.json -w "%{http_code}" -X POST "$COMMENT_API" \
              -H "Authorization: Bearer ${GITHUB_TOKEN}" \
              -H "Accept: application/vnd.github+json" \
              -H "X-GitHub-Api-Version: 2022-11-28" \
              -H "Content-Type: application/json" \
              -d @payload.json || true)
            echo "Fallback GitHub API HTTP: $HTTP_CODE2"; cat response2.json || true; echo
            if ! [[ "$HTTP_CODE2" =~ ^[0-9]{3}$ ]] || [ "$HTTP_CODE2" -lt 200 ] || [ "$HTTP_CODE2" -ge 300 ]; then
              echo "::error::Failed to submit PR review, per-comment comments, and fallback comment." >&2
              exit 1
            fi
            rm -f "$BAD_SUMMARY_FILE" "$FALLBACK_FILE"
          fi

      - name: Summary
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        working-directory: pitaya-src
        run: |
          set -euo pipefail
          RUN_DIR="$(ls -td .pitaya/results/run_* 2>/dev/null | head -n1)"
          if [ -z "${RUN_DIR:-}" ]; then
            exit 0
          fi
          SUMMARY_FILE="$RUN_DIR/summary.md"
          INTRO_FILE="$RUN_DIR/review/index.json"
          {
            echo "### Pitaya Review"
            if [ -f "$INTRO_FILE" ]; then
              INTRO=$(jq -r '.intro // empty' "$INTRO_FILE")
              SEL=$(jq -r '.selected_details | length' "$INTRO_FILE")
              EVENT=$(jq -r '.event // empty' "$INTRO_FILE"); if [ -z "$EVENT" ]; then EVENT=COMMENT; fi
              COMMIT=$(jq -r '.commit_id // empty' "$INTRO_FILE")
              echo ""
              if [ -n "$INTRO" ]; then
                echo "$INTRO"
                echo ""
              fi
              echo "- Outcome: $EVENT"
              echo "- Inline suggestions: $SEL"
              if [ -n "$COMMIT" ]; then
                echo "- Reviewed commit \`$COMMIT\`"
              fi
            fi
            if [ -f "$SUMMARY_FILE" ]; then
              echo ""
              echo "<details><summary>Run stats</summary>"
              echo ""
              tail -n +2 "$SUMMARY_FILE"
              echo "</details>"
            fi
          } >> "$GITHUB_STEP_SUMMARY"

      - name: Archive logs
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        id: pitaya_artifacts
        working-directory: pitaya-src
        run: |
          set -euo pipefail
          if compgen -G ".pitaya/logs/run_*" >/dev/null || compgen -G ".pitaya/results/run_*" >/dev/null; then
            tar -czf pitaya-artifacts.tar.gz .pitaya/logs/run_* .pitaya/results/run_* 2>/dev/null || true
            echo "has_artifacts=true" >> "$GITHUB_OUTPUT"
          else
            echo "No Pitaya logs or results to archive." >&2
            echo "has_artifacts=false" >> "$GITHUB_OUTPUT"
          fi

      - name: Upload artifacts
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request') && steps.pitaya_artifacts.outputs.has_artifacts == 'true'
        uses: actions/upload-artifact@v4
        with:
          name: pitaya-logs-${{ github.run_id }}
          path: pitaya-src/pitaya-artifacts.tar.gz
          if-no-files-found: ignore
          retention-days: 7

      - name: Cleanup 👀
        if: always()
        env:
          GH_TOKEN: ${{ github.token }}
          REPO: ${{ github.repository }}
          PR_REACTION_EYES_ID: ${{ env.PR_REACTION_EYES_ID }}
          ISSUE_COMMENT_REACTION_EYES_ID: ${{ env.ISSUE_COMMENT_REACTION_EYES_ID }}
          REVIEW_COMMENT_REACTION_EYES_ID: ${{ env.REVIEW_COMMENT_REACTION_EYES_ID }}
          COMMENT_ID: ${{ github.event.comment.id }}
        run: |
          set -euo pipefail
          # Remove from PR
          if [ -n "${PR_REACTION_EYES_ID:-}" ]; then
            gh api -X DELETE \
              -H "X-GitHub-Api-Version: 2022-11-28" \
              "/repos/${REPO}/issues/${PR_NUMBER}/reactions/${PR_REACTION_EYES_ID}" \
              >/dev/null 2>&1 || echo "::warning::Failed to remove 👀 from PR ${PR_NUMBER}." >&2
          fi
          # Remove from issue comment
          if [ -n "${ISSUE_COMMENT_REACTION_EYES_ID:-}" ] && [ -n "${COMMENT_ID:-}" ]; then
            gh api -X DELETE \
              -H "X-GitHub-Api-Version: 2022-11-28" \
              "/repos/${REPO}/issues/comments/${COMMENT_ID}/reactions/${ISSUE_COMMENT_REACTION_EYES_ID}" \
              >/dev/null 2>&1 || echo "::warning::Failed to remove 👀 from issue comment ${COMMENT_ID}." >&2
          fi
          # Remove from review comment
          if [ -n "${REVIEW_COMMENT_REACTION_EYES_ID:-}" ] && [ -n "${COMMENT_ID:-}" ]; then
            gh api -X DELETE \
              -H "X-GitHub-Api-Version: 2022-11-28" \
              "/repos/${REPO}/pulls/comments/${COMMENT_ID}/reactions/${REVIEW_COMMENT_REACTION_EYES_ID}" \
              >/dev/null 2>&1 || echo "::warning::Failed to remove 👀 from review comment ${COMMENT_ID}." >&2
          fi


================================================
FILE: .gitignore
================================================
# Vale (spell and style checker)
.vale/*
!.vale/config/
!.vale/NONE/

# Miscellaneous
.DS_Store

# Editors
.idea/
.vscode/
.helix/
.vim/
.nvim/
.emacs/
.emacs.d/

# Node.js
node_modules/

# Python
__pycache__

# Generated folders
/stats/


================================================
FILE: .husky/pre-push
================================================


================================================
FILE: .prettierignore
================================================
*.mdx
/ecosystem/api/toncenter/v2/
/ecosystem/api/toncenter/v3/
/ecosystem/api/toncenter/smc-index/
/LICENSE*


================================================
FILE: .remarkignore
================================================
# Ignore folders
node_modules/
/pending/

# Ignore some whitepapers
/languages/fift/whitepaper.mdx
/foundations/whitepapers/tblkch.mdx
/foundations/whitepapers/ton.mdx
/foundations/whitepapers/tvm.mdx

# Ignore some root files
/index.mdx
/LICENSE*

# Ignore generated files and directories
/tvm/instructions.mdx
/ecosystem/api/toncenter/v2/
/ecosystem/api/toncenter/v3/
/ecosystem/api/toncenter/smc-index/


================================================
FILE: .remarkrc.mjs
================================================
import remarkFrontmatter from 'remark-frontmatter';
import remarkGfm from 'remark-gfm';
import remarkMath from 'remark-math';
import remarkMdx from 'remark-mdx';
import unifiedConsistency from 'unified-consistency';
import stringWidth from 'string-width';
import { visitParents, SKIP } from 'unist-util-visit-parents';
import { generate } from 'astring';

/**
 * @import {} from 'remark-stringify'
 * @type import('unified').Preset
 */
const remarkConfig = {
  settings: {
    bullet: '-',
    emphasis: '_',
    rule: '-',
    incrementListMarker: false,
    tightDefinitions: true,
  },
  plugins: [
    remarkFrontmatter,
    remarkMath,
    [
      remarkGfm,
      {
        singleTilde: false,
        stringLength: stringWidth,
      },
    ],
    [
      remarkMdx,
      {
        printWidth: 20,
      },
    ],
    function formatJsxElements() {
      return (tree, file) => {
        // a JSX element embedded in flow (block)
        visitParents(tree, 'mdxJsxFlowElement', (node, ancestors) => {
          try {
            if (!node.attributes) { return; }
            for (const attr of node.attributes) {
              if (
                attr.type === 'mdxJsxAttribute' &&
                attr.value?.type === 'mdxJsxAttributeValueExpression' &&
                attr.value.data?.estree
              ) {
                const expr = attr.value;

                // Slightly trim single-line expressions
                if (typeof expr.value === 'string' && !expr.value.trim().includes('\n')) {
                  expr.value = expr.value.trim();
                  delete expr.data.estree;
                  continue;
                }

                // Multi-line expressions
                if (!expr.data) { continue; }
                const indent = ancestors.length;
                const formatted = generate(expr.data.estree.body[0].expression, {
                  startingIndentLevel: indent,
                });
                expr.value = formatted;
                delete expr.data.estree;
              }
            }
          } catch (_) {
            // NOTE: Let's silently do nothing — this is the default behavior anyways
          }
        });
        // a JSX element embedded in text (span, inline)
        visitParents(tree, 'mdxJsxTextElement', (node) => {
          try {
            if (!node.attributes) { return SKIP; }
            for (const attr of node.attributes) {
              if (
                attr.type === 'mdxJsxAttribute' &&
                attr.value?.type === 'mdxJsxAttributeValueExpression' &&
                attr.value.data?.estree
              ) {
                const expr = attr.value;
                if (!expr.data) { continue; }
                const formatted = generate(expr.data.estree.body[0].expression);
                expr.value = formatted;
                delete expr.data.estree;
              }
            }
            return SKIP;
          } catch (_) {
            // NOTE: Let's silently do nothing — this is the default behavior anyways
          }
        });
        // a JavaScript expression embedded in flow (block)
        visitParents(tree, 'mdxFlowExpression', (node) => {
          try {
            if (!node.data) { return SKIP; }
            const formatted = generate(node.data.estree.body[0].expression);
            node.value = formatted;
            delete node.data.estree;
            return SKIP;
          } catch (_) {
            // NOTE: Let's silently do nothing — this is the default behavior anyways
          }
        });
        // a JavaScript expression embedded in text (span, inline)
        visitParents(tree, 'mdxTextExpression', (node) => {
          try {
            if (!node.data) { return SKIP; }
            const formatted = generate(node.data.estree.body[0].expression);
            node.value = formatted;
            delete node.data.estree;
            return SKIP;
          } catch (_) {
            // NOTE: Let's silently do nothing — this is the default behavior anyways
            // console.error(
            //   `Could not format a node in the file ${file.path}: ${JSON.stringify(node)}`
            // );
          }
        });
      };
    },
    unifiedConsistency,
  ],
};

export default remarkConfig;


================================================
FILE: CODEOWNERS
================================================


================================================
FILE: LICENSE-code
================================================
MIT License

Copyright (c) 2025 TON Studio and others

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: LICENSE-docs
================================================
Attribution-ShareAlike 4.0 International

=======================================================================

Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.

Using Creative Commons Public Licenses

Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.

     Considerations for licensors: Our public licenses are
     intended for use by those authorized to give the public
     permission to use material in ways otherwise restricted by
     copyright and certain other rights. Our licenses are
     irrevocable. Licensors should read and understand the terms
     and conditions of the license they choose before applying it.
     Licensors should also secure all rights necessary before
     applying our licenses so that the public can reuse the
     material as expected. Licensors should clearly mark any
     material not subject to the license. This includes other CC-
     licensed material, or material used under an exception or
     limitation to copyright. More considerations for licensors:
    wiki.creativecommons.org/Considerations_for_licensors

     Considerations for the public: By using one of our public
     licenses, a licensor grants the public permission to use the
     licensed material under specified terms and conditions. If
     the licensor's permission is not necessary for any reason--for
     example, because of any applicable exception or limitation to
     copyright--then that use is not regulated by the license. Our
     licenses grant only permissions under copyright and certain
     other rights that a licensor has authority to grant. Use of
     the licensed material may still be restricted for other
     reasons, including because others have copyright or other
     rights in the material. A licensor may make special requests,
     such as asking that all changes be marked or described.
     Although not required by our licenses, you are encouraged to
     respect those requests where reasonable. More considerations
     for the public:
    wiki.creativecommons.org/Considerations_for_licensees

=======================================================================

Creative Commons Attribution-ShareAlike 4.0 International Public
License

By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution-ShareAlike 4.0 International Public License ("Public
License"). To the extent this Public License may be interpreted as a
contract, You are granted the Licensed Rights in consideration of Your
acceptance of these terms and conditions, and the Licensor grants You
such rights in consideration of benefits the Licensor receives from
making the Licensed Material available under these terms and
conditions.


Section 1 -- Definitions.

  a. Adapted Material means material subject to Copyright and Similar
     Rights that is derived from or based upon the Licensed Material
     and in which the Licensed Material is translated, altered,
     arranged, transformed, or otherwise modified in a manner requiring
     permission under the Copyright and Similar Rights held by the
     Licensor. For purposes of this Public License, where the Licensed
     Material is a musical work, performance, or sound recording,
     Adapted Material is always produced where the Licensed Material is
     synched in timed relation with a moving image.

  b. Adapter's License means the license You apply to Your Copyright
     and Similar Rights in Your contributions to Adapted Material in
     accordance with the terms and conditions of this Public License.

  c. BY-SA Compatible License means a license listed at
     creativecommons.org/compatiblelicenses, approved by Creative
     Commons as essentially the equivalent of this Public License.

  d. Copyright and Similar Rights means copyright and/or similar rights
     closely related to copyright including, without limitation,
     performance, broadcast, sound recording, and Sui Generis Database
     Rights, without regard to how the rights are labeled or
     categorized. For purposes of this Public License, the rights
     specified in Section 2(b)(1)-(2) are not Copyright and Similar
     Rights.

  e. Effective Technological Measures means those measures that, in the
     absence of proper authority, may not be circumvented under laws
     fulfilling obligations under Article 11 of the WIPO Copyright
     Treaty adopted on December 20, 1996, and/or similar international
     agreements.

  f. Exceptions and Limitations means fair use, fair dealing, and/or
     any other exception or limitation to Copyright and Similar Rights
     that applies to Your use of the Licensed Material.

  g. License Elements means the license attributes listed in the name
     of a Creative Commons Public License. The License Elements of this
     Public License are Attribution and ShareAlike.

  h. Licensed Material means the artistic or literary work, database,
     or other material to which the Licensor applied this Public
     License.

  i. Licensed Rights means the rights granted to You subject to the
     terms and conditions of this Public License, which are limited to
     all Copyright and Similar Rights that apply to Your use of the
     Licensed Material and that the Licensor has authority to license.

  j. Licensor means the individual(s) or entity(ies) granting rights
     under this Public License.

  k. Share means to provide material to the public by any means or
     process that requires permission under the Licensed Rights, such
     as reproduction, public display, public performance, distribution,
     dissemination, communication, or importation, and to make material
     available to the public including in ways that members of the
     public may access the material from a place and at a time
     individually chosen by them.

  l. Sui Generis Database Rights means rights other than copyright
     resulting from Directive 96/9/EC of the European Parliament and of
     the Council of 11 March 1996 on the legal protection of databases,
     as amended and/or succeeded, as well as other essentially
     equivalent rights anywhere in the world.

  m. You means the individual or entity exercising the Licensed Rights
     under this Public License. Your has a corresponding meaning.


Section 2 -- Scope.

  a. License grant.

       1. Subject to the terms and conditions of this Public License,
          the Licensor hereby grants You a worldwide, royalty-free,
          non-sublicensable, non-exclusive, irrevocable license to
          exercise the Licensed Rights in the Licensed Material to:

            a. reproduce and Share the Licensed Material, in whole or
               in part; and

            b. produce, reproduce, and Share Adapted Material.

       2. Exceptions and Limitations. For the avoidance of doubt, where
          Exceptions and Limitations apply to Your use, this Public
          License does not apply, and You do not need to comply with
          its terms and conditions.

       3. Term. The term of this Public License is specified in Section
          6(a).

       4. Media and formats; technical modifications allowed. The
          Licensor authorizes You to exercise the Licensed Rights in
          all media and formats whether now known or hereafter created,
          and to make technical modifications necessary to do so. The
          Licensor waives and/or agrees not to assert any right or
          authority to forbid You from making technical modifications
          necessary to exercise the Licensed Rights, including
          technical modifications necessary to circumvent Effective
          Technological Measures. For purposes of this Public License,
          simply making modifications authorized by this Section 2(a)
          (4) never produces Adapted Material.

       5. Downstream recipients.

            a. Offer from the Licensor -- Licensed Material. Every
               recipient of the Licensed Material automatically
               receives an offer from the Licensor to exercise the
               Licensed Rights under the terms and conditions of this
               Public License.

            b. Additional offer from the Licensor -- Adapted Material.
               Every recipient of Adapted Material from You
               automatically receives an offer from the Licensor to
               exercise the Licensed Rights in the Adapted Material
               under the conditions of the Adapter's License You apply.

            c. No downstream restrictions. You may not offer or impose
               any additional or different terms or conditions on, or
               apply any Effective Technological Measures to, the
               Licensed Material if doing so restricts exercise of the
               Licensed Rights by any recipient of the Licensed
               Material.

       6. No endorsement. Nothing in this Public License constitutes or
          may be construed as permission to assert or imply that You
          are, or that Your use of the Licensed Material is, connected
          with, or sponsored, endorsed, or granted official status by,
          the Licensor or others designated to receive attribution as
          provided in Section 3(a)(1)(A)(i).

  b. Other rights.

       1. Moral rights, such as the right of integrity, are not
          licensed under this Public License, nor are publicity,
          privacy, and/or other similar personality rights; however, to
          the extent possible, the Licensor waives and/or agrees not to
          assert any such rights held by the Licensor to the limited
          extent necessary to allow You to exercise the Licensed
          Rights, but not otherwise.

       2. Patent and trademark rights are not licensed under this
          Public License.

       3. To the extent possible, the Licensor waives any right to
          collect royalties from You for the exercise of the Licensed
          Rights, whether directly or through a collecting society
          under any voluntary or waivable statutory or compulsory
          licensing scheme. In all other cases the Licensor expressly
          reserves any right to collect such royalties.


Section 3 -- License Conditions.

Your exercise of the Licensed Rights is expressly made subject to the
following conditions.

  a. Attribution.

       1. If You Share the Licensed Material (including in modified
          form), You must:

            a. retain the following if it is supplied by the Licensor
               with the Licensed Material:

                 i. identification of the creator(s) of the Licensed
                    Material and any others designated to receive
                    attribution, in any reasonable manner requested by
                    the Licensor (including by pseudonym if
                    designated);

                ii. a copyright notice;

               iii. a notice that refers to this Public License;

                iv. a notice that refers to the disclaimer of
                    warranties;

                 v. a URI or hyperlink to the Licensed Material to the
                    extent reasonably practicable;

            b. indicate if You modified the Licensed Material and
               retain an indication of any previous modifications; and

            c. indicate the Licensed Material is licensed under this
               Public License, and include the text of, or the URI or
               hyperlink to, this Public License.

       2. You may satisfy the conditions in Section 3(a)(1) in any
          reasonable manner based on the medium, means, and context in
          which You Share the Licensed Material. For example, it may be
          reasonable to satisfy the conditions by providing a URI or
          hyperlink to a resource that includes the required
          information.

       3. If requested by the Licensor, You must remove any of the
          information required by Section 3(a)(1)(A) to the extent
          reasonably practicable.

  b. ShareAlike.

     In addition to the conditions in Section 3(a), if You Share
     Adapted Material You produce, the following conditions also apply.

       1. The Adapter's License You apply must be a Creative Commons
          license with the same License Elements, this version or
          later, or a BY-SA Compatible License.

       2. You must include the text of, or the URI or hyperlink to, the
          Adapter's License You apply. You may satisfy this condition
          in any reasonable manner based on the medium, means, and
          context in which You Share Adapted Material.

       3. You may not offer or impose any additional or different terms
          or conditions on, or apply any Effective Technological
          Measures to, Adapted Material that restrict exercise of the
          rights granted under the Adapter's License You apply.


Section 4 -- Sui Generis Database Rights.

Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:

  a. for the avoidance of doubt, Section 2(a)(1) grants You the right
     to extract, reuse, reproduce, and Share all or a substantial
     portion of the contents of the database;

  b. if You include all or a substantial portion of the database
     contents in a database in which You have Sui Generis Database
     Rights, then the database in which You have Sui Generis Database
     Rights (but not its individual contents) is Adapted Material,
     including for purposes of Section 3(b); and

  c. You must comply with the conditions in Section 3(a) if You Share
     all or a substantial portion of the contents of the database.

For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.


Section 5 -- Disclaimer of Warranties and Limitation of Liability.

  a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
     EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
     AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
     ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
     IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
     WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
     PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
     ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
     KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
     ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.

  b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
     TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
     NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
     INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
     COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
     USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
     ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
     DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
     IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.

  c. The disclaimer of warranties and limitation of liability provided
     above shall be interpreted in a manner that, to the extent
     possible, most closely approximates an absolute disclaimer and
     waiver of all liability.


Section 6 -- Term and Termination.

  a. This Public License applies for the term of the Copyright and
     Similar Rights licensed here. However, if You fail to comply with
     this Public License, then Your rights under this Public License
     terminate automatically.

  b. Where Your right to use the Licensed Material has terminated under
     Section 6(a), it reinstates:

       1. automatically as of the date the violation is cured, provided
          it is cured within 30 days of Your discovery of the
          violation; or

       2. upon express reinstatement by the Licensor.

     For the avoidance of doubt, this Section 6(b) does not affect any
     right the Licensor may have to seek remedies for Your violations
     of this Public License.

  c. For the avoidance of doubt, the Licensor may also offer the
     Licensed Material under separate terms or conditions or stop
     distributing the Licensed Material at any time; however, doing so
     will not terminate this Public License.

  d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
     License.


Section 7 -- Other Terms and Conditions.

  a. The Licensor shall not be bound by any additional or different
     terms or conditions communicated by You unless expressly agreed.

  b. Any arrangements, understandings, or agreements regarding the
     Licensed Material not stated herein are separate from and
     independent of the terms and conditions of this Public License.


Section 8 -- Interpretation.

  a. For the avoidance of doubt, this Public License does not, and
     shall not be interpreted to, reduce, limit, restrict, or impose
     conditions on any use of the Licensed Material that could lawfully
     be made without permission under this Public License.

  b. To the extent possible, if any provision of this Public License is
     deemed unenforceable, it shall be automatically reformed to the
     minimum extent necessary to make it enforceable. If the provision
     cannot be reformed, it shall be severed from this Public License
     without affecting the enforceability of the remaining terms and
     conditions.

  c. No term or condition of this Public License will be waived and no
     failure to comply consented to unless expressly agreed to by the
     Licensor.

  d. Nothing in this Public License constitutes or may be interpreted
     as a limitation upon, or waiver of, any privileges and immunities
     that apply to the Licensor or You, including from the legal
     processes of any jurisdiction or authority.


=======================================================================

Creative Commons is not a party to its public
licenses. Notwithstanding, Creative Commons may elect to apply one of
its public licenses to material it publishes and in those instances
will be considered the “Licensor.” The text of the Creative Commons
public licenses is dedicated to the public domain under the CC0 Public
Domain Dedication. Except for the limited purpose of indicating that
material is shared under a Creative Commons public license or as
otherwise permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the
public licenses.

Creative Commons may be contacted at creativecommons.org.


================================================
FILE: README.md
================================================
# TON Docs

**[Follow the full quickstart guide](https://www.mintlify.com/docs/quickstart)**

## Development

Install the [Mintlify CLI](https://www.npmjs.com/package/mint) to preview your documentation changes locally. To install it alongside the necessary dependencies, use the following command:

```shell
npm ci
```

To start a local preview, run the following command at the root of your documentation, where your `docs.json` is located:

```shell
npm start
```

View your local preview at `http://localhost:3000`.

### Spell checks

> \[!NOTE]
> Automatic spelling checks are performed for changed files in each Pull Request.

To check spelling of **all** files, run:

```shell
npm run check:spell

# or simply:

npm run spell
```

To check spelling of some **selected** files, run:

```shell
npm run spell:some <FILES...>
```

#### Adding new words to the spellchecking dictionary

The dictionaries (or vocabularies) for custom words are placed under `resources/dictionaries`. There, each dictionary describes additional allowed or invalid entries.

The primary dictionary is `resources/dictionaries/custom.txt` — extend it when a word exists in American English but is flagged by CSpell as invalid. When a word does not exist and should be prohibited, add it to `resources/dictionaries/ban.txt` with the `!` prefix, provided there is no clear correct replacement.

If an existing two-letter word was flagged as forbidden, remove it from the `resources/dictionaries/two-letter-words-ban.txt` file. However, if the word only appears as part of a bigger word, e.g., `CL` in `OpenCL`, keep the ban and instead add the bigger word to the primary dictionary in `resources/dictionaries/custom.txt`.

See more: [CSpell docs on custom dictionaries](https://cspell.org/docs/dictionaries/custom-dictionaries).

### Format checks

> \[!NOTE]
> Automatic formatting checks are performed for changed files in each Pull Request.

To check formatting of **all** files, run:

```shell
npm run check:fmt
```

To fix formatting of **all** files, run:

```shell
npm run fmt
```

To check and fix formatting of some **selected** files, run:

```shell
npm run fmt:some <FILES...>
```

## Using components and snippets

See the [`snippets/` directory](./snippets) and the corresponding docs in [`contribute/snippets/` MDX files](./contribute/snippets/).

## Publishing changes

[Mintlify's GitHub app](https://dashboard.mintlify.com/settings/organization/github-app) is connected to this repository. Thus, changes are deployed to production automatically after pushing to the default branch (`main`).

## Need help?

### Troubleshooting

- If your dev environment is not running: Run `mint update` to ensure you have the most recent version of the CLI.
- If a page loads as a 404: Make sure you are running in a folder with a valid `docs.json`.

### Resources

- [Mintlify documentation](https://mintlify.com/docs)
- [Mintlify community](https://mintlify.com/community)

## License

This project is dual-licensed:

- All documentation and non-code text are licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
- All code snippets are licensed under [MIT](https://opensource.org/license/mit)


================================================
FILE: contract-dev/blueprint/api.mdx
================================================
---
title: "Blueprint TypeScript API"
---

Blueprint exports functions and classes for programmatic interaction with TON smart contracts.

### `tonDeepLink`

Generates a TON deep link for a transfer.

```typescript
function tonDeepLink(
  address: Address,
  amount: bigint,
  body?: Cell,
  stateInit?: Cell,
  testOnly?: boolean
): string;
```

**Parameters:**

- `address` — the recipient's TON address
- `amount` — the amount of nanoTON to send
- `body` — optional message body as a Cell
- `stateInit` — optional [`StateInit`](/foundations/messages/deploy) cell for deploying a contract
- `testOnly` — optional flag to determine output address format

**Returns:** a URL deep link that can be opened in TON wallets

**Example:**

```typescript
const link = tonDeepLink(myAddress, 10_000_000n); // 0.01 TON
// "ton://transfer/..."
```

### `getExplorerLink`

Generates a link to view a TON address in a selected blockchain explorer.

```typescript
function getExplorerLink(
  address: string,
  network: string,
  explorer: 'tonscan' | 'tonviewer' | 'toncx' | 'dton'
): string;
```

**Parameters:**

- `address` — the TON address to view in explorer
- `network` — the target network (`mainnet` or `testnet`)
- `explorer` — the desired explorer (`tonscan`, `tonviewer`, `toncx`, `dton`)

**Returns:** a full URL pointing to the address in the selected explorer

**Example:**

```typescript
// <ADDR> — the TON address to view
const link = getExplorerLink("<ADDR>", "testnet", "tonscan");
// "https://testnet.tonscan.org/address/EQC...9gA"
```

### `getNormalizedExtMessageHash`

Generates a normalized hash of an `external-in` message for comparison.

```typescript
function getNormalizedExtMessageHash(message: Message): Buffer;
```

This function ensures consistent hashing of external-in messages by following [TEP-467](https://github.com/ton-blockchain/TEPs/blob/8b3beda2d8611c90ec02a18bec946f5e33a80091/text/0467-normalized-message-hash.md).

**Parameters:**

- `message` — the message to be normalized and hashed (must be of type `external-in`)

**Returns:** the hash of the normalized message as `Buffer`

**Throws:** error if the message type is not `external-in`
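
As a sketch of how this might be used — assuming you have a base64-encoded BoC of an external-in message at hand (the `messageBoc` value below is a placeholder, not a real message):

```typescript
import { getNormalizedExtMessageHash } from '@ton/blueprint';
import { Cell, loadMessage } from '@ton/core';

// Placeholder: base64 BoC of an external-in message, e.g. one produced by a wallet
const messageBoc = '...';

// Parse the serialized message and compute its normalized hash
const message = loadMessage(Cell.fromBase64(messageBoc).beginParse());
const hash = getNormalizedExtMessageHash(message);
console.log('Normalized hash:', hash.toString('hex'));
```

Because the hash is normalized, two external-in messages that differ only in fields ignored by [TEP-467](https://github.com/ton-blockchain/TEPs/blob/8b3beda2d8611c90ec02a18bec946f5e33a80091/text/0467-normalized-message-hash.md) produce the same hash, which makes it suitable for deduplication and transaction lookup.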

### `compile`

Compiles a contract using the specified configuration for the `tact`, `func`, or `tolk` languages.

```typescript
async function compile(name: string, opts?: CompileOpts): Promise<Cell>
```

**Parameters:**

- `name` — the name of the contract to compile (should correspond to a file named `<name>.compile.ts`)
- `opts` — optional [`CompileOpts`](#compileopts), including user data passed to hooks

**Returns:** a promise that resolves to the compiled contract code as a `Cell`

**Example:**

```typescript
import { compile } from '@ton/blueprint';

async function main() {
    const codeCell = await compile('Contract');
    console.log('Compiled code BoC:', codeCell.toBoc().toString('base64'));
}
```

### `libraryCellFromCode`

Packs the resulting code hash into a library cell.

```typescript
function libraryCellFromCode(code: Cell): Cell
```

**Parameters:**

- `code` — the contract code cell

**Returns:** a library cell containing the code hash
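
For instance, combined with [`compile`](#compile), the helper can wrap freshly compiled code into a library reference. This is a sketch; whether a deployment should use on-chain libraries depends on the project setup:

```typescript
import { compile, libraryCellFromCode } from '@ton/blueprint';

async function main() {
  // Compile the contract, then reference its code by hash
  const code = await compile('Contract');
  const libraryCell = libraryCellFromCode(code);
  // Use `libraryCell` as the code in the contract's StateInit; the full code
  // must be published as an on-chain library for the reference to resolve
}
```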

### `NetworkProvider`

Interface representing a network provider for interacting with the TON Blockchain.

```typescript
interface NetworkProvider {
  network(): 'mainnet' | 'testnet' | 'custom';
  explorer(): Explorer;
  sender(): SenderWithSendResult;
  api(): BlueprintTonClient;
  provider(address: Address, init?: { code?: Cell; data?: Cell }): ContractProvider;
  isContractDeployed(address: Address): Promise<boolean>;
  waitForDeploy(address: Address, attempts?: number, sleepDuration?: number): Promise<void>;
  waitForLastTransaction(attempts?: number, sleepDuration?: number): Promise<void>;
  getContractState(address: Address): Promise<ContractState>;
  getConfig(configAddress?: Address): Promise<BlockchainConfig>;
  open<T extends Contract>(contract: T): OpenedContract<T>;
  ui(): UIProvider;
}
```

#### `network()`

```typescript
network(): 'mainnet' | 'testnet' | 'custom';
```

**Returns:** the current network type the provider is connected to

#### `explorer()`

```typescript
explorer(): Explorer;
```

**Returns:** [`Explorer`](#explorer) name for the current network

#### `sender()`

```typescript
sender(): SenderWithSendResult
```

**Returns:** the [`SenderWithSendResult`](#senderwithsendresult) instance used for sending transactions

#### `api()`

```typescript
api(): BlueprintTonClient
```

**Returns:** the underlying [`BlueprintTonClient`](#blueprinttonclient) API for direct blockchain interactions

#### `provider()`

```typescript
provider(address: Address, init?: { code?: Cell; data?: Cell }): ContractProvider
```

Creates a contract provider for interacting with a contract at the specified address.

**Parameters:**

- `address` — the contract address to interact with
- `init` — optional contract initialization data
  - `code` — Contract code cell
  - `data` — Contract initial data cell

**Returns:** a `ContractProvider` instance for the specified address
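
**Usage example** (a sketch; `contractAddress`, `codeCell`, and `dataCell` are assumed to be defined elsewhere):

```typescript
export async function run(provider: NetworkProvider) {
  // Bind a ContractProvider to the address; the optional init data
  // allows deployment if the contract is not on-chain yet
  const contractProvider = provider.provider(contractAddress, {
    code: codeCell,
    data: dataCell,
  });

  const state = await contractProvider.getState();
  console.log('Account state:', state.state.type);
}
```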

#### `isContractDeployed()`

```typescript
isContractDeployed(address: Address): Promise<boolean>
```

Checks whether a contract is deployed at the specified address.

**Parameters:**

- `address` — the contract address to check

**Returns:** a promise resolving to `true` if the contract is deployed, `false` otherwise

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const isDeployed = await provider.isContractDeployed(contractAddress);
  if (!isDeployed) {
    console.log('Contract not yet deployed');
  }
}
```

#### `waitForDeploy()`

```typescript
waitForDeploy(address: Address, attempts?: number, sleepDuration?: number): Promise<void>
```

Waits for a contract to be deployed by polling the address until the contract appears on-chain.

**Parameters:**

- `address` — the contract address to monitor
- `attempts` — maximum number of polling attempts (default: 20)
- `sleepDuration` — delay between attempts in milliseconds (default: 2000)

**Returns:** a promise that resolves when the contract is deployed

**Throws:** error if the contract is not deployed within the specified attempts

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  // Send deployment transaction
  await contract.sendDeploy(provider.sender(), { value: toNano('0.01') });

  // Wait for deployment to complete
  await provider.waitForDeploy(contract.address);
  console.log('Contract deployed successfully');
}
```

#### `waitForLastTransaction()`

```typescript
waitForLastTransaction(attempts?: number, sleepDuration?: number): Promise<void>
```

Waits for the last sent transaction to be processed and confirmed on the blockchain.

**Parameters:**

- `attempts` — maximum number of polling attempts (default: 20)
- `sleepDuration` — delay between attempts in milliseconds (default: 2000)

**Returns:** promise that resolves when the last transaction is confirmed

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  await contract.sendIncrement(provider.sender(), { value: toNano('0.01') });
  await provider.waitForLastTransaction();
}
```

#### `getContractState()`

```typescript
getContractState(address: Address): Promise<ContractState>
```

Retrieves the current state of a contract, including its balance, code, and data.

**Parameters:**

- `address` — the contract address to query

**Returns:** a promise resolving to `ContractState`

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const state = await provider.getContractState(contractAddress);
  console.log(`Contract balance: ${fromNano(state.balance)} TON`);
}
```

#### `getConfig()`

```typescript
getConfig(configAddress?: Address): Promise<BlockchainConfig>
```

Fetches the current blockchain configuration parameters.

**Parameters:**

- `configAddress` — optional config contract address (uses default if not provided)

**Returns:** promise resolving to `BlockchainConfig`
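
**Usage example** (a minimal sketch that fetches the configuration from the default config contract):

```typescript
export async function run(provider: NetworkProvider) {
  // Fetch the current blockchain configuration
  const config = await provider.getConfig();
  console.log('Fetched blockchain config:', config);
}
```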

#### `open()`

```typescript
open<T extends Contract>(contract: T): OpenedContract<T>
```

Opens a contract instance for interaction, binding it to the current provider.

**Parameters:**

- `contract` — the contract instance to open

**Returns:** an `OpenedContract<T>` wrapper that enables direct method calls

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const counter = provider.open(Counter.fromAddress(contractAddress));
  const currentValue = await counter.getCounter();
  console.log('Current counter value:', currentValue);
}
```

#### `ui()`

```typescript
ui(): UIProvider
```

**Returns:** [`UIProvider`](#uiprovider) instance for console interactions

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  ui.write('Deployment starting...');
  const confirmed = await ui.prompt('Deploy to mainnet?');
}
```

### `UIProvider`

Interface for handling user interactions, such as displaying messages, prompting for input, and managing action prompts. This interface abstracts console interactions and can be used in both interactive and automated scenarios.

```typescript
interface UIProvider {
  write(message: string): void;
  prompt(message: string): Promise<boolean>;
  inputAddress(message: string, fallback?: Address): Promise<Address>;
  input(message: string): Promise<string>;
  choose<T>(message: string, choices: T[], display: (v: T) => string): Promise<T>;
  setActionPrompt(message: string): void;
  clearActionPrompt(): void;
}
```

#### `write()`

```typescript
write(message: string): void
```

Displays a message to the user console.

**Parameters:**

- `message` — the text message to display

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  ui.write('Starting contract deployment...');
  ui.write(`Network: ${provider.network()}`);
}
```

#### `prompt()`

```typescript
prompt(message: string): Promise<boolean>
```

Displays a yes/no prompt to the user and waits for their response.

**Parameters:**

- `message` — the prompt message to display

**Returns:** promise resolving to `true` for yes, `false` for no

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  const confirmed = await ui.prompt('Deploy to mainnet? This will cost real TON');
  if (confirmed) {
    ui.write('Proceeding with deployment...');
  } else {
    ui.write('Deployment cancelled');
    return;
  }
}
```

#### `inputAddress()`

```typescript
inputAddress(message: string, fallback?: Address): Promise<Address>
```

Prompts the user to input a TON address with validation.

**Parameters:**

- `message` — the prompt message to display
- `fallback` — optional default address to use if user provides empty input

**Returns:** a promise resolving to the entered `Address`

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  const targetAddress = await ui.inputAddress(
    'Enter the contract address to interact with:',
    Address.parse('EQD4FPq-PRDieyQKkizFTRtSDyucUIqrj0v_zXJmqaDp6_0t') // fallback
  );
  ui.write(`Using address: ${targetAddress.toString()}`);
}
```

#### `input()`

```typescript
input(message: string): Promise<string>
```

Prompts the user for a text input and returns the entered string.

**Parameters:**

- `message` — the prompt message to display

**Returns:** promise resolving to the user's input as a string

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  const contractName = await ui.input('Enter the contract name:');
  ui.write(`Deploying contract: ${contractName}`);
}
```

#### `choose()`

```typescript
choose<T>(message: string, choices: T[], display: (v: T) => string): Promise<T>
```

Presents a list of choices to the user and returns the selected option.

**Parameters:**

- `message` — the prompt message to display
- `choices` — array of options to choose from
- `display` — function to convert each choice to a display string

**Returns:** promise resolving to the selected choice

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();

  const networks = ['mainnet', 'testnet'];
  const selectedNetwork = await ui.choose(
    'Select deployment network:',
    networks,
    (network) => network.toUpperCase()
  );

  ui.write(`Selected network: ${selectedNetwork}`);
}
```

#### `setActionPrompt()`

```typescript
setActionPrompt(message: string): void
```

Sets a persistent action prompt that remains visible during operations.

**Parameters:**

- `message` — the action prompt message to display

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  ui.setActionPrompt('⏳ Waiting for transaction confirmation...');

  await contract.send(provider.sender(), { value: toNano('0.01') }, 'increment');
  await provider.waitForLastTransaction();

  ui.clearActionPrompt();
  ui.write('✅ Transaction confirmed');
}
```

#### `clearActionPrompt()`

```typescript
clearActionPrompt(): void
```

Clears the current action prompt, removing it from display.

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  ui.setActionPrompt('🔄 Processing...');

  // Perform some operation
  await someAsyncOperation();

  ui.clearActionPrompt();
  ui.write('Operation completed');
}
```

## Type definitions

Blueprint exports several TypeScript types for configuration and compilation options. These types provide type safety and IntelliSense support when working with Blueprint programmatically.

### `CompileOpts`

Optional compilation settings, including user data passed to hooks and compilation flags.

```typescript
type CompileOpts = {
  hookUserData?: any;
  debugInfo?: boolean;
  buildLibrary?: boolean;
};
```

**Properties:**

- `hookUserData` — optional user data passed to pre/post compile hooks
- `debugInfo` — enable debug information in compiled output (default: `false`)
- `buildLibrary` — build as a library instead of a regular contract (default: `false`)

**Usage example:**

```typescript
import { compile } from '@ton/blueprint';

const codeCell = await compile('MyContract', {
  debugInfo: true,
  hookUserData: { customFlag: true }
});
```

### `CommonCompilerConfig`

Base configuration shared by all compiler types. This interface defines common compilation hooks and options.

```typescript
type CommonCompilerConfig = {
  preCompileHook?: (params: HookParams) => Promise<void>;
  postCompileHook?: (code: Cell, params: HookParams) => Promise<void>;
  buildLibrary?: boolean;
};
```

**Properties:**

- `preCompileHook` — optional function called before compilation starts (receives [`HookParams`](#hookparams))
- `postCompileHook` — optional function called after compilation completes (receives compiled `Cell` and [`HookParams`](#hookparams))
- `buildLibrary` — whether to build as a library (default: `false`)

**Usage example:**

```typescript title="./wrappers/MyContract.compile.ts"
import { CompilerConfig } from '@ton/blueprint';

export const compile: CompilerConfig = {
  lang: 'func',
  targets: ['contracts/my_contract.fc'],
  preCompileHook: async (params) => {
    console.log('Starting compilation...');
  },
  postCompileHook: async (code, params) => {
    console.log('Compilation completed!');
  }
};
```

### `FuncCompilerConfig`

Configuration specific to the FunC compiler, including optimization levels and source file specifications.

```typescript
type FuncCompilerConfig = {
  lang?: 'func';
  optLevel?: number;
  debugInfo?: boolean;
} & (
  | {
      targets: string[];
      sources?: SourceResolver | SourcesMap;
    }
  | {
      targets?: string[];
      sources: SourcesArray;
    }
);
```

**Properties:**

- `lang` — compiler language identifier (optional, defaults to `'func'`)
- `optLevel` — optimization level (0-2, default: 2)
- `debugInfo` — include debug information in output
- `targets` — array of FunC source file paths to compile
- `sources` — alternative source specification method

**Usage example:**

```typescript title="./wrappers/MyContract.compile.ts"
import { CompilerConfig } from '@ton/blueprint';

export const compile: CompilerConfig = {
  lang: 'func',
  targets: [
    'contracts/imports/stdlib.fc',
    'contracts/my_contract.fc'
  ],
  optLevel: 2,
  debugInfo: false
};
```

### `TolkCompilerConfig`

Configuration for the Tolk compiler, including optimization and debugging options.

```typescript
type TolkCompilerConfig = {
  lang: 'tolk';
  entrypoint: string;
  optimizationLevel?: number;
  withStackComments?: boolean;
  withSrcLineComments?: boolean;
  experimentalOptions?: string;
};
```

**Properties:**

- `lang` — compiler language identifier (must be `'tolk'`)
- `entrypoint` — path to the main Tolk source file
- `optimizationLevel` — optimization level
- `withStackComments` — include stack operation comments in Fift output
- `withSrcLineComments` — include source line comments in Fift output
- `experimentalOptions` — additional experimental compiler flags

**Usage example:**

```typescript title="./wrappers/MyContract.compile.ts"
import { CompilerConfig } from '@ton/blueprint';

export const compile: CompilerConfig = {
  lang: 'tolk',
  entrypoint: 'contracts/my_contract.tolk',
  optimizationLevel: 2,
  withStackComments: true,
  withSrcLineComments: true
};
```

### `TactLegacyCompilerConfig`

Configuration for the Tact compiler (legacy configuration format).

```typescript
type TactLegacyCompilerConfig = {
  lang: 'tact';
  target: string;
  options?: Options;
};
```

**Properties:**

- `lang` — compiler language identifier (must be `'tact'`)
- `target` — path to the main Tact source file
- `options` — additional Tact compiler options

**Usage example:**

```typescript title="./wrappers/MyContract.compile.ts"
import { CompilerConfig } from '@ton/blueprint';

export const compile: CompilerConfig = {
  lang: 'tact',
  target: 'contracts/my_contract.tact',
  options: {
    debug: false,
    external: true
  }
};
```

### `HookParams`

Parameters passed to compilation hooks, providing context about the compilation process.

```typescript
type HookParams = {
  userData?: any;
};
```

**Properties:**

- `userData` — optional user data passed from [`CompileOpts`](#compileopts)
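
**Usage example** (a sketch; the hook simply logs whatever was passed as `hookUserData` to `compile()`):

```typescript title="./wrappers/MyContract.compile.ts"
import { CompilerConfig } from '@ton/blueprint';

export const compile: CompilerConfig = {
  lang: 'func',
  targets: ['contracts/my_contract.fc'],
  preCompileHook: async (params) => {
    // Receives the hookUserData passed to compile(), if any
    console.log('Hook user data:', params.userData);
  }
};
```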

### `SenderWithSendResult`

An extended sender interface that tracks the result of the last send operation.

```typescript
interface SenderWithSendResult extends Sender {
  readonly lastSendResult?: unknown;
}
```

**Properties:**

- `lastSendResult` — optional result from the most recent send operation
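
**Usage example** (a sketch; `contract` is assumed to be an opened contract wrapper, and the shape of `lastSendResult` depends on the underlying client, so treat it as opaque):

```typescript
export async function run(provider: NetworkProvider) {
  const sender = provider.sender();
  await contract.sendDeploy(sender, { value: toNano('0.05') });

  // Whatever the underlying client returned for the most recent send
  console.log('Last send result:', sender.lastSendResult);
}
```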

### `BlueprintTonClient`

Union type representing supported TON client implementations.

```typescript
type BlueprintTonClient = TonClient4 | TonClient | ContractAdapter | LiteClient;
```

**Supported clients:**

- `TonClient4` — TON HTTP API v4 client
- `TonClient` — TON HTTP API v2/v3 client
- `ContractAdapter` — TON API adapter
- `LiteClient` — Lite client for direct node communication

### `Explorer`

Supported blockchain explorer types.

```typescript
type Explorer = 'tonscan' | 'tonviewer' | 'toncx' | 'dton';
```

**Supported explorers:**

- `'tonscan'` — Tonscan explorer
- `'tonviewer'` — Tonviewer explorer (default)
- `'toncx'` — TON.cx explorer
- `'dton'` — dTON.io explorer

## Configuration

For detailed configuration options, refer to the [Blueprint Configuration](/contract-dev/blueprint/config) guide.


================================================
FILE: contract-dev/blueprint/benchmarks.mdx
================================================
---
title: "Benchmarking performance"
---

import { Aside } from "/snippets/aside.jsx";
import { FenceTable } from "/snippets/fence-table.jsx";

In TON, a contract's performance is defined by its gas consumption, so it's important to design your logic efficiently.
Unlike many other blockchains, TON also requires you to pay for storing contract data and forwarding messages between contracts.

## Gas consumption

As you develop and iterate on a contract, even small changes to its logic can affect both gas usage and data size. Monitoring these changes helps ensure that your contract remains efficient and cost-effective.

<Aside type="tip">
  For a deeper breakdown of how fees work in TON, refer to [Transaction fees](/foundations/fees)
</Aside>

## Gas metrics reporting

To simplify tracking changes in gas usage and data size,
we’ve introduced a reporting system that lets you collect and compare metrics across different versions of a contract.

To enable this, write test scenarios that cover the contract’s primary usage patterns and verify expected behavior. This approach is sufficient to gather relevant metrics, which you can later use to compare performance changes after updating the implementation.

Before running the tests, a store is created to collect metrics from all transactions generated during the tests. After test execution, the collected metrics are supplemented with [ABI information from the snapshot](https://github.com/ton-org/sandbox/blob/main/docs/collect-metric-api.md#abi-auto-mapping), and a report is generated based on this data.

While more [metrics are collected](https://github.com/ton-org/sandbox/blob/main/docs/collect-metric-api.md#snapshot-structure), the current report format includes `gasUsed`, `cells`, and `bits`, which correspond to the internal metrics `compute.phase`, `state.code`, and `state.data`.

## Metrics comparison example

To see how gas metrics can be collected and compared in practice, let’s walk through a complete example.

Start by creating a new project using `npm create ton@latest`:

```bash
npm create ton@latest -y -- sample --type func-counter --contractName Sample
cd sample
```

**Note:**

- The `-y` flag skips prompts and accepts defaults.
- `--type` specifies the template (e.g., `func-counter`).
- `--contractName` sets the contract name.

Alternatively, you can run:

```bash
npm create ton@latest sample
```

This command scaffolds a project with a basic counter contract at `contracts/sample.fc`.
It defines a simple stateful contract that stores an `id` and a `counter` and supports an `increase` operation.

```func title="sample.fc"
#include "imports/stdlib.fc";

const op::increase = "op::increase"c;
global int ctx_id;
global int ctx_counter;

() load_data() impure {
    var ds = get_data().begin_parse();

    ctx_id = ds~load_uint(32);
    ctx_counter = ds~load_uint(32);

    ds.end_parse();
}

() save_data() impure {
    set_data(
        begin_cell()
            .store_uint(ctx_id, 32)
            .store_uint(ctx_counter, 32)
            .end_cell()
    );
}

() recv_internal(int my_balance, int msg_value, cell in_msg_full, slice in_msg_body) impure {
    if (in_msg_body.slice_empty?()) { ;; ignore all empty messages
        return ();
    }

    slice cs = in_msg_full.begin_parse();
    int flags = cs~load_uint(4);
    if (flags & 1) { ;; ignore all bounced messages
        return ();
    }

    load_data();

    int op = in_msg_body~load_uint(32);
    int query_id = in_msg_body~load_uint(64);

    if (op == op::increase) {
        int increase_by = in_msg_body~load_uint(32);
        ctx_counter += increase_by;
        save_data();
        return ();
    }

    throw(0xffff);
}

int get_counter() method_id {
    load_data();
    return ctx_counter;
}

int get_id() method_id {
    load_data();
    return ctx_id;
}
```

### Generate a gas report

Let’s now generate a gas usage report for the contract.

Run the following command:

```bash
npx blueprint test --gas-report
```

This runs your tests with gas tracking enabled and outputs a `gas-report.json` with transaction metrics.

<FenceTable>
  ...
  PASS  Comparison metric mode: gas depth: 1
  Gas report write in 'gas-report.json'
  ┌───────────┬──────────────┬───────────────────────────┐
  │           │              │          current          │
  │ Contract  │    Method    ├──────────┬────────┬───────┤
  │           │              │ gasUsed  │ cells  │ bits  │
  ├───────────┼──────────────┼──────────┼────────┼───────┤
  │           │  sendDeploy  │   1937   │   11   │  900  │
  │           ├──────────────┼──────────┼────────┼───────┤
  │           │     send     │   515    │   11   │  900  │
  │  Sample   ├──────────────┼──────────┼────────┼───────┤
  │           │ sendIncrease │   1937   │   11   │  900  │
  │           ├──────────────┼──────────┼────────┼───────┤
  │           │  0x7e8764ef  │   2681   │   11   │  900  │
  └───────────┴──────────────┴──────────┴────────┴───────┘
</FenceTable>

### Storage fee calculation

You can use the `cells` and `bits` values from the report to estimate the **storage fee** for your contract.
Here’s the formula:

```text
storage_fee = ceil(
                  (account.bits * bit_price
                  + account.cells * cell_price)
               * time_delta / 2 ** 16)
```
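
As a rough illustration, here is the same formula in TypeScript. The per-bit and per-cell prices below are assumptions for illustration; the authoritative values come from the blockchain configuration (parameter 18):

```typescript
// Assumed storage prices (check config parameter 18 for current values)
const bitPrice = 1n;    // nanotons per bit per 2^16 seconds
const cellPrice = 500n; // nanotons per cell per 2^16 seconds

// Values from the gas report above
const bits = 900n;
const cells = 11n;

const timeDelta = 365n * 24n * 3600n; // one year, in seconds

// storage_fee = ceil((bits * bit_price + cells * cell_price) * time_delta / 2^16)
const raw = (bits * bitPrice + cells * cellPrice) * timeDelta;
const storageFee = (raw + (1n << 16n) - 1n) >> 16n; // ceiling division by 2^16

console.log(`Storage fee for one year: ${storageFee} nanotons`); // 3079688, about 0.0031 TON
```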

To try this in practice, use the [calculator example](/foundations/fees).

### Regenerate the gas report

Note that the `op::increase` method appears in the report as the raw opcode `0x7e8764ef`.
To display a human-readable name in the report, update the generated `contract.abi.json` by replacing the raw opcode with the name **increase** in both the `messages` and `types` sections:

```diff
--- a/contract.abi.json
+++ b/contract.abi.json
@@ -6,13 +6,13 @@
         "receiver": "internal",
         "message": {
           "kind": "typed",
-          "type": "0x7e8764ef"
+          "type": "increase"
         }
       }
     ],
     "types": [
       {
-        "name": "0x7e8764ef",
+        "name": "increase",
         "header": 2122802415
       }
     ],
```

Once you've updated the `contract.abi.json` file, rerun the command to regenerate the gas report:

```bash
npx blueprint test --gas-report
```

Now the method name appears in the report as `increase`, making it easier to read:

<FenceTable>
  ...
  │           ├──────────────┼──────────┼────────┼───────┤
  │           │   increase   │   2681   │   11   │  900  │
  └───────────┴──────────────┴──────────┴────────┴───────┘
</FenceTable>

### Save a snapshot for future comparison

To track how gas usage evolves, you can create a named snapshot of the current metrics. This allows you to compare future versions of the contract against this baseline:

```bash
npx blueprint snapshot --label "v1"
```

This creates a snapshot file in `.snapshot/`:

```text
...
PASS  Collect metric mode: "gas"
Report write in '.snapshot/1749821319408.json'
```

### Optimize the contract and compare the metrics

Let’s try a simple optimization — adding the `inline` specifier to some functions.

<Aside type="note">
  An [inline specifier](/languages/func/functions#inline-specifier) is directly substituted into the code wherever it’s called, which can help reduce gas usage by eliminating the overhead of a function call.
</Aside>

Update your contract like this:

```diff
--- a/contracts/sample.fc
+++ b/contracts/sample.fc

-() load_data() impure {
+() load_data() impure inline {

-() save_data() impure {
+() save_data() impure inline {

-() recv_internal(int my_balance, int msg_value, cell in_msg_full, slice in_msg_body) impure {
+() recv_internal(int my_balance, int msg_value, cell in_msg_full, slice in_msg_body) impure inline {
```

Now regenerate the gas report. Since we already created a snapshot labeled `v1`, this report will include a comparison with the previous version:

```bash
npx blueprint test --gas-report
```

The report now shows a side-by-side comparison of gas usage before and after the change:

<FenceTable>
  PASS  Comparison metric mode: gas depth: 2
  Gas report write in 'gas-report.json'
  ┌───────────┬──────────────┬─────────────────────────────────────────┬───────────────────────────┐
  │           │              │                 current                 │            v1             │
  │ Contract  │    Method    ├──────────────┬───────────┬──────────────┼──────────┬────────┬───────┤
  │           │              │   gasUsed    │   cells   │     bits     │ gasUsed  │ cells  │ bits  │
  ├───────────┼──────────────┼──────────────┼───────────┼──────────────┼──────────┼────────┼───────┤
  │           │  sendDeploy  │  1937 same   │ 7 -36.36% │ 1066 +18.44% │   1937   │   11   │  900  │
  │           ├──────────────┼──────────────┼───────────┼──────────────┼──────────┼────────┼───────┤
  │           │     send     │ 446 -13.40%  │ 7 -36.36% │ 1066 +18.44% │   515    │   11   │  900  │
  │  Sample   ├──────────────┼──────────────┼───────────┼──────────────┼──────────┼────────┼───────┤
  │           │ sendIncrease │  1937 same   │ 7 -36.36% │ 1066 +18.44% │   1937   │   11   │  900  │
  │           ├──────────────┼──────────────┼───────────┼──────────────┼──────────┼────────┼───────┤
  │           │   increase   │ 1961 -26.86% │ 7 -36.36% │ 1066 +18.44% │   2681   │   11   │  900  │
  └───────────┴──────────────┴──────────────┴───────────┴──────────────┴──────────┴────────┴───────┘
</FenceTable>

## Project setup instructions

If your project already exists, you need to configure **Jest** to collect gas metrics.
You can do this in one of two ways:

### Option 1: update the existing `jest.config.ts`

Add the necessary environment and reporter settings:

```diff title="jest.config.ts"
import type { Config } from 'jest';

const config: Config = {
    preset: 'ts-jest',
+    testEnvironment: '@ton/sandbox/jest-environment',
    testPathIgnorePatterns: ['/node_modules/', '/dist/'],
+    reporters: [
+        'default',
+        ['@ton/sandbox/jest-reporter', {}],
+    ]
};

export default config;
```

<Aside type="tip">
  See the full list of options in the [Sandbox jest config docs](https://github.com/ton-org/sandbox/blob/main/README.md#setup-in-jestconfigts).
</Aside>

### Option 2: create a separate config `gas-report.config.ts`

If you prefer not to modify your main `jest.config.ts`, you can create a dedicated config file:

```ts title="gas-report.config.ts"
import config from './jest.config';

// use filter tests if needed, see https://jestjs.io/docs/cli#--testnamepatternregex
// config.testNamePattern = '^Foo should increase counter$'
config.testEnvironment = '@ton/sandbox/jest-environment';
config.reporters = [
    ['@ton/sandbox/jest-reporter', {}],
];
export default config;
```

When using this separate config, pass it using the `--config` option:

```bash
npx blueprint test --gas-report -- --config gas-report.config.ts
npx blueprint snapshot --label "v2" -- --config gas-report.config.ts
```

## Collect metrics manually

You can collect metrics manually using the low-level API from `@ton/sandbox`.

```typescript title="collect-metrics.ts"
import {
    Blockchain,
    createMetricStore,
    makeSnapshotMetric,
    resetMetricStore
} from '@ton/sandbox';

const store = createMetricStore();

async function someDo() {
    const blockchain = await Blockchain.create();
    const [alice, bob] = await blockchain.createWallets(2);
    await alice.send({ to: bob.address, value: 1 });
}

async function main() {
    resetMetricStore();
    await someDo();
    const metric = makeSnapshotMetric(store);
    console.log(metric);
}

main().catch((error) => {
    console.log(error.message);
});
```

For more details, see the [Collect Metric API documentation](https://github.com/ton-org/sandbox/blob/main/docs/collect-metric-api.md#example).


================================================
FILE: contract-dev/blueprint/cli.mdx
================================================
---
title: "Blueprint CLI"
---

Blueprint is a CLI tool for TON smart contract development. This reference covers all available commands, options, configuration, and API methods.

## CLI commands

Blueprint provides a comprehensive set of CLI commands for smart contract development, testing, and deployment. Commands support both interactive and non-interactive modes.

### `create`

```bash
npx blueprint create <CONTRACT> --type <TYPE>
```

Creates a new smart contract with all necessary files, including the contract source, TypeScript wrapper, test file, and deployment script.

#### Interactive mode

```bash
npx blueprint create
```

Launches an interactive wizard that guides you through:

1. Contract name selection (validates CamelCase format)
1. Programming language choice (Tolk, FunC, or Tact)
1. Template type selection (empty or counter example)

#### Non-interactive mode

```bash
npx blueprint create <CONTRACT> --type <TYPE>
```

**Parameters:**

- `<CONTRACT>` — contract name in CamelCase format (e.g., `MyAwesomeContract`)
- `<TYPE>` — template type from available options

**Available template types:**

- `tolk-empty` — an empty contract (Tolk)
- `func-empty` — an empty contract (FunC)
- `tact-empty` — an empty contract (Tact)
- `tolk-counter` — a simple counter contract (Tolk)
- `func-counter` — a simple counter contract (FunC)
- `tact-counter` — a simple counter contract (Tact)

**Usage examples:**

```bash
# Create empty Tolk contract
npx blueprint create MyToken --type tolk-empty

# Create Tolk counter example
npx blueprint create SimpleCounter --type tolk-counter

# Create contract interactively
npx blueprint create
```

**Generated files:**

- `contracts/MyContract.{tolk|fc|tact}` — contract source code
- `wrappers/MyContract.ts` — TypeScript wrapper for contract interaction
- `tests/MyContract.spec.ts` — Jest test suite with basic test cases
- `scripts/deployMyContract.ts` — deployment script with network configuration

### `build`

```bash
npx blueprint build <CONTRACT> --all
```

Compiles smart contracts using their corresponding `.compile.ts` configuration files.

#### Interactive mode

```bash
npx blueprint build
```

Displays a list of all available contracts with `.compile.ts` files for selection. Shows compilation status and allows building individual contracts or all at once.

#### Non-interactive mode

```bash
npx blueprint build <CONTRACT>
npx blueprint build --all
```

**Parameters:**

- `<CONTRACT>` — specific contract name to build (matches the `.compile.ts` filename)
- `--all` — build all contracts in the project that have compilation configurations

**Usage examples:**

```bash
# Build specific contract
npx blueprint build MyToken

# Build all contracts
npx blueprint build --all

# Interactive selection
npx blueprint build
```

For detailed information about build artifacts, see [Compiled Artifacts](/contract-dev/blueprint/develop#compiled-artifacts).

### `run`

```bash
npx blueprint run <SCRIPT> <ARGS...> <OPTIONS>
```

Executes TypeScript scripts from the `scripts/` directory with full network provider access. Commonly used for contract deployment, interaction, and maintenance tasks.

#### Interactive mode

```bash
npx blueprint run
```

Displays a list of all available scripts in the `scripts/` directory to select from.

#### Non-interactive mode

```bash
npx blueprint run <SCRIPT> <ARGS...>
```

**Parameters:**

- `<SCRIPT>` — script name (without `.ts` extension)
- `<ARGS...>` — optional arguments passed to the script
- `--<NETWORK>` — network selection (`mainnet`, `testnet`)
- `--<DEPLOY_METHOD>` — deployment method (`tonconnect`, `mnemonic`)

**Network options:**

- `--mainnet` — use TON Mainnet
- `--testnet` — use TON Testnet
- `--custom <URL>` — use custom network endpoint
- `--custom-version <VERSION>` — API version (`v2`, `v4`)
- `--custom-type <TYPE>` — network type (`custom`, `mainnet`, `testnet`)
- `--custom-key <KEY>` — API key (`v2 only`)

**Deploy options:**

- `--tonconnect` — use TON Connect for deployment
- `--deeplink` — use deep link for deployment
- `--mnemonic` — use mnemonic for deployment

**Explorer options:**

- `--tonscan` — use Tonscan explorer
- `--tonviewer` — use Tonviewer explorer (default)
- `--toncx` — use TON.cx explorer
- `--dton` — use dTON explorer

**Usage examples:**

```bash
# Deploy contract to testnet with TON Connect
npx blueprint run deployCounter --testnet --tonconnect

# Deploy to testnet with mnemonic
npx blueprint run deployCounter --testnet --mnemonic

# Run script with custom arguments
npx blueprint run updateConfig arg1 arg2 --testnet

# Use custom network configuration
npx blueprint run deployContract \
  --custom https://toncenter.com/api/v2/jsonRPC \
  --custom-version v2 \
  --custom-type mainnet \
  --custom-key <YOUR_API_KEY>
```

- `<YOUR_API_KEY>` — API key for the selected provider (`v2` only).

**Requirements:**

- Scripts must be located in the `scripts/` directory
- Script files must export a `run` function:

```typescript
import { NetworkProvider } from '@ton/blueprint';

export async function run(provider: NetworkProvider, args: string[]) {

}
```

**Environment variables:**

For mnemonic-based deployments, configure these [environment variables](#environment-variables).

### `test`

Run the full project test suite with all `.spec.ts` files.

#### Basic usage

```bash
npx blueprint test
```

Run all test files in the `tests/` directory.

#### Collecting coverage

```bash
npx blueprint test --coverage
```

Run tests and collect coverage into the `coverage/` directory.

#### Gas reporting

```bash
npx blueprint test --gas-report
# or
npx blueprint test -g
```

Run tests and compare with the last snapshot's metrics.

#### Specific test file

```bash
npx blueprint test <CONTRACT_NAME>
```

**Examples:**

```bash
npx blueprint test
npx blueprint test MyContract
```

**Test file requirements:**

- Test files should be located in the `tests/` directory
- Use `.spec.ts` extension
- Supports standard Jest syntax and matchers

### `verify`

Verify a deployed contract using [TON Contract Verifier](https://verifier.ton.org).

#### Basic usage

```bash
npx blueprint verify
```

Interactive mode to select the contract and network.

#### Non-interactive mode

```bash
npx blueprint verify <CONTRACT> --network <NETWORK>
```

**Parameters:**

- `<CONTRACT>` — contract name to verify
- `--network <NETWORK>` — network (`mainnet`, `testnet`)
- `--compiler-version <VERSION>` — compiler version used for building
- `--custom <URL>` — custom network endpoint
- `--custom-version <VERSION>` — API version (`v2` default)
- `--custom-type <TYPE>` — network type (`mainnet`, `testnet`)
- `--custom-key <KEY>` — API key (`v2` only)

**Examples:**

```bash
npx blueprint verify MyContract --network mainnet
npx blueprint verify MyContract --network testnet --compiler-version 0.4.4-newops.1
```

**Custom network verification:**

```bash
npx blueprint verify MyContract \
  --custom https://toncenter.com/api/v2/jsonRPC \
  --custom-version v2 \
  --custom-type mainnet \
  --custom-key <YOUR_API_KEY> \
  --compiler-version 0.4.4-newops.1
```

### `help`

Show detailed help.

```bash
npx blueprint help
npx blueprint help <COMMAND>
```

**Examples:**

```bash
npx blueprint help
npx blueprint help create
npx blueprint help run
```

### `set`

Sets the version of a language toolchain used by Blueprint.

```bash
npx blueprint set <KEY> <VALUE>
```

**Available keys:**

- `func` — overrides `@ton-community/func-js-bin` version

### `convert`

Converts legacy bash build scripts to Blueprint wrappers.

```bash
npx blueprint convert <PATH_TO_BUILD_SCRIPT>
```

### `rename`

Renames a contract, updating matching references in wrappers, scripts, and tests.

```bash
npx blueprint rename <OLD_NAME> <NEW_NAME>
```

### `pack`

Builds and prepares a publish-ready package of wrappers.

```bash
npx blueprint pack
```

**Flags:**

- `--no-warn`, `-n` — ignore warnings about modifying `tsconfig.json` and `package.json`, and about removing the `dist` directory

**Output:**

- Creates a deployment-ready package
- Includes compiled artifacts
- Bundles dependencies

### `snapshot`

Creates snapshots with gas usage and cell sizes.

```bash
npx blueprint snapshot
```

**Flags:**

- `--label=<COMMENT>`, `-l=<COMMENT>` — add a comment label to the snapshot

**Features:**

- Runs tests while collecting gas usage and cell sizes
- Writes a new snapshot
- Useful for regression testing

## Environment variables

Blueprint supports environment variables for wallet configuration when using the mnemonic provider:

- `WALLET_MNEMONIC` — wallet mnemonic phrase (space-separated words).
- `WALLET_VERSION` — wallet contract version (`v1r1`, `v1r2`, `v1r3`, `v2r1`, `v2r2`, `v3r1`, `v3r2`, `v4r1`, `v4r2`, `v4`, `v5r1`).
- `WALLET_ID` — wallet ID for versions earlier than `v5r1`.
- `SUBWALLET_NUMBER` — subwallet number for `v5r1` wallets.

### Example .env file

```bash
WALLET_MNEMONIC="<MNEMONIC_24_WORDS>"
WALLET_VERSION=v4
WALLET_ID=698983191
SUBWALLET_NUMBER=0
```

- `<MNEMONIC_24_WORDS>` — 24-word wallet mnemonic (space-separated).
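As a quick sanity check before running a deployment, the variables above can be validated with plain TypeScript. The `checkWalletEnv` helper below is illustrative, not part of Blueprint; the version list mirrors the supported values documented above:

```typescript
// Hypothetical pre-flight check for mnemonic-provider environment variables.
// Mirrors the supported WALLET_VERSION values listed above.
const SUPPORTED_VERSIONS = [
    'v1r1', 'v1r2', 'v1r3', 'v2r1', 'v2r2',
    'v3r1', 'v3r2', 'v4r1', 'v4r2', 'v4', 'v5r1',
];

function checkWalletEnv(env: Record<string, string | undefined>): string[] {
    const problems: string[] = [];
    const words = (env.WALLET_MNEMONIC ?? '').trim().split(/\s+/).filter(Boolean);
    if (words.length !== 24) {
        problems.push(`WALLET_MNEMONIC has ${words.length} words, expected 24`);
    }
    if (!SUPPORTED_VERSIONS.includes(env.WALLET_VERSION ?? '')) {
        problems.push(`WALLET_VERSION must be one of: ${SUPPORTED_VERSIONS.join(', ')}`);
    }
    return problems;
}
```

Running it against `process.env` before invoking a script surfaces misconfiguration early.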


================================================
FILE: contract-dev/blueprint/config.mdx
================================================
---
title: "Configuring Blueprint"
---

A [configuration file](https://github.com/ton-org/blueprint/blob/develop/src/config/Config.ts) allows you
to customize certain blueprint features.

Create a `blueprint.config.ts` file in the root of your project, and export the configuration as a named `config`;
do not use a `default` export:

```typescript
import { Config } from '@ton/blueprint';

export const config: Config = {
    // configuration options
};
```

The configuration supports the following options:

| Field                                                                    |                     Type/Values                     | Description                                                                              |
| ------------------------------------------------------------------------ | :-------------------------------------------------: | ---------------------------------------------------------------------------------------- |
| [`plugins`](/contract-dev/blueprint/config#plugins)                      |                      `Plugin[]`                     | Extend or customize the behavior.                                                        |
| [`network`](/contract-dev/blueprint/config#custom-network)               |  `'mainnet'`<br />`'testnet'`<br />`CustomNetwork`  | Specifies the target network for deployment or interaction.                              |
| `separateCompilables`                                                    |                      `boolean`                      | If `true`, `*.compile.ts` files go to `compilables/`. <br /> If `false`, to `wrappers/`. |
| [`requestTimeout`](/contract-dev/blueprint/config#request-timeouts)      |                       `number`                      | HTTP request timeout in milliseconds.                                                    |
| [`recursiveWrappers`](/contract-dev/blueprint/config#recursive-wrappers) |                      `boolean`                      | If `true`, searches `wrappers/` or `compilables/` recursively for contracts.             |
| [`manifestUrl`](/contract-dev/blueprint/config#ton-connect-manifest)     |                       `string`                      | Overrides the default TON Connect manifest URL.                                          |

## Plugins

Blueprint includes a plugin system, allowing the community to extend its functionality without modifying Blueprint’s core code.
To use plugins, add a `plugins` array to your config:

```typescript
import { Config } from '@ton/blueprint';
import { ScaffoldPlugin } from 'blueprint-scaffold';

export const config: Config = {
    plugins: [new ScaffoldPlugin()],
};
```

This example adds the [scaffold plugin](https://github.com/1IxI1/blueprint-scaffold).

Some community-developed plugins include:

- [scaffold](https://github.com/1IxI1/blueprint-scaffold) – creates a simple DApp using the wrappers’ code.
- [misti](https://github.com/nowarp/blueprint-misti) – simplifies workflow with the [Misti](https://nowarp.github.io/tools/misti/) static analyzer.

## Custom network

A custom network can be set as the default by adding a `network` object to your configuration:

```typescript
import { Config } from '@ton/blueprint';

export const config: Config = {
    network: {
        endpoint: 'https://toncenter.com/api/v2/jsonRPC',
        type: 'mainnet',
        version: 'v2',
        key: '<YOUR_API_KEY>',
    },
};
```

Using the `--custom` flags achieves the same result, but it can be tiresome to provide them every time.
The above configuration is equivalent to running:

```bash
npx blueprint run \
  --custom https://toncenter.com/api/v2/jsonRPC \
  --custom-version v2 \
  --custom-type mainnet \
  --custom-key <YOUR_API_KEY>
```

Each property of the `network` object has the same meaning as its corresponding `--custom` flag. See `npx blueprint help run` for details.

## Liteclient support

Liteclient can be configured using the `network` object in your configuration:

```typescript
import { Config } from '@ton/blueprint';

export const config: Config = {
    network: {
        endpoint: 'https://ton.org/testnet-global.config.json', // Use https://ton.org/global.config.json for Mainnet or any custom configuration
        version: 'liteclient',
        type: 'testnet',
    }
};
```

You can also specify these parameters using CLI:

```bash
npx blueprint run \
  --custom https://ton.org/testnet-global.config.json \
  --custom-version liteclient \
  --custom-type testnet
```

## Request timeouts

You can configure how long HTTP requests should wait before timing out using the `requestTimeout` field.
This is useful when working with unstable or slow networks.

```typescript
import { Config } from '@ton/blueprint';

export const config: Config = {
    requestTimeout: 10000, // 10 seconds
};
```
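Conceptually, `requestTimeout` bounds how long each request may run before failing. A minimal stdlib sketch of the same idea (the `withTimeout` helper is illustrative, not Blueprint's implementation):

```typescript
// Reject any promise that takes longer than `ms` milliseconds.
// Blueprint applies a similar bound to its HTTP calls; this helper
// only demonstrates the concept.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
    return new Promise((resolve, reject) => {
        const timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
        promise.then(
            (value) => { clearTimeout(timer); resolve(value); },
            (err) => { clearTimeout(timer); reject(err); },
        );
    });
}
```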

## Recursive wrappers

The `recursiveWrappers` field controls whether the `wrappers` directory is searched recursively for contract configurations.

```typescript
import { Config } from '@ton/blueprint';

export const config: Config = {
    recursiveWrappers: true,
};
```

By default, it's set to `false`.

## TON Connect manifest

If you're using a TON Connect provider, you can override the default manifest URL by setting the `manifestUrl` field:

```typescript
import { Config } from '@ton/blueprint';

export const config: Config = {
    manifestUrl: 'https://yourdomain.com/custom-manifest.json',
};
```

By default, the manifest URL is:

```
https://raw.githubusercontent.com/ton-org/blueprint/main/tonconnect/manifest.json
```
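For reference, a TON Connect manifest is a small JSON file served over HTTPS. A minimal example with placeholder values (field names follow the TON Connect manifest specification):

```json
{
    "url": "https://yourdomain.com",
    "name": "My App",
    "iconUrl": "https://yourdomain.com/icon.png"
}
```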


================================================
FILE: contract-dev/blueprint/coverage.mdx
================================================
---
title: "Collecting test coverage"
---

import { Aside } from "/snippets/aside.jsx";
import { Image } from "/snippets/image.jsx";

<Aside>
  This page covers coverage calculated on TVM assembly instructions. Path- and source-line coverage is not implemented.
</Aside>

There are two main ways to calculate coverage of your `@ton/sandbox` tests.

## Easy way

<Aside
  type="caution"
  title="Library compatibility"
>
  For this approach to work correctly, you need `@ton/sandbox >= 0.37.2` and `@ton/blueprint >= 0.41.0`.
</Aside>

When using Blueprint, all you need to do to collect coverage is run the following command:

```bash
npx blueprint test --coverage
```

Results will appear in the `coverage/` directory as HTML files with reports for each of your contracts.

## Customizable way

There are several reasons why you might not want to simply use `--coverage`:

- You don't want to collect coverage for all contracts.
- You use `@ton/sandbox` but don't use `@ton/blueprint`.
- Not all contracts have source code available (for example, you deploy a new contract for each transaction and have no wrappers for it).
- You want to get the raw data and customize the output.

### 1. Enable coverage collection

Before running tests, add `blockchain.enableCoverage()` to collect coverage data:

```typescript
import {Blockchain} from '@ton/sandbox';

describe('Contract Tests', () => {
    let blockchain: Blockchain;
    let contract: SandboxContract<MyContract>;

    beforeEach(async () => {
        blockchain = await Blockchain.create();

        blockchain.enableCoverage();
        // or for COVERAGE=true mode only
        // blockchain.enableCoverage(process.env["COVERAGE"] === "true");

        // Deploy your contract
        contract = blockchain.openContract(MyContract.fromInit());
        // ... deployment logic
    });

    // Your tests here...
});
```

### 2. Collect coverage after tests

```typescript
afterAll(() => {
    const coverage = blockchain.coverage(contract);
    console.log(coverage?.summary());
})
```

### 3. Generate reports

```typescript
import {writeFileSync} from 'fs';

afterAll(() => {
    const coverage = blockchain.coverage(contract);
    if (!coverage) return;

    // Generate HTML report for detailed analysis
    const htmlReport = coverage.report("html");
    writeFileSync("coverage.html", htmlReport);

    // Print text report to console
    const textReport = coverage.report("text");
    console.log(textReport);
});
```

## Understanding coverage data

### Coverage summary

The coverage summary provides key metrics about your test coverage:

```typescript
const summary = coverage.summary();

console.log(`Total lines: ${summary.totalLines}`);
console.log(`Covered lines: ${summary.coveredLines}`);
console.log(`Coverage percentage: ${summary.coveragePercentage.toFixed(2)}%`);
console.log(`Total gas consumed: ${summary.totalGas}`);
console.log(`Total hits: ${summary.totalHits}`);

// Instruction-level statistics
summary.instructionStats.forEach(stat => {
    console.log(`${stat.name}: ${stat.totalHits} hits, ${stat.totalGas} gas, avg ${stat.avgGas}`);
});
```
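The summary fields lend themselves to a simple CI gate. The `assertMinCoverage` helper below is a hypothetical sketch built on the fields shown above, not a sandbox API:

```typescript
// Fail the run when instruction coverage drops below a threshold.
// The interface mirrors the summary fields used above.
interface CoverageSummary {
    totalLines: number;
    coveredLines: number;
    coveragePercentage: number;
}

function assertMinCoverage(summary: CoverageSummary, minPercent: number): void {
    if (summary.coveragePercentage < minPercent) {
        throw new Error(
            `Coverage ${summary.coveragePercentage.toFixed(2)}% is below the required ${minPercent}% ` +
            `(${summary.coveredLines}/${summary.totalLines} lines)`,
        );
    }
}
```

Calling it in `afterAll` turns a coverage regression into a failing test run.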

### Coverage reports

- **HTML Report**: Interactive report with highlighting and line-by-line coverage details
- **Text Report**: Console-friendly report with coverage information and marked code

## Advanced usage patterns

### Multiple test suites

When running multiple test files, you might want to merge coverage data:

```typescript
import { readFileSync, writeFileSync } from 'fs';
// Coverage is exported by @ton/sandbox (version 0.37.2 or later)
import { Coverage } from '@ton/sandbox';

// In first test file
const coverage1 = blockchain.coverage(contract);
if (!coverage1) return;
const coverage1Json = coverage1.toJson();
writeFileSync("coverage1.json", coverage1Json);

// In second test file
const coverage2 = blockchain.coverage(contract);
if (!coverage2) return;
const coverage2Json = coverage2.toJson();
writeFileSync("coverage2.json", coverage2Json);

// Merge coverage data in separate script after tests
const savedCoverage1 = Coverage.fromJson(readFileSync("coverage1.json", "utf-8"));
const savedCoverage2 = Coverage.fromJson(readFileSync("coverage2.json", "utf-8"));
const totalCoverage = savedCoverage1.mergeWith(savedCoverage2);

console.log(`Combined coverage: ${totalCoverage.summary().coveragePercentage}%`);
```

## Coverage for multiple contracts

When testing systems with multiple contracts:

```typescript not runnable
describe('Multi-Contract System', () => {
    let blockchain: Blockchain;
    let contract1: SandboxContract<Contract1>;
    let contract2: SandboxContract<Contract2>;

    beforeEach(async () => {
        blockchain = await Blockchain.create();
        blockchain.enableCoverage();

        // Deploy multiple contracts
        contract1 = blockchain.openContract(Contract1.fromInit());
        contract2 = blockchain.openContract(Contract2.fromInit());
    });

    afterAll(() => {
        // Get coverage for each contract separately
        const coverage1 = blockchain.coverage(contract1);
        const coverage2 = blockchain.coverage(contract2);

        if (!coverage1 || !coverage2) return;

        console.log('Contract 1 Coverage:', coverage1.summary().coveragePercentage);
        console.log('Contract 2 Coverage:', coverage2.summary().coveragePercentage);

        // Generate separate reports
        writeFileSync("contract1-coverage.html", coverage1.report("html"));
        writeFileSync("contract2-coverage.html", coverage2.report("html"));
    });
});
```

## Interpret results

The usual report looks like this:

<Image
  src="/resources/images/coverage-report.png"
/>

Apart from the header statistics, the line-by-line coverage report is the most informative. Most fields are self‑explanatory; the code section shows per‑instruction hit counts (blue) and gas cost (red). This helps you analyze both coverage and gas efficiency.

<Aside>
  To understand the TVM assembly output, read [TVM](/foundations/whitepapers/tvm).
</Aside>

## Limitations

Note that when the code of other contracts is stored directly in a contract's own code ([Tact](/languages/tact) does this automatically if the contract system contains no circular dependencies), the overall code coverage percentage is affected.

To mitigate this effect in coverage estimation, add a circular dependency. For example, import a file with the following content.

```tact title="Tact"
contract A {
    receive() {
        let x = initOf B();
        drop2(x);
    }
}

contract B() {
    receive() {
        let x = initOf A();
        drop2(x);
    }
}

asm fun drop2(x: StateInit) {
    DROP2
}
```


================================================
FILE: contract-dev/blueprint/deploy.mdx
================================================
---
title: "Deployment and interaction"
---

Following development and testing, contracts can be deployed and interacted with. This section outlines deployment scripts, provider configuration, and interaction workflows.

## Running scripts

Blueprint allows you to run scripts directly from the project.

1. Place your script in the `scripts/` folder.
1. Each script file must export a `run` function:
   ```typescript
   export async function run(provider: NetworkProvider, args: string[]) {
     //
   }
   ```
1. Run the script with the `npx blueprint run <SCRIPT> [arg1, arg2, ...]` command.

## Deploying contracts

To deploy a smart contract, create a deployment script in `scripts/deploy<Contract>.ts` with the following content.

```typescript title="./scripts/deploy<Contract>.ts" expandable
import { toNano } from '@ton/core';
import { MyContract } from '../wrappers/MyContract';
import { compile, NetworkProvider } from '@ton/blueprint';

export async function run(provider: NetworkProvider) {
    const myContract = provider.open(MyContract.createFromConfig({}, await compile('MyContract')));

    await myContract.sendDeploy(provider.sender(), toNano('0.05'));

    await provider.waitForDeploy(myContract.address);

    // run methods on `myContract`
}
```

### Interactive mode

To launch a guided prompt that walks you through selecting a script and deployment options, use:

```bash
npx blueprint run
```

### Non-interactive mode

To run a deployment script without prompts, provide the script name, network, and deployment method:

```bash
npx blueprint run deploy<CONTRACT> --<NETWORK> --<DEPLOY_METHOD>
```

**Example:**

```bash
npx blueprint run deployCounter --mainnet --tonconnect
```

## Deploying methods

### Mnemonic provider

Run scripts with a wallet using mnemonic authentication by configuring environment variables and specifying the `--mnemonic` flag.

**Required variables:**

Set the following variables in the `.env` file:

- `WALLET_MNEMONIC` — wallet mnemonic phrase (space-separated words).
- `WALLET_VERSION` — wallet contract version.
- **Supported versions:** `v1r1`, `v1r2`, `v1r3`, `v2r1`, `v2r2`, `v3r1`, `v3r2`, `v4r1`, `v4r2` (or `v4`), `v5r1`.

```env
WALLET_MNEMONIC="word1 word2 ... word24"   # Your wallet's mnemonic phrase
WALLET_VERSION="v4r2"                      # Wallet contract version
```

**Optional variables:**

- `WALLET_ID` — wallet ID for wallet versions earlier than `v5r1`.
- `SUBWALLET_NUMBER` — subwallet number for `v5r1` wallets.

_See the [wallet v5 reference](https://github.com/ton-org/ton/blob/master/src/wallets/v5r1/WalletV5R1WalletId.ts) for `WALLET_ID` construction._

Once your environment is set up, you can use the mnemonic wallet for deployment with the appropriate configuration.

### TON Connect

Run scripts with a wallet using TON Connect by specifying the `--tonconnect` option.

**Steps:**

1. After running the command, select a wallet from the available options.
1. Scan the generated QR code in your wallet app or open the provided link.
1. Confirm the transaction in the wallet's interface.

Once confirmed, the contract is deployed.

## Interaction

After deploying your contracts, you can interact with them using Blueprint scripts. These scripts use the [wrappers](/contract-dev/blueprint/develop#wrappers) you've created to send messages and call get methods on your deployed contracts.

To run the following scripts, refer to the [Running scripts](#running-scripts) section.

### Sending messages

To send [messages](/foundations/messages/ordinary-tx) to your deployed contracts, create a script that calls the `send` methods defined in your wrapper. These methods trigger contract execution and modify the contract's state.

```typescript title="./scripts/sendIncrease.ts"
import { Address, toNano } from '@ton/core';
import { MyContract } from '../wrappers/MyContract';
import { NetworkProvider } from '@ton/blueprint';

const contractAddress = Address.parse('<CONTRACT_ADDRESS>');

export async function run(provider: NetworkProvider) {
    const myContract = provider.open(MyContract.createFromAddress(contractAddress));

    await myContract.sendIncrease(provider.sender(), {
        value: toNano('0.05'),
        increaseBy: 42
    });

    await provider.waitForLastTransaction();
    console.log('Message sent successfully!');
}
```

### Executing get methods

Get methods allow you to read data from your deployed contracts without creating transactions. These methods are free to call and don't modify the contract's state.

```typescript title="./scripts/getCounter.ts"
import { Address } from '@ton/core';
import { MyContract } from '../wrappers/MyContract';
import { NetworkProvider } from '@ton/blueprint';

const contractAddress = Address.parse('<CONTRACT_ADDRESS>');

export async function run(provider: NetworkProvider) {
    const myContract = provider.open(MyContract.createFromAddress(contractAddress));

    const counter = await myContract.getCounter();
    const id = await myContract.getId();

    console.log('Counter:', counter);
    console.log('ID:', id);
}
```


================================================
FILE: contract-dev/blueprint/develop.mdx
================================================
---
title: "Smart contract development"
---

import { Aside } from "/snippets/aside.jsx";

Ensure your current directory is the root of the project initialized with `npm create ton@latest`.

## Contract creation

Use Blueprint to create a new contract.

### Interactive mode

To launch a guided prompt to create a contract step by step, use:

```bash
npx blueprint create
```

### Non-interactive mode

To create a contract without prompts, provide the contract name and template type:

```bash
npx blueprint create <CONTRACT> --type <TYPE>
```

- `<CONTRACT>` — contract name
- `<TYPE>` — template type, e.g., `tolk-empty`, `func-empty`, `tact-empty`, `tolk-counter`, `func-counter`, `tact-counter`

**Example:**

```bash
npx blueprint create MyNewContract --type tolk-empty
```

## Contract code writing

After creation, contracts are placed in the `contracts/` folder.
Each file uses the extension that matches its language.
For example, creating a Tolk contract `MyNewContract` results in `contracts/my_new_contract.tolk`.

## Building

Blueprint compiles your contracts into build artifacts.

### Interactive mode

Run without arguments to select contracts from a prompt:

```bash
npx blueprint build
```

### Non-interactive mode

Specify a contract name or use flags to skip prompts:

```bash
npx blueprint build <CONTRACT>
```

**Example:**

```bash
npx blueprint build MyNewContract
npx blueprint build --all # build all contracts
```

### Compiled artifacts

Compiled outputs are stored in the `build/` directory.

- `build/<CONTRACT>.compiled.json` — serialized contract representation used for deployment and testing.

  Each file contains three fields:

  - `hash` — hash of the compiled contract code in hexadecimal format.
  - `hashBase64` — the same hash encoded in Base64.
  - `hex` — the compiled contract code in hexadecimal form.

  Example:

  ```json title='<CONTRACT>.compiled.json'
  {
      "hash":"21eabd3331276c532778ad3fdcb5b78e5cf2ffefbc0a6dc...",
      "hashBase64":"Ieq9MzEnbFMneK0/3LW3jlzy/++8Cm3Dxkt+I3yRe...",
      "hex":"b5ee9c72410106010082000114ff00f4a413f4bcf2c80b01..."
  }
  ```

- `build/<CONTRACT>/<CONTRACT>.fif` — Fift code derived from the contract.
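The `hash` and `hashBase64` fields described above encode the same bytes, which can be verified with Node's standard library alone. The `hashFieldsConsistent` helper below is an illustrative sketch, not a Blueprint API:

```typescript
// Check that an artifact's hashBase64 really is the Base64 encoding of
// its hex-encoded hash. The shape matches <CONTRACT>.compiled.json.
interface CompiledArtifact {
    hash: string;
    hashBase64: string;
    hex: string;
}

function hashFieldsConsistent(artifact: CompiledArtifact): boolean {
    const hashBytes = Buffer.from(artifact.hash, 'hex');
    return hashBytes.toString('base64') === artifact.hashBase64;
}
```

This can catch a corrupted or hand-edited artifact before it is used for deployment.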

## Wrappers

Wrappers are TypeScript classes that **you write** to interact with your smart contracts. They act as a bridge between your application code and the blockchain, encapsulating contract deployment, message sending, and data retrieval logic. Each wrapper implements the `Contract` interface from [`@ton/core`](https://github.com/ton-org/ton-core).

When you create a new contract wit
│   │   │   └── transfer.mdx
│   │   ├── quick-start.mdx
│   │   ├── ui-integration/
│   │   │   ├── button-js.mdx
│   │   │   └── button-react.mdx
│   │   └── webhooks.mdx
│   ├── wallet-apps/
│   │   ├── addresses-workflow.mdx
│   │   ├── deep-links.mdx
│   │   ├── get-coins.mdx
│   │   ├── tonkeeper.mdx
│   │   └── web.mdx
│   └── walletkit/
│       ├── android/
│       │   ├── data.mdx
│       │   ├── events.mdx
│       │   ├── init.mdx
│       │   ├── installation.mdx
│       │   ├── transactions.mdx
│       │   ├── wallets.mdx
│       │   └── webview.mdx
│       ├── browser-extension.mdx
│       ├── ios/
│       │   ├── data.mdx
│       │   ├── events.mdx
│       │   ├── init.mdx
│       │   ├── installation.mdx
│       │   ├── transactions.mdx
│       │   ├── wallets.mdx
│       │   └── webview.mdx
│       ├── native-web.mdx
│       ├── overview.mdx
│       ├── qa-guide.mdx
│       └── web/
│           ├── connections.mdx
│           ├── events.mdx
│           ├── init.mdx
│           ├── jettons.mdx
│           ├── nfts.mdx
│           ├── toncoin.mdx
│           └── wallets.mdx
├── extra.css
├── extra.js
├── foundations/
│   ├── actions/
│   │   ├── change-library.mdx
│   │   ├── overview.mdx
│   │   ├── reserve.mdx
│   │   ├── send.mdx
│   │   └── set-code.mdx
│   ├── addresses/
│   │   ├── derive.mdx
│   │   ├── formats.mdx
│   │   ├── overview.mdx
│   │   └── serialize.mdx
│   ├── config.mdx
│   ├── consensus/
│   │   └── catchain-visualizer.mdx
│   ├── fees.mdx
│   ├── glossary.mdx
│   ├── limits.mdx
│   ├── messages/
│   │   ├── deploy.mdx
│   │   ├── external-in.mdx
│   │   ├── external-out.mdx
│   │   ├── internal.mdx
│   │   ├── modes.mdx
│   │   ├── ordinary-tx.mdx
│   │   └── overview.mdx
│   ├── phases.mdx
│   ├── precompiled.mdx
│   ├── proofs/
│   │   ├── overview.mdx
│   │   └── verifying-liteserver-proofs.mdx
│   ├── serialization/
│   │   ├── boc.mdx
│   │   ├── cells.mdx
│   │   ├── library.mdx
│   │   ├── merkle-update.mdx
│   │   ├── merkle.mdx
│   │   └── pruned.mdx
│   ├── services.mdx
│   ├── shards.mdx
│   ├── status.mdx
│   ├── system.mdx
│   ├── traces.mdx
│   └── whitepapers/
│       ├── catchain.mdx
│       ├── overview.mdx
│       ├── tblkch.mdx
│       ├── ton.mdx
│       └── tvm.mdx
├── from-ethereum.mdx
├── get-support.mdx
├── index.mdx
├── languages/
│   ├── fift/
│   │   ├── deep-dive.mdx
│   │   ├── fift-and-tvm-assembly.mdx
│   │   ├── multisig.mdx
│   │   ├── overview.mdx
│   │   └── whitepaper.mdx
│   ├── func/
│   │   ├── asm-functions.mdx
│   │   ├── built-ins.mdx
│   │   ├── changelog.mdx
│   │   ├── comments.mdx
│   │   ├── compiler-directives.mdx
│   │   ├── cookbook.mdx
│   │   ├── declarations-overview.mdx
│   │   ├── dictionaries.mdx
│   │   ├── expressions.mdx
│   │   ├── functions.mdx
│   │   ├── global-variables.mdx
│   │   ├── known-issues.mdx
│   │   ├── libraries.mdx
│   │   ├── literals.mdx
│   │   ├── operators.mdx
│   │   ├── overview.mdx
│   │   ├── special-functions.mdx
│   │   ├── statements.mdx
│   │   ├── stdlib.mdx
│   │   └── types.mdx
│   ├── tact.mdx
│   ├── tl-b/
│   │   ├── complex-and-non-trivial-examples.mdx
│   │   ├── overview.mdx
│   │   ├── simple-examples.mdx
│   │   ├── syntax-and-semantics.mdx
│   │   ├── tep-examples.mdx
│   │   └── tooling.mdx
│   └── tolk/
│       ├── basic-syntax.mdx
│       ├── changelog.mdx
│       ├── examples.mdx
│       ├── features/
│       │   ├── asm-functions.mdx
│       │   ├── auto-serialization.mdx
│       │   ├── compiler-optimizations.mdx
│       │   ├── contract-getters.mdx
│       │   ├── contract-storage.mdx
│       │   ├── jetton-payload.mdx
│       │   ├── lazy-loading.mdx
│       │   ├── message-handling.mdx
│       │   ├── message-sending.mdx
│       │   └── standard-library.mdx
│       ├── from-func/
│       │   ├── converter.mdx
│       │   ├── stdlib-comparison.mdx
│       │   └── tolk-vs-func.mdx
│       ├── idioms-conventions.mdx
│       ├── overview.mdx
│       ├── syntax/
│       │   ├── conditions-loops.mdx
│       │   ├── exceptions.mdx
│       │   ├── functions-methods.mdx
│       │   ├── imports.mdx
│       │   ├── mutability.mdx
│       │   ├── operators.mdx
│       │   ├── pattern-matching.mdx
│       │   ├── structures-fields.mdx
│       │   └── variables.mdx
│       └── types/
│           ├── address.mdx
│           ├── aliases.mdx
│           ├── booleans.mdx
│           ├── callables.mdx
│           ├── cells.mdx
│           ├── enums.mdx
│           ├── generics.mdx
│           ├── list-of-types.mdx
│           ├── maps.mdx
│           ├── nullable.mdx
│           ├── numbers.mdx
│           ├── overall-serialization.mdx
│           ├── overall-tvm-stack.mdx
│           ├── strings.mdx
│           ├── structures.mdx
│           ├── tensors.mdx
│           ├── tuples.mdx
│           ├── type-checks-and-casts.mdx
│           ├── unions.mdx
│           └── void-never.mdx
├── more-tutorials.mdx
├── old.mdx
├── package.json
├── payments/
│   ├── jettons.mdx
│   ├── overview.mdx
│   └── toncoin.mdx
├── resources/
│   ├── dictionaries/
│   │   ├── ban.txt
│   │   ├── custom.txt
│   │   ├── tvm-instructions.txt
│   │   └── two-letter-words-ban.txt
│   └── tvm/
│       └── cp0.txt
├── scripts/
│   ├── check-navigation.mjs
│   ├── check-redirects.mjs
│   ├── common.mjs
│   ├── docusaurus-sidebars-types.d.ts
│   └── stats.py
├── snippets/
│   ├── aside.jsx
│   ├── catchain-visualizer.jsx
│   ├── feePlayground.jsx
│   ├── fence-table.jsx
│   ├── filetree.jsx
│   ├── image.jsx
│   ├── stub.jsx
│   └── tvm-instruction-table.jsx
├── standard/
│   ├── tokens/
│   │   ├── airdrop.mdx
│   │   ├── jettons/
│   │   │   ├── api.mdx
│   │   │   ├── burn.mdx
│   │   │   ├── comparison.mdx
│   │   │   ├── find.mdx
│   │   │   ├── how-it-works.mdx
│   │   │   ├── mint.mdx
│   │   │   ├── mintless/
│   │   │   │   ├── deploy.mdx
│   │   │   │   └── overview.mdx
│   │   │   ├── overview.mdx
│   │   │   ├── supply-data.mdx
│   │   │   ├── transfer.mdx
│   │   │   └── wallet-data.mdx
│   │   ├── metadata.mdx
│   │   ├── nft/
│   │   │   ├── api.mdx
│   │   │   ├── comparison.mdx
│   │   │   ├── deploy.mdx
│   │   │   ├── how-it-works.mdx
│   │   │   ├── metadata.mdx
│   │   │   ├── nft-2.0.mdx
│   │   │   ├── overview.mdx
│   │   │   ├── reference.mdx
│   │   │   ├── sbt.mdx
│   │   │   ├── transfer.mdx
│   │   │   └── verify.mdx
│   │   └── overview.mdx
│   ├── vesting.mdx
│   └── wallets/
│       ├── comparison.mdx
│       ├── highload/
│       │   ├── overview.mdx
│       │   ├── v2/
│       │   │   └── specification.mdx
│       │   └── v3/
│       │       ├── create.mdx
│       │       ├── send-batch-transfers.mdx
│       │       ├── send-single-transfer.mdx
│       │       ├── specification.mdx
│       │       └── verify-is-processed.mdx
│       ├── history.mdx
│       ├── how-it-works.mdx
│       ├── interact.mdx
│       ├── lockup.mdx
│       ├── mnemonics.mdx
│       ├── performance.mdx
│       ├── preprocessed-v2/
│       │   ├── interact.mdx
│       │   └── specification.mdx
│       ├── restricted.mdx
│       ├── v4.mdx
│       ├── v5-api.mdx
│       └── v5.mdx
├── start-here.mdx
└── tvm/
    ├── builders-and-slices.mdx
    ├── continuations.mdx
    ├── exit-codes.mdx
    ├── gas.mdx
    ├── get-method.mdx
    ├── initialization.mdx
    ├── instructions.mdx
    ├── overview.mdx
    ├── registers.mdx
    └── tools/
        ├── retracer.mdx
        ├── ton-decompiler.mdx
        ├── tvm-explorer.mdx
        └── txtracer.mdx
SYMBOL INDEX (180 symbols across 13 files)

FILE: .github/scripts/build_review_instructions.py
  function main (line 9) | def main() -> None:

FILE: .github/scripts/build_review_payload.py
  function _read_json (line 34) | def _read_json(path: Path) -> Optional[dict]:
  function _iter_instance_jsons (line 42) | def _iter_instance_jsons(run_dir: Path) -> Iterable[Tuple[Path, dict]]:
  function _role_of (line 53) | def _role_of(obj: dict) -> Optional[str]:
  function _final_message_of (line 68) | def _final_message_of(obj: dict) -> Optional[str]:
  function _metrics_of (line 73) | def _metrics_of(obj: dict) -> Dict[str, object]:
  function _absolutize_location_links (line 80) | def _absolutize_location_links(body: str, repo: Optional[str], sha: Opti...
  function _build_from_sidecar (line 169) | def _build_from_sidecar(sidecar: dict, *, repo: str, sha: str, repo_root...
  class Finding (line 284) | class Finding:
    method key (line 295) | def key(self) -> Tuple[str, int, int, str]:
  function _extract_first_code_block (line 300) | def _extract_first_code_block(text: str) -> Tuple[Optional[str], Optiona...
  function _strip_trailing_json_trailer (line 316) | def _strip_trailing_json_trailer(text: str) -> str:
  function _parse_findings (line 321) | def _parse_findings(md: str) -> List[Finding]:
  function _parse_trailer_findings (line 398) | def _parse_trailer_findings(md: str) -> List[dict]:
  function main (line 423) | def main() -> None:

FILE: .github/scripts/common.mjs
  function hidePriorCommentsWithPrefix (line 1) | async function hidePriorCommentsWithPrefix({
  function createComment (line 61) | async function createComment({
  function withRetry (line 79) | async function withRetry(fn, maxRetries = 3, baseDelayMs = 1500) {

FILE: .github/scripts/generate-v2-api-table.py
  function load_openapi_spec (line 18) | def load_openapi_spec(filepath: Path) -> dict:
  function extract_endpoints (line 24) | def extract_endpoints(spec: dict, exclude_tags: list = None) -> list:
  function generate_mintlify_link (line 64) | def generate_mintlify_link(endpoint: dict, base_path: str) -> str:
  function generate_table (line 89) | def generate_table(endpoints: list, link_base: str) -> str:
  function process_spec (line 127) | def process_spec(config: dict, repo_root: Path) -> str:
  function inject_table_into_mdx (line 158) | def inject_table_into_mdx(mdx_path: Path, marker: str, table: str) -> bool:
  function find_repo_root (line 197) | def find_repo_root() -> Path:
  function main (line 208) | def main():

FILE: .github/scripts/generate-v3-api-table.py
  function load_openapi_spec (line 39) | def load_openapi_spec(filepath: Path) -> dict:
  function extract_endpoints (line 45) | def extract_endpoints(spec: dict) -> list:
  function generate_mintlify_link (line 76) | def generate_mintlify_link(endpoint: dict) -> str:
  function generate_table (line 103) | def generate_table(endpoints: list) -> str:
  function inject_table_into_mdx (line 144) | def inject_table_into_mdx(mdx_path: Path, table: str) -> bool:
  function find_repo_root (line 176) | def find_repo_root() -> Path:
  function main (line 187) | def main():

FILE: .github/scripts/rewrite_review_links.py
  function main (line 11) | def main() -> None:

FILE: .github/scripts/tvm-instruction-gen.py
  function humanize_category (line 14) | def humanize_category(key):
  function render_alias (line 21) | def render_alias(alias):
  function render_instruction (line 28) | def render_instruction(insn, aliases):
  function render_static_mdx (line 44) | def render_static_mdx(spec):
  function inject_into_mdx (line 48) | def inject_into_mdx(mdx_path, new_block):
  function generate (line 71) | def generate(spec_input_path, spec_output_path, instructions_mdx_path):
  function update_doc_cp0 (line 79) | def update_doc_cp0(spec, spec_output_path):

FILE: extra.js
  function isTolkBlockCode (line 46) | function isTolkBlockCode(/**HTMLElement*/ codeElement) {
  function isCodeHighlighted (line 79) | function isCodeHighlighted(/**HTMLElement*/ codeElement) {
  function highlightCode (line 84) | function highlightCode(/**HTMLElement*/ codeElement, Prism) {
  function highlightAllCodeElementsOnPage (line 95) | function highlightAllCodeElementsOnPage(Prism) {
  function u (line 111) | function u(e){s.highlightedCode=e,a.hooks.run("before-insert",s),s.eleme...
  function i (line 111) | function i(e,n,t,r){this.type=e,this.content=n,this.alias=t,this.length=...
  function l (line 111) | function l(e,n,t,r){e.lastIndex=n;var a=e.exec(t);if(a&&r&&a[1]){var i=a...
  function o (line 111) | function o(e,n,t,r,s,g){for(var f in t)if(t.hasOwnProperty(f)&&t[f]){var...
  function s (line 111) | function s(){var e={value:null,prev:null,next:null},n={value:null,prev:e...
  function u (line 111) | function u(e,n,t){var r=n.next,a={value:t,prev:n,next:r};return n.next=a...
  function c (line 111) | function c(e,n,t){for(var r=n.next,a=0;a<t&&r!==e.tail;a++)r=r.next;n.ne...
  function f (line 111) | function f(){a.manual||a.highlightAll()}

FILE: scripts/common.mjs
  function ansiRed (line 29) | function ansiRed(src) {
  function ansiBoldRed (line 34) | function ansiBoldRed(src) {
  function ansiGreen (line 39) | function ansiGreen(src) {
  function ansiBoldGreen (line 44) | function ansiBoldGreen(src) {
  function ansiYellow (line 49) | function ansiYellow(src) {
  function ansiBoldYellow (line 54) | function ansiBoldYellow(src) {
  function ansiBold (line 59) | function ansiBold(src) {
  function composeErrorList (line 80) | function composeErrorList(brief, list, msg) {
  function composeError (line 85) | function composeError(msg) {
  function composeWarningList (line 103) | function composeWarningList(msg, list) {
  function composeWarning (line 108) | function composeWarning(msg) {
  function composeSuccess (line 113) | function composeSuccess(msg) {
  function prefixWithSlash (line 118) | function prefixWithSlash(src) {
  function initMdxParser (line 125) | async function initMdxParser() {
  function hasStub (line 139) | function hasStub(parser, filepath) {
  function findUnignoredFiles (line 162) | function findUnignoredFiles(ext = 'mdx', dir = '.') {
  function getConfig (line 251) | function getConfig() {
  function getNavLinks (line 263) | function getNavLinks(config) {
  function getNavLinksSet (line 296) | function getNavLinksSet(config) {
  function getRedirects (line 311) | function getRedirects(config) {
  function getRedirectsSet (line 324) | function getRedirectsSet(config) {

FILE: scripts/docusaurus-sidebars-types.d.ts
  type Optional (line 8) | type Optional<T extends object, K extends keyof T = keyof T> = Omit<T, K> &
  type Expand (line 11) | type Expand<T extends { [x: string]: unknown }> = { [P in keyof T]: T[P] };
  type SidebarItemBase (line 13) | type SidebarItemBase = {
  type SidebarItemDoc (line 19) | type SidebarItemDoc = SidebarItemBase & {
  type SidebarItemHtml (line 31) | type SidebarItemHtml = SidebarItemBase & {
  type SidebarItemLink (line 37) | type SidebarItemLink = SidebarItemBase & {
  type SidebarItemAutogenerated (line 45) | type SidebarItemAutogenerated = SidebarItemBase & {
  type SidebarItemCategoryBase (line 50) | type SidebarItemCategoryBase = SidebarItemBase & {
  type SidebarItemCategoryLinkDoc (line 58) | type SidebarItemCategoryLinkDoc = { type: "doc"; id: string };
  type SidebarItemCategoryLinkGeneratedIndexConfig (line 60) | type SidebarItemCategoryLinkGeneratedIndexConfig = {
  type SidebarItemCategoryLinkGeneratedIndex (line 68) | type SidebarItemCategoryLinkGeneratedIndex = {
  type SidebarItemCategoryLinkConfig (line 78) | type SidebarItemCategoryLinkConfig =
  type SidebarItemCategoryLink (line 82) | type SidebarItemCategoryLink =
  type SidebarItemCategoryConfig (line 87) | type SidebarItemCategoryConfig = Expand<
  type SidebarCategoriesShorthand (line 95) | type SidebarCategoriesShorthand = {
  type SidebarItemConfig (line 99) | type SidebarItemConfig =
  type SidebarConfig (line 109) | type SidebarConfig = SidebarItemConfig[];
  type SidebarsConfig (line 111) | type SidebarsConfig = {

FILE: scripts/stats.py
  function read_json (line 41) | def read_json(path: Path) -> dict:
  function nav_slugs (line 45) | def nav_slugs(docs: dict) -> List[str]:
  function resolve_file (line 70) | def resolve_file(slug: str) -> Optional[str]:
  function strip_frontmatter (line 83) | def strip_frontmatter(s: str) -> str:
  function strip_imports (line 92) | def strip_imports(s: str) -> str:
  function split_fences (line 96) | def split_fences(s: str) -> Tuple[str, str]:
  function extract_counts (line 113) | def extract_counts(s: str) -> Tuple[int, int, int]:
  function count_words (line 154) | def count_words(s: str) -> int:
  function count_images (line 158) | def count_images(src: str) -> int:
  function is_stub (line 166) | def is_stub(content: str) -> bool:
  function summarize (line 172) | def summarize(words: List[int]) -> Dict[str, int]:
  function write_json (line 192) | def write_json(path: Path, obj: dict) -> None:
  function run_latest (line 199) | def run_latest() -> None:
  function git (line 261) | def git(cmd: List[str]) -> str:
  function day_commits (line 267) | def day_commits() -> List[Tuple[str, str]]:
  function _git_show_silent (line 294) | def _git_show_silent(sha: str, rel: str) -> Optional[str]:
  function resolve_at_commit (line 308) | def resolve_at_commit(slug: str, sha: str) -> Optional[Tuple[str, str]]:
  function compute_snapshot (line 321) | def compute_snapshot(docs_json_str: str, sha: str) -> Tuple[Dict, Dict, ...
  function run_history (line 356) | def run_history() -> None:
  function run_charts (line 384) | def run_charts() -> None:
  function main (line 432) | def main():

FILE: snippets/catchain-visualizer.jsx
  function randomBetween (line 109) | function randomBetween(min, max) {
  function clamp (line 113) | function clamp(value, min, max) {
  function createPositions (line 117) | function createPositions(count) {
  function createNode (line 132) | function createNode(index, pos) {
  function makeCandidate (line 160) | function makeCandidate(round, attempt, proposerIndex, proposerId) {
  function logEvent (line 181) | function logEvent(model, text) {
  function scheduleTask (line 188) | function scheduleTask(model, delayMs, fn, label = "") {
  function getNode (line 196) | function getNode(model, nodeId) {
  function createCatchainEnvelope (line 200) | function createCatchainEnvelope(model, from, actions, deps = []) {
  function sendCatchainEnvelope (line 257) | function sendCatchainEnvelope(model, envelope, options = {}) {
  function sendDepRequest (line 292) | function sendDepRequest(model, from, to, missingIds) {
  function requestMissingDeps (line 320) | function requestMissingDeps(
  function tryDeliverPendingCatchain (line 349) | function tryDeliverPendingCatchain(model, node) {
  function deliverCatchainEnvelope (line 368) | function deliverCatchainEnvelope(model, node, envelope, originalFrom) {
  function handleDepRequest (line 441) | function handleDepRequest(model, node, message) {
  function chooseVoteTarget (line 457) | function chooseVoteTarget(model, node) {
  function broadcastBlock (line 494) | function broadcastBlock(model, options) {
  function addEvent (line 504) | function addEvent(node, candidateId, eventType) {
  function enqueueAction (line 534) | function enqueueAction(model, node, action, delay = 0, includeSelf = fal...
  function issueApproval (line 559) | function issueApproval(model, node, candidateId, opts = {}) {
  function issueVote (line 587) | function issueVote(model, node, candidateId) {
  function issuePrecommit (line 605) | function issuePrecommit(model, node, candidateId) {
  function issueCommit (line 626) | function issueCommit(model, node, candidateId) {
  function tryVote (line 668) | function tryVote(model) {
  function tryPrecommit (line 682) | function tryPrecommit(model, node, candidateId) {
  function tryCommit (line 702) | function tryCommit(model, node, candidateId) {
  function calcApprovalDelay (line 723) | function calcApprovalDelay(model, node, candidate, isSlow) {
  function getSimDelay (line 731) | function getSimDelay() {
  function pickCoordinator (line 736) | function pickCoordinator(model, attempt) {
  function getNodePriority (line 741) | function getNodePriority(round, idx, total, C) {
  function ensureNullCandidate (line 749) | function ensureNullCandidate(model) {
  function sendVoteFor (line 781) | function sendVoteFor(model) {
  function handleAction (line 809) | function handleAction(model, node, action, fromId) {
  function handleMessage (line 938) | function handleMessage(model, message) {
  function deliverMessages (line 950) | function deliverMessages(model) {
  function runTasks (line 964) | function runTasks(model) {
  function startAttempt (line 984) | function startAttempt(model, options = {}) {
  function startRound (line 1109) | function startRound(model, resetRoundNumber = false) {
  function createModel (line 1148) | function createModel(config) {
  function stepModel (line 1180) | function stepModel(model, dt) {

FILE: snippets/tvm-instruction-table.jsx
  function resolveCategoryGroup (line 122) | function resolveCategoryGroup(categoryKey) {
  function humanizeCategoryKey (line 136) | function humanizeCategoryKey(key) {
  function formatGasDisplay (line 146) | function formatGasDisplay(gas) {
  function formatOperandSummary (line 161) | function formatOperandSummary(operand) {
  function formatInlineMarkdown (line 184) | function formatInlineMarkdown(text) {
  function compareOpcodes (line 203) | function compareOpcodes(a, b) {
  function createSearchTokens (line 214) | function createSearchTokens(query) {
  function escapeRegExp (line 223) | function escapeRegExp(value) {
  function highlightMatches (line 227) | function highlightMatches(text, tokens) {
  function highlightHtmlContent (line 247) | function highlightHtmlContent(html, tokens) {
  function getItemSearchFields (line 265) | function getItemSearchFields(item) {
  function computeFieldMatchScore (line 279) | function computeFieldMatchScore(field, token) {
  function computeBestAliasMatchScore (line 288) | function computeBestAliasMatchScore(aliases, token) {
  function itemRelevanceScore (line 299) | function itemRelevanceScore(item, tokens) {
  function buildAnchorId (line 322) | function buildAnchorId(instruction) {
  function copyAnchorUrl (line 333) | function copyAnchorUrl(anchorId) {
  function copyPlainText (line 356) | function copyPlainText(value) {
  function formatAliasOperands (line 377) | function formatAliasOperands(operands) {
  function cleanAliasDescription (line 383) | function cleanAliasDescription(html) {
  function extractImplementationRefs (line 392) | function extractImplementationRefs(implementation) {
  function buildGitHubLineUrl (line 408) | function buildGitHubLineUrl(rawUrl, line) {
  function renderControlFlowSummary (line 430) | function renderControlFlowSummary(controlFlow) {
  function renderStackEntry (line 869) | function renderStackEntry(entry, key, mode) {
  function renderStackColumn (line 990) | function renderStackColumn(title, items, mode = "detail") {
  function renderStackColumns (line 1020) | function renderStackColumns(instruction, mode = "detail") {
  function renderInstructionDetail (line 1036) | function renderInstructionDetail(instruction, options = {}) {
  function load (line 3005) | async function load() {
Condensed preview — 464 files, each showing path, character count, and a content snippet (full structured content: 6,372K chars).
[
  {
    "path": ".cspell.jsonc",
    "chars": 5102,
    "preview": "// The .jsonc extension allows free use of comments and trailing commas.\n// The file is named with a dot in front to dis"
  },
  {
    "path": ".editorconfig",
    "chars": 147,
    "preview": "root = true\n\n[*]\ncharset = utf-8\nend_of_line = lf\nindent_style = space\nindent_size = 2\ninsert_final_newline = true\ntrim_"
  },
  {
    "path": ".gitattributes",
    "chars": 19,
    "preview": "* text=auto eol=lf\n"
  },
  {
    "path": ".github/dependabot.yml",
    "chars": 612,
    "preview": "# https://docs.github.com/en/code-security/dependabot/working-with-dependabot/dependabot-options-reference\n\nversion: 2\nu"
  },
  {
    "path": ".github/scripts/build_review_instructions.py",
    "chars": 6613,
    "preview": "#!/usr/bin/env python3\n\"\"\"Generate reviewer instructions with embedded style guide.\"\"\"\n\nfrom __future__ import annotatio"
  },
  {
    "path": ".github/scripts/build_review_payload.py",
    "chars": 27394,
    "preview": "#!/usr/bin/env python3\n\"\"\"\nBuild a GitHub Pull Request review payload from Pitaya results.\n\nInputs:\n  - --run-dir: path "
  },
  {
    "path": ".github/scripts/common.mjs",
    "chars": 2814,
    "preview": "export async function hidePriorCommentsWithPrefix({\n  github, // injected by GitHub\n  context, // injected by GitHub\n  e"
  },
  {
    "path": ".github/scripts/generate-v2-api-table.py",
    "chars": 7202,
    "preview": "import json\nimport re\nfrom pathlib import Path\nfrom collections import defaultdict\n\n# Define which specs to process and "
  },
  {
    "path": ".github/scripts/generate-v3-api-table.py",
    "chars": 5755,
    "preview": "import re\nfrom pathlib import Path\nfrom collections import defaultdict\n\ntry:\n    import yaml\n    HAS_YAML = True\nexcept "
  },
  {
    "path": ".github/scripts/rewrite_review_links.py",
    "chars": 2938,
    "preview": "#!/usr/bin/env python3\n\"\"\"Convert repo-relative doc links in the review body to absolute blob URLs.\"\"\"\n\nfrom __future__ "
  },
  {
    "path": ".github/scripts/tvm-instruction-gen.py",
    "chars": 3157,
    "preview": "import json\nimport os\nimport sys\nimport textwrap\nimport mistletoe\n\nWORKSPACE_ROOT = os.path.abspath(os.path.join(os.path"
  },
  {
    "path": ".github/workflows/bouncer.yml",
    "chars": 7137,
    "preview": "name: 🏀 Bouncer\n# aka 🚪 Supervisor\n\nenv:\n  # additions only\n  MAX_ADDITIONS: 600\n  # many target issues usually mean big"
  },
  {
    "path": ".github/workflows/commander.yml",
    "chars": 4816,
    "preview": "# Listens to new comments with /commands and acts accordingly\nname: 📡 Commander\n\nenv:\n  HUSKY: 0\n  NODE_VERSION: 20\n\non:"
  },
  {
    "path": ".github/workflows/generate-api-tables.yml",
    "chars": 1353,
    "preview": "name: Generate API Tables\n\nenv:\n  PYTHON_VERSION: \"3.11\"\n  NODE_VERSION: \"20\"\n\non:\n  push:\n    paths:\n      - 'ecosystem"
  },
  {
    "path": ".github/workflows/instructions.yml",
    "chars": 2289,
    "preview": "name: 🕘 Instructions update\n\non:\n  schedule:\n    - cron: '17 3 * * *'\n  workflow_dispatch:\n    inputs:\n      source_bran"
  },
  {
    "path": ".github/workflows/linter.yml",
    "chars": 6799,
    "preview": "name: 💅 Linting suite\n\nenv:\n  HUSKY: 0\n  NODE_VERSION: 20\n\non:\n  pull_request:\n    branches: [\"**\"]\n  workflow_dispatch:"
  },
  {
    "path": ".github/workflows/pitaya.yml",
    "chars": 21715,
    "preview": "name: 🤖 AI review\n\non:\n  pull_request:\n    types: [opened, ready_for_review]\n  issue_comment:\n    types: [created]\n  pul"
  },
  {
    "path": ".gitignore",
    "chars": 238,
    "preview": "# Vale (spell and style checker)\n.vale/*\n!.vale/config/\n!.vale/NONE/\n\n# Miscellaneous\n.DS_Store\n\n# Editors\n.idea/\n.vscod"
  },
  {
    "path": ".husky/pre-push",
    "chars": 0,
    "preview": ""
  },
  {
    "path": ".prettierignore",
    "chars": 110,
    "preview": "*.mdx\n/ecosystem/api/toncenter/v2/\n/ecosystem/api/toncenter/v3/\n/ecosystem/api/toncenter/smc-index/\n/LICENSE*\n"
  },
  {
    "path": ".remarkignore",
    "chars": 406,
    "preview": "# Ignore folders\nnode_modules/\n/pending/\n\n# Ignore some whitepapers\n/languages/fift/whitepaper.mdx\n/foundations/whitepap"
  },
  {
    "path": ".remarkrc.mjs",
    "chars": 4290,
    "preview": "import remarkFrontmatter from 'remark-frontmatter';\nimport remarkGfm from 'remark-gfm';\nimport remarkMath from 'remark-m"
  },
  {
    "path": "CODEOWNERS",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "LICENSE-code",
    "chars": 1078,
    "preview": "MIT License\n\nCopyright (c) 2025 TON Studio and others\n\nPermission is hereby granted, free of charge, to any person obtai"
  },
  {
    "path": "LICENSE-docs",
    "chars": 20133,
    "preview": "Attribution-ShareAlike 4.0 International\n\n=======================================================================\n\nCreat"
  },
  {
    "path": "README.md",
    "chars": 3236,
    "preview": "# TON Docs\n\n**[Follow the full quickstart guide](https://www.mintlify.com/docs/quickstart)**\n\n## Development\n\nInstall th"
  },
  {
    "path": "contract-dev/blueprint/api.mdx",
    "chars": 19679,
    "preview": "---\ntitle: \"Blueprint TypeScript API\"\n---\n\nBlueprint exports functions and classes for programmatic interaction with TON"
  },
  {
    "path": "contract-dev/blueprint/benchmarks.mdx",
    "chars": 11785,
    "preview": "---\ntitle: \"Benchmarking performance\"\n---\n\nimport { Aside } from \"/snippets/aside.jsx\";\nimport { FenceTable } from \"/sni"
  },
  {
    "path": "contract-dev/blueprint/cli.mdx",
    "chars": 9059,
    "preview": "---\ntitle: \"Blueprint CLI\"\n---\n\nBlueprint is a CLI tool for TON smart contract development. This reference covers all av"
  },
  {
    "path": "contract-dev/blueprint/config.mdx",
    "chars": 5559,
    "preview": "---\ntitle: \"Configuring Blueprint\"\n---\n\nA [configuration file](https://github.com/ton-org/blueprint/blob/develop/src/con"
  },
  {
    "path": "contract-dev/blueprint/coverage.mdx",
    "chars": 6527,
    "preview": "---\ntitle: \"Collecting test coverage\"\n---\n\nimport { Aside } from \"/snippets/aside.jsx\";\nimport { Image } from \"/snippets"
  },
  {
    "path": "contract-dev/blueprint/deploy.mdx",
    "chars": 5073,
    "preview": "---\ntitle: \"Deployment and interaction\"\n---\n\nFollowing development and testing, contracts can be deployed and interacted"
  },
  {
    "path": "contract-dev/blueprint/develop.mdx",
    "chars": 10714,
    "preview": "---\ntitle: \"Smart contract development\"\n---\n\nimport { Aside } from \"/snippets/aside.jsx\";\n\nEnsure your current directory"
  },
  {
    "path": "contract-dev/blueprint/overview.mdx",
    "chars": 2600,
    "preview": "---\ntitle: \"Blueprint overview\"\nsidebarTitle: \"Overview\"\n---\n\nFor smart contract development on TON Blockchain, **Bluepr"
  },
  {
    "path": "contract-dev/contract-sharding.mdx",
    "chars": 3417,
    "preview": "---\ntitle: \"Contract sharding\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\nimport { Image } from '/snippets/image."
  },
  {
    "path": "contract-dev/debug.mdx",
    "chars": 9870,
    "preview": "---\ntitle: \"Debugging smart contracts\"\n---\n\nimport {Aside} from \"/snippets/aside.jsx\";\n\n<Aside>\n  All examples from this"
  },
  {
    "path": "contract-dev/first-smart-contract.mdx",
    "chars": 24159,
    "preview": "---\ntitle: \"Your first smart contract\"\n---\n\nimport { Aside } from \"/snippets/aside.jsx\";\nimport { FenceTable } from \"/sn"
  },
  {
    "path": "contract-dev/gas.mdx",
    "chars": 15878,
    "preview": "---\ntitle: \"Estimate gas usage in TON contracts\"\n---\n\nimport { Aside } from \"/snippets/aside.jsx\";\n\nContracts which rece"
  },
  {
    "path": "contract-dev/ide/jetbrains.mdx",
    "chars": 8622,
    "preview": "---\ntitle: \"TON plugin for IDEs from JetBrains\"\nsidebarTitle: \"JetBrains IDEs\"\n---\n\nimport { Image } from '/snippets/ima"
  },
  {
    "path": "contract-dev/ide/overview.mdx",
    "chars": 2130,
    "preview": "---\ntitle: \"IDEs and editor plugins for smart contract development\"\nsidebarTitle: \"Overview\"\nmode: \"wide\"\n---\n\nimport { "
  },
  {
    "path": "contract-dev/ide/vscode.mdx",
    "chars": 15512,
    "preview": "---\ntitle: \"TON extension for Visual Studio Code (VS Code) and VSCode-based editors\"\nsidebarTitle: \"VSCode and forks\"\n--"
  },
  {
    "path": "contract-dev/on-chain-jetton-processing.mdx",
    "chars": 10080,
    "preview": "---\ntitle: \"On-chain Jetton processing\"\nsidebarTitle: \"Jetton processing\"\n---\n\nimport { Aside } from '/snippets/aside.js"
  },
  {
    "path": "contract-dev/random.mdx",
    "chars": 9591,
    "preview": "---\ntitle: \"Random number generation\"\nsidebarTitle: \"Random numbers\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\n\n"
  },
  {
    "path": "contract-dev/security.mdx",
    "chars": 16215,
    "preview": "---\ntitle: Security best practices\n---\n\nThere are several anti-patterns and potential attack vectors that smart contract"
  },
  {
    "path": "contract-dev/signing.mdx",
    "chars": 13160,
    "preview": "---\ntitle: \"Signing messages\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\n\nCryptographic signatures are the founda"
  },
  {
    "path": "contract-dev/testing/overview.mdx",
    "chars": 6210,
    "preview": "---\ntitle: \"Overview\"\n---\n\nimport { Aside } from \"/snippets/aside.jsx\";\n\nTON Sandbox ([`@ton/sandbox`](https://github.co"
  },
  {
    "path": "contract-dev/testing/reference.mdx",
    "chars": 60078,
    "preview": "---\ntitle: \"Reference\"\n---\n\nBlueprint provides comprehensive testing capabilities for TON smart contracts using the TON "
  },
  {
    "path": "contract-dev/upgrades.mdx",
    "chars": 16092,
    "preview": "---\ntitle: \"Upgrading contracts\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\n\nThe address of a contract is determi"
  },
  {
    "path": "contract-dev/using-on-chain-libraries.mdx",
    "chars": 10486,
    "preview": "---\ntitle: \"Using on-chain libraries\"\n---\n\nimport { Image } from '/snippets/image.jsx';\nimport { Aside } from '/snippets"
  },
  {
    "path": "contract-dev/vanity.mdx",
    "chars": 9471,
    "preview": "---\ntitle: \"How to use a vanity contract\"\nsidebarTitle: \"Use a vanity contract\"\n---\n\nA [vanity contract](https://github."
  },
  {
    "path": "contract-dev/zero-knowledge.mdx",
    "chars": 11989,
    "preview": "---\ntitle: \"Zero-knowledge proofs on TON\"\nsidebarTitle: \"Zero-knowledge proofs\"\n---\n\nimport { Aside } from '/snippets/as"
  },
  {
    "path": "contribute/snippets/aside.mdx",
    "chars": 2927,
    "preview": "---\ntitle: \"Aside component\"\nsidebarTitle: \"Aside\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\n\nTo display seconda"
  },
  {
    "path": "contribute/snippets/filetree.mdx",
    "chars": 5630,
    "preview": "---\ntitle: \"FileTree component\"\nsidebarTitle: \"FileTree\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\nimport { File"
  },
  {
    "path": "contribute/snippets/image.mdx",
    "chars": 7636,
    "preview": "---\ntitle: \"Image component\"\nsidebarTitle: \"Image\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\nimport { Image } fr"
  },
  {
    "path": "contribute/snippets/overview.mdx",
    "chars": 6874,
    "preview": "---\ntitle: \"Using components and snippets\"\nsidebarTitle: \"Overview\"\n---\n\n_Snippets_ keep the same content in sync across"
  },
  {
    "path": "contribute/style-guide-extended.mdx",
    "chars": 99843,
    "preview": "# TON documentation style guide\n\n## 0. Purpose, scope, and normative terms\n\nPurpose. This guide defines the required wri"
  },
  {
    "path": "contribute/style-guide.mdx",
    "chars": 9553,
    "preview": "---\ntitle: \"Documentation style guide\"\nsidebarTitle: \"Style guide\"\n---\n\nThis guide covers the basics: how to structure p"
  },
  {
    "path": "docs.json",
    "chars": 117238,
    "preview": "{\n  \"$schema\": \"https://mintlify.com/docs.json\",\n  \"theme\": \"maple\",\n  \"name\": \"TON Docs\",\n  \"logo\": {\n    \"light\": \"/re"
  },
  {
    "path": "ecosystem/ai/mcp.mdx",
    "chars": 8236,
    "preview": "---\ntitle: \"MCP servers\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\n\n[Model Context Protocol](https://modelcontex"
  },
  {
    "path": "ecosystem/analytics.mdx",
    "chars": 10825,
    "preview": "---\ntitle: \"Analytics and data providers\"\nsidebarTitle: \"Analytics\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\n\nD"
  },
  {
    "path": "ecosystem/api/overview.mdx",
    "chars": 5739,
    "preview": "---\ntitle: \"Overview\"\nmode: \"wide\"\n---\n\nAccess TON data via public liteservers, hosted APIs (TON Center v2/v3, TonAPI, d"
  },
  {
    "path": "ecosystem/api/price.mdx",
    "chars": 2977,
    "preview": "---\ntitle: \"Jetton prices API\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\n\nIn this article, we will discuss diffe"
  },
  {
    "path": "ecosystem/api/toncenter/get-api-key.mdx",
    "chars": 3965,
    "preview": "---\ntitle: \"Get your TON Center API key\"\nsidebarTitle: \"Get API key\"\n---\n\nimport { Image } from '/snippets/image.jsx';\n\n"
  },
  {
    "path": "ecosystem/api/toncenter/introduction.mdx",
    "chars": 509,
    "preview": "---\ntitle: Introduction\n---\n\nTON Center HTTP APIs: read blockchain data, query smart contracts, send transactions.\n\n<Col"
  },
  {
    "path": "ecosystem/api/toncenter/rate-limit.mdx",
    "chars": 2970,
    "preview": "---\ntitle: \"Rate limits\"\n---\n\nimport {Aside} from \"/snippets/aside.jsx\";\n\nTo ensure stability and fair access, TON Cente"
  },
  {
    "path": "ecosystem/api/toncenter/smc-index/get-nominator-bookings-method.mdx",
    "chars": 43,
    "preview": "---\nopenapi: get /getNominatorBookings\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/smc-index/get-nominator-earnings-method.mdx",
    "chars": 43,
    "preview": "---\nopenapi: get /getNominatorEarnings\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/smc-index/get-nominator-method.mdx",
    "chars": 35,
    "preview": "---\nopenapi: get /getNominator\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/smc-index/get-pool-bookings-method.mdx",
    "chars": 38,
    "preview": "---\nopenapi: get /getPoolBookings\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/smc-index/get-pool-method.mdx",
    "chars": 30,
    "preview": "---\nopenapi: get /getPool\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/smc-index/lifecheck-method.mdx",
    "chars": 32,
    "preview": "---\nopenapi: get /lifecheck\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/smc-index.json",
    "chars": 29651,
    "preview": "{\n    \"openapi\": \"3.1.0\",\n    \"info\": {\n        \"title\": \"TON SC Indexer V2\",\n        \"description\": \"TON Smart Contract"
  },
  {
    "path": "ecosystem/api/toncenter/v2/accounts/convert-raw-address-to-user-friendly-format.mdx",
    "chars": 33,
    "preview": "---\nopenapi: get /packAddress\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/accounts/convert-user-friendly-address-to-raw-format.mdx",
    "chars": 35,
    "preview": "---\nopenapi: get /unpackAddress\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/accounts/detect-all-address-formats.mdx",
    "chars": 35,
    "preview": "---\nopenapi: get /detectAddress\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/accounts/get-account-balance-only.mdx",
    "chars": 39,
    "preview": "---\nopenapi: get /getAddressBalance\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/accounts/get-account-lifecycle-state.mdx",
    "chars": 37,
    "preview": "---\nopenapi: get /getAddressState\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/accounts/get-account-state-and-balance.mdx",
    "chars": 43,
    "preview": "---\nopenapi: get /getAddressInformation\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/accounts/get-detailed-account-state-extended.mdx",
    "chars": 51,
    "preview": "---\nopenapi: get /getExtendedAddressInformation\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/accounts/get-nft-or-jetton-metadata.mdx",
    "chars": 34,
    "preview": "---\nopenapi: get /getTokenData\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/accounts/get-wallet-information.mdx",
    "chars": 42,
    "preview": "---\nopenapi: get /getWalletInformation\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/accounts/list-account-transactions.mdx",
    "chars": 37,
    "preview": "---\nopenapi: get /getTransactions\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/blocks/get-block-header-metadata.mdx",
    "chars": 36,
    "preview": "---\nopenapi: get /getBlockHeader\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/blocks/get-latest-consensus-block.mdx",
    "chars": 39,
    "preview": "---\nopenapi: get /getConsensusBlock\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/blocks/get-latest-masterchain-info.mdx",
    "chars": 40,
    "preview": "---\nopenapi: get /getMasterchainInfo\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/blocks/get-masterchain-block-signatures.mdx",
    "chars": 51,
    "preview": "---\nopenapi: get /getMasterchainBlockSignatures\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/blocks/get-outgoing-message-queue-sizes.mdx",
    "chars": 41,
    "preview": "---\nopenapi: get /getOutMsgQueueSizes\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/blocks/get-shard-block-proof.mdx",
    "chars": 40,
    "preview": "---\nopenapi: get /getShardBlockProof\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/blocks/get-shards-at-masterchain-seqno.mdx",
    "chars": 28,
    "preview": "---\nopenapi: get /shards\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/blocks/get-smart-contract-libraries.mdx",
    "chars": 35,
    "preview": "---\nopenapi: get /getLibraries\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v2/blocks/list-block-transactions-extended-details.mdx",
    "chars": 45,
    "preview": "---\nopenapi: get /getBlockTransactionsExt\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/blocks/list-block-transactions.mdx",
    "chars": 42,
    "preview": "---\nopenapi: get /getBlockTransactions\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/blocks/look-up-block-by-height-lt-or-timestamp.mdx",
    "chars": 33,
    "preview": "---\nopenapi: get /lookupBlock\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/config/get-all-config-parameters.mdx",
    "chars": 34,
    "preview": "---\nopenapi: get /getConfigAll\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/config/get-single-config-parameter.mdx",
    "chars": 36,
    "preview": "---\nopenapi: get /getConfigParam\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/json-rpc/json-rpc-handler.mdx",
    "chars": 30,
    "preview": "---\nopenapi: post /jsonRPC\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/messages-and-transactions/estimate-transaction-fees.mdx",
    "chars": 34,
    "preview": "---\nopenapi: post /estimateFee\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/messages-and-transactions/send-external-message-and-return-hash.mdx",
    "chars": 40,
    "preview": "---\nopenapi: post /sendBocReturnHash\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/messages-and-transactions/send-external-message-boc.mdx",
    "chars": 30,
    "preview": "---\nopenapi: post /sendBoc\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/messages-and-transactions/send-unpacked-external-query.mdx",
    "chars": 32,
    "preview": "---\nopenapi: post /sendQuery\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/overview.mdx",
    "chars": 7441,
    "preview": "---\ntitle: Overview\n---\n\nThe TON Center API v2 provides developer access to the TON blockchain through \n[REST](https://e"
  },
  {
    "path": "ecosystem/api/toncenter/v2/smart-contracts/run-get-method-on-contract.mdx",
    "chars": 35,
    "preview": "---\nopenapi: post /runGetMethod\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/transactions/locate-result-transaction-by-incoming-message.mdx",
    "chars": 39,
    "preview": "---\nopenapi: get /tryLocateResultTx\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/transactions/locate-source-transaction-by-outgoing-message.mdx",
    "chars": 39,
    "preview": "---\nopenapi: get /tryLocateSourceTx\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2/transactions/locate-transaction-by-incoming-message.mdx",
    "chars": 33,
    "preview": "---\nopenapi: get /tryLocateTx\n---"
  },
  {
    "path": "ecosystem/api/toncenter/v2-authentication.mdx",
    "chars": 3034,
    "preview": "---\ntitle: \"API authentication\"\nsidebarTitle: \"Authentication\"\n---\n\nimport { Aside } from \"/snippets/aside.jsx\";\n\n## Ove"
  },
  {
    "path": "ecosystem/api/toncenter/v2-errors.mdx",
    "chars": 1549,
    "preview": "---\ntitle: \"API error codes\"\nsidebarTitle: \"Error codes\"\n---\n\nAll TON Center API v2 methods use a standard set of HTTP s"
  },
  {
    "path": "ecosystem/api/toncenter/v2-tonlib-types.mdx",
    "chars": 20126,
    "preview": "---\ntitle: \"Tonlib type identifiers\"\n---\n\nEvery object returned by API v2 includes a `@type` field that identifies the o"
  },
  {
    "path": "ecosystem/api/toncenter/v2.json",
    "chars": 117227,
    "preview": "{\n  \"openapi\": \"3.1.0\",\n  \"info\": {\n    \"title\": \"TON HTTP API\",\n    \"description\": \"\\nThis API enables HTTP access to T"
  },
  {
    "path": "ecosystem/api/toncenter/v3/accounts/address-book.mdx",
    "chars": 41,
    "preview": "---\nopenapi: get /api/v3/addressBook\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/accounts/get-account-states.mdx",
    "chars": 43,
    "preview": "---\nopenapi: get /api/v3/accountStates\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/accounts/get-wallet-states.mdx",
    "chars": 42,
    "preview": "---\nopenapi: get /api/v3/walletStates\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/accounts/metadata.mdx",
    "chars": 38,
    "preview": "---\nopenapi: get /api/v3/metadata\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/actions-and-traces/get-actions.mdx",
    "chars": 37,
    "preview": "---\nopenapi: get /api/v3/actions\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/actions-and-traces/get-pending-actions.mdx",
    "chars": 44,
    "preview": "---\nopenapi: get /api/v3/pendingActions\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/actions-and-traces/get-pending-traces.mdx",
    "chars": 43,
    "preview": "---\nopenapi: get /api/v3/pendingTraces\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/actions-and-traces/get-traces.mdx",
    "chars": 36,
    "preview": "---\nopenapi: get /api/v3/traces\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/apiv2/estimate-fee.mdx",
    "chars": 42,
    "preview": "---\nopenapi: post /api/v3/estimateFee\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/apiv2/get-address-information.mdx",
    "chars": 48,
    "preview": "---\nopenapi: get /api/v3/addressInformation\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/apiv2/get-wallet-information.mdx",
    "chars": 47,
    "preview": "---\nopenapi: get /api/v3/walletInformation\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/apiv2/run-get-method.mdx",
    "chars": 43,
    "preview": "---\nopenapi: post /api/v3/runGetMethod\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/apiv2/send-message.mdx",
    "chars": 38,
    "preview": "---\nopenapi: post /api/v3/message\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/blockchain-data/get-adjacent-transactions.mdx",
    "chars": 50,
    "preview": "---\nopenapi: get /api/v3/adjacentTransactions\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/blockchain-data/get-blocks.mdx",
    "chars": 36,
    "preview": "---\nopenapi: get /api/v3/blocks\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/blockchain-data/get-masterchain-block-shard-state-1.mdx",
    "chars": 52,
    "preview": "---\nopenapi: get /api/v3/masterchainBlockShards\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/blockchain-data/get-masterchain-block-shard-state.mdx",
    "chars": 56,
    "preview": "---\nopenapi: get /api/v3/masterchainBlockShardState\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/blockchain-data/get-masterchain-info.mdx",
    "chars": 45,
    "preview": "---\nopenapi: get /api/v3/masterchainInfo\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/blockchain-data/get-messages.mdx",
    "chars": 38,
    "preview": "---\nopenapi: get /api/v3/messages\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/blockchain-data/get-pending-transactions.mdx",
    "chars": 49,
    "preview": "---\nopenapi: get /api/v3/pendingTransactions\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/blockchain-data/get-transactions-by-masterchain-block.mdx",
    "chars": 60,
    "preview": "---\nopenapi: get /api/v3/transactionsByMasterchainBlock\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/blockchain-data/get-transactions-by-message.mdx",
    "chars": 51,
    "preview": "---\nopenapi: get /api/v3/transactionsByMessage\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/blockchain-data/get-transactions.mdx",
    "chars": 42,
    "preview": "---\nopenapi: get /api/v3/transactions\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/dns/get-dns-records.mdx",
    "chars": 41,
    "preview": "---\nopenapi: get /api/v3/dns/records\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/jettons/get-jetton-burns.mdx",
    "chars": 42,
    "preview": "---\nopenapi: get /api/v3/jetton/burns\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/jettons/get-jetton-masters.mdx",
    "chars": 44,
    "preview": "---\nopenapi: get /api/v3/jetton/masters\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/jettons/get-jetton-transfers.mdx",
    "chars": 46,
    "preview": "---\nopenapi: get /api/v3/jetton/transfers\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/jettons/get-jetton-wallets.mdx",
    "chars": 44,
    "preview": "---\nopenapi: get /api/v3/jetton/wallets\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/multisig/get-multisig-orders.mdx",
    "chars": 45,
    "preview": "---\nopenapi: get /api/v3/multisig/orders\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/multisig/get-multisig-wallets.mdx",
    "chars": 46,
    "preview": "---\nopenapi: get /api/v3/multisig/wallets\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/nfts/get-nft-collections.mdx",
    "chars": 45,
    "preview": "---\nopenapi: get /api/v3/nft/collections\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/nfts/get-nft-items.mdx",
    "chars": 39,
    "preview": "---\nopenapi: get /api/v3/nft/items\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/nfts/get-nft-transfers.mdx",
    "chars": 43,
    "preview": "---\nopenapi: get /api/v3/nft/transfers\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/overview.mdx",
    "chars": 6299,
    "preview": "---\ntitle: Overview\n---\n\nThe TON Center API v3 provides developer access to the TON blockchain through an indexed data l"
  },
  {
    "path": "ecosystem/api/toncenter/v3/stats/get-top-accounts-by-balance.mdx",
    "chars": 50,
    "preview": "---\nopenapi: get /api/v3/topAccountsByBalance\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/utils/decode-opcodes-and-bodies-1.mdx",
    "chars": 37,
    "preview": "---\nopenapi: post /api/v3/decode\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/utils/decode-opcodes-and-bodies.mdx",
    "chars": 36,
    "preview": "---\nopenapi: get /api/v3/decode\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3/vesting/get-vesting-contracts.mdx",
    "chars": 37,
    "preview": "---\nopenapi: get /api/v3/vesting\n---\n"
  },
  {
    "path": "ecosystem/api/toncenter/v3-authentication.mdx",
    "chars": 2357,
    "preview": "---\ntitle: \"API authentication\"\nsidebarTitle: \"Authentication\"\n---\n\nimport { Aside } from \"/snippets/aside.jsx\";\n\n## Ove"
  },
  {
    "path": "ecosystem/api/toncenter/v3-errors.mdx",
    "chars": 1108,
    "preview": "---\ntitle: \"API error codes\"\nsidebarTitle: \"Error codes\"\n---\n\nAll TON Center API v3 methods use a standard set of HTTP s"
  },
  {
    "path": "ecosystem/api/toncenter/v3-pagination.mdx",
    "chars": 10486,
    "preview": "---\ntitle: \"Pagination\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\n\nThe v3 API uses offset-based pagination. Each"
  },
  {
    "path": "ecosystem/api/toncenter/v3.yaml",
    "chars": 94129,
    "preview": "openapi: 3.0.0\ninfo:\n  title: TON Index (Go)\n  description: TON Index collects data from a full node to PostgreSQL datab"
  },
  {
    "path": "ecosystem/appkit/init.mdx",
    "chars": 18444,
    "preview": "---\ntitle: \"How to initialize the TON Connect's AppKit\"\nsidebarTitle: \"Initialize the kit\"\n---\n\nimport { Aside } from '/"
  },
  {
    "path": "ecosystem/appkit/jettons.mdx",
    "chars": 16188,
    "preview": "---\ntitle: \"How to work with Jettons using AppKit\"\nsidebarTitle: \"Work with Jettons\"\n---\n\nimport { Aside } from '/snippe"
  },
  {
    "path": "ecosystem/appkit/overview.mdx",
    "chars": 1919,
    "preview": "---\ntitle: \"AppKit: SDK for decentralized applications (dApps)\"\nsidebarTitle: \"Overview\"\n---\n\nimport { Aside } from '/sn"
  },
  {
    "path": "ecosystem/appkit/toncoin.mdx",
    "chars": 9842,
    "preview": "---\ntitle: \"How to work with Toncoin using AppKit\"\nsidebarTitle: \"Work with Toncoin\"\n---\n\nimport { Aside } from '/snippe"
  },
  {
    "path": "ecosystem/bridges.mdx",
    "chars": 3440,
    "preview": "---\ntitle: \"Bridges\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\n\nIn the TON ecosystem, bridges allow users to tra"
  },
  {
    "path": "ecosystem/explorers/overview.mdx",
    "chars": 2250,
    "preview": "---\ntitle: \"Overview\"\n---\n\nExplorers are web tools for reading blockchain data. They let you look up accounts, transacti"
  },
  {
    "path": "ecosystem/explorers/tonviewer.mdx",
    "chars": 9696,
    "preview": "---\ntitle: \"Using Tonviewer\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\nimport { Image } from '/snippets/image.js"
  },
  {
    "path": "ecosystem/nodes/cpp/integrating-with-prometheus.mdx",
    "chars": 2418,
    "preview": "---\ntitle: \"Integrate MyTonCtrl with Prometheus\"\n---\n\nimport { Aside } from \"/snippets/aside.jsx\";\n\n[MyTonCtrl](/ecosyst"
  },
  {
    "path": "ecosystem/nodes/cpp/mytonctrl/alerting.mdx",
    "chars": 6666,
    "preview": "---\ntitle: \"Telegram alerting\"\n---\n\n**MyTonCtrl Private Alerting Bot** is a tool for receiving notifications about node "
  },
  {
    "path": "ecosystem/nodes/cpp/mytonctrl/backups.mdx",
    "chars": 4477,
    "preview": "---\ntitle: \"Backup\"\ndescription: \"MyTonCtrl bundles helper scripts for creating and restoring node backups.\"\n---\n\nimport"
  },
  {
    "path": "ecosystem/nodes/cpp/mytonctrl/btc-teleport.mdx",
    "chars": 1707,
    "preview": "---\ntitle: \"BTC Teleport\"\ndescription: \"The BTC Teleport module manages the optional Bitcoin bridge (Teleport) client sh"
  },
  {
    "path": "ecosystem/nodes/cpp/mytonctrl/collator.mdx",
    "chars": 6543,
    "preview": "---\ntitle: \"Collator\"\ndescription: \"Collator mode lets a node produce blocks for selected shardchains without running th"
  },
  {
    "path": "ecosystem/nodes/cpp/mytonctrl/core.mdx",
    "chars": 14872,
    "preview": "---\ntitle: \"Core\"\ndescription: \"Inspecting node health, managing modes and settings, maintaining the software stack, and"
  },
  {
    "path": "ecosystem/nodes/cpp/mytonctrl/custom-overlays.mdx",
    "chars": 3397,
    "preview": "---\ntitle: \"Custom overlays\"\ndescription: \"Sets up a custom overlay to speed up synchronization for a group of nodes.\"\n-"
  },
  {
    "path": "ecosystem/nodes/cpp/mytonctrl/installer.mdx",
    "chars": 8910,
    "preview": "---\ntitle: \"Installer\"\ndescription: \"MyTonInstaller complements MyTonCtrl by bootstrapping and maintaining TON node comp"
  },
  {
    "path": "ecosystem/nodes/cpp/mytonctrl/liquid-staking.mdx",
    "chars": 7466,
    "preview": "---\ntitle: \"Liquid staking\"\ndescription: \"Liquid staking mode orchestrates controller deployment and maintenance for jet"
  },
  {
    "path": "ecosystem/nodes/cpp/mytonctrl/overview.mdx",
    "chars": 10619,
    "preview": "---\ntitle: \"Overview\"\n---\n\nMyTonCtrl is a tool to run and maintain a TON node (validators and liteservers).\n\n## Install "
  },
  {
    "path": "ecosystem/nodes/cpp/mytonctrl/pools.mdx",
    "chars": 7358,
    "preview": "---\ntitle: \"Nominator pools\"\ndescription: \"Pool-focused commands help you manage validator-run nominator pools and Orbs "
  },
  {
    "path": "ecosystem/nodes/cpp/mytonctrl/utilities.mdx",
    "chars": 4923,
    "preview": "---\ntitle: \"Utilities\"\ndescription: \"Utility commands provide quick inspection and helper tools for accounts, bookmarks,"
  },
  {
    "path": "ecosystem/nodes/cpp/mytonctrl/validator.mdx",
    "chars": 5434,
    "preview": "---\ntitle: \"Validator\"\ndescription: \"Validator mode automates governance voting, election participation, efficiency trac"
  },
  {
    "path": "ecosystem/nodes/cpp/mytonctrl/wallet.mdx",
    "chars": 5038,
    "preview": "---\ntitle: \"Wallet\"\ndescription: \"Wallet mode provides convenience utilities for generating, activating, importing, expo"
  },
  {
    "path": "ecosystem/nodes/cpp/run-validator.mdx",
    "chars": 32662,
    "preview": "---\ntitle: \"Run a validator\"\ndescription: \"Run a validator node with MyTonCtrl\"\n---\n\nimport { Aside } from '/snippets/as"
  },
  {
    "path": "ecosystem/nodes/cpp/setup-mylocalton.mdx",
    "chars": 5073,
    "preview": "---\ntitle: \"Setting up a local blockchain using MyLocalTon\"\ndescription: \"Install MyLocalTon to spin up a self-contained"
  },
  {
    "path": "ecosystem/nodes/cpp/setup-mytonctrl.mdx",
    "chars": 13032,
    "preview": "---\ntitle: \"Run a node with MyTonCtrl\"\ndescription: \"Provision hardware, install MyTonCtrl, and follow runbooks for vali"
  },
  {
    "path": "ecosystem/nodes/overview.mdx",
    "chars": 11445,
    "preview": "---\ntitle: \"Blockchain nodes overview\"\nsidebarTitle: \"Overview\"\ndescription: \"Pick the right TON node setup and understa"
  },
  {
    "path": "ecosystem/nodes/rust/architecture.mdx",
    "chars": 7135,
    "preview": "---\ntitle: \"Architecture reference\"\nsidebarTitle: \"Architecture\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\n\nTON "
  },
  {
    "path": "ecosystem/nodes/rust/global-config.mdx",
    "chars": 6622,
    "preview": "---\ntitle: \"How to configure global JSON file\"\nsidebarTitle: \"Global configuration\"\n---\n\nimport { Aside } from '/snippet"
  },
  {
    "path": "ecosystem/nodes/rust/logs-config.mdx",
    "chars": 13292,
    "preview": "---\ntitle: \"How to configure logging YAML file\"\nsidebarTitle: \"Logging configuration\"\n---\n\nimport { Aside } from '/snipp"
  },
  {
    "path": "ecosystem/nodes/rust/metrics.mdx",
    "chars": 16786,
    "preview": "---\ntitle: \"Metrics\"\n---\n\nAll metrics are served at `GET /metrics` on the metrics port and are exposed in the standard P"
  },
  {
    "path": "ecosystem/nodes/rust/monitoring.mdx",
    "chars": 5322,
    "preview": "---\ntitle: \"Monitoring\"\nsidebarTitle: \"Monitoring\"\n---\n\nimport { Aside } from \"/snippets/aside.jsx\";\n\n## Objective\n\nSet "
  },
  {
    "path": "ecosystem/nodes/rust/node-config-ref.mdx",
    "chars": 22731,
    "preview": "---\ntitle: \"Node configuration reference\"\nsidebarTitle: \"Node configuration reference\"\n---\n\nimport { Aside } from '/snip"
  },
  {
    "path": "ecosystem/nodes/rust/node-config.mdx",
    "chars": 22483,
    "preview": "---\ntitle: \"How to configure node JSON file\"\nsidebarTitle: \"Node configuration\"\n---\n\nimport { Aside } from '/snippets/as"
  },
  {
    "path": "ecosystem/nodes/rust/probes.mdx",
    "chars": 3411,
    "preview": "---\ntitle: \"Health probes\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\n\nKubernetes liveness, readiness, and startu"
  },
  {
    "path": "ecosystem/nodes/rust/quick-start.mdx",
    "chars": 10807,
    "preview": "---\ntitle: \"Rust node quick start\"\nsidebarTitle: \"Quick start\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\n\nDeploy"
  },
  {
    "path": "ecosystem/oracles/overview.mdx",
    "chars": 6801,
    "preview": "---\ntitle: \"Oracles overview\"\nsidebarTitle: \"Overview\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\nimport { Image "
  },
  {
    "path": "ecosystem/oracles/pyth.mdx",
    "chars": 4210,
    "preview": "---\ntitle: \"Pyth oracle\"\n---\n\n## How to use real-time data in TON contracts\n\nPyth is a pull oracle. Pyth price feeds on "
  },
  {
    "path": "ecosystem/oracles/redstone.mdx",
    "chars": 4670,
    "preview": "---\ntitle: \"RedStone oracle\"\n---\n\n## How to use real-time data in TON contracts\n\nRedStone is a pull oracle that uses an "
  },
  {
    "path": "ecosystem/sdks.mdx",
    "chars": 13010,
    "preview": "---\ntitle: \"SDKs\"\nmode: \"wide\"\n---\n\nThere are several ways to interact with TON blockchain:\n\n- **HTTP** libraries connec"
  },
  {
    "path": "ecosystem/staking/liquid-staking.mdx",
    "chars": 3285,
    "preview": "---\ntitle: \"Liquid staking contracts\"\nsidebarTitle: \"Liquid staking\"\n---\n\nThe [liquid staking contract](https://github.c"
  },
  {
    "path": "ecosystem/staking/nominator-pools.mdx",
    "chars": 4797,
    "preview": "---\ntitle: \"Nominator pool contracts\"\nsidebarTitle: \"Nominator pools\"\n---\n\nimport { Aside } from \"/snippets/aside.jsx\";\n"
  },
  {
    "path": "ecosystem/staking/overview.mdx",
    "chars": 7949,
    "preview": "---\ntitle: \"Staking overview\"\nsidebarTitle: \"Overview\"\n---\n\nimport { Aside } from '/snippets/aside.jsx';\nimport { Image "
  },
  {
    "path": "ecosystem/staking/single-nominator.mdx",
    "chars": 2574,
    "preview": "---\ntitle: \"Single nominator pool contracts\"\nsidebarTitle: \"Single nominator pools\"\n---\n\nimport { Aside } from \"/snippet"
  },
  {
    "path": "ecosystem/status.mdx",
    "chars": 1018,
    "preview": "---\ntitle: \"Network status\"\n---\n\nThis page lists websites that show if specific parts of TON blockchain are working norm"
  },
  {
    "path": "ecosystem/tma/analytics/analytics.mdx",
    "chars": 4607,
    "preview": "---\ntitle: \"Telegram analytics\"\n---\n\n## Overview\n\nTelegram Analytics is a powerful SDK and API that enables your mini-ap"
  },
  {
    "path": "ecosystem/tma/analytics/api-endpoints.mdx",
    "chars": 7544,
    "preview": "---\ntitle: \"API Endpoints\"\n---\n\nHere, you can view information about existing endpoints and how to make\nrequests for the"
  },
  {
    "path": "ecosystem/tma/analytics/faq.mdx",
    "chars": 1088,
    "preview": "---\ntitle: \"FAQ\"\n---\n\n## How can I check the integration status of the SDK?\n\n### Using a bot\n\nAfter [completing the inte"
  },
  {
    "path": "ecosystem/tma/analytics/install-via-npm.mdx",
    "chars": 1157,
    "preview": "---\ntitle: \"Installation via NPM package\"\n---\n\n## How to install it?\n\n**1. Install the NPM package in  your project**\n\n`"
  },
  {
    "path": "ecosystem/tma/analytics/install-via-script.mdx",
    "chars": 1555,
    "preview": "---\ntitle: \"Installation via script tag\"\n---\n\n## How to install it?\n\n### 1.Add Telegram Mini Apps Analytics to your proj"
  },
  {
    "path": "ecosystem/tma/analytics/managing-integration.mdx",
    "chars": 630,
    "preview": "---\ntitle: \"Managing integration\"\n---\n\nimport { Image } from '/snippets/image.jsx';\n\n[TON Builders](https://builders.ton"
  }
]

// ... and 264 more files (download for full content)
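The index above is a JSON array of records, each with a `path` (repository-relative file path), `chars` (file size in characters), and `preview` (a truncated excerpt of the file's contents). A minimal sketch of how a consumer might load and query this index — function names here are illustrative, not part of GitExtract itself:

```python
import json

def load_file_index(text: str) -> list[dict]:
    """Parse a file index: a JSON array of
    {"path": str, "chars": int, "preview": str} objects."""
    return json.loads(text)

def summarize(entries: list[dict], suffix: str = ".mdx") -> tuple[list[str], int]:
    """Return the paths matching `suffix` and their combined character count."""
    matched = [e["path"] for e in entries if e["path"].endswith(suffix)]
    total = sum(e["chars"] for e in entries if e["path"].endswith(suffix))
    return matched, total

# Small sample in the same shape as the index above.
sample = '''[
  {"path": "contract-dev/gas.mdx", "chars": 15878, "preview": "---\\ntitle: ..."},
  {"path": "docs.json", "chars": 117238, "preview": "{..."}
]'''

entries = load_file_index(sample)
paths, total = summarize(entries)
print(paths, total)  # ['contract-dev/gas.mdx'] 15878
```

Because `chars` counts characters rather than tokens, summing it per directory gives a quick way to budget which parts of the repository fit into a model's context window.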

About this extraction

This page contains the full source code of the ton-org/docs GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 464 files (5.8 MB), approximately 1.5M tokens, and a symbol index with 180 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
