Repository: ton-org/docs
Branch: main
Commit: 45160fd71f10
Files: 464
Total size: 5.8 MB

Directory structure:
gitextract_f543x511/
├── .cspell.jsonc
├── .editorconfig
├── .gitattributes
├── .github/
│   ├── dependabot.yml
│   ├── scripts/
│   │   ├── build_review_instructions.py
│   │   ├── build_review_payload.py
│   │   ├── common.mjs
│   │   ├── generate-v2-api-table.py
│   │   ├── generate-v3-api-table.py
│   │   ├── rewrite_review_links.py
│   │   └── tvm-instruction-gen.py
│   └── workflows/
│       ├── bouncer.yml
│       ├── commander.yml
│       ├── generate-api-tables.yml
│       ├── instructions.yml
│       ├── linter.yml
│       └── pitaya.yml
├── .gitignore
├── .husky/
│   └── pre-push
├── .prettierignore
├── .remarkignore
├── .remarkrc.mjs
├── CODEOWNERS
├── LICENSE-code
├── LICENSE-docs
├── README.md
├── contract-dev/
│   ├── blueprint/
│   │   ├── api.mdx
│   │   ├── benchmarks.mdx
│   │   ├── cli.mdx
│   │   ├── config.mdx
│   │   ├── coverage.mdx
│   │   ├── deploy.mdx
│   │   ├── develop.mdx
│   │   └── overview.mdx
│   ├── contract-sharding.mdx
│   ├── debug.mdx
│   ├── first-smart-contract.mdx
│   ├── gas.mdx
│   ├── ide/
│   │   ├── jetbrains.mdx
│   │   ├── overview.mdx
│   │   └── vscode.mdx
│   ├── on-chain-jetton-processing.mdx
│   ├── random.mdx
│   ├── security.mdx
│   ├── signing.mdx
│   ├── testing/
│   │   ├── overview.mdx
│   │   └── reference.mdx
│   ├── upgrades.mdx
│   ├── using-on-chain-libraries.mdx
│   ├── vanity.mdx
│   └── zero-knowledge.mdx
├── contribute/
│   ├── snippets/
│   │   ├── aside.mdx
│   │   ├── filetree.mdx
│   │   ├── image.mdx
│   │   └── overview.mdx
│   ├── style-guide-extended.mdx
│   └── style-guide.mdx
├── docs.json
├── ecosystem/
│   ├── ai/
│   │   └── mcp.mdx
│   ├── analytics.mdx
│   ├── api/
│   │   ├── overview.mdx
│   │   ├── price.mdx
│   │   └── toncenter/
│   │       ├── get-api-key.mdx
│   │       ├── introduction.mdx
│   │       ├── rate-limit.mdx
│   │       ├── smc-index/
│   │       │   ├── get-nominator-bookings-method.mdx
│   │       │   ├── get-nominator-earnings-method.mdx
│   │       │   ├── get-nominator-method.mdx
│   │       │   ├── get-pool-bookings-method.mdx
│   │       │   ├── get-pool-method.mdx
│   │       │   └── lifecheck-method.mdx
│   │       ├── smc-index.json
│   │       ├── v2/
│   │       │   ├── accounts/
│   │       │   │   ├── convert-raw-address-to-user-friendly-format.mdx
│   │       │   │   ├── convert-user-friendly-address-to-raw-format.mdx
│   │       │   │   ├── detect-all-address-formats.mdx
│   │       │   │   ├── get-account-balance-only.mdx
│   │       │   │   ├── get-account-lifecycle-state.mdx
│   │       │   │   ├── get-account-state-and-balance.mdx
│   │       │   │   ├── get-detailed-account-state-extended.mdx
│   │       │   │   ├── get-nft-or-jetton-metadata.mdx
│   │       │   │   ├── get-wallet-information.mdx
│   │       │   │   └── list-account-transactions.mdx
│   │       │   ├── blocks/
│   │       │   │   ├── get-block-header-metadata.mdx
│   │       │   │   ├── get-latest-consensus-block.mdx
│   │       │   │   ├── get-latest-masterchain-info.mdx
│   │       │   │   ├── get-masterchain-block-signatures.mdx
│   │       │   │   ├── get-outgoing-message-queue-sizes.mdx
│   │       │   │   ├── get-shard-block-proof.mdx
│   │       │   │   ├── get-shards-at-masterchain-seqno.mdx
│   │       │   │   ├── get-smart-contract-libraries.mdx
│   │       │   │   ├── list-block-transactions-extended-details.mdx
│   │       │   │   ├── list-block-transactions.mdx
│   │       │   │   └── look-up-block-by-height-lt-or-timestamp.mdx
│   │       │   ├── config/
│   │       │   │   ├── get-all-config-parameters.mdx
│   │       │   │   └── get-single-config-parameter.mdx
│   │       │   ├── json-rpc/
│   │       │   │   └── json-rpc-handler.mdx
│   │       │   ├── messages-and-transactions/
│   │       │   │   ├── estimate-transaction-fees.mdx
│   │       │   │   ├── send-external-message-and-return-hash.mdx
│   │       │   │   ├── send-external-message-boc.mdx
│   │       │   │   └── send-unpacked-external-query.mdx
│   │       │   ├── overview.mdx
│   │       │   ├── smart-contracts/
│   │       │   │   └── run-get-method-on-contract.mdx
│   │       │   └── transactions/
│   │       │       ├── locate-result-transaction-by-incoming-message.mdx
│   │       │       ├── locate-source-transaction-by-outgoing-message.mdx
│   │       │       └── locate-transaction-by-incoming-message.mdx
│   │       ├── v2-authentication.mdx
│   │       ├── v2-errors.mdx
│   │       ├── v2-tonlib-types.mdx
│   │       ├── v2.json
│   │       ├── v3/
│   │       │   ├── accounts/
│   │       │   │   ├── address-book.mdx
│   │       │   │   ├── get-account-states.mdx
│   │       │   │   ├── get-wallet-states.mdx
│   │       │   │   └── metadata.mdx
│   │       │   ├── actions-and-traces/
│   │       │   │   ├── get-actions.mdx
│   │       │   │   ├── get-pending-actions.mdx
│   │       │   │   ├── get-pending-traces.mdx
│   │       │   │   └── get-traces.mdx
│   │       │   ├── apiv2/
│   │       │   │   ├── estimate-fee.mdx
│   │       │   │   ├── get-address-information.mdx
│   │       │   │   ├── get-wallet-information.mdx
│   │       │   │   ├── run-get-method.mdx
│   │       │   │   └── send-message.mdx
│   │       │   ├── blockchain-data/
│   │       │   │   ├── get-adjacent-transactions.mdx
│   │       │   │   ├── get-blocks.mdx
│   │       │   │   ├── get-masterchain-block-shard-state-1.mdx
│   │       │   │   ├── get-masterchain-block-shard-state.mdx
│   │       │   │   ├── get-masterchain-info.mdx
│   │       │   │   ├── get-messages.mdx
│   │       │   │   ├── get-pending-transactions.mdx
│   │       │   │   ├── get-transactions-by-masterchain-block.mdx
│   │       │   │   ├── get-transactions-by-message.mdx
│   │       │   │   └── get-transactions.mdx
│   │       │   ├── dns/
│   │       │   │   └── get-dns-records.mdx
│   │       │   ├── jettons/
│   │       │   │   ├── get-jetton-burns.mdx
│   │       │   │   ├── get-jetton-masters.mdx
│   │       │   │   ├── get-jetton-transfers.mdx
│   │       │   │   └── get-jetton-wallets.mdx
│   │       │   ├── multisig/
│   │       │   │   ├── get-multisig-orders.mdx
│   │       │   │   └── get-multisig-wallets.mdx
│   │       │   ├── nfts/
│   │       │   │   ├── get-nft-collections.mdx
│   │       │   │   ├── get-nft-items.mdx
│   │       │   │   └── get-nft-transfers.mdx
│   │       │   ├── overview.mdx
│   │       │   ├── stats/
│   │       │   │   └── get-top-accounts-by-balance.mdx
│   │       │   ├── utils/
│   │       │   │   ├── decode-opcodes-and-bodies-1.mdx
│   │       │   │   └── decode-opcodes-and-bodies.mdx
│   │       │   └── vesting/
│   │       │       └── get-vesting-contracts.mdx
│   │       ├── v3-authentication.mdx
│   │       ├── v3-errors.mdx
│   │       ├── v3-pagination.mdx
│   │       └── v3.yaml
│   ├── appkit/
│   │   ├── init.mdx
│   │   ├── jettons.mdx
│   │   ├── overview.mdx
│   │   └── toncoin.mdx
│   ├── bridges.mdx
│   ├── explorers/
│   │   ├── overview.mdx
│   │   └── tonviewer.mdx
│   ├── nodes/
│   │   ├── cpp/
│   │   │   ├── integrating-with-prometheus.mdx
│   │   │   ├── mytonctrl/
│   │   │   │   ├── alerting.mdx
│   │   │   │   ├── backups.mdx
│   │   │   │   ├── btc-teleport.mdx
│   │   │   │   ├── collator.mdx
│   │   │   │   ├── core.mdx
│   │   │   │   ├── custom-overlays.mdx
│   │   │   │   ├── installer.mdx
│   │   │   │   ├── liquid-staking.mdx
│   │   │   │   ├── overview.mdx
│   │   │   │   ├── pools.mdx
│   │   │   │   ├── utilities.mdx
│   │   │   │   ├── validator.mdx
│   │   │   │   └── wallet.mdx
│   │   │   ├── run-validator.mdx
│   │   │   ├── setup-mylocalton.mdx
│   │   │   └── setup-mytonctrl.mdx
│   │   ├── overview.mdx
│   │   └── rust/
│   │       ├── architecture.mdx
│   │       ├── global-config.mdx
│   │       ├── logs-config.mdx
│   │       ├── metrics.mdx
│   │       ├── monitoring.mdx
│   │       ├── node-config-ref.mdx
│   │       ├── node-config.mdx
│   │       ├── probes.mdx
│   │       └── quick-start.mdx
│   ├── oracles/
│   │   ├── overview.mdx
│   │   ├── pyth.mdx
│   │   └── redstone.mdx
│   ├── sdks.mdx
│   ├── staking/
│   │   ├── liquid-staking.mdx
│   │   ├── nominator-pools.mdx
│   │   ├── overview.mdx
│   │   └── single-nominator.mdx
│   ├── status.mdx
│   ├── tma/
│   │   ├── analytics/
│   │   │   ├── analytics.mdx
│   │   │   ├── api-endpoints.mdx
│   │   │   ├── faq.mdx
│   │   │   ├── install-via-npm.mdx
│   │   │   ├── install-via-script.mdx
│   │   │   ├── managing-integration.mdx
│   │   │   ├── preparation.mdx
│   │   │   └── supported-events.mdx
│   │   ├── create-mini-app.mdx
│   │   ├── overview.mdx
│   │   └── telegram-ui/
│   │       ├── getting-started.mdx
│   │       ├── overview.mdx
│   │       ├── platform-and-palette.mdx
│   │       └── reference/
│   │           └── avatar.mdx
│   ├── ton-connect/
│   │   ├── dapp.mdx
│   │   ├── manifest.mdx
│   │   ├── message-lookup.mdx
│   │   ├── overview.mdx
│   │   └── wallet.mdx
│   ├── ton-pay/
│   │   ├── api-reference.mdx
│   │   ├── on-ramp.mdx
│   │   ├── overview.mdx
│   │   ├── payment-integration/
│   │   │   ├── payments-react.mdx
│   │   │   ├── payments-tonconnect.mdx
│   │   │   ├── status-info.mdx
│   │   │   └── transfer.mdx
│   │   ├── quick-start.mdx
│   │   ├── ui-integration/
│   │   │   ├── button-js.mdx
│   │   │   └── button-react.mdx
│   │   └── webhooks.mdx
│   ├── wallet-apps/
│   │   ├── addresses-workflow.mdx
│   │   ├── deep-links.mdx
│   │   ├── get-coins.mdx
│   │   ├── tonkeeper.mdx
│   │   └── web.mdx
│   └── walletkit/
│       ├── android/
│       │   ├── data.mdx
│       │   ├── events.mdx
│       │   ├── init.mdx
│       │   ├── installation.mdx
│       │   ├── transactions.mdx
│       │   ├── wallets.mdx
│       │   └── webview.mdx
│       ├── browser-extension.mdx
│       ├── ios/
│       │   ├── data.mdx
│       │   ├── events.mdx
│       │   ├── init.mdx
│       │   ├── installation.mdx
│       │   ├── transactions.mdx
│       │   ├── wallets.mdx
│       │   └── webview.mdx
│       ├── native-web.mdx
│       ├── overview.mdx
│       ├── qa-guide.mdx
│       └── web/
│           ├── connections.mdx
│           ├── events.mdx
│           ├── init.mdx
│           ├── jettons.mdx
│           ├── nfts.mdx
│           ├── toncoin.mdx
│           └── wallets.mdx
├── extra.css
├── extra.js
├── foundations/
│   ├── actions/
│   │   ├── change-library.mdx
│   │   ├── overview.mdx
│   │   ├── reserve.mdx
│   │   ├── send.mdx
│   │   └── set-code.mdx
│   ├── addresses/
│   │   ├── derive.mdx
│   │   ├── formats.mdx
│   │   ├── overview.mdx
│   │   └── serialize.mdx
│   ├── config.mdx
│   ├── consensus/
│   │   └── catchain-visualizer.mdx
│   ├── fees.mdx
│   ├── glossary.mdx
│   ├── limits.mdx
│   ├── messages/
│   │   ├── deploy.mdx
│   │   ├── external-in.mdx
│   │   ├── external-out.mdx
│   │   ├── internal.mdx
│   │   ├── modes.mdx
│   │   ├── ordinary-tx.mdx
│   │   └── overview.mdx
│   ├── phases.mdx
│   ├── precompiled.mdx
│   ├── proofs/
│   │   ├── overview.mdx
│   │   └── verifying-liteserver-proofs.mdx
│   ├── serialization/
│   │   ├── boc.mdx
│   │   ├── cells.mdx
│   │   ├── library.mdx
│   │   ├── merkle-update.mdx
│   │   ├── merkle.mdx
│   │   └── pruned.mdx
│   ├── services.mdx
│   ├── shards.mdx
│   ├── status.mdx
│   ├── system.mdx
│   ├── traces.mdx
│   └── whitepapers/
│       ├── catchain.mdx
│       ├── overview.mdx
│       ├── tblkch.mdx
│       ├── ton.mdx
│       └── tvm.mdx
├── from-ethereum.mdx
├── get-support.mdx
├── index.mdx
├── languages/
│   ├── fift/
│   │   ├── deep-dive.mdx
│   │   ├── fift-and-tvm-assembly.mdx
│   │   ├── multisig.mdx
│   │   ├── overview.mdx
│   │   └── whitepaper.mdx
│   ├── func/
│   │   ├── asm-functions.mdx
│   │   ├── built-ins.mdx
│   │   ├── changelog.mdx
│   │   ├── comments.mdx
│   │   ├── compiler-directives.mdx
│   │   ├── cookbook.mdx
│   │   ├── declarations-overview.mdx
│   │   ├── dictionaries.mdx
│   │   ├── expressions.mdx
│   │   ├── functions.mdx
│   │   ├── global-variables.mdx
│   │   ├── known-issues.mdx
│   │   ├── libraries.mdx
│   │   ├── literals.mdx
│   │   ├── operators.mdx
│   │   ├── overview.mdx
│   │   ├── special-functions.mdx
│   │   ├── statements.mdx
│   │   ├── stdlib.mdx
│   │   └── types.mdx
│   ├── tact.mdx
│   ├── tl-b/
│   │   ├── complex-and-non-trivial-examples.mdx
│   │   ├── overview.mdx
│   │   ├── simple-examples.mdx
│   │   ├── syntax-and-semantics.mdx
│   │   ├── tep-examples.mdx
│   │   └── tooling.mdx
│   └── tolk/
│       ├── basic-syntax.mdx
│       ├── changelog.mdx
│       ├── examples.mdx
│       ├── features/
│       │   ├── asm-functions.mdx
│       │   ├── auto-serialization.mdx
│       │   ├── compiler-optimizations.mdx
│       │   ├── contract-getters.mdx
│       │   ├── contract-storage.mdx
│       │   ├── jetton-payload.mdx
│       │   ├── lazy-loading.mdx
│       │   ├── message-handling.mdx
│       │   ├── message-sending.mdx
│       │   └── standard-library.mdx
│       ├── from-func/
│       │   ├── converter.mdx
│       │   ├── stdlib-comparison.mdx
│       │   └── tolk-vs-func.mdx
│       ├── idioms-conventions.mdx
│       ├── overview.mdx
│       ├── syntax/
│       │   ├── conditions-loops.mdx
│       │   ├── exceptions.mdx
│       │   ├── functions-methods.mdx
│       │   ├── imports.mdx
│       │   ├── mutability.mdx
│       │   ├── operators.mdx
│       │   ├── pattern-matching.mdx
│       │   ├── structures-fields.mdx
│       │   └── variables.mdx
│       └── types/
│           ├── address.mdx
│           ├── aliases.mdx
│           ├── booleans.mdx
│           ├── callables.mdx
│           ├── cells.mdx
│           ├── enums.mdx
│           ├── generics.mdx
│           ├── list-of-types.mdx
│           ├── maps.mdx
│           ├── nullable.mdx
│           ├── numbers.mdx
│           ├── overall-serialization.mdx
│           ├── overall-tvm-stack.mdx
│           ├── strings.mdx
│           ├── structures.mdx
│           ├── tensors.mdx
│           ├── tuples.mdx
│           ├── type-checks-and-casts.mdx
│           ├── unions.mdx
│           └── void-never.mdx
├── more-tutorials.mdx
├── old.mdx
├── package.json
├── payments/
│   ├── jettons.mdx
│   ├── overview.mdx
│   └── toncoin.mdx
├── resources/
│   ├── dictionaries/
│   │   ├── ban.txt
│   │   ├── custom.txt
│   │   ├── tvm-instructions.txt
│   │   └── two-letter-words-ban.txt
│   └── tvm/
│       └── cp0.txt
├── scripts/
│   ├── check-navigation.mjs
│   ├── check-redirects.mjs
│   ├── common.mjs
│   ├── docusaurus-sidebars-types.d.ts
│   └── stats.py
├── snippets/
│   ├── aside.jsx
│   ├── catchain-visualizer.jsx
│   ├── feePlayground.jsx
│   ├── fence-table.jsx
│   ├── filetree.jsx
│   ├── image.jsx
│   ├── stub.jsx
│   └── tvm-instruction-table.jsx
├── standard/
│   ├── tokens/
│   │   ├── airdrop.mdx
│   │   ├── jettons/
│   │   │   ├── api.mdx
│   │   │   ├── burn.mdx
│   │   │   ├── comparison.mdx
│   │   │   ├── find.mdx
│   │   │   ├── how-it-works.mdx
│   │   │   ├── mint.mdx
│   │   │   ├── mintless/
│   │   │   │   ├── deploy.mdx
│   │   │   │   └── overview.mdx
│   │   │   ├── overview.mdx
│   │   │   ├── supply-data.mdx
│   │   │   ├── transfer.mdx
│   │   │   └── wallet-data.mdx
│   │   ├── metadata.mdx
│   │   ├── nft/
│   │   │   ├── api.mdx
│   │   │   ├── comparison.mdx
│   │   │   ├── deploy.mdx
│   │   │   ├── how-it-works.mdx
│   │   │   ├── metadata.mdx
│   │   │   ├── nft-2.0.mdx
│   │   │   ├── overview.mdx
│   │   │   ├── reference.mdx
│   │   │   ├── sbt.mdx
│   │   │   ├── transfer.mdx
│   │   │   └── verify.mdx
│   │   └── overview.mdx
│   ├── vesting.mdx
│   └── wallets/
│       ├── comparison.mdx
│       ├── highload/
│       │   ├── overview.mdx
│       │   ├── v2/
│       │   │   └── specification.mdx
│       │   └── v3/
│       │       ├── create.mdx
│       │       ├── send-batch-transfers.mdx
│       │       ├── send-single-transfer.mdx
│       │       ├── specification.mdx
│       │       └── verify-is-processed.mdx
│       ├── history.mdx
│       ├── how-it-works.mdx
│       ├── interact.mdx
│       ├── lockup.mdx
│       ├── mnemonics.mdx
│       ├── performance.mdx
│       ├── preprocessed-v2/
│       │   ├── interact.mdx
│       │   └── specification.mdx
│       ├── restricted.mdx
│       ├── v4.mdx
│       ├── v5-api.mdx
│       └── v5.mdx
├── start-here.mdx
└── tvm/
    ├── builders-and-slices.mdx
    ├── continuations.mdx
    ├── exit-codes.mdx
    ├── gas.mdx
    ├── get-method.mdx
    ├── initialization.mdx
    ├── instructions.mdx
    ├── overview.mdx
    ├── registers.mdx
    └── tools/
        ├── retracer.mdx
        ├── ton-decompiler.mdx
        ├── tvm-explorer.mdx
        └── txtracer.mdx

================================================
FILE CONTENTS
================================================

================================================
FILE: .cspell.jsonc
================================================
// The .jsonc extension allows free use of comments and trailing commas.
// The file is named with a dot in front to discourage frequent editing —
// target dictionaries are located in the resources/dictionaries/ directory.
{
  "$schema": "https://raw.githubusercontent.com/streetsidesoftware/cspell/main/cspell.schema.json",
  "version": "0.2",
  "language": "en-US",
  "dictionaryDefinitions": [
    {
      // Allowed words
      "name": "main-list",
      "path": "resources/dictionaries/custom.txt",
      "addWords": true,
    },
    {
      // Banned words with no clear or correct replacements
      // For a few words with those, see the flagWords property later in this file
      "name": "deny-list",
      "path": "resources/dictionaries/ban.txt"
    },
    {
      "name": "2lw-deny-list",
      "path": "resources/dictionaries/two-letter-words-ban.txt"
    },
    {
      "name": "tvm-instructions",
      "path": "resources/dictionaries/tvm-instructions.txt"
    }
  ],
  "dictionaries": [
    "main-list",
    "deny-list",
    "2lw-deny-list",
    "tvm-instructions",
  ],
  "useGitignore": true,
  "files": [
    "**/*.{md,mdx}",
    "**/*.{js,jsx,mjs}",
  ],
  "minWordLength": 3,
  "overrides": [
    // Enable case sensitivity for Markdown and MDX files only
    {
      "filename": "**/*.{md,mdx}",
      "caseSensitive": true,
      // Known incorrect spellings and correct suggestions
      "flagWords": [
        "AccountChain->accountchain",
        "BaseChain->basechain",
        "boc->BoC",
        "BOC->BoC",
        "Github->GitHub",
        "id->ID",
        "Id->ID",
        "MasterChain->masterchain",
        "ShardChain->shardchain",
        "StateInit->`StateInit`",
        "TLB->TL-B",
        "Toncenter->TON Center",
        "toncoins->Toncoin",
        "Toncoins->Toncoin",
        "WorkChain->workchain",
        "zkProofs->ZK-proofs",
        "zkProof->ZK-proof",
      ],
    },
    // Do not check for banned words (denylists or flagWords) in certain files
    {
      "filename": "contribute/style-guide*.mdx",
      "ignoreWords": [
        "tos",
        "DOI",
        "boc",
        "BOC",
      ],
      "ignoreRegExpList": [
        "\\b[tT]on[a-zA-Z]+\\b", // ton or Ton-prefixed words
        "\\b[a-zA-Z]+Chain\\b", // Chain-suffixed words
      ],
      "dictionaries": [
        "!deny-list", // turns off the dictionary
        "!2lw-deny-list", // turns off the dictionary
      ]
    },
    {
      "filename": "languages/tolk/features/compiler-optimizations.mdx",
      "ignoreWords": [
        "fifting",
      ]
    },
    {
      "filename": "languages/tolk/from-func/tolk-vs-func.mdx",
      "ignoreWords": [
        "transpiles",
      ]
    },
    {
      "filename": "**/api/**/*.{json,yml,yaml}",
      "ignoreWords": [
        "smc",
      ],
      "dictionaries": [
        "!deny-list", // turns off the dictionary
        "!2lw-deny-list", // turns off the dictionary
      ]
    },
    {
      "filename": "**/*.{js,jsx,mjs}",
      "ignoreWords": [
        "Dests",
      ],
      "dictionaries": [
        "!deny-list", // turns off the dictionary
        "!2lw-deny-list", // turns off the dictionary
      ]
    }
  ],
  "ignorePaths": [
    // Some whitepapers
    "foundations/whitepapers/tblkch.mdx",
    "foundations/whitepapers/ton.mdx",
    "foundations/whitepapers/tvm.mdx",
    "languages/fift/whitepaper.mdx",
    "languages/tolk/features/standard-library.mdx",
    // Generated files
    "tvm/instructions.mdx",
    // Binaries
    "**/*.boc",
    // Code
    "**/*.fc",
    "**/*.fif",
    "**/*.fift",
    "**/*.func",
    "**/*.tact",
    "**/*.tasm",
    "**/*.tlb",
    "**/*.tolk",
    "**/*.py*",
    "**/*.{ts,tsx}",
    "**/*.css",
    // Miscellaneous
    "**/*.git*",
    "**/*.svg",
    "**/*.txt",
    "CODEOWNERS",
    "LICENSE-*",
    "snippets/tvm-instruction-table.jsx",
    "snippets/catchain-visualizer.jsx"
  ],
  "ignoreRegExpList": [
    //
    // Predefined patterns from:
    // https://github.com/streetsidesoftware/cspell/blob/main/packages/cspell-lib/src/lib/Settings/DefaultSettings.ts
    //
    "SpellCheckerDisable",
    "SpellCheckerIgnoreInDocSetting",
    "Urls",
    "Email",
    "RsaCert",
    "SshRsa",
    "Base64MultiLine",
    "Base64SingleLine",
    "CommitHash",
    "CommitHashLink",
    "CStyleHexValue",
    "CSSHexValue",
    "SHA",
    "HashStrings",
    "UnicodeRef",
    "UUID",
    "href",
    //
    // Custom patterns
    //
    "\\s*[^\\s]*?=[\"'\\{]", // arbitrary JSX attribute names
    "=\\s*\".*?\"", // string values of JSX attributes
    "=\\s*'.*?'", // string values of JSX attributes
    "(?
None:
    workspace = os.environ.get("GITHUB_WORKSPACE")
    if not workspace:
        raise SystemExit("GITHUB_WORKSPACE env var is required")
    style_path = os.path.join(workspace, "contribute", "style-guide-extended.mdx")
    try:
        with open(style_path, encoding="utf-8") as fh:
            style_content = fh.read().rstrip()
    except FileNotFoundError as exc:
        raise SystemExit(f"Style guide file not found: {style_path}") from exc
    style_block = f"\n{style_content}\n\n\n"

    body = textwrap.dedent(
        """Repository: TON Blockchain documentation

Scope and priorities:
1. Style-guide compliance is the first and absolute priority. Before reviewing, read the entire block. For every changed line in the diff, confirm it matches the guide. Any violation must be reported with the exact style-rule link.
2. Only after style compliance, check for obvious, provable, blocking errors not covered by the guide (e.g., an incorrect calculation or an unsafe, non‑runnable step) and report them with proof. If not certain from repo content alone, omit.

Review protocol:
- Inspect only content files touched by this PR: `.md`, `.mdx`, and `docs.json`.
- It is acceptable to report findings that originate in `docs.json` (e.g., broken or duplicate paths/slugs, invalid sidebar grouping, typos in titles). When the problem is in `docs.json`, cite its exact lines.
- Examine only the lines changed in this diff (use surrounding context as needed). Do not flag issues that exist solely in unchanged content.
- Report every issue you see in this diff; do not postpone or soften problems.
- Location links must be repo-relative paths such as pending/discover/web3-basics/glossary.mdx?plain=1#L10-L12 (no https:// prefix).
- When a style rule applies, cite it using contribute/style-guide-extended.mdx?plain=1#L-L. Only add the citation after running a verification command such as `rg "" contribute/style-guide-extended.mdx` or `sed -n ',p'` and inspecting the output to confirm the line range.
- If no style rule applies (e.g., factual error, typo), explain the issue clearly without a style link.
- Keep findings direct, professional, and concise. Suggestions must describe the required fix.
- Code identifiers: if the issue is lack of code font, preserve the token’s original case and wrap it in backticks. Only change case when the style guide explicitly mandates a canonical case for that exact identifier and you cite the relevant line range.

HARD SCOPE WALL — CONTENT ONLY (MANDATORY):
- You MUST NEVER read, open, cite, or rely on any non‑content files. This includes but is not limited to CI configs (`.github/**`), workflows (`*.yml`), code (`*.ts`, `*.tsx`, `*.js`, `*.py`, `*.go`, etc.), configuration/manifests (`package.json`, `pnpm-lock.yaml`, `*.toml`, `*.yaml`), tests, scripts, or build tool files.
- Allowed inputs are limited to the changed `.md`/`.mdx` files, `docs.json`, and `contribute/style-guide-extended.mdx` (for rule citations).
- Do not search outside these allowed files. Do not run commands that read or display non‑content files. Treat them as inaccessible.

Context for `docs.json`:
- Purpose: defines the site navigation tree, groupings, and slug mapping used by the docs site (metadata that directly affects the rendered docs experience).
- Legit uses during review:
  • Findings may target `docs.json` when the issue is there (e.g., broken/duplicate slug, incorrect path, wrong ordering/grouping).
  • You may also use `docs.json` to verify that changed frontmatter `slug`/title or links in `.md`/`.mdx` remain valid.
  • Cite `docs.json` lines when it is the source of the problem; otherwise cite the offending `.md`/`.mdx` lines.
  • If an issue relates to both `docs.json` and `.md`/`.mdx`, report it only on `docs.json`.
- Do not speculate about Mintlify runtime behavior or external systems; rely solely on repository content.

Severity policy:
- Report only HIGH‑severity violations.
- Do not report MEDIUM or LOW items.
- HIGH includes, in this order of precedence: (a) style‑guide rules tagged [HIGH] or listed under “Global overrides (always [HIGH])” in contribute/style-guide-extended.mdx; then (b) obvious, non‑style blocking errors (e.g., incorrect calculations, non‑runnable commands, unsafe steps) that you can prove using repository content (diff lines, examples, reference tables).
- For (b), include minimal proof with each finding (a short calculation or exact snippet) and cite the repo path/lines.
- Do not assume or infer behavior. Only report (b) when you are 100% certain from the repo itself; if uncertain, omit.

Persistence and completeness:
- Persist until the review is fully handled end-to-end within this single run.
- Do not stop after a partial pass; continue until you have either reported all HIGH-severity issues you can find in scope or are confident there are none.
- Do not stop to ask any kind of follow-up questions.

Verbosity and structure:
- Follow the existing review output contract, do not invent alternative formats.
- It is acceptable for the overall review to be long when there are many findings, but keep each Description and Suggestion concise (ideally no more than two short paragraphs each) while still giving enough detail to implement the fix.
- Avoid meta-commentary about your own reasoning process or tool usage; focus solely on concrete findings, locations, and fixes.

Goal: deliver exhaustive, high-confidence feedback that brings these TON Docs changes into full style-guide compliance and factual correctness.
"""
    )

    link_rules = textwrap.dedent(
        """
LINK FORMATTING — REQUIRED (overrides earlier bullets):
- Style‑guide citations: use a compact Markdown link with a short label, e.g. [Style rule — ](contribute/style-guide-extended.mdx?plain=1#L-L). Verify the exact line range first (e.g., `rg "" contribute/style-guide-extended.mdx` or `sed -n ',p'`).
- General code/location references: output a plain repo‑relative link on its own line, with no Markdown/backticks/prefix text so GitHub renders a rich preview. Example line:
  pending/discover/web3-basics/glossary.mdx?plain=1#L10-L12
- Do not use https:// prefixes for repo‑relative links.
"""
    )

    print(style_block + body + link_rules)


if __name__ == "__main__":
    main()



================================================
FILE: .github/scripts/build_review_payload.py
================================================
#!/usr/bin/env python3
"""
Build a GitHub Pull Request review payload from Pitaya results.

Inputs:
- --run-dir: path to pitaya results/run_* directory (contains instances/)
- --repo: owner/repo for link rewriting (GITHUB_REPOSITORY)
- --sha: PR head SHA for absolute blob links (PR_HEAD_SHA)
- --severities: comma-separated list of severities to include as inline comments
  (e.g., "HIGH" or "HIGH,MEDIUM,LOW")
- --max-comments: hard cap for number of inline comments (default 40)

Output: JSON to stdout:
{
  "body": "",
  "event": "COMMENT",
  "comments": [
    {"path":"...", "side":"RIGHT", "line":123, "start_line":120, "start_side":"RIGHT", "body":"..."}
  ]
}
"""

from __future__ import annotations

import argparse
import json
import os
import re
from dataclasses import dataclass
from pathlib import Path
from typing import Dict, Iterable, List, Optional, Tuple


# ---------- Utilities ----------


def _read_json(path: Path) -> Optional[dict]:
    try:
        txt = path.read_text(encoding="utf-8", errors="replace")
        return json.loads(txt)
    except Exception:
        return None


def _iter_instance_jsons(run_dir: Path) -> Iterable[Tuple[Path, dict]]:
    inst = run_dir / "instances"
    if not inst.is_dir():
        return []
    files = list(inst.rglob("*.json"))
    for p in files:
        data = _read_json(p)
        if isinstance(data, dict):
            yield p, data


def _role_of(obj: dict) -> Optional[str]:
    # Strategy stores role either at top-level or under metadata.pr_review.role
    role = obj.get("role")
    if isinstance(role, str) and role:
        return role
    md = obj.get("metadata")
    if isinstance(md, dict):
        prr = md.get("pr_review")
        if isinstance(prr, dict):
            r = prr.get("role")
            if isinstance(r, str):
                return r
    return None


def _final_message_of(obj: dict) -> Optional[str]:
    msg = obj.get("final_message")
    return msg if isinstance(msg, str) else None


def _metrics_of(obj: dict) -> Dict[str, object]:
    m = obj.get("metrics")
    return m if isinstance(m, dict) else {}


# ---------- Link rewriting (replicates rewrite_review_links.py) ----------


def _absolutize_location_links(body: str, repo: Optional[str], sha: Optional[str]) -> str:
    if not body or not repo:
        return body
    blob_prefix = f"https://github.com/{repo}/blob/"
    doc_blob_prefix = f"{blob_prefix}{sha or 'main'}/"
    style_blob_prefix = f"{blob_prefix}main/"
    style_rel = "contribute/style-guide-extended.mdx"

    def absolutize_path(path: str) -> str:
        if path.startswith("http://") or path.startswith("https://"):
            return path
        normalized = path.lstrip("./")
        base = style_blob_prefix if normalized.startswith(style_rel) else doc_blob_prefix
        return f"{base}{normalized}"

    # 1) Fix explicit Location: lines when present
    lines: List[str] = []
    for line in body.splitlines():
        stripped = line.lstrip()
        indent_len = len(line) - len(stripped)
        for marker in ("- Location:", "Location:", "* Location:"):
            if stripped.startswith(marker):
                prefix, _, rest = stripped.partition(":")
                link = rest.strip()
                if link:
                    link = absolutize_path(link)
                    stripped = f"{prefix}: {link}"
                    line = " " * indent_len + stripped
                break
        lines.append(line)
    rewritten = "\n".join(lines)

    # 2) Convert any doc links like path/to/file.mdx?plain=1#L10-L20 anywhere in text
    # Avoid variable-width lookbehinds; match optional scheme as a capture and skip when present.
    if repo:
        generic_pattern = re.compile(
            r"(?P<prefix>https?://)?(?P<path>[A-Za-z0-9_./\-]+\.(?:md|mdx|json))\?plain=1#L\d+(?:-L\d+)?"
        )

        def repl(match: re.Match[str]) -> str:
            if match.group("prefix"):
                # Already absolute; leave as-is
                return match.group(0)
            p = match.group("path").lstrip("./")
            base = style_blob_prefix if p.startswith(style_rel) else doc_blob_prefix
            # Append the anchor part after the path
            suffix = match.group(0)[len(match.group("path")) :]
            return f"{base}{p}{suffix}"

        rewritten = generic_pattern.sub(repl, rewritten)

    style_pattern = re.compile(rf"{re.escape(style_rel)}\?plain=1#L\d+(?:-L\d+)?")

    def replace_style_links(text: str) -> str:
        result: list[str] = []
        last = 0
        for match in style_pattern.finditer(text):
            start, end = match.span()
            result.append(text[last:start])
            link = match.group(0)
            prefix_start = max(0, start - len(style_blob_prefix))
            if text[prefix_start:start] == style_blob_prefix:
                result.append(link)
            else:
                result.append(f"{style_blob_prefix}{link.lstrip('./')}")
            last = end
        result.append(text[last:])
        return "".join(result)

    rewritten = replace_style_links(rewritten)

    # Ensure doc blob URLs use PR head SHA (style guide stays on main)
    if sha:
        doc_prefix_regex = re.compile(rf"{re.escape(blob_prefix)}([^/]+)/([^\s)]+)")

        def fix_doc(match: re.Match[str]) -> str:
            base = match.group(1)
            remainder = match.group(2)
            target = "main" if remainder.startswith(style_rel) else sha
            if base == target:
                return match.group(0)
            return f"{blob_prefix}{target}/{remainder}"

        rewritten = doc_prefix_regex.sub(fix_doc, rewritten)

    return rewritten


def _build_from_sidecar(sidecar: dict, *, repo: str, sha: str, repo_root: Path) -> Tuple[str, str, List[Dict[str, object]]]:
    """Return (body, event, comments[]) from sidecar index.json.

    Event is always COMMENT."""
    body = str(sidecar.get("intro") or "").strip()
    # Force COMMENT-only behavior regardless of sidecar content
    event = "COMMENT"
    commit_id = str(sidecar.get("commit_id") or "").strip()
    if commit_id:
        sha = commit_id
    items = sidecar.get("selected_details") or []
    comments: List[Dict[str, object]] = []

    def sanitize_code_for_gh_suggestion(code: str) -> str:
        """Normalize a suggestion snippet for GitHub suggestions.

        - If a fenced block is present, extract its inner content.
        - Remove diff headers and treat leading '+' additions as plain text; drop '-' lines.
        """
        # Extract inner of first fenced block when present
        lang, inner = _extract_first_code_block(code)
        text = inner if inner is not None else code
        out: List[str] = []
        for ln in text.splitlines():
            if ln.startswith('--- ') or ln.startswith('+++ ') or ln.startswith('@@'):
                continue
            if ln.startswith('+') and not ln.startswith('++'):
                out.append(ln[1:])
                continue
            if ln.startswith('-') and not ln.startswith('--'):
                # Skip removed lines in GH suggestion body
                continue
            out.append(ln)
        return "\n".join(out).rstrip("\n")

    for it in items:
        try:
            path = str(it.get("path") or "").strip()
            start = int(it.get("start") or 0)
            end = int(it.get("end") or 0)
            # severity is not required for comment body; skip storing it
            title = str(it.get("title") or "").strip()
            desc = str(it.get("desc") or "").strip()
            sugg = it.get("suggestion") or {}
            code = str(sugg.get("code") or "")
        except Exception:
            continue
        if not (path and start > 0 and end >= start and title):
            continue
        # Clamp to file length when available
        file_path = (repo_root / path).resolve()
        if file_path.is_file():
            try:
                line_count = sum(1 for _ in file_path.open("r", encoding="utf-8", errors="ignore"))
                if end > line_count:
                    end = line_count
                if start > line_count:
                    continue
            except Exception:
                pass
        # Build comment body with title + description and optional suggestion fence
        code = code.rstrip("\n")
        parts: List[str] = []
        # Prefer including severity in heading when present in sidecar
        sev = (it.get("severity") or "").strip().upper()
        if title:
            heading = f"### [{sev}] {title}".strip()
            parts.append(heading)
        if desc:
            parts.append("")
            parts.append(desc)
        # When replacement text is present, include a GitHub suggestion block.
        # Allow empty replacement (deletion) suggestions: GitHub treats an empty block as delete selected lines.
        if code is not None:
            repl = sanitize_code_for_gh_suggestion(code)
            repl_lines = repl.splitlines()
            n_range = end - start + 1
            if (
                (n_range == 1 and len(repl_lines) == 1)
                or (n_range > 1 and len(repl_lines) == n_range)
                or (repl == "" and n_range >= 1)
            ):
                parts.append("")
                parts.append("```suggestion")
                if repl:
                    parts.append(repl)
                parts.append("```")
            else:
                # No auto-fix block; rely on title/description and CTA only.
                pass
        # Always include the feedback CTA
        parts.append("")
        parts.append("Please leave a reaction 👍/👎 to this suggestion to improve future reviews for everyone!")
        body_text = "\n".join(parts).strip()
        body_text = _absolutize_location_links(body_text, repo or None, sha or None)
        c: Dict[str, object] = {"path": path, "side": "RIGHT", "body": body_text}
        if start == end:
            c["line"] = end
        else:
            c["start_line"] = start
            c["line"] = end
            c["start_side"] = "RIGHT"
        comments.append(c)

    # Rewrite links in top-level body
    body = _absolutize_location_links(body, repo or None, sha or None)
    return body, event, comments


# ---------- Finding parsing ----------


_H_RE = re.compile(r"^###\s*\[(HIGH|MEDIUM|LOW)\]\s*(.+?)\s*$", re.IGNORECASE)
_LOC_RE = re.compile(
    r"^Location:\s*([^\s?#]+)(?:\?plain=1)?#L(?P<start>\d+)(?:-L(?P<end>\d+))?\s*$",
    re.IGNORECASE,
)


@dataclass
class Finding:
    severity: str
    title: str
    path: str
    start: int
    end: int
    desc: str
    suggestion_raw: str
    suggestion_replacement: Optional[str] = None
    uid: Optional[str] = None

    def key(self) -> Tuple[str, int, int, str]:
        t = re.sub(r"\W+", " ", self.title or "").strip().lower()
        return (self.path, self.start, self.end, t)


def _extract_first_code_block(text: str) -> Tuple[Optional[str], Optional[str]]:
    """Return (lang, content) for the first fenced code block in text."""
    m = re.search(r"```([a-zA-Z0-9_-]*)\s*\n([\s\S]*?)\n```", text)
    if not m:
        return None, None
    lang = (m.group(1) or "").strip().lower()
    content = m.group(2)
    return lang, content


_TRAILER_JSON_RE = re.compile(r"```json\s*(\{[\s\S]*?\})\s*```\s*$", re.IGNORECASE | re.MULTILINE)

# Remove any fenced code blocks (```lang ... ```), used when we can't submit a proper GH suggestion
_FENCED_BLOCK_RE = re.compile(r"```[a-zA-Z0-9_-]*\s*\n[\s\S]*?\n```", re.MULTILINE)


def _strip_trailing_json_trailer(text: str) -> str:
    """Remove a trailing fenced JSON block (validator trailer) from text."""
    return _TRAILER_JSON_RE.sub("", text).rstrip()


def _parse_findings(md: str) -> List[Finding]:
    lines = md.splitlines()
    i = 0
    items: List[Finding] = []
    while i < len(lines):
        m = _H_RE.match(lines[i])
        if not m:
            i += 1
            continue
        severity = m.group(1).upper()
        title = m.group(2).strip()
        i += 1
        # Expect blocks with Location:, Description:, Suggestion:
        loc_path = ""
        loc_start = 0
        loc_end = 0
        desc_lines: List[str] = []
        sugg_lines: List[str] = []
        # Scan until next heading or end
        section = "none"
        while i < len(lines) and not _H_RE.match(lines[i]):
            line = lines[i]
            if line.strip().lower().startswith("location:"):
                lm = _LOC_RE.match(line.strip())
                if lm:
                    loc_path = lm.group(1).strip()
                    loc_start = int(lm.group("start"))
                    loc_end = int(lm.group("end") or lm.group("start"))
                section = "location"
            elif line.strip().lower().startswith("description:"):
                section = "desc"
            elif line.strip().lower().startswith("suggestion:"):
                section = "sugg"
            else:
                if section == "desc":
                    desc_lines.append(line)
                elif section == "sugg":
                    sugg_lines.append(line)
            i += 1
        if not (loc_path and loc_start > 0 and loc_end >= loc_start):
            # Skip malformed entries
            continue
        desc = "\n".join(desc_lines).strip()
        sugg_raw = "\n".join(sugg_lines).strip()
        # Remove any trailing validator JSON trailer that might have been captured
        sugg_raw = _strip_trailing_json_trailer(sugg_raw)
        # Try to derive a GH
suggestion replacement from the first non-diff code block replacement: Optional[str] = None lang, content = _extract_first_code_block(sugg_raw) if content: if lang and lang != "diff" and lang != "patch": replacement = content elif not lang: # Unspecified language — assume it's a replacement snippet replacement = content # else: diff/patch -> skip automated suggestion; keep raw in comment items.append( Finding( severity=severity, title=title, path=loc_path, start=loc_start, end=loc_end, desc=desc, suggestion_raw=sugg_raw, suggestion_replacement=replacement, ) ) return items def _parse_trailer_findings(md: str) -> List[dict]: """Parse the fenced JSON trailer at the end and return .findings list when present.""" m = re.search(r"```json\s*(\{[\s\S]*?\})\s*```\s*$", md, flags=re.IGNORECASE | re.MULTILINE) if not m: return [] try: obj = json.loads(m.group(1)) if isinstance(obj, dict): f = obj.get("findings") if isinstance(f, list): out = [] for it in f: if isinstance(it, dict): out.append(it) return out except Exception: return [] return [] # Removed verdict aggregation logic: event selection is fixed to COMMENT. 
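As an aside, the `Location:` contract that `_LOC_RE` enforces can be exercised in isolation. The following is a minimal standalone sketch (mirroring the pattern above, not part of the original script) of how a finding's location line resolves to a path and an inclusive line range:

```python
import re

# Mirrors the _LOC_RE pattern: a path with an optional ?plain=1 query,
# then #L<start> and an optional -L<end> range suffix.
LOC_RE = re.compile(
    r"^Location:\s*([^\s?#]+)(?:\?plain=1)?#L(?P<start>\d+)(?:-L(?P<end>\d+))?\s*$",
    re.IGNORECASE,
)

def parse_location(line):
    """Return (path, start, end), or None for a malformed Location line."""
    m = LOC_RE.match(line.strip())
    if not m:
        return None
    start = int(m.group("start"))
    # A single-line location (#L7) collapses to start == end.
    end = int(m.group("end") or start)
    return m.group(1), start, end

print(parse_location("Location: contract-dev/gas.mdx?plain=1#L10-L14"))
# → ('contract-dev/gas.mdx', 10, 14)
```

Note that a single-line location yields `start == end`, which is why the comment builder later emits `line` alone instead of a `start_line`/`line` pair.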
# ---------- Main ----------

def main() -> None:
    ap = argparse.ArgumentParser()
    ap.add_argument("--run-dir", required=True, help="Pitaya results/run_* directory")
    ap.add_argument("--repo", default=os.environ.get("GITHUB_REPOSITORY") or "", help="owner/repo")
    ap.add_argument("--sha", default=os.environ.get("PR_HEAD_SHA") or "", help="PR head SHA")
    ap.add_argument("--severities", default=os.environ.get("INLINE_SEVERITIES") or "HIGH")
    ap.add_argument("--max-comments", type=int, default=int(os.environ.get("MAX_COMMENTS") or 40))
    args = ap.parse_args()

    run_dir = Path(args.run_dir)
    repo = args.repo.strip()
    sha = args.sha.strip()
    include_sevs = {s.strip().upper() for s in (args.severities or "HIGH").split(",") if s.strip()}

    # Prefer sidecar when present (new strategy contract)
    sidecar_path = run_dir / "review" / "index.json"
    if sidecar_path.exists():
        try:
            sidecar = json.loads(sidecar_path.read_text(encoding="utf-8", errors="replace"))
        except Exception as e:
            raise SystemExit(f"Failed to read sidecar {sidecar_path}: {e}")
        body, _event, comments = _build_from_sidecar(
            sidecar, repo=repo, sha=sha, repo_root=Path(os.environ.get("GITHUB_WORKSPACE") or ".")
        )
        # Always submit a COMMENT review regardless of findings
        out = {
            "body": body or "No documentation issues detected.",
            "event": "COMMENT",
            "comments": comments,
            "commit_id": (sidecar.get("commit_id") or sha) or None,
        }
        json.dump(out, fp=os.fdopen(1, "w"), ensure_ascii=False)
        return

    # Fallback: derive from instances when sidecar is absent
    files = list(_iter_instance_jsons(run_dir))
    if not files:
        raise SystemExit("No instance JSON files found in run dir and no sidecar present")

    composer_body: Optional[str] = None
    composer_metrics: Dict[str, object] = {}
    validator_messages: List[str] = []
    validator_trailer_findings: List[dict] = []
    metrics_list: List[Dict[str, object]] = []
    for path, obj in files:
        role = _role_of(obj) or ""
        fm = _final_message_of(obj)
        metrics = _metrics_of(obj)
        if role == "composer":
            if fm and not composer_body:
                composer_body = fm
            if metrics:
                composer_metrics.update(metrics)
        elif role == "validator":
            if fm:
                validator_messages.append(fm)
                # collect trailer findings if present
                validator_trailer_findings.extend(_parse_trailer_findings(fm))
            if metrics:
                metrics_list.append(metrics)
        else:
            # Heuristic: treat messages that end with a fenced JSON trailer as validator outputs
            if isinstance(fm, str) and re.search(r"```json\s*\{[\s\S]*\}\s*```\s*$", fm, re.IGNORECASE):
                validator_messages.append(fm)
                validator_trailer_findings.extend(_parse_trailer_findings(fm))
                if metrics:
                    metrics_list.append(metrics)

    # Removed verdict computation; not used for event selection.
    # Event will be set by simplified rule after building comments.

    # Derive selected finding IDs and a human body from composer output (new composer may return JSON)
    selected_ids: List[str] = []
    body = composer_body or ""
    composer_json = None
    try:
        composer_json = json.loads(body) if body.strip().startswith("{") else None
    except Exception:
        composer_json = None
    if isinstance(composer_json, dict) and ("intro" in composer_json or "selected_ids" in composer_json):
        intro = composer_json.get("intro")
        if isinstance(intro, str) and intro.strip():
            body = intro.strip()
        else:
            body = "Automated review summary"
        ids = composer_json.get("selected_ids")
        if isinstance(ids, list):
            seen_ids = set()
            for v in ids:
                if isinstance(v, str) and v not in seen_ids:
                    selected_ids.append(v)
                    seen_ids.add(v)
    # Fallback to original markdown body
    body = _absolutize_location_links(body, repo if repo else None, sha if sha else None)
    if not body.strip():
        body = "No documentation issues detected."

    # Parse validator findings and deduplicate
    findings: List[Finding] = []
    for msg in validator_messages:
        parsed = _parse_findings(msg or "")
        # Attempt to attach UIDs from trailer by matching on (path, start, end, severity, title)
        if validator_trailer_findings:
            trailer_index: Dict[Tuple[str, int, int, str, str], str] = {}
            for it in validator_trailer_findings:
                path = str(it.get("path") or "").strip()
                start = int(it.get("start") or 0)
                end = int(it.get("end") or 0)
                sev = str(it.get("severity") or "").strip().upper()
                title = str(it.get("title") or "").strip()
                uid = str(it.get("uid") or "").strip()
                if path and start > 0 and end >= start and sev and title and uid:
                    trailer_index[(path, start, end, sev, title)] = uid
            for f in parsed:
                key = (f.path, f.start, f.end, f.severity.upper(), f.title)
                if key in trailer_index:
                    f.uid = trailer_index[key]
        findings.extend(parsed)

    # Build selected findings list (preserve order) when composer provided UIDs
    selected_findings: List[Finding] = []
    if selected_ids:
        # Index validator trailer findings by uid and tuple for robust matching
        trailer_by_uid: Dict[str, dict] = {}
        for it in validator_trailer_findings:
            uid = str(it.get("uid") or "").strip()
            if uid:
                trailer_by_uid[uid] = it
        # Index parsed findings for lookup by (path, start, end, sev, title)
        parsed_index: Dict[Tuple[str, int, int, str, str], Finding] = {}
        parsed_alt_index: Dict[Tuple[str, int, int, str], Finding] = {}
        for f in findings:
            parsed_index[(f.path, f.start, f.end, f.severity.upper(), f.title)] = f
            parsed_alt_index[(f.path, f.start, f.end, f.severity.upper())] = f
        for uid in selected_ids:
            fobj: Optional[Finding] = None
            t = trailer_by_uid.get(uid)
            if t:
                key = (
                    str(t.get("path") or "").strip(),
                    int(t.get("start") or 0),
                    int(t.get("end") or 0),
                    str(t.get("severity") or "").strip().upper(),
                    str(t.get("title") or "").strip(),
                )
                fobj = parsed_index.get(key)
                if not fobj:
                    key2 = (key[0], key[1], key[2], key[3])
                    fobj = parsed_alt_index.get(key2)
                if not fobj and key[0] and key[1] > 0 and key[2] >= key[1]:
                    # Create a minimal finding from trailer
                    fobj = Finding(
                        severity=key[3] or "HIGH",
                        title=key[4] or "Selected finding",
                        path=key[0],
                        start=key[1],
                        end=key[2],
                        desc="",
                        suggestion_raw="",
                    )
                    fobj.uid = uid
            else:
                # Fallback: search parsed findings by uid
                fobj = next((pf for pf in findings if pf.uid == uid), None)
            if fobj and fobj.severity in include_sevs:
                selected_findings.append(fobj)
        base_list = selected_findings
    else:
        # Filter by severities, then dedupe
        findings = [f for f in findings if f.severity in include_sevs]
        seen: set[Tuple[str, int, int, str]] = set()
        deduped: List[Finding] = []
        for f in findings:
            k = f.key()
            if k in seen:
                continue
            seen.add(k)
            deduped.append(f)
        base_list = deduped

    # Cap number of comments
    base_list = base_list[: max(0, int(args.max_comments))]

    # Build inline comments
    comments: List[Dict[str, object]] = []
    # Optional bounds check against workspace files to reduce 422 errors
    repo_root = Path(os.environ.get("GITHUB_WORKSPACE") or ".")
    for f in base_list:
        # Clamp line numbers to file length when possible
        file_path = (repo_root / f.path).resolve()
        if file_path.is_file():
            try:
                line_count = sum(1 for _ in file_path.open("r", encoding="utf-8", errors="ignore"))
                if f.end > line_count:
                    f.end = line_count
                if f.start > line_count:
                    # Skip invalid locations entirely
                    continue
            except Exception:
                pass
        # Compose comment body with optional suggestion
        parts: List[str] = []
        parts.append(f"### [{f.severity}] {f.title}")
        if f.desc.strip():
            parts.append("")
            parts.append(f.desc.strip())
        # Only submit commit suggestions when the replacement likely covers the full selected range
        submitted_suggestion = False
        if f.suggestion_replacement is not None:
            repl = f.suggestion_replacement.rstrip("\n")
            repl_lines = repl.splitlines()
            n_range = f.end - f.start + 1
            if (
                (n_range == 1 and len(repl_lines) == 1)
                or (n_range > 1 and len(repl_lines) == n_range)
                or (repl == "" and n_range >= 1)
            ):
                parts.append("")
                parts.append("```suggestion")
                if repl:
                    parts.append(repl)
                parts.append("```")
                submitted_suggestion = True
        if not submitted_suggestion and f.suggestion_raw.strip():
            # Detect deletion-only diffs and convert to empty GH suggestion
            raw = f.suggestion_raw
            lang, inner = _extract_first_code_block(raw)
            text = inner if inner is not None else raw
            lines = [ln.strip() for ln in text.splitlines()]
            has_add = any(ln.startswith('+') and not ln.startswith('++') for ln in lines)
            has_del = any(ln.startswith('-') and not ln.startswith('--') for ln in lines)
            if has_del and not has_add:
                parts.append("")
                parts.append("```suggestion")
                parts.append("```")
                submitted_suggestion = True
        if not submitted_suggestion and f.suggestion_raw.strip():
            parts.append("")
            # Do not include fenced blocks if we can't guarantee a commit suggestion
            cleaned = _TRAILER_JSON_RE.sub("", f.suggestion_raw.strip())
            cleaned = _FENCED_BLOCK_RE.sub("", cleaned).strip()
            if cleaned:
                parts.append(cleaned)
        # Always include the feedback CTA
        parts.append("")
        parts.append("Please leave a reaction 👍/👎 to this suggestion to improve future reviews for everyone!")
        body_text = "\n".join(parts).strip()
        # Rewrite style-guide references to clickable blob URLs
        body_text = _absolutize_location_links(body_text, repo if repo else None, sha if sha else None)
        c: Dict[str, object] = {
            "path": f.path,
            "side": "RIGHT",
            "body": body_text,
        }
        if f.start == f.end:
            c["line"] = f.end
        else:
            c["start_line"] = f.start
            c["line"] = f.end
            c["start_side"] = "RIGHT"
        comments.append(c)

    # Always submit a COMMENT review, never approve or request changes.
    event = "COMMENT"
    out = {
        "body": body,
        "event": event,
        "comments": comments,
        "commit_id": sha or None,
    }
    json.dump(out, fp=os.fdopen(1, "w"), ensure_ascii=False)


if __name__ == "__main__":
    main()


================================================
FILE: .github/scripts/common.mjs
================================================
export async function hidePriorCommentsWithPrefix({
  github, // injected by GitHub
  context, // injected by GitHub
  exec, // injected by GitHub
  prefix = '',
  resolved = true,
  user = 'github-actions[bot]',
}) {
  const comments = await withRetry(() =>
    github.rest.issues.listComments({
      owner: context.repo.owner,
      repo: context.repo.repo,
      issue_number: context.issue.number,
    })
  );
  await exec.exec('sleep 0.5s');

  for (const comment of comments.data) {
    const commentData = await withRetry(() =>
      github.graphql(`
        query($nodeId: ID!) {
          node(id: $nodeId) {
            ... on IssueComment {
              isMinimized
            }
          }
        }
      `, { nodeId: comment.node_id })
    );
    await exec.exec('sleep 0.5s');

    const isHidden = commentData?.node?.isMinimized;
    if (isHidden) {
      continue;
    }

    if (
      comment.user.login === user &&
      comment.body.startsWith(prefix)
    ) {
      console.log('Comment node_id:', comment.node_id);
      const commentStatus = await withRetry(() =>
        github.graphql(`
          mutation($subjectId: ID!, $classifier: ReportedContentClassifiers!) {
            minimizeComment(input: { subjectId: $subjectId, classifier: $classifier }) {
              minimizedComment {
                isMinimized
                minimizedReason
              }
            }
          }
        `, {
          subjectId: comment.node_id,
          classifier: resolved ?
            'RESOLVED' : 'OUTDATED',
        })
      );
      await exec.exec('sleep 0.5s');
      console.log(commentStatus);
    }
  }
}

export async function createComment({
  github, // injected by GitHub
  context, // injected by GitHub
  exec, // injected by GitHub
  body = '',
}) {
  await withRetry(() =>
    github.rest.issues.createComment({
      owner: context.repo.owner,
      repo: context.repo.repo,
      issue_number: context.issue.number,
      body: body,
    })
  );
  await exec.exec('sleep 0.2s');
}

/** @param fn {() => Promise} */
async function withRetry(fn, maxRetries = 3, baseDelayMs = 1500) {
  let lastError;
  for (let attempt = 1; attempt <= maxRetries; attempt += 1) {
    try {
      return await fn();
    } catch (error) {
      // Don't retry on 4xx errors (client errors), only on 5xx or network issues
      if (error.status && error.status >= 400 && error.status < 500) {
        throw error;
      }
      lastError = error;
      // Exponential backoff
      const delay = baseDelayMs * Math.pow(2, attempt - 1);
      console.log(`Attempt ${attempt} failed, retrying in ${delay / 1000}s...`);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  // Did not produce results after multiple retries
  throw lastError;
}


================================================
FILE: .github/scripts/generate-v2-api-table.py
================================================
import json
import re
from pathlib import Path
from collections import defaultdict

# Define which specs to process and where to inject tables
SPECS = [
    {
        'spec_path': 'ecosystem/api/toncenter/v2.json',
        'mdx_path': 'ecosystem/api/toncenter/v2/overview.mdx',
        'marker': 'API_V2_ENDPOINTS',
        'link_base': '/ecosystem/api/toncenter/v2',
        'exclude_tags': ['rpc'],
        'include_jsonrpc': True,
    },
]


def load_openapi_spec(filepath: Path) -> dict:
    """Load the OpenAPI JSON file."""
    with open(filepath, 'r') as f:
        return json.load(f)


def extract_endpoints(spec: dict, exclude_tags: list = None) -> list:
    """Extract endpoints from the OpenAPI spec."""
    exclude_tags = [t.lower() for t in (exclude_tags or [])]
    endpoints = []
    seen_paths = set()
    paths = spec.get('paths', {})
    for path, path_item in paths.items():
        for method in ['get', 'post', 'put', 'patch', 'delete']:
            if method not in path_item:
                continue
            operation = path_item[method]
            tags = operation.get('tags', ['Other'])
            tags_lower = [t.lower() for t in tags]
            # Skip if ALL tags are in exclude list
            if all(t in exclude_tags for t in tags_lower):
                continue
            # Use first non-excluded tag as category
            tag = next((t for t in tags if t.lower() not in exclude_tags), tags[0])
            # Avoid duplicates
            if path in seen_paths:
                continue
            seen_paths.add(path)
            endpoints.append({
                'path': path,
                'method': method.upper(),
                'tag': tag,
                'summary': operation.get('summary', ''),
                'operationId': operation.get('operationId', ''),
            })
    return endpoints


def generate_mintlify_link(endpoint: dict, base_path: str) -> str:
    """Generate Mintlify documentation link based on summary (slugified)."""
    tag = endpoint['tag'].lower().replace(' ', '-').replace('_', '-')
    summary = endpoint.get('summary', '')
    if summary:
        # Mintlify slugifies the summary for the URL:
        # "Get account state and balance" -> "get-account-state-and-balance"
        slug = summary.lower()
        slug = re.sub(r'[^a-z0-9\s-]', '', slug)
        slug = re.sub(r'\s+', '-', slug)
        slug = re.sub(r'-+', '-', slug)
        slug = slug.strip('-')
        return f"{base_path}/{tag}/{slug}"
    operation_id = endpoint.get('operationId', '')
    if operation_id:
        clean_op_id = operation_id.replace('_get', '').replace('_post', '')
        slug = re.sub(r'([a-z])([A-Z])', r'\1-\2', clean_op_id).lower()
        return f"{base_path}/{tag}/{slug}"
    path_slug = endpoint['path'].split('/')[-1].lower()
    return f"{base_path}/{tag}/{path_slug}"


def generate_table(endpoints: list, link_base: str) -> str:
    """Generate markdown table from endpoints."""
    # Group by tag
    grouped = defaultdict(list)
    for ep in endpoints:
        grouped[ep['tag']].append(ep)
    # Custom sort order
    tag_order = ['accounts', 'blocks', 'transactions', 'send', 'run method', 'utils', 'configuration', 'json-rpc']

    def sort_key(tag):
        try:
            return tag_order.index(tag.lower())
        except ValueError:
            return len(tag_order)

    sorted_tags = sorted(grouped.keys(), key=sort_key)
    lines = [
        "| Category | Method | Description |",
        "| -------- | ------ | ----------- |",
    ]
    for tag in sorted_tags:
        for ep in grouped[tag]:
            method = ep['method']
            path = ep['path'].replace('/api/v2', '').replace('/api/v3', '')
            summary = ep['summary']
            link = generate_mintlify_link(ep, link_base)
            display_tag = tag.capitalize() if tag.islower() else tag
            method_display = f"[`{method} {path}`]({link})"
            lines.append(f"| **{display_tag}** | {method_display} | {summary} |")
    return '\n'.join(lines)


def process_spec(config: dict, repo_root: Path) -> str:
    """Process a single OpenAPI spec and generate its table."""
    spec_path = repo_root / config['spec_path']
    if not spec_path.exists():
        print(f"Spec not found: {spec_path}")
        return None
    spec = load_openapi_spec(spec_path)
    if spec is None:
        return None
    endpoints = extract_endpoints(spec, config.get('exclude_tags', []))
    # Optionally add JSON-RPC endpoint
    if config.get('include_jsonrpc'):
        paths = spec.get('paths', {})
        for rpc_path in ['/api/v2/jsonRPC', '/api/v3/jsonRPC']:
            if rpc_path in paths:
                jsonrpc = paths[rpc_path].get('post', {})
                endpoints.append({
                    'path': rpc_path,
                    'method': 'POST',
                    'tag': 'JSON-RPC',
                    'summary': jsonrpc.get('summary', 'JSON-RPC endpoint'),
                    'operationId': jsonrpc.get('operationId', 'jsonRPC_post'),
                })
    return generate_table(endpoints, config['link_base'])


def inject_table_into_mdx(mdx_path: Path, marker: str, table: str) -> bool:
    """
    Inject the generated table into an MDX file between marker comments.

    Markers in MDX should look like:
    {/* BEGIN_AUTO_GENERATED: API_V2_ENDPOINTS */}
    {/* END_AUTO_GENERATED: API_V2_ENDPOINTS */}
    """
    if not mdx_path.exists():
        print(f"MDX not found: {mdx_path}")
        return False
    content = mdx_path.read_text()
    # Pattern to match the marker block (handles both empty and filled markers)
    pattern = rf'(\{{/\* BEGIN_AUTO_GENERATED: {marker} \*/\}})[ \t]*\n.*?(\{{/\* END_AUTO_GENERATED: {marker} \*/\}})'
    if not re.search(pattern, content, re.DOTALL):
        print(f"  Markers not found in {mdx_path}")
        print(f"  Add these markers where you want the table:")
        print(f"  {{/* BEGIN_AUTO_GENERATED: {marker} */}}")
        print(f"  {{/* END_AUTO_GENERATED: {marker} */}}")
        return False
    # Replace content between markers
    new_content = re.sub(
        pattern,
        rf'\1\n{table}\n\2',
        content,
        flags=re.DOTALL
    )
    if new_content != content:
        mdx_path.write_text(new_content)
        return True
    return False


def find_repo_root() -> Path:
    """Find the repository root (where mint.json is located)."""
    current = Path(__file__).resolve().parent
    for parent in [current] + list(current.parents):
        if (parent / 'mint.json').exists():
            return parent
    return current.parent


def main():
    repo_root = find_repo_root()
    for config in SPECS:
        print(f"\nProcessing: {config['spec_path']}")
        table = process_spec(config, repo_root)
        if not table:
            continue
        mdx_path = repo_root / config['mdx_path']
        marker = config['marker']
        if inject_table_into_mdx(mdx_path, marker, table):
            print(f"  Updated {config['mdx_path']}")
        else:
            print(f"  No changes needed or markers missing")
    print("\nDone")


if __name__ == '__main__':
    main()


================================================
FILE: .github/scripts/generate-v3-api-table.py
================================================
import re
from pathlib import Path
from collections import defaultdict

try:
    import yaml
    HAS_YAML = True
except ImportError:
    HAS_YAML = False
    print("PyYAML not installed. Run: pip install pyyaml")
    exit(1)

SPEC_PATH = 'ecosystem/api/toncenter/v3.yaml'
MDX_PATH = 'ecosystem/api/toncenter/v3/overview.mdx'
MARKER = 'API_V3_ENDPOINTS'
LINK_BASE = '/ecosystem/api/toncenter/v3'

# Tag display order
TAG_ORDER = [
    'accounts',
    'actions and traces',
    'blockchain data',
    'jettons',
    'nfts',
    'dns',
    'multisig',
    'vesting',
    'stats',
    'utils',
    'api/v2',
]

# Map tag slugs to Mintlify's actual URL slugs
TAG_SLUG_MAP = {
    'api-v2': 'apiv2',
}


def load_openapi_spec(filepath: Path) -> dict:
    """Load the OpenAPI YAML file."""
    with open(filepath, 'r') as f:
        return yaml.safe_load(f)


def extract_endpoints(spec: dict) -> list:
    """Extract endpoints from the OpenAPI spec."""
    endpoints = []
    seen_paths = set()
    paths = spec.get('paths', {})
    for path, path_item in paths.items():
        for method in ['get', 'post', 'put', 'patch', 'delete']:
            if method not in path_item:
                continue
            operation = path_item[method]
            tags = operation.get('tags', ['Other'])
            tag = tags[0] if tags else 'Other'
            # Avoid duplicates
            if path in seen_paths:
                continue
            seen_paths.add(path)
            endpoints.append({
                'path': path,
                'method': method.upper(),
                'tag': tag,
                'summary': operation.get('summary', ''),
                'operationId': operation.get('operationId', ''),
            })
    return endpoints


def generate_mintlify_link(endpoint: dict) -> str:
    """Generate Mintlify documentation link based on the summary."""
    tag = endpoint['tag'].lower().replace(' ', '-').replace('_', '-').replace('/', '-')
    # Apply tag slug mapping for Mintlify
    tag = TAG_SLUG_MAP.get(tag, tag)
    summary = endpoint.get('summary', '')
    if summary:
        slug = summary.lower()
        slug = re.sub(r'[^a-z0-9\s-]', '', slug)
        slug = re.sub(r'\s+', '-', slug)
        slug = re.sub(r'-+', '-', slug)
        slug = slug.strip('-')
        return f"{LINK_BASE}/{tag}/{slug}"
    operation_id = endpoint.get('operationId', '')
    if operation_id:
        clean_op_id = operation_id.replace('_get', '').replace('_post', '')
        slug = re.sub(r'([a-z])([A-Z])', r'\1-\2', clean_op_id).lower()
        return f"{LINK_BASE}/{tag}/{slug}"
    path_slug = \
        endpoint['path'].split('/')[-1].lower()
    return f"{LINK_BASE}/{tag}/{path_slug}"


def generate_table(endpoints: list) -> str:
    """Generate markdown table from endpoints."""
    # Group by tag
    grouped = defaultdict(list)
    for ep in endpoints:
        grouped[ep['tag']].append(ep)

    def sort_key(tag):
        try:
            return TAG_ORDER.index(tag.lower())
        except ValueError:
            return len(TAG_ORDER)

    sorted_tags = sorted(grouped.keys(), key=sort_key)
    lines = [
        "| Category | Method | Description |",
        "| -------- | ------ | ----------- |",
    ]
    for tag in sorted_tags:
        for ep in grouped[tag]:
            method = ep['method']
            path = ep['path'].replace('/api/v3', '')
            summary = ep['summary']
            link = generate_mintlify_link(ep)
            # Handle tag display
            display_tag = tag
            if tag.lower() == 'api/v2':
                display_tag = 'Legacy (v2)'
            elif tag.islower():
                display_tag = tag.capitalize()
            method_display = f"[`{method} {path}`]({link})"
            lines.append(f"| **{display_tag}** | {method_display} | {summary} |")
    return '\n'.join(lines)


def inject_table_into_mdx(mdx_path: Path, table: str) -> bool:
    """Inject the generated table into the MDX file between marker comments."""
    if not mdx_path.exists():
        print(f"  MDX not found: {mdx_path}")
        return False
    content = mdx_path.read_text()
    # Pattern to match the marker block
    pattern = rf'(\{{/\* BEGIN_AUTO_GENERATED: {MARKER} \*/\}})[ \t]*\n.*?(\{{/\* END_AUTO_GENERATED: {MARKER} \*/\}})'
    if not re.search(pattern, content, re.DOTALL):
        print(f"  Markers not found in {mdx_path}")
        print(f"  Add these markers where you want the table:")
        print(f"  {{/* BEGIN_AUTO_GENERATED: {MARKER} */}}")
        print(f"  {{/* END_AUTO_GENERATED: {MARKER} */}}")
        return False
    new_content = re.sub(
        pattern,
        rf'\1\n{table}\n\2',
        content,
        flags=re.DOTALL
    )
    if new_content != content:
        mdx_path.write_text(new_content)
        return True
    return False


def find_repo_root() -> Path:
    """Find the repository root."""
    current = Path(__file__).resolve().parent
    for parent in [current] + list(current.parents):
        if (parent / 'docs.json').exists():
            return parent
    return current.parent


def main():
    repo_root = find_repo_root()
    spec_path = repo_root / SPEC_PATH
    mdx_path = repo_root / MDX_PATH
    print(f"\nProcessing: {SPEC_PATH}")
    if not spec_path.exists():
        print(f"Spec not found: {spec_path}")
        return
    spec = load_openapi_spec(spec_path)
    endpoints = extract_endpoints(spec)
    print(f"  Found {len(endpoints)} endpoints")
    table = generate_table(endpoints)
    if inject_table_into_mdx(mdx_path, table):
        print(f"  Updated {MDX_PATH}")
    else:
        print(f"  No changes needed or markers missing")
    print("\nDone")


if __name__ == '__main__':
    main()


================================================
FILE: .github/scripts/rewrite_review_links.py
================================================
#!/usr/bin/env python3
"""Convert repo-relative doc links in the review body to absolute blob URLs."""
from __future__ import annotations

import os
import re
import sys


def main() -> None:
    text = sys.stdin.read()
    if not text:
        sys.stdout.write(text)
        return
    repo = os.environ.get("GITHUB_REPOSITORY")
    sha = os.environ.get("PR_HEAD_SHA")
    if not repo:
        sys.stdout.write(text)
        return
    blob_prefix = f"https://github.com/{repo}/blob/"
    doc_blob_prefix = f"{blob_prefix}{sha or 'main'}/"
    style_blob_prefix = f"{blob_prefix}main/"
    style_rel = "contribute/style-guide-extended.mdx"

    def absolutize_location(path: str) -> str:
        if path.startswith("http://") or path.startswith("https://"):
            return path
        normalized = path.lstrip("./")
        base = style_blob_prefix if normalized.startswith(style_rel) else doc_blob_prefix
        return f"{base}{normalized}"

    lines: list[str] = []
    for line in text.splitlines():
        stripped = line.lstrip()
        indent_len = len(line) - len(stripped)
        for marker in ("- Location:", "Location:", "* Location:"):
            if stripped.startswith(marker):
                prefix, _, rest = stripped.partition(":")
                link = rest.strip()
                if link:
                    link = absolutize_location(link)
                stripped = f"{prefix}: {link}"
                line = " " * indent_len + stripped
                break
        lines.append(line)
    rewritten = "\n".join(lines)

    style_pattern = \
        re.compile(rf"{re.escape(style_rel)}\?plain=1#L\d+(?:-L\d+)?")

    def replace_style_links(text: str) -> str:
        result: list[str] = []
        last = 0
        for match in style_pattern.finditer(text):
            start, end = match.span()
            result.append(text[last:start])
            link = match.group(0)
            prefix_start = max(0, start - len(style_blob_prefix))
            if text[prefix_start:start] == style_blob_prefix:
                result.append(link)
            else:
                result.append(f"{style_blob_prefix}{link.lstrip('./')}")
            last = end
        result.append(text[last:])
        return "".join(result)

    rewritten = replace_style_links(rewritten)

    # Ensure any doc blob URLs use the PR head SHA (style guide stays on main)
    if sha:
        doc_prefix_regex = re.compile(rf"{re.escape(blob_prefix)}([^/]+)/([^\s)]+)")

        def fix_doc(match: re.Match[str]) -> str:
            base = match.group(1)
            remainder = match.group(2)
            target = "main" if remainder.startswith(style_rel) else sha
            if base == target:
                return match.group(0)
            return f"{blob_prefix}{target}/{remainder}"

        rewritten = doc_prefix_regex.sub(fix_doc, rewritten)

    sys.stdout.write(rewritten)


if __name__ == "__main__":
    main()


================================================
FILE: .github/scripts/tvm-instruction-gen.py
================================================
import json
import os
import sys
import textwrap

import mistletoe

# Repository root, relative to .github/scripts/ where this file lives.
WORKSPACE_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir))
MDX_PATH = os.path.join(WORKSPACE_ROOT, "tvm", "instructions.mdx")
START_MARK = "{/* STATIC_START tvm_instructions */}"
END_MARK = "{/* STATIC_END tvm_instructions */}"


def humanize_category(key):
    if not key:
        return "Uncategorized"
    words = [p.capitalize() for p in key.replace("_", " ").split() if p]
    return " ".join(words) or "Uncategorized"


def render_alias(alias):
    return f"""
- `{alias['mnemonic']}`<br/>
{textwrap.indent(alias['description'].replace('\n', '<br/>'), "  ")}
""".strip()


def render_instruction(insn, aliases):
    return f"""
#### `{insn['doc']['opcode']}` {insn['mnemonic']}

{insn['doc']['description'].replace('\n', '<br/>')}<br/>
**Category:** {humanize_category(insn['doc']['category'])} ({insn['doc']['category']})<br/>

```fift Fift
{insn['doc']['fift']}
```

{'**Aliases**:' if aliases else ''}
{'\n'.join(render_alias(alias) for alias in aliases)}
""".strip()


def render_static_mdx(spec):
    return '\n\n'.join(
        render_instruction(insn, [alias for alias in spec['aliases'] if alias['alias_of'] == insn['mnemonic']])
        for insn in spec['instructions']
    )


def inject_into_mdx(mdx_path, new_block):
    with open(mdx_path, "r", encoding="utf-8") as fh:
        src = fh.read()
    start_idx = src.find(START_MARK)
    end_idx = src.find(END_MARK) + len(END_MARK)
    if start_idx == -1 or end_idx == -1 or end_idx <= start_idx:
        raise RuntimeError("Static markers not found or malformed in instructions.mdx")
    # Preserve everything outside markers; replace inside with marker + newline + content + newline + end marker
    before = src[: start_idx + len(START_MARK)]
    after = src[end_idx:]
    # Hide the static block in the rendered page to avoid duplicating the
    # interactive table. Keeping it in the DOM still enables full-text search.
    wrapped_block = f'<div style="display:none">\n{new_block}\n</div>'
    replacement = f"{START_MARK}\n{wrapped_block}\n{END_MARK}"
    updated = before + replacement[len(START_MARK):] + after
    with open(mdx_path, "w", encoding="utf-8") as fh:
        fh.write(updated)


def generate(spec_input_path, spec_output_path, instructions_mdx_path):
    with open(spec_input_path) as f:
        spec = json.load(f)
    static_block = render_static_mdx(spec)
    inject_into_mdx(instructions_mdx_path, static_block)
    update_doc_cp0(spec, spec_output_path)


def update_doc_cp0(spec, spec_output_path):
    for insn in spec['instructions']:
        doc = insn['doc']
        doc['description'] = mistletoe.markdown(doc['description'])
    for alias in spec['aliases']:
        alias['description'] = mistletoe.markdown(alias['description'])
    with open(spec_output_path, 'w', encoding='utf-8') as f:
        json.dump(spec, f, ensure_ascii=False, separators=(',', ':'))


if __name__ == "__main__":
    if len(sys.argv) != 4:
        print(f"Usage: {sys.argv[0]} <spec_input> <spec_output> <instructions_mdx>")
        sys.exit(1)
    generate(sys.argv[1], sys.argv[2], sys.argv[3])


================================================
FILE:
.github/workflows/bouncer.yml
================================================
name: 🏀 Bouncer # aka 🚪 Supervisor

env:
  # additions only
  MAX_ADDITIONS: 600
  # many target issues usually mean bigger pull requests
  MAX_ISSUES_PER_PR: 3

on:
  pull_request_target: # do NOT use actions/checkout!
    # any branches
    branches: ["**"]
    # on creation, on new commits, and description edits
    types: [opened, synchronize, edited]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}-bouncer
  cancel-in-progress: true

permissions:
  contents: read
  pull-requests: write

jobs:
  enforce-smaller-requests:
    name: "PR is manageable"
    runs-on: ubuntu-latest
    steps:
      - name: Check whether the number of additions across filtered files is within the threshold
        id: stats
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const maxAdditions = Number(process.env.MAX_ADDITIONS ?? '600');
            await exec.exec('sleep 0.5s');
            const { data: files } = await github.rest.pulls.listFiles({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.payload.pull_request.number,
              per_page: 100,
            });
            const filtered = files.filter((f) =>
              f.filename.match(/\.mdx?$/) !== null &&
              !f.filename.startsWith('tvm/instructions.mdx') &&
              !f.filename.startsWith('snippets'),
            );
            const additions = filtered.reduce((acc, it) => acc + it.additions, 0);
            if (additions > maxAdditions) {
              core.setOutput('trigger', 'true');
            } else {
              core.setOutput('trigger', 'false');
            }
      - name: ${{ steps.stats.outputs.trigger == 'true' && 'An opened PR is too big to be reviewed at once!' || '...' }}
        if: github.event.action == 'opened' && steps.stats.outputs.trigger == 'true'
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            await exec.exec('sleep 0.5s');
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              body: [
                'Thank you for the contribution!',
                [
                  'Unfortunately, it is too large, with over ${{ env.MAX_ADDITIONS }} added lines,',
                  'excluding some generated or otherwise special files.',
                  'Thus, this pull request is challenging to review and iterate on.',
                ].join(' '),
                [
                  'Please split the PR into several smaller ones and consider',
                  'reverting any unrelated changes, writing less, or approaching',
                  'the problem in the issue from a different angle.',
                ].join(' '),
                [
                  'I look forward to your next submissions.',
                  'If you still intend to proceed as is, then you are at the mercy of the reviewers.',
                ].join(' '),
              ].join('\n\n'),
            });
            process.exit(1);
      - name: ${{ steps.stats.outputs.trigger == 'true' && 'Some change in the PR made it too big!' || '...' }}
        if: github.event.action != 'opened' && steps.stats.outputs.trigger == 'true'
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            core.setFailed([
              [
                'This pull request has gotten over ${{ env.MAX_ADDITIONS }} added lines,',
                'which can be challenging to review and iterate on.',
                'Please, decrease the size of this PR or consider splitting it into several smaller requests.'
              ].join(' '),
              [
                'Until then, the CI will be soft-marked as failed.',
                'If you still intend to proceed as is, then you are at the mercy of the reviewers.',
              ].join(' '),
            ].join('\n\n'));
            process.exit(1);

  enforce-better-descriptions:
    name: "Title and description"
    runs-on: ubuntu-latest
    steps:
      # pr title check
      - name: "Check that the title conforms to the simplified version of Conventional Commits"
        if: ${{ !cancelled() }}
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const title = context.payload.pull_request.title;
            const types = 'feat|fix|chore|refactor|test';
            const pattern = new RegExp(`^(revert: )?(${types})(?:\\/(${types}))?!?(\\([^\\)]+\\))?!?: [a-zA-Z].{1,200}`);
            const matches = title.match(pattern) !== null;
            if (!matches) {
              core.setFailed([
                'Title of this pull request does not conform to the simplified version of Conventional Commits used in the documentation',
                `Received: ${title}`,
                'Expected to find a type of: feat, fix, chore, refactor, or test, followed by the parts outlined here: https://www.conventionalcommits.org/en/v1.0.0/',
              ].join('\n'));
              process.exit(1);
            }
      # pr close issue limits
      - name: "Check that there are no more than ${{ env.MAX_ISSUES_PER_PR }} linked issues"
        if: ${{ !cancelled() && github.event.pull_request.user.login != 'dependabot[bot]' }}
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const maxIssuesAllowed = Number(process.env.MAX_ISSUES_PER_PR ?? '3');
            const body = context.payload.pull_request.body || '';
            const closePatterns = /\b(?:close[sd]?|fixes|fixed|fix|resolve[sd]?|towards):?\s+(?:https?:\/\/github\.com\/|[a-z0-9\-\_\/]*#\d+)/gi;
            const issueCount = [...body.matchAll(closePatterns)].length;
            if (issueCount > maxIssuesAllowed) {
              core.setFailed(`This pull request attempts to close ${issueCount} issues, while the maximum number allowed is ${maxIssuesAllowed}.`);
              process.exit(1);
            }
            const changelogPattern = /\bchange\s*log:?\s+https?:\/\/.*?\.mdx?/gi;
            const hasChangelog = body.match(changelogPattern) !== null;
            if (issueCount === 0 && !hasChangelog) {
              core.setFailed([
                'This pull request does not resolve any issues — no close patterns found in the description.',
                'Please, specify an issue by writing `Closes #that-issue-number` in the description of this PR.',
                'If there is no such issue, create a new one: https://github.com/ton-org/docs/issues/1366#issuecomment-3560650817',
                '\nIf this PR updates descriptions in accordance with a new release of a tool,',
                'provide a changelog by writing `Changelog https://....md` in the description of this PR.',
              ].join(' '));
              process.exit(1);
            }

================================================
FILE: .github/workflows/commander.yml
================================================
# Listens to new comments with /commands and acts accordingly
name: 📡 Commander

env:
  HUSKY: 0
  NODE_VERSION: 20

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}-commander
  cancel-in-progress: true

permissions:
  contents: read
  pull-requests: write

jobs:
  fmt:
    name: "Fix formatting"
    runs-on: ubuntu-latest
    if: |
      (
        github.event_name == 'pull_request_review_comment' ||
        (
          github.event_name == 'issue_comment' &&
          github.event.issue.pull_request != null
        )
      ) &&
      contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.comment.author_association) &&
      (startsWith(github.event.comment.body, '/fmt ') ||
      github.event.comment.body == '/fmt')
    steps:
      # This is done cautiously to confirm whether the comment comes from a PR that is a fork.
      # If so, all other steps are skipped and nothing important is run afterwards.
      - name: Gather PR context in env variables
        env:
          FROM_PR: ${{ github.event.pull_request.number }}
          FROM_ISSUE: ${{ github.event.issue.number }}
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const fs = require('node:fs');
            const prNumRaw = process.env.FROM_PR ?? process.env.FROM_ISSUE ?? '';
            const prNum = Number(prNumRaw);
            if (isNaN(prNum) || prNum <= 0 || prNum >= 1e20) {
              console.error(`PR number was not provided or is invalid: ${prNumRaw}`);
              process.exit(1);
            }
            core.exportVariable('PR_NUMBER', prNumRaw);
            const { data: pr } = await github.rest.pulls.get({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: prNum,
            });
            core.exportVariable('BASE_REF', pr.base.ref);
            core.exportVariable('HEAD_REF', pr.head.ref);
            const headRepo = pr.head.repo?.full_name ?? '';
            const thisRepo = `${context.repo.owner}/${context.repo.repo}`;
            // Treat an unknown head repository as a fork, to stay on the safe side
            if (headRepo === '' || headRepo !== thisRepo) {
              core.exportVariable('IS_FORK', 'true');
              core.notice('This job does not run in forks for a vast number of reasons. Please, apply the necessary fixes yourself.');
            } else {
              core.exportVariable('IS_FORK', 'false');
            }
      - name: Checkout the PR branch
        if: env.IS_FORK != 'true'
        uses: actions/checkout@v4
        with:
          ref: ${{ env.HEAD_REF }}
          fetch-depth: 0
      - name: Setup Node.js
        if: env.IS_FORK != 'true'
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"
      - name: Install dependencies
        if: env.IS_FORK != 'true'
        run: |
          corepack enable
          npm ci
      - name: Get changed MDX and Markdown files
        if: env.IS_FORK != 'true'
        id: changed-files
        uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47
        with:
          files: |
            **.md
            **.mdx
          separator: " "
          base_sha: ${{ env.BASE_REF }}
      - name: Apply formatting
        if: env.IS_FORK != 'true'
        id: fix-fmt
        env:
          ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const files = (process.env.ALL_CHANGED_FILES ?? '')
              .trim().split(' ').filter(Boolean).filter((it) => it.match(/\.mdx?$/) !== null);
            if (files.length === 0) {
              console.log('\nNo such files affected!');
              process.exit(0);
            }
            try {
              await exec.exec('npm', ['run', 'check:fmt:some', '--', ...files], {
                silent: true, // >/dev/null 2>&1
              });
              console.log('\nNo issues');
              core.setOutput('changes', 'false');
            } catch (_) {
              console.log('\nFound issues, fixing...');
              await exec.exec('npm', ['run', 'fmt:some', '--', ...files], {
                silent: true, // >/dev/null 2>&1
              });
              core.setOutput('changes', 'true');
            }
      - name: Commit changes, if any
        if: env.IS_FORK != 'true' && steps.fix-fmt.outputs.changes == 'true'
        uses: stefanzweifel/git-auto-commit-action@28e16e81777b558cc906c8750092100bbb34c5e3 # v7.0.0
        with:
          commit_message: "fix: formatting"
          branch: ${{ env.HEAD_REF }}

================================================
FILE: .github/workflows/generate-api-tables.yml
================================================
name: Generate API Tables

env:
  PYTHON_VERSION: "3.11"
  NODE_VERSION: "20"

on:
  push:
    paths:
      - 'ecosystem/api/toncenter/v2.json'
      - 'ecosystem/api/toncenter/v3.yaml'
    branches:
      - main

permissions:
  contents: write

jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"
      - name: Install dependencies
        run: |
          pip install pyyaml==6.0.3
          corepack enable
          npm ci
      - name: Generate tables
        run: |
          python3 .github/scripts/generate-v2-api-table.py
          python3 .github/scripts/generate-v3-api-table.py
          npm run fmt:some -- ecosystem/api/toncenter/v2/overview.mdx ecosystem/api/toncenter/v3/overview.mdx
      - name: Commit changes
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add ecosystem/api/toncenter/v2/overview.mdx ecosystem/api/toncenter/v3/overview.mdx
          git diff --staged --quiet || git commit -m "chore(bot): auto-generate API tables"
          git push

================================================
FILE: .github/workflows/instructions.yml
================================================
name: 🕘 Instructions update

on:
  schedule:
    - cron: '17 3 * * *'
  workflow_dispatch:
    inputs:
      source_branch:
        description: 'Branch in ton-org/tvm-spec to fetch cp0.json from'
        required: false
        default: 'master'
        type: string

permissions:
  contents: write
  pull-requests: write

jobs:
  fetch-and-release:
    if: ${{ github.event_name == 'workflow_dispatch' || github.repository == 'ton-org/docs' }}
    runs-on: ubuntu-latest
    env:
      SOURCE_BRANCH: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.source_branch || 'master' }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # needed for pushing later
      - name: Set up Git
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.x'
      - name: Install Python dependencies
        run: pip install mistletoe==1.5.0
      - name: Clone ton-org/tvm-spec
        run: git clone https://github.com/ton-org/tvm-spec && cd tvm-spec && git checkout $SOURCE_BRANCH
      - name: Update instructions.mdx and cp0.json
        # cp0.txt is a workaround: mintlify gives 404 for url /resources/tvm/cp0.json -_-
        run: python3 .github/scripts/tvm-instruction-gen.py tvm-spec/cp0.json resources/tvm/cp0.txt tvm/instructions.mdx
      - name: Check for changes
        id: git-diff
        run: |
          git add resources/tvm/cp0.txt tvm/instructions.mdx
          CHANGED_FILES=$(git diff --cached --name-only | tr '\n' ' ')
          echo "changed=$CHANGED_FILES" >> $GITHUB_OUTPUT
      - name: Create Pull Request if needed
        if: ${{ steps.git-diff.outputs.changed != '' }}
        id: cpr
        uses: peter-evans/create-pull-request@c5a7806660adbe173f04e3e038b0ccdcd758773c # v6
        with:
          commit-message: "feat: update TVM instructions list"
          title: "feat: update TVM instructions list"
          branch: "update-spec"
          add-paths: |
            resources/tvm/cp0.txt
            tvm/instructions.mdx
          token: ${{ secrets.GITHUB_TOKEN }}

================================================
FILE: .github/workflows/linter.yml
================================================
name: 💅 Linting suite

env:
  HUSKY: 0
  NODE_VERSION: 20

on:
  pull_request:
    branches: ["**"]
  workflow_dispatch:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}-linter
  cancel-in-progress: true

permissions:
  contents: read
  pull-requests: write

jobs:
  format-check:
    name: "Formatting"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"
      - name: Install dependencies
        run: |
          corepack enable
          npm ci
      - name: Get changed MDX and Markdown files
        id: changed-files
        uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47
        with:
          files: |
            **.md
            **.mdx
          separator: " "
      - name: Check formatting of MDX and Markdown files
        id: check-fmt
        env:
          ALL_CHANGED_FILES: ${{
            steps.changed-files.outputs.all_changed_files }}
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const files = (process.env.ALL_CHANGED_FILES ?? '')
              .trim().split(' ').filter(Boolean).filter((it) => it.match(/\.mdx?$/) !== null);
            if (files.length === 0) {
              console.log('\nNo such files affected!');
              process.exit(0);
            }
            console.log('\nChecking formatting of the following MDX and Markdown files affected by this PR:\n');
            for (const file of files) {
              console.log(`- ${file}`);
            }
            try {
              await exec.exec('npm', ['run', 'check:fmt:some', '--', ...files], {
                silent: true, // >/dev/null 2>&1
              });
            } catch (_) {
              // Comment right in the actions output
              console.log('\n\x1b[31mError:\x1b[0m Some files are not properly formatted!');
              console.log('1. Install necessary dependencies: \x1b[31mnpm ci\x1b[0m');
              console.log(`2. Run this command to fix the issues: \x1b[31mnpm run fmt:some -- ${files.join(' ')}\x1b[0m`);
              // Rethrow the exit code of the failed formatting check
              core.setFailed('Some files are not properly formatted!');
              process.exit(1);
            }
      - name: Hide prior PR comments and issue a new one in case of failure
        if: |
          (
            !cancelled() &&
            steps.changed-files.conclusion == 'success' &&
            github.event_name == 'pull_request' &&
            (
              github.event.pull_request.head.repo.fork == false ||
              github.event.pull_request.head.repo.full_name == github.repository
            )
          )
        env:
          ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
          SUCCESS: ${{ steps.check-fmt.conclusion == 'failure' && 'false' || 'true' }}
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const { hidePriorCommentsWithPrefix, createComment } = await import('${{ github.workspace }}/.github/scripts/common.mjs');
            const success = JSON.parse(process.env.SUCCESS ?? 'false');
            const files = (process.env.ALL_CHANGED_FILES ?? '')
              .trim().split(' ').filter(Boolean).filter((it) => it.match(/\.mdx?$/) !== null);
            const comment = [
              'To fix the **formatting** issues:\n',
              '1. Install necessary dependencies: `npm ci`',
              '2. Then, run this command:',
              '   ```shell',
              `   npm run fmt:some -- ${files.join(' ')}`,
              '   ```',
              '\nAlternatively, a maintainer can comment /fmt in this PR to auto-apply fixes in a new commit from the bot.',
            ].join('\n');
            const prefix = comment.slice(0, 30);
            await hidePriorCommentsWithPrefix({ github, context, exec, prefix, resolved: success });
            // Create a new PR comment in case of a new failure
            if (!success) {
              await createComment({ github, context, exec, body: comment });
            }

  spell-check:
    name: "Spelling"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        # The fetch-depth is not set to 0 to prevent the cspell-action
        # from misfiring on files that are in main but not on this PR branch
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"
      - name: Install dependencies
        run: |
          corepack enable
          npm ci
      - name: Run CSpell on changed files
        # This action also annotates the PR
        uses: streetsidesoftware/cspell-action@v7
        with:
          check_dot_files: explicit
          suggestions: true
          config: ".cspell.jsonc"

  link-check:
    name: "Links: broken, navigation, redirects"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"
      - name: Install dependencies
        run: |
          corepack enable
          npm ci
      # Broken
      - name: Check broken links
        if: ${{ !cancelled() }}
        run: npm run check:links
      # Navigation
      - name: Check uniqueness of navigation paths in docs.json
        if: ${{ !cancelled() }}
        run: npm run check:navigation -- unique
      - name: Check existence of navigation .mdx pages in docs.json
        if: ${{ !cancelled() }}
        run: npm run check:navigation -- exist
      - name: Check coverage of .mdx pages by docs.json
        if: ${{
            !cancelled() }}
        run: npm run check:navigation -- cover
      # Redirects
      - name: Check uniqueness of redirect sources in docs.json
        if: ${{ !cancelled() }}
        run: npm run check:redirects -- unique
      - name: Check existence of redirect destinations in docs.json
        if: ${{ !cancelled() }}
        run: npm run check:redirects -- exist
      - name: Check redirects against the previous TON Documentation
        if: ${{ !cancelled() }}
        run: npm run check:redirects -- previous
      - name: Check redirects against the upstream docs.json structure
        if: ${{ !cancelled() }}
        run: npm run check:redirects -- upstream

================================================
FILE: .github/workflows/pitaya.yml
================================================
name: 🤖 AI review

on:
  pull_request:
    types: [opened, ready_for_review]
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  pull_request_target:
    types: [opened]

permissions:
  contents: read
  pull-requests: write
  issues: write

jobs:
  fork-pr-note:
    if: github.event_name == 'pull_request_target' && github.event.action == 'opened' && github.event.pull_request.head.repo.full_name != github.repository
    runs-on: ubuntu-latest
    steps:
      - name: Comment external PR use /review
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: |
          set -euo pipefail
          PR_NUMBER="${{ github.event.pull_request.number }}"
          API="https://api.github.com/repos/${{ github.repository }}/issues/${PR_NUMBER}/comments"
          BODY=$(cat <<'TXT'
          Skipping AI review because this PR is from a fork.
          A maintainer can start the review by commenting /review in this PR.
          TXT
          )
          jq -n --arg body "$BODY" '{body:$body}' > payload.json
          curl -sS -X POST "$API" \
            -H "Authorization: Bearer ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github+json" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            -H "Content-Type: application/json" \
            -d @payload.json >/dev/null

  pr-review:
    concurrency:
      group: pitaya-ai-review-${{ github.event.pull_request.number || github.event.issue.number || github.run_id }}
      cancel-in-progress: true
    # Run on:
    # - PR events when ready_for_review or opened as non-draft
    # - Issue comments only when it's a PR thread, command is /review, and commenter is trusted
    if: |
      (
        github.event_name == 'pull_request' &&
        ((github.event.action == 'ready_for_review') ||
         (github.event.action == 'opened' && github.event.pull_request.draft == false)) &&
        github.event.pull_request.head.repo.full_name == github.repository
      ) ||
      (
        github.event_name == 'issue_comment' &&
        github.event.issue.pull_request != null &&
        (github.event.comment.body == '/review' || startsWith(github.event.comment.body, '/review ')) &&
        (
          github.event.comment.author_association == 'OWNER' ||
          github.event.comment.author_association == 'MEMBER' ||
          github.event.comment.author_association == 'COLLABORATOR'
        )
      ) ||
      (
        github.event_name == 'pull_request_review_comment' &&
        (github.event.comment.body == '/review' || startsWith(github.event.comment.body, '/review ')) &&
        (
          github.event.comment.author_association == 'OWNER' ||
          github.event.comment.author_association == 'MEMBER' ||
          github.event.comment.author_association == 'COLLABORATOR'
        )
      )
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: PR context
        env:
          GH_TOKEN: ${{ github.token }}
          PR_FROM_PR: ${{ github.event.pull_request.number }}
          PR_FROM_ISSUE: ${{ github.event.issue.number }}
        run: |
          set -euo pipefail
          PR_NUMBER="${PR_FROM_PR:-}"
          if [ -z "${PR_NUMBER:-}" ] || [ "$PR_NUMBER" = "null" ]; then
            PR_NUMBER="${PR_FROM_ISSUE:-}"
          fi
          if [ -z "${PR_NUMBER:-}" ] || [ "$PR_NUMBER" = "null" ]; then
            echo "PR number not provided." >&2
            exit 1
          fi
          echo "PR_NUMBER=$PR_NUMBER" >> $GITHUB_ENV
          gh api repos/${{ github.repository }}/pulls/${PR_NUMBER} > pr.json
          echo "BASE_REF=$(jq -r '.base.ref' pr.json)" >> $GITHUB_ENV
          echo "HEAD_REF=$(jq -r '.head.ref' pr.json)" >> $GITHUB_ENV
          BASE_REPO="${{ github.repository }}"
          HEAD_REPO="$(jq -r '.head.repo.full_name // ""' pr.json)"
          if [ -n "$HEAD_REPO" ] && [ "$HEAD_REPO" != "$BASE_REPO" ]; then
            echo "IS_FORK=true" >> $GITHUB_ENV
          else
            echo "IS_FORK=false" >> $GITHUB_ENV
          fi
      - name: React 👀 on PR
        env:
          GH_TOKEN: ${{ github.token }}
          REPO: ${{ github.repository }}
        run: |
          set -euo pipefail
          rid=""
          if ! rid=$(gh api \
            -X POST \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            "/repos/${REPO}/issues/${PR_NUMBER}/reactions" \
            -f content=eyes \
            --jq '.id // empty' 2>/dev/null); then
            echo "::warning::Failed to add 👀 reaction to PR ${PR_NUMBER}." >&2
          fi
          if [ -n "${rid:-}" ]; then
            echo "PR_REACTION_EYES_ID=$rid" >> "$GITHUB_ENV"
          fi
      - name: React 👀 on comment
        if: github.event_name == 'issue_comment'
        env:
          GH_TOKEN: ${{ github.token }}
          REPO: ${{ github.repository }}
          COMMENT_ID: ${{ github.event.comment.id }}
        run: |
          set -euo pipefail
          rid=""
          if ! rid=$(gh api \
            -X POST \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            "/repos/${REPO}/issues/comments/${COMMENT_ID}/reactions" \
            -f content=eyes \
            --jq '.id // empty' 2>/dev/null); then
            echo "::warning::Failed to add 👀 reaction to comment ${COMMENT_ID}." >&2
          fi
          if [ -n "${rid:-}" ]; then
            echo "ISSUE_COMMENT_REACTION_EYES_ID=$rid" >> "$GITHUB_ENV"
          fi
      - name: React 👀 on inline comment
        if: github.event_name == 'pull_request_review_comment'
        env:
          GH_TOKEN: ${{ github.token }}
          REPO: ${{ github.repository }}
          COMMENT_ID: ${{ github.event.comment.id }}
        run: |
          set -euo pipefail
          rid=""
          if ! rid=$(gh api \
            -X POST \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            "/repos/${REPO}/pulls/comments/${COMMENT_ID}/reactions" \
            -f content=eyes \
            --jq '.id // empty' 2>/dev/null); then
            echo "::warning::Failed to add 👀 reaction to review comment ${COMMENT_ID}." >&2
          fi
          if [ -n "${rid:-}" ]; then
            echo "REVIEW_COMMENT_REACTION_EYES_ID=$rid" >> "$GITHUB_ENV"
          fi
      - name: Checkout PR head
        run: |
          set -euo pipefail
          git fetch origin "pull/${PR_NUMBER}/head:pr_head"
          git checkout -B pr_head pr_head
      - name: Fetch branches
        run: git fetch origin "+refs/heads/*:refs/remotes/origin/*"
      - name: Ensure base branch
        run: |
          BASE_REF="${BASE_REF:-main}"
          if ! git show-ref --verify --quiet "refs/heads/${BASE_REF}"; then
            git branch --track "${BASE_REF}" "origin/${BASE_REF}" || true
          fi
      - name: Use repo scripts
        if: env.IS_FORK != 'true'
        run: |
          set -euo pipefail
          echo "USING_TRUSTED_CI_SCRIPTS=$GITHUB_WORKSPACE/.github/scripts" >> $GITHUB_ENV
      - name: Use base scripts for forks
        if: env.IS_FORK == 'true'
        run: |
          set -euo pipefail
          mkdir -p "$RUNNER_TEMP/ai-ci"
          git show "$BASE_REF":.github/scripts/build_review_instructions.py > "$RUNNER_TEMP/ai-ci/build_review_instructions.py"
          git show "$BASE_REF":.github/scripts/build_review_payload.py > "$RUNNER_TEMP/ai-ci/build_review_payload.py"
          echo "USING_TRUSTED_CI_SCRIPTS=$RUNNER_TEMP/ai-ci" >> $GITHUB_ENV
      - name: Detect docs changes
        run: |
          set -euo pipefail
          # Compare PR head against BASE_REF and look for docs changes
          CHANGED=$(git diff --name-only "$BASE_REF"...pr_head | grep -E '(\.(md|mdx)$|^docs\.json$)' || true)
          if [ -z "$CHANGED" ]; then
            echo "DOCS_CHANGED=false" >> $GITHUB_ENV
            echo "No docs (.md, .mdx, docs.json) changes detected; skipping AI review." >&2
          else
            echo "DOCS_CHANGED=true" >> $GITHUB_ENV
            echo "$CHANGED" | sed 's/^/- /' >&2
          fi
      - name: Comment no docs changes
        if: env.DOCS_CHANGED != 'true'
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: |
          set -euo pipefail
          API="https://api.github.com/repos/${{ github.repository }}/issues/${PR_NUMBER}/comments"
          BODY=$(cat <<'TXT'
          Skipping AI review because no docs changes in md, mdx, or docs.json
          TXT
          )
          jq -n --arg body "$BODY" '{body:$body}' > payload.json
          curl -sS -X POST "$API" \
            -H "Authorization: Bearer ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github+json" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            -H "Content-Type: application/json" \
            -d @payload.json >/dev/null
      - name: Check secrets
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        env:
          OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
        run: |
          if [ -z "${OPENROUTER_API_KEY:-}" ]; then
            echo "OPENROUTER_API_KEY is not set. Add it to repository secrets." >&2
            exit 2
          fi
      - name: Setup Python
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        uses: actions/setup-python@v5
        with:
          python-version: "3.13"
      - name: Setup uv
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        uses: astral-sh/setup-uv@v3
      - name: Checkout Pitaya
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        uses: actions/checkout@v4
        with:
          repository: tact-lang/pitaya
          path: pitaya-src
      - name: Install Pitaya deps
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        working-directory: pitaya-src
        run: uv sync
      - name: Build agent image
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        run: docker build -t pitaya-agents:latest pitaya-src
      - name: Run Pitaya review
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        working-directory: pitaya-src
        env:
          OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
          OPENROUTER_BASE_URL: https://openrouter.ai/api/v1
        run: |
          REVIEW_INSTRUCTIONS=$(python3 "$USING_TRUSTED_CI_SCRIPTS/build_review_instructions.py")
          uv run pitaya "Review this pull request" \
            --repo "$GITHUB_WORKSPACE" \
            --base-branch pr_head \
            --strategy pr-review \
            -S reviewers=2 \
            -S ci_fail_policy=never \
            -S base_branch="$BASE_REF" \
            -S include_branches="pr_head,$BASE_REF" \
            -S review_instructions="$REVIEW_INSTRUCTIONS" \
            --plugin codex \
            --model "openai/gpt-5.1" \
            --no-tui \
            --verbose
      - name: Post review
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        working-directory: pitaya-src
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: |
          set -euo pipefail
          RUN_DIR="$(ls -td .pitaya/results/run_* 2>/dev/null | head -n1)"
          if [ -z "${RUN_DIR:-}" ] || [ ! -d "$RUN_DIR" ]; then
            echo "No results directory found" >&2
            exit 1
          fi
          # Sidecar must exist (selection may be empty when approving clean PRs)
          SIDECAR="$RUN_DIR/review/index.json"
          if [ ! -f "$SIDECAR" ]; then
            echo "Sidecar not found: $SIDECAR" >&2
            exit 1
          fi
          COMMIT_ID="$(jq -r '.commit_id // empty' "$SIDECAR")"
          if [ -z "$COMMIT_ID" ]; then
            echo "commit_id missing in sidecar; aborting." >&2
            exit 1
          fi
          # Build review payload (summary + inline comments)
          INLINE_SEVERITIES="${INLINE_SEVERITIES:-HIGH}" # comma-separated; default HIGH only
          MAX_COMMENTS="${MAX_COMMENTS:-40}"
          python3 "$USING_TRUSTED_CI_SCRIPTS/build_review_payload.py" \
            --run-dir "$RUN_DIR" \
            --repo "${{ github.repository }}" \
            --sha "$COMMIT_ID" \
            --severities "${INLINE_SEVERITIES}" \
            --max-comments "${MAX_COMMENTS}" > review_payload.json
          API="https://api.github.com/repos/${{ github.repository }}/pulls/${PR_NUMBER}/reviews"
          COMMENTS=$(jq -r '.comments | length' review_payload.json)
          BODY_TEXT=$(jq -r '.body // empty' review_payload.json)
          if [ "${BODY_TEXT// }" = "" ]; then
            BODY_TEXT="No documentation issues detected."
            jq --arg body "$BODY_TEXT" '.body = $body' review_payload.json > review_payload.tmp && mv review_payload.tmp review_payload.json
          fi
          echo "Submitting PR review (comments: $COMMENTS)..."
          HTTP_CODE=$(curl -sS -o response.json -w "%{http_code}" -X POST "$API" \
            -H "Authorization: Bearer ${GITHUB_TOKEN}" \
            -H "Accept: application/vnd.github+json" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            -H "Content-Type: application/json" \
            -d @review_payload.json || true)
          echo "GitHub API HTTP: ${HTTP_CODE:-}"
          if ! [[ "$HTTP_CODE" =~ ^[0-9]{3}$ ]] || [ "$HTTP_CODE" -lt 200 ] || [ "$HTTP_CODE" -ge 300 ]; then
            echo "Response body:"; cat response.json || true; echo
            # Attempt to submit inline comments individually so good ones still land.
            COMMENT_API_INLINE="https://api.github.com/repos/${{ github.repository }}/pulls/${PR_NUMBER}/comments"
            BODY_TEXT=$(jq -r '.body // ""' review_payload.json)
            COMMIT_FOR_COMMENTS=$(jq -r '.commit_id // ""' review_payload.json)
            GOOD=0; BAD=0
            BAD_SUMMARY_FILE=$(mktemp)
            : > "$BAD_SUMMARY_FILE"
            while IFS= read -r c; do
              TMP=$(mktemp)
              echo "$c" | jq --arg commit "$COMMIT_FOR_COMMENTS" '{
                body: .body,
                commit_id: ($commit // .commit_id // ""),
                path: .path
              } + (if has("line") then {line:.line, side:(.side//"RIGHT")} else {} end)
                + (if has("start_line") then {start_line:.start_line, start_side:(.start_side//"RIGHT")} else {} end)' > "$TMP"
              HTTP_COMMENT=$(curl -sS -o response_comment.json -w "%{http_code}" -X POST "$COMMENT_API_INLINE" \
                -H "Authorization: Bearer ${GITHUB_TOKEN}" \
                -H "Accept: application/vnd.github+json" \
                -H "X-GitHub-Api-Version: 2022-11-28" \
                -H "Content-Type: application/json" \
                -d @"$TMP" || true)
              if [[ "$HTTP_COMMENT" =~ ^2[0-9][0-9]$ ]]; then
                GOOD=$((GOOD+1))
              else
                BAD=$((BAD+1))
                PATH_LINE=$(echo "$c" | jq -r '"\(.path):L\(.start_line // .line // "?")-L\(.line // .start_line // "?")"')
                BODY_SNIP=$(echo "$c" | jq -r '.body')
                BODY_SNIP_FIRST6=$(printf "%s" "$BODY_SNIP" | head -n 6)
                BODY_SNIP_LINECOUNT=$(printf "%s\n" "$BODY_SNIP" | wc -l)
                {
                  echo "- ${PATH_LINE}"
                  printf "%s" "$BODY_SNIP_FIRST6" | sed 's/^/  /'
                  if [ "$BODY_SNIP_LINECOUNT" -gt 6 ]; then
                    echo "  …(truncated)"
                  fi
                  echo
                } >> "$BAD_SUMMARY_FILE"
              fi
              rm -f "$TMP" response_comment.json
            done < <(jq -c '.comments[]' review_payload.json)
            # Build fallback timeline comment containing intro + failed inline text (if any)
            COMMENT_API="https://api.github.com/repos/${{ github.repository }}/issues/${PR_NUMBER}/comments"
            FALLBACK_FILE=$(mktemp)
            {
              echo "$BODY_TEXT"
              echo
              echo "---"
              echo "Per-comment submission: ${GOOD} posted, ${BAD} failed."
              if [ "$BAD" -gt 0 ]; then
                echo
                echo "Unposted inline comments (raw text):"
                cat "$BAD_SUMMARY_FILE"
              fi
            } > "$FALLBACK_FILE"
            jq -n --arg body "$(cat "$FALLBACK_FILE")" '{body:$body}' > payload.json
            HTTP_CODE2=$(curl -sS -o response2.json -w "%{http_code}" -X POST "$COMMENT_API" \
              -H "Authorization: Bearer ${GITHUB_TOKEN}" \
              -H "Accept: application/vnd.github+json" \
              -H "X-GitHub-Api-Version: 2022-11-28" \
              -H "Content-Type: application/json" \
              -d @payload.json || true)
            echo "Fallback GitHub API HTTP: $HTTP_CODE2"; cat response2.json || true; echo
            if ! [[ "$HTTP_CODE2" =~ ^[0-9]{3}$ ]] || [ "$HTTP_CODE2" -lt 200 ] || [ "$HTTP_CODE2" -ge 300 ]; then
              echo "::error::Failed to submit PR review, per-comment comments, and fallback comment."
          >&2
              exit 1
            fi
            rm -f "$BAD_SUMMARY_FILE" "$FALLBACK_FILE"
          fi
      - name: Summary
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        working-directory: pitaya-src
        run: |
          set -euo pipefail
          RUN_DIR="$(ls -td .pitaya/results/run_* 2>/dev/null | head -n1)"
          if [ -z "${RUN_DIR:-}" ]; then
            exit 0
          fi
          SUMMARY_FILE="$RUN_DIR/summary.md"
          INTRO_FILE="$RUN_DIR/review/index.json"
          {
            echo "### Pitaya Review"
            if [ -f "$INTRO_FILE" ]; then
              INTRO=$(jq -r '.intro // empty' "$INTRO_FILE")
              SEL=$(jq -r '.selected_details | length' "$INTRO_FILE")
              EVENT=$(jq -r '.event // empty' "$INTRO_FILE"); if [ -z "$EVENT" ]; then EVENT=COMMENT; fi
              COMMIT=$(jq -r '.commit_id // empty' "$INTRO_FILE")
              echo ""
              if [ -n "$INTRO" ]; then
                echo "$INTRO"
                echo ""
              fi
              echo "- Outcome $EVENT"
              echo "- Inline suggestions $SEL"
              if [ -n "$COMMIT" ]; then
                echo "- Reviewed commit \`$COMMIT\`"
              fi
            fi
            if [ -f "$SUMMARY_FILE" ]; then
              echo ""
              echo "<details><summary>Run stats</summary>"
              echo ""
              tail -n +2 "$SUMMARY_FILE"
              echo "</details>"
            fi
          } >> "$GITHUB_STEP_SUMMARY"
      - name: Archive logs
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request')
        id: pitaya_artifacts
        working-directory: pitaya-src
        run: |
          set -euo pipefail
          if compgen -G ".pitaya/logs/run_*" >/dev/null || compgen -G ".pitaya/results/run_*" >/dev/null; then
            tar -czf pitaya-artifacts.tar.gz .pitaya/logs/run_* .pitaya/results/run_* 2>/dev/null || true
            echo "has_artifacts=true" >> "$GITHUB_OUTPUT"
          else
            echo "No Pitaya logs or results to archive." >&2
            echo "has_artifacts=false" >> "$GITHUB_OUTPUT"
          fi
      - name: Upload artifacts
        if: env.DOCS_CHANGED == 'true' && (env.IS_FORK != 'true' || github.event_name != 'pull_request') && steps.pitaya_artifacts.outputs.has_artifacts == 'true'
        uses: actions/upload-artifact@v4
        with:
          name: pitaya-logs-${{ github.run_id }}
          path: pitaya-src/pitaya-artifacts.tar.gz
          if-no-files-found: ignore
          retention-days: 7
      - name: Cleanup 👀
        if: always()
        env:
          GH_TOKEN: ${{ github.token }}
          REPO: ${{ github.repository }}
          PR_REACTION_EYES_ID: ${{ env.PR_REACTION_EYES_ID }}
          ISSUE_COMMENT_REACTION_EYES_ID: ${{ env.ISSUE_COMMENT_REACTION_EYES_ID }}
          REVIEW_COMMENT_REACTION_EYES_ID: ${{ env.REVIEW_COMMENT_REACTION_EYES_ID }}
          COMMENT_ID: ${{ github.event.comment.id }}
        run: |
          set -euo pipefail
          # Remove from PR
          if [ -n "${PR_REACTION_EYES_ID:-}" ]; then
            gh api -X DELETE \
              -H "X-GitHub-Api-Version: 2022-11-28" \
              "/repos/${REPO}/issues/${PR_NUMBER}/reactions/${PR_REACTION_EYES_ID}" \
              >/dev/null 2>&1 || echo "::warning::Failed to remove 👀 from PR ${PR_NUMBER}." >&2
          fi
          # Remove from issue comment
          if [ -n "${ISSUE_COMMENT_REACTION_EYES_ID:-}" ] && [ -n "${COMMENT_ID:-}" ]; then
            gh api -X DELETE \
              -H "X-GitHub-Api-Version: 2022-11-28" \
              "/repos/${REPO}/issues/comments/${COMMENT_ID}/reactions/${ISSUE_COMMENT_REACTION_EYES_ID}" \
              >/dev/null 2>&1 || echo "::warning::Failed to remove 👀 from issue comment ${COMMENT_ID}."
>&2 fi # Remove from review comment if [ -n "${REVIEW_COMMENT_REACTION_EYES_ID:-}" ] && [ -n "${COMMENT_ID:-}" ]; then gh api -X DELETE \ -H "X-GitHub-Api-Version: 2022-11-28" \ "/repos/${REPO}/pulls/comments/${COMMENT_ID}/reactions/${REVIEW_COMMENT_REACTION_EYES_ID}" \ >/dev/null 2>&1 || echo "::warning::Failed to remove 👀 from review comment ${COMMENT_ID}." >&2 fi ================================================ FILE: .gitignore ================================================ # Vale (spell and style checker) .vale/* !.vale/config/ !.vale/NONE/ # Miscellaneous .DS_Store # Editors .idea/ .vscode/ .helix/ .vim/ .nvim/ .emacs/ .emacs.d/ # Node.js node_modules/ # Python __pycache__ # Generated folders /stats/ ================================================ FILE: .husky/pre-push ================================================ ================================================ FILE: .prettierignore ================================================ *.mdx /ecosystem/api/toncenter/v2/ /ecosystem/api/toncenter/v3/ /ecosystem/api/toncenter/smc-index/ /LICENSE* ================================================ FILE: .remarkignore ================================================ # Ignore folders node_modules/ /pending/ # Ignore some whitepapers /languages/fift/whitepaper.mdx /foundations/whitepapers/tblkch.mdx /foundations/whitepapers/ton.mdx /foundations/whitepapers/tvm.mdx # Ignore some root files /index.mdx /LICENSE* # Ignore generated files and directories /tvm/instructions.mdx /ecosystem/api/toncenter/v2/ /ecosystem/api/toncenter/v3/ /ecosystem/api/toncenter/smc-index/ ================================================ FILE: .remarkrc.mjs ================================================ import remarkFrontmatter from 'remark-frontmatter'; import remarkGfm from 'remark-gfm'; import remarkMath from 'remark-math'; import remarkMdx from 'remark-mdx'; import unifiedConsistency from 'unified-consistency'; import stringWidth from 'string-width'; import { visitParents, SKIP } from 
'unist-util-visit-parents'; import { generate } from 'astring'; /** * @import {} from 'remark-stringify' * @type import('unified').Preset */ const remarkConfig = { settings: { bullet: '-', emphasis: '_', rule: '-', incrementListMarker: false, tightDefinitions: true, }, plugins: [ remarkFrontmatter, remarkMath, [ remarkGfm, { singleTilde: false, stringLength: stringWidth, }, ], [ remarkMdx, { printWidth: 20, }, ], function formatJsxElements() { return (tree, file) => { // a JSX element embedded in flow (block) visitParents(tree, 'mdxJsxFlowElement', (node, ancestors) => { try { if (!node.attributes) { return; } for (const attr of node.attributes) { if ( attr.type === 'mdxJsxAttribute' && attr.value?.type === 'mdxJsxAttributeValueExpression' && attr.value.data?.estree ) { const expr = attr.value; // Slightly trim single-line expressions if (typeof expr.value === 'string' && !expr.value.trim().includes('\n')) { expr.value = expr.value.trim(); delete expr.data.estree; continue; } // Multi-line expressions if (!expr.data) { continue; } const indent = ancestors.length === 0 ?
0 : ancestors.length; const formatted = generate(expr.data.estree.body[0].expression, { startingIndentLevel: indent, }); expr.value = formatted; delete expr.data.estree; } } } catch (_) { // NOTE: Let's silently do nothing — this is the default behavior anyways } }); // a JSX element embedded in text (span, inline) visitParents(tree, 'mdxJsxTextElement', (node) => { try { if (!node.attributes) { return SKIP; } for (const attr of node.attributes) { if ( attr.type === 'mdxJsxAttribute' && attr.value?.type === 'mdxJsxAttributeValueExpression' && attr.value.data?.estree ) { const expr = attr.value; if (!expr.data) { continue; } const formatted = generate(expr.data.estree.body[0].expression); expr.value = formatted; delete expr.data.estree; } } return SKIP; } catch (_) { // NOTE: Let's silently do nothing — this is the default behavior anyways } }); // a JavaScript expression embedded in flow (block) visitParents(tree, 'mdxFlowExpression', (node) => { try { if (!node.data) { return SKIP; } const formatted = generate(node.data.estree.body[0].expression); node.value = formatted; delete node.data.estree; return SKIP; } catch (_) { // NOTE: Let's silently do nothing — this is the default behavior anyways } }); // a JavaScript expression embedded in text (span, inline) visitParents(tree, 'mdxTextExpression', (node) => { try { if (!node.data) { return SKIP; } const formatted = generate(node.data.estree.body[0].expression); node.value = formatted; delete node.data.estree; return SKIP; } catch (_) { // NOTE: Let's silently do nothing — this is the default behavior anyways // console.error( // `Could not format a node in the file ${file.path}: ${JSON.stringify(node)}` // ); } }); }; }, unifiedConsistency, ], }; export default remarkConfig; ================================================ FILE: CODEOWNERS ================================================ ================================================ FILE: LICENSE-code ================================================ MIT License 
Copyright (c) 2025 TON Studio and others Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ================================================ FILE: LICENSE-docs ================================================ Attribution-ShareAlike 4.0 International ======================================================================= Creative Commons Corporation ("Creative Commons") is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an "as-is" basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible. 
Using Creative Commons Public Licenses Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses. Considerations for licensors: Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC- licensed material, or material used under an exception or limitation to copyright. More considerations for licensors: wiki.creativecommons.org/Considerations_for_licensors Considerations for the public: By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason--for example, because of any applicable exception or limitation to copyright--then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. 
More considerations for the public: wiki.creativecommons.org/Considerations_for_licensees ======================================================================= Creative Commons Attribution-ShareAlike 4.0 International Public License By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions. Section 1 -- Definitions. a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image. b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License. c. BY-SA Compatible License means a license listed at creativecommons.org/compatiblelicenses, approved by Creative Commons as essentially the equivalent of this Public License. d. 
Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights. e. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements. f. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material. g. License Elements means the license attributes listed in the name of a Creative Commons Public License. The License Elements of this Public License are Attribution and ShareAlike. h. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License. i. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license. j. Licensor means the individual(s) or entity(ies) granting rights under this Public License. k. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them. l. 
Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world. m. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning. Section 2 -- Scope. a. License grant. 1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to: a. reproduce and Share the Licensed Material, in whole or in part; and b. produce, reproduce, and Share Adapted Material. 2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions. 3. Term. The term of this Public License is specified in Section 6(a). 4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a) (4) never produces Adapted Material. 5. Downstream recipients. a. Offer from the Licensor -- Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License. b. 
Additional offer from the Licensor -- Adapted Material. Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter's License You apply. c. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material. 6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i). b. Other rights. 1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise. 2. Patent and trademark rights are not licensed under this Public License. 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties. Section 3 -- License Conditions. Your exercise of the Licensed Rights is expressly made subject to the following conditions. a. Attribution. 1. If You Share the Licensed Material (including in modified form), You must: a. 
retain the following if it is supplied by the Licensor with the Licensed Material: i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated); ii. a copyright notice; iii. a notice that refers to this Public License; iv. a notice that refers to the disclaimer of warranties; v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable; b. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and c. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License. 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information. 3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable. b. ShareAlike. In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply. 1. The Adapter's License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-SA Compatible License. 2. You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material. 3. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply. 
Section 4 -- Sui Generis Database Rights. Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material: a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database; b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database. For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights. Section 5 -- Disclaimer of Warranties and Limitation of Liability. a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. b. 
TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability. Section 6 -- Term and Termination. a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically. b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates: 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or 2. upon express reinstatement by the Licensor. For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License. c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License. d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License. Section 7 -- Other Terms and Conditions. a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed. b. 
Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License. Section 8 -- Interpretation. a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License. b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions. c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor. d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority. ======================================================================= Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. 
Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark "Creative Commons" or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses. Creative Commons may be contacted at creativecommons.org. ================================================ FILE: README.md ================================================ # TON Docs **[Follow the full quickstart guide](https://www.mintlify.com/docs/quickstart)** ## Development Install the [Mintlify CLI](https://www.npmjs.com/package/mint) to preview your documentation changes locally. To install it alongside the necessary dependencies, use the following command: ```shell npm ci ``` To start a local preview, run the following command at the root of your documentation, where your `docs.json` is located: ```shell npm start ``` View your local preview at `http://localhost:3000`. ### Spell checks > \[!NOTE] > Automatic spelling checks are performed for changed files in each Pull Request. To check spelling of **all** files, run: ```shell npm run check:spell # or simply: npm run spell ``` To check spelling of some **selected** files, run: ```shell npm run spell:some ``` #### Adding new words to the spellchecking dictionary The dictionaries (or vocabularies) for custom words are placed under `resources/dictionaries`. There, each dictionary describes additional allowed or invalid entries. 
The primary dictionary is `resources/dictionaries/custom.txt` — extend it in case a word exists in American English but was flagged by CSpell as invalid, or in cases where the word does not exist and shall be prohibited. For the latter, add words to `resources/dictionaries/ban.txt` with the `!` prefix when there are no clear correct replacements. If an existing two-letter word was flagged as forbidden, remove it from the `resources/dictionaries/two-letter-words-ban.txt` file. However, if a word happened to be a part of a bigger word, e.g., `CL` in `OpenCL`, do not ban it and instead add the bigger word to the primary dictionary in `resources/dictionaries/custom.txt`. See more: [CSpell docs on custom dictionaries](https://cspell.org/docs/dictionaries/custom-dictionaries). ### Format checks > \[!NOTE] > Automatic formatting checks are performed for changed files in each Pull Request. To check formatting of **all** files, run: ```shell npm run check:fmt ``` To fix formatting of **all** files, run: ```shell npm run fmt ``` To check and fix formatting of some **selected** files, run: ```shell npm run fmt:some ``` ## Using components and snippets See the [`snippets/` directory](./snippets) and the corresponding docs in [`contribute/snippets/` MDX files](./contribute/snippets/). ## Publishing changes [Mintlify's GitHub app](https://dashboard.mintlify.com/settings/organization/github-app) is connected to this repository. Thus, changes are deployed to production automatically after pushing to the default branch (`main`). ## Need help? ### Troubleshooting - If your dev environment is not running: Run `mint update` to ensure you have the most recent version of the CLI. - If a page loads as a 404: Make sure you are running in a folder with a valid `docs.json`. 
### Resources - [Mintlify documentation](https://mintlify.com/docs) - [Mintlify community](https://mintlify.com/community) ## License This project is dual-licensed: - All documentation and non-code text are licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) - All code snippets are licensed under [MIT](https://opensource.org/license/mit) ================================================ FILE: contract-dev/blueprint/api.mdx ================================================ --- title: "Blueprint TypeScript API" --- Blueprint exports functions and classes for programmatic interaction with TON smart contracts. ### `tonDeepLink` Generates a TON deep-link for transfer. ```typescript function tonDeepLink( address: Address, amount: bigint, body?: Cell, stateInit?: Cell, testOnly?: boolean ): string; ``` **Parameters:** - `address` — the recipient's TON address - `amount` — the amount of nanoTON to send - `body` — optional message body as a Cell - `stateInit` — optional [`StateInit`](/foundations/messages/deploy) cell for deploying a contract - `testOnly` — optional flag to determine output address format **Returns:** a URL deep link that can be opened in TON wallets **Example:** ```typescript const link = tonDeepLink(myAddress, 10_000_000n); // 0.01 TON // "ton://transfer/..." ``` ### `getExplorerLink` Generates a link to view a TON address in a selected blockchain explorer. ```typescript function getExplorerLink( address: string, network: string, explorer: 'tonscan' | 'tonviewer' | 'toncx' | 'dton' ): string; ``` **Parameters:** - `address` — the TON address to view in explorer - `network` — the target network (`mainnet` or `testnet`) - `explorer` — the desired explorer (`tonscan`, `tonviewer`, `toncx`, `dton`) **Returns:** a full URL pointing to the address in the selected explorer **Example:** ```typescript const link = getExplorerLink(address, "testnet", "tonscan"); // "https://testnet.tonscan.org/address/EQC...9gA"
``` ### `getNormalizedExtMessageHash` Generates a normalized hash of an `external-in` message for comparison. ```typescript function getNormalizedExtMessageHash(message: Message): Buffer; ``` This function ensures consistent hashing of external-in messages by following [TEP-467](https://github.com/ton-blockchain/TEPs/blob/8b3beda2d8611c90ec02a18bec946f5e33a80091/text/0467-normalized-message-hash.md). **Parameters:** - `message` — the message to be normalized and hashed (must be of type `external-in`) **Returns:** the hash of the normalized message as `Buffer` **Throws:** error if the message type is not `external-in` ### `compile` Compiles a contract using the specified configuration for `tact`, `func`, or `tolk` languages. ```typescript async function compile(name: string, opts?: CompileOpts): Promise<Cell> ``` **Parameters:** - `name` — the name of the contract to compile (should correspond to a file named `<name>.compile.ts`) - `opts` — optional [`CompileOpts`](#compileopts), including user data passed to hooks **Returns:** a promise that resolves to the compiled contract code as a `Cell` **Example:** ```typescript import { compile } from '@ton/blueprint'; async function main() { const codeCell = await compile('Contract'); console.log('Compiled code BoC:', codeCell.toBoc().toString('base64')); } ``` ### `libraryCellFromCode` Packs the hash of the given code cell into a library cell. ```typescript function libraryCellFromCode(code: Cell): Cell ``` **Parameters:** - `code` — the contract code cell **Returns:** a library cell containing the code hash ### `NetworkProvider` Interface representing a network provider for interacting with the TON Blockchain.
```typescript interface NetworkProvider { network(): 'mainnet' | 'testnet' | 'custom'; explorer(): Explorer; sender(): SenderWithSendResult; api(): BlueprintTonClient; provider(address: Address, init?: { code?: Cell; data?: Cell }): ContractProvider; isContractDeployed(address: Address): Promise<boolean>; waitForDeploy(address: Address, attempts?: number, sleepDuration?: number): Promise<void>; waitForLastTransaction(attempts?: number, sleepDuration?: number): Promise<void>; getContractState(address: Address): Promise<ContractState>; getConfig(configAddress?: Address): Promise<BlockchainConfig>; open<T extends Contract>(contract: T): OpenedContract<T>; ui(): UIProvider; } ``` #### `network()` ```typescript network(): 'mainnet' | 'testnet' | 'custom'; ``` **Returns:** current network type that the provider is connected to #### `explorer()` ```typescript explorer(): Explorer; ``` **Returns:** [`Explorer`](#explorer) name for the current network #### `sender()` ```typescript sender(): SenderWithSendResult ``` **Returns:** the [`SenderWithSendResult`](#senderwithsendresult) instance used for sending transactions #### `api()` ```typescript api(): BlueprintTonClient ``` **Returns:** the underlying [`BlueprintTonClient`](#blueprinttonclient) API for direct blockchain interactions #### `provider()` ```typescript provider(address: Address, init?: { code?: Cell; data?: Cell }): ContractProvider ``` Creates a contract provider for interacting with a contract at the specified address. **Parameters:** - `address` — the contract address to interact with - `init` — optional contract initialization data - `code` — contract code cell - `data` — contract initial data cell **Returns:** `ContractProvider` instance for the specified address #### `isContractDeployed()` ```typescript isContractDeployed(address: Address): Promise<boolean> ``` Checks whether a contract is deployed at the specified address.
**Parameters:**

- `address` — the contract address to check

**Returns:** a promise resolving to `true` if the contract is deployed, `false` otherwise

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const isDeployed = await provider.isContractDeployed(contractAddress);
  if (!isDeployed) {
    console.log('Contract not yet deployed');
  }
}
```

#### `waitForDeploy()`

```typescript
waitForDeploy(address: Address, attempts?: number, sleepDuration?: number): Promise<void>
```

Waits for a contract to be deployed by polling the address until the contract appears on-chain.

**Parameters:**

- `address` — the contract address to monitor
- `attempts` — maximum number of polling attempts (default: 20)
- `sleepDuration` — delay between attempts in milliseconds (default: 2000)

**Returns:** a promise that resolves when the contract is deployed

**Throws:** an error if the contract is not deployed within the specified number of attempts

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  // Send deployment transaction
  await contract.sendDeploy(provider.sender(), { value: toNano('0.01') });

  // Wait for deployment to complete
  await provider.waitForDeploy(contract.address);
  console.log('Contract deployed successfully');
}
```

#### `waitForLastTransaction()`

```typescript
waitForLastTransaction(attempts?: number, sleepDuration?: number): Promise<void>
```

Waits for the last sent transaction to be processed and confirmed on the blockchain.
**Parameters:**

- `attempts` — maximum number of polling attempts (default: 20)
- `sleepDuration` — delay between attempts in milliseconds (default: 2000)

**Returns:** a promise that resolves when the last transaction is confirmed

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  await contract.sendIncrement(provider.sender(), { value: toNano('0.01') });
  await provider.waitForLastTransaction();
}
```

#### `getContractState()`

```typescript
getContractState(address: Address): Promise<ContractState>
```

Retrieves the current state of a contract, including its balance, code, and data.

**Parameters:**

- `address` — the contract address to query

**Returns:** a promise resolving to a `ContractState`

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const state = await provider.getContractState(contractAddress);
  console.log(`Contract balance: ${fromNano(state.balance)} TON`);
}
```

#### `getConfig()`

```typescript
getConfig(configAddress?: Address): Promise<BlockchainConfig>
```

Fetches the current blockchain configuration parameters.

**Parameters:**

- `configAddress` — optional config contract address (uses the default if not provided)

**Returns:** a promise resolving to a `BlockchainConfig`

#### `open()`

```typescript
open<T extends Contract>(contract: T): OpenedContract<T>
```

Opens a contract instance for interaction, binding it to the current provider.
**Parameters:**

- `contract` — the contract instance to open

**Returns:** an `OpenedContract` wrapper that enables direct method calls

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const counter = provider.open(Counter.fromAddress(contractAddress));
  const currentValue = await counter.getCounter();
  console.log('Current counter value:', currentValue);
}
```

#### `ui()`

```typescript
ui(): UIProvider
```

**Returns:** the [`UIProvider`](#uiprovider) instance for console interactions

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  ui.write('Deployment starting...');
  const confirmed = await ui.prompt('Deploy to mainnet?');
}
```

### `UIProvider`

Interface for handling user interactions, such as displaying messages, prompting for input, and managing action prompts. This interface abstracts console interactions and can be used in both interactive and automated scenarios.

```typescript
interface UIProvider {
  write(message: string): void;
  prompt(message: string): Promise<boolean>;
  inputAddress(message: string, fallback?: Address): Promise<Address>;
  input(message: string): Promise<string>;
  choose<T>(message: string, choices: T[], display: (v: T) => string): Promise<T>;
  setActionPrompt(message: string): void;
  clearActionPrompt(): void;
}
```

#### `write()`

```typescript
write(message: string): void
```

Displays a message in the user console.

**Parameters:**

- `message` — the text message to display

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  ui.write('Starting contract deployment...');
  ui.write(`Network: ${provider.network()}`);
}
```

#### `prompt()`

```typescript
prompt(message: string): Promise<boolean>
```

Displays a yes/no prompt to the user and waits for their response.

**Parameters:**

- `message` — the prompt message to display

**Returns:** a promise resolving to `true` for yes, `false` for no

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  const confirmed = await ui.prompt('Deploy to mainnet? This will cost real TON');
  if (confirmed) {
    ui.write('Proceeding with deployment...');
  } else {
    ui.write('Deployment cancelled');
    return;
  }
}
```

#### `inputAddress()`

```typescript
inputAddress(message: string, fallback?: Address): Promise<Address>
```

Prompts the user to input a TON address, with validation.

**Parameters:**

- `message` — the prompt message to display
- `fallback` — optional default address to use if the user provides empty input

**Returns:** a promise resolving to an `Address` object

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  const targetAddress = await ui.inputAddress(
    'Enter the contract address to interact with:',
    Address.parse('EQD4FPq-PRDieyQKkizFTRtSDyucUIqrj0v_zXJmqaDp6_0t') // fallback
  );
  ui.write(`Using address: ${targetAddress.toString()}`);
}
```

#### `input()`

```typescript
input(message: string): Promise<string>
```

Prompts the user for text input and returns the entered string.

**Parameters:**

- `message` — the prompt message to display

**Returns:** a promise resolving to the user's input as a string

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  const contractName = await ui.input('Enter the contract name:');
  ui.write(`Deploying contract: ${contractName}`);
}
```

#### `choose()`

```typescript
choose<T>(message: string, choices: T[], display: (v: T) => string): Promise<T>
```

Presents a list of choices to the user and returns the selected option.

**Parameters:**

- `message` — the prompt message to display
- `choices` — array of options to choose from
- `display` — function to convert each choice to a display string

**Returns:** a promise resolving to the selected choice

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  const networks = ['mainnet', 'testnet'];
  const selectedNetwork = await ui.choose(
    'Select deployment network:',
    networks,
    (network) => network.toUpperCase()
  );
  ui.write(`Selected network: ${selectedNetwork}`);
}
```

#### `setActionPrompt()`

```typescript
setActionPrompt(message: string): void
```

Sets a persistent action prompt that remains visible during operations.
**Parameters:**

- `message` — the action prompt message to display

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  ui.setActionPrompt('⏳ Waiting for transaction confirmation...');

  await contract.send(provider.sender(), { value: toNano('0.01') }, 'increment');
  await provider.waitForLastTransaction();

  ui.clearActionPrompt();
  ui.write('✅ Transaction confirmed');
}
```

#### `clearActionPrompt()`

```typescript
clearActionPrompt(): void
```

Clears the current action prompt, removing it from the display.

**Usage example:**

```typescript
export async function run(provider: NetworkProvider) {
  const ui = provider.ui();
  ui.setActionPrompt('🔄 Processing...');

  // Perform some operation
  await someAsyncOperation();

  ui.clearActionPrompt();
  ui.write('Operation completed');
}
```

## Type definitions

Blueprint exports several TypeScript types for configuration and compilation options. These types provide type safety and IntelliSense support when working with Blueprint programmatically.

### `CompileOpts`

Optional compilation settings, including user data passed to hooks and compilation flags.

```typescript
type CompileOpts = {
  hookUserData?: any;
  debugInfo?: boolean;
  buildLibrary?: boolean;
};
```

**Properties:**

- `hookUserData` — optional user data passed to pre/post compile hooks
- `debugInfo` — enable debug information in the compiled output (default: `false`)
- `buildLibrary` — build as a library instead of a regular contract (default: `false`)

**Usage example:**

```typescript
import { compile } from '@ton/blueprint';

const codeCell = await compile('MyContract', {
  debugInfo: true,
  hookUserData: { customFlag: true }
});
```

### `CommonCompilerConfig`

Base configuration shared by all compiler types. This type defines common compilation hooks and options.
```typescript
type CommonCompilerConfig = {
  preCompileHook?: (params: HookParams) => Promise<void>;
  postCompileHook?: (code: Cell, params: HookParams) => Promise<void>;
  buildLibrary?: boolean;
};
```

**Properties:**

- `preCompileHook` — optional function called before compilation starts (receives [`HookParams`](#hookparams))
- `postCompileHook` — optional function called after compilation completes (receives the compiled `Cell` and [`HookParams`](#hookparams))
- `buildLibrary` — whether to build as a library (default: `false`)

**Usage example:**

```typescript title="./wrappers/MyContract.compile.ts"
import { CompilerConfig } from '@ton/blueprint';

export const compile: CompilerConfig = {
  lang: 'func',
  targets: ['contracts/my_contract.fc'],
  preCompileHook: async (params) => {
    console.log('Starting compilation...');
  },
  postCompileHook: async (code, params) => {
    console.log('Compilation completed!');
  }
};
```

### `FuncCompilerConfig`

Configuration specific to the FunC compiler, including optimization levels and source file specifications.

```typescript
type FuncCompilerConfig = {
  lang?: 'func';
  optLevel?: number;
  debugInfo?: boolean;
} & (
  | {
      targets: string[];
      sources?: SourceResolver | SourcesMap;
    }
  | {
      targets?: string[];
      sources: SourcesArray;
    }
);
```

**Properties:**

- `lang` — compiler language identifier (optional, defaults to `'func'`)
- `optLevel` — optimization level (0–2, default: 2)
- `debugInfo` — include debug information in the output
- `targets` — array of FunC source file paths to compile
- `sources` — alternative source specification method

**Usage example:**

```typescript title="./wrappers/MyContract.compile.ts"
import { CompilerConfig } from '@ton/blueprint';

export const compile: CompilerConfig = {
  lang: 'func',
  targets: [
    'contracts/imports/stdlib.fc',
    'contracts/my_contract.fc'
  ],
  optLevel: 2,
  debugInfo: false
};
```

### `TolkCompilerConfig`

Configuration for the Tolk compiler, including optimization and debugging options.
```typescript
type TolkCompilerConfig = {
  lang: 'tolk';
  entrypoint: string;
  optimizationLevel?: number;
  withStackComments?: boolean;
  withSrcLineComments?: boolean;
  experimentalOptions?: string;
};
```

**Properties:**

- `lang` — compiler language identifier (must be `'tolk'`)
- `entrypoint` — path to the main Tolk source file
- `optimizationLevel` — optimization level
- `withStackComments` — include stack operation comments in the Fift output
- `withSrcLineComments` — include source line comments in the Fift output
- `experimentalOptions` — additional experimental compiler flags

**Usage example:**

```typescript title="./wrappers/MyContract.compile.ts"
import { CompilerConfig } from '@ton/blueprint';

export const compile: CompilerConfig = {
  lang: 'tolk',
  entrypoint: 'contracts/my_contract.tolk',
  optimizationLevel: 2,
  withStackComments: true,
  withSrcLineComments: true
};
```

### `TactLegacyCompilerConfig`

Configuration for the Tact compiler (legacy configuration format).

```typescript
type TactLegacyCompilerConfig = {
  lang: 'tact';
  target: string;
  options?: Options;
};
```

**Properties:**

- `lang` — compiler language identifier (must be `'tact'`)
- `target` — path to the main Tact source file
- `options` — additional Tact compiler options

**Usage example:**

```typescript title="./wrappers/MyContract.compile.ts"
import { CompilerConfig } from '@ton/blueprint';

export const compile: CompilerConfig = {
  lang: 'tact',
  target: 'contracts/my_contract.tact',
  options: {
    debug: false,
    external: true
  }
};
```

### `HookParams`

Parameters passed to compilation hooks, providing context about the compilation process.

```typescript
type HookParams = {
  userData?: any;
};
```

**Properties:**

- `userData` — optional user data passed from [`CompileOpts`](#compileopts)

### `SenderWithSendResult`

An extended sender interface that tracks the result of the last send operation.
```typescript
interface SenderWithSendResult extends Sender {
  readonly lastSendResult?: unknown;
}
```

**Properties:**

- `lastSendResult` — optional result from the most recent send operation

### `BlueprintTonClient`

Union type representing the supported TON client implementations.

```typescript
type BlueprintTonClient = TonClient4 | TonClient | ContractAdapter | LiteClient;
```

**Supported clients:**

- `TonClient4` — TON HTTP API v4 client
- `TonClient` — TON HTTP API v2/v3 client
- `ContractAdapter` — TON API adapter
- `LiteClient` — lite client for direct node communication

### `Explorer`

Supported blockchain explorer types.

```typescript
type Explorer = 'tonscan' | 'tonviewer' | 'toncx' | 'dton';
```

**Supported explorers:**

- `'tonscan'` — Tonscan explorer
- `'tonviewer'` — Tonviewer explorer (default)
- `'toncx'` — TON.cx explorer
- `'dton'` — dTON.io explorer

## Configuration

For detailed configuration options, refer to the [Blueprint Configuration](/contract-dev/blueprint/config) guide.

================================================
FILE: contract-dev/blueprint/benchmarks.mdx
================================================
---
title: "Benchmarking performance"
---

import { Aside } from "/snippets/aside.jsx";
import { FenceTable } from "/snippets/fence-table.jsx";

In TON, a contract's performance is defined by its gas consumption, so it's important to design your logic efficiently. Unlike many other blockchains, TON also requires you to pay for storing contract data and for forwarding messages between contracts.

## Gas consumption

As you develop and iterate on a contract, even small changes to its logic can affect both gas usage and data size. Monitoring these changes helps ensure that your contract remains efficient and cost-effective.

## Gas metrics reporting

To simplify tracking changes in gas usage and data size, we’ve introduced a reporting system that lets you collect and compare metrics across different versions of a contract.
To enable this, write test scenarios that cover the contract’s primary usage patterns and verify expected behavior. This approach is sufficient to gather relevant metrics, which you can later use to compare performance changes after updating the implementation.

Before the tests run, a store is created to collect metrics from all transactions generated during the tests. After test execution, the collected metrics are supplemented with [ABI information from the snapshot](https://github.com/ton-org/sandbox/blob/main/docs/collect-metric-api.md#abi-auto-mapping), and a report is generated based on this data.

While more [metrics are collected](https://github.com/ton-org/sandbox/blob/main/docs/collect-metric-api.md#snapshot-structure), the current report format includes `gasUsed`, `cells`, and `bits`, which correspond to the internal metrics `compute.phase`, `state.code`, and `state.data`.

## Metrics comparison example

To see how gas metrics can be collected and compared in practice, let’s walk through a complete example.

Start by creating a new project using `npm create ton@latest`:

```bash
npm create ton@latest -y -- sample --type func-counter --contractName Sample
cd sample
```

**Note:**

- The `-y` flag skips prompts and accepts defaults.
- `--type` specifies the template (e.g., `func-counter`).
- `--contractName` sets the contract name.

Alternatively, you can run:

```bash
npm create ton@latest sample
```

This command scaffolds a project with a basic counter contract at `contracts/sample.fc`. It defines a simple stateful contract that stores an `id` and a `counter` and supports an `increase` operation.
```func title="sample.fc"
#include "imports/stdlib.fc";

const op::increase = "op::increase"c;

global int ctx_id;
global int ctx_counter;

() load_data() impure {
    var ds = get_data().begin_parse();
    ctx_id = ds~load_uint(32);
    ctx_counter = ds~load_uint(32);
    ds.end_parse();
}

() save_data() impure {
    set_data(
        begin_cell()
            .store_uint(ctx_id, 32)
            .store_uint(ctx_counter, 32)
            .end_cell()
    );
}

() recv_internal(int my_balance, int msg_value, cell in_msg_full, slice in_msg_body) impure {
    if (in_msg_body.slice_empty?()) {
        ;; ignore all empty messages
        return ();
    }

    slice cs = in_msg_full.begin_parse();
    int flags = cs~load_uint(4);
    if (flags & 1) {
        ;; ignore all bounced messages
        return ();
    }

    load_data();

    int op = in_msg_body~load_uint(32);
    int query_id = in_msg_body~load_uint(64);

    if (op == op::increase) {
        int increase_by = in_msg_body~load_uint(32);
        ctx_counter += increase_by;
        save_data();
        return ();
    }

    throw(0xffff);
}

int get_counter() method_id {
    load_data();
    return ctx_counter;
}

int get_id() method_id {
    load_data();
    return ctx_id;
}
```

### Generate a gas report

Let’s now generate a gas usage report for the contract. Run the following command:

```bash
npx blueprint test --gas-report
```

This runs your tests with gas tracking enabled and outputs a `gas-report.json` file with transaction metrics.
```text
...
PASS
Comparison metric mode: gas depth: 1
Gas report write in 'gas-report.json'
┌───────────┬──────────────┬───────────────────────────┐
│           │              │          current          │
│ Contract  │    Method    ├──────────┬────────┬───────┤
│           │              │ gasUsed  │ cells  │ bits  │
├───────────┼──────────────┼──────────┼────────┼───────┤
│           │ sendDeploy   │     1937 │     11 │   900 │
│           ├──────────────┼──────────┼────────┼───────┤
│           │ send         │      515 │     11 │   900 │
│  Sample   ├──────────────┼──────────┼────────┼───────┤
│           │ sendIncrease │     1937 │     11 │   900 │
│           ├──────────────┼──────────┼────────┼───────┤
│           │ 0x7e8764ef   │     2681 │     11 │   900 │
└───────────┴──────────────┴──────────┴────────┴───────┘
```

### Storage fee calculation

You can use the `cells` and `bits` values from the report to estimate the **storage fee** for your contract. Here’s the formula:

```text
storage_fee = ceil(
  (account.bits * bit_price + account.cells * cell_price) * time_delta / 2 ** 16
)
```

To try this in practice, use the [calculator example](/foundations/fees).

### Regenerate the gas report

Note that the `op::increase` method appears in the report as the raw opcode `0x7e8764ef`. To display a human-readable name in the report, update the generated `contract.abi.json` by replacing the raw opcode with the name **increase** in both the `messages` and `types` sections:

```diff
--- a/contract.abi.json
+++ b/contract.abi.json
@@ -6,13 +6,13 @@
         "receiver": "internal",
         "message": {
             "kind": "typed",
-            "type": "0x7e8764ef"
+            "type": "increase"
         }
     }
 ],
 "types": [
     {
-        "name": "0x7e8764ef",
+        "name": "increase",
         "header": 2122802415
     }
 ],
```

Once you've updated the `contract.abi.json` file, rerun the command to regenerate the gas report:

```bash
npx blueprint test --gas-report
```

Now the method name appears in the report as `increase`, making it easier to read:
```text
...
│           ├──────────────┼──────────┼────────┼───────┤
│           │ increase     │     2681 │     11 │   900 │
└───────────┴──────────────┴──────────┴────────┴───────┘
```

### Save a snapshot for future comparison

To track how gas usage evolves, you can create a named snapshot of the current metrics. This allows you to compare future versions of the contract against this baseline:

```bash
npx blueprint snapshot --label "v1"
```

This creates a snapshot file in `.snapshot/`:

```text
...
PASS
Collect metric mode: "gas"
Report write in '.snapshot/1749821319408.json'
```

### Optimize the contract and compare the metrics

Let’s try a simple optimization — adding the `inline` specifier to some functions. Update your contract like this:

```diff
--- a/contracts/sample.fc
+++ b/contracts/sample.fc
-() load_data() impure {
+() load_data() impure inline {
-() save_data() impure {
+() save_data() impure inline {
-() recv_internal(int my_balance, int msg_value, cell in_msg_full, slice in_msg_body) impure {
+() recv_internal(int my_balance, int msg_value, cell in_msg_full, slice in_msg_body) impure inline {
```

Now regenerate the gas report.
Since we already created a snapshot labeled `v1`, this report will include a comparison with the previous version:

```bash
npx blueprint test --gas-report
```

You see a side-by-side comparison of gas usage before and after the change:

```text
PASS
Comparison metric mode: gas depth: 2
Gas report write in 'gas-report.json'
┌───────────┬──────────────┬─────────────────────────────────────────┬───────────────────────────┐
│           │              │                 current                 │            v1             │
│ Contract  │    Method    ├──────────────┬───────────┬──────────────┼──────────┬────────┬───────┤
│           │              │   gasUsed    │   cells   │     bits     │ gasUsed  │ cells  │ bits  │
├───────────┼──────────────┼──────────────┼───────────┼──────────────┼──────────┼────────┼───────┤
│           │ sendDeploy   │ 1937 same    │ 7 -36.36% │ 1066 +18.44% │     1937 │     11 │   900 │
│           ├──────────────┼──────────────┼───────────┼──────────────┼──────────┼────────┼───────┤
│           │ send         │ 446 -13.40%  │ 7 -36.36% │ 1066 +18.44% │      515 │     11 │   900 │
│  Sample   ├──────────────┼──────────────┼───────────┼──────────────┼──────────┼────────┼───────┤
│           │ sendIncrease │ 1937 same    │ 7 -36.36% │ 1066 +18.44% │     1937 │     11 │   900 │
│           ├──────────────┼──────────────┼───────────┼──────────────┼──────────┼────────┼───────┤
│           │ increase     │ 1961 -26.86% │ 7 -36.36% │ 1066 +18.44% │     2681 │     11 │   900 │
└───────────┴──────────────┴──────────────┴───────────┴──────────────┴──────────┴────────┴───────┘
```

## Project setup instructions

If your project already exists, you need to configure **jest** to collect gas metrics.
You can do this in one of two ways:

#### Option 1: update the existing `jest.config.ts`

Add the necessary environment and reporter settings:

```diff title="jest.config.ts"
 import type { Config } from 'jest';

 const config: Config = {
     preset: 'ts-jest',
+    testEnvironment: '@ton/sandbox/jest-environment',
     testPathIgnorePatterns: ['/node_modules/', '/dist/'],
+    reporters: [
+        'default',
+        ['@ton/sandbox/jest-reporter', {}],
+    ]
 };

 export default config;
```

#### Option 2: create a separate config `gas-report.config.ts`

If you prefer not to modify your main `jest.config.ts`, you can create a dedicated config file:

```ts title="gas-report.config.ts"
import config from './jest.config';

// filter tests if needed, see https://jestjs.io/docs/cli#--testnamepatternregex
// config.testNamePattern = '^Foo should increase counter$'
config.testEnvironment = '@ton/sandbox/jest-environment';
config.reporters = [
    ['@ton/sandbox/jest-reporter', {}],
];

export default config;
```

When using this separate config, pass it with the `--config` option:

```bash
npx blueprint test --gas-report -- --config gas-report.config.ts
npx blueprint snapshot --label "v2" -- --config gas-report.config.ts
```

## Collect metrics manually

You can collect metrics manually using the low-level API from `@ton/sandbox`:

```typescript title="collect-metrics.ts"
import { Blockchain, createMetricStore, makeSnapshotMetric, resetMetricStore } from '@ton/sandbox';

const store = createMetricStore();

async function someDo() {
  const blockchain = await Blockchain.create();
  const [alice, bob] = await blockchain.createWallets(2);
  await alice.send({ to: bob.address, value: 1 });
}

async function main() {
  resetMetricStore();
  await someDo();
  const metric = makeSnapshotMetric(store);
  console.log(metric);
}

main().catch((error) => {
  console.log(error.message);
});
```

For more details, see the [Collect Metric API documentation](https://github.com/ton-org/sandbox/blob/main/docs/collect-metric-api.md#example).
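The `cells` and `bits` values you collect this way can also feed the storage-fee formula from the report section. The sketch below is illustrative only: `estimateStorageFee` is a hypothetical helper (not part of Blueprint or `@ton/sandbox`), and the default prices assume the common basechain values from blockchain config parameter 18 (`bit_price_ps = 1`, `cell_price_ps = 500`); read the live values from your network before relying on the result.

```typescript
// Hypothetical helper: estimates the storage fee in nanoTON from the
// `cells` and `bits` values of a gas report, using
//   storage_fee = ceil((bits * bit_price + cells * cell_price) * time_delta / 2^16)
// Default prices are assumed basechain values from config param 18.
function estimateStorageFee(
  cells: bigint,
  bits: bigint,
  seconds: bigint,
  bitPricePs: bigint = 1n,
  cellPricePs: bigint = 500n,
): bigint {
  const weighted = (bits * bitPricePs + cells * cellPricePs) * seconds;
  // Ceil division by 2^16 using integer arithmetic
  return (weighted + 65535n) / 65536n;
}

// The Sample contract from the report: 11 cells and 900 bits, kept for one year.
const fee = estimateStorageFee(11n, 900n, 31_536_000n);
console.log(`~${fee} nanoTON per year`);
```

At these assumed prices, a year of storage for the Sample contract comes out to roughly 0.003 TON, which shows why the `cells` and `bits` columns are worth watching alongside `gasUsed`.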
================================================
FILE: contract-dev/blueprint/cli.mdx
================================================
---
title: "Blueprint CLI"
---

Blueprint is a CLI tool for TON smart contract development. This reference covers all available commands, options, configuration, and API methods.

## CLI commands

Blueprint provides a comprehensive set of CLI commands for smart contract development, testing, and deployment. Commands support both interactive and non-interactive modes.

### `create`

```bash
npx blueprint create <ContractName> --type <type>
```

Creates a new smart contract with all necessary files, including the contract source, TypeScript wrapper, test file, and deployment script.

#### Interactive mode

```bash
npx blueprint create
```

Launches an interactive wizard that guides you through:

1. Contract name selection (validates CamelCase format)
1. Programming language choice (Tolk, FunC, or Tact)
1. Template type selection (empty or counter example)

#### Non-interactive mode

```bash
npx blueprint create <ContractName> --type <type>
```

**Parameters:**

- `<ContractName>` — contract name in CamelCase format (e.g., `MyAwesomeContract`)
- `<type>` — template type from the available options

**Available template types:**

- `tolk-empty` — an empty contract (Tolk)
- `func-empty` — an empty contract (FunC)
- `tact-empty` — an empty contract (Tact)
- `tolk-counter` — a simple counter contract (Tolk)
- `func-counter` — a simple counter contract (FunC)
- `tact-counter` — a simple counter contract (Tact)

**Usage examples:**

```bash
# Create empty Tolk contract
npx blueprint create MyToken --type tolk-empty

# Create Tolk counter example
npx blueprint create SimpleCounter --type tolk-counter

# Create contract interactively
npx blueprint create
```

**Generated files:**

- `contracts/MyContract.{tolk|fc|tact}` — contract source code
- `wrappers/MyContract.ts` — TypeScript wrapper for contract interaction
- `tests/MyContract.spec.ts` — Jest test suite with basic test cases
- `scripts/deployMyContract.ts` — deployment script
with network configuration

### `build`

```bash
npx blueprint build <ContractName> --all
```

Compiles smart contracts using their corresponding `.compile.ts` configuration files.

#### Interactive mode

```bash
npx blueprint build
```

Displays a list of all available contracts with `.compile.ts` files for selection. Shows compilation status and allows building individual contracts or all at once.

#### Non-interactive mode

```bash
npx blueprint build <ContractName>
npx blueprint build --all
```

**Parameters:**

- `<ContractName>` — specific contract name to build (matches the `<ContractName>.compile.ts` filename)
- `--all` — build all contracts in the project that have compilation configurations

**Usage examples:**

```bash
# Build specific contract
npx blueprint build MyToken

# Build all contracts
npx blueprint build --all

# Interactive selection
npx blueprint build
```

For detailed information about build artifacts, see [Compiled Artifacts](/contract-dev/blueprint/develop#compiled-artifacts).

### `run`

```bash
npx blueprint run
```

```html
<script async src="https://tganalytics.xyz/index.js" type="text/javascript"></script>
```

- [Example](https://github.com/sorawalker/demo-dapp-with-analytics/blob/patch-1/index.html)

### Using NPM

Install using npm:

```sh icon="npm"
npm install @telegram-apps/analytics
```

To ensure that all events are collected correctly, you must initialize the SDK before the application starts rendering. For example, in React applications, before calling the `render()` function:

```javascript
import telegramAnalytics from '@telegram-apps/analytics';

telegramAnalytics.init({
  token: 'YOUR_TOKEN', // SDK auth token received via @DataChief_bot
  appName: 'ANALYTICS_IDENTIFIER', // The analytics identifier you entered in @DataChief_bot
});
```

After initializing Telegram Analytics, you are all set to transfer the data, gain insights, and improve user engagement. Most events will be tracked automatically without manual control.

- [Example](https://github.com/sorawalker/demo-dapp-with-analytics/blob/master/src/main.tsx)

## Contributing

Contributions are welcome!
To contribute, fork the repository, make your changes, and submit a pull request. We look forward to your [ideas](https://github.com/Telegram-Mini-Apps/TelegramAnalytics/pulls) and improvements.

## License

The Telegram Analytics SDK is available under the [MIT License](https://opensource.org/license/mit). Feel free to use it in both personal and commercial projects.

The library was developed by [`@sorawalker`](https://github.com/sorawalker), with support from the [TON Foundation](https://github.com/ton-society/grants-and-bounties/).

================================================
FILE: ecosystem/tma/analytics/api-endpoints.mdx
================================================
---
title: "API Endpoints"
---

Here, you can view information about the existing endpoints and how to make requests to them.

## API URL

URL for POST requests: [`/events`](https://tganalytics.xyz/events)

## POST events

This request records an event in the database.

### Body

The request body may contain an array rather than a single event. The main requirement is that all events in the array satisfy the scheme below.

#### Required

| Field        | Type   | Description                                                                                                                                                 |
| ------------ | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `user_id`    | number | Unique identifier for the user.                                                                                                                              |
| `event_name` | string | The name of the event from the supported events.                                                                                                             |
| `session_id` | string | Session identifier for tracking user sessions. **Must be** a [UUID](https://github.com/Telegram-Mini-Apps/analytics/blob/master/src/utils/generateUUID.ts). |
| `app_name`   | string | The name of the application that you specified when creating the token.                                                                                      |

#### Optional

| Field              | Type                                  | Description                                                                                     |
| ------------------ | ------------------------------------- | ----------------------------------------------------------------------------------------------- |
| `is_premium`       | boolean                               | Whether the user has a premium account; defaults to `false`.                                    |
| `is_success`       | boolean                               | Indicates whether a wallet is connected or the transaction was successful; defaults to `false`. |
| `error_message`    | string                                | Error message if the wallet connection or transaction is unsuccessful.                          |
| `error_code`       | number                                | Error code if the wallet connection or transaction is unsuccessful.                             |
| `wallet_address`   | string                                | Wallet address involved in the event.                                                           |
| `wallet_type`      | string                                | Type of the wallet.                                                                             |
| `wallet_version`   | string                                | Version of the wallet software.                                                                 |
| `auth_type`        | enum                                  | Type of authorization used: `0` — `ton_addr`, `1` — `ton_proof`.                                |
| `valid_until`      | string                                | Timestamp until which a transaction offer is valid.                                             |
| `from`             | string                                | Wallet address initiating the transaction.                                                      |
| `messages`         | `{ address: string; amount: string }` | List of transactions `{to, amount}` involved in the event.                                      |
| `custom_data`      | object                                | Object to store custom event details as needed.                                                 |
| `client_timestamp` | string                                | The time when the event occurred on the client.                                                 |
| `platform`         | string                                | The platform from which the MiniApp was opened.                                                 |
| `locale`           | string                                | User language code.                                                                             |
| `start_param`      | string                                | `tgWebAppStartParam`.                                                                           |
| `url_referer`      | string                                | The URL of the web application from which the request was sent.                                 |
| `scope`            | string                                | Event scope.                                                                                    |

### Request body example

```json
[
  {
    "event_name": "app-init",
    "session_id": "10c574d9-6d2c-4e6d-a141-ce6da141ce6d",
    "user_id": 111111111,
    "app_name": "docs",
    "is_premium": true,
    "platform": "tdesktop",
    "locale": "en",
    "client_timestamp": "1743503599534"
  }
]
```

```json
[
  {
    "event_name":
      "connection-started",
    "custom_data": {
      "ton_connect_sdk_lib": "3.0.3",
      "ton_connect_ui_lib": "2.0.5"
    },
    "session_id": "10c574d9-6d2c-4e6d-a141-ce6da141ce6d",
    "user_id": 111111111,
    "app_name": "docs",
    "is_premium": true,
    "platform": "tdesktop",
    "locale": "en",
    "client_timestamp": "1743503647541"
  }
]
```

```json
[
  {
    "event_name": "connection-error",
    "is_success": false,
    "error_message": "Connection was cancelled",
    "error_code": null,
    "custom_data": {
      "ton_connect_sdk_lib": "3.0.3",
      "ton_connect_ui_lib": "2.0.5"
    },
    "session_id": "10c574d9-6d2c-4e6d-a141-ce6da141ce6d",
    "user_id": 111111111,
    "app_name": "docs",
    "is_premium": true,
    "platform": "tdesktop",
    "locale": "en",
    "client_timestamp": "1743503683701"
  }
]
```

### Headers

Instead of `YOUR_TOKEN`, you need to specify the token received when managing the integration. (TO DO link)

```json
{
  "TGA-Auth-Token": "YOUR_TOKEN",
  "Content-Type": "application/json"
}
```

### Responses

#### HTTP 201

- Description: the event has been successfully recorded
- Content:

```json
{
  "message": "Success record."
}
```

#### HTTP 400

- Description: the event was not recorded due to server issues
- Content:

```json
{
  "message": "Failed to record"
}
```

#### HTTP 400

- Description: the token was entered incorrectly or in the wrong format
- Content:

```json
{
  "message": "The token is not specified in the headers or is specified incorrectly."
}
```

#### HTTP 400

- Description: the entered token is invalid (was not created through the Data Chief bot)
- Content:

```json
{
  "message": "Token is invalid."
}
```

#### HTTP 400

- Description: the request body contains an application name that does not match the token
- Content:

```json
{
  "message": "Invalid app_name is specified."
}
```

#### HTTP 400

- Description: The request body failed validation (for example, the type of one of the fields does not match)
- Content:

```json
{
  "status": 400,
  "message": "VALIDATION_MISMATCH_REPORT"
}
```

#### HTTP 403

- Description: An attempt to use the API from a domain name that does not match the token
- Content:

```json
{
  "message": "The domain name does not match."
}
```

#### HTTP 429

- Description: Too many requests from the client within a certain period of time
- Content:

```json
{
  "message": "Too many requests. Try again later."
}
```

================================================
FILE: ecosystem/tma/analytics/faq.mdx
================================================
---
title: "FAQ"
---

## How can I check the integration status of the SDK?

### Using a bot

After [completing the integration process](/ecosystem/tma/analytics/preparation#initialize-sdk), the bot [`@DataChief_bot`](https://t.me/DataChief_bot) displays the time of the last event recorded in our database. If it shows something like "one minute ago", everything is working correctly.

### Using DevTools

#### Desktop version

Go to Telegram settings, then **Advanced settings**, and then **Experimental settings**. Toggle on the **Enable webview inspecting** option. Next, open your application and right-click to open the developer console. In the **Network** section, you should see the SDK script (`index.js`) loading and sending events. If this is not the case, try refreshing the application without closing the TMA.

#### Web version

Go to [Telegram's web version](https://web.telegram.org/), open the developer tools, go to the **Network** section, open your TMA, and filter requests by `tganalytics.xyz`. You'll see the SDK being loaded (`index.js`) and events being sent afterward.

================================================
FILE: ecosystem/tma/analytics/install-via-npm.mdx
================================================
---
title: "Installation via NPM package"
---

## How to install it?
**1. Install the NPM package in your project**

```shell
npm install @telegram-apps/analytics
```

```shell
yarn add @telegram-apps/analytics
```

```shell
pnpm add @telegram-apps/analytics
```

**2. Add Telegram Mini Apps Analytics in code**

Once you have your unique access token (if not, see the [Preparations](/ecosystem/tma/analytics/preparation) page) and have installed the NPM package, you can initialize the Telegram Analytics SDK in your code. To ensure that all events are collected correctly, initialize the SDK before the application starts rendering. For example, in React applications, do so before calling the `render()` function.

```jsx
import TelegramAnalytics from '@telegram-apps/analytics';

TelegramAnalytics.init({
  token: 'YOUR_TOKEN',
  appName: 'ANALYTICS_IDENTIFIER',
});
```

### Supported events

After initializing **Telegram Analytics**, you are all set to transfer data, gain insights, and improve user engagement. 99% of events are tracked **automatically**, without manual control.

- [Supported events](/ecosystem/tma/analytics/supported-events)

================================================
FILE: ecosystem/tma/analytics/install-via-script.mdx
================================================
---
title: "Installation via script tag"
---

## How to install it?

### 1. Add Telegram Mini Apps Analytics to your project

Include the Telegram Mini Apps Analytics script in the `<head>` of your HTML document. This script allows you to track and analyze user interactions effectively.

```html
<script async src="https://tganalytics.xyz/index.js" type="text/javascript"></script>
```

Alternative solution (not recommended):

```html
```

### 2. Initialize the **Telegram Mini Apps Analytics** SDK

Once you have your unique access token and analytics identifier (if not, see the [Preparations](/ecosystem/tma/analytics/preparation) page), you can initialize the Telegram Analytics SDK in your code. This step is crucial for enabling the tracking of events without repeatedly transferring the token.
```html
<script>
  // The global name below mirrors the TelegramAnalytics.init call used by the
  // NPM package; replace the placeholders with your own token and identifier.
  window.telegramAnalytics.init({
    token: 'YOUR_TOKEN',
    appName: 'ANALYTICS_IDENTIFIER',
  });
</script>
```

### Supported events

After initializing **Telegram Analytics**, you are all set to transfer data, gain insights, and improve user engagement. 99% of events are tracked **automatically**, without manual control.

- [Supported events](/ecosystem/tma/analytics/supported-events)

================================================
FILE: ecosystem/tma/analytics/managing-integration.mdx
================================================
---
title: "Managing integration"
---

import { Image } from '/snippets/image.jsx';

[TON Builders](https://builders.ton.org) helps you manage your SDK keys and participate in various support programs from the TON Foundation. Register your project and go to the **Analytics** tab.

Analytics tab in TON Builders

Enter your Telegram Bot URL and mini app domain to receive an **API key** for SDK initialization. You can also manage your existing keys from the same section.

================================================
FILE: ecosystem/tma/analytics/preparation.mdx
================================================
---
title: "Preparations"
---

import { Image } from '/snippets/image.jsx';

### Connect the TON Connect SDK

Web3 events for the Telegram Mini Apps Analytics SDK are supported by the [`@tonconnect/ui`](https://www.npmjs.com/package/@tonconnect/ui) and [`@tonconnect/ui-react`](https://www.npmjs.com/package/@tonconnect/ui-react) libraries since version 2.0.3, and by [`@tonconnect/sdk`](https://www.npmjs.com/package/@tonconnect/sdk) since version 3.0.3.

[Read more about TON Connect integration](https://github.com/ton-connect)

Don't worry if your app doesn't use TON Connect; the analytics SDK will still work and collect non-Web3 events.

### Get the token with TON Builders

Register your project on [TON Builders](https://builders.ton.org) and go to the **Analytics** tab.

Analytics tab in TON Builders

Enter your Telegram Bot URL and mini app domain to receive an **API key** for SDK initialization.
You can also manage your existing keys from the same section.

### Initialize SDK

Now you can initialize the SDK in your application. There are two ways to do this:

- [Install with script tag](/ecosystem/tma/analytics/install-via-script)
- [Install with NPM package](/ecosystem/tma/analytics/install-via-npm)

================================================
FILE: ecosystem/tma/analytics/supported-events.mdx
================================================
---
title: "Supported events"
---

Events from TON Connect are sent only if the `@tonconnect/ui-react@2.0.3`, `@tonconnect/ui@2.0.3`, or `@tonconnect/sdk@3.0.3` packages (or higher versions) are used.

| Event name                       | Description                                                                    | TON Connect required |
| -------------------------------- | ------------------------------------------------------------------------------ | -------------------- |
| `app-init`                       | Application initialization                                                     | false                |
| `app-hide`                       | Hiding the app from the screen                                                 | false                |
| `custom-event`                   | An event specified by the user                                                 | false                |
| `connection-started`             | The user starts connecting the wallet                                          | true                 |
| `connection-completed`           | Successful connection to a wallet                                              | true                 |
| `connection-error`               | Errors in connection, specifying reasons (e.g., user canceled)                 | true                 |
| `connection-restoring-completed` | The connection was restored successfully                                       | true                 |
| `connection-restoring-error`     | Connection restoration failed                                                  | true                 |
| `transaction-sent-for-signature` | The user submits the transaction for signature                                 | true                 |
| `transaction-signed`             | The user successfully signs the transaction                                    | true                 |
| `transaction-signing-failed`     | The user cancels the transaction signature, or an error occurs during signing  | true                 |
| `disconnection user-initiated`   | Disconnection events, specifying scope (dApp or wallet)                        | true                 |

================================================
FILE:
ecosystem/tma/create-mini-app.mdx
================================================
---
title: "TMA create CLI"
---

`@telegram-apps/create-mini-app` is a CLI tool designed to scaffold a new mini application on the Telegram Mini Apps platform. It generates a project with pre-configured libraries and template files, allowing you to customize the content based on your specific requirements.

## Usage

To run the tool, use the following script (or the equivalent for your package manager):

```bash npm icon="npm"
npx @telegram-apps/create-mini-app@latest
```

## Creating a new application

The above command executes a script that guides you through the creation of your application by sequentially prompting for the following information:

### Project directory name

- **Prompt**: Enter the name of the folder where the project files will be located.
- **Default**: `mini-app`

The script creates a subfolder with the specified name in the current directory.

### Preferred technologies

#### TMA SDKs

- **tma.js** [`@telegram-apps/sdk`](https://www.npmjs.com/package/@telegram-apps/sdk) - a TypeScript library for seamless communication with Telegram Mini Apps functionality.
- **Telegram SDK** [`@twa-dev/sdk`](https://www.npmjs.com/package/@twa-dev/sdk) - this package allows you to work with the SDK as an npm package.

#### Frameworks

- **React.js** [template](https://github.com/Telegram-Mini-Apps/reactjs-template)
- **Next.js** [template](https://github.com/Telegram-Mini-Apps/nextjs-template)
- **Solid.js** [template](https://github.com/Telegram-Mini-Apps/solidjs-js-template)
- **Vue.js** [template](https://github.com/Telegram-Mini-Apps/vuejs-template)

### Git remote repository URL

Enter the Git remote repository URL. This value is used to connect the created project with your remote Git repository. It should be either an HTTPS link or an SSH connection string.
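The two accepted remote formats can be sanity-checked programmatically. A minimal sketch (illustrative only: the patterns below are assumptions for demonstration, not part of the CLI, and real Git remote syntax is more permissive):

```javascript
// Illustrative check for the two remote formats the prompt accepts:
// an HTTPS link, or an SSH connection string (scp-like or ssh:// form).
function isValidGitRemote(url) {
  const https = /^https:\/\/[\w.-]+\/[\w./~-]+?(\.git)?$/;                    // https://github.com/user/repo.git
  const scpSsh = /^[\w.-]+@[\w.-]+:[\w./~-]+?(\.git)?$/;                      // git@github.com:user/repo.git
  const sshUrl = /^ssh:\/\/([\w.-]+@)?[\w.-]+(:\d+)?\/[\w./~-]+?(\.git)?$/;   // ssh://git@github.com/user/repo.git
  return https.test(url) || scpSsh.test(url) || sshUrl.test(url);
}

console.log(isValidGitRemote('https://github.com/user/mini-app.git')); // true
console.log(isValidGitRemote('git@github.com:user/mini-app.git'));     // true
console.log(isValidGitRemote('ftp://example.com/repo.git'));           // false
```

If the value does not match either shape, the created project simply won't be connected to a remote; you can always add one later with `git remote add origin <url>`.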
## Build configuration

Projects created with `create-mini-app` are configured to use the [Vite](https://vite.dev/) bundler. The project includes a `vite.config.js` file, which you can customize to adjust the build settings according to your needs.

================================================
FILE: ecosystem/tma/overview.mdx
================================================
---
title: "TMA: Telegram Mini Apps overview"
sidebarTitle: "Overview"
---

Telegram Mini Apps (TMAs) are web applications that run within the Telegram messenger. They are built using web technologies: HTML, CSS, and JavaScript.