[
  {
    "path": ".gitattributes",
    "content": "pnpm-lock.yaml merge=ours\n# automatically normalize line endings in text files to be line feed \n# https://github.com/opral/monorepo/pull/3340#issue-2782271138\n* text=auto eol=lf"
  },
  {
    "path": ".gitignore",
    "content": "### inlang ###\n\n# .devcontainer.json\n.pnpm-store\n\n# **/out\nexamples/svelte/package-lock.json\nexamples/sveltekit/package-lock.json\n\n/build\n/package\n.env*\n.dev.vars\n.nx\n\n# Benchmark reports and scratch databases\nbenchmarks/engine2-json-pointer/output*/\npackages/engine/benches/storage/output*/\n\n# Playwright\n**/test-results/\n**/playwright-report/\n**/playwright/.cache/\npackages/vscode-docs-replay/results/\n\n# SEO – Generated sitemap\ninlang/**/sitemap.xml\n\n# Created by https://www.toptal.com/developers/gitignore/api/windows,macos,linux,node,visualstudiocode,intellij\n# Edit at https://www.toptal.com/developers/gitignore?templates=windows,macos,linux,node,visualstudiocode,intellij\n\n### Intellij ###\n# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider\n# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839\n\n# User-specific stuff\n.idea/**/workspace.xml\n.idea/**/tasks.xml\n.idea/**/usage.statistics.xml\n.idea/**/dictionaries\n.idea/**/shelf\n\n# AWS User-specific\n.idea/**/aws.xml\n\n# Generated files\n.idea/**/contentModel.xml\n\n# Sensitive or high-churn files\n.idea/**/dataSources/\n.idea/**/dataSources.ids\n.idea/**/dataSources.local.xml\n.idea/**/sqlDataSources.xml\n.idea/**/dynamic.xml\n.idea/**/uiDesigner.xml\n.idea/**/dbnavigator.xml\n\n# Gradle\n.idea/**/gradle.xml\n.idea/**/libraries\n\n# Gradle and Maven with auto-import\n# When using Gradle or Maven with auto-import, you should exclude module files,\n# since they will be recreated, and may cause churn.  
Uncomment if using\n# auto-import.\n# .idea/artifacts\n# .idea/compiler.xml\n# .idea/jarRepositories.xml\n# .idea/modules.xml\n# .idea/*.iml\n# .idea/modules\n# *.iml\n# *.ipr\n\n# CMake\ncmake-build-*/\n\n# Mongo Explorer plugin\n.idea/**/mongoSettings.xml\n\n# File-based project format\n*.iws\n\n# IntelliJ\nout/\n\n# mpeltonen/sbt-idea plugin\n.idea_modules/\n\n# JIRA plugin\natlassian-ide-plugin.xml\n\n# Cursive Clojure plugin\n.idea/replstate.xml\n\n# SonarLint plugin\n.idea/sonarlint/\n\n# Crashlytics plugin (for Android Studio and IntelliJ)\ncom_crashlytics_export_strings.xml\ncrashlytics.properties\ncrashlytics-build.properties\nfabric.properties\n\n# Editor-based Rest Client\n.idea/httpRequests\n\n# Android studio 3.1+ serialized cache file\n.idea/caches/build_file_checksums.ser\n\n### Intellij Patch ###\n# Comment Reason: https://github.com/joeblau/gitignore.io/issues/186#issuecomment-215987721\n\n# *.iml\n# modules.xml\n# .idea/misc.xml\n# *.ipr\n\n# Sonarlint plugin\n# https://plugins.jetbrains.com/plugin/7973-sonarlint\n.idea/**/sonarlint/\n\n# SonarQube Plugin\n# https://plugins.jetbrains.com/plugin/7238-sonarqube-community-plugin\n.idea/**/sonarIssues.xml\n\n# Markdown Navigator plugin\n# https://plugins.jetbrains.com/plugin/7896-markdown-navigator-enhanced\n.idea/**/markdown-navigator.xml\n.idea/**/markdown-navigator-enh.xml\n.idea/**/markdown-navigator/\n\n# Cache file creation bug\n# See https://youtrack.jetbrains.com/issue/JBR-2257\n.idea/$CACHE_FILE$\n\n# CodeStream plugin\n# https://plugins.jetbrains.com/plugin/12206-codestream\n.idea/codestream.xml\n\n# Azure Toolkit for IntelliJ plugin\n# https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij\n.idea/**/azureSettings.xml\n\n### Linux ###\n*~\n\n# temporary files which can be created if a process still has a handle open of a deleted file\n.fuse_hidden*\n\n# KDE directory preferences\n.directory\n\n# Linux trash folder which might appear on any partition or disk\n.Trash-*\n\n# .nfs 
files are created when an open file is removed but is still being accessed\n.nfs*\n\n### macOS ###\n# General\n.DS_Store\n.AppleDouble\n.LSOverride\n\n# Icon must end with two \\r\nIcon\n\n\n# Thumbnails\n._*\n\n# Files that might appear in the root of a volume\n.DocumentRevisions-V100\n.fseventsd\n.Spotlight-V100\n.TemporaryItems\n.Trashes\n.VolumeIcon.icns\n.com.apple.timemachine.donotpresent\n\n# Directories potentially created on remote AFP share\n.AppleDB\n.AppleDesktop\nNetwork Trash Folder\nTemporary Items\n.apdisk\n\n### macOS Patch ###\n# iCloud generated files\n*.icloud\n\n### Node ###\n# Logs\nlogs\n*.log\nnpm-debug.log*\nyarn-debug.log*\nyarn-error.log*\nlerna-debug.log*\n.pnpm-debug.log*\n\n# Diagnostic reports (https://nodejs.org/api/report.html)\nreport.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json\n\n# Runtime data\npids\n*.pid\n*.seed\n*.pid.lock\n\n# Directory for instrumented libs generated by jscoverage/JSCover\nlib-cov\n\n# Coverage directory used by tools like istanbul\ncoverage\n*.lcov\n\n# nyc test coverage\n.nyc_output\n\n# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)\n.grunt\n\n# Bower dependency directory (https://bower.io/)\nbower_components\n\n# node-waf configuration\n.lock-wscript\n\n# Compiled binary addons (https://nodejs.org/api/addons.html)\nbuild/Release\n\n# Dependency directories\nnode_modules/\njspm_packages/\n\n# Snowpack dependency directory (https://snowpack.dev/)\nweb_modules/\n\n# TypeScript cache\n*.tsbuildinfo\n\n# Optional npm cache directory\n.npm\n\n# Optional eslint cache\n.eslintcache\n\n# Optional stylelint cache\n.stylelintcache\n\n# Microbundle cache\n.rpt2_cache/\n.rts2_cache_cjs/\n.rts2_cache_es/\n.rts2_cache_umd/\n\n# Optional REPL history\n.node_repl_history\n\n# Output of 'npm pack'\n*.tgz\n\n# Yarn Integrity file\n.yarn-integrity\n\n# dotenv environment variable files\n.env\n.env.development.local\n.env.test.local\n.env.production.local\n.env.local\n\n# parcel-bundler cache 
(https://parceljs.org/)\n.cache\n.parcel-cache\n\n# Next.js build output\n.next\nout\n\n# Nuxt.js build / generate output\n.nuxt\ndist\n\n# Gatsby files\n.cache/\n# Comment in the public line in if your project uses Gatsby and not Next.js\n# https://nextjs.org/blog/next-9-1#public-directory-support\n# public\n\n# vuepress build output\n.vuepress/dist\n\n# vuepress v2.x temp and cache directory\n.temp\n\n# Docusaurus cache and generated files\n.docusaurus\n\n# Serverless directories\n.serverless/\n\n# FuseBox cache\n.fusebox/\n\n# DynamoDB Local files\n.dynamodb/\n\n# TernJS port file\n.tern-port\n\n# Stores Visual Studio Code versions used for testing Visual Studio Code extensions (Sherlock)\n.vscode-test\n\n# yarn v2\n.yarn/cache\n.yarn/unplugged\n.yarn/build-state.yml\n.yarn/install-state.gz\n.pnp.*\n\n### Node Patch ###\n# Serverless Webpack directories\n.webpack/\n\n# Optional stylelint cache\n\n# SvelteKit build / generate output\n.svelte-kit\n\n### VisualStudioCode ###\n.vscode/*\n!.vscode/settings.json\n!.vscode/tasks.json\n!.vscode/launch.json\n!.vscode/extensions.json\n!.vscode/*.code-snippets\n\n# Local History for Visual Studio Code\n.history/\n\n# Built Visual Studio Code Extensions\n*.vsix\n\n### VisualStudioCode Patch ###\n# Ignore all local history of files\n.history\n.ionide\n\n### Windows ###\n# Windows thumbnail cache files\nThumbs.db\nThumbs.db:encryptable\nehthumbs.db\nehthumbs_vista.db\n\n# Dump file\n*.stackdump\n\n# Folder config file\n[Dd]esktop.ini\n\n# Recycle Bin used on file shares\n$RECYCLE.BIN/\n\n# Windows Installer files\n*.cab\n*.msi\n*.msix\n*.msm\n*.msp\n\n# Windows shortcuts\n*.lnk\n\n# End of https://www.toptal.com/developers/gitignore/api/windows,macos,linux,node,visualstudiocode,intellij\ninlang/packages/paraglide/paraglide-sveltekit/example/build\ninlang/packages/paraglide/paraglide-solidstart/example/.solid\n*.h.ts.mjs\n**/vite.config.ts.timestamp-*\n**/vite.config.js.timestamp-*\n\n
# Fink version.json\ninlang/packages/editor/version.json\n\n# Lix website build\npackages/lix-website/build\n\n# gitea test instance data\nlix/packages/gitea\n\n# VitePress cache\npackages/lix-docs/docs/.vitepress/cache\npackages/lix-docs/docs/.vitepress/dist\n\nartifact/*\npackages/engine/artifact/*\ntarget\n\n# Built plugin archive artifacts\npackages/*/*.lixplugin\n"
  },
  {
    "path": ".infisical.json",
    "content": "{\n  \"workspaceId\": \"6e0353e4-b0b0-4c6d-a338-38f09cfafa22\",\n  \"defaultEnvironment\": \"\",\n  \"gitBranchToEnvironmentMapping\": null\n}\n"
  },
  {
    "path": ".prettierignore",
    "content": "## adding the copied sources from the markdown plugin to be able to see changes since copy..\npackages/md-app/src/components/editor/plugins/markdown-plate-fork/**\npackages/md-app/src/components/editor/plugins/*.tsx\npackages/md-app/src/components/editor/plugins/*.ts\n# also exclude ui\npackages/md-app/src/components/plate-ui/*.tsx\npackages/md-app/src/components/plate-ui/*.ts\n\n\n\npackages/md-app/src/components/editor/plugins/markdown/fixtures/*.md"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing\n\n## Prerequisites\n\n- [Node.js](https://nodejs.org/en/) (v20 or higher)\n- [pnpm](https://pnpm.io/) (v8 or higher)\n\n> [!INFO]  \n> If you are developing on Windows, you need to use [WSL](https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux). \n\n## Development\n\n1. Clone the repository\n2. run `pnpm i` in the root of the repo\n3. run `pnpm --filter <package-name>... build` to build the dependencies of the package you want to work on\n4. run `pnpm --filter <package-name> dev|test|...` to run the commands of the package you work on\n   \n### Example\n\n> [!INFO]  \n> You need to run the build for the dependencies of the package via the three dots `...` at least once. [Here](https://pnpm.io/filtering#--filter-package_name-1) is the pnpm documentation for filtering.\n\n1. `pnpm i`\n2. `pnpm --filter @inlang/paraglide-js... build`\n3. `pnpm --filter @inlang/paraglide-js dev`\n\n## Opening a PR\n\n1. run `pnpm run ci` to run all tests and checks\n2. run `npx changeset` to write a changelog and trigger a version bumb. watch this loom video to see how to use changesets: https://www.loom.com/share/1c5467ae3a5243d79040fc3eb5aa12d6\n\n"
  },
  {
    "path": "Cargo.toml",
    "content": "[workspace]\nresolver = \"2\"\nmembers = [\n  \"benchmarks/git-compare\",\n  \"benchmarks/10k-entities\",\n  \"benchmarks/engine2-json-pointer\",\n  \"packages/engine\",\n  \"packages/js-sdk\",\n  \"packages/text-plugin\",\n  \"packages/rs-sdk\",\n  \"packages/plugin-md-v2\",\n  \"packages/cli\",\n]\nexclude = [\"packages/plugin-json-v2\"]\n\n[profile.test]\ndebug = 1\n\n[profile.bench]\ndebug = true\nstrip = false\n"
  },
  {
    "path": "README.md",
    "content": "<p align=\"center\">\n  <img src=\"https://raw.githubusercontent.com/opral/lix/main/assets/logo.svg\" alt=\"Lix\" height=\"60\">\n</p>\n\n<h3 align=\"center\">Embeddable version control system</h3>\n\n<p align=\"center\">\n  <a href=\"https://www.npmjs.com/package/@lix-js/sdk\"><img src=\"https://img.shields.io/npm/dw/%40lix-js%2Fsdk?logo=npm&logoColor=red&label=npm%20downloads\" alt=\"weekly downloads on NPM\"></a>\n  <a href=\"https://discord.gg/gdMPPWy57R\"><img src=\"https://img.shields.io/discord/897438559458430986?style=flat&logo=discord&labelColor=white\" alt=\"Discord\"></a>\n  <a href=\"https://github.com/opral/lix\"><img src=\"https://img.shields.io/github/stars/opral/lix?style=flat&logo=github&color=brightgreen\" alt=\"GitHub Stars\"></a>\n  <a href=\"https://x.com/lixCCS\"><img src=\"https://img.shields.io/badge/Follow-@lixCCS-black?logo=x&logoColor=white\" alt=\"X (Twitter)\"></a>\n</p>\n\n> [!NOTE]\n>\n> **Lix is in alpha** · [Follow progress to v1.0 →](https://github.com/opral/lix/issues/374)\n\n---\n\nLix is an **embeddable version control system for files of any format** (DOCX, XLSX, CAD, PDF, JSON) with semantic, per-entity diffs. Branches, merge, and an immutable change history, exposed as SQL, all in-process.\n\nUse it inside a contract editor, a feature-flag service, an artifact registry, an AI-agent platform, a versioned filesystem, or a domain-specific CLI.\n\n> Lix is to version control what DuckDB is to analytics: an embeddable engine with pluggable support for file formats.\n\n- **It's just a library.** `npm install`, import, run. No daemon, no protocol, no remote.\n- **Semantic per-entity diffs.** XLSX cells, DOCX clauses, CAD parts. Not line-by-line text.\n- **History is SQL.** Diffs, blame, and audit are direct queries against `lix_change`.\n\nThe entity foundation ships today. 
A plugin API is on the [roadmap](#roadmap); once it lands, anyone can author a plugin that turns a file format (DOCX, XLSX, CAD, PDF, anything else) into entities.\n\n[How does Lix compare to Git? →](https://lix.dev/docs/comparison-to-git)\n\n## Getting started\n\n<p>\n  <img src=\"https://cdn.simpleicons.org/javascript/F7DF1E\" alt=\"JavaScript\" width=\"18\" height=\"18\" /> JavaScript ·\n  <a href=\"https://github.com/opral/lix/issues/370\"><img src=\"https://cdn.jsdelivr.net/gh/devicons/devicon/icons/python/python-original.svg\" alt=\"Python\" width=\"18\" height=\"18\" /> Python</a> ·\n  <a href=\"https://github.com/opral/lix/issues/371\"><img src=\"https://cdn.simpleicons.org/rust/CE422B\" alt=\"Rust\" width=\"18\" height=\"18\" /> Rust</a> ·\n  <a href=\"https://github.com/opral/lix/issues/373\"><img src=\"https://cdn.simpleicons.org/go/00ADD8\" alt=\"Go\" width=\"18\" height=\"18\" /> Go</a>\n</p>\n\n```bash\nnpm install @lix-js/sdk\n```\n\n```ts\nimport { openLix } from \"@lix-js/sdk\";\n\nconst lix = await openLix(); // in-memory by default; pass a backend for persistence\n\n// Register a schema for a tracked entity\nawait lix.execute(\n  \"INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))\",\n  [\n    JSON.stringify({\n      \"x-lix-key\": \"task\",\n      \"x-lix-version\": \"1\",\n      \"x-lix-primary-key\": [\"/id\"],\n      type: \"object\",\n      required: [\"id\", \"title\"],\n      properties: {\n        id: { type: \"string\" },\n        title: { type: \"string\" },\n      },\n      additionalProperties: false,\n    }),\n  ],\n);\n\n// Write rows like any SQL table\nawait lix.execute(\n  \"INSERT INTO task (id, title) VALUES ($1, $2)\",\n  [\"t1\", \"Ship v1\"],\n);\n\n// Every change is journaled; query it with SQL\nconst changes = await lix.execute(\n  \"SELECT entity_id, schema_key, snapshot_content FROM lix_change\",\n);\n```\n\n## Semantic change (delta) tracking\n\nUnlike Git's line-based diffs, Lix understands file 
structure through plugins. Lix sees `price: 10 → 12` or `cell B4: pending → shipped`, not \"line 4 changed\" or \"binary files differ\".\n\n### JSON file example\n\n**Before:**\n```json\n{\"theme\":\"light\",\"notifications\":true,\"language\":\"en\"}\n```\n\n**After:**\n```json\n{\"theme\":\"dark\",\"notifications\":true,\"language\":\"en\"}\n```\n\n**Git sees:**\n```diff\n-{\"theme\":\"light\",\"notifications\":true,\"language\":\"en\"}\n+{\"theme\":\"dark\",\"notifications\":true,\"language\":\"en\"}\n```\n\n**Lix sees:**\n\n```diff\nproperty theme:\n- light\n+ dark\n```\n\n### Excel file example\n\nThe same approach works for binary formats. With an XLSX plugin, Lix shows cell-level changes:\n\n**Before:**\n```diff\n  | order_id | product  | status   |\n  | -------- | -------- | -------- |\n  | 1001     | Widget A | shipped  |\n  | 1002     | Widget B | pending  |\n```\n\n**After:**\n```diff\n  | order_id | product  | status   |\n  | -------- | -------- | -------- |\n  | 1001     | Widget A | shipped  |\n  | 1002     | Widget B | shipped  |\n```\n\n**Git sees:**\n\n```diff\n-Binary files differ\n```\n\n**Lix sees:**\n\n```diff\norder_id 1002 status:\n\n- pending\n+ shipped\n```\n\n## How Lix Works\n\nLix uses a SQL database as its query engine and persistence layer. Virtual tables like `file` and `file_history` are exposed on top:\n\n```sql\nSELECT * FROM file_history\nWHERE path = '/orders.xlsx'\nORDER BY created_at DESC;\n```\n\nWhen a file is written, a plugin parses it and detects entity-level changes. These changes (deltas) are stored in the database, enabling branching, merging, and audit trails.\n\n```\n┌─────────────────────────────────────────────────┐\n│                      Lix                        │\n│                                                 │\n│ ┌────────────┐ ┌──────────┐ ┌─────────┐ ┌─────┐ │\n│ │ Filesystem │ │ Branches │ │ History │ │ ... │ │\n
│ └────────────┘ └──────────┘ └─────────┘ └─────┘ │\n└────────────────────────┬────────────────────────┘\n                         │\n                         ▼\n┌─────────────────────────────────────────────────┐\n│                  SQL database                   │\n│            (SQLite, Postgres, etc.)             │\n└─────────────────────────────────────────────────┘\n```\n\n[Read more about Lix architecture →](https://lix.dev/docs/architecture)\n\n## Roadmap\n\n- [x] Core API (<v0.5)\n- [x] ACID transactions (v0.6)\n- [x] Branching, diffing, merging (v0.6)\n- [x] SQL API (v0.6)\n- [x] Stable physical storage layout (v0.6)\n- [ ] Plugin API for file formats (community-authored plugins for DOCX, XLSX, CAD, PDF, …)\n- [ ] Merge conflict semantics and resolution\n- [ ] Working changes & checkpointing\n- [ ] Real-time sync\n\n## Learn More\n\n- **[Getting Started Guide](https://lix.dev/docs/getting-started)** - Build your first app with Lix\n- **[Documentation](https://lix.dev/docs)** - Full API reference and guides\n- **[Discord](https://discord.gg/gdMPPWy57R)** - Get help and join the community\n- **[GitHub](https://github.com/opral/lix)** - Report issues and contribute\n\n## Blog posts\n\n- [Introducing Lix: An embeddable version control system](https://lix.dev/blog/introducing-lix)\n- [What if a Git SDK to build apps exists?](https://samuelstroschein.com/blog/what-if-a-git-sdk-exists)\n- [Git is unsuited for applications](https://samuelstroschein.com/blog/git-limitations)\n- [Does a git-based architecture make sense?](https://samuelstroschein.com/blog/git-based-architecture)\n\n## License\n\n[MIT](https://github.com/opral/lix/blob/main/packages/lix-sdk/LICENSE)\n"
  },
  {
    "path": "benchmarks/10k-entities/Cargo.toml",
    "content": "[package]\nname = \"ten_k_entities_benchmark\"\nversion = \"0.1.0\"\nedition = \"2021\"\npublish = false\n\n[dependencies]\nasync-trait = \"0.1\"\nclap = { version = \"4.5.31\", features = [\"derive\"] }\nlix_engine = { path = \"../../packages/engine\" }\nserde = { version = \"1\", features = [\"derive\"] }\nserde_json = \"1\"\nsqlx = { version = \"0.8.6\", default-features = false, features = [\"sqlite\", \"runtime-tokio-rustls\"] }\ntokio = { version = \"1\", features = [\"sync\"] }\nwasmtime = { version = \"30\", features = [\"component-model\"] }\nwasmtime-wasi = \"30\"\nzip = { version = \"2\", default-features = false, features = [\"deflate\"] }\n"
  },
  {
    "path": "benchmarks/10k-entities/README.md",
    "content": "# 10k Entities Benchmark\n\nThis benchmark compares two engine paths for the same logical JSON document:\n\n1. File write: insert one `.json` blob with `10_000` props through `lix_file`\n2. Direct entity writes: insert `10_000` `json_pointer` rows directly through `lix_state`\n\nThe goal is to separate:\n\n- file/plugin detect overhead\n- direct semantic row write overhead\n\nBoth cases use the real current engine on a fresh file-backed SQLite database.\n\n## Case 1: File Write JSON With 10k Props\n\nTimed section:\n\n- begin a buffered write transaction\n- run `INSERT INTO lix_file (id, path, data)`\n- commit the transaction\n\nThis case includes:\n\n- JSON plugin `detect-changes`\n- semantic row commit\n- live-state rebuild\n- file cache/materialization refresh\n\n## Case 2: Direct Entity Writes 10k\n\nOutside the timer:\n\n- insert an empty `{}` JSON file through `lix_file`\n\nTimed section:\n\n- begin a buffered write transaction\n- run one root-row update plus chunked `INSERT INTO lix_state (...) 
VALUES (...)` statements until all `10_000` property rows are written\n- commit the transaction\n\nThis case excludes file-to-entity detection, but still includes:\n\n- direct semantic row commit\n- live-state rebuild\n- file cache/materialization refresh\n\nThe benchmark treats committed `json_pointer` row count as the hard invariant for\nthis case and records the final `lix_file` payload match as an observation.\n\n## Usage\n\n```bash\ncargo run --release -p ten_k_entities_benchmark -- \\\n  --props 10000 \\\n  --warmups 2 \\\n  --iterations 10 \\\n  --output-dir artifact/benchmarks/10k-entities\n```\n\nThe benchmark writes:\n\n- `artifact/benchmarks/10k-entities/report.json`\n- `artifact/benchmarks/10k-entities/report.md`\n\n## Verification\n\nEach case verifies:\n\n- committed `json_pointer` row count in `lix_state_by_version`\n- file-write case: final `lix_file` JSON must match the expected payload\n- direct-write case: final `lix_file` JSON match is recorded in the report\n\n## Notes\n\n- Warmups absorb first-use wasm/component initialization costs.\n- The direct-write case times `10_000` property inserts plus one root-row update so the JSON semantic root stays in sync with the property rows.\n- The report includes per-case `write`, `commit`, and `total` timing summaries plus a comparison table.\n"
  },
  {
    "path": "benchmarks/10k-entities/src/main.rs",
    "content": "use clap::Parser;\nuse lix_engine::wasm::WasmRuntime;\nuse lix_engine::{boot, BootArgs, ExecuteOptions, LixError, Session, Value};\nuse serde::Serialize;\nuse std::fs;\nuse std::io::{Cursor, Write};\nuse std::path::{Path, PathBuf};\nuse std::process::Command;\nuse std::sync::Arc;\nuse std::time::{Instant, SystemTime, UNIX_EPOCH};\nuse zip::write::SimpleFileOptions;\nuse zip::{CompressionMethod, ZipWriter};\n\nmod sqlite_backend;\nmod wasmtime_runtime;\n\nconst DEFAULT_OUTPUT_DIR: &str = \"artifact/benchmarks/10k-entities\";\nconst DEFAULT_PROPS: usize = 10_000;\nconst DEFAULT_WARMUPS: usize = 2;\nconst DEFAULT_ITERATIONS: usize = 10;\nconst DIRECT_ENTITY_WRITE_CHUNK_SIZE: usize = 250;\n\nconst PLUGIN_KEY: &str = \"json\";\nconst PLUGIN_SCHEMA_KEY: &str = \"json_pointer\";\nconst PLUGIN_ARCHIVE_MANIFEST_JSON: &str = r#\"{\n  \"key\": \"json\",\n  \"runtime\": \"wasm-component-v1\",\n  \"api_version\": \"0.1.0\",\n  \"match\": {\"path_glob\": \"*.json\"},\n  \"detect_changes\": {},\n  \"entry\": \"plugin.wasm\",\n  \"schemas\": [\"schema/json_pointer.json\"]\n}\"#;\n\nconst JSON_POINTER_SCHEMA_JSON: &str =\n    include_str!(\"../../../packages/plugin-json-v2/schema/json_pointer.json\");\n\ntype BenchResult<T> = Result<T, String>;\n\n#[derive(Parser, Debug)]\n#[command(\n    name = \"10k-entities-benchmark\",\n    about = \"Benchmark file-write vs direct-entity-write paths for a 10k-prop JSON document\"\n)]\nstruct Args {\n    #[arg(long, default_value_t = DEFAULT_PROPS)]\n    props: usize,\n\n    #[arg(long, default_value_t = DEFAULT_WARMUPS)]\n    warmups: usize,\n\n    #[arg(long, default_value_t = DEFAULT_ITERATIONS)]\n    iterations: usize,\n\n    #[arg(long, default_value = DEFAULT_OUTPUT_DIR)]\n    output_dir: PathBuf,\n}\n\n#[derive(Debug, Clone, Copy)]\nenum BenchmarkCaseKind {\n    FileWriteJson,\n    DirectEntityWrites,\n}\n\nimpl BenchmarkCaseKind {\n    fn id(self) -> &'static str {\n        match self {\n            Self::FileWriteJson => 
\"file_write_json_10k_props\",\n            Self::DirectEntityWrites => \"direct_entity_writes_10k\",\n        }\n    }\n\n    fn title(self) -> &'static str {\n        match self {\n            Self::FileWriteJson => \"File Write JSON With 10k Props\",\n            Self::DirectEntityWrites => \"Direct Entity Writes 10k\",\n        }\n    }\n\n    fn timed_operation(self) -> &'static str {\n        match self {\n            Self::FileWriteJson => {\n                \"INSERT INTO lix_file for one 10k-prop JSON payload inside a buffered write transaction, then commit\"\n            }\n            Self::DirectEntityWrites => {\n                \"UPDATE the root json_pointer row and INSERT 10k property json_pointer rows inside a buffered write transaction, then commit\"\n            }\n        }\n    }\n\n    fn notes(self) -> Vec<&'static str> {\n        match self {\n            Self::FileWriteJson => vec![\n                \"This is the real file-write path with plugin detect-changes enabled.\",\n                \"The timed write is one INSERT INTO lix_file statement.\",\n                \"The semantic layer derives json_pointer rows during commit.\",\n                \"This case includes plugin detect-changes cost plus direct semantic row commit cost.\",\n            ],\n            Self::DirectEntityWrites => vec![\n                \"This isolates direct semantic writes through the engine without detect-changes.\",\n                \"Outside the timer, the benchmark inserts an empty {} JSON file to establish the file descriptor and root entity.\",\n                \"Inside the timer, it updates the root json_pointer row and inserts the 10k property rows through chunked lix_state statements.\",\n                \"This case still includes normal commit, live-state rebuild, and file-cache refresh work for direct entity writes.\",\n                \"The report records whether lix_file matched the expected payload after commit, but row-count verification is the hard 
invariant for this case.\",\n            ],\n        }\n    }\n\n    fn timed_sql(self) -> &'static str {\n        match self {\n            Self::FileWriteJson => \"INSERT INTO lix_file (id, path, data) VALUES (?1, ?2, ?3)\",\n            Self::DirectEntityWrites => {\n                \"UPDATE lix_state root row; INSERT INTO lix_state (...) VALUES (... x chunk_size), repeated until props rows are written\"\n            }\n        }\n    }\n\n    fn verification(self) -> &'static str {\n        match self {\n            Self::FileWriteJson => {\n                \"Verify committed json_pointer row count for the file and verify lix_file JSON matches the input payload.\"\n            }\n            Self::DirectEntityWrites => {\n                \"Verify committed json_pointer row count for the file and record whether lix_file JSON matched the expected 10k-prop payload.\"\n            }\n        }\n    }\n\n    fn setup_outside_timer(self) -> Vec<&'static str> {\n        match self {\n            Self::FileWriteJson => vec![\n                \"Build plugin-json-v2 wasm.\",\n                \"Create a fresh SQLite database.\",\n                \"Boot the engine and install the JSON plugin.\",\n            ],\n            Self::DirectEntityWrites => vec![\n                \"Build plugin-json-v2 wasm.\",\n                \"Create a fresh SQLite database.\",\n                \"Boot the engine and install the JSON plugin.\",\n                \"Insert an empty {} JSON file so direct state writes target an existing JSON file.\",\n                \"Load the committed root json_pointer entity id for that file.\",\n            ],\n        }\n    }\n}\n\n#[derive(Debug, Serialize)]\nstruct Report {\n    generated_at_unix_ms: u128,\n    benchmark: BenchmarkMetadata,\n    shared_setup: SharedSetupReport,\n    cases: Vec<CaseReport>,\n    comparison: ComparisonSummary,\n}\n\n#[derive(Debug, Serialize)]\nstruct BenchmarkMetadata {\n    name: &'static str,\n    notes: Vec<&'static 
str>,\n}\n\n#[derive(Debug, Serialize)]\nstruct SharedSetupReport {\n    props: usize,\n    input_bytes: usize,\n    direct_property_rows: usize,\n    expected_state_rows_after_commit: u64,\n    plugin_key: &'static str,\n    schema_key: &'static str,\n    plugin_wasm_path: String,\n    sqlite_mode: &'static str,\n}\n\n#[derive(Debug, Serialize)]\nstruct CaseReport {\n    case_id: &'static str,\n    title: &'static str,\n    timed_operation: &'static str,\n    notes: Vec<&'static str>,\n    setup: CaseSetupReport,\n    warmups: Vec<RunSample>,\n    samples: Vec<RunSample>,\n    timing_ms: TimingSummary,\n}\n\n#[derive(Debug, Serialize)]\nstruct CaseSetupReport {\n    timed_rows: usize,\n    timed_sql: &'static str,\n    setup_outside_timer: Vec<&'static str>,\n    verification: &'static str,\n}\n\n#[derive(Debug, Clone, Serialize)]\nstruct RunSample {\n    index: usize,\n    write_ms: f64,\n    commit_ms: f64,\n    total_ms: f64,\n    committed_state_rows: u64,\n    file_matches_expected: bool,\n}\n\n#[derive(Debug, Serialize)]\nstruct TimingSummary {\n    sample_count: usize,\n    write: PhaseSummary,\n    commit: PhaseSummary,\n    total: PhaseSummary,\n}\n\n#[derive(Debug, Serialize)]\nstruct PhaseSummary {\n    mean_ms: f64,\n    median_ms: f64,\n    min_ms: f64,\n    max_ms: f64,\n}\n\n#[derive(Debug, Serialize)]\nstruct ComparisonSummary {\n    file_write_total_mean_ms: f64,\n    direct_entity_total_mean_ms: f64,\n    file_write_minus_direct_entity_total_mean_ms: f64,\n    file_write_commit_mean_ms: f64,\n    direct_entity_commit_mean_ms: f64,\n    file_write_minus_direct_entity_commit_mean_ms: f64,\n    file_write_write_mean_ms: f64,\n    direct_entity_write_mean_ms: f64,\n    file_write_minus_direct_entity_write_mean_ms: f64,\n    file_write_to_direct_entity_total_ratio: f64,\n}\n\nstruct TempSqlitePath {\n    path: PathBuf,\n}\n\nimpl TempSqlitePath {\n    fn new(label: &str) -> Self {\n        Self {\n            path: temp_sqlite_path(label),\n        
}\n    }\n\n    fn path(&self) -> &Path {\n        &self.path\n    }\n}\n\nimpl Drop for TempSqlitePath {\n    fn drop(&mut self) {\n        for suffix in [\"\", \"-wal\", \"-shm\", \"-journal\"] {\n            let _ = std::fs::remove_file(format!(\"{}{}\", self.path.display(), suffix));\n        }\n    }\n}\n\nfn main() {\n    let runtime = tokio::runtime::Builder::new_current_thread()\n        .enable_all()\n        .build()\n        .expect(\"tokio runtime should initialize\");\n\n    if let Err(error) = runtime.block_on(run(Args::parse())) {\n        eprintln!(\"error: {error}\");\n        std::process::exit(1);\n    }\n}\n\nasync fn run(args: Args) -> BenchResult<()> {\n    if args.props == 0 {\n        return Err(\"--props must be greater than 0\".to_string());\n    }\n    if args.iterations == 0 {\n        return Err(\"--iterations must be greater than 0\".to_string());\n    }\n\n    fs::create_dir_all(&args.output_dir).map_err(io_err)?;\n\n    let repo_root = repo_root()?;\n    let plugin_wasm_path = build_plugin_json_v2_wasm(&repo_root)?;\n    let plugin_wasm_bytes = fs::read(&plugin_wasm_path).map_err(io_err)?;\n    let plugin_archive = build_plugin_archive(&plugin_wasm_bytes)?;\n    let payload = build_flat_json_payload(args.props)?;\n    let expected_state_rows_after_commit = (args.props + 1) as u64;\n\n    let wasm_runtime: Arc<dyn WasmRuntime> =\n        Arc::new(wasmtime_runtime::TestWasmtimeRuntime::new().map_err(lix_err)?);\n\n    let file_write_case = run_case(\n        BenchmarkCaseKind::FileWriteJson,\n        &args,\n        Arc::clone(&wasm_runtime),\n        &plugin_archive,\n        &payload,\n        expected_state_rows_after_commit,\n    )\n    .await?;\n    let direct_entity_case = run_case(\n        BenchmarkCaseKind::DirectEntityWrites,\n        &args,\n        Arc::clone(&wasm_runtime),\n        &plugin_archive,\n        &payload,\n        expected_state_rows_after_commit,\n    )\n    .await?;\n    let comparison = 
build_comparison_summary(&file_write_case, &direct_entity_case)?;\n\n    let report = Report {\n        generated_at_unix_ms: now_unix_ms()?,\n        benchmark: BenchmarkMetadata {\n            name: \"10k-entities-json-file-vs-direct-state\",\n            notes: vec![\n                \"Both cases use a fresh file-backed SQLite database per run.\",\n                \"Plugin wasm build, engine init, plugin install, and database setup are outside the timer.\",\n                \"Each case reports write_ms, commit_ms, and total_ms separately.\",\n                \"The goal is to separate file/plugin detect overhead from direct 10k entity write overhead.\",\n            ],\n        },\n        shared_setup: SharedSetupReport {\n            props: args.props,\n            input_bytes: payload.len(),\n            direct_property_rows: args.props,\n            expected_state_rows_after_commit,\n            plugin_key: PLUGIN_KEY,\n            schema_key: PLUGIN_SCHEMA_KEY,\n            plugin_wasm_path: plugin_wasm_path.display().to_string(),\n            sqlite_mode: \"fresh file-backed SQLite database per run\",\n        },\n        cases: vec![file_write_case, direct_entity_case],\n        comparison,\n    };\n\n    let report_json_path = args.output_dir.join(\"report.json\");\n    let report_markdown_path = args.output_dir.join(\"report.md\");\n    fs::write(\n        &report_json_path,\n        serde_json::to_vec_pretty(&report).map_err(serde_err)?,\n    )\n    .map_err(io_err)?;\n    fs::write(&report_markdown_path, render_markdown_report(&report)).map_err(io_err)?;\n\n    print_summary(&report, &report_json_path, &report_markdown_path);\n    Ok(())\n}\n\nasync fn run_case(\n    kind: BenchmarkCaseKind,\n    args: &Args,\n    wasm_runtime: Arc<dyn WasmRuntime>,\n    plugin_archive: &[u8],\n    payload: &[u8],\n    expected_state_rows_after_commit: u64,\n) -> BenchResult<CaseReport> {\n    let mut warmups = Vec::with_capacity(args.warmups);\n    for index in 
0..args.warmups {\n        warmups.push(\n            run_sample(\n                kind,\n                index,\n                Arc::clone(&wasm_runtime),\n                plugin_archive,\n                payload,\n                expected_state_rows_after_commit,\n            )\n            .await?,\n        );\n    }\n\n    let mut samples = Vec::with_capacity(args.iterations);\n    for index in 0..args.iterations {\n        samples.push(\n            run_sample(\n                kind,\n                index,\n                Arc::clone(&wasm_runtime),\n                plugin_archive,\n                payload,\n                expected_state_rows_after_commit,\n            )\n            .await?,\n        );\n    }\n\n    Ok(CaseReport {\n        case_id: kind.id(),\n        title: kind.title(),\n        timed_operation: kind.timed_operation(),\n        notes: kind.notes(),\n        setup: CaseSetupReport {\n            timed_rows: match kind {\n                BenchmarkCaseKind::FileWriteJson => 1,\n                BenchmarkCaseKind::DirectEntityWrites => args.props + 1,\n            },\n            timed_sql: kind.timed_sql(),\n            setup_outside_timer: kind.setup_outside_timer(),\n            verification: kind.verification(),\n        },\n        warmups,\n        samples: samples.clone(),\n        timing_ms: summarize_timings(&samples)?,\n    })\n}\n\nasync fn run_sample(\n    kind: BenchmarkCaseKind,\n    index: usize,\n    wasm_runtime: Arc<dyn WasmRuntime>,\n    plugin_archive: &[u8],\n    payload: &[u8],\n    expected_state_rows_after_commit: u64,\n) -> BenchResult<RunSample> {\n    match kind {\n        BenchmarkCaseKind::FileWriteJson => {\n            run_file_write_sample(\n                index,\n                wasm_runtime,\n                plugin_archive,\n                payload,\n                expected_state_rows_after_commit,\n            )\n            .await\n        }\n        BenchmarkCaseKind::DirectEntityWrites => {\n          
  run_direct_entity_write_sample(\n                index,\n                wasm_runtime,\n                plugin_archive,\n                payload,\n                expected_state_rows_after_commit,\n            )\n            .await\n        }\n    }\n}\n\nasync fn run_file_write_sample(\n    index: usize,\n    wasm_runtime: Arc<dyn WasmRuntime>,\n    plugin_archive: &[u8],\n    payload: &[u8],\n    expected_state_rows_after_commit: u64,\n) -> BenchResult<RunSample> {\n    let sqlite_path = TempSqlitePath::new(&format!(\"10k-entities-file-write-{index}\"));\n    let session = open_prepared_session(sqlite_path.path(), wasm_runtime, plugin_archive).await?;\n\n    let file_id = format!(\"json-file-write-{index}\");\n    let file_path = format!(\"/{file_id}.json\");\n    let active_version_id = session.active_version_id();\n\n    let mut transaction = Some(\n        session\n            .begin_transaction_with_options(ExecuteOptions::default())\n            .await\n            .map_err(lix_err)?,\n    );\n\n    let started_at = Instant::now();\n\n    let write_started_at = Instant::now();\n    let write_result = {\n        let transaction = transaction\n            .as_mut()\n            .expect(\"transaction should be available during write phase\");\n        transaction\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) VALUES (?1, ?2, ?3)\",\n                &[\n                    Value::Text(file_id.clone()),\n                    Value::Text(file_path),\n                    Value::Blob(payload.to_vec()),\n                ],\n            )\n            .await\n            .map_err(lix_err)\n    };\n    if let Err(error) = write_result {\n        if let Some(transaction) = transaction.take() {\n            let _ = transaction.rollback().await;\n        }\n        return Err(error);\n    }\n    let write_ms = write_started_at.elapsed().as_secs_f64() * 1000.0;\n\n    let commit_started_at = Instant::now();\n    transaction\n        
.take()\n        .expect(\"transaction should be available for commit\")\n        .commit()\n        .await\n        .map_err(lix_err)?;\n    let commit_ms = commit_started_at.elapsed().as_secs_f64() * 1000.0;\n\n    let total_ms = started_at.elapsed().as_secs_f64() * 1000.0;\n\n    finish_sample(\n        index,\n        &session,\n        &file_id,\n        &active_version_id,\n        payload,\n        expected_state_rows_after_commit,\n        true,\n        write_ms,\n        commit_ms,\n        total_ms,\n    )\n    .await\n}\n\nasync fn run_direct_entity_write_sample(\n    index: usize,\n    wasm_runtime: Arc<dyn WasmRuntime>,\n    plugin_archive: &[u8],\n    payload: &[u8],\n    expected_state_rows_after_commit: u64,\n) -> BenchResult<RunSample> {\n    let sqlite_path = TempSqlitePath::new(&format!(\"10k-entities-direct-state-{index}\"));\n    let session = open_prepared_session(sqlite_path.path(), wasm_runtime, plugin_archive).await?;\n\n    let file_id = format!(\"json-direct-state-{index}\");\n    let file_path = format!(\"/{file_id}.json\");\n    let active_version_id = session.active_version_id();\n\n    bootstrap_empty_json_file(&session, &file_id, &file_path).await?;\n    let root_entity_id =\n        load_root_json_pointer_entity_id(&session, &file_id, &active_version_id).await?;\n    let direct_write_sql_batches = build_direct_entity_write_sql_batches(\n        &file_id,\n        &root_entity_id,\n        payload,\n        DIRECT_ENTITY_WRITE_CHUNK_SIZE,\n    )?;\n\n    let mut transaction = Some(\n        session\n            .begin_transaction_with_options(ExecuteOptions::default())\n            .await\n            .map_err(lix_err)?,\n    );\n\n    let started_at = Instant::now();\n\n    let write_started_at = Instant::now();\n    let write_result = {\n        let transaction = transaction\n            .as_mut()\n            .expect(\"transaction should be available during write phase\");\n        let mut result = Ok(());\n        for sql in 
&direct_write_sql_batches {\n            if let Err(error) = transaction.execute(sql, &[]).await.map_err(lix_err) {\n                result = Err(error);\n                break;\n            }\n        }\n        result\n    };\n    if let Err(error) = write_result {\n        if let Some(transaction) = transaction.take() {\n            let _ = transaction.rollback().await;\n        }\n        return Err(error);\n    }\n    let write_ms = write_started_at.elapsed().as_secs_f64() * 1000.0;\n\n    let commit_started_at = Instant::now();\n    transaction\n        .take()\n        .expect(\"transaction should be available for commit\")\n        .commit()\n        .await\n        .map_err(lix_err)?;\n    let commit_ms = commit_started_at.elapsed().as_secs_f64() * 1000.0;\n\n    let total_ms = started_at.elapsed().as_secs_f64() * 1000.0;\n\n    finish_sample(\n        index,\n        &session,\n        &file_id,\n        &active_version_id,\n        payload,\n        expected_state_rows_after_commit,\n        false,\n        write_ms,\n        commit_ms,\n        total_ms,\n    )\n    .await\n}\n\nasync fn finish_sample(\n    index: usize,\n    session: &Session,\n    file_id: &str,\n    active_version_id: &str,\n    expected_payload: &[u8],\n    expected_state_rows_after_commit: u64,\n    enforce_file_match: bool,\n    write_ms: f64,\n    commit_ms: f64,\n    total_ms: f64,\n) -> BenchResult<RunSample> {\n    let committed_state_rows = scalar_count(\n        session,\n        \"SELECT COUNT(*) \\\n         FROM lix_state_by_version \\\n         WHERE file_id = ?1 \\\n           AND version_id = ?2 \\\n           AND schema_key = ?3 \\\n           AND snapshot_content IS NOT NULL\",\n        &[\n            Value::Text(file_id.to_string()),\n            Value::Text(active_version_id.to_string()),\n            Value::Text(PLUGIN_SCHEMA_KEY.to_string()),\n        ],\n    )\n    .await?;\n\n    if committed_state_rows != expected_state_rows_after_commit {\n        return 
Err(format!(\n            \"expected {expected_state_rows_after_commit} committed json_pointer rows for '{file_id}', got {committed_state_rows}\"\n        ));\n    }\n\n    let file_matches_expected =\n        match verify_file_json_matches(session, file_id, expected_payload).await {\n            Ok(()) => true,\n            Err(error) if !enforce_file_match => {\n                let _ = error;\n                false\n            }\n            Err(error) => return Err(error),\n        };\n\n    Ok(RunSample {\n        index,\n        write_ms,\n        commit_ms,\n        total_ms,\n        committed_state_rows,\n        file_matches_expected,\n    })\n}\n\nasync fn open_prepared_session(\n    sqlite_path: &Path,\n    wasm_runtime: Arc<dyn WasmRuntime>,\n    plugin_archive: &[u8],\n) -> BenchResult<Session> {\n    let backend = sqlite_backend::BenchSqliteBackend::file_backed(sqlite_path).map_err(lix_err)?;\n    let mut boot_args = BootArgs::new(Box::new(backend), wasm_runtime);\n    boot_args.access_to_internal = true;\n\n    let engine = Arc::new(boot(boot_args));\n    engine.initialize().await.map_err(lix_err)?;\n    let session = engine.open_session().await.map_err(lix_err)?;\n    session\n        .install_plugin(plugin_archive)\n        .await\n        .map_err(lix_err)?;\n\n    Ok(session)\n}\n\nasync fn bootstrap_empty_json_file(\n    session: &Session,\n    file_id: &str,\n    file_path: &str,\n) -> BenchResult<()> {\n    session\n        .execute(\n            \"INSERT INTO lix_file (id, path, data) VALUES (?1, ?2, ?3)\",\n            &[\n                Value::Text(file_id.to_string()),\n                Value::Text(file_path.to_string()),\n                Value::Blob(b\"{}\".to_vec()),\n            ],\n        )\n        .await\n        .map_err(lix_err)?;\n    Ok(())\n}\n\nasync fn load_root_json_pointer_entity_id(\n    session: &Session,\n    file_id: &str,\n    active_version_id: &str,\n) -> BenchResult<String> {\n    let result = session\n        
.execute(\n            \"SELECT entity_id \\\n             FROM lix_state_by_version \\\n             WHERE file_id = ?1 \\\n               AND version_id = ?2 \\\n               AND schema_key = ?3 \\\n               AND snapshot_content IS NOT NULL \\\n             ORDER BY entity_id ASC \\\n             LIMIT 1\",\n            &[\n                Value::Text(file_id.to_string()),\n                Value::Text(active_version_id.to_string()),\n                Value::Text(PLUGIN_SCHEMA_KEY.to_string()),\n            ],\n        )\n        .await\n        .map_err(lix_err)?;\n    let value = result\n        .statements\n        .first()\n        .and_then(|statement| statement.rows.first())\n        .and_then(|row| row.first())\n        .ok_or_else(|| format!(\"query returned no root json_pointer row for '{file_id}'\"))?;\n\n    match value {\n        Value::Text(text) => Ok(text.clone()),\n        other => Err(format!(\n            \"expected text entity_id for root json_pointer row of '{file_id}', got {other:?}\"\n        )),\n    }\n}\n\nfn build_direct_entity_write_sql_batches(\n    file_id: &str,\n    root_entity_id: &str,\n    payload: &[u8],\n    chunk_size: usize,\n) -> BenchResult<Vec<String>> {\n    if chunk_size == 0 {\n        return Err(\"direct entity write chunk size must be greater than 0\".to_string());\n    }\n\n    let expected_json: serde_json::Value = serde_json::from_slice(payload).map_err(serde_err)?;\n    let object = expected_json\n        .as_object()\n        .ok_or_else(|| \"expected generated payload to be a JSON object\".to_string())?;\n\n    let root_snapshot_content = serde_json::json!({\n        \"path\": root_entity_id,\n        \"value\": expected_json,\n    });\n    let root_snapshot_content = serde_json::to_string(&root_snapshot_content).map_err(serde_err)?;\n    let root_entity_id_json =\n        serde_json::to_string(&serde_json::json!([root_entity_id])).map_err(serde_err)?;\n\n    let mut statements = vec![format!(\n        
\"UPDATE lix_state \\\n         SET snapshot_content = '{}' \\\n         WHERE entity_id = lix_json('{}') \\\n           AND file_id = '{}' \\\n           AND schema_key = '{}' \\\n           AND plugin_key = '{}'\",\n        escape_sql_string(&root_snapshot_content),\n        escape_sql_string(&root_entity_id_json),\n        escape_sql_string(file_id),\n        PLUGIN_SCHEMA_KEY,\n        PLUGIN_KEY,\n    )];\n\n    let entries = object\n        .iter()\n        .map(|(key, value)| -> BenchResult<String> {\n            let entity_id = format!(\"/{}\", escape_json_pointer_segment(key));\n            let snapshot_content = serde_json::json!({\n                \"path\": entity_id,\n                \"value\": value,\n            });\n            let snapshot_content = serde_json::to_string(&snapshot_content).map_err(serde_err)?;\n            Ok(format!(\n                \"('{}', '{}', '{}', '{}', '{}')\",\n                escape_sql_string(&entity_id),\n                escape_sql_string(file_id),\n                PLUGIN_SCHEMA_KEY,\n                PLUGIN_KEY,\n                escape_sql_string(&snapshot_content),\n            ))\n        })\n        .collect::<BenchResult<Vec<_>>>()?;\n\n    for chunk in entries.chunks(chunk_size) {\n        statements.push(format!(\n            \"INSERT INTO lix_state (entity_id, file_id, schema_key, plugin_key, snapshot_content) VALUES {}\",\n            chunk.join(\", \")\n        ));\n    }\n\n    Ok(statements)\n}\n\nasync fn verify_file_json_matches(\n    session: &Session,\n    file_id: &str,\n    expected_payload: &[u8],\n) -> BenchResult<()> {\n    let result = session\n        .execute(\n            \"SELECT data FROM lix_file WHERE id = ?1 LIMIT 1\",\n            &[Value::Text(file_id.to_string())],\n        )\n        .await\n        .map_err(lix_err)?;\n    let value = result\n        .statements\n        .first()\n        .and_then(|statement| statement.rows.first())\n        .and_then(|row| row.first())\n        
.ok_or_else(|| format!(\"query returned no file data row for '{file_id}'\"))?;\n\n    let actual_bytes = match value {\n        Value::Blob(bytes) => bytes.clone(),\n        other => {\n            return Err(format!(\n                \"expected blob data from lix_file for '{file_id}', got {other:?}\"\n            ));\n        }\n    };\n\n    let actual_json: serde_json::Value =\n        serde_json::from_slice(&actual_bytes).map_err(serde_err)?;\n    let expected_json: serde_json::Value =\n        serde_json::from_slice(expected_payload).map_err(serde_err)?;\n    if actual_json != expected_json {\n        return Err(format!(\n            \"lix_file JSON for '{file_id}' did not match expected payload\"\n        ));\n    }\n\n    Ok(())\n}\n\nfn build_plugin_archive(plugin_wasm_bytes: &[u8]) -> BenchResult<Vec<u8>> {\n    let options = SimpleFileOptions::default().compression_method(CompressionMethod::Stored);\n    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));\n\n    writer\n        .start_file(\"manifest.json\", options)\n        .map_err(io_err)?;\n    writer\n        .write_all(PLUGIN_ARCHIVE_MANIFEST_JSON.as_bytes())\n        .map_err(io_err)?;\n\n    writer.start_file(\"plugin.wasm\", options).map_err(io_err)?;\n    writer.write_all(plugin_wasm_bytes).map_err(io_err)?;\n\n    writer\n        .start_file(\"schema/json_pointer.json\", options)\n        .map_err(io_err)?;\n    writer\n        .write_all(JSON_POINTER_SCHEMA_JSON.as_bytes())\n        .map_err(io_err)?;\n\n    writer\n        .finish()\n        .map(|cursor| cursor.into_inner())\n        .map_err(io_err)\n}\n\nasync fn scalar_count(session: &Session, sql: &str, params: &[Value]) -> BenchResult<u64> {\n    let result = session.execute(sql, params).await.map_err(lix_err)?;\n    let value = result\n        .statements\n        .first()\n        .and_then(|statement| statement.rows.first())\n        .and_then(|row| row.first())\n        .ok_or_else(|| format!(\"query returned no scalar value: 
{sql}\"))?;\n\n    match value {\n        Value::Integer(number) => {\n            if *number < 0 {\n                Err(format!(\"query returned negative count {number}: {sql}\"))\n            } else {\n                Ok(*number as u64)\n            }\n        }\n        other => Err(format!(\n            \"query returned non-integer scalar {other:?}: {sql}\"\n        )),\n    }\n}\n\nfn summarize_timings(samples: &[RunSample]) -> BenchResult<TimingSummary> {\n    if samples.is_empty() {\n        return Err(\"cannot summarize empty samples\".to_string());\n    }\n\n    Ok(TimingSummary {\n        sample_count: samples.len(),\n        write: summarize_phase(samples.iter().map(|sample| sample.write_ms).collect())?,\n        commit: summarize_phase(samples.iter().map(|sample| sample.commit_ms).collect())?,\n        total: summarize_phase(samples.iter().map(|sample| sample.total_ms).collect())?,\n    })\n}\n\nfn summarize_phase(mut values: Vec<f64>) -> BenchResult<PhaseSummary> {\n    if values.is_empty() {\n        return Err(\"cannot summarize empty timing phase\".to_string());\n    }\n\n    values.sort_by(|left, right| left.partial_cmp(right).unwrap_or(std::cmp::Ordering::Equal));\n\n    let sum = values.iter().sum::<f64>();\n    let median_ms = if values.len() % 2 == 0 {\n        let upper = values.len() / 2;\n        (values[upper - 1] + values[upper]) / 2.0\n    } else {\n        values[values.len() / 2]\n    };\n\n    Ok(PhaseSummary {\n        mean_ms: sum / values.len() as f64,\n        median_ms,\n        min_ms: values[0],\n        max_ms: values[values.len() - 1],\n    })\n}\n\nfn build_comparison_summary(\n    file_write_case: &CaseReport,\n    direct_entity_case: &CaseReport,\n) -> BenchResult<ComparisonSummary> {\n    let file_write_total_mean_ms = file_write_case.timing_ms.total.mean_ms;\n    let direct_entity_total_mean_ms = direct_entity_case.timing_ms.total.mean_ms;\n    // Guard against division by zero before computing the ratio.\n    if direct_entity_total_mean_ms == 0.0 {\n        return Err(\"cannot compare cases: direct-entity total mean is zero\".to_string());\n    }\n    let ratio = file_write_total_mean_ms / direct_entity_total_mean_ms;\n\n    Ok(ComparisonSummary {\n        file_write_total_mean_ms,\n        direct_entity_total_mean_ms,\n        file_write_minus_direct_entity_total_mean_ms: file_write_total_mean_ms\n            - direct_entity_total_mean_ms,\n        file_write_commit_mean_ms: file_write_case.timing_ms.commit.mean_ms,\n        direct_entity_commit_mean_ms: direct_entity_case.timing_ms.commit.mean_ms,\n        file_write_minus_direct_entity_commit_mean_ms: file_write_case.timing_ms.commit.mean_ms\n            - direct_entity_case.timing_ms.commit.mean_ms,\n        file_write_write_mean_ms: file_write_case.timing_ms.write.mean_ms,\n        direct_entity_write_mean_ms: direct_entity_case.timing_ms.write.mean_ms,\n        file_write_minus_direct_entity_write_mean_ms: file_write_case.timing_ms.write.mean_ms\n            - direct_entity_case.timing_ms.write.mean_ms,\n        file_write_to_direct_entity_total_ratio: ratio,\n    })\n}\n\nfn build_flat_json_payload(props: usize) -> BenchResult<Vec<u8>> {\n    let mut root = serde_json::Map::new();\n    for index in 0..props {\n        root.insert(\n            format!(\"prop_{index:05}\"),\n            serde_json::Value::String(format!(\"value_{index:05}\")),\n        );\n    }\n    serde_json::to_vec(&serde_json::Value::Object(root)).map_err(serde_err)\n}\n\nfn build_plugin_json_v2_wasm(repo_root: &Path) -> BenchResult<PathBuf> {\n    let manifest_path = repo_root.join(\"packages/plugin-json-v2/Cargo.toml\");\n    let wasm_path =\n        repo_root.join(\"packages/plugin-json-v2/target/wasm32-wasip2/release/plugin_json_v2.wasm\");\n\n    let build = || {\n        Command::new(\"cargo\")\n            .arg(\"build\")\n            .arg(\"--manifest-path\")\n            .arg(&manifest_path)\n            .arg(\"--target\")\n            .arg(\"wasm32-wasip2\")\n            
.arg(\"--release\")\n            .output()\n            .map_err(io_err)\n    };\n\n    let output = build()?;\n    if !output.status.success() {\n        let stderr = String::from_utf8_lossy(&output.stderr);\n        if stderr.contains(\"wasm32-wasip2\")\n            && (stderr.contains(\"target may not be installed\")\n                || stderr.contains(\"can't find crate for `core`\"))\n        {\n            let rustup = Command::new(\"rustup\")\n                .arg(\"target\")\n                .arg(\"add\")\n                .arg(\"wasm32-wasip2\")\n                .output()\n                .map_err(io_err)?;\n            if !rustup.status.success() {\n                return Err(format!(\n                    \"rustup target add wasm32-wasip2 failed:\\n{}\",\n                    String::from_utf8_lossy(&rustup.stderr)\n                ));\n            }\n            let retry = build()?;\n            if !retry.status.success() {\n                return Err(format!(\n                    \"cargo build for plugin_json_v2 failed after installing wasm32-wasip2:\\n{}\",\n                    String::from_utf8_lossy(&retry.stderr)\n                ));\n            }\n        } else {\n            return Err(format!(\n                \"cargo build for plugin_json_v2 failed:\\n{}\",\n                String::from_utf8_lossy(&output.stderr)\n            ));\n        }\n    }\n\n    if !wasm_path.exists() {\n        return Err(format!(\n            \"plugin wasm build succeeded but output was missing at {}\",\n            wasm_path.display()\n        ));\n    }\n\n    Ok(wasm_path)\n}\n\nfn render_markdown_report(report: &Report) -> String {\n    let case_sections = report\n        .cases\n        .iter()\n        .map(render_case_markdown)\n        .collect::<Vec<_>>()\n        .join(\"\\n\\n\");\n\n    format!(\n        \"# 10k Entities Benchmark Comparison\\n\\n\\\n- Props: {}\\n\\\n- Input bytes: {}\\n\\\n- Direct property rows inside timed direct-write case: 
{}\\n\\\n- Expected committed json_pointer rows after each case: {}\\n\\\n- Plugin key: `{}`\\n\\\n- Schema key: `{}`\\n\\\n- SQLite mode: `{}`\\n\\\n- Plugin wasm: `{}`\\n\\n\\\n## Comparison\\n\\n\\\n| metric | file write | direct entities | delta |\\n\\\n| --- | ---: | ---: | ---: |\\n\\\n| write mean ms | {:.3} | {:.3} | {:.3} |\\n\\\n| commit mean ms | {:.3} | {:.3} | {:.3} |\\n\\\n| total mean ms | {:.3} | {:.3} | {:.3} |\\n\\\n| total ratio | {:.3}x | 1.000x | {:.3}x |\\n\\n\\\n{}\\n\",\n        report.shared_setup.props,\n        report.shared_setup.input_bytes,\n        report.shared_setup.direct_property_rows,\n        report.shared_setup.expected_state_rows_after_commit,\n        report.shared_setup.plugin_key,\n        report.shared_setup.schema_key,\n        report.shared_setup.sqlite_mode,\n        report.shared_setup.plugin_wasm_path,\n        report.comparison.file_write_write_mean_ms,\n        report.comparison.direct_entity_write_mean_ms,\n        report\n            .comparison\n            .file_write_minus_direct_entity_write_mean_ms,\n        report.comparison.file_write_commit_mean_ms,\n        report.comparison.direct_entity_commit_mean_ms,\n        report\n            .comparison\n            .file_write_minus_direct_entity_commit_mean_ms,\n        report.comparison.file_write_total_mean_ms,\n        report.comparison.direct_entity_total_mean_ms,\n        report\n            .comparison\n            .file_write_minus_direct_entity_total_mean_ms,\n        report.comparison.file_write_to_direct_entity_total_ratio,\n        report.comparison.file_write_to_direct_entity_total_ratio,\n        case_sections,\n    )\n}\n\nfn render_case_markdown(case: &CaseReport) -> String {\n    let sample_rows = case\n        .samples\n        .iter()\n        .map(|sample| {\n            format!(\n                \"| {} | {:.3} | {:.3} | {:.3} | {} | {} |\",\n                sample.index,\n                sample.write_ms,\n                sample.commit_ms,\n   
             sample.total_ms,\n                sample.committed_state_rows,\n                sample.file_matches_expected\n            )\n        })\n        .collect::<Vec<_>>()\n        .join(\"\\n\");\n\n    let notes = case\n        .notes\n        .iter()\n        .map(|note| format!(\"- {note}\"))\n        .collect::<Vec<_>>()\n        .join(\"\\n\");\n    let setup_notes = case\n        .setup\n        .setup_outside_timer\n        .iter()\n        .map(|note| format!(\"- {note}\"))\n        .collect::<Vec<_>>()\n        .join(\"\\n\");\n\n    format!(\n        \"## {}\\n\\n\\\nTimed operation: {}\\n\\n\\\n{}\\n\\n\\\nSetup outside timer:\\n\\\n{}\\n\\n\\\n- Timed rows: {}\\n\\\n- Timed SQL: `{}`\\n\\\n- Verification: {}\\n\\n\\\n### Timing\\n\\n\\\n| phase | mean ms | median ms | min ms | max ms |\\n\\\n| --- | ---: | ---: | ---: | ---: |\\n\\\n| write | {:.3} | {:.3} | {:.3} | {:.3} |\\n\\\n| commit | {:.3} | {:.3} | {:.3} | {:.3} |\\n\\\n| total | {:.3} | {:.3} | {:.3} | {:.3} |\\n\\n\\\n### Samples\\n\\n\\\n| run | write ms | commit ms | total ms | committed state rows | file matches expected |\\n\\\n| --- | ---: | ---: | ---: | ---: | --- |\\n\\\n{}\\n\",\n        case.title,\n        case.timed_operation,\n        notes,\n        setup_notes,\n        case.setup.timed_rows,\n        case.setup.timed_sql,\n        case.setup.verification,\n        case.timing_ms.write.mean_ms,\n        case.timing_ms.write.median_ms,\n        case.timing_ms.write.min_ms,\n        case.timing_ms.write.max_ms,\n        case.timing_ms.commit.mean_ms,\n        case.timing_ms.commit.median_ms,\n        case.timing_ms.commit.min_ms,\n        case.timing_ms.commit.max_ms,\n        case.timing_ms.total.mean_ms,\n        case.timing_ms.total.median_ms,\n        case.timing_ms.total.min_ms,\n        case.timing_ms.total.max_ms,\n        sample_rows,\n    )\n}\n\nfn print_summary(report: &Report, report_json_path: &Path, report_markdown_path: &Path) {\n    println!(\"10k entities 
benchmark comparison\");\n    println!(\n        \"props={} input_bytes={} expected_state_rows_after_commit={}\",\n        report.shared_setup.props,\n        report.shared_setup.input_bytes,\n        report.shared_setup.expected_state_rows_after_commit\n    );\n\n    for case in &report.cases {\n        println!(\"case={} title={}\", case.case_id, case.title);\n        println!(\n            \"write_ms mean={:.3} median={:.3} min={:.3} max={:.3}\",\n            case.timing_ms.write.mean_ms,\n            case.timing_ms.write.median_ms,\n            case.timing_ms.write.min_ms,\n            case.timing_ms.write.max_ms,\n        );\n        println!(\n            \"commit_ms mean={:.3} median={:.3} min={:.3} max={:.3}\",\n            case.timing_ms.commit.mean_ms,\n            case.timing_ms.commit.median_ms,\n            case.timing_ms.commit.min_ms,\n            case.timing_ms.commit.max_ms,\n        );\n        println!(\n            \"total_ms mean={:.3} median={:.3} min={:.3} max={:.3} samples={}\",\n            case.timing_ms.total.mean_ms,\n            case.timing_ms.total.median_ms,\n            case.timing_ms.total.min_ms,\n            case.timing_ms.total.max_ms,\n            case.timing_ms.sample_count,\n        );\n    }\n\n    println!(\n        \"comparison total_mean_delta_ms={:.3} total_ratio={:.3}x\",\n        report\n            .comparison\n            .file_write_minus_direct_entity_total_mean_ms,\n        report.comparison.file_write_to_direct_entity_total_ratio,\n    );\n    println!(\"report_json={}\", report_json_path.display());\n    println!(\"report_markdown={}\", report_markdown_path.display());\n}\n\nfn repo_root() -> BenchResult<PathBuf> {\n    Path::new(env!(\"CARGO_MANIFEST_DIR\"))\n        .join(\"../..\")\n        .canonicalize()\n        .map_err(io_err)\n}\n\nfn temp_sqlite_path(label: &str) -> PathBuf {\n    let nanos = SystemTime::now()\n        .duration_since(UNIX_EPOCH)\n        .expect(\"system time should be after unix 
epoch\")\n        .as_nanos();\n    std::env::temp_dir().join(format!(\"lix-{label}-{nanos}.sqlite\"))\n}\n\nfn now_unix_ms() -> BenchResult<u128> {\n    Ok(SystemTime::now()\n        .duration_since(UNIX_EPOCH)\n        .map_err(io_err)?\n        .as_millis())\n}\n\nfn escape_sql_string(value: &str) -> String {\n    value.replace('\\'', \"''\")\n}\n\nfn escape_json_pointer_segment(segment: &str) -> String {\n    segment.replace('~', \"~0\").replace('/', \"~1\")\n}\n\nfn io_err(error: impl std::fmt::Display) -> String {\n    error.to_string()\n}\n\nfn serde_err(error: impl std::fmt::Display) -> String {\n    error.to_string()\n}\n\nfn lix_err(error: LixError) -> String {\n    format!(\"{}: {}\", error.code, error.description)\n}\n"
  },
  {
    "path": "benchmarks/10k-entities/src/sqlite_backend.rs",
    "content": "use std::path::Path;\nuse std::str::FromStr;\nuse std::sync::Arc;\n\nuse lix_engine::{\n    collapse_prepared_batch_for_dialect, LixBackend, LixBackendTransaction, LixError,\n    PreparedBatch, QueryResult, SqlDialect, TransactionMode, Value,\n};\nuse sqlx::sqlite::{SqliteConnectOptions, SqlitePoolOptions};\nuse sqlx::{Column, Executor, Row, TypeInfo, ValueRef};\nuse tokio::sync::OnceCell;\n\n#[derive(Clone)]\npub struct BenchSqliteBackend {\n    inner: Arc<BenchSqliteBackendInner>,\n}\n\nstruct BenchSqliteBackendInner {\n    filename: String,\n    pool: OnceCell<sqlx::SqlitePool>,\n}\n\nstruct BenchSqliteTransaction {\n    conn: sqlx::pool::PoolConnection<sqlx::Sqlite>,\n    mode: TransactionMode,\n}\n\nimpl BenchSqliteBackend {\n    pub fn file_backed(path: &Path) -> Result<Self, LixError> {\n        if let Some(parent) = path.parent() {\n            std::fs::create_dir_all(parent).map_err(|error| LixError {\n                code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                description: format!(\n                    \"failed to create sqlite benchmark directory {}: {error}\",\n                    parent.display()\n                ),\n                hint: None,\n            })?;\n        }\n\n        Ok(Self {\n            inner: Arc::new(BenchSqliteBackendInner {\n                filename: path.display().to_string(),\n                pool: OnceCell::const_new(),\n            }),\n        })\n    }\n\n    async fn pool(&self) -> Result<&sqlx::SqlitePool, LixError> {\n        self.inner\n            .pool\n            .get_or_try_init(|| async {\n                let conn = if self.inner.filename == \":memory:\" {\n                    \"sqlite::memory:\".to_string()\n                } else if self.inner.filename.starts_with(\"sqlite:\")\n                    || self.inner.filename.starts_with(\"file:\")\n                {\n                    self.inner.filename.clone()\n                } else {\n                    format!(\"sqlite://{}\", 
self.inner.filename)\n                };\n\n                let options = SqliteConnectOptions::from_str(&conn)\n                    .map_err(|error| LixError {\n                        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                        description: error.to_string(),\n                        hint: None,\n                    })?\n                    .create_if_missing(true)\n                    .foreign_keys(true)\n                    .busy_timeout(std::time::Duration::from_secs(30));\n\n                SqlitePoolOptions::new()\n                    .max_connections(1)\n                    .connect_with(options)\n                    .await\n                    .map_err(|error| LixError {\n                        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                        description: error.to_string(),\n                        hint: None,\n                    })\n            })\n            .await\n    }\n}\n\n#[async_trait::async_trait(?Send)]\nimpl LixBackend for BenchSqliteBackend {\n    fn dialect(&self) -> SqlDialect {\n        SqlDialect::Sqlite\n    }\n\n    async fn execute(&self, sql: &str, params: &[Value]) -> Result<QueryResult, LixError> {\n        let mut transaction = self.begin_transaction(TransactionMode::Deferred).await?;\n        let result = transaction.execute(sql, params).await;\n        match result {\n            Ok(result) => {\n                transaction.commit().await?;\n                Ok(result)\n            }\n            Err(error) => {\n                let _ = transaction.rollback().await;\n                Err(error)\n            }\n        }\n    }\n\n    async fn begin_transaction(\n        &self,\n        mode: TransactionMode,\n    ) -> Result<Box<dyn LixBackendTransaction + '_>, LixError> {\n        let pool = self.pool().await?;\n        let mut conn = pool.acquire().await.map_err(|error| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            description: error.to_string(),\n           
 hint: None,\n        })?;\n\n        sqlx::query(match mode {\n            TransactionMode::Read | TransactionMode::Deferred => \"BEGIN\",\n            TransactionMode::Write => \"BEGIN IMMEDIATE\",\n        })\n        .execute(&mut *conn)\n        .await\n        .map_err(|error| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            description: error.to_string(),\n            hint: None,\n        })?;\n\n        Ok(Box::new(BenchSqliteTransaction { conn, mode }))\n    }\n\n    async fn begin_savepoint(\n        &self,\n        _name: &str,\n    ) -> Result<Box<dyn LixBackendTransaction + '_>, LixError> {\n        self.begin_transaction(TransactionMode::Write).await\n    }\n}\n\n#[async_trait::async_trait(?Send)]\nimpl LixBackendTransaction for BenchSqliteTransaction {\n    fn dialect(&self) -> SqlDialect {\n        SqlDialect::Sqlite\n    }\n\n    fn mode(&self) -> TransactionMode {\n        self.mode\n    }\n\n    async fn execute(&mut self, sql: &str, params: &[Value]) -> Result<QueryResult, LixError> {\n        execute_query_with_connection(&mut self.conn, sql, params).await\n    }\n\n    async fn execute_batch(&mut self, batch: &PreparedBatch) -> Result<QueryResult, LixError> {\n        let collapsed = collapse_prepared_batch_for_dialect(batch, self.dialect())?;\n        if collapsed.sql.trim().is_empty() {\n            return Ok(QueryResult {\n                rows: Vec::new(),\n                columns: Vec::new(),\n            });\n        }\n\n        self.conn\n            .execute(collapsed.sql.as_str())\n            .await\n            .map_err(|error| LixError {\n                code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                description: error.to_string(),\n                hint: None,\n            })?;\n\n        Ok(QueryResult {\n            rows: Vec::new(),\n            columns: Vec::new(),\n        })\n    }\n\n    async fn commit(mut self: Box<Self>) -> Result<(), LixError> {\n        sqlx::query(\"COMMIT\")\n   
         .execute(&mut *self.conn)\n            .await\n            .map_err(|error| LixError {\n                code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                description: error.to_string(),\n                hint: None,\n            })?;\n        Ok(())\n    }\n\n    async fn rollback(mut self: Box<Self>) -> Result<(), LixError> {\n        sqlx::query(\"ROLLBACK\")\n            .execute(&mut *self.conn)\n            .await\n            .map_err(|error| LixError {\n                code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                description: error.to_string(),\n                hint: None,\n            })?;\n        Ok(())\n    }\n}\n\nasync fn execute_query_with_connection(\n    conn: &mut sqlx::pool::PoolConnection<sqlx::Sqlite>,\n    sql: &str,\n    params: &[Value],\n) -> Result<QueryResult, LixError> {\n    let mut query = sqlx::query(sql);\n    for param in params {\n        query = bind_param_sqlite(query, param);\n    }\n\n    let rows = query\n        .fetch_all(&mut **conn)\n        .await\n        .map_err(|error| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            description: error.to_string(),\n            hint: None,\n        })?;\n\n    let columns = rows\n        .first()\n        .map(|row| {\n            row.columns()\n                .iter()\n                .map(|column| column.name().to_string())\n                .collect::<Vec<_>>()\n        })\n        .unwrap_or_default();\n\n    let mut result_rows = Vec::with_capacity(rows.len());\n    for row in rows {\n        let mut out = Vec::with_capacity(row.columns().len());\n        for index in 0..row.columns().len() {\n            out.push(map_sqlite_value(&row, index)?);\n        }\n        result_rows.push(out);\n    }\n\n    Ok(QueryResult {\n        rows: result_rows,\n        columns,\n    })\n}\n\nfn bind_param_sqlite<'q>(\n    query: sqlx::query::Query<'q, sqlx::Sqlite, sqlx::sqlite::SqliteArguments<'q>>,\n    param: &Value,\n) -> 
sqlx::query::Query<'q, sqlx::Sqlite, sqlx::sqlite::SqliteArguments<'q>> {\n    match param {\n        Value::Null => query.bind::<Option<i64>>(None),\n        Value::Boolean(value) => query.bind(*value),\n        Value::Integer(value) => query.bind(*value),\n        Value::Real(value) => query.bind(*value),\n        Value::Text(value) => query.bind(value.clone()),\n        Value::Blob(value) => query.bind(value.clone()),\n        Value::Json(value) => query.bind(value.to_string()),\n    }\n}\n\nfn map_sqlite_value(row: &sqlx::sqlite::SqliteRow, index: usize) -> Result<Value, LixError> {\n    let raw = row.try_get_raw(index).map_err(|error| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        description: error.to_string(),\n        hint: None,\n    })?;\n\n    if raw.is_null() {\n        return Ok(Value::Null);\n    }\n\n    match raw.type_info().name() {\n        \"INTEGER\" => row.try_get::<i64, _>(index).map(Value::Integer),\n        \"REAL\" => row.try_get::<f64, _>(index).map(Value::Real),\n        \"TEXT\" => row.try_get::<String, _>(index).map(Value::Text),\n        \"BLOB\" => row.try_get::<Vec<u8>, _>(index).map(Value::Blob),\n        _ => row\n            .try_get::<String, _>(index)\n            .map(Value::Text)\n            .or_else(|_| row.try_get::<i64, _>(index).map(Value::Integer))\n            .or_else(|_| row.try_get::<f64, _>(index).map(Value::Real))\n            .or_else(|_| row.try_get::<Vec<u8>, _>(index).map(Value::Blob)),\n    }\n    .map_err(|error| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        description: error.to_string(),\n        hint: None,\n    })\n}\n"
  },
  {
    "path": "benchmarks/10k-entities/src/wasmtime_runtime.rs",
    "content": "use std::collections::HashMap;\nuse std::hash::{DefaultHasher, Hash, Hasher};\nuse std::sync::{Arc, Mutex};\n\nuse async_trait::async_trait;\nuse lix_engine::wasm::{WasmComponentInstance, WasmLimits, WasmRuntime};\nuse lix_engine::{CanonicalJson, LixError};\nuse wasmtime::component::{Component, Linker, ResourceTable};\nuse wasmtime::{Config, Engine, Store};\nuse wasmtime_wasi::{IoView, WasiCtx, WasiCtxBuilder, WasiView};\n\nmod plugin_bindings {\n    wasmtime::component::bindgen!({\n        path: \"../../packages/engine/wit\",\n        world: \"plugin\",\n    });\n}\n\n#[derive(Debug, serde::Deserialize)]\nstruct WirePluginFile {\n    id: String,\n    path: String,\n    data: Vec<u8>,\n}\n\n#[derive(Debug, serde::Deserialize)]\nstruct WireDetectChangesRequest {\n    before: Option<WirePluginFile>,\n    after: WirePluginFile,\n    state_context: Option<WireDetectStateContext>,\n}\n\n#[derive(Debug, serde::Deserialize)]\nstruct WireDetectStateContext {\n    active_state: Option<Vec<WireActiveStateRow>>,\n}\n\n#[derive(Debug, serde::Deserialize)]\nstruct WireActiveStateRow {\n    entity_id: String,\n    schema_key: Option<String>,\n    snapshot_content: Option<CanonicalJson>,\n    file_id: Option<String>,\n    plugin_key: Option<String>,\n    version_id: Option<String>,\n    change_id: Option<String>,\n    metadata: Option<CanonicalJson>,\n    created_at: Option<String>,\n    updated_at: Option<String>,\n}\n\n#[derive(Debug, serde::Deserialize)]\nstruct WirePluginEntityChange {\n    entity_id: String,\n    schema_key: String,\n    snapshot_content: Option<CanonicalJson>,\n}\n\n#[derive(Debug, serde::Deserialize)]\nstruct WireApplyChangesRequest {\n    file: WirePluginFile,\n    changes: Vec<WirePluginEntityChange>,\n}\n\n#[derive(Debug, serde::Serialize)]\nstruct WirePluginEntityChangeOutput {\n    entity_id: String,\n    schema_key: String,\n    snapshot_content: Option<CanonicalJson>,\n}\n\npub struct TestWasmtimeRuntime {\n    engine: Engine,\n    
component_cache: Mutex<HashMap<ComponentCacheKey, Arc<Component>>>,\n}\n\nimpl TestWasmtimeRuntime {\n    pub fn new() -> Result<Self, LixError> {\n        let mut config = Config::new();\n        config.wasm_component_model(true);\n        config.async_support(false);\n        config.consume_fuel(true);\n\n        let engine = Engine::new(&config).map_err(|error| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            description: format!(\"Failed to initialize wasmtime engine: {error}\"),\n            hint: None,\n        })?;\n\n        Ok(Self {\n            engine,\n            component_cache: Mutex::new(HashMap::new()),\n        })\n    }\n}\n\n#[derive(Clone, PartialEq, Eq, Hash)]\nstruct ComponentCacheKey {\n    wasm_fingerprint: u64,\n    wasm_len: usize,\n}\n\nimpl ComponentCacheKey {\n    fn from_bytes(bytes: &[u8]) -> Self {\n        Self {\n            wasm_fingerprint: wasm_fingerprint(bytes),\n            wasm_len: bytes.len(),\n        }\n    }\n}\n\nstruct TestWasmtimeInstance {\n    engine: Engine,\n    component: Arc<Component>,\n}\n\nstruct WasiState {\n    table: ResourceTable,\n    ctx: WasiCtx,\n}\n\nimpl IoView for WasiState {\n    fn table(&mut self) -> &mut ResourceTable {\n        &mut self.table\n    }\n}\n\nimpl WasiView for WasiState {\n    fn ctx(&mut self) -> &mut WasiCtx {\n        &mut self.ctx\n    }\n}\n\n#[async_trait(?Send)]\nimpl WasmRuntime for TestWasmtimeRuntime {\n    async fn init_component(\n        &self,\n        bytes: Vec<u8>,\n        _limits: WasmLimits,\n    ) -> Result<Arc<dyn WasmComponentInstance>, LixError> {\n        let cache_key = ComponentCacheKey::from_bytes(&bytes);\n\n        if let Some(component) = self\n            .component_cache\n            .lock()\n            .expect(\"component cache mutex poisoned\")\n            .get(&cache_key)\n            .cloned()\n        {\n            return Ok(Arc::new(TestWasmtimeInstance {\n                engine: self.engine.clone(),\n       
         component,\n            }));\n        }\n\n        let compiled =\n            Arc::new(\n                Component::new(&self.engine, &bytes).map_err(|error| LixError {\n                    code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                    description: format!(\"Failed to compile wasm component: {error}\"),\n                    hint: None,\n                })?,\n            );\n\n        let component = {\n            let mut cache = self\n                .component_cache\n                .lock()\n                .expect(\"component cache mutex poisoned\");\n            cache\n                .entry(cache_key)\n                .or_insert_with(|| compiled.clone())\n                .clone()\n        };\n\n        Ok(Arc::new(TestWasmtimeInstance {\n            engine: self.engine.clone(),\n            component,\n        }))\n    }\n}\n\n#[async_trait(?Send)]\nimpl WasmComponentInstance for TestWasmtimeInstance {\n    async fn call(&self, export: &str, input: &[u8]) -> Result<Vec<u8>, LixError> {\n        let mut store = Store::new(\n            &self.engine,\n            WasiState {\n                table: ResourceTable::new(),\n                ctx: WasiCtxBuilder::new().build(),\n            },\n        );\n        store.set_fuel(u64::MAX).map_err(|error| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            description: format!(\"Failed to configure wasm fuel: {error}\"),\n            hint: None,\n        })?;\n\n        let mut linker = Linker::new(&self.engine);\n        wasmtime_wasi::add_to_linker_sync(&mut linker).map_err(|error| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            description: format!(\"Failed to add wasi imports to linker: {error}\"),\n            hint: None,\n        })?;\n\n        let bindings =\n            plugin_bindings::Plugin::instantiate(&mut store, self.component.as_ref(), &linker)\n                .map_err(|error| LixError {\n                    code: 
\"LIX_ERROR_UNKNOWN\".to_string(),\n                    description: format!(\"Failed to instantiate wasm component: {error}\"),\n                    hint: None,\n                })?;\n\n        match export {\n            \"detect-changes\" | \"api#detect-changes\" => {\n                let request: WireDetectChangesRequest =\n                    serde_json::from_slice(input).map_err(|error| LixError {\n                        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                        description: format!(\n                            \"Failed to decode detect-changes request payload: {error}\"\n                        ),\n                        hint: None,\n                    })?;\n\n                let before = request.before.map(wire_file_to_binding);\n                let after = wire_file_to_binding(request.after);\n                let state_context = request.state_context.map(wire_state_context_to_binding);\n\n                let result = bindings\n                    .lix_plugin_api()\n                    .call_detect_changes(\n                        &mut store,\n                        before.as_ref(),\n                        &after,\n                        state_context.as_ref(),\n                    )\n                    .map_err(|error| LixError {\n                        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                        description: format!(\"Wasm call failed for export '{export}': {error}\"),\n                        hint: None,\n                    })?;\n\n                match result {\n                    Ok(changes) => {\n                        let wire = changes\n                            .into_iter()\n                            .map(binding_change_to_wire)\n                            .collect::<Result<Vec<_>, _>>()?;\n                        serde_json::to_vec(&wire).map_err(|error| LixError {\n                            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                            description: 
format!(\n                                \"Failed to encode detect-changes response payload: {error}\"\n                            ),\n                            hint: None,\n                        })\n                    }\n                    Err(error) => Err(map_plugin_error(error)),\n                }\n            }\n            \"apply-changes\" | \"api#apply-changes\" => {\n                let request: WireApplyChangesRequest =\n                    serde_json::from_slice(input).map_err(|error| LixError {\n                        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                        description: format!(\n                            \"Failed to decode apply-changes request payload: {error}\"\n                        ),\n                        hint: None,\n                    })?;\n\n                let file = wire_file_to_binding(request.file);\n                let changes = request\n                    .changes\n                    .into_iter()\n                    .map(wire_change_to_binding)\n                    .collect::<Vec<_>>();\n\n                let result = bindings\n                    .lix_plugin_api()\n                    .call_apply_changes(&mut store, &file, &changes)\n                    .map_err(|error| LixError {\n                        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                        description: format!(\"Wasm call failed for export '{export}': {error}\"),\n                        hint: None,\n                    })?;\n\n                match result {\n                    Ok(output) => Ok(output),\n                    Err(error) => Err(map_plugin_error(error)),\n                }\n            }\n            other => Err(LixError {\n                code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                description: format!(\"Unsupported export '{other}' for TestWasmtimeRuntime\"),\n                hint: None,\n            }),\n        }\n    }\n}\n\nfn wasm_fingerprint(bytes: &[u8]) -> u64 {\n    let 
mut hasher = DefaultHasher::new();\n    bytes.hash(&mut hasher);\n    hasher.finish()\n}\n\nfn wire_file_to_binding(file: WirePluginFile) -> plugin_bindings::exports::lix::plugin::api::File {\n    plugin_bindings::exports::lix::plugin::api::File {\n        id: file.id,\n        path: file.path,\n        data: file.data,\n    }\n}\n\nfn wire_change_to_binding(\n    change: WirePluginEntityChange,\n) -> plugin_bindings::exports::lix::plugin::api::EntityChange {\n    plugin_bindings::exports::lix::plugin::api::EntityChange {\n        entity_id: change.entity_id,\n        schema_key: change.schema_key,\n        snapshot_content: change.snapshot_content.map(Into::into),\n    }\n}\n\nfn wire_state_context_to_binding(\n    context: WireDetectStateContext,\n) -> plugin_bindings::exports::lix::plugin::api::DetectStateContext {\n    plugin_bindings::exports::lix::plugin::api::DetectStateContext {\n        active_state: context.active_state.map(|rows| {\n            rows.into_iter()\n                .map(wire_active_state_row_to_binding)\n                .collect::<Vec<_>>()\n        }),\n    }\n}\n\nfn wire_active_state_row_to_binding(\n    row: WireActiveStateRow,\n) -> plugin_bindings::exports::lix::plugin::api::ActiveStateRow {\n    plugin_bindings::exports::lix::plugin::api::ActiveStateRow {\n        entity_id: row.entity_id,\n        schema_key: row.schema_key,\n        snapshot_content: row.snapshot_content.map(Into::into),\n        file_id: row.file_id,\n        plugin_key: row.plugin_key,\n        version_id: row.version_id,\n        change_id: row.change_id,\n        metadata: row.metadata.map(Into::into),\n        created_at: row.created_at,\n        updated_at: row.updated_at,\n    }\n}\n\nfn binding_change_to_wire(\n    change: plugin_bindings::exports::lix::plugin::api::EntityChange,\n) -> Result<WirePluginEntityChangeOutput, LixError> {\n    Ok(WirePluginEntityChangeOutput {\n        entity_id: change.entity_id,\n        schema_key: change.schema_key,\n        
snapshot_content: change\n            .snapshot_content\n            .map(CanonicalJson::from_text)\n            .transpose()?,\n    })\n}\n\nfn map_plugin_error(error: plugin_bindings::exports::lix::plugin::api::PluginError) -> LixError {\n    match error {\n        plugin_bindings::exports::lix::plugin::api::PluginError::InvalidInput(message) => {\n            LixError {\n                code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                description: format!(\"Plugin invalid-input error: {message}\"),\n                hint: None,\n            }\n        }\n        plugin_bindings::exports::lix::plugin::api::PluginError::Internal(message) => LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            description: format!(\"Plugin internal error: {message}\"),\n            hint: None,\n        },\n    }\n}\n"
  },
  {
    "path": "benchmarks/engine2-json-pointer/Cargo.toml",
    "content": "[package]\nname = \"engine2_json_pointer_benchmark\"\nversion = \"0.1.0\"\nedition = \"2021\"\npublish = false\n\n[dependencies]\nasync-trait = \"0.1\"\nclap = { version = \"4.5.31\", features = [\"derive\"] }\nlix_rs_sdk = { path = \"../../packages/rs-sdk\" }\nrusqlite = { version = \"0.32\", features = [\"bundled\"] }\nserde = { version = \"1\", features = [\"derive\"] }\nserde_json = \"1\"\ntokio = { version = \"1\", features = [\"rt\"] }\n"
  },
  {
    "path": "benchmarks/engine2-json-pointer/README.md",
    "content": "# Engine2 JSON Pointer Benchmark\n\nThis benchmark exercises engine2 end to end on a fresh on-disk SQLite-backed KV\nstore.\n\nThe first case measures direct insertion of `json_pointer` semantic rows through\n`lix_state`:\n\n- initialize engine2 storage\n- open the generated main version\n- register `packages/plugin-json-v2/schema/json_pointer.json`\n- insert `N` JSON pointer rows in chunked SQL statements\n- verify the committed row count through the normal SQL surface\n\n## Usage\n\n```bash\ncargo run --release -p engine2_json_pointer_benchmark -- \\\n  --rows 10000 \\\n  --warmups 1 \\\n  --iterations 5 \\\n  --output-dir artifact/benchmarks/engine2-json-pointer\n```\n\nFast CI smoke:\n\n```bash\ncargo run --release -p engine2_json_pointer_benchmark -- \\\n  --rows 10000 \\\n  --warmups 0 \\\n  --iterations 1 \\\n  --output-dir artifact/benchmarks/engine2-json-pointer\n```\n"
  },
  {
    "path": "benchmarks/engine2-json-pointer/src/main.rs",
    "content": "use clap::Parser;\nuse lix_rs_sdk::{open_lix, ExecuteResult, Lix, LixError, OpenLixOptions, Value};\nuse serde::Serialize;\nuse std::fs;\nuse std::path::PathBuf;\nuse std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};\nuse tokio::runtime::Builder;\n\nmod sqlite_backend;\n\nuse sqlite_backend::Engine2SqliteBackend;\n\nconst DEFAULT_OUTPUT_DIR: &str = \"artifact/benchmarks/engine2-json-pointer\";\nconst DEFAULT_ROWS: usize = 10_000;\nconst DEFAULT_WARMUPS: usize = 1;\nconst DEFAULT_ITERATIONS: usize = 5;\nconst DEFAULT_CHUNK_SIZE: usize = 500;\nconst JSON_POINTER_SCHEMA_JSON: &str =\n    include_str!(\"../../../packages/plugin-json-v2/schema/json_pointer.json\");\n\ntype BenchResult<T> = Result<T, String>;\n\n#[derive(Parser, Debug)]\n#[command(\n    name = \"engine2-json-pointer-benchmark\",\n    about = \"Benchmark engine2 json_pointer writes on an on-disk SQLite KV backend\"\n)]\nstruct Args {\n    #[arg(long, default_value_t = DEFAULT_ROWS)]\n    rows: usize,\n\n    #[arg(long, default_value_t = DEFAULT_WARMUPS)]\n    warmups: usize,\n\n    #[arg(long, default_value_t = DEFAULT_ITERATIONS)]\n    iterations: usize,\n\n    #[arg(long, default_value_t = DEFAULT_CHUNK_SIZE)]\n    chunk_size: usize,\n\n    #[arg(long, default_value = DEFAULT_OUTPUT_DIR)]\n    output_dir: PathBuf,\n\n    #[arg(long)]\n    keep_databases: bool,\n}\n\n#[derive(Debug, Serialize)]\nstruct Report {\n    generated_at_unix_ms: u128,\n    benchmark: &'static str,\n    rows: usize,\n    chunk_size: usize,\n    warmups: Vec<RunSample>,\n    samples: Vec<RunSample>,\n    timing_ms: TimingSummary,\n}\n\n#[derive(Debug, Clone, Serialize)]\nstruct RunSample {\n    index: usize,\n    sqlite_path: String,\n    insert_ms: f64,\n    verify_ms: f64,\n    total_ms: f64,\n    committed_rows: usize,\n}\n\n#[derive(Debug, Serialize)]\nstruct TimingSummary {\n    sample_count: usize,\n    insert: PhaseSummary,\n    verify: PhaseSummary,\n    total: PhaseSummary,\n}\n\n#[derive(Debug, 
Serialize)]\nstruct PhaseSummary {\n    mean_ms: f64,\n    median_ms: f64,\n    min_ms: f64,\n    max_ms: f64,\n}\n\nfn main() {\n    if let Err(error) = run() {\n        eprintln!(\"{error}\");\n        std::process::exit(1);\n    }\n}\n\nfn run() -> BenchResult<()> {\n    let args = Args::parse();\n    fs::create_dir_all(&args.output_dir).map_err(|error| {\n        format!(\n            \"failed to create output directory {}: {error}\",\n            args.output_dir.display()\n        )\n    })?;\n\n    let runtime = Builder::new_current_thread()\n        .enable_all()\n        .build()\n        .map_err(|error| format!(\"failed to create tokio runtime: {error}\"))?;\n\n    let mut warmups = Vec::new();\n    for index in 0..args.warmups {\n        warmups.push(runtime.block_on(run_insert_case(&args, \"warmup\", index))?);\n    }\n\n    let mut samples = Vec::new();\n    for index in 0..args.iterations {\n        samples.push(runtime.block_on(run_insert_case(&args, \"sample\", index))?);\n    }\n\n    let report = Report {\n        generated_at_unix_ms: unix_ms(),\n        benchmark: \"engine2_json_pointer_insert\",\n        rows: args.rows,\n        chunk_size: args.chunk_size,\n        timing_ms: summarize_samples(&samples),\n        warmups,\n        samples,\n    };\n\n    let json_path = args.output_dir.join(\"report.json\");\n    let md_path = args.output_dir.join(\"report.md\");\n    fs::write(\n        &json_path,\n        serde_json::to_string_pretty(&report)\n            .map_err(|error| format!(\"failed to serialize report: {error}\"))?,\n    )\n    .map_err(|error| format!(\"failed to write {}: {error}\", json_path.display()))?;\n    fs::write(&md_path, render_markdown_report(&report))\n        .map_err(|error| format!(\"failed to write {}: {error}\", md_path.display()))?;\n\n    println!(\"wrote {}\", json_path.display());\n    println!(\"wrote {}\", md_path.display());\n    println!(\n        \"insert_{}: mean {:.2}ms, median {:.2}ms\",\n        
args.rows, report.timing_ms.insert.mean_ms, report.timing_ms.insert.median_ms\n    );\n\n    Ok(())\n}\n\nasync fn run_insert_case(args: &Args, label: &str, index: usize) -> BenchResult<RunSample> {\n    let db_path = args\n        .output_dir\n        .join(format!(\"{label}-{index}-{}.sqlite\", std::process::id()));\n    let cleanup = CleanupDatabase {\n        path: db_path.clone(),\n        keep: args.keep_databases,\n    };\n    cleanup.remove_existing()?;\n\n    let backend = Engine2SqliteBackend::file_backed(&db_path).map_err(display_lix_error)?;\n    let lix = open_lix(OpenLixOptions {\n        backend: Some(Box::new(backend)),\n    })\n    .await\n    .map_err(display_lix_error)?;\n\n    ensure_benchmark_file_descriptor(&lix).await?;\n    register_json_pointer_schema(&lix).await?;\n\n    let started = Instant::now();\n    let insert_started = Instant::now();\n    for sql in build_insert_batches(args.rows, args.chunk_size)? {\n        let result = lix.execute(&sql, &[]).await.map_err(display_lix_error)?;\n        let ExecuteResult::AffectedRows(affected_rows) = result else {\n            return Err(\"json pointer insert should return affected rows\".to_string());\n        };\n        if affected_rows == 0 {\n            return Err(\"json pointer insert unexpectedly affected zero rows\".to_string());\n        }\n    }\n    let insert_elapsed = insert_started.elapsed();\n\n    let verify_started = Instant::now();\n    let committed_rows = count_json_pointer_rows(&lix).await?;\n    let verify_elapsed = verify_started.elapsed();\n    if committed_rows != args.rows {\n        return Err(format!(\n            \"committed json_pointer row count mismatch: expected {}, got {committed_rows}\",\n            args.rows\n        ));\n    }\n\n    let total_elapsed = started.elapsed();\n    let sample = RunSample {\n        index,\n        sqlite_path: db_path.display().to_string(),\n        insert_ms: millis(insert_elapsed),\n        verify_ms: millis(verify_elapsed),\n  
      total_ms: millis(total_elapsed),\n        committed_rows,\n    };\n\n    drop(cleanup);\n    Ok(sample)\n}\n\nasync fn register_json_pointer_schema(lix: &Lix) -> BenchResult<()> {\n    let schema = sql_string(JSON_POINTER_SCHEMA_JSON);\n    let sql = format!(\n        \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n         VALUES (lix_json('{schema}'), true, true)\"\n    );\n    match lix.execute(&sql, &[]).await.map_err(display_lix_error)? {\n        ExecuteResult::AffectedRows(1) => Ok(()),\n        other => Err(format!(\n            \"schema registration returned unexpected result: {other:?}\"\n        )),\n    }\n}\n\nasync fn ensure_benchmark_file_descriptor(lix: &Lix) -> BenchResult<()> {\n    let snapshot = serde_json::json!({\n        \"id\": \"bench.json\",\n        \"directory_id\": null,\n        \"name\": \"bench\",\n        \"extension\": \"json\",\n        \"hidden\": false\n    });\n    let sql = format!(\n        \"INSERT INTO lix_state (\\\n         entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n         ) VALUES (\\\n         'bench.json', 'lix_file_descriptor', NULL, lix_json('{}'), false, false\\\n         )\",\n        sql_string(&snapshot.to_string())\n    );\n    match lix.execute(&sql, &[]).await.map_err(display_lix_error)? 
{\n        ExecuteResult::AffectedRows(1) => Ok(()),\n        other => Err(format!(\n            \"file descriptor insert returned unexpected result: {other:?}\"\n        )),\n    }\n}\n\nfn build_insert_batches(row_count: usize, chunk_size: usize) -> BenchResult<Vec<String>> {\n    if chunk_size == 0 {\n        return Err(\"chunk_size must be greater than zero\".to_string());\n    }\n\n    let mut batches = Vec::new();\n    let mut next = 0;\n    while next < row_count {\n        let end = (next + chunk_size).min(row_count);\n        let mut sql = String::from(\n            \"INSERT INTO lix_state (\\\n             entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n             ) VALUES \",\n        );\n        for index in next..end {\n            if index > next {\n                sql.push(',');\n            }\n            let pointer = format!(\"/prop_{index}\");\n            let snapshot = serde_json::json!({\n                \"path\": pointer,\n                \"value\": {\n                    \"index\": index,\n                    \"label\": format!(\"value-{index}\")\n                }\n            });\n            sql.push_str(&format!(\n                \"('{}','json_pointer','bench.json',lix_json('{}'),false,false)\",\n                sql_string(&pointer),\n                sql_string(&snapshot.to_string())\n            ));\n        }\n        batches.push(sql);\n        next = end;\n    }\n    Ok(batches)\n}\n\nasync fn count_json_pointer_rows(lix: &Lix) -> BenchResult<usize> {\n    let result = lix\n        .execute(\n            \"SELECT COUNT(*) \\\n             FROM lix_state \\\n             WHERE schema_key = 'json_pointer' \\\n               AND file_id = 'bench.json' \\\n               AND snapshot_content IS NOT NULL\",\n            &[],\n        )\n        .await\n        .map_err(display_lix_error)?;\n    let ExecuteResult::Rows(rows) = result else {\n        return Err(\"COUNT query should return rows\".to_string());\n    
};\n    let Some(row) = rows.rows().first() else {\n        return Err(\"COUNT query returned no rows\".to_string());\n    };\n    match row.values().first() {\n        Some(Value::Integer(value)) => {\n            usize::try_from(*value).map_err(|_| format!(\"COUNT returned negative value: {value}\"))\n        }\n        other => Err(format!(\"COUNT returned unexpected value: {other:?}\")),\n    }\n}\n\nfn summarize_samples(samples: &[RunSample]) -> TimingSummary {\n    TimingSummary {\n        sample_count: samples.len(),\n        insert: summarize_phase(samples.iter().map(|sample| sample.insert_ms).collect()),\n        verify: summarize_phase(samples.iter().map(|sample| sample.verify_ms).collect()),\n        total: summarize_phase(samples.iter().map(|sample| sample.total_ms).collect()),\n    }\n}\n\nfn summarize_phase(mut values: Vec<f64>) -> PhaseSummary {\n    if values.is_empty() {\n        return PhaseSummary {\n            mean_ms: 0.0,\n            median_ms: 0.0,\n            min_ms: 0.0,\n            max_ms: 0.0,\n        };\n    }\n    values.sort_by(|left, right| left.total_cmp(right));\n    let sum = values.iter().sum::<f64>();\n    let midpoint = values.len() / 2;\n    let median = if values.len() % 2 == 0 {\n        (values[midpoint - 1] + values[midpoint]) / 2.0\n    } else {\n        values[midpoint]\n    };\n    PhaseSummary {\n        mean_ms: sum / values.len() as f64,\n        median_ms: median,\n        min_ms: values[0],\n        max_ms: values[values.len() - 1],\n    }\n}\n\nfn render_markdown_report(report: &Report) -> String {\n    format!(\n        \"# Engine2 JSON Pointer Benchmark\\n\\n\\\n         - Rows: `{}`\\n\\\n         - Chunk size: `{}`\\n\\\n         - Samples: `{}`\\n\\n\\\n         | Phase | Mean ms | Median ms | Min ms | Max ms |\\n\\\n         | --- | ---: | ---: | ---: | ---: |\\n\\\n         | Insert | {:.2} | {:.2} | {:.2} | {:.2} |\\n\\\n         | Verify | {:.2} | {:.2} | {:.2} | {:.2} |\\n\\\n         | Total | {:.2} 
| {:.2} | {:.2} | {:.2} |\\n\",\n        report.rows,\n        report.chunk_size,\n        report.timing_ms.sample_count,\n        report.timing_ms.insert.mean_ms,\n        report.timing_ms.insert.median_ms,\n        report.timing_ms.insert.min_ms,\n        report.timing_ms.insert.max_ms,\n        report.timing_ms.verify.mean_ms,\n        report.timing_ms.verify.median_ms,\n        report.timing_ms.verify.min_ms,\n        report.timing_ms.verify.max_ms,\n        report.timing_ms.total.mean_ms,\n        report.timing_ms.total.median_ms,\n        report.timing_ms.total.min_ms,\n        report.timing_ms.total.max_ms,\n    )\n}\n\nfn sql_string(value: &str) -> String {\n    value.replace('\\'', \"''\")\n}\n\nfn display_lix_error(error: LixError) -> String {\n    format!(\"{}: {}\", error.code, error.description)\n}\n\nfn millis(duration: Duration) -> f64 {\n    duration.as_secs_f64() * 1000.0\n}\n\nfn unix_ms() -> u128 {\n    SystemTime::now()\n        .duration_since(UNIX_EPOCH)\n        .map(|duration| duration.as_millis())\n        .unwrap_or_default()\n}\n\nstruct CleanupDatabase {\n    path: PathBuf,\n    keep: bool,\n}\n\nimpl CleanupDatabase {\n    fn remove_existing(&self) -> BenchResult<()> {\n        for path in self.paths() {\n            if path.exists() {\n                fs::remove_file(&path)\n                    .map_err(|error| format!(\"failed to remove {}: {error}\", path.display()))?;\n            }\n        }\n        Ok(())\n    }\n\n    fn paths(&self) -> Vec<PathBuf> {\n        [\"\", \"-wal\", \"-shm\", \"-journal\"]\n            .into_iter()\n            .map(|suffix| PathBuf::from(format!(\"{}{}\", self.path.display(), suffix)))\n            .collect()\n    }\n}\n\nimpl Drop for CleanupDatabase {\n    fn drop(&mut self) {\n        if self.keep {\n            return;\n        }\n        for path in self.paths() {\n            let _ = fs::remove_file(path);\n        }\n    }\n}\n"
  },
  {
    "path": "benchmarks/engine2-json-pointer/src/sqlite_backend.rs",
    "content": "use async_trait::async_trait;\nuse lix_rs_sdk::{\n    KvPair, KvScanRange, LixBackend, LixBackendTransaction, LixError, TransactionBeginMode,\n};\nuse rusqlite::{params, Connection, OptionalExtension};\nuse std::path::Path;\nuse std::sync::{Arc, Mutex, MutexGuard};\n\nconst KV_TABLE: &str = \"lix_engine2_kv\";\n\n#[derive(Clone)]\npub struct Engine2SqliteBackend {\n    conn: Arc<Mutex<Connection>>,\n}\n\npub struct Engine2SqliteTransaction {\n    conn: Arc<Mutex<Connection>>,\n    finalized: bool,\n    mode: TransactionBeginMode,\n}\n\nimpl Engine2SqliteBackend {\n    pub fn file_backed(path: &Path) -> Result<Self, LixError> {\n        if let Some(parent) = path.parent() {\n            std::fs::create_dir_all(parent).map_err(|error| {\n                LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\n                        \"failed to create sqlite benchmark directory {}: {error}\",\n                        parent.display()\n                    ),\n                )\n            })?;\n        }\n\n        let conn = Connection::open(path).map_err(sqlite_error)?;\n        configure_connection(&conn)?;\n        ensure_kv_table(&conn)?;\n        Ok(Self {\n            conn: Arc::new(Mutex::new(conn)),\n        })\n    }\n\n    fn lock_conn(&self) -> Result<MutexGuard<'_, Connection>, LixError> {\n        self.conn\n            .lock()\n            .map_err(|_| LixError::new(\"LIX_ERROR_UNKNOWN\", \"sqlite benchmark mutex poisoned\"))\n    }\n}\n\n#[async_trait]\nimpl LixBackend for Engine2SqliteBackend {\n    async fn begin_transaction(\n        &self,\n        mode: TransactionBeginMode,\n    ) -> Result<Box<dyn LixBackendTransaction + Send + Sync + 'static>, LixError> {\n        {\n            let conn = self.lock_conn()?;\n            conn.execute_batch(match mode {\n                TransactionBeginMode::Read | TransactionBeginMode::Deferred => \"BEGIN TRANSACTION\",\n                
TransactionBeginMode::Write => \"BEGIN IMMEDIATE\",\n            })\n            .map_err(sqlite_error)?;\n        }\n\n        Ok(Box::new(Engine2SqliteTransaction {\n            conn: Arc::clone(&self.conn),\n            finalized: false,\n            mode,\n        }))\n    }\n\n    async fn kv_get(&self, namespace: &str, key: &[u8]) -> Result<Option<Vec<u8>>, LixError> {\n        let conn = self.lock_conn()?;\n        kv_get_with_connection(&conn, namespace, key)\n    }\n\n    async fn kv_scan(\n        &self,\n        namespace: &str,\n        range: KvScanRange,\n        limit: Option<usize>,\n    ) -> Result<Vec<KvPair>, LixError> {\n        let conn = self.lock_conn()?;\n        kv_scan_with_connection(&conn, namespace, &range, limit)\n    }\n}\n\n#[async_trait]\nimpl LixBackendTransaction for Engine2SqliteTransaction {\n    fn mode(&self) -> TransactionBeginMode {\n        self.mode\n    }\n\n    async fn kv_get(&mut self, namespace: &str, key: &[u8]) -> Result<Option<Vec<u8>>, LixError> {\n        let conn = self.lock_conn()?;\n        kv_get_with_connection(&conn, namespace, key)\n    }\n\n    async fn kv_scan(\n        &mut self,\n        namespace: &str,\n        range: KvScanRange,\n        limit: Option<usize>,\n    ) -> Result<Vec<KvPair>, LixError> {\n        let conn = self.lock_conn()?;\n        kv_scan_with_connection(&conn, namespace, &range, limit)\n    }\n\n    async fn kv_put(&mut self, namespace: &str, key: &[u8], value: &[u8]) -> Result<(), LixError> {\n        let conn = self.lock_conn()?;\n        conn.execute(\n            &format!(\n                \"INSERT INTO {KV_TABLE} (namespace, key, value) VALUES (?1, ?2, ?3) \\\n                 ON CONFLICT(namespace, key) DO UPDATE SET value = excluded.value\"\n            ),\n            params![namespace, key, value],\n        )\n        .map_err(sqlite_error)?;\n        Ok(())\n    }\n\n    async fn kv_delete(&mut self, namespace: &str, key: &[u8]) -> Result<(), LixError> {\n        let 
conn = self.lock_conn()?;\n        conn.execute(\n            &format!(\"DELETE FROM {KV_TABLE} WHERE namespace = ?1 AND key = ?2\"),\n            params![namespace, key],\n        )\n        .map_err(sqlite_error)?;\n        Ok(())\n    }\n\n    async fn commit(mut self: Box<Self>) -> Result<(), LixError> {\n        self.lock_conn()?\n            .execute_batch(\"COMMIT\")\n            .map_err(sqlite_error)?;\n        self.finalized = true;\n        Ok(())\n    }\n\n    async fn rollback(mut self: Box<Self>) -> Result<(), LixError> {\n        self.lock_conn()?\n            .execute_batch(\"ROLLBACK\")\n            .map_err(sqlite_error)?;\n        self.finalized = true;\n        Ok(())\n    }\n}\n\nimpl Engine2SqliteTransaction {\n    fn lock_conn(&self) -> Result<MutexGuard<'_, Connection>, LixError> {\n        self.conn\n            .lock()\n            .map_err(|_| LixError::new(\"LIX_ERROR_UNKNOWN\", \"sqlite benchmark mutex poisoned\"))\n    }\n}\n\nimpl Drop for Engine2SqliteTransaction {\n    fn drop(&mut self) {\n        if self.finalized || std::thread::panicking() {\n            return;\n        }\n        if let Ok(conn) = self.conn.lock() {\n            let _ = conn.execute_batch(\"ROLLBACK\");\n        }\n    }\n}\n\nfn configure_connection(conn: &Connection) -> Result<(), LixError> {\n    conn.execute_batch(\n        \"PRAGMA journal_mode = WAL;\\\n         PRAGMA synchronous = NORMAL;\\\n         PRAGMA temp_store = MEMORY;\",\n    )\n    .map_err(sqlite_error)?;\n    Ok(())\n}\n\nfn ensure_kv_table(conn: &Connection) -> Result<(), LixError> {\n    conn.execute_batch(&format!(\n        \"CREATE TABLE IF NOT EXISTS {KV_TABLE} (\\\n         namespace TEXT NOT NULL,\\\n         key BLOB NOT NULL,\\\n         value BLOB NOT NULL,\\\n         PRIMARY KEY(namespace, key)\\\n         ) WITHOUT ROWID\"\n    ))\n    .map_err(sqlite_error)?;\n    Ok(())\n}\n\nfn kv_get_with_connection(\n    conn: &Connection,\n    namespace: &str,\n    key: &[u8],\n) -> 
Result<Option<Vec<u8>>, LixError> {\n    conn.query_row(\n        &format!(\"SELECT value FROM {KV_TABLE} WHERE namespace = ?1 AND key = ?2\"),\n        params![namespace, key],\n        |row| row.get::<_, Vec<u8>>(0),\n    )\n    .optional()\n    .map_err(sqlite_error)\n}\n\nfn kv_scan_with_connection(\n    conn: &Connection,\n    namespace: &str,\n    range: &KvScanRange,\n    limit: Option<usize>,\n) -> Result<Vec<KvPair>, LixError> {\n    let mut pairs = match range {\n        KvScanRange::Prefix(prefix) => {\n            let mut stmt = conn\n                .prepare(&format!(\n                    \"SELECT key, value FROM {KV_TABLE} WHERE namespace = ?1 ORDER BY key\"\n                ))\n                .map_err(sqlite_error)?;\n            let rows = stmt\n                .query_map(params![namespace], |row| {\n                    Ok((row.get::<_, Vec<u8>>(0)?, row.get::<_, Vec<u8>>(1)?))\n                })\n                .map_err(sqlite_error)?;\n            collect_matching_rows(rows, |key| key.starts_with(prefix))?\n        }\n        KvScanRange::Range { start, end } => {\n            let mut stmt = conn\n                .prepare(&format!(\n                    \"SELECT key, value FROM {KV_TABLE} \\\n                     WHERE namespace = ?1 AND key >= ?2 AND key < ?3 \\\n                     ORDER BY key\"\n                ))\n                .map_err(sqlite_error)?;\n            let rows = stmt\n                .query_map(params![namespace, start, end], |row| {\n                    Ok((row.get::<_, Vec<u8>>(0)?, row.get::<_, Vec<u8>>(1)?))\n                })\n                .map_err(sqlite_error)?;\n            collect_matching_rows(rows, |_| true)?\n        }\n    };\n\n    if let Some(limit) = limit {\n        pairs.truncate(limit);\n    }\n    Ok(pairs)\n}\n\nfn collect_matching_rows<F>(\n    rows: rusqlite::MappedRows<\n        '_,\n        impl FnMut(&rusqlite::Row<'_>) -> rusqlite::Result<(Vec<u8>, Vec<u8>)>,\n    >,\n    mut matches: F,\n) -> 
Result<Vec<KvPair>, LixError>\nwhere\n    F: FnMut(&[u8]) -> bool,\n{\n    let mut pairs = Vec::new();\n    for row in rows {\n        let (key, value) = row.map_err(sqlite_error)?;\n        if matches(&key) {\n            pairs.push(KvPair::new(key, value));\n        }\n    }\n    Ok(pairs)\n}\n\nfn sqlite_error(error: rusqlite::Error) -> LixError {\n    LixError::new(\n        \"LIX_ERROR_UNKNOWN\",\n        format!(\"sqlite benchmark error: {error}\"),\n    )\n}\n"
  },
  {
    "path": "benchmarks/git-compare/Cargo.toml",
    "content": "[package]\nname = \"git_compare_benchmark\"\nversion = \"0.1.0\"\nedition = \"2021\"\npublish = false\n\n[dependencies]\nclap = { version = \"4.5.31\", features = [\"derive\"] }\nlix_engine = { path = \"../../packages/engine\" }\nlix_rs_sdk = { path = \"../../packages/rs-sdk\" }\npollster = \"0.4\"\nserde = { version = \"1\", features = [\"derive\"] }\nserde_json = \"1\"\n"
  },
  {
    "path": "benchmarks/git-compare/README.md",
    "content": "# Git Compare Benchmark\n\nThis benchmark answers a narrower question than `exp git-replay`:\n\n- a repo already exists\n- a user changes files\n- the user finalizes one commit\n- how long do `write` and `commit` take for Git vs Lix?\n\nIt cuts replay noise by:\n\n- selecting real first-parent commits from a production repo as workloads\n- building Git and Lix parent-state templates outside the timed section\n- timing only `apply workload` and `finalize commit`\n- interleaving Git and Lix runs\n- verifying the final Git tree and final Lix `lix_file` state after each trial\n\n## What It Measures\n\nFor each selected workload commit:\n\n- `write_ms`\n  - Git: apply the commit's file mutations into a clean checkout\n  - Lix: apply equivalent `lix_file` mutations inside an open transaction\n- `commit_ms`\n  - Git: `git add -A` + `git commit`\n  - Lix: `COMMIT`\n- `total_ms`\n  - end-to-end write + commit\n\n## Usage\n\n```bash\ncargo run --release -p git_compare_benchmark -- \\\n  --repo-path /Users/samuel/git-repos/paraglide-js \\\n  --output-dir artifact/benchmarks/git-compare/paraglide-js \\\n  --max-workloads 5 \\\n  --runs 5 \\\n  --warmups 1 \\\n  --force\n```\n\nWith the benchmark-tuned SQLite settings:\n\n```bash\ncargo run --release -p git_compare_benchmark -- \\\n  --repo-path /Users/samuel/git-repos/paraglide-js \\\n  --output-dir artifact/benchmarks/git-compare/paraglide-js-tuned \\\n  --sqlite-benchmark-tuned \\\n  --max-workloads 5 \\\n  --runs 5 \\\n  --warmups 1 \\\n  --force\n```\n\nReports are written to:\n\n- `report.json`\n- `report.md`\n\ninside the chosen output directory.\n\n## Notes\n\n- The current seed mode is hybrid on purpose:\n  - Git uses a local parent checkout so the baseline tree is exact.\n  - Lix seeds a fresh DB from the parent tree snapshot outside the timer.\n- Lix path seeding percent-encodes Git path characters that `lix_file` does not currently accept raw, so the benchmark still exercises the same file set even 
when the repo contains paths like `+layout.svelte` or `[locale]`.\n- Workloads are filtered to regular-file content changes. Mode-only or symlink-heavy commits are skipped because the benchmark currently exercises `lix_file` as `path + data`, not full Git file mode semantics.\n"
  },
  {
    "path": "benchmarks/git-compare/src/main.rs",
    "content": "use clap::Parser;\nuse lix_engine::{\n    boot as boot_engine, BootArgs as EngineConfig, ExecuteOptions, Session, SessionTransaction,\n    Value,\n};\nuse lix_rs_sdk::{SqliteBackend, WasmRuntime, WasmtimeRuntime};\nuse serde::Serialize;\nuse std::collections::{BTreeMap, BTreeSet, HashMap};\nuse std::fs;\nuse std::path::{Path, PathBuf};\nuse std::process::{Command, Stdio};\nuse std::sync::Arc;\nuse std::time::Instant;\n\n#[cfg(unix)]\nuse std::os::unix::fs::PermissionsExt;\n\nconst NULL_OID: &str = \"0000000000000000000000000000000000000000\";\n\ntype DynError = Box<dyn std::error::Error + Send + Sync>;\ntype DynResult<T> = Result<T, DynError>;\n\n#[derive(Parser, Debug, Clone)]\n#[command(about = \"Benchmark write+commit latency for Git vs Lix on real repo workloads\")]\nstruct Args {\n    #[arg(long)]\n    repo_path: PathBuf,\n    #[arg(long, default_value = \"HEAD\")]\n    head_ref: String,\n    #[arg(long = \"commit-sha\")]\n    commit_shas: Vec<String>,\n    #[arg(long, default_value = \"artifact/benchmarks/git-compare\")]\n    output_dir: PathBuf,\n    #[arg(long, default_value_t = 5)]\n    max_workloads: usize,\n    #[arg(long, default_value_t = 200)]\n    scan_commits: usize,\n    #[arg(long, default_value_t = 5)]\n    runs: usize,\n    #[arg(long, default_value_t = 1)]\n    warmups: usize,\n    #[arg(long, default_value_t = 1)]\n    min_changed_paths: usize,\n    #[arg(long, default_value_t = 25)]\n    max_changed_paths: usize,\n    #[arg(long)]\n    skip_verify: bool,\n    #[arg(long)]\n    keep_temp: bool,\n    #[arg(long)]\n    force: bool,\n}\n\n#[derive(Clone)]\nstruct CommitInfo {\n    sha: String,\n    parents: Vec<String>,\n    subject: String,\n}\n\n#[derive(Clone)]\nstruct PatchSet {\n    changes: Vec<RawChange>,\n    blobs: HashMap<String, Vec<u8>>,\n}\n\n#[derive(Clone)]\nstruct RawChange {\n    status: char,\n    old_mode: String,\n    new_mode: String,\n    old_oid: String,\n    new_oid: String,\n    old_path: Option<String>,\n 
   new_path: Option<String>,\n}\n\n#[derive(Clone)]\nenum OperationKind {\n    Add,\n    Modify,\n    Delete,\n    Rename,\n    Copy,\n}\n\n#[derive(Clone)]\nstruct FileOperation {\n    kind: OperationKind,\n    old_path: Option<String>,\n    new_path: Option<String>,\n    new_bytes: Option<Vec<u8>>,\n    new_executable: bool,\n}\n\n#[derive(Clone)]\nstruct Workload {\n    commit_sha: String,\n    parent_sha: String,\n    subject: String,\n    changed_paths: usize,\n    child_tree_sha: String,\n    operations: Vec<FileOperation>,\n    expected_files: BTreeMap<String, Vec<u8>>,\n}\n\n#[derive(Clone)]\nstruct LixTemplate {\n    seed_rows: Vec<LixSeedRow>,\n    path_to_id: BTreeMap<String, String>,\n}\n\n#[derive(Clone)]\nstruct LixSeedRow {\n    id: String,\n    path: String,\n    data: Vec<u8>,\n}\n\n#[derive(Clone)]\nstruct PreparedWorkload {\n    workload: Workload,\n    git_template_dir: PathBuf,\n    lix_template: LixTemplate,\n}\n\n#[derive(Serialize)]\nstruct Report {\n    repo_path: String,\n    head_ref: String,\n    head_commit: String,\n    config: ConfigReport,\n    workload_selection: WorkloadSelectionReport,\n    template_seed: TemplateSeedReport,\n    workloads: Vec<WorkloadReport>,\n    overall: OverallReport,\n}\n\n#[derive(Serialize)]\nstruct ConfigReport {\n    runs: usize,\n    warmups: usize,\n    verify_state: bool,\n    min_changed_paths: usize,\n    max_changed_paths: usize,\n    max_workloads: usize,\n    scan_commits: usize,\n}\n\n#[derive(Serialize)]\nstruct WorkloadSelectionReport {\n    selected_count: usize,\n    skipped: Vec<SkippedCandidate>,\n}\n\n#[derive(Serialize)]\nstruct SkippedCandidate {\n    commit_sha: String,\n    subject: String,\n    reason: String,\n}\n\n#[derive(Serialize)]\nstruct TemplateSeedReport {\n    mode: &'static str,\n}\n\n#[derive(Serialize)]\nstruct WorkloadReport {\n    commit_sha: String,\n    parent_sha: String,\n    subject: String,\n    changed_paths: usize,\n    child_tree_sha: String,\n    git: 
MetricReport,\n    lix: MetricReport,\n    total_ratio_lix_over_git: f64,\n    total_pct_less_time_for_lix: f64,\n    trials: Vec<TrialResult>,\n}\n\n#[derive(Serialize)]\nstruct OverallReport {\n    git: MetricReport,\n    lix: MetricReport,\n    total_ratio_lix_over_git: f64,\n    total_pct_less_time_for_lix: f64,\n}\n\n#[derive(Serialize, Clone)]\nstruct MetricReport {\n    write_ms: SummaryStats,\n    commit_ms: SummaryStats,\n    total_ms: SummaryStats,\n}\n\n#[derive(Serialize, Clone, Default)]\nstruct SummaryStats {\n    samples: usize,\n    min_ms: f64,\n    p50_ms: f64,\n    p95_ms: f64,\n    mean_ms: f64,\n    max_ms: f64,\n}\n\n#[derive(Serialize, Clone)]\nstruct TrialResult {\n    workload_commit_sha: String,\n    system: &'static str,\n    iteration: usize,\n    warmup: bool,\n    write_ms: f64,\n    commit_ms: f64,\n    total_ms: f64,\n    verified: bool,\n}\n\nfn main() {\n    if let Err(error) = run_with_large_stack(real_main) {\n        eprintln!(\"{error}\");\n        std::process::exit(1);\n    }\n}\n\nfn run_with_large_stack<F>(f: F) -> DynResult<()>\nwhere\n    F: FnOnce() -> DynResult<()> + Send + 'static,\n{\n    let handle = std::thread::Builder::new()\n        .name(\"git-compare-benchmark\".to_string())\n        .stack_size(32 * 1024 * 1024)\n        .spawn(f)?;\n    match handle.join() {\n        Ok(result) => result,\n        Err(_) => Err(\"benchmark thread panicked\".into()),\n    }\n}\n\nfn real_main() -> DynResult<()> {\n    let args = Args::parse();\n    validate_args(&args)?;\n\n    let repo_path = fs::canonicalize(&args.repo_path)?;\n    ensure_git_repo(&repo_path)?;\n    prepare_output_dir(&args.output_dir, args.force)?;\n\n    let tmp_root = args.output_dir.join(\"tmp\");\n    fs::create_dir_all(&tmp_root)?;\n\n    let head_commit = rev_parse_commit(&repo_path, &args.head_ref)?;\n    let (workloads, skipped) = select_workloads(&repo_path, &args, &head_commit)?;\n    let prepared = prepare_workloads(&repo_path, &args, &tmp_root, 
&workloads)?;\n\n    let mut workload_reports = Vec::with_capacity(prepared.workloads.len());\n    let mut all_trials = Vec::new();\n\n    println!(\n        \"[git-compare] selected {} workloads from {}\",\n        prepared.workloads.len(),\n        repo_path.display()\n    );\n\n    for prepared_workload in &prepared.workloads {\n        println!(\n            \"[git-compare] workload {} {} ({} changed paths)\",\n            &prepared_workload.workload.commit_sha[..12],\n            prepared_workload.workload.subject,\n            prepared_workload.workload.changed_paths\n        );\n        let trials = run_workload_trials(\n            &repo_path,\n            &args,\n            &tmp_root,\n            prepared_workload,\n            Arc::clone(&prepared.wasm_runtime),\n        )?;\n        let git_trials = filtered_trials(&trials, \"git\");\n        let lix_trials = filtered_trials(&trials, \"lix\");\n        let git_report = build_metric_report(&git_trials);\n        let lix_report = build_metric_report(&lix_trials);\n        let ratio = safe_ratio(lix_report.total_ms.p50_ms, git_report.total_ms.p50_ms);\n        let pct_less = pct_less_time(lix_report.total_ms.p50_ms, git_report.total_ms.p50_ms);\n        workload_reports.push(WorkloadReport {\n            commit_sha: prepared_workload.workload.commit_sha.clone(),\n            parent_sha: prepared_workload.workload.parent_sha.clone(),\n            subject: prepared_workload.workload.subject.clone(),\n            changed_paths: prepared_workload.workload.changed_paths,\n            child_tree_sha: prepared_workload.workload.child_tree_sha.clone(),\n            git: git_report,\n            lix: lix_report,\n            total_ratio_lix_over_git: ratio,\n            total_pct_less_time_for_lix: pct_less,\n            trials: trials.clone(),\n        });\n        all_trials.extend(trials);\n    }\n\n    let overall_git = build_metric_report(&filtered_trials(&all_trials, \"git\"));\n    let overall_lix = 
build_metric_report(&filtered_trials(&all_trials, \"lix\"));\n    let report = Report {\n        repo_path: repo_path.display().to_string(),\n        head_ref: args.head_ref.clone(),\n        head_commit,\n        config: ConfigReport {\n            runs: args.runs,\n            warmups: args.warmups,\n            verify_state: !args.skip_verify,\n            min_changed_paths: args.min_changed_paths,\n            max_changed_paths: args.max_changed_paths,\n            max_workloads: args.max_workloads,\n            scan_commits: args.scan_commits,\n        },\n        workload_selection: WorkloadSelectionReport {\n            selected_count: workload_reports.len(),\n            skipped,\n        },\n        template_seed: TemplateSeedReport {\n            mode: \"git-parent-checkout + lix-parent-snapshot\",\n        },\n        workloads: workload_reports,\n        overall: OverallReport {\n            git: overall_git.clone(),\n            lix: overall_lix.clone(),\n            total_ratio_lix_over_git: safe_ratio(\n                overall_lix.total_ms.p50_ms,\n                overall_git.total_ms.p50_ms,\n            ),\n            total_pct_less_time_for_lix: pct_less_time(\n                overall_lix.total_ms.p50_ms,\n                overall_git.total_ms.p50_ms,\n            ),\n        },\n    };\n\n    let json_path = args.output_dir.join(\"report.json\");\n    let markdown_path = args.output_dir.join(\"report.md\");\n    fs::write(\n        &json_path,\n        format!(\"{}\\n\", serde_json::to_string_pretty(&report)?),\n    )?;\n    fs::write(&markdown_path, render_markdown_report(&report))?;\n\n    println!(\n        \"[git-compare] overall median total: git {:.2}ms, lix {:.2}ms, lix {:.2}% less time\",\n        report.overall.git.total_ms.p50_ms,\n        report.overall.lix.total_ms.p50_ms,\n        report.overall.total_pct_less_time_for_lix\n    );\n    println!(\"[git-compare] json: {}\", json_path.display());\n    println!(\"[git-compare] markdown: 
{}\", markdown_path.display());\n\n    if !args.keep_temp {\n        let _ = fs::remove_dir_all(&tmp_root);\n    }\n\n    Ok(())\n}\n\nstruct PreparedBenchmark {\n    workloads: Vec<PreparedWorkload>,\n    wasm_runtime: Arc<dyn WasmRuntime>,\n}\n\nfn validate_args(args: &Args) -> DynResult<()> {\n    if args.max_workloads == 0 {\n        return Err(\"--max-workloads must be >= 1\".into());\n    }\n    if args.runs == 0 {\n        return Err(\"--runs must be >= 1\".into());\n    }\n    if args.min_changed_paths == 0 {\n        return Err(\"--min-changed-paths must be >= 1\".into());\n    }\n    if args.min_changed_paths > args.max_changed_paths {\n        return Err(\"--min-changed-paths must be <= --max-changed-paths\".into());\n    }\n    Ok(())\n}\n\nfn ensure_git_repo(repo_path: &Path) -> DynResult<()> {\n    run_git_text(repo_path, [\"rev-parse\", \"--git-dir\"])?;\n    Ok(())\n}\n\nfn prepare_output_dir(path: &Path, force: bool) -> DynResult<()> {\n    if path.exists() {\n        if !force {\n            return Err(format!(\n                \"output dir already exists: {} (pass --force to overwrite)\",\n                path.display()\n            )\n            .into());\n        }\n        fs::remove_dir_all(path)?;\n    }\n    fs::create_dir_all(path)?;\n    Ok(())\n}\n\nfn select_workloads(\n    repo_path: &Path,\n    args: &Args,\n    head_commit: &str,\n) -> DynResult<(Vec<Workload>, Vec<SkippedCandidate>)> {\n    let commit_infos = if args.commit_shas.is_empty() {\n        list_first_parent_commit_info(repo_path, &args.head_ref, Some(args.scan_commits))?\n    } else {\n        let mut commits = Vec::with_capacity(args.commit_shas.len());\n        for commit_sha in &args.commit_shas {\n            commits.push(read_commit_info(repo_path, commit_sha)?);\n        }\n        commits\n    };\n    let mut selected = Vec::new();\n    let mut skipped = Vec::new();\n\n    for commit in commit_infos {\n        if selected.len() >= args.max_workloads {\n            
break;\n        }\n        if commit.sha == head_commit && commit.parents.is_empty() {\n            skipped.push(SkippedCandidate {\n                commit_sha: commit.sha,\n                subject: commit.subject,\n                reason: \"root commit is not a useful user write+commit workload\".to_string(),\n            });\n            continue;\n        }\n        if commit.parents.len() != 1 {\n            skipped.push(SkippedCandidate {\n                commit_sha: commit.sha,\n                subject: commit.subject,\n                reason: \"merge or root commit skipped as a timed workload\".to_string(),\n            });\n            continue;\n        }\n\n        let patch_set = read_commit_patch_set(repo_path, &commit.sha)?;\n        if patch_set.changes.len() < args.min_changed_paths {\n            skipped.push(SkippedCandidate {\n                commit_sha: commit.sha,\n                subject: commit.subject,\n                reason: format!(\n                    \"changed path count {} below minimum {}\",\n                    patch_set.changes.len(),\n                    args.min_changed_paths\n                ),\n            });\n            continue;\n        }\n        if patch_set.changes.len() > args.max_changed_paths {\n            skipped.push(SkippedCandidate {\n                commit_sha: commit.sha,\n                subject: commit.subject,\n                reason: format!(\n                    \"changed path count {} above maximum {}\",\n                    patch_set.changes.len(),\n                    args.max_changed_paths\n                ),\n            });\n            continue;\n        }\n        if let Some(reason) = first_unsupported_change_reason(&patch_set.changes) {\n            skipped.push(SkippedCandidate {\n                commit_sha: commit.sha,\n                subject: commit.subject,\n                reason,\n            });\n            continue;\n        }\n\n        let operations = compile_operations(&patch_set)?;\n       
 let expected_files =\n            normalize_snapshot_for_lix(&read_tree_snapshot(repo_path, &commit.sha)?);\n        let child_tree_sha = rev_parse_tree(repo_path, &commit.sha)?;\n\n        selected.push(Workload {\n            commit_sha: commit.sha,\n            parent_sha: commit.parents[0].clone(),\n            subject: commit.subject,\n            changed_paths: operations.len(),\n            child_tree_sha,\n            operations,\n            expected_files,\n        });\n    }\n\n    if selected.is_empty() {\n        return Err(\"no benchmark workloads selected; widen scan or changed-path filters\".into());\n    }\n\n    Ok((selected, skipped))\n}\n\nfn prepare_workloads(\n    repo_path: &Path,\n    _args: &Args,\n    tmp_root: &Path,\n    workloads: &[Workload],\n) -> DynResult<PreparedBenchmark> {\n    let wasm_runtime: Arc<dyn WasmRuntime> = Arc::new(WasmtimeRuntime::new()?);\n    let git_templates_dir = tmp_root.join(\"git-templates\");\n    fs::create_dir_all(&git_templates_dir)?;\n    let mut prepared_workloads = Vec::with_capacity(workloads.len());\n\n    for workload in workloads {\n        let parent_files = read_tree_snapshot(repo_path, &workload.parent_sha)?;\n        let git_template_dir = git_templates_dir.join(&workload.commit_sha);\n        create_git_checkout_template(repo_path, &git_template_dir, &workload.parent_sha)?;\n        let lix_template = create_lix_snapshot_template(&parent_files)?;\n        prepared_workloads.push(PreparedWorkload {\n            workload: workload.clone(),\n            git_template_dir,\n            lix_template,\n        });\n    }\n\n    Ok(PreparedBenchmark {\n        workloads: prepared_workloads,\n        wasm_runtime,\n    })\n}\n\nfn run_workload_trials(\n    repo_path: &Path,\n    args: &Args,\n    tmp_root: &Path,\n    workload: &PreparedWorkload,\n    wasm_runtime: Arc<dyn WasmRuntime>,\n) -> DynResult<Vec<TrialResult>> {\n    let git_trial_root = tmp_root\n        .join(\"git-runs\")\n        
.join(&workload.workload.commit_sha);\n    let lix_trial_root = tmp_root\n        .join(\"lix-runs\")\n        .join(&workload.workload.commit_sha);\n    fs::create_dir_all(&git_trial_root)?;\n    fs::create_dir_all(&lix_trial_root)?;\n\n    let total_iterations = args.warmups + args.runs;\n    let mut trials = Vec::with_capacity(total_iterations * 2);\n\n    for iteration in 0..total_iterations {\n        let warmup = iteration < args.warmups;\n        let order = if iteration % 2 == 0 {\n            [\"git\", \"lix\"]\n        } else {\n            [\"lix\", \"git\"]\n        };\n\n        for system in order {\n            let trial = match system {\n                \"git\" => run_git_trial(\n                    &git_trial_root,\n                    iteration,\n                    warmup,\n                    workload,\n                    !args.skip_verify,\n                )?,\n                \"lix\" => run_lix_trial(\n                    repo_path,\n                    &lix_trial_root,\n                    iteration,\n                    warmup,\n                    workload,\n                    Arc::clone(&wasm_runtime),\n                    !args.skip_verify,\n                )?,\n                _ => unreachable!(),\n            };\n            trials.push(trial);\n        }\n    }\n\n    Ok(trials)\n}\n\nfn run_git_trial(\n    trial_root: &Path,\n    iteration: usize,\n    warmup: bool,\n    workload: &PreparedWorkload,\n    verify_state: bool,\n) -> DynResult<TrialResult> {\n    let repo_dir = trial_root.join(format!(\"trial-{iteration}\"));\n    if repo_dir.exists() {\n        fs::remove_dir_all(&repo_dir)?;\n    }\n    copy_directory(&workload.git_template_dir, &repo_dir)?;\n\n    let write_started = Instant::now();\n    apply_operations_to_git(&repo_dir, &workload.workload.operations)?;\n    let write_ms = elapsed_ms(write_started);\n\n    let commit_started = Instant::now();\n    let commit_message = format!(\"bench {}\", 
&workload.workload.commit_sha[..12]);\n    run_git_text(&repo_dir, [\"add\", \"-A\"])?;\n    run_git_text(\n        &repo_dir,\n        [\n            \"-c\",\n            \"core.hooksPath=/dev/null\",\n            \"-c\",\n            \"commit.gpgSign=false\",\n            \"commit\",\n            \"-q\",\n            \"--allow-empty\",\n            \"-m\",\n            &commit_message,\n        ],\n    )?;\n    let commit_ms = elapsed_ms(commit_started);\n\n    let verified = if verify_state {\n        let actual_tree = run_git_text(&repo_dir, [\"rev-parse\", \"HEAD^{tree}\"])?;\n        let actual_tree = actual_tree.trim();\n        if actual_tree != workload.workload.child_tree_sha {\n            return Err(format!(\n                \"git trial tree mismatch for {}: expected {}, got {}\",\n                workload.workload.commit_sha, workload.workload.child_tree_sha, actual_tree\n            )\n            .into());\n        }\n        true\n    } else {\n        false\n    };\n\n    fs::remove_dir_all(&repo_dir)?;\n    Ok(TrialResult {\n        workload_commit_sha: workload.workload.commit_sha.clone(),\n        system: \"git\",\n        iteration,\n        warmup,\n        write_ms,\n        commit_ms,\n        total_ms: write_ms + commit_ms,\n        verified,\n    })\n}\n\nfn run_lix_trial(\n    _repo_path: &Path,\n    trial_root: &Path,\n    iteration: usize,\n    warmup: bool,\n    workload: &PreparedWorkload,\n    wasm_runtime: Arc<dyn WasmRuntime>,\n    verify_state: bool,\n) -> DynResult<TrialResult> {\n    let db_path = trial_root.join(format!(\"trial-{iteration}.lix\"));\n    if db_path.exists() {\n        fs::remove_file(&db_path)?;\n    }\n    let session = create_initialized_session(&db_path, wasm_runtime)?;\n    if !workload.lix_template.seed_rows.is_empty() {\n        let seed_rows = workload.lix_template.seed_rows.clone();\n        pollster::block_on(session.transaction(ExecuteOptions::default(), |tx| {\n            Box::pin(async move {\n      
          for row in seed_rows {\n                    tx.execute(\n                        \"INSERT INTO lix_file (id, path, data) VALUES (?1, ?2, ?3)\",\n                        &[\n                            Value::Text(row.id),\n                            Value::Text(row.path),\n                            Value::Blob(row.data),\n                        ],\n                    )\n                    .await?;\n                }\n                Ok(())\n            })\n        }))?;\n    }\n    let mut path_to_id = workload.lix_template.path_to_id.clone();\n    let mut next_file_id = next_file_id_from_map(&path_to_id);\n    let mut transaction =\n        pollster::block_on(session.begin_transaction_with_options(ExecuteOptions::default()))?;\n\n    let write_started = Instant::now();\n    for operation in &workload.workload.operations {\n        execute_engine_operation(\n            &mut transaction,\n            operation,\n            &mut path_to_id,\n            &mut next_file_id,\n        )?;\n    }\n    let write_ms = elapsed_ms(write_started);\n\n    let commit_started = Instant::now();\n    pollster::block_on(transaction.commit())?;\n    let commit_ms = elapsed_ms(commit_started);\n\n    let verified = if verify_state {\n        verify_session_state(&session, &workload.workload.expected_files)?;\n        true\n    } else {\n        false\n    };\n\n    drop(session);\n    let _ = fs::remove_file(&db_path);\n    let _ = fs::remove_file(format!(\"{}-journal\", db_path.display()));\n    let _ = fs::remove_file(format!(\"{}-wal\", db_path.display()));\n    let _ = fs::remove_file(format!(\"{}-shm\", db_path.display()));\n\n    Ok(TrialResult {\n        workload_commit_sha: workload.workload.commit_sha.clone(),\n        system: \"lix\",\n        iteration,\n        warmup,\n        write_ms,\n        commit_ms,\n        total_ms: write_ms + commit_ms,\n        verified,\n    })\n}\n\nfn create_git_checkout_template(\n    repo_path: &Path,\n    template_dir: 
&Path,\n    parent_sha: &str,\n) -> DynResult<()> {\n    if template_dir.exists() {\n        fs::remove_dir_all(template_dir)?;\n    }\n    run_command(\n        \"git\",\n        [\n            \"clone\",\n            \"--local\",\n            \"--quiet\",\n            repo_path.to_str().ok_or(\"invalid repo path\")?,\n            template_dir.to_str().ok_or(\"invalid template path\")?,\n        ],\n        None,\n        None,\n    )?;\n    run_git_text(template_dir, [\"checkout\", \"--quiet\", parent_sha])?;\n    run_git_text(template_dir, [\"config\", \"user.email\", \"bench@example.com\"])?;\n    run_git_text(template_dir, [\"config\", \"user.name\", \"git-compare-bench\"])?;\n    run_git_text(template_dir, [\"config\", \"core.hooksPath\", \"/dev/null\"])?;\n    run_git_text(template_dir, [\"config\", \"commit.gpgSign\", \"false\"])?;\n    run_git_text(template_dir, [\"config\", \"gc.auto\", \"0\"])?;\n    run_git_text(template_dir, [\"config\", \"maintenance.auto\", \"false\"])?;\n    run_git_text(template_dir, [\"config\", \"gc.autoDetach\", \"false\"])?;\n    Ok(())\n}\n\nfn create_lix_snapshot_template(\n    parent_files: &BTreeMap<String, Vec<u8>>,\n) -> DynResult<LixTemplate> {\n    let mut path_to_id = BTreeMap::new();\n    let mut next_file_id = 1_u64;\n    let mut seed_rows = Vec::with_capacity(parent_files.len());\n    for (path, bytes) in parent_files {\n        let file_id = allocate_file_id(&mut next_file_id);\n        let lix_path = to_lix_path(path);\n        path_to_id.insert(lix_path.clone(), file_id.clone());\n        seed_rows.push(LixSeedRow {\n            id: file_id,\n            path: lix_path,\n            data: bytes.clone(),\n        });\n    }\n    Ok(LixTemplate {\n        seed_rows,\n        path_to_id,\n    })\n}\n\nfn apply_operations_to_git(repo_dir: &Path, operations: &[FileOperation]) -> DynResult<()> {\n    for operation in operations {\n        match operation.kind {\n            OperationKind::Add | OperationKind::Copy | 
OperationKind::Modify => {\n                let path = repo_dir.join(\n                    operation\n                        .new_path\n                        .as_ref()\n                        .ok_or(\"missing new path for git write\")?,\n                );\n                if let Some(parent) = path.parent() {\n                    fs::create_dir_all(parent)?;\n                }\n                fs::write(\n                    &path,\n                    operation\n                        .new_bytes\n                        .as_ref()\n                        .ok_or(\"missing bytes for git write\")?,\n                )?;\n                set_executable_if_needed(&path, operation.new_executable)?;\n            }\n            OperationKind::Rename => {\n                if let Some(old_path) = &operation.old_path {\n                    let old_full = repo_dir.join(old_path);\n                    if old_full.exists() {\n                        fs::remove_file(&old_full)?;\n                    }\n                }\n                let new_full = repo_dir.join(\n                    operation\n                        .new_path\n                        .as_ref()\n                        .ok_or(\"missing new path for rename\")?,\n                );\n                if let Some(parent) = new_full.parent() {\n                    fs::create_dir_all(parent)?;\n                }\n                fs::write(\n                    &new_full,\n                    operation\n                        .new_bytes\n                        .as_ref()\n                        .ok_or(\"missing bytes for rename\")?,\n                )?;\n                set_executable_if_needed(&new_full, operation.new_executable)?;\n            }\n            OperationKind::Delete => {\n                let path = repo_dir.join(\n                    operation\n                        .old_path\n                        .as_ref()\n                        .ok_or(\"missing old path for delete\")?,\n               
 );\n                if path.exists() {\n                    fs::remove_file(path)?;\n                }\n            }\n        }\n    }\n    Ok(())\n}\n\nfn set_executable_if_needed(path: &Path, executable: bool) -> DynResult<()> {\n    #[cfg(unix)]\n    {\n        let mode = if executable { 0o755 } else { 0o644 };\n        let mut permissions = fs::metadata(path)?.permissions();\n        permissions.set_mode(mode);\n        fs::set_permissions(path, permissions)?;\n    }\n    #[cfg(not(unix))]\n    let _ = (path, executable);\n    Ok(())\n}\n\nfn execute_engine_operation(\n    transaction: &mut SessionTransaction<'_>,\n    operation: &FileOperation,\n    path_to_id: &mut BTreeMap<String, String>,\n    next_file_id: &mut u64,\n) -> DynResult<()> {\n    match operation.kind {\n        OperationKind::Add | OperationKind::Copy => {\n            let path = to_lix_path(\n                operation\n                    .new_path\n                    .as_ref()\n                    .ok_or(\"missing new path for Lix insert\")?,\n            );\n            let file_id = allocate_file_id(next_file_id);\n            pollster::block_on(\n                transaction.execute(\n                    \"INSERT INTO lix_file (id, path, data) VALUES (?1, ?2, ?3)\",\n                    &[\n                        Value::Text(file_id.clone()),\n                        Value::Text(path.clone()),\n                        Value::Blob(\n                            operation\n                                .new_bytes\n                                .as_ref()\n                                .ok_or(\"missing bytes for Lix insert\")?\n                                .clone(),\n                        ),\n                    ],\n                ),\n            )?;\n            path_to_id.insert(path.clone(), file_id);\n        }\n        OperationKind::Modify => {\n            let path = to_lix_path(\n                operation\n                    .new_path\n                    .as_ref()\n    
                .ok_or(\"missing path for Lix update\")?,\n            );\n            let file_id = path_to_id\n                .get(&path)\n                .cloned()\n                .ok_or_else(|| format!(\"missing file id for modified path {path}\"))?;\n            pollster::block_on(\n                transaction.execute(\n                    \"UPDATE lix_file SET data = ?1 WHERE id = ?2\",\n                    &[\n                        Value::Blob(\n                            operation\n                                .new_bytes\n                                .as_ref()\n                                .ok_or(\"missing bytes for Lix update\")?\n                                .clone(),\n                        ),\n                        Value::Text(file_id),\n                    ],\n                ),\n            )?;\n        }\n        OperationKind::Rename => {\n            let old_path = to_lix_path(\n                operation\n                    .old_path\n                    .as_ref()\n                    .ok_or(\"missing old path for Lix rename\")?,\n            );\n            let new_path = to_lix_path(\n                operation\n                    .new_path\n                    .as_ref()\n                    .ok_or(\"missing new path for Lix rename\")?,\n            );\n            let file_id = path_to_id\n                .remove(&old_path)\n                .ok_or_else(|| format!(\"missing file id for renamed path {old_path}\"))?;\n            pollster::block_on(\n                transaction.execute(\n                    \"UPDATE lix_file SET path = ?1, data = ?2 WHERE id = ?3\",\n                    &[\n                        Value::Text(new_path.clone()),\n                        Value::Blob(\n                            operation\n                                .new_bytes\n                                .as_ref()\n                                .ok_or(\"missing bytes for Lix rename\")?\n                                .clone(),\n      
                  ),\n                        Value::Text(file_id.clone()),\n                    ],\n                ),\n            )?;\n            path_to_id.insert(new_path.clone(), file_id);\n        }\n        OperationKind::Delete => {\n            let old_path = to_lix_path(\n                operation\n                    .old_path\n                    .as_ref()\n                    .ok_or(\"missing old path for Lix delete\")?,\n            );\n            let file_id = path_to_id\n                .remove(&old_path)\n                .ok_or_else(|| format!(\"missing file id for deleted path {old_path}\"))?;\n            pollster::block_on(transaction.execute(\n                \"DELETE FROM lix_file WHERE id = ?1\",\n                &[Value::Text(file_id)],\n            ))?;\n        }\n    }\n    Ok(())\n}\n\nfn verify_session_state(\n    session: &Session,\n    expected_files: &BTreeMap<String, Vec<u8>>,\n) -> DynResult<()> {\n    let result =\n        pollster::block_on(session.execute(\"SELECT path, data FROM lix_file ORDER BY path\", &[]))?;\n    let mut actual = BTreeMap::new();\n    for row in &result.statements[0].rows {\n        let path = expect_text(&row[0])?;\n        let bytes = value_as_bytes(&row[1])?;\n        actual.insert(path, bytes);\n    }\n    if &actual != expected_files {\n        return Err(format!(\n            \"Lix state verification failed: expected {} files, got {} files\",\n            expected_files.len(),\n            actual.len()\n        )\n        .into());\n    }\n\n    Ok(())\n}\n\nfn create_initialized_session(\n    path: &Path,\n    wasm_runtime: Arc<dyn WasmRuntime>,\n) -> DynResult<Session> {\n    if path.exists() {\n        fs::remove_file(path)?;\n    }\n    let init_backend = SqliteBackend::from_path(path)?;\n    let engine = Arc::new(boot_engine(EngineConfig::new(\n        Box::new(init_backend),\n        Arc::clone(&wasm_runtime),\n    )));\n    let _ = pollster::block_on(engine.initialize_if_needed())?;\n    
pollster::block_on(engine.open_existing())?;\n    Ok(pollster::block_on(engine.open_session())?)\n}\n\nfn expect_text(value: &Value) -> DynResult<String> {\n    match value {\n        Value::Text(text) => Ok(text.clone()),\n        other => Err(format!(\"expected text value, got {other:?}\").into()),\n    }\n}\n\nfn value_as_bytes(value: &Value) -> DynResult<Vec<u8>> {\n    match value {\n        Value::Blob(bytes) => Ok(bytes.clone()),\n        Value::Text(text) => Ok(text.as_bytes().to_vec()),\n        other => Err(format!(\"expected blob/text value, got {other:?}\").into()),\n    }\n}\n\nfn next_file_id_from_map(path_to_id: &BTreeMap<String, String>) -> u64 {\n    path_to_id\n        .values()\n        .filter_map(|id| id.strip_prefix(\"bench-file-\"))\n        .filter_map(|tail| tail.parse::<u64>().ok())\n        .max()\n        .unwrap_or(0)\n        + 1\n}\n\nfn allocate_file_id(next_file_id: &mut u64) -> String {\n    let file_id = format!(\"bench-file-{next_file_id}\");\n    *next_file_id += 1;\n    file_id\n}\n\nfn filtered_trials(trials: &[TrialResult], system: &str) -> Vec<TrialResult> {\n    trials\n        .iter()\n        .filter(|trial| trial.system == system && !trial.warmup)\n        .cloned()\n        .collect()\n}\n\nfn build_metric_report(trials: &[TrialResult]) -> MetricReport {\n    MetricReport {\n        write_ms: summarize(trials.iter().map(|trial| trial.write_ms).collect()),\n        commit_ms: summarize(trials.iter().map(|trial| trial.commit_ms).collect()),\n        total_ms: summarize(trials.iter().map(|trial| trial.total_ms).collect()),\n    }\n}\n\nfn summarize(mut values: Vec<f64>) -> SummaryStats {\n    if values.is_empty() {\n        return SummaryStats::default();\n    }\n    values.sort_by(|left, right| left.partial_cmp(right).unwrap());\n    let samples = values.len();\n    let sum: f64 = values.iter().sum();\n    SummaryStats {\n        samples,\n        min_ms: values[0],\n        p50_ms: percentile(&values, 0.50),\n        
p95_ms: percentile(&values, 0.95),\n        mean_ms: sum / samples as f64,\n        max_ms: values[samples - 1],\n    }\n}\n\nfn percentile(sorted_values: &[f64], percentile: f64) -> f64 {\n    if sorted_values.is_empty() {\n        return 0.0;\n    }\n    let rank = percentile * (sorted_values.len().saturating_sub(1)) as f64;\n    let lower = rank.floor() as usize;\n    let upper = rank.ceil() as usize;\n    if lower == upper {\n        return sorted_values[lower];\n    }\n    let weight = rank - lower as f64;\n    sorted_values[lower] * (1.0 - weight) + sorted_values[upper] * weight\n}\n\nfn safe_ratio(numerator: f64, denominator: f64) -> f64 {\n    if denominator == 0.0 {\n        0.0\n    } else {\n        numerator / denominator\n    }\n}\n\nfn pct_less_time(lix_ms: f64, git_ms: f64) -> f64 {\n    if git_ms == 0.0 {\n        0.0\n    } else {\n        (1.0 - (lix_ms / git_ms)) * 100.0\n    }\n}\n\nfn render_markdown_report(report: &Report) -> String {\n    let mut output = String::new();\n    output.push_str(\"# Git Compare Benchmark\\n\\n\");\n    output.push_str(&format!(\n        \"Repo: `{}`  \\nHead: `{}` (`{}`)\\n\\n\",\n        report.repo_path, report.head_ref, report.head_commit\n    ));\n    output.push_str(\"## Setup\\n\\n\");\n    output.push_str(&format!(\n        \"- workloads: `{}`\\n- runs per system: `{}`\\n- warmups: `{}`\\n- verification: `{}`\\n\\n\",\n        report.workload_selection.selected_count,\n        report.config.runs,\n        report.config.warmups,\n        report.config.verify_state,\n    ));\n    output.push_str(\"## Overall Median\\n\\n\");\n    output.push_str(\"| system | write ms | commit ms | total ms | p95 total ms |\\n\");\n    output.push_str(\"| --- | ---: | ---: | ---: | ---: |\\n\");\n    output.push_str(&format!(\n        \"| git | {:.2} | {:.2} | {:.2} | {:.2} |\\n\",\n        report.overall.git.write_ms.p50_ms,\n        report.overall.git.commit_ms.p50_ms,\n        report.overall.git.total_ms.p50_ms,\n        
report.overall.git.total_ms.p95_ms\n    ));\n    output.push_str(&format!(\n        \"| lix | {:.2} | {:.2} | {:.2} | {:.2} |\\n\\n\",\n        report.overall.lix.write_ms.p50_ms,\n        report.overall.lix.commit_ms.p50_ms,\n        report.overall.lix.total_ms.p50_ms,\n        report.overall.lix.total_ms.p95_ms\n    ));\n    output.push_str(&format!(\n        \"Lix median total time was `{:.2}%` less than Git on this benchmark (`{:.2}x` Lix/Git).\\n\\n\",\n        report.overall.total_pct_less_time_for_lix,\n        report.overall.total_ratio_lix_over_git\n    ));\n    output.push_str(\"## Workloads\\n\\n\");\n    output.push_str(\"| commit | changed paths | git total ms | lix total ms | lix less time |\\n\");\n    output.push_str(\"| --- | ---: | ---: | ---: | ---: |\\n\");\n    for workload in &report.workloads {\n        output.push_str(&format!(\n            \"| `{}` | {} | {:.2} | {:.2} | {:.2}% |\\n\",\n            &workload.commit_sha[..12],\n            workload.changed_paths,\n            workload.git.total_ms.p50_ms,\n            workload.lix.total_ms.p50_ms,\n            workload.total_pct_less_time_for_lix\n        ));\n    }\n    output.push_str(\"\\n## Notes\\n\\n\");\n    output.push_str(&format!(\n        \"- template seed mode: `{}`\\n- skipped candidate commits during workload selection: `{}`\\n\",\n        report.template_seed.mode,\n        report.workload_selection.skipped.len()\n    ));\n    output\n}\n\nfn list_first_parent_commit_info(\n    repo_path: &Path,\n    reference: &str,\n    limit: Option<usize>,\n) -> DynResult<Vec<CommitInfo>> {\n    let mut args = vec![\n        \"log\".to_string(),\n        \"--first-parent\".to_string(),\n        \"--format=%H%x1f%P%x1f%s%x1e\".to_string(),\n    ];\n    if let Some(limit) = limit {\n        args.push(\"-n\".to_string());\n        args.push(limit.to_string());\n    }\n    args.push(reference.to_string());\n    let output = run_git_text(repo_path, args.iter().map(String::as_str))?;\n    let 
mut commits = Vec::new();\n    for record in output.split('\\x1e') {\n        let trimmed = record.trim();\n        if trimmed.is_empty() {\n            continue;\n        }\n        let mut parts = trimmed.split('\\x1f');\n        let sha = parts.next().unwrap_or_default().trim().to_string();\n        let parent_part = parts.next().unwrap_or_default().trim();\n        let subject = parts.next().unwrap_or_default().trim().to_string();\n        commits.push(CommitInfo {\n            sha,\n            parents: if parent_part.is_empty() {\n                Vec::new()\n            } else {\n                parent_part\n                    .split_whitespace()\n                    .map(ToString::to_string)\n                    .collect()\n            },\n            subject,\n        });\n    }\n    Ok(commits)\n}\n\nfn read_commit_info(repo_path: &Path, reference: &str) -> DynResult<CommitInfo> {\n    let sha = rev_parse_commit(repo_path, reference)?;\n    let output = run_git_text(repo_path, [\"log\", \"-1\", \"--format=%P%x1f%s\", &sha])?;\n    let trimmed = output.trim();\n    let mut parts = trimmed.split('\\x1f');\n    let parent_part = parts.next().unwrap_or_default().trim();\n    let subject = parts.next().unwrap_or_default().trim().to_string();\n    Ok(CommitInfo {\n        sha,\n        parents: if parent_part.is_empty() {\n            Vec::new()\n        } else {\n            parent_part\n                .split_whitespace()\n                .map(ToString::to_string)\n                .collect()\n        },\n        subject,\n    })\n}\n\nfn rev_parse_commit(repo_path: &Path, reference: &str) -> DynResult<String> {\n    Ok(run_git_text(\n        repo_path,\n        [\"rev-parse\", \"--verify\", &format!(\"{reference}^{{commit}}\")],\n    )?\n    .trim()\n    .to_string())\n}\n\nfn rev_parse_tree(repo_path: &Path, commit_sha: &str) -> DynResult<String> {\n    Ok(\n        run_git_text(repo_path, [\"rev-parse\", &format!(\"{commit_sha}^{{tree}}\")])?\n            
.trim()\n            .to_string(),\n    )\n}\n\nfn read_commit_patch_set(repo_path: &Path, commit_sha: &str) -> DynResult<PatchSet> {\n    let raw = run_git_bytes(\n        repo_path,\n        [\n            \"diff-tree\",\n            \"--root\",\n            \"--raw\",\n            \"-r\",\n            \"-z\",\n            \"-m\",\n            \"--first-parent\",\n            \"--find-renames\",\n            \"--no-commit-id\",\n            commit_sha,\n        ],\n        None,\n    )?;\n    let changes = parse_raw_diff_tree(&raw)?;\n    let wanted_blob_ids = collect_wanted_blob_ids(&changes);\n    let blobs = read_blobs(repo_path, &wanted_blob_ids)?;\n    Ok(PatchSet { changes, blobs })\n}\n\nfn parse_raw_diff_tree(raw: &[u8]) -> DynResult<Vec<RawChange>> {\n    if raw.is_empty() {\n        return Ok(Vec::new());\n    }\n    let tokens = raw\n        .split(|byte| *byte == 0)\n        .filter(|token| !token.is_empty())\n        .collect::<Vec<_>>();\n    let mut changes = Vec::new();\n    let mut index = 0;\n    while index < tokens.len() {\n        let header = std::str::from_utf8(tokens[index])?;\n        index += 1;\n        if !header.starts_with(':') {\n            continue;\n        }\n        let fields = header[1..].split(' ').collect::<Vec<_>>();\n        if fields.len() < 5 {\n            continue;\n        }\n        let status_token = fields[4];\n        let status = status_token.chars().next().unwrap_or('M');\n        let first_path =\n            std::str::from_utf8(tokens.get(index).ok_or(\"missing diff-tree path\")?)?.to_string();\n        index += 1;\n        if status == 'R' || status == 'C' {\n            let second_path =\n                std::str::from_utf8(tokens.get(index).ok_or(\"missing rename target path\")?)?\n                    .to_string();\n            index += 1;\n            changes.push(RawChange {\n                status,\n                old_mode: fields[0].to_string(),\n                new_mode: fields[1].to_string(),\n      
          old_oid: fields[2].to_string(),\n                new_oid: fields[3].to_string(),\n                old_path: Some(first_path),\n                new_path: Some(second_path),\n            });\n            continue;\n        }\n        changes.push(RawChange {\n            status,\n            old_mode: fields[0].to_string(),\n            new_mode: fields[1].to_string(),\n            old_oid: fields[2].to_string(),\n            new_oid: fields[3].to_string(),\n            old_path: if status == 'A' {\n                None\n            } else {\n                Some(first_path.clone())\n            },\n            new_path: if status == 'D' {\n                None\n            } else {\n                Some(first_path)\n            },\n        });\n    }\n    Ok(changes)\n}\n\nfn collect_wanted_blob_ids(changes: &[RawChange]) -> Vec<String> {\n    let mut ids = BTreeSet::new();\n    for change in changes {\n        if change.new_path.is_some()\n            && is_regular_blob_mode(&change.new_mode)\n            && change.new_oid != NULL_OID\n        {\n            ids.insert(change.new_oid.clone());\n        }\n    }\n    ids.into_iter().collect()\n}\n\nfn read_tree_snapshot(repo_path: &Path, commit_sha: &str) -> DynResult<BTreeMap<String, Vec<u8>>> {\n    let raw = run_git_bytes(\n        repo_path,\n        [\"ls-tree\", \"-r\", \"-z\", \"--full-tree\", commit_sha],\n        None,\n    )?;\n    let mut path_by_oid = BTreeMap::new();\n    for token in raw\n        .split(|byte| *byte == 0)\n        .filter(|token| !token.is_empty())\n    {\n        let entry = std::str::from_utf8(token)?;\n        let (header, path) = entry.split_once('\\t').ok_or(\"invalid ls-tree entry\")?;\n        let fields = header.split_whitespace().collect::<Vec<_>>();\n        if fields.len() != 3 {\n            continue;\n        }\n        let mode = fields[0];\n        let object_type = fields[1];\n        let oid = fields[2];\n        if object_type != \"blob\" || 
!is_regular_blob_mode(mode) {\n            continue;\n        }\n        path_by_oid.insert(path.to_string(), oid.to_string());\n    }\n    let blob_ids = path_by_oid.values().cloned().collect::<Vec<_>>();\n    let blobs = read_blobs(repo_path, &blob_ids)?;\n    let mut files = BTreeMap::new();\n    for (path, oid) in path_by_oid {\n        let bytes = blobs\n            .get(&oid)\n            .cloned()\n            .ok_or_else(|| format!(\"missing blob {oid} for path {path}\"))?;\n        files.insert(path, bytes);\n    }\n    Ok(files)\n}\n\nfn compile_operations(patch_set: &PatchSet) -> DynResult<Vec<FileOperation>> {\n    let mut operations = Vec::with_capacity(patch_set.changes.len());\n    for change in &patch_set.changes {\n        let new_bytes = if change.new_path.is_some() && is_regular_blob_mode(&change.new_mode) {\n            Some(\n                patch_set\n                    .blobs\n                    .get(&change.new_oid)\n                    .cloned()\n                    .ok_or_else(|| format!(\"missing blob bytes for {}\", change.new_oid))?,\n            )\n        } else {\n            None\n        };\n\n        let kind = match change.status {\n            'A' => OperationKind::Add,\n            'M' => OperationKind::Modify,\n            'D' => OperationKind::Delete,\n            'R' => OperationKind::Rename,\n            'C' => OperationKind::Copy,\n            other => {\n                return Err(format!(\"unsupported diff status '{other}'\").into());\n            }\n        };\n        operations.push(FileOperation {\n            kind,\n            old_path: change.old_path.clone(),\n            new_path: change.new_path.clone(),\n            new_bytes,\n            new_executable: change.new_mode == \"100755\",\n        });\n    }\n    Ok(operations)\n}\n\nfn normalize_snapshot_for_lix(files: &BTreeMap<String, Vec<u8>>) -> BTreeMap<String, Vec<u8>> {\n    files\n        .iter()\n        .map(|(path, bytes)| (to_lix_path(path), 
bytes.clone()))\n        .collect()\n}\n\nfn to_lix_path(path: &str) -> String {\n    let trimmed = path.trim_start_matches('/');\n    let segments = trimmed\n        .split('/')\n        .filter(|segment| !segment.is_empty())\n        .map(encode_lix_path_segment)\n        .collect::<Vec<_>>();\n    format!(\"/{}\", segments.join(\"/\"))\n}\n\nfn encode_lix_path_segment(segment: &str) -> String {\n    let mut encoded = String::new();\n    for byte in segment.as_bytes() {\n        let ch = *byte as char;\n        let allowed = ch.is_ascii_alphanumeric() || matches!(ch, '.' | '_' | '~' | '-');\n        if allowed {\n            encoded.push(ch);\n        } else {\n            encoded.push_str(&format!(\"%{:02X}\", byte));\n        }\n    }\n    encoded\n}\n\nfn first_unsupported_change_reason(changes: &[RawChange]) -> Option<String> {\n    changes.iter().find_map(unsupported_change_reason)\n}\n\nfn unsupported_change_reason(change: &RawChange) -> Option<String> {\n    match change.status {\n        'A' => {\n            if !is_regular_blob_mode(&change.new_mode) {\n                Some(format!(\n                    \"added path {:?} uses unsupported mode {}\",\n                    change.new_path, change.new_mode\n                ))\n            } else {\n                None\n            }\n        }\n        'M' => {\n            if !is_regular_blob_mode(&change.old_mode) || !is_regular_blob_mode(&change.new_mode) {\n                return Some(format!(\n                    \"modified path {:?} uses unsupported mode {} -> {}\",\n                    change.new_path, change.old_mode, change.new_mode\n                ));\n            }\n            if change.old_path == change.new_path\n                && change.old_oid == change.new_oid\n                && change.old_mode != change.new_mode\n            {\n                return Some(format!(\n                    \"mode-only change on {:?} is not represented by lix_file\",\n                    change.new_path\n      
          ));\n            }\n            None\n        }\n        'D' => {\n            if !is_regular_blob_mode(&change.old_mode) {\n                Some(format!(\n                    \"deleted path {:?} uses unsupported mode {}\",\n                    change.old_path, change.old_mode\n                ))\n            } else {\n                None\n            }\n        }\n        'R' | 'C' => {\n            if !is_regular_blob_mode(&change.old_mode) || !is_regular_blob_mode(&change.new_mode) {\n                Some(format!(\n                    \"rename/copy {:?} -> {:?} uses unsupported mode {} -> {}\",\n                    change.old_path, change.new_path, change.old_mode, change.new_mode\n                ))\n            } else {\n                None\n            }\n        }\n        other => Some(format!(\"unsupported diff status '{other}'\")),\n    }\n}\n\nfn is_regular_blob_mode(mode: &str) -> bool {\n    mode == \"100644\" || mode == \"100755\"\n}\n\nfn read_blobs(repo_path: &Path, blob_ids: &[String]) -> DynResult<HashMap<String, Vec<u8>>> {\n    if blob_ids.is_empty() {\n        return Ok(HashMap::new());\n    }\n    let input = format!(\"{}\\n\", blob_ids.join(\"\\n\")).into_bytes();\n    let output = run_git_bytes(repo_path, [\"cat-file\", \"--batch\"], Some(input))?;\n    let mut blobs = HashMap::with_capacity(blob_ids.len());\n    let mut offset = 0usize;\n    while offset < output.len() {\n        let line_end = output[offset..]\n            .iter()\n            .position(|byte| *byte == b'\\n')\n            .map(|index| offset + index)\n            .ok_or(\"invalid cat-file batch output\")?;\n        let header = std::str::from_utf8(&output[offset..line_end])?;\n        offset = line_end + 1;\n\n        let header_fields = header.split_whitespace().collect::<Vec<_>>();\n        if header_fields.len() != 3 {\n            return Err(format!(\"invalid cat-file header: {header}\").into());\n        }\n        let oid = 
header_fields[0].to_string();\n        let object_type = header_fields[1];\n        let size: usize = header_fields[2].parse()?;\n        if object_type != \"blob\" {\n            return Err(format!(\"expected blob for {oid}, got {object_type}\").into());\n        }\n        let body_end = offset + size;\n        if body_end > output.len() {\n            return Err(format!(\"truncated blob body for {oid}\").into());\n        }\n        blobs.insert(oid, output[offset..body_end].to_vec());\n        offset = body_end + 1;\n    }\n    Ok(blobs)\n}\n\nfn run_git_text<I, S>(repo_path: &Path, args: I) -> DynResult<String>\nwhere\n    I: IntoIterator<Item = S>,\n    S: AsRef<str>,\n{\n    let args_vec = args\n        .into_iter()\n        .map(|arg| arg.as_ref().to_string())\n        .collect::<Vec<_>>();\n    let output = run_command(\n        \"git\",\n        args_vec.iter().map(String::as_str),\n        Some(repo_path),\n        None,\n    )?;\n    Ok(String::from_utf8(output)?)\n}\n\nfn run_git_bytes<I, S>(repo_path: &Path, args: I, stdin: Option<Vec<u8>>) -> DynResult<Vec<u8>>\nwhere\n    I: IntoIterator<Item = S>,\n    S: AsRef<str>,\n{\n    let args_vec = args\n        .into_iter()\n        .map(|arg| arg.as_ref().to_string())\n        .collect::<Vec<_>>();\n    run_command(\n        \"git\",\n        args_vec.iter().map(String::as_str),\n        Some(repo_path),\n        stdin,\n    )\n}\n\nfn run_command<I, S>(\n    program: &str,\n    args: I,\n    cwd: Option<&Path>,\n    stdin: Option<Vec<u8>>,\n) -> DynResult<Vec<u8>>\nwhere\n    I: IntoIterator<Item = S>,\n    S: AsRef<str>,\n{\n    let args_vec = args\n        .into_iter()\n        .map(|arg| arg.as_ref().to_string())\n        .collect::<Vec<_>>();\n    let mut command = Command::new(program);\n    command.args(&args_vec);\n    if let Some(cwd) = cwd {\n        command.current_dir(cwd);\n    }\n    if stdin.is_some() {\n        command.stdin(Stdio::piped());\n    }\n    command.stdout(Stdio::piped());\n    
command.stderr(Stdio::piped());\n    let mut child = command.spawn()?;\n    if let Some(stdin_bytes) = stdin {\n        use std::io::Write;\n        let mut child_stdin = child.stdin.take().ok_or(\"missing child stdin\")?;\n        child_stdin.write_all(&stdin_bytes)?;\n    }\n    let output = child.wait_with_output()?;\n    if !output.status.success() {\n        let stderr = String::from_utf8_lossy(&output.stderr);\n        return Err(format!(\n            \"command failed: {} {}\\n{}\",\n            program,\n            args_vec.join(\" \"),\n            stderr.trim()\n        )\n        .into());\n    }\n    Ok(output.stdout)\n}\n\nfn copy_directory(source: &Path, destination: &Path) -> DynResult<()> {\n    if destination.exists() {\n        fs::remove_dir_all(destination)?;\n    }\n    run_command(\n        \"cp\",\n        [\n            \"-R\",\n            source.to_str().ok_or(\"invalid source path\")?,\n            destination.to_str().ok_or(\"invalid destination path\")?,\n        ],\n        None,\n        None,\n    )?;\n    Ok(())\n}\n\nfn elapsed_ms(started: Instant) -> f64 {\n    started.elapsed().as_secs_f64() * 1000.0\n}\n"
  },
  {
    "path": "blog/001-introducing-lix/index.md",
    "content": "---\ndate: \"2026-01-20\"\nog:description: \"Lix is a version control system you import as a library. It records semantic changes to enable diffs, reviews, rollback, and querying of edits.\"\n---\n\n# Introducing Lix: An embeddable version control system\n\nLix is an **embeddable version control system** that can be imported as a library. Use Lix, for example, to enable human-in-the-loop workflows for AI agents, such as diffs and reviews.\n\n- **It's just a library** — Lix is a library you import. Get branching, diffs, and rollback in your existing stack\n- **Tracks semantic changes** — diffs, blame, and history are queryable via SQL\n- **Approval workflows for agents** — agents propose changes in isolated versions, humans review and merge\n\n![AI agent changes need to be visible and controllable](./ai-agents-guardrails.png)\n\n> [!TIP]\n> Lix does not replace Git. [Read how Lix compares to Git →](https://lix.dev/docs/comparison-to-git)\n\n## Semantic change tracking\n\nLix doesn't track line-by-line text changes. It tracks **semantic changes** at the entity level via plugins.\n\nA plugin parses a format (or a piece of app state) into structured entities. 
Then Lix stores **what changed** — not just which bytes differ.\n\n**Before:**\n```json\n{\"theme\":\"light\",\"notifications\":true,\"language\":\"en\"}\n```\n\n**After:**\n```json\n{\"theme\":\"dark\",\"notifications\":true,\"language\":\"en\"}\n```\n\n**Git tracks:**\n```diff\n-{\"theme\":\"light\",\"notifications\":true,\"language\":\"en\"}\n+{\"theme\":\"dark\",\"notifications\":true,\"language\":\"en\"}\n```\n\n**Lix tracks:**\n```diff\nproperty theme:\n- light\n+ dark\n```\n\n### Excel file example\n\nWith an XLSX plugin (not shipped yet), Lix can show a cell-level diff. This is exactly the kind of semantic surface plugins define: cells vs formulas vs styling.\n\n**Before:**\n\n| order_id | product  | status  |\n| -------- | -------- | ------- |\n| 1001     | Widget A | shipped |\n| 1002     | Widget B | pending |\n\n**After:**\n\n| order_id | product  | status  |\n| -------- | -------- | ------- |\n| 1001     | Widget A | shipped |\n| 1002     | Widget B | shipped |\n\n**Git tracks:**\n```diff\n-Binary files differ\n```\n\n**Lix tracks:**\n```diff\norder_id 1002 status:\n- pending\n+ shipped\n```\n\nThe same approach extends to any other format your product cares about — **as long as there’s a plugin** that can interpret it.\n\n## How does Lix work?\n\nLix is **change-first**: it stores semantic changes as queryable data, not snapshots.\n\nThat means audit trails, rollbacks, and “blame” become simple queries:\n\n```sql\nSELECT *\nFROM state_history\nWHERE entity_id = 'settings.theme'\nORDER BY depth ASC;\n```\n\nLix uses existing SQL databases as both **query engine** and **persistence layer**.\n\nPlugins parse files (including binary formats) into \"meaningful changes\" such as cells, properties, and whitespace. 
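As a rough illustration of what such a plugin emits (the `Change` shape and `detectChanges` helper here are hypothetical, not the actual plugin API), property-level change detection over two parsed JSON snapshots could look like:

```typescript
// Hypothetical sketch: turn two parsed JSON snapshots into
// property-level changes, the kind of entity-level unit a plugin emits.
// Comparison is shallow, which is enough for flat settings objects.
type Change = { entityId: string; before: unknown; after: unknown };

function detectChanges(
  before: Record<string, unknown>,
  after: Record<string, unknown>,
): Change[] {
  const changes: Change[] = [];
  // Union of keys covers added, removed, and modified properties.
  const keys = new Set([...Object.keys(before), ...Object.keys(after)]);
  for (const key of keys) {
    if (before[key] !== after[key]) {
      changes.push({ entityId: key, before: before[key], after: after[key] });
    }
  }
  return changes;
}
```

On the settings example above, this yields one change for `theme` rather than a whole-line diff.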
Lix stores those changes as rows in virtual tables like `file`, `file_history`, and `state_history`.\n\nWhy this matters:\n\n- **Doesn't reinvent databases** — durability, ACID, and recovery come from proven SQL engines.\n- **SQL API for changes** — query diffs, history, and audit trails directly.\n- **Portable** — runs on SQLite, Postgres, or other SQL databases.\n\n```\n┌─────────────────────────────────────────────────┐\n│                      Lix                        │\n│                                                 │\n│ ┌────────────┐ ┌──────────┐ ┌─────────┐ ┌─────┐ │\n│ │ Filesystem │ │ Branches │ │ History │ │ ... │ │\n│ └────────────┘ └──────────┘ └─────────┘ └─────┘ │\n└────────────────────────┬────────────────────────┘\n                         │\n                         ▼\n┌─────────────────────────────────────────────────┐\n│                  SQL database                   │ \n│            (SQLite, Postgres, etc.)             │\n└─────────────────────────────────────────────────┘\n```\n\nThis means: no separate infrastructure to manage, and no “special” datastore just for version control.\n\n## Plugins (format support)\n\nLix’s format support depends on plugins. Here’s the current status:\n\n| Format | Plugin | Status |\n| ------ | ------ | ------ |\n| JSON | `@lix-js/plugin-json` | Stable |\n| CSV | `@lix-js/plugin-csv` | Stable |\n| Markdown | `@lix-js/plugin-md` | Beta |\n| ProseMirror | `@lix-js/plugin-prosemirror` | Stable |\n\n**Building your own plugin:** take an off-the-shelf parser for your format, map it to Lix’s entity/change schema, and you get semantic diffs + history for that format. [Plugin documentation →](https://lix.dev/docs/plugins)\n\n## Why did we build Lix?\n\nLix was developed alongside [inlang](https://inlang.com), open-source localization infrastructure.\n\nWe needed version control **as a library**, not as an external tool. 
Git's architecture didn't fit: we needed database semantics (transactions, ACID), queryable history, and semantic diffing. [Read more →](https://samuelstroschein.com/blog/git-limitations)\n\nThe result is Lix, now at over [90k weekly downloads on NPM](https://www.npmjs.com/package/@lix-js/sdk).\n\n![Weekly npm downloads](./npm-downloads.png)\n\n\n## Getting started\n\n<p>\n  <img src=\"https://cdn.simpleicons.org/javascript/F7DF1E\" alt=\"JavaScript\" width=\"18\" height=\"18\" /> JavaScript ·\n  <a href=\"https://github.com/opral/lix/issues/370\"><img src=\"https://cdn.jsdelivr.net/gh/devicons/devicon/icons/python/python-original.svg\" alt=\"Python\" width=\"18\" height=\"18\" /> Python</a> ·\n  <a href=\"https://github.com/opral/lix/issues/371\"><img src=\"https://cdn.simpleicons.org/rust/CE422B\" alt=\"Rust\" width=\"18\" height=\"18\" /> Rust</a> ·\n  <a href=\"https://github.com/opral/lix/issues/373\"><img src=\"https://cdn.simpleicons.org/go/00ADD8\" alt=\"Go\" width=\"18\" height=\"18\" /> Go</a>\n</p>\n\n```bash\nnpm install @lix-js/sdk\n```\n\n```ts\nimport { openLix, selectWorkingDiff } from \"@lix-js/sdk\";\n\nconst lix = await openLix({\n  environment: new InMemorySQLite()\n});\n\nawait lix.db.insertInto(\"file\").values({ path: \"/hello.txt\", data: ... }).execute();\n\nconst diff = await selectWorkingDiff({ lix }).selectAll().execute();\n```\n\n## What's next\n\nThe next version of Lix will be a refactor to be purely \"preprocessor\" based. This makes Lix easier to embed anywhere and enables:\n\n- **Fast writes** ([RFC 001](/rfc/001-preprocess-writes))\n- **Any SQL database** (SQLite, Postgres, Turso, MySQL)\n- **SDKs for Python, Rust, Go** ([RFC 002](/rfc/002-rewrite-in-rust))\n\n```\n                      ┌────────────────┐\n  SELECT * FROM ...   
│  Lix Engine    │   SELECT * FROM ...\n ───────────────────▶ │    (Rust)      │ ───────────────────▶  Database\n                      └────────────────┘\n```\n\n### Join the community\n\n- ⭐ [Star the lix repo on GitHub](https://github.com/opral/lix)\n- 💬 [Chat on Discord](https://discord.gg/gdMPPWy57R)\n"
  },
  {
    "path": "blog/002-modeling-a-company-as-a-repository/index.md",
    "content": "---\ndate: \"2026-02-23\"\nog:description: \"Modeling a company as a filesystem is promising for AI agents, but binary files break the model. Lix turns binary formats into structured data agents can read and write.\"\nog:image: \"./cover.jpg\"\nog:image:alt: \"Abstract illustration for Your Company should be a Repository for AI agents\"\n---\n\n# Your Company should be a Repository for AI agents\n\nThe idea of modeling a company as a filesystem for maximum agent efficiency is gaining traction on X (Twitter).\n\nFor example, [Eli Mernit](https://x.com/mernit/status/2021324284875153544) wrote that agents get better context if a company is modeled as files (\"Your company is a filesystem\").\n\nThe problem is modeling a company as filesystem doesn't work today because most files are binary formats that agents can't work with effectively. [Anvisha Pai](https://x.com/anvishapai/status/2022062725354967551) pointed that out in her response post \"Your company is not a filesystem\".\n\nBut, what if a system exists that turns binary files into structured data agents can read and write to?\n\n![Twitter discussion between Eli Mernit and Anvisha](./twitter-discussion-cards.webp)\n\n## The case for the filesystem\n\nThe \"company as a filesystem because of agents\" argument is compelling for two reasons:\n\n1. Agents get full context. When company data lives in files, agents can inspect and reason across systems without brittle app integrations.\n\n2. No third party API restrictions. Tools like Codex and Claude Code feel powerful because they can use direct filesystem primitives (`grep`, shell commands, scripts) instead of being constrained by third-party APIs.\n\n![Example structure for modeling a company as a filesystem](./mernit-filesystem-example.jpg)\n\n## But the filesystem is not enough\n\nA plain filesystem alone doesn't let agents work effectively:\n\n1. Most file formats are not agent-friendly. Documents, spreadsheets, presentations, etc. 
are binary formats. Agents can parse some formats, but there is no universal semantic layer that enables round-trip editing.\n\n2. Many files cannot be converted into text. A common workaround is to convert binary files to text. But, visual and structural media (for example CAD, PCB, or layered design files) lose critical information when reduced to text. That makes review and verification harder, the real bottleneck with the uprising of AI agents.\n\n![Visual formats are not fully representable as plain text](./anvisha-visual-formats.jpg)\n\n## A system that understands binary files\n\nA system that turns binary files into structured data agents can read and write to would enable modeling a company as filesystem.\n\nThe implementation can be simple. Parse binary files into their schemas. After all, most binary files are structured data under the hood.\n\nFor example, a docx file is a collection of paragraphs, tables, images, etc. All of those can be expressed as JSON that an agent can understand.\n\n```text\n  ┌─────────────────┐         ┌───────────────────────┐\n  │ contract.docx   │────┬──► │ { type: \"paragraph\" } │\n  └─────────────────┘    ├──► │ { type: \"table\" }     │\n                         └──► │ { type: \"image\" }     │\n  ┌─────────────────┐         ├───────────────────────┤\n  │ design.psd      │────┬──► │ { type: \"layer\" }     │\n  └─────────────────┘    └──► │ { type: \"mask\" }      │\n                              ├───────────────────────┤\n  ┌─────────────────┐         │                       │\n  │ budget.xlsx     │────┬──► │ { type: \"row\" }       │\n  └─────────────────┘    └──► │ { type: \"formula\" }   │\n                              └───────────────────────┘\n                                        ▲\n                                        │\n                                        ▼\n                               ┌──────────────┐\n                               │    Agent     │\n                               │  read/write  │\n 
                              └──────────────┘\n```\n\n## Lix is that system\n\nA system that turns binary files into structured JSON agents can understand already exists; it's called **Lix**.\n\nLix is a \"universal\" version control system. \"Universal\" because it can track changes in binary files by parsing files into JSON schemas. Otherwise, tracking changes in those binary files would not be possible. Lix also solves the problem of opaque binary files agents are now running into.\n\nLix is in alpha, but you can already check out the repository on GitHub.\n\n[Lix on GitHub](https://github.com/opral/lix)\n\n![Lix GitHub repository screenshot](./lix-github.jpg)\n"
  },
  {
    "path": "blog/003-february-2026-update/index.md",
    "content": "---\ndate: \"2026-03-04\"\nog:description: \"The Rust rewrite is complete. 33x faster file writes, lix was trending on HackerNews, and what's next in March.\"\nog:image: \"./cover.png\"\nog:image:alt: \"February 2026 update cover showing the Lix Rust rewrite milestone\"\n---\n\n# February 2026 Update: Rust Rewrite Complete\n\n**TL;DR**\n\n- 33x faster file writes\n- GitHub stars grew from 70 to over 500\n- Real workload and AX (user) testing in March\n\n## The Rust rewrite is complete\n\n[RFC 001](https://lix.dev/rfc/001-preprocess-writes) and [RFC 002](https://lix.dev/rfc/002-rewrite-in-rust) have been implemented in February, with two strong outcomes:\n\n### 33x faster file writes\n\nThe rewrite significantly improves heavy write paths, with the largest gain on realistic plugin-based JSON file inserts (**33x median, ~40x p95**).\n\n| Benchmark                         | `v0.5`    | `next`    | Speedup    |\n| --------------------------------- | --------- | --------- | ---------- |\n| State single-row insert           | 17.43 ms  | 14.85 ms  | 1.17x      |\n| State 10-row insert               | 57.33 ms  | 46.53 ms  | 1.23x      |\n| State 100-row insert              | 460.27 ms | 193.30 ms | **2.38x**  |\n| JSON file insert (120 properties) | 889.81 ms | 26.90 ms  | **33.08x** |\n\n### Controlling the query planner\n\nThe new architecture unlocks previously impossible optimizations. The SQL database is merely used as a storage and query execution layer.\n\nv0.5 and below could not optimize beyond what the vtable API of the database provides. Every write triggered per-row callbacks that crossed the JS-WASM boundary with ~10-25 internal SQL queries each. In SQLite's case, even batching mutations was not optimizable.\n\nLix now intercepts and rewrites queries before they hit SQLite, batching what used to be per-row vtable callbacks into single bulk operations. 
For more information, read [RFC 001](https://lix.dev/rfc/001-preprocess-writes).\n\n```plain\n         v0.5                           next\n        ──────                          ────\n       ┌───────┐                     ┌───────┐\n       │ Query │                     │ Query │\n       └───┬───┘                     └───┬───┘\n           │                             │\n           ▼                             ▼\n    ┌──────────────┐              ┌─────────────┐\n    │ SQL Database │              │     Lix     │\n    └──────┬───────┘              └──────┬──────┘\n           │                             │\n           ▼                             ▼\n       ┌───────┐                 ┌──────────────┐\n       │  Lix  │                 │ SQL Database │\n       └───────┘                 └──────────────┘\n```\n\n## GitHub stars and HackerNews\n\nLix was trending on HackerNews in late January. The outcome was an instant jump in GitHub stars and inbound requests to try out lix. Most inbound interest is around AI agents operating on non-code files and formats Git can't handle well (Excel, XML, SSIS packages).\n\n[https://news.ycombinator.com/item?id=46713387](https://news.ycombinator.com/item?id=46713387)\n\n![GitHub stars growth](./github-stars.png)\n\n![HackerNews trending](./hackernews.png)\n\n## What's next in March\n\nPeople want to test lix. The major use case is AI agents that operate on non-code files (.docx, .pdf, etc.). We have two remaining things to do:\n\n### 1. Real workload testing and bug fixing\n\nReal production workloads will surface performance issues and bugs that should be simple to solve with the completed refactor. After all, we control the query planner now.\n\n### 2. AX (agent experience) testing and API iteration\n\nAX testing? Yes. That's a fundamental shift in 2026. The old ways of discussing APIs and conducting user interviews are no longer needed. Ask an agent to do a task, then follow up with \"What friction points did you run into?\" and fix them.\n\n![AX testing](./ax-testing.png)\n"
  },
  {
    "path": "blog/004-march-2026-update/index.md",
    "content": "---\ndate: \"2026-04-03\"\nog:description: \"500 real commits replayed with no corruption bugs. Without the semantic layer, Lix is ~8x faster than Git, but semantic writes still bottleneck on write amplification.\"\nog:image: \"./cover.svg\"\nog:image:alt: \"Lix March 2026 Update: 500 commits with zero corruption, blob commit in 5ms, semantic writes need fixing\"\n---\n\n# March 2026 Update: No Corruption Bugs, 8x Faster Than Git, Semantic Writes Still Too Slow\n\n![Lix March 2026 Update](./cover.svg)\n\n**TL;DR**\n\n- Workload testing worked: 500 real commits replayed with no state corruption bugs\n- Semantic writes still hit a write-amplification bottleneck on large files (500ms+)\n- Without the semantic layer, the file-write-plus-commit workflow is ~8x faster than Git\n- April goal: sub 100ms for 10k entity inserts\n\n## Workload testing\n\n[Last month](/blog/february-2026-update) we set out to do real workload testing in March to reveal performance bottlenecks and bugs that prevent production usage of lix.\n\nThe test replays 500 real commits from the [paraglide-js](https://github.com/opral/paraglide-js) repo. For each commit, it sets up the \"before\" state outside the timer, applies the same file changes, and measures how long Lix takes to commit. The simulated scenario: \"I edited some files, now I'm committing.\"\n\nThree findings came out of this.\n\n### Finding 1: It works\n\nThe best result from the workload replay is that it worked. Replaying 500 real commits did not reveal state corruption bugs. 
That matters more than the benchmark number because correctness is the prerequisite for everything else.\n\n### Finding 2: Semantic writes still bottleneck on write amplification\n\n> [!NOTE]\n> **Refresher: What is the semantic layer?**\n>\n> Lix parses files into structured entities like paragraphs, tables, images so it can diff, merge, and sync at that level instead of treating files as opaque blobs.\n>\n> ```\n>   contract.docx\n>        ↓\n>   paragraphs / tables / images\n>        ↓\n>   diff / merge / history on those units\n> ```\n\nThe bottleneck is write amplification. A single file write fans out into many entity rows. Inserting a file with 10k entities means the engine has to process 10k entity rows. On the current path, semantic writes are multi-second operations. Any interaction above 100ms stops feeling instantaneous, so this needs to come down.\n\n```\n  contract.docx            Lix engine                    SQL database\n  ┌──────────────────┐     ┌─────────────────────┐       ┌──────────────┐\n  │ Paragraph 1      │     │ process 10,000      │       │              │\n  │ Paragraph 2      │     │ entity rows         │       │ INSERT row 1 │\n  │ Paragraph 3      │────►│                     │──────►│ INSERT row 2 │\n  │ Table 1          │     │ validate, transform,│       │ ...          │\n  │   Row 1          │     │ detect changes      │       │ INSERT row   │\n  │   Row 2          │     │                     │       │   10,000     │\n  │ Image 1          │     │ 💥 too slow         │       │              │\n  │ ...              │     └─────────────────────┘       └──────────────┘\n  │ Paragraph 4,291  │\n  └──────────────────┘\n  1 file write             N entities to process           N SQL row inserts\n```\n\nThe engine is not fast enough to handle these large batches. 
The goal for April is to get 10k entity inserts under 100ms.\n\n### Finding 3: Without the semantic layer, the file-write-plus-commit workflow is ~8x faster than Git\n\nUnexpected good news. Without the semantic layer (treating files as blobs), Lix completes the same file-write-plus-commit workload in ~5 ms, whereas Git takes ~39 ms.[^1]\n\n[^1]: Measured on a MacBook Pro M5 Pro (18-core), SQLite in WAL mode.\n\n| Phase       | Git        | Lix       |\n| ----------- | ---------- | --------- |\n| File writes | ~0.2 ms    | ~3.6 ms   |\n| Commit      | ~39 ms     | ~1 ms     |\n| **Total**   | **~39 ms** | **~5 ms** |\n\nThe difference comes down to architecture. Lix applies mutations inside an open SQLite transaction. Committing is closing that transaction (~1 ms). The comparison runs `git add -A` followed by `git commit`, which scans the working tree, updates the index, and writes tree and commit objects.\n\nThis is encouraging, but it's the blob layer only. The semantic layer is what makes Lix useful for non-code files, and that's where the work is.\n\n### Why not skip the semantic layer entirely?\n\nIf Lix is already fast without the semantic layer, why not just store blobs and diff on the fly?\n\nThis is really a source-of-truth decision, not a storage decision. 
Lix can keep both a blob and semantic state, but only one can be authoritative:\n\n```\n  Option A: Blob is source of truth, diffs computed on the fly\n\n  ┌──────────────┐       ┌──────────────┐\n  │ contract.docx│──────►│  re-parse    │──────► diffs (computed every time)\n  │   (blob)     │       │  on every op │\n  └──────────────┘       └──────────────┘\n\n\n  Option B: Diffs are source of truth, blob derived on demand\n\n  ┌──────────────┐       ┌──────────────┐\n  │    diffs     │──────►│  serialize   │──────► contract.docx (derived)\n  │  (stored)    │       │  on demand   │\n  └──────────────┘       └──────────────┘\n```\n\nIf both are independently writable, they can drift.\n\nGit gets away with blob-first storage because its default diff and merge model is line-oriented and works well for ordinary text. For smaller structured text files like JSON, re-parsing on demand can still be acceptable. But as files grow, the cost per operation grows with them:\n\n| File type           | Size      | Rebuild cost per operation |\n| ------------------- | --------- | -------------------------- |\n| `.js` source file   | ~0.005 MB | trivial                    |\n| Large JSON config   | ~0.5 MB   | acceptable                 |\n| `.docx` with images | ~5 MB     | slow                       |\n| `.xlsx` spreadsheet | 5-20 MB   | 💥 too slow                |\n\nOOXML files like `.docx` and `.xlsx` are ZIP packages made of many XML parts, so rebuilding semantic state from the blob on every merge, history read, or sync means repeatedly paying unzip, parse, and tree-diff costs. A cache avoids repeated rebuilds, but now there are two representations to keep consistent — every write path must update both, and bugs in that synchronization are silent data corruption.\n\nSo Lix makes semantic state canonical and materializes the blob on demand when someone actually needs the file bytes. 
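The materialize-on-demand direction can be sketched like this (hypothetical names; the real projection is format-specific and plugin-driven): stored entities are serialized back into file bytes only when something asks for them:

```typescript
// Hypothetical sketch: semantic state is canonical, and the blob is a
// projection computed only when file bytes are requested.
type Entity = { entityId: string; value: unknown };

function materializeJsonBlob(entities: Entity[]): string {
  const document: Record<string, unknown> = {};
  for (const entity of entities) {
    document[entity.entityId] = entity.value; // rebuild the object shape
  }
  return JSON.stringify(document); // serialize once, on demand
}
```

Reads, merges, and history keep operating on the entity rows; only an export pays the serialization cost.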
The tradeoff is that blob writes pay an upfront parsing cost — which is the write-amplification bottleneck we're now fixing.\n\nLong term, most app and agent writes should bypass blob parsing entirely. They will write entities directly, so the hot path avoids both blob parsing and blob serialization.\n\nThat means the semantic layer must be fast.\n\n## Prolly trees for cheap versioning\n\nSolving write speed alone isn't enough — storage also needs to scale across versions. Without content deduplication, creating a new version means duplicating all entity data. A 10k-entity Word document across 5 versions = 50k rows stored.\n\n```\n  Without deduplication:\n\n  version: main              version: draft\n  ┌──────────────────┐       ┌──────────────────┐\n  │ 10,000 entities  │       │ 10,000 entities  │  ← full copy\n  └──────────────────┘       └──────────────────┘\n  💥 10,000 rows              💥 10,000 rows (copied)\n```\n\n[Prolly trees](https://docs.dolthub.com/architecture/storage-engine/prolly-tree) are the most promising fit for this. Entities are grouped into chunks with boundaries determined by content hashes. If one paragraph changes, only the chunk containing that paragraph is new. The rest is shared across versions.\n\n```\n  With Prolly trees:\n\n  version: main                       version: draft\n  (original)                          (paragraph 3 edited)\n  ┌──────────────────┐                ┌──────────────────┐\n  │ Paragraph 1      │                │ Paragraph 1      │\n  │ Paragraph 2      │                │ Paragraph 2      │\n  │ Paragraph 3      │                │ Paragraph 3 ✎    │\n  │ Table 1          │                │ Table 1          │\n  │ ...              │                │ ...              
│\n  │ Paragraph 4,291  │                │ Paragraph 4,291  │\n  └──────────────────┘                └──────────────────┘\n          │                                   │\n          ▼                                   ▼\n  ┌──────────────┐                    ┌──────────────┐\n  │   chunk A  ──┼────────────────────┼── chunk A    │  ← shared\n  │   chunk B    │                    │   chunk B'   │  ← different (contains edited paragraph 3)\n  │   chunk C  ──┼────────────────────┼── chunk C    │  ← shared\n  │   chunk D  ──┼────────────────────┼── chunk D    │  ← shared\n  └──────────────┘                    └──────────────┘\n\n  ✅ Creating a version = pointing to the same chunks\n  ✅ Only changed chunks are stored separately\n```\n\n## What's next in April\n\n**Goal: Make Lix ready for people to try out.**\n\nMarch proved the blob path works. April is about closing the gap so the semantic layer is fast enough and correct enough for real use.\n\n1. **10k entity inserts under 100 ms.** SQLite can insert 10k rows in under 10 ms. That gives us ~90 ms of headroom to work with.\n2. **Prolly trees for cheap branching.** Without content deduplication, every branch copies all entity data. Prolly trees share unchanged chunks across versions, so branching a 10k-entity document is nearly free.\n3. **Workload testing with the semantic layer on.** March proved the blob path doesn't corrupt state across 500 real commits. April repeats that test with semantic writes enabled.\n"
  },
  {
    "path": "blog/005-april-2026-update/index.md",
    "content": "---\ndate: \"2026-05-11\"\nog:description: \"The new DataFusion path runs the core Lix MVP flow. April did not hit the 10k inserts target, but it clarified why Lix needs control from incoming query down to storage.\"\nog:image: \"./cover.svg\"\nog:image:alt: \"Lix April 2026 Update cover showing DataFusion planning queries, Lix owning the storage abstraction, and SQLite, RocksDB, S3/R2, and OPFS as backends\"\n---\n\n# April 2026 Update: Adopting DataFusion\n\n![Lix April 2026 Update](./cover.svg)\n\n**TL;DR**\n\n- Benchmarking exposed that SQLite gives too little control over Lix's versioned storage model to keep improving incrementally.\n- Decision: move query execution to DataFusion while keeping SQLite as a possible physical storage backend.\n- May goal: Release `v0.6` MVP with focus on CRUD with branching and merging on the optimized semantic write path that the file API will use next.\n\n## What works now\n\nThe important April result is that the core API works on the new path.\n\nThe shape is the MVP API:\n\n```ts\nimport { openLix } from \"@lix-js/sdk\";\nimport { createBetterSqlite3Backend } from \"@lix-js/sdk/sqlite\";\n\nconst lix = await openLix({\n  backend: createBetterSqlite3Backend({ path: \"app.lix\" }),\n  // Later: swap this for a RocksDB/S3/OPFS backend\n  // without changing the Lix API below.\n});\n\nawait lix.createVersion({ name: \"draft\" });\n\nawait lix.execute(\"INSERT INTO markdown_paragraph (id, text) VALUES ($1, $2)\", [\n  \"paragraph_1\",\n  \"Ship CRUD MVP\",\n]);\n\nawait lix.switchVersion({ name: \"main\" });\n\nawait lix.mergeVersion({ source: \"draft\" });\n```\n\nThe exact API names might still change. The important part is that the flow works:\n\n- open a Lix\n- create a version\n- write entities with CRUD operations\n- switch versions\n- merge a version\n\nThat is the product surface for the MVP.\n\nFiles are not in the `v0.6` MVP on purpose.\n\nA file write fans out into entity writes. 
A Word document, JSON file, or spreadsheet save can become thousands of inserts. That means the file API can only be as fast as the entity layer underneath it. The 10k inserts benchmark measures that layer.\n\nMost apps and agents should write entities directly anyway. They should update a paragraph, cell, or property, not re-serialize a whole document. The file API comes after CRUD because it is built on the same semantic write path.\n\nThe first preview is published on npm:\n\n```bash\nnpm install @lix-js/sdk@0.6.0-preview.2\n```\n\n[`@lix-js/sdk@0.6.0-preview.2`](https://www.npmjs.com/package/@lix-js/sdk/v/0.6.0-preview.2) is not the final `v0.6` MVP yet. It is the preview that proves the new path can be installed and tested.\n\n## April goal\n\n[Last month](/blog/march-2026-update) we found the next bottleneck: semantic writes.\n\nThe blob path was already fast. The semantic path was not. Writing one file can fan out into thousands of entities, and the April goal was to get **10k entity inserts under 100ms**.\n\nThe number is not random. A semantic file is not one row:\n\n- a Word document becomes paragraphs, tables, comments, images, and relationships\n- a JSON file becomes hundreds or thousands of properties\n- a spreadsheet becomes cells, formulas, sheets, and metadata\n\n10k inserts is the first useful proxy for \"real file, real structure.\" 100ms is the interaction budget. Below that, the write still feels instant. Above that, Lix becomes something users and agents wait on.\n\nWe did not hit the benchmark in April.\n\nWe are not publishing a final April number because the benchmark target moved to the new DataFusion path. Optimizing the old SQLite-centered path further would measure the architecture we are replacing.\n\nThe problem was not one slow query. The SQLite-centered path kept pushing Lix concepts like version roots, inherited rows, tombstones, and file projections into SQLite tables and views. 
Each optimization fixed one path, but the next feature needed another translation layer.\n\n## Finding: too little control\n\nThe recurring problem has been architecture confidence. Lix should ship an MVP and improve from there. But that only works if the architecture can be improved incrementally.\n\nIn February, we wrote that the Rust rewrite gave Lix control over the query planner. That wording was too broad. Lix controlled the query before SQLite saw it. Lix could parse and rewrite SQL, batch operations, and avoid many vtable callbacks.\n\nApril showed that this is not enough.\n\nSQLite still owns the final query planner and storage model. Lix can rewrite queries before SQLite sees them, but the result still has to fit into SQLite tables, indexes, views, and vtables.\n\nThe 10k inserts work made the missing control clear. Lix needs control from the incoming query all the way down to raw storage. Every write touches current state, history, branch visibility, file projections, and later merge inputs. Those choices depend on the physical shape of the data.\n\n```plain\n  February / March architecture\n\n  ┌───────────┐\n  │ SQL query │\n  └─────┬─────┘\n        │\n        ▼\n  ┌────────────────────────┐\n  │ Lix SQL parser/rewrite │  ← Lix controls this\n  └─────┬──────────────────┘\n        │\n        ▼\n  ┌──────────────────────┐\n  │ SQLite query planner │  ← SQLite still controls this\n  └─────┬────────────────┘\n        │\n        ▼\n  ┌───────────────────────────────┐\n  │ SQLite tables/views/vtables   │  ← Lix concepts squeezed here\n  └─────┬─────────────────────────┘\n        │\n        ▼\n  ┌────────────────┐\n  │ SQLite storage │\n  └────────────────┘\n```\n\n## Decision: adopt DataFusion\n\nDataFusion is an Apache Arrow SQL query engine. It gives Lix SQL parsing, planning, and execution while letting Lix provide the logic underneath.\n\nThe decision is not \"SQLite bad, custom database good.\" Reusing a query engine is still the right idea. 
The mistake would be building one from scratch when DataFusion exists.\n\nThat is the control Lix needs: from incoming query, through `lix_state`, versions, history, branch visibility, merge inputs, and file projections, down to the raw storage backend.\n\nSQLite does not go away. It can still be the physical storage backend. The change is that SQLite no longer defines the query and storage shape of Lix state.\n\n```plain\n  DataFusion-centered architecture\n\n  ┌───────────┐\n  │ SQL query │\n  └─────┬─────┘\n        │\n        ▼\n  ┌─────────────────────────┐\n  │ DataFusion query engine │  ← Lix controls query execution\n  └─────┬───────────────────┘\n        │\n        ▼\n  ┌──────────────────────────────────┐\n  │ Lix logic + storage abstraction  │  ← Lix controls this\n  └─────┬────────────────────────────┘\n        │\n        ▼\n  ┌──────────────────────────────────┐\n  │ SQLite · RocksDB · S3/R2 · OPFS  │  ← physical storage\n  └──────────────────────────────────┘\n```\n\nLix does not need to invent physical storage. Existing systems should still handle durability, transactions, files, pages, object storage, and the other hard parts of persistence. The prolly-tree direction from March is now part of this storage abstraction work: make branching cheap by sharing unchanged state, while keeping CRUD operations fast enough for the MVP.\n\nThis also changes the portability story. Earlier posts framed portability as \"any SQL database.\" With DataFusion, portability moves one layer down: any backend that can satisfy Lix's storage abstraction. Postgres can still be a backend later, but not because Lix delegates SQL execution to Postgres.\n\n## What happened to March's goals\n\nMarch had three April goals:\n\n1. 10k entity inserts under 100ms\n2. prolly trees for cheap branching\n3. workload testing with the semantic layer on\n\nThe first goal moved to May on the DataFusion path. Prolly trees moved into the broader physical storage abstraction work. 
The semantic workload replay should happen after the `v0.6` path is fast enough to be the path we intend to ship.\n\n## What's next in May\n\nMay goal: turn the preview into the Lix `v0.6` MVP.\n\nThe acceptance criteria:\n\n1. CRUD operations work through the new DataFusion path.\n2. Branching and merging work on that path.\n3. 10k semantic inserts are under 100ms.\n4. The Lix physical storage abstraction is no more than 1.5x slower than a direct SQLite storage + query baseline for the same workload.\n\nThe 1.5x number is the guardrail for the storage abstraction. It is not the final product latency target. It checks that the abstraction itself is not the bottleneck. If storage is close to SQLite's baseline, Lix can ship the MVP and keep optimizing query/runtime logic above it incrementally.\n\nFiles follow after CRUD because file writes fan out into the same entity writes.\n\nEverything else is secondary.\n"
  },
  {
    "path": "blog/authors.json",
    "content": "{\n  \"samuelstroschein\": {\n    \"name\": \"Samuel Stroschein\",\n    \"avatar\": \"https://avatars.githubusercontent.com/u/35429197?v=4\",\n    \"twitter\": \"https://x.com/samuelstroschei\",\n    \"github\": \"https://github.com/samuelstroschein\"\n  }\n}\n"
  },
  {
    "path": "blog/table_of_contents.json",
    "content": "[\n  {\n    \"path\": \"./005-april-2026-update/index.md\",\n    \"slug\": \"april-2026-update\",\n    \"authors\": [\"samuelstroschein\"]\n  },\n  {\n    \"path\": \"./004-march-2026-update/index.md\",\n    \"slug\": \"march-2026-update\",\n    \"authors\": [\"samuelstroschein\"]\n  },\n  {\n    \"path\": \"./003-february-2026-update/index.md\",\n    \"slug\": \"february-2026-update\",\n    \"authors\": [\"samuelstroschein\"]\n  },\n  {\n    \"path\": \"./002-modeling-a-company-as-a-repository/index.md\",\n    \"slug\": \"modeling-a-company-as-a-repository\",\n    \"authors\": [\"samuelstroschein\"]\n  },\n  {\n    \"path\": \"./001-introducing-lix/index.md\",\n    \"slug\": \"introducing-lix\",\n    \"authors\": [\"samuelstroschein\"]\n  }\n]\n"
  },
  {
    "path": "cla-signatures.json",
    "content": "{\n  \"signedContributors\": [\n    {\n      \"name\": \"janfjohannes\",\n      \"id\": 110794494,\n      \"comment_id\": 1711859828,\n      \"created_at\": \"2023-09-08T15:36:26Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1319\n    },\n    {\n      \"name\": \"MaxKless\",\n      \"id\": 34165455,\n      \"comment_id\": 1714026516,\n      \"created_at\": \"2023-09-11T14:39:53Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1325\n    },\n    {\n      \"name\": \"felixhaeberle\",\n      \"id\": 34959078,\n      \"comment_id\": 1717809210,\n      \"created_at\": \"2023-09-13T14:59:55Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1339\n    },\n    {\n      \"name\": \"samuelstroschein\",\n      \"id\": 35429197,\n      \"comment_id\": 1719038132,\n      \"created_at\": \"2023-09-14T08:55:18Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1339\n    },\n    {\n      \"name\": \"NiklasBuchfink\",\n      \"id\": 59048346,\n      \"comment_id\": 1719232555,\n      \"created_at\": \"2023-09-14T10:59:32Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1347\n    },\n    {\n      \"name\": \"floriandwt\",\n      \"id\": 92092993,\n      \"comment_id\": 1719439744,\n      \"created_at\": \"2023-09-14T13:20:15Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1339\n    },\n    {\n      \"name\": \"NilsJacobsen\",\n      \"id\": 58360188,\n      \"comment_id\": 1727541380,\n      \"created_at\": \"2023-09-20T11:30:29Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1385\n    },\n    {\n      \"name\": \"misa1515\",\n      \"id\": 61636045,\n      \"comment_id\": 1728275039,\n      \"created_at\": \"2023-09-20T18:59:25Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1388\n    },\n    {\n      \"name\": \"BRGustavoRibeiro\",\n      \"id\": 34517016,\n      \"comment_id\": 1728633275,\n      \"created_at\": \"2023-09-21T01:33:29Z\",\n      \"repoId\": 394757291,\n      
\"pullRequestNo\": 1390\n    },\n    {\n      \"name\": \"jannesblobel\",\n      \"id\": 72493222,\n      \"comment_id\": 1729390653,\n      \"created_at\": \"2023-09-21T11:36:20Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1393\n    },\n    {\n      \"name\": \"hecker\",\n      \"id\": 23746655,\n      \"comment_id\": 1736918216,\n      \"created_at\": \"2023-09-27T08:14:41Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1408\n    },\n    {\n      \"name\": \"openscript\",\n      \"id\": 1105080,\n      \"comment_id\": 1738661818,\n      \"created_at\": \"2023-09-28T07:57:14Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1412\n    },\n    {\n      \"name\": \"martin-lysk\",\n      \"id\": 113943358,\n      \"comment_id\": 1772895783,\n      \"created_at\": \"2023-10-20T14:54:50Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1504\n    },\n    {\n      \"name\": \"sunxyw\",\n      \"id\": 31698606,\n      \"comment_id\": 1784985693,\n      \"created_at\": \"2023-10-30T11:25:07Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1533\n    },\n    {\n      \"name\": \"ZerdoX-x\",\n      \"id\": 49815452,\n      \"comment_id\": 1787270801,\n      \"created_at\": \"2023-10-31T13:55:59Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1549\n    },\n    {\n      \"name\": \"WarningImHack3r\",\n      \"id\": 43064022,\n      \"comment_id\": 1802507427,\n      \"created_at\": \"2023-11-08T19:21:15Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1615\n    },\n    {\n      \"name\": \"albbus-stack\",\n      \"id\": 57916483,\n      \"comment_id\": 1804883805,\n      \"created_at\": \"2023-11-10T00:24:59Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1620\n    },\n    {\n      \"name\": \"JLAcostaEC\",\n      \"id\": 61467132,\n      \"comment_id\": 1806107356,\n      \"created_at\": \"2023-11-10T17:13:33Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1623\n    },\n    
{\n      \"name\": \"rishi-raj-jain\",\n      \"id\": 46300090,\n      \"comment_id\": 1810487483,\n      \"created_at\": \"2023-11-14T15:44:12Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1638\n    },\n    {\n      \"name\": \"DanikVitek\",\n      \"id\": 25585136,\n      \"comment_id\": 1811255169,\n      \"created_at\": \"2023-11-14T20:49:37Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1640\n    },\n    {\n      \"name\": \"Min2who\",\n      \"id\": 127925465,\n      \"comment_id\": 1813826899,\n      \"created_at\": \"2023-11-16T05:47:48Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1643\n    },\n    {\n      \"name\": \"LorisSigrist\",\n      \"id\": 43482866,\n      \"comment_id\": 1819259247,\n      \"created_at\": \"2023-11-20T15:15:25Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1659\n    },\n    {\n      \"name\": \"KraXen72\",\n      \"id\": 21956756,\n      \"comment_id\": 1825537784,\n      \"created_at\": \"2023-11-24T11:28:53Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1732\n    },\n    {\n      \"name\": \"AdamTmHun\",\n      \"id\": 61880960,\n      \"comment_id\": 1826420619,\n      \"created_at\": \"2023-11-25T21:10:20Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1745\n    },\n    {\n      \"name\": \"KTibow\",\n      \"id\": 10727862,\n      \"comment_id\": 1826423449,\n      \"created_at\": \"2023-11-25T21:27:57Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1746\n    },\n    {\n      \"name\": \"thetarnav\",\n      \"id\": 24491503,\n      \"comment_id\": 1833456333,\n      \"created_at\": \"2023-11-30T10:08:16Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1785\n    },\n    {\n      \"name\": \"TajAlasfiyaa\",\n      \"id\": 87016999,\n      \"comment_id\": 1856385866,\n      \"created_at\": \"2023-12-14T18:35:58Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1893\n    },\n    {\n      \"name\": \"tomas-correia\",\n 
     \"id\": 20492365,\n      \"comment_id\": 1862914722,\n      \"created_at\": \"2023-12-19T14:52:47Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1919\n    },\n    {\n      \"name\": \"Gernii\",\n      \"id\": 54741529,\n      \"comment_id\": 1863028528,\n      \"created_at\": \"2023-12-19T15:55:04Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1921\n    },\n    {\n      \"name\": \"mr-islam\",\n      \"id\": 17675428,\n      \"comment_id\": 1871469307,\n      \"created_at\": \"2023-12-28T20:26:39Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 1955\n    },\n    {\n      \"name\": \"jldec\",\n      \"id\": 849592,\n      \"comment_id\": 1894298346,\n      \"created_at\": \"2024-01-16T18:28:34Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2040\n    },\n    {\n      \"name\": \"oscard0m\",\n      \"id\": 2574275,\n      \"comment_id\": 1895458003,\n      \"created_at\": \"2024-01-17T09:52:08Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2047\n    },\n    {\n      \"name\": \"leonardoRocchini\",\n      \"id\": 62795461,\n      \"comment_id\": 1924359871,\n      \"created_at\": \"2024-02-02T17:29:39Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2169\n    },\n    {\n      \"name\": \"vytenisstaugaitis\",\n      \"id\": 30520456,\n      \"comment_id\": 1925687967,\n      \"created_at\": \"2024-02-04T10:33:03Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2172\n    },\n    {\n      \"name\": \"leonardsimonse\",\n      \"id\": 94551625,\n      \"comment_id\": 1934074917,\n      \"created_at\": \"2024-02-08T12:59:12Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2202\n    },\n    {\n      \"name\": \"mquandalle\",\n      \"id\": 1730702,\n      \"comment_id\": 1987999010,\n      \"created_at\": \"2024-03-11T09:45:21Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2354\n    },\n    {\n      \"name\": \"s24407-pj\",\n      \"id\": 92219340,\n      
\"comment_id\": 1996876750,\n      \"created_at\": \"2024-03-14T08:40:59Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2382\n    },\n    {\n      \"name\": \"kevinccbsg\",\n      \"id\": 12685053,\n      \"comment_id\": 2018688746,\n      \"created_at\": \"2024-03-25T18:55:10Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2457\n    },\n    {\n      \"name\": \"revosw\",\n      \"id\": 19785016,\n      \"comment_id\": 2034031770,\n      \"created_at\": \"2024-04-03T09:26:50Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2501\n    },\n    {\n      \"name\": \"TheOnlyTails\",\n      \"id\": 65342367,\n      \"comment_id\": 2045143036,\n      \"created_at\": \"2024-04-09T13:08:09Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2539\n    },\n    {\n      \"name\": \"park-jemin\",\n      \"id\": 59681283,\n      \"comment_id\": 2079694848,\n      \"created_at\": \"2024-04-26T16:15:12Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2668\n    },\n    {\n      \"name\": \"NurbekGithub\",\n      \"id\": 24915724,\n      \"comment_id\": 2091100922,\n      \"created_at\": \"2024-05-02T17:16:00Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2694\n    },\n    {\n      \"name\": \"muhammedaksam\",\n      \"id\": 27314049,\n      \"comment_id\": 2106263594,\n      \"created_at\": \"2024-05-12T14:21:20Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2752\n    },\n    {\n      \"name\": \"nirtamir2\",\n      \"id\": 16452789,\n      \"comment_id\": 2106333137,\n      \"created_at\": \"2024-05-12T18:10:07Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2753\n    },\n    {\n      \"name\": \"altruity\",\n      \"id\": 937917,\n      \"comment_id\": 2111954202,\n      \"created_at\": \"2024-05-15T09:01:15Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2782\n    },\n    {\n      \"name\": \"LukasHechenberger\",\n      \"id\": 5802656,\n      \"comment_id\": 2127453209,\n     
 \"created_at\": \"2024-05-23T15:41:43Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2812\n    },\n    {\n      \"name\": \"Amerlander\",\n      \"id\": 3764089,\n      \"comment_id\": 2142378616,\n      \"created_at\": \"2024-05-31T14:32:46Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 2861\n    },\n    {\n      \"name\": \"MrTwixxy\",\n      \"id\": 64733980,\n      \"comment_id\": 2247457208,\n      \"created_at\": \"2024-07-24T10:02:38Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3025\n    },\n    {\n      \"name\": \"jonathanschoonbroodt\",\n      \"id\": 33702771,\n      \"comment_id\": 2263078256,\n      \"created_at\": \"2024-08-01T13:39:18Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3038\n    },\n    {\n      \"name\": \"azezsan\",\n      \"id\": 79533966,\n      \"comment_id\": 2272796191,\n      \"created_at\": \"2024-08-07T07:22:31Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3047\n    },\n    {\n      \"name\": \"AlanBreck\",\n      \"id\": 1199820,\n      \"comment_id\": 2276935342,\n      \"created_at\": \"2024-08-09T00:21:50Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3053\n    },\n    {\n      \"name\": \"Unsleeping\",\n      \"id\": 45426001,\n      \"comment_id\": 2294948180,\n      \"created_at\": \"2024-08-17T19:14:59Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3021\n    },\n    {\n      \"name\": \"emma-sg\",\n      \"id\": 5727389,\n      \"comment_id\": 2372439059,\n      \"created_at\": \"2024-09-24T21:40:59Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3149\n    },\n    {\n      \"name\": \"axel-rock\",\n      \"id\": 3433205,\n      \"comment_id\": 2396828867,\n      \"created_at\": \"2024-10-07T12:44:45Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3155\n    },\n    {\n      \"name\": \"benmccann\",\n      \"id\": 322311,\n      \"comment_id\": 2407495621,\n      \"created_at\": \"2024-10-11T14:08:21Z\",\n 
     \"repoId\": 394757291,\n      \"pullRequestNo\": 3159\n    },\n    {\n      \"name\": \"Venmit\",\n      \"id\": 185773680,\n      \"comment_id\": 2426988669,\n      \"created_at\": \"2024-10-21T15:18:34Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3183\n    },\n    {\n      \"name\": \"alikia2x\",\n      \"id\": 87868889,\n      \"comment_id\": 2438671259,\n      \"created_at\": \"2024-10-25T19:45:33Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3187\n    },\n    {\n      \"name\": \"tconroy\",\n      \"id\": 1609336,\n      \"comment_id\": 2439717856,\n      \"created_at\": \"2024-10-26T19:50:43Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3188\n    },\n    {\n      \"name\": \"pikpok\",\n      \"id\": 1003568,\n      \"comment_id\": 2443984192,\n      \"created_at\": \"2024-10-29T11:44:22Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3194\n    },\n    {\n      \"name\": \"gerardmarquinarubio\",\n      \"id\": 106877422,\n      \"comment_id\": 2453587495,\n      \"created_at\": \"2024-11-03T21:45:21Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3199\n    },\n    {\n      \"name\": \"SrGeneroso\",\n      \"id\": 5541794,\n      \"comment_id\": 2466403868,\n      \"created_at\": \"2024-11-09T18:30:55Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3204\n    },\n    {\n      \"name\": \"SrGeneroso\",\n      \"id\": 5541794,\n      \"comment_id\": 2466404296,\n      \"created_at\": \"2024-11-09T18:32:21Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3204\n    },\n    {\n      \"name\": \"hyp3rflow\",\n      \"id\": 49385012,\n      \"comment_id\": 2467805886,\n      \"created_at\": \"2024-11-11T10:29:29Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3205\n    },\n    {\n      \"name\": \"half2me\",\n      \"id\": 6759894,\n      \"comment_id\": 2476088199,\n      \"created_at\": \"2024-11-14T11:20:14Z\",\n      \"repoId\": 394757291,\n      
\"pullRequestNo\": 3210\n    },\n    {\n      \"name\": \"TazorDE\",\n      \"id\": 30119708,\n      \"comment_id\": 2485839915,\n      \"created_at\": \"2024-11-19T14:15:54Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3224\n    },\n    {\n      \"name\": \"jacoblukewood\",\n      \"id\": 1590014,\n      \"comment_id\": 2509661561,\n      \"created_at\": \"2024-12-01T09:48:18Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3243\n    },\n    {\n      \"name\": \"IhsenBouallegue\",\n      \"id\": 48621967,\n      \"comment_id\": 2515181762,\n      \"created_at\": \"2024-12-03T17:32:04Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3248\n    },\n    {\n      \"name\": \"onyedikachi-david\",\n      \"id\": 51977119,\n      \"comment_id\": 2534334589,\n      \"created_at\": \"2024-12-11T07:49:27Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3260\n    },\n    {\n      \"name\": \"tbjers\",\n      \"id\": 1117052,\n      \"comment_id\": 2563886502,\n      \"created_at\": \"2024-12-27T17:16:04Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3306\n    },\n    {\n      \"name\": \"aboqasem\",\n      \"id\": 62098043,\n      \"comment_id\": 2585142579,\n      \"created_at\": \"2025-01-11T08:15:37Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3339\n    },\n    {\n      \"name\": \"Secreto31126\",\n      \"id\": 46955459,\n      \"comment_id\": 2585573586,\n      \"created_at\": \"2025-01-12T03:51:15Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3340\n    },\n    {\n      \"name\": \"dmsynge\",\n      \"id\": 19330240,\n      \"comment_id\": 2603651874,\n      \"created_at\": \"2025-01-21T04:51:57Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3363\n    },\n    {\n      \"name\": \"ampcpmgp\",\n      \"id\": 13173632,\n      \"comment_id\": 2606229755,\n      \"created_at\": \"2025-01-22T03:51:55Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3362\n    },\n    
{\n      \"name\": \"pzerelles\",\n      \"id\": 66033561,\n      \"comment_id\": 2608198066,\n      \"created_at\": \"2025-01-22T20:27:11Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3365\n    },\n    {\n      \"name\": \"oskar-gmerek\",\n      \"id\": 53402105,\n      \"comment_id\": 2614005746,\n      \"created_at\": \"2025-01-25T15:38:28Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3373\n    },\n    {\n      \"name\": \"dallyh\",\n      \"id\": 6968534,\n      \"comment_id\": 2614578709,\n      \"created_at\": \"2025-01-26T20:26:12Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3374\n    },\n    {\n      \"name\": \"shivan-s\",\n      \"id\": 51132467,\n      \"comment_id\": 2645818987,\n      \"created_at\": \"2025-02-08T16:23:21Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3382\n    },\n    {\n      \"name\": \"Carlos-err406\",\n      \"id\": 81443707,\n      \"comment_id\": 2646615258,\n      \"created_at\": \"2025-02-09T21:47:31Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3387\n    },\n    {\n      \"name\": \"filips-alpe\",\n      \"id\": 2479702,\n      \"comment_id\": 2649080317,\n      \"created_at\": \"2025-02-10T19:48:59Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3386\n    },\n    {\n      \"name\": \"huynhducduy\",\n      \"id\": 12293622,\n      \"comment_id\": 2657104436,\n      \"created_at\": \"2025-02-13T16:17:58Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3396\n    },\n    {\n      \"name\": \"miikakokkonen\",\n      \"id\": 14804847,\n      \"comment_id\": 2659389385,\n      \"created_at\": \"2025-02-14T13:50:39Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3399\n    },\n    {\n      \"name\": \"juliomuhlbauer\",\n      \"id\": 53458125,\n      \"comment_id\": 2679937770,\n      \"created_at\": \"2025-02-24T23:34:03Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3430\n    },\n    {\n      \"name\": 
\"dvdzara\",\n      \"id\": 116791973,\n      \"comment_id\": 2689095808,\n      \"created_at\": \"2025-02-27T20:57:28Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3452\n    },\n    {\n      \"name\": \"axekan\",\n      \"id\": 50769262,\n      \"comment_id\": 2727306184,\n      \"created_at\": \"2025-03-16T09:56:11Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3507\n    },\n    {\n      \"name\": \"sialex-net\",\n      \"id\": 91857463,\n      \"comment_id\": 2729038711,\n      \"created_at\": \"2025-03-17T10:46:24Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3512\n    },\n    {\n      \"name\": \"tecoad\",\n      \"id\": 2627749,\n      \"comment_id\": 2731020924,\n      \"created_at\": \"2025-03-17T21:54:14Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3513\n    },\n    {\n      \"name\": \"nukosuke\",\n      \"id\": 17716649,\n      \"comment_id\": 2739073011,\n      \"created_at\": \"2025-03-20T04:04:08Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3521\n    },\n    {\n      \"name\": \"aloker\",\n      \"id\": 140714,\n      \"comment_id\": 2741744604,\n      \"created_at\": \"2025-03-20T21:47:43Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3525\n    },\n    {\n      \"name\": \"MathiasWP\",\n      \"id\": 48158184,\n      \"comment_id\": 2742659697,\n      \"created_at\": \"2025-03-21T08:25:21Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3522\n    },\n    {\n      \"name\": \"seriousm4x\",\n      \"id\": 23456686,\n      \"comment_id\": 2748770077,\n      \"created_at\": \"2025-03-24T16:44:59Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3532\n    },\n    {\n      \"name\": \"ooopus\",\n      \"id\": 107778929,\n      \"comment_id\": 2751384467,\n      \"created_at\": \"2025-03-25T14:06:30Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3534\n    },\n    {\n      \"name\": \"adrian-budau\",\n      \"id\": 1350273,\n      
\"comment_id\": 2755095949,\n      \"created_at\": \"2025-03-26T16:54:44Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3538\n    },\n    {\n      \"name\": \"vbatoufflet\",\n      \"id\": 598433,\n      \"comment_id\": 2761156262,\n      \"created_at\": \"2025-03-28T11:57:32Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3543\n    },\n    {\n      \"name\": \"yverek\",\n      \"id\": 6050728,\n      \"comment_id\": 2776472258,\n      \"created_at\": \"2025-04-03T17:24:03Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3553\n    },\n    {\n      \"name\": \"fetsorn\",\n      \"id\": 12858105,\n      \"comment_id\": 2838020902,\n      \"created_at\": \"2025-04-29T09:04:52Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3574\n    },\n    {\n      \"name\": \"derian-cordoba\",\n      \"id\": 74283575,\n      \"comment_id\": 2848240171,\n      \"created_at\": \"2025-05-02T22:53:16Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3579\n    },\n    {\n      \"name\": \"jezikk\",\n      \"id\": 7671531,\n      \"comment_id\": 2859345652,\n      \"created_at\": \"2025-05-07T16:58:16Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3582\n    },\n    {\n      \"name\": \"philippviereck\",\n      \"id\": 105976309,\n      \"comment_id\": 2859794456,\n      \"created_at\": \"2025-05-07T18:24:55Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3583\n    },\n    {\n      \"name\": \"alexbehl\",\n      \"id\": 38441444,\n      \"comment_id\": 2909851461,\n      \"created_at\": \"2025-05-26T13:58:15Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3588\n    },\n    {\n      \"name\": \"Le0Developer\",\n      \"id\": 40232557,\n      \"comment_id\": 2912434681,\n      \"created_at\": \"2025-05-27T13:02:13Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3589\n    },\n    {\n      \"name\": \"HokkaidoInu\",\n      \"id\": 78092452,\n      \"comment_id\": 2920317405,\n      
\"created_at\": \"2025-05-29T19:04:05Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3590\n    },\n    {\n      \"name\": \"akkie\",\n      \"id\": 307006,\n      \"comment_id\": 2925713735,\n      \"created_at\": \"2025-05-31T20:49:00Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3593\n    },\n    {\n      \"name\": \"shivan-eyespace\",\n      \"id\": 129010893,\n      \"comment_id\": 2956763825,\n      \"created_at\": \"2025-06-09T19:28:18Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3598\n    },\n    {\n      \"name\": \"PlusA2M\",\n      \"id\": 18495330,\n      \"comment_id\": 3094609678,\n      \"created_at\": \"2025-07-20T15:41:48Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3654\n    },\n    {\n      \"name\": \"GauBen\",\n      \"id\": 48261497,\n      \"comment_id\": 3191439665,\n      \"created_at\": \"2025-08-15T12:56:01Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3674\n    },\n    {\n      \"name\": \"uiolee\",\n      \"id\": 22849383,\n      \"comment_id\": 3209343168,\n      \"created_at\": \"2025-08-21T07:27:25Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3677\n    },\n    {\n      \"name\": \"selimhex\",\n      \"id\": 42006922,\n      \"comment_id\": 3218307152,\n      \"created_at\": \"2025-08-24T19:01:39Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3681\n    },\n    {\n      \"name\": \"MaJoel01\",\n      \"id\": 64578696,\n      \"comment_id\": 3325360213,\n      \"created_at\": \"2025-09-23T20:01:39Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3703\n    },\n    {\n      \"name\": \"cocoliliace\",\n      \"id\": 38874004,\n      \"comment_id\": 3394310433,\n      \"created_at\": \"2025-10-12T12:30:17Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3716\n    },\n    {\n      \"name\": \"stonith404\",\n      \"id\": 58886915,\n      \"comment_id\": 3455668671,\n      \"created_at\": \"2025-10-28T10:16:50Z\",\n      
\"repoId\": 394757291,\n      \"pullRequestNo\": 3721\n    },\n    {\n      \"name\": \"mehmetozguldev\",\n      \"id\": 91568457,\n      \"comment_id\": 3560530255,\n      \"created_at\": \"2025-11-20T23:06:06Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3755\n    },\n    {\n      \"name\": \"sallustfire\",\n      \"id\": 565618,\n      \"comment_id\": 3609533828,\n      \"created_at\": \"2025-12-04T01:29:04Z\",\n      \"repoId\": 394757291,\n      \"pullRequestNo\": 3782\n    }\n  ]\n}"
  },
  {
    "path": "docs/api-reference.md",
"content": "---\ndescription: \"Reference for the @lix-js/sdk public API: openLix, execute, version and merge methods, result shapes, and the built-in SQL tables and functions.\"\n---\n\n# API Reference\n\n## `openLix(options?)`\n\n```ts\nfunction openLix(options?: { backend?: LixBackend }): Promise<Lix>;\n```\n\nOpen a Lix instance. With no `backend`, returns an in-memory Lix. See [Persistence](./persistence.md).\n\nReturns a `Lix` with the following methods.\n\n## `Lix`\n\n### `execute(sql, params?)`\n\n```ts\nlix.execute(sql: string, params?: LixRuntimeValue[]): Promise<ExecuteResult>;\n```\n\nRun one DataFusion SQL statement. Use numbered placeholders (`$1`, `$2`); bare `?` is rejected. Use `lix_json($1)` when binding a JSON-typed parameter.\n\n```ts\ntype ExecuteResult = {\n  columns: string[];\n  rows: Row[];\n  rowsAffected: number;\n  notices: { code: string; message: string; hint?: string }[];\n};\n```\n\n`SELECT` populates `columns` and `rows`. `INSERT` / `UPDATE` / `DELETE` set `rowsAffected` and usually return `rows: []`.\n\n### `Row`\n\n```ts\nclass Row {\n  columns: string[];\n  value(name): Value;            // typed accessor\n  tryValue(name): Value | undefined;\n  valueAt(index): Value;\n  get(name): LixNativeValue;     // plain JS\n  tryGet(name): LixNativeValue | undefined;\n  getAt(index): LixNativeValue;\n  toObject(): Record<string, LixNativeValue>;\n  toValueMap(): Record<string, Value>;\n}\n```\n\nUse `value(name)` for a `Value` with typed accessors:\n\n| Method | Returns | For |\n| --- | --- | --- |\n| `asText()` | `string \| undefined` | text columns |\n| `asBoolean()` | `boolean \| undefined` | booleans |\n| `asInteger()` | `number \| undefined` | integers |\n| `asReal()` | `number \| undefined` | decimals |\n| `asJson()` | `JsonValue \| undefined` | JSON / objects / arrays |\n| `asBlob()` | `Uint8Array \| undefined` | bytes |\n\nAccessors return `undefined` when the cell kind doesn't match. 
Branch on `value.kind` (`\"null\" | \"boolean\" | \"integer\" | \"real\" | \"text\" | \"json\" | \"blob\"`) for polymorphic columns.\n\n`row.toObject()` is the convenience shortcut to a plain JS object.\n\n### `activeVersionId()`\n\n```ts\nlix.activeVersionId(): Promise<string>;\n```\n\nReturns the id of the currently active version. Capture this on startup instead of hard-coding `\"main\"`.\n\n### `createVersion(options)`\n\n```ts\nlix.createVersion(options: {\n  name: string;\n  id?: string;\n  fromCommitId?: string;\n}): Promise<{ id: string; name: string; hidden: boolean }>;\n```\n\nCreate a new version. Pass `fromCommitId` to fork from a specific commit; otherwise it forks from the active version's head.\n\n### `switchVersion(options)`\n\n```ts\nlix.switchVersion(options: { versionId: string }): Promise<SwitchVersionResult>;\n```\n\nMake the given version the active one for this Lix instance. Subsequent SQL goes against it.\n\n### `mergeVersionPreview(options)`\n\n```ts\nlix.mergeVersionPreview(options: { sourceVersionId: string }):\n  Promise<{\n    outcome: \"alreadyUpToDate\" | \"fastForward\" | \"mergeCommitted\";\n    targetVersionId: string;\n    sourceVersionId: string;\n    baseCommitId: string;\n    targetHeadCommitId: string;\n    sourceHeadCommitId: string;\n    changeStats: { total: number; added: number; modified: number; removed: number };\n    conflicts: MergeConflict[];\n  }>;\n```\n\nReports the same merge decision as `mergeVersion()` without touching state. Returns row-level `conflicts`. 
Always merges into the active version; switch first if you want a different target.\n\n### `mergeVersion(options)`\n\n```ts\nlix.mergeVersion(options: { sourceVersionId: string }):\n  Promise<{\n    outcome: \"alreadyUpToDate\" | \"fastForward\" | \"mergeCommitted\";\n    targetVersionId: string;\n    sourceVersionId: string;\n    baseCommitId: string;\n    createdMergeCommitId: string | null;\n    changeStats: { total: number; added: number; modified: number; removed: number };\n  }>;\n```\n\nThrows a `LixError` on conflicts. Wrap in `try/catch` whenever conflicts are possible.\n\n### `close()`\n\n```ts\nlix.close(): Promise<void>;\n```\n\nAlways close in scripts and tests.\n\n## Built-in tables\n\n| Table | Purpose |\n| --- | --- |\n| `lix_registered_schema` | App schemas (and built-ins). Insert into `value` to register. See [Schemas](./schemas.md). |\n| `lix_change` | Immutable global change journal. Columns: `id`, `entity_id`, `schema_key`, `schema_version`, `file_id`, `metadata`, `snapshot_content`, `created_at`. No version filter; `lix_change` is global. |\n| `lix_state` / `lix_state_by_version` / `lix_state_history` | Schema-agnostic JSON state. Active version, cross-version, and time-travel respectively. See [SQL Surfaces](./surfaces.md). |\n| `lix_version` | Writable version surface: `id`, `name`, `hidden`, `commit_id`. |\n| `lix_file` / `lix_file_by_version` / `lix_file_history` | Versioned files (with `data` bytes), cross-version reads/writes, and history. |\n| `lix_directory` / `lix_directory_by_version` / `lix_directory_history` | Directory tree, cross-version, and history. |\n\nEvery registered schema `X` produces three typed surfaces:\n\n- `X`: the active-version view, used for plain `INSERT`/`SELECT`/`UPDATE`/`DELETE`.\n- `X_by_version`: cross-version view with `lixcol_version_id`. 
See [Versions & Merging](./versions.md).\n- `X_history`: typed time-travel through this schema's history with `lixcol_start_commit_id`, `lixcol_depth`, `lixcol_observed_commit_id`.\n\nFor the full grid of state / per-entity / file / directory surfaces and how they compose, see [SQL Surfaces](./surfaces.md).\n\n## Built-in SQL functions\n\n| Function | What it does |\n| --- | --- |\n| `lix_active_version_commit_id()` | Commit id at the active version's tip. Use to scope `_history` queries (the planner rejects subqueries on `start_commit_id`). |\n| `lix_json(text)` | Parse JSON text into a JSON-typed value. Use when binding JSON parameters. |\n| `lix_json_get(json, path...)` | Project a JSON-typed value out of a JSON column. |\n| `lix_json_get_text(json, path...)` | Project a value out of a JSON column as text. |\n| `lix_uuid_v7()` | Generate a UUIDv7 string. |\n| `lix_timestamp()` | Current ISO-8601 timestamp string. |\n| `lix_text_decode(blob[, encoding])` | Decode a `BLOB` to text (default `utf-8`). |\n| `lix_text_encode(text[, encoding])` | Encode text to a `BLOB`. |\n| `lix_empty_blob()` | Zero-byte `BLOB` literal. |\n\nSee [SQL Functions](./sql-functions.md) for examples and signatures.\n\n## Errors\n\n`mergeVersion()` and write paths throw `LixError`. `notices` on `ExecuteResult` carry non-fatal codes with `code`, `message`, and an optional `hint`.\n\n## SQL dialect\n\nLix runs on a DataFusion-backed engine. SQL is mostly Postgres-compatible. SQLite-specific catalog tables (`sqlite_master`, etc.) are not available; use `lix_registered_schema` and `lix_version` instead.\n"
  },
  {
    "path": "docs/backend.md",
    "content": "---\ndescription: Lix's storage is pluggable. Implement the LixBackend interface (a synchronous, transactional, namespaced key-value store) and Lix runs on top of it.\n---\n\n# Backends\n\nLix's engine is independent of where the bytes live. Storage is exposed through a single interface, `LixBackend`, that any transactional key-value store can implement. Open a Lix with a different backend and the rest of the API (`openLix`, `execute`, `createVersion`, `mergeVersion`, …) is unchanged.\n\n## What ships today\n\n| Backend                        | Module                          | Use for                              |\n| ------------------------------ | ------------------------------- | ------------------------------------ |\n| In-memory                      | default (no `backend` argument) | tests, demos, ephemeral work         |\n| SQLite file (`better-sqlite3`) | `@lix-js/sdk/sqlite`            | persistent, single-process Node apps |\n\n```ts\nimport { openLix } from \"@lix-js/sdk\";\nimport { createBetterSqlite3Backend } from \"@lix-js/sdk/sqlite\";\n\nconst lix = await openLix({\n  backend: createBetterSqlite3Backend({ path: \"/var/data/app.lix\" }),\n});\n```\n\nAnything beyond these two is not shipped by the Lix team. Implement the `LixBackend` interface yourself and pass it to `openLix({ backend })`. This page is the contract.\n\n## Sync today, async on the roadmap\n\n> The current `LixBackend` contract is **synchronous**. All methods return values directly, not promises.\n\nThe JS SDK runs the engine inside WebAssembly and calls backend methods through synchronous wasm imports. That makes synchronous JS bindings the natural fit (`better-sqlite3` is sync; an in-memory `Map` is sync; native sync KV bindings work). 
Async-only Node libraries (`pg`, the AWS S3 SDK, IndexedDB, Cloudflare Durable Objects' storage) cannot drive the contract directly today.\n\nPractical paths today:\n\n- **Synchronous bindings.** `better-sqlite3`, in-memory data structures, sync OPFS access (`createSyncAccessHandle`), Neon-binding RocksDB, `node:sqlite` in newer Node versions.\n- **Sync-over-async bridges.** Worker threads with `Atomics.wait`, `deasync`, or similar approaches. These add operational complexity and are best avoided for production workloads.\n\nAn async backend variant (where methods return `Promise<T>`) is on the roadmap so Postgres, IndexedDB, S3, and Durable Objects become first-class. Until then, treat the substrate list below as guidance for what *will* fit, not what's possible from the JS SDK today.\n\n## The full TypeScript contract\n\nThese are the actual exported types from `@lix-js/sdk`:\n\n```ts\ntype LixBackend = {\n  beginReadTransaction(): LixBackendReadTransaction;\n  beginWriteTransaction(): LixBackendWriteTransaction;\n  close?(): void;\n};\n\ntype LixBackendReadTransaction = {\n  getValues(request: BackendKvGetRequest): BackendKvValueBatch;\n  existsMany(request: BackendKvGetRequest): BackendKvExistsBatch;\n  scanKeys(request: BackendKvScanRequest): BackendKvKeyPage;\n  scanValues(request: BackendKvScanRequest): BackendKvValuePage;\n  scanEntries(request: BackendKvScanRequest): BackendKvEntryPage;\n  rollback(): void;\n};\n\ntype LixBackendWriteTransaction = LixBackendReadTransaction & {\n  writeKvBatch(batch: BackendKvWriteBatch): BackendKvWriteStats;\n  commit(): void;\n};\n\n// ── Scan ranges ────────────────────────────────────────────────────────────\n\ntype BackendKvScanRange =\n  | { kind: \"prefix\"; prefix: Uint8Array }\n  | { kind: \"range\"; start: Uint8Array; end: Uint8Array };\n\n// ── Get / exists ───────────────────────────────────────────────────────────\n\ntype BackendKvGetRequest = {\n  groups: BackendKvGetGroup[];\n};\n\ntype BackendKvGetGroup = 
{\n  namespace: string;\n  keys: Uint8Array[];\n};\n\ntype BackendKvValueBatch = {\n  groups: BackendKvValueGroup[];\n};\n\ntype BackendKvValueGroup = {\n  namespace: string;\n  values: Array<Uint8Array | null>; // null = key not present\n};\n\ntype BackendKvExistsBatch = {\n  groups: BackendKvExistsGroup[];\n};\n\ntype BackendKvExistsGroup = {\n  namespace: string;\n  exists: boolean[];\n};\n\n// ── Scan ───────────────────────────────────────────────────────────────────\n\ntype BackendKvScanRequest = {\n  namespace: string;\n  range: BackendKvScanRange;\n  after?: Uint8Array | null; // exclusive cursor; returns keys strictly greater\n  limit: number;\n};\n\ntype BackendKvKeyPage = {\n  keys: Uint8Array[];\n  resumeAfter?: Uint8Array | null;\n};\n\ntype BackendKvValuePage = {\n  values: Uint8Array[];\n  resumeAfter?: Uint8Array | null;\n};\n\ntype BackendKvEntryPage = {\n  keys: Uint8Array[];\n  values: Uint8Array[];\n  resumeAfter?: Uint8Array | null;\n};\n\n// ── Write ──────────────────────────────────────────────────────────────────\n\ntype BackendKvWriteBatch = {\n  groups: BackendKvWriteGroup[];\n};\n\ntype BackendKvWriteGroup = {\n  namespace: string;\n  puts: BackendKvPut[];\n  deletes: Uint8Array[];\n};\n\ntype BackendKvPut = {\n  key: Uint8Array;\n  value: Uint8Array;\n};\n\ntype BackendKvWriteStats = {\n  puts: number;\n  deletes: number;\n  bytesWritten: number;\n};\n```\n\n### Operations\n\n| Method                                    | Purpose                                                                                                                                                                                                                                                              |\n| ----------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `getValues`                               | Batch fetch values by exact key, grouped by namespace. Missing keys come back as `null` in the same position.                                                                                                                                                        |\n| `existsMany`                              | Same request shape as `getValues`, returns booleans. Used when Lix only needs to know whether a key is present.                                                                                                                                                      |\n| `scanKeys` / `scanValues` / `scanEntries` | Range or prefix scan within one namespace, with `limit` and a resumable `after` cursor.                                                                                                                                                                              |\n| `writeKvBatch`                            | Atomic batch of `puts` and `deletes`, grouped by namespace. Either all of it lands or none of it does. Within a single batch, Lix does not put + delete the same key; the engine never produces such a batch.                                                       |\n| `commit` / `rollback`                     | Transaction control. After either, the transaction object is finished; do not call further methods on it.                                                                                                                                                            |\n| `close()` / `destroy()` (on the backend)  | Lifecycle. `close()` releases handles without affecting durability. 
`destroy()` (optional, not in the type signature above) removes the entire storage target: file plus WAL/SHM, the OPFS target, the schema, the bucket. Backends that don't own their storage target omit it. |\n\n### Scan semantics\n\n- **Order.** Keys come back in ascending lexicographic order on bytes.\n- **Range.** Half-open: `start <= key < end`.\n- **Prefix.** Equivalent to `range = { start: prefix, end: incrementLastByteWithCarry(prefix) }`.\n- **Cursor.** `after` is **exclusive**: the next page returns keys strictly greater than `after`. `resumeAfter` is the last returned key; pass it back as `after` for the next page. `null` `resumeAfter` means no more pages.\n\n### Namespaces\n\nEvery batch operation is grouped by `namespace: string`. Treat namespaces as logical tables; implementations typically map them to separate column families, prefixes, tables, or buckets. The engine creates namespaces lazily as it writes; backends that require upfront declaration (IndexedDB) need a known namespace list (see below).\n\n## Required guarantees\n\n1. **Atomic write batches.** `writeKvBatch` either applies all puts/deletes across all namespaces, or none of them. A partial failure must roll back the batch.\n2. **Read isolation within a transaction.** A read transaction sees a consistent snapshot for its lifetime; concurrent commits do not bleed in.\n3. **Read-your-writes within a write transaction.** Reads after a put in the same write transaction see the new value; reads after a delete see `null`.\n4. **Durable commits.** When `commit()` returns on a write transaction, the changes survive process restart (for persistent backends).\n5. **Byte-ordered scans.** Keys come back in ascending lexicographic order of bytes. Stable pagination: the same `after` cursor returns the same next page if no writes happened in between.\n\n## Concurrency model\n\n- **One write transaction at a time.** The engine serializes write transactions itself; you don't need to queue them. 
A backend may still want a process-wide lock for safety.\n- **Read transactions are concurrent with writes.** Multiple read transactions can be open while a write transaction is in flight. Reads must see the snapshot from when they were opened, not the in-progress write.\n- **Transactions are short.** The engine doesn't hold transactions across user awaits; treat `beginReadTransaction()` → operations → `commit()`/`rollback()` as a tight sequence.\n\n## Implementation notes by storage type\n\nThe contract is small enough that **any transactional KV store with a synchronous binding can host Lix today**. The substrates below are good fits in principle; ones marked async-only require either a sync-over-async bridge or the upcoming async backend variant.\n\n**Synchronous, ready today.** `better-sqlite3` (shipping), `node:sqlite` (Node 22+, sync), in-memory `Map`, OPFS via `createSyncAccessHandle` (web workers only), Neon/NAPI bindings to RocksDB or LMDB that expose sync APIs.\n\n**Relational (Postgres, MySQL, SQLite elsewhere).** *async-only Node bindings*. One table per namespace, or a shared `(namespace, key)` PK table. Wrap each Lix transaction in a SQL transaction. Use repeatable-read isolation for reads, serializable or `SELECT ... FOR UPDATE` for writes. Postgres `bytea` matches Lix's byte-ordered scan requirement.\n\n**Object storage (S3, R2, GCS).** *async-only*, and not natively transactional. Coordinate writes via a manifest object plus conditional PUT (`If-Match`). For atomic multi-key batches: stage chunks → upload → swap the manifest pointer in one CAS.\n\n**Cloudflare.** *async-only*. D1 fits the relational pattern. Durable Objects give you a single-writer mailbox per object, a natural fit for a per-tenant Lix. Cloudflare KV is eventually consistent without transactions; not enough on its own.\n\n**Browser.** *async-only* for IndexedDB, *sync if used in a worker* for OPFS. 
IndexedDB needs object stores declared at `onupgradeneeded`, so the namespace set must be known up front. IndexedDB transactions auto-commit once control returns to the event loop, so buffered-write strategies are the only safe path.\n\n**Embedded KV (RocksDB, LMDB, sled).** *fit varies by binding*. The closest-shaped substrates: map namespaces to column families or key prefixes. Native ranged iterators map directly to `scanKeys`. Sync via Neon binding or N-API works today; async-only bindings will need the future async backend.\n\n**Distributed KV (DynamoDB, FoundationDB, TiKV).** *async-only* in JS. Native transactional semantics. Redis with `MULTI`/`EXEC` is workable for single-instance setups, but its weak isolation makes multi-writer risky.\n\n## Testing your backend\n\nA conformance test suite is the right way to validate an implementation:\n\n- Round-trip puts and gets within and across namespaces.\n- **Atomicity.** A batch with one rejected write leaves everything unchanged.\n- **Isolation.** A read transaction opened before a write commits does not see the writer's changes.\n- **Read-your-writes.** A write transaction reads the values it just wrote (and not values from concurrent writers).\n- **Scan ordering.** Keys come back byte-lex; the same `after` cursor yields the same next page absent writes.\n- **Durability.** Close and reopen; committed data is still there.\n\nRun the same suite against the in-memory and `better-sqlite3` backends as a baseline.\n\n## Why this design\n\nThe engine that implements branches, merge, schemas, change journals, and SQL queries is one piece of code. The storage is another. Keeping the contract small (synchronous, namespaced, transactional KV) is what makes it tractable to put Lix on a SQLite file today and on Postgres, S3, or Durable Objects once the async variant lands, without forking the engine.\n\nThis is the same shape DuckDB takes with its readers: one engine, many places to read bytes from. Lix takes it for writes too.\n"
  },
  {
    "path": "docs/comparison-to-git.md",
    "content": "---\ndescription: Git versions text files line-by-line. Lix versions any file format (DOCX, XLSX, CAD, etc.) semantically per entity.\n---\n\n# Comparison to Git\n\n> **Git versions text files line-by-line. Lix versions any file format, semantically per entity.**\n\nUse Git for source code: text in a working tree, edited by developers, reviewed via pull requests. Use Lix when the artifacts you're versioning are anything else (DOCX, XLSX, CAD, PDF, structured app data) and the diff needs to be semantic to be useful.\n\n|                  | Git                | Lix                                   |\n| :--------------- | :----------------- | :------------------------------------ |\n| Where it runs    | Separate process   | In-process, as a library              |\n| What it versions | Text files         | Any file format, plus structured data |\n| Diff model       | Line-by-line text  | Per-entity semantic                   |\n| History          | `git log`          | `SELECT * FROM lix_change`            |\n| Driven by        | Developer at a CLI | Code: app, service, agent, CLI        |\n\nBoth can coexist: Git for source code, Lix for the files and data your product, service, or tool versions at runtime.\n\n## Snapshots vs changes\n\nGit stores snapshots and computes text diffs between them. That works for code, where lines are the unit of change. For spreadsheets, documents, CAD, and PDFs, the line-based diff doesn't surface meaningful changes, and those are exactly the files where end users want version control.\n\nLix stores changes as data, parsed into entities by format-specific plugins (XLSX → cells, DOCX → clauses, CAD → parts). The plugin API itself is on the [roadmap](https://github.com/opral/lix#roadmap); once it lands, plugins are written by the people who know each format. 
Product- and tool-level questions become direct queries:\n\n- Which cells / clauses / parts changed?\n- Who or what made this edit?\n- What would happen if we merged this version?\n\nThat's why Lix's history surface is a SQL table, not a `git log` parser. See [Change History](./history.md).\n\n## What this looks like\n\n### JSON\n\n**Before:**\n\n```json\n{ \"theme\": \"light\", \"notifications\": true, \"language\": \"en\" }\n```\n\n**After:**\n\n```json\n{ \"theme\": \"dark\", \"notifications\": true, \"language\": \"en\" }\n```\n\n**Git sees:**\n\n```diff\n-{ \"theme\": \"light\", \"notifications\": true, \"language\": \"en\" }\n+{ \"theme\": \"dark\", \"notifications\": true, \"language\": \"en\" }\n```\n\n**Lix sees:**\n\n```diff\nproperty theme:\n- light\n+ dark\n```\n\n### Excel\n\n**Before:**\n\n| order_id | product  | status  |\n| -------- | -------- | ------- |\n| 1001     | Widget A | shipped |\n| 1002     | Widget B | pending |\n\n**After:**\n\n| order_id | product  | status  |\n| -------- | -------- | ------- |\n| 1001     | Widget A | shipped |\n| 1002     | Widget B | shipped |\n\n**Git sees:**\n\n```diff\n-Binary files differ\n```\n\n**Lix sees:**\n\n```diff\norder_id 1002 status:\n- pending\n+ shipped\n```\n"
  },
  {
    "path": "docs/getting-started.md",
    "content": "---\ndescription: Install Lix, open an in-memory repository, register a schema, write rows, and inspect a change in under 30 lines of JavaScript.\n---\n\n# Getting Started\n\nThis walks through opening Lix, registering a schema, writing a row, isolating a change in a separate version, previewing the merge, and merging.\n\n## Install\n\n```bash\nnpm install @lix-js/sdk\n```\n\n`openLix()` with no arguments opens an in-memory Lix, enough for tests and demos. For persistent storage see [Persistence](./persistence.md).\n\n## Open Lix\n\n```ts\nimport { openLix } from \"@lix-js/sdk\";\n\nconst lix = await openLix();\n```\n\n## Register a schema\n\nLix stores application state as typed entities. Register a schema once, then read and write through the generated SQL table named after `x-lix-key`.\n\n```ts\nawait lix.execute(\n  \"INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))\",\n  [\n    JSON.stringify({\n      $schema: \"https://json-schema.org/draft/2020-12/schema\",\n      \"x-lix-key\": \"task\",\n      \"x-lix-version\": \"1\",\n      \"x-lix-primary-key\": [\"/id\"],\n      type: \"object\",\n      required: [\"id\", \"title\", \"done\"],\n      properties: {\n        id: { type: \"string\" },\n        title: { type: \"string\" },\n        done: { type: \"boolean\" },\n      },\n      additionalProperties: false,\n    }),\n  ],\n);\n```\n\n`lix_json($1)` parses the JSON text into the JSON-typed `value` column. 
Schema details (the `x-lix-*` fields, primary keys, uniqueness) are covered in [Schemas](./schemas.md).\n\n## Write and read state\n\n```ts\nawait lix.execute(\"INSERT INTO task (id, title, done) VALUES ($1, $2, $3)\", [\n  \"task-1\",\n  \"Review agent changes\",\n  false,\n]);\n\nconst result = await lix.execute(\n  \"SELECT id, title, done FROM task WHERE id = $1\",\n  [\"task-1\"],\n);\n\nconst row = result.rows[0]!;\nconsole.log(row.value(\"title\").asText(), row.value(\"done\").asBoolean());\n```\n\n`execute()` returns `{ columns, rows, rowsAffected, notices }`. Use `row.value(name).asText() | .asBoolean() | .asInteger() | .asJson()` for typed access, or `row.toObject()` for a plain JS object. See [API Reference](./api-reference.md).\n\n## Isolate a change in a version\n\nA version is an isolated line of state. Create one for the change, switch into it, and edit:\n\n```ts\nconst main = await lix.activeVersionId();\n\nconst draft = await lix.createVersion({ name: \"Agent draft\" });\nawait lix.switchVersion({ versionId: draft.id });\n\nawait lix.execute(\"UPDATE task SET done = $1 WHERE id = $2\", [true, \"task-1\"]);\n\nawait lix.switchVersion({ versionId: main });\n```\n\nThe active version is now `main` again, and `task-1` is still `done = false` here. The draft change is isolated until you merge.\n\n## Preview and merge\n\n```ts\nconst preview = await lix.mergeVersionPreview({ sourceVersionId: draft.id });\nconsole.log(preview.outcome, preview.changeStats);\n// fastForward { total: 1, added: 0, modified: 1, removed: 0 }\n\nif (preview.conflicts.length === 0) {\n  await lix.mergeVersion({ sourceVersionId: draft.id });\n}\n```\n\n`mergeVersionPreview()` reports the same merge decision as `mergeVersion()` without advancing refs. It returns the per-row conflict list when both sides changed the same entity. See [Versions & Merging](./versions.md).\n\n## The loop\n\n1. Open Lix.\n2. Register schemas for the entities you want to version.\n3. 
Write and read through generated tables.\n4. Create versions for isolated work.\n5. Preview, then merge or discard.\n6. Query [`lix_change`](./history.md) for audit and undo.\n"
  },
  {
    "path": "docs/history.md",
    "content": "---\ndescription: Lix journals every change. Query lix_change for global per-entity history, lix_state_history for what's reachable from a version, and <schema>_by_version for current per-version state.\n---\n\n# Change History\n\nLix gives you three SQL surfaces for history. Pick the one that matches the question you're asking. For the full grid of state, version, and history surfaces see [SQL Surfaces](./surfaces.md).\n\n| Surface | What you ask it |\n| --- | --- |\n| `lix_change` | \"What happened to this entity, ever?\" Global, immutable journal of every write across every schema and version. |\n| `lix_state_history` | \"What did this version see?\" State walked back from a commit, with `depth` for time-travel. |\n| `<schema>_by_version` | \"What's in this version right now?\" Current rows in each version. Documented in [Versions & Merging](./versions.md). |\n\nVersions don't filter `lix_change` directly; `lix_change` is the raw write log, and versions are pointers in the commit graph. To scope history to a version, use `lix_state_history` with the version's `commit_id`.\n\n## `lix_change` columns\n\n| Column             | What it is                                                                                              |\n| ------------------ | ------------------------------------------------------------------------------------------------------- |\n| `id`               | Unique change id.                                                                                       |\n| `entity_id`        | Primary key of the changed row. For composite keys, an encoded form (`pk:v1:<base64-json>`).            |\n| `schema_key`       | Which schema (`x-lix-key`).                                                                             |\n| `schema_version`   | Schema contract version at the time of the change.                                                      
|\n| `file_id`          | The file the change belongs to, or `null` for entity-only changes.                                      |\n| `metadata`         | JSON metadata attached to the change.                                                                   |\n| `snapshot_content` | JSON snapshot of the row after the change, or `null` for deletions (tombstones).                        |\n| `created_at`       | ISO timestamp.                                                                                          |\n\nRead JSON cells with `row.value(\"snapshot_content\").asJson()` or `row.get(\"snapshot_content\")`. Don't `JSON.parse` it as text, and handle `null` for tombstones.\n\n## `lix_state_history` columns\n\n| Column               | What it is                                                                              |\n| -------------------- | --------------------------------------------------------------------------------------- |\n| `entity_id`          | Primary key of the row.                                                                 |\n| `schema_key`         | Which schema.                                                                           |\n| `file_id`            | The file the row belongs to, or `null`.                                                 |\n| `snapshot_content`   | JSON snapshot at this depth.                                                            |\n| `metadata`           | JSON metadata.                                                                          |\n| `schema_version`     | Schema contract version.                                                                |\n| `change_id`          | The `lix_change.id` that produced this state.                                           |\n| `observed_commit_id` | The commit where this state was recorded.                                               |\n| `commit_created_at`  | When the commit was created.                                                            
|\n| `start_commit_id`    | The commit the walk started from (typically the version's tip, `lix_version.commit_id`). |\n| `depth`              | `0` = current state at `start_commit_id`. Higher values walk back through history.       |\n\n## Recipes\n\n### Per-entity history (across all versions)\n\n```sql\nSELECT created_at, snapshot_content\nFROM lix_change\nWHERE schema_key = $1 AND entity_id = $2\nORDER BY created_at;\n```\n\n### Latest activity for a schema\n\n```sql\nSELECT created_at, entity_id, snapshot_content\nFROM lix_change\nWHERE schema_key = $1\nORDER BY created_at DESC\nLIMIT 20;\n```\n\n### What's in this version right now\n\nUse the schema's `_by_version` surface (see [Versions & Merging](./versions.md)):\n\n```sql\nSELECT entity_id, snapshot_content\nFROM acme_section_by_version\nWHERE lixcol_version_id = $1;\n```\n\n### What did this version see, walked back through history\n\n```sql\nSELECT entity_id, schema_key, snapshot_content, depth, observed_commit_id\nFROM lix_state_history\nWHERE start_commit_id = lix_active_version_commit_id()\n  AND depth >= 0\nORDER BY depth, schema_key, entity_id;\n```\n\n`depth = 0` is the current state of that version. Higher depths walk back through earlier commits. 
Filter by `schema_key` or `entity_id` to narrow.\n\n### Diff one entity between two versions\n\n```sql\nSELECT v.id AS version_id, v.name, s.snapshot_content\nFROM acme_section_by_version s\nJOIN lix_version v ON v.id = s.lixcol_version_id\nWHERE s.id = $1\n  AND s.lixcol_version_id IN ($2, $3);\n```\n\nCompare the two `snapshot_content` JSON values field-by-field in your code to render a per-field diff.\n\n### Undo the last change to an entity\n\n```ts\nconst prev = await lix.execute(\n  `SELECT snapshot_content\n     FROM lix_change\n    WHERE schema_key = $1 AND entity_id = $2\n      AND snapshot_content IS NOT NULL\n    ORDER BY created_at DESC\n    LIMIT 1 OFFSET 1`,\n  [\"acme_section\", \"s1\"],\n);\n\nconst snapshot = prev.rows[0]?.value(\"snapshot_content\").asJson();\n// then UPDATE acme_section with the snapshot fields\n```\n\nThe `snapshot_content IS NOT NULL` filter skips tombstones (deletions).\n\n## Tombstones\n\nA deletion produces a `lix_change` row with `snapshot_content = null`. Branch on null when rendering or replaying history.\n"
  },
  {
    "path": "docs/lix-for-ai-agents.md",
    "content": "---\ndescription: Route agent writes through Lix to get isolated workspaces, previewable changes, and approve-or-discard review for every agent task.\n---\n\n# Lix for AI Agents\n\nAgent review is one of Lix's headline use cases, but the same primitives ([Versions](./versions.md), [Change History](./history.md)) power any product where end users review proposed changes. If you're building knowledge-work tools, the patterns here apply to humans drafting changes too.\n\nAgents make fast, useful, and sometimes wrong changes. Lix gives each agent task its own isolated version of state so a human or a policy can review it before it lands.\n\n## The pattern\n\n1. Create a version for the agent task.\n2. Switch the agent's writes into that version.\n3. Run the agent. All writes are isolated.\n4. Preview the merge: `changeStats` for the count, `conflicts` for collisions.\n5. Approve, request changes, or discard.\n\n```ts\nconst main = await lix.activeVersionId();\n\nconst task = await lix.createVersion({ name: \"Agent task 123\" });\nawait lix.switchVersion({ versionId: task.id });\n\n// run the agent; every lix.execute is now isolated to `task`\n\nawait lix.switchVersion({ versionId: main });\n\nconst preview = await lix.mergeVersionPreview({ sourceVersionId: task.id });\nif (preview.conflicts.length === 0) {\n  await lix.mergeVersion({ sourceVersionId: task.id });\n}\n```\n\n## Why versions matter for agents\n\n- Run multiple agents in parallel without stepping on each other.\n- Compare proposed outcomes side by side.\n- Keep the main state stable while work is in progress.\n- Discard a bad attempt with no manual cleanup.\n\n## Showing the work\n\nThe point of routing agent writes through Lix is that you can ask SQL what the agent did:\n\n```sql\nSELECT entity_id, schema_key, snapshot_content, depth, observed_commit_id\nFROM lix_state_history\nWHERE start_commit_id = lix_active_version_commit_id()\n  AND depth >= 0\nORDER BY depth, schema_key, 
entity_id;\n```\n\nThis is the data your review UI renders. See [Change History](./history.md) for more recipes (per-entity history, who-changed-what, diffs between versions).\n\n## Conflicts\n\nMerge is per-entity today: two versions editing different rows merge cleanly; two versions editing the same row produce a `sameEntityChanged` conflict. Wrap `mergeVersion()` and handle the conflict in your review flow.\n\nDon't reshape your schemas around this. Conflict semantics are an active roadmap item; design entities for how your code reads them, not around today's merge granularity. See [Versions & Merging](./versions.md#dont-shape-entities-around-merge).\n\n## Next\n\n- [Getting Started](./getting-started.md): the basic loop.\n- [Versions & Merging](./versions.md): preview shape, conflicts, side-by-side reads.\n- [Change History](./history.md): the SQL surface for review and undo.\n"
  },
  {
    "path": "docs/persistence.md",
    "content": "---\ndescription: Open Lix in memory for tests, or persist to a .lix SQLite file via the better-sqlite3 backend. For other storage targets, implement the backend interface.\n---\n\n# Persistence\n\n`openLix()` with no arguments opens an in-memory Lix that vanishes when the process exits. For anything that should survive a restart, pass a backend.\n\n## In-memory (tests, demos)\n\n```ts\nimport { openLix } from \"@lix-js/sdk\";\n\nconst lix = await openLix();\n// ... use it ...\nawait lix.close();\n```\n\n## SQLite file (Node.js)\n\nPersist a Lix as a single `.lix` file using the `better-sqlite3` backend. Install `better-sqlite3` as a peer dependency:\n\n```bash\nnpm install @lix-js/sdk better-sqlite3\n```\n\n```ts\nimport { openLix } from \"@lix-js/sdk\";\nimport { createBetterSqlite3Backend } from \"@lix-js/sdk/sqlite\";\n\nconst lix = await openLix({\n  backend: createBetterSqlite3Backend({ path: \"/var/data/app.lix\" }),\n});\n\n// ... use it ...\nawait lix.close();\n```\n\nReopening the same path resumes existing state. Don't open the file with raw SQLite tools; Lix manages its own schema and transactions.\n\nFor tests, point at a temp directory so each run is isolated:\n\n```ts\nimport { mkdtempSync } from \"node:fs\";\nimport { tmpdir } from \"node:os\";\nimport path from \"node:path\";\n\nconst dir = mkdtempSync(path.join(tmpdir(), \"lix-\"));\nconst lix = await openLix({\n  backend: createBetterSqlite3Backend({ path: path.join(dir, \"demo.lix\") }),\n});\n```\n\n## Closing\n\nAlways `await lix.close()` in scripts and tests. Long-lived servers can hold a single Lix instance for the lifetime of the process.\n\n## Other storage targets\n\nPostgres, S3, Cloudflare D1 / Durable Objects, IndexedDB, OPFS, RocksDB (anything transactional and key-value-shaped) are not shipped by the Lix team. The storage interface is public and small enough to implement yourself. See [Backends](./backend.md) for the contract.\n"
  },
  {
    "path": "docs/schemas.md",
    "content": "---\ndescription: Define the entity types Lix tracks for you. The x-lix-* JSON Schema extensions control the SQL table name, primary keys, uniqueness, and foreign keys.\n---\n\n# Schemas\n\nSchemas describe the entities Lix tracks. You declare each entity type as a JSON Schema with a few `x-lix-*` extensions, and Lix exposes a SQL table for it.\n\nSchemas are also the foundation file-format plugins build on: a plugin parses a file format (XLSX, DOCX, CAD, …) into entities described by a schema. Today you register schemas yourself; once the plugin API lands, plugin authors register theirs.\n\n> [!NOTE]\n> **For agents.** Lix is self-documenting. When operating against a Lix repository, query `lix_registered_schema` to discover every schema currently in effect (including Lix's own internal schemas `lix_*`) rather than relying on a snapshot of these docs. The schemas you read back are authoritative and current.\n>\n> ```sql\n> SELECT value FROM lix_registered_schema;\n> ```\n\n## Register a schema\n\n```sql\nINSERT INTO lix_registered_schema (value) VALUES (lix_json('{\n  \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n  \"x-lix-key\": \"acme_section\",\n  \"x-lix-primary-key\": [\"/id\"],\n  \"type\": \"object\",\n  \"required\": [\"id\", \"title\", \"body\"],\n  \"properties\": {\n    \"id\":    { \"type\": \"string\" },\n    \"title\": { \"type\": \"string\" },\n    \"body\":  { \"type\": \"string\" }\n  },\n  \"additionalProperties\": false\n}'));\n```\n\nAfter registration, `acme_section` is a SQL table you can `INSERT`, `SELECT`, `UPDATE`, and `DELETE` against. 
A sibling table `acme_section_by_version` exposes the same rows across all versions (see [Versions & Merging](./versions.md)).\n\n## The `x-lix-*` extensions\n\n| Field                | Purpose                                                                                                                                                                                                      |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `x-lix-key`          | Required. Becomes the SQL table name and the durable identity of the relation. Use stable, lowercase, prefixed keys: `acme_section`, not `section`. See [Prefix your schema keys](#prefix-your-schema-keys). |\n| `x-lix-primary-key`  | Required for table-style INSERTs. Array of JSON Pointer paths into the entity. Column order is semantic.                                                                                                     |\n| `x-lix-unique`       | Optional. Array of unique constraints, each itself an array of JSON Pointer paths.                                                                                                                           |\n| `x-lix-foreign-keys` | Optional. Array of foreign keys to other registered schemas. See [Foreign keys](#foreign-keys).                                                                                                              |\n\nWithout `x-lix-primary-key` you'll get an error like `requires lixcol_entity_id because the schema has no x-lix-primary-key`.\n\nSchema identity is `x-lix-key` alone. There is no version field. 
Evolution is governed by the [amendment rules](#schema-amendment-rules).\n\n### JSON Pointer paths\n\nPrimary-key, unique, and foreign-key paths are [JSON Pointer](https://datatracker.ietf.org/doc/html/rfc6901) strings: leading slash, slash-separated segments, pointing into the entity. For most schemas this is just `[\"/id\"]`, but it works for nested fields:\n\n```ts\n\"x-lix-primary-key\": [\"/owner/email\"]\n```\n\n### Composite primary keys and uniqueness\n\n```ts\n\"x-lix-primary-key\": [\"/order_id\", \"/line_no\"],\n\"x-lix-unique\": [[\"/sku\"], [\"/order_id\", \"/sku\"]],\n```\n\nUniqueness is **not** inferred from JSON Schema metadata. If a non-primary-key field must be unique, declare it with `x-lix-unique`.\n\n### Foreign keys\n\nForeign keys reference another registered schema by `x-lix-key`:\n\n```ts\n\"x-lix-foreign-keys\": [\n  {\n    \"properties\": [\"/author_id\"],\n    \"references\": {\n      \"schemaKey\": \"acme_author\",\n      \"properties\": [\"/id\"]\n    }\n  }\n]\n```\n\nThe reference is **identity-only**: there is no `schemaVersion` on the right-hand side. A foreign key points at a schema by its stable `x-lix-key` and trusts that the referenced schema evolves under the same compatibility rules described below. This keeps cross-plugin references sane: a markdown plugin can FK into an author plugin without tracking which revision of the author schema is currently registered.\n\n### `additionalProperties: false`\n\nAlways include `additionalProperties: false`. Lix validates writes against the schema, and accidental fields will fail fast instead of silently writing garbage. It's also required by the amendment rules below: schemas that don't set it cannot be safely amended.\n\n## Schema amendment rules\n\nA registered schema's `x-lix-key` is the relation's durable identity. You can re-register the same `x-lix-key` to amend the schema, but Lix only accepts changes that keep existing data valid. 
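\nFor example, amending `acme_section` with a new optional `tag` property is accepted, because existing rows without the field stay valid (a sketch; the `tag` field is hypothetical, and this assumes re-registration uses the same INSERT as initial registration):\n\n```sql\nINSERT INTO lix_registered_schema (value) VALUES (lix_json('{\n  \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n  \"x-lix-key\": \"acme_section\",\n  \"x-lix-primary-key\": [\"/id\"],\n  \"type\": \"object\",\n  \"required\": [\"id\", \"title\", \"body\"],\n  \"properties\": {\n    \"id\":    { \"type\": \"string\" },\n    \"title\": { \"type\": \"string\" },\n    \"body\":  { \"type\": \"string\" },\n    \"tag\":   { \"type\": \"string\" }\n  },\n  \"additionalProperties\": false\n}'));\n```\n\nNote that `tag` is not added to `required` and is not referenced by any constraint; either would make the amendment invalid under the rules below.\n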
The rules are mechanical: a diff of old vs new must satisfy every constraint below or the amendment is rejected.\n\n### Why amendments must be backward compatible\n\nLix is a version-controlled repository. Every change is immutable. Once a row has been written under a schema, that historical change cannot be rewritten. A Lix repository may hold years of changes spread across many versions and many authors' schemas, and all of it must remain readable.\n\nThis makes retroactive schema migrations impossible. There is no point in time at which Lix could \"convert all existing rows from the old shape to the new one\"; the old rows are part of history, and history doesn't change.\n\n```\n       schema grows forward (additive only) ──────────────►\n       v1: {id, body}              v2: {id, body, tag?}\n\ntime   ──●──────●──────●─────────●──────●──────────────►\n         c1     c2     c3        c4     c5\n         └─ written under v1 ────┘└─ under v2 ─┘\n                    │\n                    └─ immutable; reading c1 must still\n                       succeed after the v1 → v2 amendment.\n```\n\nThe only safe direction of evolution is therefore additive: a schema can grow in ways that leave existing rows valid, but it cannot tighten, rename, or remove anything that already exists. This is what the rules below enforce.\n\nIf a schema author truly needs a breaking change, they mint a new `x-lix-key` (e.g. `md_block_v2`), leave the old key's data untouched in history, and write any plugin-level migration code at their own pace. Old data stays valid under the old key; new data lives under the new key.\n\n### What you can change\n\n- **Add a new optional property.** It must not appear in `required`, and it must not be referenced by any existing primary-key, unique, or foreign-key constraint. Existing rows simply lack the field.\n- **Edit doc-only fields** anywhere in the schema: `description`, `title`, `$comment`, `deprecated`. 
These never affect storage or validation, so you can iterate on them freely.\n\n### What you cannot change\n\n- **`x-lix-key`.** Renaming creates a new relation; it is not an amendment.\n- **`additionalProperties`.** Must remain `false`.\n- **Existing properties.** Type, default, format, nested schema, enum: all frozen. Once a property has shipped, its semantics are permanent.\n- **`required`.** The required set is frozen. Neither additions nor removals.\n- **Constraints (`x-lix-primary-key`, `x-lix-unique`, `x-lix-foreign-keys`).** Frozen. You can reorder list elements cosmetically (Lix normalizes the comparison), but you can't add, remove, or modify a constraint. Primary-key column order is semantic and cannot be reordered.\n- **Top-level keywords** like `type`, `examples`, `patternProperties`. Frozen.\n- **Nested object schemas.** A property whose `type` is `object` is frozen as a unit: you cannot add subproperties to it. Recursive schema evolution is intentionally a later, explicit feature.\n- **`x-lix-version`.** Rejected if present on either side.\n\n### What to do when you really need a breaking change\n\nMint a new `x-lix-key`. Ship `acme_section_v2` as a separate schema, write migration code in your plugin to move data from `acme_section` to `acme_section_v2`, and let the two coexist while consumers cut over. Foreign keys pointing at the old key keep working; new ones point at the new key. This is how protobuf, GraphQL, RDF, and OpenAPI all handle hard breaks: the new identity _is_ the version bump, and it cascades through references naturally.\n\n## Prefix your schema keys\n\n`x-lix-key` is the global identifier for an entity type inside a Lix instance. It's also the SQL table name. 
Pick a prefix tied to your app, plugin, or organization, and put every schema you own behind it:\n\n| Good                         | Bad               |\n| :--------------------------- | :---------------- |\n| `acme_task`, `acme_section`  | `task`, `section` |\n| `xlsx_cell`, `xlsx_sheet`    | `cell`, `sheet`   |\n| `figma_layer`, `figma_frame` | `layer`, `frame`  |\n\nWhy it matters: a single Lix can hold many files and many schemas at once. App-level entities, file-format plugins (XLSX, DOCX, CAD, …), and Lix's own internal schemas all share the `lix_registered_schema` namespace. An unprefixed `task` collides the moment a second source registers the same name. The `lix_*` prefix is reserved for Lix-internal schemas; don't use it for your own.\n\nTreat `x-lix-key` like a package name: lowercase, stable, namespaced. Once data is written, the key is permanent (see the amendment rules above).\n\n## Best practices\n\n### Don't store lifecycle timestamps\n\nYou don't need `created_at` or `updated_at` on app schemas. Lix already records lifecycle in [`lix_change`](./history.md). Add timestamp fields only when they're domain data, like `due_at` or `published_at`.\n\n### Inspecting registered schemas\n\n```sql\nSELECT lixcol_entity_id, value\nFROM lix_registered_schema\nORDER BY lixcol_entity_id;\n```\n\n### Design for querying, not for merging\n\nShape your entities the way your reads want them. Document blocks, spreadsheet cells, line items: model whatever's natural for the questions your code asks.\n\nDon't shrink rows just to avoid merge conflicts. Lix's conflict detection is row-level today (two versions editing different fields of the same row still conflict), but conflict semantics and resolution are an active roadmap item; designs that bend around today's limitation will look strange once that lands. 
See the [roadmap](https://github.com/opral/lix#roadmap).\n\nIf two collaborators are likely to edit the same logical thing concurrently and your domain naturally splits it (a document into blocks, an invoice into line items), split it because the _data_ makes sense that way. Don't split a single record into ten just because a future merge might collide.\n"
  },
  {
    "path": "docs/sql-functions.md",
    "content": "---\ndescription: Built-in scalar SQL functions provided by the Lix engine. Covers JSON parsing and projection, ID and timestamp generation, text/blob coercion, and the active-version commit id helper used to scope history queries.\n---\n\n# SQL Functions\n\nLix's DataFusion-backed engine registers a small set of scalar functions for use inside `lix.execute()`. They cover the gaps between standard SQL and Lix's own conventions: parsing JSON parameters, producing IDs and timestamps, coercing between text and bytes, and resolving the active version's commit id for history queries.\n\n## At a glance\n\n| Function | Returns | Use for |\n| :-- | :-- | :-- |\n| `lix_active_version_commit_id()` | text | Scoping `_history` queries to the active version. |\n| `lix_json(text)` | JSON | Parse a JSON string parameter into a JSON-typed value. |\n| `lix_json_get(json, path...)` | JSON | Project a value out of a JSON column, preserving JSON type. |\n| `lix_json_get_text(json, path...)` | text | Project a value out of a JSON column as plain text. |\n| `lix_uuid_v7()` | text | Generate a UUIDv7 string. |\n| `lix_timestamp()` | text | Current ISO-8601 timestamp string. |\n| `lix_text_decode(blob[, encoding])` | text | Decode a `BLOB` to text (default `utf-8`). |\n| `lix_text_encode(text[, encoding])` | blob | Encode text into a `BLOB` (default `utf-8`). |\n| `lix_empty_blob()` | blob | Zero-byte `BLOB` literal. |\n\nAll functions are scalar; call them anywhere a SQL expression is allowed.\n\n## Version & history\n\n### `lix_active_version_commit_id()`\n\nReturns the commit id at the tip of the **currently active** version, as resolved when the SQL statement was planned.\n\nHistory surfaces (`lix_state_history`, `<schema>_history`, `lix_file_history`, `lix_directory_history`) require a literal or bound-parameter equality on `start_commit_id` (or `lixcol_start_commit_id`). A correlated subquery against `lix_version` is rejected by the planner. 
`lix_active_version_commit_id()` is the canonical way to scope history to the active version in a single statement:\n\n```sql\n-- Walk one entity's history from the active version's tip\nSELECT depth, observed_commit_id, snapshot_content\nFROM lix_state_history\nWHERE schema_key = 'task' AND entity_id = 't1'\n  AND start_commit_id = lix_active_version_commit_id()\nORDER BY depth;\n```\n\nFor an arbitrary version, resolve the commit id with one query and pass it as a parameter:\n\n```ts\nconst { rows } = await lix.execute(\n  \"SELECT commit_id FROM lix_version WHERE id = $1\",\n  [versionId],\n);\nconst commitId = rows[0].value(\"commit_id\").asText();\n\nawait lix.execute(\n  `SELECT depth, snapshot_content\n     FROM lix_state_history\n    WHERE start_commit_id = $1\n      AND schema_key = $2 AND entity_id = $3\n    ORDER BY depth`,\n  [commitId, \"task\", \"t1\"],\n);\n```\n\n## JSON\n\n### `lix_json(text)`\n\nParses a JSON string into a JSON-typed value. Use this when binding a JSON parameter, since DataFusion otherwise treats the bound value as plain text:\n\n```ts\nawait lix.execute(\n  \"INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))\",\n  [JSON.stringify(schema)],\n);\n```\n\n### `lix_json_get(json, path...)`\n\nReturns the value at a JSON path, **preserving JSON type** (objects, arrays, numbers, booleans, strings stay as JSON). Variadic path: pass each segment as a separate argument.\n\n```sql\nSELECT lix_json_get(snapshot_content, 'tags') FROM lix_state WHERE schema_key = 'task';\n-- returns [\"urgent\",\"draft\"] as JSON\n```\n\n### `lix_json_get_text(json, path...)`\n\nSame as `lix_json_get` but returns the value as plain text. 
Useful for filtering or display:\n\n```sql\nSELECT entity_id\nFROM lix_state\nWHERE schema_key = 'task'\n  AND lix_json_get_text(snapshot_content, 'priority') = 'high';\n```\n\nBoth return `NULL` if the path is missing or the underlying value is `null`.\n\n## IDs & time\n\n### `lix_uuid_v7()`\n\nGenerates a fresh RFC 9562 UUIDv7 string. Useful in `INSERT` defaults and CEL `default` expressions in JSON Schema:\n\n```sql\nINSERT INTO task (id, title, done)\nVALUES (lix_uuid_v7(), 'New task', false);\n```\n\n### `lix_timestamp()`\n\nReturns the current time as an ISO-8601 string.\n\n```sql\nINSERT INTO event (id, occurred_at) VALUES (lix_uuid_v7(), lix_timestamp());\n```\n\n## Text & bytes\n\n### `lix_text_decode(blob[, encoding])`\n\nDecodes a `BLOB` to text. The optional second argument is the encoding name (`\"utf-8\"` is the default and currently the only supported encoding):\n\n```sql\nSELECT lix_text_decode(data) FROM lix_file WHERE path = '/notes/readme.md';\n```\n\n### `lix_text_encode(text[, encoding])`\n\nInverse of `lix_text_decode`. Encodes text into a `BLOB`:\n\n```sql\nINSERT INTO lix_file (id, path, data)\nVALUES (lix_uuid_v7(), '/notes/hello.txt', lix_text_encode('hello world'));\n```\n\n### `lix_empty_blob()`\n\nReturns a zero-length `BLOB`. Handy for creating an empty file:\n\n```sql\nINSERT INTO lix_file (id, path, data)\nVALUES (lix_uuid_v7(), '/empty.bin', lix_empty_blob());\n```\n\n## Notes\n\n- Functions are pure scalars; they do not consume rows or take aggregates.\n- Bound parameters use `$1`, `$2`, … (not `?`); see [API Reference](./api-reference.md#executesql-params).\n- `lix_active_version_commit_id()`, `lix_uuid_v7()`, and `lix_timestamp()` reflect the engine's current view at planning/execution time and are stable across the rows of a single statement.\n"
  },
  {
    "path": "docs/surfaces.md",
    "content": "---\ndescription: The SQL surfaces in Lix at a glance. State surfaces are JSON-shaped and schema-agnostic; per-entity, file, and directory surfaces are typed sugar over the same data. One grid, eleven tables.\n---\n\n# SQL Surfaces\n\nLix exposes the same underlying state through several SQL surfaces so you can query it the way that fits the question you're asking.\n\nTwo ergonomic axes:\n\n- **Grain.** Typed columns for one schema vs. raw JSON across all schemas vs. file bytes.\n- **Scope.** The active version, all versions side-by-side, or history walked through commits.\n\nA third surface, `lix_change`, sits outside the grid as the immutable global change journal: every write across every schema and every version, ordered by `created_at`.\n\n## The grid\n\n|                              | Active (current state)         | Cross-version (side-by-side)              | History (time-travel)                  |\n| :--------------------------- | :----------------------------- | :---------------------------------------- | :------------------------------------- |\n| **Per-entity, typed**        | `<schema>`                     | `<schema>_by_version`                     | `<schema>_history`                     |\n| **State, raw JSON, all schemas** | `lix_state`                | `lix_state_by_version`                    | `lix_state_history`                    |\n| **Files (bytes)**            | `lix_file`                     | `lix_file_by_version`                     | `lix_file_history`                     |\n| **Directories**              | `lix_directory`                | `lix_directory_by_version`                | `lix_directory_history`                |\n\nPlus: `lix_change`, the global change journal (no version filter).\n\nPick the row by what you're querying; pick the column by which version(s) and which time. 
Same data underneath, different ergonomics.\n\n## State surfaces\n\nSchema-agnostic, JSON-shaped reads across every registered schema.\n\n| Surface | Use for |\n| :-- | :-- |\n| `lix_state` | Current state of every entity in the active version. |\n| `lix_state_by_version` | Same, but with a `version_id` column so you can read across versions. |\n| `lix_state_history` | State walked back through the commit graph from a given commit. |\n\nCommon columns (`lix_state` and `lix_state_by_version`): `entity_id`, `schema_key`, `file_id`, `snapshot_content` (JSON), `metadata` (JSON), `schema_version`, `change_id`, `commit_id`. `lix_state_by_version` adds `version_id`.\n\n`lix_state_history` shares `entity_id`, `schema_key`, `file_id`, `snapshot_content`, `metadata`, `schema_version`, `change_id`, and instead of `commit_id` exposes `start_commit_id`, `observed_commit_id`, `commit_created_at`, and `depth` (commit-graph distance from `start_commit_id`; `0` is the freshest observation, higher values walk back, and intermediate commits that didn't touch the entity are skipped).\n\n> **History queries require a literal filter on `start_commit_id`.** A correlated subquery against `lix_version` is rejected by the planner. Use `lix_active_version_commit_id()` for the active version, or resolve the commit id with one query and pass it as a parameter. 
See [`lix_active_version_commit_id()`](./sql-functions.md#lix_active_version_commit_id).\n\n```sql\n-- Every entity in the active version, raw JSON\nSELECT entity_id, schema_key, snapshot_content FROM lix_state;\n\n-- Same entity in two versions, side by side\nSELECT version_id, snapshot_content\nFROM lix_state_by_version\nWHERE schema_key = 'task' AND entity_id = 't1'\n  AND version_id IN ($a, $b);\n\n-- Walk history of one entity from a version's tip\nSELECT depth, observed_commit_id, snapshot_content\nFROM lix_state_history\nWHERE schema_key = 'task' AND entity_id = 't1'\n  AND start_commit_id = lix_active_version_commit_id()\nORDER BY depth;\n```\n\n## Per-entity sugar\n\nFor each registered schema `X`, Lix generates three typed surfaces named after `x-lix-key`:\n\n| Surface | Use for |\n| :-- | :-- |\n| `<schema>` | `INSERT` / `SELECT` / `UPDATE` / `DELETE` against the active version with typed columns. |\n| `<schema>_by_version` | Read or write across versions; INSERTs require `lixcol_version_id`. |\n| `<schema>_history` | Time-travel through one schema's history with typed columns. |\n\nPer-entity surfaces project user columns directly (`id`, `title`, `done`, …) plus `lixcol_*`-prefixed system columns. The set varies by scope:\n\n- `<schema>` (active): `lixcol_change_id`, `lixcol_commit_id`, `lixcol_created_at`, `lixcol_updated_at`, plus bookkeeping. **No `lixcol_version_id`**; the active surface is implicitly the active version.\n- `<schema>_by_version`: adds `lixcol_version_id`. 
INSERT/UPDATE require it.\n- `<schema>_history`: `lixcol_start_commit_id`, `lixcol_observed_commit_id`, `lixcol_depth`, `lixcol_snapshot_content`, `lixcol_change_id` (no `lixcol_commit_id` here; commits in history are addressed via `lixcol_observed_commit_id`).\n\nNote the prefix asymmetry between grains: state surfaces use **bare** column names (`start_commit_id`, `depth`, `observed_commit_id`); per-entity, file, and directory surfaces wear `lixcol_` on the same columns.\n\n```sql\n-- Current rows of one schema, typed columns\nSELECT id, title, done FROM task;\n\n-- Compare one entity across two versions, typed\nSELECT lixcol_version_id, title, done\nFROM task_by_version\nWHERE id = 't1' AND lixcol_version_id IN ($a, $b);\n\n-- History of one entity, typed\nSELECT lixcol_depth, title, done\nFROM task_history\nWHERE id = 't1'\n  AND lixcol_start_commit_id = lix_active_version_commit_id()\nORDER BY lixcol_depth;\n```\n\nWhen you need the typed columns, reach for the per-entity sugar. When you're querying across schemas, drop down to `lix_state*`. Same data either way.\n\n## Files\n\n`lix_file` versions byte content alongside path metadata. Each file gets the same three views as a registered schema, plus a `data BLOB` column for bytes.\n\n| Surface | Use for |\n| :-- | :-- |\n| `lix_file` | Current files in the active version. Read bytes via `data`. |\n| `lix_file_by_version` | Read or write files across versions. |\n| `lix_file_history` | Walk previous versions of a file's bytes through the commit graph. |\n\nUser columns: `id`, `path`, `directory_id`, `name`, `hidden`, `data`. 
System columns are `lixcol_*` (`lixcol_version_id` on `_by_version`; `lixcol_start_commit_id`, `lixcol_depth`, `lixcol_observed_commit_id` on `_history`).\n\n```sql\n-- Current bytes of a file\nSELECT data FROM lix_file WHERE path = '/orders.xlsx';\n\n-- Bytes of the same file in two versions\nSELECT lixcol_version_id, data\nFROM lix_file_by_version\nWHERE path = '/orders.xlsx' AND lixcol_version_id IN ($a, $b);\n\n-- Every previous version of a file's bytes\nSELECT lixcol_depth, lixcol_observed_commit_id, data\nFROM lix_file_history\nWHERE path = '/orders.xlsx'\n  AND lixcol_start_commit_id = lix_active_version_commit_id()\nORDER BY lixcol_depth;\n```\n\nRead `data` with `row.value(\"data\").asBlob()`.\n\n## Directories\n\nSame shape as files, minus the `data` column.\n\n| Surface | Use for |\n| :-- | :-- |\n| `lix_directory` | Current directories in the active version. |\n| `lix_directory_by_version` | Cross-version directory reads/writes. |\n| `lix_directory_history` | Directory history walked through commits. |\n\nUser columns: `id`, `path`, `parent_id`, `name`, `hidden`. Same `lixcol_*` system columns as files. 
Directory paths must end with a trailing slash (`/data/`, not `/data`).\n\nInserting a `lix_file` at `/a/b/c.txt` auto-creates `lix_directory` rows for `/a/` and `/a/b/` if they don't already exist; you only need to insert directories explicitly when you want them to exist before any file does.\n\n```sql\n-- List children of a directory\nSELECT name, path FROM lix_directory WHERE parent_id = (\n  SELECT id FROM lix_directory WHERE path = '/data/'\n);\n```\n\n## `lix_change`: the global journal\n\nOutside the grid because it isn't scoped to a version: every write across every schema, every version, every file, in commit order.\n\nColumns: `id`, `entity_id`, `schema_key`, `schema_version`, `file_id`, `metadata`, `snapshot_content`, `created_at`.\n\nUse `lix_change` for cross-cutting questions where neither version nor schema scopes the answer:\n\n```sql\n-- Last 20 application-level changes across the entire repo\nSELECT created_at, schema_key, entity_id, snapshot_content\nFROM lix_change\nWHERE schema_key NOT LIKE 'lix_%'\nORDER BY created_at DESC\nLIMIT 20;\n```\n\nWithout the `schema_key NOT LIKE 'lix_%'` filter the feed is dominated by Lix's own bookkeeping entities (`lix_commit`, `lix_binary_blob_ref`, `lix_file_descriptor`).\n\nPer-version history goes through the commit graph, not `lix_change` directly. See [Change History](./history.md).\n\n## Naming conventions\n\n| Surface family | System column prefix | Version column |\n| :-- | :-- | :-- |\n| `lix_state*` | bare (no prefix) | `version_id` |\n| `<schema>*`, `lix_file*`, `lix_directory*` | `lixcol_*` | `lixcol_version_id` |\n| `lix_change` | bare | (none, global) |\n\nState surfaces are projection-friendly raw views. Per-entity, file, and directory surfaces wear `lixcol_*` to keep your user columns (`id`, `title`, `path`, …) cleanly separated from Lix bookkeeping.\n\n## Composition recap\n\n- One row in **`lix_change`** per write, ever. 
Global, version-blind, immutable.\n- **State surfaces** (`lix_state*`) project that journal as JSON snapshots, scoped by version (`_by_version`) or walked through commits (`_history`).\n- **Per-entity surfaces** (`<schema>*`) and **file/directory surfaces** are typed projections of the same state, with user columns extracted into native SQL types.\n\nReach for typed surfaces when you know the schema. Drop to `lix_state*` for cross-schema reads. Drop to `lix_change` for raw activity feeds.\n"
  },
  {
    "path": "docs/table_of_contents.json",
    "content": "{\n  \"Overview\": [\n    { \"path\": \"./what-is-lix.md\", \"label\": \"What is Lix?\" },\n    { \"path\": \"./getting-started.md\", \"label\": \"Getting Started\" },\n    { \"path\": \"./lix-for-ai-agents.md\", \"label\": \"Lix for AI Agents\" },\n    { \"path\": \"./comparison-to-git.md\", \"label\": \"Comparison to Git\" }\n  ],\n  \"Concepts\": [\n    { \"path\": \"./schemas.md\", \"label\": \"Schemas\" },\n    { \"path\": \"./versions.md\", \"label\": \"Versions & Merging\" },\n    { \"path\": \"./history.md\", \"label\": \"Change History\" },\n    { \"path\": \"./surfaces.md\", \"label\": \"SQL Surfaces\" }\n  ],\n  \"Guides\": [\n    { \"path\": \"./persistence.md\", \"label\": \"Persistence\" },\n    { \"path\": \"./backend.md\", \"label\": \"Backends\" }\n  ],\n  \"Reference\": [\n    { \"path\": \"./api-reference.md\", \"label\": \"API Reference\" },\n    { \"path\": \"./sql-functions.md\", \"label\": \"SQL Functions\" }\n  ]\n}\n"
  },
  {
    "path": "docs/versions.md",
    "content": "---\ndescription: Versions are isolated lines of state. Create them, switch into them, read across them with _by_version tables, and merge with conflict-aware preview.\n---\n\n# Versions & Merging\n\nA **version** in Lix is what Git calls a branch: an isolated line of state that can diverge from main and be merged back. Lix uses \"version\" because product UIs don't say \"branch.\"\n\n## Create and switch\n\n```ts\nconst main = await lix.activeVersionId();\n\nconst draft = await lix.createVersion({ name: \"Marketing edit\" });\nawait lix.switchVersion({ versionId: draft.id });\n\n// writes here are isolated to `draft`\nawait lix.execute(\n  \"UPDATE acme_section SET title = $1 WHERE id = $2\",\n  [\"Sharper launch copy\", \"s1\"],\n);\n\nawait lix.switchVersion({ versionId: main });\n```\n\n`createVersion()` returns `{ id, name, hidden }`. `switchVersion()` is per-Lix-instance state; it changes which version subsequent SQL goes against.\n\nUse names that match your callers' vocabulary. For an end-user product that's domain language: `\"Marketing edit\"`, `\"Q3 pricing draft\"`. For a CLI or infrastructure tool, developer terms like `\"feature/x\"` or `\"staging\"` are fine; Lix doesn't prescribe.\n\n## Side-by-side reads with `_by_version`\n\nEvery registered schema `X` gets a sibling table `X_by_version` with a `lixcol_version_id` column. (Files and directories have the same shape: `lix_file_by_version`, `lix_directory_by_version`. For the full surface map see [SQL Surfaces](./surfaces.md).) 
Use it to read or write across versions without switching:\n\n```ts\nconst sideBySide = await lix.execute(\n  `SELECT v.name, s.title\n     FROM acme_section_by_version s\n     JOIN lix_version v ON v.id = s.lixcol_version_id\n    WHERE s.id = $1\n      AND s.lixcol_version_id IN ($2, $3)\n    ORDER BY v.name`,\n  [\"s1\", main, draft.id],\n);\n```\n\nRules for `_by_version`:\n\n- `SELECT`: filter by `lixcol_version_id`, or omit the filter to scan all versions.\n- `INSERT`: must include `lixcol_version_id`.\n- `UPDATE` / `DELETE`: must include `lixcol_version_id` in the `WHERE` clause.\n- The plain (non-suffixed) table is the active-version view.\n\nPrefer `_by_version` for review UIs, sync, and any side-by-side rendering; it avoids the cost and risk of switching the active version.\n\n## Preview a merge\n\n`mergeVersionPreview()` reports the same merge decision as `mergeVersion()` without touching state.\n\n```ts\nconst preview = await lix.mergeVersionPreview({ sourceVersionId: draft.id });\n\n// preview shape:\n// {\n//   outcome: \"alreadyUpToDate\" | \"fastForward\" | \"mergeCommitted\",\n//   targetVersionId, sourceVersionId,\n//   baseCommitId, targetHeadCommitId, sourceHeadCommitId,\n//   changeStats: { total, added, modified, removed },\n//   conflicts: MergeConflict[],\n// }\n```\n\nOutcomes:\n\n- `alreadyUpToDate`: source has no commits the target lacks.\n- `fastForward`: target advances to source without a merge commit.\n- `mergeCommitted`: a new merge commit will be created.\n\n`mergeVersion()` always merges into the **active** version. 
If you want a different target, switch to it first.\n\n## Conflicts\n\nIf both versions modified the same entity since their merge base, `mergeVersionPreview()` returns them in `conflicts`, and `mergeVersion()` throws a `LixError`.\n\nEach conflict has the shape:\n\n```ts\n{\n  kind: \"sameEntityChanged\",\n  schemaKey: \"acme_section\",\n  entityId: \"s1\",\n  fileId: null,\n  target: { kind: \"added\" | \"modified\" | \"removed\", beforeChangeId, afterChangeId },\n  source: { kind: \"added\" | \"modified\" | \"removed\", beforeChangeId, afterChangeId },\n}\n```\n\nConflict detection is row-level today, not field-level: two versions editing different fields of the same row still conflict. Conflict semantics and resolution are an active roadmap item (see [Roadmap](https://github.com/opral/lix#roadmap)). **Don't reshape your schemas to avoid this**; design entities around how your code reads them, not around today's merge granularity.\n\nAlways wrap `mergeVersion()` when conflicts are possible:\n\n```ts\ntry {\n  const result = await lix.mergeVersion({ sourceVersionId: draft.id });\n  console.log(result.outcome, result.changeStats.total);\n} catch (error) {\n  // resolve conflicts in calling code, then retry\n}\n```\n\n## Don't shape entities around merge\n\nIt's tempting to split rows finely to dodge the row-level conflict rule. **Don't.** Schema design should follow how your code reads, writes, and joins data, not how today's merge engine resolves conflicts. Conflict semantics will improve; data models that work today should still work then.\n\nIf a domain naturally splits (a document into blocks, an invoice into line items, a translation set into per-key messages), split it because the *reads* want it that way. If the natural shape is one row with several fields, write it that way and handle conflicts in calling code when they happen. 
See [Schemas](./schemas.md#design-for-querying-not-for-merging).\n\n## Hiding and deleting versions\n\n`lix_version` is a writable system table. Hide a version from the active set without deleting it:\n\n```ts\nawait lix.execute(\"UPDATE lix_version SET hidden = true WHERE id = $1\", [draft.id]);\n```\n\nDelete a version with SQL:\n\n```ts\nawait lix.execute(\"DELETE FROM lix_version WHERE id = $1\", [draft.id]);\n```\n\nThe engine refuses to delete the global version or the active version.\n"
  },
  {
    "path": "docs/what-is-lix.md",
    "content": "---\ndescription: Lix is an embeddable version control system for files of any format. Diffs are semantic and per entity (which cells changed in a spreadsheet, which clauses moved in a contract), exposed as SQL, all in-process.\n---\n\n# What is Lix?\n\nLix is an **embeddable version control system for files of any format**. A spreadsheet diff tells you which cells changed. A contract diff tells you which clauses moved. A CAD diff tells you which parts changed. Lix diffs files **semantically, per entity**, across DOCX, XLSX, CAD, PDF, JSON, and any format with a parser plugin.\n\nBranches, merge, and an immutable change history, exposed as SQL, all running in-process inside your program.\n\n> Lix is to version control what DuckDB is to analytics: an embeddable engine with pluggable support for file formats.\n\n[See what a semantic diff looks like →](./comparison-to-git.md#what-this-looks-like)\n\n```ts\nimport { openLix } from \"@lix-js/sdk\";\n\nconst lix = await openLix();\n// every change is journaled into lix_change, queryable as SQL\n```\n\n## How it works\n\nEach file format is parsed into **entities**: cells in a spreadsheet, clauses in a document, parts in a CAD drawing. Lix versions those entities. Per-row branch, merge, and history fall out for free.\n\n**Status:** the entity foundation ships today. Register a JSON Schema, write rows through SQL, version structured data end-to-end. 
A plugin API for file formats is on the [roadmap](https://github.com/opral/lix#roadmap); once it lands, anyone can author a plugin that turns a format (XLSX, DOCX, CAD, PDF, anything else) into entities, and the same primitives apply.\n\n## Three shapes\n\nThe same `openLix()` powers three different shapes:\n\n**A library inside an end-user product.** Lawyers redlining a contract, analysts iterating on a forecast, engineers updating a BOM, designers exploring a layout: give them Git-like drafts, review, and rollback inside your product UI, no terminal in sight.\n\n**A library inside an AI agent platform.** Every agent task gets an isolated workspace; humans or policies review the diff and merge or discard. See [Lix for AI Agents](./lix-for-ai-agents.md).\n\n**The engine of an infrastructure product.** Build a versioned filesystem, an artifact or model registry, a configuration service, a Git-style branchable database, or a domain-specific CLI. Lix is the version-control core; you ship the surface.\n\n## Why embed it\n\nGit's diff model is line-based on text, so it doesn't surface meaningful changes for binary or structured files (DOCX, XLSX, CAD). Git is also CLI-driven and operates outside your process, which makes it awkward for runtime data, programmatic edits, or end-user workflows that aren't a developer at a terminal.\n\nLix is the opposite shape:\n\n- A **library** you import; call it from an app, a service, a CLI, or another database engine.\n- **Pluggable storage.** Run in-memory, persist to a `.lix` SQLite file, or implement the [backend interface](./backend.md) to put Lix on Postgres, S3, Cloudflare, IndexedDB, OPFS, or anything transactional and key-value-shaped.\n- **SQL** as the query interface, for application code, AI agents, and tools.\n- **ACID** transactions across files and entities.\n\nNo daemon, no protocol, no remote.\n\n## The change-first model\n\nLix stores changes as data, not snapshots. 
One immutable journal across every entity, every version:\n\n```sql\n-- What does this version see right now?\nSELECT entity_id, schema_key, snapshot_content\nFROM lix_state_history\nWHERE start_commit_id = lix_active_version_commit_id()\n  AND depth = 0\nORDER BY schema_key, entity_id;\n```\n\nWhether the entity is a spreadsheet cell, a document clause, a CAD part, or an application row, the surface is the same. Diffs, undo, audit, blame, and attribution are all SQL. See [Change History](./history.md).\n\n## Examples of what Lix versions\n\nOnce the plugin API lands and people start writing plugins:\n\n- DOCX contracts, with clause-level diffs and redlines\n- XLSX models, with cell-level history and conflict-aware merges\n- CAD drawings, with per-part revision tracking\n- PDFs and any other format behind a parser plugin\n\nAvailable today through the entity foundation:\n\n- Application state: tasks, line items, translations, CMS sections, model metadata, config keys\n- Anything you can describe with a JSON Schema\n\n## Next\n\n- [Getting Started](./getting-started.md): install, register a schema, branch, merge.\n- [Comparison to Git](./comparison-to-git.md): when to reach for which.\n- [Lix for AI Agents](./lix-for-ai-agents.md): one shape, in depth.\n- [Schemas](./schemas.md), [Versions & Merging](./versions.md), [Change History](./history.md), [Persistence](./persistence.md), [API Reference](./api-reference.md).\n"
  },
  {
    "path": "nx.json",
    "content": "{\n\t\"$schema\": \"./node_modules/nx/schemas/nx-schema.json\",\n\t\"tui\": {\n\t\t\"autoExit\": true\n\t},\n\t\"namedInputs\": {\n\t\t\"default\": [\"{projectRoot}/**/*\"],\n\t\t\"publicEnv\": [\n\t\t\t{\n\t\t\t\t\"runtime\": \"env |   grep ^PUBLIC_\"\n\t\t\t}\n\t\t],\n\t\t\"nodeVersion\": [\n\t\t\t{\n\t\t\t\t\"runtime\": \"node --version\"\n\t\t\t}\n\t\t],\n\t\t\"platform\": [\n\t\t\t{\n\t\t\t\t\"runtime\": \" node -e 'console.log(process.platform)'\"\n\t\t\t}\n\t\t]\n\t},\n\t\"targetDefaults\": {\n\t\t\"production\": {\n\t\t\t\"dependsOn\": [\"^build\"],\n\t\t\t\"inputs\": [\"default\", \"^default\", \"publicEnv\", \"nodeVersion\", \"platform\"]\n\t\t},\n\t\t\"build\": {\n\t\t\t\"dependsOn\": [\"^build\"],\n\t\t\t\"inputs\": [\"default\", \"^default\", \"publicEnv\", \"nodeVersion\", \"platform\"],\n\t\t\t\"cache\": true\n\t\t},\n\t\t\"dev\": {\n\t\t\t\"dependsOn\": [\"^build\"]\n\t\t},\n\t\t\"test\": {\n\t\t\t\"dependsOn\": [\"^build\", \"publicEnv\", \"nodeVersion\", \"platform\"],\n\t\t\t\"cache\": true\n\t\t},\n\t\t\"lint\": {\n\t\t\t\"dependsOn\": [\"format\"],\n\t\t\t\"cache\": true\n\t\t},\n\t\t\"format\": {\n\t\t\t\"cache\": true\n\t\t}\n\t},\n\t\"useDaemonProcess\": false,\n\t\"__commentToken\": \"The token is supposed to be public\",\n\t\"nxCloudAccessToken\": \"ZjA2NzJhZGQtMTQ0NS00ODVlLTlmNzktYmQ5MWYwYTZmODhlfHJlYWQtd3JpdGU=\"\n}\n"
  },
  {
    "path": "optimization_log6_crud.md",
    "content": "# Optimization Log 6: JSON Pointer CRUD\n\nGoal: make typed-table JSON pointer CRUD fast enough that Lix behaves like a\nnormal embedded CRUD database for this workload.\n\nTarget workload:\n\n```text\ntable: json_pointer\ncolumns: path TEXT primary-key shape, value JSON\nfixture: packages/engine/benches/fixtures/pnpm-lock.fixture.json\nrows: 1000 smoke rows from all JSON nodes, including containers\nquery surface:\n  INSERT INTO json_pointer (path, value)\n  SELECT path, value FROM json_pointer\n  SELECT path, value FROM json_pointer WHERE path = ?\n  UPDATE json_pointer SET value = ...\n  DELETE FROM json_pointer\n```\n\nNo `lix_file` row is required for this scorecard. This is intentionally the\nplain CRUD path through a registered typed schema.\n\n## Success Criteria\n\nSpeed:\n\n```text\nLix with SQLite backend: <= 2.0x raw SQLite median\nLix with RocksDB backend: <= 1.8x raw SQLite median\n```\n\nStorage:\n\n```text\nLix with SQLite backend: <= 2.0x raw SQLite bytes on disk\nLix with RocksDB backend: <= 2.0x raw SQLite bytes on disk\n```\n\nThe raw SQLite baseline uses the same fixture rows and an equivalent\n`json_pointer(path TEXT PRIMARY KEY, value TEXT) WITHOUT ROWID` table in a temp\nfile.\n\n## Baseline\n\nCommands:\n\n```sh\ncargo bench -p lix_engine --bench json_pointer_crud --features storage-benches\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\n```\n\nCurrent smoke speed baseline:\n\n| operation               | raw SQLite median | Lix SQLite target | Lix SQLite current | status | Lix RocksDB target | Lix RocksDB current | status |\n| ----------------------- | ----------------: | ----------------: | -----------------: | ------ | -----------------: | ------------------: | ------ |\n| `insert_all_nodes`      |          3.753 ms |       <= 7.506 ms |          374.32 ms | fail   |        <= 6.755 ms |           319.73 ms | fail   |\n| 
`select_all_path_value` |          1.197 ms |       <= 2.394 ms |           14.95 ms | fail   |        <= 2.155 ms |            11.51 ms | fail   |\n| `select_by_pk_path`     |          1.517 ms |       <= 3.034 ms |             3.51 s | fail   |        <= 2.731 ms |              3.34 s | fail   |\n| `update_all_values`     |          1.414 ms |       <= 2.828 ms |           35.59 ms | fail   |        <= 2.545 ms |            24.18 ms | fail   |\n| `delete_all_nodes`      |          1.178 ms |       <= 2.356 ms |             2.64 s | fail   |        <= 2.120 ms |              1.83 s | fail   |\n\nCurrent 1000-row storage baseline:\n\n| backend     | bytes on disk |       target | status    |\n| ----------- | ------------: | -----------: | --------- |\n| raw SQLite  |     1,692,456 |    reference | reference |\n| Lix SQLite  |     1,075,136 | <= 3,384,912 | pass      |\n| Lix RocksDB |       993,888 | <= 3,384,912 | pass      |\n\nBaseline interpretation:\n\n```text\nStorage already passes comfortably for both Lix backends.\n\nCRUD speed does not pass any target yet. The loudest bottlenecks are repeated\nprimary-key lookups and bulk delete, both measured in seconds for 1000 rows.\nInsert is also far outside the target, while full scan and bulk update are\ncloser but still roughly 10-25x over the raw SQLite reference.\n```\n\n## Optimization Order\n\nWork the scorecard in this order:\n\n1. `select_by_pk_path`\n2. `delete_all_nodes`\n3. `insert_all_nodes`\n4. `update_all_values`\n5. `select_all_path_value`\n\nRationale:\n\n```text\nPrimary-key reads reveal per-query planning/provider overhead. Bulk delete\nreveals write/delete transaction machinery. Insert is the main mutation hot\npath. 
Update and full scan are still failing, but their current numbers are\ncloser to the target than PK reads and delete.\n```\n\n## Entry Template\n\nUse one entry per kept optimization.\n\n```text\n## Optimization N: <short name>\n\nCommit: <hash> or uncommitted on <hash>\n\nTarget operation:\n  insert_all_nodes | select_all_path_value | select_by_pk_path |\n  update_all_values | delete_all_nodes | storage\n\nChange:\n  What changed?\n  Why should this reduce CRUD overhead?\n  What invariant is preserved?\n\nResults:\n  Include raw SQLite, Lix SQLite, and Lix RocksDB rows for every impacted CRUD\n  operation. Include 1000-row storage if the change can affect bytes on disk.\n\nVerification:\n  Exact commands run.\n```\n"
  },
  {
    "path": "optimization_log7.md",
    "content": "# Optimization Log 7: Physical Layout for CRUD + Branch/Merge\n\nGoal: find the optimal physical storage layout for Lix's core tracked-state\nworkflow as quickly as possible.\n\nThis log uses JSON-pointer shaped data as the shared workload because it looks\nlike real `plugin-json-v2` output: many small entities keyed by JSON pointer,\nincluding container nodes and leaves.\n\n## Core Workflow\n\nThe layout must prove itself across the operations Lix users actually compose:\n\n```text\nCRUD:\n  INSERT INTO json_pointer (path, value)\n  SELECT path, value FROM json_pointer\n  SELECT path, value FROM json_pointer WHERE path = ?\n  UPDATE json_pointer SET value = ...\n  DELETE FROM json_pointer\n\nBranching:\n  create_version over an existing tracked state\n\nMerge / diff:\n  merge_version after source-only edits\n  merge_version after divergent target/source edits\n\nStorage:\n  bytes on disk after insert\n  bytes on disk after create_version\n  bytes on disk after fast-forward merge\n  bytes on disk after divergent merge\n```\n\nThe purpose is not to win a single CRUD microbenchmark. The purpose is to learn\nwhich physical layout lets Lix cheaply answer the three core tracked-state\nquestions:\n\n```text\nWhat exists at this version?\nWhat changed between these versions?\nWhat is the current value for these exact entity identities?\n```\n\n## Fixture\n\n```text\nfixture: packages/engine/benches/fixtures/pnpm-lock.fixture.json\nsource: checked-in JSON conversion of the repo pnpm-lock.yaml\nrows: all JSON nodes flattened to json_pointer rows\nsmoke: first 1000 rows\nscale: first 10000 rows\ntable: json_pointer\nidentity: path\nvalue: JSON node value\nfile_id: NULL\n```\n\nThe fixture intentionally does not require a real `lix_file` row. 
The benchmark\nregisters the `plugin-json-v2` `json_pointer` schema and treats Lix as the\nnormal typed-table CRUD and versioned-state database.\n\n## Scorecard\n\nSpeed is measured for both backends:\n\n```text\nLix with SQLite backend\nLix with RocksDB backend\n```\n\nRaw SQLite remains a reference for simple CRUD machine limits, but it is not\nthe goal. Large gaps must be explained by Lix semantics or by an intentional\nlayout tradeoff. Gaps caused by accidental scans, repeated delta decoding,\nunbatched point reads, or avoidable write amplification are optimization\ntargets.\n\nStorage is measured on disk for the same 1000-row fixture and workflow stages.\nThe initial guardrail is that Lix should stay compact while adding branching\nand merge metadata; storage growth should be structural and explainable.\n\n## Current Benchmark Surface\n\nCommand:\n\n```sh\ncargo bench -p lix_engine --bench json_pointer_crud --features storage-benches\n```\n\nBenchmark groups:\n\n```text\njson_pointer_crud/raw_sqlite/baseline\njson_pointer_crud/raw_sqlite/smoke\njson_pointer_crud/raw_sqlite/scale\njson_pointer_crud/raw_storage_sqlite/baseline\njson_pointer_crud/raw_storage_sqlite/smoke\njson_pointer_crud/raw_storage_sqlite/scale\njson_pointer_crud/raw_storage_rocksdb/baseline\njson_pointer_crud/raw_storage_rocksdb/smoke\njson_pointer_crud/raw_storage_rocksdb/scale\njson_pointer_crud/lix_sqlite/baseline\njson_pointer_crud/lix_sqlite/smoke\njson_pointer_crud/lix_sqlite/scale\njson_pointer_crud/lix_rocksdb/baseline\njson_pointer_crud/lix_rocksdb/smoke\njson_pointer_crud/lix_rocksdb/scale\n```\n\nRaw Storage API 
timings:\n\n```text\nwrite_root_all_rows/{100,1k,10k}\nget_many_exact_keys/{100,1k,10k}\nget_many_missing_keys/{100,1k,10k}\nexists_many_exact_keys/{100,1k,10k}\nscan_keys_only/{100,1k,10k}\nscan_headers_only/{100,1k,10k}\nscan_full_rows/{100,1k,10k}\nprefix_scan_schema/{100,1k,10k}\nprefix_scan_schema_file_null/{100,1k,10k}\nwrite_delta_10pct_updates/{100,1k,10k}\nwrite_tombstone_10pct_deletes/{100,1k,10k}\nchanged_keys_update_10pct/{100,1k,10k}\nchanged_keys_delta_chain_10x1pct/{100,1k,10k}\nmaterialize_delta_chain_10x1pct/{100,1k,10k}\n```\n\nE2E workflow timings:\n\n```text\ninsert_all_rows/{100,1k,10k}\nselect_all_path_value/{100,1k,10k}\nselect_one_by_pk/{100,1k,10k}\nupdate_all_values/{100,1k,10k}\nupdate_one_by_pk/{100,1k,10k}\ndelete_all_rows/{100,1k,10k}\ndelete_one_by_pk/{100,1k,10k}\ncreate_version/{100,1k,10k}\nmerge_version_fast_forward_10pct_updates/{100,1k,10k}\nmerge_version_divergent_10pct_updates/{100,1k,10k}\n```\n\nStorage command:\n\n```sh\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\n```\n\nStorage rows:\n\n```text\nraw SQLite inserted\nLix SQLite / inserted\nLix SQLite / after create_version\nLix SQLite / after fast-forward merge\nLix SQLite / after divergent merge\nLix RocksDB / inserted\nLix RocksDB / after create_version\nLix RocksDB / after fast-forward merge\nLix RocksDB / after divergent merge\n```\n\n## First Optimization Axis\n\nOptimize exact-key and changed-key access through live/tracked state.\n\nRationale:\n\n```text\nCRUD insert needs committed identity checks.\nSELECT ... WHERE path = ? 
needs exact-key lookup.\nUPDATE and DELETE need current-row lookup by identity.\ncreate_version should stay bounded over large tracked states.\nmerge_version needs changed-key discovery, not full-state hydration.\n```\n\nThe latest insert profile showed the hot path dominated by validation loading\ncommitted identity rows through scan/delta materialization:\n\n```text\nvalidate_prepared_writes\n  -> load_committed_constraint_row\n  -> scan_committed_constraint_rows\n  -> TrackedStateStoreReader::scan_rows_at_commit\n  -> delta_commit_ids_since_projection_root\n  -> load_delta_pack\n  -> decode_delta_pack\n```\n\nThat makes the first physical-layout question concrete:\n\n```text\nCan the storage layout and reader APIs answer batched exact-key lookups and\nchanged-key queries without broad scans or repeated delta-pack decoding?\n```\n\n## Baseline: 2026-05-10\n\nCommands:\n\n```sh\ncargo bench -p lix_engine --bench json_pointer_crud --features storage-benches -- 'json_pointer_crud/raw_storage_sqlite/baseline|json_pointer_crud/raw_storage_rocksdb/baseline|json_pointer_crud/lix_sqlite/baseline|json_pointer_crud/lix_rocksdb/baseline'\ncargo bench -p lix_engine --bench json_pointer_crud --features storage-benches -- 'json_pointer_crud/raw_sqlite/baseline|json_pointer_crud/raw_sqlite/smoke|json_pointer_crud/raw_storage_sqlite/smoke|json_pointer_crud/raw_storage_rocksdb/smoke|json_pointer_crud/lix_sqlite/smoke|json_pointer_crud/lix_rocksdb/smoke'\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\n```\n\nRaw Storage API scoreboard:\n\n| operation                          | SQLite 100 | SQLite 1k | SQLite x | RocksDB 100 | RocksDB 1k | RocksDB x |\n| ---------------------------------- | ---------: | --------: | -------: | ----------: | ---------: | --------: |\n| `write_root_all_rows`              |  3.1846 ms | 8.1773 ms |    2.57x |   3.8257 ms |  6.1633 ms |     1.61x |\n| `get_many_exact_keys`  
            |  1.5000 ms | 4.6368 ms |    3.09x |   1.1740 ms |  3.5542 ms |     3.03x |\n| `get_many_missing_keys`            |  0.8035 ms | 2.5875 ms |    3.22x |   0.7961 ms |  1.5991 ms |     2.01x |\n| `exists_many_exact_keys`           |  1.4055 ms | 6.0771 ms |    4.32x |   1.2785 ms |  4.0307 ms |     3.15x |\n| `scan_keys_only`                   |  0.8837 ms | 3.9795 ms |    4.50x |   0.6067 ms |  2.0074 ms |     3.31x |\n| `scan_headers_only`                |  0.9404 ms | 3.3325 ms |    3.54x |   0.6108 ms |  2.0698 ms |     3.39x |\n| `scan_full_rows`                   |  1.4135 ms | 5.9223 ms |    4.19x |   1.1202 ms |  3.3596 ms |     3.00x |\n| `prefix_scan_schema`               |  1.3817 ms | 4.9885 ms |    3.61x |   1.0670 ms |  3.3352 ms |     3.13x |\n| `prefix_scan_schema_file_null`     |  1.4228 ms | 4.6936 ms |    3.30x |   1.0670 ms |  3.6555 ms |     3.43x |\n| `write_delta_10pct_updates`        |  0.9485 ms | 3.1167 ms |    3.29x |   0.5760 ms |  1.5234 ms |     2.64x |\n| `write_tombstone_10pct_deletes`    |  0.9206 ms | 2.8465 ms |    3.09x |   0.5681 ms |  1.4518 ms |     2.55x |\n| `changed_keys_update_10pct`        |  2.6219 ms | 72.513 ms |   27.66x |   2.1347 ms |  68.299 ms |    32.00x |\n| `changed_keys_delta_chain_10x1pct` |  1.5310 ms | 10.956 ms |    7.16x |   1.2778 ms |  8.9643 ms |     7.02x |\n| `materialize_delta_chain_10x1pct`  |  1.1405 ms | 5.7006 ms |    5.00x |   0.9339 ms |  3.1625 ms |     3.39x |\n\n`exists_many_exact_keys` currently uses the tracked-state row-loading path as\nthe semantic equivalent. 
It is a named scoreboard slot for a future lighter\nexists-only primitive.\n\nE2E workflow scoreboard:\n\n| axis         | operation                                  | raw SQLite 100 | raw SQLite 1k | raw x | Lix SQLite 100 | Lix SQLite 1k | Lix SQLite x | Lix RocksDB 100 | Lix RocksDB 1k | Lix RocksDB x |\n| ------------ | ------------------------------------------ | -------------: | ------------: | ----: | -------------: | ------------: | -----------: | --------------: | -------------: | ------------: |\n| CRUD         | `insert_all_rows`                          |      1.4715 ms |     2.5578 ms | 1.74x |      21.690 ms |     382.34 ms |       17.63x |       19.807 ms |      317.34 ms |        16.02x |\n| CRUD         | `select_all_path_value`                    |      0.8791 ms |     1.2311 ms | 1.40x |      5.8882 ms |     13.336 ms |        2.26x |       5.5689 ms |      11.019 ms |         1.98x |\n| CRUD         | `select_one_by_pk`                         |      0.8001 ms |     1.1339 ms | 1.42x |      2.0720 ms |     6.1576 ms |        2.97x |       2.0085 ms |      3.8542 ms |         1.92x |\n| CRUD         | `update_all_values`                        |      0.8417 ms |     1.4807 ms | 1.76x |      9.2526 ms |     30.266 ms |        3.27x |       8.2602 ms |      22.054 ms |         2.67x |\n| CRUD         | `update_one_by_pk`                         |      0.8527 ms |     1.2591 ms | 1.48x |      4.4169 ms |     10.040 ms |        2.27x |       3.6020 ms |      7.3052 ms |         2.03x |\n| CRUD         | `delete_all_rows`                          |      0.9204 ms |     1.2384 ms | 1.35x |      40.927 ms |      2.4630 s |       60.18x |       38.043 ms |       1.7949 s |        47.18x |\n| CRUD         | `delete_one_by_pk`                         |      0.8174 ms |     1.2215 ms | 1.49x |      5.6983 ms |     12.400 ms |        2.18x |       4.3247 ms |      8.9218 ms |         2.06x |\n| Branch       | `create_version`                           |      
      n/a |           n/a |   n/a |      4.0152 ms |     8.0948 ms |        2.02x |       3.8455 ms |      6.1184 ms |         1.59x |\n| Merge / diff | `merge_version_fast_forward_10pct_updates` |            n/a |           n/a |   n/a |      45.680 ms |     995.44 ms |       21.79x |       44.270 ms |      900.68 ms |        20.35x |\n| Merge / diff | `merge_version_divergent_10pct_updates`    |            n/a |           n/a |   n/a |      77.602 ms |      2.0777 s |       26.77x |       81.869 ms |       1.9656 s |        24.01x |\n\n`raw SQLite reference` applies only to plain CRUD over the equivalent\n`json_pointer(path TEXT PRIMARY KEY, value TEXT) WITHOUT ROWID` table. Branch\nand merge are Lix semantic operations, so they have no raw SQLite equivalent in\nthis table.\n\nStorage scoreboard:\n\n| backend / workflow                     | 100 bytes | 100 bytes/row |  1k bytes | 1k bytes/row | bytes x |\n| -------------------------------------- | --------: | ------------: | --------: | -----------: | ------: |\n| raw SQLite / inserted                  |   936,584 |       9,365.8 | 1,692,456 |      1,692.5 |   1.81x |\n| Lix SQLite / inserted                  |   337,656 |       3,376.6 | 1,075,136 |      1,075.1 |   3.18x |\n| Lix SQLite / after create_version      |   345,896 |       3,459.0 | 1,087,496 |      1,087.5 |   3.14x |\n| Lix SQLite / after fast-forward merge  |   588,976 |       5,889.8 | 5,287,488 |      5,287.5 |   8.98x |\n| Lix SQLite / after divergent merge     | 1,268,776 |      12,687.8 | 5,615,168 |      5,615.2 |   4.43x |\n| Lix RocksDB / inserted                 |   280,077 |       2,800.8 |   993,888 |        993.9 |   3.55x |\n| Lix RocksDB / after create_version     |   281,943 |       2,819.4 |   995,754 |        995.8 |   3.53x |\n| Lix RocksDB / after fast-forward merge |   298,593 |       2,985.9 | 1,160,310 |      1,160.3 |   3.89x |\n| Lix RocksDB / after divergent merge    |   337,030 |       3,370.3 | 1,528,244 |      1,528.2 
|   4.53x |\n\nBaseline interpretation:\n\n```text\nThe Raw Storage API rows now separate layout capability from E2E machinery.\nDirect tracked-state `get_many` and full scan are low single-digit\nmilliseconds, while changed-key discovery for 10% updates scales far worse than\nthe scan/read primitives.\n\nThe E2E CRUD rows show the current pressure from the typed-table surface:\ninserts are hundreds of milliseconds at 1000 rows and bulk deletes are seconds,\nwith much steeper 100-to-1000 growth than raw SQLite. Single-row PK operations\nare measured as one row selected, updated, or deleted from a populated table.\n\ncreate_version is already bounded enough to use as a guardrail, but merge/diff\nis also seconds for only 10% changed rows over a 1000-row JSON-pointer state.\n\nStorage after plain insert is compact for both backends. create_version adds\nvery little storage, which matches the desired branch shape. SQLite-backed Lix\ngrows sharply after fast-forward/divergent merge, while RocksDB grows much more\ngradually. That backend split is a useful signal for the physical-layout work:\nmerge/diff layout and checkpoint/packing policy need to be evaluated across\nboth backends, not just through CRUD timings.\n```\n\n## Entry Template\n\nUse one entry per kept layout or access-path change. Every kept optimization\nmust run the full baseline + smoke scoreboard for raw storage, E2E workflows,\nand storage accounting. 
Do not record only the row that the optimization was\nexpected to improve; the point of the log is to catch regressions and tradeoffs\nacross the whole tracked-state workflow.\n\n```text\n## Optimization N: <short name>\n\nCommit: <hash> or uncommitted on <hash>\n\nHypothesis:\n  What physical layout or access-path change is being tested?\n\nRaw Storage API scoreboard:\n  Include all raw storage rows for SQLite and RocksDB at 100 and 1k.\n\nE2E Workflow scoreboard:\n  Include all CRUD, create_version, and merge_version rows at 100 and 1k.\n  Include raw SQLite reference where the operation has one.\n\nStorage scoreboard:\n  Include all workflow storage rows for raw SQLite, Lix SQLite, and Lix RocksDB.\n\nDecision:\n  Keep, revert, or follow-up.\n```\n\n## Optimization 1: Batched Committed State-FK Delete Validation\n\nChange:\n\n```text\nGroup committed state-surface FK delete checks by source schema/domain and\nscan the source rows once per group instead of once per tombstone.\n```\n\nCommands:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_crud -- 'json_pointer_crud/raw_sqlite/baseline|json_pointer_crud/raw_sqlite/smoke|json_pointer_crud/lix_sqlite/baseline|json_pointer_crud/lix_sqlite/smoke|json_pointer_crud/lix_rocksdb/baseline|json_pointer_crud/lix_rocksdb/smoke'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_crud -- 'json_pointer_crud/raw_storage_sqlite/baseline|json_pointer_crud/raw_storage_sqlite/smoke|json_pointer_crud/raw_storage_rocksdb/baseline|json_pointer_crud/raw_storage_rocksdb/smoke'\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\n```\n\nRaw Storage API scoreboard:\n\n| operation                          | SQLite 100 | SQLite 1k | SQLite x | RocksDB 100 | RocksDB 1k | RocksDB x |\n| ---------------------------------- | ---------: | --------: | -------: | ----------: | ---------: | --------: |\n| 
`write_root_all_rows`              |  2.6462 ms | 6.5703 ms |    2.48x |   3.5919 ms |  6.5048 ms |     1.81x |\n| `get_many_exact_keys`              |  1.5251 ms | 4.5971 ms |    3.01x |   1.1383 ms |  3.9691 ms |     3.49x |\n| `get_many_missing_keys`            |   877.1 us | 2.7680 ms |    3.16x |    521.3 us |  1.5815 ms |     3.03x |\n| `exists_many_exact_keys`           |  1.6827 ms | 4.8231 ms |    2.87x |   1.1503 ms |  5.8704 ms |     5.10x |\n| `scan_keys_only`                   |   964.5 us | 3.4420 ms |    3.57x |    650.7 us |  3.4885 ms |     5.36x |\n| `scan_headers_only`                |   875.2 us | 3.1618 ms |    3.61x |    651.9 us |  3.1302 ms |     4.80x |\n| `scan_full_rows`                   |  1.3692 ms | 5.3299 ms |    3.89x |   1.1031 ms |  5.5565 ms |     5.04x |\n| `prefix_scan_schema`               |  1.3468 ms | 5.0666 ms |    3.76x |   1.1146 ms |  5.5296 ms |     4.96x |\n| `prefix_scan_schema_file_null`     |  1.3722 ms | 8.2142 ms |    5.99x |   1.1041 ms |  4.8788 ms |     4.42x |\n| `write_delta_10pct_updates`        |  1.0960 ms | 3.5204 ms |    3.21x |    584.8 us |  2.2813 ms |     3.90x |\n| `write_tombstone_10pct_deletes`    |   866.7 us | 3.2042 ms |    3.70x |    851.8 us |  1.4712 ms |     1.73x |\n| `changed_keys_update_10pct`        |  2.4239 ms | 82.554 ms |   34.06x |   2.0819 ms |  64.202 ms |    30.84x |\n| `changed_keys_delta_chain_10x1pct` |  1.5810 ms | 12.378 ms |    7.83x |   1.2480 ms |  8.6607 ms |     6.94x |\n| `materialize_delta_chain_10x1pct`  |  1.1211 ms | 6.4255 ms |    5.73x |    728.2 us |  2.7755 ms |     3.81x |\n\nE2E workflow scoreboard:\n\n| axis         | operation                                  | raw SQLite 100 | raw SQLite 1k | raw x | Lix SQLite 100 | Lix SQLite 1k | Lix SQLite x | Lix RocksDB 100 | Lix RocksDB 1k | Lix RocksDB x |\n| ------------ | ------------------------------------------ | -------------: | ------------: | ----: | -------------: | ------------: | -----------: | 
--------------: | -------------: | ------------: |\n| CRUD         | `insert_all_rows`                          |      1.6556 ms |     2.8391 ms | 1.71x |      19.763 ms |     376.49 ms |       19.05x |       20.686 ms |      310.38 ms |        15.00x |\n| CRUD         | `select_all_path_value`                    |      0.9165 ms |     1.5530 ms | 1.69x |      5.4193 ms |     12.881 ms |        2.38x |       7.0629 ms |      11.096 ms |         1.57x |\n| CRUD         | `select_one_by_pk`                         |      0.8185 ms |     1.6028 ms | 1.96x |      2.0424 ms |     5.9130 ms |        2.90x |       2.2091 ms |      3.6054 ms |         1.63x |\n| CRUD         | `update_all_values`                        |      0.9650 ms |     1.8194 ms | 1.89x |      7.9570 ms |     32.141 ms |        4.04x |       7.5921 ms |      21.915 ms |         2.89x |\n| CRUD         | `update_one_by_pk`                         |      0.8041 ms |     1.1965 ms | 1.49x |      4.4163 ms |     10.185 ms |        2.31x |       3.5553 ms |      7.6278 ms |         2.15x |\n| CRUD         | `delete_all_rows`                          |      0.8595 ms |     1.2117 ms | 1.41x |      8.3845 ms |     32.180 ms |        3.84x |       8.1399 ms |      23.674 ms |         2.91x |\n| CRUD         | `delete_one_by_pk`                         |      0.7526 ms |     1.4439 ms | 1.92x |      3.5757 ms |     10.669 ms |        2.98x |       3.6691 ms |      8.2295 ms |         2.24x |\n| Branch       | `create_version`                           |            n/a |           n/a |   n/a |      3.4980 ms |     8.0771 ms |        2.31x |       4.5554 ms |      5.5557 ms |         1.22x |\n| Merge / diff | `merge_version_fast_forward_10pct_updates` |            n/a |           n/a |   n/a |      47.167 ms |     987.99 ms |       20.95x |       44.720 ms |      953.19 ms |        21.31x |\n| Merge / diff | `merge_version_divergent_10pct_updates`    |            n/a |           n/a |   n/a |      77.340 ms |  
    2.0947 s |       27.08x |       110.32 ms |       2.7013 s |        24.49x |\n\nStorage scoreboard:\n\n| backend / workflow                     | 100 bytes | 100 bytes/row |  1k bytes | 1k bytes/row | bytes x |\n| -------------------------------------- | --------: | ------------: | --------: | -----------: | ------: |\n| raw SQLite / inserted                  |   936,584 |       9,365.8 | 1,692,456 |      1,692.5 |   1.81x |\n| Lix SQLite / inserted                  |   337,656 |       3,376.6 | 1,075,136 |      1,075.1 |   3.18x |\n| Lix SQLite / after create_version      |   345,896 |       3,459.0 | 1,087,496 |      1,087.5 |   3.14x |\n| Lix SQLite / after fast-forward merge  |   593,096 |       5,931.0 | 5,287,488 |      5,287.5 |   8.92x |\n| Lix SQLite / after divergent merge     | 1,272,896 |      12,729.0 | 5,615,168 |      5,615.2 |   4.41x |\n| Lix RocksDB / inserted                 |   280,077 |       2,800.8 |   993,888 |        993.9 |   3.55x |\n| Lix RocksDB / after create_version     |   281,943 |       2,819.4 |   995,754 |        995.8 |   3.53x |\n| Lix RocksDB / after fast-forward merge |   298,593 |       2,985.9 | 1,160,310 |      1,160.3 |   3.89x |\n| Lix RocksDB / after divergent merge    |   337,030 |       3,370.3 | 1,528,244 |      1,528.2 |   4.53x |\n\nResult:\n\n```text\ndelete_all_rows/1k improved from 2.4630 s to 32.180 ms on Lix SQLite and\nfrom 1.7949 s to 23.674 ms on Lix RocksDB. The profile bottleneck moved away\nfrom repeated committed state-FK source scans; inserts and merge/diff remain\nthe dominant physical-layout targets.\n```\n\n## Optimization 2: Batched Committed Insert Identity Validation\n\nCommit: uncommitted on current branch\n\nHypothesis:\n\n```text\nINSERT spends most of its time checking whether each staged identity already\nexists in committed live state. 
Batch those checks by exact domain/schema and\nscan the committed rows once per group instead of once per inserted row.\n```\n\nChange:\n\n```text\nBuild the pending staged identity set once, group committed insert identity\nchecks by `(Domain, schema_key)`, scan committed rows once per group with the\nfull entity-id batch, and test the returned rows in memory.\n```\n\nRaw Storage API scoreboard:\n\n| operation                          | SQLite 100 | SQLite 1k | SQLite x | RocksDB 100 | RocksDB 1k | RocksDB x |\n| ---------------------------------- | ---------: | --------: | -------: | ----------: | ---------: | --------: |\n| `write_root_all_rows`              |  2.8488 ms | 6.8410 ms |    2.40x |   4.0790 ms |  6.9094 ms |     1.69x |\n| `get_many_exact_keys`              |  1.5314 ms | 4.7728 ms |    3.12x |   1.3573 ms |  4.3056 ms |     3.17x |\n| `get_many_missing_keys`            |   835.3 us | 2.5124 ms |    3.01x |    846.8 us |  1.6843 ms |     1.99x |\n| `exists_many_exact_keys`           |  1.4762 ms | 4.7909 ms |    3.25x |   1.5038 ms |  4.3722 ms |     2.91x |\n| `scan_keys_only`                   |   976.9 us | 3.1554 ms |    3.23x |    881.8 us |  2.4408 ms |     2.77x |\n| `scan_headers_only`                |  1.1819 ms | 3.7566 ms |    3.18x |   1.1618 ms |  2.4719 ms |     2.13x |\n| `scan_full_rows`                   |  2.0658 ms | 5.9674 ms |    2.89x |   1.3391 ms |  4.6009 ms |     3.44x |\n| `prefix_scan_schema`               |  1.5120 ms | 5.4635 ms |    3.61x |   1.5115 ms |  3.8891 ms |     2.57x |\n| `prefix_scan_schema_file_null`     |  1.6492 ms | 4.7101 ms |    2.86x |   1.6867 ms |  3.9062 ms |     2.32x |\n| `write_delta_10pct_updates`        |  1.3705 ms | 3.2507 ms |    2.37x |    782.8 us |  1.9059 ms |     2.43x |\n| `write_tombstone_10pct_deletes`    |  1.2939 ms | 3.0927 ms |    2.39x |    884.6 us |  1.8954 ms |     2.14x |\n| `changed_keys_update_10pct`        |  2.8615 ms | 78.074 ms |   27.28x |   2.2180 ms |  71.918 ms | 
   32.42x |\n| `changed_keys_delta_chain_10x1pct` |  1.7743 ms | 13.054 ms |    7.36x |   1.5709 ms |  9.6579 ms |     6.15x |\n| `materialize_delta_chain_10x1pct`  |  1.6561 ms | 6.2290 ms |    3.76x |    903.8 us |  3.2262 ms |     3.57x |\n\nE2E workflow scoreboard:\n\n| axis         | operation                                  | raw SQLite 100 | raw SQLite 1k | raw x | Lix SQLite 100 | Lix SQLite 1k | Lix SQLite x | Lix RocksDB 100 | Lix RocksDB 1k | Lix RocksDB x |\n| ------------ | ------------------------------------------ | -------------: | ------------: | ----: | -------------: | ------------: | -----------: | --------------: | -------------: | ------------: |\n| CRUD         | `insert_all_rows`                          |      1.4727 ms |     2.9440 ms | 2.00x |      14.928 ms |     61.763 ms |        4.14x |       15.420 ms |      56.365 ms |         3.66x |\n| CRUD         | `select_all_path_value`                    |       795.3 us |     1.5314 ms | 1.93x |      5.4509 ms |     13.638 ms |        2.50x |       5.2695 ms |      12.294 ms |         2.33x |\n| CRUD         | `select_one_by_pk`                         |       778.4 us |     1.9265 ms | 2.47x |      2.0537 ms |     6.4010 ms |        3.12x |       2.2049 ms |      4.3261 ms |         1.96x |\n| CRUD         | `update_all_values`                        |       829.4 us |     1.8155 ms | 2.19x |      8.4927 ms |     31.990 ms |        3.77x |       8.1311 ms |      22.828 ms |         2.81x |\n| CRUD         | `update_one_by_pk`                         |       914.5 us |     1.4295 ms | 1.56x |      4.0395 ms |     10.550 ms |        2.61x |       4.2523 ms |      7.2002 ms |         1.69x |\n| CRUD         | `delete_all_rows`                          |       874.4 us |     1.4128 ms | 1.62x |      8.9559 ms |     36.544 ms |        4.08x |       8.5542 ms |      26.154 ms |         3.06x |\n| CRUD         | `delete_one_by_pk`                         |       871.7 us |     1.3901 ms | 1.59x | 
     3.9506 ms |     12.560 ms |        3.18x |       3.8824 ms |      8.3111 ms |         2.14x |\n| Branch       | `create_version`                           |            n/a |           n/a |   n/a |      3.8345 ms |     9.7628 ms |        2.55x |       3.6737 ms |      5.6426 ms |         1.54x |\n| Merge / diff | `merge_version_fast_forward_10pct_updates` |            n/a |           n/a |   n/a |      50.797 ms |      1.2370 s |       24.35x |       41.834 ms |      962.60 ms |        23.01x |\n| Merge / diff | `merge_version_divergent_10pct_updates`    |            n/a |           n/a |   n/a |      80.102 ms |      2.4801 s |       30.96x |       81.443 ms |       1.9468 s |        23.90x |\n\nStorage scoreboard:\n\n| backend / workflow                     | 100 bytes | 100 bytes/row |  1k bytes | 1k bytes/row | bytes x |\n| -------------------------------------- | --------: | ------------: | --------: | -----------: | ------: |\n| raw SQLite / inserted                  |   936,584 |       9,365.8 | 1,692,456 |      1,692.5 |   1.81x |\n| Lix SQLite / inserted                  |   337,656 |       3,376.6 | 1,075,136 |      1,075.1 |   3.18x |\n| Lix SQLite / after create_version      |   345,896 |       3,459.0 | 1,087,496 |      1,087.5 |   3.14x |\n| Lix SQLite / after fast-forward merge  |   588,976 |       5,889.8 | 5,291,608 |      5,291.6 |   8.98x |\n| Lix SQLite / after divergent merge     | 1,268,776 |      12,687.8 | 5,619,288 |      5,619.3 |   4.43x |\n| Lix RocksDB / inserted                 |   280,077 |       2,800.8 |   993,888 |        993.9 |   3.55x |\n| Lix RocksDB / after create_version     |   281,943 |       2,819.4 |   995,754 |        995.8 |   3.53x |\n| Lix RocksDB / after fast-forward merge |   298,593 |       2,985.9 | 1,157,131 |      1,157.1 |   3.88x |\n| Lix RocksDB / after divergent merge    |   337,030 |       3,370.3 | 1,528,244 |      1,528.2 |   4.53x |\n\nResult:\n\n```text\ninsert_all_rows/1k improved from 376.49 ms 
to 61.763 ms on Lix SQLite and\nfrom 310.38 ms to 56.365 ms on Lix RocksDB. Raw Storage API timings and storage\naccounting stay within expected run-to-run noise because the change is above\nthe storage primitive layer. Merge/diff remains the dominant 1k workflow cost.\n```\n\nDecision:\n\n```text\nKeep. This removes an accidental per-row committed-state lookup from bulk\ninserts without changing the validation semantics.\n```\n"
  },
  {
    "path": "optimization_log8.md",
    "content": "# Optimization Log 8: JSON Pointer Physical Layout Decision Log\n\nGoal: nail the physical layout Lix uses for tracked logic:\n`packages/engine/src/tracked_state`, `packages/engine/src/commit_store`, and\nthe backend/storage APIs they require.\n\nLix has not shipped. Optimize for the best-shaped physical API, storage layout,\nand abstraction boundaries now. Prefer clean refactors over bolt-on fixes,\nadapter layers, compatibility shims, or special cases. If a change keeps a\nbackwards shim, the entry must explicitly call that out and justify why it is\ntemporary.\n\nThe preferred refactor mode is:\n\n```text\nfirst make the storage shape correct;\nthen let the Rust compiler reveal upstream code that must move to the new API.\n```\n\nIt is acceptable for an intermediate refactor entry to leave the tree\ntemporarily non-compiling if the entry is clearly marked as a physical-layout\ncutover step and the next step is compiler-driven migration. Do not hide old\nbehavior behind adapter layers just to keep call sites compiling.\n\nThe desired end state is good abstractions, not a faster pile of special-case\npaths. If the current abstraction is the bottleneck, replace it cleanly.\n\nNorth-star target:\n\n```text\nLarge logical write batches through the tracked-state/commit-store path should\nleave enough time budget for the logical layer above storage.\n```\n\nPhysical storage budget:\n\n```text\nFor 1k-operation physical rows, Lix SQLite and Lix RocksDB should be <= 1.5x\nraw SQLite for equivalent writes, exact reads, and scans.\n\nRaw SQLite is not a bare-metal KV baseline: it still goes through SQL statement\nexecution, cursor/seek machinery, and SELECT/INSERT/UPDATE/DELETE paths. 
Lix\nphysical rows use direct storage access, so exceeding this budget means Lix is\nlikely paying avoidable layout, packing, materialization, batching, or backend\nabstraction costs.\n\nFor storage size, post-vacuum Lix bytes/row should be <= 2x post-vacuum raw\nSQLite bytes/row for equivalent tracked storage states. Extra bytes beyond that\nmust be explained by durable tracked history, commit facts, merge/conflict\nfacts, or retained delta structure before a size-sensitive change is kept.\n```\n\nThis log is not for SQL-provider ergonomics. SQL and CRUD benchmarks may point\nat problems, but every kept optimization must be explained at the physical\nstorage boundary: backend operations, commit packs, delta packs, projection\nmaterialization, changed-key discovery, exact reads, scans, batching,\nzero-copy/low-copy behavior, or bytes.\n\nCriterion output is evidence, not the whole argument. Treat noise carefully:\nprefer structural wins that also move timings, and reject changes that only win\none noisy row while worsening the physical design.\n\n## Current State\n\n```text\nbranch: physical-layout-manual\nhead:   11ff3a2e\ndate:   2026-05-10\nstatus: uncommitted benchmark/log setup\n```\n\nSetup changes for this log:\n\n- Added `packages/engine/benches/json_pointer_physical/main.rs`.\n- Added the `json_pointer_physical` bench target to\n  `packages/engine/Cargo.toml`.\n- Added a raw SQLite reference group inside the physical benchmark so the\n  SQLite-relative budgets have a measured baseline.\n- Kept the existing JSON-pointer storage fixture test as the bytes-on-disk\n  guardrail.\n\n## Layout Scope\n\nIn scope:\n\n```text\ncommit_store canonical commit/change physical layout\ntracked_state delta-pack layout\ntracked_state projection/root materialization policy\ntracked_state exact-key lookup\ntracked_state scan/projection behavior\nchanged-key discovery for diff/merge\nbackend get_many / exists_many / prefix scan / write batch APIs\nbackend zero-copy or low-copy 
read/write boundaries\nbackend transaction/write-batch semantics shared by SQLite and RocksDB\nbytes on disk after insert/version/merge workflows\n```\n\nOut of scope unless a physical benchmark proves otherwise:\n\n```text\nSQL/provider routing\nDataFusion planning overhead\nper-statement UPDATE ergonomics\napplication-level batching above tracked_state/commit_store\n```\n\nRule:\n\n```text\nIf a hot E2E benchmark points through SQL first, map it to\njson_pointer_physical before optimizing. Do not make SQL-layer changes in this\nlog unless the physical rows are already inside budget and the remaining time\nis clearly above storage.\n```\n\nTracked logic is the product path and the default mode in Lix. Optimizations\nmust make tracked logic faster; they must not avoid tracked machinery by moving\nworkloads, benchmarks, fixtures, changed-key logic, commit-store logic, or\ntracked-state behavior into untracked code.\n\n## Refactor Policy\n\nAllowed:\n\n```text\nchange the storage/backend API when the current API forces bad physical layout;\nadd or reshape backend/storage APIs, including namespacing-oriented APIs, when\n  the shape materially improves both SQLite and RocksDB;\nchange tracked_state and commit_store layouts when the new layout is cleaner;\nbreak old call sites and let the compiler drive the migration;\ndelete legacy abstractions that only exist to preserve pre-ship compatibility;\nreplace one-off fixes with a shared abstraction when the problem is systemic;\nremove bolt-on fast paths once the clean abstraction covers the same behavior.\n```\n\nRequired when changing storage/backend APIs:\n\n```text\nstate the physical problem the old API caused;\nshow how SQLite and RocksDB can both implement the new shape without hidden\n  per-key loops or full-value hydration;\nshow that both SQLite and RocksDB improve materially, or explain why the API\n  change is still required for a later shared layout win;\npreserve transaction atomicity, durability, and 
hash/integrity checks;\nprefer batched, streaming, prefix/range, and projection-aware operations;\navoid copy-heavy boundaries unless the entry explicitly measures and accepts\n  the cost;\nexplain how the layout can migrate again later without rewriting the whole\n  logical layer.\n```\n\nNot allowed:\n\n```text\nSQLite-only wins that silently regress RocksDB;\nRocksDB-only wins that silently regress SQLite;\nbenchmark rewrites that change what is being measured;\nworkarounds scoped only to the current hot row when the abstraction is wrong;\nbolt-on fast paths that leave the bad abstraction in place;\nadapter layers whose main purpose is avoiding the clean refactor;\nmoving tracked logic, benchmarks, or benchmark workload into untracked paths;\nshifting cost out of tracked_state/commit_store to avoid tracked machinery;\nforcing full materialization to avoid designing the right index/layout;\nbackwards shims unless the entry explicitly marks and justifies them.\n```\n\n## Benchmark Surface\n\nBenchmark target:\n\n```text\npackages/engine/benches/json_pointer_physical/main.rs\n```\n\nCommand:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- baseline\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 
smoke\n```\n\nGroups:\n\n```text\njson_pointer_physical/raw_sqlite/baseline\njson_pointer_physical/raw_sqlite/smoke\njson_pointer_physical/sqlite/baseline\njson_pointer_physical/sqlite/smoke\njson_pointer_physical/rocksdb/baseline\njson_pointer_physical/rocksdb/smoke\n```\n\nRows:\n\n```text\nwrite_root_all_rows/{100,1k}\nget_many_exact_keys/{100,1k}\nget_many_missing_keys/{100,1k}\nexists_many_exact_keys/{100,1k}\nscan_keys_only/{100,1k}\nscan_headers_only/{100,1k}\nscan_full_rows/{100,1k}\nprefix_scan_schema/{100,1k}\nprefix_scan_schema_file_null/{100,1k}\nwrite_delta_10pct_updates/{100,1k}\nwrite_tombstone_10pct_deletes/{100,1k}\nchanged_keys_update_10pct/{100,1k}\nchanged_keys_delta_chain_10x1pct/{100,1k}\nmaterialize_delta_chain_10x1pct/{100,1k}\n```\n\nFixture:\n\n```text\nsource: packages/engine/benches/fixtures/pnpm-lock.fixture.json\nshape: flattened JSON nodes, including containers and leaves\nidentity: JSON pointer path\nvalue: JSON node value\nfile_id: NULL\nsizes:\n  baseline = 100 rows\n  smoke = 1,000 rows\n```\n\nWhy this fixture:\n\n```text\nIt mirrors plugin-json-v2 output: many small entities, stable path identities,\ncontainer rows, leaf rows, and realistic nested JSON values.\n```\n\nBenchmark-surface intent:\n\n```text\nThis benchmark surface should stabilize the physical layout before logical-layer\noptimization begins. New physical rows should be added only when logical work\nreveals a genuinely new tracked access pattern, not to move the goalposts for\nan existing optimization.\n```\n\n## Raw SQLite Reference\n\nThe raw SQLite group is independent of Lix. 
It answers: what does plain\nprimary-key physical storage cost for the same flattened JSON-pointer rows?\n\nShape:\n\n```text\ndatabase: tempfile SQLite\ntable: json_pointer(path TEXT PRIMARY KEY, value TEXT) WITHOUT ROWID\npragmas: journal_mode=WAL, synchronous=NORMAL, temp_store=MEMORY,\n         foreign_keys=ON\nwrite rows: INSERT/UPDATE/DELETE by path in one transaction\nexact reads: prepared point lookups by path\nscans: ordered path/value scans over the table\n```\n\nThe raw SQLite prefix-scan rows are a fixture-equivalent approximation: the\nfixture uses one schema and `file_id = NULL`, so schema/file scope maps to the\nwhole table.\n\nReference interpretation:\n\n```text\nRows near raw SQLite are close to backend speed.\nRows above 1.5x raw SQLite are likely dominated by Lix packing, projection,\nmaterialization, hashing, diff semantics, or backend abstraction overhead.\n```\n\n## Success Criteria\n\nEvery kept optimization must name one primary axis:\n\n```text\nwrite\nexact-read\nscan\ndiff/changed-key\ndelta-chain materialization\nstorage-size\nbackend API\n```\n\nThe primary axis should improve materially. Non-target axes are guardrails.\n\nEvery kept optimization must also name its physical shape:\n\n```text\ncanonical fact layout\nread index / projection layout\ndelta-pack layout\nchanged-key index\nbackend batch/read/write API\nmaterialization policy\ncopy/serialization boundary\n```\n\nAn optimization is not kept merely because one Criterion row improves. It must\nbe a better shape for the tracked storage system and must not create hidden\ncosts such as unbatched IO, accidental full-value hydration, extra copies across\nthe backend boundary, or backend-specific behavior that another supported\nbackend cannot implement well.\n\n### 1.5x SQLite Runtime Budget\n\nThis is an envelope, not an average. 
Passing writes does not compensate for\nfailing reads, and passing reads does not compensate for failing writes.\n\nWrite rows:\n\n```text\ncompare json_pointer_physical/{sqlite,rocksdb}/smoke/write_root_all_rows/1k\n  to json_pointer_physical/raw_sqlite/smoke/write_root_all_rows/1k\ncompare json_pointer_physical/{sqlite,rocksdb}/smoke/write_delta_10pct_updates/1k\n  to json_pointer_physical/raw_sqlite/smoke/write_delta_10pct_updates/1k\ncompare json_pointer_physical/{sqlite,rocksdb}/smoke/write_tombstone_10pct_deletes/1k\n  to json_pointer_physical/raw_sqlite/smoke/write_tombstone_10pct_deletes/1k\n```\n\nExact-read rows:\n\n```text\ncompare json_pointer_physical/{sqlite,rocksdb}/smoke/get_many_exact_keys/1k\n  to json_pointer_physical/raw_sqlite/smoke/get_many_exact_keys/1k\ncompare json_pointer_physical/{sqlite,rocksdb}/smoke/get_many_missing_keys/1k\n  to json_pointer_physical/raw_sqlite/smoke/get_many_missing_keys/1k\ncompare json_pointer_physical/{sqlite,rocksdb}/smoke/exists_many_exact_keys/1k\n  to json_pointer_physical/raw_sqlite/smoke/exists_many_exact_keys/1k\n```\n\nScan rows:\n\n```text\ncompare json_pointer_physical/{sqlite,rocksdb}/smoke/scan_keys_only/1k\n  to json_pointer_physical/raw_sqlite/smoke/scan_keys_only/1k\ncompare json_pointer_physical/{sqlite,rocksdb}/smoke/scan_headers_only/1k\n  to json_pointer_physical/raw_sqlite/smoke/scan_headers_only/1k\ncompare json_pointer_physical/{sqlite,rocksdb}/smoke/scan_full_rows/1k\n  to json_pointer_physical/raw_sqlite/smoke/scan_full_rows/1k\ncompare json_pointer_physical/{sqlite,rocksdb}/smoke/prefix_scan_schema/1k\n  to json_pointer_physical/raw_sqlite/smoke/prefix_scan_schema/1k\ncompare json_pointer_physical/{sqlite,rocksdb}/smoke/prefix_scan_schema_file_null/1k\n  to json_pointer_physical/raw_sqlite/smoke/prefix_scan_schema_file_null/1k\n```\n\nChanged-key and delta-chain rows do not have a clean raw SQLite equivalent.\nJudge them by scaling shape:\n\n```text\nchanged_keys_update_10pct:\n  should 
scale with changed keys, not full state hydration.\n\nchanged_keys_delta_chain_10x1pct:\n  should scale with changed keys and chain depth, not repeated broad\n  materialization of full state.\n\nmaterialize_delta_chain_10x1pct:\n  should avoid repeatedly decoding unrelated delta-pack content.\n```\n\n### Regression Budgets\n\n```text\n<= 5% slower:\n  treat as possible Criterion noise unless repeated or structurally explained.\n\n5-15% slower:\n  acceptable only with a clear primary-axis win, a structural explanation, and\n  no crossed 1.5x runtime budget.\n\n> 15% slower:\n  fail unless explicitly accepted as a layout tradeoff.\n\nNo change may make an axis that passes the 1.5x runtime budget start failing it.\n```\n\nStorage guardrail:\n\n```text\nPost-vacuum bytes after inserted/create_version/fast-forward/divergent merge\nshould stay <= 2x post-vacuum raw SQLite bytes/row for equivalent tracked\nstorage states. Extra bytes must remain explainable. A speedup that causes\nunexplained storage growth is not kept.\n```\n\n## Storage Fixture Guardrail\n\nCommand:\n\n```sh\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\n```\n\nRows to report when a change can affect storage size:\n\n```text\nraw SQLite / inserted\nLix SQLite / inserted\nLix SQLite / after create_version\nLix SQLite / after fast-forward merge\nLix SQLite / after divergent merge\nLix RocksDB / inserted\nLix RocksDB / after create_version\nLix RocksDB / after fast-forward merge\nLix RocksDB / after divergent merge\n```\n\n## Agent Rules\n\n1. Optimize physical layout and backend APIs, not SQL surface shape.\n2. Prefer clean, compiler-driven refactors and good abstractions over bolt-on\n   fixes, adapter layers, or backwards shims. If a shim is kept, flag it.\n3. Optimize one primary axis at a time and report guardrails for the other\n   axes.\n4. Compare against raw SQLite where there is an equivalent row.\n5. 
Report SQLite and RocksDB physical rows before keeping backend-sensitive\n   changes.\n6. Prefer explicit batched APIs over hidden loops of single-key operations.\n7. Backend/storage API changes are allowed when they materially improve both\n   SQLite and RocksDB, including namespacing-oriented APIs.\n8. Do not improve one backend by silently regressing the other.\n9. Do not change benchmark measurements to make a change look better.\n10. Do not move tracked logic, fixtures, benchmarks, or benchmark workload into\n    untracked paths. Optimize tracked logic itself.\n11. Do not shift cost out of tracked_state/commit_store to bypass tracked\n    machinery.\n12. Do not keep bolt-on fast paths when a clean abstraction should replace the\n    old shape.\n13. Do not improve writes by forcing broad projection-root materialization\n    unless the entry is explicitly a materialization-policy experiment.\n14. Do not make key/header-only scans hydrate full JSON values.\n15. Do not introduce avoidable copies at the backend boundary without measuring\n    and justifying them.\n16. Do not remove hash verification, transaction atomicity, or durability\n    semantics to win a benchmark.\n17. Document rejected experiments if they teach something about the cost model.\n18. 
Append one compact entry per optimization.\n\n## Baseline\n\nDate: 2026-05-10\n\nCommit: uncommitted on `11ff3a2e`\n\nChange: added the `json_pointer_physical` benchmark target and raw SQLite\nphysical reference group.\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- get_many_exact_keys/100\n```\n\nResult: passed.\n\nAccepted baseline run:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- baseline\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- smoke\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\n```\n\nResult:\n\n```text\npassed\n```\n\n### Raw SQLite / Lix Smoke Check\n\nCommand:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- get_many_exact_keys/100\n```\n\n| backend     | group                                       | row                       |       low |    median |      high |\n| ----------- | ------------------------------------------- | ------------------------- | --------: | --------: | --------: |\n| raw SQLite  | `json_pointer_physical/raw_sqlite/baseline` | `get_many_exact_keys/100` | 879.06 us | 924.31 us | 1.0023 ms |\n| Lix SQLite  | `json_pointer_physical/sqlite/baseline`     | `get_many_exact_keys/100` | 1.2941 ms | 1.3683 ms | 1.4479 ms |\n| Lix RocksDB | `json_pointer_physical/rocksdb/baseline`    | `get_many_exact_keys/100` | 1.0164 ms | 1.0507 ms | 1.0952 ms |\n\nInterpretation:\n\n```text\nThe benchmark wiring works and the raw SQLite reference group appears beside\nthe Lix physical backends.\n\nAt 100 rows, exact reads are near the runtime envelope for both backends.\nThis is only a smoke check. 
It is not the accepted baseline for optimization.\nThe accepted baseline must include the 1k smoke rows.\n```\n\n### Required Baseline Command\n\nBefore the first optimization entry, run:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- baseline\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- smoke\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\n```\n\n### Baseline Scoreboard\n\nThe 1k smoke rows are the accepted optimization baseline.\n\n#### 1.5x Runtime Budget Rows, 1k\n\n| axis       | row                                | raw SQLite median | Lix SQLite median | SQLite ratio | Lix RocksDB median | RocksDB ratio | status                  |\n| ---------- | ---------------------------------- | ----------------: | ----------------: | -----------: | -----------------: | ------------: | ----------------------- |\n| write      | `write_root_all_rows/1k`           |         2.4583 ms |         6.8347 ms |        2.78x |          6.1430 ms |         2.50x | SQLite and RocksDB fail |\n| write      | `write_delta_10pct_updates/1k`     |         1.5396 ms |         2.6272 ms |        1.71x |          1.3950 ms |         0.91x | SQLite fail             |\n| write      | `write_tombstone_10pct_deletes/1k` |         1.4156 ms |         2.4321 ms |        1.72x |          1.3632 ms |         0.96x | SQLite fail             |\n| exact-read | `get_many_exact_keys/1k`           |         2.2859 ms |         4.6055 ms |        2.01x |          3.4668 ms |         1.52x | SQLite and RocksDB fail |\n| exact-read | `get_many_missing_keys/1k`         |         13.931 ms |         2.2822 ms |        0.16x |          1.4138 ms |         0.10x | pass                    |\n| exact-read | `exists_many_exact_keys/1k`        |         2.0545 ms |         4.6519 ms |        2.26x |          3.4720 ms |         1.69x | SQLite and 
RocksDB fail |\n| scan       | `scan_keys_only/1k`                |         1.2374 ms |         3.2542 ms |        2.63x |          2.0822 ms |         1.68x | SQLite and RocksDB fail |\n| scan       | `scan_headers_only/1k`             |         1.2378 ms |         3.0692 ms |        2.48x |          2.0012 ms |         1.62x | SQLite and RocksDB fail |\n| scan       | `scan_full_rows/1k`                |         1.2920 ms |         4.3792 ms |        3.39x |          3.1884 ms |         2.47x | SQLite and RocksDB fail |\n| scan       | `prefix_scan_schema/1k`            |         1.2514 ms |         4.4623 ms |        3.57x |          3.2190 ms |         2.57x | SQLite and RocksDB fail |\n| scan       | `prefix_scan_schema_file_null/1k`  |         1.3817 ms |         4.3889 ms |        3.18x |          3.1497 ms |         2.28x | SQLite and RocksDB fail |\n\n#### Diff / Materialization Shape Rows\n\n| row                                   | Lix SQLite median | Lix RocksDB median | expected shape                           | status  |\n| ------------------------------------- | ----------------: | -----------------: | ---------------------------------------- | ------- |\n| `changed_keys_update_10pct/1k`        |         68.399 ms |          67.192 ms | scales with changed keys                 | hotspot |\n| `changed_keys_delta_chain_10x1pct/1k` |         10.401 ms |          8.7436 ms | scales with changed keys and chain depth | watch   |\n| `materialize_delta_chain_10x1pct/1k`  |         5.7651 ms |          2.7741 ms | avoids unrelated delta-pack decoding     | watch   |\n\n#### Storage Fixture\n\n| backend / state                        | bytes on disk | bytes/row | status                                                  |\n| -------------------------------------- | ------------: | --------: | ------------------------------------------------------- |\n| raw SQLite / inserted                  |       1692456 |    1692.5 | baseline                                   
             |\n| Lix SQLite / inserted                  |       1075136 |    1075.1 | baseline                                                |\n| Lix SQLite / after create_version      |       1087496 |    1087.5 | baseline                                                |\n| Lix SQLite / after fast-forward merge  |       5287488 |    5287.5 | growth to explain before keeping size-sensitive changes |\n| Lix SQLite / after divergent merge     |       5615168 |    5615.2 | growth to explain before keeping size-sensitive changes |\n| Lix RocksDB / inserted                 |        993900 |     993.9 | baseline                                                |\n| Lix RocksDB / after create_version     |        995766 |     995.8 | baseline                                                |\n| Lix RocksDB / after fast-forward merge |       1157143 |    1157.1 | baseline                                                |\n| Lix RocksDB / after divergent merge    |       1528256 |    1528.3 | baseline                                                |\n\n## Entries\n\nAppend kept wins and rejected experiments below this line.\n\n## Entry Template\n\nCopy this template for every optimization.\n\n```text\none kept win = one appended log entry + code changes measured by the entry\n```\n\n## Optimization N: <short name>\n\nCommit: `<hash>` or `uncommitted on <hash>`\n\nTarget axis:\n\n```text\nwrite | exact-read | scan | diff/changed-key | delta-chain materialization\nstorage-size | backend API\n```\n\nBackend/API scope:\n\n```text\nnone | backend API plumbing | backend implementation | layout behavior | mixed\n```\n\nPhysical shape:\n\n```text\ncanonical fact layout | read index / projection layout | delta-pack layout\nchanged-key index | backend batch/read/write API | materialization policy\ncopy/serialization boundary\n```\n\nRefactor stance:\n\n```text\nclean cut | compiler-driven migration | temporary shim | local implementation only\n```\n\nChange:\n\n```text\nWhat changed 
physically?\nWhat old shape/API is being removed?\nWhat invariant is preserved?\nWhy should this help?\nWhy is this a better whole-system abstraction than a workaround?\nDoes this create or remove copies across the backend boundary?\n```\n\n### Baseline Delta\n\nCompare against the log8 baseline and, if different, the immediately previous\nkept entry.\n\n#### 1.5x Runtime Budget Rows\n\n| axis       | row                                | raw SQLite median | before median | after median | ratio after/raw | delta | status |\n| ---------- | ---------------------------------- | ----------------: | ------------: | -----------: | --------------: | ----: | ------ |\n| write      | `write_root_all_rows/1k`           |                   |               |              |                 |       |        |\n| write      | `write_delta_10pct_updates/1k`     |                   |               |              |                 |       |        |\n| write      | `write_tombstone_10pct_deletes/1k` |                   |               |              |                 |       |        |\n| exact-read | `get_many_exact_keys/1k`           |                   |               |              |                 |       |        |\n| exact-read | `get_many_missing_keys/1k`         |                   |               |              |                 |       |        |\n| exact-read | `exists_many_exact_keys/1k`        |                   |               |              |                 |       |        |\n| scan       | `scan_keys_only/1k`                |                   |               |              |                 |       |        |\n| scan       | `scan_headers_only/1k`             |                   |               |              |                 |       |        |\n| scan       | `scan_full_rows/1k`                |                   |               |              |                 |       |        |\n| scan       | `prefix_scan_schema/1k`            |                   |          
     |              |                 |       |        |\n| scan       | `prefix_scan_schema_file_null/1k`  |                   |               |              |                 |       |        |\n\n#### Diff / Materialization\n\n| row                                   | before median | after median | delta | shape status |\n| ------------------------------------- | ------------: | -----------: | ----: | ------------ |\n| `changed_keys_update_10pct/1k`        |               |              |       |              |\n| `changed_keys_delta_chain_10x1pct/1k` |               |              |       |              |\n| `materialize_delta_chain_10x1pct/1k`  |               |              |       |              |\n\n#### Storage\n\nStorage fixture rows, required if bytes can change:\n\n| backend / state                        | before bytes | after bytes | delta | status |\n| -------------------------------------- | -----------: | ----------: | ----: | ------ |\n| raw SQLite / inserted                  |              |             |       |        |\n| Lix SQLite / inserted                  |              |             |       |        |\n| Lix SQLite / after create_version      |              |             |       |        |\n| Lix SQLite / after fast-forward merge  |              |             |       |        |\n| Lix SQLite / after divergent merge     |              |             |       |        |\n| Lix RocksDB / inserted                 |              |             |       |        |\n| Lix RocksDB / after create_version     |              |             |       |        |\n| Lix RocksDB / after fast-forward merge |              |             |       |        |\n| Lix RocksDB / after divergent merge    |              |             |       |        |\n\n### Unchanged Guardrails\n\nList guardrails that were not meaningfully impacted. 
Do not leave this blank.\n\n| guardrail                                         | after value | status |\n| ------------------------------------------------- | ----------: | ------ |\n| physical write budget stays near backend speed    |             |        |\n| physical write runtime <= 1.5x raw SQLite         |             |        |\n| exact reads <= 1.5x raw SQLite                    |             |        |\n| scans <= 1.5x raw SQLite                          |             |        |\n| header-only scans do not hydrate full JSON values |             |        |\n| SQLite and RocksDB both reported                  |             |        |\n| storage growth explained                          |             |        |\n| post-vacuum storage <= 2x raw SQLite              |             |        |\n| backend boundary copy cost explained              |             |        |\n| tracked logic remains on the tracked path         |             |        |\n| no workload shifted to untracked machinery        |             |        |\n| no benchmark measurement changed                  |             |        |\n\n### Interpretation\n\n```text\nKeep/reject?\nWhich axis improved?\nWhich guardrail moved?\nWas the evidence structural, timing-based, or both?\nIs there a temporary shim? If yes, when should it be removed?\nWhat should the next agent try?\n```\n\n## Optimization 1: tracked tombstone bit in projection value\n\nCommit: `uncommitted on 11ff3a2e`\n\nTarget axis:\n\n```text\nscan\n```\n\nBackend/API scope:\n\n```text\nlayout behavior\n```\n\nPhysical shape:\n\n```text\nread index / projection layout\nmaterialization policy\ncopy/serialization boundary\n```\n\nRefactor stance:\n\n```text\nclean cut\n```\n\nChange:\n\n```text\nTracked-state projection values now carry the durable tombstone bit directly.\nThe bit is packed into the high bit of the existing value header byte, so the\nencoded value length stays unchanged. 
VALUE_VERSION is bumped to 5 without a\nbackward decoder because Lix has not shipped.\n\nThe old shape forced key/header-only scans to hydrate commit_store change packs\njust to learn whether a row was deleted. The new shape makes tracked_state\nscalar fields authoritative at the projection boundary; commit_store pack\nhydration is reserved for projections that need snapshot_content or metadata\nJSON refs.\n\nTree scans are now physical-only: TrackedStateTreeScanRequest no longer carries\ntombstone visibility, and tracked scan limits are applied after delta overlay,\nmaterialization, and tombstone visibility. This matches the reference-system\nshape where delete/tombstone facts are carried through physical merge/scan\nstages and logical visibility/limit is applied above them.\n\nNo backend API changed. SQLite and RocksDB both store the same byte-length value\nand benefit from avoiding unnecessary commit_pack reads for non-JSON\nprojections. No tracked workload moved to untracked storage and no benchmark\nmeasurement changed.\n```\n\n### Baseline Delta\n\nCompared against the log8 baseline. 
The full smoke run showed some noisy\nRocksDB scan intervals, so the RocksDB rows below use the targeted remeasure\nfor the affected rows:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- smoke\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/rocksdb/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\n#### 1.5x Runtime Budget Rows\n\n| axis       | row                                | raw SQLite median | before SQLite | after SQLite | SQLite ratio | before RocksDB | after RocksDB | RocksDB ratio | status                                                     |\n| ---------- | ---------------------------------- | ----------------: | ------------: | -----------: | -----------: | -------------: | ------------: | ------------: | ---------------------------------------------------------- |\n| write      | `write_root_all_rows/1k`           |         2.4999 ms |     6.8347 ms |    6.5245 ms |        2.61x |      6.1430 ms |     5.6554 ms |         2.26x | still over budget, no structural regression                |\n| write      | `write_delta_10pct_updates/1k`     |         1.3595 ms |     2.6272 ms |    3.3163 ms |        2.44x |      1.3950 ms |     1.4372 ms |         1.06x | SQLite noisy, RocksDB pass                                 |\n| write      | `write_tombstone_10pct_deletes/1k` |         1.3092 ms |     2.4321 ms |    3.1727 ms |        2.42x |      1.3632 ms |     1.4650 ms |         1.12x | SQLite noisy, RocksDB pass                                 |\n| exact-read | `get_many_exact_keys/1k`           |         2.1850 ms |     4.6055 ms |    4.4805 ms |        2.05x |      3.4668 ms |     3.6687 ms |         1.68x | still over budget                                          |\n| exact-read | `get_many_missing_keys/1k` 
        |         13.099 ms |     2.2822 ms |    2.2718 ms |        0.17x |      1.4138 ms |     1.9440 ms |         0.15x | pass                                                       |\n| exact-read | `exists_many_exact_keys/1k`        |         2.2187 ms |     4.6519 ms |    4.5695 ms |        2.06x |      3.4720 ms |     5.5972 ms |         2.52x | RocksDB row noisy; semantic equivalent still uses get_many |\n| scan       | `scan_keys_only/1k`                |         1.1673 ms |     3.2542 ms |    2.4975 ms |        2.14x |      2.0822 ms |     1.4497 ms |         1.24x | primary win; RocksDB now in budget                         |\n| scan       | `scan_headers_only/1k`             |         1.3034 ms |     3.0692 ms |    3.0376 ms |        2.33x |      2.0012 ms |     1.8478 ms |         1.42x | RocksDB now in budget                                      |\n| scan       | `scan_full_rows/1k`                |         1.2110 ms |     4.3792 ms |    4.7813 ms |        3.95x |      3.1884 ms |     3.2480 ms |         2.68x | still over budget                                          |\n| scan       | `prefix_scan_schema/1k`            |         1.6941 ms |     4.4623 ms |    4.6607 ms |        2.75x |      3.2190 ms |     3.3677 ms |         1.99x | still over budget                                          |\n| scan       | `prefix_scan_schema_file_null/1k`  |         1.2609 ms |     4.3889 ms |    4.8380 ms |        3.84x |      3.1497 ms |     3.3515 ms |         2.66x | still over budget                                          |\n\n#### Diff / Materialization\n\n| row                                   | before SQLite | after SQLite | before RocksDB | after RocksDB | shape status                                              |\n| ------------------------------------- | ------------: | -----------: | -------------: | ------------: | --------------------------------------------------------- |\n| `changed_keys_update_10pct/1k`        |     68.399 ms |    73.492 ms 
|      67.192 ms |     71.735 ms | still hotspot; movement within noisy structural guardrail |\n| `changed_keys_delta_chain_10x1pct/1k` |     10.401 ms |    11.167 ms |      8.7436 ms |     10.722 ms | watch                                                     |\n| `materialize_delta_chain_10x1pct/1k`  |     5.7651 ms |    5.5134 ms |      2.7741 ms |     2.8888 ms | near neutral; value length is unchanged                   |\n\n#### Storage\n\nStorage fixture command:\n\n```sh\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\n```\n\nResult: passed.\n\n| backend / state                        | before bytes | after bytes | delta | status                                        |\n| -------------------------------------- | -----------: | ----------: | ----: | --------------------------------------------- |\n| raw SQLite / inserted                  |      1692456 |     1692456 |     0 | unchanged                                     |\n| Lix SQLite / inserted                  |      1075136 |     1075136 |     0 | unchanged                                     |\n| Lix SQLite / after create_version      |      1087496 |     1087496 |     0 | unchanged                                     |\n| Lix SQLite / after fast-forward merge  |      5287488 |     5291608 | +4120 | one SQLite page; acceptable page-layout noise |\n| Lix SQLite / after divergent merge     |      5615168 |     5619288 | +4120 | one SQLite page; acceptable page-layout noise |\n| Lix RocksDB / inserted                 |       993900 |      993900 |     0 | unchanged                                     |\n| Lix RocksDB / after create_version     |       995766 |      995766 |     0 | unchanged                                     |\n| Lix RocksDB / after fast-forward merge |      1157143 |     1157143 |     0 | unchanged                                     |\n| Lix RocksDB / after divergent merge    |      1528256 |     1528254 |  
  -2 | unchanged                                     |\n\n### Unchanged Guardrails\n\n| guardrail                                         | after value | status                                                                |\n| ------------------------------------------------- | ----------: | --------------------------------------------------------------------- |\n| physical write budget stays near backend speed    |       mixed | existing SQLite write budget failures remain                          |\n| physical write runtime <= 1.5x raw SQLite         |       mixed | RocksDB delta/tombstone pass; root writes still over                  |\n| exact reads <= 1.5x raw SQLite                    |       mixed | missing reads pass; exact reads still over                            |\n| scans <= 1.5x raw SQLite                          |       mixed | RocksDB keys/header pass; SQLite scans still over                     |\n| header-only scans do not hydrate full JSON values |         yes | preserved and strengthened                                            |\n| SQLite and RocksDB both reported                  |         yes | full smoke plus RocksDB targeted rerun                                |\n| storage growth explained                          |         yes | no value-length growth; only one SQLite page in merge states          |\n| post-vacuum storage <= 2x raw SQLite              |       mixed | same pre-existing SQLite merge-state growth                           |\n| backend boundary copy cost explained              |         yes | no new backend copies; fewer commit_pack loads for scalar projections |\n| tracked logic remains on the tracked path         |         yes | no workload moved                                                     |\n| no workload shifted to untracked machinery        |         yes | unchanged                                                             |\n| no benchmark measurement changed                  |         yes | benchmark 
untouched                                                   |\n\n### Review Loop\n\nReviewer pass 1:\n\n```text\nHIGH: low-level tree matching filtered deleted delta entries before applying\nthem over a materialized base root. Fixed by keeping tree matching physical and\nadding pending_tombstone_delta_hides_materialized_base_row.\n```\n\nReviewer pass 2:\n\n```text\nHIGH: none.\nMEDIUM: user limit could be applied before tombstone visibility. Fixed by not\npushing tracked scan limits into TrackedStateTreeScanRequest and adding\nscan_limit_applies_after_tombstone_visibility.\n```\n\nReviewer pass 3:\n\n```text\nHIGH: by-file fast path still applied request.limit before visibility. Fixed by\nremoving both by-file early-limit breaks and adding\nby_file_scan_limit_applies_after_tombstone_visibility.\n```\n\nFinal reviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n```\n\nVerification:\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- smoke\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/rocksdb/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep.\n\nPrimary axis: scan, specifically key/header projections and tombstone\nvisibility. Structural win: tombstone state now lives in the tracked projection\nvalue and non-JSON projections do not hydrate commit_store packs. 
Timing win:\nRocksDB scan_keys_only improved from 2.0822 ms to 1.4497 ms and\nscan_headers_only from 2.0012 ms to 1.8478 ms; SQLite scan_keys_only improved\nfrom 3.2542 ms to 2.4975 ms.\n\nGuardrails: encoded value length is unchanged, storage fixture passed, and no\nbackend-specific API was introduced. Some full-smoke rows were noisy, so\nRocksDB scan/write guardrails were remeasured directly. Existing SQLite write,\nexact-read, full-row, and prefix-scan rows remain over the 1.5x budget.\n\nNo temporary shim.\n\nNext optimization should attack the remaining scan/full-row and exact-read\nbudget failures by adding a borrowed/header decode path for tracked-state leaf\nentries. The tombstone bit is now in the first value byte, so the next cut can\nfilter visibility without allocating owned locators or full row values.\n```\n\n## Optimization 2: Indexable Borrowed Leaf Nodes\n\nDate: 2026-05-10\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged tracked-state leaf node bytes from a sequential record stream to a v2\noffset-table layout:\n\n```text\nkind: u8\nversion: u8\nentry_count: u32\nentry_offsets: (entry_count + 1) * u32\npayload: [key_len: u32, key, value_len: u32, value]*\n```\n\nThe offset table lets exact reads binary-search leaf keys without first cloning\nevery key/value pair in the leaf. Scans now borrow leaf entries out of the\nverified node byte buffer and decode only matching rows. Owned `decode_node`\nstill exists for callers that need it, but it is built on the borrowed decoder.\n\nThe leaf splitter now accounts for the exact v2 physical size:\n\n```text\nleaf_size = 10 + entry_count * 12 + key_bytes + value_bytes\nentry_size = 12 + key_bytes + value_bytes\n```\n\nNo backward compatibility shim was kept. 
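As a hedged illustration of the cutover, the v2 layout and the borrowed binary search can be sketched like this. The codec below is illustrative only: little-endian length fields and the exact `kind`/`version` byte values are assumptions not stated in this log, and `encode_leaf_v2`/`leaf_get` are not the engine's actual names.

```rust
/// Illustrative v2 leaf codec (assumed little-endian, assumed kind/version bytes):
///   kind: u8, version: u8, entry_count: u32,
///   entry_offsets: (entry_count + 1) * u32,
///   payload: [key_len: u32, key, value_len: u32, value]*
fn encode_leaf_v2(entries: &[(&[u8], &[u8])]) -> Vec<u8> {
    let mut payload = Vec::new();
    let mut offsets = vec![0u32];
    for (key, value) in entries {
        payload.extend_from_slice(&(key.len() as u32).to_le_bytes());
        payload.extend_from_slice(key);
        payload.extend_from_slice(&(value.len() as u32).to_le_bytes());
        payload.extend_from_slice(value);
        offsets.push(payload.len() as u32);
    }
    let mut bytes = vec![2u8 /* kind: leaf (assumed) */, 2u8 /* version */];
    bytes.extend_from_slice(&(entries.len() as u32).to_le_bytes());
    for off in &offsets {
        bytes.extend_from_slice(&off.to_le_bytes());
    }
    bytes.extend_from_slice(&payload);
    bytes
}

/// Borrowed exact lookup: binary-search keys through the offset table without
/// decoding or cloning any non-matching entry.
fn leaf_get<'a>(bytes: &'a [u8], wanted: &[u8]) -> Option<&'a [u8]> {
    let count = u32::from_le_bytes(bytes[2..6].try_into().unwrap()) as usize;
    let dir_end = 6 + (count + 1) * 4;
    let entry = |i: usize| -> (&'a [u8], &'a [u8]) {
        let o = 6 + i * 4;
        let start = dir_end + u32::from_le_bytes(bytes[o..o + 4].try_into().unwrap()) as usize;
        let key_len = u32::from_le_bytes(bytes[start..start + 4].try_into().unwrap()) as usize;
        let key = &bytes[start + 4..start + 4 + key_len];
        let v_at = start + 4 + key_len;
        let val_len = u32::from_le_bytes(bytes[v_at..v_at + 4].try_into().unwrap()) as usize;
        (key, &bytes[v_at + 4..v_at + 4 + val_len])
    };
    let (mut lo, mut hi) = (0usize, count);
    while lo < hi {
        let mid = (lo + hi) / 2;
        let (key, value) = entry(mid);
        match key.cmp(wanted) {
            std::cmp::Ordering::Less => lo = mid + 1,
            std::cmp::Ordering::Greater => hi = mid,
            std::cmp::Ordering::Equal => return Some(value),
        }
    }
    None
}
```

Note the fixed overhead in this shape is `10 + entry_count * 12` bytes plus key/value bytes, matching the splitter formula above.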
Lix has not shipped, and this is a\nphysical layout cutover.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|exists_many_exact_keys|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                       | after median | criterion status |\n| ----------------------------------------- | -----------: | ---------------- |\n| `sqlite/get_many_exact_keys/1k`           |    4.4327 ms | no change        |\n| `sqlite/exists_many_exact_keys/1k`        |    4.5704 ms | no change        |\n| `sqlite/scan_keys_only/1k`                |    2.7218 ms | no change        |\n| `sqlite/scan_headers_only/1k`             |    3.0616 ms | no change        |\n| `sqlite/scan_full_rows/1k`                |    4.4447 ms | no change        |\n| `sqlite/prefix_scan_schema/1k`            |    4.3002 ms | no change        |\n| `sqlite/prefix_scan_schema_file_null/1k`  |    4.2372 ms | no change        |\n| `rocksdb/get_many_exact_keys/1k`          |    3.5170 ms | no change        |\n| `rocksdb/exists_many_exact_keys/1k`       |    3.5438 ms | improved         |\n| `rocksdb/scan_keys_only/1k`               |    1.5767 ms | no change        |\n| `rocksdb/scan_headers_only/1k`            |    2.0217 ms | no change        |\n| `rocksdb/scan_full_rows/1k`               |    3.3787 ms | no change        |\n| `rocksdb/prefix_scan_schema/1k`           |    3.2941 ms | no change        |\n| `rocksdb/prefix_scan_schema_file_null/1k` |    3.2749 ms | no change        |\n\n### Storage\n\nStorage fixture command:\n\n```sh\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\n```\n\nResult: passed.\n\n| backend / state                        |   bytes | bytes/row | status    |\n| 
-------------------------------------- | ------: | --------: | --------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | unchanged |\n| Lix SQLite / inserted                  | 1075136 |    1075.1 | unchanged |\n| Lix SQLite / after create_version      | 1087496 |    1087.5 | unchanged |\n| Lix SQLite / after fast-forward merge  | 5287488 |    5287.5 | unchanged |\n| Lix SQLite / after divergent merge     | 5615168 |    5615.2 | unchanged |\n| Lix RocksDB / inserted                 |  993900 |     993.9 | unchanged |\n| Lix RocksDB / after create_version     |  995766 |     995.8 | unchanged |\n| Lix RocksDB / after fast-forward merge | 1157143 |    1157.1 | unchanged |\n| Lix RocksDB / after divergent merge    | 1528256 |    1528.3 | unchanged |\n\n### Review Loop\n\nReviewer pass 1:\n\n```text\nHIGH: none.\nMEDIUM: leaf chunk sizing still estimated the old sequential format. Fixed by\nincluding the v2 offset directory in estimate_leaf_chunk_size and by feeding\nphysical entry bytes into boundary_trigger.\nLOW: add direct codec regression tests for v2 leaf bytes and malformed offset\ntables. 
Fixed with indexable offset-table, empty-leaf, and malformed-offset\ntests.\n```\n\nReviewer pass 2:\n\n```text\nHIGH: none.\nThe previous sizing concern appears addressed, borrowed decode paths do not\ncarry leaf borrows across recursive awaits, and v2 offset validation/tests are\npresent.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|exists_many_exact_keys|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep.\n\nPrimary axis: exact reads and scan decode overhead. Structural win: leaves now\nhave a pointer/offset directory, matching the page-local indexing pattern used\nby reference storage engines, and scan/get_many no longer clone every leaf\nentry before discovering the row they need.\n\nTiming: mostly neutral in Criterion, with a measured RocksDB\nexists_many_exact_keys improvement from 4.0071 ms in the pre-sizing run to\n3.5438 ms after the final fix. 
SQLite exact reads remain over budget, so this\nis a necessary layout foundation rather than the final performance win.\n\nGuardrails: storage fixture stayed unchanged at the 1k guardrail, tracked logic\nstays on the tracked path, no workload moved to untracked machinery, and no\nbenchmark measurement changed.\n\nNext optimization should use the v2 leaf layout to decode tracked value headers\ndirectly from borrowed value bytes for scan visibility and exists-style reads,\nthen attack exact-read value decode/allocation costs that remain above the\n1.5x SQLite target.\n```\n\n## Optimization 3: Header-Only Visibility And Exists Reads\n\nDate: 2026-05-10\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nAdded a live-row `rows_exist_at_commit` path for tracked-state readers and a\nphysical `TrackedStateTree::exists_many` traversal. The tree reuses the v2 leaf\noffset table from Optimization 2, binary-searches borrowed leaf keys, and reads\nonly the matched value header to reject tombstones.\n\nScan visibility now also reads the value header before full value decode.\n`decode_visible_value` parses the header once, skips hidden tombstones without\ndecoding locator/timestamp strings, and continues decoding live rows from the\nsame cursor. `TrackedStateTreeScanRequest` now carries `include_tombstones`;\nits default keeps physical/internal tree scans tombstone-inclusive, while\nserving scans copy the user-facing filter.\n\nPending delta overlay semantics were preserved: when tombstones are excluded,\na pending tombstone removes a matching materialized base row instead of being\nignored. 
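A minimal sketch of the header-first check follows. The constant and function names are assumptions; the log only pins down (in Optimization 1) that the durable tombstone bit sits in the high bit of the first value byte.

```rust
/// Assumed placement per Optimization 1: tombstone flag in the high bit of the
/// value header byte. The name is illustrative, not the engine's constant.
const TOMBSTONE_BIT: u8 = 0b1000_0000;

/// Header-first visibility: decide from the header byte alone, before any
/// locator/timestamp string decode or commit_store materialization.
fn is_live(value_bytes: &[u8]) -> bool {
    value_bytes.first().map_or(false, |header| header & TOMBSTONE_BIT == 0)
}

/// Exists-style read: each matched entry costs one header-byte inspection,
/// never a full row materialization.
fn exists_many<'a, I>(matched: I) -> Vec<bool>
where
    I: IntoIterator<Item = Option<&'a [u8]>>,
{
    matched.into_iter().map(|v| v.map_or(false, is_live)).collect()
}
```

A diff-style caller that needs tombstones would skip `is_live` and decode the header itself, which mirrors the `include_tombstones` split described above.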
Diff scans explicitly include tombstones.\n\nNo backward compatibility shim was kept.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|exists_many_exact_keys|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                       | after median | criterion status                            |\n| ----------------------------------------- | -----------: | ------------------------------------------- |\n| `sqlite/get_many_exact_keys/1k`           |    4.4035 ms | no change                                   |\n| `sqlite/exists_many_exact_keys/1k`        |    2.4097 ms | improved vs pre-change get/materialize path |\n| `sqlite/scan_keys_only/1k`                |    2.4736 ms | no change                                   |\n| `sqlite/scan_headers_only/1k`             |    3.0070 ms | no change                                   |\n| `sqlite/scan_full_rows/1k`                |    4.1861 ms | no change                                   |\n| `sqlite/prefix_scan_schema/1k`            |    4.1514 ms | no change                                   |\n| `sqlite/prefix_scan_schema_file_null/1k`  |    4.1977 ms | no change                                   |\n| `rocksdb/get_many_exact_keys/1k`          |    3.4003 ms | no change                                   |\n| `rocksdb/exists_many_exact_keys/1k`       |    1.4389 ms | improved vs pre-change get/materialize path |\n| `rocksdb/scan_keys_only/1k`               |    1.5966 ms | no change                                   |\n| `rocksdb/scan_headers_only/1k`            |    1.9876 ms | no change                                   |\n| `rocksdb/scan_full_rows/1k`               |    3.2413 ms | no change                                   |\n| `rocksdb/prefix_scan_schema/1k`           |    
3.6050 ms | no change; noisy high interval              |\n| `rocksdb/prefix_scan_schema_file_null/1k` |    3.3356 ms | no change                                   |\n\nFinal exists-only rerun after the tombstone semantic fix:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/exists_many_exact_keys/1k'\n```\n\n| row                                 | final median |\n| ----------------------------------- | -----------: |\n| `sqlite/exists_many_exact_keys/1k`  |    2.4097 ms |\n| `rocksdb/exists_many_exact_keys/1k` |    1.4389 ms |\n\n### Storage\n\nStorage fixture command:\n\n```sh\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\n```\n\nResult: passed.\n\n| backend / state                        |   bytes | status                                                      |\n| -------------------------------------- | ------: | ----------------------------------------------------------- |\n| raw SQLite / inserted                  | 1692456 | unchanged                                                   |\n| Lix SQLite / inserted                  | 1075136 | unchanged                                                   |\n| Lix SQLite / after create_version      | 1087496 | unchanged                                                   |\n| Lix SQLite / after fast-forward merge  | 5291608 | one SQLite page over the prior run; known page-layout noise |\n| Lix SQLite / after divergent merge     | 5619288 | one SQLite page over the prior run; known page-layout noise |\n| Lix RocksDB / inserted                 |  993900 | unchanged                                                   |\n| Lix RocksDB / after create_version     |  995766 | unchanged                                                   |\n| Lix RocksDB / after fast-forward merge | 1157143 | unchanged                                                   |\n| Lix 
RocksDB / after divergent merge    | 1528256 | unchanged                                                   |\n\n### Review Loop\n\nReviewer pass 1:\n\n```text\nHIGH: rows_exist_at_commit reported tombstones as existing. Fixed by checking\nthe value header in tree.exists_many and by applying pending delta tombstones\nas false in projection_keys_exist_at_commit.\n```\n\nReviewer pass 2:\n\n```text\nHIGH: none.\nThe fixed paths now return false for tombstones, pending delta tombstones clear\nexistence, diff scans still include tombstones, and the benchmark uses the new\nexistence API.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|exists_many_exact_keys|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/exists_many_exact_keys/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep.\n\nPrimary axis: exists reads and tombstone visibility. 
Structural win:\nexists_many no longer piggybacks on full exact-row materialization, and\nvisibility filtering can reject hidden tombstones from the value header before\nlocator/string decode or commit_store materialization.\n\nTiming win: exists_many_exact_keys moves from the previous materializing path\naround 4.5 ms SQLite / 3.5 ms RocksDB to 2.4097 ms SQLite / 1.4389 ms RocksDB.\nThe ordinary scan fixture is mostly live rows, so header visibility is neutral\nthere rather than a tombstone-heavy win.\n\nGuardrails: storage shape is unchanged, hidden pending tombstones still remove\nbase rows, diff keeps tombstones visible, tracked logic stays on the tracked\npath, and the benchmark row now measures the named exists API rather than a\nfull materialized get.\n\nNext optimization should attack get_many_exact_keys itself: the exact-read path\nstill decodes full locator/timestamp strings and materializes full rows even\nwhen the caller only needs the JSON payload, so the remaining budget is likely\nin value decode and commit/json materialization grouping.\n```\n\n## Optimization 4: Store JSON Refs In Primary Tracked Values\n\nDate: 2026-05-10\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged the primary tracked-state value format from locator-only payload\nmetadata to locator plus direct `snapshot_ref` / `metadata_ref` fields.\n`VALUE_VERSION` was bumped and no backward decode shim was kept.\n\nBefore this cut, full materialization decoded tracked values, grouped commit\nstore change-pack loads by `(source_commit_id, source_pack_id)`, decoded the\nreferenced change just to recover its JSON refs, then grouped JSON loads. 
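The removed indirection can be sketched as follows; every type and map below is hypothetical and only models the before/after lookup shape, with refs reduced to plain `u64` ids.

```rust
use std::collections::HashMap;

// Hypothetical shapes for illustration only, not the engine's API.
struct OldTrackedValue { source_commit_id: u64, source_pack_id: u64 }
struct NewTrackedValue { snapshot_ref: Option<u64> }

// Old shape: hop through commit-pack storage just to recover the JSON ref,
// then perform the actual json_store load.
fn materialize_old(
    v: &OldTrackedValue,
    commit_packs: &HashMap<(u64, u64), u64>, // (commit_id, pack_id) -> snapshot ref
    json_store: &HashMap<u64, String>,
) -> Option<String> {
    let snapshot_ref = commit_packs.get(&(v.source_commit_id, v.source_pack_id))?;
    json_store.get(snapshot_ref).cloned()
}

// New shape: the ref lives with the tracked value, so materialization is a
// single json_store load with no commit-pack read.
fn materialize_new(v: &NewTrackedValue, json_store: &HashMap<u64, String>) -> Option<String> {
    json_store.get(&v.snapshot_ref?).cloned()
}
```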
The\ntracked value is already the durable projection boundary, and both staging and\nroot materialization already have the JSON refs at write time, so the extra\ncommit-pack lookup was record-local metadata indirection.\n\nAfter this cut:\n\n- primary tracked values encode optional `snapshot_ref` and `metadata_ref`;\n- delta packs carry those refs too, so pending-delta reads can materialize\n  payloads without commit-pack lookups;\n- by-file header-index values intentionally encode `None` refs so the secondary\n  header index stays lean;\n- by-file scans that need payloads still fetch primary tracked values before\n  materializing;\n- `materialize_index_entries` no longer takes `CommitStoreContext`.\n\nThis follows the same physical principle as page/tuple formats in the reference\nsystems: record-local metadata needed to materialize a tuple should live with\nthe tuple/index entry, not require an unrelated side lookup.\n\n### Benchmarks\n\nStandard focused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|exists_many_exact_keys|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\nFinal medians:\n\n| row                                       | after median | criterion status                               |\n| ----------------------------------------- | -----------: | ---------------------------------------------- |\n| `sqlite/get_many_exact_keys/1k`           |    4.0589 ms | no change in final rerun; initial run improved |\n| `sqlite/exists_many_exact_keys/1k`        |    2.5128 ms | no change                                      |\n| `sqlite/scan_keys_only/1k`                |    2.5838 ms | no change                                      |\n| `sqlite/scan_headers_only/1k`             |    2.5942 ms | no change in final rerun; initial run improved |\n| 
`sqlite/scan_full_rows/1k`                |    3.8172 ms | no change                                      |\n| `sqlite/prefix_scan_schema/1k`            |    3.8885 ms | no change                                      |\n| `sqlite/prefix_scan_schema_file_null/1k`  |    3.8453 ms | no change                                      |\n| `rocksdb/get_many_exact_keys/1k`          |    2.9264 ms | improved                                       |\n| `rocksdb/exists_many_exact_keys/1k`       |    1.4271 ms | no change                                      |\n| `rocksdb/scan_keys_only/1k`               |    1.5068 ms | no change                                      |\n| `rocksdb/scan_headers_only/1k`            |    1.5683 ms | no change in final rerun; initial run improved |\n| `rocksdb/scan_full_rows/1k`               |    2.8121 ms | no change in final rerun; initial run improved |\n| `rocksdb/prefix_scan_schema/1k`           |    2.7684 ms | no change in final rerun; initial run improved |\n| `rocksdb/prefix_scan_schema_file_null/1k` |    2.7350 ms | no change                                      |\n\nInitial run immediately after the change showed the structural win before the\nfinal rerun reset Criterion's comparison baseline:\n\n```text\nsqlite/get_many_exact_keys: 4.0738 ms, improved\nrocksdb/get_many_exact_keys: 3.1168 ms, improved\nrocksdb/scan_full_rows: 2.8681 ms, improved\nrocksdb/prefix_scan_schema: 2.7909 ms, improved\n```\n\n### Storage\n\nStorage fixture command:\n\n```sh\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\n```\n\nResult: passed.\n\n| backend / state                        |   bytes | delta vs Optimization 3 | status                               |\n| -------------------------------------- | ------: | ----------------------: | ------------------------------------ |\n| raw SQLite / inserted                  | 1692456 |                       0 | unchanged                  
          |\n| Lix SQLite / inserted                  | 1112216 |                  +37080 | direct snapshot refs in primary tree |\n| Lix SQLite / after create_version      | 1124576 |                  +37080 | direct snapshot refs in primary tree |\n| Lix SQLite / after fast-forward merge  | 5324328 |                  +32720 | below previous noisy merge shape     |\n| Lix SQLite / after divergent merge     | 5652176 |                  +32888 | below previous noisy merge shape     |\n| Lix RocksDB / inserted                 | 1028557 |                  +34657 | direct snapshot refs in primary tree |\n| Lix RocksDB / after create_version     | 1030457 |                  +34691 | direct snapshot refs in primary tree |\n| Lix RocksDB / after fast-forward merge | 1195234 |                  +38091 | direct snapshot refs in primary tree |\n| Lix RocksDB / after divergent merge    | 1576585 |                  +48329 | direct snapshot refs in primary tree |\n\nThe inserted/create-version states remain below raw SQLite at 1k rows. The\nmerge states were already above the storage-size north star before this cut;\nthe additional bytes are explained by durable payload refs that remove a\ncommit-pack read from exact/full materialization.\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: materialization.rs still described commit_store pack loads. 
Fixed the\ncomment to describe direct tracked JSON refs and grouped json_store loads.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null|scan_headers_only|scan_keys_only)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|exists_many_exact_keys|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep.\n\nPrimary axis: exact/full materialization. Structural win: payload refs now live\nat the tracked projection boundary, so materialization avoids loading and\ndecoding commit_store change packs just to recover record-local JSON refs.\n\nTiming win: exact gets improved on both backends; RocksDB full/prefix scans\nalso moved materially in the initial run. The final standard rerun still shows\nlower medians than Optimization 3 for exact/full rows, even when Criterion\nreports some rows as no-change because the comparison baseline had already\nincluded this cut.\n\nStorage tradeoff: roughly 35-37 KB extra at 1k inserted rows, with inserted and\ncreate_version states still below raw SQLite. 
By-file header index values stay\nlean by omitting payload refs, so the cost is restricted to primary tracked\nvalues and delta packs.\n\nNo temporary shim.\n\nNext optimization should attack the remaining exact read overhead inside\ntracked value/key materialization: the read path still allocates full\nTrackedStateKey/TrackedStateIndexValue/MaterializedTrackedStateRow objects\neven for fixed-shape JSON-pointer reads, and SQLite full reads remain above the\n1.5x target.\n```\n\n## Optimization 5: Consume JSON Bytes Into Materialized Strings\n\nDate: 2026-05-10\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged tracked-state JSON materialization to consume each owned `Vec<u8>`\npayload slot with `String::from_utf8` instead of validating `&[u8]` and then\ncopying with `to_string`.\n\nThis does not change storage layout or APIs. It is a narrow ownership cleanup\ninside the read path after Optimization 4 removed commit-pack lookup from\npayload materialization.\n\nThe implementation keeps the current invariant explicit: each row plan owns its\nprojected JSON slots. 
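The ownership change can be sketched as follows; `materialized_json_string` is the helper named in this entry's verification test, while `RowPlan` is a hypothetical stand-in for the real row-plan type:

```rust
/// Hypothetical row plan owning one projected JSON payload slot.
struct RowPlan {
    snapshot_content: Option<Vec<u8>>,
}

/// Consume an owned payload slot into a `String` without an extra copy.
/// `String::from_utf8` validates and takes ownership of the buffer,
/// where the old path validated `&[u8]` and then copied with `to_string`.
fn materialized_json_string(slot: &mut Option<Vec<u8>>) -> Option<String> {
    // One-shot invariant: `.take()` empties the slot, so each projected
    // slot has exactly one consumer per row plan.
    slot.take()
        .map(|bytes| String::from_utf8(bytes).expect("stored JSON is valid UTF-8"))
}
```

A second `take` on the same slot yields `None`, which keeps the one-shot invariant visible in the code.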
If tracked-state materialization later deduplicates refs\nbefore row planning, duplicate consumers must clone intentionally.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                       | after median | criterion status |\n| ----------------------------------------- | -----------: | ---------------- |\n| `sqlite/get_many_exact_keys/1k`           |    4.1139 ms | no change        |\n| `sqlite/scan_full_rows/1k`                |    3.8428 ms | no change        |\n| `sqlite/prefix_scan_schema/1k`            |    3.8457 ms | no change        |\n| `sqlite/prefix_scan_schema_file_null/1k`  |    3.8080 ms | no change        |\n| `rocksdb/get_many_exact_keys/1k`          |    2.9443 ms | no change        |\n| `rocksdb/scan_full_rows/1k`               |    2.7510 ms | no change        |\n| `rocksdb/prefix_scan_schema/1k`           |    2.6865 ms | no change        |\n| `rocksdb/prefix_scan_schema_file_null/1k` |    2.7327 ms | no change        |\n\nThis is not a Criterion-proven timing win on the 1k fixture. It removes an\navoidable allocation/copy in the payload-heavy path and should matter more for\nlarger JSON payloads than the small smoke rows.\n\n### Storage\n\nNo storage change. The storage fixture from Optimization 4 still describes the\ncurrent byte shape.\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: remove test-only wrapper around materialized_json_string. Fixed.\nLOW: document one-shot JSON slot invariant for .take(). 
Fixed.\n\nRecommendation: keep, but do not market it as a Criterion-proven optimization.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine materialized_json_string_consumes_owned_payload_bytes --features storage-benches\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a small ownership cleanup.\n\nPrimary axis: full materialization allocation pressure. Structural win:\nmaterialization consumes owned JSON bytes directly into String, avoiding a\nvalidate-then-copy path.\n\nTiming: no measured Criterion win on the 1k smoke fixture, so this does not\nadvance the budget by itself. It is low-risk, read-path-only, and keeps the\npayload materialization shape moving toward fewer copies.\n\nNo temporary shim.\n\nNext optimization still needs a larger structural cut for full scans, likely\navoiding full row object construction where callers only need counts or using a\nmore borrowed/streamed row materialization path without changing benchmark\nsemantics.\n```\n\n## Optimization 6: Make By-File Roots a Concrete-File Partial Index\n\nDate: 2026-05-10\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged the tracked-state by-file secondary tree into an explicit partial\nindex for concrete `file_id` values only.\n\n`ByFileIndex::should_use` now returns true only when every file filter is a\nconcrete `NullableKeyFilter::Value(_)`. 
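The planner predicate can be sketched as follows (a minimal sketch: the real `NullableKeyFilter` and `should_use` signatures are simplified, and the non-empty guard is an assumption of this sketch):

```rust
/// Simplified stand-in for the engine's nullable file-id filter.
enum NullableKeyFilter {
    Null,
    Value(String),
}

/// The by-file tree is a partial index over concrete file ids, so the
/// planner may route a scan to it only when every file filter is
/// concrete. Any `Null` filter makes the predicate false.
fn should_use(file_filters: &[NullableKeyFilter]) -> bool {
    !file_filters.is_empty()
        && file_filters
            .iter()
            .all(|f| matches!(f, NullableKeyFilter::Value(_)))
}
```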
Null-only and mixed null/concrete scans\nuse the primary tracked tree, whose key layout covers both null and concrete\nfile ids.\n\n`stage_projection_root` now writes the primary root for every projected commit\nbut stages a by-file root only when needed:\n\n- no parent by-file root and no concrete-file deltas: do not stage a by-file\n  root;\n- parent by-file root and no concrete-file deltas: inherit the parent by-file\n  root with zero chunk puts;\n- concrete-file deltas: apply only those deltas to the by-file root.\n\nThis matches the physical predicate of the secondary index with the planner\npredicate that is allowed to use it. It also avoids carrying null-file entries\nin a secondary tree that the planner never uses for null-file filters.\n\nAdded regression coverage for:\n\n- null-file rows not staging a by-file root;\n- a null-only parent plus concrete-file child scanned with mixed\n  `[Null, Value(file)]` filters, which must use the primary tree and return\n  both inherited null rows and concrete child rows.\n\n### Benchmarks\n\nFocused command before the final concrete-only cleanup:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|prefix_scan_schema_file_null|scan_full_rows)/1k'\n```\n\nResult: passed.\n\n| row                                          | after median | criterion status |\n| -------------------------------------------- | -----------: | ---------------- |\n| `raw_sqlite/write_root_all_rows/1k`          |    2.9524 ms | noisy baseline   |\n| `raw_sqlite/scan_full_rows/1k`               |    1.2119 ms | reference        |\n| `raw_sqlite/prefix_scan_schema_file_null/1k` |    1.4604 ms | reference        |\n| `sqlite/write_root_all_rows/1k`              |    6.2808 ms | no change        |\n| `sqlite/scan_full_rows/1k`                   |    3.8271 ms | no change        |\n| `sqlite/prefix_scan_schema_file_null/1k`     |    
4.0401 ms | no change        |\n| `rocksdb/write_root_all_rows/1k`             |    5.4735 ms | no change        |\n| `rocksdb/scan_full_rows/1k`                  |    2.7509 ms | no change        |\n| `rocksdb/prefix_scan_schema_file_null/1k`    |    2.7411 ms | no change        |\n\nThis is not a runtime win for the current JSON-pointer smoke rows.\n`write_root_all_rows` uses delta staging rather than projection-root staging,\nand the benchmark rows have `file_id = None`.\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\nFinal repeated 1k storage rows:\n\n| row                                    |   bytes | bytes/row | status                                |\n| -------------------------------------- | ------: | --------: | ------------------------------------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference                             |\n| Lix SQLite / inserted                  | 1112216 |    1112.2 | unchanged                             |\n| Lix SQLite / after create_version      | 1124576 |    1124.6 | unchanged                             |\n| Lix SQLite / after fast-forward merge  | 5324328 |    5324.3 | unchanged from Optimization 4/5 shape |\n| Lix SQLite / after divergent merge     | 5652176 |    5652.2 | unchanged from Optimization 4/5 shape |\n| Lix RocksDB / inserted                 | 1028557 |    1028.6 | unchanged                             |\n| Lix RocksDB / after create_version     | 1030457 |    1030.5 | unchanged                             |\n| Lix RocksDB / after fast-forward merge | 1195234 |    1195.2 | unchanged                             |\n| Lix RocksDB / after divergent merge    | 1576587 |    1576.6 | effectively unchanged                 |\n\nAn earlier storage sample before the concrete-only write cleanup showed lower\nSQLite merge-state bytes, but repeated final runs 
returned to the prior\ncommitted SQLite shape. Treat this optimization as storage-neutral for the\ncurrent JSON-pointer accounting fixture.\n\n### Review Loop\n\nReviewer pass 1:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: scan_request_from_tracked still looked more general than the all-concrete\nplanner contract. Fixed with debug assertion and Value-only mapping.\nLOW: add the mixed Null + Value regression case. Fixed.\nLOW: once a by-file root exists, null-file rows were still indexed. Fixed by\nmaking by-file writes concrete-file-only and inheriting unchanged roots.\n\nRecommendation: keep.\n```\n\nReviewer pass 2:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: encode_key_ref could still encode file_id = None. Fixed with a debug\nassertion at the helper boundary.\n\nRecommendation: keep. The result is a coherent partial secondary index:\nconcrete-only on writes, concrete-only on reads, with safe parent-root\ninheritance.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|prefix_scan_schema_file_null|scan_full_rows)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a physical-layout cleanup, not as a budget-moving benchmark win.\n\nPrimary axis: secondary-index shape. Structural win: by-file roots now behave\nlike a partial secondary index whose physical contents and planner predicate\nagree. 
This prevents null-file rows from being copied into a secondary tree that\ncannot answer null-file scans safely, and it removes the old missing-root\nempty-result behavior for projected reads.\n\nTiming/storage: neutral on the current JSON-pointer fixture. This does not move\nthe remaining <= 1.5x runtime target or the SQLite merge-state storage issue.\n\nNo temporary shim.\n\nNext optimization should return to budget-moving read/write costs: either the\nprimary tracked-tree write path for full-root materialization, or row\nmaterialization/allocation in exact and scan reads.\n```\n\n## Optimization 7: Skip JSON Planning for Header-Only Materialization\n\nDate: 2026-05-10\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nAdded a no-JSON fast path to tracked-state materialization. When a requested\nprojection omits both `snapshot_content` and `metadata`,\n`materialize_index_entries` now directly maps tree entries into\n`MaterializedTrackedStateRow` values with payload columns omitted.\n\nThis skips work that cannot affect the result for key-only and header-only\nprojections:\n\n- no per-row payload plan allocation;\n- no `json_refs` / `json_ref_localities` vectors;\n- no pack-locality grouping map;\n- no empty JSON-store load path.\n\nHeader semantics are still preserved. Identity fields come from the tracked\nkey, and `deleted`, timestamps, `change_id`, and `commit_id` come from the\ntracked value. 
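A minimal sketch of the fast path, using hypothetical simplified stand-ins for the projection, entry, and row types:

```rust
/// Requested projection (simplified to the two payload columns).
struct Projection {
    snapshot_content: bool,
    metadata: bool,
}

/// Header fields carried by a tracked tree entry (simplified).
struct TrackedEntry {
    entity_id: String,
    deleted: bool,
}

struct MaterializedRow {
    entity_id: String,
    deleted: bool,
    snapshot_content: Option<String>,
}

/// No-JSON fast path: when neither payload column is requested, map
/// entries straight to rows. No payload plans, no ref vectors, no
/// pack-locality grouping, no JSON-store load.
fn materialize_headers_only(
    projection: &Projection,
    entries: Vec<TrackedEntry>,
) -> Option<Vec<MaterializedRow>> {
    if projection.snapshot_content || projection.metadata {
        return None; // caller falls back to the JSON hydration path
    }
    Some(
        entries
            .into_iter()
            .map(|entry| MaterializedRow {
                entity_id: entry.entity_id,
                deleted: entry.deleted, // tombstone filtering reads this
                snapshot_content: None,
            })
            .collect(),
    )
}
```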
Tombstone filtering still uses `row.deleted`, not\n`snapshot_content`.\n\nNo storage layout change.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                       | after median | criterion status               |\n| ----------------------------------------- | -----------: | ------------------------------ |\n| `sqlite/scan_keys_only/1k`                |    2.4932 ms | -6.0%, within noise threshold  |\n| `sqlite/scan_headers_only/1k`             |    2.5955 ms | no change                      |\n| `sqlite/scan_full_rows/1k`                |    3.7797 ms | no change                      |\n| `sqlite/prefix_scan_schema_file_null/1k`  |    3.7925 ms | improved, likely noisy control |\n| `rocksdb/scan_keys_only/1k`               |    1.5304 ms | no change                      |\n| `rocksdb/scan_headers_only/1k`            |    1.5769 ms | no change                      |\n| `rocksdb/scan_full_rows/1k`               |    2.7634 ms | no change                      |\n| `rocksdb/prefix_scan_schema_file_null/1k` |    2.6894 ms | improved, likely noisy control |\n\nThe structural improvement is real for projections without payload columns, but\nCriterion does not show a strong win on the 1k smoke fixture. Full-row scans are\nincluded as controls because they still use the JSON hydration path.\n\n### Storage\n\nNo storage change.\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: infallible helper returned Result only to fit collect. Fixed by returning\nMaterializedTrackedStateRow directly and wrapping once at the call site.\n\nRecommendation: keep. 
This is an executor-style projection fast path: when no\npayload columns are requested, skip payload planning entirely.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine projected_scans_do_not_materialize_snapshot_when_snapshot_content_is_omitted --features storage-benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a narrow projection fast path.\n\nPrimary axis: key/header scans. Structural win: no-payload projections now\navoid payload planning rather than constructing empty JSON work and discovering\nthere is nothing to load.\n\nTiming: modest/noisy on the current 1k fixture. This does not solve the\nremaining full-row scan or exact-get gap, but it removes unnecessary executor\nwork for projected scans and keeps the read path moving toward column-aware\nmaterialization.\n\nNo temporary shim.\n\nNext optimization should target full payload materialization or exact get_many:\nthe remaining expensive rows still hydrate JSON and build full\nMaterializedTrackedStateRow objects.\n```\n\n## Optimization 8: Store JSON Locality as Row-Plan Indexes\n\nDate: 2026-05-10\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged full tracked-state payload materialization to keep JSON ref locality as\ncompact row-plan indexes instead of cloning commit ids per projected JSON ref.\n\nBefore this change, `materialize_index_entries` stored\n`json_ref_localities: Vec<(String, u32)>`. 
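The locality shapes can be contrasted in a sketch (hypothetical simplified `RowPlan`; the `JsonRefLocality` fields match the names quoted in this entry):

```rust
/// Row plans already own the commit id for each projected row.
struct RowPlan {
    commit_id: String,
}

/// After: locality is a pair of compact indexes into data that is
/// already owned, instead of the old `(String, u32)` tuple that
/// cloned a commit id per projected JSON ref.
struct JsonRefLocality {
    row_index: usize,
    pack_id: u32,
}

/// Grouping borrows the commit id through the row-plan index.
fn pack_commit<'a>(row_plans: &'a [RowPlan], locality: &JsonRefLocality) -> &'a str {
    row_plans[locality.row_index].commit_id.as_str()
}
```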
Each projected `snapshot_content` or\n`metadata` ref cloned `value.change_locator.source_commit_id` just so\n`load_projection_json_values` could group refs by commit pack.\n\nRow plans already own the same `commit_id`. The locality vector now stores a\nsmall `JsonRefLocality { row_index, pack_id }`, and the grouping step borrows\n`row_plans[row_index].commit_id.as_str()` while loading JSON values.\n\nNo storage/API behavior change.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                       | after median | criterion status |\n| ----------------------------------------- | -----------: | ---------------- |\n| `sqlite/get_many_exact_keys/1k`           |    3.9197 ms | no change        |\n| `sqlite/scan_full_rows/1k`                |    3.8695 ms | no change        |\n| `sqlite/prefix_scan_schema/1k`            |    3.7669 ms | no change        |\n| `sqlite/prefix_scan_schema_file_null/1k`  |    3.7631 ms | no change        |\n| `rocksdb/get_many_exact_keys/1k`          |    3.0397 ms | no change        |\n| `rocksdb/scan_full_rows/1k`               |    2.7001 ms | no change        |\n| `rocksdb/prefix_scan_schema/1k`           |    2.7920 ms | no change        |\n| `rocksdb/prefix_scan_schema_file_null/1k` |    2.6921 ms | no change        |\n\nSQLite exact gets moved lower in this sample than the previous committed log,\nbut Criterion still reports no change. Treat this as an allocation cleanup, not\na proven runtime win.\n\n### Storage\n\nNo storage change.\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: parallel arrays plus row_plans are correct but coupled; use a small\nJsonRefLocality struct to make the invariant clearer. Fixed.\n\nRecommendation: keep. 
Locality is now an index into already-owned row-plan data\nrather than repeated commit-id allocation.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a small allocation cleanup in the payload materialization path.\n\nPrimary axis: full-row materialization allocation pressure. Structural win:\nJSON locality now uses compact indexes into existing row-plan ownership, which\nmatches the broader direction of carrying offsets/indexes beside payload\nmetadata instead of duplicating identifying strings.\n\nTiming: Criterion-neutral on the 1k fixture. This is not enough to close the\nremaining <= 1.5x exact/full read gap.\n\nNo temporary shim.\n\nNext optimization still needs a larger cut in JSON hydration or row\nconstruction. The obvious remaining cost is that full reads still allocate a\nMaterializedTrackedStateRow per row and convert every JSON payload to String.\n```\n\n## Optimization 9: Return Unique JSON Batch Payloads Without Cloning\n\nDate: 2026-05-10\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged `json_store::load_json_bytes_many_in_scope` to avoid cloning loaded\nJSON payload bytes when the request contains no duplicate refs.\n\nThe loader already deduplicates requested refs into `unique_values`. Before\nthis change it always rebuilt the result with:\n\n```text\nrequested_indexes.map(|index| unique_values[index].clone())\n```\n\nThat cloned every loaded `Vec<u8>` even when every ref was unique and\n`unique_values` was already in request order. 
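The duplicate-aware branch that replaces this clone can be sketched as follows (signature simplified to the ordering logic; in the real loader, missing refs occupy `None` slots):

```rust
/// Return batch payloads in request order, cloning only when the
/// request actually contained duplicate refs.
fn json_values_in_request_order(
    unique_values: Vec<Option<Vec<u8>>>,
    requested_indexes: &[usize],
    has_duplicate_refs: bool,
) -> Vec<Option<Vec<u8>>> {
    if !has_duplicate_refs {
        // No duplicates: unique_values is already in request order,
        // so ownership transfers to the caller without a copy.
        debug_assert_eq!(requested_indexes.len(), unique_values.len());
        return unique_values;
    }
    // Duplicates: repeated refs must produce repeated result slots.
    requested_indexes
        .iter()
        .map(|&index| unique_values[index].clone())
        .collect()
}
```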
Full tracked reads then consumed\nthe cloned bytes into `String`, leaving the original decoded payload copy\nunused.\n\nThe loader now tracks whether any duplicate ref was seen:\n\n- no duplicates: return `unique_values` directly;\n- duplicates: keep the old clone-to-request-order behavior so repeated refs\n  still produce repeated result slots.\n\nThis applies to both commit-pack and out-of-band JSON scopes. Missing refs keep\ntheir `None` slots in either path.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                       | after median | criterion status              |\n| ----------------------------------------- | -----------: | ----------------------------- |\n| `sqlite/get_many_exact_keys/1k`           |    3.8568 ms | -3.3%, within noise threshold |\n| `sqlite/scan_full_rows/1k`                |    3.7141 ms | improved                      |\n| `sqlite/prefix_scan_schema/1k`            |    3.6749 ms | no change                     |\n| `sqlite/prefix_scan_schema_file_null/1k`  |    3.6774 ms | no change                     |\n| `rocksdb/get_many_exact_keys/1k`          |    2.9055 ms | no change                     |\n| `rocksdb/scan_full_rows/1k`               |    2.5562 ms | no change                     |\n| `rocksdb/prefix_scan_schema/1k`           |    2.7618 ms | no change                     |\n| `rocksdb/prefix_scan_schema_file_null/1k` |    2.7406 ms | no change                     |\n\nThe strongest measured signal is SQLite full scans. 
RocksDB and exact gets move\nin the right direction but remain Criterion-neutral in this run.\n\n### Storage\n\nNo storage change.\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: json_values_in_request_order depends on the has_duplicate_refs flag.\nFixed with debug assertions that the no-duplicate path has request indexes\n0..len and the same length as unique_values.\n\nRecommendation: keep. This is a real structural copy cut in the payload path,\nand the SQLite scan_full_rows improvement is plausible.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine json_store::store::tests::json_batch_load_roundtrips_in_request_order --features storage-benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a payload-copy reduction.\n\nPrimary axis: full-row materialization. Structural win: unique JSON batch loads\nnow transfer ownership of decoded payload bytes directly to the caller instead\nof cloning them back into request order. This pairs with tracked materialization\nconsuming those bytes with String::from_utf8.\n\nTiming: SQLite full scans improved in the focused run; other full/exact rows\nremain noisy but generally moved lower. 
The <= 1.5x target is still not met.\n\nNo temporary shim.\n\nNext optimization should look below row materialization again: load_from_packs\nstill decodes entire JSON packs for the requested refs, and tracked exact reads\nstill construct full rows even when callers only check presence in the current\nbench harness.\n```\n\n## Optimization 10: Encode Delta Packs From Borrowed Deltas\n\nDate: 2026-05-10\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged normal tracked-state delta staging to encode delta packs directly from\nborrowed `TrackedStateDeltaRef` values.\n\nBefore this change, `TrackedStateWriter::stage_delta` cloned every borrowed\ndelta into owned `TrackedStateDeltaEntry` objects, including schema/file/entity\nidentity, source commit/change ids, and timestamp strings. It then immediately\nencoded those owned entries into the delta pack.\n\nThe write path now uses:\n\n- `codec::encode_delta_pack_refs`;\n- `storage::stage_delta_pack_refs`;\n- `TrackedStateWriter::stage_delta` calling the borrowed staging path directly.\n\nThe old owned-entry encode/stage helper and `delta_entries_from_refs` were\nremoved. 
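The borrowed-encode shape can be sketched with simplified types; the byte layout below is a toy length-prefixed stand-in, not the real `LXTD` format:

```rust
/// Borrowed view over an authored delta; fields reference data the
/// writer already owns (ids and timestamps are elided here).
struct TrackedStateDeltaRef<'a> {
    entity_id: &'a str,
    snapshot: &'a [u8],
}

/// Encode a delta pack straight from borrowed refs, with no owned
/// intermediate entry layer.
fn encode_delta_pack_refs(deltas: &[TrackedStateDeltaRef<'_>]) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(&(deltas.len() as u32).to_le_bytes());
    for delta in deltas {
        out.extend_from_slice(&(delta.entity_id.len() as u32).to_le_bytes());
        out.extend_from_slice(delta.entity_id.as_bytes());
        out.extend_from_slice(&(delta.snapshot.len() as u32).to_le_bytes());
        out.extend_from_slice(delta.snapshot);
    }
    out
}
```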
Decode still materializes owned `TrackedStateDeltaEntry` values because\nreaders need owned entries after loading a persisted pack.\n\nNo delta-pack format change: the encoder still writes the same `LXTD`\nmagic/version/count and uses the same tracked key/value encoders.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nResult: passed.\n\n| row                                        | after median | criterion status           |\n| ------------------------------------------ | -----------: | -------------------------- |\n| `sqlite/write_root_all_rows/1k`            |    6.2844 ms | no change                  |\n| `sqlite/write_delta_10pct_updates/1k`      |    2.6592 ms | no change, noisy guardrail |\n| `sqlite/write_tombstone_10pct_deletes/1k`  |    2.3671 ms | no change, noisy guardrail |\n| `rocksdb/write_root_all_rows/1k`           |    5.3605 ms | no change                  |\n| `rocksdb/write_delta_10pct_updates/1k`     |    1.3421 ms | no change, noisy guardrail |\n| `rocksdb/write_tombstone_10pct_deletes/1k` |    1.2464 ms | no change                  |\n\nRoot-write medians moved lower than several previous samples, especially\nRocksDB, but Criterion still reports no change. Treat this as a production\nwrite-path allocation cleanup, not a proven target-closing win.\n\n### Storage\n\nNo storage change.\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: remove stale #[allow(dead_code)] from TrackedStateDeltaRef. Fixed.\nLOW: add direct delta-pack codec regression coverage for the borrowed encoder.\nFixed with delta_pack_ref_encoder_roundtrips_entries.\n\nRecommendation: keep. 
This is a clean production write-path allocation cut and\nremoves an artificial owned staging API.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine delta_pack_ref_encoder_roundtrips_entries --features storage-benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a borrowed-write cleanup.\n\nPrimary axis: root and delta writes. Structural win: normal tracked-state\ncommits no longer allocate a full owned delta-entry layer just to encode the\nsame bytes into a delta pack. This follows the same shape used by reference\nsystems that encode from stable in-memory views and materialize owned records\nonly when reading back from storage.\n\nTiming: Criterion-neutral on the 1k fixture. 
This does not close the remaining\nwrite_root_all_rows budget gap, but it removes an obvious allocation layer from\nthe production write path without changing storage semantics.\n\nNo temporary shim.\n\nNext optimization needs a bigger write-side cut, likely in commit_store staging,\nJSON pack staging, or the transaction write-set path, because delta-pack\nencoding itself is no longer cloning the tracked projection rows first.\n```\n\n## Optimization 11: Encode Change Packs From Existing Slices\n\nDate: 2026-05-10\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged `commit_store::codec::encode_change_pack` to accept\n`&[ChangeRef<'_>]` instead of a generic iterator that it immediately collected\ninto a temporary `Vec`.\n\nThe production caller already has authored changes in a `Vec`, so the encoder\ncan read the count from the slice and encode refs directly in order. This\nremoves one temporary collection from the commit-store write path.\n\nNo storage format change.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates)/1k'\n```\n\nResult: passed.\n\n| row                                    | after median | criterion status |\n| -------------------------------------- | -----------: | ---------------- |\n| `sqlite/write_root_all_rows/1k`        |    6.0937 ms | no change        |\n| `sqlite/write_delta_10pct_updates/1k`  |    2.5978 ms | no change        |\n| `rocksdb/write_root_all_rows/1k`       |    5.4208 ms | no change        |\n| `rocksdb/write_delta_10pct_updates/1k` |    1.3267 ms | no change        |\n\n### Storage\n\nNo storage change.\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n\nRecommendation: keep the code, but do not present it as a standalone\nbudget-moving win. 
It is a clean write-path allocation cleanup with no measured\nCriterion win.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine commit_store:: --features storage-benches\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a small encoder allocation cleanup.\n\nPrimary axis: commit-store write packing. Structural win: encode from the\nalready-shaped authored-change slice instead of materializing a second vector\njust to know the count.\n\nTiming: Criterion-neutral. This is not a budget-moving optimization by itself,\nbut it composes with the borrowed tracked delta-pack encoder and keeps the\nwrite path moving away from temporary owned collections.\n\nNo temporary shim.\n\nNext optimization needs a larger cut in JSON pack staging or transaction\nwrite-set application; the obvious per-row encoder clones in tracked and\ncommit-store delta packing have now been reduced.\n```\n\n## Optimization 12: Preserve JSON Pack Input Order Without Tree Sorting\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged `JsonStoreWriter::stage_batch` to keep unique encoded payloads in\nfirst-seen input order instead of inserting them into a `BTreeMap` sorted by\nhash.\n\nThe writer still returns refs in request order and still deduplicates repeated\npayload hashes. The new shape is:\n\n- `order: Vec<JsonRef>` for the caller-visible result;\n- `unique_encoded: Vec<EncodedJson>` for first-seen unique payloads;\n- `HashSet<[u8; 32]>` only for duplicate suppression.\n\nFor commit-pack placement, pack-local entries are selected from\n`unique_encoded.iter()` in input order. 
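The order-preserving dedupe can be sketched as follows (a toy 64-bit hash stands in for the writer's 32-byte payload hash, and refs are reduced to raw hash values):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

/// Toy content hash standing in for the writer's 32-byte payload hash.
fn content_hash(payload: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    payload.hash(&mut hasher);
    hasher.finish()
}

/// Deduplicate payloads while keeping first-seen input order:
/// `order` is the caller-visible ref per request slot, and
/// `unique_encoded` holds each unique payload once. The old shape
/// inserted into a BTreeMap and therefore iterated in hash order.
fn stage_batch(payloads: &[&[u8]]) -> (Vec<u64>, Vec<Vec<u8>>) {
    let mut seen: HashSet<u64> = HashSet::new();
    let mut order = Vec::with_capacity(payloads.len());
    let mut unique_encoded = Vec::new();
    for payload in payloads {
        let hash = content_hash(payload);
        order.push(hash); // request-order result refs
        if seen.insert(hash) {
            unique_encoded.push(payload.to_vec()); // first-seen order
        }
    }
    (order, unique_encoded)
}
```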
Direct out-of-band writes iterate the\nsame vector and skip pack-local payloads.\n\nThis intentionally changes pack entry order from hash-sorted to input order.\nPack lookup is hash-addressed and scans decoded entries by hash, so entry order\nis not part of the semantic contract. Lix has not shipped, and storage\naccounting stayed unchanged.\n\nAdded regression coverage for duplicate writer input: `[A, A, B]` returns\n`[refA, refA, refB]`, stores only the pack-local payloads, and hydrates both\nunique refs from the commit pack.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nResult: passed.\n\n| row                                        | after median | criterion status           |\n| ------------------------------------------ | -----------: | -------------------------- |\n| `sqlite/write_root_all_rows/1k`            |    5.9331 ms | improved                   |\n| `sqlite/write_delta_10pct_updates/1k`      |    2.6203 ms | no change, noisy guardrail |\n| `sqlite/write_tombstone_10pct_deletes/1k`  |    2.4790 ms | no change, noisy guardrail |\n| `rocksdb/write_root_all_rows/1k`           |    5.3019 ms | no change                  |\n| `rocksdb/write_delta_10pct_updates/1k`     |    1.3004 ms | no change                  |\n| `rocksdb/write_tombstone_10pct_deletes/1k` |    1.2178 ms | no change                  |\n\nSQLite root writes improved by Criterion. 
RocksDB root-write median moved lower\nthan recent committed samples but remains Criterion-neutral.\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status    |\n| -------------------------------------- | ------: | --------: | --------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |\n| Lix SQLite / inserted                  | 1112216 |    1112.2 | unchanged |\n| Lix SQLite / after create_version      | 1124576 |    1124.6 | unchanged |\n| Lix SQLite / after fast-forward merge  | 5324328 |    5324.3 | unchanged |\n| Lix SQLite / after divergent merge     | 5652176 |    5652.2 | unchanged |\n| Lix RocksDB / inserted                 | 1028557 |    1028.6 | unchanged |\n| Lix RocksDB / after create_version     | 1030457 |    1030.5 | unchanged |\n| Lix RocksDB / after fast-forward merge | 1195234 |    1195.2 | unchanged |\n| Lix RocksDB / after divergent merge    | 1576587 |    1576.6 | unchanged |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: add duplicate writer-input coverage for [A, A, B]. Fixed.\n\nRecommendation: keep. 
This is a real hot-path structural improvement with a\nmeasured SQLite root-write win, no storage accounting regression, and acceptable\npack-order semantics.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine json_store:: --features storage-benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a JSON pack write-path improvement.\n\nPrimary axis: root writes. Structural win: unique JSON-pointer payloads no\nlonger pay hash-sorted tree-map insertion and sorted iteration before being\npacked into a commit-local JSON pack. Dedupe remains hash-based, while physical\npack order follows deterministic input order.\n\nTiming: SQLite write_root_all_rows improved. RocksDB remains neutral but did\nnot regress materially. 
The root-write target is still above 1.5x raw SQLite.\n\nNo temporary shim.\n\nNext optimization should keep attacking write_root_all_rows, likely below the\ngeneric StorageWriteSet/backend batch application or by reducing JSON payload\nencoding work before commit-pack staging.\n```\n\n## Optimization 13: Use fixed JSON hash lookup keys and single-pack projection loads\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged JSON-store read lookup tables from ordered `Vec<u8>` keys to fixed\n`[u8; 32]` hash keys:\n\n- `JsonRef::as_hash_array()` exposes the existing hash without conversion.\n- `load_json_bytes_many_in_scope` deduplicates requested refs with\n  `HashMap<[u8; 32], usize>` while preserving first-seen backend get order in\n  the side vectors.\n- `load_from_packs` matches decoded pack entries with\n  `HashMap<[u8; 32], usize>` instead of allocating hash `Vec`s.\n\nAdded a tracked-state materialization fast path for the common root-read case\nwhere all projected JSON refs are local to one `(commit_id, pack_id)`. The fast\npath calls `json_store.load_bytes_many` once with the original `json_refs`\nslice and returns values directly in request order. Mixed-pack reads keep the\nprevious grouped fallback.\n\nThe shortcut checks that JSON refs and locality indexes remain in lockstep\nbefore selecting the fast path. 
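
The fixed-key dedup shape can be sketched as follows. `dedup_refs` is a hypothetical standalone helper, not the engine's API; the real dedup lives inside `load_json_bytes_many_in_scope` and feeds the backend get path.

```rust
use std::collections::HashMap;

// Requested refs are deduplicated on fixed [u8; 32] hash keys. get_order
// preserves first-seen backend get order; slots maps each request slot to
// its unique entry so results can be returned in request order.
fn dedup_refs(refs: &[[u8; 32]]) -> (Vec<[u8; 32]>, Vec<usize>) {
    let mut first_seen: HashMap<[u8; 32], usize> = HashMap::with_capacity(refs.len());
    let mut get_order: Vec<[u8; 32]> = Vec::new();
    let mut slots: Vec<usize> = Vec::with_capacity(refs.len());
    for r in refs {
        let idx = *first_seen.entry(*r).or_insert_with(|| {
            get_order.push(*r);
            get_order.len() - 1
        });
        slots.push(idx);
    }
    (get_order, slots)
}
```

Because the key is a fixed-size array rather than an ordered `Vec<u8>`, lookups hash 32 bytes with no heap allocation and no byte-wise ordering comparisons.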
Added unit coverage for same-pack duplicate\nslots and mixed-pack rejection.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows)/1k'\n```\n\nResult: passed.\n\n| row                                 | before median | after median | criterion status       |\n| ----------------------------------- | ------------: | -----------: | ---------------------- |\n| `raw_sqlite/get_many_exact_keys/1k` |     2.0599 ms |    2.0580 ms | reference              |\n| `raw_sqlite/scan_full_rows/1k`      |     1.1594 ms |    1.1722 ms | reference              |\n| `sqlite/get_many_exact_keys/1k`     |     3.9323 ms |    3.8132 ms | no change              |\n| `sqlite/scan_full_rows/1k`          |     3.6356 ms |    3.5962 ms | no change              |\n| `rocksdb/get_many_exact_keys/1k`    |     2.9911 ms |    2.8464 ms | no change              |\n| `rocksdb/scan_full_rows/1k`         |     2.5176 ms |    2.3906 ms | within noise threshold |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status                                 |\n| -------------------------------------- | ------: | --------: | -------------------------------------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference                              |\n| Lix SQLite / inserted                  | 1112216 |    1112.2 | unchanged                              |\n| Lix SQLite / after create_version      | 1124576 |    1124.6 | unchanged                              |\n| Lix SQLite / after fast-forward merge  | 5303776 |    5303.8 | accounting noise, lower than previous  |\n| Lix SQLite / after divergent merge     | 
5721904 |    5721.9 | accounting noise, higher than previous |\n| Lix RocksDB / inserted                 | 1028557 |    1028.6 | unchanged                              |\n| Lix RocksDB / after create_version     | 1030457 |    1030.5 | unchanged                              |\n| Lix RocksDB / after fast-forward merge | 1195234 |    1195.2 | unchanged                              |\n| Lix RocksDB / after divergent merge    | 1576588 |    1576.6 | unchanged                              |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: add an explicit json_refs/localities length check before the fast path;\nadd focused single-pack shortcut coverage. Fixed.\n\nRecommendation: keep. This is a clean read-side allocation/comparison cut with\nno storage-format change. Fixed hash keys are lookup-only and do not affect\nrequest/backend/result ordering.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo test -p lix_engine json_store:: --features storage-benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo test -p lix_engine tracked_state::materialization:: --features storage-benches\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a modest read-side locality cleanup.\n\nPrimary axis: exact-key and full-row reads. The structural win is removing\navoidable heap-key/order-map work from fixed-hash JSON lookup and avoiding\ntracked-state grouping allocations when all projected payloads are in one\ncommit-local pack.\n\nTiming: medians moved in the intended direction for both Lix backends on the\ntargeted read rows. 
Only RocksDB scan showed a statistically visible movement,\nand Criterion classified it within the noise threshold, so this should be\ntreated as a small supporting optimization rather than a budget-moving step.\n\nNo storage format change. No temporary shim.\n```\n\n## Optimization 14: Reuse trusted JSON refs during payload staging\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nThreaded precomputed JSON refs through JSON-store staging for callers that\nalready own the normalized JSON/ref invariant:\n\n- `NormalizedJsonRef` now has private fields and two constructors:\n  `new(normalized)` for ordinary callers and\n  `trusted_prehashed(normalized, json_ref)` for the explicit trusted path.\n- `JsonStoreWriter::stage_batch` uses the supplied trusted ref to encode JSON\n  without hashing the payload again, falling back to the existing hashing path\n  for normal callers.\n- Transaction commit passes `StageJson` refs for snapshot/metadata payloads.\n  `StageJson` computes the ref from the same normalized string during\n  transaction staging.\n- The physical storage benchmark root writer pairs payload strings with refs\n  from the already-built `Change` records so the benchmark no longer pays the\n  same duplicate hash.\n\nAdded direct JSON-store coverage for staging a trusted prehashed commit-pack\npayload, verifying the returned ref and hydrated bytes.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nResult: passed.\n\n| row                                           | after median | criterion status |\n| --------------------------------------------- | -----------: | ---------------- |\n| `raw_sqlite/write_root_all_rows/1k`           |    2.3853 ms | reference        |\n| 
`raw_sqlite/write_delta_10pct_updates/1k`     |    1.2667 ms | reference        |\n| `raw_sqlite/write_tombstone_10pct_deletes/1k` |    1.2330 ms | reference        |\n| `sqlite/write_root_all_rows/1k`               |    5.4166 ms | improved         |\n| `sqlite/write_delta_10pct_updates/1k`         |    2.5490 ms | no change        |\n| `sqlite/write_tombstone_10pct_deletes/1k`     |    2.6059 ms | improved         |\n| `rocksdb/write_root_all_rows/1k`              |    4.8746 ms | improved         |\n| `rocksdb/write_delta_10pct_updates/1k`        |    1.2758 ms | no change        |\n| `rocksdb/write_tombstone_10pct_deletes/1k`    |    1.2795 ms | noisy guardrail  |\n\nRerun command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nResult: passed.\n\n| row                                        | rerun median | criterion status        |\n| ------------------------------------------ | -----------: | ----------------------- |\n| `sqlite/write_root_all_rows/1k`            |    5.3105 ms | no change, lower median |\n| `sqlite/write_delta_10pct_updates/1k`      |    2.5652 ms | no change               |\n| `sqlite/write_tombstone_10pct_deletes/1k`  |    2.4195 ms | no change               |\n| `rocksdb/write_root_all_rows/1k`           |    4.7479 ms | no change, lower median |\n| `rocksdb/write_delta_10pct_updates/1k`     |    1.2128 ms | no change               |\n| `rocksdb/write_tombstone_10pct_deletes/1k` |    1.2283 ms | improved                |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status    |\n| -------------------------------------- | ------: | --------: | 
--------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |\n| Lix SQLite / inserted                  | 1112216 |    1112.2 | unchanged |\n| Lix SQLite / after create_version      | 1124576 |    1124.6 | unchanged |\n| Lix SQLite / after fast-forward merge  | 5324328 |    5324.3 | unchanged |\n| Lix SQLite / after divergent merge     | 5652176 |    5652.2 | unchanged |\n| Lix RocksDB / inserted                 | 1028557 |    1028.6 | unchanged |\n| Lix RocksDB / after create_version     | 1030457 |    1030.5 | unchanged |\n| Lix RocksDB / after fast-forward merge | 1195234 |    1195.2 | unchanged |\n| Lix RocksDB / after divergent merge    | 1576587 |    1576.6 | unchanged |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nInitial review:\nHIGH: none.\nMEDIUM: supplied-ref path was correctness-critical but only protected by\ndebug_assert; make the trusted prehashed path harder to construct accidentally.\nLOW: add direct json-store coverage and avoid pretending init eliminates a hash.\n\nFollow-up review:\nHIGH: none.\nMEDIUM: none.\nLOW: none.\nThe prior MEDIUM is resolved by private NormalizedJsonRef fields plus explicit\nnew/trusted_prehashed constructors. 
The intended production caller passes\nStageJson normalized bytes and the ref computed from those same bytes.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo test -p lix_engine json_store:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine transaction::commit:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a root-write optimization.\n\nPrimary axis: write_root_all_rows. The structural win removes a duplicate\nBLAKE3 hash over normalized JSON payloads at the JSON-store staging boundary\nwhen transaction staging has already computed the content ref.\n\nTiming: both SQLite and RocksDB root writes moved down, with Criterion\nimprovements in the first focused run and lower medians on rerun. Delta and\ntombstone rows are treated as guardrails; their medians were neutral to better\non rerun.\n\nNo storage format change. 
No temporary shim.\n```\n\n## Optimization 15: Move JSON content hash verification off hot reads\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged JSON payload decoding to split hot reads from explicit integrity\nverification:\n\n- `load_json_bytes_many_in_scope` uses `JsonHashCheck::TrustedHotRead` and no\n  longer rehashes every decoded payload.\n- Added non-hot `verify_json_bytes_many_in_scope`, which uses\n  `JsonHashCheck::Verify` and checks\n  `blake3(decoded_payload) == JsonRef`.\n- Pack, direct, and direct-fallback decode paths share the same internal loader\n  and thread the hash-check policy through to `decode_json_payload`.\n- Added `verified_batch_load_rejects_hash_mismatch`, which stores mismatched\n  bytes under a requested JSON ref key, confirms the trusted hot path returns\n  bytes without hashing, and confirms the verifier rejects the same row with a\n  hash mismatch.\n\nThis follows the reference-system shape: normal scans trust the storage layer\nand write-time content-address facts, while explicit integrity/fsck callers pay\nthe exhaustive hash cost. 
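
A minimal sketch of the policy split, assuming a stand-in hash in place of BLAKE3 and a single-payload decode in place of the real batched loader:

```rust
// Stand-in for the BLAKE3 content hash used by the real store.
fn stand_in_hash(bytes: &[u8]) -> u64 {
    bytes.iter().map(|&b| u64::from(b)).sum()
}

#[derive(Clone, Copy, PartialEq)]
enum JsonHashCheck {
    TrustedHotRead, // normal scans trust write-time content addressing
    Verify,         // integrity/fsck callers rehash every decoded payload
}

fn decode_json_payload(
    payload: Vec<u8>,
    expected: u64,
    check: JsonHashCheck,
) -> Result<Vec<u8>, &'static str> {
    if check == JsonHashCheck::Verify && stand_in_hash(&payload) != expected {
        return Err("json payload hash mismatch");
    }
    // TrustedHotRead returns the bytes without a per-payload hash pass.
    Ok(payload)
}
```

Threading the policy as a parameter keeps one decode path for pack, direct, and fallback reads while making the hot path's skipped check explicit at every call site.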
SQLite has explicit integrity checks, and\nSapling/Mononoke separates content-addressed storage from walker/validation\njobs.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows)/1k'\n```\n\nResult: passed.\n\n| row                                 | after median | criterion status        |\n| ----------------------------------- | -----------: | ----------------------- |\n| `raw_sqlite/get_many_exact_keys/1k` |    3.1921 ms | noisy reference         |\n| `raw_sqlite/scan_full_rows/1k`      |    1.8065 ms | noisy reference         |\n| `sqlite/get_many_exact_keys/1k`     |    3.4570 ms | no change, lower median |\n| `sqlite/scan_full_rows/1k`          |    3.5119 ms | no change, lower median |\n| `rocksdb/get_many_exact_keys/1k`    |    2.3411 ms | improved                |\n| `rocksdb/scan_full_rows/1k`         |    2.1430 ms | improved                |\n\nRerun command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                       | rerun median | criterion status                |\n| ----------------------------------------- | -----------: | ------------------------------- |\n| `sqlite/get_many_exact_keys/1k`           |    4.1679 ms | no change, noisy guardrail      |\n| `sqlite/scan_full_rows/1k`                |    3.5295 ms | no change, noisy guardrail      |\n| `sqlite/prefix_scan_schema/1k`            |    3.3561 ms | improved                        |\n| `sqlite/prefix_scan_schema_file_null/1k`  |    3.7939 ms | no change                       |\n| `rocksdb/get_many_exact_keys/1k`          |    2.3749 ms | no change, lower than pre-patch |\n| 
`rocksdb/scan_full_rows/1k`               |    2.2115 ms | no change, lower than pre-patch |\n| `rocksdb/prefix_scan_schema/1k`           |    2.0643 ms | improved                        |\n| `rocksdb/prefix_scan_schema_file_null/1k` |    2.1547 ms | improved                        |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status    |\n| -------------------------------------- | ------: | --------: | --------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |\n| Lix SQLite / inserted                  | 1112216 |    1112.2 | unchanged |\n| Lix SQLite / after create_version      | 1124576 |    1124.6 | unchanged |\n| Lix SQLite / after fast-forward merge  | 5324328 |    5324.3 | unchanged |\n| Lix SQLite / after divergent merge     | 5652176 |    5652.2 | unchanged |\n| Lix RocksDB / inserted                 | 1028557 |    1028.6 | unchanged |\n| Lix RocksDB / after create_version     | 1030457 |    1030.5 | unchanged |\n| Lix RocksDB / after fast-forward merge | 1195234 |    1195.2 | unchanged |\n| Lix RocksDB / after divergent merge    | 1576587 |    1576.6 | unchanged |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nInitial review:\nHIGH: none.\nMEDIUM: hot reads now point to an integrity-check/fsck policy, but JSON store\ndid not have a non-hot verifier entry point. Add one or keep a dedicated\nverification helper.\nLOW: make the decode API shape less likely to imply the ref is always checked.\n\nFollow-up review:\nHIGH: none.\nMEDIUM: none.\nLOW: none.\nThe prior MEDIUM is resolved by verify_json_bytes_many_in_scope and the shared\nJsonHashCheck policy. 
The mismatch regression test covers the dangerous case.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo test -p lix_engine json_store:: --features storage-benches\ncargo test -p lix_engine tracked_state::materialization:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a read-path policy and CPU optimization.\n\nPrimary axis: full-row and prefix scans, especially RocksDB. The structural\nwin removes a full BLAKE3 pass over every JSON payload from normal reads while\npreserving a non-hot verifier for fsck/integrity workflows.\n\nTiming: RocksDB exact reads and scans improved strongly in the first focused\nrun; RocksDB prefix scans improved again on rerun. SQLite was noisier, but\nprefix_scan_schema improved and full-scan medians stayed in the intended range.\n\nNo storage format change. No benchmark shape change. No temporary shim.\n```\n\n## Optimization 16: Fill JSON pack results directly\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nFused JSON pack decode with result placement:\n\n- Replaced `decode_json_pack(...) 
-> Vec<(JsonRef, Vec<u8>)>` plus a second\n  pass in `load_from_packs`.\n- Added `load_json_pack_values(...)`, which parses the pack directory and\n  writes matching decoded payloads directly into the caller's result slice\n  using the existing `wanted: HashMap<[u8; 32], usize>`.\n- Unrequested pack entries are skipped without payload decode.\n- Requested entries still flow through `decode_json_payload(..., hash_check)`,\n  so verified reads still hash-check requested refs.\n\nAdded `verified_pack_load_checks_only_requested_entries` to pin the intended\nboundary: a bad unrequested pack entry is ignored by a verified read for a good\nref, while requesting the bad ref fails with a hash mismatch.\n\nThis is the same projection/predicate-pushdown shape used by database storage\nengines: do not decode rows or payloads outside the request path.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                       | after median | criterion status           |\n| ----------------------------------------- | -----------: | -------------------------- |\n| `sqlite/get_many_exact_keys/1k`           |    3.5231 ms | no change, noisy guardrail |\n| `sqlite/scan_full_rows/1k`                |    3.1738 ms | no change                  |\n| `sqlite/prefix_scan_schema/1k`            |    3.0404 ms | improved                   |\n| `sqlite/prefix_scan_schema_file_null/1k`  |    3.4798 ms | no change                  |\n| `rocksdb/get_many_exact_keys/1k`          |    2.2726 ms | no change                  |\n| `rocksdb/scan_full_rows/1k`               |    2.0346 ms | no change                  |\n| `rocksdb/prefix_scan_schema/1k`           |    2.1176 ms | no change                  |\n| 
`rocksdb/prefix_scan_schema_file_null/1k` |    2.0395 ms | improved                   |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status    |\n| -------------------------------------- | ------: | --------: | --------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |\n| Lix SQLite / inserted                  | 1112216 |    1112.2 | unchanged |\n| Lix SQLite / after create_version      | 1124576 |    1124.6 | unchanged |\n| Lix SQLite / after fast-forward merge  | 5324328 |    5324.3 | unchanged |\n| Lix SQLite / after divergent merge     | 5652176 |    5652.2 | unchanged |\n| Lix RocksDB / inserted                 | 1028557 |    1028.6 | unchanged |\n| Lix RocksDB / after create_version     | 1030457 |    1030.5 | unchanged |\n| Lix RocksDB / after fast-forward merge | 1195234 |    1195.2 | unchanged |\n| Lix RocksDB / after divergent merge    | 1576587 |    1576.6 | unchanged |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nInitial review:\nHIGH: none.\nMEDIUM: none.\nLOW: add focused pack-local coverage for verifying requested entries while\nskipping unrequested entries. 
Fixed.\n\nFollow-up review:\nHIGH: none.\nMEDIUM: none.\nLOW: none.\nThe new pack test covers the intended boundary.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo test -p lix_engine json_store:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a small pack-read cleanup.\n\nPrimary axis: scan and prefix-scan reads. The structural win removes an\nintermediate vector of decoded pack entries and avoids decoding unrequested pack\npayloads. This compounds with the previous hot-read hash policy change.\n\nTiming: SQLite prefix_scan_schema and RocksDB prefix_scan_schema_file_null\nimproved by Criterion. Other targeted rows were neutral/noisy but did not show\na structural regression.\n\nNo storage format change. 
No temporary shim.\n```\n\n## Optimization 17: Borrow tracked-state delta slices\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged `TrackedStateWriter::stage_delta` to accept\n`&[TrackedStateDeltaRef<'_>]` instead of a generic `IntoIterator`:\n\n- Removed the internal `collect::<Vec<_>>()` before delta-pack encoding.\n- Updated production and test callers to borrow their already-built delta\n  vectors.\n- Kept `stage_projection_root` unchanged because it still needs to own and\n  reuse the collected deltas while building projection roots.\n\nThis lines the public staging helper up with the delta-pack encoder, which\nalready accepts borrowed slices and immediately writes owned encoded bytes into\nthe write set.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/write_root_all_rows/1k'\n```\n\nResult: passed.\n\n| row                                 | after median | criterion status        |\n| ----------------------------------- | -----------: | ----------------------- |\n| `raw_sqlite/write_root_all_rows/1k` |    2.3512 ms | reference/no change     |\n| `sqlite/write_root_all_rows/1k`     |    5.5212 ms | no change               |\n| `rocksdb/write_root_all_rows/1k`    |    4.6132 ms | no change, lower median |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status    |\n| -------------------------------------- | ------: | --------: | --------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |\n| Lix SQLite / inserted                  | 1112216 |    1112.2 | unchanged |\n| Lix SQLite / after create_version      | 1124576 |    1124.6 | 
unchanged |\n| Lix SQLite / after fast-forward merge  | 5324328 |    5324.3 | unchanged |\n| Lix SQLite / after divergent merge     | 5652176 |    5652.2 | unchanged |\n| Lix RocksDB / inserted                 | 1028557 |    1028.6 | unchanged |\n| Lix RocksDB / after create_version     | 1030457 |    1030.5 | unchanged |\n| Lix RocksDB / after fast-forward merge | 1195234 |    1195.2 | unchanged |\n| Lix RocksDB / after divergent merge    | 1576587 |    1576.6 | unchanged |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n\nRecommendation: keep, but log it as a small allocation/API cleanup rather than\na measured benchmark optimization. The slice API is clean because stage_delta\nonly synchronously encodes into owned write-set bytes, and the production\ncallers already hold the delta Vecs.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine transaction::commit:: --features storage-benches\ncargo test -p lix_engine live_state::context:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/write_root_all_rows/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a small root-write allocation cleanup.\n\nPrimary axis: write_root_all_rows. The structural win removes a redundant Vec\nallocation/copy on the tracked-state delta staging path after callers have\nalready built the delta Vec.\n\nTiming: Criterion reported no statistically significant change. 
This is kept\nbecause it simplifies the hot staging API and removes real production work,\nnot because it demonstrates a standalone benchmark win.\n\nNo storage format change. No temporary shim.\n```\n\n## Optimization 18: Decode ordered JSON packs without lookup maps\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nAdded a guarded ordered-pack read path in `json_store`:\n\n- When a read targets exactly one commit-local JSON pack and the requested\n  unique refs exactly match that pack's directory count and order, decode pack\n  entries directly into result slots.\n- If count or order does not match, clear any partially filled slots and fall\n  back to the existing hash lookup path.\n- Shared pack parsing now flows through `JsonPackLayout` and `JsonPackEntry`,\n  so the ordered path and fallback validate headers, directory length, payload\n  bounds, codec, and truncation the same way.\n\nThis avoids building a `HashMap<[u8; 32], usize>` and doing one hash lookup per\npack entry for the common full-scan shape where projection refs are already in\ncommit-pack order.\n\nAdded coverage for the ordered fast path, unordered fallback, and the invariant\nthat an order mismatch leaves the caller's result slots untouched before\nfallback.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                          | after median | criterion status        |\n| -------------------------------------------- | -----------: | ----------------------- |\n| `raw_sqlite/get_many_exact_keys/1k`          |    2.0223 ms | reference/no change     |\n| `raw_sqlite/scan_full_rows/1k`               |    1.1436 ms | reference/no change     |\n| 
`raw_sqlite/prefix_scan_schema/1k`           |    1.2741 ms | reference/no change     |\n| `raw_sqlite/prefix_scan_schema_file_null/1k` |    1.1876 ms | reference/no change     |\n| `sqlite/get_many_exact_keys/1k`              |    3.3477 ms | no change               |\n| `sqlite/scan_full_rows/1k`                   |    3.0526 ms | no change, lower median |\n| `sqlite/prefix_scan_schema/1k`               |    3.1708 ms | no change               |\n| `sqlite/prefix_scan_schema_file_null/1k`     |    3.1284 ms | no change               |\n| `rocksdb/get_many_exact_keys/1k`             |    2.3137 ms | noisy guardrail         |\n| `rocksdb/scan_full_rows/1k`                  |    2.0583 ms | no change               |\n| `rocksdb/prefix_scan_schema/1k`              |    2.0680 ms | no change               |\n| `rocksdb/prefix_scan_schema_file_null/1k`    |    2.0187 ms | no change               |\n\nSingle-pass rerun command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                       | rerun median | criterion status        |\n| ----------------------------------------- | -----------: | ----------------------- |\n| `sqlite/get_many_exact_keys/1k`           |    3.2251 ms | no change               |\n| `sqlite/scan_full_rows/1k`                |    3.1249 ms | no change               |\n| `sqlite/prefix_scan_schema/1k`            |    3.0630 ms | no change               |\n| `sqlite/prefix_scan_schema_file_null/1k`  |    3.1658 ms | no change               |\n| `rocksdb/get_many_exact_keys/1k`          |    2.3087 ms | no change               |\n| `rocksdb/scan_full_rows/1k`               |    2.0001 ms | no change, lower median |\n| `rocksdb/prefix_scan_schema/1k`           |    1.9933 ms | no change, lower median |\n| 
`rocksdb/prefix_scan_schema_file_null/1k` |    1.9861 ms | no change, lower median |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status           |\n| -------------------------------------- | ------: | --------: | ---------------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference        |\n| Lix SQLite / inserted                  | 1112216 |    1112.2 | unchanged        |\n| Lix SQLite / after create_version      | 1124576 |    1124.6 | unchanged        |\n| Lix SQLite / after fast-forward merge  | 5303776 |    5303.8 | page-level noise |\n| Lix SQLite / after divergent merge     | 5479976 |    5480.0 | page-level noise |\n| Lix RocksDB / inserted                 | 1028557 |    1028.6 | unchanged        |\n| Lix RocksDB / after create_version     | 1030457 |    1030.5 | unchanged        |\n| Lix RocksDB / after fast-forward merge | 1195234 |    1195.2 | unchanged        |\n| Lix RocksDB / after divergent merge    | 1576585 |    1576.6 | unchanged        |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nInitial review:\nHIGH: none.\nMEDIUM: none.\nLOW: duplicated pack directory parsing between the ordered path and fallback.\n\nFollow-up review:\nHIGH: none.\nMEDIUM: none.\nLOW: none.\nThe shared JsonPackLayout/JsonPackEntry helpers resolve the parser duplication.\n\nFinal single-pass review:\nHIGH: none.\nMEDIUM: none.\nLOW: none.\nCount mismatch returns before writes, order mismatch clears filled slots before\nfallback, and corruption/decode/hash failures still return Err instead of being\nconverted into fallback misses.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo test -p lix_engine json_store:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p 
lix_engine tracked_state::materialization:: --features storage-benches\ncargo test -p lix_engine tracked_state::context:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a small full-scan JSON pack read cleanup.\n\nPrimary axis: full-row and prefix scans from one commit-local JSON pack. The\nstructural win removes the lookup map from the ordered pack scan case while\npreserving the old path for unordered, duplicate, partial, or multi-pack reads.\n\nTiming: Criterion reports no statistically significant win, but the final\nsingle-pass rerun keeps guardrails neutral and shows lower RocksDB scan/prefix\nmedians. SQLite scan medians remain in the same improved band as the previous\npack-read cleanup.\n\nNo storage format change. 
No temporary shim.\n```\n\n## Optimization 19: Encode commit-store changes directly\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nReplaced the per-change FlatBuffer record inside commit-store change packs with\na direct binary `LXCH2` row:\n\n- `encode_change_ref` now writes length-prefixed fields directly:\n  `id`, canonical entity-id JSON-array text, `schema_key`, optional `file_id`,\n  optional 32-byte `snapshot_ref`, optional 32-byte `metadata_ref`, and\n  `created_at`.\n- `decode_change` uses the existing checked `ByteCursor` machinery with new\n  optional string and optional JSON-ref readers.\n- Removed the private FlatBuffer table/verifier scaffolding for commit-store\n  changes. There is no backwards shim because Lix has not shipped.\n\nThis matches the surrounding commit-store pack shape better: one pack-level\ncodec with row fields encoded in place, rather than building and copying a\nseparate tiny FlatBuffer for every authored change.\n\nAdded direct malformed-input coverage for empty optionals, invalid option tags,\ntruncated fixed-width refs, and trailing bytes.\n\n### Benchmarks\n\nCodec command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench storage -- 'storage/changelog/(encode_only|decode_only)/full_row/10k'\n```\n\nResult: passed.\n\n| row                                          | after median |\n| -------------------------------------------- | -----------: |\n| `storage/changelog/encode_only/full_row/10k` |    2.6886 ms |\n| `storage/changelog/decode_only/full_row/10k` |    2.7384 ms |\n\nFocused physical write command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nResult: passed.\n\n| row                                           | after median | criterion status        |\n| 
--------------------------------------------- | -----------: | ----------------------- |\n| `raw_sqlite/write_root_all_rows/1k`           |    2.4888 ms | reference/no change     |\n| `raw_sqlite/write_delta_10pct_updates/1k`     |    1.2667 ms | reference/no change     |\n| `raw_sqlite/write_tombstone_10pct_deletes/1k` |    1.1804 ms | noisy reference         |\n| `sqlite/write_root_all_rows/1k`               |    5.0831 ms | no change, lower median |\n| `sqlite/write_delta_10pct_updates/1k`         |    2.2437 ms | improved                |\n| `sqlite/write_tombstone_10pct_deletes/1k`     |    2.0885 ms | improved                |\n| `rocksdb/write_root_all_rows/1k`              |    4.5929 ms | no change, lower median |\n| `rocksdb/write_delta_10pct_updates/1k`        |    1.1566 ms | no change, lower median |\n| `rocksdb/write_tombstone_10pct_deletes/1k`    |    1.1288 ms | improved                |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status    |\n| -------------------------------------- | ------: | --------: | --------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |\n| Lix SQLite / inserted                  | 1054536 |    1054.5 | improved  |\n| Lix SQLite / after create_version      | 1071016 |    1071.0 | improved  |\n| Lix SQLite / after fast-forward merge  | 5279368 |    5279.4 | improved  |\n| Lix SQLite / after divergent merge     | 5430920 |    5430.9 | improved  |\n| Lix RocksDB / inserted                 |  964892 |     964.9 | improved  |\n| Lix RocksDB / after create_version     |  966733 |     966.7 | improved  |\n| Lix RocksDB / after fast-forward merge | 1125265 |    1125.3 | improved  |\n| Lix RocksDB / after divergent merge    | 1494060 |    1494.1 | improved  |\n\n### Review 
Loop\n\nReviewer pass:\n\n```text\nInitial review:\nHIGH: none.\nMEDIUM: none.\nLOW: add malformed-input coverage for the hand-rolled format, especially empty\noptionals, invalid option tags, truncated 32-byte refs, and trailing bytes.\n\nFollow-up review:\nHIGH: none.\nMEDIUM: none.\nLOW: truncated-ref test was not actually truncating inside the fixed-width ref.\n\nSecond follow-up review:\nHIGH: none.\nMEDIUM: none.\nLOW: none.\nThe truncated-ref test now advances to the snapshot_ref tag, truncates after\nonly 16 ref bytes, and asserts the specific `truncated ref` error.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo test -p lix_engine commit_store::codec:: --features storage-benches\ncargo test -p lix_engine commit_store::storage:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine commit_store:: --features storage-benches\ncargo test -p lix_engine transaction::commit:: --features storage-benches\ncargo test -p lix_engine tracked_state::materializer:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench storage -- 'storage/changelog/(encode_only|decode_only)/full_row/10k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a commit-store physical row codec cleanup.\n\nPrimary axis: write rows, especially delta/tombstone writes that stage compact\ncommit-store change packs. 
The structural win removes per-change FlatBuffer\nbuilder allocation and nested row blobs from the pack format.\n\nTiming: SQLite delta and tombstone writes improved by Criterion; RocksDB\ntombstone writes improved and root-write medians moved down on both backends.\nRoot writes remain over budget, so this is not the final write-side cut.\n\nStorage: inserted and merge-state byte counts improve on both SQLite and\nRocksDB because each change row carries less codec overhead.\n\nNo storage compatibility shim. No benchmark measurement change.\n```\n\n## Optimization 20: Stage generated bench roots as authored changes\n\n### Hypothesis\n\nThe physical storage benchmark helper for tracked roots was doing an extra\ncommit-store index scan to classify generated rows as authored or adopted\nbefore calling `stage_commit_draft`. That pre-pass does not match the\nproduction transaction boundary: production staging already separates authored\nrows from adopted rows before entering the commit store.\n\nThe helper-generated rows use commit-scoped fresh change ids\n(`tracked_change_id(commit_id, index)`, with a separate fresh append namespace),\nso every `write_tracked_root` row in these benchmark fixtures is authored.\nStaging those rows directly as authored changes keeps commit-store uniqueness\nvalidation intact while removing a redundant history scan from root/delta write\nmeasurement.\n\n### Change\n\n- Removed `load_change_index_entries` pre-classification from\n  `storage_bench.rs::write_tracked_root`.\n- Stage all helper-generated changes as authored changes and build tracked\n  deltas by zipping staged authored locators back to the original rows.\n- Kept commit-store validation in `stage_commit_draft`; no storage format\n  change and no validation weakening inside the commit store.\n\nDiscarded experiment: a physical `change_id -> locator` commit-store index\nimproved RocksDB delta writes but regressed SQLite writes and increased storage\nfootprint, so it was reverted 
before this optimization.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nResult: passed.\n\n| row                                           | after median | criterion status                |\n| --------------------------------------------- | -----------: | ------------------------------- |\n| `raw_sqlite/write_root_all_rows/1k`           |    2.4088 ms | reference                       |\n| `raw_sqlite/write_delta_10pct_updates/1k`     |    1.2788 ms | reference                       |\n| `raw_sqlite/write_tombstone_10pct_deletes/1k` |    1.2642 ms | reference                       |\n| `sqlite/write_root_all_rows/1k`               |    5.3781 ms | improved                        |\n| `sqlite/write_delta_10pct_updates/1k`         |    1.9665 ms | improved                        |\n| `sqlite/write_tombstone_10pct_deletes/1k`     |    1.8551 ms | improved                        |\n| `rocksdb/write_root_all_rows/1k`              |    4.6757 ms | improved                        |\n| `rocksdb/write_delta_10pct_updates/1k`        |    911.94 µs | noisy, below pre-index baseline |\n| `rocksdb/write_tombstone_10pct_deletes/1k`    |    893.40 µs | noisy, below pre-index baseline |\n\nCriterion marked RocksDB delta/tombstone as regressions only because the\nabandoned change-index experiment had just updated the local Criterion\nbaseline. 
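
That churn can be avoided by pinning named baselines instead of relying on Criterion's implicit most-recent run. `--save-baseline` and `--baseline` are standard Criterion CLI flags; `<filter>` below is a placeholder for a benchmark filter like the ones used elsewhere in this log:

```sh
# Snapshot current results under a stable name before an experiment.
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- '<filter>' --save-baseline pre_experiment

# After reverting the experiment, compare against the pinned snapshot
# instead of the experiment-polluted most-recent baseline.
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- '<filter>' --baseline pre_experiment
```
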
Compared to Optimization 19, both are lower medians.\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status    |\n| -------------------------------------- | ------: | --------: | --------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |\n| Lix SQLite / inserted                  | 1054536 |    1054.5 | unchanged |\n| Lix SQLite / after create_version      | 1071016 |    1071.0 | unchanged |\n| Lix SQLite / after fast-forward merge  | 5279392 |    5279.4 | unchanged |\n| Lix SQLite / after divergent merge     | 5570208 |    5570.2 | unchanged |\n| Lix RocksDB / inserted                 |  964892 |     964.9 | unchanged |\n| Lix RocksDB / after create_version     |  966733 |     966.7 | unchanged |\n| Lix RocksDB / after fast-forward merge | 1125265 |    1125.3 | unchanged |\n| Lix RocksDB / after divergent merge    | 1494060 |    1494.1 | unchanged |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n\nReviewer confirmed no `write_tracked_root` benchmark path legitimately needs\nadopted changes: row generators use fresh commit-scoped change ids, and the\nappend-child helper uses a separate fresh namespace. 
Ordering and timestamps\nremain preserved by zipping authored locators back to the original rows.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine commit_store::storage:: --features storage-benches\ncargo test -p lix_engine tracked_state::materializer:: --features storage-benches\ncargo test -p lix_engine transaction::commit:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a benchmark-path correction and write optimization.\n\nThe change removes work that production transaction staging does not do and\nkeeps the commit-store validation boundary intact. SQLite delta/tombstone writes\nmove under 2 ms in this run; root writes are modestly better but remain above\nthe 1.5x target. No storage change, no backward shim.\n```\n\n## Optimization 21: Load scan roots once\n\n### Hypothesis\n\n`TrackedStateStoreReader::scan_rows_at_commit` was using\n`projection_has_pending_deltas` as a routing check before scan execution. For\ndelta-pack-backed commits that helper walked the first-parent/delta chain, then\n`projection_entries_at_commit` walked it again to produce rows. 
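
In outline, the single-load routing looks like this; the types and helper names below are illustrative stand-ins, not the engine's real reader API:

```rust
// Illustrative sketch only: `Tree`, `CommitId`, and the helpers are
// stand-ins for the real reader types.
#[derive(Clone, Copy, PartialEq)]
struct CommitId(u32);

struct Tree;

impl Tree {
    /// Returns a materialized root only when one exists for the commit.
    fn load_root(&self, commit: CommitId) -> Option<Vec<(String, String)>> {
        if commit == CommitId(1) {
            Some(vec![("key".into(), "from-root".into())])
        } else {
            None
        }
    }
}

/// Stand-in for the delta/base projection walk, performed exactly once.
fn projection_entries_at_commit(_commit: CommitId) -> Vec<(String, String)> {
    vec![("key".into(), "from-projection".into())]
}

/// Load the target root once at scan entry: scan it directly when it
/// exists, otherwise fall through to the projection walk.
fn scan_rows_at_commit(tree: &Tree, commit: CommitId) -> Vec<(String, String)> {
    match tree.load_root(commit) {
        Some(root) => root,
        None => projection_entries_at_commit(commit),
    }
}
```

The fallback arm is reached only when no materialized root exists, so the delta/base walk happens at most once per scan.
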
For materialized\nroot commits, the route also checked root existence and then loaded the same\nroot again before scanning.\n\nLoading the target root once at scan entry should preserve the same routing:\nscan the root directly when it exists; otherwise let `projection_entries_at_commit`\nperform the delta/base walk exactly once.\n\n### Change\n\n- `scan_rows_at_commit` now calls `tree.load_root(commit_id)` once.\n- If a root exists, scan it directly, preserving the by-file index fast path and\n  fallback to the primary tree when no by-file root exists.\n- If no root exists, call `projection_entries_at_commit` directly.\n- Tombstone filtering, materialization, and request limit handling remain after\n  row collection as before.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                          | after median | criterion status |\n| -------------------------------------------- | -----------: | ---------------- |\n| `raw_sqlite/scan_keys_only/1k`               |    1.1587 ms | reference        |\n| `raw_sqlite/scan_headers_only/1k`            |    1.1213 ms | reference        |\n| `raw_sqlite/scan_full_rows/1k`               |    1.2689 ms | reference        |\n| `raw_sqlite/prefix_scan_schema/1k`           |    1.1597 ms | reference        |\n| `raw_sqlite/prefix_scan_schema_file_null/1k` |    1.1929 ms | reference        |\n| `sqlite/scan_keys_only/1k`                   |    2.1147 ms | improved         |\n| `sqlite/scan_headers_only/1k`                |    2.7995 ms | no change        |\n| `sqlite/scan_full_rows/1k`                   |    2.8024 ms | improved         |\n| `sqlite/prefix_scan_schema/1k`               |    2.7534 ms | improved         |\n| 
`sqlite/prefix_scan_schema_file_null/1k`     |    2.7506 ms | improved         |\n| `rocksdb/scan_keys_only/1k`                  |    1.2154 ms | improved         |\n| `rocksdb/scan_headers_only/1k`               |    1.2315 ms | improved         |\n| `rocksdb/scan_full_rows/1k`                  |    1.7649 ms | improved         |\n| `rocksdb/prefix_scan_schema/1k`              |    1.7814 ms | improved         |\n| `rocksdb/prefix_scan_schema_file_null/1k`    |    1.8046 ms | improved         |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status    |\n| -------------------------------------- | ------: | --------: | --------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |\n| Lix SQLite / inserted                  | 1054536 |    1054.5 | unchanged |\n| Lix SQLite / after create_version      | 1071016 |    1071.0 | unchanged |\n| Lix SQLite / after fast-forward merge  | 5279368 |    5279.4 | unchanged |\n| Lix SQLite / after divergent merge     | 5463856 |    5463.9 | unchanged |\n| Lix RocksDB / inserted                 |  964892 |     964.9 | unchanged |\n| Lix RocksDB / after create_version     |  966733 |     966.7 | unchanged |\n| Lix RocksDB / after fast-forward merge | 1125265 |    1125.3 | unchanged |\n| Lix RocksDB / after divergent merge    | 1494068 |    1494.1 | unchanged |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n\nReviewer confirmed the root-first routing is equivalent: the old pending-delta\npredicate already stopped immediately when the target commit had a root, while\ndelta-only and missing commits still go through the same projection/delta walk.\nBy-file fallback, tombstone filtering, and limit behavior are preserved.\n```\n\n### 
Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo test -p lix_engine tracked_state::context:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state::materializer:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a scan-path optimization.\n\nThis removes duplicated route-discovery reads without changing storage or scan\nsemantics. It improves most SQLite and RocksDB tracked scan rows, but SQLite\nfull/prefix scans remain above the 1.5x target and need deeper tree/materialize\nwork next.\n```\n\n## Optimization 22: Fast-path single delta-pack scans\n\n### Hypothesis\n\nThe JSON-pointer tracked scan fixtures usually read a commit with no materialized\nprojection root and exactly one tracked-state delta pack. The general overlay\npath inserts those delta entries into a `BTreeMap` and then collects the map\nback into sorted rows. 
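
A vector equivalent of that overlay can be sketched as follows; `Entry` and `single_pack_rows` are illustrative stand-ins rather than the engine's types, and the `matches_key` filtering step is omitted for brevity:

```rust
// Illustrative sketch: sort by (key, original ordinal), then collapse
// duplicate keys so the last write wins, then drop final tombstones.
#[derive(Clone, PartialEq, Debug)]
struct Entry {
    key: String,
    value: Option<String>, // None marks a tombstone
}

fn single_pack_rows(entries: Vec<Entry>, include_tombstones: bool) -> Vec<Entry> {
    // Pair each entry with its original ordinal so later writes sort last.
    let mut rows: Vec<(usize, Entry)> = entries.into_iter().enumerate().collect();
    rows.sort_by(|a, b| a.1.key.cmp(&b.1.key).then(a.0.cmp(&b.0)));
    // `dedup_by` removes the later of two adjacent equal-key entries, so
    // copy the later entry into the surviving slot first: last write wins.
    rows.dedup_by(|next, prev| {
        if next.1.key == prev.1.key {
            prev.1 = next.1.clone();
            true
        } else {
            false
        }
    });
    rows.into_iter()
        .map(|(_, entry)| entry)
        .filter(|entry| include_tombstones || entry.value.is_some())
        .collect()
}
```
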
For the single-pack/no-base case, that map is only doing\nthree things: key filtering, sorted order, and duplicate-key last-write-wins\ncollapse.\n\nA direct vector path can preserve those semantics with less per-row map work.\n\n### Change\n\n- Added `single_delta_pack_entries` for the `base_commit_id == None` and\n  `delta_commit_ids.len() == 1` case.\n- The fast path:\n  - filters with the same `request.matches_key` predicate as the existing\n    overlay path;\n  - sorts by `(TrackedStateKey, original ordinal)`;\n  - collapses duplicate keys by keeping the last ordinal;\n  - skips final tombstones when `include_tombstones` is false.\n- Added coverage for duplicate-key and tombstone behavior in a single delta\n  pack.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                          | after median | criterion status |\n| -------------------------------------------- | -----------: | ---------------- |\n| `raw_sqlite/scan_keys_only/1k`               |    1.1288 ms | reference        |\n| `raw_sqlite/scan_headers_only/1k`            |    1.1685 ms | reference        |\n| `raw_sqlite/scan_full_rows/1k`               |    1.1922 ms | reference        |\n| `raw_sqlite/prefix_scan_schema/1k`           |    1.2255 ms | reference        |\n| `raw_sqlite/prefix_scan_schema_file_null/1k` |    1.7144 ms | reference/noisy  |\n| `sqlite/scan_keys_only/1k`                   |    2.3765 ms | noisy regression |\n| `sqlite/scan_headers_only/1k`                |    2.2331 ms | improved         |\n| `sqlite/scan_full_rows/1k`                   |    2.6767 ms | within noise     |\n| `sqlite/prefix_scan_schema/1k`               |    2.7255 ms | no change        |\n| 
`sqlite/prefix_scan_schema_file_null/1k`     |    2.7038 ms | no change        |\n| `rocksdb/scan_keys_only/1k`                  |    1.2053 ms | no change        |\n| `rocksdb/scan_headers_only/1k`               |    1.1988 ms | improved         |\n| `rocksdb/scan_full_rows/1k`                  |    1.6527 ms | improved         |\n| `rocksdb/prefix_scan_schema/1k`              |    1.6875 ms | improved         |\n| `rocksdb/prefix_scan_schema_file_null/1k`    |    1.6230 ms | improved         |\n\nSQLite rerun:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\n| row                                      | rerun median | criterion status |\n| ---------------------------------------- | -----------: | ---------------- |\n| `sqlite/scan_keys_only/1k`               |    2.0399 ms | improved         |\n| `sqlite/scan_headers_only/1k`            |    2.1180 ms | no change        |\n| `sqlite/scan_full_rows/1k`               |    2.8050 ms | no change        |\n| `sqlite/prefix_scan_schema/1k`           |    2.7217 ms | no change        |\n| `sqlite/prefix_scan_schema_file_null/1k` |    2.6412 ms | no change        |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status    |\n| -------------------------------------- | ------: | --------: | --------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |\n| Lix SQLite / inserted                  | 1054536 |    1054.5 | unchanged |\n| Lix SQLite / after create_version      | 1071016 |    1071.0 | unchanged |\n| Lix SQLite / after fast-forward merge  | 5279392 |    5279.4 | unchanged |\n| Lix SQLite / 
after divergent merge     | 5586736 |    5586.7 | unchanged |\n| Lix RocksDB / inserted                 |  964892 |     964.9 | unchanged |\n| Lix RocksDB / after create_version     |  966733 |     966.7 | unchanged |\n| Lix RocksDB / after fast-forward merge | 1125265 |    1125.3 | unchanged |\n| Lix RocksDB / after divergent merge    | 1494068 |    1494.1 | unchanged |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n\nReviewer confirmed the fast path matches the old BTreeMap overlay semantics:\nsame key-only filtering, sorted key order, last duplicate wins, final tombstone\nremoval when tombstones are excluded, and limits remain above materialization.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo test -p lix_engine tracked_state::context:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state::materializer:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a narrow delta-pack scan optimization.\n\nThe best movement is headers/full rows and RocksDB scans; SQLite full/prefix\nrows remain mostly around the same medians as Optimization 21, with keys-only\nimproving on rerun. 
No storage change.\n```\n\n## Optimization 23: Encode delta packs directly into the output buffer\n\n### Hypothesis\n\n`encode_delta_pack_refs` still allocated a temporary encoded key `Vec` and\ntemporary encoded value `Vec` for every tracked delta, only to copy both into\nthe delta pack as length-prefixed sections. Reference storage systems avoid\nper-row temporary records on hot write paths when the final output buffer can be\nwritten directly.\n\nWriting each key/value section directly into the pack and backpatching the\nsection length should preserve the binary format while removing per-delta\nallocation/copy work.\n\n### Change\n\n- Split `encode_key_ref` and `encode_value_ref` into allocation-returning public\n  helpers plus private `append_key_ref` / `append_value_ref` buffer writers.\n- Changed `encode_delta_pack_refs` to write key/value sections directly via\n  `push_sized_section`.\n- `decode_delta_pack` is unchanged; the encoded wire shape remains\n  length-prefixed key bytes followed by length-prefixed value bytes.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nResult: passed.\n\n| row                                           | after median | criterion status        |\n| --------------------------------------------- | -----------: | ----------------------- |\n| `raw_sqlite/write_root_all_rows/1k`           |    2.4262 ms | reference               |\n| `raw_sqlite/write_delta_10pct_updates/1k`     |    1.3524 ms | reference               |\n| `raw_sqlite/write_tombstone_10pct_deletes/1k` |    1.2769 ms | reference               |\n| `sqlite/write_root_all_rows/1k`               |    4.9586 ms | no change, lower median |\n| `sqlite/write_delta_10pct_updates/1k`         |    1.9208 ms | no change               |\n| 
`sqlite/write_tombstone_10pct_deletes/1k`     |    2.0990 ms | noisy regression        |\n| `rocksdb/write_root_all_rows/1k`              |    4.2122 ms | no change, lower median |\n| `rocksdb/write_delta_10pct_updates/1k`        |    880.26 µs | no change               |\n| `rocksdb/write_tombstone_10pct_deletes/1k`    |    836.97 µs | no change, lower median |\n\nSQLite rerun:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\n| row                                       | rerun median | criterion status |\n| ----------------------------------------- | -----------: | ---------------- |\n| `sqlite/write_root_all_rows/1k`           |    5.0104 ms | no change        |\n| `sqlite/write_delta_10pct_updates/1k`     |    1.9488 ms | no change        |\n| `sqlite/write_tombstone_10pct_deletes/1k` |    1.7955 ms | improved         |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status    |\n| -------------------------------------- | ------: | --------: | --------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |\n| Lix SQLite / inserted                  | 1054536 |    1054.5 | unchanged |\n| Lix SQLite / after create_version      | 1071016 |    1071.0 | unchanged |\n| Lix SQLite / after fast-forward merge  | 5279368 |    5279.4 | unchanged |\n| Lix SQLite / after divergent merge     | 5430920 |    5430.9 | unchanged |\n| Lix RocksDB / inserted                 |  964892 |     964.9 | unchanged |\n| Lix RocksDB / after create_version     |  966733 |     966.7 | unchanged |\n| Lix RocksDB / after fast-forward merge | 1125265 |    1125.3 | unchanged |\n| Lix RocksDB 
/ after divergent merge    | 1494068 |    1494.1 | unchanged |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n\nReviewer confirmed the binary shape is compatible: the append helpers preserve\nfield order and primitive encoders, while `push_sized_section` backpatches the\nsame four-byte length consumed by `decode_delta_pack`.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo test -p lix_engine tracked_state::codec:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state::context:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a write-path allocation cleanup.\n\nThis is a structural writer improvement with neutral-to-better medians on rerun\nand no storage format change. 
It does not close the remaining root-write gap by\nitself.\n```\n\n## Optimization 24: Compact same-commit delta locators\n\n### Hypothesis\n\nTracked-state delta packs repeat `source_commit_id` inside every row locator,\neven though ordinary authored deltas point back to the delta pack's own commit.\nThis is duplicated physical layout metadata: the storage key already identifies\nthe delta pack commit, and the pack can carry that identity once in its header.\n\nReference storage layouts avoid repeating page/segment identity in every record\nwhen a compact local locator can refer to the owning container. For Lix, a\ndelta-pack-local `SAME_COMMIT` locator tag should shrink write bytes, scan decode\nbytes, and storage footprint while still preserving full locators for adopted\ncross-commit changes.\n\n### Change\n\n- Bumped tracked-state delta packs from version 1 to version 2 with no backward\n  shim; Lix has not shipped.\n- Delta packs now store `commit_id` once in the pack header.\n- Delta values encode locator source as:\n  - `SAME_COMMIT`: no repeated source commit id, decoded from the pack header.\n  - `FULL`: explicit source commit id for adopted/cross-commit locators.\n- Tree value encoding is unchanged.\n- `storage::load_delta_pack` validates the embedded pack commit id against the\n  storage key before returning entries, so swapped/corrupt packs cannot silently\n  rewrite same-commit locators.\n- Tests cover decoded pack identity plus same-commit and full locator roundtrip.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                           | after median | criterion status        |\n| 
--------------------------------------------- | -----------: | ----------------------- |\n| `raw_sqlite/write_root_all_rows/1k`           |    2.4003 ms | reference               |\n| `raw_sqlite/write_delta_10pct_updates/1k`     |    1.2992 ms | reference               |\n| `raw_sqlite/write_tombstone_10pct_deletes/1k` |    1.2201 ms | reference               |\n| `raw_sqlite/scan_keys_only/1k`                |    1.1429 ms | reference               |\n| `raw_sqlite/scan_full_rows/1k`                |    1.1458 ms | reference               |\n| `raw_sqlite/prefix_scan_schema_file_null/1k`  |    1.2437 ms | reference               |\n| `sqlite/write_root_all_rows/1k`               |    4.8006 ms | no change, lower median |\n| `sqlite/write_delta_10pct_updates/1k`         |    2.0113 ms | no change               |\n| `sqlite/write_tombstone_10pct_deletes/1k`     |    1.7745 ms | no change, lower median |\n| `sqlite/scan_keys_only/1k`                    |    2.1931 ms | noisy regression        |\n| `sqlite/scan_full_rows/1k`                    |    2.6153 ms | no change, lower median |\n| `sqlite/prefix_scan_schema_file_null/1k`      |    3.0283 ms | no change               |\n| `rocksdb/write_root_all_rows/1k`              |    4.5488 ms | no change               |\n| `rocksdb/write_delta_10pct_updates/1k`        |    883.15 µs | no change               |\n| `rocksdb/write_tombstone_10pct_deletes/1k`    |    830.45 µs | no change               |\n| `rocksdb/scan_keys_only/1k`                   |    1.1580 ms | no change               |\n| `rocksdb/scan_full_rows/1k`                   |    1.6353 ms | no change               |\n| `rocksdb/prefix_scan_schema_file_null/1k`     |    1.8247 ms | no change               |\n\nSQLite scan rerun:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\n| row                       
               | rerun median | criterion status |\n| ---------------------------------------- | -----------: | ---------------- |\n| `sqlite/scan_keys_only/1k`               |    1.9420 ms | improved         |\n| `sqlite/scan_full_rows/1k`               |    2.5912 ms | no change        |\n| `sqlite/prefix_scan_schema_file_null/1k` |    2.6909 ms | no change        |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status    |\n| -------------------------------------- | ------: | --------: | --------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |\n| Lix SQLite / inserted                  | 1013336 |    1013.3 | improved  |\n| Lix SQLite / after create_version      | 1029816 |    1029.8 | improved  |\n| Lix SQLite / after fast-forward merge  | 5230192 |    5230.2 | improved  |\n| Lix SQLite / after divergent merge     | 5385840 |    5385.8 | improved  |\n| Lix RocksDB / inserted                 |  925304 |     925.3 | improved  |\n| Lix RocksDB / after create_version     |  927146 |     927.1 | improved  |\n| Lix RocksDB / after fast-forward merge | 1085778 |    1085.8 | improved  |\n| Lix RocksDB / after divergent merge    | 1454922 |    1454.9 | improved  |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nInitial review:\nHIGH: none.\nMEDIUM: none.\nLOW: delta pack embeds commit_id but storage did not check it against the key;\nswapped/corrupt packs could produce wrong SAME_COMMIT locators.\n\nFollow-up review:\nHIGH: none.\nMEDIUM: none.\nLOW: none.\nThe LOW is resolved by returning the decoded pack commit id from the codec and\nchecking it in storage::load_delta_pack before exposing entries.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo test -p lix_engine tracked_state::codec:: --features 
storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state::context:: --features storage-benches\ncargo test -p lix_engine tracked_state::materializer:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a physical layout optimization.\n\nPrimary axis: storage footprint and delta-pack decode bytes. Timing is mostly\nneutral with lower medians in the hot write rows and a cleaner SQLite keys-only\nrerun. Storage improves on both SQLite and RocksDB for inserted and merge\nstates. No backward shim.\n```\n\n## Optimization 25: Dictionary-code delta-pack key prefixes\n\n### Hypothesis\n\nTracked-state delta packs repeat the same `schema_key` and `file_id` for every\nJSON-pointer row. The v2 delta-pack key format stored that full prefix inside\neach entry key even though the pack is already a locality unit. A pack-level\nprefix table should remove repeated key bytes while keeping decoded keys exactly\nthe same shape for downstream ordering and filtering.\n\nThis follows the same first-principles shape as page/segment dictionaries in\nsystems like DuckDB/Turso/Dolt-style physical layouts: pay one compact table per\nstorage unit, then store small indexes in repeated records.\n\n### Change\n\n- Bumped tracked delta packs from version 2 to version 3. 
No backward shim.\n- Added a pack-level key-prefix dictionary of `(schema_key, file_id)`.\n- Encoded each delta key as `prefix_index + entity_id`.\n- Kept decode output as full `TrackedStateKey` values so scan collapse,\n  ordering, and prefix filtering continue to operate on the existing key type.\n- Added coverage that verifies the prefix table is written for mixed file\n  prefixes and corrupt out-of-bounds prefix indexes reject.\n- Avoided a `HashMap` prefix-index path after a focused rerun showed write\n  regressions; the kept version uses the small prefix vector plus per-delta\n  prefix indexes built during the prefix pass.\n\n### Benchmarks\n\nFocused scan/write command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite)/smoke/(write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                           |    median | criterion status |\n| --------------------------------------------- | --------: | ---------------- |\n| `raw_sqlite/scan_keys_only/1k`                | 1.2058 ms | reference        |\n| `raw_sqlite/scan_full_rows/1k`                | 1.1330 ms | reference        |\n| `raw_sqlite/prefix_scan_schema_file_null/1k`  | 1.1647 ms | reference        |\n| `raw_sqlite/write_delta_10pct_updates/1k`     | 1.2337 ms | reference        |\n| `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.2127 ms | reference        |\n| `sqlite/scan_keys_only/1k`                    | 1.9801 ms | improved         |\n| `sqlite/scan_full_rows/1k`                    | 2.5814 ms | improved         |\n| `sqlite/prefix_scan_schema_file_null/1k`      | 2.6188 ms | no change        |\n\nFinal write-focused command after replacing the regressing `HashMap` prefix\nindexer:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 
'json_pointer_physical/sqlite/smoke/(write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nResult: passed.\n\n| row                                       |    median | criterion status |\n| ----------------------------------------- | --------: | ---------------- |\n| `sqlite/write_delta_10pct_updates/1k`     | 2.2536 ms | no change        |\n| `sqlite/write_tombstone_10pct_deletes/1k` | 2.2861 ms | no change        |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | vs Optimization 24 |\n| -------------------------------------- | ------: | --------: | -----------------: |\n| raw SQLite / inserted                  | 1692456 |    1692.5 |          reference |\n| Lix SQLite / inserted                  |  996856 |     996.9 |             -16480 |\n| Lix SQLite / after create_version      | 1013336 |    1013.3 |             -16480 |\n| Lix SQLite / after fast-forward merge  | 5201424 |    5201.4 |             -28768 |\n| Lix SQLite / after divergent merge     | 5361240 |    5361.2 |             -24600 |\n| Lix RocksDB / inserted                 |  912032 |     912.0 |             -13272 |\n| Lix RocksDB / after create_version     |  913889 |     913.9 |             -13257 |\n| Lix RocksDB / after fast-forward merge | 1073314 |    1073.3 |             -12464 |\n| Lix RocksDB / after divergent merge    | 1442794 |    1442.8 |             -12128 |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n\nThe v3 shape writes a header-level key-prefix table, then each entry key stores\nonly `prefix_index + entity_id`. Decode reconstructs full `TrackedStateKey`s, so\ndownstream ordering/filter behavior still sees ordinary full keys. Corrupt\nprefix indexes and invalid prefix file-id tags reject. 
Empty packs work\nnaturally with zero prefixes and zero entries.\n```\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo test -p lix_engine tracked_state::codec:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite)/smoke/(write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a storage-layout optimization.\n\nPrimary axis: bytes per row. The JSON-pointer workload now pays for\n`json_pointer + NULL file_id` once per delta pack instead of once per delta row.\nThe win is modest but repeatable across SQLite and RocksDB accounting, and the\nwrite guardrail is neutral after removing the HashMap indexer.\n\nThis does not close the remaining <=1.5x gap by itself. It is a clean physical\nlayout step that reduces repeated key bytes without changing the logical scan\nsurface.\n```\n\n## Optimization 26: Probe delta-pack existence without loading blobs\n\n### Hypothesis\n\nUnmaterialized tracked commits are served from delta packs until a projection\nroot exists. 
The scan planner only needs to know whether each first-parent\ncommit has a delta pack, but it was calling `load_delta_pack`, which fetched and\ndecoded the whole pack before the result-producing path fetched and decoded it\nagain. This violates the same locality rule used by storage engines: use an\nindex/key-existence probe to plan, and only read the value blob when the plan\nneeds row data.\n\n### Change\n\n- Added `tracked_state::storage::delta_pack_exists`.\n- Implemented it with `StorageReader::exists_many` against the delta-pack\n  namespace/key, so it does not fetch delta-pack bytes.\n- Replaced the planning-time `load_delta_pack(...).is_some()` in\n  `delta_commit_ids_since_projection_root` with the key-only existence probe.\n- Kept all result-producing paths on `load_delta_pack`, so corrupt or\n  identity-mismatched packs still fail before scans, diffs, point reads, or\n  existence checks return results.\n\n### Benchmarks\n\nFocused read command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|exists_many_exact_keys)/1k'\n```\n\nResult: passed.\n\nRepresentative rerun medians:\n\n| row                                          |    median | criterion status        |\n| -------------------------------------------- | --------: | ----------------------- |\n| `raw_sqlite/scan_keys_only/1k`               | 1.1162 ms | reference               |\n| `raw_sqlite/scan_headers_only/1k`            | 1.1543 ms | reference               |\n| `raw_sqlite/scan_full_rows/1k`               | 1.2260 ms | reference               |\n| `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.5672 ms | noisy reference         |\n| `sqlite/get_many_exact_keys/1k`              | 2.9404 ms | no change               |\n| `sqlite/exists_many_exact_keys/1k`           | 1.9841 ms | no change    
           |
| `sqlite/scan_keys_only/1k`                   | 1.8194 ms | no change, lower median |
| `sqlite/scan_headers_only/1k`                | 1.8309 ms | no change               |
| `sqlite/scan_full_rows/1k`                   | 2.3452 ms | no change, lower median |
| `sqlite/prefix_scan_schema_file_null/1k`     | 2.3869 ms | no change, lower median |
| `rocksdb/get_many_exact_keys/1k`             | 2.0017 ms | no change               |
| `rocksdb/exists_many_exact_keys/1k`          | 1.1156 ms | no change               |
| `rocksdb/scan_keys_only/1k`                  | 835.73 µs | no change, lower median |
| `rocksdb/scan_headers_only/1k`               | 878.24 µs | improved                |
| `rocksdb/scan_full_rows/1k`                  | 1.4314 ms | no change               |
| `rocksdb/prefix_scan_schema_file_null/1k`    | 1.3956 ms | within noise threshold  |

Broad scan/write guardrail command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

Result: passed.

Notable medians from the broad run:

| row                                       |    median | criterion status       |
| ----------------------------------------- | --------: | ---------------------- |
| `sqlite/write_root_all_rows/1k`           | 5.2065 ms | no change              |
| `sqlite/write_delta_10pct_updates/1k`     | 1.9378 ms | improved               |
| `sqlite/write_tombstone_10pct_deletes/1k` | 1.8930 ms | within noise threshold |
| `sqlite/scan_keys_only/1k`                | 1.8567 ms | no change              |
| `sqlite/scan_headers_only/1k`             | 1.7894 ms | no change              |
| `sqlite/scan_full_rows/1k`                | 2.4152 ms | no change              |
| 
`sqlite/prefix_scan_schema_file_null/1k`  | 2.4417 ms | no change              |
| `rocksdb/scan_keys_only/1k`               | 914.58 µs | improved               |
| `rocksdb/scan_full_rows/1k`               | 1.4490 ms | improved               |

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed. No format or write-path storage change.

1k rows:

| row                                    |   bytes | bytes/row |
| -------------------------------------- | ------: | --------: |
| raw SQLite / inserted                  | 1692456 |    1692.5 |
| Lix SQLite / inserted                  |  996856 |     996.9 |
| Lix SQLite / after create_version      | 1013336 |    1013.3 |
| Lix SQLite / after fast-forward merge  | 5205520 |    5205.5 |
| Lix SQLite / after divergent merge     | 5361192 |    5361.2 |
| Lix RocksDB / inserted                 |  912032 |     912.0 |
| Lix RocksDB / after create_version     |  913889 |     913.9 |
| Lix RocksDB / after fast-forward merge | 1073314 |    1073.3 |
| Lix RocksDB / after divergent merge    | 1442794 |    1442.8 |

### Review Loop

Reviewer pass:

```text
Initial review:
HIGH: none.
MEDIUM: delta_pack_exists used get_values, so it avoided decode CPU but still
fetched the blob. Use StorageReader::exists_many as a true key-only probe.

Follow-up review:
HIGH: none.
MEDIUM: none.
The prior MEDIUM is resolved. delta_pack_exists now uses StorageReader::exists_many
against the delta-pack namespace/key. 
Result-producing paths still load and\ndecode packs, so corrupt or identity-mismatched packs still fail before results\nare produced.\n```\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo test -p lix_engine tracked_state::context:: --features storage-benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|exists_many_exact_keys)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a read-path physical access optimization.\n\nThe structural win is precise: first-parent planning now asks the backend for\nkey existence instead of fetching a delta-pack value blob it will decode later.\nThis follows the reference-system pattern of separating metadata/index probes\nfrom value materialization.\n\nThe strongest observed impact is on unmaterialized single-delta scans, where\nSQLite scan medians moved from the post-Optimization-25 range of roughly\n1.94-2.69 ms down to roughly 1.82-2.39 ms in focused runs, and RocksDB scan\nmedians moved below or near the raw SQLite reference for key/header scans.\n\nThis does not change storage format, write layout, or corruption semantics for\nvisible reads. 
It does not close the remaining full-row SQLite gap by itself.
```

## Optimization 27: Decode delta-pack sections without temporary copies

Date: 2026-05-11

Commit: this entry is committed with the optimization

### Change

`tracked_state::codec::decode_delta_pack` now parses each sized delta key and
value section directly from the pack byte slice:

- Replaced `read_sized_bytes(...)?` followed by borrowing the temporary `Vec`
  with `read_sized_slice(...)?`.
- Kept the existing `decode_delta_key` and `decode_delta_value` parsers, so the
  section format, cursor advancement, truncation checks, and trailing-byte
  validation are unchanged.

This removes two heap allocations and two payload copies per delta-pack row on
unmaterialized tracked-state reads.

### Benchmarks

Focused scan command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys)/1k'
```

Result: passed.

Representative medians:

| row                                       |    median | criterion status |
| ----------------------------------------- | --------: | ---------------- |
| `sqlite/get_many_exact_keys/1k`           | 2.9171 ms | no change        |
| `sqlite/scan_keys_only/1k`                | 1.8631 ms | no change        |
| `sqlite/scan_headers_only/1k`             | 1.7398 ms | improved         |
| `sqlite/scan_full_rows/1k`                | 2.3693 ms | no change        |
| `sqlite/prefix_scan_schema_file_null/1k`  | 2.3791 ms | no change        |
| `rocksdb/get_many_exact_keys/1k`          | 1.9810 ms | no change        |
| `rocksdb/scan_keys_only/1k`               | 849.54 µs | no change        |
| `rocksdb/scan_headers_only/1k`            | 840.90 µs | no change        |
| `rocksdb/scan_full_rows/1k`               | 1.3687 ms | improved         |
| 
`rocksdb/prefix_scan_schema_file_null/1k` | 1.3407 ms | no change        |

Broad scan/write guardrail command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

Result: passed. Raw SQLite reference rows were noisy in this run, but Lix write
rows were neutral, SQLite scan medians stayed in the improved band, and RocksDB
tombstone writes improved.

Follow-up scan rerun:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

Result: passed.

| row                                       | rerun median | criterion status |
| ----------------------------------------- | -----------: | ---------------- |
| `sqlite/scan_headers_only/1k`             |    1.7979 ms | no change        |
| `sqlite/scan_full_rows/1k`                |    2.3657 ms | no change        |
| `sqlite/prefix_scan_schema_file_null/1k`  |    2.3208 ms | no change        |
| `rocksdb/scan_headers_only/1k`            |    817.34 µs | improved         |
| `rocksdb/scan_full_rows/1k`               |    1.3446 ms | improved         |
| `rocksdb/prefix_scan_schema_file_null/1k` |    1.3315 ms | no change        |

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed. 
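To make the borrowed-slice decode in this entry's Change section concrete, here is a minimal sketch. The helper names mirror `read_sized_slice` and `read_sized_bytes` from the entry, and the 4-byte little-endian section length matches the pack format described elsewhere in this log, but the exact signatures, cursor handling, and error types in `lix_engine` are assumptions for illustration.

```rust
// Minimal sketch of parsing length-prefixed sections from a pack buffer.
// `read_sized_slice` borrows the section from the input; `read_sized_bytes`
// is the replaced copying pattern.

/// Reads a 4-byte little-endian length prefix at `*cursor`, then returns the
/// next `len` bytes as a borrowed slice and advances the cursor past them.
/// Returns `None` on truncation or overflow instead of panicking.
fn read_sized_slice<'a>(buf: &'a [u8], cursor: &mut usize) -> Option<&'a [u8]> {
    let len_bytes = buf.get(*cursor..cursor.checked_add(4)?)?;
    let len = u32::from_le_bytes(len_bytes.try_into().ok()?) as usize;
    let start = *cursor + 4;
    let end = start.checked_add(len)?; // overflow check
    let section = buf.get(start..end)?; // truncation check
    *cursor = end;
    Some(section)
}

/// The replaced pattern: identical parsing, but it copies the section into an
/// owned `Vec` that only lives long enough to be decoded and dropped.
fn read_sized_bytes(buf: &[u8], cursor: &mut usize) -> Option<Vec<u8>> {
    read_sized_slice(buf, cursor).map(|s| s.to_vec())
}

fn main() {
    // One sized section: length 3, payload b"abc".
    let pack = [3u8, 0, 0, 0, b'a', b'b', b'c'];
    let mut cursor = 0;

    let section = read_sized_slice(&pack, &mut cursor).expect("valid section");
    assert_eq!(section, b"abc".as_slice());
    assert_eq!(cursor, 7); // advanced past prefix + payload

    // A declared length larger than the remaining bytes is rejected.
    let truncated = [5u8, 0, 0, 0, b'x'];
    assert_eq!(read_sized_slice(&truncated, &mut 0), None);
}
```

The borrowed variant returns a slice tied to the pack buffer's lifetime, so each section is parsed in place and the truncation behavior is unchanged; the owned variant shows the pattern this optimization removed, which pays one allocation plus one copy per section.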
No format or write-path storage change.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row |\n| -------------------------------------- | ------: | --------: |\n| raw SQLite / inserted                  | 1692456 |    1692.5 |\n| Lix SQLite / inserted                  |  996856 |     996.9 |\n| Lix SQLite / after create_version      | 1013336 |    1013.3 |\n| Lix SQLite / after fast-forward merge  | 5205520 |    5205.5 |\n| Lix SQLite / after divergent merge     | 5369360 |    5369.4 |\n| Lix RocksDB / inserted                 |  912032 |     912.0 |\n| Lix RocksDB / after create_version     |  913889 |     913.9 |\n| Lix RocksDB / after fast-forward merge | 1073314 |    1073.3 |\n| Lix RocksDB / after divergent merge    | 1442794 |    1442.8 |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n\nThe reviewer confirmed that read_sized_slice preserves the same overflow and\ntruncation checks, the nested key/value decoders still reject trailing bytes,\nand the borrowed slices only live for the duration of parsing. 
Recommendation:\nkeep as a clean read-side allocation cut.\n```\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo test -p lix_engine tracked_state::codec:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a read-side delta-pack decode cleanup.\n\nThe physical win is small but real: unmaterialized tracked-state reads no\nlonger copy every encoded delta key and value section before decoding owned\nrows from them. This reduces heap traffic without changing the pack format or\ncorruption behavior.\n\nTiming is noisy but favorable enough to keep. SQLite header scans showed a\nsignificant improvement in the focused run and stayed in the lower band on\nrerun. RocksDB header/full scans improved significantly on rerun. Writes and\nstorage bytes are neutral because the encoded bytes are unchanged.\n\nNo storage format change. 
No temporary shim.\n```\n\n## Optimization 28: Encode change-pack entries in place\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\n`commit_store::codec::encode_change_pack` now writes each authored change\ndirectly into its length-prefixed pack entry:\n\n- Extracted `write_change_ref(&mut Vec<u8>, ChangeRef)` from\n  `encode_change_ref`.\n- `encode_change_ref` still returns standalone `LXCH2` bytes by writing into a\n  fresh `Vec`.\n- `encode_change_pack` now reserves the 4-byte little-endian section length,\n  writes the `LXCH2` change bytes directly into the pack buffer, and backfills\n  the length.\n- Removed the old temporary `encode_change_ref(change)?` plus `write_bytes`\n  copy inside the pack loop.\n\nAdded a unit test asserting that the bytes inside one change-pack entry are\nexactly the same as the standalone `encode_change_ref` bytes.\n\n### Benchmarks\n\nFocused write command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nResult: passed.\n\nRepresentative medians:\n\n| row                                           |    median | criterion status        |\n| --------------------------------------------- | --------: | ----------------------- |\n| `raw_sqlite/write_root_all_rows/1k`           | 2.4908 ms | reference/no change     |\n| `raw_sqlite/write_delta_10pct_updates/1k`     | 1.2818 ms | reference/no change     |\n| `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.2797 ms | reference/no change     |\n| `sqlite/write_root_all_rows/1k`               | 4.7754 ms | no change, lower median |\n| `sqlite/write_delta_10pct_updates/1k`         | 1.9721 ms | no change               |\n| `sqlite/write_tombstone_10pct_deletes/1k`     | 1.7907 ms | no change               |\n| `rocksdb/write_root_all_rows/1k`              | 
4.3187 ms | no change, lower median |
| `rocksdb/write_delta_10pct_updates/1k`        | 935.37 µs | no change               |
| `rocksdb/write_tombstone_10pct_deletes/1k`    | 789.37 µs | no change               |

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed. The change-pack wire format is unchanged.

1k rows:

| row                                    |   bytes | bytes/row |
| -------------------------------------- | ------: | --------: |
| raw SQLite / inserted                  | 1692456 |    1692.5 |
| Lix SQLite / inserted                  |  996856 |     996.9 |
| Lix SQLite / after create_version      | 1013336 |    1013.3 |
| Lix SQLite / after fast-forward merge  | 5201424 |    5201.4 |
| Lix SQLite / after divergent merge     | 5348880 |    5348.9 |
| Lix RocksDB / inserted                 |  912032 |     912.0 |
| Lix RocksDB / after create_version     |  913889 |     913.9 |
| Lix RocksDB / after fast-forward merge | 1073314 |    1073.3 |
| Lix RocksDB / after divergent merge    | 1442792 |    1442.8 |

### Review Loop

Reviewer pass:

```text
HIGH: none.
MEDIUM: none.
LOW: none.

The reviewer confirmed that the pack still writes a length-prefixed LXCH2
payload for each change, that decode still reads the same entry bytes, and that
partial mutation on error is not exposed because encode_change_pack and
encode_change_ref both build into fresh local Vecs. 
Recommendation: keep.\n```\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine commit_store::codec:: --features storage-benches\ncargo test -p lix_engine commit_store:: --features storage-benches\ncargo test -p lix_engine transaction::commit:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a write-side commit-pack allocation cleanup.\n\nThe physical win is removing one temporary Vec allocation and one copy for each\nauthored change encoded into a commit-store change pack. It is the same shape\nas the earlier direct delta-pack and direct commit-change row work: encode into\nthe final pack buffer instead of building nested row blobs only to copy them.\n\nTiming is a modest median improvement for root writes on both Lix backends, but\nCriterion did not mark it statistically significant. This is kept because it\nremoves real per-row hot-path work while preserving the byte format and storage\nfootprint.\n\nNo storage format change. 
No temporary shim.\n```\n\n## Optimization 29: Compact matching tracked timestamps\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nBumped the tracked-state value codec from version 6 to version 7 and compacted\nthe common timestamp shape in both materialized tree values and delta-pack\nvalues:\n\n- Values now write `created_at` once.\n- A one-byte tag follows:\n  `TIMESTAMP_UPDATED_SAME` when `updated_at == created_at`, otherwise\n  `TIMESTAMP_UPDATED_DISTINCT` plus the `updated_at` string.\n- Decode reconstructs `updated_at` from `created_at` for the same-timestamp\n  case and rejects invalid timestamp tags.\n- `encoded_value_len` now uses the same timestamp-pair sizing helper as the\n  encoder.\n\nThere is no backwards shim because the format has not shipped.\n\nAdded coverage that matching timestamps roundtrip and produce a shorter encoded\nvalue than distinct timestamps. Existing distinct timestamp roundtrip tests\ncontinue to cover the non-compact branch.\n\n### Benchmarks\n\nFocused write/scan command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_headers_only|scan_full_rows)/1k'\n```\n\nResult: passed.\n\nRepresentative medians:\n\n| row                                        |    median | criterion status |\n| ------------------------------------------ | --------: | ---------------- |\n| `sqlite/write_root_all_rows/1k`            | 5.0199 ms | no change        |\n| `sqlite/write_delta_10pct_updates/1k`      | 1.8704 ms | no change        |\n| `sqlite/write_tombstone_10pct_deletes/1k`  | 1.7569 ms | no change        |\n| `sqlite/scan_headers_only/1k`              | 1.8272 ms | no change        |\n| `sqlite/scan_full_rows/1k`                 | 2.3698 ms | no change        |\n| `rocksdb/write_root_all_rows/1k`           | 4.3505 
ms | no change        |\n| `rocksdb/write_delta_10pct_updates/1k`     | 840.51 us | improved         |\n| `rocksdb/write_tombstone_10pct_deletes/1k` | 933.62 us | no change        |\n| `rocksdb/scan_headers_only/1k`             | 860.63 us | no change        |\n| `rocksdb/scan_full_rows/1k`                | 1.5218 ms | noisy regression |\n\nRerun of scan guardrails:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                       | rerun median | criterion status |\n| ----------------------------------------- | -----------: | ---------------- |\n| `sqlite/scan_headers_only/1k`             |    1.7102 ms | improved         |\n| `sqlite/scan_full_rows/1k`                |    2.2907 ms | no change        |\n| `sqlite/prefix_scan_schema_file_null/1k`  |    2.2168 ms | no change        |\n| `rocksdb/scan_headers_only/1k`            |    815.47 us | no change        |\n| `rocksdb/scan_full_rows/1k`               |    1.3515 ms | improved         |\n| `rocksdb/prefix_scan_schema_file_null/1k` |    1.4072 ms | no change        |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status            |\n| -------------------------------------- | ------: | --------: | ----------------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference         |\n| Lix SQLite / inserted                  |  972136 |     972.1 | improved          |\n| Lix SQLite / after create_version      |  984496 |     984.5 | improved          |\n| Lix SQLite / after fast-forward merge  | 5201544 |    5201.5 | roughly unchanged |\n| Lix SQLite / after divergent 
merge     | 5365384 |    5365.4 | roughly unchanged |\n| Lix RocksDB / inserted                 |  884519 |     884.5 | improved          |\n| Lix RocksDB / after create_version     |  886342 |     886.3 | improved          |\n| Lix RocksDB / after fast-forward merge | 1043067 |    1043.1 | improved          |\n| Lix RocksDB / after divergent merge    | 1404413 |    1404.4 | improved          |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n\nThe reviewer confirmed that the version/deleted header masking remains correct,\ntombstone visibility still only needs the header, both tree and delta values\nuse the same timestamp pair helpers, distinct timestamps are preserved, invalid\ntimestamp tags are rejected, and encoded_value_len matches the new format.\nRecommendation: keep.\n```\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo test -p lix_engine tracked_state::codec:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine transaction::commit:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_headers_only|scan_full_rows)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a physical value-format compaction.\n\nThe structural win is direct: inserted/root rows usually have matching\ncreated_at and updated_at, so storing the timestamp twice was duplicated 
row\npayload. Version 7 stores the common case as one string plus a tag while still\npreserving distinct timestamps for updates/adoptions.\n\nStorage improves materially on inserted/create_version states for both SQLite\nand RocksDB, and RocksDB merge-state bytes improve as well. Timing is mostly\nneutral with useful scan/write wins on rerun; the one RocksDB scan regression\ndid not reproduce.\n\nNo backward shim because the physical format is still unshipped.\n```\n\n## Optimization 30: Compact commit and delta change ids\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged the unshipped physical pack formats:\n\n- Commit-store change packs move from `LXCP1` to `LXCP2`.\n- `LXCP2` stores shared `(schema_key, file_id)` shapes once per pack.\n- `LXCP2` stores entity identity directly as string parts instead of a JSON\n  array string inside each packed change.\n- `LXCP2` stores change ids as a suffix when they start with the pack\n  `commit_id`, otherwise stores the full id.\n- Tracked-state delta packs move from `LXTD3` to `LXTD4`.\n- `LXTD4` stores delta `change_id`s as a suffix when they start with the\n  locator `source_commit_id`, otherwise stores the full id.\n\nStandalone `LXCH2` change encoding remains available, but change packs no\nlonger embed standalone `LXCH2` records. 
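The change-id suffix rule above can be sketched as a small tag-prefixed codec. Everything here is illustrative: the tag values, the fixed-width length field, and the helper names are assumptions, not the actual `LXCP2`/`LXTD4` byte layout.

```rust
// Hypothetical tag values; the real pack formats differ in detail.
const ID_FULL: u8 = 0;
const ID_COMMIT_SUFFIX: u8 = 1;

/// Store `change_id` as a suffix when it starts with the pack commit id,
/// otherwise store the full id.
fn encode_change_id(out: &mut Vec<u8>, commit_id: &str, change_id: &str) {
    let (tag, body) = match change_id.strip_prefix(commit_id) {
        Some(suffix) => (ID_COMMIT_SUFFIX, suffix),
        None => (ID_FULL, change_id),
    };
    out.push(tag);
    out.extend_from_slice(&(body.len() as u32).to_le_bytes());
    out.extend_from_slice(body.as_bytes());
}

/// Rebuild the id against the same commit-id basis the encoder used.
fn decode_change_id(buf: &[u8], commit_id: &str) -> Option<(String, usize)> {
    let tag = *buf.first()?;
    let len = u32::from_le_bytes(buf.get(1..5)?.try_into().ok()?) as usize;
    let body = std::str::from_utf8(buf.get(5..5 + len)?).ok()?;
    let id = match tag {
        ID_COMMIT_SUFFIX => format!("{commit_id}{body}"),
        ID_FULL => body.to_string(),
        _ => return None, // reject unknown tags
    };
    Some((id, 5 + len))
}
```

The one subtlety, and the source of this entry's reviewer HIGH, is that encode and decode must agree on the basis string: `LXTD4` deltas strip against the locator `source_commit_id`, not the pack commit id.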
There is no backwards shim because the\nphysical format has not shipped.\n\nAdded codec coverage for the compact change-pack shape and for a tracked\ndelta-pack cross-commit locator whose `change_id` starts with the pack commit\nid but not with its locator source commit id.\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\nRepresentative medians:\n\n| row                                        |    median | criterion status |\n| ------------------------------------------ | --------: | ---------------- |\n| `raw_sqlite/write_root_all_rows/1k`        | 2.3900 ms | no change        |\n| `sqlite/write_root_all_rows/1k`            | 4.4672 ms | no change        |\n| `sqlite/write_delta_10pct_updates/1k`      | 1.7570 ms | no change        |\n| `sqlite/write_tombstone_10pct_deletes/1k`  | 1.5593 ms | no change        |\n| `sqlite/scan_full_rows/1k`                 | 2.3076 ms | no change        |\n| `sqlite/prefix_scan_schema_file_null/1k`   | 2.2825 ms | no change        |\n| `rocksdb/write_root_all_rows/1k`           | 4.3382 ms | no change        |\n| `rocksdb/write_delta_10pct_updates/1k`     | 842.39 us | no change        |\n| `rocksdb/write_tombstone_10pct_deletes/1k` | 732.73 us | no change        |\n| `rocksdb/scan_full_rows/1k`                | 1.3643 ms | no change        |\n| `rocksdb/prefix_scan_schema_file_null/1k`  | 1.3588 ms | no change        |\n\nEarlier same-patch focused sweep also showed RocksDB delta writes at\n`751.11 us` improved and RocksDB tombstone writes at `741.36 us` improved; the\ncombined rerun settled as neutral except raw SQLite tombstone noise.\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine 
json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status    |\n| -------------------------------------- | ------: | --------: | --------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |\n| Lix SQLite / inserted                  |  947416 |     947.4 | improved  |\n| Lix SQLite / after create_version      |  959776 |     959.8 | improved  |\n| Lix SQLite / after fast-forward merge  | 5152248 |    5152.2 | improved  |\n| Lix SQLite / after divergent merge     | 5353168 |    5353.2 | improved  |\n| Lix RocksDB / inserted                 |  864114 |     864.1 | improved  |\n| Lix RocksDB / after create_version     |  865938 |     865.9 | improved  |\n| Lix RocksDB / after fast-forward merge | 1022770 |    1022.8 | improved  |\n| Lix RocksDB / after divergent merge    | 1384417 |    1384.4 | improved  |\n\n### Review Loop\n\nReviewer pass 1 found one HIGH: `LXTD4` initially stripped delta change ids\nagainst the pack commit id while decode reconstructed suffixes against the\nlocator `source_commit_id`, which could corrupt an adopted cross-commit locator\nwhose id happened to start with the pack commit id.\n\nFix: encode delta change-id suffixes against\n`value.change_locator.source_commit_id`, matching decode, and add a regression\nfor the cross-commit prefix-collision case.\n\nReviewer pass 2:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n\nThe reviewer confirmed the prior HIGH is resolved, suffix encode/decode now use\nthe same source-commit basis, LXCP2 preserves entry order, shape indexes are\nbounds-checked, and commit-store suffix IDs use the same commit-id basis on\nencode/decode.\nRecommendation: keep.\n```\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo test -p lix_engine commit_store::codec:: --features storage-benches\ncargo test -p lix_engine commit_store:: --features 
storage-benches\ncargo test -p lix_engine tracked_state::codec:: --features storage-benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine transaction::commit:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a format-level compaction.\n\nRoot-write timing remains above the 1.5x target and mostly Criterion-neutral,\nso this is not the final root-write answer. The pack bytes are meaningfully\nsmaller, however, and the format removes repeated schema/file/change-id/entity\nencoding from durable commit packs while preserving locator semantics.\n\nThe current budget misses remain root writes and SQLite full/prefix scans.\n```\n\n## Optimization 31: Narrow JSON pack directory fields\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged the unshipped JSON commit-pack format from `lix-json-pack:v1` to\n`lix-json-pack:v2`.\n\nThe per-entry directory keeps the same explicit shape:\n\n```text\nhash, codec, uncompressed_len, payload_offset, payload_len\n```\n\nbut narrows the three numeric payload fields from `u64` to `u32`. The entry\nheader shrinks from `32 + 1 + 8 + 8 + 8 = 57` bytes to\n`32 + 1 + 4 + 4 + 4 = 45` bytes.\n\nThis is a clean cut with no backwards shim because Lix has not shipped. 
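As a sketch, the narrowing is just a checked `u32::try_from` at encode time; the `DirEntry` struct and function name below are hypothetical, with the field order taken from the directory shape above.

```rust
/// Hypothetical in-memory form of one directory entry:
/// hash, codec, uncompressed_len, payload_offset, payload_len.
struct DirEntry {
    hash: [u8; 32],
    codec: u8,
    uncompressed_len: u64,
    payload_offset: u64,
    payload_len: u64,
}

/// Encode one 45-byte entry (32 + 1 + 4 + 4 + 4), rejecting values that no
/// longer fit the narrowed u32 fields instead of silently truncating them.
fn encode_dir_entry(out: &mut Vec<u8>, e: &DirEntry) -> Result<(), String> {
    out.extend_from_slice(&e.hash);
    out.push(e.codec);
    for v in [e.uncompressed_len, e.payload_offset, e.payload_len] {
        let narrow =
            u32::try_from(v).map_err(|_| "directory field exceeds u32".to_string())?;
        out.extend_from_slice(&narrow.to_le_bytes());
    }
    Ok(())
}
```

Keeping the offset explicit is what preserves direct payload slicing on unordered reads; only the field width changes.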
Unlike\nthe rejected implicit-offset JSON-pack experiment, this keeps explicit offsets,\nso unordered/fallback pack reads retain direct payload slicing instead of\nreconstructing offsets from earlier directory entries.\n\nAdded codec tests for the compact 45-byte directory shape and for checked\nrejection of oversized u32 directory fields.\n\n### Storage\n\nCommand:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| state                      |    before |     after |   delta |\n| -------------------------- | --------: | --------: | ------: |\n| raw SQLite inserted        | 1,692,456 | 1,692,456 |       0 |\n| Lix SQLite inserted        |   947,416 |   939,176 |  -8,240 |\n| Lix SQLite create_version  |   959,776 |   951,536 |  -8,240 |\n| Lix SQLite fast-forward    | 5,152,248 | 5,152,296 |     +48 |\n| Lix SQLite divergent       | 5,353,168 | 5,320,304 | -32,864 |\n| Lix RocksDB inserted       |   864,114 |   851,910 | -12,204 |\n| Lix RocksDB create_version |   865,938 |   853,721 | -12,217 |\n| Lix RocksDB fast-forward   | 1,022,770 | 1,009,345 | -13,425 |\n| Lix RocksDB divergent      | 1,384,417 | 1,368,580 | -15,837 |\n\n### Timing\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\nRepresentative medians:\n\n| row                                           |    median | criterion status |\n| --------------------------------------------- | --------: | ---------------- |\n| `raw_sqlite/write_root_all_rows/1k`           | 2.4135 ms | no change        |\n| `raw_sqlite/scan_full_rows/1k`                | 1.2061 ms | no change        |\n| `raw_sqlite/prefix_scan_schema_file_null/1k`  
| 1.1669 ms | no change        |\n| `raw_sqlite/write_delta_10pct_updates/1k`     | 1.2859 ms | no change        |\n| `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.1947 ms | no change        |\n| `sqlite/write_root_all_rows/1k`               | 4.6431 ms | no change        |\n| `sqlite/scan_full_rows/1k`                    | 2.2783 ms | no change        |\n| `sqlite/prefix_scan_schema_file_null/1k`      | 2.3420 ms | no change        |\n| `sqlite/write_delta_10pct_updates/1k`         | 1.7931 ms | no change        |\n| `sqlite/write_tombstone_10pct_deletes/1k`     | 1.6065 ms | no change        |\n| `rocksdb/write_root_all_rows/1k`              | 4.1708 ms | no change        |\n| `rocksdb/scan_full_rows/1k`                   | 1.4051 ms | no change        |\n| `rocksdb/prefix_scan_schema_file_null/1k`     | 1.3823 ms | no change        |\n| `rocksdb/write_delta_10pct_updates/1k`        | 818.06 us | no change        |\n| `rocksdb/write_tombstone_10pct_deletes/1k`    | 765.78 us | no change        |\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo test -p lix_engine json_store:: --features storage-benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\nReviewer loop:\n\n- First pass: HIGH none, MEDIUM none, LOW requested direct overflow coverage\n  for narrowed fields.\n- Added `json_pack_u32_rejects_oversized_directory_fields`.\n- Second pass: HIGH none, MEDIUM none, LOW none. 
Recommendation: keep.\n\n### Interpretation\n\n```text\nKeep as a compact physical-layout cleanup.\n\nPrimary axis: storage bytes. Commit-local JSON packs are bounded KV blobs, not\nlarge archive files, so u32 payload lengths and offsets are enough while the\nencoder still rejects oversized packs explicitly.\n\nTiming: no Lix runtime row showed a detected regression in the focused write\nand scan guardrail. Keeping explicit offsets preserves the direct random-access\nfallback shape that the earlier implicit-offset experiment lost.\n\nNo backwards shim.\n```\n\n## Optimization 32: Varint change-pack local fields\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nChanged the unshipped commit-store change-pack format from `LXCP2` to `LXCP3`.\n\n`LXCP3` keeps the same logical fields and explicit pack structure, but encodes\npack-local lengths, counts, and indexes as checked u32 varints instead of fixed\nu32 fields:\n\n- commit id length\n- shape count\n- shape `schema_key` and optional `file_id` lengths\n- change count\n- per-change id length\n- entity-identity part count and part lengths\n- shape index\n- created-at length\n\nStandalone commit (`LXCM1`), standalone change (`LXCH2`), and membership-pack\n(`LXMP1`) encodings are unchanged. 
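A standalone sketch of what a checked, canonical LEB128-style u32 varint reader looks like; this mirrors the rejection rules described for this format (the engine's own helper is `read_var_usize`, whose exact behavior may differ):

```rust
/// Decode a canonical u32 varint, returning (value, bytes consumed).
/// Rejects overlong/truncated input, values above u32::MAX, and
/// non-canonical zero-extended encodings such as `80 00`.
fn read_var_u32(buf: &[u8]) -> Result<(u32, usize), String> {
    let mut value: u32 = 0;
    for (i, &byte) in buf.iter().enumerate().take(5) {
        let payload = (byte & 0x7f) as u32;
        if i == 4 && (byte & 0x80 != 0 || payload > 0x0f) {
            // A fifth byte may neither continue nor carry bits above u32::MAX.
            return Err("varint exceeds u32".into());
        }
        value |= payload << (7 * i);
        if byte & 0x80 == 0 {
            if i > 0 && payload == 0 {
                // Final byte contributes nothing: a shorter encoding exists.
                return Err("non-canonical varint".into());
            }
            return Ok((value, i + 1));
        }
    }
    Err("overlong or truncated varint".into())
}
```

A naive reader would decode `80 00` to 0; rejecting it keeps every value a single byte sequence, so a decoded-and-reencoded pack is byte-identical.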
There is no backwards shim because the\nformat has not shipped.\n\nAdded decode coverage for overlong varints, values above `u32::MAX`, and\nnon-canonical encodings such as `80 00`.\n\n### Storage\n\nCommand:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| state                      |    before |     after |   delta |\n| -------------------------- | --------: | --------: | ------: |\n| raw SQLite inserted        | 1,692,456 | 1,692,456 |       0 |\n| Lix SQLite inserted        |   939,176 |   922,696 | -16,480 |\n| Lix SQLite create_version  |   951,536 |   935,056 | -16,480 |\n| Lix SQLite fast-forward    | 5,152,296 | 5,123,600 | -28,696 |\n| Lix SQLite divergent       | 5,320,304 | 5,308,064 | -12,240 |\n| Lix RocksDB inserted       |   851,910 |   836,566 | -15,344 |\n| Lix RocksDB create_version |   853,721 |   838,350 | -15,371 |\n| Lix RocksDB fast-forward   | 1,009,345 |   991,281 | -18,064 |\n| Lix RocksDB divergent      | 1,368,580 | 1,345,089 | -23,491 |\n\n### Timing\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|exists_many_exact_keys)/1k'\n```\n\nResult: passed.\n\nRepresentative medians:\n\n| row                                           |    median | criterion status |\n| --------------------------------------------- | --------: | ---------------- |\n| `raw_sqlite/write_root_all_rows/1k`           | 2.3938 ms | no change        |\n| `raw_sqlite/get_many_exact_keys/1k`           | 2.0500 ms | no change        |\n| `raw_sqlite/exists_many_exact_keys/1k`        | 2.0326 ms | no change        |\n| `raw_sqlite/scan_full_rows/1k`                | 1.1694 ms | no change        
|\n| `raw_sqlite/prefix_scan_schema_file_null/1k`  | 1.1635 ms | no change        |\n| `raw_sqlite/write_delta_10pct_updates/1k`     | 1.2193 ms | no change        |\n| `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.2057 ms | no change        |\n| `sqlite/write_root_all_rows/1k`               | 4.5170 ms | no change        |\n| `sqlite/get_many_exact_keys/1k`               | 2.8695 ms | no change        |\n| `sqlite/exists_many_exact_keys/1k`            | 1.8988 ms | no change        |\n| `sqlite/scan_full_rows/1k`                    | 2.2094 ms | no change        |\n| `sqlite/prefix_scan_schema_file_null/1k`      | 2.2438 ms | no change        |\n| `sqlite/write_delta_10pct_updates/1k`         | 1.7065 ms | no change        |\n| `sqlite/write_tombstone_10pct_deletes/1k`     | 1.6626 ms | no change        |\n| `rocksdb/write_root_all_rows/1k`              | 3.9725 ms | improved         |\n| `rocksdb/get_many_exact_keys/1k`              | 2.0688 ms | no change        |\n| `rocksdb/exists_many_exact_keys/1k`           | 1.0354 ms | no change        |\n| `rocksdb/scan_full_rows/1k`                   | 1.4142 ms | no change        |\n| `rocksdb/prefix_scan_schema_file_null/1k`     | 1.3397 ms | no change        |\n| `rocksdb/write_delta_10pct_updates/1k`        | 736.65 us | no change        |\n| `rocksdb/write_tombstone_10pct_deletes/1k`    | 735.43 us | no change        |\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo test -p lix_engine commit_store::codec:: --features storage-benches\ncargo test -p lix_engine commit_store:: --features storage-benches\ncargo test -p lix_engine transaction::commit:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 
'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|exists_many_exact_keys)/1k'
```

All commands passed.

Reviewer loop:

- First pass: HIGH none, MEDIUM found that a malformed 5-byte varint could
  exceed `u32::MAX` without being rejected; LOW requested canonical varint
  rejection.
- Fixed `read_var_usize` to reject fifth-byte continuation and high payload
  bits, and to reject non-canonical zero-extended encodings.
- Added regressions for overlong, too-large, and non-canonical varints.
- Second pass: HIGH none, MEDIUM none, LOW none.

### Interpretation

```text
Keep as a compact change-pack layout cleanup.

Primary axis: storage bytes. Change packs are commit-local bounded blobs whose
per-row shape indexes and string lengths are usually tiny; fixed u32 metadata
was pure overhead. Varints are limited to u32 and must be canonical, and
malformed packs are rejected before allocation-heavy paths can trust the
decoded value.

Timing: focused physical write/read/scan rows showed no detected Lix
regressions. The only statistically visible Lix movement was a RocksDB root
write improvement in the final run.

No backwards shim.
```

## Optimization 33: Varint tracked-state delta-pack local fields

### Change

Changed tracked-state delta packs from version `4` to version `5` with no
backwards shim. 
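One implementation detail worth sketching: when a section's byte length is only known after its body is encoded, the encoder can reserve a maximal 5-byte varint header up front and compact it afterwards, avoiding a temporary per-section buffer. The helper below is a hypothetical standalone version of that pattern, not the engine's API:

```rust
/// Frame the bytes produced by `body` as `varint(len) || bytes`, writing
/// directly into `out` instead of encoding the body into a scratch Vec.
fn write_section(out: &mut Vec<u8>, body: impl FnOnce(&mut Vec<u8>)) {
    let header = out.len();
    out.extend_from_slice(&[0u8; 5]); // reserve the widest possible u32 varint
    body(out);
    let len = (out.len() - header - 5) as u32;

    // Encode the actual length into a small stack buffer.
    let (mut tmp, mut n, mut v) = ([0u8; 5], 0, len);
    loop {
        tmp[n] = (v & 0x7f) as u8 | if v >= 0x80 { 0x80 } else { 0 };
        v >>= 7;
        n += 1;
        if v == 0 {
            break;
        }
    }

    // Overwrite the reservation and close the leftover gap.
    out[header..header + n].copy_from_slice(&tmp[..n]);
    out.drain(header + n..header + 5);
}
```

The compaction is a single `drain` memmove of the section body, traded against allocating and copying a temporary buffer per key/value section.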
Tree node/key/value encodings remain on their existing\nfixed-width formats.\n\nWithin `LXTD` v5 delta packs, pack-local lengths/counts/indexes now use checked\ncanonical `u32` varints instead of fixed-width `u32` fields:\n\n- pack commit id length\n- key prefix count, prefix schema/file id lengths, entry count\n- per-entry key/value section lengths\n- key prefix index and entity identity part count/lengths\n- full source commit id length when needed\n- source pack id and source ordinal\n- delta change id length\n- timestamp lengths\n\nThe section-length encoder writes in place, reserving the maximum 5-byte varint\nheader and compacting it after the section is written, avoiding a temporary\nallocation per key/value section.\n\nDecoder hardening:\n\n- rejects overlong varints\n- rejects varints above `u32::MAX`\n- rejects non-canonical encodings such as `80 00`\n- avoids eager large `Vec::with_capacity(count)` allocations from corrupt\n  decoded counts\n\nAdded focused delta-pack tests for the malformed varint cases and updated the\nroundtrip fixture to assert the v5 varint header fields.\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    | before bytes | after bytes |     delta |\n| -------------------------------------- | -----------: | ----------: | --------: |\n| raw SQLite / inserted                  |    1,692,456 |   1,692,456 | reference |\n| Lix SQLite / inserted                  |      922,696 |     897,976 |   -24,720 |\n| Lix SQLite / after create_version      |      935,056 |     910,336 |   -24,720 |\n| Lix SQLite / after fast-forward merge  |    5,123,600 |   5,152,584 |   +28,984 |\n| Lix SQLite / after divergent merge     |    5,308,064 |   5,304,136 |    -3,928 |\n| Lix RocksDB / inserted                 |      836,566 |     811,776 |   -24,790 |\n| Lix RocksDB / 
after create_version     |      838,350 |     813,523 |   -24,827 |\n| Lix RocksDB / after fast-forward merge |      991,281 |     962,754 |   -28,527 |\n| Lix RocksDB / after divergent merge    |    1,345,089 |   1,306,406 |   -38,683 |\n\n### Benchmarks\n\nFocused command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|exists_many_exact_keys)/1k'\n```\n\nResult: passed.\n\nRerun after replacing temporary section buffers with in-place varint section\nheaders:\n\n| row                                           |    median | criterion status |\n| --------------------------------------------- | --------: | ---------------- |\n| `raw_sqlite/write_root_all_rows/1k`           | 2.4927 ms | no change        |\n| `raw_sqlite/get_many_exact_keys/1k`           | 2.0536 ms | improved         |\n| `raw_sqlite/exists_many_exact_keys/1k`        | 2.1659 ms | no change        |\n| `raw_sqlite/scan_full_rows/1k`                | 1.2557 ms | improved         |\n| `raw_sqlite/prefix_scan_schema_file_null/1k`  | 1.2060 ms | no change        |\n| `raw_sqlite/write_delta_10pct_updates/1k`     | 1.2878 ms | improved         |\n| `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.2843 ms | improved         |\n| `sqlite/write_root_all_rows/1k`               | 4.5495 ms | no change        |\n| `sqlite/get_many_exact_keys/1k`               | 2.7998 ms | no change        |\n| `sqlite/exists_many_exact_keys/1k`            | 1.8635 ms | no change        |\n| `sqlite/scan_full_rows/1k`                    | 2.6022 ms | noise threshold  |\n| `sqlite/prefix_scan_schema_file_null/1k`      | 2.2652 ms | no change        |\n| `sqlite/write_delta_10pct_updates/1k`         | 1.7003 ms | no change        |\n| `sqlite/write_tombstone_10pct_deletes/1k`     | 
1.6276 ms | no change        |\n| `rocksdb/write_root_all_rows/1k`              | 4.3209 ms | no change        |\n| `rocksdb/get_many_exact_keys/1k`              | 2.1036 ms | regressed        |\n| `rocksdb/exists_many_exact_keys/1k`           | 1.0935 ms | no change        |\n| `rocksdb/scan_full_rows/1k`                   | 1.4418 ms | no change        |\n| `rocksdb/prefix_scan_schema_file_null/1k`     | 1.4424 ms | no change        |\n| `rocksdb/write_delta_10pct_updates/1k`        | 754.76 us | improved         |\n| `rocksdb/write_tombstone_10pct_deletes/1k`    | 779.52 us | no change        |\n\nThe only Criterion regression in the rerun is RocksDB exact reads, which should\nnot decode delta-pack values on this benchmark path. Treat as a noisy guardrail\nunless it repeats after later exact-read work.\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n\nThe v5 delta-pack varint path rejects overlong, above-u32, and non-canonical\nencodings; section boundaries are preserved; tree node/key/value encodings\nstill use fixed-width helpers; and the count allocation hardening avoids huge\nmalformed-count allocation before truncation failure.\n```\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo test -p lix_engine tracked_state::codec:: --features storage-benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine transaction::commit:: --features storage-benches\ncargo check -p lix_engine --features storage-benches --benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|exists_many_exact_keys)/1k'\n```\n\nAll commands passed.\n\n### 
Interpretation\n\n```text\nKeep.\n\nThis is another pure physical-layout byte win. The largest clean benefit is on\nroot/create-version storage and RocksDB delta/merge footprints, with no intended\nlogical behavior change and no compatibility shim.\n\nThe fast-forward SQLite byte count moved up on this run while divergent SQLite\nand RocksDB merge rows moved down; the inserted/create-version rows show the\ndirect delta-pack-local field compression most clearly.\n\nTarget is still not met: SQLite root write remains about 4.55 / 2.49 = 1.83x\nraw SQLite in the latest focused run, and scans are still above the 1.5x budget.\n```\n\n## Optimization 34: Probe ordered single JSON packs before dedupe\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nAdded an early JSON-store read fast path for the common materialization shape\nwhere all requested JSON refs come from one commit pack in pack order:\n\n- `load_json_bytes_many_in_scope_with_hash_check` now probes a single\n  `JsonReadScopeRef::CommitPacks` pack before building the dedupe `HashMap`,\n  direct-row key list, and request-index remapping.\n- If the ordered pack probe hits, the loader returns decoded values directly.\n- If the ordered probe misses because the pack is absent or not an exact\n  ordered match, the existing dedupe/direct-row fallback still runs. 
A present\n  but non-matching pack is carried into fallback so the same pack is not fetched\n  twice.\n- Added `ordered_pack_probe_falls_back_to_direct_rows` to cover direct-row\n  fallback after a mismatched ordered pack probe.\n\n### Benchmarks\n\nFocused read command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys)/1k'\n```\n\nFirst clean run after the change:\n\n| row                                          |    median | criterion status        |\n| -------------------------------------------- | --------: | ----------------------- |\n| `raw_sqlite/get_many_exact_keys/1k`          | 2.0699 ms | improved baseline       |\n| `raw_sqlite/scan_full_rows/1k`               | 1.2684 ms | improved baseline       |\n| `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.1716 ms | improved baseline       |\n| `sqlite/get_many_exact_keys/1k`              | 2.7975 ms | improved                |\n| `sqlite/scan_full_rows/1k`                   | 2.3225 ms | improved                |\n| `sqlite/prefix_scan_schema_file_null/1k`     | 2.3271 ms | no change, lower median |\n| `rocksdb/get_many_exact_keys/1k`             | 1.9847 ms | improved                |\n| `rocksdb/scan_full_rows/1k`                  | 1.4401 ms | no change               |\n| `rocksdb/prefix_scan_schema_file_null/1k`    | 1.4838 ms | no change               |\n\nFinal rerun after fallback refinement:\n\n| row                                          |    median | criterion status              |\n| -------------------------------------------- | --------: | ----------------------------- |\n| `raw_sqlite/get_many_exact_keys/1k`          | 2.0341 ms | reference/no change           |\n| `raw_sqlite/scan_full_rows/1k`               | 1.1597 ms | reference/no change           |\n| `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.1901 ms | 
reference/no change           |\n| `sqlite/get_many_exact_keys/1k`              | 2.8496 ms | no change                     |\n| `sqlite/scan_full_rows/1k`                   | 2.3712 ms | no change                     |\n| `sqlite/prefix_scan_schema_file_null/1k`     | 2.2558 ms | no change                     |\n| `rocksdb/get_many_exact_keys/1k`             | 2.1639 ms | noisy regression vs prior run |\n| `rocksdb/scan_full_rows/1k`                  | 1.4752 ms | no change                     |\n| `rocksdb/prefix_scan_schema_file_null/1k`    | 1.4137 ms | no change                     |\n\nWrite guardrail command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nResult: passed, with all measured write rows improved in that guardrail run.\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status    |\n| -------------------------------------- | ------: | --------: | --------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |\n| Lix SQLite / inserted                  |  897976 |     898.0 | unchanged |\n| Lix SQLite / after create_version      |  910336 |     910.3 | unchanged |\n| Lix SQLite / after fast-forward merge  | 5152584 |    5152.6 | unchanged |\n| Lix SQLite / after divergent merge     | 5304136 |    5304.1 | unchanged |\n| Lix RocksDB / inserted                 |  811772 |     811.8 | unchanged |\n| Lix RocksDB / after create_version     |  813519 |     813.5 | unchanged |\n| Lix RocksDB / after fast-forward merge |  962750 |     962.8 | unchanged |\n| Lix RocksDB / after divergent merge    | 1306403 |    1306.4 | unchanged |\n\n### 
Review Loop\n\nReviewer pass:\n\n```text\nInitial review:\nHIGH: none.\nMEDIUM: early ordered probe could read the same single pack twice on\nnon-exact fallback.\nLOW: none.\n\nFollow-up review:\nHIGH: none.\nMEDIUM: none.\nLOW: absent-pack fallback still rereads the missing pack; present/non-exact\nfallback copies the full pack.\n\nFinal review:\nHIGH: none.\nMEDIUM: none.\nLOW: none beyond the intentionally accepted full-pack copy on uncommon\npresent/non-exact fallback. The absent-pack path now goes directly to direct-row\nfallback without rereading the missing pack.\n```\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo check -p lix_engine --features storage-benches\ncargo test -p lix_engine json_store:: --features storage-benches\ncargo test -p lix_engine tracked_state::context:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a JSON pack read-path optimization.\n\nPrimary axis: exact reads and full-row scans that materialize all JSON payloads\nfrom one commit pack in pack order. The structural win avoids building a dedupe\nHashMap and direct-row key list before the existing ordered pack loader can\nsucceed.\n\nTiming: first clean run showed Criterion improvements for SQLite exact reads,\nSQLite full scans, and RocksDB exact reads. Final rerun after fallback cleanup\nheld the new median band but did not show another Criterion improvement, as\nexpected. 
RocksDB exact read was noisy in the final rerun and remains a guardrail\nto watch.\n\nStorage is unchanged. No format change, no backward shim, no benchmark\nmeasurement change. This does not complete the <= 1.5x target because SQLite\nfull/prefix scans remain above budget.\n```\n\n## Optimization 35: Pre-size tracked materialization JSON slots\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nPre-sized the tracked-state materialization JSON side buffers from known entry\nand projection counts:\n\n- `materialize_index_entries` now computes the maximum projected JSON slots as\n  `entries.len() * projected_json_columns`.\n- `json_refs` and `json_ref_localities` reserve that capacity up front instead\n  of growing from zero while planning rows.\n\nThis follows the same locality principle used in the pack formats: when a scan\nalready has the row count and projected column shape, allocate the side vectors\nonce for the dense payload path.\n\n### Benchmarks\n\nFocused read command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed.\n\n| row                                          |    median | criterion status        |\n| -------------------------------------------- | --------: | ----------------------- |\n| `raw_sqlite/get_many_exact_keys/1k`          | 2.1087 ms | reference/no change     |\n| `raw_sqlite/scan_full_rows/1k`               | 1.1755 ms | reference/no change     |\n| `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.1727 ms | reference/no change     |\n| `sqlite/get_many_exact_keys/1k`              | 2.7590 ms | no change               |\n| `sqlite/scan_full_rows/1k`                   | 2.1942 ms | no change, lower median |\n| `sqlite/prefix_scan_schema_file_null/1k`     | 2.2549 ms | no change               |\n| 
`rocksdb/get_many_exact_keys/1k`             | 2.0010 ms | improved                |\n| `rocksdb/scan_full_rows/1k`                  | 1.4752 ms | no change               |\n| `rocksdb/prefix_scan_schema_file_null/1k`    | 1.4116 ms | no change               |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status          |\n| -------------------------------------- | ------: | --------: | --------------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference       |\n| Lix SQLite / inserted                  |  897976 |     898.0 | unchanged       |\n| Lix SQLite / after create_version      |  910336 |     910.3 | unchanged       |\n| Lix SQLite / after fast-forward merge  | 5152584 |    5152.6 | unchanged       |\n| Lix SQLite / after divergent merge     | 5312328 |    5312.3 | unchanged/noisy |\n| Lix RocksDB / inserted                 |  811772 |     811.8 | unchanged       |\n| Lix RocksDB / after create_version     |  813519 |     813.5 | unchanged       |\n| Lix RocksDB / after fast-forward merge |  962750 |     962.8 | unchanged       |\n| Lix RocksDB / after divergent merge    | 1306401 |    1306.4 | unchanged       |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nHIGH: none.\nMEDIUM: none.\nLOW: reserves the upper bound for sparse/tombstone-heavy rows. 
This is bounded\nto at most two slots per row and is an acceptable hot-path tradeoff for dense\npayload scans.\n```\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo check -p lix_engine --features storage-benches\ncargo test -p lix_engine tracked_state::materialization:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a small tracked materialization allocation cleanup.\n\nPrimary axis: dense full-row materialization for exact reads and scans. The\nstructural win removes repeated growth of JSON ref/locality side buffers when\nthe planner already knows the maximum slot count.\n\nTiming: RocksDB exact reads improved by Criterion. SQLite scan medians moved\nlower but remained Criterion-neutral. There were no measured regressions in the\nfocused read run.\n\nStorage is unchanged. No format change, no backward shim, no benchmark\nmeasurement change. 
This does not complete the <= 1.5x target; SQLite full and\nprefix scans remain above budget and root writes still need a larger cut.\n```\n\n## Optimization 36: Decode scan keys from trusted schema/file prefix\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nAdded a tracked-tree scan fast path for the common single schema/file prefix\nshape:\n\n- `scan_ranges` already proves rows are inside one encoded\n  `schema_key + file_id` prefix when a request has one schema key, one non-Any\n  file filter, and no entity filter.\n- `scan_key_decode_hint` carries that trusted prefix shape through recursive\n  tree scans.\n- Leaf scans now decode only the entity suffix with\n  `decode_key_with_trusted_prefix`, then materialize the known schema/file\n  fields directly.\n- The normal full-key decoder and filter recheck remain in place for multi\n  schema/file scans, Any-file scans, entity-filter scans, and all other shapes.\n\nAdded direct coverage for the trusted suffix decoder and a tree scan test that\nlocks the hinted branch against tombstone visibility, file filtering, and limit\nhandling.\n\n### Benchmarks\n\nFocused read command:\n\n```sh\ncargo bench -p lix_engine --bench json_pointer_crud --features storage-benches -- 'json_pointer_crud/(raw_sqlite/smoke/(select_all_path_value|select_one_by_pk)|raw_storage_(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null))/1k'\n```\n\nResult: passed.\n\n| row                                       |    median | criterion status    |\n| ----------------------------------------- | --------: | ------------------- |\n| `raw_sqlite/select_all_path_value/1k`     | 1.2300 ms | reference/no change |\n| `raw_sqlite/select_one_by_pk/1k`          | 1.0905 ms | reference/no change |\n| `sqlite/get_many_exact_keys/1k`           | 2.8295 ms | no change           |\n| `sqlite/scan_full_rows/1k`                | 2.3559 ms | no change           |\n| 
`sqlite/prefix_scan_schema_file_null/1k`  | 2.2079 ms | improved            |\n| `rocksdb/get_many_exact_keys/1k`          | 2.0448 ms | no change           |\n| `rocksdb/scan_full_rows/1k`               | 1.5172 ms | no change           |\n| `rocksdb/prefix_scan_schema_file_null/1k` | 1.4547 ms | no change           |\n\nRerun command:\n\n```sh\ncargo bench -p lix_engine --bench json_pointer_crud --features storage-benches -- 'json_pointer_crud/(raw_sqlite/smoke/select_all_path_value|raw_storage_(sqlite|rocksdb)/smoke/(scan_full_rows|prefix_scan_schema_file_null))/1k'\n```\n\nResult: passed.\n\n| row                                       |    median | criterion status        |\n| ----------------------------------------- | --------: | ----------------------- |\n| `raw_sqlite/select_all_path_value/1k`     | 1.1630 ms | reference/no change     |\n| `sqlite/scan_full_rows/1k`                | 2.2529 ms | no change, lower median |\n| `sqlite/prefix_scan_schema_file_null/1k`  | 2.2044 ms | no change, lower median |\n| `rocksdb/scan_full_rows/1k`               | 1.4055 ms | improved                |\n| `rocksdb/prefix_scan_schema_file_null/1k` | 1.4103 ms | no change               |\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status          |\n| -------------------------------------- | ------: | --------: | --------------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference       |\n| Lix SQLite / inserted                  |  897976 |     898.0 | unchanged       |\n| Lix SQLite / after create_version      |  910336 |     910.3 | unchanged       |\n| Lix SQLite / after fast-forward merge  | 5090808 |    5090.8 | unchanged/noisy |\n| Lix SQLite / after divergent merge     | 5234168 |    5234.2 | unchanged/noisy |\n| Lix RocksDB / 
inserted                 |  811776 |     811.8 | unchanged       |\n| Lix RocksDB / after create_version     |  813523 |     813.5 | unchanged       |\n| Lix RocksDB / after fast-forward merge |  962754 |     962.8 | unchanged       |\n| Lix RocksDB / after divergent merge    | 1306404 |    1306.4 | unchanged       |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nInitial review:\nHIGH: none.\nMEDIUM: none.\nLOW: trusted prefix helper should make its caller proof sharper; add targeted\ncoverage for the hinted schema/file scan branch with tombstones and limits.\n\nFollow-up review:\nNo HIGH/MEDIUM/LOW findings.\n```\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo check -p lix_engine --features storage-benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine scan_schema_file_prefix_honors_tombstones_and_limit --features storage-benches\ncargo test -p lix_engine key_codec_decodes_entity_suffix_with_trusted_prefix --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --bench json_pointer_crud --features storage-benches -- 'json_pointer_crud/(raw_sqlite/smoke/(select_all_path_value|select_one_by_pk)|raw_storage_(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null))/1k'\ncargo bench -p lix_engine --bench json_pointer_crud --features storage-benches -- 'json_pointer_crud/(raw_sqlite/smoke/select_all_path_value|raw_storage_(sqlite|rocksdb)/smoke/(scan_full_rows|prefix_scan_schema_file_null))/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a scan key-decoding optimization.\n\nPrimary axis: schema/file prefix scans. 
The structural win avoids reparsing\nschema/file fields from every matched encoded key and avoids repeating the key\nfilter check when the encoded prefix range already proved those fields.\n\nTiming: SQLite prefix scan improved by Criterion in the first focused run.\nRerun medians stayed lower but were Criterion-neutral, while RocksDB full scan\nimproved by Criterion. Exact reads remained neutral, as expected.\n\nStorage is unchanged. No format change, no backward shim. This does not complete\nthe <= 1.5x target; SQLite scans and root writes still need larger cuts.\n```\n\n## Optimization 37: Diff pending delta suffixes by changed keys\n\nDate: 2026-05-11\n\nCommit: this entry is committed with the optimization\n\n### Change\n\nAdded a changed-key fast path for pending tracked-state delta chains:\n\n- `diff_tree_entries_at_commits` now detects when the two commits share the\n  same projection base and one pending-delta chain is a prefix of the other.\n- For that prefix/suffix shape, the reader loads only the suffix delta packs,\n  collapses touched keys in chain order, fetches base values for those keys with\n  the existing keyed projection lookup, and emits diff entries for those keys.\n- Divergent chains, different projection bases, and projection-root-only diffs\n  keep using the existing full diff paths.\n- Diff row materialization is batched by side: all `before` rows and all `after`\n  rows are hydrated through grouped `materialize_tree_values` calls instead of\n  one row at a time.\n- Added focused coverage for parent-to-child suffix diffs, child-to-parent\n  reverse suffix diffs, and suffix tombstone preservation.\n\nThis follows the Dolt/Sapling-style rule from the reference systems: diff work\nshould scale with changed keys and delta depth, not with the full materialized\nstate when ancestry proves a delta suffix relation.\n\n### Benchmarks\n\nPrimary changed-key command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench 
json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(changed_keys_update_10pct|changed_keys_delta_chain_10x1pct)/1k'\n```\n\nFirst post-patch run:\n\n| row                                                               |   median | criterion status |\n| ----------------------------------------------------------------- | -------: | ---------------- |\n| `sqlite/changed_keys_update_10pct/1k`                             | 2.6094 ms | improved -96.469% |\n| `sqlite/changed_keys_delta_chain_10x1pct/1k`                      | 3.3063 ms | improved -72.997% |\n| `rocksdb/changed_keys_update_10pct/1k`                            | 1.8167 ms | improved -97.460% |\n| `rocksdb/changed_keys_delta_chain_10x1pct/1k`                     | 1.4175 ms | improved -86.437% |\n\nFinal rerun after review LOW fixes:\n\n| row                                                               |   median | interpretation |\n| ----------------------------------------------------------------- | -------: | -------------- |\n| `sqlite/changed_keys_update_10pct/1k`                             | 3.8562 ms | still massively below the pre-patch ~68 ms baseline |\n| `sqlite/changed_keys_delta_chain_10x1pct/1k`                      | 4.6675 ms | still materially below the pre-patch ~10 ms baseline |\n| `rocksdb/changed_keys_update_10pct/1k`                            | 2.1797 ms | still massively below the pre-patch ~67 ms baseline |\n| `rocksdb/changed_keys_delta_chain_10x1pct/1k`                     | 1.7321 ms | still materially below the pre-patch ~8.7 ms baseline |\n\nRead/write guardrail command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nResult: passed as a guardrail. The rerun reported no Criterion regressions for\nexact reads or scans. 
Medians were noisy and raw SQLite also moved, consistent\nwith the change being isolated to diff planning/materialization.\n\n### Storage\n\nStorage command:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row | status          |\n| -------------------------------------- | ------: | --------: | --------------- |\n| raw SQLite / inserted                  | 1692456 |    1692.5 | reference       |\n| Lix SQLite / inserted                  |  897976 |     898.0 | unchanged       |\n| Lix SQLite / after create_version      |  910336 |     910.3 | unchanged       |\n| Lix SQLite / after fast-forward merge  | 5152584 |    5152.6 | unchanged       |\n| Lix SQLite / after divergent merge     | 5304136 |    5304.1 | unchanged/noisy |\n| Lix RocksDB / inserted                 |  811760 |     811.8 | unchanged       |\n| Lix RocksDB / after create_version     |  813507 |     813.5 | unchanged       |\n| Lix RocksDB / after fast-forward merge |  962738 |     962.7 | unchanged       |\n| Lix RocksDB / after divergent merge    | 1306390 |    1306.4 | unchanged       |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nInitial review:\nHIGH: none.\nMEDIUM: none.\nLOW: preserve the planned/materialized row-count invariant instead of using\nget(index).cloned(); add direct reverse-suffix and tombstone suffix coverage.\n\nFollow-up review:\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n```\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo check -p lix_engine --features storage-benches\ncargo test -p lix_engine tracked_state::diff:: --features storage-benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench 
json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(changed_keys_update_10pct|changed_keys_delta_chain_10x1pct)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null)/1k'\n```\n\nAll commands passed.\n\n### Interpretation\n\n```text\nKeep as a changed-key physical-layout optimization.\n\nPrimary axis: diff/changed-key discovery. The structural win avoids rebuilding\nboth full pending states when commit ancestry proves that one side is the other\nplus a suffix of delta packs. Work now scales with touched suffix keys for the\ncommon base->child and base->delta-chain cases.\n\nTiming: the first post-patch run showed 72-97% Criterion improvements across\nSQLite and RocksDB changed-key rows. The final rerun was noisier and slower than\nthe first post-patch run, but still far below the pre-patch tens-of-milliseconds\nbaseline.\n\nStorage is unchanged. No format change, no backward shim, and no benchmark\nmeasurement change. This does not complete the <= 1.5x target; SQLite scans and\nroot writes still need larger structural cuts.\n```\n\n## Optimization 38: Stream delta-pack entries without per-field sections\n\nDate: 2026-05-11\n\nStatus: kept and committed.\n\n### Hypothesis\n\nTracked-state delta packs still carried a length-prefixed sub-section around\nevery encoded delta key and every encoded delta value. Those wrappers were not\nneeded for decoding because the key and value fields are already\nself-delimiting. Removing them should shrink delta packs and avoid per-entry\nencoder buffer surgery, improving any path that writes or decodes delta packs:\nroot writes, exact reads from unmaterialized roots, scans from single delta\npacks, and changed-key suffix diffs.\n\nThis is a clean-cut physical format change. 
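The hypothesis can be illustrated with a toy codec. This is a sketch with hypothetical helpers (`put_field`, `take_field`), not the engine's `tracked_state` codec: the point is only that varint length-prefixed fields are already self-delimiting, so an outer per-entry section repeats information a decode cursor can recover on its own.

```rust
fn put_field(buf: &mut Vec<u8>, bytes: &[u8]) {
    // A varint length prefix makes each field self-delimiting.
    let mut len = bytes.len() as u64;
    loop {
        let byte = (len & 0x7f) as u8;
        len >>= 7;
        if len == 0 {
            buf.push(byte);
            break;
        }
        buf.push(byte | 0x80);
    }
    buf.extend_from_slice(bytes);
}

fn take_field<'a>(buf: &'a [u8], pos: &mut usize) -> Option<&'a [u8]> {
    let (mut len, mut shift) = (0u64, 0u32);
    loop {
        let byte = *buf.get(*pos)?;
        *pos += 1;
        len |= u64::from(byte & 0x7f) << shift;
        if byte & 0x80 == 0 {
            break;
        }
        shift += 7;
    }
    let end = pos.checked_add(len as usize)?;
    let field = buf.get(*pos..end)?;
    *pos = end;
    Some(field)
}

fn main() {
    let mut pack = Vec::new();
    for (key, value) in [
        (b"key-a".as_slice(), b"val-1".as_slice()),
        (b"key-b".as_slice(), b"val-2".as_slice()),
    ] {
        put_field(&mut pack, key); // entry = key fields, then value fields;
        put_field(&mut pack, value); // no outer per-entry length section
    }
    let mut pos = 0;
    let mut entries = 0;
    while pos < pack.len() {
        let _key = take_field(&pack, &mut pos).expect("truncated key");
        let _value = take_field(&pack, &mut pos).expect("truncated value");
        entries += 1;
    }
    assert_eq!(pos, pack.len()); // whole-pack trailing-byte check still holds
    assert_eq!(entries, 2);
}
```

Dropping the wrapper saves the section-length bytes per entry and the encoder buffer surgery needed to backfill each section length after the fact.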
Lix has not shipped, so the delta\npack version was bumped from v5 to v6 without a backward shim.\n\n### Change\n\n- Bumped `tracked_state` delta pack version from 5 to 6.\n- Changed `encode_delta_pack_refs` to stream each entry as:\n  `delta key fields` followed by `delta value fields`.\n- Removed the old `push_var_sized_section` helper and the corresponding\n  per-entry `read_var_sized_slice` boundaries.\n- Changed `decode_delta_pack` to advance a single cursor through each\n  self-delimiting key/value pair, while retaining the whole-pack trailing-byte\n  check.\n- Added `delta_pack_stream_decoder_rejects_trailing_entry_bytes` to lock the\n  new stream boundary behavior.\n\nReference-system rationale: this follows the DuckDB/Parquet-style row-group\nprinciple of removing unnecessary per-row wrapper overhead from hot physical\nstreams. It is not the full columnar row-group design, but it moves the delta\nsegment format one step toward compact, sequential, projection-friendly pages.\n\n### Benchmarks\n\nCommand:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|changed_keys_update_10pct|changed_keys_delta_chain_10x1pct)/1k'\n```\n\nResult:\n\n| row                                                               |   median | criterion status |\n| ----------------------------------------------------------------- | -------: | ---------------- |\n| `sqlite/write_root_all_rows/1k`                                   | 4.7096 ms | improved -6.2208% |\n| `sqlite/get_many_exact_keys/1k`                                   | 2.8942 ms | improved -12.163% |\n| `sqlite/scan_full_rows/1k`                                        | 2.2839 ms | improved -5.5690% |\n| `sqlite/prefix_scan_schema_file_null/1k`                          | 2.2837 ms | improved -9.4296% |\n| 
`sqlite/changed_keys_update_10pct/1k`                             | 2.3014 ms | improved -39.610% |
| `sqlite/changed_keys_delta_chain_10x1pct/1k`                      | 2.5375 ms | improved -57.367% |
| `rocksdb/write_root_all_rows/1k`                                  | 4.5390 ms | no change (-3.8047%) |
| `rocksdb/get_many_exact_keys/1k`                                  | 2.1650 ms | improved -15.785% |
| `rocksdb/scan_full_rows/1k`                                       | 1.5886 ms | improved -6.3037% |
| `rocksdb/prefix_scan_schema_file_null/1k`                         | 1.5596 ms | improved -8.6713% |
| `rocksdb/changed_keys_update_10pct/1k`                            | 1.7168 ms | improved -41.184% |
| `rocksdb/changed_keys_delta_chain_10x1pct/1k`                     | 1.8068 ms | no change (-2.2237%) |

Interpretation: keep. The root-write and scan rows did not clear the 10% bar,
but exact reads cleared it on both backends, and the changed-key rows cleared
it on every backend except the RocksDB delta-chain row. There were no Criterion
regressions in the target guardrails.

This still does not complete the <= 1.5x target.
The remaining root-write and\nSQLite scan misses need the larger row-group/projection-page work identified by\nthe first-principles pass.\n\n### Storage\n\nCommand:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    |   bytes | bytes/row |\n| -------------------------------------- | ------: | --------: |\n| raw SQLite / inserted                  | 1692456 |    1692.5 |\n| Lix SQLite / inserted                  |  897976 |     898.0 |\n| Lix SQLite / after create_version      |  910336 |     910.3 |\n| Lix SQLite / after fast-forward merge  | 5115504 |    5115.5 |\n| Lix SQLite / after divergent merge     | 5275248 |    5275.2 |\n| Lix RocksDB / inserted                 |  809722 |     809.7 |\n| Lix RocksDB / after create_version     |  811467 |     811.5 |\n| Lix RocksDB / after fast-forward merge |  960498 |     960.5 |\n| Lix RocksDB / after divergent merge    | 1303449 |    1303.4 |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nInitial review:\nHIGH: none.\nMEDIUM: none.\nLOW: add a focused malformed v6 entry test for the stream boundary behavior.\n\nFollow-up review:\nHIGH: none.\nMEDIUM: none.\nLOW: none.\n```\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo check -p lix_engine --features storage-benches\ncargo test -p lix_engine tracked_state::codec:: --features storage-benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|changed_keys_update_10pct|changed_keys_delta_chain_10x1pct)/1k'\n```\n\nAll commands passed.\n\n### Next Work\n\nThe 
independent first-principles sidecar pass ranked the next plausible\n>=10% structural moves as:\n\n1. A unified commit/tracked row group that removes duplicated authored row facts\n   between `commit_store.change_pack` and `tracked_state.delta_pack`.\n2. A projection-page scan API so key/header/payload-ref/full-row scans decode\n   only the columns they need.\n3. Columnar tracked leaves/row groups for key suffixes, scalar headers,\n   timestamp codes, and payload refs.\n\nThose are the likely paths for the remaining root-write and SQLite scan misses.\n\n## Optimization 39: Use tracked delta packs as authored commit row groups\n\nDate: 2026-05-11\n\nStatus: kept and committed.\n\n### Hypothesis\n\nTracked commits still wrote authored row facts twice:\n\n1. `commit_store.change_pack`, for commit/change APIs.\n2. `tracked_state.delta_pack`, for tracked projection reads and diffs.\n\nBoth streams carry the same authored change identity, schema/file/entity key,\npayload refs, change id, commit locator, and change timestamp. 
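As an illustration of that duplication, both packs serialized a record shaped roughly like the one below. The struct and field names are invented for this sketch and do not mirror the engine's real types.

```rust
// Illustrative only: an approximation of the authored row facts that both
// commit_store.change_pack and tracked_state.delta_pack carried per row.
struct AuthoredRowFact {
    schema_key: String,           // schema/file/entity key parts
    file_id: String,
    entity_id: String,
    payload_ref: Option<Vec<u8>>, // content-addressed payload reference
    change_id: String,            // authored change identity
    commit_id: String,            // commit locator
    source_ordinal: u32,          // position inside the commit pack
    changed_at_ms: u64,           // change timestamp
}

fn main() {
    let fact = AuthoredRowFact {
        schema_key: "json_pointer".into(),
        file_id: "file-1".into(),
        entity_id: "row-42".into(),
        payload_ref: Some(vec![0xab; 32]),
        change_id: "chg-1".into(),
        commit_id: "commit-1".into(),
        source_ordinal: 0,
        changed_at_ms: 1_700_000_000_000,
    };
    // After the change there is one physical copy of this record, read by
    // both the commit-store view and the tracked projection.
    println!(
        "{}:{}:{} @ commit {}",
        fact.schema_key, fact.file_id, fact.entity_id, fact.commit_id
    );
}
```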
Removing the\nduplicate commit-store authored pack for tracked commits should cut root-write\nwork and storage materially, while keeping commit-store APIs as views over the\nsame tracked delta row group.\n\nThis follows the row-group principle from DuckDB/Parquet and the\ncontent-addressed shared-structure principle from Dolt/Sapling: store the\ncommit-local row facts once, then expose logical views over that one physical\nsegment.\n\n### Change\n\n- Added `CommitStoreWriter::stage_tracked_commit_draft(s)`.\n- Tracked commit call sites now use the tracked staging path:\n  transaction commit, initialization, test support, live-state test helper, and\n  storage-bench tracked root writes.\n- The tracked staging path still validates uniqueness/adoption through\n  commit_store, still writes the commit header, and still writes membership\n  packs for adopted rows, but it does not write a duplicate\n  `commit_store.change_pack` for authored tracked rows.\n- `commit_store::storage::load_change_pack` still prefers a direct commit-store\n  change pack. When it is absent, it reconstructs authored changes from\n  `tracked_state.delta_pack` entries whose locators point at the requested\n  `(commit_id, pack_id)`.\n- Fallback reconstruction uses `delta.value.updated_at` as\n  `Change.created_at`, because commit-store `Change.created_at` is the change\n  timestamp, not the original entity creation timestamp.\n- Fallback reconstruction collects by ordinal in a `BTreeMap` and validates\n  dense ordinals from zero, avoiding allocations based on untrusted\n  `source_ordinal` values.\n- Added focused tests for tracked commit-pack fallback and sparse ordinal\n  rejection.\n\nNo backward shim was added. 
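The dense-ordinal rule above can be sketched as follows. `collect_dense` and its types are hypothetical stand-ins for the fallback reconstruction, not the engine's actual code: the shape to note is that the untrusted `source_ordinal` only keys a `BTreeMap`, and the output `Vec` is sized from the entry count rather than from any ordinal value.

```rust
use std::collections::BTreeMap;

// Collect entries keyed by their untrusted ordinal, then accept the pack
// only if the ordinals are exactly 0..n with no gaps or duplicates.
fn collect_dense<T>(entries: Vec<(u64, T)>) -> Result<Vec<T>, String> {
    let mut by_ordinal = BTreeMap::new();
    for (ordinal, value) in entries {
        if by_ordinal.insert(ordinal, value).is_some() {
            return Err(format!("duplicate ordinal {ordinal}"));
        }
    }
    // BTreeMap iterates in ascending key order, so a dense ordinal set must
    // enumerate as exactly 0, 1, 2, ...
    let mut out = Vec::with_capacity(by_ordinal.len());
    for (expected, (ordinal, value)) in by_ordinal.into_iter().enumerate() {
        if ordinal != expected as u64 {
            return Err(format!("sparse ordinal {ordinal}, expected {expected}"));
        }
        out.push(value);
    }
    Ok(out)
}

fn main() {
    // In-order reconstruction succeeds even when input arrives shuffled.
    let ok = collect_dense(vec![(1, "b"), (0, "a"), (2, "c")]);
    assert_eq!(ok, Ok(vec!["a", "b", "c"]));

    // A gap is rejected instead of driving an allocation from the
    // untrusted ordinal value.
    let sparse = collect_dense(vec![(0, "a"), (5, "b")]);
    assert!(sparse.is_err());
}
```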
Lix has not shipped.\n\n### Benchmarks\n\nPrimary command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null|changed_keys_update_10pct|changed_keys_delta_chain_10x1pct)/1k'\n```\n\nKey medians:\n\n| row                                                               |   median | criterion status |\n| ----------------------------------------------------------------- | -------: | ---------------- |\n| `sqlite/write_root_all_rows/1k`                                   | 4.3853 ms | improved -10.019% |\n| `sqlite/get_many_exact_keys/1k`                                   | 2.7779 ms | improved -12.889% |\n| `sqlite/scan_full_rows/1k`                                        | 2.9966 ms | regressed +35.198%; contradicted by rerun |\n| `sqlite/prefix_scan_schema_file_null/1k`                          | 2.9472 ms | regressed +26.394%; contradicted by rerun |\n| `sqlite/changed_keys_update_10pct/1k`                             | 2.4469 ms | regressed +18.411%; contradicted by rerun |\n| `sqlite/changed_keys_delta_chain_10x1pct/1k`                      | 2.3284 ms | improved -7.6062% |\n| `rocksdb/write_root_all_rows/1k`                                  | 4.5553 ms | improved -43.051% |\n| `rocksdb/get_many_exact_keys/1k`                                  | 2.1086 ms | improved -17.576% |\n| `rocksdb/scan_full_rows/1k`                                       | 1.5743 ms | improved -8.0804% |\n| `rocksdb/prefix_scan_schema_file_null/1k`                         | 1.6102 ms | no change +2.5900% |\n| `rocksdb/changed_keys_update_10pct/1k`                            | 1.3325 ms | no change -17.007%; p=0.15 |\n\nRerun command for red-flag rows:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 
'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|scan_full_rows|prefix_scan_schema_file_null|changed_keys_update_10pct)/1k'\n```\n\nRerun:\n\n| row                                           |   median | criterion status |\n| --------------------------------------------- | -------: | ---------------- |\n| `sqlite/write_root_all_rows/1k`               | 4.6088 ms | no change +1.6643% |\n| `sqlite/scan_full_rows/1k`                    | 2.2111 ms | improved -35.751% |\n| `sqlite/prefix_scan_schema_file_null/1k`      | 2.1713 ms | improved -28.351% |\n| `sqlite/changed_keys_update_10pct/1k`         | 2.1967 ms | improved -15.575% |\n| `rocksdb/write_root_all_rows/1k`              | 4.3510 ms | no change -3.8310% |\n| `rocksdb/scan_full_rows/1k`                   | 1.5100 ms | no change -1.8000% |\n| `rocksdb/prefix_scan_schema_file_null/1k`     | 1.7379 ms | no change +3.0011% |\n| `rocksdb/changed_keys_update_10pct/1k`        | 1.4297 ms | no change +2.7686% |\n\nInterpretation: keep. Criterion timing is noisy, but the primary run clears\nthe >=10% bar on SQLite root writes, RocksDB root writes, and exact reads. The\nrerun clears the scan red flags. 
The structural storage reduction below is the\nstronger evidence that this is a real physical-layout win.\n\n### Storage\n\nCommand:\n\n```sh\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| row                                    | before bytes/row | after bytes/row | delta |\n| -------------------------------------- | ---------------: | --------------: | ----: |\n| raw SQLite / inserted                  |           1692.5 |          1692.5 |   0% |\n| Lix SQLite / inserted                  |            898.0 |           724.9 | -19.3% |\n| Lix SQLite / after create_version      |            910.3 |           745.5 | -18.1% |\n| Lix SQLite / after fast-forward merge  |           5115.5 |          3209.3 | -37.3% |\n| Lix SQLite / after divergent merge     |           5275.2 |          5111.4 |  -3.1% |\n| Lix RocksDB / inserted                 |            809.7 |           655.7 | -19.0% |\n| Lix RocksDB / after create_version     |            811.5 |           657.2 | -19.0% |\n| Lix RocksDB / after fast-forward merge |            960.5 |           776.7 | -19.1% |\n| Lix RocksDB / after divergent merge    |           1303.4 |          1060.1 | -18.7% |\n\n### Review Loop\n\nReviewer pass:\n\n```text\nInitial review:\nHIGH: fallback Change.created_at used delta.value.created_at, but commit-store\nChange.created_at is the change timestamp. Use delta.value.updated_at.\nMEDIUM: fallback Vec resized from untrusted source_ordinal before dense-order\nvalidation. 
Avoid allocation from corrupt ordinal values.\nLOW: stage_tracked_commit_draft(s) leaves a two-step internal invariant:\ncallers must also stage the matching tracked delta pack.\n\nFollow-up review:\nHIGH: none.\nMEDIUM: none.\nLOW: the two-step invariant remains acceptable for this first row-group step;\neventual API should make tracked commit + delta staging atomic.\n```\n\n### Verification\n\n```sh\ncargo fmt --check\ncargo check -p lix_engine --features storage-benches\ncargo test -p lix_engine commit_store:: --features storage-benches\ncargo test -p lix_engine tracked_state:: --features storage-benches\ncargo test -p lix_engine transaction --features storage-benches\ncargo test -p lix_engine commit_store::storage::tests::tracked_commit_change_pack --features storage-benches\ncargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null|changed_keys_update_10pct|changed_keys_delta_chain_10x1pct)/1k'\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|scan_full_rows|prefix_scan_schema_file_null|changed_keys_update_10pct)/1k'\n```\n\nAll commands passed.\n\n### Next Work\n\nThis is not the final row-group API yet. The next clean cut should remove the\ntwo-step invariant by making tracked commit staging and tracked delta staging\none atomic commit-local row-group operation. 
After that, the projection-page\nscan API remains the likely path for the remaining SQLite scan/root-write\nratio misses.\n\n## Opt 40: Mixed JSON Pack Indexes in Delta Packs\n\nImplemented a corrected delta-pack JSON reference row-group: when tracked\nstate and JSON payloads are staged into the same commit-local JSON pack,\n`tracked_state.delta_pack` v7 can encode JSON refs as pack-local ordinals\ninstead of repeating 32-byte hashes. Refs that are not in the pack still fall\nback to inline hashes, and empty index maps fall back to the old inline mode.\n\nThe index map comes from the actual `json_store.pack` write order in\n`JsonStoreWriter::stage_batch_report`, avoiding the failed guessed-ordinal\nattempt. Decode resolves ordinals against `json_store.pack` refs only when the\ndelta pack declares mixed mode; inline packs do not depend on parsing a JSON\npack. The public internal staging edge now carries explicit `(commit_id,\npack_id)` identity and validates that this path is pack-0-only.\n\n### Storage\n\nCommand:\n\n```sh\ncargo test -p lix_engine lix_key_value_insert_amplification_north_star --features storage-benches -- --ignored --nocapture\n```\n\nResult: passed.\n\n1k rows:\n\n| namespace                  | before bytes | after bytes | delta |\n| -------------------------- | -----------: | ----------: | ----: |\n| `commit_store.commit`      |          205 |         205 |  0.0% |\n| `json_store.pack`          |      100,064 |     100,064 |  0.0% |\n| `tracked_state.delta_pack` |      131,968 |     101,841 | -22.8% |\n| `untracked_state.row`      |          386 |         386 |  0.0% |\n| total write bytes          |      232,623 |     202,496 | -12.9% |\n\nRead-call accounting improved from 95 to 85 calls for the 1k north-star run\nbecause delta and JSON-pack lookup are batched.\n\n### Physical Benchmark\n\nCommand:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 
'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null|changed_keys_update_10pct)/1k'\n```\n\nFinal run:\n\n| benchmark                                      | mean      | criterion result |\n| ---------------------------------------------- | --------: | ---------------- |\n| `raw_sqlite/write_root_all_rows/1k`            | 2.6157 ms | no change |\n| `raw_sqlite/get_many_exact_keys/1k`            | 2.3298 ms | no change |\n| `raw_sqlite/scan_full_rows/1k`                 | 1.2697 ms | no change |\n| `raw_sqlite/prefix_scan_schema_file_null/1k`   | 1.2167 ms | no change |\n| `sqlite/write_root_all_rows/1k`                | 4.4511 ms | no change |\n| `sqlite/get_many_exact_keys/1k`                | 2.7910 ms | no change |\n| `sqlite/scan_full_rows/1k`                     | 2.2780 ms | no change |\n| `sqlite/prefix_scan_schema_file_null/1k`       | 2.3066 ms | no change |\n| `sqlite/changed_keys_update_10pct/1k`          | 2.2692 ms | no change |\n| `rocksdb/write_root_all_rows/1k`               | 4.7723 ms | no change |\n| `rocksdb/get_many_exact_keys/1k`               | 2.1705 ms | improved -7.8977% |\n| `rocksdb/scan_full_rows/1k`                    | 1.6106 ms | no change |\n| `rocksdb/prefix_scan_schema_file_null/1k`      | 1.6688 ms | noise +4.5902% |\n| `rocksdb/changed_keys_update_10pct/1k`         | 1.5158 ms | no change |\n\nInterpretation: keep. The durable result is a 12.9% root-write byte reduction\nwith no significant physical-benchmark regression in the final run. 
This does\nnot solve the remaining <=1.5x SQLite scan/write ratios, but it removes a real\nduplicate 32-byte-ref payload from the row-group layout.\n\n### Review Loop\n\nReviewer pass:\n\n```text\nInitial review:\nHIGH: packless/empty-index batches could emit mixed mode with no JSON pack,\nmaking inline-only delta packs unreadable.\nMEDIUM: bare pack-index maps were too easy to misuse across commit/pack ids.\n\nFollow-up review:\nMEDIUM: load_delta_pack decoded JSON pack refs before knowing whether the\ndelta pack was mixed-mode, so corrupt JSON pack data could break inline packs.\n\nFinal review:\nHIGH: none.\nMEDIUM: none.\n```\n\n### Verification\n\n```sh\ncargo fmt -p lix_engine\ncargo test -p lix_engine tracked_state::codec:: --features storage-benches\ncargo test -p lix_engine json_store:: --features storage-benches\ncargo test -p lix_engine transaction --features storage-benches\ncargo test -p lix_engine lix_key_value_insert_amplification_north_star --features storage-benches -- --ignored --nocapture\ncargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null|changed_keys_update_10pct)/1k'\n```\n\nAll commands passed.\n"
  },
  {
    "path": "optimization_log9_sql2.md",
    "content": "# Optimization Log 9: SQL2 Logical CRUD\n\nGoal: make the logical work inside `sql2` fast for an isolated JSON-pointer\nCRUD benchmark surface owned by this log.\n\nThe pure target is SQL2 overhead: statement classification, SQL parsing,\nDataFusion logical planning, provider scan planning, DML normalization, SQL\nruntime collection, parameter conversion, and result conversion.\n\nAll optimization changes in this log must stay inside the `sql2` module. If a\nprofile shows that SQL2 is slow because it lacks a better read/write primitive\nfrom another module, record that as an outside-SQL2 follow-up and keep the code\nchange out of this log.\n\n## Benchmark Fit\n\nThe scorecard benchmark for this log is:\n\n```sh\ncargo bench -p lix_engine --bench optimization9_sql2 --features storage-benches -- 'optimization9_sql2/smoke_crud'\n```\n\nThe important isolated E2E groups are:\n\n```text\noptimization9_sql2/smoke_crud/lix_sqlite\noptimization9_sql2/smoke_crud/lix_rocksdb\n```\n\nThe CRUD operations already exercise the SQL2 path through\n`SessionContext::execute`:\n\n```text\ninsert_all_rows/1k\nselect_all_path_value/1k\nselect_one_by_pk/1k\nupdate_all_values/1k\nupdate_one_by_pk/1k\ndelete_all_rows/1k\ndelete_one_by_pk/1k\n```\n\nNo raw SQLite, raw storage, branch, merge, or shared fixture rows belong to the\nLog 9 scorecard. If a profile points below SQL2, record the finding as an\noutside-SQL2 follow-up instead of expanding this benchmark.\n\n```text\noptimization9_sql2/smoke_crud:\n  isolated Log 9 scorecard\n\noptimization9_sql2 diagnostic groups:\n  planning/execution/literal-vs-parameterized microscope\n```\n\nThis keeps the SQL2 CRUD campaign independent from other benchmark suites and\noptimization logs.\n\n## Why This Is A SQL2 Benchmark\n\nEach Lix CRUD benchmark iteration excludes fixture setup via Criterion\n`iter_batched`, then measures one user-visible SQL operation. 
Inside the measured\noperation, the call path is:\n\n```text\nSessionContext::execute\n  -> sql2::classify_statement\n  -> sql2::create_logical_plan or sql2::create_write_logical_plan\n  -> build_read_session or build_write_session\n  -> DataFusion create_logical_plan\n  -> provider logical planning / DML normalization\n  -> sql2::execute_logical_plan\n  -> sql2::runtime::collect_dataframe\n  -> query_result_from_batches / affected_rows_from_query_result\n```\n\nThat is exactly the logical SQL2 surface we need to optimize. The benchmark is\nespecially useful because it covers both:\n\n```text\nread SQL:\n  SELECT path, value FROM json_pointer ORDER BY path\n  SELECT path, value FROM json_pointer WHERE path = '<path>'\n\nwrite SQL:\n  INSERT INTO json_pointer (path, value) VALUES ...\n  UPDATE json_pointer SET value = ...\n  UPDATE json_pointer SET value = ... WHERE path = '<path>'\n  DELETE FROM json_pointer\n  DELETE FROM json_pointer WHERE path = '<path>'\n```\n\n## Dedicated Diagnostic Bench\n\n`optimization9_sql2` is the dedicated SQL2 diagnostic suite for this log:\n\n```sh\ncargo bench -p lix_engine --bench optimization9_sql2 --features storage-benches\n```\n\nIt uses local copies of the JSON-pointer fixture and schema so the suite is\nisolated from `json_pointer_crud` and `plugin-json-v2` paths:\n\n```text\npackages/engine/benches/optimization9_sql2/pnpm-lock.fixture.json\npackages/engine/benches/optimization9_sql2/json_pointer.schema.json\n```\n\nIt is intentionally small and self-contained. 
Its job is to separate SQL2\nplanning cost from SQL2 execution cost and to compare literal vs parameterized\npoint CRUD statements.\n\nBenchmark groups:\n\n```text\noptimization9_sql2/smoke_crud/lix_sqlite\noptimization9_sql2/smoke_crud/lix_rocksdb\n\noptimization9_sql2/planning_only/lix_sqlite\noptimization9_sql2/planning_only/lix_rocksdb\n\noptimization9_sql2/execute_preplanned/lix_sqlite\noptimization9_sql2/execute_preplanned/lix_rocksdb\n\noptimization9_sql2/e2e_literal/lix_sqlite\noptimization9_sql2/e2e_literal/lix_rocksdb\n\noptimization9_sql2/e2e_parameterized/lix_sqlite\noptimization9_sql2/e2e_parameterized/lix_rocksdb\n```\n\nDiagnostic rows:\n\n```text\nsmoke_crud:\n  insert_all_rows/1k\n  select_all_path_value/1k\n  select_one_by_pk/1k\n  update_all_values/1k\n  update_one_by_pk/1k\n  delete_all_rows/1k\n  delete_one_by_pk/1k\n\nplanning_only:\n  select_all_path_value/1k\n  select_one_by_pk/1k\n  insert_500_values/1k\n  update_all_values/1k\n  delete_all_rows/1k\n\nexecute_preplanned:\n  select_all_path_value/1k\n  select_one_by_pk/1k\n\ne2e_literal:\n  select_one_by_pk/1k\n  update_one_by_pk/1k\n  delete_one_by_pk/1k\n\ne2e_parameterized:\n  select_one_by_pk/1k\n  update_one_by_pk/1k\n  delete_one_by_pk/1k\n```\n\nThe split means:\n\n```text\nsmoke_crud:\n  isolated 1k CRUD scorecard for this optimization log\n\nplanning_only:\n  parse/classify/session construction/DataFusion logical planning/provider setup\n\nexecute_preplanned:\n  physical collection/provider scan/result conversion after read SQL is planned\n\ne2e_literal vs e2e_parameterized:\n  statement planning plus execution through public SessionContext::execute\n```\n\nWrite `execute_preplanned` rows are intentionally not present yet. SQL2 write\nproviders currently rely on a transaction-scoped `SqlWriteContext` pointer whose\nplanning and execution must stay inside the same write frame. 
The suite records\nwrite planning separately and uses E2E literal/parameterized rows for write\nexecution until SQL2 has a safe write-plan diagnostic boundary.\n\n## Profiler Workflow\n\nUse the profiler before changing code. Profile one operation at a time so the\nflamegraph is readable.\n\nPrimary filters:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 -- 'optimization9_sql2/smoke_crud/lix_sqlite'\ncargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 -- 'optimization9_sql2/planning_only/lix_sqlite/insert_500_values/1k'\ncargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 -- 'optimization9_sql2/planning_only/lix_sqlite/update_all_values/1k'\ncargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 -- 'optimization9_sql2/planning_only/lix_sqlite/delete_all_rows/1k'\ncargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 -- 'optimization9_sql2/execute_preplanned/lix_sqlite/select_one_by_pk/1k'\ncargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 -- 'optimization9_sql2/e2e_parameterized/lix_sqlite/select_one_by_pk/1k'\n```\n\nRepeat the same filters for `lix_rocksdb` only after the SQLite profile has a\nclear hypothesis. If both backends show the same SQL2 stack, optimize SQL2. 
If\nthey diverge below the SQL2 boundary, capture the missing primitive or backend\ncost as a later outside-SQL2 optimization lead.\n\nRecord the top stacks in each entry with this classification:\n\n```text\nsql2 planning:\n  classify_statement, validate_supported_statement_ast, build_*_session,\n  create_logical_plan, validate_supported_logical_plan,\n  validate_json_predicates_in_logical_plan, provider table scan planning\n\nsql2 execution glue:\n  execute_logical_plan, collect_dataframe, parameter conversion,\n  query_result_from_batches, affected row conversion\n\nprovider logical work:\n  predicate extraction, projection mapping, DML normalization,\n  insert/update/delete batch construction, value JSON coercion\n\nnot SQL2:\n  backend IO, tracked-state materialization, delta decoding, commit graph,\n  RocksDB/SQLite storage write application\n\noutside-SQL2 follow-up:\n  missing read/write primitive, storage/provider API limitation, layout issue,\n  or backend-specific behavior that SQL2 cannot fix internally\n```\n\n## Initial Scorecard\n\nThe scorecard for this log is isolated in `optimization9_sql2/smoke_crud`.\nDo not use rows from any other benchmark suite as Log 9 baselines.\n\nBaseline command:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench optimization9_sql2\n```\n\nBaseline commit:\n\n```text\n1010c12c plus uncommitted Log 9 benchmark files\n```\n\nInitial isolated 1k smoke CRUD rows after rebasing onto\n`origin/physical-layout-manual-goal-ii-`:\n\n| operation               |       Lix SQLite |      Lix RocksDB |\n| ----------------------- | ---------------: | ---------------: |\n| `insert_all_rows`       | 62.714-72.740 ms | 52.627-57.653 ms |\n| `select_all_path_value` | 18.980-20.138 ms | 9.9962-11.163 ms |\n| `select_one_by_pk`      | 7.6860-9.2848 ms | 2.2846-2.7899 ms |\n| `update_all_values`     | 53.337-123.20 ms | 19.038-20.238 ms |\n| `update_one_by_pk`      | 8.5795-13.785 ms | 4.5116-4.7572 ms |\n| 
`delete_all_rows`       | 30.914-33.230 ms | 21.999-25.876 ms |\n| `delete_one_by_pk`      | 7.2750-7.9671 ms | 4.4184-5.2644 ms |\n\nInitial `optimization9_sql2` diagnostic rows after rebase:\n\n| group                | operation                  |       Lix SQLite |      Lix RocksDB |\n| -------------------- | -------------------------- | ---------------: | ---------------: |\n| `planning_only`      | `select_all_path_value/1k` | 3.3115-3.7821 ms | 1.6485-1.9012 ms |\n| `planning_only`      | `select_one_by_pk/1k`      | 2.9706-4.9726 ms | 1.6292-1.8691 ms |\n| `planning_only`      | `insert_500_values/1k`     | 11.099-11.953 ms | 11.316-12.420 ms |\n| `planning_only`      | `update_all_values/1k`     | 3.5833-3.9703 ms | 2.1247-2.3981 ms |\n| `planning_only`      | `delete_all_rows/1k`       | 3.6369-4.0269 ms | 2.0014-2.2900 ms |\n| `execute_preplanned` | `select_all_path_value/1k` | 8.7746-9.3653 ms | 8.8134-9.7773 ms |\n| `execute_preplanned` | `select_one_by_pk/1k`      | 1.3400-1.4785 ms | 1.4099-1.8420 ms |\n| `e2e_literal`        | `select_one_by_pk/1k`      | 3.8340-4.1884 ms | 2.4221-3.5113 ms |\n| `e2e_literal`        | `update_one_by_pk/1k`      | 7.0420-8.2160 ms | 4.4839-5.3388 ms |\n| `e2e_literal`        | `delete_one_by_pk/1k`      | 7.4717-7.9987 ms | 4.2601-5.5313 ms |\n| `e2e_parameterized`  | `select_one_by_pk/1k`      | 3.7137-4.0738 ms | 2.1038-2.4607 ms |\n| `e2e_parameterized`  | `update_one_by_pk/1k`      | 7.5761-9.0774 ms | 4.1165-4.7877 ms |\n| `e2e_parameterized`  | `delete_one_by_pk/1k`      | 7.4651-8.2425 ms | 4.4257-5.1296 ms |\n\nHetzner CX33 baseline rerun on 2026-05-11:\n\n```text\nMachine: Hetzner CX33\nHost: ubuntu-32gb-hil-1\nCPU: 8 vCPU, AMD EPYC-Milan Processor, KVM\nKernel: Linux 6.8.0-90-generic x86_64\nCommit: 9ff4f9cb\nCommand: cargo bench -p lix_engine --features storage-benches --bench optimization9_sql2\n```\n\nHetzner CX33 isolated 1k smoke CRUD rows:\n\n| operation               |       Lix SQLite |      Lix 
RocksDB |\n| ----------------------- | ---------------: | ---------------: |\n| `insert_all_rows`       | 70.105-71.910 ms | 67.767-68.316 ms |\n| `select_all_path_value` | 17.530-17.943 ms | 13.421-13.936 ms |\n| `select_one_by_pk`      | 6.6463-6.9219 ms | 2.9247-3.0022 ms |\n| `update_all_values`     | 34.429-35.507 ms | 25.341-25.724 ms |\n| `update_one_by_pk`      | 10.367-10.581 ms | 6.3116-6.4393 ms |\n| `delete_all_rows`       | 35.935-36.724 ms | 26.690-27.071 ms |\n| `delete_one_by_pk`      | 10.616-10.778 ms | 6.4811-6.6185 ms |\n\nHetzner CX33 `optimization9_sql2` diagnostic rows:\n\n| group                | operation                  |       Lix SQLite |      Lix RocksDB |\n| -------------------- | -------------------------- | ---------------: | ---------------: |\n| `planning_only`      | `select_all_path_value/1k` | 5.7264-5.8371 ms | 2.1837-2.3126 ms |\n| `planning_only`      | `select_one_by_pk/1k`      | 5.3823-5.5152 ms | 2.2103-2.2705 ms |\n| `planning_only`      | `insert_500_values/1k`     | 14.105-14.283 ms | 12.987-13.275 ms |\n| `planning_only`      | `update_all_values/1k`     | 6.3326-6.4489 ms | 2.7961-2.8708 ms |\n| `planning_only`      | `delete_all_rows/1k`       | 6.2279-7.0361 ms | 2.6768-2.7504 ms |\n| `execute_preplanned` | `select_all_path_value/1k` | 11.515-11.711 ms | 11.964-12.364 ms |\n| `execute_preplanned` | `select_one_by_pk/1k`      | 1.5469-1.5784 ms | 1.6215-1.6790 ms |\n| `e2e_literal`        | `select_one_by_pk/1k`      | 6.3640-6.4680 ms | 2.9476-2.9911 ms |\n| `e2e_literal`        | `update_one_by_pk/1k`      | 9.9933-10.128 ms | 6.1048-6.2638 ms |\n| `e2e_literal`        | `delete_one_by_pk/1k`      | 10.509-11.015 ms | 6.4548-6.6268 ms |\n| `e2e_parameterized`  | `select_one_by_pk/1k`      | 6.5033-6.6564 ms | 3.1192-3.2197 ms |\n| `e2e_parameterized`  | `update_one_by_pk/1k`      | 10.169-11.111 ms | 6.4063-6.6222 ms |\n| `e2e_parameterized`  | `delete_one_by_pk/1k`      | 10.407-10.631 ms | 6.4029-6.5440 ms 
|\n\nInterpretation:\n\n```text\nThe benchmark suite is good enough to start optimizing SQL2 CRUD now.\nThe highest-value SQL2 profiles are insert_all_rows, delete_all_rows, and\nupdate_all_values, with PK read/update/delete as planning/provider overhead\nprobes. Full scan is the lowest priority within this isolated scorecard because\nit is already much closer than insert and bulk writes.\n```\n\nSQL2-only boundary:\n\n```text\nAllowed edit surface:\n  packages/engine/src/sql2/**\n\nNot allowed in this log:\n  storage layout changes\n  tracked-state reader/writer changes\n  live-state changes\n  transaction staging changes outside SQL2\n  benchmark success achieved by changing backend behavior\n\nRequired handling for outside-SQL2 findings:\n  Record the profile evidence, name the missing primitive or non-SQL2 bottleneck,\n  and leave it for a future non-SQL2 optimization log.\n```\n\n## Optimization Order\n\n1. `insert_all_rows`\n2. `delete_all_rows`\n3. `update_all_values`\n4. `update_one_by_pk` and `delete_one_by_pk`\n5. `select_one_by_pk`\n6. `select_all_path_value`\n\nRationale:\n\n```text\nInsert is still the slowest CRUD operation at roughly 60-70 ms for 1k rows and\nexecutes the richest SQL2 write path: large VALUES planning, JSON literal\ncoercion, insert normalization, identity/default handling, and staging.\n\nBulk delete and update are the best probes for avoidable provider logical work\nover many current rows.\n\nSingle-row PK operations isolate per-statement SQL2 overhead. They are small in\nabsolute time now, but they reveal whether SQL2 is doing too much planning or\nprovider setup for point operations.\n```\n\n## Candidate Optimization Themes\n\nDo not implement these blindly. 
Each needs a profile entry first.\n\n```text\nSession/catalog setup:\n  avoid rebuilding expensive read/write DataFusion session state per statement\n  when visible schemas and functions are unchanged inside a benchmark session\n\nLogical-plan validation:\n  collapse repeated recursive walks over the same DataFusion logical plan\n  combine support validation, JSON predicate validation, notices, and statement\n  kind classification where possible\n\nDML normalization:\n  reduce per-row cloning and JSON string/value round trips for INSERT VALUES\n  build typed row batches directly from DataFusion expressions when safe\n\nProvider scan planning:\n  push path equality filters into exact-key load requests early\n  avoid broad scan request construction for single-PK SELECT/UPDATE/DELETE\n\nResult conversion:\n  avoid unnecessary cloning of column metadata and JSON values\n  keep affected-row write results minimal\n\nRuntime collection:\n  make SQL2 collect only the needed batches/columns for affected-row DML\n  avoid full row materialization when the operation only needs a count\n```\n\n## Keep Criteria\n\nFor every kept optimization:\n\n```text\nprimary:\n  improves the targeted Lix SQLite 1k smoke CRUD row by >= 10%\n  does not regress any other Lix SQLite 1k CRUD row by > 5%\n\ncross-backend:\n  improves or stays neutral on the matching Lix RocksDB row\n  any backend split is explained by profile evidence\n\nguardrails:\n  benchmark suite stays isolated to optimization9_sql2 fixture/schema files\n  any non-SQL2 bottleneck is recorded as outside-SQL2 follow-up\n  sql2 and code-structure tests pass\n```\n\nVerification commands:\n\n```sh\ncargo bench -p lix_engine --features storage-benches --bench optimization9_sql2\ncargo test -p lix_engine sql2\ncargo test -p lix_engine --test code_structure sql2\n```\n\n## Entry Template\n\nUse one entry per kept SQL2 optimization.\n\n```text\n## Optimization N: <short name>\n\nCommit:\n  <hash> or uncommitted on <hash>\n\nTarget 
operation:\n  insert_all_rows | select_all_path_value | select_one_by_pk |\n  update_all_values | update_one_by_pk | delete_all_rows |\n  delete_one_by_pk\n\nProfile before:\n  command:\n  top SQL2 stacks:\n  non-SQL2 stacks:\n  conclusion:\n\nChange:\n  What changed?\n  Why does this reduce logical SQL2 work?\n  What semantic invariant is preserved?\n\nResults:\n  Include impacted optimization9_sql2 diagnostic rows.\n  Include optimization9_sql2/smoke_crud Lix SQLite and Lix RocksDB rows for\n  every CRUD operation.\n\nGuardrails:\n  Confirm the benchmark still uses only local optimization9_sql2 fixture/schema\n  files.\n\nOutside-SQL2 follow-up:\n  If the profile points to a missing primitive or non-SQL2 bottleneck, record it\n  here. Do not include that implementation in this log.\n\nDecision:\n  Keep, revert, or follow-up.\n```\n\n## Optimization 1: Reuse Parsed DataFusion Statement For Write Planning\n\nCommit:\n  uncommitted on 80f4f68a\n\nTarget operation:\n  logical planning for optimization9_sql2/planning_only/lix_sqlite/insert_500_values/1k\n\nProfile before:\n  command:\n    perf record --output=/tmp/sql2-insert-plan.perf.data -F 499 -g --call-graph dwarf target/release/deps/optimization9_sql2-bd3fa4efccf19070 --bench 'optimization9_sql2/planning_only/lix_sqlite/insert_500_values/1k' --profile-time 8\n    perf report --stdio --quiet --no-inline --input=/tmp/sql2-insert-plan.perf.data --no-call-graph --sort=symbol --percent-limit=1\n  top SQL2 stacks:\n    sqlparser::tokenizer::Tokenizer::tokenize_quoted_string: 17.10% self\n    sqlparser parser/tokenizer helpers collectively appeared below that hotspot\n  non-SQL2 stacks:\n    _int_malloc: 7.35%, __memmove_avx_unaligned_erms: 6.55%, malloc: 2.31%\n  conclusion:\n    SQL2 write planning parsed the same INSERT text multiple times: once for Lix AST validation/history target extraction and again through DataFusion planning. 
Large literal INSERT statements spend significant time tokenizing quoted JSON strings, so the duplicate parse is a first-order logical planning bottleneck.\n\nChange:\n  create_write_logical_plan now parses once into DataFusion's Statement with the SQL session parser, validates supported Lix SQL against that AST, extracts read-only history DML targets from the same AST, and passes the same Statement to SessionState::statement_to_plan.\n  The cheap parse/validate/read-only phase now runs before write provider registration. The write session is built only after parse and policy checks succeed.\n  DML target extraction normalizes unquoted identifiers to lowercase while preserving quoted identifiers, matching DataFusion's identifier normalization rule.\n  Added coverage for read-only history DML through lowercase, uppercase, schema-qualified uppercase, and EXPLAIN-wrapped DELETE targets.\n  Read planning remains on the previous path, so this optimization is scoped to SQL2 write planning.\n\n  Best-practice references:\n    DataFusion exposes and uses the parse-once lower-level flow: sql_to_statement followed by statement_to_plan (artifact/datafusion/datafusion/core/src/execution/session_state.rs).\n    DataFusion normalizes unquoted identifiers before planning (artifact/datafusion/datafusion/sql/src/planner.rs and artifact/datafusion/datafusion/sql/src/utils.rs).\n    SpiceAI intercepts parsed statements before planning for DataFusion integration work (artifact/spiceai/crates/runtime/src/datafusion/planner/mod.rs).\n    Turso's standalone DB flow parses SQL into AST before translation/codegen (artifact/turso/docs/manual.md).\n\n  Semantic invariant preserved:\n    Statement support checks, history-view read-only enforcement, and DataFusion logical planning all inspect the same parsed statement. 
Unsupported DataFusion extension statements are still rejected before planning.\n\nResults:\n  Focused planning rows after review fixes:\n    optimization9_sql2/planning_only/lix_sqlite/insert_500_values/1k:\n      [7.5696 ms 7.7067 ms 7.9807 ms]\n      vs logged baseline [14.105 ms 14.283 ms], about 44-46% faster.\n    optimization9_sql2/planning_only/lix_sqlite/update_all_values/1k:\n      [6.5332 ms 6.6237 ms 6.7164 ms], neutral vs logged baseline [6.3326 ms 6.4489 ms].\n    optimization9_sql2/planning_only/lix_sqlite/delete_all_rows/1k:\n      [6.3737 ms 6.4816 ms 6.6179 ms], neutral vs logged baseline [6.2279 ms 7.0361 ms].\n\n  Smoke CRUD guardrail after review fixes:\n    Lix SQLite:\n      insert_all_rows: [59.787 ms 60.000 ms 60.251 ms], faster than baseline [70.105 ms 71.910 ms]\n      select_all_path_value: [16.936 ms 17.095 ms 17.266 ms], neutral/faster than baseline [17.530 ms 17.943 ms]\n      select_one_by_pk: [6.4369 ms 6.5101 ms 6.5946 ms], neutral/faster than baseline [6.6463 ms 6.9219 ms]\n      update_all_values: [33.796 ms 34.192 ms 34.606 ms], neutral/faster than baseline [34.429 ms 35.507 ms]\n      update_one_by_pk: [10.334 ms 10.408 ms 10.480 ms], neutral vs baseline [10.367 ms 10.581 ms]\n      delete_all_rows: [34.715 ms 34.957 ms 35.215 ms], neutral/faster than baseline [35.935 ms 36.724 ms]\n      delete_one_by_pk: [10.624 ms 10.686 ms 10.751 ms], neutral vs baseline [10.616 ms 10.778 ms]\n    Lix RocksDB:\n      insert_all_rows: [59.644 ms 60.006 ms 60.461 ms], faster than baseline [67.767 ms 68.316 ms]\n      select_all_path_value: [13.053 ms 13.142 ms 13.238 ms], neutral/faster than baseline [13.421 ms 13.936 ms]\n      select_one_by_pk: [2.9783 ms 2.9920 ms 3.0078 ms], neutral vs baseline [2.9247 ms 3.0022 ms]\n      update_all_values: [25.567 ms 25.748 ms 25.948 ms], neutral vs baseline [25.341 ms 25.724 ms]\n      update_one_by_pk: [6.3481 ms 6.4059 ms 6.4673 ms], neutral vs baseline [6.3116 ms 6.4393 ms]\n      delete_all_rows: 
[27.078 ms 27.294 ms 27.545 ms], neutral vs baseline [26.690 ms 27.071 ms]\n      delete_one_by_pk: [6.4115 ms 6.4388 ms 6.4659 ms], neutral/faster than baseline [6.4811 ms 6.6185 ms]\n\nPost-change profile:\n  command:\n    perf record --output=/tmp/sql2-insert-plan-after.perf.data -F 499 -g --call-graph dwarf target/release/deps/optimization9_sql2-bd3fa4efccf19070 --bench 'optimization9_sql2/planning_only/lix_sqlite/insert_500_values/1k' --profile-time 8\n    perf report --stdio --quiet --no-inline --input=/tmp/sql2-insert-plan-after.perf.data --no-call-graph --sort=symbol --percent-limit=1\n  result:\n    sqlparser::tokenizer::Tokenizer::tokenize_quoted_string dropped from 17.10% to 8.53% self. This profiler percentage is diagnostic evidence that the targeted duplicate-parse hot stack was reduced; it is not the keep threshold.\n    The keep threshold is benchmark speedup: insert_500_values planning improved by about 44-46%, and the corresponding SQLite smoke insert row improved by about 14-17%, both above the required >=10% speed improvement.\n    Remaining top entries are allocator/memory movement or broadly distributed DataFusion/schema work.\n\nReview:\n  First review reported no HIGH findings and two MEDIUM findings:\n    normalize unquoted DML target identifiers consistently with DataFusion;\n    parse/validate before write session/provider construction.\n  Both MEDIUM findings were implemented.\n  Second review reported no HIGH or MEDIUM findings.\n\nGuardrails:\n  Benchmark remains isolated to optimization9_sql2 fixture/schema files.\n  SQL2 and code-structure tests pass:\n    cargo test -p lix_engine execute_sql_rejects_writes_to_history_views_before_planning --features storage-benches\n    cargo test -p lix_engine sql2 --features storage-benches\n\nOutside-SQL2 follow-up:\n  SessionContext::execute still performs a separate pre-SQL2 classification parse in packages/engine/src/session/execute.rs before dispatching to create_write_logical_plan. 
This is outside the SQL2-only implementation scope and should be addressed separately if end-to-end parse elimination is desired.\n\nDecision:\n  Keep.\n\nCompletion audit:\n  Additional post-change logical-planning profiles were used as diagnostics after verifying the benchmark speedup. They check whether the optimization exposed another dominant planning stack, but the keep/revert decision remains based on >=10% benchmark speed improvement:\n    insert_500_values/1k:\n      sqlparser::tokenizer::Tokenizer::tokenize_quoted_string: 8.53%\n      _int_malloc: 8.01%\n    select_all_path_value/1k:\n      _int_malloc: 6.24%\n      malloc: 2.78%\n      DataFusion simplification symbols below 1%\n    select_one_by_pk/1k:\n      _int_malloc: 6.43%\n      malloc: 2.36%\n      DataFusion simplification symbols below 1%\n    delete_all_rows/1k:\n      _int_malloc: 7.38%\n      malloc: 2.30%\n      DataFusion simplification symbols below 1%\n\n  The previous insert-planning SQL tokenizer hot stack was reduced, and the benchmark speedup exceeds the required >=10% improvement. The remaining visible costs are allocator/general DataFusion work spread across the logical-planning profiles.\n"
  },
  {
    "path": "package.json",
    "content": "{\n  \"private\": true,\n  \"name\": \"monorepo\",\n  \"type\": \"module\",\n  \"scripts\": {\n    \"build\": \"pnpm exec nx run-many --nx-bail --target=build --parallel\",\n    \"bench:engine:baseline\": \"node packages/engine/scripts/log-bench-baseline.mjs\",\n    \"postinstall\": \"command -v cargo >/dev/null 2>&1 && cargo fetch || true\",\n    \"test\": \"pnpm test:js && pnpm test:rs\",\n    \"test:js\": \"pnpm exec nx run-many --target=test --parallel\",\n    \"test:rs\": \"cargo test --workspace\",\n    \"lint\": \"pnpm lint:js && pnpm lint:rs\",\n    \"lint:js\": \"pnpm exec nx run-many --target=lint --parallel\",\n    \"lint:rs\": \"cargo fmt --all --check && cargo clippy --workspace --all-targets\",\n    \"format\": \"pnpm exec nx run-many --target=format --parallel\",\n    \"clean\": \"pnpm recursive run clean && rm -rf ./.env ./node_modules\",\n    \"----- CI ---- used to test the codebase on every commit\": \"\",\n    \"ci\": \"pnpm lint && pnpm test && pnpm build\"\n  },\n  \"packageManager\": \"pnpm@10.23.0\",\n  \"engines\": {\n    \"node\": \">=22\",\n    \"pnpm\": \">=10 <11\"\n  },\n  \"devDependencies\": {\n    \"@changesets/cli\": \"^2.29.7\",\n    \"@vitest/coverage-v8\": \"^3.1.1\",\n    \"nx\": \"^21.0.0\",\n    \"nx-cloud\": \"^19.1.0\",\n    \"vitest\": \"^3.1.1\"\n  }\n}\n"
  },
  {
    "path": "packages/cli/Cargo.toml",
    "content": "[package]\nname = \"lix_cli\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n[[bin]]\nname = \"lix\"\npath = \"src/main.rs\"\n\n[dependencies]\nasync-trait = \"0.1\"\nclap = { version = \"4.5.31\", features = [\"derive\"] }\nlix_rs_sdk = { path = \"../rs-sdk\" }\nserde = { version = \"1\", features = [\"derive\"] }\nserde_json = \"1.0\"\npollster = \"0.4\"\ncomfy-table = \"7.1\"\nbase64 = \"0.22\"\nsha2 = \"0.10\"\ntokio = { version = \"1\", features = [\"rt\"] }\n"
  },
  {
    "path": "packages/cli/src/app/context.rs",
    "content": "use std::path::PathBuf;\n\n#[derive(Debug, Clone)]\npub struct AppContext {\n    pub lix_path: Option<PathBuf>,\n    pub no_hints: bool,\n}\n"
  },
  {
    "path": "packages/cli/src/app/mod.rs",
    "content": "mod context;\nmod run;\nmod welcome;\n\npub use context::AppContext;\npub use run::run;\n"
  },
  {
    "path": "packages/cli/src/app/run.rs",
    "content": "use super::context::AppContext;\nuse super::welcome;\nuse crate::cli::root::{Cli, Command};\nuse crate::commands;\nuse crate::error::CliError;\nuse crate::hints;\nuse clap::{CommandFactory, Parser};\nuse std::io::Write;\n\npub fn run() -> Result<(), CliError> {\n    let cli = Cli::parse();\n    let no_hints = cli.no_hints;\n    let lix_path = cli.path;\n\n    let command = match cli.command {\n        Some(command) => command,\n        None => {\n            welcome::print_banner(lix_path.as_deref());\n            Cli::command().print_help().ok();\n            println!();\n            return Ok(());\n        }\n    };\n\n    let context = AppContext { lix_path, no_hints };\n\n    let result = match command {\n        Command::Exp(exp_command) => commands::exp::run(&context, exp_command),\n        Command::Init(init_command) => commands::init::run(init_command),\n        Command::Redo(redo_command) => commands::redo::run(&context, redo_command),\n        Command::Sql(sql_command) => commands::sql::run(&context, sql_command),\n        Command::Undo(undo_command) => commands::undo::run(&context, undo_command),\n        Command::Version(version_command) => commands::version::run(&context, version_command),\n    };\n\n    match result {\n        Ok(output) => {\n            if !no_hints {\n                hints::render_hints(&output.hints);\n            }\n            Ok(())\n        }\n        Err(err) => {\n            let mut stderr = std::io::stderr().lock();\n            render_error_output(&err, no_hints, &mut stderr);\n            Err(err)\n        }\n    }\n}\n\n/// Render a `CliError` to the given writer: the error message on one line,\n/// followed by a `hint:` line when hints are enabled and a hint is attached.\n/// Factored out of [`run`] so the rendering path is unit-testable.\npub(crate) fn render_error_output<W: Write>(err: &CliError, no_hints: bool, out: &mut W) {\n    writeln!(out, \"{err}\").ok();\n    if !no_hints {\n        for hint 
in hints::hint_from_error(err) {\n            writeln!(out, \"hint: {hint}\").ok();\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use lix_rs_sdk::LixError;\n\n    fn rendered(err: &CliError, no_hints: bool) -> String {\n        let mut buf: Vec<u8> = Vec::new();\n        render_error_output(err, no_hints, &mut buf);\n        String::from_utf8(buf).expect(\"render output is valid utf-8\")\n    }\n\n    #[test]\n    fn renders_hint_line_when_error_carries_hint() {\n        let err = CliError::from_lix(\n            \"sql execution failed\",\n            LixError::new(\n                \"LIX_ERROR_UNSUPPORTED_WRITE_EXPRESSION\",\n                \"json(...) is not supported\",\n            )\n            .with_hint(\"use lix_json('...') instead\"),\n        );\n        let out = rendered(&err, false);\n        assert_eq!(\n            out,\n            \"sql execution failed: json(...) is not supported\\n\\\n             hint: use lix_json('...') instead\\n\"\n        );\n    }\n\n    #[test]\n    fn suppresses_hint_when_no_hints_is_set() {\n        let err = CliError::from_lix(\n            \"sql execution failed\",\n            LixError::new(\"CODE\", \"boom\").with_hint(\"try the fix\"),\n        );\n        let out = rendered(&err, true);\n        assert_eq!(out, \"sql execution failed: boom\\n\");\n    }\n\n    #[test]\n    fn omits_hint_line_when_error_has_no_hint() {\n        let err = CliError::from_lix(\"ctx\", LixError::new(\"CODE\", \"boom\"));\n        let out = rendered(&err, false);\n        assert_eq!(out, \"ctx: boom\\n\");\n    }\n\n    #[test]\n    fn omits_hint_line_for_non_lix_error_variants() {\n        let err = CliError::msg(\"plain message\");\n        let out = rendered(&err, false);\n        assert_eq!(out, \"plain message\\n\");\n    }\n}\n"
  },
  {
    "path": "packages/cli/src/app/welcome.rs",
    "content": "use std::io::IsTerminal;\nuse std::path::{Path, PathBuf};\n\nconst CYAN: &str = \"\\x1b[38;2;8;181;214m\";\nconst RESET: &str = \"\\x1b[0m\";\n\nconst LOGO: [&str; 6] = [\n    \"██╗     ██╗██╗  ██╗\",\n    \"██║     ██║╚██╗██╔╝\",\n    \"██║     ██║ ╚███╔╝ \",\n    \"██║     ██║ ██╔██╗ \",\n    \"███████╗██║██╔╝ ██╗\",\n    \"╚══════╝╚═╝╚═╝  ╚═╝\",\n];\n\nconst TAGLINE: &str = \"change control system for everything\";\n\npub fn print_banner(explicit_lix_path: Option<&Path>) {\n    let color = use_color();\n    let (cyan, reset) = if color { (CYAN, RESET) } else { (\"\", \"\") };\n\n    let version = env!(\"CARGO_PKG_VERSION\");\n    let info = [\n        String::new(),\n        format!(\"lix v{version}\"),\n        TAGLINE.to_string(),\n        current_dir_display(),\n        describe_lix_state(explicit_lix_path),\n        String::new(),\n    ];\n\n    println!();\n    for (logo_line, text) in LOGO.iter().zip(info.iter()) {\n        if text.is_empty() {\n            println!(\" {cyan}{logo_line}{reset}\");\n        } else {\n            println!(\" {cyan}{logo_line}{reset}       {text}\");\n        }\n    }\n    println!();\n}\n\nfn use_color() -> bool {\n    std::io::stdout().is_terminal() && std::env::var_os(\"NO_COLOR\").is_none()\n}\n\nfn current_dir_display() -> String {\n    let cwd = match std::env::current_dir() {\n        Ok(path) => path,\n        Err(_) => return String::new(),\n    };\n    if let Some(home) = std::env::var_os(\"HOME\") {\n        let home = PathBuf::from(home);\n        if let Ok(relative) = cwd.strip_prefix(&home) {\n            let rel = relative.display().to_string();\n            return if rel.is_empty() {\n                \"~\".to_string()\n            } else {\n                format!(\"~/{rel}\")\n            };\n        }\n    }\n    cwd.display().to_string()\n}\n\nfn describe_lix_state(explicit: Option<&Path>) -> String {\n    if let Some(path) = explicit {\n        return format!(\"using {}\", 
path.display());\n    }\n    let cwd = match std::env::current_dir() {\n        Ok(path) => path,\n        Err(_) => return String::new(),\n    };\n    let mut lix_files: Vec<PathBuf> = Vec::new();\n    if let Ok(entries) = std::fs::read_dir(&cwd) {\n        for entry in entries.flatten() {\n            let path = entry.path();\n            if path.is_file() && path.extension().and_then(|ext| ext.to_str()) == Some(\"lix\") {\n                lix_files.push(path);\n            }\n        }\n    }\n    match lix_files.len() {\n        0 => \"no .lix file detected · run `lix init <path>`\".to_string(),\n        1 => {\n            let name = lix_files[0]\n                .file_name()\n                .map(|n| n.to_string_lossy().into_owned())\n                .unwrap_or_default();\n            format!(\"detected {name}\")\n        }\n        n => format!(\"{n} .lix files · pass --path <path>\"),\n    }\n}\n"
  },
  {
    "path": "packages/cli/src/cli/exp.rs",
    "content": "use clap::{value_parser, Args, Subcommand, ValueHint};\nuse std::path::PathBuf;\n\n#[derive(Debug, Args)]\npub struct ExpCommand {\n    #[command(subcommand)]\n    pub command: ExpSubcommand,\n}\n\n#[derive(Debug, Subcommand)]\npub enum ExpSubcommand {\n    /// Replay git history into a Lix artifact.\n    GitReplay(ExpGitReplayArgs),\n}\n\n#[derive(Debug, Args)]\npub struct ExpGitReplayArgs {\n    /// Path to the git repository to replay.\n    #[arg(long, value_hint = ValueHint::DirPath)]\n    pub repo_path: PathBuf,\n\n    /// Output .lix path.\n    #[arg(long, value_hint = ValueHint::FilePath)]\n    pub output_lix_path: PathBuf,\n\n    /// Branch/ref to replay from (use '*' to replay commits reachable from all refs).\n    #[arg(long, default_value = \"main\")]\n    pub branch: String,\n\n    /// Start replay from this commit (inclusive).\n    #[arg(long)]\n    pub from_commit: Option<String>,\n\n    /// Maximum number of commits to replay (after applying --from-commit, if set).\n    #[arg(long, value_parser = value_parser!(u32).range(1..))]\n    pub num_commits: Option<u32>,\n\n    /// Verify file paths and payload hashes after each replayed commit.\n    #[arg(long, default_value_t = false)]\n    pub verify_state: bool,\n\n    /// Overwrite output files if they already exist.\n    #[arg(long, default_value_t = false)]\n    pub force: bool,\n\n    /// Write per-commit replay profiling data as JSON.\n    #[arg(long, value_hint = ValueHint::FilePath)]\n    pub profile_json: Option<PathBuf>,\n\n    /// Write backend SQL tracing data as JSON.\n    #[arg(long, value_hint = ValueHint::FilePath)]\n    pub trace_sql_json: Option<PathBuf>,\n\n    /// Trace only the replayed commit matching this full SHA or unique SHA prefix.\n    #[arg(long)]\n    pub trace_commit: Option<String>,\n}\n"
  },
  {
    "path": "packages/cli/src/cli/init.rs",
    "content": "use clap::{Args, ValueHint};\nuse std::path::PathBuf;\n\n#[derive(Debug, Args)]\npub struct InitCommand {\n    /// Path to the .lix file to initialize.\n    #[arg(value_hint = ValueHint::FilePath)]\n    pub path: PathBuf,\n}\n"
  },
  {
    "path": "packages/cli/src/cli/mod.rs",
    "content": "pub mod exp;\npub mod init;\npub mod redo;\npub mod root;\npub mod sql;\npub mod undo;\npub mod version;\n"
  },
  {
    "path": "packages/cli/src/cli/redo.rs",
    "content": "use clap::Args;\n\n#[derive(Debug, Args)]\npub struct RedoCommand {\n    /// Override the target version. Accepts a `lix_version.id` (the active\n    /// `version_id`), not the `lix_active_version.id` row key.\n    #[arg(long)]\n    pub version: Option<String>,\n}\n"
  },
  {
    "path": "packages/cli/src/cli/root.rs",
    "content": "use super::exp::ExpCommand;\nuse super::init::InitCommand;\nuse super::redo::RedoCommand;\nuse super::sql::SqlCommand;\nuse super::undo::UndoCommand;\nuse super::version::VersionCommand;\nuse clap::{Parser, Subcommand, ValueHint};\nuse std::path::PathBuf;\n\n#[derive(Debug, Parser)]\n#[command(name = \"lix\")]\n#[command(about = \"Lix command line interface\")]\npub struct Cli {\n    /// Path to the .lix file (required when multiple .lix files exist).\n    #[arg(long, global = true, value_hint = ValueHint::FilePath)]\n    pub path: Option<PathBuf>,\n\n    /// Disable contextual hints that guide you on what to do next. Keep hints\n    /// enabled until you understand how lix works. AI agents and LLMs should\n    /// not use this flag.\n    #[arg(long, global = true)]\n    pub no_hints: bool,\n\n    #[command(subcommand)]\n    pub command: Option<Command>,\n}\n\n#[derive(Debug, Subcommand)]\npub enum Command {\n    /// Experimental commands for benchmarking and diagnostics.\n    Exp(ExpCommand),\n    /// Initialize a lix at the provided path.\n    Init(InitCommand),\n    /// Reapply the most recently undone committed change unit.\n    Redo(RedoCommand),\n    /// Execute raw SQL against a lix.\n    Sql(SqlCommand),\n    /// Undo the most recent committed change unit.\n    Undo(UndoCommand),\n    /// Version operations such as merging branches.\n    Version(VersionCommand),\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{Cli, Command};\n    use crate::cli::sql::SqlSubcommand;\n    use crate::cli::version::VersionSubcommand;\n    use clap::Parser;\n    use std::path::PathBuf;\n\n    #[test]\n    fn parses_init_command_path_argument() {\n        let cli = Cli::try_parse_from([\"lix\", \"init\", \"tmp/new.lix\"]).expect(\"parse succeeds\");\n        match cli.command {\n            Some(Command::Init(init)) => assert_eq!(init.path, PathBuf::from(\"tmp/new.lix\")),\n            _ => panic!(\"expected init command\"),\n        }\n    }\n\n    #[test]\n    
fn parses_sql_execute_params_json_flag() {\n        let cli = Cli::try_parse_from([\n            \"lix\",\n            \"sql\",\n            \"execute\",\n            \"--params\",\n            \"[\\\"first\\\", \\\"second\\\"]\",\n            \"SELECT ?1, ?2\",\n        ])\n        .expect(\"parse succeeds\");\n\n        match cli.command {\n            Some(Command::Sql(sql)) => match sql.command {\n                SqlSubcommand::Execute(args) => {\n                    assert_eq!(args.params, Some(\"[\\\"first\\\", \\\"second\\\"]\".to_string()));\n                    assert_eq!(args.sql, \"SELECT ?1, ?2\");\n                }\n            },\n            _ => panic!(\"expected sql command\"),\n        }\n    }\n\n    #[test]\n    fn parses_undo_command_version_flag() {\n        let cli =\n            Cli::try_parse_from([\"lix\", \"undo\", \"--version\", \"branch-1\"]).expect(\"parse succeeds\");\n        match cli.command {\n            Some(Command::Undo(command)) => {\n                assert_eq!(command.version.as_deref(), Some(\"branch-1\"))\n            }\n            _ => panic!(\"expected undo command\"),\n        }\n    }\n\n    #[test]\n    fn parses_redo_command_without_version() {\n        let cli = Cli::try_parse_from([\"lix\", \"redo\"]).expect(\"parse succeeds\");\n        match cli.command {\n            Some(Command::Redo(command)) => assert_eq!(command.version, None),\n            _ => panic!(\"expected redo command\"),\n        }\n    }\n\n    #[test]\n    fn parses_version_merge_command() {\n        let cli = Cli::try_parse_from([\n            \"lix\",\n            \"version\",\n            \"merge\",\n            \"--source-name\",\n            \"draft-a\",\n            \"--target-id\",\n            \"main\",\n        ])\n        .expect(\"parse succeeds\");\n        match cli.command {\n            Some(Command::Version(command)) => match command.command {\n                VersionSubcommand::Merge(args) => {\n                    
assert_eq!(args.source_name.as_deref(), Some(\"draft-a\"));\n                    assert_eq!(args.target_id.as_deref(), Some(\"main\"));\n                }\n                _ => panic!(\"expected version merge command\"),\n            },\n            _ => panic!(\"expected version command\"),\n        }\n    }\n\n    #[test]\n    fn parses_version_create_command() {\n        let cli = Cli::try_parse_from([\n            \"lix\",\n            \"version\",\n            \"create\",\n            \"--id\",\n            \"branch-a\",\n            \"--name\",\n            \"Branch A\",\n            \"--from-name\",\n            \"main\",\n            \"--hidden\",\n        ])\n        .expect(\"parse succeeds\");\n        match cli.command {\n            Some(Command::Version(command)) => match command.command {\n                VersionSubcommand::Create(args) => {\n                    assert_eq!(args.id.as_deref(), Some(\"branch-a\"));\n                    assert_eq!(args.name.as_deref(), Some(\"Branch A\"));\n                    assert_eq!(args.from_name.as_deref(), Some(\"main\"));\n                    assert!(args.hidden);\n                }\n                _ => panic!(\"expected version create command\"),\n            },\n            _ => panic!(\"expected version command\"),\n        }\n    }\n\n    #[test]\n    fn parses_version_switch_command() {\n        let cli = Cli::try_parse_from([\"lix\", \"version\", \"switch\", \"--name\", \"branch-a\"])\n            .expect(\"parse succeeds\");\n        match cli.command {\n            Some(Command::Version(command)) => match command.command {\n                VersionSubcommand::Switch(args) => {\n                    assert_eq!(args.name.as_deref(), Some(\"branch-a\"));\n                }\n                _ => panic!(\"expected version switch command\"),\n            },\n            _ => panic!(\"expected version command\"),\n        }\n    }\n\n    #[test]\n    fn rejects_version_switch_without_reference_flag() {\n        
let error =\n            Cli::try_parse_from([\"lix\", \"version\", \"switch\"]).expect_err(\"parse should fail\");\n        let message = error.to_string();\n        assert!(message.contains(\"--id\"));\n        assert!(message.contains(\"--name\"));\n    }\n}\n"
  },
  {
    "path": "packages/cli/src/cli/sql.rs",
    "content": "use clap::{Args, Subcommand, ValueEnum};\n\n#[derive(Debug, Args)]\npub struct SqlCommand {\n    #[command(subcommand)]\n    pub command: SqlSubcommand,\n}\n\n#[derive(Debug, Subcommand)]\npub enum SqlSubcommand {\n    /// Execute SQL text. Use '-' to read SQL from stdin.\n    #[command(after_long_help = \"\\\nExamples:\n  lix sql execute \\\"INSERT INTO lix_file (path, data) VALUES ('/hello.md', lix_text_encode('# Hello'))\\\"\n  lix sql execute \\\"SELECT path, lix_text_decode(data) FROM lix_file\\\"\n  lix sql execute \\\"SELECT path, lixcol_depth FROM lix_file_history\\\"\")]\n    Execute(SqlExecuteArgs),\n}\n\n#[derive(Clone, Copy, Debug, Eq, PartialEq, ValueEnum)]\npub enum SqlOutputFormat {\n    Table,\n    Json,\n}\n\n#[derive(Debug, Args)]\npub struct SqlExecuteArgs {\n    /// Output format for query results.\n    #[arg(long, value_enum, default_value_t = SqlOutputFormat::Table)]\n    pub format: SqlOutputFormat,\n\n    /// Bind positional SQL parameters from a JSON array.\n    ///\n    /// Use inline JSON (`--params '[1,true,null,\\\"text\\\"]'`) or `-` to read JSON from stdin.\n    /// Supported values: null, booleans, numbers, strings, and blobs via {\"$blob\":\"<base64>\"}.\n    #[arg(long = \"params\")]\n    pub params: Option<String>,\n\n    /// SQL query text to execute. Use '-' to read from stdin.\n    pub sql: String,\n}\n"
  },
  {
    "path": "packages/cli/src/cli/undo.rs",
    "content": "use clap::Args;\n\n#[derive(Debug, Args)]\npub struct UndoCommand {\n    /// Override the target version. Accepts a `lix_version.id` (the active\n    /// `version_id`), not the `lix_active_version.id` row key.\n    #[arg(long)]\n    pub version: Option<String>,\n}\n"
  },
  {
    "path": "packages/cli/src/cli/version.rs",
    "content": "use clap::{Args, Subcommand};\n\n#[derive(Debug, Args)]\npub struct VersionCommand {\n    #[command(subcommand)]\n    pub command: VersionSubcommand,\n}\n\n#[derive(Debug, Subcommand)]\npub enum VersionSubcommand {\n    /// Create a new version from the current active version head.\n    Create(CreateVersionCommand),\n    /// Merge one version into another.\n    Merge(MergeVersionCommand),\n    /// Switch the active version.\n    Switch(SwitchVersionCommand),\n}\n\n#[derive(Debug, Args)]\npub struct CreateVersionCommand {\n    /// Explicit version id. If omitted, Lix generates one.\n    #[arg(long)]\n    pub id: Option<String>,\n\n    /// Human-readable version name. Defaults to the id.\n    #[arg(long)]\n    pub name: Option<String>,\n\n    /// Source version id to branch from. Defaults to the active version.\n    #[arg(long, conflicts_with = \"from_name\")]\n    pub from_id: Option<String>,\n\n    /// Source version name to branch from. Defaults to the active version.\n    #[arg(long, conflicts_with = \"from_id\")]\n    pub from_name: Option<String>,\n\n    /// Hide the version from default listings.\n    #[arg(long, default_value_t = false)]\n    pub hidden: bool,\n}\n\n#[derive(Debug, Args)]\npub struct MergeVersionCommand {\n    /// Source version id to merge from.\n    #[arg(\n        long,\n        conflicts_with = \"source_name\",\n        required_unless_present = \"source_name\"\n    )]\n    pub source_id: Option<String>,\n\n    /// Source version name to merge from.\n    #[arg(\n        long,\n        conflicts_with = \"source_id\",\n        required_unless_present = \"source_id\"\n    )]\n    pub source_name: Option<String>,\n\n    /// Target version id to merge into.\n    #[arg(\n        long,\n        conflicts_with = \"target_name\",\n        required_unless_present = \"target_name\"\n    )]\n    pub target_id: Option<String>,\n\n    /// Target version name to merge into.\n    #[arg(\n        long,\n        conflicts_with = 
\"target_id\",\n        required_unless_present = \"target_id\"\n    )]\n    pub target_name: Option<String>,\n}\n\n#[derive(Debug, Args)]\npub struct SwitchVersionCommand {\n    /// Version id to make active.\n    #[arg(long, conflicts_with = \"name\", required_unless_present = \"name\")]\n    pub id: Option<String>,\n\n    /// Version name to make active.\n    #[arg(long, conflicts_with = \"id\", required_unless_present = \"id\")]\n    pub name: Option<String>,\n}\n"
  },
  {
    "path": "packages/cli/src/commands/exp/git_replay.rs",
    "content": "use crate::cli::exp::ExpGitReplayArgs;\nuse crate::db;\nuse crate::error::CliError;\nuse lix_rs_sdk::{Lix, Value};\nuse serde::Serialize;\nuse sha2::{Digest, Sha256};\nuse std::collections::{BTreeMap, BTreeSet, HashMap, HashSet};\nuse std::fs;\nuse std::io::Write;\nuse std::path::{Path, PathBuf};\nuse std::process::{Command, Stdio};\nuse std::time::{Duration, Instant};\n\nconst NULL_OID: &str = \"0000000000000000000000000000000000000000\";\nconst PROGRESS_EVERY: usize = 10;\nconst DEFAULT_INSERT_BATCH_ROWS: usize = 100;\n\n#[derive(Debug, Clone)]\nstruct Change {\n    status: char,\n    old_mode: String,\n    new_mode: String,\n    new_oid: String,\n    old_path: Option<String>,\n    new_path: Option<String>,\n}\n\nimpl Change {\n    fn new_is_blob(&self) -> bool {\n        mode_is_blob(&self.new_mode)\n    }\n}\n\n#[derive(Debug)]\nstruct PatchSet {\n    changes: Vec<Change>,\n    blob_by_oid: HashMap<String, Vec<u8>>,\n}\n\n#[derive(Default)]\nstruct ReplayState {\n    path_to_file_id: HashMap<String, String>,\n    known_file_ids: HashSet<String>,\n}\n\n#[derive(Debug, Clone)]\nstruct WriteRow {\n    id: String,\n    path: String,\n    data: Vec<u8>,\n}\n\n#[derive(Debug)]\nstruct PreparedBatch {\n    deletes: Vec<String>,\n    inserts: Vec<WriteRow>,\n    updates: Vec<WriteRow>,\n}\n\n#[derive(Debug)]\nstruct SqlStatement {\n    sql: String,\n    params: Vec<Value>,\n}\n\n#[derive(Debug, Clone)]\nstruct ExpectedFile {\n    path: String,\n    sha256: String,\n}\n\n#[derive(Debug, Default, Serialize)]\nstruct ReplayProfilePhaseTotals {\n    read_patch_ms: f64,\n    prepare_ms: f64,\n    build_sql_ms: f64,\n    execute_ms: f64,\n    verify_ms: f64,\n    total_ms: f64,\n}\n\n#[derive(Debug, Serialize)]\nstruct ReplayCommitProfile {\n    commit_sha: String,\n    changed_paths: usize,\n    inserts: usize,\n    updates: usize,\n    deletes: usize,\n    statement_count: usize,\n    sql_chars: usize,\n    blob_bytes: usize,\n    noop: bool,\n    
read_patch_ms: f64,\n    prepare_ms: f64,\n    build_sql_ms: f64,\n    execute_ms: f64,\n    verify_ms: Option<f64>,\n    total_ms: f64,\n}\n\n#[derive(Debug, Serialize)]\nstruct ReplayProfileReport {\n    repo_path: String,\n    output_lix_path: String,\n    branch: String,\n    from_commit: Option<String>,\n    num_commits_requested: Option<u32>,\n    verify_state: bool,\n    commits_replayed: usize,\n    commits_applied: usize,\n    commits_noop: usize,\n    changed_paths_total: usize,\n    phase_totals: ReplayProfilePhaseTotals,\n    commits: Vec<ReplayCommitProfile>,\n}\n\n#[derive(Debug, Clone)]\nstruct SqlTraceCommitTarget {\n    commit_sha: String,\n}\n\n#[derive(Debug, Serialize)]\nstruct ReplaySqlTraceReport {\n    repo_path: String,\n    output_lix_path: String,\n    branch: String,\n    from_commit: Option<String>,\n    num_commits_requested: Option<u32>,\n    traced_commit: Option<String>,\n    commits: Vec<ReplaySqlTraceCommit>,\n}\n\n#[derive(Debug, Serialize)]\nstruct ReplaySqlTraceCommit {\n    commit_sha: String,\n    changed_paths: usize,\n    inserts: usize,\n    updates: usize,\n    deletes: usize,\n    statement_count: usize,\n    outer_execute_ms: f64,\n    operations: Vec<ReplaySqlTraceOperation>,\n}\n\n#[derive(Debug, Serialize)]\nstruct ReplaySqlTraceOperation {\n    sequence: u64,\n    kind: &'static str,\n    sql: Option<String>,\n    sql_chars: usize,\n    params_count: usize,\n    blob_params: usize,\n    blob_param_bytes: usize,\n    row_count: Option<usize>,\n    column_count: Option<usize>,\n    duration_ms: f64,\n    error: Option<String>,\n}\n\npub fn run(args: ExpGitReplayArgs) -> Result<(), CliError> {\n    let repo_path = absolutize_from_cwd(&args.repo_path)?;\n    validate_repo_dir(&repo_path)?;\n    validate_git_repo(&repo_path)?;\n    let output_lix_path = absolutize_from_cwd(&args.output_lix_path)?;\n    db::prepare_lix_output_path(&output_lix_path, args.force)?;\n    let profile_json_path = args\n        .profile_json\n    
    .as_ref()\n        .map(|path| absolutize_from_cwd(path))\n        .transpose()?;\n    if let Some(path) = &profile_json_path {\n        prepare_regular_output_path(path, args.force)?;\n    }\n    let trace_sql_json_path = args\n        .trace_sql_json\n        .as_ref()\n        .map(|path| absolutize_from_cwd(path))\n        .transpose()?;\n    if let Some(path) = &trace_sql_json_path {\n        prepare_regular_output_path(path, args.force)?;\n    }\n    if trace_sql_json_path.is_some() {\n        return Err(CliError::msg(\n            \"--trace-sql-json is not available with the current rs-sdk backend API\",\n        ));\n    }\n    if args.trace_commit.is_some() && trace_sql_json_path.is_none() {\n        return Err(CliError::InvalidArgs(\n            \"--trace-commit requires --trace-sql-json\",\n        ));\n    }\n    let replay_ref = normalize_replay_ref(&args.branch)?;\n    let from_commit = args\n        .from_commit\n        .as_deref()\n        .map(|raw| resolve_commit_oid(&repo_path, raw))\n        .transpose()?;\n    let commits = list_linear_commits(\n        &repo_path,\n        &replay_ref,\n        from_commit.as_deref(),\n        args.num_commits,\n    )?;\n\n    if commits.is_empty() {\n        return Err(CliError::msg(format!(\n            \"no commits found in {} for ref '{}'\",\n            repo_path.display(),\n            args.branch\n        )));\n    }\n\n    let trace_commit_target = resolve_trace_commit_target(&commits, args.trace_commit.as_deref())?;\n    let lix = init_and_open_lix_at_path(&output_lix_path)?;\n\n    let mut state = ReplayState::default();\n    let mut expected_state_by_id = HashMap::<String, ExpectedFile>::new();\n    let mut applied = 0usize;\n    let mut noop = 0usize;\n    let mut changed_paths = 0usize;\n    let mut verified = 0usize;\n    let mut phase_totals = ReplayProfilePhaseTotals::default();\n    let mut commit_profiles = Vec::<ReplayCommitProfile>::with_capacity(commits.len());\n    let mut 
sql_trace_commits = Vec::<ReplaySqlTraceCommit>::new();\n\n    println!(\n        \"[git-replay] replaying {} commits from {}\",\n        commits.len(),\n        repo_path.display()\n    );\n\n    for (index, commit_sha) in commits.iter().enumerate() {\n        let commit_started = Instant::now();\n\n        let read_patch_started = Instant::now();\n        let patch_set = read_commit_patch_set(&repo_path, commit_sha)?;\n        let read_patch_ms = duration_to_ms(read_patch_started.elapsed());\n        phase_totals.read_patch_ms += read_patch_ms;\n        changed_paths += patch_set.changes.len();\n\n        let prepare_started = Instant::now();\n        let prepared =\n            prepare_commit_changes(&mut state, &patch_set.changes, &patch_set.blob_by_oid)?;\n        let prepare_ms = duration_to_ms(prepare_started.elapsed());\n        phase_totals.prepare_ms += prepare_ms;\n\n        let build_sql_started = Instant::now();\n        let statements = build_replay_commit_statements(&prepared, DEFAULT_INSERT_BATCH_ROWS);\n        let build_sql_ms = duration_to_ms(build_sql_started.elapsed());\n        phase_totals.build_sql_ms += build_sql_ms;\n\n        let statement_count = statements.len();\n        let sql_chars = total_statement_sql_chars(&statements);\n        let blob_bytes = prepared_blob_bytes(&prepared);\n        let inserts = prepared.inserts.len();\n        let updates = prepared.updates.len();\n        let deletes = prepared.deletes.len();\n        let mut execute_ms = 0.0f64;\n        let mut verify_ms = None;\n\n        if statements.is_empty() {\n            noop += 1;\n        } else {\n            let should_trace_commit = should_trace_commit(commit_sha, trace_commit_target.as_ref());\n            let execute_started = Instant::now();\n            execute_statements_as_transaction(&lix, &statements, commit_sha)?;\n            execute_ms = duration_to_ms(execute_started.elapsed());\n            phase_totals.execute_ms += execute_ms;\n            if 
should_trace_commit {\n                sql_trace_commits.push(ReplaySqlTraceCommit {\n                    commit_sha: commit_sha.clone(),\n                    changed_paths: patch_set.changes.len(),\n                    inserts,\n                    updates,\n                    deletes,\n                    statement_count,\n                    outer_execute_ms: execute_ms,\n                    operations: Vec::new(),\n                });\n            }\n            applied += 1;\n        }\n\n        if args.verify_state {\n            let verify_started = Instant::now();\n            apply_prepared_to_expected_state(&mut expected_state_by_id, &prepared);\n            verify_commit_state_hashes(&lix, &expected_state_by_id, commit_sha)?;\n            let verify_elapsed_ms = duration_to_ms(verify_started.elapsed());\n            phase_totals.verify_ms += verify_elapsed_ms;\n            verify_ms = Some(verify_elapsed_ms);\n            verified += 1;\n        }\n\n        let total_ms = duration_to_ms(commit_started.elapsed());\n        phase_totals.total_ms += total_ms;\n        commit_profiles.push(ReplayCommitProfile {\n            commit_sha: commit_sha.clone(),\n            changed_paths: patch_set.changes.len(),\n            inserts,\n            updates,\n            deletes,\n            statement_count,\n            sql_chars,\n            blob_bytes,\n            noop: statements.is_empty(),\n            read_patch_ms,\n            prepare_ms,\n            build_sql_ms,\n            execute_ms,\n            verify_ms,\n            total_ms,\n        });\n\n        if index == 0 || (index + 1) % PROGRESS_EVERY == 0 || index + 1 == commits.len() {\n            println!(\n                \"[git-replay] {}/{} commits (applied={}, noop={}, changedPaths={})\",\n                index + 1,\n                commits.len(),\n                applied,\n                noop,\n                changed_paths\n            );\n        }\n    }\n\n    println!(\"[git-replay] 
done\");\n    println!(\"[git-replay] ref: {}\", args.branch);\n    println!(\"[git-replay] output: {}\", output_lix_path.display());\n    println!(\"[git-replay] commits replayed: {}\", commits.len());\n    println!(\"[git-replay] commits applied: {}\", applied);\n    println!(\"[git-replay] commits noop: {}\", noop);\n    println!(\"[git-replay] changed paths total: {}\", changed_paths);\n    if args.verify_state {\n        println!(\n            \"[git-replay] verified commits: {verified}/{}\",\n            commits.len()\n        );\n    }\n    if let Some(profile_path) = &profile_json_path {\n        write_profile_report(\n            profile_path,\n            ReplayProfileReport {\n                repo_path: repo_path.display().to_string(),\n                output_lix_path: output_lix_path.display().to_string(),\n                branch: args.branch.clone(),\n                from_commit: args.from_commit.clone(),\n                num_commits_requested: args.num_commits,\n                verify_state: args.verify_state,\n                commits_replayed: commits.len(),\n                commits_applied: applied,\n                commits_noop: noop,\n                changed_paths_total: changed_paths,\n                phase_totals,\n                commits: commit_profiles,\n            },\n        )?;\n        println!(\"[git-replay] profile json: {}\", profile_path.display());\n    }\n    if let Some(trace_path) = &trace_sql_json_path {\n        write_sql_trace_report(\n            trace_path,\n            ReplaySqlTraceReport {\n                repo_path: repo_path.display().to_string(),\n                output_lix_path: output_lix_path.display().to_string(),\n                branch: args.branch.clone(),\n                from_commit: args.from_commit.clone(),\n                num_commits_requested: args.num_commits,\n                traced_commit: trace_commit_target.map(|target| target.commit_sha),\n                commits: sql_trace_commits,\n            
},\n        )?;\n        println!(\"[git-replay] sql trace json: {}\", trace_path.display());\n    }\n\n    Ok(())\n}\n\nfn init_and_open_lix_at_path(path: &Path) -> Result<Lix, CliError> {\n    db::init_lix_at(path)?;\n    let lix = db::open_lix_at(path)?;\n    crate::db::block_on(lix.execute(\n        \"INSERT INTO lix_key_value (key, value) VALUES ('lix_deterministic_mode', '{\\\"enabled\\\":true}')\",\n        &[],\n    ))\n    .map_err(|err| CliError::msg(format!(\"failed to enable deterministic mode: {err}\")))?;\n    Ok(lix)\n}\n\nfn execute_statements_as_transaction(\n    lix: &Lix,\n    statements: &[SqlStatement],\n    commit_sha: &str,\n) -> Result<(), CliError> {\n    let script = build_transaction_script(statements);\n    let params = statements\n        .iter()\n        .flat_map(|statement| statement.params.iter().cloned())\n        .collect::<Vec<_>>();\n\n    crate::db::block_on(lix.execute(&script, &params)).map_err(|error| {\n        let sql_preview = script.chars().take(160).collect::<String>();\n        CliError::msg(format!(\n            \"failed at commit {commit_sha} while executing replay SQL '{sql_preview}': {error}\"\n        ))\n    })?;\n\n    Ok(())\n}\n\nfn build_transaction_script(statements: &[SqlStatement]) -> String {\n    let mut script = String::from(\"BEGIN;\");\n    let mut next_param_index = 1usize;\n\n    for statement in statements {\n        script.push(' ');\n        script.push_str(&number_sql_parameters(\n            &statement.sql,\n            &mut next_param_index,\n        ));\n        script.push(';');\n    }\n\n    script.push_str(\" COMMIT;\");\n    script\n}\n\nfn number_sql_parameters(sql: &str, next_param_index: &mut usize) -> String {\n    let mut numbered = String::with_capacity(sql.len() + 16);\n    for ch in sql.chars() {\n        if ch == '?' 
{\n            numbered.push('?');\n            numbered.push_str(&next_param_index.to_string());\n            *next_param_index += 1;\n        } else {\n            numbered.push(ch);\n        }\n    }\n    numbered\n}\n\nfn prepared_blob_bytes(prepared: &PreparedBatch) -> usize {\n    prepared\n        .inserts\n        .iter()\n        .chain(prepared.updates.iter())\n        .map(|row| row.data.len())\n        .sum()\n}\n\nfn total_statement_sql_chars(statements: &[SqlStatement]) -> usize {\n    statements.iter().map(|statement| statement.sql.len()).sum()\n}\n\nfn duration_to_ms(duration: Duration) -> f64 {\n    duration.as_secs_f64() * 1000.0\n}\n\nfn write_profile_report(path: &Path, report: ReplayProfileReport) -> Result<(), CliError> {\n    let mut bytes = serde_json::to_vec_pretty(&report).map_err(|error| {\n        CliError::msg(format!(\n            \"failed to serialize replay profile report: {error}\"\n        ))\n    })?;\n    bytes.push(b'\\n');\n    fs::write(path, bytes).map_err(|source| CliError::io(\"failed to write profile json\", source))\n}\n\nfn write_sql_trace_report(path: &Path, report: ReplaySqlTraceReport) -> Result<(), CliError> {\n    let mut bytes = serde_json::to_vec_pretty(&report).map_err(|error| {\n        CliError::msg(format!(\n            \"failed to serialize replay sql trace report: {error}\"\n        ))\n    })?;\n    bytes.push(b'\\n');\n    fs::write(path, bytes).map_err(|source| CliError::io(\"failed to write sql trace json\", source))\n}\n\nfn list_linear_commits(\n    repo_path: &Path,\n    replay_ref: &str,\n    from_commit: Option<&str>,\n    limit: Option<u32>,\n) -> Result<Vec<String>, CliError> {\n    let mut args = vec![\n        \"rev-list\".to_string(),\n        \"--reverse\".to_string(),\n        \"--first-parent\".to_string(),\n    ];\n    if replay_ref == \"--all\" {\n        args.push(\"--all\".to_string());\n    } else {\n        args.push(replay_ref.to_string());\n    }\n\n    let output = 
run_git_text(repo_path, &args, None)?;\n    let commits = output\n        .lines()\n        .map(str::trim)\n        .filter(|line| !line.is_empty())\n        .map(ToOwned::to_owned)\n        .collect::<Vec<_>>();\n    select_replay_commits(commits, from_commit, limit)\n}\n\nfn resolve_trace_commit_target(\n    commits: &[String],\n    raw: Option<&str>,\n) -> Result<Option<SqlTraceCommitTarget>, CliError> {\n    let Some(raw) = raw else {\n        return Ok(None);\n    };\n    let needle = raw.trim();\n    if needle.is_empty() {\n        return Err(CliError::InvalidArgs(\"trace_commit must not be empty\"));\n    }\n\n    let matches = commits\n        .iter()\n        .filter(|commit| commit == &needle || commit.starts_with(needle))\n        .cloned()\n        .collect::<Vec<_>>();\n    match matches.len() {\n        0 => Err(CliError::msg(format!(\n            \"--trace-commit {} did not match any replayed commit\",\n            raw\n        ))),\n        1 => Ok(Some(SqlTraceCommitTarget {\n            commit_sha: matches.into_iter().next().expect(\"exactly one trace match\"),\n        })),\n        _ => Err(CliError::msg(format!(\n            \"--trace-commit {} matched multiple replayed commits; provide a longer prefix\",\n            raw\n        ))),\n    }\n}\n\nfn should_trace_commit(commit_sha: &str, target: Option<&SqlTraceCommitTarget>) -> bool {\n    match target {\n        Some(target) => target.commit_sha == commit_sha,\n        None => true,\n    }\n}\n\nfn select_replay_commits(\n    mut commits: Vec<String>,\n    from_commit: Option<&str>,\n    limit: Option<u32>,\n) -> Result<Vec<String>, CliError> {\n    if let Some(from_commit) = from_commit {\n        let from_index = commits\n            .iter()\n            .position(|commit| commit == from_commit)\n            .ok_or_else(|| {\n                CliError::msg(format!(\n                    \"--from-commit {} is not reachable from selected ref\",\n                    from_commit\n               
 ))\n            })?;\n        commits = commits.split_off(from_index);\n    }\n\n    if let Some(limit) = limit {\n        commits.truncate(limit as usize);\n    }\n\n    Ok(commits)\n}\n\nfn resolve_commit_oid(repo_path: &Path, raw: &str) -> Result<String, CliError> {\n    let trimmed = raw.trim();\n    if trimmed.is_empty() {\n        return Err(CliError::InvalidArgs(\"from_commit must not be empty\"));\n    }\n\n    let args = vec![\n        \"rev-parse\".to_string(),\n        \"--verify\".to_string(),\n        format!(\"{trimmed}^{{commit}}\"),\n    ];\n    let output = run_git_text(repo_path, &args, None).map_err(|error| {\n        CliError::msg(format!(\n            \"failed to resolve --from-commit {}: {}\",\n            raw, error\n        ))\n    })?;\n    let oid = output.trim();\n    if oid.is_empty() {\n        return Err(CliError::msg(format!(\n            \"failed to resolve --from-commit {}: empty rev-parse output\",\n            raw\n        )));\n    }\n    Ok(oid.to_string())\n}\n\nfn read_commit_patch_set(repo_path: &Path, commit_sha: &str) -> Result<PatchSet, CliError> {\n    let raw_args = vec![\n        \"diff-tree\".to_string(),\n        \"--root\".to_string(),\n        \"--raw\".to_string(),\n        \"-r\".to_string(),\n        \"-z\".to_string(),\n        \"-m\".to_string(),\n        \"--first-parent\".to_string(),\n        \"--find-renames\".to_string(),\n        \"--no-commit-id\".to_string(),\n        commit_sha.to_string(),\n    ];\n    let raw = run_git_bytes(repo_path, &raw_args, None)?;\n    let changes = parse_raw_diff_tree(&raw)?;\n\n    let wanted_blob_ids = collect_wanted_blob_ids(&changes);\n    let blob_by_oid = read_blobs(repo_path, &wanted_blob_ids)?;\n    Ok(PatchSet {\n        changes,\n        blob_by_oid,\n    })\n}\n\nfn parse_raw_diff_tree(raw: &[u8]) -> Result<Vec<Change>, CliError> {\n    if raw.is_empty() {\n        return Ok(Vec::new());\n    }\n\n    let mut tokens = raw.split(|byte| *byte == 
0).collect::<Vec<_>>();\n    if tokens.last().is_some_and(|token| token.is_empty()) {\n        tokens.pop();\n    }\n\n    let mut changes = Vec::new();\n    let mut index = 0usize;\n\n    while index < tokens.len() {\n        let header_token = tokens[index];\n        index += 1;\n\n        if header_token.is_empty() || !header_token.starts_with(b\":\") {\n            continue;\n        }\n\n        let header_text = String::from_utf8_lossy(header_token);\n        let fields = header_text[1..].split_whitespace().collect::<Vec<_>>();\n        if fields.len() < 5 {\n            continue;\n        }\n\n        let old_mode = fields[0].to_string();\n        let new_mode = fields[1].to_string();\n        let new_oid = fields[3].to_string();\n        let status = fields[4].chars().next().unwrap_or('M');\n\n        let first_path =\n            token_to_string(tokens.get(index).ok_or_else(|| {\n                CliError::msg(\"malformed git diff-tree output: missing path token\")\n            })?);\n        index += 1;\n\n        if status == 'R' || status == 'C' {\n            let second_path = token_to_string(tokens.get(index).ok_or_else(|| {\n                CliError::msg(\"malformed git diff-tree output: missing rename destination\")\n            })?);\n            index += 1;\n\n            changes.push(Change {\n                status,\n                old_mode,\n                new_mode,\n                new_oid,\n                old_path: Some(first_path),\n                new_path: Some(second_path),\n            });\n            continue;\n        }\n\n        let old_path = if status == 'A' {\n            None\n        } else {\n            Some(first_path.clone())\n        };\n        let new_path = if status == 'D' {\n            None\n        } else {\n            Some(first_path)\n        };\n\n        changes.push(Change {\n            status,\n            old_mode,\n            new_mode,\n            new_oid,\n            old_path,\n            
new_path,\n        });\n    }\n\n    Ok(changes)\n}\n\nfn collect_wanted_blob_ids(changes: &[Change]) -> Vec<String> {\n    let mut wanted_blob_ids = BTreeSet::<String>::new();\n    for change in changes {\n        if change.new_path.is_none() || !change.new_is_blob() {\n            continue;\n        }\n        if !change.new_oid.is_empty() && change.new_oid != NULL_OID {\n            wanted_blob_ids.insert(change.new_oid.clone());\n        }\n    }\n    wanted_blob_ids.into_iter().collect()\n}\n\nfn read_blobs(repo_path: &Path, blob_ids: &[String]) -> Result<HashMap<String, Vec<u8>>, CliError> {\n    if blob_ids.is_empty() {\n        return Ok(HashMap::new());\n    }\n\n    let mut request_body = String::new();\n    for blob_id in blob_ids {\n        request_body.push_str(blob_id);\n        request_body.push('\\n');\n    }\n\n    let args = vec![\"cat-file\".to_string(), \"--batch\".to_string()];\n    let stdout = run_git_bytes(repo_path, &args, Some(request_body.as_bytes()))?;\n    let mut blobs = HashMap::<String, Vec<u8>>::new();\n    let mut offset = 0usize;\n\n    while offset < stdout.len() {\n        let line_end = stdout[offset..]\n            .iter()\n            .position(|byte| *byte == b'\\n')\n            .map(|relative| offset + relative)\n            .ok_or_else(|| {\n                CliError::msg(\"malformed git cat-file output: missing header newline\")\n            })?;\n\n        let header = String::from_utf8_lossy(&stdout[offset..line_end])\n            .trim()\n            .to_string();\n        offset = line_end + 1;\n\n        if header.is_empty() {\n            continue;\n        }\n\n        let fields = header.split_whitespace().collect::<Vec<_>>();\n        if fields.len() < 2 {\n            return Err(CliError::msg(format!(\n                \"malformed git cat-file header: {header}\"\n            )));\n        }\n\n        let oid = fields[0];\n        let object_type = fields[1];\n        if object_type == \"missing\" {\n            
return Err(CliError::msg(format!(\n                \"missing blob object in git repository: {oid}\"\n            )));\n        }\n\n        if fields.len() < 3 {\n            return Err(CliError::msg(format!(\n                \"malformed git cat-file header (missing size): {header}\"\n            )));\n        }\n\n        let size = fields[2].parse::<usize>().map_err(|_| {\n            CliError::msg(format!(\n                \"invalid blob size '{}' in git cat-file output for {oid}\",\n                fields[2]\n            ))\n        })?;\n        let data_start = offset;\n        let data_end = data_start.saturating_add(size);\n        if data_end > stdout.len() {\n            return Err(CliError::msg(format!(\n                \"git cat-file output truncated while reading blob {oid}\"\n            )));\n        }\n\n        blobs.insert(oid.to_string(), stdout[data_start..data_end].to_vec());\n        offset = data_end;\n        if offset < stdout.len() && stdout[offset] == b'\\n' {\n            offset += 1;\n        }\n    }\n\n    for blob_id in blob_ids {\n        if !blobs.contains_key(blob_id) {\n            return Err(CliError::msg(format!(\n                \"blob {blob_id} was requested but not returned by git cat-file\"\n            )));\n        }\n    }\n\n    Ok(blobs)\n}\n\nfn prepare_commit_changes(\n    state: &mut ReplayState,\n    changes: &[Change],\n    blob_by_oid: &HashMap<String, Vec<u8>>,\n) -> Result<PreparedBatch, CliError> {\n    let mut delete_ids = BTreeSet::<String>::new();\n    let mut inserts_by_id = BTreeMap::<String, WriteRow>::new();\n    let mut updates_by_id = BTreeMap::<String, WriteRow>::new();\n\n    for change in changes {\n        let status = normalize_status(change.status);\n\n        if should_delete_old_entry(change, status) {\n            if let Some(deleted_id) = resolve_delete_path(state, change) {\n                delete_ids.insert(deleted_id.clone());\n                inserts_by_id.remove(&deleted_id);\n          
      updates_by_id.remove(&deleted_id);\n            }\n        }\n\n        if status == 'D' || !change.new_is_blob() {\n            continue;\n        }\n\n        let new_path = match &change.new_path {\n            Some(path) => path,\n            None => continue,\n        };\n\n        let target = resolve_write_target(state, change, status)?;\n        let bytes = blob_by_oid.get(&change.new_oid).ok_or_else(|| {\n            CliError::msg(format!(\n                \"missing blob {} while applying {} {}\",\n                change.new_oid, status, new_path\n            ))\n        })?;\n\n        let row = WriteRow {\n            id: target.id.clone(),\n            path: to_lix_path(new_path),\n            data: bytes.clone(),\n        };\n\n        if delete_ids.contains(&row.id) {\n            delete_ids.remove(&row.id);\n        }\n\n        if target.is_insert {\n            inserts_by_id.insert(row.id.clone(), row);\n            updates_by_id.remove(&target.id);\n            state.known_file_ids.insert(target.id);\n            continue;\n        }\n\n        if inserts_by_id.contains_key(&row.id) {\n            inserts_by_id.insert(row.id.clone(), row);\n            continue;\n        }\n\n        updates_by_id.insert(row.id.clone(), row);\n    }\n\n    Ok(PreparedBatch {\n        deletes: delete_ids.into_iter().collect(),\n        inserts: inserts_by_id.into_values().collect(),\n        updates: updates_by_id.into_values().collect(),\n    })\n}\n\nfn should_delete_old_entry(change: &Change, status: char) -> bool {\n    if change.old_path.is_none() || !mode_is_blob(&change.old_mode) {\n        return false;\n    }\n\n    match status {\n        'D' | 'R' => true,\n        'A' | 'C' => false,\n        _ => !change.new_is_blob(),\n    }\n}\n\nstruct WriteTarget {\n    id: String,\n    is_insert: bool,\n}\n\nfn resolve_delete_path(state: &mut ReplayState, change: &Change) -> Option<String> {\n    let old_path = change.old_path.as_ref()?;\n    let id = 
state.path_to_file_id.remove(old_path)?;\n    state.known_file_ids.remove(&id);\n    Some(id)\n}\n\nfn resolve_write_target(\n    state: &mut ReplayState,\n    change: &Change,\n    status: char,\n) -> Result<WriteTarget, CliError> {\n    let new_path = change\n        .new_path\n        .as_ref()\n        .ok_or(CliError::InvalidArgs(\"write target requires new path\"))?;\n\n    if status == 'R' {\n        if let Some(old_path) = change.old_path.as_ref() {\n            if let Some(existing_id) = state.path_to_file_id.get(old_path).cloned() {\n                state.path_to_file_id.remove(old_path);\n                state\n                    .path_to_file_id\n                    .insert(new_path.clone(), existing_id.clone());\n                return Ok(WriteTarget {\n                    id: existing_id,\n                    is_insert: false,\n                });\n            }\n        }\n    }\n\n    if let Some(existing_id) = state.path_to_file_id.get(new_path).cloned() {\n        return Ok(WriteTarget {\n            id: existing_id,\n            is_insert: false,\n        });\n    }\n\n    let generated = stable_file_id(new_path);\n    let is_insert = !state.known_file_ids.contains(&generated);\n    state\n        .path_to_file_id\n        .insert(new_path.clone(), generated.clone());\n    Ok(WriteTarget {\n        id: generated,\n        is_insert,\n    })\n}\n\nfn build_replay_commit_statements(\n    batch: &PreparedBatch,\n    max_insert_rows: usize,\n) -> Vec<SqlStatement> {\n    if batch.deletes.is_empty() && batch.inserts.is_empty() && batch.updates.is_empty() {\n        return Vec::new();\n    }\n\n    let mut statements = Vec::<SqlStatement>::new();\n\n    for delete_chunk in batch.deletes.chunks(500) {\n        if delete_chunk.is_empty() {\n            continue;\n        }\n\n        let placeholders = vec![\"?\"; delete_chunk.len()].join(\", \");\n        let sql = format!(\"DELETE FROM lix_file WHERE id IN ({placeholders})\");\n        let params = 
delete_chunk\n            .iter()\n            .cloned()\n            .map(Value::Text)\n            .collect::<Vec<_>>();\n        statements.push(SqlStatement { sql, params });\n    }\n\n    let insert_batch_size = max_insert_rows.max(1);\n    for insert_chunk in batch.inserts.chunks(insert_batch_size) {\n        if insert_chunk.is_empty() {\n            continue;\n        }\n\n        let mut params = Vec::<Value>::with_capacity(insert_chunk.len() * 3);\n        let values_sql = insert_chunk\n            .iter()\n            .map(|row| {\n                params.push(Value::Text(row.id.clone()));\n                params.push(Value::Text(row.path.clone()));\n                params.push(Value::Blob(row.data.clone()));\n                \"(?, ?, ?)\"\n            })\n            .collect::<Vec<_>>()\n            .join(\", \");\n        let sql = format!(\"INSERT INTO lix_file (id, path, data) VALUES {values_sql}\");\n        statements.push(SqlStatement { sql, params });\n    }\n\n    for row in &batch.updates {\n        // row.path is already a lix path; re-running stable_file_id would\n        // percent-encode it a second time, so compare id and path directly to\n        // detect rows whose path is unchanged and emit a data-only update.\n        if row.id == row.path {\n            statements.push(SqlStatement {\n                sql: \"UPDATE lix_file SET data = ? WHERE id = ?\".to_string(),\n                params: vec![Value::Blob(row.data.clone()), Value::Text(row.id.clone())],\n            });\n        } else {\n            statements.push(SqlStatement {\n                sql: \"UPDATE lix_file SET path = ?, data = ? 
WHERE id = ?\".to_string(),\n                params: vec![\n                    Value::Text(row.path.clone()),\n                    Value::Blob(row.data.clone()),\n                    Value::Text(row.id.clone()),\n                ],\n            });\n        }\n    }\n\n    statements\n}\n\nfn apply_prepared_to_expected_state(\n    expected_state_by_id: &mut HashMap<String, ExpectedFile>,\n    prepared: &PreparedBatch,\n) {\n    for id in &prepared.deletes {\n        expected_state_by_id.remove(id);\n    }\n\n    for row in prepared.inserts.iter().chain(prepared.updates.iter()) {\n        expected_state_by_id.insert(\n            row.id.clone(),\n            ExpectedFile {\n                path: row.path.clone(),\n                sha256: sha256_hex(&row.data),\n            },\n        );\n    }\n}\n\nfn verify_commit_state_hashes(\n    lix: &Lix,\n    expected_state_by_id: &HashMap<String, ExpectedFile>,\n    commit_sha: &str,\n) -> Result<(), CliError> {\n    let result =\n        crate::db::block_on(lix.execute(\"SELECT id, path, data FROM lix_file\", &[] as &[Value]))\n            .map_err(|err| {\n                CliError::msg(format!(\n                    \"failed to query replay state for verification: {err}\"\n                ))\n            })?;\n    let rows = result.rows();\n    if rows.len() != expected_state_by_id.len() {\n        return Err(CliError::msg(format!(\n            \"state mismatch at {commit_sha}: row count differs (lix={}, expected={})\",\n            rows.len(),\n            expected_state_by_id.len()\n        )));\n    }\n\n    let mut seen = HashSet::<String>::new();\n    for (index, row) in rows.iter().enumerate() {\n        if row.values().len() < 3 {\n            return Err(CliError::msg(format!(\n                \"state mismatch at {commit_sha}: row {index} has fewer than 3 columns\"\n            )));\n        }\n\n        let id = value_to_string(\n            row.get_index(0)\n                .ok_or_else(|| 
CliError::msg(format!(\"missing verify.id[{index}]\")))?,\n            &format!(\"verify.id[{index}]\"),\n        )?;\n        let path = value_to_string(\n            row.get_index(1)\n                .ok_or_else(|| CliError::msg(format!(\"missing verify.path[{index}]\")))?,\n            &format!(\"verify.path[{index}]\"),\n        )?;\n        let data = value_to_blob(\n            row.get_index(2)\n                .ok_or_else(|| CliError::msg(format!(\"missing verify.data[{index}]\")))?,\n            &format!(\"verify.data[{index}]\"),\n        )?;\n        let hash = sha256_hex(data);\n\n        let expected = expected_state_by_id.get(&id).ok_or_else(|| {\n            CliError::msg(format!(\n                \"state mismatch at {commit_sha}: unexpected file id in lix state: {id}\"\n            ))\n        })?;\n        if expected.path != path {\n            return Err(CliError::msg(format!(\n                \"state mismatch at {commit_sha}: path differs for id {id} (lix={path}, expected={})\",\n                expected.path\n            )));\n        }\n        if expected.sha256 != hash {\n            return Err(CliError::msg(format!(\n                \"state mismatch at {commit_sha}: hash differs for id {id}\"\n            )));\n        }\n\n        seen.insert(id);\n    }\n\n    if seen.len() != expected_state_by_id.len() {\n        return Err(CliError::msg(format!(\n            \"state mismatch at {commit_sha}: missing rows (lix={}, expected={})\",\n            seen.len(),\n            expected_state_by_id.len()\n        )));\n    }\n\n    Ok(())\n}\n\nfn value_to_string(value: &Value, context: &str) -> Result<String, CliError> {\n    match value {\n        Value::Text(text) => Ok(text.clone()),\n        Value::Integer(number) => Ok(number.to_string()),\n        Value::Real(number) => Ok(number.to_string()),\n        Value::Boolean(flag) => Ok(flag.to_string()),\n        _ => Err(CliError::msg(format!(\n            \"unexpected scalar type for {context}\"\n 
       ))),\n    }\n}\n\nfn value_to_blob<'a>(value: &'a Value, context: &str) -> Result<&'a [u8], CliError> {\n    match value {\n        Value::Blob(bytes) => Ok(bytes),\n        _ => Err(CliError::msg(format!(\"unexpected blob type for {context}\"))),\n    }\n}\n\nfn sha256_hex(bytes: &[u8]) -> String {\n    let digest = Sha256::digest(bytes);\n    let mut out = String::with_capacity(digest.len() * 2);\n    for byte in digest {\n        out.push(hex_digit_lower(byte >> 4));\n        out.push(hex_digit_lower(byte & 0x0f));\n    }\n    out\n}\n\nfn hex_digit_lower(value: u8) -> char {\n    match value {\n        0..=9 => (b'0' + value) as char,\n        10..=15 => (b'a' + (value - 10)) as char,\n        _ => '0',\n    }\n}\n\nfn hex_digit_upper(value: u8) -> char {\n    match value {\n        0..=9 => (b'0' + value) as char,\n        10..=15 => (b'A' + (value - 10)) as char,\n        _ => '0',\n    }\n}\n\nfn normalize_status(value: char) -> char {\n    value.to_ascii_uppercase()\n}\n\nfn stable_file_id(path: &str) -> String {\n    to_lix_path(path)\n}\n\nfn to_lix_path(path: &str) -> String {\n    let normalized = path.replace('\\\\', \"/\");\n    let without_leading_slash = normalized.strip_prefix('/').unwrap_or(&normalized);\n    let encoded = without_leading_slash\n        .split('/')\n        .map(encode_path_segment)\n        .collect::<Vec<_>>()\n        .join(\"/\");\n    format!(\"/{encoded}\")\n}\n\nfn encode_path_segment(segment: &str) -> String {\n    let mut encoded = String::new();\n    for byte in segment.as_bytes() {\n        let is_alpha_num = byte.is_ascii_alphanumeric();\n        let is_safe = matches!(*byte, b'.' 
| b'_' | b'~' | b'-');\n        if is_alpha_num || is_safe {\n            encoded.push(*byte as char);\n        } else {\n            encoded.push('%');\n            encoded.push(hex_digit_upper(byte >> 4));\n            encoded.push(hex_digit_upper(byte & 0x0f));\n        }\n    }\n    encoded\n}\n\nfn mode_is_blob(mode: &str) -> bool {\n    mode.starts_with(\"100\") || mode == \"120000\"\n}\n\nfn token_to_string(token: &[u8]) -> String {\n    String::from_utf8_lossy(token).to_string()\n}\n\nfn run_git_text(\n    repo_path: &Path,\n    args: &[String],\n    stdin: Option<&[u8]>,\n) -> Result<String, CliError> {\n    let output = run_git_bytes(repo_path, args, stdin)?;\n    Ok(String::from_utf8_lossy(&output).to_string())\n}\n\nfn run_git_bytes(\n    repo_path: &Path,\n    args: &[String],\n    stdin: Option<&[u8]>,\n) -> Result<Vec<u8>, CliError> {\n    let mut command = Command::new(\"git\");\n    command.arg(\"-C\").arg(repo_path);\n    for arg in args {\n        command.arg(arg);\n    }\n    command.stdout(Stdio::piped());\n    command.stderr(Stdio::piped());\n    if stdin.is_some() {\n        command.stdin(Stdio::piped());\n    } else {\n        command.stdin(Stdio::null());\n    }\n\n    let mut child = command\n        .spawn()\n        .map_err(|source| CliError::io(\"failed to spawn git command\", source))?;\n\n    if let Some(input) = stdin {\n        let mut child_stdin = child\n            .stdin\n            .take()\n            .ok_or_else(|| CliError::msg(\"failed to open stdin for git command\"))?;\n        child_stdin\n            .write_all(input)\n            .map_err(|source| CliError::io(\"failed to write stdin for git command\", source))?;\n    }\n\n    let output = child\n        .wait_with_output()\n        .map_err(|source| CliError::io(\"failed to wait for git command\", source))?;\n\n    if output.status.success() {\n        return Ok(output.stdout);\n    }\n\n    let args_preview = args.join(\" \");\n    let stderr = 
String::from_utf8_lossy(&output.stderr).trim().to_string();\n    let status = output\n        .status\n        .code()\n        .map(|code| format!(\"exit code {code}\"))\n        .unwrap_or_else(|| \"terminated by signal\".to_string());\n    Err(CliError::msg(format!(\n        \"git -C {} {} failed with {}: {}\",\n        repo_path.display(),\n        args_preview,\n        status,\n        stderr\n    )))\n}\n\nfn prepare_regular_output_path(path: &Path, force: bool) -> Result<(), CliError> {\n    if let Some(parent) = path.parent() {\n        fs::create_dir_all(parent)\n            .map_err(|source| CliError::io(\"failed to create output directory\", source))?;\n    }\n\n    if path.exists() {\n        if path.is_dir() {\n            return Err(CliError::msg(format!(\n                \"output path points to a directory, expected a file: {}\",\n                path.display()\n            )));\n        }\n        if force {\n            fs::remove_file(path)\n                .map_err(|source| CliError::io(\"failed to remove existing output file\", source))?;\n            return Ok(());\n        }\n        return Err(CliError::msg(format!(\n            \"output path already exists: {}\",\n            path.display()\n        )));\n    }\n\n    Ok(())\n}\n\nfn validate_repo_dir(path: &Path) -> Result<(), CliError> {\n    if path.is_dir() {\n        return Ok(());\n    }\n\n    Err(CliError::msg(format!(\n        \"repo path does not exist or is not a directory: {}\",\n        path.display()\n    )))\n}\n\nfn validate_git_repo(path: &Path) -> Result<(), CliError> {\n    let args = vec![\"rev-parse\".to_string(), \"--is-inside-work-tree\".to_string()];\n    let output = run_git_text(path, &args, None)?;\n    if output.trim() == \"true\" {\n        return Ok(());\n    }\n    Err(CliError::msg(format!(\n        \"repo path is not a git work tree: {}\",\n        path.display()\n    )))\n}\n\nfn normalize_replay_ref(raw: &str) -> Result<String, CliError> {\n    let trimmed 
= raw.trim();\n    if trimmed.is_empty() {\n        return Err(CliError::InvalidArgs(\"branch must not be empty\"));\n    }\n\n    if trimmed == \"*\" {\n        return Ok(\"--all\".to_string());\n    }\n\n    Ok(trimmed.to_string())\n}\n\nfn absolutize_from_cwd(path: &Path) -> Result<PathBuf, CliError> {\n    if path.is_absolute() {\n        return Ok(path.to_path_buf());\n    }\n\n    let cwd = std::env::current_dir()\n        .map_err(|source| CliError::io(\"failed to read current directory\", source))?;\n    Ok(cwd.join(path))\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use std::time::{SystemTime, UNIX_EPOCH};\n\n    #[test]\n    fn collect_wanted_blob_ids_skips_gitlink_oids() {\n        let changes = vec![\n            Change {\n                status: 'A',\n                old_mode: \"000000\".to_string(),\n                new_mode: \"100644\".to_string(),\n                new_oid: \"1111111111111111111111111111111111111111\".to_string(),\n                old_path: None,\n                new_path: Some(\"regular.txt\".to_string()),\n            },\n            Change {\n                status: 'A',\n                old_mode: \"000000\".to_string(),\n                new_mode: \"160000\".to_string(),\n                new_oid: \"4c9431adbd4a24aed1d9afdecbfe4eaac3a6bba9\".to_string(),\n                old_path: None,\n                new_path: Some(\"submodule\".to_string()),\n            },\n        ];\n\n        let wanted = collect_wanted_blob_ids(&changes);\n        assert_eq!(\n            wanted,\n            vec![\"1111111111111111111111111111111111111111\".to_string()]\n        );\n    }\n\n    #[test]\n    fn select_replay_commits_starts_from_specific_commit_inclusive() {\n        let commits = vec![\n            \"a\".to_string(),\n            \"b\".to_string(),\n            \"c\".to_string(),\n            \"d\".to_string(),\n        ];\n        let selected = select_replay_commits(commits, Some(\"c\"), None)\n            
.expect(\"select_replay_commits should succeed\");\n        assert_eq!(selected, vec![\"c\".to_string(), \"d\".to_string()]);\n    }\n\n    #[test]\n    fn select_replay_commits_applies_limit_after_from_commit() {\n        let commits = vec![\n            \"a\".to_string(),\n            \"b\".to_string(),\n            \"c\".to_string(),\n            \"d\".to_string(),\n        ];\n        let selected = select_replay_commits(commits, Some(\"b\"), Some(2))\n            .expect(\"select_replay_commits should succeed\");\n        assert_eq!(selected, vec![\"b\".to_string(), \"c\".to_string()]);\n    }\n\n    #[test]\n    fn select_replay_commits_errors_when_from_commit_missing() {\n        let commits = vec![\"a\".to_string(), \"b\".to_string()];\n        let result = select_replay_commits(commits, Some(\"missing\"), None);\n        assert!(result.is_err(), \"expected error for missing from-commit\");\n        let message = format!(\n            \"{}\",\n            result.expect_err(\"expected missing from-commit error\")\n        );\n        assert!(\n            message.contains(\"not reachable from selected ref\"),\n            \"unexpected error message: {message}\"\n        );\n    }\n\n    #[test]\n    fn prepare_commit_changes_typechange_blob_to_gitlink_deletes_file() {\n        let mut state = ReplayState::default();\n        state.path_to_file_id.insert(\n            \"artifact/spa-prerender-repro\".to_string(),\n            \"/artifact/spa-prerender-repro\".to_string(),\n        );\n        state\n            .known_file_ids\n            .insert(\"/artifact/spa-prerender-repro\".to_string());\n\n        let changes = vec![Change {\n            status: 'T',\n            old_mode: \"100644\".to_string(),\n            new_mode: \"160000\".to_string(),\n            new_oid: \"4c9431adbd4a24aed1d9afdecbfe4eaac3a6bba9\".to_string(),\n            old_path: Some(\"artifact/spa-prerender-repro\".to_string()),\n            new_path: 
Some(\"artifact/spa-prerender-repro\".to_string()),\n        }];\n\n        let prepared = prepare_commit_changes(&mut state, &changes, &HashMap::new())\n            .expect(\"gitlink typechange should not error\");\n\n        assert_eq!(\n            prepared.deletes,\n            vec![\"/artifact/spa-prerender-repro\".to_string()]\n        );\n        assert!(prepared.inserts.is_empty());\n        assert!(prepared.updates.is_empty());\n        assert!(!state\n            .path_to_file_id\n            .contains_key(\"artifact/spa-prerender-repro\"));\n    }\n\n    #[test]\n    fn prepare_output_path_rejects_existing_file() {\n        let temp_dir = unique_temp_dir();\n        fs::create_dir_all(&temp_dir).expect(\"temp dir should be created\");\n        let output_path = temp_dir.join(\"existing.lix\");\n        fs::write(&output_path, b\"existing\").expect(\"seed file should be written\");\n\n        let result = db::prepare_lix_output_path(&output_path, false);\n        assert!(result.is_err(), \"expected error when output file exists\");\n        let message = format!(\"{}\", result.expect_err(\"expected output path error\"));\n        assert!(\n            message.contains(\"output path already exists\"),\n            \"unexpected error message: {message}\"\n        );\n\n        fs::remove_file(&output_path).expect(\"seed file should be removable\");\n        fs::remove_dir_all(&temp_dir).expect(\"temp dir should be removable\");\n    }\n\n    #[test]\n    fn prepare_output_path_allows_nonexistent_file_and_creates_parent() {\n        let temp_dir = unique_temp_dir();\n        let nested_parent = temp_dir.join(\"nested\").join(\"output\");\n        let output_path = nested_parent.join(\"new.lix\");\n\n        let result = db::prepare_lix_output_path(&output_path, false);\n        assert!(result.is_ok(), \"expected success for absent output file\");\n        assert!(\n            nested_parent.is_dir(),\n            \"expected parent directories to be 
created\"\n        );\n\n        fs::remove_dir_all(&temp_dir).expect(\"temp dir should be removable\");\n    }\n\n    #[test]\n    fn prepare_output_lix_path_force_removes_existing_file_and_sidecars() {\n        let temp_dir = unique_temp_dir();\n        fs::create_dir_all(&temp_dir).expect(\"temp dir should be created\");\n        let output_path = temp_dir.join(\"existing.lix\");\n        fs::write(&output_path, b\"existing\").expect(\"seed file should be written\");\n        fs::write(\n            PathBuf::from(format!(\"{}-wal\", output_path.display())),\n            b\"wal-bytes\",\n        )\n        .expect(\"wal file should be written\");\n        fs::write(\n            PathBuf::from(format!(\"{}-shm\", output_path.display())),\n            b\"shm-bytes\",\n        )\n        .expect(\"shm file should be written\");\n\n        let result = db::prepare_lix_output_path(&output_path, true);\n        assert!(result.is_ok(), \"expected success when force is enabled\");\n        assert!(\n            !output_path.exists(),\n            \"expected existing output file to be removed\"\n        );\n        assert!(\n            !PathBuf::from(format!(\"{}-wal\", output_path.display())).exists(),\n            \"expected wal sidecar to be removed\"\n        );\n        assert!(\n            !PathBuf::from(format!(\"{}-shm\", output_path.display())).exists(),\n            \"expected shm sidecar to be removed\"\n        );\n\n        fs::remove_dir_all(&temp_dir).expect(\"temp dir should be removable\");\n    }\n\n    #[test]\n    fn build_replay_commit_statements_omits_path_for_stable_updates() {\n        let batch = PreparedBatch {\n            deletes: Vec::new(),\n            inserts: Vec::new(),\n            updates: vec![WriteRow {\n                id: \"/src/main.ts\".to_string(),\n                path: \"/src/main.ts\".to_string(),\n                data: b\"hello\".to_vec(),\n            }],\n        };\n\n        let statements = 
build_replay_commit_statements(&batch, DEFAULT_INSERT_BATCH_ROWS);\n\n        assert_eq!(statements.len(), 1);\n        assert_eq!(\n            statements[0].sql,\n            \"UPDATE lix_file SET data = ? WHERE id = ?\"\n        );\n        assert_eq!(\n            statements[0].params,\n            vec![\n                Value::Blob(b\"hello\".to_vec()),\n                Value::Text(\"/src/main.ts\".to_string())\n            ]\n        );\n    }\n\n    #[test]\n    fn build_replay_commit_statements_preserves_path_for_renames() {\n        let batch = PreparedBatch {\n            deletes: Vec::new(),\n            inserts: Vec::new(),\n            updates: vec![WriteRow {\n                id: \"/src/old.ts\".to_string(),\n                path: \"/src/new.ts\".to_string(),\n                data: b\"hello\".to_vec(),\n            }],\n        };\n\n        let statements = build_replay_commit_statements(&batch, DEFAULT_INSERT_BATCH_ROWS);\n\n        assert_eq!(statements.len(), 1);\n        assert_eq!(\n            statements[0].sql,\n            \"UPDATE lix_file SET path = ?, data = ? WHERE id = ?\"\n        );\n        assert_eq!(\n            statements[0].params,\n            vec![\n                Value::Text(\"/src/new.ts\".to_string()),\n                Value::Blob(b\"hello\".to_vec()),\n                Value::Text(\"/src/old.ts\".to_string())\n            ]\n        );\n    }\n\n    fn unique_temp_dir() -> PathBuf {\n        let nanos = SystemTime::now()\n            .duration_since(UNIX_EPOCH)\n            .expect(\"system time should be after unix epoch\")\n            .as_nanos();\n        std::env::temp_dir().join(format!(\n            \"lix-cli-git-replay-test-{}-{nanos}\",\n            std::process::id()\n        ))\n    }\n}\n"
  },
  {
    "path": "packages/cli/src/commands/exp/mod.rs",
    "content": "mod git_replay;\n\nuse crate::app::AppContext;\nuse crate::cli::exp::{ExpCommand, ExpSubcommand};\nuse crate::error::CliError;\nuse crate::hints::CommandOutput;\n\npub fn run(_context: &AppContext, command: ExpCommand) -> Result<CommandOutput, CliError> {\n    match command.command {\n        ExpSubcommand::GitReplay(args) => {\n            git_replay::run(args)?;\n            Ok(CommandOutput::empty())\n        }\n    }\n}\n"
  },
  {
    "path": "packages/cli/src/commands/init.rs",
    "content": "use crate::cli::init::InitCommand;\nuse crate::db;\nuse crate::error::CliError;\nuse crate::hints::{self, CommandOutput};\n\npub fn run(command: InitCommand) -> Result<CommandOutput, CliError> {\n    let initialized = db::init_lix_at(&command.path)?;\n    if initialized {\n        println!(\"initialized {}\", command.path.display());\n    } else {\n        println!(\"already initialized {}\", command.path.display());\n    }\n    Ok(CommandOutput::with_hints(hints::hint_after_init()))\n}\n"
  },
  {
    "path": "packages/cli/src/commands/mod.rs",
    "content": "pub mod exp;\npub mod init;\npub mod redo;\npub mod sql;\npub mod undo;\npub mod version;\n"
  },
  {
    "path": "packages/cli/src/commands/redo.rs",
    "content": "use crate::app::AppContext;\nuse crate::cli::redo::RedoCommand;\nuse crate::error::CliError;\nuse crate::hints::CommandOutput;\n\npub fn run(_context: &AppContext, _command: RedoCommand) -> Result<CommandOutput, CliError> {\n    Err(CliError::msg(\n        \"redo is not available in the current rs-sdk surface\",\n    ))\n}\n"
  },
  {
    "path": "packages/cli/src/commands/sql/execute.rs",
    "content": "use crate::app::AppContext;\nuse crate::cli::sql::{SqlExecuteArgs, SqlOutputFormat};\nuse crate::db;\nuse crate::error::CliError;\nuse crate::hints::{self, CommandOutput};\nuse crate::output;\nuse base64::Engine as _;\nuse lix_rs_sdk::Value;\nuse serde_json::Value as JsonValue;\nuse std::io::Read;\n\npub fn run(context: &AppContext, args: SqlExecuteArgs) -> Result<CommandOutput, CliError> {\n    let (sql, params) = resolve_sql_and_params(&args)?;\n    let lix_path = db::resolve_db_path(context)?;\n    let lix = db::open_lix_at(&lix_path)?;\n    let result = crate::db::block_on(lix.execute(&sql, &params))\n        .map_err(|err| CliError::from_lix(\"sql execution failed\", err))?;\n\n    match args.format {\n        SqlOutputFormat::Json => output::print_execute_result_json(&result),\n        SqlOutputFormat::Table => output::print_execute_result_table(&result),\n    }\n\n    let output_hints = if context.no_hints || !hints::are_hints_enabled(&lix) {\n        Vec::new()\n    } else {\n        hints::hint_blob_in_result(&result)\n    };\n\n    Ok(CommandOutput::with_hints(output_hints))\n}\n\nfn resolve_sql_and_params(args: &SqlExecuteArgs) -> Result<(String, Vec<Value>), CliError> {\n    let sql_from_stdin = args.sql == \"-\";\n    let params_from_stdin = args.params.as_deref() == Some(\"-\");\n    if sql_from_stdin && params_from_stdin {\n        return Err(CliError::InvalidArgs(\n            \"sql and params cannot both be read from stdin\",\n        ));\n    }\n\n    let stdin_payload = if sql_from_stdin {\n        Some(read_stdin(\"failed to read SQL from stdin\")?)\n    } else if params_from_stdin {\n        Some(read_stdin(\"failed to read params JSON from stdin\")?)\n    } else {\n        None\n    };\n\n    let sql = if sql_from_stdin {\n        let input = stdin_payload\n            .as_deref()\n            .ok_or(CliError::InvalidArgs(\"stdin SQL input is empty\"))?;\n        if input.trim().is_empty() {\n            return 
Err(CliError::InvalidArgs(\"stdin SQL input is empty\"));\n        }\n        input.to_string()\n    } else {\n        args.sql.clone()\n    };\n\n    let params = resolve_params(args.params.as_deref(), stdin_payload.as_deref())?;\n    Ok((sql, params))\n}\n\nfn read_stdin(context: &'static str) -> Result<String, CliError> {\n    let mut input = String::new();\n    std::io::stdin()\n        .read_to_string(&mut input)\n        .map_err(|source| CliError::io(context, source))?;\n    Ok(input)\n}\n\nfn resolve_params(\n    params_input: Option<&str>,\n    stdin_payload: Option<&str>,\n) -> Result<Vec<Value>, CliError> {\n    let Some(raw_params) = params_input else {\n        return Ok(Vec::new());\n    };\n\n    let json_text = if raw_params == \"-\" {\n        let input =\n            stdin_payload.ok_or(CliError::InvalidArgs(\"stdin params JSON input is empty\"))?;\n        if input.trim().is_empty() {\n            return Err(CliError::InvalidArgs(\"stdin params JSON input is empty\"));\n        }\n        input\n    } else {\n        raw_params\n    };\n\n    parse_params_json(json_text)\n}\n\nfn parse_params_json(raw: &str) -> Result<Vec<Value>, CliError> {\n    let parsed: JsonValue = serde_json::from_str(raw).map_err(|error| {\n        CliError::msg(format!(\n            \"invalid --params JSON: expected a JSON array, parse error: {error}\"\n        ))\n    })?;\n\n    let values = parsed.as_array().ok_or_else(|| {\n        CliError::msg(\"invalid --params JSON: expected a JSON array of positional parameters\")\n    })?;\n\n    values\n        .iter()\n        .enumerate()\n        .map(|(index, value)| parse_param_value(value, index))\n        .collect::<Result<Vec<_>, _>>()\n}\n\nfn parse_param_value(value: &JsonValue, index: usize) -> Result<Value, CliError> {\n    match value {\n        JsonValue::Null => Ok(Value::Null),\n        JsonValue::Bool(v) => Ok(Value::Boolean(*v)),\n        JsonValue::Number(v) => {\n            if let Some(as_i64) = v.as_i64() 
{\n                return Ok(Value::Integer(as_i64));\n            }\n            if let Some(as_f64) = v.as_f64() {\n                return Ok(Value::Real(as_f64));\n            }\n            Err(CliError::msg(format!(\n                \"invalid --params value at index {index}: unsupported number representation\"\n            )))\n        }\n        JsonValue::String(v) => Ok(Value::Text(v.clone())),\n        JsonValue::Object(map) => parse_object_param(map, index),\n        JsonValue::Array(_) => Err(CliError::msg(format!(\n            \"invalid --params value at index {index}: nested arrays are not supported\"\n        ))),\n    }\n}\n\nfn parse_object_param(\n    map: &serde_json::Map<String, JsonValue>,\n    index: usize,\n) -> Result<Value, CliError> {\n    if map.len() == 1 && map.contains_key(\"$blob\") {\n        let encoded = map\n            .get(\"$blob\")\n            .and_then(JsonValue::as_str)\n            .ok_or_else(|| {\n                CliError::msg(format!(\n                    \"invalid --params value at index {index}: $blob must be a base64 string\"\n                ))\n            })?;\n        let bytes = base64::engine::general_purpose::STANDARD\n            .decode(encoded)\n            .map_err(|error| {\n                CliError::msg(format!(\n                    \"invalid --params value at index {index}: $blob is not valid base64: {error}\"\n                ))\n            })?;\n        return Ok(Value::Blob(bytes));\n    }\n\n    Err(CliError::msg(format!(\n        \"invalid --params value at index {index}: objects must use only {{\\\"$blob\\\":\\\"<base64>\\\"}}\"\n    )))\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use std::path::PathBuf;\n    use std::time::{SystemTime, UNIX_EPOCH};\n\n    #[test]\n    fn resolve_params_defaults_to_empty_when_unset() {\n        let resolved = resolve_params(None, None).expect(\"params should resolve\");\n        assert!(resolved.is_empty());\n    }\n\n    #[test]\n    fn 
resolve_params_maps_json_array_values_to_typed_sql_values() {\n        let resolved = resolve_params(\n            Some(\"[null, true, 7, 2.5, \\\"hello\\\", {\\\"$blob\\\":\\\"aGk=\\\"}]\"),\n            None,\n        )\n        .expect(\"typed params should resolve\");\n        assert_eq!(\n            resolved,\n            vec![\n                Value::Null,\n                Value::Boolean(true),\n                Value::Integer(7),\n                Value::Real(2.5),\n                Value::Text(\"hello\".to_string()),\n                Value::Blob(vec![0x68, 0x69]),\n            ]\n        );\n    }\n\n    #[test]\n    fn resolve_params_rejects_non_array_json() {\n        let error = resolve_params(Some(\"{\\\"a\\\":1}\"), None).expect_err(\"non-array should fail\");\n        assert_eq!(\n            error.to_string(),\n            \"invalid --params JSON: expected a JSON array of positional parameters\"\n        );\n    }\n\n    #[test]\n    fn resolve_params_rejects_invalid_object_shape() {\n        let error =\n            resolve_params(Some(\"[{\\\"k\\\":\\\"v\\\"}]\"), None).expect_err(\"invalid object should fail\");\n        assert_eq!(\n            error.to_string(),\n            \"invalid --params value at index 0: objects must use only {\\\"$blob\\\":\\\"<base64>\\\"}\"\n        );\n    }\n\n    #[test]\n    fn resolve_sql_and_params_rejects_double_stdin_usage() {\n        let args = SqlExecuteArgs {\n            format: SqlOutputFormat::Table,\n            params: Some(\"-\".to_string()),\n            sql: \"-\".to_string(),\n        };\n        let error =\n            resolve_sql_and_params(&args).expect_err(\"double stdin read should be rejected\");\n        assert_eq!(\n            error.to_string(),\n            \"invalid arguments: sql and params cannot both be read from stdin\"\n        );\n    }\n\n    #[test]\n    fn execute_accepts_numbered_placeholders_with_json_params() {\n        let handle = std::thread::Builder::new()\n            
.name(\"sql-execute-param-binding\".to_string())\n            .stack_size(32 * 1024 * 1024)\n            .spawn(|| {\n                let path = test_lix_path(\"param-binding\");\n                db::init_lix_at(&path).expect(\"init test lix file\");\n                let context = AppContext {\n                    lix_path: Some(path.clone()),\n                    no_hints: true,\n                };\n                let args = SqlExecuteArgs {\n                    format: SqlOutputFormat::Json,\n                    params: Some(\"[\\\"left\\\", \\\"right\\\"]\".to_string()),\n                    sql: \"SELECT ?1 AS first_value, ?2 AS second_value\".to_string(),\n                };\n\n                let result = run(&context, args);\n                let _ = std::fs::remove_file(&path);\n                assert!(\n                    result.is_ok(),\n                    \"expected sql execute to succeed: {result:?}\"\n                );\n            })\n            .expect(\"spawn test thread\");\n\n        handle.join().expect(\"test thread joins\");\n    }\n\n    fn test_lix_path(label: &str) -> PathBuf {\n        let nonce = SystemTime::now()\n            .duration_since(UNIX_EPOCH)\n            .expect(\"system clock after unix epoch\")\n            .as_nanos();\n        std::env::temp_dir().join(format!(\"lix-cli-{label}-{nonce}.lix\"))\n    }\n}\n"
  },
  {
    "path": "packages/cli/src/commands/sql/mod.rs",
    "content": "mod execute;\n\nuse crate::app::AppContext;\nuse crate::cli::sql::{SqlCommand, SqlSubcommand};\nuse crate::error::CliError;\nuse crate::hints::CommandOutput;\n\npub fn run(context: &AppContext, command: SqlCommand) -> Result<CommandOutput, CliError> {\n    match command.command {\n        SqlSubcommand::Execute(args) => execute::run(context, args),\n    }\n}\n"
  },
  {
    "path": "packages/cli/src/commands/undo.rs",
    "content": "use crate::app::AppContext;\nuse crate::cli::undo::UndoCommand;\nuse crate::error::CliError;\nuse crate::hints::CommandOutput;\n\npub fn run(_context: &AppContext, _command: UndoCommand) -> Result<CommandOutput, CliError> {\n    Err(CliError::msg(\n        \"undo is not available in the current rs-sdk surface\",\n    ))\n}\n"
  },
  {
    "path": "packages/cli/src/commands/version/create.rs",
    "content": "use crate::app::AppContext;\nuse crate::cli::version::CreateVersionCommand;\nuse crate::commands::version::{\n    resolve_active_version_ref, resolve_version_ref, ResolvedVersionRef, VersionLookup,\n};\nuse crate::db::{open_lix_at, resolve_db_path};\nuse crate::error::CliError;\nuse crate::hints::CommandOutput;\nuse lix_rs_sdk::{CreateVersionOptions, CreateVersionResult, SwitchVersionOptions};\n\npub fn run(context: &AppContext, command: CreateVersionCommand) -> Result<CommandOutput, CliError> {\n    let path = resolve_db_path(context)?;\n    let lix = open_lix_at(&path)?;\n    let source = match (command.from_id.as_deref(), command.from_name.as_deref()) {\n        (Some(id), None) => Some(resolve_version_ref(&lix, VersionLookup::Id(id))?),\n        (None, Some(name)) => Some(resolve_version_ref(&lix, VersionLookup::Name(name))?),\n        (None, None) => None,\n        _ => {\n            return Err(CliError::msg(\n                \"version create accepts at most one of --from-id or --from-name\",\n            ));\n        }\n    };\n    let original_active = resolve_active_version_ref(&lix)?;\n    if let Some(source) = &source {\n        crate::db::block_on(lix.switch_version(SwitchVersionOptions {\n            version_id: source.id.clone(),\n        }))\n        .map_err(|error| CliError::msg(error.to_string()))?;\n    }\n    let name = command\n        .name\n        .clone()\n        .or_else(|| command.id.clone())\n        .ok_or_else(|| CliError::msg(\"version create requires --name when --id is omitted\"))?;\n    let result = crate::db::block_on(lix.create_version(CreateVersionOptions {\n        id: command.id,\n        name,\n        from_commit_id: None,\n    }))\n    .map_err(|error| CliError::msg(error.to_string()))?;\n    if source.is_some() {\n        crate::db::block_on(lix.switch_version(SwitchVersionOptions {\n            version_id: original_active.id.clone(),\n        }))\n        .map_err(|error| 
CliError::msg(error.to_string()))?;\n    }\n\n    let parent = source.as_ref().unwrap_or(&original_active);\n    let (created_line, active_line) = create_confirmation_lines(&result, parent, &original_active);\n    println!(\"{created_line}\");\n    println!(\"{active_line}\");\n    Ok(CommandOutput::empty())\n}\n\nfn create_confirmation_lines(\n    result: &CreateVersionResult,\n    parent: &ResolvedVersionRef,\n    active: &ResolvedVersionRef,\n) -> (String, String) {\n    (\n        format!(\n            \"Created version {} from {} ({}).\",\n            result.version_id, parent.name, parent.id\n        ),\n        format!(\n            \"Active version is still {} ({}). Use `lix version switch --id {}` to work on it.\",\n            active.name, active.id, result.version_id\n        ),\n    )\n}\n\n#[cfg(test)]\nmod tests {\n    use super::create_confirmation_lines;\n    use crate::commands::version::ResolvedVersionRef;\n    use lix_rs_sdk::CreateVersionResult;\n\n    #[test]\n    fn create_confirmation_uses_active_version_not_parent_version() {\n        let result = CreateVersionResult {\n            version_id: \"new-version\".to_string(),\n        };\n        let parent = ResolvedVersionRef {\n            id: \"feature-b\".to_string(),\n            name: \"Feature B\".to_string(),\n        };\n        let active = ResolvedVersionRef {\n            id: \"feature-a\".to_string(),\n            name: \"Feature A\".to_string(),\n        };\n\n        let (_, active_line) = create_confirmation_lines(&result, &parent, &active);\n        assert!(active_line.contains(\"Feature A (feature-a)\"));\n        assert!(!active_line.contains(\"Feature B (feature-b)\"));\n    }\n}\n"
  },
  {
    "path": "packages/cli/src/commands/version/merge.rs",
    "content": "use crate::app::AppContext;\nuse crate::cli::version::MergeVersionCommand;\nuse crate::commands::version::{resolve_version_ref, VersionLookup};\nuse crate::db::{open_lix_at, resolve_db_path};\nuse crate::error::CliError;\nuse crate::hints::CommandOutput;\nuse lix_rs_sdk::{MergeVersionOptions, MergeVersionOutcome, SwitchVersionOptions};\n\npub fn run(context: &AppContext, command: MergeVersionCommand) -> Result<CommandOutput, CliError> {\n    let path = resolve_db_path(context)?;\n    let lix = open_lix_at(&path)?;\n    let source = resolve_version_ref(\n        &lix,\n        match (command.source_id.as_deref(), command.source_name.as_deref()) {\n            (Some(id), None) => VersionLookup::Id(id),\n            (None, Some(name)) => VersionLookup::Name(name),\n            _ => {\n                return Err(CliError::msg(\n                    \"version merge requires exactly one of --source-id or --source-name\",\n                ));\n            }\n        },\n    )?;\n    let target = resolve_version_ref(\n        &lix,\n        match (command.target_id.as_deref(), command.target_name.as_deref()) {\n            (Some(id), None) => VersionLookup::Id(id),\n            (None, Some(name)) => VersionLookup::Name(name),\n            _ => {\n                return Err(CliError::msg(\n                    \"version merge requires exactly one of --target-id or --target-name\",\n                ));\n            }\n        },\n    )?;\n    crate::db::block_on(lix.switch_version(SwitchVersionOptions {\n        version_id: target.id.clone(),\n    }))\n    .map_err(|error| CliError::msg(error.to_string()))?;\n    let result = crate::db::block_on(lix.merge_version(MergeVersionOptions {\n        source_version_id: source.id.clone(),\n    }))\n    .map_err(|error| CliError::msg(error.to_string()))?;\n\n    match result.outcome {\n        MergeVersionOutcome::AlreadyUpToDate => {\n            println!(\n                \"{} ({}) already contains {} ({})\",\n        
        target.name, target.id, source.name, source.id\n            );\n        }\n        MergeVersionOutcome::FastForward => {\n            println!(\n                \"Fast-forwarded {} ({}) to {} ({}) at {}\",\n                target.name, target.id, source.name, source.id, result.target_head_after_commit_id\n            );\n        }\n        MergeVersionOutcome::MergeCommitted => {\n            let commit_id = result.created_merge_commit_id.ok_or_else(|| {\n                CliError::msg(\"merge_version returned MergeCommitted without a merge commit id\")\n            })?;\n            println!(\n                \"Merged {} ({}) into {} ({}) with commit {}\",\n                source.name, source.id, target.name, target.id, commit_id\n            );\n        }\n    }\n\n    Ok(CommandOutput::empty())\n}\n"
  },
  {
    "path": "packages/cli/src/commands/version/mod.rs",
    "content": "mod create;\nmod merge;\nmod switch;\n\nuse crate::app::AppContext;\nuse crate::cli::version::{VersionCommand, VersionSubcommand};\nuse crate::error::CliError;\nuse crate::hints::CommandOutput;\nuse lix_rs_sdk::{ExecuteResult, Lix, Row as LixRow, Value};\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(super) enum VersionLookup<'a> {\n    Id(&'a str),\n    Name(&'a str),\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(super) struct ResolvedVersionRef {\n    pub id: String,\n    pub name: String,\n}\n\npub fn run(context: &AppContext, command: VersionCommand) -> Result<CommandOutput, CliError> {\n    match command.command {\n        VersionSubcommand::Create(command) => create::run(context, command),\n        VersionSubcommand::Merge(command) => merge::run(context, command),\n        VersionSubcommand::Switch(command) => switch::run(context, command),\n    }\n}\n\npub(super) fn resolve_version_ref(\n    lix: &Lix,\n    lookup: VersionLookup<'_>,\n) -> Result<ResolvedVersionRef, CliError> {\n    match lookup {\n        VersionLookup::Id(id) => resolve_version_by_id(lix, id),\n        VersionLookup::Name(name) => resolve_version_by_name(lix, name),\n    }\n}\n\npub(super) fn resolve_active_version_ref(lix: &Lix) -> Result<ResolvedVersionRef, CliError> {\n    let active_id = crate::db::block_on(lix.active_version_id())\n        .map_err(|error| CliError::msg(error.to_string()))?;\n    resolve_version_by_id(lix, &active_id)\n}\n\nfn resolve_version_by_id(lix: &Lix, id: &str) -> Result<ResolvedVersionRef, CliError> {\n    let result = crate::db::block_on(lix.execute(\n        \"SELECT id, name FROM lix_version WHERE id = $1 LIMIT 1\",\n        &[Value::Text(id.to_string())],\n    ))\n    .map_err(|error| CliError::msg(error.to_string()))?;\n    let rows = statement_rows(&result)?;\n    let Some(row) = rows.first() else {\n        return Err(CliError::msg(format!(\"no version exists with id '{id}'\")));\n    };\n\n    Ok(ResolvedVersionRef {\n        
id: text_at(row, 0, \"lix_version.id\")?,\n        name: text_at(row, 1, \"lix_version.name\")?,\n    })\n}\n\nfn resolve_version_by_name(lix: &Lix, name: &str) -> Result<ResolvedVersionRef, CliError> {\n    let result = crate::db::block_on(lix.execute(\n        \"SELECT id, name FROM lix_version WHERE name = $1 ORDER BY id\",\n        &[Value::Text(name.to_string())],\n    ))\n    .map_err(|error| CliError::msg(error.to_string()))?;\n    let rows = statement_rows(&result)?;\n    match rows {\n        [] => Err(CliError::msg(format!(\n            \"no version exists with name '{name}'\"\n        ))),\n        [row] => Ok(ResolvedVersionRef {\n            id: text_at(row, 0, \"lix_version.id\")?,\n            name: text_at(row, 1, \"lix_version.name\")?,\n        }),\n        rows => {\n            let matching_ids = rows\n                .iter()\n                .map(|row| text_at(row, 0, \"lix_version.id\"))\n                .collect::<Result<Vec<_>, _>>()?\n                .join(\", \");\n            Err(CliError::msg(format!(\n                \"version name '{name}' is ambiguous; matching ids: {matching_ids}\"\n            )))\n        }\n    }\n}\n\nfn statement_rows(result: &ExecuteResult) -> Result<&[LixRow], CliError> {\n    Ok(result.rows())\n}\n\nfn text_at(row: &LixRow, index: usize, field: &str) -> Result<String, CliError> {\n    match row.get_index(index) {\n        Some(Value::Text(value)) if !value.is_empty() => Ok(value.clone()),\n        Some(Value::Text(_)) => Err(CliError::msg(format!(\"{field} is empty\"))),\n        Some(Value::Integer(value)) => Ok(value.to_string()),\n        Some(other) => Err(CliError::msg(format!(\n            \"expected text-like value for {field}, got {other:?}\"\n        ))),\n        None => Err(CliError::msg(format!(\"missing {field}\"))),\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{create, merge, resolve_version_ref, switch, VersionLookup};\n    use crate::app::AppContext;\n    use 
crate::cli::version::{CreateVersionCommand, MergeVersionCommand, SwitchVersionCommand};\n    use crate::db::{init_lix_at, open_lix_at};\n    use lix_rs_sdk::{CreateVersionOptions, ExecuteResult, Value};\n    use std::path::{Path, PathBuf};\n    use std::time::{SystemTime, UNIX_EPOCH};\n\n    fn temp_lix_path(label: &str) -> PathBuf {\n        let nanos = SystemTime::now()\n            .duration_since(UNIX_EPOCH)\n            .expect(\"system time should be after unix epoch\")\n            .as_nanos();\n        std::env::temp_dir().join(format!(\n            \"lix-cli-version-{label}-{}-{nanos}.lix\",\n            std::process::id()\n        ))\n    }\n\n    fn cleanup_lix_path(path: &Path) {\n        let _ = std::fs::remove_file(path);\n        let _ = std::fs::remove_file(format!(\"{}-wal\", path.display()));\n        let _ = std::fs::remove_file(format!(\"{}-shm\", path.display()));\n        let _ = std::fs::remove_file(format!(\"{}-journal\", path.display()));\n    }\n\n    fn text_at(result: &ExecuteResult, row: usize, col: usize) -> String {\n        match result.rows().get(row).and_then(|row| row.get_index(col)) {\n            Some(Value::Text(value)) => {\n                serde_json::from_str::<String>(value).unwrap_or_else(|_| value.clone())\n            }\n            Some(Value::Json(serde_json::Value::String(value))) => value.clone(),\n            Some(Value::Json(value)) => value.to_string(),\n            Some(Value::Integer(value)) => value.to_string(),\n            other => panic!(\"expected text-like value, got {other:?}\"),\n        }\n    }\n\n    #[test]\n    fn fast_forward_merge_keeps_database_openable_across_fresh_opens() {\n        std::thread::Builder::new()\n            .name(\"fast_forward_merge_keeps_database_openable_across_fresh_opens\".to_string())\n            .stack_size(32 * 1024 * 1024)\n            .spawn(|| {\n                fast_forward_merge_keeps_database_openable_across_fresh_opens_inner();\n            })\n            
.expect(\"test thread should spawn\")\n            .join()\n            .expect(\"test thread should not panic\");\n    }\n\n    fn fast_forward_merge_keeps_database_openable_across_fresh_opens_inner() {\n        let path = temp_lix_path(\"fast-forward-openable\");\n        cleanup_lix_path(&path);\n\n        init_lix_at(&path).expect(\"lix init should succeed\");\n        let context = AppContext {\n            lix_path: Some(path.clone()),\n            no_hints: true,\n        };\n\n        let lix = open_lix_at(&path).expect(\"initial open should succeed\");\n        crate::db::block_on(lix.execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('greeting', 'hello')\",\n            &[],\n        ))\n        .expect(\"main insert should succeed\");\n\n        create::run(\n            &context,\n            CreateVersionCommand {\n                id: Some(\"feature\".to_string()),\n                name: Some(\"feature\".to_string()),\n                from_id: None,\n                from_name: None,\n                hidden: false,\n            },\n        )\n        .expect(\"version create should succeed\");\n\n        switch::run(\n            &context,\n            SwitchVersionCommand {\n                id: None,\n                name: Some(\"feature\".to_string()),\n            },\n        )\n        .expect(\"version switch should succeed\");\n\n        let lix = open_lix_at(&path).expect(\"open on feature should succeed\");\n        crate::db::block_on(lix.execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('feature_key', 'feature_val')\",\n            &[],\n        ))\n        .expect(\"feature insert should succeed\");\n\n        let lix = open_lix_at(&path).expect(\"open for id lookup should succeed\");\n        let main_id_result = crate::db::block_on(lix.execute(\n            \"SELECT id FROM lix_version WHERE name = 'main' LIMIT 1\",\n            &[],\n        ))\n        .expect(\"main id lookup should succeed\");\n  
      let main_id = text_at(&main_id_result, 0, 0);\n\n        merge::run(\n            &context,\n            MergeVersionCommand {\n                source_id: None,\n                source_name: Some(\"feature\".to_string()),\n                target_id: Some(main_id.clone()),\n                target_name: None,\n            },\n        )\n        .expect(\"fast-forward merge should succeed\");\n\n        let reopened = open_lix_at(&path).expect(\"database should reopen after fast-forward merge\");\n        let select_result = crate::db::block_on(reopened.execute(\"SELECT 1\", &[]))\n            .expect(\"reopened query should succeed\");\n        assert_eq!(text_at(&select_result, 0, 0), \"1\");\n\n        switch::run(\n            &context,\n            SwitchVersionCommand {\n                id: Some(main_id),\n                name: None,\n            },\n        )\n        .expect(\"switch back to main should succeed\");\n        let reopened = open_lix_at(&path).expect(\"main reopen should succeed\");\n        let feature_result = crate::db::block_on(reopened.execute(\n            \"SELECT value FROM lix_key_value WHERE key = 'feature_key' LIMIT 1\",\n            &[],\n        ))\n        .expect(\"feature key query should succeed\");\n        assert_eq!(text_at(&feature_result, 0, 0), \"feature_val\");\n\n        cleanup_lix_path(&path);\n    }\n\n    #[test]\n    fn resolve_version_ref_by_name_rejects_ambiguous_matches() {\n        std::thread::Builder::new()\n            .name(\"resolve_version_ref_by_name_rejects_ambiguous_matches\".to_string())\n            .stack_size(32 * 1024 * 1024)\n            .spawn(resolve_version_ref_by_name_rejects_ambiguous_matches_inner)\n            .expect(\"test thread should spawn\")\n            .join()\n            .expect(\"test thread should not panic\");\n    }\n\n    fn resolve_version_ref_by_name_rejects_ambiguous_matches_inner() {\n        let path = temp_lix_path(\"ambiguous-version-name\");\n        
cleanup_lix_path(&path);\n\n        init_lix_at(&path).expect(\"lix init should succeed\");\n        let lix = open_lix_at(&path).expect(\"open should succeed\");\n        crate::db::block_on(lix.create_version(CreateVersionOptions {\n            id: Some(\"feature-a\".to_string()),\n            name: \"feature\".to_string(),\n            from_commit_id: None,\n        }))\n        .expect(\"first version create should succeed\");\n        crate::db::block_on(lix.create_version(CreateVersionOptions {\n            id: Some(\"feature-b\".to_string()),\n            name: \"feature\".to_string(),\n            from_commit_id: None,\n        }))\n        .expect(\"second version create should succeed\");\n\n        let error = resolve_version_ref(&lix, VersionLookup::Name(\"feature\"))\n            .expect_err(\"ambiguous version name should fail\");\n        assert_eq!(\n            error.to_string(),\n            \"version name 'feature' is ambiguous; matching ids: feature-a, feature-b\"\n        );\n\n        cleanup_lix_path(&path);\n    }\n\n    #[test]\n    fn resolve_version_ref_by_name_rejects_missing_match() {\n        std::thread::Builder::new()\n            .name(\"resolve_version_ref_by_name_rejects_missing_match\".to_string())\n            .stack_size(32 * 1024 * 1024)\n            .spawn(resolve_version_ref_by_name_rejects_missing_match_inner)\n            .expect(\"test thread should spawn\")\n            .join()\n            .expect(\"test thread should not panic\");\n    }\n\n    fn resolve_version_ref_by_name_rejects_missing_match_inner() {\n        let path = temp_lix_path(\"missing-version-name\");\n        cleanup_lix_path(&path);\n\n        init_lix_at(&path).expect(\"lix init should succeed\");\n        let lix = open_lix_at(&path).expect(\"open should succeed\");\n\n        let error = resolve_version_ref(&lix, VersionLookup::Name(\"missing\"))\n            .expect_err(\"missing version name should fail\");\n        assert_eq!(error.to_string(), 
\"no version exists with name 'missing'\");\n\n        cleanup_lix_path(&path);\n    }\n}\n"
  },
  {
    "path": "packages/cli/src/commands/version/switch.rs",
    "content": "use crate::app::AppContext;\nuse crate::cli::version::SwitchVersionCommand;\nuse crate::commands::version::{resolve_version_ref, VersionLookup};\nuse crate::db::{open_lix_at, resolve_db_path};\nuse crate::error::CliError;\nuse crate::hints::CommandOutput;\nuse lix_rs_sdk::SwitchVersionOptions;\n\npub fn run(context: &AppContext, command: SwitchVersionCommand) -> Result<CommandOutput, CliError> {\n    let path = resolve_db_path(context)?;\n    let lix = open_lix_at(&path)?;\n    let resolved = resolve_version_ref(\n        &lix,\n        match (command.id.as_deref(), command.name.as_deref()) {\n            (Some(id), None) => VersionLookup::Id(id),\n            (None, Some(name)) => VersionLookup::Name(name),\n            _ => {\n                return Err(CliError::msg(\n                    \"version switch requires exactly one of --id or --name\",\n                ));\n            }\n        },\n    )?;\n    crate::db::block_on(lix.switch_version(SwitchVersionOptions {\n        version_id: resolved.id.clone(),\n    }))\n    .map_err(|error| CliError::msg(error.to_string()))?;\n\n    println!(\n        \"Switched active version to {} ({})\",\n        resolved.name, resolved.id\n    );\n    Ok(CommandOutput::empty())\n}\n"
  },
  {
    "path": "packages/cli/src/db/mod.rs",
    "content": "use crate::app::AppContext;\nuse crate::error::CliError;\nuse async_trait::async_trait;\nuse base64::Engine as _;\nuse lix_rs_sdk::{\n    open_lix, KvPair, KvScanRange, Lix, LixBackend, LixBackendTransaction, LixError,\n    OpenLixOptions, TransactionBeginMode,\n};\nuse serde::{Deserialize, Serialize};\nuse std::collections::BTreeMap;\nuse std::fs;\nuse std::path::{Path, PathBuf};\nuse std::sync::{Arc, Mutex};\n\npub fn resolve_db_path(context: &AppContext) -> Result<PathBuf, CliError> {\n    if let Some(path) = &context.lix_path {\n        validate_lix_file_path(path)?;\n        if !path.exists() {\n            return Err(CliError::msg(format!(\n                \"lix file does not exist: {}\",\n                path.display()\n            )));\n        }\n        return Ok(path.clone());\n    }\n\n    let cwd =\n        std::env::current_dir().map_err(|source| CliError::io(\"failed to read cwd\", source))?;\n    let mut candidates = find_lix_files(&cwd)?;\n\n    if candidates.is_empty() {\n        return Err(CliError::msg(\n            \"no .lix files found in current directory; pass --path <path-to-file.lix>\",\n        ));\n    }\n    if candidates.len() > 1 {\n        candidates.sort();\n        let paths = candidates\n            .iter()\n            .map(|path| path.display().to_string())\n            .collect::<Vec<_>>()\n            .join(\", \");\n        return Err(CliError::msg(format!(\n            \"multiple .lix files found ({paths}); pass --path <path-to-file.lix>\"\n        )));\n    }\n\n    Ok(candidates.remove(0))\n}\n\npub fn open_lix_at(path: &Path) -> Result<Lix, CliError> {\n    let backend = FileBackend::from_path(path)?;\n\n    block_on(open_lix(OpenLixOptions {\n        backend: Some(Box::new(backend)),\n    }))\n    .map_err(|err| CliError::msg(format!(\"failed to open lix at {}: {}\", path.display(), err)))\n}\n\npub fn init_lix_at(path: &Path) -> Result<bool, CliError> {\n    validate_lix_file_path(path)?;\n\n    if let 
Some(parent) = path.parent() {\n        if !parent.as_os_str().is_empty() {\n            fs::create_dir_all(parent).map_err(|source| {\n                CliError::io(\"failed to create parent directory for lix file\", source)\n            })?;\n        }\n    }\n\n    let initialized = !path.exists();\n    let _ = open_lix_at(path)?;\n    Ok(initialized)\n}\n\npub fn destroy_lix_at(path: &Path) -> Result<(), CliError> {\n    match fs::remove_file(path) {\n        Ok(()) => Ok(()),\n        Err(error) if error.kind() == std::io::ErrorKind::NotFound => Ok(()),\n        Err(error) => Err(CliError::io(\"failed to destroy lix file\", error)),\n    }\n    .and_then(|_| remove_sidecar(path, \"wal\"))\n    .and_then(|_| remove_sidecar(path, \"shm\"))\n    .and_then(|_| remove_sidecar(path, \"journal\"))\n}\n\n/// Prepares a `.lix` output target for initialization.\n///\n/// The CLI delegates storage-backed cleanup to the backend boundary so command\n/// code does not need to know how a backend represents its physical artifacts.\npub fn prepare_lix_output_path(path: &Path, force: bool) -> Result<(), CliError> {\n    validate_lix_file_path(path)?;\n\n    if let Some(parent) = path.parent() {\n        if !parent.as_os_str().is_empty() {\n            fs::create_dir_all(parent)\n                .map_err(|source| CliError::io(\"failed to create output directory\", source))?;\n        }\n    }\n\n    if path.exists() && path.is_dir() {\n        return Err(CliError::msg(format!(\n            \"output path points to a directory, expected a file: {}\",\n            path.display()\n        )));\n    }\n\n    if force {\n        destroy_lix_at(path)?;\n        return Ok(());\n    }\n\n    if path.exists() {\n        return Err(CliError::msg(format!(\n            \"output path already exists: {}\",\n            path.display()\n        )));\n    }\n\n    Ok(())\n}\n\nfn find_lix_files(cwd: &Path) -> Result<Vec<PathBuf>, CliError> {\n    let mut files = Vec::new();\n    let entries =\n    
    fs::read_dir(cwd).map_err(|source| CliError::io(\"failed to read cwd entries\", source))?;\n    for entry in entries {\n        let entry =\n            entry.map_err(|source| CliError::io(\"failed to read directory entry\", source))?;\n        let path = entry.path();\n        if !path.is_file() {\n            continue;\n        }\n        if path.extension().and_then(|ext| ext.to_str()) == Some(\"lix\") {\n            files.push(path);\n        }\n    }\n    files.sort();\n    Ok(files)\n}\n\nfn validate_lix_file_path(path: &Path) -> Result<(), CliError> {\n    if path.extension().and_then(|ext| ext.to_str()) == Some(\"lix\") {\n        return Ok(());\n    }\n\n    Err(CliError::msg(format!(\n        \"expected a .lix file path: {}\",\n        path.display()\n    )))\n}\n\npub fn block_on<F: std::future::Future>(future: F) -> F::Output {\n    tokio::runtime::Builder::new_current_thread()\n        .enable_all()\n        .build()\n        .expect(\"tokio runtime should initialize\")\n        .block_on(future)\n}\n\nfn remove_sidecar(path: &Path, suffix: &str) -> Result<(), CliError> {\n    let sidecar = PathBuf::from(format!(\"{}-{suffix}\", path.display()));\n    match fs::remove_file(sidecar) {\n        Ok(()) => Ok(()),\n        Err(error) if error.kind() == std::io::ErrorKind::NotFound => Ok(()),\n        Err(error) => Err(CliError::io(\"failed to destroy lix sidecar file\", error)),\n    }\n}\n\ntype KvMap = BTreeMap<(String, Vec<u8>), Vec<u8>>;\n\n#[derive(Clone)]\nstruct FileBackend {\n    path: Arc<PathBuf>,\n    kv: Arc<Mutex<KvMap>>,\n}\n\nimpl FileBackend {\n    fn from_path(path: &Path) -> Result<Self, CliError> {\n        let kv = read_kv_file(path)?;\n        Ok(Self {\n            path: Arc::new(path.to_path_buf()),\n            kv: Arc::new(Mutex::new(kv)),\n        })\n    }\n}\n\n#[async_trait]\nimpl LixBackend for FileBackend {\n    async fn begin_transaction(\n        &self,\n        mode: TransactionBeginMode,\n    ) -> Result<Box<dyn 
LixBackendTransaction + Send + Sync + 'static>, LixError> {\n        let snapshot = self\n            .kv\n            .lock()\n            .map_err(|_| lock_error(\"cli file backend kv\"))?\n            .clone();\n        Ok(Box::new(FileBackendTransaction {\n            mode,\n            path: Arc::clone(&self.path),\n            parent: Arc::clone(&self.kv),\n            kv: snapshot,\n        }))\n    }\n\n    async fn kv_get(&self, namespace: &str, key: &[u8]) -> Result<Option<Vec<u8>>, LixError> {\n        Ok(self\n            .kv\n            .lock()\n            .map_err(|_| lock_error(\"cli file backend kv\"))?\n            .get(&(namespace.to_string(), key.to_vec()))\n            .cloned())\n    }\n\n    async fn kv_scan(\n        &self,\n        namespace: &str,\n        range: KvScanRange,\n        limit: Option<usize>,\n    ) -> Result<Vec<KvPair>, LixError> {\n        let guard = self\n            .kv\n            .lock()\n            .map_err(|_| lock_error(\"cli file backend kv\"))?;\n        Ok(scan_map(&guard, namespace, &range, limit))\n    }\n}\n\nstruct FileBackendTransaction {\n    mode: TransactionBeginMode,\n    path: Arc<PathBuf>,\n    parent: Arc<Mutex<KvMap>>,\n    kv: KvMap,\n}\n\n#[async_trait]\nimpl LixBackendTransaction for FileBackendTransaction {\n    fn mode(&self) -> TransactionBeginMode {\n        self.mode\n    }\n\n    async fn kv_get(&mut self, namespace: &str, key: &[u8]) -> Result<Option<Vec<u8>>, LixError> {\n        Ok(self.kv.get(&(namespace.to_string(), key.to_vec())).cloned())\n    }\n\n    async fn kv_scan(\n        &mut self,\n        namespace: &str,\n        range: KvScanRange,\n        limit: Option<usize>,\n    ) -> Result<Vec<KvPair>, LixError> {\n        Ok(scan_map(&self.kv, namespace, &range, limit))\n    }\n\n    async fn kv_put(&mut self, namespace: &str, key: &[u8], value: &[u8]) -> Result<(), LixError> {\n        self.kv\n            .insert((namespace.to_string(), key.to_vec()), value.to_vec());\n        
Ok(())\n    }\n\n    async fn kv_delete(&mut self, namespace: &str, key: &[u8]) -> Result<(), LixError> {\n        self.kv.remove(&(namespace.to_string(), key.to_vec()));\n        Ok(())\n    }\n\n    async fn commit(self: Box<Self>) -> Result<(), LixError> {\n        write_kv_file(&self.path, &self.kv)?;\n        *self\n            .parent\n            .lock()\n            .map_err(|_| lock_error(\"cli file backend kv\"))? = self.kv;\n        Ok(())\n    }\n\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        Ok(())\n    }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\nstruct FileSnapshot {\n    entries: Vec<FileEntry>,\n}\n\n#[derive(Debug, Serialize, Deserialize)]\nstruct FileEntry {\n    namespace: String,\n    key: String,\n    value: String,\n}\n\nfn read_kv_file(path: &Path) -> Result<KvMap, CliError> {\n    if !path.exists() {\n        return Ok(KvMap::new());\n    }\n    let bytes = fs::read(path).map_err(|source| CliError::io(\"failed to read lix file\", source))?;\n    if bytes.is_empty() {\n        return Ok(KvMap::new());\n    }\n    let snapshot: FileSnapshot = serde_json::from_slice(&bytes)\n        .map_err(|error| CliError::msg(format!(\"failed to decode lix file: {error}\")))?;\n    let mut kv = KvMap::new();\n    for entry in snapshot.entries {\n        kv.insert(\n            (entry.namespace, decode_bytes(&entry.key)?),\n            decode_bytes(&entry.value)?,\n        );\n    }\n    Ok(kv)\n}\n\nfn write_kv_file(path: &Path, kv: &KvMap) -> Result<(), LixError> {\n    let snapshot = FileSnapshot {\n        entries: kv\n            .iter()\n            .map(|((namespace, key), value)| FileEntry {\n                namespace: namespace.clone(),\n                key: encode_bytes(key),\n                value: encode_bytes(value),\n            })\n            .collect(),\n    };\n    let bytes = serde_json::to_vec(&snapshot).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            
format!(\"failed to encode lix file snapshot: {error}\"),\n        )\n    })?;\n    fs::write(path, bytes).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"failed to write lix file '{}': {error}\", path.display()),\n        )\n    })\n}\n\nfn scan_map(kv: &KvMap, namespace: &str, range: &KvScanRange, limit: Option<usize>) -> Vec<KvPair> {\n    let mut pairs = kv\n        .iter()\n        .filter(|((candidate_namespace, key), _)| {\n            candidate_namespace == namespace && key_matches_range(key, range)\n        })\n        .map(|((_, key), value)| KvPair::new(key.clone(), value.clone()))\n        .collect::<Vec<_>>();\n    pairs.sort_by(|left, right| left.key.cmp(&right.key));\n    if let Some(limit) = limit {\n        pairs.truncate(limit);\n    }\n    pairs\n}\n\nfn key_matches_range(key: &[u8], range: &KvScanRange) -> bool {\n    match range {\n        KvScanRange::Prefix(prefix) => key.starts_with(prefix),\n        KvScanRange::Range { start, end } => start.as_slice() <= key && key < end.as_slice(),\n    }\n}\n\nfn encode_bytes(bytes: &[u8]) -> String {\n    base64::engine::general_purpose::STANDARD.encode(bytes)\n}\n\nfn decode_bytes(value: &str) -> Result<Vec<u8>, CliError> {\n    base64::engine::general_purpose::STANDARD\n        .decode(value)\n        .map_err(|error| CliError::msg(format!(\"failed to decode lix file bytes: {error}\")))\n}\n\nfn lock_error(name: &str) -> LixError {\n    LixError::new(\"LIX_ERROR_UNKNOWN\", format!(\"{name} mutex was poisoned\"))\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{init_lix_at, prepare_lix_output_path, resolve_db_path};\n    use crate::app::AppContext;\n    use std::fs;\n    use std::path::PathBuf;\n    use std::time::{SystemTime, UNIX_EPOCH};\n\n    #[test]\n    fn resolve_db_path_rejects_explicit_non_lix_path() {\n        let temp_dir = unique_temp_dir();\n        fs::create_dir_all(&temp_dir).expect(\"temp dir should be created\");\n        let path = 
temp_dir.join(\"project.sqlite\");\n        fs::write(&path, b\"not-lix\").expect(\"seed file should be written\");\n        let context = AppContext {\n            lix_path: Some(path.clone()),\n            no_hints: false,\n        };\n\n        let error = resolve_db_path(&context).expect_err(\"non-.lix path should be rejected\");\n        assert_eq!(\n            error.to_string(),\n            format!(\"expected a .lix file path: {}\", path.display())\n        );\n\n        fs::remove_file(&path).expect(\"seed file should be removable\");\n        fs::remove_dir_all(&temp_dir).expect(\"temp dir should be removable\");\n    }\n\n    #[test]\n    fn init_lix_at_rejects_non_lix_path() {\n        let temp_dir = unique_temp_dir();\n        let path = temp_dir.join(\"project.sqlite\");\n\n        let error = init_lix_at(&path).expect_err(\"non-.lix init path should be rejected\");\n        assert_eq!(\n            error.to_string(),\n            format!(\"expected a .lix file path: {}\", path.display())\n        );\n        assert!(\n            !temp_dir.exists(),\n            \"validator should reject before creating parent directories\"\n        );\n    }\n\n    #[test]\n    fn prepare_output_path_rejects_non_lix_path() {\n        let temp_dir = unique_temp_dir();\n        let path = temp_dir.join(\"output.db\");\n\n        let error = prepare_lix_output_path(&path, false)\n            .expect_err(\"non-.lix output path should be rejected\");\n        assert_eq!(\n            error.to_string(),\n            format!(\"expected a .lix file path: {}\", path.display())\n        );\n        assert!(\n            !temp_dir.exists(),\n            \"validator should reject before creating parent directories\"\n        );\n    }\n\n    fn unique_temp_dir() -> PathBuf {\n        let nanos = SystemTime::now()\n            .duration_since(UNIX_EPOCH)\n            .expect(\"system time should be after unix epoch\")\n            .as_nanos();\n        
std::env::temp_dir().join(format!(\"lix-cli-db-test-{}-{nanos}\", std::process::id()))\n    }\n}\n"
  },
  {
    "path": "packages/cli/src/error.rs",
    "content": "use lix_rs_sdk::LixError;\nuse std::fmt::{Display, Formatter};\n\n#[derive(Debug)]\npub enum CliError {\n    InvalidArgs(&'static str),\n    Message(String),\n    Io {\n        context: &'static str,\n        source: std::io::Error,\n    },\n    Lix {\n        context: &'static str,\n        source: LixError,\n    },\n}\n\nimpl CliError {\n    pub fn io(context: &'static str, source: std::io::Error) -> Self {\n        Self::Io { context, source }\n    }\n\n    pub fn msg(message: impl Into<String>) -> Self {\n        Self::Message(message.into())\n    }\n\n    pub fn from_lix(context: &'static str, source: LixError) -> Self {\n        Self::Lix { context, source }\n    }\n\n    pub fn hint(&self) -> Option<&str> {\n        match self {\n            Self::Lix { source, .. } => source.hint.as_deref(),\n            _ => None,\n        }\n    }\n}\n\nimpl Display for CliError {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        match self {\n            Self::InvalidArgs(message) => write!(f, \"invalid arguments: {message}\"),\n            Self::Message(message) => write!(f, \"{message}\"),\n            Self::Io { context, source } => write!(f, \"{context}: {source}\"),\n            Self::Lix { context, source } => {\n                write!(f, \"{context}: {}\", source.description)\n            }\n        }\n    }\n}\n\nimpl std::error::Error for CliError {}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn hint_returns_none_for_non_lix_variants() {\n        assert_eq!(CliError::InvalidArgs(\"bad\").hint(), None);\n        assert_eq!(CliError::msg(\"oops\").hint(), None);\n        let io_err = CliError::io(\n            \"reading\",\n            std::io::Error::new(std::io::ErrorKind::Other, \"boom\"),\n        );\n        assert_eq!(io_err.hint(), None);\n    }\n\n    #[test]\n    fn hint_returns_lix_hint_when_attached() {\n        let lix_err = LixError::new(\"LIX_ERROR_FOO\", \"desc\").with_hint(\"try 
lix_json(...)\");\n        let cli_err = CliError::from_lix(\"sql execution failed\", lix_err);\n        assert_eq!(cli_err.hint(), Some(\"try lix_json(...)\"));\n    }\n\n    #[test]\n    fn hint_returns_none_when_lix_error_has_no_hint() {\n        let lix_err = LixError::new(\"LIX_ERROR_FOO\", \"desc\");\n        let cli_err = CliError::from_lix(\"sql execution failed\", lix_err);\n        assert_eq!(cli_err.hint(), None);\n    }\n\n    #[test]\n    fn display_format_omits_hint_line() {\n        // hints are rendered separately via `render_hints`, not via Display\n        let lix_err = LixError::new(\"LIX_ERROR_FOO\", \"boom\").with_hint(\"fix it\");\n        let cli_err = CliError::from_lix(\"sql execution failed\", lix_err);\n        assert_eq!(cli_err.to_string(), \"sql execution failed: boom\");\n    }\n}\n"
  },
  {
    "path": "packages/cli/src/hints.rs",
    "content": "use crate::error::CliError;\nuse lix_rs_sdk::{ExecuteResult, Lix, Value};\n\n#[derive(Debug)]\npub struct CommandOutput {\n    pub hints: Vec<String>,\n}\n\nimpl CommandOutput {\n    pub fn empty() -> Self {\n        Self { hints: Vec::new() }\n    }\n\n    pub fn with_hints(hints: Vec<String>) -> Self {\n        Self { hints }\n    }\n}\n\n// ── Hint generators (all hint text and conditions live here) ─────────\n\npub fn hint_after_init() -> Vec<String> {\n    vec![\n        \"Try inserting data with: lix sql execute \\\"INSERT INTO lix_key_value (key, value) VALUES ('hello', '\\\"world\\\"')\\\"\".into(),\n        \"Store files with: lix sql execute \\\"INSERT INTO lix_file (path, data) VALUES ('/readme.txt', lix_text_encode('hello'))\\\"\".into(),\n    ]\n}\n\npub fn hint_blob_in_result(result: &ExecuteResult) -> Vec<String> {\n    let has_blob = result\n        .rows()\n        .iter()\n        .any(|row| row.values().iter().any(|v| matches!(v, Value::Blob(_))));\n    if has_blob {\n        vec![\"Tip: use lix_text_decode(data) to view text content\".into()]\n    } else {\n        Vec::new()\n    }\n}\n\n/// Extract an engine-produced hint from a `CliError`, if any.\n///\n/// Returns an empty `Vec` for error variants that do not carry a `LixError`\n/// (e.g. `InvalidArgs`, `Message`, `Io`) or when the underlying `LixError`\n/// has no hint attached.\npub fn hint_from_error(err: &CliError) -> Vec<String> {\n    err.hint().map(|h| vec![h.to_string()]).unwrap_or_default()\n}\n\n// ── Infrastructure ───────────────────────────────────────────────────\n\n/// Query lix_key_value for 'lix_cli_hints'. 
Returns true unless value is explicitly \"false\".\npub fn are_hints_enabled(lix: &Lix) -> bool {\n    let result = crate::db::block_on(lix.execute(\n        \"SELECT value FROM lix_key_value WHERE key = 'lix_cli_hints'\",\n        &[],\n    ));\n    match result {\n        Ok(result) => {\n            if let Some(row) = result.rows().first() {\n                if let Ok(value) = row.get::<String>(\"value\") {\n                    return value != \"false\";\n                }\n            }\n            true // key absent = hints ON\n        }\n        Err(_) => true, // on error, default to hints ON\n    }\n}\n\n/// Print hints to stderr as \"hint: {message}\".\npub fn render_hints(hints: &[String]) {\n    for hint in hints {\n        eprintln!(\"hint: {hint}\");\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use lix_rs_sdk::LixError;\n\n    #[test]\n    fn hint_from_error_returns_empty_for_non_lix_variants() {\n        assert!(hint_from_error(&CliError::msg(\"oops\")).is_empty());\n        assert!(hint_from_error(&CliError::InvalidArgs(\"bad\")).is_empty());\n    }\n\n    #[test]\n    fn hint_from_error_returns_empty_when_lix_error_has_no_hint() {\n        let cli_err = CliError::from_lix(\"ctx\", LixError::new(\"CODE\", \"desc\"));\n        assert!(hint_from_error(&cli_err).is_empty());\n    }\n\n    #[test]\n    fn hint_from_error_returns_lix_hint() {\n        let cli_err = CliError::from_lix(\n            \"sql execution failed\",\n            LixError::new(\"CODE\", \"desc\").with_hint(\"use lix_json(...)\"),\n        );\n        assert_eq!(\n            hint_from_error(&cli_err),\n            vec![\"use lix_json(...)\".to_string()]\n        );\n    }\n}\n"
  },
  {
    "path": "packages/cli/src/lib.rs",
    "content": "pub mod app;\npub mod cli;\npub mod commands;\npub mod db;\npub mod error;\npub mod hints;\npub mod output;\n\npub fn run() -> Result<(), error::CliError> {\n    app::run()\n}\n"
  },
  {
    "path": "packages/cli/src/main.rs",
    "content": "fn main() {\n    if lix_cli::run().is_err() {\n        std::process::exit(1);\n    }\n}\n"
  },
  {
    "path": "packages/cli/src/output/mod.rs",
    "content": "use base64::Engine as _;\nuse comfy_table::{presets::UTF8_BORDERS_ONLY, Cell, ContentArrangement, Row, Table};\nuse lix_rs_sdk::{ExecuteResult, Value};\nuse serde_json::Value as JsonValue;\n\npub fn print_execute_result_table(result: &ExecuteResult) {\n    if result.columns().is_empty() && result.rows().is_empty() {\n        println!(\"OK\");\n        if result.rows_affected() > 0 {\n            println!(\"({} rows affected)\", result.rows_affected());\n        }\n        return;\n    }\n\n    let mut table = Table::new();\n    table\n        .load_preset(UTF8_BORDERS_ONLY)\n        .set_content_arrangement(ContentArrangement::Dynamic);\n\n    if !result.columns().is_empty() {\n        let header = Row::from(result.columns().iter().map(Cell::new).collect::<Vec<_>>());\n        table.set_header(header);\n    }\n\n    for row in result.rows() {\n        let rendered = Row::from(\n            row.values()\n                .iter()\n                .map(|value| Cell::new(value_to_text(value)))\n                .collect::<Vec<_>>(),\n        );\n        table.add_row(rendered);\n    }\n\n    println!(\"{table}\");\n    println!(\"({} rows)\", result.rows().len());\n}\n\npub fn print_execute_result_json(result: &ExecuteResult) {\n    let payload = execute_result_to_json(result);\n    println!(\n        \"{}\",\n        serde_json::to_string(&payload).unwrap_or_else(|_| \"{}\".to_string())\n    );\n}\n\nfn execute_result_to_json(result: &ExecuteResult) -> JsonValue {\n    serde_json::json!({\n        \"columns\": result.columns(),\n        \"rows\": result.rows().iter().map(|row| row_to_json(result.columns(), row)).collect::<Vec<_>>(),\n        \"rowsAffected\": result.rows_affected(),\n        \"notices\": result.notices(),\n    })\n}\n\nfn row_to_json(columns: &[String], row: &lix_rs_sdk::Row) -> JsonValue {\n    let mut object = serde_json::Map::new();\n    for (index, column) in columns.iter().enumerate() {\n        let value = row\n            
.get_index(index)\n            .map(value_to_json)\n            .unwrap_or(JsonValue::Null);\n        object.insert(column.clone(), value);\n    }\n    JsonValue::Object(object)\n}\n\nfn value_to_text(value: &Value) -> String {\n    match value {\n        Value::Null => \"null\".to_string(),\n        Value::Boolean(v) => v.to_string(),\n        Value::Integer(v) => v.to_string(),\n        Value::Real(v) => v.to_string(),\n        Value::Text(v) => v.clone(),\n        Value::Json(v) => v.to_string(),\n        Value::Blob(bytes) => bytes_to_hex(bytes),\n    }\n}\n\nfn value_to_json(value: &Value) -> JsonValue {\n    match value {\n        Value::Null => JsonValue::Null,\n        Value::Boolean(v) => JsonValue::Bool(*v),\n        Value::Integer(v) => serde_json::json!(v),\n        Value::Real(v) => serde_json::Number::from_f64(*v)\n            .map(JsonValue::Number)\n            .unwrap_or(JsonValue::Null),\n        Value::Text(v) => JsonValue::String(v.clone()),\n        Value::Json(v) => v.clone(),\n        Value::Blob(bytes) => serde_json::json!({\n            \"$blob\": base64::engine::general_purpose::STANDARD.encode(bytes),\n        }),\n    }\n}\n\nfn bytes_to_hex(bytes: &[u8]) -> String {\n    let mut out = String::with_capacity(bytes.len() * 2 + 2);\n    out.push_str(\"0x\");\n    for byte in bytes {\n        out.push(hex_digit(byte >> 4));\n        out.push(hex_digit(byte & 0x0f));\n    }\n    out\n}\n\nfn hex_digit(value: u8) -> char {\n    match value {\n        0..=9 => (b'0' + value) as char,\n        10..=15 => (b'a' + (value - 10)) as char,\n        _ => '0',\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn value_to_json_uses_blob_tagged_shape() {\n        let value = Value::Blob(vec![0x01, 0x02, 0x03]);\n        let json = value_to_json(&value);\n        assert_eq!(\n            json,\n            serde_json::json!({\n                \"$blob\": \"AQID\"\n            })\n        );\n    }\n\n    #[test]\n    fn 
value_to_json_uses_native_scalars() {\n        assert_eq!(value_to_json(&Value::Null), JsonValue::Null);\n        assert_eq!(value_to_json(&Value::Boolean(true)), JsonValue::Bool(true));\n        assert_eq!(value_to_json(&Value::Integer(7)), serde_json::json!(7));\n        assert_eq!(value_to_json(&Value::Real(2.5)), serde_json::json!(2.5));\n        assert_eq!(\n            value_to_json(&Value::Text(\"hello\".to_string())),\n            JsonValue::String(\"hello\".to_string())\n        );\n        assert_eq!(\n            value_to_json(&Value::Json(serde_json::json!({\"ok\": true}))),\n            serde_json::json!({\"ok\": true})\n        );\n    }\n\n    #[test]\n    fn execute_result_to_json_preserves_envelope_and_order() {\n        let result = ExecuteResult::from_rows(\n            vec![\"n\".to_string(), \"payload\".to_string()],\n            vec![\n                vec![Value::Integer(1), Value::Text(\"a\".to_string())],\n                vec![Value::Integer(2), Value::Blob(vec![0x01, 0x02])],\n            ],\n        );\n\n        assert_eq!(\n            execute_result_to_json(&result),\n            serde_json::json!({\n                \"columns\": [\"n\", \"payload\"],\n                \"rows\": [\n                    {\"n\": 1, \"payload\": \"a\"},\n                    {\"n\": 2, \"payload\": {\"$blob\": \"AQI=\"}},\n                ],\n                \"rowsAffected\": 0,\n                \"notices\": [],\n            })\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/.gitignore",
    "content": "benches/results/\n\n# local rust build output when invoked from this package\ntarget/\n\n# criterion benchmark output\ncriterion/\n\n# local sqlite artifacts from benchmark runs\n*.sqlite\n*.sqlite-journal\n*.sqlite-wal\n*.sqlite-shm\n"
  },
  {
    "path": "packages/engine/AGENTS.md",
    "content": "## Lix Engine\n\n- testing with sqlite simulation is enough for development. before committing, test the all simulations\n"
  },
  {
    "path": "packages/engine/Cargo.toml",
    "content": "[package]\nname = \"lix_engine\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n[features]\nstorage-benches = []\n\n[[bench]]\nname = \"storage\"\npath = \"benches/storage/main.rs\"\nharness = false\nrequired-features = [\"storage-benches\"]\n\n[[bench]]\nname = \"transaction\"\npath = \"benches/transaction/main.rs\"\nharness = false\nrequired-features = [\"storage-benches\"]\n\n[[bench]]\nname = \"physical_layout\"\npath = \"benches/physical_layout/main.rs\"\nharness = false\nrequired-features = [\"storage-benches\"]\n\n[[bench]]\nname = \"json_pointer_crud\"\npath = \"benches/json_pointer_crud/main.rs\"\nharness = false\nrequired-features = [\"storage-benches\"]\n\n[[bench]]\nname = \"optimization9_sql2\"\npath = \"benches/optimization9_sql2/main.rs\"\nharness = false\nrequired-features = [\"storage-benches\"]\n\n[[bench]]\nname = \"json_pointer_physical\"\npath = \"benches/json_pointer_physical/main.rs\"\nharness = false\nrequired-features = [\"storage-benches\"]\n\n[dependencies]\nasync-trait = \"0.1\"\ncel = { version = \"0.12.0\", features = [\"json\"] }\nchrono = { version = \"0.4\", default-features = false, features = [\"clock\", \"std\", \"wasmbind\"] }\ndatafusion = { version = \"53.0.0\", default-features = false, features = [\n    \"sql\",\n    \"nested_expressions\",\n    \"datetime_expressions\",\n    \"regex_expressions\",\n    \"string_expressions\",\n    \"unicode_expressions\",\n] }\nflatbuffers = \"=25.12.19\"\nserde = { version = \"1\", features = [\"derive\"] }\nserde_json = \"1\"\njsonschema = { version = \"0.17\", default-features = false, features = [\"draft202012\"] }\nglobset = \"0.4\"\nuuid = { version = \"1\", features = [\"v7\", \"std\", \"js\"] }\nunicode-normalization = \"0.1\"\nprecis-profiles = \"0.1.13\"\nfutures-util = { version = \"0.3\", default-features = false, features = [\"std\"] }\ntokio = { version = \"1\", features = [\"rt\"] }\nblake3 = \"1\"\nfastcdc = \"3\"\nxxhash-rust = { version = \"0.8\", features = 
[\"xxh3\"] }\nbase64 = \"0.22\"\n\n[dev-dependencies]\ncriterion = { package = \"codspeed-criterion-compat\", version = \"*\" }\niref = \"4.0.0\"\npaste = \"1\"\nrocksdb = { version = \"0.22\", default-features = false }\nrusqlite = { version = \"0.32\", features = [\"bundled\"] }\ntempfile = \"3\"\ntokio = { version = \"1\", features = [\"rt\", \"macros\", \"sync\"] }\n\n[target.'cfg(not(target_arch = \"wasm32\"))'.dependencies]\nzstd = \"0.13\"\n\n[target.'cfg(target_arch = \"wasm32\")'.dependencies]\nruzstd = { version = \"0.8\", default-features = false, features = [\"std\"] }\n"
  },
  {
    "path": "packages/engine/benches/fixtures/pnpm-lock.fixture.json",
    "content": "{\"lockfileVersion\":\"9.0\",\"settings\":{\"autoInstallPeers\":true,\"excludeLinksFromLockfile\":false},\"importers\":{\".\":{\"devDependencies\":{\"@changesets/cli\":{\"specifier\":\"^2.29.7\",\"version\":\"2.29.7(@types/node@24.10.2)\"},\"@vitest/coverage-v8\":{\"specifier\":\"^3.1.1\",\"version\":\"3.2.4(@vitest/browser@3.2.4)(vitest@3.2.4)\"},\"nx\":{\"specifier\":\"^21.0.0\",\"version\":\"21.4.1\"},\"nx-cloud\":{\"specifier\":\"^19.1.0\",\"version\":\"19.1.0\"},\"vitest\":{\"specifier\":\"^3.1.1\",\"version\":\"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}}},\"packages/js-kysely\":{\"dependencies\":{\"json-schema-to-ts\":{\"specifier\":\"^3.1.1\",\"version\":\"3.1.1\"},\"kysely\":{\"specifier\":\"^0.28.7\",\"version\":\"0.28.7\"}},\"devDependencies\":{\"@lix-js/sdk\":{\"specifier\":\"workspace:*\",\"version\":\"link:../js-sdk\"},\"typescript\":{\"specifier\":\"^5.5.4\",\"version\":\"5.9.3\"},\"vitest\":{\"specifier\":\"^4.0.18\",\"version\":\"4.0.18(@opentelemetry/api@1.9.0)(@types/node@24.10.2)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}}},\"packages/js-sdk\":{\"devDependencies\":{\"better-sqlite3\":{\"specifier\":\"^12.9.0\",\"version\":\"12.9.0\"},\"typescript\":{\"specifier\":\"^5.5.4\",\"version\":\"5.9.3\"},\"vitest\":{\"specifier\":\"^4.0.18\",\"version\":\"4.0.18(@opentelemetry/api@1.9.0)(@types/node@24.10.2)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}}},\"packages/react-utils\":{\"devDependencies\":{\"@lix-js/kysely\":{\
"specifier\":\"workspace:*\",\"version\":\"link:../js-kysely\"},\"@lix-js/sdk\":{\"specifier\":\"workspace:*\",\"version\":\"link:../js-sdk\"},\"@testing-library/react\":{\"specifier\":\"^16.3.0\",\"version\":\"16.3.0(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.7))(@types/react@19.2.7)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\"},\"@types/react\":{\"specifier\":\"^19.1.8\",\"version\":\"19.2.7\"},\"@vitest/coverage-v8\":{\"specifier\":\"^3.2.4\",\"version\":\"3.2.4(@vitest/browser@3.2.4)(vitest@3.2.4)\"},\"https-proxy-agent\":{\"specifier\":\"7.0.2\",\"version\":\"7.0.2\"},\"jsdom\":{\"specifier\":\"^26.1.0\",\"version\":\"26.1.0\"},\"oxlint\":{\"specifier\":\"^1.14.0\",\"version\":\"1.26.0\"},\"prettier\":{\"specifier\":\"^3.3.3\",\"version\":\"3.6.2\"},\"react\":{\"specifier\":\"19.2.0\",\"version\":\"19.2.0\"},\"react-dom\":{\"specifier\":\"19.2.0\",\"version\":\"19.2.0(react@19.2.0)\"},\"typescript\":{\"specifier\":\"^5.5.4\",\"version\":\"5.8.3\"},\"vitest\":{\"specifier\":\"^3.2.4\",\"version\":\"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@26.1.0)(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}}},\"packages/website\":{\"dependencies\":{\"@cloudflare/vite-plugin\":{\"specifier\":\"^1.36.0\",\"version\":\"1.36.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(workerd@1.20260504.1)(wrangler@4.88.0)\"},\"@lix-js/plugin-json\":{\"specifier\":\"1.0.1\",\"version\":\"1.0.1(tslib@2.8.1)\"},\"@lix-js/sdk\":{\"specifier\":\"workspace:*\",\"version\":\"link:../js-sdk\"},\"@opral/markdown-wc\":{\"specifier\":\"0.9.0\",\"version\":\"0.9.0\"},\"@tailwindcss/vite\":{\"specifier\":\"^4.2.4\",\"version\":\"4.2.4(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(ya
ml@2.8.1))\"},\"@tanstack/react-router\":{\"specifier\":\"^1.169.2\",\"version\":\"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\"},\"@tanstack/react-start\":{\"specifier\":\"^1.167.64\",\"version\":\"1.167.64(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\"},\"@tanstack/router-plugin\":{\"specifier\":\"^1.167.34\",\"version\":\"1.167.34(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\"},\"lucide-react\":{\"specifier\":\"^0.544.0\",\"version\":\"0.544.0(react@19.2.0)\"},\"posthog-js\":{\"specifier\":\"^1.321.2\",\"version\":\"1.321.2\"},\"react\":{\"specifier\":\"^19.2.0\",\"version\":\"19.2.0\"},\"react-dom\":{\"specifier\":\"^19.2.0\",\"version\":\"19.2.0(react@19.2.0)\"},\"shiki\":{\"specifier\":\"^3.2.2\",\"version\":\"3.15.0\"},\"tailwindcss\":{\"specifier\":\"^4.2.4\",\"version\":\"4.2.4\"}},\"devDependencies\":{\"@testing-library/dom\":{\"specifier\":\"^10.4.0\",\"version\":\"10.4.1\"},\"@testing-library/react\":{\"specifier\":\"^16.2.0\",\"version\":\"16.3.0(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.7))(@types/react@19.2.7)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\"},\"@types/node\":{\"specifier\":\"^22.10.2\",\"version\":\"22.15.33\"},\"@types/react\":{\"specifier\":\"^19.2.0\",\"version\":\"19.2.7\"},\"@types/react-dom\":{\"specifier\":\"^19.2.0\",\"version\":\"19.2.3(@types/react@19.2.7)\"},\"@vitejs/plugin-react\":{\"specifier\":\"^6.0.1\",\"version\":\"6.0.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\"},\"@vitest/browser\":{\"specifier\":\"^4.1.5\",\"version\":\"4.1.5(msw@2.10.2(@types/node@22.15.3
3)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@4.1.5)\"},\"@vitest/coverage-v8\":{\"specifier\":\"^4.1.5\",\"version\":\"4.1.5(@vitest/browser@4.1.5)(vitest@4.1.5)\"},\"jsdom\":{\"specifier\":\"^27.0.0\",\"version\":\"27.3.0(postcss@8.5.14)\"},\"prettier\":{\"specifier\":\"^3.6.0\",\"version\":\"3.6.2\"},\"typescript\":{\"specifier\":\"^5.7.2\",\"version\":\"5.8.3\"},\"vite\":{\"specifier\":\"^8.0.10\",\"version\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"},\"vite-plugin-static-copy\":{\"specifier\":\"^4.1.0\",\"version\":\"4.1.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\"},\"vitest\":{\"specifier\":\"^4.1.5\",\"version\":\"4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\"},\"web-vitals\":{\"specifier\":\"^5.1.0\",\"version\":\"5.1.0\"},\"wrangler\":{\"specifier\":\"^4.88.0\",\"version\":\"4.88.0\"}}}},\"packages\":{\"@acemir/cssom@0.9.28\":{\"resolution\":{\"integrity\":\"sha512-LuS6IVEivI75vKN8S04qRD+YySP0RmU/cV8UNukhQZvprxF+76Z43TNo/a08eCodaGhT1Us8etqS1ZRY9/Or0A==\"}},\"@ampproject/remapping@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-30iZtAPgz+LTIYoeivqYo853f02jBYSd5uGnGpkFV0M3xOt9aN73erkgYAmZU43x4VfqcnLxW9Kpg3R5LC4YYw==\"},\"engines\":{\"node\":\">=6.0.0\"}},\"@antfu/install-pkg@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-MGQsmw10ZyI+EJo45CdSER4zEb+p31LpDAFp2Z3gkSd1yqVZGi0Ebx++YTEMonJy4oChEMLsxZ64j8FH6sSqtQ==\"}},\"@antfu/utils@9.3.0\":{\"resolution\":{\"integrity\":\"sha512-9hFT4RauhcUzqOE4f1+frMKLZrgNog5b06I7VmZQV1BkvwvqrbC8EBZf3L1eEL2AK
b6rNKjER0sEvJiSP1FXEA==\"}},\"@asamuzakjp/css-color@3.1.4\":{\"resolution\":{\"integrity\":\"sha512-SeuBV4rnjpFNjI8HSgKUwteuFdkHwkboq31HWzznuqgySQir+jSTczoWVVL4jvOjKjuH80fMDG0Fvg1Sb+OJsA==\"}},\"@asamuzakjp/css-color@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-9xiBAtLn4aNsa4mDnpovJvBn72tNEIACyvlqaNJ+ADemR+yeMJWnBudOi2qGDviJa7SwcDOU/TRh5dnET7qk0w==\"}},\"@asamuzakjp/dom-selector@6.7.6\":{\"resolution\":{\"integrity\":\"sha512-hBaJER6A9MpdG3WgdlOolHmbOYvSk46y7IQN/1+iqiCuUu6iWdQrs9DGKF8ocqsEqWujWf/V7b7vaDgiUmIvUg==\"}},\"@asamuzakjp/nwsapi@2.3.9\":{\"resolution\":{\"integrity\":\"sha512-n8GuYSrI9bF7FFZ/SjhwevlHc8xaVlb/7HmHelnc/PZXBD2ZR49NnN9sMMuDdEGPeeRQ5d0hqlSlEpgCX3Wl0Q==\"}},\"@babel/code-frame@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-cjQ7ZlQ0Mv3b47hABuTevyTuYN4i+loJKGeV9flcCgIK37cCXRh+L1bd3iBHlynerhQ7BhCkn2BPbQUL+rGqFg==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/compat-data@7.28.0\":{\"resolution\":{\"integrity\":\"sha512-60X7qkglvrap8mn1lh2ebxXdZYtUcpd7gsmy9kLaBJ4i/WdY8PqTSdxyA8qraikqKQK5C1KRBKXqznrVapyNaw==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/core@7.28.5\":{\"resolution\":{\"integrity\":\"sha512-e7jT4DxYvIDLk1ZHmU/m/mB19rex9sv0c2ftBtjSBv+kVM/902eh0fINUzD7UwLLNR+jU585GxUJ8/EBfAM5fw==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/generator@7.28.5\":{\"resolution\":{\"integrity\":\"sha512-3EwLFhZ38J4VyIP6WNtt2kUdW9dokXA9Cr4IVIFHuCpZ3H8/YFOl5JjZHisrn1fATPBmKKqXzDFvh9fUwHz6CQ==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-compilation-targets@7.27.2\":{\"resolution\":{\"integrity\":\"sha512-2+1thGUUWWjLTYTHZWK1n8Yga0ijBz1XAhUXcKy81rd5g6yh7hGqMp45v7cadSbEHc9G3OTv45SyneRN3ps4DQ==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-globals@7.28.0\":{\"resolution\":{\"integrity\":\"sha512-+W6cISkXFa1jXsDEdYA8HeevQT/FULhxzR99pxphltZcVaugps53THCeiWA8SguxxpSp3gKPiuYfSWopkLQ4hw==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-module-imports@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-0gSFWUPNXNopqtIPQvlD5WgXYI5GY2kP2cCv
oT8kczjbfcfuIljTbcWrulD1CIPIX2gt1wghbDy08yE1p+/r3w==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-module-transforms@7.28.3\":{\"resolution\":{\"integrity\":\"sha512-gytXUbs8k2sXS9PnQptz5o0QnpLL51SwASIORY6XaBKF88nsOT0Zw9szLqlSGQDP/4TljBAD5y98p2U1fqkdsw==\"},\"engines\":{\"node\":\">=6.9.0\"},\"peerDependencies\":{\"@babel/core\":\"^7.0.0\"}},\"@babel/helper-plugin-utils@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-1gn1Up5YXka3YYAHGKpbideQ5Yjf1tDa9qYcgysz+cNCXukyLl6DjPXhD3VRwSb8c0J9tA4b2+rHEZtc6R0tlw==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-string-parser@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-validator-identifier@7.28.5\":{\"resolution\":{\"integrity\":\"sha512-qSs4ifwzKJSV39ucNjsvc6WVHs6b7S03sOh2OcHF9UHfVPqWWALUsNUVzhSBiItjRZoLHx7nIarVjqKVusUZ1Q==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-validator-option@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-YvjJow9FxbhFFKDSuFnVCe2WxXk1zWc22fFePVNEaWJEu8IrZVlda6N0uHwzZrUM1il7NC9Mlp4MaJYbYd9JSg==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helpers@7.28.4\":{\"resolution\":{\"integrity\":\"sha512-HFN59MmQXGHVyYadKLVumYsA9dBFun/ldYxipEjzA4196jpLZd8UjEEBLkbEkvfYreDqJhZxYAWFPtrfhNpj4w==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/parser@7.28.5\":{\"resolution\":{\"integrity\":\"sha512-KKBU1VGYR7ORr3At5HAtUQ+TV3SzRCXmA/8OdDZiLDBIZxVyzXuztPjfLd3BV1PRAQGCMWWSHYhL0F8d5uHBDQ==\"},\"engines\":{\"node\":\">=6.0.0\"},\"hasBin\":true},\"@babel/parser@7.29.3\":{\"resolution\":{\"integrity\":\"sha512-b3ctpQwp+PROvU/cttc4OYl4MzfJUWy6FZg+PMXfzmt/+39iHVF0sDfqay8TQM3JA2EUOyKcFZt75jWriQijsA==\"},\"engines\":{\"node\":\">=6.0.0\"},\"hasBin\":true},\"@babel/plugin-syntax-jsx@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-y8YTNIeKoyhGd9O0Jiyzyyqk8gdjnumGTQPsz0xOZOQ2RmkVJeZ1vmmfIvFEKqucBG6axJGBZDE/7iI5suUI/w==\"},\"engines\":{\"node\":\">=6.9.
0\"},\"peerDependencies\":{\"@babel/core\":\"^7.0.0-0\"}},\"@babel/plugin-syntax-typescript@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-xfYCBMxveHrRMnAWl1ZlPXOZjzkN82THFvLhQhFXFt81Z5HnN+EtUkZhv/zcKpmT3fzmWZB0ywiBrbC3vogbwQ==\"},\"engines\":{\"node\":\">=6.9.0\"},\"peerDependencies\":{\"@babel/core\":\"^7.0.0-0\"}},\"@babel/runtime@7.28.4\":{\"resolution\":{\"integrity\":\"sha512-Q/N6JNWvIvPnLDvjlE1OUBLPQHH6l3CltCEsHIujp45zQUSSh8K+gHnaEX45yAT1nyngnINhvWtzN+Nb9D8RAQ==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/template@7.27.2\":{\"resolution\":{\"integrity\":\"sha512-LPDZ85aEJyYSd18/DkjNh4/y1ntkE5KwUHWTiqgRxruuZL2F1yuHligVHLvcHY2vMHXttKFpJn6LwfI7cw7ODw==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/traverse@7.28.5\":{\"resolution\":{\"integrity\":\"sha512-TCCj4t55U90khlYkVV/0TfkJkAkUg3jZFA3Neb7unZT8CPok7iiRfaX0F+WnqWqt7OxhOn0uBKXCw4lbL8W0aQ==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/types@7.28.5\":{\"resolution\":{\"integrity\":\"sha512-qQ5m48eI/MFLQ5PxQj4PFaprjyCTLI37ElWMmNs0K8Lk3dVeOdNpB3ks8jc7yM5CDmVC73eMVk/trk3fgmrUpA==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/types@7.29.0\":{\"resolution\":{\"integrity\":\"sha512-LwdZHpScM4Qz8Xw2iKSzS+cfglZzJGvofQICy7W7v4caru4EaAmyUuO6BGrbyQ2mYV11W0U8j5mBhd14dd3B0A==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@bcoe/v8-coverage@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-6zABk/ECA/QYSCQ1NGiVwwbQerUCZ+TQbp64Q3AgmfNvurHH0j8TtXa1qbShXA6qqkpAj4V5W8pP6mLe1mcMqA==\"},\"engines\":{\"node\":\">=18\"}},\"@blazediff/core@1.9.1\":{\"resolution\":{\"integrity\":\"sha512-ehg3jIkYKulZh+8om/O25vkvSsXXwC+skXmyA87FFx6A/45eqOkZsBltMw/TVteb0mloiGT8oGRTcjRAz66zaA==\"}},\"@braintree/sanitize-url@7.1.1\":{\"resolution\":{\"integrity\":\"sha512-i1L7noDNxtFyL5DmZafWy1wRVhGehQmzZaz1HiN5e7iylJMSZR7ekOV7NsIqa5qBldlLrsKv4HbgFUVlQrz8Mw==\"}},\"@bufbuild/protobuf@2.12.0\":{\"resolution\":{\"integrity\":\"sha512-B/XlCaFIP8LOwzo+bz5uFzATYokcwCKQcghqnlfwSmM5eX/qTkvDBnDPs+gXtX/RyjxJ4DRikECcPJbyALA8FA==\"}},\"@bundled-es-modules/cookie@
2.0.1\":{\"resolution\":{\"integrity\":\"sha512-8o+5fRPLNbjbdGRRmJj3h6Hh1AQJf2dk3qQ/5ZFb+PXkRNiSoMGGUKlsgLfrxneb72axVJyIYji64E2+nNfYyw==\"}},\"@bundled-es-modules/statuses@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-yn7BklA5acgcBr+7w064fGV+SGIFySjCKpqjcWgBAIfrAkY+4GQTJJHQMeT3V/sgz23VTEVV8TtOmkvJAhFVfg==\"}},\"@bundled-es-modules/tough-cookie@0.1.6\":{\"resolution\":{\"integrity\":\"sha512-dvMHbL464C0zI+Yqxbz6kZ5TOEp7GLW+pry/RWndAR8MJQAXZ2rPmIs8tziTZjeIyhSNZgZbCePtfSbdWqStJw==\"}},\"@changesets/apply-release-plan@7.0.13\":{\"resolution\":{\"integrity\":\"sha512-BIW7bofD2yAWoE8H4V40FikC+1nNFEKBisMECccS16W1rt6qqhNTBDmIw5HaqmMgtLNz9e7oiALiEUuKrQ4oHg==\"}},\"@changesets/assemble-release-plan@6.0.9\":{\"resolution\":{\"integrity\":\"sha512-tPgeeqCHIwNo8sypKlS3gOPmsS3wP0zHt67JDuL20P4QcXiw/O4Hl7oXiuLnP9yg+rXLQ2sScdV1Kkzde61iSQ==\"}},\"@changesets/changelog-git@0.2.1\":{\"resolution\":{\"integrity\":\"sha512-x/xEleCFLH28c3bQeQIyeZf8lFXyDFVn1SgcBiR2Tw/r4IAWlk1fzxCEZ6NxQAjF2Nwtczoen3OA2qR+UawQ8Q==\"}},\"@changesets/cli@2.29.7\":{\"resolution\":{\"integrity\":\"sha512-R7RqWoaksyyKXbKXBTbT4REdy22yH81mcFK6sWtqSanxUCbUi9Uf+6aqxZtDQouIqPdem2W56CdxXgsxdq7FLQ==\"},\"hasBin\":true},\"@changesets/config@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-bd+3Ap2TKXxljCggI0mKPfzCQKeV/TU4yO2h2C6vAihIo8tzseAn2e7klSuiyYYXvgu53zMN1OeYMIQkaQoWnA==\"}},\"@changesets/errors@0.2.0\":{\"resolution\":{\"integrity\":\"sha512-6BLOQUscTpZeGljvyQXlWOItQyU71kCdGz7Pi8H8zdw6BI0g3m43iL4xKUVPWtG+qrrL9DTjpdn8eYuCQSRpow==\"}},\"@changesets/get-dependents-graph@2.1.3\":{\"resolution\":{\"integrity\":\"sha512-gphr+v0mv2I3Oxt19VdWRRUxq3sseyUpX9DaHpTUmLj92Y10AGy+XOtV+kbM6L/fDcpx7/ISDFK6T8A/P3lOdQ==\"}},\"@changesets/get-release-plan@4.0.13\":{\"resolution\":{\"integrity\":\"sha512-DWG1pus72FcNeXkM12tx+xtExyH/c9I1z+2aXlObH3i9YA7+WZEVaiHzHl03thpvAgWTRaH64MpfHxozfF7Dvg==\"}},\"@changesets/get-version-range-type@0.4.0\":{\"resolution\":{\"integrity\":\"sha512-hwawtob9DryoGTpixy1D3ZXbGgJu1Rhr+ySH2PvTLHvkZuQ7sRT4oQwM
h0hbqZH1weAooedEjRsbrWcGLCeyVQ==\"}},\"@changesets/git@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-BXANzRFkX+XcC1q/d27NKvlJ1yf7PSAgi8JG6dt8EfbHFHi4neau7mufcSca5zRhwOL8j9s6EqsxmT+s+/E6Sw==\"}},\"@changesets/logger@0.1.1\":{\"resolution\":{\"integrity\":\"sha512-OQtR36ZlnuTxKqoW4Sv6x5YIhOmClRd5pWsjZsddYxpWs517R0HkyiefQPIytCVh4ZcC5x9XaG8KTdd5iRQUfg==\"}},\"@changesets/parse@0.4.1\":{\"resolution\":{\"integrity\":\"sha512-iwksMs5Bf/wUItfcg+OXrEpravm5rEd9Bf4oyIPL4kVTmJQ7PNDSd6MDYkpSJR1pn7tz/k8Zf2DhTCqX08Ou+Q==\"}},\"@changesets/pre@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-HaL/gEyFVvkf9KFg6484wR9s0qjAXlZ8qWPDkTyKF6+zqjBe/I2mygg3MbpZ++hdi0ToqNUF8cjj7fBy0dg8Ug==\"}},\"@changesets/read@0.6.5\":{\"resolution\":{\"integrity\":\"sha512-UPzNGhsSjHD3Veb0xO/MwvasGe8eMyNrR/sT9gR8Q3DhOQZirgKhhXv/8hVsI0QpPjR004Z9iFxoJU6in3uGMg==\"}},\"@changesets/should-skip-package@0.1.2\":{\"resolution\":{\"integrity\":\"sha512-qAK/WrqWLNCP22UDdBTMPH5f41elVDlsNyat180A33dWxuUDyNpg6fPi/FyTZwRriVjg0L8gnjJn2F9XAoF0qw==\"}},\"@changesets/types@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-LDQvVDv5Kb50ny2s25Fhm3d9QSZimsoUGBsUioj6MC3qbMUCuC8GPIvk/M6IvXx3lYhAs0lwWUQLb+VIEUCECw==\"}},\"@changesets/types@6.1.0\":{\"resolution\":{\"integrity\":\"sha512-rKQcJ+o1nKNgeoYRHKOS07tAMNd3YSN0uHaJOZYjBAgxfV7TUE7JE+z4BzZdQwb5hKaYbayKN5KrYV7ODb2rAA==\"}},\"@changesets/write@0.4.0\":{\"resolution\":{\"integrity\":\"sha512-CdTLvIOPiCNuH71pyDu3rA+Q0n65cmAbXnwWH84rKGiFumFzkmHNT8KHTMEchcxN+Kl8I54xGUhJ7l3E7X396Q==\"}},\"@chevrotain/cst-dts-gen@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-BvIKpRLeS/8UbfxXxgC33xOumsacaeCKAjAeLyOn7Pcp95HiRbrpl14S+9vaZLolnbssPIUuiUd8IvgkRyt6NQ==\"}},\"@chevrotain/gast@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-+qNfcoNk70PyS/uxmj3li5NiECO+2YKZZQMbmjTqRI3Qchu8Hig/Q9vgkHpI3alNjr7M+a2St5pw5w5F6NL5/Q==\"}},\"@chevrotain/regexp-to-ast@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-1fMHaBZxLFvWI067AVbGJav1eRY7N8DDvYCTwGBiE/ytKBgP8azTdgyrKyWZ9Mfh09eHWb5PgTSO8wi7U824RA==\"}},\"@ch
evrotain/types@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-gsiM3G8b58kZC2HaWR50gu6Y1440cHiJ+i3JUvcp/35JchYejb2+5MVeJK0iKThYpAa/P2PYFV4hoi44HD+aHQ==\"}},\"@chevrotain/utils@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-YslZMgtJUyuMbZ+aKvfF3x1f5liK4mWNxghFRv7jqRR9C3R3fAOGTTKvxXDa2Y1s9zSbcpuO0cAxDYsc9SrXoQ==\"}},\"@cloudflare/kv-asset-handler@0.5.0\":{\"resolution\":{\"integrity\":\"sha512-jxQYkj8dSIzc0cD6cMMNdOc1UVjqSqu8BZdor5s8cGjW2I8BjODt/kWPVdY+u9zj3ms75Q5qaZgnxUad83+eAg==\"},\"engines\":{\"node\":\">=22.0.0\"}},\"@cloudflare/unenv-preset@2.16.1\":{\"resolution\":{\"integrity\":\"sha512-ECxObrMfyTl5bhQf/lZCXwo5G6xX9IAUo+nDMKK4SZ8m4Jvvxp52vilxyySSWh2YTZz8+HQ07qGH/2rEom1vDw==\"},\"peerDependencies\":{\"unenv\":\"2.0.0-rc.24\",\"workerd\":\">1.20260305.0 <2.0.0-0\"},\"peerDependenciesMeta\":{\"workerd\":{\"optional\":true}}},\"@cloudflare/vite-plugin@1.36.0\":{\"resolution\":{\"integrity\":\"sha512-Rkfa3wAbJ1lqCquWX453x4YlngO+OjNmCQvjb4D5JyMW7KprX6fEJE1NQ06giJDonEz0306EASELF93pRADibA==\"},\"peerDependencies\":{\"vite\":\"^6.1.0 || ^7.0.0 || 
^8.0.0\",\"wrangler\":\"^4.88.0\"}},\"@cloudflare/workerd-darwin-64@1.20260504.1\":{\"resolution\":{\"integrity\":\"sha512-IOMjYoftNRXabFt+QzY2Bo2mR2TNl8xsGvE0HnQ+K0S2c61VOUGUkr9gpJjnwrJ65yA9Qed4xfg0RRqXHO+nfA==\"},\"engines\":{\"node\":\">=16\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@cloudflare/workerd-darwin-arm64@1.20260504.1\":{\"resolution\":{\"integrity\":\"sha512-7iMXxIU0N5KklZpQm2kuwTm0XtrpHXNqhejJyGquky8gSTnm31zBdutjMekH8VRr6ckbvZIl6lvqXzXdfOEojg==\"},\"engines\":{\"node\":\">=16\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@cloudflare/workerd-linux-64@1.20260504.1\":{\"resolution\":{\"integrity\":\"sha512-YLB0EH5FQV++oWlalFgPF3p2Bp3dn/D6RWNMw0ukEC8gKnNX6o61A+dlFUl8hRD35ja1zKRxGFUojs4U2+MoJA==\"},\"engines\":{\"node\":\">=16\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@cloudflare/workerd-linux-arm64@1.20260504.1\":{\"resolution\":{\"integrity\":\"sha512-FAh/82jDXDArfn9xDih6f/IJfF2SHXBb4nFeQAyHyvXrn18zM6Q3yl2Vj0U7LybbNbmu7TNGghwaM2NoSQS+0A==\"},\"engines\":{\"node\":\">=16\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@cloudflare/workerd-windows-64@1.20260504.1\":{\"resolution\":{\"integrity\":\"sha512-QUg/B3dfrK/KHHHhiJzdkLkTg5mG7lA3t8iplbBoUa3XKCLOHOOXhbU4WSYlLqg8YnsQ6XLZ1HVA99fmZhJh7A==\"},\"engines\":{\"node\":\">=16\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@cspotcode/source-map-support@0.8.1\":{\"resolution\":{\"integrity\":\"sha512-IchNf6dN4tHoMFIn/7OE8LWZ19Y6q/67Bmf6vnGREv8RSbBVb9LPJxEcnwrcwX6ixSvaiGoomAUvu4YSxXrVgw==\"},\"engines\":{\"node\":\">=12\"}},\"@csstools/color-helpers@5.1.0\":{\"resolution\":{\"integrity\":\"sha512-S11EXWJyy0Mz5SYvRmY8nJYTFFd1LCNV+7cXyAgQtOOuzb4EsgfqDufL+9esx72/eLhsRdGZwaldu/h+E4t4BA==\"},\"engines\":{\"node\":\">=18\"}},\"@csstools/css-calc@2.1.4\":{\"resolution\":{\"integrity\":\"sha512-3N8oaj+0juUw/1H3YwmDDJXCgTB1gKU6Hc/bB502u9zR0q2vd786XJH9QfrKIEgFlZmhZiq6epXl4rHqhzsIgQ==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@csstools/css-parser-algorithms\":\"^3.0.5\",\"@csstools/css-tokenizer\":\"^3.0.4\"}},\
"@csstools/css-color-parser@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-nbtKwh3a6xNVIp/VRuXV64yTKnb1IjTAEEh3irzS+HkKjAOYLTGNb9pmVNntZ8iVBHcWDA2Dof0QtPgFI1BaTA==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@csstools/css-parser-algorithms\":\"^3.0.5\",\"@csstools/css-tokenizer\":\"^3.0.4\"}},\"@csstools/css-parser-algorithms@3.0.5\":{\"resolution\":{\"integrity\":\"sha512-DaDeUkXZKjdGhgYaHNJTV9pV7Y9B3b644jCLs9Upc3VeNGg6LWARAT6O+Q+/COo+2gg/bM5rhpMAtf70WqfBdQ==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@csstools/css-tokenizer\":\"^3.0.4\"}},\"@csstools/css-syntax-patches-for-csstree@1.0.14\":{\"resolution\":{\"integrity\":\"sha512-zSlIxa20WvMojjpCSy8WrNpcZ61RqfTfX3XTaOeVlGJrt/8HF3YbzgFZa01yTbT4GWQLwfTcC3EB8i3XnB647Q==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"postcss\":\"^8.4\"}},\"@csstools/css-tokenizer@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-Vd/9EVDiu6PPJt9yAh6roZP6El1xHrdvIVGjyBsHR0RYwNHgL7FJPyIIW4fANJNG6FtyZfvlRPpFI4ZM/lubvw==\"},\"engines\":{\"node\":\">=18\"}},\"@emnapi/core@1.10.0\":{\"resolution\":{\"integrity\":\"sha512-yq6OkJ4p82CAfPl0u9mQebQHKPJkY7WrIuk205cTYnYe+k2Z8YBh11FrbRG/H6ihirqcacOgl2BIO8oyMQLeXw==\"}},\"@emnapi/core@1.4.5\":{\"resolution\":{\"integrity\":\"sha512-XsLw1dEOpkSX/WucdqUhPWP7hDxSvZiY+fsUC14h+FtQ2Ifni4znbBt8punRX+Uj2JG/uDb8nEHVKvrVlvdZ5Q==\"}},\"@emnapi/runtime@1.10.0\":{\"resolution\":{\"integrity\":\"sha512-ewvYlk86xUoGI0zQRNq/mC+16R1QeDlKQy21Ki3oSYXNgLb45GV1P6A0M+/s6nyCuNDqe5VpaY84BzXGwVbwFA==\"}},\"@emnapi/runtime@1.4.5\":{\"resolution\":{\"integrity\":\"sha512-++LApOtY0pEEz1zrd9vy1/zXVaVJJ/EbAF3u0fXIzPJEDtnITsBGbbK0EkM72amhl/R5b+5xx0Y/QhcVOpuulg==\"}},\"@emnapi/wasi-threads@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-PJR+bOmMOPH8AtcTGAyYNiuJ3/Fcoj2XN/gBEWzDIKh254XO+mM9XoXHk5GNEhodxeMznbg7BlRojVbKN+gC6g==\"}},\"@emnapi/wasi-threads@1.2.1\":{\"resolution\":{\"integrity\":\"sha512-uTII7OYF+/Mes/MrcIOYp5yOtSMLBWSIoLPpcgwipoiKbli6k322tcoFsxoIIxPDqW01SQGAgko4EzZi2BNv2w==\"}},\
"@esbuild/aix-ppc64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-Hhmwd6CInZ3dwpuGTF8fJG6yoWmsToE+vYgD4nytZVxcu1ulHpUQRAB1UJ8+N1Am3Mz4+xOByoQoSZf4D+CpkA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ppc64\"],\"os\":[\"aix\"]},\"@esbuild/aix-ppc64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-9fJMTNFTWZMh5qwrBItuziu834eOCUcEqymSH7pY+zoMVEZg3gcPuBNxH1EvfVYe9h0x/Ptw8KBzv7qxb7l8dg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ppc64\"],\"os\":[\"aix\"]},\"@esbuild/android-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-6AAmLG7zwD1Z159jCKPvAxZd4y/VTO0VkprYy+3N2FtJ8+BQWFXU+OxARIwA46c5tdD9SsKGZ/1ocqBS/gAKHg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"android\"]},\"@esbuild/android-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-YdghPYUmj/FX2SYKJ0OZxf+iaKgMsKHVPF1MAq/P8WirnSpCStzKJFjOjzsW0QQ7oIAiccHdcqjbHmJxRb/dmg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"android\"]},\"@esbuild/android-arm@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-VJ+sKvNA/GE7Ccacc9Cha7bpS8nyzVv0jdVgwNDaR4gDMC/2TTRc33Ip8qrNYUcpkOHUT5OZ0bUcNNVZQ9RLlg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm\"],\"os\":[\"android\"]},\"@esbuild/android-arm@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-i5D1hPY7GIQmXlXhs2w8AWHhenb00+GxjxRncS2ZM7YNVGNfaMxgzSGuO8o8SJzRc/oZwU2bcScvVERk03QhzA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm\"],\"os\":[\"android\"]},\"@esbuild/android-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-5jbb+2hhDHx5phYR2By8GTWEzn6I9UqR11Kwf22iKbNpYrsmRB18aX/9ivc5cabcUiAT/wM+YIZ6SG9QO6a8kg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"android\"]},\"@esbuild/android-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-IN/0BNTkHtk8lkOM8JWAYFg4ORxBkZQf9zXiEOfERX/CzxW3Vg1ewAhU7QSWQpVIzTW+b8Xy+lGzdYXV6UZObQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"android\"]},\"@esbuild/darwin-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-N3zl+lxHCifgIlcMUP5016ESkeQjLj/959R
xxNYIthIg+CQHInujFuXeWbWMgnTo4cp5XVHqFPmpyu9J65C1Yg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@esbuild/darwin-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-Re491k7ByTVRy0t3EKWajdLIr0gz2kKKfzafkth4Q8A5n1xTHrkqZgLLjFEHVD+AXdUGgQMq+Godfq45mGpCKg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@esbuild/darwin-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-HQ9ka4Kx21qHXwtlTUVbKJOAnmG1ipXhdWTmNXiPzPfWKpXqASVcWdnf2bnL73wgjNrFXAa3yYvBSd9pzfEIpA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@esbuild/darwin-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-vHk/hA7/1AckjGzRqi6wbo+jaShzRowYip6rt6q7VYEDX4LEy1pZfDpdxCBnGtl+A5zq8iXDcyuxwtv3hNtHFg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@esbuild/freebsd-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-gA0Bx759+7Jve03K1S0vkOu5Lg/85dou3EseOGUes8flVOGxbhDDh/iZaoek11Y8mtyKPGF3vP8XhnkDEAmzeg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"freebsd\"]},\"@esbuild/freebsd-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-ipTYM2fjt3kQAYOvo6vcxJx3nBYAzPjgTCk7QEgZG8AUO3ydUhvelmhrbOheMnGOlaSFUoHXB6un+A7q4ygY9w==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"freebsd\"]},\"@esbuild/freebsd-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-TGbO26Yw2xsHzxtbVFGEXBFH0FRAP7gtcPE7P5yP7wGy7cXK2oO7RyOhL5NLiqTlBh47XhmIUXuGciXEqYFfBQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"@esbuild/freebsd-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-dDk0X87T7mI6U3K9VjWtHOXqwAMJBNN2r7bejDsc+j03SEjtD9HrOl8gVFByeM0aJksoUuUVU9TBaZa2rgj0oA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"@esbuild/linux-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-8bwX7a8FghIgrupcxb4aUmYDLp8pX06rGh5HqDT7bB+8Rdells6mHvrFHHW2JAOPZUbnjUpKTLg6ECyzvas2AQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"a
rm64\"],\"os\":[\"linux\"]},\"@esbuild/linux-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-sZOuFz/xWnZ4KH3YfFrKCf1WyPZHakVzTiqji3WDc0BCl2kBwiJLCXpzLzUBLgmp4veFZdvN5ChW4Eq/8Fc2Fg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@esbuild/linux-arm@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-lPDGyC1JPDou8kGcywY0YILzWlhhnRjdof3UlcoqYmS9El818LLfJJc3PXXgZHrHCAKs/Z2SeZtDJr5MrkxtOw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@esbuild/linux-arm@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-s6nPv2QkSupJwLYyfS+gwdirm0ukyTFNl3KTgZEAiJDd+iHZcbTPPcWCcRYH+WlNbwChgH2QkE9NSlNrMT8Gfw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@esbuild/linux-ia32@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-0y9KrdVnbMM2/vG8KfU0byhUN+EFCny9+8g202gYqSSVMonbsCfLjUO+rCci7pM0WBEtz+oK/PIwHkzxkyharA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ia32\"],\"os\":[\"linux\"]},\"@esbuild/linux-ia32@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-yGlQYjdxtLdh0a3jHjuwOrxQjOZYD/C9PfdbgJJF3TIZWnm/tMd/RcNiLngiu4iwcBAOezdnSLAwQDPqTmtTYg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ia32\"],\"os\":[\"linux\"]},\"@esbuild/linux-loong64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-h///Lr5a9rib/v1GGqXVGzjL4TMvVTv+s1DPoxQdz7l/AYv6LDSxdIwzxkrPW438oUXiDtwM10o9PmwS/6Z0Ng==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"loong64\"],\"os\":[\"linux\"]},\"@esbuild/linux-loong64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-WO60Sn8ly3gtzhyjATDgieJNet/KqsDlX5nRC5Y3oTFcS1l0KWba+SEa9Ja1GfDqSF1z6hif/SkpQJbL63cgOA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"loong64\"],\"os\":[\"linux\"]},\"@esbuild/linux-mips64el@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-iyRrM1Pzy9GFMDLsXn1iHUm18nhKnNMWscjmp4+hpafcZjrr2WbT//d20xaGljXDBYHqRcl8HnxbX6uaA/eGVw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"mips64el\"],\"os\":[\"linux\"]},\"@esbuild/linux-mips64el@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-APs
ymYA6sGcZ4pD6k+UxbDjOFSvPWyZhjaiPyl/f79xKxwTnrn5QUnXR5prvetuaSMsb4jgeHewIDCIWljrSxw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"mips64el\"],\"os\":[\"linux\"]},\"@esbuild/linux-ppc64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-9meM/lRXxMi5PSUqEXRCtVjEZBGwB7P/D4yT8UG/mwIdze2aV4Vo6U5gD3+RsoHXKkHCfSxZKzmDssVlRj1QQA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ppc64\"],\"os\":[\"linux\"]},\"@esbuild/linux-ppc64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-eizBnTeBefojtDb9nSh4vvVQ3V9Qf9Df01PfawPcRzJH4gFSgrObw+LveUyDoKU3kxi5+9RJTCWlj4FjYXVPEA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ppc64\"],\"os\":[\"linux\"]},\"@esbuild/linux-riscv64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-Zr7KR4hgKUpWAwb1f3o5ygT04MzqVrGEGXGLnj15YQDJErYu/BGg+wmFlIDOdJp0PmB0lLvxFIOXZgFRrdjR0w==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"@esbuild/linux-riscv64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-3Emwh0r5wmfm3ssTWRQSyVhbOHvqegUDRd0WhmXKX2mkHJe1SFCMJhagUleMq+Uci34wLSipf8Lagt4LlpRFWQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"@esbuild/linux-s390x@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-MsKncOcgTNvdtiISc/jZs/Zf8d0cl/t3gYWX8J9ubBnVOwlk65UIEEvgBORTiljloIWnBzLs4qhzPkJcitIzIg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"s390x\"],\"os\":[\"linux\"]},\"@esbuild/linux-s390x@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-pBHUx9LzXWBc7MFIEEL0yD/ZVtNgLytvx60gES28GcWMqil8ElCYR4kvbV2BDqsHOvVDRrOxGySBM9Fcv744hw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"s390x\"],\"os\":[\"linux\"]},\"@esbuild/linux-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-uqZMTLr/zR/ed4jIGnwSLkaHmPjOjJvnm6TVVitAa08SLS9Z0VM8wIRx7gWbJB5/J54YuIMInDquWyYvQLZkgw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@esbuild/linux-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-Czi8yzXUWIQYAtL/2y6vogER8pvcsOsk5cpwL4Gk5nJqH5UZiVByIY8Eorm5R13gq+DQKYg0+JyQoytLQas4dA==\"},\"engines\":{\
"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@esbuild/netbsd-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-xXwcTq4GhRM7J9A8Gv5boanHhRa/Q9KLVmcyXHCTaM4wKfIpWkdXiMog/KsnxzJ0A1+nD+zoecuzqPmCRyBGjg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"netbsd\"]},\"@esbuild/netbsd-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-sDpk0RgmTCR/5HguIZa9n9u+HVKf40fbEUt+iTzSnCaGvY9kFP0YKBWZtJaraonFnqef5SlJ8/TiPAxzyS+UoA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"netbsd\"]},\"@esbuild/netbsd-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-Ld5pTlzPy3YwGec4OuHh1aCVCRvOXdH8DgRjfDy/oumVovmuSzWfnSJg+VtakB9Cm0gxNO9BzWkj6mtO1FMXkQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"netbsd\"]},\"@esbuild/netbsd-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-P14lFKJl/DdaE00LItAukUdZO5iqNH7+PjoBm+fLQjtxfcfFE20Xf5CrLsmZdq5LFFZzb5JMZ9grUwvtVYzjiA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"netbsd\"]},\"@esbuild/openbsd-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-fF96T6KsBo/pkQI950FARU9apGNTSlZGsv1jZBAlcLL1MLjLNIWPBkj5NlSz8aAzYKg+eNqknrUJ24QBybeR5A==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"openbsd\"]},\"@esbuild/openbsd-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-AIcMP77AvirGbRl/UZFTq5hjXK+2wC7qFRGoHSDrZ5v5b8DK/GYpXW3CPRL53NkvDqb9D+alBiC/dV0Fb7eJcw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"openbsd\"]},\"@esbuild/openbsd-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-MZyXUkZHjQxUvzK7rN8DJ3SRmrVrke8ZyRusHlP+kuwqTcfWLyqMOE3sScPPyeIXN/mDJIfGXvcMqCgYKekoQw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"openbsd\"]},\"@esbuild/openbsd-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-DnW2sRrBzA+YnE70LKqnM3P+z8vehfJWHXECbwBmH/CU51z6FiqTQTHFenPlHmo3a8UgpLyH3PT+87OViOh1AQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"openbsd\"]},\"@esbuild/openharmony-arm64@0.25.12\":{\"re
solution\":{\"integrity\":\"sha512-rm0YWsqUSRrjncSXGA7Zv78Nbnw4XL6/dzr20cyrQf7ZmRcsovpcRBdhD43Nuk3y7XIoW2OxMVvwuRvk9XdASg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"openharmony\"]},\"@esbuild/openharmony-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-NinAEgr/etERPTsZJ7aEZQvvg/A6IsZG/LgZy+81wON2huV7SrK3e63dU0XhyZP4RKGyTm7aOgmQk0bGp0fy2g==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"openharmony\"]},\"@esbuild/sunos-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-3wGSCDyuTHQUzt0nV7bocDy72r2lI33QL3gkDNGkod22EsYl04sMf0qLb8luNKTOmgF/eDEDP5BFNwoBKH441w==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"sunos\"]},\"@esbuild/sunos-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-PanZ+nEz+eWoBJ8/f8HKxTTD172SKwdXebZ0ndd953gt1HRBbhMsaNqjTyYLGLPdoWHy4zLU7bDVJztF5f3BHA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"sunos\"]},\"@esbuild/win32-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-rMmLrur64A7+DKlnSuwqUdRKyd3UE7oPJZmnljqEptesKM8wx9J8gx5u0+9Pq0fQQW8vqeKebwNXdfOyP+8Bsg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@esbuild/win32-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-B2t59lWWYrbRDw/tjiWOuzSsFh1Y/E95ofKz7rIVYSQkUYBjfSgf6oeYPNWHToFRr2zx52JKApIcAS/D5TUBnA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@esbuild/win32-ia32@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-HkqnmmBoCbCwxUKKNPBixiWDGCpQGVsrQfJoVGYLPT41XWF8lHuE5N6WhVia2n4o5QK5M4tYr21827fNhi4byQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ia32\"],\"os\":[\"win32\"]},\"@esbuild/win32-ia32@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-QLKSFeXNS8+tHW7tZpMtjlNb7HKau0QDpwm49u0vUp9y1WOF+PEzkU84y9GqYaAVW8aH8f3GcBck26jh54cX4Q==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ia32\"],\"os\":[\"win32\"]},\"@esbuild/win32-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-alJC0uCZpTFrSL0CCDjcgleBXPnCrEAhTBILpeAp7M/OFgoqtAetfBzX0xM00MUsVVPpV
jlPuMbREqnZCXaTnA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@esbuild/win32-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-4uJGhsxuptu3OcpVAzli+/gWusVGwZZHTlS63hh++ehExkVT8SgiEf7/uC/PclrPPkLhZqGgCTjd0VWLo6xMqA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@iconify/types@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-+wluvCrRhXrhyOmRDJ3q8mux9JkKy5SJ/v8ol2tu4FVjyYvtEzkc/3pK15ET6RKg4b4w4BmTk1+gsCUhf21Ykg==\"}},\"@iconify/utils@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-EfJS0rLfVuRuJRn4psJHtK2A9TqVnkxPpHY6lYHiB9+8eSuudsxbwMiavocG45ujOo6FJ+CIRlRnlOGinzkaGQ==\"}},\"@img/colour@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-Td76q7j57o/tLVdgS746cYARfSyxk8iEfRxewL9h4OMzYhbW4TAcppl0mT4eyqXddh6L/jwoM75mo7ixa/pCeQ==\"},\"engines\":{\"node\":\">=18\"}},\"@img/sharp-darwin-arm64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-imtQ3WMJXbMY4fxb/Ndp6HBTNVtWCUI0WdobyheGf5+ad6xX8VIDO8u2xE4qc/fr08CKG/7dDseFtn6M6g/r3w==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@img/sharp-darwin-x64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-YNEFAF/4KQ/PeW0N+r+aVVsoIY0/qxxikF2SWdp+NRkmMB7y9LBZAVqQ4yhGCm/H3H270OSykqmQMKLBhBJDEw==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || 
>=21.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@img/sharp-libvips-darwin-arm64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-zqjjo7RatFfFoP0MkQ51jfuFZBnVE2pRiaydKJ1G/rHZvnsrHAOcQALIi9sA5co5xenQdTugCvtb1cuf78Vf4g==\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@img/sharp-libvips-darwin-x64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-1IOd5xfVhlGwX+zXv2N93k0yMONvUlANylbJw1eTah8K/Jtpi15KC+WSiaX/nBmbm2HxRM1gZ0nSdjSsrZbGKg==\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@img/sharp-libvips-linux-arm64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-excjX8DfsIcJ10x1Kzr4RcWe1edC9PquDRRPx3YVCvQv+U5p7Yin2s32ftzikXojb1PIFc/9Mt28/y+iRklkrw==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linux-arm@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-bFI7xcKFELdiNCVov8e44Ia4u2byA+l3XtsAj+Q8tfCwO6BQ8iDojYdvoPMqsKDkuoOo+X6HZA0s0q11ANMQ8A==\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linux-ppc64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-FMuvGijLDYG6lW+b/UvyilUWu5Ayu+3r2d1S8notiGCIyYU/76eig1UfMmkZ7vwgOrzKzlQbFSuQfgm7GYUPpA==\"},\"cpu\":[\"ppc64\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linux-riscv64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-oVDbcR4zUC0ce82teubSm+x6ETixtKZBh/qbREIOcI3cULzDyb18Sr/Wcyx7NRQeQzOiHTNbZFF1UwPS2scyGA==\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linux-s390x@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-qmp9VrzgPgMoGZyPvrQHqk02uyjA0/QrTO26Tqk6l4ZV0MPWIW6LTkqOIov+J1yEu7MbFQaDpwdwJKhbJvuRxQ==\"},\"cpu\":[\"s390x\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linux-x64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-tJxiiLsmHc9Ax1bz3oaOYBURTXGIRDODBqhveVHonrHJ9/+k89qbLl0bcJns+e4t4rvaNBxaEZsFtSfAdquPrw==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linuxmusl-arm64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-FVQHuwx1IIuNow9QAbYUzJ+En8KcVm9Lk5+uGUQJHaZmMECZmOlix9HnH7n1TRkXMS0pGxIJokIVB9SuqZGGXw==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linuxmusl-x
64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-+LpyBk7L44ZIXwz/VYfglaX/okxezESc6UxDSoyo2Ks6Jxc4Y7sGjpgU9s4PMgqgjj1gZCylTieNamqA1MF7Dg==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@img/sharp-linux-arm64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-bKQzaJRY/bkPOXyKx5EVup7qkaojECG6NLYswgktOZjaXecSAeCWiZwwiFf3/Y+O1HrauiE3FVsGxFg8c24rZg==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@img/sharp-linux-arm@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-9dLqsvwtg1uuXBGZKsxem9595+ujv0sJ6Vi8wcTANSFpwV/GONat5eCkzQo/1O6zRIkh0m/8+5BjrRr7jDUSZw==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@img/sharp-linux-ppc64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-7zznwNaqW6YtsfrGGDA6BRkISKAAE1Jo0QdpNYXNMHu2+0dTrPflTLNkpc8l7MUP5M16ZJcUvysVWWrMefZquA==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"ppc64\"],\"os\":[\"linux\"]},\"@img/sharp-linux-riscv64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-51gJuLPTKa7piYPaVs8GmByo7/U7/7TZOq+cnXJIHZKavIRHAP77e3N2HEl3dgiqdD/w0yUfiJnII77PuDDFdw==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"@img/sharp-linux-s390x@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-nQtCk0PdKfho3eC5MrbQoigJ2gd1CgddUMkabUj+rBevs8tZ2cULOx46E7oyX+04WGfABgIwmMC0VqieTiR4jg==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"s390x\"],\"os\":[\"linux\"]},\"@img/sharp-linux-x64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-MEzd8HPKxVxVenwAa+JRPwEC7QFjoPWuS5NZnBt6B3pu7EG2Ge0id1oLHZpPJdn3OQK+BQDiw9zStiHBTJQQQQ==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@img/sharp-linuxmusl-arm64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-fprJR6GtRsMt6Kyfq44IsChVZeGN97gTD331weR1ex1c1rypDEABN6Tm2xa1wE6lYb5DdEnk03NZPqA7Id21yg==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || 
>=21.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@img/sharp-linuxmusl-x64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-Jg8wNT1MUzIvhBFxViqrEhWDGzqymo3sV7z7ZsaWbZNDLXRJZoRGrjulp60YYtV4wfY8VIKcWidjojlLcWrd8Q==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@img/sharp-wasm32@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-OdWTEiVkY2PHwqkbBI8frFxQQFekHaSSkUIJkwzclWZe64O1X4UlUjqqqLaPbUpMOQk6FBu/HtlGXNblIs0huw==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"wasm32\"]},\"@img/sharp-win32-arm64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-WQ3AgWCWYSb2yt+IG8mnC6Jdk9Whs7O0gxphblsLvdhSpSTtmu69ZG1Gkb6NuvxsNACwiPV6cNSZNzt0KPsw7g==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@img/sharp-win32-ia32@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-FV9m/7NmeCmSHDD5j4+4pNI8Cp3aW+JvLoXcTUo0IqyjSfAZJ8dIUmijx1qaJsIiU+Hosw6xM5KijAWRJCSgNg==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"ia32\"],\"os\":[\"win32\"]},\"@img/sharp-win32-x64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-+29YMsqY2/9eFEiW93eqWnuLcWcufowXewwSNIT6UwZdUUCrM3oFjMWH/Z6/TMmb4hlFenmfAVbpWeup2jryCw==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || 
>=21.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@inquirer/ansi@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-S8qNSZiYzFd0wAcyG5AXCvUHC5Sr7xpZ9wZ2py9XR88jUz8wooStVx5M6dRzczbBWjic9NP7+rY0Xi7qqK/aMQ==\"},\"engines\":{\"node\":\">=18\"}},\"@inquirer/confirm@5.1.21\":{\"resolution\":{\"integrity\":\"sha512-KR8edRkIsUayMXV+o3Gv+q4jlhENF9nMYUZs9PA2HzrXeHI8M5uDag70U7RJn9yyiMZSbtF5/UexBtAVtZGSbQ==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@types/node\":\">=18\"},\"peerDependenciesMeta\":{\"@types/node\":{\"optional\":true}}},\"@inquirer/core@10.3.2\":{\"resolution\":{\"integrity\":\"sha512-43RTuEbfP8MbKzedNqBrlhhNKVwoK//vUFNW3Q3vZ88BLcrs4kYpGg+B2mm5p2K/HfygoCxuKwJJiv8PbGmE0A==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@types/node\":\">=18\"},\"peerDependenciesMeta\":{\"@types/node\":{\"optional\":true}}},\"@inquirer/external-editor@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-Oau4yL24d2B5IL4ma4UpbQigkVhzPDXLoqy1ggK4gnHg/stmkffJE4oOXHXF3uz0UEpywG68KcyXsyYpA1Re/Q==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@types/node\":\">=18\"},\"peerDependenciesMeta\":{\"@types/node\":{\"optional\":true}}},\"@inquirer/figures@1.0.15\":{\"resolution\":{\"integrity\":\"sha512-t2IEY+unGHOzAaVM5Xx6DEWKeXlDDcNPeDyUpsRc6CUhBfU3VQOEl+Vssh7VNp1dR8MdUJBWhuObjXCsVpjN5g==\"},\"engines\":{\"node\":\">=18\"}},\"@inquirer/type@3.0.10\":{\"resolution\":{\"integrity\":\"sha512-BvziSRxfz5Ov8ch0z/n3oijRSEcEsHnhggm4xFZe93DHcUCTlutlq9Ox4SVENAfcRD22UQq7T/atg9Wr3k09eA==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@types/node\":\">=18\"},\"peerDependenciesMeta\":{\"@types/node\":{\"optional\":true}}},\"@isaacs/cliui@8.0.2\":{\"resolution\":{\"integrity\":\"sha512-O8jcjabXaleOG9DQ0+ARXWZBTfnP4WNAqzuiJK7ll44AmxGKv/J2M4TPjxjY3znBCfvBXFzucm1twdyFybFqEA==\"},\"engines\":{\"node\":\">=12\"}},\"@istanbuljs/schema@0.1.3\":{\"resolution\":{\"integrity\":\"sha512-ZXRY4jNvVgSVQ8DL3LTcakaAtXwTVUxE81hslsyD2AtoXW/wVob10HkOJ1X/pAlcI7D+2YoZKg5do8G/w6RYgA=
=\"},\"engines\":{\"node\":\">=8\"}},\"@jest/diff-sequences@30.0.1\":{\"resolution\":{\"integrity\":\"sha512-n5H8QLDJ47QqbCNn5SuFjCRDrOLEZ0h8vAHCK5RL9Ls7Xa8AQLa/YxAc9UjFqoEDM48muwtBGjtMY5cr0PLDCw==\"},\"engines\":{\"node\":\"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0\"}},\"@jest/get-type@30.1.0\":{\"resolution\":{\"integrity\":\"sha512-eMbZE2hUnx1WV0pmURZY9XoXPkUYjpc55mb0CrhtdWLtzMQPFvu/rZkTLZFTsdaVQa+Tr4eWAteqcUzoawq/uA==\"},\"engines\":{\"node\":\"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0\"}},\"@jest/schemas@30.0.5\":{\"resolution\":{\"integrity\":\"sha512-DmdYgtezMkh3cpU8/1uyXakv3tJRcmcXxBOcO0tbaozPwpmh4YMsnWrQm9ZmZMfa5ocbxzbFk6O4bDPEc/iAnA==\"},\"engines\":{\"node\":\"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0\"}},\"@jridgewell/gen-mapping@0.3.13\":{\"resolution\":{\"integrity\":\"sha512-2kkt/7niJ6MgEPxF0bYdQ6etZaA+fQvDcLKckhy1yIQOzaoKjBBjSj63/aLVjYE3qhRt5dvM+uUyfCg6UKCBbA==\"}},\"@jridgewell/remapping@2.3.5\":{\"resolution\":{\"integrity\":\"sha512-LI9u/+laYG4Ds1TDKSJW2YPrIlcVYOwi2fUC6xB43lueCjgxV4lffOCZCtYFiH6TNOX+tQKXx97T4IKHbhyHEQ==\"}},\"@jridgewell/resolve-uri@3.1.2\":{\"resolution\":{\"integrity\":\"sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==\"},\"engines\":{\"node\":\">=6.0.0\"}},\"@jridgewell/source-map@0.3.11\":{\"resolution\":{\"integrity\":\"sha512-ZMp1V8ZFcPG5dIWnQLr3NSI1MiCU7UETdS/A0G8V/XWHvJv3ZsFqutJn1Y5RPmAPX6F3BiE397OqveU/9NCuIA==\"}},\"@jridgewell/sourcemap-codec@1.5.5\":{\"resolution\":{\"integrity\":\"sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==\"}},\"@jridgewell/trace-mapping@0.3.30\":{\"resolution\":{\"integrity\":\"sha512-GQ7Nw5G2lTu/BtHTKfXhKHok2WGetd4XYcVKGx00SjAk8GMwgJM3zr6zORiPGuOE+/vkc90KtTosSSvaCjKb2Q==\"}},\"@jridgewell/trace-mapping@0.3.31\":{\"resolution\":{\"integrity\":\"sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw==\"}},\"@jridgewell/trace-mapping@0.3.9\":{\"resolution\":{\"int
egrity\":\"sha512-3Belt6tdc8bPgAtbcmdtNJlirVoTmEb5e2gC94PnkwEW9jI6CAHUeoG85tjWP5WquqfavoMtMwiG4P926ZKKuQ==\"}},\"@jsonjoy.com/buffers@17.63.0\":{\"resolution\":{\"integrity\":\"sha512-IZB5WQRVNPEbuqouOQxZHl59AL6/ff+gmM20+xAx4SRX6DjZnQAxs03pQ2J6g5ssN+pzmShrBuGeksjlcZ3HCw==\"},\"engines\":{\"node\":\">=10.0\"},\"peerDependencies\":{\"tslib\":\"2\"}},\"@jsonjoy.com/codegen@17.63.0\":{\"resolution\":{\"integrity\":\"sha512-vQ18JiRQ8YfZQwzwCQs88rR5eGuy6AFfu+anz9RTvHQs9L4AE8dGA/mLzu6teh6CiSQTo2TNOQbqRh4Vy+7LEQ==\"},\"engines\":{\"node\":\">=10.0\"},\"peerDependencies\":{\"tslib\":\"2\"}},\"@jsonjoy.com/json-pointer@17.63.0\":{\"resolution\":{\"integrity\":\"sha512-wAW7rQsGW2zWtE+77cXU8lXsoXYCKa9eHptK3a2CCoNTm5YpPA3dev6LuEyaTDYKdF4DTjtwREv2PpjJidHE5w==\"},\"engines\":{\"node\":\">=10.0\"},\"peerDependencies\":{\"tslib\":\"2\"}},\"@jsonjoy.com/util@17.63.0\":{\"resolution\":{\"integrity\":\"sha512-AhpTIOFvuixKwem4d+ey4In78KJLCrDIUyp0IQ8xgpbs0IjNPTTfT3nXXbYMgJGxjegmqa9otl9nqbCvxOaiXw==\"},\"engines\":{\"node\":\">=10.0\"},\"peerDependencies\":{\"tslib\":\"2\"}},\"@lix-js/plugin-json@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-pCqzG08D8jLtVy8RnITPZIy92XNlRAJWLrlRrzh3ttwS/PWM/iXiOPPuzvb23MoFhYxerzJ8uDGXhEXfVagY2w==\"}},\"@lix-js/sdk@0.5.1\":{\"resolution\":{\"integrity\":\"sha512-FiDGp6BznOLdzNOCUC5OvTJ6KfdKGk8wd5edD1dhU46quS4vi4EkHjS/N+12PSpCfl/p3wBWSQD6vzvZcIHTFg==\"},\"engines\":{\"node\":\">=22\"}},\"@lix-js/server-protocol-schema@0.1.1\":{\"resolution\":{\"integrity\":\"sha512-jBeALB6prAbtr5q4vTuxnRZZv1M2rKe8iNqRQhFJ4Tv7150unEa0vKyz0hs8Gl3fUGsWaNJBh3J8++fpbrpRBQ==\"}},\"@manypkg/find-root@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-mki5uBvhHzO8kYYix/WRy2WX8S3B5wdVSc9D6KcU5lQNglP2yt58/VfLuAK49glRXChosY8ap2oJ1qgma3GUVA==\"}},\"@manypkg/get-packages@1.1.3\":{\"resolution\":{\"integrity\":\"sha512-fo+QhuU3qE/2TQMQmbVMqaQ6EWbMhi4ABWP+O4AM1NqPBuy0OrApV5LO6BrrgnhtAHS2NH6RrVk9OL181tTi8A==\"}},\"@marcbachmann/cel-js@2.5.2\":{\"resolution\":{\"integrity\":\"sha512-QnvFBFQ+2
T8gX4H4pmcgIfs3gXwfhRjv7hYoRRDLwKeXxgPEZ+zvExe1pGtPs8xPWHu4ng0CmllNpVHWi4kB9A==\"},\"engines\":{\"node\":\">=20.19.0\"}},\"@mermaid-js/parser@0.6.3\":{\"resolution\":{\"integrity\":\"sha512-lnjOhe7zyHjc+If7yT4zoedx2vo4sHaTmtkl1+or8BRTnCtDmcTpAjpzDSfCZrshM5bCoz0GyidzadJAH1xobA==\"}},\"@mswjs/interceptors@0.39.8\":{\"resolution\":{\"integrity\":\"sha512-2+BzZbjRO7Ct61k8fMNHEtoKjeWI9pIlHFTqBwZ5icHpqszIgEZbjb1MW5Z0+bITTCTl3gk4PDBxs9tA/csXvA==\"},\"engines\":{\"node\":\">=18\"}},\"@napi-rs/wasm-runtime@0.2.4\":{\"resolution\":{\"integrity\":\"sha512-9zESzOO5aDByvhIAsOy9TbpZ0Ur2AJbUI7UT73kcUTS2mxAMHOBaa1st/jAymNoCtvrit99kkzT1FZuXVcgfIQ==\"}},\"@napi-rs/wasm-runtime@1.1.4\":{\"resolution\":{\"integrity\":\"sha512-3NQNNgA1YSlJb/kMH1ildASP9HW7/7kYnRI2szWJaofaS1hWmbGI4H+d3+22aGzXXN9IJ+n+GiFVcGipJP18ow==\"},\"peerDependencies\":{\"@emnapi/core\":\"^1.7.1\",\"@emnapi/runtime\":\"^1.7.1\"}},\"@nodelib/fs.scandir@2.1.5\":{\"resolution\":{\"integrity\":\"sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==\"},\"engines\":{\"node\":\">= 8\"}},\"@nodelib/fs.stat@2.0.5\":{\"resolution\":{\"integrity\":\"sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A==\"},\"engines\":{\"node\":\">= 8\"}},\"@nodelib/fs.walk@1.2.8\":{\"resolution\":{\"integrity\":\"sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg==\"},\"engines\":{\"node\":\">= 
8\"}},\"@nrwl/nx-cloud@19.1.0\":{\"resolution\":{\"integrity\":\"sha512-krngXVPfX0Zf6+zJDtcI59/Pt3JfcMPMZ9C/+/x6rvz4WGgyv1s0MI4crEUM0Lx5ZpS4QI0WNDCFVQSfGEBXUg==\"}},\"@nx/nx-darwin-arm64@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-9BbkQnxGEDNX2ESbW4Zdrq1i09y6HOOgTuGbMJuy4e8F8rU/motMUqOpwmFgLHkLgPNZiOC2VXht3or/kQcpOg==\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@nx/nx-darwin-x64@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-dnkmap1kc6aLV8CW1ihjsieZyaDDjlIB5QA2reTCLNSdTV446K6Fh0naLdaoG4ZkF27zJA/qBOuAaLzRHFJp3g==\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@nx/nx-freebsd-x64@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-RpxDBGOPeDqJjpbV7F3lO/w1aIKfLyG/BM0OpJfTgFVpUIl50kMj5M1m4W9A8kvYkfOD9pDbUaWszom7d57yjg==\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"@nx/nx-linux-arm-gnueabihf@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-2OyBoag2738XWmWK3ZLBuhaYb7XmzT3f8HzomggLDJoDhwDekjgRoNbTxogAAj6dlXSeuPjO81BSlIfXQcth3w==\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@nx/nx-linux-arm64-gnu@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-2pg7/zjBDioUWJ3OY8Ixqy64eokKT5sh4iq1bk22bxOCf676aGrAu6khIxy4LBnPIdO0ZOK7KCJ7xOFP4phZqA==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@nx/nx-linux-arm64-musl@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-whNxh12au/inQtkZju1ZfXSqDS0hCh/anzVCXfLYWFstdwv61XiRmFCSHeN0gRDthlncXFdgKoT1bGG5aMYLtA==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@nx/nx-linux-x64-gnu@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-UHw57rzLio0AUDXV3l+xcxT3LjuXil7SHj+H8aYmXTpXktctQU2eYGOs5ATqJ1avVQRSejJugHF0i8oLErC28A==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@nx/nx-linux-x64-musl@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-qqE2Gy/DwOLIyePjM7GLHp/nDLZJnxHmqTeCiTQCp/BdbmqjRkSUz5oL+Uua0SNXaTu5hjAfvjXAhSTgBwVO6g==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@nx/nx-win32-arm64-msvc@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-NtEzMiRrSm2DdL4ntoDdjeze8DBrfZvLtx3Dq6+XmOhwnigR6umfWfZ6jbluZpuSQcxzQNVifqirdaQKYaYwDQ==\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\
"]},\"@nx/nx-win32-x64-msvc@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-gpG+Y4G/mxGrfkUls6IZEuuBxRaKLMSEoVFLMb9JyyaLEDusn+HJ1m90XsOedjNLBHGMFigsd/KCCsXfFn4njg==\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@oozcitak/dom@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-GjpKhkSYC3Mj4+lfwEyI1dqnsKTgwGy48ytZEhm4A/xnH/8z9M3ZVXKr/YGQi3uCLs1AEBS+x5T2JPiueEDW8w==\"},\"engines\":{\"node\":\">=20.0\"}},\"@oozcitak/infra@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-2g+E7hoE2dgCz/APPOEK5s3rMhJvNxSMBrP+U+j1OWsIbtSpWxxlUjq1lU8RIsFJNYv7NMlnVsCuHcUzJW+8vA==\"},\"engines\":{\"node\":\">=20.0\"}},\"@oozcitak/url@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-ZKfET8Ak1wsLAiLWNfFkZc/BraDccuTJKR6svTYc7sVjbR+Iu0vtXdiDMY4o6jaFl5TW2TlS7jbLl4VovtAJWQ==\"},\"engines\":{\"node\":\">=20.0\"}},\"@oozcitak/util@10.0.0\":{\"resolution\":{\"integrity\":\"sha512-hAX0pT/73190NLqBPPWSdBVGtbY6VOhWYK3qqHqtXQ1gK7kS2yz4+ivsN07hpJ6I3aeMtKP6J6npsEKOAzuTLA==\"},\"engines\":{\"node\":\">=20.0\"}},\"@open-draft/deferred-promise@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-CecwLWx3rhxVQF6V4bAgPS5t+So2sTbPgAzafKkVizyi7tlwpcFpdFqq+wqF2OwNBmqFuu6tOyouTuxgpMfzmA==\"}},\"@open-draft/logger@0.3.0\":{\"resolution\":{\"integrity\":\"sha512-X2g45fzhxH238HKO4xbSr7+wBS8Fvw6ixhTDuvLd5mqh6bJJCFAPwU9mPDxbcrRtfxv4u5IHCEH77BmxvXmmxQ==\"}},\"@open-draft/until@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-U69T3ItWHvLwGg5eJ0n3I62nWuE6ilHlmz7zM0npLBRvPRd7e6NYmg54vvRtP5mZG7kZqZCFVdsTWo7BPtBujg==\"}},\"@opentelemetry/api-logs@0.208.0\":{\"resolution\":{\"integrity\":\"sha512-CjruKY9V6NMssL/T1kAFgzosF1v9o6oeN+aX5JB/C/xPNtmgIJqcXHG7fA82Ou1zCpWGl4lROQUKwUNE1pMCyg==\"},\"engines\":{\"node\":\">=8.0.0\"}},\"@opentelemetry/api@1.9.0\":{\"resolution\":{\"integrity\":\"sha512-3giAOQvZiH5F9bMlMiv8+GSPMeqg0dbaeo58/0SlA9sxSqZhnUtxzX9/2FzyhS9sWQf5S0GJE0AKBrFqjpeYcg==\"},\"engines\":{\"node\":\">=8.0.0\"}},\"@opentelemetry/core@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-FuabnnUm8LflnieVxs6eP7Z383hgQU4W1e3KJS6aOG3RxWxcHyBxH8fDMHN
gu/gFx/M2jvTOW/4/PHhLz6bjWw==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.0.0 <1.10.0\"}},\"@opentelemetry/core@2.4.0\":{\"resolution\":{\"integrity\":\"sha512-KtcyFHssTn5ZgDu6SXmUznS80OFs/wN7y6MyFRRcKU6TOw8hNcGxKvt8hsdaLJfhzUszNSjURetq5Qpkad14Gw==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.0.0 <1.10.0\"}},\"@opentelemetry/exporter-logs-otlp-http@0.208.0\":{\"resolution\":{\"integrity\":\"sha512-jOv40Bs9jy9bZVLo/i8FwUiuCvbjWDI+ZW13wimJm4LjnlwJxGgB+N/VWOZUTpM+ah/awXeQqKdNlpLf2EjvYg==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\"^1.3.0\"}},\"@opentelemetry/otlp-exporter-base@0.208.0\":{\"resolution\":{\"integrity\":\"sha512-gMd39gIfVb2OgxldxUtOwGJYSH8P1kVFFlJLuut32L6KgUC4gl1dMhn+YC2mGn0bDOiQYSk/uHOdSjuKp58vvA==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\"^1.3.0\"}},\"@opentelemetry/otlp-transformer@0.208.0\":{\"resolution\":{\"integrity\":\"sha512-DCFPY8C6lAQHUNkzcNT9R+qYExvsk6C5Bto2pbNxgicpcSWbe2WHShLxkOxIdNcBiYPdVHv/e7vH7K6TI+C+fQ==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\"^1.3.0\"}},\"@opentelemetry/resources@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-1pNQf/JazQTMA0BiO5NINUzH0cbLbbl7mntLa4aJNmCCXSj0q03T5ZXXL0zw4G55TjdL9Tz32cznGClf+8zr5A==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.3.0 <1.10.0\"}},\"@opentelemetry/resources@2.4.0\":{\"resolution\":{\"integrity\":\"sha512-RWvGLj2lMDZd7M/5tjkI/2VHMpXebLgPKvBUd9LRasEWR2xAynDwEYZuLvY9P2NGG73HF07jbbgWX2C9oavcQg==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.3.0 
<1.10.0\"}},\"@opentelemetry/sdk-logs@0.208.0\":{\"resolution\":{\"integrity\":\"sha512-QlAyL1jRpOeaqx7/leG1vJMp84g0xKP6gJmfELBpnI4O/9xPX+Hu5m1POk9Kl+veNkyth5t19hRlN6tNY1sjbA==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.4.0 <1.10.0\"}},\"@opentelemetry/sdk-metrics@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-G5KYP6+VJMZzpGipQw7Giif48h6SGQ2PFKEYCybeXJsOCB4fp8azqMAAzE5lnnHK3ZVwYQrgmFbsUJO/zOnwGw==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.9.0 <1.10.0\"}},\"@opentelemetry/sdk-trace-base@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-xWQgL0Bmctsalg6PaXExmzdedSp3gyKV8mQBwK/j9VGdCDu2fmXIb2gAehBKbkXCpJ4HPkgv3QfoJWRT4dHWbw==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.3.0 <1.10.0\"}},\"@opentelemetry/semantic-conventions@1.38.0\":{\"resolution\":{\"integrity\":\"sha512-kocjix+/sSggfJhwXqClZ3i9Y/MI0fp7b+g7kCRm6psy2dsf8uApTRclwG18h8Avm7C9+fnt+O36PspJ/OzoWg==\"},\"engines\":{\"node\":\">=14\"}},\"@opral/markdown-wc@0.9.0\":{\"resolution\":{\"integrity\":\"sha512-m5I3WklqED3mTcUOR3J9CRFIttMYsCmSCZnZYXNdL0Oj0EtSVWXPetPhKsHTEK+MrWPaqfsiKIFq6+l7dKgtNg==\"},\"peerDependencies\":{\"@tiptap/core\":\"^3.0.0\"},\"peerDependenciesMeta\":{\"@tiptap/core\":{\"optional\":true}}},\"@opral/zettel-ast@0.1.0\":{\"resolution\":{\"integrity\":\"sha512-pZDiecYrpSxw7miv4ZSufCRB9sqFMXRa0Rf+LQcoEEh0VOBI6beOmvB+iXmWJ7vxMQINuS7yfsvm5ZyrTm/W5A==\"},\"engines\":{\"node\":\">=20\"}},\"@oxc-project/types@0.127.0\":{\"resolution\":{\"integrity\":\"sha512-aIYXQBo4lCbO4z0R3FHeucQHpF46l2LbMdxRvqvuRuW2OxdnSkcng5B8+K12spgLDj93rtN3+J2Vac/TIO+ciQ==\"}},\"@oxlint/darwin-arm64@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-kTmm1opqyn7iZopWHO3Ml4D/44pA5eknZBepgxCnTaPrW8XgCEUI85Q5AvOOvoNve8NziTYb8ax+CyuGJIgn/Q==\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@oxlint/darwin-x64@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-/hMfZ9j7ZzVPR
mMm02PHNc6MIMk0QYv5VowZJRIp40YLqLPvFfGNGZBj8e1fDVgZMFEGWDQK3yrt1uBKxXAK4Q==\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@oxlint/linux-arm64-gnu@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-iv4wdrwdCa8bhJxOpKlvfxqTs0LgW5tKBUMvH9B13zREHm1xT9JRZ8cQbbKiyC6LNdggwu5S6TSvODgAu7/DlA==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@oxlint/linux-arm64-musl@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-a3gTbnN1JzedxqYeGTkg38BAs/r3Krd2DPNs/MF7nnHthT3RzkPUk47isMePLuNc4e/Weljn7m2m/Onx22tiNg==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@oxlint/linux-x64-gnu@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-cCAyqyuKpFImjlgiBuuwSF+aDBW2h19/aCmHMTMSp6KXwhoQK7/Xx7/EhZKP5wiQJzVUYq5fXr0D8WmpLGsjRg==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@oxlint/linux-x64-musl@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-8VOJ4vQo0G1tNdaghxrWKjKZGg73tv+FoMDrtNYuUesqBHZN68FkYCsgPwEsacLhCmtoZrkF3ePDWDuWEpDyAg==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@oxlint/win32-arm64@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-N8KUtzP6gfEHKvaIBZCS9g8wRfqV5v55a/B8iJjIEhtMehcEM+UX+aYRsQ4dy5oBCrK3FEp4Yy/jHgb0moLm3Q==\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@oxlint/win32-x64@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-7tCyG0laduNQ45vzB9blVEGq/6DOvh7AFmiUAana8mTp0zIKQQmwJ21RqhazH0Rk7O6lL7JYzKcu+zaJHGpRLA==\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@pkgjs/parseargs@0.11.0\":{\"resolution\":{\"integrity\":\"sha512-+1VkjdD0QBLPodGrJUeqarH8VAIvQODIbwh9XpP5Syisf7YoQgsJKPNFoqqLQlu+VQ/tVSshMR6loPMn8U+dPg==\"},\"engines\":{\"node\":\">=14\"}},\"@polka/url@1.0.0-next.29\":{\"resolution\":{\"integrity\":\"sha512-wwQAWhWSuHaag8c4q/KN/vCoeOJYshAIvMQwD4GpSb3OiZklFfvAgmj0VCBBImRpuF/aFgIRzllXlVX93Jevww==\"}},\"@poppinss/colors@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-FvdDqtcRCtz6hThExcFOgW0cWX+xwSMWcRuQe5ZEb2m7cVQOAVZOIMt+/v9RxGiD9/OY16qJBXK4CVKWAPalBw==\"}},\"@poppinss/dumper@0.6.5\":{\"resolution\":{\"integrity\":\"sha512-NBdYIb90J7LfOI32dOewKI1r7wnkiH6m920puQ3qHUeZkxNkQiFnXVWoE6YtFSv6QOiPPf7ys6i+
HWWecDz7sw==\"}},\"@poppinss/exception@1.2.2\":{\"resolution\":{\"integrity\":\"sha512-m7bpKCD4QMlFCjA/nKTs23fuvoVFoA83brRKmObCUNmi/9tVu8Ve3w4YQAnJu4q3Tjf5fr685HYIC/IA2zHRSg==\"}},\"@posthog/core@1.9.1\":{\"resolution\":{\"integrity\":\"sha512-kRb1ch2dhQjsAapZmu6V66551IF2LnCbc1rnrQqnR7ArooVyJN9KOPXre16AJ3ObJz2eTfuP7x25BMyS2Y5Exw==\"}},\"@posthog/types@1.321.2\":{\"resolution\":{\"integrity\":\"sha512-nsMeHlVNlTB68JyV3/0+5FDreiTpUCStDH8ZUH/Hfsbw1howyf9a7DyURTwwhXdnyO0DksEFUIX+4IKCJs/H9g==\"}},\"@promptbook/utils@0.69.5\":{\"resolution\":{\"integrity\":\"sha512-xm5Ti/Hp3o4xHrsK9Yy3MS6KbDxYbq485hDsFvxqaNA7equHLPdo8H8faTitTeb14QCDfLW4iwCxdVYu5sn6YQ==\"}},\"@protobufjs/aspromise@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-j+gKExEuLmKwvz3OgROXtrJ2UG2x8Ch2YZUxahh+s1F2HZ+wAceUNLkvy6zKCPVRkU++ZWQrdxsUeQXmcg4uoQ==\"}},\"@protobufjs/base64@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-AZkcAA5vnN/v4PDqKyMR5lx7hZttPDgClv83E//FMNhR2TMcLUhfRUBHCmSl0oi9zMgDDqRUJkSxO3wm85+XLg==\"}},\"@protobufjs/codegen@2.0.4\":{\"resolution\":{\"integrity\":\"sha512-YyFaikqM5sH0ziFZCN3xDC7zeGaB/d0IUb9CATugHWbd1FRFwWwt4ld4OYMPWu5a3Xe01mGAULCdqhMlPl29Jg==\"}},\"@protobufjs/eventemitter@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-j9ednRT81vYJ9OfVuXG6ERSTdEL1xVsNgqpkxMsbIabzSo3goCjDIveeGv5d03om39ML71RdmrGNjG5SReBP/Q==\"}},\"@protobufjs/fetch@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-lljVXpqXebpsijW71PZaCYeIcE5on1w5DlQy5WH6GLbFryLUrBD4932W/E2BSpfRJWseIL4v/KPgBFxDOIdKpQ==\"}},\"@protobufjs/float@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-Ddb+kVXlXst9d+R9PfTIxh1EdNkgoRe5tOX6t01f1lYWOvJnSPDBlG241QLzcyPdoNTsblLUdujGSE4RzrTZGQ==\"}},\"@protobufjs/inquire@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-kdSefcPdruJiFMVSbn801t4vFK7KB/5gd2fYvrxhuJYg8ILrmn9SKSX2tZdV6V+ksulWqS7aXjBcRXl3wHoD9Q==\"}},\"@protobufjs/path@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-6JOcJ5Tm08dOHAbdR3GrvP+yUUfkjG5ePsHYczMFLq3ZmMkAD98cDgcT2iA1lJ9NVwFd4tH/iSSoe44YWkltEA==\"}},\"@protobufjs/pool@1.1.0\":{\"resoluti
on\":{\"integrity\":\"sha512-0kELaGSIDBKvcgS4zkjz1PeddatrjYcmMWOlAuAPwAeccUrPHdUqo/J6LiymHHEiJT5NrF1UVwxY14f+fy4WQw==\"}},\"@protobufjs/utf8@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-Vvn3zZrhQZkkBE8LSuW3em98c0FwgO4nxzv6OdSxPKJIEKY2bGbHn+mhGIPerzI4twdxaP8/0+06HBpwf345Lw==\"}},\"@puppeteer/browsers@2.13.1\":{\"resolution\":{\"integrity\":\"sha512-zmS4RTK9fbrc++WlAJhxYbfz3IjDeOmkK/CwwbLmk7ydfS9e2CiEeRJHEPvjDVElO/bwXbidwGA37Bsm6LzCnQ==\"},\"engines\":{\"node\":\">=18\"},\"hasBin\":true},\"@rolldown/binding-android-arm64@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-s70pVGhw4zqGeFnXWvAzJDlvxhlRollagdCCKRgOsgUOH3N1l0LIxf83AtGzmb5SiVM4Hjl5HyarMRfdfj3DaQ==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"arm64\"],\"os\":[\"android\"]},\"@rolldown/binding-darwin-arm64@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-4ksWc9n0mhlZpZ9PMZgTGjeOPRu8MB1Z3Tz0Mo02eWfWCHMW1zN82Qz/pL/rC+yQa+8ZnutMF0JjJe7PjwasYw==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@rolldown/binding-darwin-x64@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-SUSDOI6WwUVNcWxd02QEBjLdY1VPHvlEkw6T/8nYG322iYWCTxRb1vzk4E+mWWYehTp7ERibq54LSJGjmouOsw==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@rolldown/binding-freebsd-x64@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-hwnz3nw9dbJ05EDO/PvcjaaewqqDy7Y1rn1UO81l8iIK1GjenME75dl16ajbvSSMfv66WXSRCYKIqfgq2KCfxw==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"@rolldown/binding-linux-arm-gnueabihf@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-IS+W7epTcwANmFSQFrS1SivEXHtl1JtuQA9wlxrZTcNi6mx+FDOYrakGevvvTwgj2JvWiK8B29/qD9BELZPyXQ==\"},\"engines\":{\"node\":\"^20.19.0 || 
>=22.12.0\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@rolldown/binding-linux-arm64-gnu@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-e6usGaHKW5BMNZOymS1UcEYGowQMWcgZ71Z17Sl/h2+ZziNJ1a9n3Zvcz6LdRyIW5572wBCTH/Z+bKuZouGk9Q==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@rolldown/binding-linux-arm64-musl@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-b/CgbwAJpmrRLp02RPfhbudf5tZnN9nsPWK82znefso832etkem8H7FSZwxrOI9djcdTP7U6YfNhbRnh7djErg==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@rolldown/binding-linux-ppc64-gnu@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-4EII1iNGRUN5WwGbF/kOh/EIkoDN9HsupgLQoXfY+D1oyJm7/F4t5PYU5n8SWZgG0FEwakyM8pGgwcBYruGTlA==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"ppc64\"],\"os\":[\"linux\"]},\"@rolldown/binding-linux-s390x-gnu@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-AH8oq3XqQo4IibpVXvPeLDI5pzkpYn0WiZAfT05kFzoJ6tQNzwRdDYQ45M8I/gslbodRZwW8uxLhbSBbkv96rA==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"s390x\"],\"os\":[\"linux\"]},\"@rolldown/binding-linux-x64-gnu@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-cLnjV3xfo7KslbU41Z7z8BH/E1y5mzUYzAqih1d1MDaIGZRCMqTijqLv76/P7fyHuvUcfGsIpqCdddbxLLK9rA==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@rolldown/binding-linux-x64-musl@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-0phclDw1spsL7dUB37sIARuis2tAgomCJXAHZlpt8PXZ4Ba0dRP1e+66lsRqrfhISeN9bEGNjQs+T/Fbd7oYGw==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@rolldown/binding-openharmony-arm64@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-0ag/hEgXOwgw4t8QyQvUCxvEg+V0KBcA6YuOx9g0r02MprutRF5dyljgm3EmR02O292UX7UeS6HzWHAl6KgyhA==\"},\"engines\":{\"node\":\"^20.19.0 || 
>=22.12.0\"},\"cpu\":[\"arm64\"],\"os\":[\"openharmony\"]},\"@rolldown/binding-wasm32-wasi@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-LEXei6vo0E5wTGwpkJ4KoT3OZJRnglwldt5ziLzOlc6qqb55z4tWNq2A+PFqCJuvWWdP53CVhG1Z9NtToDPJrA==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"wasm32\"]},\"@rolldown/binding-win32-arm64-msvc@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-gUmyzBl3SPMa6hrqFUth9sVfcLBlYsbMzBx5PlexMroZStgzGqlZ26pYG89rBb45Mnia+oil6YAIFeEWGWhoZA==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@rolldown/binding-win32-x64-msvc@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-3hkiolcUAvPB9FLb3UZdfjVVNWherN1f/skkGWJP/fgSQhYUZpSIRr0/I8ZK9TkF3F7kxvJAk0+IcKvPHk9qQg==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@rolldown/pluginutils@1.0.0-beta.40\":{\"resolution\":{\"integrity\":\"sha512-s3GeJKSQOwBlzdUrj4ISjJj5SfSh+aqn0wjOar4Bx95iV1ETI7F6S/5hLcfAxZ9kXDcyrAkxPlqmd1ZITttf+w==\"}},\"@rolldown/pluginutils@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-n8iosDOt6Ig1UhJ2AYqoIhHWh/isz0xpicHTzpKBeotdVsTEcxsSA/i3EVM7gQAj0rU27OLAxCjzlj15IWY7bg==\"}},\"@rolldown/pluginutils@1.0.0-rc.7\":{\"resolution\":{\"integrity\":\"sha512-qujRfC8sFVInYSPPMLQByRh7zhwkGFS4+tyMQ83srV1qrxL4g8E2tyxVVyxd0+8QeBM1mIk9KbWxkegRr76XzA==\"}},\"@rollup/rollup-android-arm-eabi@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-yDPzwsgiFO26RJA4nZo8I+xqzh7sJTZIWQOxn+/XOdPE31lAvLIYCKqjV+lNH/vxE2L2iH3plKxDCRK6i+CwhA==\"},\"cpu\":[\"arm\"],\"os\":[\"android\"]},\"@rollup/rollup-android-arm64@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-k8FontTxIE7b0/OGKeSN5B6j25EuppBcWM33Z19JoVT7UTXFSo3D9CdU39wGTeb29NO3XxpMNauh09B+Ibw+9g==\"},\"cpu\":[\"arm64\"],\"os\":[\"android\"]},\"@rollup/rollup-darwin-arm64@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-A6s4gJpomNBtJ2yioj8bflM2oogDwzUiMl2yNJ2v9E7++sHrSrsQ29fOfn5DM/iCzpWcebNYEdXpaK4tr2RhfQ==\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"
]},\"@rollup/rollup-darwin-x64@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-e6XqVmXlHrBlG56obu9gDRPW3O3hLxpwHpLsBJvuI8qqnsrtSZ9ERoWUXtPOkY8c78WghyPHZdmPhHLWNdAGEw==\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@rollup/rollup-freebsd-arm64@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-v0E9lJW8VsrwPux5Qe5CwmH/CF/2mQs6xU1MF3nmUxmZUCHazCjLgYvToOk+YuuUqLQBio1qkkREhxhc656ViA==\"},\"cpu\":[\"arm64\"],\"os\":[\"freebsd\"]},\"@rollup/rollup-freebsd-x64@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-ClAmAPx3ZCHtp6ysl4XEhWU69GUB1D+s7G9YjHGhIGCSrsg00nEGRRZHmINYxkdoJehde8VIsDC5t9C0gb6yqA==\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"@rollup/rollup-linux-arm-gnueabihf@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-EPlb95nUsz6Dd9Qy13fI5kUPXNSljaG9FiJ4YUGU1O/Q77i5DYFW5KR8g1OzTcdZUqQQ1KdDqsTohdFVwCwjqg==\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-arm-musleabihf@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-BOmnVW+khAUX+YZvNfa0tGTEMVVEerOxN0pDk2E6N6DsEIa2Ctj48FOMfNDdrwinocKaC7YXUZ1pHlKpnkja/Q==\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-arm64-gnu@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-Xt2byDZ+6OVNuREgBXr4+CZDJtrVso5woFtpKdGPhpTPHcNG7D8YXeQzpNbFRxzTVqJf7kvPMCub/pcGUWgBjA==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-arm64-musl@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-+LdZSldy/I9N8+klim/Y1HsKbJ3BbInHav5qE9Iy77dtHC/pibw1SR/fXlWyAk0ThnpRKoODwnAuSjqxFRDHUQ==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-loong64-gnu@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-8ms8sjmyc1jWJS6WdNSA23rEfdjWB30LH8Wqj0Cqvv7qSHnvw6kgMMXRdop6hkmGPlyYBdRPkjJnj3KCUHV/uQ==\"},\"cpu\":[\"loong64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-ppc64-gnu@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-3HRQLUQbpBDMmzoxPJYd3W6vrVHOo2cVW8RUo87Xz0JPJcBLBr5kZ1pGcQAhdZgX9VV7NbGNipah1omKKe23/g==\"},\"cpu\":[\"ppc64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-riscv64-gnu@4.53.2\":{\"resolution\":{\"integrit
y\":\"sha512-fMjKi+ojnmIvhk34gZP94vjogXNNUKMEYs+EDaB/5TG/wUkoeua7p7VCHnE6T2Tx+iaghAqQX8teQzcvrYpaQA==\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-riscv64-musl@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-XuGFGU+VwUUV5kLvoAdi0Wz5Xbh2SrjIxCtZj6Wq8MDp4bflb/+ThZsVxokM7n0pcbkEr2h5/pzqzDYI7cCgLQ==\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-s390x-gnu@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-w6yjZF0P+NGzWR3AXWX9zc0DNEGdtvykB03uhonSHMRa+oWA6novflo2WaJr6JZakG2ucsyb+rvhrKac6NIy+w==\"},\"cpu\":[\"s390x\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-x64-gnu@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-yo8d6tdfdeBArzC7T/PnHd7OypfI9cbuZzPnzLJIyKYFhAQ8SvlkKtKBMbXDxe1h03Rcr7u++nFS7tqXz87Gtw==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-x64-musl@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-ah59c1YkCxKExPP8O9PwOvs+XRLKwh/mV+3YdKqQ5AMQ0r4M4ZDuOrpWkUaqO7fzAHdINzV9tEVu8vNw48z0lA==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@rollup/rollup-openharmony-arm64@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-4VEd19Wmhr+Zy7hbUsFZ6YXEiP48hE//KPLCSVNY5RMGX2/7HZ+QkN55a3atM1C/BZCGIgqN+xrVgtdak2S9+A==\"},\"cpu\":[\"arm64\"],\"os\":[\"openharmony\"]},\"@rollup/rollup-win32-arm64-msvc@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-IlbHFYc/pQCgew/d5fslcy1KEaYVCJ44G8pajugd8VoOEI8ODhtb/j8XMhLpwHCMB3yk2J07ctup10gpw2nyMA==\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@rollup/rollup-win32-ia32-msvc@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-lNlPEGgdUfSzdCWU176ku/dQRnA7W+Gp8d+cWv73jYrb8uT7HTVVxq62DUYxjbaByuf1Yk0RIIAbDzp+CnOTFg==\"},\"cpu\":[\"ia32\"],\"os\":[\"win32\"]},\"@rollup/rollup-win32-x64-gnu@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-S6YojNVrHybQis2lYov1sd+uj7K0Q05NxHcGktuMMdIQ2VixGwAfbJ23NnlvvVV1bdpR2m5MsNBViHJKcA4ADw==\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@rollup/rollup-win32-x64-msvc@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-k+/Rkcyx//P6fetPoLMb8pBeqJBNGx81uuf7iljX9++yNBVRDQgD04L+SV
XmXmh5ZP4/WOp4mWF0kmi06PW2tA==\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@shikijs/core@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-8TOG6yG557q+fMsSVa8nkEDOZNTSxjbbR8l6lF2gyr6Np+jrPlslqDxQkN6rMXCECQ3isNPZAGszAfYoJOPGlg==\"}},\"@shikijs/engine-javascript@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-ZedbOFpopibdLmvTz2sJPJgns8Xvyabe2QbmqMTz07kt1pTzfEvKZc5IqPVO/XFiEbbNyaOpjPBkkr1vlwS+qg==\"}},\"@shikijs/engine-oniguruma@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-HnqFsV11skAHvOArMZdLBZZApRSYS4LSztk2K3016Y9VCyZISnlYUYsL2hzlS7tPqKHvNqmI5JSUJZprXloMvA==\"}},\"@shikijs/langs@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-WpRvEFvkVvO65uKYW4Rzxs+IG0gToyM8SARQMtGGsH4GDMNZrr60qdggXrFOsdfOVssG/QQGEl3FnJ3EZ+8w8A==\"}},\"@shikijs/themes@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-8ow2zWb1IDvCKjYb0KiLNrK4offFdkfNVPXb1OZykpLCzRU6j+efkY+Y7VQjNlNFXonSw+4AOdGYtmqykDbRiQ==\"}},\"@shikijs/types@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-BnP+y/EQnhihgHy4oIAN+6FFtmfTekwOLsQbRw9hOKwqgNy8Bdsjq8B05oAt/ZgvIWWFrshV71ytOrlPfYjIJw==\"}},\"@shikijs/vscode-textmate@10.0.2\":{\"resolution\":{\"integrity\":\"sha512-83yeghZ2xxin3Nj8z1NMd/NCuca+gsYXswywDy5bHvwlWL8tpTQmzGeUuHd9FC3E/SBEMvzJRwWEOz5gGes9Qg==\"}},\"@sinclair/typebox@0.34.40\":{\"resolution\":{\"integrity\":\"sha512-gwBNIP8ZAYev/ORDWW0QvxdwPXwxBtLsdsJgSc7eDIRt8ubP+rxUBzPsrwnu16fgEF8Bx4lh/+mvQvJzcTM6Kw==\"}},\"@sindresorhus/is@7.1.1\":{\"resolution\":{\"integrity\":\"sha512-rO92VvpgMc3kfiTjGT52LEtJ8Yc5kCWhZjLQ3LwlA4pSgPpQO7bVpYXParOD8Jwf+cVQECJo3yP/4I8aZtUQTQ==\"},\"engines\":{\"node\":\">=18\"}},\"@speed-highlight/core@1.2.12\":{\"resolution\":{\"integrity\":\"sha512-uilwrK0Ygyri5dToHYdZSjcvpS2ZwX0w5aSt3GCEN9hrjxWCoeV4Z2DTXuxjwbntaLQIEEAlCeNQss5SoHvAEA==\"}},\"@sqlite.org/sqlite-wasm@3.50.4-build1\":{\"resolution\":{\"integrity\":\"sha512-Qig2Wso7gPkU1PtXwFzndh+CTRzrIFxVGqv6eCetjU7YqxlHItj+GvQYwYTppCRgAPawtRN/4AJcEgB9xDHGug==\"},\"hasBin\":true},\"@standard-schema/spec@1.0.0\":{\"resolution\":{\"integrity\":\"sha51
2-m2bOd0f2RT9k8QJx1JN85cZYyH1RqFBdlwtkSlf4tBDYLCiiZnv1fIIwacK6cqwXavOydf0NPToMQgpKq+dVlA==\"}},\"@standard-schema/spec@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w==\"}},\"@tailwindcss/node@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-Ai7+yQPxz3ddrDQzFfBKdHEVBg0w3Zl83jnjuwxnZOsnH9pGn93QHQtpU0p/8rYWxvbFZHneni6p1BSLK4DkGA==\"}},\"@tailwindcss/oxide-android-arm64@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-e7MOr1SAn9U8KlZzPi1ZXGZHeC5anY36qjNwmZv9pOJ8E4Q6jmD1vyEHkQFmNOIN7twGPEMXRHmitN4zCMN03g==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"arm64\"],\"os\":[\"android\"]},\"@tailwindcss/oxide-darwin-arm64@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-tSC/Kbqpz/5/o/C2sG7QvOxAKqyd10bq+ypZNf+9Fi2TvbVbv1zNpcEptcsU7DPROaSbVgUXmrzKhurFvo5eDg==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@tailwindcss/oxide-darwin-x64@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-yPyUXn3yO/ufR6+Kzv0t4fCg2qNr90jxXc5QqBpjlPNd0NqyDXcmQb/6weunH/MEDXW5dhyEi+agTDiqa3WsGg==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@tailwindcss/oxide-freebsd-x64@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-BoMIB4vMQtZsXdGLVc2z+P9DbETkiopogfWZKbWwM8b/1Vinbs4YcUwo+kM/KeLkX3Ygrf4/PsRndKaYhS8Eiw==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"@tailwindcss/oxide-linux-arm-gnueabihf@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-7pIHBLTHYRAlS7V22JNuTh33yLH4VElwKtB3bwchK/UaKUPpQ0lPQiOWcbm4V3WP2I6fNIJ23vABIvoy2izdwA==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@tailwindcss/oxide-linux-arm64-gnu@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-+E4wxJ0ZGOzSH325reXTWB48l42i93kQqMvDyz5gqfRzRZ7faNhnmvlV4EPGJU3QJM/3Ab5jhJ5pCRUsKn6OQw==\"},\"engines\":{\"node\":\">= 
20\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@tailwindcss/oxide-linux-arm64-musl@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-bBADEGAbo4ASnppIziaQJelekCxdMaxisrk+fB7Thit72IBnALp9K6ffA2G4ruj90G9XRS2VQ6q2bCKbfFV82g==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@tailwindcss/oxide-linux-x64-gnu@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-7Mx25E4WTfnht0TVRTyC00j3i0M+EeFe7wguMDTlX4mRxafznw0CA8WJkFjWYH5BlgELd1kSjuU2JiPnNZbJDA==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@tailwindcss/oxide-linux-x64-musl@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-2wwJRF7nyhOR0hhHoChc04xngV3iS+akccHTGtz965FwF0up4b2lOdo6kI1EbDaEXKgvcrFBYcYQQ/rrnWFVfA==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@tailwindcss/oxide-wasm32-wasi@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-FQsqApeor8Fo6gUEklzmaa9994orJZZDBAlQpK2Mq+DslRKFJeD6AjHpBQ0kZFQohVr8o85PPh8eOy86VlSCmw==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"wasm32\"],\"bundledDependencies\":[\"@napi-rs/wasm-runtime\",\"@emnapi/core\",\"@emnapi/runtime\",\"@tybys/wasm-util\",\"@emnapi/wasi-threads\",\"tslib\"]},\"@tailwindcss/oxide-win32-arm64-msvc@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-L9BXqxC4ToVgwMFqj3pmZRqyHEztulpUJzCxUtLjobMCzTPsGt1Fa9enKbOpY2iIyVtaHNeNvAK8ERP/64sqGQ==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@tailwindcss/oxide-win32-x64-msvc@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-ESlKG0EpVJQwRjXDDa9rLvhEAh0mhP1sF7sap9dNZT0yyl9SAG6T7gdP09EH0vIv0UNTlo6jPWyujD6559fZvw==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@tailwindcss/oxide@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-9El/iI069DKDSXwTvB9J4BwdO5JhRrOweGaK25taBAvBXyXqJAX+Jqdvs8r8gKpsI/1m0LeJLyQYTf/WLrBT1Q==\"},\"engines\":{\"node\":\">= 
20\"}},\"@tailwindcss/vite@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-pCvohwOCspk3ZFn6eJzrrX3g4n2JY73H6MmYC87XfGPyTty4YsCjYTMArRZm/zOI8dIt3+EcrLHAFPe5A4bgtw==\"},\"peerDependencies\":{\"vite\":\"^5.2.0 || ^6 || ^7 || ^8\"}},\"@tanstack/history@1.161.6\":{\"resolution\":{\"integrity\":\"sha512-NaOGLRrddszbQj9upGat6HG/4TKvXLvu+osAIgfxPYA+eIvYKv8GKDJOrY2D3/U9MRnKfMWD7bU4jeD4xmqyIg==\"},\"engines\":{\"node\":\">=20.19\"}},\"@tanstack/react-router@1.169.2\":{\"resolution\":{\"integrity\":\"sha512-OJM7Kguc7ERnweaNRWsyWgIKcl3z23rD1B4jaxjzd9RGdnzpt2HfrWa9rggbT0Hfzhfo4D2ZmsfoTme035tniQ==\"},\"engines\":{\"node\":\">=20.19\"},\"peerDependencies\":{\"react\":\">=18.0.0 || >=19.0.0\",\"react-dom\":\">=18.0.0 || >=19.0.0\"}},\"@tanstack/react-start-client@1.166.48\":{\"resolution\":{\"integrity\":\"sha512-6fqwCwe6v+Nvtdf6vg6gxs/0gCXyZEHF18EslNeG/kca2wnXYFuXRhqGJjJaEgMk3WF4IE9mUgFuBSAOY3P7nQ==\"},\"engines\":{\"node\":\">=22.12.0\"},\"peerDependencies\":{\"react\":\">=18.0.0 || >=19.0.0\",\"react-dom\":\">=18.0.0 || >=19.0.0\"}},\"@tanstack/react-start-rsc@0.0.43\":{\"resolution\":{\"integrity\":\"sha512-2RCa8Caw/HKrHi9pxmUvsiUrBtjddeBiP93e7OYQOCL3rHxoMD9CSscwT9/ziCaqnIOuBFbKWgvRTahR4jSfsw==\"},\"engines\":{\"node\":\">=22.12.0\"},\"peerDependencies\":{\"@rspack/core\":\">=2.0.0-0\",\"@vitejs/plugin-rsc\":\">=0.5.20\",\"react\":\">=18.0.0 || >=19.0.0\",\"react-dom\":\">=18.0.0 || >=19.0.0\",\"react-server-dom-rspack\":\">=0.0.2\"},\"peerDependenciesMeta\":{\"@rspack/core\":{\"optional\":true},\"@vitejs/plugin-rsc\":{\"optional\":true},\"react-server-dom-rspack\":{\"optional\":true}}},\"@tanstack/react-start-server@1.166.52\":{\"resolution\":{\"integrity\":\"sha512-46Gx+byIndYywUtyna5h3qatHipJkPFqo/miexfuYPgeVAI6ypQzsw7wxF194H6VAP43m2q+fdLPBXStufoOGw==\"},\"engines\":{\"node\":\">=22.12.0\"},\"peerDependencies\":{\"react\":\">=18.0.0 || >=19.0.0\",\"react-dom\":\">=18.0.0 || 
>=19.0.0\"}},\"@tanstack/react-start@1.167.64\":{\"resolution\":{\"integrity\":\"sha512-gxtesUkHIZmKR/OEFAx6ifedIs7UM1cG5B/TJhcs6c/BrJpjeQIrkF9/GmWRpslaWCpo3tXA2IOxNSH49KFhoA==\"},\"engines\":{\"node\":\">=22.12.0\"},\"peerDependencies\":{\"@rsbuild/core\":\"^2.0.0\",\"@vitejs/plugin-rsc\":\"*\",\"react\":\">=18.0.0 || >=19.0.0\",\"react-dom\":\">=18.0.0 || >=19.0.0\",\"vite\":\">=7.0.0\"},\"peerDependenciesMeta\":{\"@rsbuild/core\":{\"optional\":true},\"@vitejs/plugin-rsc\":{\"optional\":true},\"vite\":{\"optional\":true}}},\"@tanstack/react-store@0.9.3\":{\"resolution\":{\"integrity\":\"sha512-y2iHd/N9OkoQbFJLUX1T9vbc2O9tjH0pQRgTcx1/Nz4IlwLvkgpuglXUx+mXt0g5ZDFrEeDnONPqkbfxXJKwRg==\"},\"peerDependencies\":{\"react\":\"^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0\",\"react-dom\":\"^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0\"}},\"@tanstack/router-core@1.169.2\":{\"resolution\":{\"integrity\":\"sha512-5sm0DJF1A7Mz+9gy4Gz/lLovNailK3yot4vYvz9MkBUPw26uLnhQiR8hSCYxucjE0wD6Mdlc5l+Z0/XTlZ7xHw==\"},\"engines\":{\"node\":\">=20.19\"}},\"@tanstack/router-generator@1.166.41\":{\"resolution\":{\"integrity\":\"sha512-XpnkVvk9AlCtw5vggJsnSx3MdKGk8Asopwy9wUFAqFAHqlrRJzV9PoZ5kGkNEJMOYYcMTriJLN4D+kyXRUJpDQ==\"},\"engines\":{\"node\":\">=20.19\"}},\"@tanstack/router-plugin@1.167.34\":{\"resolution\":{\"integrity\":\"sha512-hU0Cuw79Yo6FGPBB0mW9Ik8bnTzmnUKtbgbvmIzeFdK3wKBPS4+xN7kcxVaBqXfP6xR3PFkIf2SSoYsiuLjVtg==\"},\"engines\":{\"node\":\">=20.19\"},\"peerDependencies\":{\"@rsbuild/core\":\">=1.0.2 || ^2.0.0\",\"@tanstack/react-router\":\"^1.169.2\",\"vite\":\">=5.0.0 || >=6.0.0 || >=7.0.0 || >=8.0.0\",\"vite-plugin-solid\":\"^2.11.10 || 
^3.0.0-0\",\"webpack\":\">=5.92.0\"},\"peerDependenciesMeta\":{\"@rsbuild/core\":{\"optional\":true},\"@tanstack/react-router\":{\"optional\":true},\"vite\":{\"optional\":true},\"vite-plugin-solid\":{\"optional\":true},\"webpack\":{\"optional\":true}}},\"@tanstack/router-utils@1.161.8\":{\"resolution\":{\"integrity\":\"sha512-xyiLWEKjfBAVhauDSSjXxyf7s8elU6SM+V050sbkofvGmIIvkwPFtDsX7Gvwh14kBd6iCwAT+RiPvXTxAptY0Q==\"},\"engines\":{\"node\":\">=20.19\"}},\"@tanstack/start-client-core@1.168.2\":{\"resolution\":{\"integrity\":\"sha512-/bckv9k/yxY4VmSY2V2MeX7NBsS5uqGvdSPs5WIvW3Uv35DXPrdiumKXTNJeZRNRMtxrM+YfxQPjXLx3C7ykvg==\"},\"engines\":{\"node\":\">=22.12.0\"}},\"@tanstack/start-fn-stubs@1.161.6\":{\"resolution\":{\"integrity\":\"sha512-Y6QSlGiLga8cHfvxGGaonXIlt2bIUTVdH6AMjmpMp7+ANNCp+N96GQbjjhLye3JkaxDfP68x5iZA8NK4imgRig==\"},\"engines\":{\"node\":\">=22.12.0\"}},\"@tanstack/start-plugin-core@1.169.19\":{\"resolution\":{\"integrity\":\"sha512-z3/Tkytb6eRQKDnFU31QLimwrcVyDi9uHMtUQKmJkxQg+Bz85di+MxMrbnvd8XXP9OHcFlWK8HpG/HpVncZq4Q==\"},\"engines\":{\"node\":\">=22.12.0\"},\"peerDependencies\":{\"@rsbuild/core\":\"^2.0.0\",\"vite\":\">=7.0.0\"},\"peerDependenciesMeta\":{\"@rsbuild/core\":{\"optional\":true},\"vite\":{\"optional\":true}}},\"@tanstack/start-server-core@1.167.30\":{\"resolution\":{\"integrity\":\"sha512-GC0PXzYYSEwfAOC2NxGXFUyYvfbSjVoqnIrzJsyInKd8xQxGEQaVdrebbyx9TV5cj7A5e7EJcWAsf3G3wRDQBw==\"},\"engines\":{\"node\":\">=22.12.0\"}},\"@tanstack/start-storage-context@1.166.35\":{\"resolution\":{\"integrity\":\"sha512-ZKDkKiorJrKwfEHjatEwRHG7EP3raJPhh6CSl4CFmHW0naIvwaW5gQcxcT8IlHtoGDLYDAjBEcSr3MZyXgqmOA==\"},\"engines\":{\"node\":\">=22.12.0\"}},\"@tanstack/store@0.9.3\":{\"resolution\":{\"integrity\":\"sha512-8reSzl/qGWGGVKhBoxXPMWzATSbZLZFWhwBAFO9NAyp0TxzfBP0mIrGb8CP8KrQTmvzXlR/vFPPUrHTLBGyFyw==\"}},\"@tanstack/virtual-file-routes@1.161.7\":{\"resolution\":{\"integrity\":\"sha512-olW33+Cn+bsCsZKPwEGhlkqS6w3M2slFv11JIobdnCFKMLG97oAI2kWKdx5/zsywTL8flpnoIgaZZPlQTF
YhdQ==\"},\"engines\":{\"node\":\">=20.19\"},\"hasBin\":true},\"@testing-library/dom@10.4.1\":{\"resolution\":{\"integrity\":\"sha512-o4PXJQidqJl82ckFaXUeoAW+XysPLauYI43Abki5hABd853iMhitooc6znOnczgbTYmEP6U6/y1ZyKAIsvMKGg==\"},\"engines\":{\"node\":\">=18\"}},\"@testing-library/react@16.3.0\":{\"resolution\":{\"integrity\":\"sha512-kFSyxiEDwv1WLl2fgsq6pPBbw5aWKrsY2/noi1Id0TK0UParSF62oFQFGHXIyaG4pp2tEub/Zlel+fjjZILDsw==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@testing-library/dom\":\"^10.0.0\",\"@types/react\":\"^18.0.0 || ^19.0.0\",\"@types/react-dom\":\"^18.0.0 || ^19.0.0\",\"react\":\"^18.0.0 || ^19.0.0\",\"react-dom\":\"^18.0.0 || ^19.0.0\"},\"peerDependenciesMeta\":{\"@types/react\":{\"optional\":true},\"@types/react-dom\":{\"optional\":true}}},\"@testing-library/user-event@14.6.1\":{\"resolution\":{\"integrity\":\"sha512-vq7fv0rnt+QTXgPxr5Hjc210p6YKq2kmdziLgnsZGgLJ9e6VAShx1pACLuRjd/AS/sr7phAR58OIIpf0LlmQNw==\"},\"engines\":{\"node\":\">=12\",\"npm\":\">=6\"},\"peerDependencies\":{\"@testing-library/dom\":\">=7.21.4\"}},\"@tootallnate/quickjs-emscripten@0.23.0\":{\"resolution\":{\"integrity\":\"sha512-C5Mc6rdnsaJDjO3UpGW/CQTHtCKaYlScZTly4JIu97Jxo/odCiH0ITnDXSJPTOrEKk/ycSZ0AOgTmkDtkOsvIA==\"}},\"@tybys/wasm-util@0.10.2\":{\"resolution\":{\"integrity\":\"sha512-RoBvJ2X0wuKlWFIjrwffGw1IqZHKQqzIchKaadZZfnNpsAYp2mM0h36JtPCjNDAHGgYez/15uMBpfGwchhiMgg==\"}},\"@tybys/wasm-util@0.9.0\":{\"resolution\":{\"integrity\":\"sha512-6+7nlbMVX/PVDCwaIQ8nTOPveOcFLSt8GcXdx8hD0bt39uWxYT88uXzqTd4fTvqta7oeUJqudepapKNt2DYJFw==\"}},\"@types/aria-query@5.0.4\":{\"resolution\":{\"integrity\":\"sha512-rfT93uj5s0PRL7EzccGMs3brplhcrghnDoV26NqKhCAS1hVo+WdNsPvE/yb6ilfr5hi2MEk6d5EWJTKdxg8jVw==\"}},\"@types/chai@5.2.2\":{\"resolution\":{\"integrity\":\"sha512-8kB30R7Hwqf40JPiKhVzodJs2Qc1ZJ5zuT3uzw5Hq/dhNCl3G3l83jfpdI1e20BP348+fV7VIL/+FxaXkqBmWg==\"}},\"@types/chai@5.2.3\":{\"resolution\":{\"integrity\":\"sha512-Mw558oeA9fFbv65/y4mHtXDs9bPnFMZAL/jxdPFUpOHHIXX91mcgEHbS5Lahr+pwZFR
8A7GQleRWeI6cGFC2UA==\"}},\"@types/cookie@0.6.0\":{\"resolution\":{\"integrity\":\"sha512-4Kh9a6B2bQciAhf7FSuMRRkUWecJgJu9nPnx3yzpsfXX/c50REIqpHY4C82bXP90qrLtXtkDxTZosYO3UpOwlA==\"}},\"@types/d3-array@3.2.1\":{\"resolution\":{\"integrity\":\"sha512-Y2Jn2idRrLzUfAKV2LyRImR+y4oa2AntrgID95SHJxuMUrkNXmanDSed71sRNZysveJVt1hLLemQZIady0FpEg==\"}},\"@types/d3-axis@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-pYeijfZuBd87T0hGn0FO1vQ/cgLk6E1ALJjfkC0oJ8cbwkZl3TpgS8bVBLZN+2jjGgg38epgxb2zmoGtSfvgMw==\"}},\"@types/d3-brush@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-nH60IZNNxEcrh6L1ZSMNA28rj27ut/2ZmI3r96Zd+1jrZD++zD3LsMIjWlvg4AYrHn/Pqz4CF3veCxGjtbqt7A==\"}},\"@types/d3-chord@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-LFYWWd8nwfwEmTZG9PfQxd17HbNPksHBiJHaKuY1XeqscXacsS2tyoo6OdRsjf+NQYeB6XrNL3a25E3gH69lcg==\"}},\"@types/d3-color@3.1.3\":{\"resolution\":{\"integrity\":\"sha512-iO90scth9WAbmgv7ogoq57O9YpKmFBbmoEoCHDB2xMBY0+/KVrqAaCDyCE16dUspeOvIxFFRI+0sEtqDqy2b4A==\"}},\"@types/d3-contour@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-BjzLgXGnCWjUSYGfH1cpdo41/hgdWETu4YxpezoztawmqsvCeep+8QGfiY6YbDvfgHz/DkjeIkkZVJavB4a3rg==\"}},\"@types/d3-delaunay@6.0.4\":{\"resolution\":{\"integrity\":\"sha512-ZMaSKu4THYCU6sV64Lhg6qjf1orxBthaC161plr5KuPHo3CNm8DTHiLw/5Eq2b6TsNP0W0iJrUOFscY6Q450Hw==\"}},\"@types/d3-dispatch@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-4fvZhzMeeuBJYZXRXrRIQnvUYfyXwYmLsdiN7XXmVNQKKw1cM8a5WdID0g1hVFZDqT9ZqZEY5pD44p24VS7iZQ==\"}},\"@types/d3-drag@3.0.7\":{\"resolution\":{\"integrity\":\"sha512-HE3jVKlzU9AaMazNufooRJ5ZpWmLIoc90A37WU2JMmeq28w1FQqCZswHZ3xR+SuxYftzHq6WU6KJHvqxKzTxxQ==\"}},\"@types/d3-dsv@3.0.7\":{\"resolution\":{\"integrity\":\"sha512-n6QBF9/+XASqcKK6waudgL0pf/S5XHPPI8APyMLLUHd8NqouBGLsU8MgtO7NINGtPBtk9Kko/W4ea0oAspwh9g==\"}},\"@types/d3-ease@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-NcV1JjO5oDzoK26oMzbILE6HW7uVXOHLQvHshBUW4UMdZGfiY6v5BeQwh9a9tCzv+CeefZQHJt5SRgK154RtiA==\"}},\"@types/d3-fetch@3.0.7\":{\"resolution\":{\"integrity\":\"sha512
-fTAfNmxSb9SOWNB9IoG5c8Hg6R+AzUHDRlsXsDZsNp6sxAEOP0tkP3gKkNSO/qmHPoBFTxNrjDprVHDQDvo5aA==\"}},\"@types/d3-force@3.0.10\":{\"resolution\":{\"integrity\":\"sha512-ZYeSaCF3p73RdOKcjj+swRlZfnYpK1EbaDiYICEEp5Q6sUiqFaFQ9qgoshp5CzIyyb/yD09kD9o2zEltCexlgw==\"}},\"@types/d3-format@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-fALi2aI6shfg7vM5KiR1wNJnZ7r6UuggVqtDA+xiEdPZQwy/trcQaHnwShLuLdta2rTymCNpxYTiMZX/e09F4g==\"}},\"@types/d3-geo@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-856sckF0oP/diXtS4jNsiQw/UuK5fQG8l/a9VVLeSouf1/PPbBE1i1W852zVwKwYCBkFJJB7nCFTbk6UMEXBOQ==\"}},\"@types/d3-hierarchy@3.1.7\":{\"resolution\":{\"integrity\":\"sha512-tJFtNoYBtRtkNysX1Xq4sxtjK8YgoWUNpIiUee0/jHGRwqvzYxkq0hGVbbOGSz+JgFxxRu4K8nb3YpG3CMARtg==\"}},\"@types/d3-interpolate@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-mgLPETlrpVV1YRJIglr4Ez47g7Yxjl1lj7YKsiMCb27VJH9W8NVM6Bb9d8kkpG/uAQS5AmbA48q2IAolKKo1MA==\"}},\"@types/d3-path@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-P2dlU/q51fkOc/Gfl3Ul9kicV7l+ra934qBFXCFhrZMOL6du1TM0pm1ThYvENukyOn5h9v+yMJ9Fn5JK4QozrQ==\"}},\"@types/d3-polygon@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-ZuWOtMaHCkN9xoeEMr1ubW2nGWsp4nIql+OPQRstu4ypeZ+zk3YKqQT0CXVe/PYqrKpZAi+J9mTs05TKwjXSRA==\"}},\"@types/d3-quadtree@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-oUzyO1/Zm6rsxKRHA1vH0NEDG58HrT5icx/azi9MF1TWdtttWl0UIUsjEQBBh+SIkrpd21ZjEv7ptxWys1ncsg==\"}},\"@types/d3-random@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-Imagg1vJ3y76Y2ea0871wpabqp613+8/r0mCLEBfdtqC7xMSfj9idOnmBYyMoULfHePJyxMAw3nWhJxzc+LFwQ==\"}},\"@types/d3-scale-chromatic@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-iWMJgwkK7yTRmWqRB5plb1kadXyQ5Sj8V/zYlFGMUBbIPKQScw+Dku9cAAMgJG+z5GYDoMjWGLVOvjghDEFnKQ==\"}},\"@types/d3-scale@4.0.8\":{\"resolution\":{\"integrity\":\"sha512-gkK1VVTr5iNiYJ7vWDI+yUFFlszhNMtVeneJ6lUTKPjprsvLLI9/tgEGiXJOnlINJA8FyA88gfnQsHbybVZrYQ==\"}},\"@types/d3-selection@3.0.11\":{\"resolution\":{\"integrity\":\"sha512-bhAXu23DJWsrI45xafYpkQ4NtcKMwWnAC/vKrd2l+nxMFuvOT3XMYTIj2opv8
vq8AO5Yh7Qac/nSeP/3zjTK0w==\"}},\"@types/d3-shape@3.1.7\":{\"resolution\":{\"integrity\":\"sha512-VLvUQ33C+3J+8p+Daf+nYSOsjB4GXp19/S/aGo60m9h1v6XaxjiT82lKVWJCfzhtuZ3yD7i/TPeC/fuKLLOSmg==\"}},\"@types/d3-time-format@4.0.3\":{\"resolution\":{\"integrity\":\"sha512-5xg9rC+wWL8kdDj153qZcsJ0FWiFt0J5RB6LYUNZjwSnesfblqrI/bJ1wBdJ8OQfncgbJG5+2F+qfqnqyzYxyg==\"}},\"@types/d3-time@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-yuzZug1nkAAaBlBBikKZTgzCeA+k1uy4ZFwWANOfKw5z5LRhV0gNA7gNkKm7HoK+HRN0wX3EkxGk0fpbWhmB7g==\"}},\"@types/d3-timer@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-Ps3T8E8dZDam6fUyNiMkekK3XUsaUEik+idO9/YjPtfj2qruF8tFBXS7XhtE4iIXBLxhmLjP3SXpLhVf21I9Lw==\"}},\"@types/d3-transition@3.0.9\":{\"resolution\":{\"integrity\":\"sha512-uZS5shfxzO3rGlu0cC3bjmMFKsXv+SmZZcgp0KD22ts4uGXp5EVYGzu/0YdwZeKmddhcAccYtREJKkPfXkZuCg==\"}},\"@types/d3-zoom@3.0.8\":{\"resolution\":{\"integrity\":\"sha512-iqMC4/YlFCSlO8+2Ii1GGGliCAY4XdeG748w5vQUbevlbDu0zSjH/+jojorQVBK/se0j6DUFNPBGSqD3YWYnDw==\"}},\"@types/d3@7.4.3\":{\"resolution\":{\"integrity\":\"sha512-lZXZ9ckh5R8uiFVt8ogUNf+pIrK4EsWrx2Np75WvF/eTpJ0FMHNhjXk8CKEx/+gpHbNQyJWehbFaTvqmHWB3ww==\"}},\"@types/debug@4.1.12\":{\"resolution\":{\"integrity\":\"sha512-vIChWdVG3LG1SMxEvI/AK+FWJthlrqlTu7fbrlywTkkaONwk/UAGaULXRlf8vkzFBLVm0zkMdCquhL5aOjhXPQ==\"}},\"@types/deep-eql@4.0.2\":{\"resolution\":{\"integrity\":\"sha512-c9h9dVVMigMPc4bwTvC5dxqtqJZwQPePsWjPlpSOnojbor6pGqdk541lfA7AqFQr5pB1BRdq0juY9db81BwyFw==\"}},\"@types/eslint-scope@3.7.7\":{\"resolution\":{\"integrity\":\"sha512-MzMFlSLBqNF2gcHWO0G1vP/YQyfvrxZ0bF+u7mzUdZ1/xK4A4sru+nraZz5i3iEIk1l1uyicaDVTB4QbbEkAYg==\"}},\"@types/eslint@9.6.1\":{\"resolution\":{\"integrity\":\"sha512-FXx2pKgId/WyYo2jXw63kk7/+TY7u7AziEJxJAnSFzHlqTAS3Ync6SvgYAN/k4/PQpnnVuzoMuVnByKK2qp0ag==\"}},\"@types/estree@1.0.8\":{\"resolution\":{\"integrity\":\"sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==\"}},\"@types/estree@1.0.9\":{\"resolution\":{\"integrity\":\"sh
a512-GhdPgy1el4/ImP05X05Uw4cw2/M93BCUmnEvWZNStlCzEKME4Fkk+YpoA5OiHNQmoS7Cafb8Xa3Pya8m1Qrzeg==\"}},\"@types/geojson@7946.0.15\":{\"resolution\":{\"integrity\":\"sha512-9oSxFzDCT2Rj6DfcHF8G++jxBKS7mBqXl5xrRW+Kbvjry6Uduya2iiwqHPhVXpasAVMBYKkEPGgKhd3+/HZ6xA==\"}},\"@types/hast@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-WPs+bbQw5aCj+x6laNGWLH3wviHtoCv/P3+otBhbOhJgG8qtpdAMlTCxLtsTWA7LH1Oh/bFCHsBn0TPS5m30EQ==\"}},\"@types/json-schema@7.0.15\":{\"resolution\":{\"integrity\":\"sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA==\"}},\"@types/mdast@4.0.4\":{\"resolution\":{\"integrity\":\"sha512-kGaNbPh1k7AFzgpud/gMdvIm5xuECykRR+JnWKQno9TAXVa6WIVCGTPvYGekIDL4uwCZQSYbUxNBSb1aUo79oA==\"}},\"@types/ms@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-GsCCIZDE/p3i96vtEqx+7dBUGXrc7zeSK3wwPHIaRThS+9OhWIXRqzs4d6k1SVU8g91DrNRWxWUGhp5KXQb2VA==\"}},\"@types/node@12.20.55\":{\"resolution\":{\"integrity\":\"sha512-J8xLz7q2OFulZ2cyGTLE1TbbZcjpno7FaN6zdJNrgAdrJ+DZzh/uFR6YrTb4C+nXakvud8Q4+rbhoIWlYQbUFQ==\"}},\"@types/node@20.19.39\":{\"resolution\":{\"integrity\":\"sha512-orrrD74MBUyK8jOAD/r0+lfa1I2MO6I+vAkmAWzMYbCcgrN4lCrmK52gRFQq/JRxfYPfonkr4b0jcY7Olqdqbw==\"}},\"@types/node@22.15.33\":{\"resolution\":{\"integrity\":\"sha512-wzoocdnnpSxZ+6CjW4ADCK1jVmd1S/J3ArNWfn8FDDQtRm8dkDg7TA+mvek2wNrfCgwuZxqEOiB9B1XCJ6+dbw==\"}},\"@types/node@22.19.17\":{\"resolution\":{\"integrity\":\"sha512-wGdMcf+vPYM6jikpS/qhg6WiqSV/OhG+jeeHT/KlVqxYfD40iYJf9/AE1uQxVWFvU7MipKRkRv8NSHiCGgPr8Q==\"}},\"@types/node@24.10.2\":{\"resolution\":{\"integrity\":\"sha512-WOhQTZ4G8xZ1tjJTvKOpyEVSGgOTvJAfDK3FNFgELyaTpzhdgHVHeqW8V+UJvzF5BT+/B54T/1S2K6gd9c7bbA==\"}},\"@types/react-dom@19.2.3\":{\"resolution\":{\"integrity\":\"sha512-jp2L/eY6fn+KgVVQAOqYItbF0VY/YApe5Mz2F0aykSO8gx31bYCZyvSeYxCHKvzHG5eZjc+zyaS5BrBWya2+kQ==\"},\"peerDependencies\":{\"@types/react\":\"^19.2.0\"}},\"@types/react@19.2.7\":{\"resolution\":{\"integrity\":\"sha512-MWtvHrGZLFttgeEj28VXHxpmwYbor/ATPYbBfSFZEIRK0ec
CFLl2Qo55z52Hss+UV9CRN7trSeq1zbgx7YDWWg==\"}},\"@types/sinonjs__fake-timers@8.1.5\":{\"resolution\":{\"integrity\":\"sha512-mQkU2jY8jJEF7YHjHvsQO8+3ughTL1mcnn96igfhONmR+fUPSKIkefQYpSe8bsly2Ep7oQbn/6VG5/9/0qcArQ==\"}},\"@types/statuses@2.0.6\":{\"resolution\":{\"integrity\":\"sha512-xMAgYwceFhRA2zY+XbEA7mxYbA093wdiW8Vu6gZPGWy9cmOyU9XesH1tNcEWsKFd5Vzrqx5T3D38PWx1FIIXkA==\"}},\"@types/tough-cookie@4.0.5\":{\"resolution\":{\"integrity\":\"sha512-/Ad8+nIOV7Rl++6f1BdKxFSMgmoqEoYbHRpPcx3JEfv8VRsQe9Z4mCXeJBzxs7mbHY/XOZZuXlRNfhpVPbs6ZA==\"}},\"@types/trusted-types@2.0.7\":{\"resolution\":{\"integrity\":\"sha512-ScaPdn1dQczgbl0QFTeTOmVHFULt394XJgOQNoyVhZ6r2vLnMLJfBPd53SB52T/3G36VI1/g2MZaX0cwDuXsfw==\"}},\"@types/unist@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-ko/gIFJRv177XgZsZcBwnqJN5x/Gien8qNOn0D5bQU/zAzVf9Zt3BlcUiLqhV9y4ARk0GbT3tnUiPNgnTXzc/Q==\"}},\"@types/whatwg-mimetype@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-c2AKvDT8ToxLIOUlN51gTiHXflsfIFisS4pO7pDPoKouJCESkhZnEy623gwP9laCy5lnLDAw1vAzu2vM2YLOrA==\"}},\"@types/which@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-113D3mDkZDjo+EeUEHCFy0qniNc1ZpecGiAU7WSo7YDoSzolZIQKpYFHrPpjkB2nuyahcKfrmLXeQlh7gqJYdw==\"}},\"@types/ws@8.18.1\":{\"resolution\":{\"integrity\":\"sha512-ThVF6DCVhA8kUGy+aazFQ4kXQ7E1Ty7A3ypFOe0IcJV8O/M511G99AW24irKrW56Wt44yG9+ij8FaqoBGkuBXg==\"}},\"@types/yauzl@2.10.3\":{\"resolution\":{\"integrity\":\"sha512-oJoftv0LSuaDZE3Le4DbKX+KS9G36NzOeSap90UIK0yMA/NhKJhqlSGtNDORNRaIbQfzjXDrQa0ytJ6mNRGz/Q==\"}},\"@ungap/structured-clone@1.2.1\":{\"resolution\":{\"integrity\":\"sha512-fEzPV3hSkSMltkw152tJKNARhOupqbH96MZWyRjNaYZOMIzbrTeQDG+MTc6Mr2pgzFQzFxAfmhGDNP5QK++2ZA==\"},\"deprecated\":\"Potential CWE-502 - Update to 1.3.1 or higher\"},\"@vitejs/plugin-react@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-l9X/E3cDb+xY3SWzlG1MOGt2usfEHGMNIaegaUGFsLkb3RCn/k8/TOXBcab+OndDI4TBtktT8/9BwwW8Vi9KUQ==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"peerDependencies\":{\"@rolldown/plugin-babel\":\"^0.1.7 || 
^0.2.0\",\"babel-plugin-react-compiler\":\"^1.0.0\",\"vite\":\"^8.0.0\"},\"peerDependenciesMeta\":{\"@rolldown/plugin-babel\":{\"optional\":true},\"babel-plugin-react-compiler\":{\"optional\":true}}},\"@vitest/browser@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-tJxiPrWmzH8a+w9nLKlQMzAKX/7VjFs50MWgcAj7p9XQ7AQ9/35fByFYptgPELyLw+0aixTnC4pUWV+APcZ/kw==\"},\"peerDependencies\":{\"playwright\":\"*\",\"safaridriver\":\"*\",\"vitest\":\"3.2.4\",\"webdriverio\":\"^7.0.0 || ^8.0.0 || ^9.0.0\"},\"peerDependenciesMeta\":{\"playwright\":{\"optional\":true},\"safaridriver\":{\"optional\":true},\"webdriverio\":{\"optional\":true}}},\"@vitest/browser@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-iCDGI8c4yg+xmjUg2VsygdAUSIIB4x5Rht/P68OXy1hPELKXHDkzh87lkuTcdYmemRChDkEpB426MmDjzC0ziA==\"},\"peerDependencies\":{\"vitest\":\"4.1.5\"}},\"@vitest/coverage-v8@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-EyF9SXU6kS5Ku/U82E259WSnvg6c8KTjppUncuNdm5QHpe17mwREHnjDzozC8x9MZ0xfBUFSaLkRv4TMA75ALQ==\"},\"peerDependencies\":{\"@vitest/browser\":\"3.2.4\",\"vitest\":\"3.2.4\"},\"peerDependenciesMeta\":{\"@vitest/browser\":{\"optional\":true}}},\"@vitest/coverage-v8@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-38C0/Ddb7HcRG0Z4/DUem8x57d2p9jYgp18mkaYswEOQBGsI1CG4f/hjm0ZCeaJfWhSZ4k7jgs29V1Zom7Ki9A==\"},\"peerDependencies\":{\"@vitest/browser\":\"4.1.5\",\"vitest\":\"4.1.5\"},\"peerDependenciesMeta\":{\"@vitest/browser\":{\"optional\":true}}},\"@vitest/expect@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-Io0yyORnB6sikFlt8QW5K7slY4OjqNX9jmJQ02QDda8lyM6B5oNgVWoSoKPac8/kgnCUzuHQKrSLtu/uOqqrig==\"}},\"@vitest/expect@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-8sCWUyckXXYvx4opfzVY03EOiYVxyNrHS5QxX3DAIi5dpJAAkyJezHCP77VMX4HKA2LDT/Jpfo8i2r5BE3GnQQ==\"}},\"@vitest/expect@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-PWBaRY5JoKuRnHlUHfpV/KohFylaDZTupcXN1H9vYryNLOnitSw60Mw9IAE2r67NbwwzBw/Cc/8q9BK3kIX8Kw==\"}},\"@vitest/mocker@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-46ryTE9RZO/rfDd7pEqF
l7etuyzekzEhUbTW3BvmeO/BcCMEgq59BKhek3dXDWgAj4oMK6OZi+vRr1wPW6qjEQ==\"},\"peerDependencies\":{\"msw\":\"^2.4.9\",\"vite\":\"^5.0.0 || ^6.0.0 || ^7.0.0-0\"},\"peerDependenciesMeta\":{\"msw\":{\"optional\":true},\"vite\":{\"optional\":true}}},\"@vitest/mocker@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-HhVd0MDnzzsgevnOWCBj5Otnzobjy5wLBe4EdeeFGv8luMsGcYqDuFRMcttKWZA5vVO8RFjexVovXvAM4JoJDQ==\"},\"peerDependencies\":{\"msw\":\"^2.4.9\",\"vite\":\"^6.0.0 || ^7.0.0-0\"},\"peerDependenciesMeta\":{\"msw\":{\"optional\":true},\"vite\":{\"optional\":true}}},\"@vitest/mocker@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-/x2EmFC4mT4NNzqvC3fmesuV97w5FC903KPmey4gsnJiMQ3Be1IlDKVaDaG8iqaLFHqJ2FVEkxZk5VmeLjIItw==\"},\"peerDependencies\":{\"msw\":\"^2.4.9\",\"vite\":\"^6.0.0 || ^7.0.0 || ^8.0.0\"},\"peerDependenciesMeta\":{\"msw\":{\"optional\":true},\"vite\":{\"optional\":true}}},\"@vitest/pretty-format@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-IVNZik8IVRJRTr9fxlitMKeJeXFFFN0JaB9PHPGQ8NKQbGpfjlTx9zO4RefN8gp7eqjNy8nyK3NZmBzOPeIxtA==\"}},\"@vitest/pretty-format@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-P24GK3GulZWC5tz87ux0m8OADrQIUVDPIjjj65vBXYG17ZeU3qD7r+MNZ1RNv4l8CGU2vtTRqixrOi9fYk/yKw==\"}},\"@vitest/pretty-format@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-7I3q6l5qr03dVfMX2wCo9FxwSJbPdwKjy2uu/YPpU3wfHvIL4QHwVRp57OfGrDFeUJ8/8QdfBKIV12FTtLn00g==\"}},\"@vitest/runner@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-oukfKT9Mk41LreEW09vt45f8wx7DordoWUZMYdY/cyAk7w5TWkTRCNZYF7sX7n2wB7jyGAl74OxgwhPgKaqDMQ==\"}},\"@vitest/runner@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-rpk9y12PGa22Jg6g5M3UVVnTS7+zycIGk9ZNGN+m6tZHKQb7jrP7/77WfZy13Y/EUDd52NDsLRQhYKtv7XfPQw==\"}},\"@vitest/runner@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-2D+o7Pr82IEO46YPpoA/YU0neeyr6FTerQb5Ro7BUnBuv6NQtT/kmVnczngiMEBhzgqz2UZYl5gArejsyERDSQ==\"}},\"@vitest/snapshot@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-dEYtS7qQP2CjU27QBC5oUOxLE/v5eLkGqPE0ZKEIDGMs4vKWe7IjgLOeauHsR0D5YuuycGRO5oSRXnwn
mA78fQ==\"}},\"@vitest/snapshot@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-PCiV0rcl7jKQjbgYqjtakly6T1uwv/5BQ9SwBLekVg/EaYeQFPiXcgrC2Y7vDMA8dM1SUEAEV82kgSQIlXNMvA==\"}},\"@vitest/snapshot@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-zypXEt4KH/XgKGPUz4eC2AvErYx0My5hfL8oDb1HzGFpEk1P62bxSohdyOmvz+d9UJwanI68MKwr2EquOaOgMQ==\"}},\"@vitest/spy@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-vAfasCOe6AIK70iP5UD11Ac4siNUNJ9i/9PZ3NKx07sG6sUxeag1LWdNrMWeKKYBLlzuK+Gn65Yd5nyL6ds+nw==\"}},\"@vitest/spy@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-cbQt3PTSD7P2OARdVW3qWER5EGq7PHlvE+QfzSC0lbwO+xnt7+XH06ZzFjFRgzUX//JmpxrCu92VdwvEPlWSNw==\"}},\"@vitest/spy@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-2lNOsh6+R2Idnf1TCZqSwYlKN2E/iDlD8sgU59kYVl+OMDmvldO1VDk39smRfpUNwYpNRVn3w4YfuC7KfbBnkQ==\"}},\"@vitest/utils@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-fB2V0JFrQSMsCo9HiSq3Ezpdv4iYaXRG1Sx8edX3MwxfyNn83mKiGzOcH+Fkxt4MHxr3y42fQi1oeAInqgX2QA==\"}},\"@vitest/utils@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-msMRKLMVLWygpK3u2Hybgi4MNjcYJvwTb0Ru09+fOyCXIgT5raYP041DRRdiJiI3k/2U6SEbAETB3YtBrUkCFA==\"}},\"@vitest/utils@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-76wdkrmfXfqGjueGgnb45ITPyUi1ycZ4IHgC2bhPDUfWHklY/q3MdLOAB+TF1e6xfl8NxNY0ZYaPCFNWSsw3Ug==\"}},\"@wdio/config@9.1.3\":{\"resolution\":{\"integrity\":\"sha512-fozjb5Jl26QqQoZ2lJc8uZwzK2iKKmIfNIdNvx5JmQt78ybShiPuWWgu/EcHYDvAiZwH76K59R1Gp4lNmmEDew==\"},\"engines\":{\"node\":\">=18.20.0\"}},\"@wdio/logger@8.38.0\":{\"resolution\":{\"integrity\":\"sha512-kcHL86RmNbcQP+Gq/vQUGlArfU6IIcbbnNp32rRIraitomZow+iEoc519rdQmSVusDozMS5DZthkgDdxK+vz6Q==\"},\"engines\":{\"node\":\"^16.13 || 
>=18\"}},\"@wdio/logger@9.1.3\":{\"resolution\":{\"integrity\":\"sha512-cumRMK/gE1uedBUw3WmWXOQ7HtB6DR8EyKQioUz2P0IJtRRpglMBdZV7Svr3b++WWawOuzZHMfbTkJQmaVt8Gw==\"},\"engines\":{\"node\":\">=18.20.0\"}},\"@wdio/protocols@9.2.0\":{\"resolution\":{\"integrity\":\"sha512-lSdKCwLtqMxSIW+cl8au21GlNkvmLNGgyuGYdV/lFdWflmMYH1zusruM6Km6Kpv2VUlWySjjGknYhe7XVTOeMw==\"}},\"@wdio/repl@9.0.8\":{\"resolution\":{\"integrity\":\"sha512-3iubjl4JX5zD21aFxZwQghqC3lgu+mSs8c3NaiYYNCC+IT5cI/8QuKlgh9s59bu+N3gG988jqMJeCYlKuUv/iw==\"},\"engines\":{\"node\":\">=18.20.0\"}},\"@wdio/types@9.1.3\":{\"resolution\":{\"integrity\":\"sha512-oQrzLQBqn/+HXSJJo01NEfeKhzwuDdic7L8PDNxv5ySKezvmLDYVboQfoSDRtpAdfAZCcxuU9L4Jw7iTf6WV3g==\"},\"engines\":{\"node\":\">=18.20.0\"}},\"@wdio/utils@9.1.3\":{\"resolution\":{\"integrity\":\"sha512-dYeOzq9MTh8jYRZhzo/DYyn+cKrhw7h0/5hgyXkbyk/wHwF/uLjhATPmfaCr9+MARSEdiF7wwU8iRy/V0jfsLg==\"},\"engines\":{\"node\":\">=18.20.0\"}},\"@webassemblyjs/ast@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-nuBEDgQfm1ccRp/8bCQrx1frohyufl4JlbMMZ4P1wpeOfDhF6FQkxZJ1b/e+PLwr6X1Nhw6OLme5usuBWYBvuQ==\"}},\"@webassemblyjs/floating-point-hex-parser@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-6oXyTOzbKxGH4steLbLNOu71Oj+C8Lg34n6CqRvqfS2O71BxY6ByfMDRhBytzknj9yGUPVJ1qIKhRlAwO1AovA==\"}},\"@webassemblyjs/helper-api-error@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-U56GMYxy4ZQCbDZd6JuvvNV/WFildOjsaWD3Tzzvmw/mas3cXzRJPMjP83JqEsgSbyrmaGjBfDtV7KDXV9UzFQ==\"}},\"@webassemblyjs/helper-buffer@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-jyH7wtcHiKssDtFPRB+iQdxlDf96m0E39yb0k5uJVhFGleZFoNw1c4aeIcVUPPbXUVJ94wwnMOAqUHyzoEPVMA==\"}},\"@webassemblyjs/helper-numbers@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-FE8aCmS5Q6eQYcV3gI35O4J789wlQA+7JrqTTpJqn5emA4U2hvwJmvFRC0HODS+3Ye6WioDklgd6scJ3+PLnEA==\"}},\"@webassemblyjs/helper-wasm-bytecode@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-3QbLKy93F0EAIXLh0ogEVR6rOubA9AoZ+WRYhNbFyuB70j3dRdwH9g+qXhLAO0kiYGlg3TxDV+I4rQTr/YNXkA==\"}},\"@webassembl
yjs/helper-wasm-section@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-ds5mXEqTJ6oxRoqjhWDU83OgzAYjwsCV8Lo/N+oRsNDmx/ZDpqalmrtgOMkHwxsG0iI//3BwWAErYRHtgn0dZw==\"}},\"@webassemblyjs/ieee754@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-4LtOzh58S/5lX4ITKxnAK2USuNEvpdVV9AlgGQb8rJDHaLeHciwG4zlGr0j/SNWlr7x3vO1lDEsuePvtcDNCkw==\"}},\"@webassemblyjs/leb128@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-Lde1oNoIdzVzdkNEAWZ1dZ5orIbff80YPdHx20mrHwHrVNNTjNr8E3xz9BdpcGqRQbAEa+fkrCb+fRFTl/6sQw==\"}},\"@webassemblyjs/utf8@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-3NQWGjKTASY1xV5m7Hr0iPeXD9+RDobLll3T9d2AO+g3my8xy5peVyjSag4I50mR1bBSN/Ct12lo+R9tJk0NZQ==\"}},\"@webassemblyjs/wasm-edit@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-RNJUIQH/J8iA/1NzlE4N7KtyZNHi3w7at7hDjvRNm5rcUXa00z1vRz3glZoULfJ5mpvYhLybmVcwcjGrC1pRrQ==\"}},\"@webassemblyjs/wasm-gen@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-AmomSIjP8ZbfGQhumkNvgC33AY7qtMCXnN6bL2u2Js4gVCg8fp735aEiMSBbDR7UQIj90n4wKAFUSEd0QN2Ukg==\"}},\"@webassemblyjs/wasm-opt@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-PTcKLUNvBqnY2U6E5bdOQcSM+oVP/PmrDY9NzowJjislEjwP/C4an2303MCVS2Mg9d3AJpIGdUFIQQWbPds0Sw==\"}},\"@webassemblyjs/wasm-parser@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-JLBl+KZ0R5qB7mCnud/yyX08jWFw5MsoalJ1pQ4EdFlgj9VdXKGuENGsiCIjegI1W7p91rUlcB/LB5yRJKNTcQ==\"}},\"@webassemblyjs/wast-printer@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-kPSSXE6De1XOR820C90RIo2ogvZG+c3KiHzqUoO/F34Y2shGzesfqv7o57xrxovZJH/MetF5UjroJ/R/3isoiw==\"}},\"@xtuc/ieee754@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-DX8nKgqcGwsc0eJSqYt5lwP4DH5FlHnmuWWBRy7X0NcaGR0ZtuyeESgMwTYVEtxmsNGY+qit4QYT/MIYTOTPeA==\"}},\"@xtuc/long@4.2.2\":{\"resolution\":{\"integrity\":\"sha512-NuHqBY1PB/D8xU6s/thBgOAiAP7HOYDQ32+BFZILJ8ivkUkAHQnWfn6WhL79Owj1qmUnoN/YPhktdIoucipkAQ==\"}},\"@yarnpkg/lockfile@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-GpSwvyXOcOOlV70vbnzjj4fW5xW/FdUF6nQEt1ENy7m4ZCczi1+/buVUPAqmGfqznsORNFzUMjctTIp8a9tuCQ==\"}},\"@yarnpkg/parser
s@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-/HcYgtUSiJiot/XWGLOlGxPYUG65+/31V8oqk17vZLW1xlCoR4PampyePljOxY2n8/3jz9+tIFzICsyGujJZoA==\"},\"engines\":{\"node\":\">=18.12.0\"}},\"@zip.js/zip.js@2.8.26\":{\"resolution\":{\"integrity\":\"sha512-RQ4h9F6DOiHxpdocUDrOl6xBM+yOtz+LkUol47AVWcfebGBDpZ7w7Xvz9PS24JgXvLGiXXzSAfdCdVy1tPlaFA==\"},\"engines\":{\"bun\":\">=0.7.0\",\"deno\":\">=1.0.0\",\"node\":\">=18.0.0\"}},\"@zkochan/js-yaml@0.0.7\":{\"resolution\":{\"integrity\":\"sha512-nrUSn7hzt7J6JWgWGz78ZYI8wj+gdIJdk0Ynjpp8l+trkn58Uqsf6RYrYkEK+3X18EX+TNdtJI0WxAtc+L84SQ==\"},\"hasBin\":true},\"abort-controller@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-h8lQ8tacZYnR3vNQTgibj+tODHI5/+l06Au2Pcriv/Gmet0eaj4TwWH41sO9wnHDiQsEj19q0drzdWdeAHtweg==\"},\"engines\":{\"node\":\">=6.5\"}},\"acorn@8.16.0\":{\"resolution\":{\"integrity\":\"sha512-UVJyE9MttOsBQIDKw1skb9nAwQuR5wuGD3+82K6JgJlm/Y+KI92oNsMNGZCYdDsVtRHSak0pcV5Dno5+4jh9sw==\"},\"engines\":{\"node\":\">=0.4.0\"},\"hasBin\":true},\"agent-base@7.1.3\":{\"resolution\":{\"integrity\":\"sha512-jRR5wdylq8CkOe6hei19GGZnxM6rBGwFl3Bg0YItGDimvjGtAvdZk4Pu6Cl4u4Igsws4a1fd1Vq3ezrhn4KmFw==\"},\"engines\":{\"node\":\">= 14\"}},\"agent-base@7.1.4\":{\"resolution\":{\"integrity\":\"sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ==\"},\"engines\":{\"node\":\">= 
14\"}},\"ajv-formats@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-Wx0Kx52hxE7C18hkMEggYlEifqWZtYaRgouJor+WMdPnQyEK13vgEWyVNup7SoeeoLMsr4kf5h6dOW11I15MUA==\"},\"peerDependencies\":{\"ajv\":\"^8.0.0\"},\"peerDependenciesMeta\":{\"ajv\":{\"optional\":true}}},\"ajv-keywords@5.1.0\":{\"resolution\":{\"integrity\":\"sha512-YCS/JNFAUyr5vAuhk1DWm1CBxRHW9LbJ2ozWeemrIqpbsqKjHVxYPyi5GC0rjZIT5JxJ3virVTS8wk4i/Z+krw==\"},\"peerDependencies\":{\"ajv\":\"^8.8.2\"}},\"ajv@8.17.1\":{\"resolution\":{\"integrity\":\"sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g==\"}},\"ajv@8.20.0\":{\"resolution\":{\"integrity\":\"sha512-Thbli+OlOj+iMPYFBVBfJ3OmCAnaSyNn4M1vz9T6Gka5Jt9ba/HIR56joy65tY6kx/FCF5VXNB819Y7/GUrBGA==\"}},\"ansi-colors@4.1.3\":{\"resolution\":{\"integrity\":\"sha512-/6w/C21Pm1A7aZitlI5Ni/2J6FFQN8i1Cvz3kHABAAbw93v/NlvKdVOqz7CCWz/3iv/JplRSEEZ83XION15ovw==\"},\"engines\":{\"node\":\">=6\"}},\"ansi-regex@5.0.1\":{\"resolution\":{\"integrity\":\"sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==\"},\"engines\":{\"node\":\">=8\"}},\"ansi-regex@6.1.0\":{\"resolution\":{\"integrity\":\"sha512-7HSX4QQb4CspciLpVFwyRe79O3xsIZDDLER21kERQ71oaPodF8jL725AgJMFAYbooIqolJoRLuM81SpeUkpkvA==\"},\"engines\":{\"node\":\">=12\"}},\"ansi-regex@6.2.2\":{\"resolution\":{\"integrity\":\"sha512-Bq3SmSpyFHaWjPk8If9yc6svM8c56dB5BAtW4Qbw5jHTwwXXcTLoRMkpDJp6VL0XzlWaCHTXrkFURMYmD0sLqg==\"},\"engines\":{\"node\":\">=12\"}},\"ansi-styles@4.3.0\":{\"resolution\":{\"integrity\":\"sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==\"},\"engines\":{\"node\":\">=8\"}},\"ansi-styles@5.2.0\":{\"resolution\":{\"integrity\":\"sha512-Cxwpt2SfTzTtXcfOlzGEee8O+c+MmUgGrNiBcXnuWxuFJHe6a5Hz7qwhwe5OgaSYI0IJvkLqWX1ASG+cJOkEiA==\"},\"engines\":{\"node\":\">=10\"}},\"ansi-styles@6.2.1\":{\"resolution\":{\"integrity\":\"sha512-bN798gFfQX+viw3R7yrGWRqnrN2oRkEkUjjl4JNn4E8GxxbjtG3FbrEIIY3l8/hrwUwIe
CZvi4QuOTP4MErVug==\"},\"engines\":{\"node\":\">=12\"}},\"ansis@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-BGcItUBWSMRgOCe+SVZJ+S7yTRG0eGt9cXAHev72yuGcY23hnLA7Bky5L/xLyPINoSN95geovfBkqoTlNZYa7w==\"},\"engines\":{\"node\":\">=14\"}},\"anymatch@3.1.3\":{\"resolution\":{\"integrity\":\"sha512-KMReFUr0B4t+D+OBkjR3KYqvocp2XaSzO55UcB6mgQMd3KbcE+mWTyvVV7D/zsdEbNnV6acZUutkiHQXvTr1Rw==\"},\"engines\":{\"node\":\">= 8\"}},\"archiver-utils@5.0.2\":{\"resolution\":{\"integrity\":\"sha512-wuLJMmIBQYCsGZgYLTy5FIB2pF6Lfb6cXMSF8Qywwk3t20zWnAi7zLcQFdKQmIB8wyZpY5ER38x08GbwtR2cLA==\"},\"engines\":{\"node\":\">= 14\"}},\"archiver@7.0.1\":{\"resolution\":{\"integrity\":\"sha512-ZcbTaIqJOfCc03QwD468Unz/5Ir8ATtvAHsK+FdXbDIbGfihqh9mrvdcYunQzqn4HrvWWaFyaxJhGZagaJJpPQ==\"},\"engines\":{\"node\":\">= 14\"}},\"argparse@1.0.10\":{\"resolution\":{\"integrity\":\"sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg==\"}},\"argparse@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==\"}},\"aria-query@5.3.0\":{\"resolution\":{\"integrity\":\"sha512-b0P0sZPKtyu8HkeRAfCq0IfURZK+SuwMjY1UXGBU27wpAiTwQAIlq56IbIO+ytk/JjS1fMR14ee5WBBfKi5J6A==\"}},\"aria-query@5.3.2\":{\"resolution\":{\"integrity\":\"sha512-COROpnaoap1E2F000S62r6A60uHZnmlvomhfyT2DlTcrY1OrBKn2UhH7qn5wTC9zMvD0AY7csdPSNwKP+7WiQw==\"},\"engines\":{\"node\":\">= 
0.4\"}},\"array-union@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-HGyxoOTYUyCM6stUe6EJgnd4EoewAI7zMdfqO+kGjnlZmBDz/cR5pf8r/cR4Wq60sL/p0IkcjUEEPwS3GFrIyw==\"},\"engines\":{\"node\":\">=8\"}},\"assertion-error@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-Izi8RQcffqCeNVgFigKli1ssklIbpHnCYc6AknXGYoB6grJqyeby7jv12JUQgmTAnIDnbck1uxksT4dzN3PWBA==\"},\"engines\":{\"node\":\">=12\"}},\"ast-types@0.13.4\":{\"resolution\":{\"integrity\":\"sha512-x1FCFnFifvYDDzTaLII71vG5uvDwgtmDTEVWAxrgeiR8VjMONcCXJx7E+USjDtHlwFmt9MysbqgF9b9Vjr6w+w==\"},\"engines\":{\"node\":\">=4\"}},\"ast-v8-to-istanbul@0.3.4\":{\"resolution\":{\"integrity\":\"sha512-cxrAnZNLBnQwBPByK4CeDaw5sWZtMilJE/Q3iDA0aamgaIVNDF9T6K2/8DfYDZEejZ2jNnDrG9m8MY72HFd0KA==\"}},\"ast-v8-to-istanbul@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-1fSfIwuDICFA4LKkCzRPO7F0hzFf0B7+Xqrl27ynQaa+Rh0e1Es0v6kWHPott3lU10AyAr7oKHa65OppjLn3Rg==\"}},\"async@3.2.6\":{\"resolution\":{\"integrity\":\"sha512-htCUDlxyyCLMgaM3xXg0C0LW2xqfuQ6p05pCEIsXuyQ+a1koYKTuBMzRNwmybfLgvJDMd0r1LTn4+E0Ti6C2AA==\"}},\"asynckit@0.4.0\":{\"resolution\":{\"integrity\":\"sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==\"}},\"axios@1.11.0\":{\"resolution\":{\"integrity\":\"sha512-1Lx3WLFQWm3ooKDYZD1eXmoGO9fxYQjrycfHFC8P0sCfQVXyROp0p9PFWBehewBOdCwHc+f/b8I0fMto5eSfwA==\"}},\"b4a@1.8.1\":{\"resolution\":{\"integrity\":\"sha512-aiqre1Nr0B/6DgE2N5vwTc+2/oQZ4Wh1t4NznYY4E00y8LCt6NqdRv81so00oo27D8MVKTpUa/MwUUtBLXCoDw==\"},\"peerDependencies\":{\"react-native-b4a\":\"*\"},\"peerDependenciesMeta\":{\"react-native-b4a\":{\"optional\":true}}},\"babel-dead-code-elimination@1.0.12\":{\"resolution\":{\"integrity\":\"sha512-GERT7L2TiYcYDtYk1IpD+ASAYXjKbLTDPhBtYj7X1NuRMDTMtAx9kyBenub1Ev41lo91OHCKdmP+egTDmfQ7Ig==\"}},\"bail@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-0xO6mYd7JB2YesxDKplafRpsiOzPt9V02ddPCLbY1xYGPOX24NTyN50qnUxgCPcSoYMhKpAuBTjQoRZCAkUDRw==\"}},\"balanced-match@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-3oSeU
O0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==\"}},\"bare-events@2.8.2\":{\"resolution\":{\"integrity\":\"sha512-riJjyv1/mHLIPX4RwiK+oW9/4c3TEUeORHKefKAKnZ5kyslbN+HXowtbaVEqt4IMUB7OXlfixcs6gsFeo/jhiQ==\"},\"peerDependencies\":{\"bare-abort-controller\":\"*\"},\"peerDependenciesMeta\":{\"bare-abort-controller\":{\"optional\":true}}},\"bare-fs@4.7.1\":{\"resolution\":{\"integrity\":\"sha512-WDRsyVN52eAx/lBamKD6uyw8H4228h/x0sGGGegOamM2cd7Pag88GfMQalobXI+HaEUxpCkbKQUDOQqt9wawRw==\"},\"engines\":{\"bare\":\">=1.16.0\"},\"peerDependencies\":{\"bare-buffer\":\"*\"},\"peerDependenciesMeta\":{\"bare-buffer\":{\"optional\":true}}},\"bare-os@3.9.1\":{\"resolution\":{\"integrity\":\"sha512-6M5XjcnsygQNPMCMPXSK379xrJFiZ/AEMNBmFEmQW8d/789VQATvriyi5r0HYTL9TkQ26rn3kgdTG3aisbrXkQ==\"},\"engines\":{\"bare\":\">=1.14.0\"}},\"bare-path@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-tyfW2cQcB5NN8Saijrhqn0Zh7AnFNsnczRcuWODH0eYAXBsJ5gVxAUuNr7tsHSC6IZ77cA0SitzT+s47kot8Mw==\"}},\"bare-stream@2.13.1\":{\"resolution\":{\"integrity\":\"sha512-Vp0cnjYyrEC4whYTymQ+YZi6pBpfiICZO3cfRG8sy67ZNWe951urv1x4eW1BKNngw3U+3fPYb5JQvHbCtxH7Ow==\"},\"peerDependencies\":{\"bare-abort-controller\":\"*\",\"bare-buffer\":\"*\",\"bare-events\":\"*\"},\"peerDependenciesMeta\":{\"bare-abort-controller\":{\"optional\":true},\"bare-buffer\":{\"optional\":true},\"bare-events\":{\"optional\":true}}},\"bare-url@2.4.3\":{\"resolution\":{\"integrity\":\"sha512-Kccpc7ACfXaxfeInfqKcZtW4pT5YBn1mesc4sCsun6sRwtbJ4h+sNOaksUpYEJUKfN65YWC6Bw2OJEFiKxq8nQ==\"}},\"base64-js@1.5.1\":{\"resolution\":{\"integrity\":\"sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==\"}},\"baseline-browser-mapping@2.10.27\":{\"resolution\":{\"integrity\":\"sha512-zEs/ufmZoUd7WftKpKyXaT6RFxpQ5Qm9xytKRHvJfxFV9DFJkZph9RvJ1LcOUi0Z1ZVijMte65JbILeV+8QQEA==\"},\"engines\":{\"node\":\">=6.0.0\"},\"hasBin\":true},\"basic-ftp@5.3.1\":{\"resolution\":{\"integrity\":\"sha512-bopVNp6u
gyA150DDuZfPFdt1KZ5a94ZDiwX4hMgZDzF+GttD80lEy8kj98kbyhLXnPvhtIo93mdnLIjpCAeeOw==\"},\"engines\":{\"node\":\">=10.0.0\"}},\"better-path-resolve@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-pbnl5XzGBdrFU/wT4jqmJVPn2B6UHPBOhzMQkY/SPUPB6QtUXtmBHBIwCbXJol93mOpGMnQyP/+BB19q04xj7g==\"},\"engines\":{\"node\":\">=4\"}},\"better-sqlite3@12.9.0\":{\"resolution\":{\"integrity\":\"sha512-wqUv4Gm3toFpHDQmaKD4QhZm3g1DjUBI0yzS4UBl6lElUmXFYdTQmmEDpAFa5o8FiFiymURypEnfVHzILKaxqQ==\"},\"engines\":{\"node\":\"20.x || 22.x || 23.x || 24.x || 25.x\"}},\"bidi-js@1.0.3\":{\"resolution\":{\"integrity\":\"sha512-RKshQI1R3YQ+n9YJz2QQ147P66ELpa1FQEg20Dk8oW9t2KgLbpDLLp9aGZ7y8WHSshDknG0bknqGw5/tyCs5tw==\"}},\"binary-extensions@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-Ceh+7ox5qe7LJuLHoY0feh3pHuUDHAcRUeyL2VYghZwfpkNIy/+8Ocg0a3UuSoYzavmylwuLWQOf3hl0jjMMIw==\"},\"engines\":{\"node\":\">=8\"}},\"bindings@1.5.0\":{\"resolution\":{\"integrity\":\"sha512-p2q/t/mhvuOj/UeLlV6566GD/guowlr0hHxClI0W9m7MWYkL1F0hLo+0Aexs9HSPCtR1SXQ0TD3MMKrXZajbiQ==\"}},\"bl@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w==\"}},\"blake3-wasm@2.1.5\":{\"resolution\":{\"integrity\":\"sha512-F1+K8EbfOZE49dtoPtmxUQrpXaBIl3ICvasLh+nJta0xkz+9kF/7uet9fLnwKqhDrmj6g+6K3Tw9yQPUg2ka5g==\"}},\"boolbase@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-JZOSA7Mo9sNGB8+UjSgzdLtokWAky1zbztM3WRLCbZ70/3cTANmQmOdR7y2g+J0e2WXywy1yS468tY+IruqEww==\"}},\"brace-expansion@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==\"}},\"brace-expansion@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-TN1kCZAgdgweJhWWpgKYrQaMNHcDULHkWwQIspdtjV4Y5aurRdZpjAqn6yX3FPqTA9ngHCc4hJxMAMgGfve85w==\"}},\"braces@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==\"},\"engines\":{\"node\":\">=8\"}},\"browserslist@
4.25.3\":{\"resolution\":{\"integrity\":\"sha512-cDGv1kkDI4/0e5yON9yM5G/0A5u8sf5TnmdX5C9qHzI9PPu++sQ9zjm1k9NiOrf3riY4OkK0zSGqfvJyJsgCBQ==\"},\"engines\":{\"node\":\"^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7\"},\"hasBin\":true},\"browserslist@4.28.2\":{\"resolution\":{\"integrity\":\"sha512-48xSriZYYg+8qXna9kwqjIVzuQxi+KYWp2+5nCYnYKPTr0LvD89Jqk2Or5ogxz0NUMfIjhh2lIUX/LyX9B4oIg==\"},\"engines\":{\"node\":\"^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7\"},\"hasBin\":true},\"buffer-builder@0.2.0\":{\"resolution\":{\"integrity\":\"sha512-7VPMEPuYznPSoR21NE1zvd2Xna6c/CloiZCfcMXR1Jny6PjX0N4Nsa38zcBFo/FMK+BlA+FLKbJCQ0i2yxp+Xg==\"}},\"buffer-crc32@0.2.13\":{\"resolution\":{\"integrity\":\"sha512-VO9Ht/+p3SN7SKWqcrgEzjGbRSJYTx+Q1pTQC0wrWqHx0vpJraQ6GtHx8tvcg1rlK1byhU5gccxgOgj7B0TDkQ==\"}},\"buffer-crc32@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-Db1SbgBS/fg/392AblrMJk97KggmvYhr4pB5ZIMTWtaivCPMWLkmb7m21cJvpvgK+J3nsU2CmmixNBZx4vFj/w==\"},\"engines\":{\"node\":\">=8.0.0\"}},\"buffer-from@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ==\"}},\"buffer@5.7.1\":{\"resolution\":{\"integrity\":\"sha512-EHcyIPBQ4BSGlvjB16k5KgAJ27CIsHY/2JBmCRReo48y9rQ3MaUzWX3KVlBa4U7MyX02HdVj0K7C3WaB3ju7FQ==\"}},\"buffer@6.0.3\":{\"resolution\":{\"integrity\":\"sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA==\"}},\"cac@6.7.14\":{\"resolution\":{\"integrity\":\"sha512-b6Ilus+c3RrdDk+JhLKUAQfzzgLEPy6wcXqS7f/xe1EETvsDP6GORG7SFuOs6cID5YkqchW/LXZbX5bc8j7ZcQ==\"},\"engines\":{\"node\":\">=8\"}},\"call-bind-apply-helpers@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==\"},\"engines\":{\"node\":\">= 
0.4\"}},\"caniuse-lite@1.0.30001737\":{\"resolution\":{\"integrity\":\"sha512-BiloLiXtQNrY5UyF0+1nSJLXUENuhka2pzy2Fx5pGxqavdrxSCW4U6Pn/PoG3Efspi2frRbHpBV2XsrPE6EDlw==\"}},\"caniuse-lite@1.0.30001792\":{\"resolution\":{\"integrity\":\"sha512-hVLMUZFgR4JJ6ACt1uEESvQN1/dBVqPAKY0hgrV70eN3391K6juAfTjKZLKvOMsx8PxA7gsY1/tLMMTcfFLLpw==\"}},\"ccount@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-eyrF0jiFpY+3drT6383f1qhkbGsLSifNAjA61IUjZjmLCWjItY6LB9ft9YhoDgwfmclB2zhu51Lc7+95b8NRAg==\"}},\"chai@5.3.3\":{\"resolution\":{\"integrity\":\"sha512-4zNhdJD/iOjSH0A05ea+Ke6MU5mmpQcbQsSOkgdaUMJ9zTlDTD/GYlwohmIE2u0gaxHYiVHEn1Fw9mZ/ktJWgw==\"},\"engines\":{\"node\":\">=18\"}},\"chai@6.2.2\":{\"resolution\":{\"integrity\":\"sha512-NUPRluOfOiTKBKvWPtSD4PhFvWCqOi0BGStNWs57X9js7XGTprSmFoz5F0tWhR4WPjNeR9jXqdC7/UpSJTnlRg==\"},\"engines\":{\"node\":\">=18\"}},\"chalk@4.1.2\":{\"resolution\":{\"integrity\":\"sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==\"},\"engines\":{\"node\":\">=10\"}},\"chalk@5.6.2\":{\"resolution\":{\"integrity\":\"sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA==\"},\"engines\":{\"node\":\"^12.17.0 || ^14.13 || 
>=16.0.0\"}},\"character-entities-html4@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-1v7fgQRj6hnSwFpq1Eu0ynr/CDEw0rXo2B61qXrLNdHZmPKgb7fqS1a2JwF0rISo9q77jDI8VMEHoApn8qDoZA==\"}},\"character-entities-legacy@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-RpPp0asT/6ufRm//AJVwpViZbGM/MkjQFxJccQRHmISF/22NBtsHqAWmL+/pmkPWoIUJdWyeVleTl1wydHATVQ==\"}},\"character-entities@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-shx7oQ0Awen/BRIdkjkvz54PnEEI/EjwXDSIZp86/KKdbafHh1Df/RYGBhn4hbe2+uKC9FnT5UCEdyPz3ai9hQ==\"}},\"chardet@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-bNFETTG/pM5ryzQ9Ad0lJOTa6HWD/YsScAR3EnCPZRPlQh77JocYktSHOUHelyhm8IARL+o4c4F1bP5KVOjiRA==\"}},\"check-error@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-OAlb+T7V4Op9OwdkjmguYRqncdlx5JiofwOAUkmTF+jNdHwzTaTs4sRAGpzLF3oOz5xAyDGrPgeIDFQmDOTiJw==\"},\"engines\":{\"node\":\">= 16\"}},\"cheerio-select@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-9v9kG0LvzrlcungtnJtpGNxY+fzECQKhK4EGJX2vByejiMX84MFNQw4UxPJl3bFbTMw+Dfs37XaIkCwTZfLh4g==\"}},\"cheerio@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-IkxPpb5rS/d1IiLbHMgfPuS0FgiWTtFIm/Nj+2woXDLTZ7fOT2eqzgYbdMlLweqlHbsZjxEChoVK+7iph7jyQg==\"},\"engines\":{\"node\":\">=20.18.1\"}},\"cheerio@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-WDrybc/gKFpTYQutKIK6UvfcuxijIZfMfXaYm8NMsPQxSYvf+13fXUJ4rztGGbJcBQ/GF55gvrZ0Bc0bj/mqvg==\"},\"engines\":{\"node\":\">=20.18.1\"}},\"chevrotain-allstar@0.3.1\":{\"resolution\":{\"integrity\":\"sha512-b7g+y9A0v4mxCW1qUhf3BSVPg+/NvGErk/dOkrDaHA0nQIQGAtrOjlX//9OQtRlSCy+x9rfB5N8yC71lH1nvMw==\"},\"peerDependencies\":{\"chevrotain\":\"^11.0.0\"}},\"chevrotain@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-ci2iJH6LeIkvP9eJW6gpueU8cnZhv85ELY8w8WiFtNjMHA5ad6pQLaJo9mEly/9qUyCpvqX8/POVUTf18/HFdw==\"}},\"chokidar@3.6.0\":{\"resolution\":{\"integrity\":\"sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw==\"},\"engines\":{\"node\":\">= 
8.10.0\"}},\"chownr@1.1.4\":{\"resolution\":{\"integrity\":\"sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg==\"}},\"chownr@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-bIomtDF5KGpdogkLd9VspvFzk9KfpyyGlS8YFVZl7TGPBHL5snIOnxeshwVgPteQ9b4Eydl+pVbIyE1DcvCWgQ==\"},\"engines\":{\"node\":\">=10\"}},\"chrome-trace-event@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-rNjApaLzuwaOTjCiT8lSDdGN1APCiqkChLMJxJPWLunPAt5fy8xgU9/jNOchV84wfIxrA0lRQB7oCT8jrn/wrQ==\"},\"engines\":{\"node\":\">=6.0\"}},\"ci-info@3.9.0\":{\"resolution\":{\"integrity\":\"sha512-NIxF55hv4nSqQswkAeiOi1r83xy8JldOFDTWiug55KBu9Jnblncd2U6ViHmYgHf01TPZS77NJBhBMKdWj9HQMQ==\"},\"engines\":{\"node\":\">=8\"}},\"cli-cursor@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-I/zHAwsKf9FqGoXM4WWRACob9+SNukZTd94DWF57E4toouRulbCxcUh6RKUEOQlYTHJnzkPMySvPNaaSLNfLZw==\"},\"engines\":{\"node\":\">=8\"}},\"cli-spinners@2.6.1\":{\"resolution\":{\"integrity\":\"sha512-x/5fWmGMnbKQAaNwN+UZlV79qBLM9JFnJuJ03gIi5whrob0xV0ofNVHy9DhwGdsMJQc2OKv0oGmLzvaqvAVv+g==\"},\"engines\":{\"node\":\">=6\"}},\"cli-spinners@2.9.2\":{\"resolution\":{\"integrity\":\"sha512-ywqV+5MmyL4E7ybXgKys4DugZbX0FC6LnwrhjuykIjnK9k8OQacQ7axGKnjDXWNhns0xot3bZI5h55H8yo9cJg==\"},\"engines\":{\"node\":\">=6\"}},\"cli-width@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-ouuZd4/dm2Sw5Gmqy6bGyNNNe1qt9RpmxveLSO7KcgsTnU7RXfsw+/bukWGo1abgBiMAic068rclZsO4IWmmxQ==\"},\"engines\":{\"node\":\">= 
12\"}},\"cliui@8.0.1\":{\"resolution\":{\"integrity\":\"sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ==\"},\"engines\":{\"node\":\">=12\"}},\"clone@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-JQHZ2QMW6l3aH/j6xCqQThY/9OH4D/9ls34cgkUBiEeocRTU04tHfKPBsUK1PqZCUQM7GiA0IIXJSuXHI64Kbg==\"},\"engines\":{\"node\":\">=0.8\"}},\"color-convert@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==\"},\"engines\":{\"node\":\">=7.0.0\"}},\"color-name@1.1.4\":{\"resolution\":{\"integrity\":\"sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==\"}},\"colorjs.io@0.5.2\":{\"resolution\":{\"integrity\":\"sha512-twmVoizEW7ylZSN32OgKdXRmo1qg+wT5/6C3xu5b9QsWzSFAhHLn2xd8ro0diCsKfCj1RdaTP/nrcW+vAoQPIw==\"}},\"combined-stream@1.0.8\":{\"resolution\":{\"integrity\":\"sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==\"},\"engines\":{\"node\":\">= 0.8\"}},\"comma-separated-tokens@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-Fu4hJdvzeylCfQPp9SGWidpzrMs7tTrlu6Vb8XGaRGck8QSNZJJp538Wrb60Lax4fPwR64ViY468OIUTbRlGZg==\"}},\"commander@2.20.3\":{\"resolution\":{\"integrity\":\"sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==\"}},\"commander@7.2.0\":{\"resolution\":{\"integrity\":\"sha512-QrWXB+ZQSVPmIWIhtEO9H+gwHaMGYiF5ChvoJ+K9ZGHG/sVsa6yiesAD1GC/x46sET00Xlwo1u49RVVVzvcSkw==\"},\"engines\":{\"node\":\">= 10\"}},\"commander@8.3.0\":{\"resolution\":{\"integrity\":\"sha512-OkTL9umf+He2DZkUq8f8J9of7yL6RJKI24dVITBmNfZBmri9zYZQrKkuXiKhyfPSu8tUhnVBB1iKXevvnlR4Ww==\"},\"engines\":{\"node\":\">= 12\"}},\"commander@9.5.0\":{\"resolution\":{\"integrity\":\"sha512-KRs7WVDKg86PWiuAqhDrAQnTXZKraVcCc6vFdL14qrZ/DcWwuRo7VoiYXalXO7S5GKpqYiVEwCbgFDfxNHKJBQ==\"},\"engines\":{\"node\":\"^12.20.0 || 
>=14\"}},\"compress-commons@6.0.2\":{\"resolution\":{\"integrity\":\"sha512-6FqVXeETqWPoGcfzrXb37E50NP0LXT8kAMu5ooZayhWWdgEY4lBEEcbQNXtkuKQsGduxiIcI4gOTsxTmuq/bSg==\"},\"engines\":{\"node\":\">= 14\"}},\"confbox@0.1.8\":{\"resolution\":{\"integrity\":\"sha512-RMtmw0iFkeR4YV+fUOSucriAQNb9g8zFR52MWCtl+cCZOFRNL6zeB395vPzFhEjjn4fMxXudmELnl/KF/WrK6w==\"}},\"confbox@0.2.2\":{\"resolution\":{\"integrity\":\"sha512-1NB+BKqhtNipMsov4xI/NnhCKp9XG9NamYp5PVm9klAT0fsrNPjaFICsCFhNhwZJKNh7zB/3q8qXz0E9oaMNtQ==\"}},\"convert-source-map@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-Kvp459HrV2FEJ1CAsi1Ku+MY3kasH19TFykTz2xWmMeq6bk2NU3XXvfJ+Q61m0xktWwt+1HSYf3JZsTms3aRJg==\"}},\"cookie-es@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-UaXxwISYJPTr9hwQxMFYZ7kNhSXboMXP+Z3TRX6f1/NyaGPfuNUZOWP1pUEb75B2HjfklIYLVRfWiFZJyC6Npg==\"}},\"cookie@0.7.2\":{\"resolution\":{\"integrity\":\"sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w==\"},\"engines\":{\"node\":\">= 0.6\"}},\"cookie@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-9Kr/j4O16ISv8zBBhJoi4bXOYNTkFLOqSL3UDB0njXxCXNezjeyVrJyGOWtgfs/q2km1gwBcfH8q1yEGoMYunA==\"},\"engines\":{\"node\":\">=18\"}},\"core-js@3.46.0\":{\"resolution\":{\"integrity\":\"sha512-vDMm9B0xnqqZ8uSBpZ8sNtRtOdmfShrvT6h2TuQGLs0Is+cR0DYbj/KWP6ALVNbWPpqA/qPLoOuppJN07humpA==\"}},\"core-util-is@1.0.3\":{\"resolution\":{\"integrity\":\"sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==\"}},\"cose-base@1.0.3\":{\"resolution\":{\"integrity\":\"sha512-s9whTXInMSgAp/NVXVNuVxVKzGH2qck3aQlVHxDCdAEPgtMKwc4Wq6/QKhgdEdgbLSi9rBTAcPoRa6JpiG4ksg==\"}},\"cose-base@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-AzlgcsCbUMymkADOJtQm3wO9S3ltPfYOFD5033keQn9NJzIbtnZj+UdBJe7DYml/8TdbtHJW3j58SOnKhWY/5g==\"}},\"crc-32@1.2.2\":{\"resolution\":{\"integrity\":\"sha512-ROmzCKrTnOwybPcJApAA6WBWij23HVfGVNKqqrZpuyZOHqK2CwHSvpGuyt/UNNvaIjEd8X5IFGp4Mh+Ie1IHJQ==\"},\"engines\":{\"node\":\">=0.8\"},\"hasBin\":true},\"
crc32-stream@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-piICUB6ei4IlTv1+653yq5+KoqfBYmj9bw6LqXoOneTMDXk5nM1qt12mFW1caG3LlJXEKW1Bp0WggEmIfQB34g==\"},\"engines\":{\"node\":\">= 14\"}},\"cross-spawn@7.0.6\":{\"resolution\":{\"integrity\":\"sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==\"},\"engines\":{\"node\":\">= 8\"}},\"css-select@5.1.0\":{\"resolution\":{\"integrity\":\"sha512-nwoRF1rvRRnnCqqY7updORDsuqKzqYJ28+oSMaJMMgOauh3fvwHqMS7EZpIPqK8GL+g9mKxF1vP/ZjSeNjEVHg==\"}},\"css-shorthand-properties@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-C2AugXIpRGQTxaCW0N7n5jD/p5irUmCrwl03TrnMFBHDbdq44CFWR2zO7rK9xPN4Eo3pUxC4vQzQgbIpzrD1PQ==\"}},\"css-tree@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-0eW44TGN5SQXU1mWSkKwFstI/22X2bG1nYzZTYMAWjylYURhse752YgbE4Cx46AC+bAvI+/dYTPRk1LqSUnu6w==\"},\"engines\":{\"node\":\"^10 || ^12.20.0 || ^14.13.0 || >=15.0.0\"}},\"css-value@0.0.1\":{\"resolution\":{\"integrity\":\"sha512-FUV3xaJ63buRLgHrLQVlVgQnQdR4yqdLGaDu7g8CQcWjInDfM9plBTPI9FRfpahju1UBSaMckeb2/46ApS/V1Q==\"}},\"css-what@6.1.0\":{\"resolution\":{\"integrity\":\"sha512-HTUrgRJ7r4dsZKU6GjmpfRK1O76h97Z8MfS1G0FozR+oF2kG6Vfe8JE6zwrkbxigziPHinCJ+gCPjA9EaBDtRw==\"},\"engines\":{\"node\":\">= 
6\"}},\"cssstyle@4.3.1\":{\"resolution\":{\"integrity\":\"sha512-ZgW+Jgdd7i52AaLYCriF8Mxqft0gD/R9i9wi6RWBhs1pqdPEzPjym7rvRKi397WmQFf3SlyUsszhw+VVCbx79Q==\"},\"engines\":{\"node\":\">=18\"}},\"cssstyle@5.3.4\":{\"resolution\":{\"integrity\":\"sha512-KyOS/kJMEq5O9GdPnaf82noigg5X5DYn0kZPJTaAsCUaBizp6Xa1y9D4Qoqf/JazEXWuruErHgVXwjN5391ZJw==\"},\"engines\":{\"node\":\">=20\"}},\"csstype@3.2.3\":{\"resolution\":{\"integrity\":\"sha512-z1HGKcYy2xA8AGQfwrn0PAy+PB7X/GSj3UVJW9qKyn43xWa+gl5nXmU4qqLMRzWVLFC8KusUX8T/0kCiOYpAIQ==\"}},\"cytoscape-cose-bilkent@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-wgQlVIUJF13Quxiv5e1gstZ08rnZj2XaLHGoFMYXz7SkNfCDOOteKBE6SYRfA9WxxI/iBc3ajfDoc6hb/MRAHQ==\"},\"peerDependencies\":{\"cytoscape\":\"^3.2.0\"}},\"cytoscape-fcose@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-ki1/VuRIHFCzxWNrsshHYPs6L7TvLu3DL+TyIGEsRcvVERmxokbf5Gdk7mFxZnTdiGtnA4cfSmjZJMviqSuZrQ==\"},\"peerDependencies\":{\"cytoscape\":\"^3.2.0\"}},\"cytoscape@3.30.4\":{\"resolution\":{\"integrity\":\"sha512-OxtlZwQl1WbwMmLiyPSEBuzeTIQnwZhJYYWFzZ2PhEHVFwpeaqNIkUzSiso00D98qk60l8Gwon2RP304d3BJ1A==\"},\"engines\":{\"node\":\">=0.10\"}},\"d3-array@2.12.1\":{\"resolution\":{\"integrity\":\"sha512-B0ErZK/66mHtEsR1TkPEEkwdy+WDesimkM5gpZr5Dsg54BiTA5RXtYW5qTLIAcekaS9xfZrzBLF/OAkB3Qn1YQ==\"}},\"d3-array@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-tdQAmyA18i4J7wprpYq8ClcxZy3SC31QMeByyCFyRt7BVHdREQZ5lpzoe5mFEYZUWe+oq8HBvk9JjpibyEV4Jg==\"},\"engines\":{\"node\":\">=12\"}},\"d3-axis@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-IH5tgjV4jE/GhHkRV0HiVYPDtvfjHQlQfJHs0usq7M30XcSBvOotpmH1IgkcXsO/5gEQZD43B//fc7SRT5S+xw==\"},\"engines\":{\"node\":\">=12\"}},\"d3-brush@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-ALnjWlVYkXsVIGlOsuWH1+3udkYFI48Ljihfnh8FZPF2QS9o+PzGLBslO0PjzVoHLZ2KCVgAM8NVkXPJB2aNnQ==\"},\"engines\":{\"node\":\">=12\"}},\"d3-chord@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-VE5S6TNa+j8msksl7HwjxMHDM2yNK3XCkusIlpX5kwauBfXuyLAtNg9jCp/iHH61tgI4sb6R/EIMWCqEIdjT/g==\"},\"engines\"
:{\"node\":\">=12\"}},\"d3-color@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-zg/chbXyeBtMQ1LbD/WSoW2DpC3I0mpmPdW+ynRTj/x2DAWYrIY7qeZIHidozwV24m4iavr15lNwIwLxRmOxhA==\"},\"engines\":{\"node\":\">=12\"}},\"d3-contour@4.0.2\":{\"resolution\":{\"integrity\":\"sha512-4EzFTRIikzs47RGmdxbeUvLWtGedDUNkTcmzoeyg4sP/dvCexO47AaQL7VKy/gul85TOxw+IBgA8US2xwbToNA==\"},\"engines\":{\"node\":\">=12\"}},\"d3-delaunay@6.0.4\":{\"resolution\":{\"integrity\":\"sha512-mdjtIZ1XLAM8bm/hx3WwjfHt6Sggek7qH043O8KEjDXN40xi3vx/6pYSVTwLjEgiXQTbvaouWKynLBiUZ6SK6A==\"},\"engines\":{\"node\":\">=12\"}},\"d3-dispatch@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-rzUyPU/S7rwUflMyLc1ETDeBj0NRuHKKAcvukozwhshr6g6c5d8zh4c2gQjY2bZ0dXeGLWc1PF174P2tVvKhfg==\"},\"engines\":{\"node\":\">=12\"}},\"d3-drag@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-pWbUJLdETVA8lQNJecMxoXfH6x+mO2UQo8rSmZ+QqxcbyA3hfeprFgIT//HW2nlHChWeIIMwS2Fq+gEARkhTkg==\"},\"engines\":{\"node\":\">=12\"}},\"d3-dsv@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-UG6OvdI5afDIFP9w4G0mNq50dSOsXHJaRE8arAS5o9ApWnIElp8GZw1Dun8vP8OyHOZ/QJUKUJwxiiCCnUwm+Q==\"},\"engines\":{\"node\":\">=12\"},\"hasBin\":true},\"d3-ease@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-wR/XK3D3XcLIZwpbvQwQ5fK+8Ykds1ip7A2Txe0yxncXSdq1L9skcG7blcedkOX+ZcgxGAmLX1FrRGbADwzi0w==\"},\"engines\":{\"node\":\">=12\"}},\"d3-fetch@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-kpkQIM20n3oLVBKGg6oHrUchHM3xODkTzjMoj7aWQFq5QEM+R6E4WkzT5+tojDY7yjez8KgCBRoj4aEr99Fdqw==\"},\"engines\":{\"node\":\">=12\"}},\"d3-force@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-zxV/SsA+U4yte8051P4ECydjD/S+qeYtnaIyAs9tgHCqfguma/aAQDjo85A9Z6EKhBirHRJHXIgJUlffT4wdLg==\"},\"engines\":{\"node\":\">=12\"}},\"d3-format@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-YyUI6AEuY/Wpt8KWLgZHsIU86atmikuoOmCfommt0LYHiQSPjvX2AcFc38PX0CBpr2RCyZhjex+NS/LPOv6YqA==\"},\"engines\":{\"node\":\">=12\"}},\"d3-geo@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-637ln3gXKXOwhalDzinUgY83KzNWZRKbYubaG+fGVuc/dxO64RRljtCTnf5e
cMyE1RIdtqpkVcq0IbtU2S8j2Q==\"},\"engines\":{\"node\":\">=12\"}},\"d3-hierarchy@3.1.2\":{\"resolution\":{\"integrity\":\"sha512-FX/9frcub54beBdugHjDCdikxThEqjnR93Qt7PvQTOHxyiNCAlvMrHhclk3cD5VeAaq9fxmfRp+CnWw9rEMBuA==\"},\"engines\":{\"node\":\">=12\"}},\"d3-interpolate@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-3bYs1rOD33uo8aqJfKP3JWPAibgw8Zm2+L9vBKEHJ2Rg+viTR7o5Mmv5mZcieN+FRYaAOWX5SJATX6k1PWz72g==\"},\"engines\":{\"node\":\">=12\"}},\"d3-path@1.0.9\":{\"resolution\":{\"integrity\":\"sha512-VLaYcn81dtHVTjEHd8B+pbe9yHWpXKZUC87PzoFmsFrJqgFwDe/qxfp5MlfsfM1V5E/iVt0MmEbWQ7FVIXh/bg==\"}},\"d3-path@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-p3KP5HCf/bvjBSSKuXid6Zqijx7wIfNW+J/maPs+iwR35at5JCbLUT0LzF1cnjbCHWhqzQTIN2Jpe8pRebIEFQ==\"},\"engines\":{\"node\":\">=12\"}},\"d3-polygon@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-3vbA7vXYwfe1SYhED++fPUQlWSYTTGmFmQiany/gdbiWgU/iEyQzyymwL9SkJjFFuCS4902BSzewVGsHHmHtXg==\"},\"engines\":{\"node\":\">=12\"}},\"d3-quadtree@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-04xDrxQTDTCFwP5H6hRhsRcb9xxv2RzkcsygFzmkSIOJy3PeRJP7sNk3VRIbKXcog561P9oU0/rVH6vDROAgUw==\"},\"engines\":{\"node\":\">=12\"}},\"d3-random@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-FXMe9GfxTxqd5D6jFsQ+DJ8BJS4E/fT5mqqdjovykEB2oFbTMDVdg1MGFxfQW+FBOGoB++k8swBrgwSHT1cUXQ==\"},\"engines\":{\"node\":\">=12\"}},\"d3-sankey@0.12.3\":{\"resolution\":{\"integrity\":\"sha512-nQhsBRmM19Ax5xEIPLMY9ZmJ/cDvd1BG3UVvt5h3WRxKg5zGRbvnteTyWAbzeSvlh3tW7ZEmq4VwR5mB3tutmQ==\"}},\"d3-scale-chromatic@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-A3s5PWiZ9YCXFye1o246KoscMWqf8BsD9eRiJ3He7C9OBaxKhAd5TFCdEx/7VbKtxxTsu//1mMJFrEt572cEyQ==\"},\"engines\":{\"node\":\">=12\"}},\"d3-scale@4.0.2\":{\"resolution\":{\"integrity\":\"sha512-GZW464g1SH7ag3Y7hXjf8RoUuAFIqklOAq3MRl4OaWabTFJY9PN/E1YklhXLh+OQ3fM9yS2nOkCoS+WLZ6kvxQ==\"},\"engines\":{\"node\":\">=12\"}},\"d3-selection@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-fmTRWbNMmsmWq6xJV8D19U/gw/bwrHfNXxrIN+HfZgnzqTHp9jOmKMhsTUjXOJnZOdZY9Q
28y4yebKzqDKlxlQ==\"},\"engines\":{\"node\":\">=12\"}},\"d3-shape@1.3.7\":{\"resolution\":{\"integrity\":\"sha512-EUkvKjqPFUAZyOlhY5gzCxCeI0Aep04LwIRpsZ/mLFelJiUfnK56jo5JMDSE7yyP2kLSb6LtF+S5chMk7uqPqw==\"}},\"d3-shape@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-SaLBuwGm3MOViRq2ABk3eLoxwZELpH6zhl3FbAoJ7Vm1gofKx6El1Ib5z23NUEhF9AsGl7y+dzLe5Cw2AArGTA==\"},\"engines\":{\"node\":\">=12\"}},\"d3-time-format@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-dJxPBlzC7NugB2PDLwo9Q8JiTR3M3e4/XANkreKSUxF8vvXKqm1Yfq4Q5dl8budlunRVlUUaDUgFt7eA8D6NLg==\"},\"engines\":{\"node\":\">=12\"}},\"d3-time@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-VqKjzBLejbSMT4IgbmVgDjpkYrNWUYJnbCGo874u7MMKIWsILRX+OpX/gTk8MqjpT1A/c6HY2dCA77ZN0lkQ2Q==\"},\"engines\":{\"node\":\">=12\"}},\"d3-timer@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-ndfJ/JxxMd3nw31uyKoY2naivF+r29V+Lc0svZxe1JvvIRmi8hUsrMvdOwgS1o6uBHmiz91geQ0ylPP0aj1VUA==\"},\"engines\":{\"node\":\">=12\"}},\"d3-transition@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-ApKvfjsSR6tg06xrL434C0WydLr7JewBB3V+/39RMHsaXTOG0zmt/OAXeng5M5LBm0ojmxJrpomQVZ1aPvBL4w==\"},\"engines\":{\"node\":\">=12\"},\"peerDependencies\":{\"d3-selection\":\"2 - 3\"}},\"d3-zoom@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-b8AmV3kfQaqWAuacbPuNbL6vahnOJflOhexLzMMNLga62+/nh0JzvJ0aO/5a5MVgUFGS7Hu1P9P03o3fJkDCyw==\"},\"engines\":{\"node\":\">=12\"}},\"d3@7.9.0\":{\"resolution\":{\"integrity\":\"sha512-e1U46jVP+w7Iut8Jt8ri1YsPOvFpg46k+K8TpCb0P+zjCkjkPnV7WzfDJzMHy1LnA+wj5pLT1wjO901gLXeEhA==\"},\"engines\":{\"node\":\">=12\"}},\"dagre-d3-es@7.0.13\":{\"resolution\":{\"integrity\":\"sha512-efEhnxpSuwpYOKRm/L5KbqoZmNNukHa/Flty4Wp62JRvgH2ojwVgPgdYyr4twpieZnyRDdIH7PY2mopX26+j2Q==\"}},\"data-uri-to-buffer@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A==\"},\"engines\":{\"node\":\">= 
12\"}},\"data-uri-to-buffer@6.0.2\":{\"resolution\":{\"integrity\":\"sha512-7hvf7/GW8e86rW0ptuwS3OcBGDjIi6SZva7hCyWC0yYry2cOPmLIjXAUHI6DK2HsnwJd9ifmt57i8eV2n4YNpw==\"},\"engines\":{\"node\":\">= 14\"}},\"data-urls@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-ZYP5VBHshaDAiVZxjbRVcFJpc+4xGgT0bK3vzy1HLN8jTO975HEbuYzZJcHoQEY5K1a0z8YayJkyVETa08eNTg==\"},\"engines\":{\"node\":\">=18\"}},\"data-urls@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-BnBS08aLUM+DKamupXs3w2tJJoqU+AkaE/+6vQxi/G/DPmIZFJJp9Dkb1kM03AZx8ADehDUZgsNxju3mPXZYIA==\"},\"engines\":{\"node\":\">=20\"}},\"dayjs@1.11.19\":{\"resolution\":{\"integrity\":\"sha512-t5EcLVS6QPBNqM2z8fakk/NKel+Xzshgt8FFKAn+qwlD1pzZWxh0nVCrvFK7ZDb6XucZeF9z8C7CBWTRIVApAw==\"}},\"debug@4.4.1\":{\"resolution\":{\"integrity\":\"sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ==\"},\"engines\":{\"node\":\">=6.0\"},\"peerDependencies\":{\"supports-color\":\"*\"},\"peerDependenciesMeta\":{\"supports-color\":{\"optional\":true}}},\"debug@4.4.3\":{\"resolution\":{\"integrity\":\"sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==\"},\"engines\":{\"node\":\">=6.0\"},\"peerDependencies\":{\"supports-color\":\"*\"},\"peerDependenciesMeta\":{\"supports-color\":{\"optional\":true}}},\"decamelize@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-G7Cqgaelq68XHJNGlZ7lrNQyhZGsFqpwtGFexqUv4IQdjKoSYF7ipZ9UuTJZUSQXFj/XaoBLuEVIVqr8EJngEQ==\"},\"engines\":{\"node\":\"^12.20.0 || ^14.13.1 || 
>=16.0.0\"}},\"decimal.js@10.6.0\":{\"resolution\":{\"integrity\":\"sha512-YpgQiITW3JXGntzdUmyUR1V812Hn8T1YVXhCu+wO3OpS4eU9l4YdD3qjyiKdV6mvV29zapkMeD390UVEf2lkUg==\"}},\"decode-named-character-reference@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-O8x12RzrUF8xyVcY0KJowWsmaJxQbmy0/EtnNtHRpsOcT7dFk5W598coHqBVpmWo1oQQfsCqfCmkZN5DJrZVdg==\"}},\"decompress-response@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-aW35yZM6Bb/4oJlZncMH2LCoZtJXTRxES17vE3hoRiowU2kWHaJKFkSBDnDR+cm9J+9QhXmREyIfv0pji9ejCQ==\"},\"engines\":{\"node\":\">=10\"}},\"deep-eql@5.0.2\":{\"resolution\":{\"integrity\":\"sha512-h5k/5U50IJJFpzfL6nO9jaaumfjO/f2NjK/oYB2Djzm4p9L+3T9qWpZqZ2hAbLPuuYq9wrU08WQyBTL5GbPk5Q==\"},\"engines\":{\"node\":\">=6\"}},\"deep-extend@0.6.0\":{\"resolution\":{\"integrity\":\"sha512-LOHxIOaPYdHlJRtCQfDIVZtfw/ufM8+rVj649RIHzcm/vGwQRXFt6OPqIFWsm2XEMrNIEtWR64sY1LEKD2vAOA==\"},\"engines\":{\"node\":\">=4.0.0\"}},\"deepmerge-ts@7.1.5\":{\"resolution\":{\"integrity\":\"sha512-HOJkrhaYsweh+W+e74Yn7YStZOilkoPb6fycpwNLKzSPtruFs48nYis0zy5yJz1+ktUhHxoRDJ27RQAWLIJVJw==\"},\"engines\":{\"node\":\">=16.0.0\"}},\"defaults@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-eFuaLoy/Rxalv2kr+lqMlUnrDWV+3j4pljOIJgLIhI058IQfWJ7vXhyEIHu+HtC738klGALYxOKDO0bQP3tg8A==\"}},\"define-lazy-prop@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-Ds09qNh8yw3khSjiJjiUInaGX9xlqZDY7JVryGxdxV7NPeuqQfplOpQ66yJFZut3jLa5zOwkXw1g9EI2uKh4Og==\"},\"engines\":{\"node\":\">=8\"}},\"degenerator@5.0.1\":{\"resolution\":{\"integrity\":\"sha512-TllpMR/t0M5sqCXfj85i4XaAzxmS5tVA16dqvdkMwGmzI+dXLXnw3J+3Vdv7VKw+ThlTMboK6i9rnZ6Nntj5CQ==\"},\"engines\":{\"node\":\">= 
14\"}},\"delaunator@5.0.1\":{\"resolution\":{\"integrity\":\"sha512-8nvh+XBe96aCESrGOqMp/84b13H9cdKbG5P2ejQCh4d4sK9RL4371qou9drQjMhvnPmhWl5hnmqbEE0fXr9Xnw==\"}},\"delayed-stream@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ==\"},\"engines\":{\"node\":\">=0.4.0\"}},\"dequal@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA==\"},\"engines\":{\"node\":\">=6\"}},\"detect-indent@6.1.0\":{\"resolution\":{\"integrity\":\"sha512-reYkTUJAZb9gUuZ2RvVCNhVHdg62RHnJ7WJl8ftMi4diZ6NWlciOzQN88pUhSELEwflJht4oQDv0F0BMlwaYtA==\"},\"engines\":{\"node\":\">=8\"}},\"detect-libc@2.1.2\":{\"resolution\":{\"integrity\":\"sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==\"},\"engines\":{\"node\":\">=8\"}},\"devlop@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-RWmIqhcFf1lRYBvNmr7qTNuyCt/7/ns2jbpp1+PalgE/rDQcBT0fioSMUpJ93irlUhC5hrg4cYqe6U+0ImW0rA==\"}},\"diff@8.0.2\":{\"resolution\":{\"integrity\":\"sha512-sSuxWU5j5SR9QQji/o2qMvqRNYRDOcBTgsJ/DeCf4iSN4gW+gNMXM7wFIP+fdXZxoNiAnHUTGjCr+TSWXdRDKg==\"},\"engines\":{\"node\":\">=0.3.1\"}},\"dir-glob@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==\"},\"engines\":{\"node\":\">=8\"}},\"dom-accessibility-api@0.5.16\":{\"resolution\":{\"integrity\":\"sha512-X7BJ2yElsnOJ30pZF4uIIDfBEVgF4XEBxL9Bxhy6dnrm5hkzqmsWHGTiHqRiITNhMyFLyAiWndIJP7Z1NTteDg==\"}},\"dom-serializer@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-wIkAryiqt/nV5EQKqQpo3SToSOV9J0DnbJqwK7Wv/Trc92zIAYZ4FlMu+JPFW1DfGFt81ZTCGgDEabffXeLyJg==\"}},\"domelementtype@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-OLETBj6w0OsagBwdXnPdN0cnMfF9opN69co+7ZrbfPGrdpPVNBUj02spi6B1N7wChLQiPn4CSH/zJvXw56gmHw==\"}},\"domhandler@5.0.3\":{\"resolution\":{\"integrity\":\"sha512-cgwlv/1iFQiFnU96XXgROh8xTeetsnJi
DsTc7TYCLFd9+/WNkIqPTxiM/8pSd8VIrhXGTf1Ny1q1hquVqDJB5w==\"},\"engines\":{\"node\":\">= 4\"}},\"dompurify@3.3.1\":{\"resolution\":{\"integrity\":\"sha512-qkdCKzLNtrgPFP1Vo+98FRzJnBRGe4ffyCea9IwHB1fyxPOeNTHpLKYGd4Uk9xvNoH0ZoOjwZxNptyMwqrId1Q==\"}},\"domutils@3.2.2\":{\"resolution\":{\"integrity\":\"sha512-6kZKyUajlDuqlHKVX1w7gyslj9MPIXzIFiz/rGu35uC1wMi+kMhQwGhl4lt9unC9Vb9INnY9Z3/ZA3+FhASLaw==\"}},\"dotenv-expand@11.0.7\":{\"resolution\":{\"integrity\":\"sha512-zIHwmZPRshsCdpMDyVsqGmgyP0yT8GAgXUnkdAoJisxvf33k7yO6OuoKmcTGuXPWSsm8Oh88nZicRLA9Y0rUeA==\"},\"engines\":{\"node\":\">=12\"}},\"dotenv@10.0.0\":{\"resolution\":{\"integrity\":\"sha512-rlBi9d8jpv9Sf1klPjNfFAuWDjKLwTIJJ/VxtoTwIR6hnZxcEOQCZg2oIL3MWBYw5GpUDKOEnND7LXTbIpQ03Q==\"},\"engines\":{\"node\":\">=10\"}},\"dotenv@16.4.7\":{\"resolution\":{\"integrity\":\"sha512-47qPchRCykZC03FhkYAhrvwU4xDBFIj1QPqaarj6mdM/hgUzfPHcpkHJOn3mJAufFeeAxAzeGsr5X0M4k6fLZQ==\"},\"engines\":{\"node\":\">=12\"}},\"dotenv@16.5.0\":{\"resolution\":{\"integrity\":\"sha512-m/C+AwOAr9/W1UOIZUo232ejMNnJAJtYQjUbHoNTBNTJSvqzzDh7vnrei3o3r3m9blf6ZoDkvcw0VmozNRFJxg==\"},\"engines\":{\"node\":\">=12\"}},\"dunder-proto@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==\"},\"engines\":{\"node\":\">= 
0.4\"}},\"eastasianwidth@0.2.0\":{\"resolution\":{\"integrity\":\"sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA==\"}},\"edge-paths@3.0.5\":{\"resolution\":{\"integrity\":\"sha512-sB7vSrDnFa4ezWQk9nZ/n0FdpdUuC6R1EOrlU3DL+bovcNFK28rqu2emmAUjujYEJTWIgQGqgVVWUZXMnc8iWg==\"},\"engines\":{\"node\":\">=14.0.0\"}},\"edgedriver@5.6.1\":{\"resolution\":{\"integrity\":\"sha512-3Ve9cd5ziLByUdigw6zovVeWJjVs8QHVmqOB0sJ0WNeVPcwf4p18GnxMmVvlFmYRloUwf5suNuorea4QzwBIOA==\"},\"hasBin\":true},\"electron-to-chromium@1.5.211\":{\"resolution\":{\"integrity\":\"sha512-IGBvimJkotaLzFnwIVgW9/UD/AOJ2tByUmeOrtqBfACSbAw5b1G0XpvdaieKyc7ULmbwXVx+4e4Be8pOPBrYkw==\"}},\"electron-to-chromium@1.5.352\":{\"resolution\":{\"integrity\":\"sha512-9wHk8x6dyuimoe18EdiDPWKExNdxYqo4fn4FwOVVper6RxT3cmpBwBkWWfSOCYJjQdIco/nPhJhNLmn4Ufg1Yg==\"}},\"emoji-regex@8.0.0\":{\"resolution\":{\"integrity\":\"sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==\"}},\"emoji-regex@9.2.2\":{\"resolution\":{\"integrity\":\"sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==\"}},\"encoding-sniffer@0.2.1\":{\"resolution\":{\"integrity\":\"sha512-5gvq20T6vfpekVtqrYQsSCFZ1wEg5+wW0/QaZMWkFr6BqD3NfKs0rLCx4rrVlSWJeZb5NBJgVLswK/w2MWU+Gw==\"}},\"end-of-stream@1.4.5\":{\"resolution\":{\"integrity\":\"sha512-ooEGc6HP26xXq/N+GCGOT0JKCLDGrq2bQUZrQ7gyrJiZANJ/8YDTxTpQBXGMn+WbIQXNVpyWymm7KYVICQnyOg==\"}},\"enhanced-resolve@5.21.0\":{\"resolution\":{\"integrity\":\"sha512-otxSQPw4lkOZWkHpB3zaEQs6gWYEsmX4xQF68ElXC/TWvGxGMSGOvoNbaLXm6/cS/fSfHtsEdw90y20PCd+sCA==\"},\"engines\":{\"node\":\">=10.13.0\"}},\"enquirer@2.3.6\":{\"resolution\":{\"integrity\":\"sha512-yjNnPr315/FjS4zIsUxYguYUPP2e1NK4d7E7ZOLiyYCcbFBiTMyID+2wvm2w6+pZ/odMA7cRkjhsPbltwBOrLg==\"},\"engines\":{\"node\":\">=8.6\"}},\"enquirer@2.4.1\":{\"resolution\":{\"integrity\":\"sha512-rRqJg/6gd538VHvR3PSrdRBb/1Vy2YfzHqzvbhGIQpDRKIa4FgV/54b5Q1xYSxOOwKvjXweS2
6E0Q+nAMwp2pQ==\"},\"engines\":{\"node\":\">=8.6\"}},\"entities@4.5.0\":{\"resolution\":{\"integrity\":\"sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw==\"},\"engines\":{\"node\":\">=0.12\"}},\"entities@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-aN97NXWF6AWBTahfVOIrB/NShkzi5H7F9r1s9mD3cDj4Ko5f2qhhVoYMibXF7GlLveb/D2ioWay8lxI97Ven3g==\"},\"engines\":{\"node\":\">=0.12\"}},\"entities@7.0.1\":{\"resolution\":{\"integrity\":\"sha512-TWrgLOFUQTH994YUyl1yT4uyavY5nNB5muff+RtWaqNVCAK408b5ZnnbNAUEWLTCpum9w6arT70i1XdQ4UeOPA==\"},\"engines\":{\"node\":\">=0.12\"}},\"error-stack-parser-es@1.0.5\":{\"resolution\":{\"integrity\":\"sha512-5qucVt2XcuGMcEGgWI7i+yZpmpByQ8J1lHhcL7PwqCwu9FPP3VUXzT4ltHe5i2z9dePwEHcDVOAfSnHsOlCXRA==\"}},\"es-define-property@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==\"},\"engines\":{\"node\":\">= 0.4\"}},\"es-errors@1.3.0\":{\"resolution\":{\"integrity\":\"sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==\"},\"engines\":{\"node\":\">= 0.4\"}},\"es-module-lexer@1.7.0\":{\"resolution\":{\"integrity\":\"sha512-jEQoCwk8hyb2AZziIOLhDqpm5+2ww5uIE6lkO/6jcOCusfk6LhMHpXXfBLXTZ7Ydyt0j4VoUQv6uGNYbdW+kBA==\"}},\"es-module-lexer@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-n27zTYMjYu1aj4MjCWzSP7G9r75utsaoc8m61weK+W8JMBGGQybd43GstCXZ3WNmSFtGT9wi59qQTW6mhTR5LQ==\"}},\"es-object-atoms@1.1.1\":{\"resolution\":{\"integrity\":\"sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==\"},\"engines\":{\"node\":\">= 0.4\"}},\"es-set-tostringtag@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==\"},\"engines\":{\"node\":\">= 
0.4\"}},\"esbuild@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-bbPBYYrtZbkt6Os6FiTLCTFxvq4tt3JKall1vRwshA3fdVztsLAatFaZobhkBC8/BrPetoa0oksYoKXoG4ryJg==\"},\"engines\":{\"node\":\">=18\"},\"hasBin\":true},\"esbuild@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-8VwMnyGCONIs6cWue2IdpHxHnAjzxnw2Zr7MkVxB2vjmQ2ivqGFb4LEG3SMnv0Gb2F/G/2yA8zUaiL1gywDCCg==\"},\"engines\":{\"node\":\">=18\"},\"hasBin\":true},\"escalade@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA==\"},\"engines\":{\"node\":\">=6\"}},\"escape-string-regexp@1.0.5\":{\"resolution\":{\"integrity\":\"sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg==\"},\"engines\":{\"node\":\">=0.8.0\"}},\"escape-string-regexp@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-/veY75JbMK4j1yjvuUxuVsiS/hr/4iHs9FTT6cgTexxdE0Ly/glccBAkloH/DofkjRbZU3bnoj38mOmhkZ0lHw==\"},\"engines\":{\"node\":\">=12\"}},\"escodegen@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-2NlIDTwUWJN0mRPQOdtQBzbUHvdGY2P1VXSyU83Q3xKxM7WHX2Ql8dKq782Q9TgQUNOLEzEYu9bzLNj1q88I5w==\"},\"engines\":{\"node\":\">=6.0\"},\"hasBin\":true},\"eslint-scope@5.1.1\":{\"resolution\":{\"integrity\":\"sha512-2NxwbF/hZ0KpepYN0cNbo+FN6XoK7GaHlQhgx/hIZl6Va0bF45RQOOwhLIy8lQDbuCiadSLCBnH2CFYquit5bw==\"},\"engines\":{\"node\":\">=8.0.0\"}},\"esprima@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A==\"},\"engines\":{\"node\":\">=4\"},\"hasBin\":true},\"esrecurse@4.3.0\":{\"resolution\":{\"integrity\":\"sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==\"},\"engines\":{\"node\":\">=4.0\"}},\"estraverse@4.3.0\":{\"resolution\":{\"integrity\":\"sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw==\"},\"engines\":{\"node\":\">=4.0\"}},\"estraverse@5.3.0\":{\"resolution\":{\"integrity
\":\"sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==\"},\"engines\":{\"node\":\">=4.0\"}},\"estree-walker@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-7RUKfXgSMMkzt6ZuXmqapOurLGPPfgj6l9uRZ7lRGolvk0y2yocc35LdcxKC5PQZdn2DMqioAQ2NoWcrTKmm6g==\"}},\"esutils@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"event-target-shim@5.0.1\":{\"resolution\":{\"integrity\":\"sha512-i/2XbnSz/uxRCU6+NdVJgKWDTM427+MqYbkQzD321DuCQJUqOuJKIA0IM2+W2xtYHdKOmZ4dR6fExsd4SXL+WQ==\"},\"engines\":{\"node\":\">=6\"}},\"events-universal@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-LUd5euvbMLpwOF8m6ivPCbhQeSiYVNb8Vs0fQ8QjXo0JTkEHpz8pxdQf0gStltaPpw0Cca8b39KxvK9cfKRiAw==\"}},\"events@3.3.0\":{\"resolution\":{\"integrity\":\"sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q==\"},\"engines\":{\"node\":\">=0.8.x\"}},\"expand-template@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-XYfuKMvj4O35f/pOXLObndIRvyQ+/+6AhODh+OKWj9S9498pHHn/IMszH+gt0fBCRWMNfk1ZSp5x3AifmnI2vg==\"},\"engines\":{\"node\":\">=6\"}},\"expect-type@1.2.2\":{\"resolution\":{\"integrity\":\"sha512-JhFGDVJ7tmDJItKhYgJCGLOWjuK9vPxiXoUFLwLDc99NlmklilbiQJwoctZtt13+xMw91MCk/REan6MWHqDjyA==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"expect-type@1.3.0\":{\"resolution\":{\"integrity\":\"sha512-knvyeauYhqjOYvQ66MznSMs83wmHrCycNEN6Ao+2AeYEfxUIkuiVxdEa1qlGEPK+We3n0THiDciYSsCcgW/DoA==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"exsolve@1.0.8\":{\"resolution\":{\"integrity\":\"sha512-LmDxfWXwcTArk8fUEnOfSZpHOJ6zOMUJKOtFLFqJLoKJetuQG874Uc7/Kki7zFLzYybmZhp1M7+98pfMqeX8yA==\"}},\"extend@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g==\"}},\"extendable-error@0.1.7\":{\"resolution\":{\"integrity\":\"sha512-UOiS2in6/Q0FK0R0q6UY9vYpQ21mr/Qn1KOnte7vs
ACuNJf514WvCCUHSRCPcgjPT2bAhNIJdlE6bVap1GKmeg==\"}},\"extract-zip@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-GDhU9ntwuKyGXdZBUgTIe+vXnWj0fppUEtMDL0+idd5Sta8TGpHssn/eusA9mrPr9qNDym6SxAYZjNvCn/9RBg==\"},\"engines\":{\"node\":\">= 10.17.0\"},\"hasBin\":true},\"fast-deep-equal@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-bCK/2Z4zLidyB4ReuIsvALH6w31YfAQDmXMqMx6FyfHqvBxtjC0eRumeSu4Bs3XtXwpyIywtSTrVT99BxY1f9w==\"}},\"fast-deep-equal@3.1.3\":{\"resolution\":{\"integrity\":\"sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==\"}},\"fast-fifo@1.3.2\":{\"resolution\":{\"integrity\":\"sha512-/d9sfos4yxzpwkDkuN7k2SqFKtYNmCTzgfEpz82x34IM9/zc8KGxQoXg1liNC/izpRM/MBdt44Nmx41ZWqk+FQ==\"}},\"fast-glob@3.3.3\":{\"resolution\":{\"integrity\":\"sha512-7MptL8U0cqcFdzIzwOTHoilX9x5BrNqye7Z/LuC7kCMRio1EMSyqRK3BEAUD7sXRq4iT4AzTVuZdhgQ2TCvYLg==\"},\"engines\":{\"node\":\">=8.6.0\"}},\"fast-uri@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-aLrHthzCjH5He4Z2H9YZ+v6Ujb9ocRuW6ZzkJQOrTxleEijANq4v1TsaPaVG1PZcuurEzrLcWRyYBYXD5cEiaw==\"}},\"fast-uri@3.1.2\":{\"resolution\":{\"integrity\":\"sha512-rVjf7ArG3LTk+FS6Yw81V1DLuZl1bRbNrev6Tmd/9RaroeeRRJhAt7jg/6YFxbvAQXUCavSoZhPPj6oOx+5KjQ==\"}},\"fast-xml-parser@4.5.6\":{\"resolution\":{\"integrity\":\"sha512-Yd4vkROfJf8AuJrDIVMVmYfULKmIJszVsMv7Vo71aocsKgFxpdlpSHXSaInvyYfgw2PRuObQSW2GFpVMUjxu9A==\"},\"hasBin\":true},\"fastq@1.17.1\":{\"resolution\":{\"integrity\":\"sha512-sRVD3lWVIXWg6By68ZN7vho9a1pQcN/WBFaAAsDDFzlJjvoGx0P8z7V1t72grFJfJhu3YPZBuu25f7Kaw2jN1w==\"}},\"fault@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-WtySTkS4OKev5JtpHXnib4Gxiurzh5NCGvWrFaZ34m6JehfTUhKZvn9njTfw48t6JumVQOmrKqpmGcdwxnhqBQ==\"}},\"fd-slicer@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-cE1qsB/VwyQozZ+q1dGxR8LBYNZeofhEdUNGSMbQD3Gw2lAzX9Zb3uIU6Ebc/Fmyjo9AWWfnn0AUCHqtevs/8g==\"}},\"fdir@6.5.0\":{\"resolution\":{\"integrity\":\"sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==\"},\"
engines\":{\"node\":\">=12.0.0\"},\"peerDependencies\":{\"picomatch\":\"^3 || ^4\"},\"peerDependenciesMeta\":{\"picomatch\":{\"optional\":true}}},\"fetch-blob@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ==\"},\"engines\":{\"node\":\"^12.20 || >= 14.13\"}},\"fetchdts@0.1.7\":{\"resolution\":{\"integrity\":\"sha512-YoZjBdafyLIop9lSxXVI33oLD5kN31q4Td+CasofLLYeLXRFeOsuOw0Uo+XNRi9PZlbfdlN2GmRtm4tCEQ9/KA==\"}},\"fflate@0.4.8\":{\"resolution\":{\"integrity\":\"sha512-FJqqoDBR00Mdj9ppamLa/Y7vxm+PRmNWA67N846RvsoYVMKB4q3y/de5PA7gUmRMYK/8CMz2GDZQmCRN1wBcWA==\"}},\"figures@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-yaduQFRKLXYOGgEn6AZau90j3ggSOyiqXU0F9JZfeXYhNa+Jk4X+s45A2zg5jns87GAFa34BBm2kXw4XpNcbdg==\"},\"engines\":{\"node\":\">=8\"}},\"file-uri-to-path@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-0Zt+s3L7Vf1biwWZ29aARiVYLx7iMGnEUl9x33fbB/j3jR81u/O2LbqK+Bm1CDSNDKVtJ/YjwY7TUd5SkeLQLw==\"}},\"fill-range@7.1.1\":{\"resolution\":{\"integrity\":\"sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==\"},\"engines\":{\"node\":\">=8\"}},\"find-up@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==\"},\"engines\":{\"node\":\">=8\"}},\"flat@5.0.2\":{\"resolution\":{\"integrity\":\"sha512-b6suED+5/3rTpUBdG1gupIl8MPFCAMA0QXwmljLhvCUKcUvdE4gWky9zpuGCcXHOsz4J9wPGNWq6OKpmIzz3hQ==\"},\"hasBin\":true},\"follow-redirects@1.15.11\":{\"resolution\":{\"integrity\":\"sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ==\"},\"engines\":{\"node\":\">=4.0\"},\"peerDependencies\":{\"debug\":\"*\"},\"peerDependenciesMeta\":{\"debug\":{\"optional\":true}}},\"foreground-child@3.3.1\":{\"resolution\":{\"integrity\":\"sha512-gIXjKqtFuWEgzFRJA9WCQeSJLZDjgJUOMCMzxtvFq/37KojM1BFGufqsCy0r4qSQmYLsZYMeyRqzIWOMup03sw==\"},\"engines\":{\"node\":\">=14\
"}},\"form-data@4.0.4\":{\"resolution\":{\"integrity\":\"sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==\"},\"engines\":{\"node\":\">= 6\"}},\"format@0.2.2\":{\"resolution\":{\"integrity\":\"sha512-wzsgA6WOq+09wrU1tsJ09udeR/YZRaeArL9e1wPbFg3GG2yDnC2ldKpxs4xunpFF9DgqCqOIra3bc1HWrJ37Ww==\"},\"engines\":{\"node\":\">=0.4.x\"}},\"formdata-polyfill@4.0.10\":{\"resolution\":{\"integrity\":\"sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g==\"},\"engines\":{\"node\":\">=12.20.0\"}},\"front-matter@4.0.2\":{\"resolution\":{\"integrity\":\"sha512-I8ZuJ/qG92NWX8i5x1Y8qyj3vizhXS31OxjKDu3LKP+7/qBgfIKValiZIEwoVoJKUHlhWtYrktkxV1XsX+pPlg==\"}},\"fs-constants@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-y6OAwoSIf7FyjMIv94u+b5rdheZEjzR63GTyZJm5qh4Bi+2YgwLCcI/fPFZkL5PSixOt6ZNKm+w+Hfp/Bciwow==\"}},\"fs-extra@11.3.1\":{\"resolution\":{\"integrity\":\"sha512-eXvGGwZ5CL17ZSwHWd3bbgk7UUpF6IFHtP57NYYakPvHOs8GDgDe5KJI36jIJzDkJ6eJjuzRA8eBQb6SkKue0g==\"},\"engines\":{\"node\":\">=14.14\"}},\"fs-extra@7.0.1\":{\"resolution\":{\"integrity\":\"sha512-YJDaCJZEnBmcbw13fvdAM9AwNOJwOzrE4pqMqBq5nFiEqXUqHwlK4B+3pUw6JNvfSPtX05xFHtYy/1ni01eGCw==\"},\"engines\":{\"node\":\">=6 <7 || >=8\"}},\"fs-extra@8.1.0\":{\"resolution\":{\"integrity\":\"sha512-yhlQgA6mnOJUKOsRUFsgJdQCvkKhcz8tlZG5HBQfReYZy46OwLcY+Zia0mtdHsOo9y/hP+CxMN0TU9QxoOtG4g==\"},\"engines\":{\"node\":\">=6 <7 || >=8\"}},\"fs-minipass@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-V/JgOLFCS+R6Vcq0slCuaeWEdNC3ouDlJMNIsacH2VtALiu9mV4LPrHc5cDl8k5aw6J8jwgWWpiTo5RYhmIzvg==\"},\"engines\":{\"node\":\">= 8\"}},\"fsevents@2.3.2\":{\"resolution\":{\"integrity\":\"sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==\"},\"engines\":{\"node\":\"^8.16.0 || ^10.6.0 || 
>=11.0.0\"},\"os\":[\"darwin\"]},\"fsevents@2.3.3\":{\"resolution\":{\"integrity\":\"sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==\"},\"engines\":{\"node\":\"^8.16.0 || ^10.6.0 || >=11.0.0\"},\"os\":[\"darwin\"]},\"function-bind@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==\"}},\"geckodriver@4.5.1\":{\"resolution\":{\"integrity\":\"sha512-lGCRqPMuzbRNDWJOQcUqhNqPvNsIFu6yzXF8J/6K3WCYFd2r5ckbeF7h1cxsnjA7YLSEiWzERCt6/gjZ3tW0ug==\"},\"engines\":{\"node\":\"^16.13 || >=18 || >=20\"},\"hasBin\":true},\"gensync@1.0.0-beta.2\":{\"resolution\":{\"integrity\":\"sha512-3hN7NaskYvMDLQY55gnW3NQ+mesEAepTqlg+VEbj7zzqEMBVNhzcGYYeqFo/TlYz6eQiFcp1HcsCZO+nGgS8zg==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"get-caller-file@2.0.5\":{\"resolution\":{\"integrity\":\"sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==\"},\"engines\":{\"node\":\"6.* || 8.* || >= 10.*\"}},\"get-intrinsic@1.3.0\":{\"resolution\":{\"integrity\":\"sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==\"},\"engines\":{\"node\":\">= 0.4\"}},\"get-port@7.2.0\":{\"resolution\":{\"integrity\":\"sha512-afP4W205ONCuMoPBqcR6PSXnzX35KTcJygfJfcp+QY+uwm3p20p1YczWXhlICIzGMCxYBQcySEcOgsJcrkyobg==\"},\"engines\":{\"node\":\">=16\"}},\"get-proto@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==\"},\"engines\":{\"node\":\">= 
0.4\"}},\"get-stream@5.2.0\":{\"resolution\":{\"integrity\":\"sha512-nBF+F1rAZVCu/p7rjzgA+Yb4lfYXrpl7a6VmJrU8wF9I1CKvP/QwPNZHnOlwbTkY6dvtFIzFMSyQXbLoTQPRpA==\"},\"engines\":{\"node\":\">=8\"}},\"get-tsconfig@4.14.0\":{\"resolution\":{\"integrity\":\"sha512-yTb+8DXzDREzgvYmh6s9vHsSVCHeC0G3PI5bEXNBHtmshPnO+S5O7qgLEOn0I5QvMy6kpZN8K1NKGyilLb93wA==\"}},\"get-uri@6.0.5\":{\"resolution\":{\"integrity\":\"sha512-b1O07XYq8eRuVzBNgJLstU6FYc1tS6wnMtF1I1D9lE8LxZSOGZ7LhxN54yPP6mGw5f2CkXY2BQUL9Fx41qvcIg==\"},\"engines\":{\"node\":\">= 14\"}},\"github-from-package@0.0.0\":{\"resolution\":{\"integrity\":\"sha512-SyHy3T1v2NUXn29OsWdxmK6RwHD+vkj3v8en8AOBZ1wBQ/hCAQ5bAQTD02kW4W9tUp/3Qh6J8r9EvntiyCmOOw==\"}},\"github-slugger@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-IaOQ9puYtjrkq7Y0Ygl9KDZnrf/aiUJYUpVf89y8kyaxbRG7Y1SrX/jaumrv81vc61+kiMempujsM3Yw7w5qcw==\"}},\"glob-parent@5.1.2\":{\"resolution\":{\"integrity\":\"sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==\"},\"engines\":{\"node\":\">= 6\"}},\"glob-to-regexp@0.4.1\":{\"resolution\":{\"integrity\":\"sha512-lkX1HJXwyMcprw/5YUZc2s7DrpAiHB21/V+E1rHUrVNokkvB6bqMzT0VfV6/86ZNabt1k14YOIaT7nDvOX3Iiw==\"}},\"glob@10.4.5\":{\"resolution\":{\"integrity\":\"sha512-7Bv8RF0k6xjo7d4A/PxYLbUCfb6c+Vpd2/mB2yRDlew7Jb5hEXiCD9ibfO7wpk8i4sevK6DFny9h7EYbM3/sHg==\"},\"deprecated\":\"Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me\",\"hasBin\":true},\"glob@10.5.0\":{\"resolution\":{\"integrity\":\"sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg==\"},\"deprecated\":\"Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. 
Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me\",\"hasBin\":true},\"globals@15.15.0\":{\"resolution\":{\"integrity\":\"sha512-7ACyT3wmyp3I61S4fG682L0VA2RGD9otkqGJIwNUMF1SWUombIIk+af1unuDYgMm082aHYwD+mzJvv9Iu8dsgg==\"},\"engines\":{\"node\":\">=18\"}},\"globby@11.1.0\":{\"resolution\":{\"integrity\":\"sha512-jhIXaOzy1sb8IyocaruWSn1TjmnBVs8Ayhcy83rmxNJ8q2uWKCAj3CnJY+KpGSXCueAPc0i05kVvVKtP1t9S3g==\"},\"engines\":{\"node\":\">=10\"}},\"gopd@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==\"},\"engines\":{\"node\":\">= 0.4\"}},\"graceful-fs@4.2.11\":{\"resolution\":{\"integrity\":\"sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==\"}},\"grapheme-splitter@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-bzh50DW9kTPM00T8y4o8vQg89Di9oLJVLW/KaOGIXJWP/iqCN6WKYkbNOF04vFLJhwcpYUh9ydh/+5vpOqV4YQ==\"}},\"graphql@16.14.0\":{\"resolution\":{\"integrity\":\"sha512-BBvQ/406p+4CZbTpCbVPSxfzrZrbnuWSP1ELYgyS6B+hNeKzgrdB4JczCa5VZUBQrDa9hUngm0KnexY6pJRN5Q==\"},\"engines\":{\"node\":\"^12.22.0 || ^14.16.0 || ^16.0.0 || 
>=17.0.0\"}},\"h3@2.0.1-rc.20\":{\"resolution\":{\"integrity\":\"sha512-28ljodXuUp0fZovdiSRq4G9OgrxCztrJe5VdYzXAB7ueRvI7pIUqLU14Xi3XqdYJ/khXjfpUOOD2EQa6CmBgsg==\"},\"engines\":{\"node\":\">=20.11.1\"},\"hasBin\":true,\"peerDependencies\":{\"crossws\":\"^0.4.1\"},\"peerDependenciesMeta\":{\"crossws\":{\"optional\":true}}},\"hachure-fill@0.5.2\":{\"resolution\":{\"integrity\":\"sha512-3GKBOn+m2LX9iq+JC1064cSFprJY4jL1jCXTcpnfER5HYE2l/4EfWSGzkPa/ZDBmYI0ZOEj5VHV/eKnPGkHuOg==\"}},\"happy-dom@18.0.1\":{\"resolution\":{\"integrity\":\"sha512-qn+rKOW7KWpVTtgIUi6RVmTBZJSe2k0Db0vh1f7CWrWclkkc7/Q+FrOfkZIb2eiErLyqu5AXEzE7XthO9JVxRA==\"},\"engines\":{\"node\":\">=20.0.0\"}},\"has-flag@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==\"},\"engines\":{\"node\":\">=8\"}},\"has-symbols@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==\"},\"engines\":{\"node\":\">= 0.4\"}},\"has-tostringtag@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==\"},\"engines\":{\"node\":\">= 0.4\"}},\"hasown@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==\"},\"engines\":{\"node\":\">= 
0.4\"}},\"hast-util-embedded@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-naH8sld4Pe2ep03qqULEtvYr7EjrLK2QHY8KJR6RJkTUjPGObe1vnx585uzem2hGra+s1q08DZZpfgDVYRbaXA==\"}},\"hast-util-from-html@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-CUSRHXyKjzHov8yKsQjGOElXy/3EKpyX56ELnkHH34vDVw1N1XSQ1ZcAvTyAPtGqLTuKP/uxM+aLkSPqF/EtMw==\"}},\"hast-util-from-parse5@8.0.3\":{\"resolution\":{\"integrity\":\"sha512-3kxEVkEKt0zvcZ3hCRYI8rqrgwtlIOFMWkbclACvjlDw8Li9S2hk/d51OI0nr/gIpdMHNepwgOKqZ/sy0Clpyg==\"}},\"hast-util-has-property@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-MNilsvEKLFpV604hwfhVStK0usFY/QmM5zX16bo7EjnAEGofr5YyI37kzopBlZJkHD4t887i+q/C8/tr5Q94cA==\"}},\"hast-util-heading-rank@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-EJKb8oMUXVHcWZTDepnr+WNbfnXKFNf9duMesmr4S8SXTJBJ9M4Yok08pu9vxdJwdlGRhVumk9mEhkEvKGifwA==\"}},\"hast-util-is-body-ok-link@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-0qpnzOBLztXHbHQenVB8uNuxTnm/QBFUOmdOSsEn7GnBtyY07+ENTWVFBAnXd/zEgd9/SUG3lRY7hSIBWRgGpQ==\"}},\"hast-util-is-element@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-Val9mnv2IWpLbNPqc/pUem+a7Ipj2aHacCwgNfTiK0vJKl0LF+4Ba4+v1oPHFpf3bLYmreq0/l3Gud9S5OH42g==\"}},\"hast-util-minify-whitespace@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-L96fPOVpnclQE0xzdWb/D12VT5FabA7SnZOUMtL1DbXmYiHJMXZvFkIZfiMmTCNJHUeO2K9UYNXoVyfz+QHuOw==\"}},\"hast-util-parse-selector@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-wkQCkSYoOGCRKERFWcxMVMOcYE2K1AaNLU8DXS9arxnLOUEWbOXKXiJUNzEpqZ3JOKpnha3jkFrumEjVliDe7A==\"}},\"hast-util-phrasing@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-6h60VfI3uBQUxHqTyMymMZnEbNl1XmEGtOxxKYL7stY2o601COo62AWAYBQR9lZbYXYSBoxag8UpPRXK+9fqSQ==\"}},\"hast-util-raw@9.1.0\":{\"resolution\":{\"integrity\":\"sha512-Y8/SBAHkZGoNkpzqqfCldijcuUKh7/su31kEBp67cFY09Wy0mTRgtsLYsiIxMJxlu0f6AA5SUTbDR8K0rxnbUw==\"}},\"hast-util-sanitize@5.0.2\":{\"resolution\":{\"integrity\":\"sha512-3yTWghByc50aGS7JlGhk61SPenfE/p1oaFeNwkOOyrscaOkMGrcW9+Cy/QAIOBpZxP1yqDIzFMR0+Np0i0+usg==\"}},\"hast-util-to-ht
ml@9.0.5\":{\"resolution\":{\"integrity\":\"sha512-OguPdidb+fbHQSU4Q4ZiLKnzWo8Wwsf5bZfbvu7//a9oTYoqD/fWpe96NuHkoS9h0ccGOTe0C4NGXdtS0iObOw==\"}},\"hast-util-to-mdast@10.1.2\":{\"resolution\":{\"integrity\":\"sha512-FiCRI7NmOvM4y+f5w32jPRzcxDIz+PUqDwEqn1A+1q2cdp3B8Gx7aVrXORdOKjMNDQsD1ogOr896+0jJHW1EFQ==\"}},\"hast-util-to-parse5@8.0.0\":{\"resolution\":{\"integrity\":\"sha512-3KKrV5ZVI8if87DVSi1vDeByYrkGzg4mEfeu4alwgmmIeARiBLKCZS2uw5Gb6nU9x9Yufyj3iudm6i7nl52PFw==\"}},\"hast-util-to-string@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-XelQVTDWvqcl3axRfI0xSeoVKzyIFPwsAGSLIsKdJKQMXDYJS4WYrBNF/8J7RdhIcFI2BOHgAifggsvsxp/3+A==\"}},\"hast-util-to-text@4.0.2\":{\"resolution\":{\"integrity\":\"sha512-KK6y/BN8lbaq654j7JgBydev7wuNMcID54lkRav1P0CaE1e47P72AWWPiGKXTJU271ooYzcvTAn/Zt0REnvc7A==\"}},\"hast-util-whitespace@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-88JUN06ipLwsnv+dVn+OIYOvAuvBMy/Qoi6O7mQHxdPXpjy+Cd6xRkWwux7DKO+4sYILtLBRIKgsdpS2gQc7qw==\"}},\"hastscript@9.0.1\":{\"resolution\":{\"integrity\":\"sha512-g7df9rMFX/SPi34tyGCyUBREQoKkapwdY/T04Qn9TDWfHhAYt4/I0gMVirzK5wEzeUqIjEB+LXC/ypb7Aqno5w==\"}},\"headers-polyfill@4.0.3\":{\"resolution\":{\"integrity\":\"sha512-IScLbePpkvO846sIwOtOTDjutRMWdXdJmXdMvk6gCBHxFO8d+QKOQedyZSxFTTFYRSmlgSTDtXqqq4pcenBXLQ==\"}},\"highlight.js@11.11.1\":{\"resolution\":{\"integrity\":\"sha512-Xwwo44whKBVCYoliBQwaPvtd/2tYFkRQtXDWj1nackaV2JPXx3L0+Jvd8/qCJ2p+ML0/XVkJ2q+Mr+UVdpJK5w==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"html-encoding-sniffer@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-Y22oTqIU4uuPgEemfz7NDJz6OeKf12Lsu+QC+s3BVpda64lTiMYCyGwg5ki4vFxkMwQdeZDl2adZoqUgdFuTgQ==\"},\"engines\":{\"node\":\">=18\"}},\"html-escaper@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-H2iMtd0I4Mt5eYiapRdIDjp+XzelXQ0tFE4JS7YFwFevXXMmOp9myNrUvCg0D6ws8iqkRPBfKHgbwig1SmlLfg==\"}},\"html-void-elements@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-bEqo66MRXsUGxWHV5IP0PUiAWwoEjba4VCzg0LjFJBpchPaTfyfCKTG6bc5F8ucKec3q5y6qOdGyYTSBEvhCrg==\"}},\"htmlfy@0.3.2\":{\
"resolution\":{\"integrity\":\"sha512-FsxzfpeDYRqn1emox9VpxMPfGjADoUmmup8D604q497R0VNxiXs4ZZTN2QzkaMA5C9aHGUoe1iQRVSm+HK9xuA==\"}},\"htmlparser2@10.0.0\":{\"resolution\":{\"integrity\":\"sha512-TwAZM+zE5Tq3lrEHvOlvwgj1XLWQCtaaibSN11Q+gGBAS7Y1uZSWwXXRe4iF6OXnaq1riyQAPFOBtYc77Mxq0g==\"}},\"htmlparser2@10.1.0\":{\"resolution\":{\"integrity\":\"sha512-VTZkM9GWRAtEpveh7MSF6SjjrpNVNNVJfFup7xTY3UpFtm67foy9HDVXneLtFVt4pMz5kZtgNcvCniNFb1hlEQ==\"}},\"http-proxy-agent@7.0.2\":{\"resolution\":{\"integrity\":\"sha512-T1gkAiYYDWYx3V5Bmyu7HcfcvL7mUrTWiM6yOfa3PIphViJ/gFPbvidQ+veqSOHci/PxBcDabeUNCzpOODJZig==\"},\"engines\":{\"node\":\">= 14\"}},\"https-proxy-agent@7.0.2\":{\"resolution\":{\"integrity\":\"sha512-NmLNjm6ucYwtcUmL7JQC1ZQ57LmHP4lT15FQ8D61nak1rO6DH+fz5qNK2Ap5UN4ZapYICE3/0KodcLYSPsPbaA==\"},\"engines\":{\"node\":\">= 14\"}},\"https-proxy-agent@7.0.6\":{\"resolution\":{\"integrity\":\"sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw==\"},\"engines\":{\"node\":\">= 14\"}},\"human-id@4.1.1\":{\"resolution\":{\"integrity\":\"sha512-3gKm/gCSUipeLsRYZbbdA1BD83lBoWUkZ7G9VFrhWPAU76KwYo5KR8V28bpoPm/ygy0x5/GCbpRQdY7VLYCoIg==\"},\"hasBin\":true},\"iconv-lite@0.6.3\":{\"resolution\":{\"integrity\":\"sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"ieee754@1.2.1\":{\"resolution\":{\"integrity\":\"sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==\"}},\"ignore@5.3.2\":{\"resolution\":{\"integrity\":\"sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g==\"},\"engines\":{\"node\":\">= 
4\"}},\"immediate@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-XXOFtyqDjNDAQxVfYxuF7g9Il/IbWmmlQg2MYKOH8ExIT1qg6xc4zyS3HaEEATgs1btfzxq15ciUiY7gjSXRGQ==\"}},\"immutable@5.1.5\":{\"resolution\":{\"integrity\":\"sha512-t7xcm2siw+hlUM68I+UEOK+z84RzmN59as9DZ7P1l0994DKUWV7UXBMQZVxaoMSRQ+PBZbHCOoBt7a2wxOMt+A==\"}},\"import-meta-resolve@4.2.0\":{\"resolution\":{\"integrity\":\"sha512-Iqv2fzaTQN28s/FwZAoFq0ZSs/7hMAHJVX+w8PZl3cY19Pxk6jFFalxQoIfW2826i/fDLXv8IiEZRIT0lDuWcg==\"}},\"inherits@2.0.4\":{\"resolution\":{\"integrity\":\"sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==\"}},\"ini@1.3.8\":{\"resolution\":{\"integrity\":\"sha512-JV/yugV2uzW5iMRSiZAyDtQd+nxtUnjeLt0acNdw98kKLrvuRVyB80tsREOE7yvGVgalhZ6RNXCmEHkUKBKxew==\"}},\"ini@4.1.3\":{\"resolution\":{\"integrity\":\"sha512-X7rqawQBvfdjS10YU1y1YVreA3SsLrW9dX2CewP2EbBJM4ypVNLDkO5y04gejPwKIY9lR+7r9gn3rFPt/kmWFg==\"},\"engines\":{\"node\":\"^14.17.0 || ^16.13.0 || >=18.0.0\"}},\"internmap@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-lDB5YccMydFBtasVtxnZ3MRBHuaoE8GKsppq+EchKL2U4nK/DmEpPHNH8MZe5HkMtpSiTSOZwfN0tzYjO/lJEw==\"}},\"internmap@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-5Hh7Y1wQbvY5ooGgPbDaL5iYLAPzMTUrjMulskHLH6wnv/A+1q5rgEaiuqEjB+oxGXIVZs1FF+R/KPN3ZSQYYg==\"},\"engines\":{\"node\":\">=12\"}},\"ip-address@10.2.0\":{\"resolution\":{\"integrity\":\"sha512-/+S6j4E9AHvW9SWMSEY9Xfy66O5PWvVEJ08O0y5JGyEKQpojb0K0GKpz/v5HJ/G0vi3D2sjGK78119oXZeE0qA==\"},\"engines\":{\"node\":\">= 
12\"}},\"is-binary-path@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw==\"},\"engines\":{\"node\":\">=8\"}},\"is-docker@2.2.1\":{\"resolution\":{\"integrity\":\"sha512-F+i2BKsFrH66iaUFc0woD8sLy8getkwTwtOBjvs56Cx4CgJDeKQeqfz8wAYiSb8JOprWhHH5p77PbmYCvvUuXQ==\"},\"engines\":{\"node\":\">=8\"},\"hasBin\":true},\"is-extglob@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"is-fullwidth-code-point@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==\"},\"engines\":{\"node\":\">=8\"}},\"is-glob@4.0.3\":{\"resolution\":{\"integrity\":\"sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"is-interactive@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-2HvIEKRoqS62guEC+qBjpvRubdX910WCMuJTZ+I9yvqKU2/12eSL549HMwtabb4oupdj2sMP50k+XJfB/8JE6w==\"},\"engines\":{\"node\":\">=8\"}},\"is-node-process@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-Vg4o6/fqPxIjtxgUH5QLJhwZ7gW5diGCVlXpuUfELC62CuxM1iHcRe51f2W1FDy04Ai4KJkagKjx3XaqyfRKXw==\"}},\"is-number@7.0.0\":{\"resolution\":{\"integrity\":\"sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==\"},\"engines\":{\"node\":\">=0.12.0\"}},\"is-plain-obj@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-+Pgi+vMuUNkJyExiMBt5IlFoMyKnr5zhJ4Uspz58WOhBF5QoIZkFyNHIbBAtHwzVAgk5RtndVNsDRN61/mmDqg==\"},\"engines\":{\"node\":\">=12\"}},\"is-potential-custom-element-name@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-bCYeRA2rVibKZd+s2625gGnGF/t7DSqDs4dP7CrLA1m7jKWz6pps0LpYLJN8Q64HtmPKJ1hrN3nzPNKFEKOUiQ==\"}},\"is-stream@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1
VLMO4n7OI6p7RbngDg==\"},\"engines\":{\"node\":\">=8\"}},\"is-subdir@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-2AT6j+gXe/1ueqbW6fLZJiIw3F8iXGJtt0yDrZaBhAZEG1raiTxKWU+IPqMCzQAXOUCKdA4UDMgacKH25XG2Cw==\"},\"engines\":{\"node\":\">=4\"}},\"is-unicode-supported@0.1.0\":{\"resolution\":{\"integrity\":\"sha512-knxG2q4UC3u8stRGyAVJCOdxFmv5DZiRcdlIaAQXAbSfJya+OhopNotLQrstBhququ4ZpuKbDc/8S6mgXgPFPw==\"},\"engines\":{\"node\":\">=10\"}},\"is-windows@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-eXK1UInq2bPmjyX6e3VHIzMLobc4J94i4AWn+Hpq3OU5KkrRC96OAcR3PRJ/pGu6m8TRnBHP9dkXQVsT/COVIA==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"is-wsl@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww==\"},\"engines\":{\"node\":\">=8\"}},\"isarray@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==\"}},\"isbot@5.1.28\":{\"resolution\":{\"integrity\":\"sha512-qrOp4g3xj8YNse4biorv6O5ZShwsJM0trsoda4y7j/Su7ZtTTfVXFzbKkpgcSoDrHS8FcTuUwcU04YimZlZOxw==\"},\"engines\":{\"node\":\">=18\"}},\"isexe@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==\"}},\"isexe@3.1.5\":{\"resolution\":{\"integrity\":\"sha512-6B3tLtFqtQS4ekarvLVMZ+X+VlvQekbe4taUkf/rhVO3d/h0M2rfARm/pXLcPEsjjMsFgrFgSrhQIxcSVrBz8w==\"},\"engines\":{\"node\":\">=18\"}},\"istanbul-lib-coverage@3.2.2\":{\"resolution\":{\"integrity\":\"sha512-O8dpsF+r0WV/8MNRKfnmrtCWhuKjxrq2w+jpzBL5UZKTi2LeVWnWOmWRxFlesJONmc+wLAGvKQZEOanko0LFTg==\"},\"engines\":{\"node\":\">=8\"}},\"istanbul-lib-report@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-GCfE1mtsHGOELCU8e/Z7YWzpmybrx/+dSTfLrvY8qRmaY6zXTKWn6WQIjaAFw069icm6GVMNkgu0NzI4iPZUNw==\"},\"engines\":{\"node\":\">=10\"}},\"istanbul-lib-source-maps@5.0.6\":{\"resolution\":{\"integrity\":\"sha512-yg2d+Em4KizZC5niWhQaIomgf5WlL4vOOjZ5xGCmF8SnPE/mDWWXgvRExdcpCgh9
lLRRa1/fSYp2ymmbJ1pI+A==\"},\"engines\":{\"node\":\">=10\"}},\"istanbul-reports@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-HGYWWS/ehqTV3xN10i23tkPkpH46MLCIMFNCaaKNavAXTF1RkqxawEPtnjnGZ6XKSInBKkiOA5BKS+aZiY3AvA==\"},\"engines\":{\"node\":\">=8\"}},\"jackspeak@3.4.3\":{\"resolution\":{\"integrity\":\"sha512-OGlZQpz2yfahA/Rd1Y8Cd9SIEsqvXkLVoSw/cgwhnhFMDbsQFeZYoJJ7bIZBS9BcamUW96asq/npPWugM+RQBw==\"}},\"jest-diff@30.1.1\":{\"resolution\":{\"integrity\":\"sha512-LUU2Gx8EhYxpdzTR6BmjL1ifgOAQJQELTHOiPv9KITaKjZvJ9Jmgigx01tuZ49id37LorpGc9dPBPlXTboXScw==\"},\"engines\":{\"node\":\"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0\"}},\"jest-worker@27.5.1\":{\"resolution\":{\"integrity\":\"sha512-7vuh85V5cdDofPyxn58nrPjBktZo0u9x1g8WtjQol+jZDaE+fhN+cIvTj11GndBnMnyfrUOG1sZQxCdjKh+DKg==\"},\"engines\":{\"node\":\">= 10.13.0\"}},\"jiti@2.6.1\":{\"resolution\":{\"integrity\":\"sha512-ekilCSN1jwRvIbgeg/57YFh8qQDNbwDb9xT/qu2DAHbFFZUicIl4ygVaAvzveMhMVr3LnpSKTNnwt8PoOfmKhQ==\"},\"hasBin\":true},\"js-tokens@10.0.0\":{\"resolution\":{\"integrity\":\"sha512-lM/UBzQmfJRo9ABXbPWemivdCW8V2G8FHaHdypQaIy523snUjog0W71ayWXTjiR+ixeMyVHN2XcpnTd/liPg/Q==\"}},\"js-tokens@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==\"}},\"js-tokens@9.0.1\":{\"resolution\":{\"integrity\":\"sha512-mxa9E9ITFOt0ban3j6L5MpjwegGz6lBQmM1IJkWeBZGcMxto50+eWdjC/52xDbS2vy0k7vIMK0Fe2wfL9OQSpQ==\"}},\"js-yaml@3.14.1\":{\"resolution\":{\"integrity\":\"sha512-okMH7OXXJ7YrN9Ok3/SXrnu4iX9yOk+25nqX4imS2npuvTYDmo/QEZoqwZkYaIDk3jVvBOTOIEgEhaLOynBS9g==\"},\"hasBin\":true},\"js-yaml@4.1.1\":{\"resolution\":{\"integrity\":\"sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==\"},\"hasBin\":true},\"jsdom@26.1.0\":{\"resolution\":{\"integrity\":\"sha512-Cvc9WUhxSMEo4McES3P7oK3QaXldCfNWp7pl2NNeiIFlCoLr3kfq9kb1fxftiwk1FLV7CvpvDfonxtzUDeSOPg==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"canvas\":\"^3
.0.0\"},\"peerDependenciesMeta\":{\"canvas\":{\"optional\":true}}},\"jsdom@27.3.0\":{\"resolution\":{\"integrity\":\"sha512-GtldT42B8+jefDUC4yUKAvsaOrH7PDHmZxZXNgF2xMmymjUbRYJvpAybZAKEmXDGTM0mCsz8duOa4vTm5AY2Kg==\"},\"engines\":{\"node\":\"^20.19.0 || ^22.12.0 || >=24.0.0\"},\"peerDependencies\":{\"canvas\":\"^3.0.0\"},\"peerDependenciesMeta\":{\"canvas\":{\"optional\":true}}},\"jsesc@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-/sM3dO2FOzXjKQhJuo0Q173wf2KOo8t4I8vHy6lF9poUp7bKT0/NHE8fPX23PwfhnykfqnC2xRxOnVw5XuGIaA==\"},\"engines\":{\"node\":\">=6\"},\"hasBin\":true},\"json-parse-even-better-errors@2.3.1\":{\"resolution\":{\"integrity\":\"sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w==\"}},\"json-schema-to-ts@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-+DWg8jCJG2TEnpy7kOm/7/AxaYoaRbjVB4LFZLySZlWn8exGs3A4OLJR966cVvU26N7X9TWxl+Jsw7dzAqKT6g==\"},\"engines\":{\"node\":\">=16\"}},\"json-schema-traverse@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==\"}},\"json5@2.2.3\":{\"resolution\":{\"integrity\":\"sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg==\"},\"engines\":{\"node\":\">=6\"},\"hasBin\":true},\"jsonc-parser@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-gfFQZrcTc8CnKXp6Y4/CBT3fTc0OVuDofpre4aEeEpSBPV5X5v4+Vmx+8snU7RLPrNHPKSgLxGo9YuQzz20o+w==\"}},\"jsonfile@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-m6F1R3z8jjlf2imQHS2Qez5sjKWQzbuuhuJ/FKYFRZvPE3PuHcSMVZzfsLhGVOkfd20obL5SWEBew5ShlquNxg==\"}},\"jsonfile@6.2.0\":{\"resolution\":{\"integrity\":\"sha512-FGuPw30AdOIUTRMC2OMRtQV+jkVj2cfPqSeWXv1NEAJ1qZ5zb1X6z1mFhbfOB/iy3ssJCD+3KuZ8r8C3uVFlAg==\"}},\"jszip@3.10.1\":{\"resolution\":{\"integrity\":\"sha512-xXDvecyTpGLrqFrvkrUSoxxfJI5AH7U8zxxtVclpsUtMCq4JQ290LY8AW5c7Ggnr/Y/oK+bQMbqK2qmtk3pN4g==\"}},\"katex@0.16.22\":{\"resolution\":{\"integrity\":\"sha512-XCHRdUw4lf3SKBaJe4EvgqIuWwkPSo
9XoeO8GjQW94Bp7TWv9hNhzZjZ+OH9yf1UmLygb7DIT5GSFQiyt16zYg==\"},\"hasBin\":true},\"khroma@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-Ls993zuzfayK269Svk9hzpeGUKob/sIgZzyHYdjQoAdQetRKpOLj+k/QQQ/6Qi0Yz65mlROrfd+Ev+1+7dz9Kw==\"}},\"kleur@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-o+NO+8WrRiQEE4/7nwRJhN1HWpVmJm511pBHUxPLtp0BUISzlBplORYSmTclCnJvQq2tKu/sgl3xVpkc7ZWuQQ==\"},\"engines\":{\"node\":\">=6\"}},\"kolorist@1.8.0\":{\"resolution\":{\"integrity\":\"sha512-Y+60/zizpJ3HRH8DCss+q95yr6145JXZo46OTpFvDZWLfRCE4qChOyk1b26nMaNpfHHgxagk9dXT5OP0Tfe+dQ==\"}},\"kysely@0.28.7\":{\"resolution\":{\"integrity\":\"sha512-u/cAuTL4DRIiO2/g4vNGRgklEKNIj5Q3CG7RoUB5DV5SfEC2hMvPxKi0GWPmnzwL2ryIeud2VTcEEmqzTzEPNw==\"},\"engines\":{\"node\":\">=20.0.0\"}},\"langium@3.3.1\":{\"resolution\":{\"integrity\":\"sha512-QJv/h939gDpvT+9SiLVlY7tZC3xB2qK57v0J04Sh9wpMb6MP1q8gB21L3WIo8T5P1MSMg3Ep14L7KkDCFG3y4w==\"},\"engines\":{\"node\":\">=16.0.0\"}},\"layout-base@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-8h2oVEZNktL4BH2JCOI90iD1yXwL6iNW7KcCKT2QZgQJR2vbqDsldCTPRU9NifTCqHZci57XvQQ15YTu+sTYPg==\"}},\"layout-base@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-dp3s92+uNI1hWIpPGH3jK2kxE2lMjdXdr+DH8ynZHpd6PUlH6x6cbuXnoMmiNumznqaNO31xu9e79F0uuZ0JFg==\"}},\"lazystream@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-b94GiNHQNy6JNTrt5w6zNyffMrNkXZb3KTkCZJb2V1xaEGCk093vkZ2jk3tpaeP33/OiXC+WvK9AxUebnf5nbw==\"},\"engines\":{\"node\":\">= 0.6.3\"}},\"lie@3.3.0\":{\"resolution\":{\"integrity\":\"sha512-UaiMJzeWRlEujzAuw5LokY1L5ecNQYZKfmyZ9L7wDHb/p5etKaxXhohBcrw0EYby+G/NA52vRSN4N39dxHAIwQ==\"}},\"lightningcss-android-arm64@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-YK7/ClTt4kAK0vo6w3X+Pnm0D2cf2vPHbhOXdoNti1Ga0al1P4TBZhwjATvjNwLEBCnKvjJc2jQgHXH0NEwlAg==\"},\"engines\":{\"node\":\">= 
12.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"android\"]},\"lightningcss-darwin-arm64@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-RzeG9Ju5bag2Bv1/lwlVJvBE3q6TtXskdZLLCyfg5pt+HLz9BqlICO7LZM7VHNTTn/5PRhHFBSjk5lc4cmscPQ==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"lightningcss-darwin-x64@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-U+QsBp2m/s2wqpUYT/6wnlagdZbtZdndSmut/NJqlCcMLTWp5muCrID+K5UJ6jqD2BFshejCYXniPDbNh73V8w==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"lightningcss-freebsd-x64@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-JCTigedEksZk3tHTTthnMdVfGf61Fky8Ji2E4YjUTEQX14xiy/lTzXnu1vwiZe3bYe0q+SpsSH/CTeDXK6WHig==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"lightningcss-linux-arm-gnueabihf@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-x6rnnpRa2GL0zQOkt6rts3YDPzduLpWvwAF6EMhXFVZXD4tPrBkEFqzGowzCsIWsPjqSK+tyNEODUBXeeVHSkw==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"lightningcss-linux-arm64-gnu@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-0nnMyoyOLRJXfbMOilaSRcLH3Jw5z9HDNGfT/gwCPgaDjnx0i8w7vBzFLFR1f6CMLKF8gVbebmkUN3fa/kQJpQ==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"lightningcss-linux-arm64-musl@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-UpQkoenr4UJEzgVIYpI80lDFvRmPVg6oqboNHfoH4CQIfNA+HOrZ7Mo7KZP02dC6LjghPQJeBsvXhJod/wnIBg==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"lightningcss-linux-x64-gnu@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-V7Qr52IhZmdKPVr+Vtw8o+WLsQJYCTd8loIfpDaMRWGUZfBOYEJeyJIkqGIDMZPwPx24pUMfwSxxI8phr/MbOA==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"lightningcss-linux-x64-musl@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-bYcLp+Vb0awsiXg/80uCRezCYHNg1/l3mt0gzHnWV9XP1W5sKa5/TCdGWaR/zBM2PeF/HbsQv/j2URNOiVuxWg==\"},\"engines\":{\"node\":\">= 
12.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"lightningcss-win32-arm64-msvc@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-8SbC8BR40pS6baCM8sbtYDSwEVQd4JlFTOlaD3gWGHfThTcABnNDBda6eTZeqbofalIJhFx0qKzgHJmcPTnGdw==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"lightningcss-win32-x64-msvc@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-Amq9B/SoZYdDi1kFrojnoqPLxYhQ4Wo5XiL8EVJrVsB8ARoC1PWW6VGtT0WKCemjy8aC+louJnjS7U18x3b06Q==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"lightningcss@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-NXYBzinNrblfraPGyrbPoD19C1h9lfI/1mzgWYvXUTe414Gz/X1FD2XBZSZM7rRTrMA8JL3OtAaGifrIKhQ5yQ==\"},\"engines\":{\"node\":\">= 12.0.0\"}},\"lines-and-columns@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-cNOjgCnLB+FnvWWtyRTzmB3POJ+cXxTA81LoW7u8JdmhfXzriropYwpjShnz1QLLWsQwY7nIxoDmcPTwphDK9w==\"},\"engines\":{\"node\":\"^12.20.0 || ^14.13.1 || >=16.0.0\"}},\"loader-runner@4.3.2\":{\"resolution\":{\"integrity\":\"sha512-DFEqQ3ihfS9blba08cLfYf1NRAIEm+dDjic073DRDc3/JspI/8wYmtDsHwd3+4hwvdxSK7PGaElfTmm0awWJ4w==\"},\"engines\":{\"node\":\">=6.11.5\"}},\"local-pkg@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-arhlxbFRmoQHl33a0Zkle/YWlmNwoyt6QNZEIJcqNbdrsix5Lvc4HyyI3EnwxTYlZYc32EbYrQ8SzEZ7dqgg9A==\"},\"engines\":{\"node\":\">=14\"}},\"locate-app@2.5.0\":{\"resolution\":{\"integrity\":\"sha512-xIqbzPMBYArJRmPGUZD9CzV9wOqmVtQnaAn3wrj3s6WYW0bQvPI7x+sPYUGmDTYMHefVK//zc6HEYZ1qnxIK+Q==\"}},\"locate-path@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==\"},\"engines\":{\"node\":\">=8\"}},\"lodash-es@4.17.21\":{\"resolution\":{\"integrity\":\"sha512-mKnC+QJ9pWVzv+C4/U3rRsHapFfHvQFoFB92e52xeyGMcX6/OlIl78je1u8vePzYZSkkogMPJ2yjxxsb89cxyw==\"}},\"lodash.clonedeep@4.5.0\":{\"resolution\":{\"integrity\":\"sha512-H5ZhCF25riFd9uB5UCkVKo61m3S/xZk1x4wA6yp/L3RFP6Z/eHH1ymQcGLo7J3GMPfm0V/7m1tryHuGVxpqEBQ==\"}},\"lodash
.startcase@4.4.0\":{\"resolution\":{\"integrity\":\"sha512-+WKqsK294HMSc2jEbNgpHpd0JfIBhp7rEV4aqXWqFr6AlXov+SlcgB1Fv01y2kGe3Gc8nMW7VA0SrGuSkRfIEg==\"}},\"lodash.zip@4.2.0\":{\"resolution\":{\"integrity\":\"sha512-C7IOaBBK/0gMORRBd8OETNx3kmOkgIWIPvyDpZSCTwUrpYmgZwJkjZeOD8ww4xbOUOs4/attY+pciKvadNfFbg==\"}},\"lodash@4.18.1\":{\"resolution\":{\"integrity\":\"sha512-dMInicTPVE8d1e5otfwmmjlxkZoUpiVLwyeTdUsi/Caj/gfzzblBcCE5sRHV/AsjuCmxWrte2TNGSYuCeCq+0Q==\"}},\"log-symbols@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-8XPvpAA8uyhfteu8pIvQxpJZ7SYYdpUivZpGy6sFsBuKRY/7rQGavedeB8aK+Zkyq6upMFVL/9AW6vOYzfRyLg==\"},\"engines\":{\"node\":\">=10\"}},\"loglevel-plugin-prefix@0.8.4\":{\"resolution\":{\"integrity\":\"sha512-WpG9CcFAOjz/FtNht+QJeGpvVl/cdR6P0z6OcXSkr8wFJOsV2GRj2j10JLfjuA4aYkcKCNIEqRGCyTife9R8/g==\"}},\"loglevel@1.9.2\":{\"resolution\":{\"integrity\":\"sha512-HgMmCqIJSAKqo68l0rS2AanEWfkxaZ5wNiEFb5ggm08lDs9Xl2KxBlX3PTcaD2chBM1gXAYf491/M2Rv8Jwayg==\"},\"engines\":{\"node\":\">= 0.6.0\"}},\"long@5.3.2\":{\"resolution\":{\"integrity\":\"sha512-mNAgZ1GmyNhD7AuqnTG3/VQ26o760+ZYBPKjPvugO8+nLbYfX6TVpJPseBvopbdY+qpZ/lKUnmEc1LeZYS3QAA==\"}},\"longest-streak@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-9Ri+o0JYgehTaVBBDoMqIl8GXtbWg711O3srftcHhZ0dqnETqLaoIK0x17fUw9rFSlK/0NlsKe0Ahhyl5pXE2g==\"}},\"loupe@3.2.1\":{\"resolution\":{\"integrity\":\"sha512-CdzqowRJCeLU72bHvWqwRBBlLcMEtIvGrlvef74kMnV2AolS9Y8xUv1I0U/MNAWMhBlKIoyuEgoJ0t/bbwHbLQ==\"}},\"lowlight@3.3.0\":{\"resolution\":{\"integrity\":\"sha512-0JNhgFoPvP6U6lE/UdVsSq99tn6DhjjpAj5MxG49ewd2mOBVtwWYIT8ClyABhq198aXXODMU6Ox8DrGy/CpTZQ==\"}},\"lru-cache@10.4.3\":{\"resolution\":{\"integrity\":\"sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ==\"}},\"lru-cache@11.2.4\":{\"resolution\":{\"integrity\":\"sha512-B5Y16Jr9LB9dHVkh6ZevG+vAbOsNOYCX+sXvFWFu7B3Iz5mijW3zdbMyhsh8ANd2mSWBYdJgnqi+mL7/LrOPYg==\"},\"engines\":{\"node\":\"20 || 
>=22\"}},\"lru-cache@5.1.1\":{\"resolution\":{\"integrity\":\"sha512-KpNARQA3Iwv+jTA0utUVVbrh+Jlrr1Fv0e56GGzAFOXN7dk/FviaDW8LHmK52DlcH4WP2n6gI8vN1aesBFgo9w==\"}},\"lru-cache@7.18.3\":{\"resolution\":{\"integrity\":\"sha512-jumlc0BIUrS3qJGgIkWZsyfAM7NCWiBcCDhnd+3NNM5KbBmLTgHVfWBcg6W+rLUsIpzpERPsvwUP7CckAQSOoA==\"},\"engines\":{\"node\":\">=12\"}},\"lucide-react@0.544.0\":{\"resolution\":{\"integrity\":\"sha512-t5tS44bqd825zAW45UQxpG2CvcC4urOwn2TrwSH8u+MjeE+1NnWl6QqeQ/6NdjMqdOygyiT9p3Ev0p1NJykxjw==\"},\"peerDependencies\":{\"react\":\"^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0\"}},\"lz-string@1.5.0\":{\"resolution\":{\"integrity\":\"sha512-h5bgJWpxJNswbU7qCrV0tIKQCaS3blPDrqKWx+QxzuzL1zGUzij9XCWLrSLsJPu5t+eWA/ycetzYAO5IOMcWAQ==\"},\"hasBin\":true},\"magic-string@0.30.18\":{\"resolution\":{\"integrity\":\"sha512-yi8swmWbO17qHhwIBNeeZxTceJMeBvWJaId6dyvTSOwTipqeHhMhOrz6513r1sOKnpvQ7zkhlG8tPrpilwTxHQ==\"}},\"magic-string@0.30.21\":{\"resolution\":{\"integrity\":\"sha512-vd2F4YUyEXKGcLHoq+TEyCjxueSeHnFxyyjNp80yg0XV4vUhnDer/lvvlqM/arB5bXQN5K2/3oinyCRyx8T2CQ==\"}},\"magicast@0.3.5\":{\"resolution\":{\"integrity\":\"sha512-L0WhttDl+2BOsybvEOLK7fW3UA0OQ0IQ2d6Zl2x/a6vVRs3bAY0ECOSHHeL5jD+SbOpOCUEi0y1DgHEn9Qn1AQ==\"}},\"magicast@0.5.2\":{\"resolution\":{\"integrity\":\"sha512-E3ZJh4J3S9KfwdjZhe2afj6R9lGIN5Pher1pF39UGrXRqq/VDaGVIGN13BjHd2u8B61hArAGOnso7nBOouW3TQ==\"}},\"make-dir@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-hXdUTZYIVOt1Ex//jAQi+wTZZpUpwBj/0QsOzqegb3rGMMeJiSEu5xLHnYfBrRV4RH2+OCSOO95Is/7x1WJ4bw==\"},\"engines\":{\"node\":\">=10\"}},\"markdown-table@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-wiYz4+JrLyb/DqW2hkFJxP7Vd7JuTDm77fvbM8VfEQdmSMqcImWeeRbHwZjBjIFki/VaMK2BhFi7oUUZeM5bqw==\"}},\"marked@16.4.2\":{\"resolution\":{\"integrity\":\"sha512-TI3V8YYWvkVf3KJe1dRkpnjs68JUPyEa5vjKrp1XEEJUAOaQc+Qj+L1qWbPd0SJuAdQkFU0h73sXXqwDYxsiDA==\"},\"engines\":{\"node\":\">= 
20\"},\"hasBin\":true},\"math-intrinsics@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==\"},\"engines\":{\"node\":\">= 0.4\"}},\"mdast-util-find-and-replace@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-Tmd1Vg/m3Xz43afeNxDIhWRtFZgM2VLyaf4vSTYwudTyeuTneoL3qtWMA5jeLyz/O1vDJmmV4QuScFCA2tBPwg==\"}},\"mdast-util-from-markdown@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-uZhTV/8NBuw0WHkPTrCqDOl0zVe1BIng5ZtHoDk49ME1qqcjYmmLmOf0gELgcRMxN4w2iuIeVso5/6QymSrgmA==\"}},\"mdast-util-frontmatter@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-LRqI9+wdgC25P0URIJY9vwocIzCcksduHQ9OF2joxQoyTNVduwLAFUzjoopuRJbJAReaKrNQKAZKL3uCMugWJA==\"}},\"mdast-util-gfm-autolink-literal@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-5HVP2MKaP6L+G6YaxPNjuL0BPrq9orG3TsrZ9YXbA3vDw/ACI4MEsnoDpn6ZNm7GnZgtAcONJyPhOP8tNJQavQ==\"}},\"mdast-util-gfm-footnote@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-5jOT2boTSVkMnQ7LTrd6n/18kqwjmuYqo7JUPe+tRCY6O7dAuTFMtTPauYYrMPpox9hlN0uOx/FL8XvEfG9/mQ==\"}},\"mdast-util-gfm-strikethrough@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-mKKb915TF+OC5ptj5bJ7WFRPdYtuHv0yTRxK2tJvi+BDqbkiG7h7u/9SI89nRAYcmap2xHQL9D+QG/6wSrTtXg==\"}},\"mdast-util-gfm-table@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-78UEvebzz/rJIxLvE7ZtDd/vIQ0RHv+3Mh5DR96p7cS7HsBhYIICDBCu8csTNWNO6tBWfqXPWekRuj2FNOGOZg==\"}},\"mdast-util-gfm-task-list-item@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-IrtvNvjxC1o06taBAVJznEnkiHxLFTzgonUdy8hzFVeDun0uTjxxrRGVaNFqkU1wJR3RBPEfsxmU6jDWPofrTQ==\"}},\"mdast-util-gfm@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-dgQEX5Amaq+DuUqf26jJqSK9qgixgd6rYDHAv4aTBuA92cTknZlKpPfa86Z/s8Dj8xsAQpFfBmPUHWJBWqS4Bw==\"}},\"mdast-util-phrasing@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-TqICwyvJJpBwvGAMZjj4J2n0X8QWp21b9l0o7eXyVJ25YNWYbJDVIyD1bZXE6WtV6RmKJVYmQAKWa0zWOABz2w==\"}},\"mdast-util-to-hast@13.2.0\":{\"resolution\":{\"integrity\":\"sha512-QGYKEuUsYT9ykKBCMOEDLsU5JRObWQusAolFM
eko/tYPufNkRffBAQjIE+99jbA87xv6FgmjLtwjh9wBWajwAA==\"}},\"mdast-util-to-markdown@2.1.2\":{\"resolution\":{\"integrity\":\"sha512-xj68wMTvGXVOKonmog6LwyJKrYXZPvlwabaryTjLh9LuvovB/KAH+kvi8Gjj+7rJjsFi23nkUxRQv1KqSroMqA==\"}},\"mdast-util-to-string@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-0H44vDimn51F0YwvxSJSm0eCDOJTRlmN0R1yBh4HLj9wiV1Dn0QoXGbvFAWj2hSItVTlCmBF1hqKlIyUBVFLPg==\"}},\"mdn-data@2.12.2\":{\"resolution\":{\"integrity\":\"sha512-IEn+pegP1aManZuckezWCO+XZQDplx1366JoVhTpMpBB1sPey/SbveZQUosKiKiGYjg1wH4pMlNgXbCiYgihQA==\"}},\"merge-stream@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==\"}},\"merge2@1.4.1\":{\"resolution\":{\"integrity\":\"sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==\"},\"engines\":{\"node\":\">= 8\"}},\"mermaid@11.12.1\":{\"resolution\":{\"integrity\":\"sha512-UlIZrRariB11TY1RtTgUWp65tphtBv4CSq7vyS2ZZ2TgoMjs2nloq+wFqxiwcxlhHUvs7DPGgMjs2aeQxz5h9g==\"}},\"micromark-core-commonmark@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-FKjQKbxd1cibWMM1P9N+H8TwlgGgSkWZMmfuVucLCHaYqeSvJ0hFeHsIa65pA2nYbes0f8LDHPMrd9X7Ujxg9w==\"}},\"micromark-extension-frontmatter@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-C4AkuM3dA58cgZha7zVnuVxBhDsbttIMiytjgsM2XbHAB2faRVaHRle40558FBN+DJcrLNCoqG5mlrpdU4cRtg==\"}},\"micromark-extension-gfm-autolink-literal@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-oOg7knzhicgQ3t4QCjCWgTmfNhvQbDDnJeVu9v81r7NltNCVmhPy1fJRX27pISafdjL+SVc4d3l48Gb6pbRypw==\"}},\"micromark-extension-gfm-footnote@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-/yPhxI1ntnDNsiHtzLKYnE3vf9JZ6cAisqVDauhp4CEHxlb4uoOTxOCJ+9s51bIB8U1N1FJ1RXOKTIlD5B/gqw==\"}},\"micromark-extension-gfm-strikethrough@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-ADVjpOOkjz1hhkZLlBiYA9cR2Anf8F4HqZUO6e5eDcPQd0Txw5fxLzzxnEkSkfnD0wziSGiv7sYhk/ktvbf1uw==\"}},\"micromark-extension-gfm-table@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-t2OU/
dXXioARrC6yWfJ4hqB7rct14e8f7m0cbI5hUmDyyIlwv5vEtooptH8INkbLzOatzKuVbQmAYcbWoyz6Dg==\"}},\"micromark-extension-gfm-tagfilter@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-xHlTOmuCSotIA8TW1mDIM6X2O1SiX5P9IuDtqGonFhEK0qgRI4yeC6vMxEV2dgyr2TiD+2PQ10o+cOhdVAcwfg==\"}},\"micromark-extension-gfm-task-list-item@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-qIBZhqxqI6fjLDYFTBIa4eivDMnP+OZqsNwmQ3xNLE4Cxwc+zfQEfbs6tzAo2Hjq+bh6q5F+Z8/cksrLFYWQQw==\"}},\"micromark-extension-gfm@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-vsKArQsicm7t0z2GugkCKtZehqUm31oeGBV/KVSorWSy8ZlNAv7ytjFhvaryUiCUJYqs+NoE6AFhpQvBTM6Q4w==\"}},\"micromark-factory-destination@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-Xe6rDdJlkmbFRExpTOmRj9N3MaWmbAgdpSrBQvCFqhezUn4AHqJHbaEnfbVYYiexVSs//tqOdY/DxhjdCiJnIA==\"}},\"micromark-factory-label@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-VFMekyQExqIW7xIChcXn4ok29YE3rnuyveW3wZQWWqF4Nv9Wk5rgJ99KzPvHjkmPXF93FXIbBp6YdW3t71/7Vg==\"}},\"micromark-factory-space@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-zRkxjtBxxLd2Sc0d+fbnEunsTj46SWXgXciZmHq0kDYGnck/ZSGj9/wULTV95uoeYiK5hRXP2mJ98Uo4cq/LQg==\"}},\"micromark-factory-title@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-5bZ+3CjhAd9eChYTHsjy6TGxpOFSKgKKJPJxr293jTbfry2KDoWkhBb6TcPVB4NmzaPhMs1Frm9AZH7OD4Cjzw==\"}},\"micromark-factory-whitespace@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-Ob0nuZ3PKt/n0hORHyvoD9uZhr+Za8sFoP+OnMcnWK5lngSzALgQYKMr9RJVOWLqQYuyn6ulqGWSXdwf6F80lQ==\"}},\"micromark-util-character@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-wv8tdUTJ3thSFFFJKtpYKOYiGP2+v96Hvk4Tu8KpCAsTMs6yi+nVmGh1syvSCsaxz45J6Jbw+9DD6g97+NV67Q==\"}},\"micromark-util-chunked@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-QUNFEOPELfmvv+4xiNg2sRYeS/P84pTW0TCgP5zc9FpXetHY0ab7SxKyAQCNCc1eK0459uoLI1y5oO5Vc1dbhA==\"}},\"micromark-util-classify-character@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-K0kHzM6afW/MbeWYWLjoHQv1sgg2Q9EccHEDzSkxiP/EaagNzCm7T/WMKZ3rjMbvIpvBiZgwR3dKMygtA4mG1Q==\"}},\"micromark-util-combine-exte
nsions@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-OnAnH8Ujmy59JcyZw8JSbK9cGpdVY44NKgSM7E9Eh7DiLS2E9RNQf0dONaGDzEG9yjEl5hcqeIsj4hfRkLH/Bg==\"}},\"micromark-util-decode-numeric-character-reference@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-ccUbYk6CwVdkmCQMyr64dXz42EfHGkPQlBj5p7YVGzq8I7CtjXZJrubAYezf7Rp+bjPseiROqe7G6foFd+lEuw==\"}},\"micromark-util-decode-string@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-nDV/77Fj6eH1ynwscYTOsbK7rR//Uj0bZXBwJZRfaLEJ1iGBR6kIfNmlNqaqJf649EP0F3NWNdeJi03elllNUQ==\"}},\"micromark-util-encode@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-c3cVx2y4KqUnwopcO9b/SCdo2O67LwJJ/UyqGfbigahfegL9myoEFoDYZgkT7f36T0bLrM9hZTAaAyH+PCAXjw==\"}},\"micromark-util-html-tag-name@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-2cNEiYDhCWKI+Gs9T0Tiysk136SnR13hhO8yW6BGNyhOC4qYFnwF1nKfD3HFAIXA5c45RrIG1ub11GiXeYd1xA==\"}},\"micromark-util-normalize-identifier@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-sxPqmo70LyARJs0w2UclACPUUEqltCkJ6PhKdMIDuJ3gSf/Q+/GIe3WKl0Ijb/GyH9lOpUkRAO2wp0GVkLvS9Q==\"}},\"micromark-util-resolve-all@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-VdQyxFWFT2/FGJgwQnJYbe1jjQoNTS4RjglmSjTUlpUMa95Htx9NHeYW4rGDJzbjvCsl9eLjMQwGeElsqmzcHg==\"}},\"micromark-util-sanitize-uri@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-9N9IomZ/YuGGZZmQec1MbgxtlgougxTodVwDzzEouPKo3qFWvymFHWcnDi2vzV1ff6kas9ucW+o3yzJK9YB1AQ==\"}},\"micromark-util-subtokenize@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-VXJJuNxYWSoYL6AJ6OQECCFGhIU2GGHMw8tahogePBrjkG8aCCas3ibkp7RnVOSTClg2is05/R7maAhF1XyQMg==\"}},\"micromark-util-symbol@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-vs5t8Apaud9N28kgCrRUdEed4UJ+wWNvicHLPxCa9ENlYuAY31M0ETy5y1vA33YoNPDFTghEbnh6efaE8h4x0Q==\"}},\"micromark-util-types@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-534m2WhVTddrcKVepwmVEVnUAmtrx9bfIjNoQHRqfnvdaHQiFytEhJoTgpWJvDEXCO5gLTQh3wYC1PgOJA4NSQ==\"}},\"micromark@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-eBPdkcoCNvYcxQOAKAlceo5SNdzZWfF+FcSupREAzdAh9rRmE239CEQAiTwIgblwnoM8zzj
35sZ5ZwvSEOF6Kw==\"}},\"micromatch@4.0.8\":{\"resolution\":{\"integrity\":\"sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA==\"},\"engines\":{\"node\":\">=8.6\"}},\"mime-db@1.52.0\":{\"resolution\":{\"integrity\":\"sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==\"},\"engines\":{\"node\":\">= 0.6\"}},\"mime-types@2.1.35\":{\"resolution\":{\"integrity\":\"sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==\"},\"engines\":{\"node\":\">= 0.6\"}},\"mimic-fn@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg==\"},\"engines\":{\"node\":\">=6\"}},\"mimic-response@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-z0yWI+4FDrrweS8Zmt4Ej5HdJmky15+L2e6Wgn3+iK5fWzb6T3fhNFq2+MeTRb064c6Wr4N/wv0DzQTjNzHNGQ==\"},\"engines\":{\"node\":\">=10\"}},\"miniflare@4.20260504.0\":{\"resolution\":{\"integrity\":\"sha512-HeI/HLx+rbeo/UB4qb6NsNcFdUVD7xDzyCexZJTVtFMlfpfexUKEDmdeTRRpzeHrJseZFGua+v9JO1kfPublUw==\"},\"engines\":{\"node\":\">=22.0.0\"},\"hasBin\":true},\"minimatch@5.1.9\":{\"resolution\":{\"integrity\":\"sha512-7o1wEA2RyMP7Iu7GNba9vc0RWWGACJOCZBJX2GJWip0ikV+wcOsgVuY9uE8CPiyQhkGFSlhuSkZPavN7u1c2Fw==\"},\"engines\":{\"node\":\">=10\"}},\"minimatch@9.0.3\":{\"resolution\":{\"integrity\":\"sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg==\"},\"engines\":{\"node\":\">=16 || 14 >=14.17\"}},\"minimatch@9.0.5\":{\"resolution\":{\"integrity\":\"sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow==\"},\"engines\":{\"node\":\">=16 || 14 >=14.17\"}},\"minimatch@9.0.9\":{\"resolution\":{\"integrity\":\"sha512-OBwBN9AL4dqmETlpS2zasx+vTeWclWzkblfZk7KTA5j3jeOONz/tRCnZomUyvNg83wL5Zv9Ss6HMJXAgL8R2Yg==\"},\"engines\":{\"node\":\">=16 || 14 
>=14.17\"}},\"minimist@1.2.8\":{\"resolution\":{\"integrity\":\"sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA==\"}},\"minipass@3.3.6\":{\"resolution\":{\"integrity\":\"sha512-DxiNidxSEK+tHG6zOIklvNOwm3hvCrbUrdtzY74U6HKTJxvIDfOUL5W5P2Ghd3DTkhhKPYGqeNUIh5qcM4YBfw==\"},\"engines\":{\"node\":\">=8\"}},\"minipass@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-3FnjYuehv9k6ovOEbyOswadCDPX1piCfhV8ncmYtHOjuPwylVWsghTLo7rabjC3Rx5xD4HDx8Wm1xnMF7S5qFQ==\"},\"engines\":{\"node\":\">=8\"}},\"minipass@7.1.2\":{\"resolution\":{\"integrity\":\"sha512-qOOzS1cBTWYF4BH8fVePDBOO9iptMnGUEZwNc/cMWnTV2nVLZ7VoNWEPHkYczZA0pdoA7dl6e7FL659nX9S2aw==\"},\"engines\":{\"node\":\">=16 || 14 >=14.17\"}},\"minipass@7.1.3\":{\"resolution\":{\"integrity\":\"sha512-tEBHqDnIoM/1rXME1zgka9g6Q2lcoCkxHLuc7ODJ5BxbP5d4c2Z5cGgtXAku59200Cx7diuHTOYfSBD8n6mm8A==\"},\"engines\":{\"node\":\">=16 || 14 >=14.17\"}},\"minizlib@2.1.2\":{\"resolution\":{\"integrity\":\"sha512-bAxsR8BVfj60DWXHE3u30oHzfl4G7khkSuPW+qvpd7jFRHm7dLxOjUk1EHACJ/hxLY8phGJ0YhYHZo7jil7Qdg==\"},\"engines\":{\"node\":\">= 
8\"}},\"mkdirp-classic@0.5.3\":{\"resolution\":{\"integrity\":\"sha512-gKLcREMhtuZRwRAfqP3RFW+TK4JqApVBtOIftVgjuABpAtpxhPGaDcfvbhNvD0B8iD1oUr/txX35NjcaY6Ns/A==\"}},\"mkdirp@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw==\"},\"engines\":{\"node\":\">=10\"},\"hasBin\":true},\"mlly@1.8.0\":{\"resolution\":{\"integrity\":\"sha512-l8D9ODSRWLe2KHJSifWGwBqpTZXIXTeo8mlKjY+E2HAakaTeNpqAyBZ8GSqLzHgw4XmHmC8whvpjJNMbFZN7/g==\"}},\"mri@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-tzzskb3bG8LvYGFF/mDTpq3jpI6Q9wc3LEmBaghu+DdCssd1FakN7Bc0hVNmEyGq1bq3RgfkCb3cmQLpNPOroA==\"},\"engines\":{\"node\":\">=4\"}},\"mrmime@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-Y3wQdFg2Va6etvQ5I82yUhGdsKrcYox6p7FfL1LbK2J4V01F9TGlepTIhnK24t7koZibmg82KGglhA1XK5IsLQ==\"},\"engines\":{\"node\":\">=10\"}},\"ms@2.1.3\":{\"resolution\":{\"integrity\":\"sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==\"}},\"msw@2.10.2\":{\"resolution\":{\"integrity\":\"sha512-RCKM6IZseZQCWcSWlutdf590M8nVfRHG1ImwzOtwz8IYxgT4zhUO0rfTcTvDGiaFE0Rhcc+h43lcF3Jc9gFtwQ==\"},\"engines\":{\"node\":\">=18\"},\"hasBin\":true,\"peerDependencies\":{\"typescript\":\">= 4.8.x\"},\"peerDependenciesMeta\":{\"typescript\":{\"optional\":true}}},\"mute-stream@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-WWdIxpyjEn+FhQJQQv9aQAYlHoNVdzIzUySNV1gHUPDSdZJ3yZn7pAAbQcV7B56Mvu881q9FZV+0Vx2xC44VWA==\"},\"engines\":{\"node\":\"^18.17.0 || >=20.5.0\"}},\"nanoid@3.3.11\":{\"resolution\":{\"integrity\":\"sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w==\"},\"engines\":{\"node\":\"^10 || ^12 || ^13.7 || ^14 || 
>=15.0.1\"},\"hasBin\":true},\"napi-build-utils@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-GEbrYkbfF7MoNaoh2iGG84Mnf/WZfB0GdGEsM8wz7Expx/LlWf5U8t9nvJKXSp3qr5IsEbK04cBGhol/KwOsWA==\"}},\"neo-async@2.6.2\":{\"resolution\":{\"integrity\":\"sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw==\"}},\"netmask@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-eonl3sLUha+S1GzTPxychyhnUzKyeQkZ7jLjKrBagJgPla13F+uQ71HgpFefyHgqrjEbCPkDArxYsjY8/+gLKA==\"},\"engines\":{\"node\":\">= 0.4.0\"}},\"node-abi@3.89.0\":{\"resolution\":{\"integrity\":\"sha512-6u9UwL0HlAl21+agMN3YAMXcKByMqwGx+pq+P76vii5f7hTPtKDp08/H9py6DY+cfDw7kQNTGEj/rly3IgbNQA==\"},\"engines\":{\"node\":\">=10\"}},\"node-domexception@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ==\"},\"engines\":{\"node\":\">=10.5.0\"},\"deprecated\":\"Use your platform's native DOMException instead\"},\"node-fetch@3.3.2\":{\"resolution\":{\"integrity\":\"sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA==\"},\"engines\":{\"node\":\"^12.20.0 || ^14.13.1 || 
>=16.0.0\"}},\"node-machine-id@1.1.12\":{\"resolution\":{\"integrity\":\"sha512-QNABxbrPa3qEIfrE6GOJ7BYIuignnJw7iQ2YPbc3Nla1HzRJjXzZOiikfF8m7eAMfichLt3M4VgLOetqgDmgGQ==\"}},\"node-releases@2.0.19\":{\"resolution\":{\"integrity\":\"sha512-xxOWJsBKtzAq7DY0J+DTzuz58K8e7sJbdgwkbMWQe8UYB6ekmsQ45q0M/tJDsGaZmbC+l7n57UV8Hl5tHxO9uw==\"}},\"node-releases@2.0.38\":{\"resolution\":{\"integrity\":\"sha512-3qT/88Y3FbH/Kx4szpQQ4HzUbVrHPKTLVpVocKiLfoYvw9XSGOX2FmD2d6DrXbVYyAQTF2HeF6My8jmzx7/CRw==\"}},\"normalize-path@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"npm-run-path@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-S48WzZW777zhNIrn7gxOlISNAqi9ZC/uQFnRdbeIHhZhCA6UqpkOT8T1G7BvfdgP4Er8gF4sUbaS0i7QvIfCWw==\"},\"engines\":{\"node\":\">=8\"}},\"nth-check@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-lqjrjmaOoAnWfMmBPL+XNnynZh2+swxiX3WUE0s4yEHI6m+AwrK2UZOimIRl3X/4QctVqS8AiZjFqyOGrMXb/w==\"}},\"nwsapi@2.2.20\":{\"resolution\":{\"integrity\":\"sha512-/ieB+mDe4MrrKMT8z+mQL8klXydZWGR5Dowt4RAGKbJ3kIGEx3X4ljUo+6V73IXtUPWgfOlU5B9MlGxFO5T+cA==\"}},\"nx-cloud@19.1.0\":{\"resolution\":{\"integrity\":\"sha512-f24vd5/57/MFSXNMfkerdDiK0EvScGOKO71iOWgJNgI1xVweDRmOA/EfjnPMRd5m+pnoPs/4A7DzuwSW0jZVyw==\"},\"hasBin\":true},\"nx@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-nD8NjJGYk5wcqiATzlsLauvyrSHV2S2YmM2HBIKqTTwVP2sey07MF3wDB9U2BwxIjboahiITQ6pfqFgB79TF2A==\"},\"hasBin\":true,\"peerDependencies\":{\"@swc-node/register\":\"^1.8.0\",\"@swc/core\":\"^1.3.85\"},\"peerDependenciesMeta\":{\"@swc-node/register\":{\"optional\":true},\"@swc/core\":{\"optional\":true}}},\"obug@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-uTqF9MuPraAQ+IsnPf366RG4cP9RtUi7MLO1N3KEc+wb0a6yKpeL0lmk2IB1jY5KHPAlTc6T/JRdC/YqxHNwkQ==\"}},\"once@1.4.0\":{\"resolution\":{\"integrity\":\"sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==\"}},\"onetime@5
.1.2\":{\"resolution\":{\"integrity\":\"sha512-kbpaSSGJTWdAY5KPVeMOKXSrPtr8C8C7wodJbcsd51jRnmD+GZu8Y0VoU6Dm5Z4vWr0Ig/1NKuWRKf7j5aaYSg==\"},\"engines\":{\"node\":\">=6\"}},\"oniguruma-parser@0.12.1\":{\"resolution\":{\"integrity\":\"sha512-8Unqkvk1RYc6yq2WBYRj4hdnsAxVze8i7iPfQr8e4uSP3tRv0rpZcbGUDvxfQQcdwHt/e9PrMvGCsa8OqG9X3w==\"}},\"oniguruma-to-es@4.3.3\":{\"resolution\":{\"integrity\":\"sha512-rPiZhzC3wXwE59YQMRDodUwwT9FZ9nNBwQQfsd1wfdtlKEyCdRV0avrTcSZ5xlIvGRVPd/cx6ZN45ECmS39xvg==\"}},\"open@8.4.2\":{\"resolution\":{\"integrity\":\"sha512-7x81NCL719oNbsq/3mh+hVrAWmFuEYUqrq/Iw3kUzH8ReypT9QQ0BLoJS7/G9k6N81XjW4qHWtjWwe/9eLy1EQ==\"},\"engines\":{\"node\":\">=12\"}},\"ora@5.3.0\":{\"resolution\":{\"integrity\":\"sha512-zAKMgGXUim0Jyd6CXK9lraBnD3H5yPGBPPOkC23a2BG6hsm4Zu6OQSjQuEtV0BHDf4aKHcUFvJiGRrFuW3MG8g==\"},\"engines\":{\"node\":\">=10\"}},\"outdent@0.5.0\":{\"resolution\":{\"integrity\":\"sha512-/jHxFIzoMXdqPzTaCpFzAAWhpkSjZPF4Vsn6jAfNpmbH/ymsmd7Qc6VE9BGn0L6YMj6uwpQLxCECpus4ukKS9Q==\"}},\"outvariant@1.4.3\":{\"resolution\":{\"integrity\":\"sha512-+Sl2UErvtsoajRDKCE5/dBz4DIvHXQQnAxtQTF04OJxY0+DyZXSo5P5Bb7XYWOh81syohlYL24hbDwxedPUJCA==\"}},\"oxlint@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-KRpL+SMi07JQyggv5ldIF+wt2pnrKm8NLW0B+8bK+0HZsLmH9/qGA+qMWie5Vf7lnlMBllJmsuzHaKFEGY3rIA==\"},\"engines\":{\"node\":\"^20.19.0 || 
>=22.12.0\"},\"hasBin\":true,\"peerDependencies\":{\"oxlint-tsgolint\":\">=0.4.0\"},\"peerDependenciesMeta\":{\"oxlint-tsgolint\":{\"optional\":true}}},\"p-filter@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-ZBxxZ5sL2HghephhpGAQdoskxplTwr7ICaehZwLIlfL6acuVgZPm8yBNuRAFBGEqtD/hmUeq9eqLg2ys9Xr/yw==\"},\"engines\":{\"node\":\">=8\"}},\"p-limit@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==\"},\"engines\":{\"node\":\">=6\"}},\"p-locate@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==\"},\"engines\":{\"node\":\">=8\"}},\"p-map@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-y3b8Kpd8OAN444hxfBbFfj1FY/RjtTd8tzYwhUqNYXx0fXx2iX4maP4Qr6qhIKbQXI02wTLAda4fYUbDagTUFw==\"},\"engines\":{\"node\":\">=6\"}},\"p-map@7.0.4\":{\"resolution\":{\"integrity\":\"sha512-tkAQEw8ysMzmkhgw8k+1U/iPhWNhykKnSk4Rd5zLoPJCuJaGRPo6YposrZgaxHKzDHdDWWZvE/Sk7hsL2X/CpQ==\"},\"engines\":{\"node\":\">=18\"}},\"p-try@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ==\"},\"engines\":{\"node\":\">=6\"}},\"pac-proxy-agent@7.2.0\":{\"resolution\":{\"integrity\":\"sha512-TEB8ESquiLMc0lV8vcd5Ql/JAKAoyzHFXaStwjkzpOpC5Yv+pIzLfHvjTSdf3vpa2bMiUQrg9i6276yn8666aA==\"},\"engines\":{\"node\":\">= 14\"}},\"pac-resolver@7.0.1\":{\"resolution\":{\"integrity\":\"sha512-5NPgf87AT2STgwa2ntRMr45jTKrYBGkVU36yT0ig/n/GMAa3oPqhZfIQ2kMEimReg0+t9kZViDVZ83qfVUlckg==\"},\"engines\":{\"node\":\">= 
14\"}},\"package-json-from-dist@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw==\"}},\"package-manager-detector@0.2.11\":{\"resolution\":{\"integrity\":\"sha512-BEnLolu+yuz22S56CU1SUKq3XC3PkwD5wv4ikR4MfGvnRVcmzXR9DwSlW2fEamyTPyXHomBJRzgapeuBvRNzJQ==\"}},\"package-manager-detector@1.5.0\":{\"resolution\":{\"integrity\":\"sha512-uBj69dVlYe/+wxj8JOpr97XfsxH/eumMt6HqjNTmJDf/6NO9s+0uxeOneIz3AsPt2m6y9PqzDzd3ATcU17MNfw==\"}},\"pako@1.0.11\":{\"resolution\":{\"integrity\":\"sha512-4hLB8Py4zZce5s4yd9XzopqwVv/yGNhV1Bl8NTmCq1763HeK2+EwVTv+leGeL13Dnh2wfbqowVPXCIO0z4taYw==\"}},\"parse5-htmlparser2-tree-adapter@7.1.0\":{\"resolution\":{\"integrity\":\"sha512-ruw5xyKs6lrpo9x9rCZqZZnIUntICjQAd0Wsmp396Ul9lN/h+ifgVV1x1gZHi8euej6wTfpqX8j+BFQxF0NS/g==\"}},\"parse5-parser-stream@7.1.2\":{\"resolution\":{\"integrity\":\"sha512-JyeQc9iwFLn5TbvvqACIF/VXG6abODeB3Fwmv/TGdLk2LfbWkaySGY72at4+Ty7EkPZj854u4CrICqNk2qIbow==\"}},\"parse5@7.3.0\":{\"resolution\":{\"integrity\":\"sha512-IInvU7fabl34qmi9gY8XOVxhYyMyuH2xUNpb2q8/Y+7552KlejkRvqvD19nMoUW/uQGGbqNpA6Tufu5FL5BZgw==\"}},\"parse5@8.0.0\":{\"resolution\":{\"integrity\":\"sha512-9m4m5GSgXjL4AjumKzq1Fgfp3Z8rsvjRNbnkVwfu2ImRqE5D0LnY2QfDen18FSY9C573YU5XxSapdHZTZ2WolA==\"}},\"path-data-parser@0.1.0\":{\"resolution\":{\"integrity\":\"sha512-NOnmBpt5Y2RWbuv0LMzsayp3lVylAHLPUTut412ZA3l+C4uw4ZVkQbjShYCQ8TCpUMdPapr4YjUqLYD6v68j+w==\"}},\"path-exists@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==\"},\"engines\":{\"node\":\">=8\"}},\"path-key@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==\"},\"engines\":{\"node\":\">=8\"}},\"path-scurry@1.11.1\":{\"resolution\":{\"integrity\":\"sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA==\"},\"engines\":{\"nod
e\":\">=16 || 14 >=14.18\"}},\"path-to-regexp@6.3.0\":{\"resolution\":{\"integrity\":\"sha512-Yhpw4T9C6hPpgPeA28us07OJeqZ5EzQTkbfwuhsUg0c237RomFoETJgmp2sa3F/41gfLE6G5cqcYwznmeEeOlQ==\"}},\"path-type@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==\"},\"engines\":{\"node\":\">=8\"}},\"pathe@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w==\"}},\"pathval@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-//nshmD55c46FuFw26xV/xFAaB5HF9Xdap7HJBBnrKdAd6/GxDBaNA1870O79+9ueg61cZLSVc+OaFlfmObYVQ==\"},\"engines\":{\"node\":\">= 14.16\"}},\"pend@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-F3asv42UuXchdzt+xXqfW1OGlVBe+mxa2mqI0pg5yAHZPvFmY3Y6drSf/GQ1A86WgWEN9Kzh/WrgKa6iGcHXLg==\"}},\"picocolors@1.1.1\":{\"resolution\":{\"integrity\":\"sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==\"}},\"picomatch@2.3.1\":{\"resolution\":{\"integrity\":\"sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==\"},\"engines\":{\"node\":\">=8.6\"}},\"picomatch@4.0.3\":{\"resolution\":{\"integrity\":\"sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==\"},\"engines\":{\"node\":\">=12\"}},\"picomatch@4.0.4\":{\"resolution\":{\"integrity\":\"sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==\"},\"engines\":{\"node\":\">=12\"}},\"pify@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g==\"},\"engines\":{\"node\":\">=6\"}},\"pkg-types@1.3.1\":{\"resolution\":{\"integrity\":\"sha512-/Jm5M4RvtBFVkKWRu2BLUTNP8/M2a+UwuAX+ae4770q1qVGtfjG+WTCupoZixokjmHiry8uI+dlY8KXYV5HVVQ==\"}},\"pkg-types@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-SIqCzDRg0s9npO5XQ3tNZioRY1uK06lA41ynBC1YmFT
mnY6FjUjVt6s4LoADmwoig1qqD0oK8h1p/8mlMx8Oig==\"}},\"playwright-core@1.55.0\":{\"resolution\":{\"integrity\":\"sha512-GvZs4vU3U5ro2nZpeiwyb0zuFaqb9sUiAJuyrWpcGouD8y9/HLgGbNRjIph7zU9D3hnPaisMl9zG9CgFi/biIg==\"},\"engines\":{\"node\":\">=18\"},\"hasBin\":true},\"playwright@1.55.0\":{\"resolution\":{\"integrity\":\"sha512-sdCWStblvV1YU909Xqx0DhOjPZE4/5lJsIS84IfN9dAZfcl/CIZ5O8l3o0j7hPMjDvqoTF8ZUcc+i/GL5erstA==\"},\"engines\":{\"node\":\">=18\"},\"hasBin\":true},\"pngjs@7.0.0\":{\"resolution\":{\"integrity\":\"sha512-LKWqWJRhstyYo9pGvgor/ivk2w94eSjE3RGVuzLGlr3NmD8bf7RcYGze1mNdEHRP6TRP6rMuDHk5t44hnTRyow==\"},\"engines\":{\"node\":\">=14.19.0\"}},\"points-on-curve@0.2.0\":{\"resolution\":{\"integrity\":\"sha512-0mYKnYYe9ZcqMCWhUjItv/oHjvgEsfKvnUTg8sAtnHr3GVy7rGkXCb6d5cSyqrWqL4k81b9CPg3urd+T7aop3A==\"}},\"points-on-path@0.2.1\":{\"resolution\":{\"integrity\":\"sha512-25ClnWWuw7JbWZcgqY/gJ4FQWadKxGWk+3kR/7kD0tCaDtPPMj7oHu2ToLaVhfpnHrZzYby2w6tUA0eOIuUg8g==\"}},\"postcss@8.5.14\":{\"resolution\":{\"integrity\":\"sha512-SoSL4+OSEtR99LHFZQiJLkT59C5B1amGO1NzTwj7TT1qCUgUO6hxOvzkOYxD+vMrXBM3XJIKzokoERdqQq/Zmg==\"},\"engines\":{\"node\":\"^10 || ^12 || >=14\"}},\"postcss@8.5.6\":{\"resolution\":{\"integrity\":\"sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==\"},\"engines\":{\"node\":\"^10 || ^12 || >=14\"}},\"posthog-js@1.321.2\":{\"resolution\":{\"integrity\":\"sha512-h5852d9lYmSNjKWvjDkrmO9/awUU3jayNBEoEBUuMAdfDPc4yYYdxBJeDBxYnCFm6RjCLy4O+vmcwuCRC67EXA==\"}},\"preact@10.28.2\":{\"resolution\":{\"integrity\":\"sha512-lbteaWGzGHdlIuiJ0l2Jq454m6kcpI1zNje6d8MlGAFlYvP2GO4ibnat7P74Esfz4sPTdM6UxtTwh/d3pwM9JA==\"}},\"prebuild-install@7.1.3\":{\"resolution\":{\"integrity\":\"sha512-8Mf2cbV7x1cXPUILADGI3wuhfqWvtiLA1iclTDbFRZkgRQS0NqsPZphna9V+HyTEadheuPmjaJMsbzKQFOzLug==\"},\"engines\":{\"node\":\">=10\"},\"deprecated\":\"No longer maintained. 
Please contact the author of the relevant native addon; alternatives are available.\",\"hasBin\":true},\"prettier@2.8.8\":{\"resolution\":{\"integrity\":\"sha512-tdN8qQGvNjw4CHbY+XXk0JgCXn9QiF21a55rBe5LJAU+kDyC4WQn4+awm2Xfk2lQMk5fKup9XgzTZtGkjBdP9Q==\"},\"engines\":{\"node\":\">=10.13.0\"},\"hasBin\":true},\"prettier@3.6.2\":{\"resolution\":{\"integrity\":\"sha512-I7AIg5boAr5R0FFtJ6rCfD+LFsWHp81dolrFD8S79U9tb8Az2nGrJncnMSnys+bpQJfRUzqs9hnA81OAA3hCuQ==\"},\"engines\":{\"node\":\">=14\"},\"hasBin\":true},\"pretty-format@27.5.1\":{\"resolution\":{\"integrity\":\"sha512-Qb1gy5OrP5+zDf2Bvnzdl3jsTf1qXVMazbvCoKhtKqVs4/YK4ozX4gKQJJVyNe+cajNPn0KoC0MC3FUmaHWEmQ==\"},\"engines\":{\"node\":\"^10.13.0 || ^12.13.0 || ^14.15.0 || >=15.0.0\"}},\"pretty-format@30.0.5\":{\"resolution\":{\"integrity\":\"sha512-D1tKtYvByrBkFLe2wHJl2bwMJIiT8rW+XA+TiataH79/FszLQMrpGEvzUVkzPau7OCO0Qnrhpe87PqtOAIB8Yw==\"},\"engines\":{\"node\":\"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0\"}},\"process-nextick-args@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==\"}},\"process@0.11.10\":{\"resolution\":{\"integrity\":\"sha512-cdGef/drWFoydD1JsMzuFf8100nZl+GT+yacc2bEced5f9Rjk4z+WtFUTBu9PhOi9j/jfmBPu0mMEY4wIdAF8A==\"},\"engines\":{\"node\":\">= 
0.6.0\"}},\"progress@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-7PiHtLll5LdnKIMw100I+8xJXR5gW2QwWYkT6iJva0bXitZKa/XMrSbdmg3r2Xnaidz9Qumd0VPaMrZlF9V9sA==\"},\"engines\":{\"node\":\">=0.4.0\"}},\"property-information@6.5.0\":{\"resolution\":{\"integrity\":\"sha512-PgTgs/BlvHxOu8QuEN7wi5A0OmXaBcHpmCSTehcs6Uuu9IkDIEo13Hy7n898RHfrQ49vKCoGeWZSaAK01nwVig==\"}},\"property-information@7.1.0\":{\"resolution\":{\"integrity\":\"sha512-TwEZ+X+yCJmYfL7TPUOcvBZ4QfoT5YenQiJuX//0th53DE6w0xxLEtfK3iyryQFddXuvkIk51EEgrJQ0WJkOmQ==\"}},\"protobufjs@7.5.4\":{\"resolution\":{\"integrity\":\"sha512-CvexbZtbov6jW2eXAvLukXjXUW1TzFaivC46BpWc/3BpcCysb5Vffu+B3XHMm8lVEuy2Mm4XGex8hBSg1yapPg==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"proxy-agent@6.5.0\":{\"resolution\":{\"integrity\":\"sha512-TmatMXdr2KlRiA2CyDu8GqR8EjahTG3aY3nXjdzFyoZbmB8hrBsTyMezhULIXKnC0jpfjlmiZ3+EaCzoInSu/A==\"},\"engines\":{\"node\":\">= 14\"}},\"proxy-from-env@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg==\"}},\"psl@1.15.0\":{\"resolution\":{\"integrity\":\"sha512-JZd3gMVBAVQkSs6HdNZo9Sdo0LNcQeMNP3CozBJb3JYC/QUYZTnKxP+f8oWRX4rHP5EurWxqAHTSwUCjlNKa1w==\"}},\"pump@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-todwxLMY7/heScKmntwQG8CXVkWUOdYxIvY2s0VWAAMh/nd8SoYiRaKjlr7+iCs984f2P8zvrfWcDDYVb73NfA==\"}},\"pump@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-VS7sjc6KR7e1ukRFhQSY5LM2uBWAUPiOPa/A3mkKmiMwSmRFUITt0xuj+/lesgnCv+dPIEYlkzrcyXgquIHMcA==\"}},\"punycode@2.3.1\":{\"resolution\":{\"integrity\":\"sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg==\"},\"engines\":{\"node\":\">=6\"}},\"quansync@0.2.11\":{\"resolution\":{\"integrity\":\"sha512-AifT7QEbW9Nri4tAwR5M/uzpBuqfZf+zwaEM/QkzEjj7NBuFD2rBuy0K3dE+8wltbezDV7JMA0WfnCPYRSYbXA==\"}},\"query-selector-shadow-dom@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-lT5yCqEBgfoMYpf3F2xQRK7zEr1rhIIZuceDK6+xRkJQ4NMbHTwXqk4NkwDwQMNqXgG9r9fyHnzwNVs6zV5
KRw==\"}},\"querystringify@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-FIqgj2EUvTa7R50u0rGsyTftzjYmv/a3hO345bZNrqabNqjtgiDMgmo4mkUjd+nzU5oF3dClKqFIPUKybUyqoQ==\"}},\"queue-microtask@1.2.3\":{\"resolution\":{\"integrity\":\"sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==\"}},\"rc@1.2.8\":{\"resolution\":{\"integrity\":\"sha512-y3bGgqKj3QBdxLbLkomlohkvsA8gdAiUQlSBJnBhfn+BPxg4bc62d8TcBW15wavDfgexCgccckhcZvywyQYPOw==\"},\"hasBin\":true},\"react-dom@19.2.0\":{\"resolution\":{\"integrity\":\"sha512-UlbRu4cAiGaIewkPyiRGJk0imDN2T3JjieT6spoL2UeSf5od4n5LB/mQ4ejmxhCFT1tYe8IvaFulzynWovsEFQ==\"},\"peerDependencies\":{\"react\":\"^19.2.0\"}},\"react-is@17.0.2\":{\"resolution\":{\"integrity\":\"sha512-w2GsyukL62IJnlaff/nRegPQR94C/XXamvMWmSHRJ4y7Ts/4ocGRmTHvOs8PSE6pB3dWOrD/nueuU5sduBsQ4w==\"}},\"react-is@18.3.1\":{\"resolution\":{\"integrity\":\"sha512-/LLMVyas0ljjAtoYiPqYiL8VWXzUUdThrmU5+n20DZv+a+ClRoevUzw5JxU+Ieh5/c87ytoTBV9G1FiKfNJdmg==\"}},\"react@19.2.0\":{\"resolution\":{\"integrity\":\"sha512-tmbWg6W31tQLeB5cdIBOicJDJRR2KzXsV7uSK9iNfLWQ5bIZfxuPEHp7M8wiHyHnn0DD1i7w3Zmin0FtkrwoCQ==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"read-yaml-file@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-VIMnQi/Z4HT2Fxuwg5KrY174U1VdUIASQVWXXyqtNRtxSr9IYkn1rsI6Tb6HsrHCmB7gVpNwX6JxPTHcH6IoTA==\"},\"engines\":{\"node\":\">=6\"}},\"readable-stream@2.3.8\":{\"resolution\":{\"integrity\":\"sha512-8p0AUk4XODgIewSi0l8Epjs+EVnWiK7NoDIEGU0HhE7+ZyY8D1IMY7odu5lRrFXGg71L15KG8QrPmum45RTtdA==\"}},\"readable-stream@3.6.2\":{\"resolution\":{\"integrity\":\"sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA==\"},\"engines\":{\"node\":\">= 6\"}},\"readable-stream@4.7.0\":{\"resolution\":{\"integrity\":\"sha512-oIGGmcpTLwPga8Bn6/Z75SVaH1z5dUut2ibSyAMVhmUggWpmDn2dapB0n7f8nwaSiRtepAsfJyfXIO5DCVAODg==\"},\"engines\":{\"node\":\"^12.22.0 || ^14.17.0 || 
>=16.0.0\"}},\"readdir-glob@1.1.3\":{\"resolution\":{\"integrity\":\"sha512-v05I2k7xN8zXvPD9N+z/uhXPaj0sUFCe2rcWZIpBsqxfP7xXFQ0tipAd/wjj1YxWyWtUS5IDJpOG82JKt2EAVA==\"}},\"readdirp@3.6.0\":{\"resolution\":{\"integrity\":\"sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA==\"},\"engines\":{\"node\":\">=8.10.0\"}},\"regex-recursion@6.0.2\":{\"resolution\":{\"integrity\":\"sha512-0YCaSCq2VRIebiaUviZNs0cBz1kg5kVS2UKUfNIx8YVs1cN3AV7NTctO5FOKBA+UT2BPJIWZauYHPqJODG50cg==\"}},\"regex-utilities@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-8VhliFJAWRaUiVvREIiW2NXXTmHs4vMNnSzuJVhscgmGav3g9VDxLrQndI3dZZVVdp0ZO/5v0xmX516/7M9cng==\"}},\"regex@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-uorlqlzAKjKQZ5P+kTJr3eeJGSVroLKoHmquUj4zHWuR+hEyNqlXsSKlYYF5F4NI6nl7tWCs0apKJ0lmfsXAPA==\"}},\"rehype-autolink-headings@7.1.0\":{\"resolution\":{\"integrity\":\"sha512-rItO/pSdvnvsP4QRB1pmPiNHUskikqtPojZKJPPPAVx9Hj8i8TwMBhofrrAYRhYOOBZH9tgmG5lPqDLuIWPWmw==\"}},\"rehype-highlight@7.0.2\":{\"resolution\":{\"integrity\":\"sha512-k158pK7wdC2qL3M5NcZROZ2tR/l7zOzjxXd5VGdcfIyoijjQqpHd3JKtYSBDpDZ38UI2WJWuFAtkMDxmx5kstA==\"}},\"rehype-minify-whitespace@6.0.2\":{\"resolution\":{\"integrity\":\"sha512-Zk0pyQ06A3Lyxhe9vGtOtzz3Z0+qZ5+7icZ/PL/2x1SHPbKao5oB/g/rlc6BCTajqBb33JcOe71Ye1oFsuYbnw==\"}},\"rehype-parse@9.0.1\":{\"resolution\":{\"integrity\":\"sha512-ksCzCD0Fgfh7trPDxr2rSylbwq9iYDkSn8TCDmEJ49ljEUBxDVCzCHv7QNzZOfODanX4+bWQ4WZqLCRWYLfhag==\"}},\"rehype-raw@7.0.0\":{\"resolution\":{\"integrity\":\"sha512-/aE8hCfKlQeA8LmyeyQvQF3eBiLRGNlfBJEvWH7ivp9sBqs7TNqBL5X3v157rM4IFETqDnIOO+z5M/biZbo9Ww==\"}},\"rehype-remark@10.0.1\":{\"resolution\":{\"integrity\":\"sha512-EmDndlb5NVwXGfUa4c9GPK+lXeItTilLhE6ADSaQuHr4JUlKw9MidzGzx4HpqZrNCt6vnHmEifXQiiA+CEnjYQ==\"}},\"rehype-sanitize@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-CsnhKNsyI8Tub6L4sm5ZFsme4puGfc6pYylvXo1AeqaGbjOYyzNv3qZPwvs0oMJ39eryyeOdmxwUIo94IpEhqg==\"}},\"rehype-slug@6.0.0\":{\"resolution\":{\"integrity\":
\"sha512-lWyvf/jwu+oS5+hL5eClVd3hNdmwM1kAC0BUvEGD19pajQMIzcNUd/k9GsfQ+FfECvX+JE+e9/btsKH0EjJT6A==\"}},\"rehype-stringify@10.0.1\":{\"resolution\":{\"integrity\":\"sha512-k9ecfXHmIPuFVI61B9DeLPN0qFHfawM6RsuX48hoqlaKSF61RskNjSm1lI8PhBEM0MRdLxVVm4WmTqJQccH9mA==\"}},\"remark-frontmatter@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-XTFYvNASMe5iPN0719nPrdItC9aU0ssC4v14mH1BCi1u0n1gAocqcujWUrByftZTbLhRtiKRyjYTSIOcr69UVQ==\"}},\"remark-gfm@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-1quofZ2RQ9EWdeN34S79+KExV1764+wCUGop5CPL1WGdD0ocPpu91lzPGbwWMECpEpd42kJGQwzRfyov9j4yNg==\"}},\"remark-parse@11.0.0\":{\"resolution\":{\"integrity\":\"sha512-FCxlKLNGknS5ba/1lmpYijMUzX2esxW5xQqjWxw2eHFfS2MSdaHVINFmhjo+qN1WhZhNimq0dZATN9pH0IDrpA==\"}},\"remark-rehype@11.1.2\":{\"resolution\":{\"integrity\":\"sha512-Dh7l57ianaEoIpzbp0PC9UKAdCSVklD8E5Rpw7ETfbTl3FqcOOgq5q2LVDhgGCkaBv7p24JXikPdvhhmHvKMsw==\"}},\"remark-stringify@11.0.0\":{\"resolution\":{\"integrity\":\"sha512-1OSmLd3awB/t8qdoEOMazZkNsfVTeY4fTsgzcQFdXNq8ToTN4ZGwrMnlda4K6smTFKD+GRV6O48i6Z4iKgPPpw==\"}},\"require-directory@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"require-from-string@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"requires-port@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-KigOCHcocU3XODJxsu8i/j8T9tzT4adHiecwORRQ0ZZFcp7ahwXuRU1m+yuO90C5ZUyGeGfocHDI14M3L3yDAQ==\"}},\"resolve-from@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw==\"},\"engines\":{\"node\":\">=8\"}},\"resolve-pkg-maps@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-seS2Tj26TBVOC2NIc2rOe2y2ZO7efxITtLZcGSOnHHNOQ7CkiUBfw0Iw2ck6xkIhPwLhKNLS8BO+hEpngQlqzw==\"}},\"resolve.exports@2.0.3\":{\"resolution\":{\"
integrity\":\"sha512-OcXjMsGdhL4XnbShKpAcSqPMzQoYkYyhbEaeSko47MjRP9NfEQMhZkXL1DoFlt9LWQn4YttrdnV6X2OiyzBi+A==\"},\"engines\":{\"node\":\">=10\"}},\"resq@1.11.0\":{\"resolution\":{\"integrity\":\"sha512-G10EBz+zAAy3zUd/CDoBbXRL6ia9kOo3xRHrMDsHljI0GDkhYlyjwoCx5+3eCC4swi1uCoZQhskuJkj7Gp57Bw==\"}},\"restore-cursor@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-l+sSefzHpj5qimhFSE5a8nufZYAM3sBSVMAPtYkmC+4EH2anSGaEMXSD0izRQbu9nfyQ9y5JrVmp7E8oZrUjvA==\"},\"engines\":{\"node\":\">=8\"}},\"reusify@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==\"},\"engines\":{\"iojs\":\">=1.0.0\",\"node\":\">=0.10.0\"}},\"rgb2hex@0.2.5\":{\"resolution\":{\"integrity\":\"sha512-22MOP1Rh7sAo1BZpDG6R5RFYzR2lYEgwq7HEmyW2qcsOqR2lQKmn+O//xV3YG/0rrhMC6KVX2hU+ZXuaw9a5bw==\"}},\"robust-predicates@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-IXgzBWvWQwE6PrDI05OvmXUIruQTcoMDzRsOd5CDvHCVLcLHMTSYvOK5Cm46kWqlV3yAbuSpBZdJ5oP5OUoStg==\"}},\"rolldown@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-ZrT53oAKrtA4+YtBWPQbtPOxIbVDbxT0orcYERKd63VJTF13zPcgXTvD4843L8pcsI7M6MErt8QtON6lrB9tyA==\"},\"engines\":{\"node\":\"^20.19.0 || 
>=22.12.0\"},\"hasBin\":true},\"rollup@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-MHngMYwGJVi6Fmnk6ISmnk7JAHRNF0UkuucA0CUW3N3a4KnONPEZz+vUanQP/ZC/iY1Qkf3bwPWzyY84wEks1g==\"},\"engines\":{\"node\":\">=18.0.0\",\"npm\":\">=8.0.0\"},\"hasBin\":true},\"rou3@0.8.1\":{\"resolution\":{\"integrity\":\"sha512-ePa+XGk00/3HuCqrEnK3LxJW7I0SdNg6EFzKUJG73hMAdDcOUC/i/aSz7LSDwLrGr33kal/rqOGydzwl6U7zBA==\"}},\"roughjs@4.6.6\":{\"resolution\":{\"integrity\":\"sha512-ZUz/69+SYpFN/g/lUlo2FXcIjRkSu3nDarreVdGGndHEBJ6cXPdKguS8JGxwj5HA5xIbVKSmLgr5b3AWxtRfvQ==\"}},\"rrweb-cssom@0.8.0\":{\"resolution\":{\"integrity\":\"sha512-guoltQEx+9aMf2gDZ0s62EcV8lsXR+0w8915TC3ITdn2YueuNjdAYh/levpU9nFaoChh9RUS5ZdQMrKfVEN9tw==\"}},\"run-parallel@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==\"}},\"rw@1.3.3\":{\"resolution\":{\"integrity\":\"sha512-PdhdWy89SiZogBLaw42zdeqtRJ//zFd2PgQavcICDUgJT5oW10QCRKbJ6bg4r0/UY2M6BWd5tkxuGFRvCkgfHQ==\"}},\"rxjs@7.8.2\":{\"resolution\":{\"integrity\":\"sha512-dhKf903U/PQZY6boNNtAGdWbG85WAbjT/1xYoZIC7FAY0yWapOBQVsVrDl58W86//e1VpMNBtRV4MaXfdMySFA==\"}},\"safaridriver@0.1.2\":{\"resolution\":{\"integrity\":\"sha512-4R309+gWflJktzPXBQCobbWEHlzC4aK3a+Ov3tz2Ib2aBxiwd11phkdIBH1l0EO22x24CJMUQkpKFumRriCSRg==\"}},\"safe-buffer@5.1.2\":{\"resolution\":{\"integrity\":\"sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==\"}},\"safe-buffer@5.2.1\":{\"resolution\":{\"integrity\":\"sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==\"}},\"safer-buffer@2.1.2\":{\"resolution\":{\"integrity\":\"sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==\"}},\"sass-embedded-android-arm64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-+pq7a7AUpItNyPu61sRlP6G2A8pSPpyazASb+8AK2pVlFayCSPAEgpwpCE9A2/Xj86xJZeMizzKUHxM2CBCUxA==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm
64\"],\"os\":[\"android\"]},\"sass-embedded-android-arm@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-oHAPTboBHRZlDBhyRB6dvDKh4KvFs+DZibDHXbkSI6dBZxMTT+Yb2ivocHnctVGucKTLQeT7+OM5DjWHyynL/A==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm\"],\"os\":[\"android\"]},\"sass-embedded-android-riscv64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-HfJJWp/S6XSYvlGAqNdakeEMPOdhBkj2s2lN6SHnON54rahKem+z9pUbCriUJfM65Z90lakdGuOfidY61R9TYg==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"riscv64\"],\"os\":[\"android\"]},\"sass-embedded-android-x64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-BGPzq53VH5z5HN8de6jfMqJjnRe1E6sfnCWFd4pK+CAiuM7iw5Fx6BQZu3ikfI1l2GY0y6pRXzsVLdp/j4EKEA==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"android\"]},\"sass-embedded-darwin-arm64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-UCm3RL/tzMpG7DsubARsvGUNXC5pgfQvP+RRFJo9XPIi6elopY5B6H4m9dRYDpHA+scjVthdiDwkPYr9+S/KGw==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"sass-embedded-darwin-x64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-D9WxtDY5VYtMApXRuhQK9VkPHB8R79NIIR6xxVlN2MIdEid/TZWi1MHNweieETXhWGrKhRKglwnHxxyKdJYMnA==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"sass-embedded-linux-arm64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-2N4WW5LLsbtrWUJ7iTpjvhajGIbmDR18ZzYRywHdMLpfdPApuHPMDF5CYzHbS+LLx2UAx7CFKBnj5LLjY6eFgQ==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"sass-embedded-linux-arm@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-leP0t5U4r95dc90o8TCWfxNXwMAsQhpWxTkdtySDpngoqtTy3miMd7EYNYd1znI0FN1CBaUvbdCMbnbPwygDlA==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"sass-embedded-linux-musl-arm64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-nTyuaBX6U1A/cG7WJh0pKD1gY8hbg1m2SnzsyoFG+exQ0lBX/lwTLHq3nyhF+0atv7YYhYKbmfz+sjPP8CZ9lw==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"s
ass-embedded-linux-musl-arm@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-Z6gG2FiVEEdxYHRi2sS5VIYBmp17351bWtOCUZ/thBM66+e70yiN6Eyqjz80DjL8haRUegNQgy9ZJqsLAAmr9g==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"sass-embedded-linux-musl-riscv64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-N6oul+qALO0SwGY8JW7H/Vs0oZIMrRMBM4GqX3AjM/6y8JsJRxkAwnfd0fDyK+aICMFarDqQonQNIx99gdTZqw==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"sass-embedded-linux-musl-x64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-K+FmWcdj/uyP8GiG9foxOCPfb5OAZG0uSVq80DKgVSC0U44AdGjvAvVZkrgFEcZ6cCqlNC2JfYmslB5iqdL7tg==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"sass-embedded-linux-riscv64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-g9nTbnD/3yhOaskeqeBQETbtfDQWRgsjHok6bn7DdAuwBsyrR3JlSFyqKc46pn9Xxd9SQQZU8AzM4IR+sY0A0w==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"sass-embedded-linux-x64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-Ax7dKvzncyQzIl4r7012KCMBvJzOz4uwSNoyoM5IV6y5I1f5hEwI25+U4WfuTqdkv42taCMgpjZbh9ERr6JVMQ==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"sass-embedded-win32-arm64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-j96iJni50ZUsfD6tRxDQE2QSYQ2WrfHxeiyAXf41Kw0V4w5KYR/Sf6rCZQLMTUOHnD16qTMVpQi20LQSqf4WGg==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"sass-embedded-win32-x64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-cS2j5ljdkQsb4PaORiClaVYynE9OAPZG/XjbOMxpQmjRIf7UroY4PEIH+Waf+y47PfXFX9SyxhYuw2NIKGbEng==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"sass-embedded@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-Ack2K8rc57kCFcYlf3HXpZEJFNUX8xd8DILldksREmYXQkRHI879yy8q4mRDJgrojkySMZqmmmW1NxrFxMsYaA==\"},\"engines\":{\"node\":\">=16.0.0\"},\"hasBin\":true},\"saxes@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-xAg7SOnEhrm5zI3
puOOKyy1OMcMlIJZYNJY7xLBwSze0UjhPLnWfj2GF2EpT0jmzaJKIWKHLsaSSajf35bcYnA==\"},\"engines\":{\"node\":\">=v12.22.7\"}},\"scheduler@0.27.0\":{\"resolution\":{\"integrity\":\"sha512-eNv+WrVbKu1f3vbYJT/xtiF5syA5HPIMtf9IgY/nKg0sWqzAUEvqY/xm7OcZc/qafLx/iO9FgOmeSAp4v5ti/Q==\"}},\"schema-utils@4.3.3\":{\"resolution\":{\"integrity\":\"sha512-eflK8wEtyOE6+hsaRVPxvUKYCpRgzLqDTb8krvAsRIwOGlHoSgYLgBXoubGgLd2fT41/OUYdb48v4k4WWHQurA==\"},\"engines\":{\"node\":\">= 10.13.0\"}},\"semver@6.3.1\":{\"resolution\":{\"integrity\":\"sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA==\"},\"hasBin\":true},\"semver@7.7.2\":{\"resolution\":{\"integrity\":\"sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA==\"},\"engines\":{\"node\":\">=10\"},\"hasBin\":true},\"semver@7.7.3\":{\"resolution\":{\"integrity\":\"sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q==\"},\"engines\":{\"node\":\">=10\"},\"hasBin\":true},\"semver@7.7.4\":{\"resolution\":{\"integrity\":\"sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA==\"},\"engines\":{\"node\":\">=10\"},\"hasBin\":true},\"serialize-error@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-2G2y++21dhj2R7iHAdd0FIzjGwuKZld+7Pl/bTU6YIkrC2ZMbVUjm+luj6A6V34Rv9XfKJDKpTWu9W4Gse1D9g==\"},\"engines\":{\"node\":\">=14.16\"}},\"seroval-plugins@1.5.4\":{\"resolution\":{\"integrity\":\"sha512-S0xQPhUTefAhNvNWFg0c1J8qJArHt5KdtJ/cFAofo06KD1MVSeFWyl4iiu+ApDIuw0WhjpOfCdgConOfAnLgkw==\"},\"engines\":{\"node\":\">=10\"},\"peerDependencies\":{\"seroval\":\"^1.0\"}},\"seroval@1.5.4\":{\"resolution\":{\"integrity\":\"sha512-46uFvgrXTVxZcUorgSSRZ4y+ieqLLQRMlG4bnCZKW3qI6BZm7Rg4ntMW4p1mILEEBZWrFlcpp0AyIIlM6jD9iw==\"},\"engines\":{\"node\":\">=10\"}},\"setimmediate@1.0.5\":{\"resolution\":{\"integrity\":\"sha512-MATJdZp8sLqDl/68LfQmbP8zKPLQNV6BIZoIgrscFDQ+RsvK/BxeDQOgyxKKoh0y/8h3BqVFnCqQ/gd+reiIXA==\"}},\"sharp
@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-Ou9I5Ft9WNcCbXrU9cMgPBcCK8LiwLqcbywW3t4oDV37n1pzpuNLsYiAV8eODnjbtQlSDwZ2cUEeQz4E54Hltg==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"}},\"shebang-command@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==\"},\"engines\":{\"node\":\">=8\"}},\"shebang-regex@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==\"},\"engines\":{\"node\":\">=8\"}},\"shiki@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-kLdkY6iV3dYbtPwS9KXU7mjfmDm25f5m0IPNFnaXO7TBPcvbUOY72PYXSuSqDzwp+vlH/d7MXpHlKO/x+QoLXw==\"}},\"siginfo@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-ybx0WO1/8bSBLEWXZvEd7gMW3Sn3JFlW3TvX1nREbDLRNQNaeNN8WK0meBwPdAaOI7TtRRRJn/Es1zhrrCHu7g==\"}},\"signal-exit@3.0.7\":{\"resolution\":{\"integrity\":\"sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ==\"}},\"signal-exit@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw==\"},\"engines\":{\"node\":\">=14\"}},\"simple-concat@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-cSFtAPtRhljv69IK0hTVZQ+OfE9nePi/rtJmw5UjHeVyVroEqJXP1sFztKUy1qU+xvz3u/sfYJLa947b7nAN2Q==\"}},\"simple-get@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-brv7p5WgH0jmQJr1ZDDfKDOSeWWg+OVypG99A/5vYGPqJ6pxiaHLy8nxtFjBA7oMa01ebA9gfh1uMCFqOuXxvA==\"}},\"sirv@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-2wcC/oGxHis/BoHkkPwldgiPSYcpZK3JU28WoMVv55yHJgcZ8rlXvuG9iZggz+sU1d4bRgIGASwyWqjxu3FM0g==\"},\"engines\":{\"node\":\">=18\"}},\"slash@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-g9Q1haeby36OSStwb4ntCGGGaKsaVSjQ68fBxoQcutl5fS1vuY18H3wSt3jFyFtrkx+Kz0V1G85A4MyAdDMi2Q==\"},\"engines\":{\"node\":\">=8\"}},\"smart-buffer@4.2.0\":{\"resolution\":{\"integrity\":\"sha512-94hK0Hh8rPqQl2xXc3HsaBoOXKV20MToPkcXvwbISWL
Es+64sBq5kFgn2kJDHb1Pry9yrP0dxrCI9RRci7RXKg==\"},\"engines\":{\"node\":\">= 6.0.0\",\"npm\":\">= 3.0.0\"}},\"socks-proxy-agent@8.0.5\":{\"resolution\":{\"integrity\":\"sha512-HehCEsotFqbPW9sJ8WVYB6UbmIMv7kUUORIF2Nncq4VQvBfNBLibW9YZR5dlYCSUhwcD628pRllm7n+E+YTzJw==\"},\"engines\":{\"node\":\">= 14\"}},\"socks@2.8.8\":{\"resolution\":{\"integrity\":\"sha512-NlGELfPrgX2f1TAAcz0WawlLn+0r3FyhhCRpFFK2CemXenPYvzMWWZINv3eDNo9ucdwme7oCHRY0Jnbs4aIkog==\"},\"engines\":{\"node\":\">= 10.0.0\",\"npm\":\">= 3.0.0\"}},\"source-map-js@1.2.1\":{\"resolution\":{\"integrity\":\"sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"source-map-support@0.5.21\":{\"resolution\":{\"integrity\":\"sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w==\"}},\"source-map@0.6.1\":{\"resolution\":{\"integrity\":\"sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"source-map@0.7.6\":{\"resolution\":{\"integrity\":\"sha512-i5uvt8C3ikiWeNZSVZNWcfZPItFQOsYTUAOkcUPGd8DqDy1uOUikjt5dG+uRlwyvR108Fb9DOd4GvXfT0N2/uQ==\"},\"engines\":{\"node\":\">= 12\"}},\"space-separated-tokens@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-PEGlAwrG8yXGXRjW32fGbg66JAlOAwbObuqVoJpv/mRgoWDQfgH1wDPvtzWyUSNAXBGSk8h755YDbbcEy3SH2Q==\"}},\"spacetrim@0.11.59\":{\"resolution\":{\"integrity\":\"sha512-lLYsktklSRKprreOm7NXReW8YiX2VBjbgmXYEziOoGf/qsJqAEACaDvoTtUOycwjpaSh+bT8eu0KrJn7UNxiCg==\"}},\"spawndamnit@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-MmnduQUuHCoFckZoWnXsTg7JaiLBJrKFj9UI2MbRPGaJeVpsLcVBu6P/IGZovziM/YBsellCmsprgNA+w0CzVg==\"}},\"split2@4.2.0\":{\"resolution\":{\"integrity\":\"sha512-UcjcJOWknrNkF6PLX83qcHM6KHgVKNkV62Y8a5uYDVv9ydGQVwAHMKqHdJje1VTWpljG0WYpCDhrCdAOYH4TWg==\"},\"engines\":{\"node\":\">= 
10.x\"}},\"sprintf-js@1.0.3\":{\"resolution\":{\"integrity\":\"sha512-D9cPgkvLlV3t3IzL0D0YLvGA9Ahk4PcvVwUbN0dSGr1aP0Nrt4AEnTUbuGvquEC0mA64Gqt1fzirlRs5ibXx8g==\"}},\"srvx@0.11.15\":{\"resolution\":{\"integrity\":\"sha512-iXsux0UcOjdvs0LCMa2Ws3WwcDUozA3JN3BquNXkaFPP7TpRqgunKdEgoZ/uwb1J6xaYHfxtz9Twlh6yzwM6Tg==\"},\"engines\":{\"node\":\">=20.16.0\"},\"hasBin\":true},\"stackback@0.0.2\":{\"resolution\":{\"integrity\":\"sha512-1XMJE5fQo1jGH6Y/7ebnwPOBEkIEnT4QF32d5R1+VXdXveM0IBMJt8zfaxX1P3QhVwrYe+576+jkANtSS2mBbw==\"}},\"statuses@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-DvEy55V3DB7uknRo+4iOGT5fP1slR8wQohVdknigZPMpMstaKJQWhwiYBACJE3Ul2pTnATihhBYnRhZQHGBiRw==\"},\"engines\":{\"node\":\">= 0.8\"}},\"std-env@3.10.0\":{\"resolution\":{\"integrity\":\"sha512-5GS12FdOZNliM5mAOxFRg7Ir0pWz8MdpYm6AY6VPkGpbA7ZzmbzNcBJQ0GPvvyWgcY7QAhCgf9Uy89I03faLkg==\"}},\"std-env@3.9.0\":{\"resolution\":{\"integrity\":\"sha512-UGvjygr6F6tpH7o2qyqR6QYpwraIjKSdtzyBdyytFOHmPZY917kwdwLG0RbOjWOnKmnm3PeHjaoLLMie7kPLQw==\"}},\"std-env@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-Rq7ybcX2RuC55r9oaPVEW7/xu3tj8u4GeBYHBWCychFtzMIr86A7e3PPEBPT37sHStKX3+TiX/Fr/ACmJLVlLQ==\"}},\"streamx@2.25.0\":{\"resolution\":{\"integrity\":\"sha512-0nQuG6jf1w+wddNEEXCF4nTg3LtufWINB5eFEN+5TNZW7KWJp6x87+JFL43vaAUPyCfH1wID+mNVyW6OHtFamg==\"}},\"strict-event-emitter@0.5.1\":{\"resolution\":{\"integrity\":\"sha512-vMgjE/GGEPEFnhFub6pa4FmJBRBVOLpIII2hvCZ8Kzb7K0hlHo7mQv6xYrBvCL2LtAIBwFUK8wvuJgTVSQ5MFQ==\"}},\"string-width@4.2.3\":{\"resolution\":{\"integrity\":\"sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==\"},\"engines\":{\"node\":\">=8\"}},\"string-width@5.1.2\":{\"resolution\":{\"integrity\":\"sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==\"},\"engines\":{\"node\":\">=12\"}},\"string_decoder@1.1.1\":{\"resolution\":{\"integrity\":\"sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfw
g==\"}},\"string_decoder@1.3.0\":{\"resolution\":{\"integrity\":\"sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA==\"}},\"stringify-entities@4.0.4\":{\"resolution\":{\"integrity\":\"sha512-IwfBptatlO+QCJUo19AqvrPNqlVMpW9YEL2LIVY+Rpv2qsjCGxaDLNRgeGsQWJhfItebuJhsGSLjaBbNSQ+ieg==\"}},\"strip-ansi@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==\"},\"engines\":{\"node\":\">=8\"}},\"strip-ansi@7.1.2\":{\"resolution\":{\"integrity\":\"sha512-gmBGslpoQJtgnMAvOVqGZpEz9dyoKTCzy2nfz/n8aIFhN/jCE/rCmcxabB6jOOHV+0WNnylOxaxBQPSvcWklhA==\"},\"engines\":{\"node\":\">=12\"}},\"strip-ansi@7.2.0\":{\"resolution\":{\"integrity\":\"sha512-yDPMNjp4WyfYBkHnjIRLfca1i6KMyGCtsVgoKe/z1+6vukgaENdgGBZt+ZmKPc4gavvEZ5OgHfHdrazhgNyG7w==\"},\"engines\":{\"node\":\">=12\"}},\"strip-bom@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-vavAMRXOgBVNF6nyEEmL3DBK19iRpDcoIwW+swQ+CbGiu7lju6t+JklA1MHweoWtadgt4ISVUsXLyDq34ddcwA==\"},\"engines\":{\"node\":\">=4\"}},\"strip-json-comments@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-4gB8na07fecVVkOI6Rs4e7T6NOTki5EmL7TUduTs6bu3EdnSycntVJ4re8kgZA+wx9IueI2Y11bfbgwtzuE0KQ==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"strip-literal@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-TcccoMhJOM3OebGhSBEmp3UZ2SfDMZUEBdRA/9ynfLi8yYajyWX3JiXArcJt4Umh4vISpspkQIY8ZZoCqjbviA==\"}},\"strnum@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-vrN+B7DBIoTTZjnPNewwhx6cBA/H+IS7rfW68n7XxC1y7uoiGQBxaKzqucGUgavX15dJgiGztLJ8vxuEzwqBdA==\"}},\"stylis@4.3.6\":{\"resolution\":{\"integrity\":\"sha512-yQ3rwFWRfwNUY7H5vpU0wfdkNSnvnJinhF9830Swlaxl03zsOjCfmX0ugac+3LtK0lYSgwL/KXc8oYL3mG4YFQ==\"}},\"supports-color@10.2.2\":{\"resolution\":{\"integrity\":\"sha512-SS+jx45GF1QjgEXQx4NJZV9ImqmO2NPz5FNsIHrsDjh2YsHnawpan7SNQ1o8NuhrbHZy9AZhIoCUiCeaW/C80g==\"},\"engines\":{\"node\":\">=18\"}},\"supports-color@7.2.0\":{\"resolution\":{\"integrity\":\"sha512-qpCAvRl9stuOHveKsn7
HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==\"},\"engines\":{\"node\":\">=8\"}},\"supports-color@8.1.1\":{\"resolution\":{\"integrity\":\"sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q==\"},\"engines\":{\"node\":\">=10\"}},\"symbol-tree@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-9QNk5KwDF+Bvz+PyObkmSYjI5ksVUYtjW7AU22r2NKcfLJcXp96hkDWU3+XndOsUb+AQ9QhfzfCT2O+CNWT5Tw==\"}},\"sync-child-process@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-8lD+t2KrrScJ/7KXCSyfhT3/hRq78rC0wBFqNJXv3mZyn6hW2ypM05JmlSvtqRbeq6jqA94oHbxAr2vYsJ8vDA==\"},\"engines\":{\"node\":\">=16.0.0\"}},\"sync-message-port@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-gAQ9qrUN/UCypHtGFbbe7Rc/f9bzO88IwrG8TDo/aMKAApKyD6E3W4Cm0EfhfBb6Z6SKt59tTCTfD+n1xmAvMg==\"},\"engines\":{\"node\":\">=16.0.0\"}},\"tailwindcss@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-HhKppgO81FQof5m6TEnuBWCZGgfRAWbaeOaGT00KOy/Pf/j6oUihdvBpA7ltCeAvZpFhW3j0PTclkxsd4IXYDA==\"}},\"tapable@2.3.3\":{\"resolution\":{\"integrity\":\"sha512-uxc/zpqFg6x7C8vOE7lh6Lbda8eEL9zmVm/PLeTPBRhh1xCgdWaQ+J1CUieGpIfm2HdtsUpRv+HshiasBMcc6A==\"},\"engines\":{\"node\":\">=6\"}},\"tar-fs@2.1.4\":{\"resolution\":{\"integrity\":\"sha512-mDAjwmZdh7LTT6pNleZ05Yt65HC3E+NiQzl672vQG38jIrehtJk/J3mNwIg+vShQPcLF/LV7CMnDW6vjj6sfYQ==\"}},\"tar-fs@3.1.2\":{\"resolution\":{\"integrity\":\"sha512-QGxxTxxyleAdyM3kpFs14ymbYmNFrfY+pHj7Z8FgtbZ7w2//VAgLMac7sT6nRpIHjppXO2AwwEOg0bPFVRcmXw==\"}},\"tar-stream@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-ujeqbceABgwMZxEJnk2HDY2DlnUZ+9oEcb1KzTVfYHio0UE6dG71n60d8D2I4qNvleWrrXpmjpt7vZeF1LnMZQ==\"},\"engines\":{\"node\":\">=6\"}},\"tar-stream@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-ojzvCvVaNp6aOTFmG7jaRD0meowIAuPc3cMMhSgKiVWws1GyHbGd/xvnyuRKcKlMpt3qvxx6r0hreCNITP9hIg==\"}},\"tar@6.2.1\":{\"resolution\":{\"integrity\":\"sha512-DZ4yORTwrbTj/7MZYq2w+/ZFdI6OZ/f9SFHR+71gIVUZhOQPHzVCLpvRnPgyaMpfWxxk/4ONva3GQSyNIKRv6A==\"},\"engines\":{\"node\":\">=10\"},\
"deprecated\":\"Old versions of tar are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me\"},\"teex@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-eYE6iEI62Ni1H8oIa7KlDU6uQBtqr4Eajni3wX7rpfXD8ysFx8z0+dri+KWEPWpBsxXfxu58x/0jvTVT1ekOSg==\"}},\"term-size@2.2.1\":{\"resolution\":{\"integrity\":\"sha512-wK0Ri4fOGjv/XPy8SBHZChl8CM7uMc5VML7SqiQ0zG7+J5Vr+RMQDoHa2CNT6KHUnTGIXH34UDMkPzAUyapBZg==\"},\"engines\":{\"node\":\">=8\"}},\"terser-webpack-plugin@5.5.0\":{\"resolution\":{\"integrity\":\"sha512-UYhptBwhWvfIjKd/UuFo6D8uq9xpGLDK+z8EDsj/zWhrTaH34cKEbrkMKfV5YWqGBvAYA3tlzZbs2R+qYrbQJA==\"},\"engines\":{\"node\":\">= 10.13.0\"},\"peerDependencies\":{\"@swc/core\":\"*\",\"esbuild\":\"*\",\"uglify-js\":\"*\",\"webpack\":\"^5.1.0\"},\"peerDependenciesMeta\":{\"@swc/core\":{\"optional\":true},\"esbuild\":{\"optional\":true},\"uglify-js\":{\"optional\":true}}},\"terser@5.36.0\":{\"resolution\":{\"integrity\":\"sha512-IYV9eNMuFAV4THUspIRXkLakHnV6XO7FEdtKjf/mDyrnqUg9LnlOn6/RwRvM9SZjR4GUq8Nk8zj67FzVARr74w==\"},\"engines\":{\"node\":\">=10\"},\"hasBin\":true},\"test-exclude@7.0.1\":{\"resolution\":{\"integrity\":\"sha512-pFYqmTw68LXVjeWJMST4+borgQP2AyMNbg1BpZh9LbyhUeNkeaPF9gzfPGUAnSMV3qPYdWUwDIjjCLiSDOl7vg==\"},\"engines\":{\"node\":\">=18\"}},\"text-decoder@1.2.7\":{\"resolution\":{\"integrity\":\"sha512-vlLytXkeP4xvEq2otHeJfSQIRyWxo/oZGEbXrtEEF9Hnmrdly59sUbzZ/QgyWuLYHctCHxFF4tRQZNQ9k60ExQ==\"}},\"tinybench@2.9.0\":{\"resolution\":{\"integrity\":\"sha512-0+DUvqWMValLmha6lr4kD8iAMK1HzV0/aKnCtWb9v9641TnP/MFb7Pc2bxoxQjTXAErryXVgUOfv2YqNllqGeg==\"}},\"tinyexec@0.3.2\":{\"resolution\":{\"integrity\":\"sha512-KQQR9yN7R5+OSwaK0XQoj22pwHoTlgYqmUscPYoknOoWCWfj/5/ABTMRi69FrKU5ffPVh5QcFikpWJI/P1ocHA==\"}},\"tinyexec@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-W/KYk+NFhkmsYpuHq5JykngiOCnxeVL8v8dFnqxSD8qEEdRfXk1SDM6JzNqcERbcG
Yj9tMrDQBYV9cjgnunFIg==\"},\"engines\":{\"node\":\">=18\"}},\"tinyglobby@0.2.14\":{\"resolution\":{\"integrity\":\"sha512-tX5e7OM1HnYr2+a2C/4V0htOcSQcoSTH9KgJnVvNm5zm/cyEWKJ7j7YutsH9CxMdtOkkLFy2AHrMci9IM8IPZQ==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"tinyglobby@0.2.15\":{\"resolution\":{\"integrity\":\"sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"tinyglobby@0.2.16\":{\"resolution\":{\"integrity\":\"sha512-pn99VhoACYR8nFHhxqix+uvsbXineAasWm5ojXoN8xEwK5Kd3/TrhNn1wByuD52UxWRLy8pu+kRMniEi6Eq9Zg==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"tinypool@1.1.1\":{\"resolution\":{\"integrity\":\"sha512-Zba82s87IFq9A9XmjiX5uZA/ARWDrB03OHlq+Vw1fSdt0I+4/Kutwy8BP4Y/y/aORMo61FQ0vIb5j44vSo5Pkg==\"},\"engines\":{\"node\":\"^18.0.0 || >=20.0.0\"}},\"tinyrainbow@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-op4nsTR47R6p0vMUUoYl/a+ljLFVtlfaXkLQmqfLR1qHma1h/ysYk4hEXZ880bf2CYgTskvTa/e196Vd5dDQXw==\"},\"engines\":{\"node\":\">=14.0.0\"}},\"tinyrainbow@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-PSkbLUoxOFRzJYjjxHJt9xro7D+iilgMX/C9lawzVuYiIdcihh9DXmVibBe8lmcFrRi/VzlPjBxbN7rH24q8/Q==\"},\"engines\":{\"node\":\">=14.0.0\"}},\"tinyrainbow@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-Bf+ILmBgretUrdJxzXM0SgXLZ3XfiaUuOj/IKQHuTXip+05Xn+uyEYdVg0kYDipTBcLrCVyUzAPz7QmArb0mmw==\"},\"engines\":{\"node\":\">=14.0.0\"}},\"tinyspy@4.0.3\":{\"resolution\":{\"integrity\":\"sha512-t2T/WLB2WRgZ9EpE4jgPJ9w+i66UZfDc8wHh0xrwiRNN+UwH98GIJkTeZqX9rg0i0ptwzqW+uYeIF0T4F8LR7A==\"},\"engines\":{\"node\":\">=14.0.0\"}},\"tldts-core@6.1.52\":{\"resolution\":{\"integrity\":\"sha512-j4OxQI5rc1Ve/4m/9o2WhWSC4jGc4uVbCINdOEJRAraCi0YqTqgMcxUx7DbmuP0G3PCixoof/RZB0Q5Kh9tagw==\"}},\"tldts-core@7.0.19\":{\"resolution\":{\"integrity\":\"sha512-lJX2dEWx0SGH4O6p+7FPwYmJ/bu1JbcGJ8RLaG9b7liIgZ85itUVEPbMtWRVrde/0fnDPEPHW10ZsKW3kVsE9A==\"}},\"tldts@6.1.52\":{\"resolution\":{\"integrity\":\"sha512-fgrDJXDjbAverY6XnIt0lNfv8A0cf7maTEaZxNykL
GsLG7XP+5xhjBTrt/ieAsFjAlZ+G5nmXomLcZDkxXnDzw==\"},\"hasBin\":true},\"tldts@7.0.19\":{\"resolution\":{\"integrity\":\"sha512-8PWx8tvC4jDB39BQw1m4x8y5MH1BcQ5xHeL2n7UVFulMPH/3Q0uiamahFJ3lXA0zO2SUyRXuVVbWSDmstlt9YA==\"},\"hasBin\":true},\"tmp@0.2.5\":{\"resolution\":{\"integrity\":\"sha512-voyz6MApa1rQGUxT3E+BK7/ROe8itEx7vD8/HEvt4xwXucvQ5G5oeEiHkmHZJuBO21RpOf+YYm9MOivj709jow==\"},\"engines\":{\"node\":\">=14.14\"}},\"to-regex-range@5.0.1\":{\"resolution\":{\"integrity\":\"sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==\"},\"engines\":{\"node\":\">=8.0\"}},\"totalist@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-sf4i37nQ2LBx4m3wB74y+ubopq6W/dIzXg0FDGjsYnZHVa1Da8FH853wlL2gtUhg+xJXjfk3kUZS3BRoQeoQBQ==\"},\"engines\":{\"node\":\">=6\"}},\"tough-cookie@4.1.4\":{\"resolution\":{\"integrity\":\"sha512-Loo5UUvLD9ScZ6jh8beX1T6sO1w2/MpCRpEP7V280GKMVUQ0Jzar2U3UJPsrdbziLEMMhu3Ujnq//rhiFuIeag==\"},\"engines\":{\"node\":\">=6\"}},\"tough-cookie@5.1.2\":{\"resolution\":{\"integrity\":\"sha512-FVDYdxtnj0G6Qm/DhNPSb8Ju59ULcup3tuJxkFb5K8Bv2pUXILbf0xZWU8PX8Ov19OXljbUyveOFwRMwkXzO+A==\"},\"engines\":{\"node\":\">=16\"}},\"tough-cookie@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-kXuRi1mtaKMrsLUxz3sQYvVl37B0Ns6MzfrtV5DvJceE9bPyspOqk9xxv7XbZWcfLWbFmm997vl83qUWVJA64w==\"},\"engines\":{\"node\":\">=16\"}},\"tr46@5.1.1\":{\"resolution\":{\"integrity\":\"sha512-hdF5ZgjTqgAntKkklYw0R03MG2x/bSzTtkxmIRw/sTNV8YXsCJ1tfLAX23lhxhHJlEf3CRCOCGGWw3vI3GaSPw==\"},\"engines\":{\"node\":\">=18\"}},\"tr46@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-bLVMLPtstlZ4iMQHpFHTR7GAGj2jxi8Dg0s2h2MafAE4uSWF98FC/3MomU51iQAMf8/qDUbKWf5GxuvvVcXEhw==\"},\"engines\":{\"node\":\">=20\"}},\"tree-kill@1.2.2\":{\"resolution\":{\"integrity\":\"sha512-L0Orpi8qGpRG//Nd+H90vFB+3iHnue1zSSGmNOOCh1GLJ7rUKVwV2HvijphGQS2UmhUZewS9VgvxYIdgr+fG1A==\"},\"hasBin\":true},\"trim-lines@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-kRj8B+YHZCc9kQYdWfJB2/oUl9rA99qbowYYBtr4ui4mZyAQ2JpvVBd/6
U2YloATfqBhBTSMhTpgBHtU0Mf3Rg==\"}},\"trim-trailing-lines@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-5UR5Biq4VlVOtzqkm2AZlgvSlDJtME46uV0br0gENbwN4l5+mMKT4b9gJKqWtuL2zAIqajGJGuvbCbcAJUZqBg==\"}},\"trough@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-tmMpK00BjZiUyVyvrBK7knerNgmgvcV/KLVyuma/SC+TQN167GrMRciANTz09+k3zW8L8t60jWO1GpfkZdjTaw==\"}},\"ts-algebra@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-FPAhNPFMrkwz76P7cdjdmiShwMynZYN6SgOujD1urY4oNm80Ou9oMdmbR45LotcKOXoy7wSmHkRFE6Mxbrhefw==\"}},\"ts-dedent@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-q5W7tVM71e2xjHZTlgfTDoPF/SmqKG5hddq9SzR49CH2hayqRKJtQ4mtRlSxKaJlR/+9rEM+mnBHf7I2/BQcpQ==\"},\"engines\":{\"node\":\">=6.10\"}},\"tsconfig-paths@4.2.0\":{\"resolution\":{\"integrity\":\"sha512-NoZ4roiN7LnbKn9QqE1amc9DJfzvZXxF4xDavcOWt1BPkdx+m+0gJuPM+S0vCe7zTJMYUP0R8pO2XMr+Y8oLIg==\"},\"engines\":{\"node\":\">=6\"}},\"tslib@2.8.1\":{\"resolution\":{\"integrity\":\"sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==\"}},\"tsx@4.20.5\":{\"resolution\":{\"integrity\":\"sha512-+wKjMNU9w/EaQayHXb7WA7ZaHY6hN8WgfvHNQ3t1PnU91/7O8TcTnIhCDYTZwnt8JsO9IBqZ30Ln1r7pPF52Aw==\"},\"engines\":{\"node\":\">=18.0.0\"},\"hasBin\":true},\"tunnel-agent@0.6.0\":{\"resolution\":{\"integrity\":\"sha512-McnNiV1l8RYeY8tBgEpuodCC1mLUdbSN+CYBL7kJsJNInOP8UjDDEwdk6Mw60vdLLrr5NHKZhMAOSrR2NZuQ+w==\"}},\"type-fest@2.19.0\":{\"resolution\":{\"integrity\":\"sha512-RAH822pAdBgcNMAfWnCBU3CFZcfZ/i1eZjwFU/dsLKumyuuP3niueg2UAukXYF0E2AAoc82ZSSf9J0WQBinzHA==\"},\"engines\":{\"node\":\">=12.20\"}},\"type-fest@4.26.0\":{\"resolution\":{\"integrity\":\"sha512-OduNjVJsFbifKb57UqZ2EMP1i4u64Xwow3NYXUtBbD4vIwJdQd4+xl8YDou1dlm4DVrtwT/7Ky8z8WyCULVfxw==\"},\"engines\":{\"node\":\">=16\"}},\"type-fest@4.41.0\":{\"resolution\":{\"integrity\":\"sha512-TeTSQ6H5YHvpqVwBRcnLDCBnDOHWYu7IvGbHT6N8AOymcr9PJGjc1GTtiWZTYg0NCgYwvnYWEkVChQAr9bjfwA==\"},\"engines\":{\"node\":\">=16\"}},\"typescript@5.8.3\":{\"resolution\":{\"integrity\":\"s
ha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ==\"},\"engines\":{\"node\":\">=14.17\"},\"hasBin\":true},\"typescript@5.9.3\":{\"resolution\":{\"integrity\":\"sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==\"},\"engines\":{\"node\":\">=14.17\"},\"hasBin\":true},\"ufo@1.6.1\":{\"resolution\":{\"integrity\":\"sha512-9a4/uxlTWJ4+a5i0ooc1rU7C7YOw3wT+UGqdeNNHWnOF9qcMBgLRS+4IYUqbczewFx4mLEig6gawh7X6mFlEkA==\"}},\"undici-types@6.21.0\":{\"resolution\":{\"integrity\":\"sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==\"}},\"undici-types@7.16.0\":{\"resolution\":{\"integrity\":\"sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw==\"}},\"undici@7.16.0\":{\"resolution\":{\"integrity\":\"sha512-QEg3HPMll0o3t2ourKwOeUAZ159Kn9mx5pnzHRQO8+Wixmh88YdZRiIwat0iNzNNXn0yoEtXJqFpyW7eM8BV7g==\"},\"engines\":{\"node\":\">=20.18.1\"}},\"undici@7.24.8\":{\"resolution\":{\"integrity\":\"sha512-6KQ/+QxK49Z/p3HO6E5ZCZWNnCasyZLa5ExaVYyvPxUwKtbCPMKELJOqh7EqOle0t9cH/7d2TaaTRRa6Nhs4YQ==\"},\"engines\":{\"node\":\">=20.18.1\"}},\"undici@7.25.0\":{\"resolution\":{\"integrity\":\"sha512-xXnp4kTyor2Zq+J1FfPI6Eq3ew5h6Vl0F/8d9XU5zZQf1tX9s2Su1/3PiMmUANFULpmksxkClamIZcaUqryHsQ==\"},\"engines\":{\"node\":\">=20.18.1\"}},\"unenv@2.0.0-rc.24\":{\"resolution\":{\"integrity\":\"sha512-i7qRCmY42zmCwnYlh9H2SvLEypEFGye5iRmEMKjcGi7zk9UquigRjFtTLz0TYqr0ZGLZhaMHl/foy1bZR+Cwlw==\"}},\"unified@11.0.5\":{\"resolution\":{\"integrity\":\"sha512-xKvGhPWw3k84Qjh8bI3ZeJjqnyadK+GEFtazSfZv/rKeTkTjOJho6mFqh2SM96iIcZokxiOpg78GazTSg8+KHA==\"}},\"unist-util-find-after@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-amQa0Ep2m6hE2g72AugUItjbuM8X8cGQnFoHk0pGfrFeT9GZhzN5SW8nRsiGKK7Aif4CrACPENkA6P/Lw6fHGQ==\"}},\"unist-util-is@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-2qCTHimwdxLfz+YzdGfkqNlH0tLi9xjTnHddPmJwtIG9MGsdbutfTc4P+haPD7l7Cjxf/WZj+we5qfVPvvxf
Yw==\"}},\"unist-util-position@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-fucsC7HjXvkB5R3kTCO7kUjRdrS0BJt3M/FPxmHMBOm8JQi2BsHAHFsy27E0EolP8rp0NzXsJ+jNPyDWvOJZPA==\"}},\"unist-util-stringify-position@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-0ASV06AAoKCDkS2+xw5RXJywruurpbC4JZSm7nr7MOt1ojAzvyyaO+UxZf18j8FCF6kmzCZKcAgN/yu2gm2XgQ==\"}},\"unist-util-visit-parents@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-L/PqWzfTP9lzzEa6CKs0k2nARxTdZduw3zyh8d2NVBnsyvHjSX4TWse388YrrQKbvI8w20fGjGlhgT96WwKykw==\"}},\"unist-util-visit@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-MR04uvD+07cwl/yhVuVWAtw+3GOR/knlL55Nd/wAdblk27GCVt3lqpTivy/tkJcZoNPzTwS1Y+KMojlLDhoTzg==\"}},\"universalify@0.1.2\":{\"resolution\":{\"integrity\":\"sha512-rBJeI5CXAlmy1pV+617WB9J63U6XcazHHF2f2dbJix4XzpUF0RS3Zbj0FGIOCAva5P/d/GBOYaACQ1w+0azUkg==\"},\"engines\":{\"node\":\">= 4.0.0\"}},\"universalify@0.2.0\":{\"resolution\":{\"integrity\":\"sha512-CJ1QgKmNg3CwvAv/kOFmtnEN05f0D/cn9QntgNOQlQF9dgvVTHj3t+8JPdjqawCHk7V/KA+fbUqzZ9XWhcqPUg==\"},\"engines\":{\"node\":\">= 4.0.0\"}},\"universalify@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw==\"},\"engines\":{\"node\":\">= 10.0.0\"}},\"unplugin@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-0Mqk3AT2TZCXWKdcoaufeXNukv2mTrEZExeXlHIOZXdqYoHHr4n51pymnwV8x2BOVxwXbK2HLlI7usrqMpycdg==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"}},\"update-browserslist-db@1.1.3\":{\"resolution\":{\"integrity\":\"sha512-UxhIZQ+QInVdunkDAaiazvvT/+fXL5Osr0JZlJulepYu6Jd7qJtDZjlur0emRlT71EN3ScPoE7gvsuIKKNavKw==\"},\"hasBin\":true,\"peerDependencies\":{\"browserslist\":\">= 4.21.0\"}},\"update-browserslist-db@1.2.3\":{\"resolution\":{\"integrity\":\"sha512-Js0m9cx+qOgDxo0eMiFGEueWztz+d4+M3rGlmKPT+T4IS/jP4ylw3Nwpu6cpTTP8R1MAC1kF4VbdLt3ARf209w==\"},\"hasBin\":true,\"peerDependencies\":{\"browserslist\":\">= 
4.21.0\"}},\"url-parse@1.5.10\":{\"resolution\":{\"integrity\":\"sha512-WypcfiRhfeUP9vvF0j6rw0J3hrWrw6iZv3+22h6iRMJ/8z1Tj6XfLP4DsUix5MhMPnXpiHDoKyoZ/bdCkwBCiQ==\"}},\"urlpattern-polyfill@10.1.0\":{\"resolution\":{\"integrity\":\"sha512-IGjKp/o0NL3Bso1PymYURCJxMPNAf/ILOpendP9f5B6e1rTJgdgiOvgfoT8VxCAdY+Wisb9uhGaJJf3yZ2V9nw==\"}},\"use-sync-external-store@1.6.0\":{\"resolution\":{\"integrity\":\"sha512-Pp6GSwGP/NrPIrxVFAIkOQeyw8lFenOHijQWkUTrDvrF4ALqylP2C/KCkeS9dpUM3KvYRQhna5vt7IL95+ZQ9w==\"},\"peerDependencies\":{\"react\":\"^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0\"}},\"userhome@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-5cnLm4gseXjAclKowC4IjByaGsjtAoV6PrOQOljplNB54ReUYJP8HdAFq2muHinSDAh09PPX/uXDPfdxRHvuSA==\"},\"engines\":{\"node\":\">= 0.8.0\"}},\"util-deprecate@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==\"}},\"uuid@11.1.0\":{\"resolution\":{\"integrity\":\"sha512-0/A9rDy9P7cJ+8w1c9WD9V//9Wj15Ce2MPz8Ri6032usz+NfePxx5AcN3bN+r6ZL6jEo066/yNYB3tn4pQEx+A==\"},\"hasBin\":true},\"varint@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-cXEIW6cfr15lFv563k4GuVuW/fiwjknytD37jIOLSdSWuOI6WnO/oKwmP2FQTU2l01LP8/M5TSAJpzUaGe3uWg==\"}},\"vfile-location@5.0.3\":{\"resolution\":{\"integrity\":\"sha512-5yXvWDEgqeiYiBe1lbxYF7UMAIm/IcopxMHrMQDq3nvKcjPKIhZklUKL+AE7J7uApI4kwe2snsK+eI6UTj9EHg==\"}},\"vfile-message@4.0.2\":{\"resolution\":{\"integrity\":\"sha512-jRDZ1IMLttGj41KcZvlrYAaI3CfqpLpfpf+Mfig13viT6NKvRzWZ+lXz0Y5D60w6uJIBAOGq9mSHf0gktF0duw==\"}},\"vfile@6.0.3\":{\"resolution\":{\"integrity\":\"sha512-KzIbH/9tXat2u30jf+smMwFCsno4wHVdNmzFyL+T/L3UGqqk6JKfVqOFOZEpZSHADH1k40ab6NUIXZq422ov3Q==\"}},\"vite-node@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-EbKSKh+bh1E1IFxeO0pg1n4dvoOTt0UDiXMd/qn++r98+jPO1xtJilvXldeuQ8giIB5IkpjCgMleHMNEsGH6pg==\"},\"engines\":{\"node\":\"^18.0.0 || ^20.0.0 || 
>=22.0.0\"},\"hasBin\":true},\"vite-plugin-static-copy@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-9XOarNV7LgP0KBB7AApxdgFikLXx3daZdqjC3AevYsL6MrUH62zphonLUs2a6LZc1HN1GY+vQdheZ8VVJb6dQQ==\"},\"engines\":{\"node\":\"^22.0.0 || >=24.0.0\"},\"peerDependencies\":{\"vite\":\"^6.0.0 || ^7.0.0 || ^8.0.0\"}},\"vite@7.2.7\":{\"resolution\":{\"integrity\":\"sha512-ITcnkFeR3+fI8P1wMgItjGrR10170d8auB4EpMLPqmx6uxElH3a/hHGQabSHKdqd4FXWO1nFIp9rRn7JQ34ACQ==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"hasBin\":true,\"peerDependencies\":{\"@types/node\":\"^20.19.0 || >=22.12.0\",\"jiti\":\">=1.21.0\",\"less\":\"^4.0.0\",\"lightningcss\":\"^1.21.0\",\"sass\":\"^1.70.0\",\"sass-embedded\":\"^1.70.0\",\"stylus\":\">=0.54.8\",\"sugarss\":\"^5.0.0\",\"terser\":\"^5.16.0\",\"tsx\":\"^4.8.1\",\"yaml\":\"^2.4.2\"},\"peerDependenciesMeta\":{\"@types/node\":{\"optional\":true},\"jiti\":{\"optional\":true},\"less\":{\"optional\":true},\"lightningcss\":{\"optional\":true},\"sass\":{\"optional\":true},\"sass-embedded\":{\"optional\":true},\"stylus\":{\"optional\":true},\"sugarss\":{\"optional\":true},\"terser\":{\"optional\":true},\"tsx\":{\"optional\":true},\"yaml\":{\"optional\":true}}},\"vite@8.0.10\":{\"resolution\":{\"integrity\":\"sha512-rZuUu9j6J5uotLDs+cAA4O5H4K1SfPliUlQwqa6YEwSrWDZzP4rhm00oJR5snMewjxF5V/K3D4kctsUTsIU9Mw==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"hasBin\":true,\"peerDependencies\":{\"@types/node\":\"^20.19.0 || >=22.12.0\",\"@vitejs/devtools\":\"^0.1.0\",\"esbuild\":\"^0.27.0 || 
^0.28.0\",\"jiti\":\">=1.21.0\",\"less\":\"^4.0.0\",\"sass\":\"^1.70.0\",\"sass-embedded\":\"^1.70.0\",\"stylus\":\">=0.54.8\",\"sugarss\":\"^5.0.0\",\"terser\":\"^5.16.0\",\"tsx\":\"^4.8.1\",\"yaml\":\"^2.4.2\"},\"peerDependenciesMeta\":{\"@types/node\":{\"optional\":true},\"@vitejs/devtools\":{\"optional\":true},\"esbuild\":{\"optional\":true},\"jiti\":{\"optional\":true},\"less\":{\"optional\":true},\"sass\":{\"optional\":true},\"sass-embedded\":{\"optional\":true},\"stylus\":{\"optional\":true},\"sugarss\":{\"optional\":true},\"terser\":{\"optional\":true},\"tsx\":{\"optional\":true},\"yaml\":{\"optional\":true}}},\"vitefu@1.1.1\":{\"resolution\":{\"integrity\":\"sha512-B/Fegf3i8zh0yFbpzZ21amWzHmuNlLlmJT6n7bu5e+pCHUKQIfXSYokrqOBGEMMe9UG2sostKQF9mml/vYaWJQ==\"},\"peerDependencies\":{\"vite\":\"^3.0.0 || ^4.0.0 || ^5.0.0 || ^6.0.0 || ^7.0.0-beta.0\"},\"peerDependenciesMeta\":{\"vite\":{\"optional\":true}}},\"vitest@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-LUCP5ev3GURDysTWiP47wRRUpLKMOfPh+yKTx3kVIEiu5KOMeqzpnYNsKyOoVrULivR8tLcks4+lga33Whn90A==\"},\"engines\":{\"node\":\"^18.0.0 || ^20.0.0 || >=22.0.0\"},\"hasBin\":true,\"peerDependencies\":{\"@edge-runtime/vm\":\"*\",\"@types/debug\":\"^4.1.12\",\"@types/node\":\"^18.0.0 || ^20.0.0 || >=22.0.0\",\"@vitest/browser\":\"3.2.4\",\"@vitest/ui\":\"3.2.4\",\"happy-dom\":\"*\",\"jsdom\":\"*\"},\"peerDependenciesMeta\":{\"@edge-runtime/vm\":{\"optional\":true},\"@types/debug\":{\"optional\":true},\"@types/node\":{\"optional\":true},\"@vitest/browser\":{\"optional\":true},\"@vitest/ui\":{\"optional\":true},\"happy-dom\":{\"optional\":true},\"jsdom\":{\"optional\":true}}},\"vitest@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-hOQuK7h0FGKgBAas7v0mSAsnvrIgAvWmRFjmzpJ7SwFHH3g1k2u37JtYwOwmEKhK6ZO3v9ggDBBm0La1LCK4uQ==\"},\"engines\":{\"node\":\"^20.0.0 || ^22.0.0 || >=24.0.0\"},\"hasBin\":true,\"peerDependencies\":{\"@edge-runtime/vm\":\"*\",\"@opentelemetry/api\":\"^1.9.0\",\"@types/node\":\"^20.0.0 || ^22.0.0 || 
>=24.0.0\",\"@vitest/browser-playwright\":\"4.0.18\",\"@vitest/browser-preview\":\"4.0.18\",\"@vitest/browser-webdriverio\":\"4.0.18\",\"@vitest/ui\":\"4.0.18\",\"happy-dom\":\"*\",\"jsdom\":\"*\"},\"peerDependenciesMeta\":{\"@edge-runtime/vm\":{\"optional\":true},\"@opentelemetry/api\":{\"optional\":true},\"@types/node\":{\"optional\":true},\"@vitest/browser-playwright\":{\"optional\":true},\"@vitest/browser-preview\":{\"optional\":true},\"@vitest/browser-webdriverio\":{\"optional\":true},\"@vitest/ui\":{\"optional\":true},\"happy-dom\":{\"optional\":true},\"jsdom\":{\"optional\":true}}},\"vitest@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-9Xx1v3/ih3m9hN+SbfkUyy0JAs72ap3r7joc87XL6jwF0jGg6mFBvQ1SrwaX+h8BlkX6Hz9shdd1uo6AF+ZGpg==\"},\"engines\":{\"node\":\"^20.0.0 || ^22.0.0 || >=24.0.0\"},\"hasBin\":true,\"peerDependencies\":{\"@edge-runtime/vm\":\"*\",\"@opentelemetry/api\":\"^1.9.0\",\"@types/node\":\"^20.0.0 || ^22.0.0 || >=24.0.0\",\"@vitest/browser-playwright\":\"4.1.5\",\"@vitest/browser-preview\":\"4.1.5\",\"@vitest/browser-webdriverio\":\"4.1.5\",\"@vitest/coverage-istanbul\":\"4.1.5\",\"@vitest/coverage-v8\":\"4.1.5\",\"@vitest/ui\":\"4.1.5\",\"happy-dom\":\"*\",\"jsdom\":\"*\",\"vite\":\"^6.0.0 || ^7.0.0 || 
^8.0.0\"},\"peerDependenciesMeta\":{\"@edge-runtime/vm\":{\"optional\":true},\"@opentelemetry/api\":{\"optional\":true},\"@types/node\":{\"optional\":true},\"@vitest/browser-playwright\":{\"optional\":true},\"@vitest/browser-preview\":{\"optional\":true},\"@vitest/browser-webdriverio\":{\"optional\":true},\"@vitest/coverage-istanbul\":{\"optional\":true},\"@vitest/coverage-v8\":{\"optional\":true},\"@vitest/ui\":{\"optional\":true},\"happy-dom\":{\"optional\":true},\"jsdom\":{\"optional\":true}}},\"vscode-jsonrpc@8.2.0\":{\"resolution\":{\"integrity\":\"sha512-C+r0eKJUIfiDIfwJhria30+TYWPtuHJXHtI7J0YlOmKAo7ogxP20T0zxB7HZQIFhIyvoBPwWskjxrvAtfjyZfA==\"},\"engines\":{\"node\":\">=14.0.0\"}},\"vscode-languageserver-protocol@3.17.5\":{\"resolution\":{\"integrity\":\"sha512-mb1bvRJN8SVznADSGWM9u/b07H7Ecg0I3OgXDuLdn307rl/J3A9YD6/eYOssqhecL27hK1IPZAsaqh00i/Jljg==\"}},\"vscode-languageserver-textdocument@1.0.12\":{\"resolution\":{\"integrity\":\"sha512-cxWNPesCnQCcMPeenjKKsOCKQZ/L6Tv19DTRIGuLWe32lyzWhihGVJ/rcckZXJxfdKCFvRLS3fpBIsV/ZGX4zA==\"}},\"vscode-languageserver-types@3.17.5\":{\"resolution\":{\"integrity\":\"sha512-Ld1VelNuX9pdF39h2Hgaeb5hEZM2Z3jUrrMgWQAu82jMtZp7p3vJT3BzToKtZI7NgQssZje5o0zryOrhQvzQAg==\"}},\"vscode-languageserver@9.0.1\":{\"resolution\":{\"integrity\":\"sha512-woByF3PDpkHFUreUa7Hos7+pUWdeWMXRd26+ZX2A8cFx6v/JPTtd4/uN0/jB6XQHYaOlHbio03NTHCqrgG5n7g==\"},\"hasBin\":true},\"vscode-uri@3.0.8\":{\"resolution\":{\"integrity\":\"sha512-AyFQ0EVmsOZOlAnxoFOGOq1SQDWAB7C6aqMGS23svWAllfOaxbuFvcT8D1i8z3Gyn8fraVeZNNmN6e9bxxXkKw==\"}},\"w3c-xmlserializer@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-o8qghlI8NZHU1lLPrpi2+Uq7abh4GGPpYANlalzWxyWteJOCsr/P+oPBA49TOLu5FTZO4d3F9MnWJfiMo4BkmA==\"},\"engines\":{\"node\":\">=18\"}},\"wait-port@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-3e04qkoN3LxTMLakdqeWth8nih8usyg+sf1Bgdf9wwUkp05iuK1eSY/QpLvscT/+F/gA89+LpUmmgBtesbqI2Q==\"},\"engines\":{\"node\":\">=10\"},\"hasBin\":true},\"watchpack@2.5.1\":{\"resolution\":{\"integrity\
":\"sha512-Zn5uXdcFNIA1+1Ei5McRd+iRzfhENPCe7LeABkJtNulSxjma+l7ltNx55BWZkRlwRnpOgHqxnjyaDgJnNXnqzg==\"},\"engines\":{\"node\":\">=10.13.0\"}},\"wcwidth@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-XHPEwS0q6TaxcvG85+8EYkbiCux2XtWG2mkc47Ng2A77BQu9+DqIOJldST4HgPkuea7dvKSj5VgX3P1d4rW8Tg==\"}},\"web-namespaces@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-bKr1DkiNa2krS7qxNtdrtHAmzuYGFQLiQ13TsorsdT6ULTkPLKuu5+GsFpDlg6JFjUTwX2DyhMPG2be8uPrqsQ==\"}},\"web-streams-polyfill@3.3.3\":{\"resolution\":{\"integrity\":\"sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw==\"},\"engines\":{\"node\":\">= 8\"}},\"web-vitals@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-r4DIlprAGwJ7YM11VZp4R884m0Vmgr6EAKe3P+kO0PPj3Unqyvv59rczf6UiGcb9Z8QxZVcqKNwv/g0WNdWwsw==\"}},\"web-vitals@5.1.0\":{\"resolution\":{\"integrity\":\"sha512-ArI3kx5jI0atlTtmV0fWU3fjpLmq/nD3Zr1iFFlJLaqa5wLBkUSzINwBPySCX/8jRyjlmy1Volw1kz1g9XE4Jg==\"}},\"webdriver@9.2.0\":{\"resolution\":{\"integrity\":\"sha512-UrhuHSLq4m3OgncvX75vShfl5w3gmjAy8LvLb6/L6V+a+xcqMRelFx/DQ72Mr84F4m8Li6wjtebrOH1t9V/uOQ==\"},\"engines\":{\"node\":\">=18.20.0\"}},\"webdriverio@9.2.1\":{\"resolution\":{\"integrity\":\"sha512-AI7xzqTmFiU7oAx4fpEF1U1MA7smhCPVDeM0gxPqG5qWepzib3WDX2SsRtcmhdVW+vLJ3m4bf8rAXxZ2M1msWA==\"},\"engines\":{\"node\":\">=18.20.0\"},\"peerDependencies\":{\"puppeteer-core\":\"^22.3.0\"},\"peerDependenciesMeta\":{\"puppeteer-core\":{\"optional\":true}}},\"webidl-conversions@7.0.0\":{\"resolution\":{\"integrity\":\"sha512-VwddBukDzu71offAQR975unBIGqfKZpM+8ZX6ySk8nYhVoo5CYaZyzt3YBvYtRtO+aoGlqxPg/B87NGVZ/fu6g==\"},\"engines\":{\"node\":\">=12\"}},\"webidl-conversions@8.0.0\":{\"resolution\":{\"integrity\":\"sha512-n4W4YFyz5JzOfQeA8oN7dUYpR+MBP3PIUsn2jLjWXwK5ASUzt0Jc/A5sAUZoCYFJRGF0FBKJ+1JjN43rNdsQzA==\"},\"engines\":{\"node\":\">=20\"}},\"webpack-sources@3.4.1\":{\"resolution\":{\"integrity\":\"sha512-eACpxRN02yaawnt+uUNIF7Qje6A9zArxBbcAJjK1PK3S9Ycg5jIuJ8pW4q8EMnwNZCEGltcjkRx1QzOxOkKD8A==\"
},\"engines\":{\"node\":\">=10.13.0\"}},\"webpack-virtual-modules@0.6.2\":{\"resolution\":{\"integrity\":\"sha512-66/V2i5hQanC51vBQKPH4aI8NMAcBW59FVBs+rC7eGHupMyfn34q7rZIE+ETlJ+XTevqfUhVVBgSUNSW2flEUQ==\"}},\"webpack@5.99.9\":{\"resolution\":{\"integrity\":\"sha512-brOPwM3JnmOa+7kd3NsmOUOwbDAj8FT9xDsG3IW0MgbN9yZV7Oi/s/+MNQ/EcSMqw7qfoRyXPoeEWT8zLVdVGg==\"},\"engines\":{\"node\":\">=10.13.0\"},\"hasBin\":true,\"peerDependencies\":{\"webpack-cli\":\"*\"},\"peerDependenciesMeta\":{\"webpack-cli\":{\"optional\":true}}},\"whatwg-encoding@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-6qN4hJdMwfYBtE3YBTTHhoeuUrDBPZmbQaxWAqSALV/MeEnR5z1xd8UKud2RAkFoPkmB+hli1TZSnyi84xz1vQ==\"},\"engines\":{\"node\":\">=18\"},\"deprecated\":\"Use @exodus/bytes instead for a more spec-conformant and faster implementation\"},\"whatwg-mimetype@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-nt+N2dzIutVRxARx1nghPKGv1xHikU7HKdfafKkLNLindmPU/ch3U31NOCGGA/dmPcmb1VlofO0vnKAcsm0o/Q==\"},\"engines\":{\"node\":\">=12\"}},\"whatwg-mimetype@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-QaKxh0eNIi2mE9p2vEdzfagOKHCcj1pJ56EEHGQOVxp8r9/iszLUUV7v89x9O1p/T+NlTM5W7jW6+cz4Fq1YVg==\"},\"engines\":{\"node\":\">=18\"}},\"whatwg-url@14.2.0\":{\"resolution\":{\"integrity\":\"sha512-De72GdQZzNTUBBChsXueQUnPKDkg/5A5zp7pFDuQAj5UFoENpiACU0wlCvzpAGnTkj++ihpKwKyYewn/XNUbKw==\"},\"engines\":{\"node\":\">=18\"}},\"whatwg-url@15.1.0\":{\"resolution\":{\"integrity\":\"sha512-2ytDk0kiEj/yu90JOAp44PVPUkO9+jVhyf+SybKlRHSDlvOOZhdPIrr7xTH64l4WixO2cP+wQIcgujkGBPPz6g==\"},\"engines\":{\"node\":\">=20\"}},\"which@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==\"},\"engines\":{\"node\":\">= 8\"},\"hasBin\":true},\"which@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-GlaYyEb07DPxYCKhKzplCWBJtvxZcZMrL+4UkrTSJHHPyZU4mYYTv3qaOe77H7EODLSSopAUFAc6W8U4yqvscg==\"},\"engines\":{\"node\":\"^16.13.0 || 
>=18.0.0\"},\"hasBin\":true},\"why-is-node-running@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-hUrmaWBdVDcxvYqnyh09zunKzROWjbZTiNy8dBEjkS7ehEDQibXJ7XvlmtbwuTclUiIyN+CyXQD4Vmko8fNm8w==\"},\"engines\":{\"node\":\">=8\"},\"hasBin\":true},\"workerd@1.20260504.1\":{\"resolution\":{\"integrity\":\"sha512-AQTXSHbYNP9tLPgJNn0TmizyE4aDh2VuZZXlTAL0uu4fbCY436NAnQSJIzZbaFHM3DnAtVs9G8tkiJztSdYqDg==\"},\"engines\":{\"node\":\">=16\"},\"hasBin\":true},\"wrangler@4.88.0\":{\"resolution\":{\"integrity\":\"sha512-f470QwbeT/JM1S0duq+sLtkss7UBxIFDtYHgujv9tdQUyA/dLGDq51am0rqrsuFtCi97lTM1P5sqtt8xra1AlA==\"},\"engines\":{\"node\":\">=22.0.0\"},\"hasBin\":true,\"peerDependencies\":{\"@cloudflare/workers-types\":\"^4.20260504.1\"},\"peerDependenciesMeta\":{\"@cloudflare/workers-types\":{\"optional\":true}}},\"wrap-ansi@6.2.0\":{\"resolution\":{\"integrity\":\"sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA==\"},\"engines\":{\"node\":\">=8\"}},\"wrap-ansi@7.0.0\":{\"resolution\":{\"integrity\":\"sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==\"},\"engines\":{\"node\":\">=10\"}},\"wrap-ansi@8.1.0\":{\"resolution\":{\"integrity\":\"sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ==\"},\"engines\":{\"node\":\">=12\"}},\"wrappy@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==\"}},\"ws@8.18.0\":{\"resolution\":{\"integrity\":\"sha512-8VbfWfHLbbwu3+N6OKsOMpBdT4kXPDDB9cJk2bJ6mh9ucxdlnNvH1e+roYkKmN9Nxw2yjz7VzeO9oOz2zJ04Pw==\"},\"engines\":{\"node\":\">=10.0.0\"},\"peerDependencies\":{\"bufferutil\":\"^4.0.1\",\"utf-8-validate\":\">=5.0.2\"},\"peerDependenciesMeta\":{\"bufferutil\":{\"optional\":true},\"utf-8-validate\":{\"optional\":true}}},\"ws@8.18.3\":{\"resolution\":{\"integrity\":\"sha512-PEIGCY5tSlUt50cqyMXfCzX+oOPqN0vuGqWzbcJ2xvnkzkq46oOpz7dQaTDBdfICb4N14+GARUDw2XV2
N4tvzg==\"},\"engines\":{\"node\":\">=10.0.0\"},\"peerDependencies\":{\"bufferutil\":\"^4.0.1\",\"utf-8-validate\":\">=5.0.2\"},\"peerDependenciesMeta\":{\"bufferutil\":{\"optional\":true},\"utf-8-validate\":{\"optional\":true}}},\"ws@8.20.0\":{\"resolution\":{\"integrity\":\"sha512-sAt8BhgNbzCtgGbt2OxmpuryO63ZoDk/sqaB/znQm94T4fCEsy/yV+7CdC1kJhOU9lboAEU7R3kquuycDoibVA==\"},\"engines\":{\"node\":\">=10.0.0\"},\"peerDependencies\":{\"bufferutil\":\"^4.0.1\",\"utf-8-validate\":\">=5.0.2\"},\"peerDependenciesMeta\":{\"bufferutil\":{\"optional\":true},\"utf-8-validate\":{\"optional\":true}}},\"xml-name-validator@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-EvGK8EJ3DhaHfbRlETOWAS5pO9MZITeauHKJyb8wyajUfQUenkIg2MvLDTZ4T/TgIcm3HU0TFBgWWboAZ30UHg==\"},\"engines\":{\"node\":\">=18\"}},\"xmlbuilder2@4.0.3\":{\"resolution\":{\"integrity\":\"sha512-bx8Q1STctnNaaDymWnkfQLKofs0mGNN7rLLapJlGuV3VlvegD7Ls4ggMjE3aUSWItCCzU0PEv45lI87iSigiCA==\"},\"engines\":{\"node\":\">=20.0\"}},\"xmlchars@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-JZnDKK8B0RCDw84FNdDAIpZK+JuJw+s7Lz8nksI7SIuU3UXJJslUthsi+uWBUYOwPFwW7W7PRLRfUKpxjtjFCw==\"}},\"y18n@5.0.8\":{\"resolution\":{\"integrity\":\"sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA==\"},\"engines\":{\"node\":\">=10\"}},\"yallist@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g==\"}},\"yallist@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==\"}},\"yaml@2.8.1\":{\"resolution\":{\"integrity\":\"sha512-lcYcMxX2PO9XMGvAJkJ3OsNMw+/7FKes7/hgerGUYWIoWu5j/+YQqcZr5JnPZWzOsEBgMbSbiSTn/dv/69Mkpw==\"},\"engines\":{\"node\":\">= 
14.6\"},\"hasBin\":true},\"yargs-parser@21.1.1\":{\"resolution\":{\"integrity\":\"sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw==\"},\"engines\":{\"node\":\">=12\"}},\"yargs-parser@22.0.0\":{\"resolution\":{\"integrity\":\"sha512-rwu/ClNdSMpkSrUb+d6BRsSkLUq1fmfsY6TOpYzTwvwkg1/NRG85KBy3kq++A8LKQwX6lsu+aWad+2khvuXrqw==\"},\"engines\":{\"node\":\"^20.19.0 || ^22.12.0 || >=23\"}},\"yargs@17.7.2\":{\"resolution\":{\"integrity\":\"sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w==\"},\"engines\":{\"node\":\">=12\"}},\"yauzl@2.10.0\":{\"resolution\":{\"integrity\":\"sha512-p4a9I6X6nu6IhoGmBqAcbJy1mlC4j27vEPZX9F4L4/vZT3Lyq1VkFHw/V/PUcB9Buo+DG3iHkT0x3Qya58zc3g==\"}},\"yoctocolors-cjs@2.1.3\":{\"resolution\":{\"integrity\":\"sha512-U/PBtDf35ff0D8X8D0jfdzHYEPFxAI7jJlxZXwCSez5M3190m+QobIfh+sWDWSHMCWWJN2AWamkegn6vr6YBTw==\"},\"engines\":{\"node\":\">=18\"}},\"youch-core@0.3.3\":{\"resolution\":{\"integrity\":\"sha512-ho7XuGjLaJ2hWHoK8yFnsUGy2Y5uDpqSTq1FkHLK4/oqKtyUU1AFbOOxY4IpC9f0fTLjwYbslUz0Po5BpD1wrA==\"}},\"youch@4.1.0-beta.10\":{\"resolution\":{\"integrity\":\"sha512-rLfVLB4FgQneDr0dv1oddCVZmKjcJ6yX6mS4pU82Mq/Dt9a3cLZQ62pDBL4AUO+uVrCvtWz3ZFUL2HFAFJ/BXQ==\"}},\"zip-stream@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-zK7YHHz4ZXpW89AHXUPbQVGKI7uvkd3hzusTdotCg1UxyaVtg0zFJSTfW/Dq5f7OBBVnq6cZIaC8Ti4hb6dtCA==\"},\"engines\":{\"node\":\">= 
14\"}},\"zod@3.25.76\":{\"resolution\":{\"integrity\":\"sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ==\"}},\"zwitch@2.0.4\":{\"resolution\":{\"integrity\":\"sha512-bXE4cR/kVZhKZX/RjPEflHaKVhUVl85noU3v6b8apfQEc1x4A+zBxjZ4lN8LqGd6WZ3dl98pY4o717VFmoPp+A==\"}}},\"snapshots\":{\"@acemir/cssom@0.9.28\":{},\"@ampproject/remapping@2.3.0\":{\"dependencies\":{\"@jridgewell/gen-mapping\":\"0.3.13\",\"@jridgewell/trace-mapping\":\"0.3.30\"}},\"@antfu/install-pkg@1.1.0\":{\"dependencies\":{\"package-manager-detector\":\"1.5.0\",\"tinyexec\":\"1.0.2\"}},\"@antfu/utils@9.3.0\":{},\"@asamuzakjp/css-color@3.1.4\":{\"dependencies\":{\"@csstools/css-calc\":\"2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-color-parser\":\"3.1.0(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-parser-algorithms\":\"3.0.5(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-tokenizer\":\"3.0.4\",\"lru-cache\":\"10.4.3\"}},\"@asamuzakjp/css-color@4.1.0\":{\"dependencies\":{\"@csstools/css-calc\":\"2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-color-parser\":\"3.1.0(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-parser-algorithms\":\"3.0.5(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-tokenizer\":\"3.0.4\",\"lru-cache\":\"11.2.4\"}},\"@asamuzakjp/dom-selector@6.7.6\":{\"dependencies\":{\"@asamuzakjp/nwsapi\":\"2.3.9\",\"bidi-js\":\"1.0.3\",\"css-tree\":\"3.1.0\",\"is-potential-custom-element-name\":\"1.0.1\",\"lru-cache\":\"11.2.4\"}},\"@asamuzakjp/nwsapi@2.3.9\":{},\"@babel/code-frame@7.27.1\":{\"dependencies\":{\"@babel/helper-validator-identifier\":\"7.28.5\",\"js-tokens\":\"4.0.0\",\"picocolors\":\"1.1.1\"}},\"@babel/compat-data@7.28.0\":{},\"@babel/core@7.28.5\
":{\"dependencies\":{\"@babel/code-frame\":\"7.27.1\",\"@babel/generator\":\"7.28.5\",\"@babel/helper-compilation-targets\":\"7.27.2\",\"@babel/helper-module-transforms\":\"7.28.3(@babel/core@7.28.5)\",\"@babel/helpers\":\"7.28.4\",\"@babel/parser\":\"7.28.5\",\"@babel/template\":\"7.27.2\",\"@babel/traverse\":\"7.28.5\",\"@babel/types\":\"7.28.5\",\"@jridgewell/remapping\":\"2.3.5\",\"convert-source-map\":\"2.0.0\",\"debug\":\"4.4.3\",\"gensync\":\"1.0.0-beta.2\",\"json5\":\"2.2.3\",\"semver\":\"6.3.1\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@babel/generator@7.28.5\":{\"dependencies\":{\"@babel/parser\":\"7.28.5\",\"@babel/types\":\"7.28.5\",\"@jridgewell/gen-mapping\":\"0.3.13\",\"@jridgewell/trace-mapping\":\"0.3.31\",\"jsesc\":\"3.1.0\"}},\"@babel/helper-compilation-targets@7.27.2\":{\"dependencies\":{\"@babel/compat-data\":\"7.28.0\",\"@babel/helper-validator-option\":\"7.27.1\",\"browserslist\":\"4.25.3\",\"lru-cache\":\"5.1.1\",\"semver\":\"6.3.1\"}},\"@babel/helper-globals@7.28.0\":{},\"@babel/helper-module-imports@7.27.1\":{\"dependencies\":{\"@babel/traverse\":\"7.28.5\",\"@babel/types\":\"7.28.5\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@babel/helper-module-transforms@7.28.3(@babel/core@7.28.5)\":{\"dependencies\":{\"@babel/core\":\"7.28.5\",\"@babel/helper-module-imports\":\"7.27.1\",\"@babel/helper-validator-identifier\":\"7.28.5\",\"@babel/traverse\":\"7.28.5\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@babel/helper-plugin-utils@7.27.1\":{},\"@babel/helper-string-parser@7.27.1\":{},\"@babel/helper-validator-identifier@7.28.5\":{},\"@babel/helper-validator-option@7.27.1\":{},\"@babel/helpers@7.28.4\":{\"dependencies\":{\"@babel/template\":\"7.27.2\",\"@babel/types\":\"7.28.5\"}},\"@babel/parser@7.28.5\":{\"dependencies\":{\"@babel/types\":\"7.28.5\"}},\"@babel/parser@7.29.3\":{\"dependencies\":{\"@babel/types\":\"7.29.0\"}},\"@babel/plugin-syntax-jsx@7.27.1(@babel/core@7.28.5)\":{\"dependencies\":{\"@babel/
core\":\"7.28.5\",\"@babel/helper-plugin-utils\":\"7.27.1\"}},\"@babel/plugin-syntax-typescript@7.27.1(@babel/core@7.28.5)\":{\"dependencies\":{\"@babel/core\":\"7.28.5\",\"@babel/helper-plugin-utils\":\"7.27.1\"}},\"@babel/runtime@7.28.4\":{},\"@babel/template@7.27.2\":{\"dependencies\":{\"@babel/code-frame\":\"7.27.1\",\"@babel/parser\":\"7.28.5\",\"@babel/types\":\"7.28.5\"}},\"@babel/traverse@7.28.5\":{\"dependencies\":{\"@babel/code-frame\":\"7.27.1\",\"@babel/generator\":\"7.28.5\",\"@babel/helper-globals\":\"7.28.0\",\"@babel/parser\":\"7.28.5\",\"@babel/template\":\"7.27.2\",\"@babel/types\":\"7.28.5\",\"debug\":\"4.4.3\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@babel/types@7.28.5\":{\"dependencies\":{\"@babel/helper-string-parser\":\"7.27.1\",\"@babel/helper-validator-identifier\":\"7.28.5\"}},\"@babel/types@7.29.0\":{\"dependencies\":{\"@babel/helper-string-parser\":\"7.27.1\",\"@babel/helper-validator-identifier\":\"7.28.5\"}},\"@bcoe/v8-coverage@1.0.2\":{},\"@blazediff/core@1.9.1\":{},\"@braintree/sanitize-url@7.1.1\":{},\"@bufbuild/protobuf@2.12.0\":{\"optional\":true},\"@bundled-es-modules/cookie@2.0.1\":{\"dependencies\":{\"cookie\":\"0.7.2\"},\"optional\":true},\"@bundled-es-modules/statuses@1.0.1\":{\"dependencies\":{\"statuses\":\"2.0.2\"},\"optional\":true},\"@bundled-es-modules/tough-cookie@0.1.6\":{\"dependencies\":{\"@types/tough-cookie\":\"4.0.5\",\"tough-cookie\":\"4.1.4\"},\"optional\":true},\"@changesets/apply-release-plan@7.0.13\":{\"dependencies\":{\"@changesets/config\":\"3.1.1\",\"@changesets/get-version-range-type\":\"0.4.0\",\"@changesets/git\":\"3.0.4\",\"@changesets/should-skip-package\":\"0.1.2\",\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\",\"detect-indent\":\"6.1.0\",\"fs-extra\":\"7.0.1\",\"lodash.startcase\":\"4.4.0\",\"outdent\":\"0.5.0\",\"prettier\":\"2.8.8\",\"resolve-from\":\"5.0.0\",\"semver\":\"7.7.3\"}},\"@changesets/assemble-release-plan@6.0.9\":{\"dependencies\":{\"@changesets/err
ors\":\"0.2.0\",\"@changesets/get-dependents-graph\":\"2.1.3\",\"@changesets/should-skip-package\":\"0.1.2\",\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\",\"semver\":\"7.7.3\"}},\"@changesets/changelog-git@0.2.1\":{\"dependencies\":{\"@changesets/types\":\"6.1.0\"}},\"@changesets/cli@2.29.7(@types/node@24.10.2)\":{\"dependencies\":{\"@changesets/apply-release-plan\":\"7.0.13\",\"@changesets/assemble-release-plan\":\"6.0.9\",\"@changesets/changelog-git\":\"0.2.1\",\"@changesets/config\":\"3.1.1\",\"@changesets/errors\":\"0.2.0\",\"@changesets/get-dependents-graph\":\"2.1.3\",\"@changesets/get-release-plan\":\"4.0.13\",\"@changesets/git\":\"3.0.4\",\"@changesets/logger\":\"0.1.1\",\"@changesets/pre\":\"2.0.2\",\"@changesets/read\":\"0.6.5\",\"@changesets/should-skip-package\":\"0.1.2\",\"@changesets/types\":\"6.1.0\",\"@changesets/write\":\"0.4.0\",\"@inquirer/external-editor\":\"1.0.1(@types/node@24.10.2)\",\"@manypkg/get-packages\":\"1.1.3\",\"ansi-colors\":\"4.1.3\",\"ci-info\":\"3.9.0\",\"enquirer\":\"2.4.1\",\"fs-extra\":\"7.0.1\",\"mri\":\"1.2.0\",\"p-limit\":\"2.3.0\",\"package-manager-detector\":\"0.2.11\",\"picocolors\":\"1.1.1\",\"resolve-from\":\"5.0.0\",\"semver\":\"7.7.3\",\"spawndamnit\":\"3.0.1\",\"term-size\":\"2.2.1\"},\"transitivePeerDependencies\":[\"@types/node\"]},\"@changesets/config@3.1.1\":{\"dependencies\":{\"@changesets/errors\":\"0.2.0\",\"@changesets/get-dependents-graph\":\"2.1.3\",\"@changesets/logger\":\"0.1.1\",\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\",\"fs-extra\":\"7.0.1\",\"micromatch\":\"4.0.8\"}},\"@changesets/errors@0.2.0\":{\"dependencies\":{\"extendable-error\":\"0.1.7\"}},\"@changesets/get-dependents-graph@2.1.3\":{\"dependencies\":{\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\",\"picocolors\":\"1.1.1\",\"semver\":\"7.7.3\"}},\"@changesets/get-release-plan@4.0.13\":{\"dependencies\":{\"@changesets/assemble-release-plan\":\"6.0.9\",\"@changesets/config\":\"3.1.
1\",\"@changesets/pre\":\"2.0.2\",\"@changesets/read\":\"0.6.5\",\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\"}},\"@changesets/get-version-range-type@0.4.0\":{},\"@changesets/git@3.0.4\":{\"dependencies\":{\"@changesets/errors\":\"0.2.0\",\"@manypkg/get-packages\":\"1.1.3\",\"is-subdir\":\"1.2.0\",\"micromatch\":\"4.0.8\",\"spawndamnit\":\"3.0.1\"}},\"@changesets/logger@0.1.1\":{\"dependencies\":{\"picocolors\":\"1.1.1\"}},\"@changesets/parse@0.4.1\":{\"dependencies\":{\"@changesets/types\":\"6.1.0\",\"js-yaml\":\"3.14.1\"}},\"@changesets/pre@2.0.2\":{\"dependencies\":{\"@changesets/errors\":\"0.2.0\",\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\",\"fs-extra\":\"7.0.1\"}},\"@changesets/read@0.6.5\":{\"dependencies\":{\"@changesets/git\":\"3.0.4\",\"@changesets/logger\":\"0.1.1\",\"@changesets/parse\":\"0.4.1\",\"@changesets/types\":\"6.1.0\",\"fs-extra\":\"7.0.1\",\"p-filter\":\"2.1.0\",\"picocolors\":\"1.1.1\"}},\"@changesets/should-skip-package@0.1.2\":{\"dependencies\":{\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\"}},\"@changesets/types@4.1.0\":{},\"@changesets/types@6.1.0\":{},\"@changesets/write@0.4.0\":{\"dependencies\":{\"@changesets/types\":\"6.1.0\",\"fs-extra\":\"7.0.1\",\"human-id\":\"4.1.1\",\"prettier\":\"2.8.8\"}},\"@chevrotain/cst-dts-gen@11.0.3\":{\"dependencies\":{\"@chevrotain/gast\":\"11.0.3\",\"@chevrotain/types\":\"11.0.3\",\"lodash-es\":\"4.17.21\"}},\"@chevrotain/gast@11.0.3\":{\"dependencies\":{\"@chevrotain/types\":\"11.0.3\",\"lodash-es\":\"4.17.21\"}},\"@chevrotain/regexp-to-ast@11.0.3\":{},\"@chevrotain/types@11.0.3\":{},\"@chevrotain/utils@11.0.3\":{},\"@cloudflare/kv-asset-handler@0.5.0\":{},\"@cloudflare/unenv-preset@2.16.1(unenv@2.0.0-rc.24)(workerd@1.20260504.1)\":{\"dependencies\":{\"unenv\":\"2.0.0-rc.24\"},\"optionalDependencies\":{\"workerd\":\"1.20260504.1\"}},\"@cloudflare/vite-plugin@1.36.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedd
ed@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(workerd@1.20260504.1)(wrangler@4.88.0)\":{\"dependencies\":{\"@cloudflare/unenv-preset\":\"2.16.1(unenv@2.0.0-rc.24)(workerd@1.20260504.1)\",\"miniflare\":\"4.20260504.0\",\"unenv\":\"2.0.0-rc.24\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"wrangler\":\"4.88.0\",\"ws\":\"8.18.0\"},\"transitivePeerDependencies\":[\"bufferutil\",\"utf-8-validate\",\"workerd\"]},\"@cloudflare/workerd-darwin-64@1.20260504.1\":{\"optional\":true},\"@cloudflare/workerd-darwin-arm64@1.20260504.1\":{\"optional\":true},\"@cloudflare/workerd-linux-64@1.20260504.1\":{\"optional\":true},\"@cloudflare/workerd-linux-arm64@1.20260504.1\":{\"optional\":true},\"@cloudflare/workerd-windows-64@1.20260504.1\":{\"optional\":true},\"@cspotcode/source-map-support@0.8.1\":{\"dependencies\":{\"@jridgewell/trace-mapping\":\"0.3.9\"}},\"@csstools/color-helpers@5.1.0\":{},\"@csstools/css-calc@2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\":{\"dependencies\":{\"@csstools/css-parser-algorithms\":\"3.0.5(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-tokenizer\":\"3.0.4\"}},\"@csstools/css-color-parser@3.1.0(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\":{\"dependencies\":{\"@csstools/color-helpers\":\"5.1.0\",\"@csstools/css-calc\":\"2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-parser-algorithms\":\"3.0.5(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-tokenizer\":\"3.0.4\"}},\"@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4)\":{\"dependencies\":{\"@csstools/css-tokenizer\":\"3.0.4\"}},\"@csstools/css-syntax-patches-for-csstree@1.0.14(postcss@8.5.14)\":{\"dependencies\":{\"postcss\":\"8.5.14\"}},\"@csstools/css-tokenizer@3.0.4\":{},\"@emnapi/core@1.10.0\":{\
"dependencies\":{\"@emnapi/wasi-threads\":\"1.2.1\",\"tslib\":\"2.8.1\"},\"optional\":true},\"@emnapi/core@1.4.5\":{\"dependencies\":{\"@emnapi/wasi-threads\":\"1.0.4\",\"tslib\":\"2.8.1\"}},\"@emnapi/runtime@1.10.0\":{\"dependencies\":{\"tslib\":\"2.8.1\"},\"optional\":true},\"@emnapi/runtime@1.4.5\":{\"dependencies\":{\"tslib\":\"2.8.1\"}},\"@emnapi/wasi-threads@1.0.4\":{\"dependencies\":{\"tslib\":\"2.8.1\"}},\"@emnapi/wasi-threads@1.2.1\":{\"dependencies\":{\"tslib\":\"2.8.1\"},\"optional\":true},\"@esbuild/aix-ppc64@0.25.12\":{\"optional\":true},\"@esbuild/aix-ppc64@0.27.3\":{\"optional\":true},\"@esbuild/android-arm64@0.25.12\":{\"optional\":true},\"@esbuild/android-arm64@0.27.3\":{\"optional\":true},\"@esbuild/android-arm@0.25.12\":{\"optional\":true},\"@esbuild/android-arm@0.27.3\":{\"optional\":true},\"@esbuild/android-x64@0.25.12\":{\"optional\":true},\"@esbuild/android-x64@0.27.3\":{\"optional\":true},\"@esbuild/darwin-arm64@0.25.12\":{\"optional\":true},\"@esbuild/darwin-arm64@0.27.3\":{\"optional\":true},\"@esbuild/darwin-x64@0.25.12\":{\"optional\":true},\"@esbuild/darwin-x64@0.27.3\":{\"optional\":true},\"@esbuild/freebsd-arm64@0.25.12\":{\"optional\":true},\"@esbuild/freebsd-arm64@0.27.3\":{\"optional\":true},\"@esbuild/freebsd-x64@0.25.12\":{\"optional\":true},\"@esbuild/freebsd-x64@0.27.3\":{\"optional\":true},\"@esbuild/linux-arm64@0.25.12\":{\"optional\":true},\"@esbuild/linux-arm64@0.27.3\":{\"optional\":true},\"@esbuild/linux-arm@0.25.12\":{\"optional\":true},\"@esbuild/linux-arm@0.27.3\":{\"optional\":true},\"@esbuild/linux-ia32@0.25.12\":{\"optional\":true},\"@esbuild/linux-ia32@0.27.3\":{\"optional\":true},\"@esbuild/linux-loong64@0.25.12\":{\"optional\":true},\"@esbuild/linux-loong64@0.27.3\":{\"optional\":true},\"@esbuild/linux-mips64el@0.25.12\":{\"optional\":true},\"@esbuild/linux-mips64el@0.27.3\":{\"optional\":true},\"@esbuild/linux-ppc64@0.25.12\":{\"optional\":true},\"@esbuild/linux-ppc64@0.27.3\":{\"optional\":true},\"@esbuild/linux
-riscv64@0.25.12\":{\"optional\":true},\"@esbuild/linux-riscv64@0.27.3\":{\"optional\":true},\"@esbuild/linux-s390x@0.25.12\":{\"optional\":true},\"@esbuild/linux-s390x@0.27.3\":{\"optional\":true},\"@esbuild/linux-x64@0.25.12\":{\"optional\":true},\"@esbuild/linux-x64@0.27.3\":{\"optional\":true},\"@esbuild/netbsd-arm64@0.25.12\":{\"optional\":true},\"@esbuild/netbsd-arm64@0.27.3\":{\"optional\":true},\"@esbuild/netbsd-x64@0.25.12\":{\"optional\":true},\"@esbuild/netbsd-x64@0.27.3\":{\"optional\":true},\"@esbuild/openbsd-arm64@0.25.12\":{\"optional\":true},\"@esbuild/openbsd-arm64@0.27.3\":{\"optional\":true},\"@esbuild/openbsd-x64@0.25.12\":{\"optional\":true},\"@esbuild/openbsd-x64@0.27.3\":{\"optional\":true},\"@esbuild/openharmony-arm64@0.25.12\":{\"optional\":true},\"@esbuild/openharmony-arm64@0.27.3\":{\"optional\":true},\"@esbuild/sunos-x64@0.25.12\":{\"optional\":true},\"@esbuild/sunos-x64@0.27.3\":{\"optional\":true},\"@esbuild/win32-arm64@0.25.12\":{\"optional\":true},\"@esbuild/win32-arm64@0.27.3\":{\"optional\":true},\"@esbuild/win32-ia32@0.25.12\":{\"optional\":true},\"@esbuild/win32-ia32@0.27.3\":{\"optional\":true},\"@esbuild/win32-x64@0.25.12\":{\"optional\":true},\"@esbuild/win32-x64@0.27.3\":{\"optional\":true},\"@iconify/types@2.0.0\":{},\"@iconify/utils@3.0.2\":{\"dependencies\":{\"@antfu/install-pkg\":\"1.1.0\",\"@antfu/utils\":\"9.3.0\",\"@iconify/types\":\"2.0.0\",\"debug\":\"4.4.3\",\"globals\":\"15.15.0\",\"kolorist\":\"1.8.0\",\"local-pkg\":\"1.1.2\",\"mlly\":\"1.8.0\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@img/colour@1.1.0\":{},\"@img/sharp-darwin-arm64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-darwin-arm64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-darwin-x64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-darwin-x64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-libvips-darwin-arm64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-darwin-x64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linux
-arm64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linux-arm@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linux-ppc64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linux-riscv64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linux-s390x@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linux-x64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linuxmusl-arm64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linuxmusl-x64@1.2.4\":{\"optional\":true},\"@img/sharp-linux-arm64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linux-arm64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linux-arm@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linux-arm\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linux-ppc64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linux-ppc64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linux-riscv64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linux-riscv64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linux-s390x@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linux-s390x\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linux-x64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linux-x64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linuxmusl-arm64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linuxmusl-arm64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linuxmusl-x64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linuxmusl-x64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-wasm32@0.34.5\":{\"dependencies\":{\"@emnapi/runtime\":\"1.10.0\"},\"optional\":true},\"@img/sharp-win32-arm64@0.34.5\":{\"optional\":true},\"@img/sharp-win32-ia32@0.34.5\":{\"optional\":true},\"@img/sharp-win32-x64@0.34.5\":{\"optional\":true},\"@inquirer/ansi@1.0.2\":{\"optional\":true},\"@inquirer/confirm@5.1.21(@types/node@22.15.33)\":{\"dependencies\":{\"@inquirer/core\":\"10.3.2(@types/node@22.15.33)\",\"@inquirer/type\":\"3.0.10(@types/node@22.15.33)\"},\"optionalDependencies\":{\"@types/node\
":\"22.15.33\"},\"optional\":true},\"@inquirer/confirm@5.1.21(@types/node@24.10.2)\":{\"dependencies\":{\"@inquirer/core\":\"10.3.2(@types/node@24.10.2)\",\"@inquirer/type\":\"3.0.10(@types/node@24.10.2)\"},\"optionalDependencies\":{\"@types/node\":\"24.10.2\"},\"optional\":true},\"@inquirer/core@10.3.2(@types/node@22.15.33)\":{\"dependencies\":{\"@inquirer/ansi\":\"1.0.2\",\"@inquirer/figures\":\"1.0.15\",\"@inquirer/type\":\"3.0.10(@types/node@22.15.33)\",\"cli-width\":\"4.1.0\",\"mute-stream\":\"2.0.0\",\"signal-exit\":\"4.1.0\",\"wrap-ansi\":\"6.2.0\",\"yoctocolors-cjs\":\"2.1.3\"},\"optionalDependencies\":{\"@types/node\":\"22.15.33\"},\"optional\":true},\"@inquirer/core@10.3.2(@types/node@24.10.2)\":{\"dependencies\":{\"@inquirer/ansi\":\"1.0.2\",\"@inquirer/figures\":\"1.0.15\",\"@inquirer/type\":\"3.0.10(@types/node@24.10.2)\",\"cli-width\":\"4.1.0\",\"mute-stream\":\"2.0.0\",\"signal-exit\":\"4.1.0\",\"wrap-ansi\":\"6.2.0\",\"yoctocolors-cjs\":\"2.1.3\"},\"optionalDependencies\":{\"@types/node\":\"24.10.2\"},\"optional\":true},\"@inquirer/external-editor@1.0.1(@types/node@24.10.2)\":{\"dependencies\":{\"chardet\":\"2.1.0\",\"iconv-lite\":\"0.6.3\"},\"optionalDependencies\":{\"@types/node\":\"24.10.2\"}},\"@inquirer/figures@1.0.15\":{\"optional\":true},\"@inquirer/type@3.0.10(@types/node@22.15.33)\":{\"optionalDependencies\":{\"@types/node\":\"22.15.33\"},\"optional\":true},\"@inquirer/type@3.0.10(@types/node@24.10.2)\":{\"optionalDependencies\":{\"@types/node\":\"24.10.2\"},\"optional\":true},\"@isaacs/cliui@8.0.2\":{\"dependencies\":{\"string-width\":\"5.1.2\",\"string-width-cjs\":\"string-width@4.2.3\",\"strip-ansi\":\"7.1.2\",\"strip-ansi-cjs\":\"strip-ansi@6.0.1\",\"wrap-ansi\":\"8.1.0\",\"wrap-ansi-cjs\":\"wrap-ansi@7.0.0\"}},\"@istanbuljs/schema@0.1.3\":{},\"@jest/diff-sequences@30.0.1\":{},\"@jest/get-type@30.1.0\":{},\"@jest/schemas@30.0.5\":{\"dependencies\":{\"@sinclair/typebox\":\"0.34.40\"}},\"@jridgewell/gen-mapping@0.3.13\":{\"dependencies\":{
\"@jridgewell/sourcemap-codec\":\"1.5.5\",\"@jridgewell/trace-mapping\":\"0.3.31\"}},\"@jridgewell/remapping@2.3.5\":{\"dependencies\":{\"@jridgewell/gen-mapping\":\"0.3.13\",\"@jridgewell/trace-mapping\":\"0.3.31\"}},\"@jridgewell/resolve-uri@3.1.2\":{},\"@jridgewell/source-map@0.3.11\":{\"dependencies\":{\"@jridgewell/gen-mapping\":\"0.3.13\",\"@jridgewell/trace-mapping\":\"0.3.31\"},\"optional\":true},\"@jridgewell/sourcemap-codec@1.5.5\":{},\"@jridgewell/trace-mapping@0.3.30\":{\"dependencies\":{\"@jridgewell/resolve-uri\":\"3.1.2\",\"@jridgewell/sourcemap-codec\":\"1.5.5\"}},\"@jridgewell/trace-mapping@0.3.31\":{\"dependencies\":{\"@jridgewell/resolve-uri\":\"3.1.2\",\"@jridgewell/sourcemap-codec\":\"1.5.5\"}},\"@jridgewell/trace-mapping@0.3.9\":{\"dependencies\":{\"@jridgewell/resolve-uri\":\"3.1.2\",\"@jridgewell/sourcemap-codec\":\"1.5.5\"}},\"@jsonjoy.com/buffers@17.63.0(tslib@2.8.1)\":{\"dependencies\":{\"tslib\":\"2.8.1\"}},\"@jsonjoy.com/codegen@17.63.0(tslib@2.8.1)\":{\"dependencies\":{\"tslib\":\"2.8.1\"}},\"@jsonjoy.com/json-pointer@17.63.0(tslib@2.8.1)\":{\"dependencies\":{\"@jsonjoy.com/util\":\"17.63.0(tslib@2.8.1)\",\"tslib\":\"2.8.1\"}},\"@jsonjoy.com/util@17.63.0(tslib@2.8.1)\":{\"dependencies\":{\"@jsonjoy.com/buffers\":\"17.63.0(tslib@2.8.1)\",\"@jsonjoy.com/codegen\":\"17.63.0(tslib@2.8.1)\",\"tslib\":\"2.8.1\"}},\"@lix-js/plugin-json@1.0.1(tslib@2.8.1)\":{\"dependencies\":{\"@jsonjoy.com/json-pointer\":\"17.63.0(tslib@2.8.1)\",\"@lix-js/sdk\":\"0.5.1\"},\"transitivePeerDependencies\":[\"tslib\"]},\"@lix-js/sdk@0.5.1\":{\"dependencies\":{\"@lix-js/server-protocol-schema\":\"0.1.1\",\"@marcbachmann/cel-js\":\"2.5.2\",\"@opral/zettel-ast\":\"0.1.0\",\"@sqlite.org/sqlite-wasm\":\"3.50.4-build1\",\"ajv\":\"8.17.1\",\"chevrotain\":\"11.0.3\",\"kysely\":\"0.28.7\",\"uuid\":\"11.1.0\"}},\"@lix-js/server-protocol-schema@0.1.1\":{},\"@manypkg/find-root@1.1.0\":{\"dependencies\":{\"@babel/runtime\":\"7.28.4\",\"@types/node\":\"12.20.55\",\"find-up\":\"
4.1.0\",\"fs-extra\":\"8.1.0\"}},\"@manypkg/get-packages@1.1.3\":{\"dependencies\":{\"@babel/runtime\":\"7.28.4\",\"@changesets/types\":\"4.1.0\",\"@manypkg/find-root\":\"1.1.0\",\"fs-extra\":\"8.1.0\",\"globby\":\"11.1.0\",\"read-yaml-file\":\"1.1.0\"}},\"@marcbachmann/cel-js@2.5.2\":{},\"@mermaid-js/parser@0.6.3\":{\"dependencies\":{\"langium\":\"3.3.1\"}},\"@mswjs/interceptors@0.39.8\":{\"dependencies\":{\"@open-draft/deferred-promise\":\"2.2.0\",\"@open-draft/logger\":\"0.3.0\",\"@open-draft/until\":\"2.1.0\",\"is-node-process\":\"1.2.0\",\"outvariant\":\"1.4.3\",\"strict-event-emitter\":\"0.5.1\"},\"optional\":true},\"@napi-rs/wasm-runtime@0.2.4\":{\"dependencies\":{\"@emnapi/core\":\"1.4.5\",\"@emnapi/runtime\":\"1.4.5\",\"@tybys/wasm-util\":\"0.9.0\"}},\"@napi-rs/wasm-runtime@1.1.4(@emnapi/core@1.10.0)(@emnapi/runtime@1.10.0)\":{\"dependencies\":{\"@emnapi/core\":\"1.10.0\",\"@emnapi/runtime\":\"1.10.0\",\"@tybys/wasm-util\":\"0.10.2\"},\"optional\":true},\"@nodelib/fs.scandir@2.1.5\":{\"dependencies\":{\"@nodelib/fs.stat\":\"2.0.5\",\"run-parallel\":\"1.2.0\"}},\"@nodelib/fs.stat@2.0.5\":{},\"@nodelib/fs.walk@1.2.8\":{\"dependencies\":{\"@nodelib/fs.scandir\":\"2.1.5\",\"fastq\":\"1.17.1\"}},\"@nrwl/nx-cloud@19.1.0\":{\"dependencies\":{\"nx-cloud\":\"19.1.0\"},\"transitivePeerDependencies\":[\"debug\"]},\"@nx/nx-darwin-arm64@21.4.1\":{\"optional\":true},\"@nx/nx-darwin-x64@21.4.1\":{\"optional\":true},\"@nx/nx-freebsd-x64@21.4.1\":{\"optional\":true},\"@nx/nx-linux-arm-gnueabihf@21.4.1\":{\"optional\":true},\"@nx/nx-linux-arm64-gnu@21.4.1\":{\"optional\":true},\"@nx/nx-linux-arm64-musl@21.4.1\":{\"optional\":true},\"@nx/nx-linux-x64-gnu@21.4.1\":{\"optional\":true},\"@nx/nx-linux-x64-musl@21.4.1\":{\"optional\":true},\"@nx/nx-win32-arm64-msvc@21.4.1\":{\"optional\":true},\"@nx/nx-win32-x64-msvc@21.4.1\":{\"optional\":true},\"@oozcitak/dom@2.0.2\":{\"dependencies\":{\"@oozcitak/infra\":\"2.0.2\",\"@oozcitak/url\":\"3.0.0\",\"@oozcitak/util\":\"10.0.0\"}},\"@o
ozcitak/infra@2.0.2\":{\"dependencies\":{\"@oozcitak/util\":\"10.0.0\"}},\"@oozcitak/url@3.0.0\":{\"dependencies\":{\"@oozcitak/infra\":\"2.0.2\",\"@oozcitak/util\":\"10.0.0\"}},\"@oozcitak/util@10.0.0\":{},\"@open-draft/deferred-promise@2.2.0\":{\"optional\":true},\"@open-draft/logger@0.3.0\":{\"dependencies\":{\"is-node-process\":\"1.2.0\",\"outvariant\":\"1.4.3\"},\"optional\":true},\"@open-draft/until@2.1.0\":{\"optional\":true},\"@opentelemetry/api-logs@0.208.0\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\"}},\"@opentelemetry/api@1.9.0\":{},\"@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/semantic-conventions\":\"1.38.0\"}},\"@opentelemetry/core@2.4.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/semantic-conventions\":\"1.38.0\"}},\"@opentelemetry/exporter-logs-otlp-http@0.208.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/api-logs\":\"0.208.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/otlp-exporter-base\":\"0.208.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/otlp-transformer\":\"0.208.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/sdk-logs\":\"0.208.0(@opentelemetry/api@1.9.0)\"}},\"@opentelemetry/otlp-exporter-base@0.208.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/otlp-transformer\":\"0.208.0(@opentelemetry/api@1.9.0)\"}},\"@opentelemetry/otlp-transformer@0.208.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/api-logs\":\"0.208.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/resources\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/sdk-logs\":\"0.208.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/sdk-metrics\":\"2.2.0(@opentelemetry/api@1
.9.0)\",\"@opentelemetry/sdk-trace-base\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"protobufjs\":\"7.5.4\"}},\"@opentelemetry/resources@2.2.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/semantic-conventions\":\"1.38.0\"}},\"@opentelemetry/resources@2.4.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/core\":\"2.4.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/semantic-conventions\":\"1.38.0\"}},\"@opentelemetry/sdk-logs@0.208.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/api-logs\":\"0.208.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/resources\":\"2.2.0(@opentelemetry/api@1.9.0)\"}},\"@opentelemetry/sdk-metrics@2.2.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/resources\":\"2.2.0(@opentelemetry/api@1.9.0)\"}},\"@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/resources\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/semantic-conventions\":\"1.38.0\"}},\"@opentelemetry/semantic-conventions@1.38.0\":{},\"@opral/markdown-wc@0.9.0\":{\"dependencies\":{\"mermaid\":\"11.12.1\",\"rehype-autolink-headings\":\"7.1.0\",\"rehype-highlight\":\"7.0.2\",\"rehype-parse\":\"9.0.1\",\"rehype-raw\":\"7.0.0\",\"rehype-remark\":\"10.0.1\",\"rehype-sanitize\":\"6.0.0\",\"rehype-slug\":\"6.0.0\",\"rehype-stringify\":\"10.0.1\",\"remark-frontmatter\":\"5.0.0\",\"remark-gfm\":\"4.0.1\",\"remark-parse\":\"11.0.0\",\"remark-rehype\":\"11.1.2\",\"remark-stringify\":\"11.0.0\",\"unified\":\"11.0.5\",\"unist-util-visit\":\"5.0.0\",\"yaml\":\"2.8.1\"},\"transitivePeerDependencies\":[\"supports
-color\"]},\"@opral/zettel-ast@0.1.0\":{\"dependencies\":{\"@sinclair/typebox\":\"0.34.40\"}},\"@oxc-project/types@0.127.0\":{},\"@oxlint/darwin-arm64@1.26.0\":{\"optional\":true},\"@oxlint/darwin-x64@1.26.0\":{\"optional\":true},\"@oxlint/linux-arm64-gnu@1.26.0\":{\"optional\":true},\"@oxlint/linux-arm64-musl@1.26.0\":{\"optional\":true},\"@oxlint/linux-x64-gnu@1.26.0\":{\"optional\":true},\"@oxlint/linux-x64-musl@1.26.0\":{\"optional\":true},\"@oxlint/win32-arm64@1.26.0\":{\"optional\":true},\"@oxlint/win32-x64@1.26.0\":{\"optional\":true},\"@pkgjs/parseargs@0.11.0\":{\"optional\":true},\"@polka/url@1.0.0-next.29\":{},\"@poppinss/colors@4.1.5\":{\"dependencies\":{\"kleur\":\"4.1.5\"}},\"@poppinss/dumper@0.6.5\":{\"dependencies\":{\"@poppinss/colors\":\"4.1.5\",\"@sindresorhus/is\":\"7.1.1\",\"supports-color\":\"10.2.2\"}},\"@poppinss/exception@1.2.2\":{},\"@posthog/core@1.9.1\":{\"dependencies\":{\"cross-spawn\":\"7.0.6\"}},\"@posthog/types@1.321.2\":{},\"@promptbook/utils@0.69.5\":{\"dependencies\":{\"spacetrim\":\"0.11.59\"},\"optional\":true},\"@protobufjs/aspromise@1.1.2\":{},\"@protobufjs/base64@1.1.2\":{},\"@protobufjs/codegen@2.0.4\":{},\"@protobufjs/eventemitter@1.1.0\":{},\"@protobufjs/fetch@1.1.0\":{\"dependencies\":{\"@protobufjs/aspromise\":\"1.1.2\",\"@protobufjs/inquire\":\"1.1.0\"}},\"@protobufjs/float@1.0.2\":{},\"@protobufjs/inquire@1.1.0\":{},\"@protobufjs/path@1.1.2\":{},\"@protobufjs/pool@1.1.0\":{},\"@protobufjs/utf8@1.1.0\":{},\"@puppeteer/browsers@2.13.1\":{\"dependencies\":{\"debug\":\"4.4.3\",\"extract-zip\":\"2.0.1\",\"progress\":\"2.0.3\",\"proxy-agent\":\"6.5.0\",\"semver\":\"7.7.4\",\"tar-fs\":\"3.1.2\",\"yargs\":\"17.7.2\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\",\"supports-color\"],\"optional\":true},\"@rolldown/binding-android-arm64@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-darwin-arm64@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-darwin-x64@1.0.0-rc.17\":{\
"optional\":true},\"@rolldown/binding-freebsd-x64@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-arm-gnueabihf@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-arm64-gnu@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-arm64-musl@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-ppc64-gnu@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-s390x-gnu@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-x64-gnu@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-x64-musl@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-openharmony-arm64@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-wasm32-wasi@1.0.0-rc.17\":{\"dependencies\":{\"@emnapi/core\":\"1.10.0\",\"@emnapi/runtime\":\"1.10.0\",\"@napi-rs/wasm-runtime\":\"1.1.4(@emnapi/core@1.10.0)(@emnapi/runtime@1.10.0)\"},\"optional\":true},\"@rolldown/binding-win32-arm64-msvc@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-win32-x64-msvc@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/pluginutils@1.0.0-beta.40\":{},\"@rolldown/pluginutils@1.0.0-rc.17\":{},\"@rolldown/pluginutils@1.0.0-rc.7\":{},\"@rollup/rollup-android-arm-eabi@4.53.2\":{\"optional\":true},\"@rollup/rollup-android-arm64@4.53.2\":{\"optional\":true},\"@rollup/rollup-darwin-arm64@4.53.2\":{\"optional\":true},\"@rollup/rollup-darwin-x64@4.53.2\":{\"optional\":true},\"@rollup/rollup-freebsd-arm64@4.53.2\":{\"optional\":true},\"@rollup/rollup-freebsd-x64@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-arm-gnueabihf@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-arm-musleabihf@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-arm64-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-arm64-musl@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-loong64-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-ppc64-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-riscv64-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-riscv64-musl@4.53.2\":{\"optional\"
:true},\"@rollup/rollup-linux-s390x-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-x64-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-x64-musl@4.53.2\":{\"optional\":true},\"@rollup/rollup-openharmony-arm64@4.53.2\":{\"optional\":true},\"@rollup/rollup-win32-arm64-msvc@4.53.2\":{\"optional\":true},\"@rollup/rollup-win32-ia32-msvc@4.53.2\":{\"optional\":true},\"@rollup/rollup-win32-x64-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-win32-x64-msvc@4.53.2\":{\"optional\":true},\"@shikijs/core@3.15.0\":{\"dependencies\":{\"@shikijs/types\":\"3.15.0\",\"@shikijs/vscode-textmate\":\"10.0.2\",\"@types/hast\":\"3.0.4\",\"hast-util-to-html\":\"9.0.5\"}},\"@shikijs/engine-javascript@3.15.0\":{\"dependencies\":{\"@shikijs/types\":\"3.15.0\",\"@shikijs/vscode-textmate\":\"10.0.2\",\"oniguruma-to-es\":\"4.3.3\"}},\"@shikijs/engine-oniguruma@3.15.0\":{\"dependencies\":{\"@shikijs/types\":\"3.15.0\",\"@shikijs/vscode-textmate\":\"10.0.2\"}},\"@shikijs/langs@3.15.0\":{\"dependencies\":{\"@shikijs/types\":\"3.15.0\"}},\"@shikijs/themes@3.15.0\":{\"dependencies\":{\"@shikijs/types\":\"3.15.0\"}},\"@shikijs/types@3.15.0\":{\"dependencies\":{\"@shikijs/vscode-textmate\":\"10.0.2\",\"@types/hast\":\"3.0.4\"}},\"@shikijs/vscode-textmate@10.0.2\":{},\"@sinclair/typebox@0.34.40\":{},\"@sindresorhus/is@7.1.1\":{},\"@speed-highlight/core@1.2.12\":{},\"@sqlite.org/sqlite-wasm@3.50.4-build1\":{},\"@standard-schema/spec@1.0.0\":{},\"@standard-schema/spec@1.1.0\":{},\"@tailwindcss/node@4.2.4\":{\"dependencies\":{\"@jridgewell/remapping\":\"2.3.5\",\"enhanced-resolve\":\"5.21.0\",\"jiti\":\"2.6.1\",\"lightningcss\":\"1.32.0\",\"magic-string\":\"0.30.21\",\"source-map-js\":\"1.2.1\",\"tailwindcss\":\"4.2.4\"}},\"@tailwindcss/oxide-android-arm64@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-darwin-arm64@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-darwin-x64@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-freebsd-x64@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-linu
x-arm-gnueabihf@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-linux-arm64-gnu@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-linux-arm64-musl@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-linux-x64-gnu@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-linux-x64-musl@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-wasm32-wasi@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-win32-arm64-msvc@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-win32-x64-msvc@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide@4.2.4\":{\"optionalDependencies\":{\"@tailwindcss/oxide-android-arm64\":\"4.2.4\",\"@tailwindcss/oxide-darwin-arm64\":\"4.2.4\",\"@tailwindcss/oxide-darwin-x64\":\"4.2.4\",\"@tailwindcss/oxide-freebsd-x64\":\"4.2.4\",\"@tailwindcss/oxide-linux-arm-gnueabihf\":\"4.2.4\",\"@tailwindcss/oxide-linux-arm64-gnu\":\"4.2.4\",\"@tailwindcss/oxide-linux-arm64-musl\":\"4.2.4\",\"@tailwindcss/oxide-linux-x64-gnu\":\"4.2.4\",\"@tailwindcss/oxide-linux-x64-musl\":\"4.2.4\",\"@tailwindcss/oxide-wasm32-wasi\":\"4.2.4\",\"@tailwindcss/oxide-win32-arm64-msvc\":\"4.2.4\",\"@tailwindcss/oxide-win32-x64-msvc\":\"4.2.4\"}},\"@tailwindcss/vite@4.2.4(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@tailwindcss/node\":\"4.2.4\",\"@tailwindcss/oxide\":\"4.2.4\",\"tailwindcss\":\"4.2.4\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"@tanstack/history@1.161.6\":{},\"@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\":{\"dependencies\":{\"@tanstack/history\":\"1.161.6\",\"@tanstack/react-store\":\"0.9.3(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/router-core\":\"1.169.2\",\"isbot\":\"5.1.28\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\"}},\"@tanstack/react-start-client@1.166.48(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\":{\"dependencies\":{\"@ta
nstack/react-router\":\"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/start-client-core\":\"1.168.2\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\"}},\"@tanstack/react-start-rsc@0.0.43(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\":{\"dependencies\":{\"@tanstack/react-router\":\"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/react-start-server\":\"1.166.52(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/router-utils\":\"1.161.8\",\"@tanstack/start-client-core\":\"1.168.2\",\"@tanstack/start-fn-stubs\":\"1.161.6\",\"@tanstack/start-plugin-core\":\"1.169.19(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\",\"@tanstack/start-server-core\":\"1.167.30\",\"@tanstack/start-storage-context\":\"1.166.35\",\"pathe\":\"2.0.3\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\"},\"transitivePeerDependencies\":[\"@rsbuild/core\",\"crossws\",\"supports-color\",\"vite\",\"vite-plugin-solid\",\"webpack\"]},\"@tanstack/react-start-server@1.166.52(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\":{\"dependencies\":{\"@tanstack/history\":\"1.161.6\",\"@tanstack/react-router\":\"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/start-client-core\":\"1.168.2\",\"@tanstack/start-server-core\":\"1.167.30\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\"},\"transitivePeerDependencies\":[\"crossws\"]},\"@tanstack/react-start@1.167.64(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1
)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\":{\"dependencies\":{\"@tanstack/react-router\":\"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/react-start-client\":\"1.166.48(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/react-start-rsc\":\"0.0.43(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\",\"@tanstack/react-start-server\":\"1.166.52(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/router-utils\":\"1.161.8\",\"@tanstack/start-client-core\":\"1.168.2\",\"@tanstack/start-plugin-core\":\"1.169.19(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\",\"@tanstack/start-server-core\":\"1.167.30\",\"pathe\":\"2.0.3\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\"},\"optionalDependencies\":{\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"},\"transitivePeerDependencies\":[\"@rspack/core\",\"crossws\",\"react-server-dom-rspack\",\"supports-color\",\"vite-plugin-solid\",\"webpack\"]},\"@tanstack/react-store@0.9.3(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\":{\"dependencies\":{\"@tanstack/store\":\"0.9.3\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\",\"use-sync-external-store\":\"1.6.0(react@19.2.0)\"}},\"@tanstack/router-core@1.169.2\":{\"dependencies\":{\"@tanstack/history\":\"1.161.6\",\"cookie-es\":\"3.1.1\",\"seroval\":\"1.5.4\",\"seroval-plugins\":\"1.5.4(seroval@1.5.4)\"}},\"@tanstack/router-generator@1.166.41\":{\"dependencies\":{\"@babel/types\":\"7.28.5\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/router-utils\":\"1.161.
8\",\"@tanstack/virtual-file-routes\":\"1.161.7\",\"jiti\":\"2.6.1\",\"magic-string\":\"0.30.21\",\"prettier\":\"3.6.2\",\"zod\":\"3.25.76\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@tanstack/router-plugin@1.167.34(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\":{\"dependencies\":{\"@babel/core\":\"7.28.5\",\"@babel/plugin-syntax-jsx\":\"7.27.1(@babel/core@7.28.5)\",\"@babel/plugin-syntax-typescript\":\"7.27.1(@babel/core@7.28.5)\",\"@babel/template\":\"7.27.2\",\"@babel/traverse\":\"7.28.5\",\"@babel/types\":\"7.28.5\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/router-generator\":\"1.166.41\",\"@tanstack/router-utils\":\"1.161.8\",\"@tanstack/virtual-file-routes\":\"1.161.7\",\"chokidar\":\"3.6.0\",\"unplugin\":\"3.0.0\",\"zod\":\"3.25.76\"},\"optionalDependencies\":{\"@tanstack/react-router\":\"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"webpack\":\"5.99.9(esbuild@0.27.3)\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@tanstack/router-utils@1.161.8\":{\"dependencies\":{\"@babel/core\":\"7.28.5\",\"@babel/generator\":\"7.28.5\",\"@babel/parser\":\"7.28.5\",\"@babel/types\":\"7.28.5\",\"ansis\":\"4.1.0\",\"babel-dead-code-elimination\":\"1.0.12\",\"diff\":\"8.0.2\",\"pathe\":\"2.0.3\",\"tinyglobby\":\"0.2.16\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@tanstack/start-client-core@1.168.2\":{\"dependencies\":{\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/start-fn-stubs\":\"1.161.6\",\"@tanstack/start-storage-context\":\"1.166.35\",\"seroval\":\"1.5.4\"}},\"@tanstack/start-fn-stubs@1.161.6\":{},\"@tanstack/start-plugin-core@1.169.19(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0
))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\":{\"dependencies\":{\"@babel/code-frame\":\"7.27.1\",\"@babel/core\":\"7.28.5\",\"@babel/types\":\"7.28.5\",\"@rolldown/pluginutils\":\"1.0.0-beta.40\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/router-generator\":\"1.166.41\",\"@tanstack/router-plugin\":\"1.167.34(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\",\"@tanstack/router-utils\":\"1.161.8\",\"@tanstack/start-client-core\":\"1.168.2\",\"@tanstack/start-server-core\":\"1.167.30\",\"cheerio\":\"1.1.2\",\"exsolve\":\"1.0.8\",\"lightningcss\":\"1.32.0\",\"pathe\":\"2.0.3\",\"picomatch\":\"4.0.3\",\"seroval\":\"1.5.4\",\"source-map\":\"0.7.6\",\"srvx\":\"0.11.15\",\"tinyglobby\":\"0.2.16\",\"ufo\":\"1.6.1\",\"vitefu\":\"1.1.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"xmlbuilder2\":\"4.0.3\",\"zod\":\"3.25.76\"},\"optionalDependencies\":{\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"},\"transitivePeerDependencies\":[\"@tanstack/react-router\",\"crossws\",\"supports-color\",\"vite-plugin-solid\",\"webpack\"]},\"@tanstack/start-server-core@1.167.30\":{\"dependencies\":{\"@tanstack/history\":\"1.161.6\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/start-client-core\":\"1.168.2\",\"@tanstack/start-storage-context\":\"1.166.35\",\"fetchdts\":\"0.1.7\",\"h3-v2\":\"h3@2.0.1-rc.20\",\"seroval\":\"1.5.4\"},\"transitivePeerDependencies\":[\"crossws\"]},\"@tanstack/start-storage-context@1.166.35\":{\"dependencies\":{\"@tanstack/router-core\":\"1.169.2\"}},\"@tanstack/store@0.9.3\":{},\"@ta
nstack/virtual-file-routes@1.161.7\":{},\"@testing-library/dom@10.4.1\":{\"dependencies\":{\"@babel/code-frame\":\"7.27.1\",\"@babel/runtime\":\"7.28.4\",\"@types/aria-query\":\"5.0.4\",\"aria-query\":\"5.3.0\",\"dom-accessibility-api\":\"0.5.16\",\"lz-string\":\"1.5.0\",\"picocolors\":\"1.1.1\",\"pretty-format\":\"27.5.1\"}},\"@testing-library/react@16.3.0(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.7))(@types/react@19.2.7)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\":{\"dependencies\":{\"@babel/runtime\":\"7.28.4\",\"@testing-library/dom\":\"10.4.1\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\"},\"optionalDependencies\":{\"@types/react\":\"19.2.7\",\"@types/react-dom\":\"19.2.3(@types/react@19.2.7)\"}},\"@testing-library/user-event@14.6.1(@testing-library/dom@10.4.1)\":{\"dependencies\":{\"@testing-library/dom\":\"10.4.1\"},\"optional\":true},\"@tootallnate/quickjs-emscripten@0.23.0\":{\"optional\":true},\"@tybys/wasm-util@0.10.2\":{\"dependencies\":{\"tslib\":\"2.8.1\"},\"optional\":true},\"@tybys/wasm-util@0.9.0\":{\"dependencies\":{\"tslib\":\"2.8.1\"}},\"@types/aria-query@5.0.4\":{},\"@types/chai@5.2.2\":{\"dependencies\":{\"@types/deep-eql\":\"4.0.2\"}},\"@types/chai@5.2.3\":{\"dependencies\":{\"@types/deep-eql\":\"4.0.2\",\"assertion-error\":\"2.0.1\"}},\"@types/cookie@0.6.0\":{\"optional\":true},\"@types/d3-array@3.2.1\":{},\"@types/d3-axis@3.0.6\":{\"dependencies\":{\"@types/d3-selection\":\"3.0.11\"}},\"@types/d3-brush@3.0.6\":{\"dependencies\":{\"@types/d3-selection\":\"3.0.11\"}},\"@types/d3-chord@3.0.6\":{},\"@types/d3-color@3.1.3\":{},\"@types/d3-contour@3.0.6\":{\"dependencies\":{\"@types/d3-array\":\"3.2.1\",\"@types/geojson\":\"7946.0.15\"}},\"@types/d3-delaunay@6.0.4\":{},\"@types/d3-dispatch@3.0.6\":{},\"@types/d3-drag@3.0.7\":{\"dependencies\":{\"@types/d3-selection\":\"3.0.11\"}},\"@types/d3-dsv@3.0.7\":{},\"@types/d3-ease@3.0.2\":{},\"@types/d3-fetch@3.0.7\":{\"dependencies\":{\"@types/d3-dsv\":\"3.
0.7\"}},\"@types/d3-force@3.0.10\":{},\"@types/d3-format@3.0.4\":{},\"@types/d3-geo@3.1.0\":{\"dependencies\":{\"@types/geojson\":\"7946.0.15\"}},\"@types/d3-hierarchy@3.1.7\":{},\"@types/d3-interpolate@3.0.4\":{\"dependencies\":{\"@types/d3-color\":\"3.1.3\"}},\"@types/d3-path@3.1.0\":{},\"@types/d3-polygon@3.0.2\":{},\"@types/d3-quadtree@3.0.6\":{},\"@types/d3-random@3.0.3\":{},\"@types/d3-scale-chromatic@3.1.0\":{},\"@types/d3-scale@4.0.8\":{\"dependencies\":{\"@types/d3-time\":\"3.0.4\"}},\"@types/d3-selection@3.0.11\":{},\"@types/d3-shape@3.1.7\":{\"dependencies\":{\"@types/d3-path\":\"3.1.0\"}},\"@types/d3-time-format@4.0.3\":{},\"@types/d3-time@3.0.4\":{},\"@types/d3-timer@3.0.2\":{},\"@types/d3-transition@3.0.9\":{\"dependencies\":{\"@types/d3-selection\":\"3.0.11\"}},\"@types/d3-zoom@3.0.8\":{\"dependencies\":{\"@types/d3-interpolate\":\"3.0.4\",\"@types/d3-selection\":\"3.0.11\"}},\"@types/d3@7.4.3\":{\"dependencies\":{\"@types/d3-array\":\"3.2.1\",\"@types/d3-axis\":\"3.0.6\",\"@types/d3-brush\":\"3.0.6\",\"@types/d3-chord\":\"3.0.6\",\"@types/d3-color\":\"3.1.3\",\"@types/d3-contour\":\"3.0.6\",\"@types/d3-delaunay\":\"6.0.4\",\"@types/d3-dispatch\":\"3.0.6\",\"@types/d3-drag\":\"3.0.7\",\"@types/d3-dsv\":\"3.0.7\",\"@types/d3-ease\":\"3.0.2\",\"@types/d3-fetch\":\"3.0.7\",\"@types/d3-force\":\"3.0.10\",\"@types/d3-format\":\"3.0.4\",\"@types/d3-geo\":\"3.1.0\",\"@types/d3-hierarchy\":\"3.1.7\",\"@types/d3-interpolate\":\"3.0.4\",\"@types/d3-path\":\"3.1.0\",\"@types/d3-polygon\":\"3.0.2\",\"@types/d3-quadtree\":\"3.0.6\",\"@types/d3-random\":\"3.0.3\",\"@types/d3-scale\":\"4.0.8\",\"@types/d3-scale-chromatic\":\"3.1.0\",\"@types/d3-selection\":\"3.0.11\",\"@types/d3-shape\":\"3.1.7\",\"@types/d3-time\":\"3.0.4\",\"@types/d3-time-format\":\"4.0.3\",\"@types/d3-timer\":\"3.0.2\",\"@types/d3-transition\":\"3.0.9\",\"@types/d3-zoom\":\"3.0.8\"}},\"@types/debug@4.1.12\":{\"dependencies\":{\"@types/ms\":\"2.1.0\"}},\"@types/deep-eql@4.0.2\":{},\"@types/eslint
-scope@3.7.7\":{\"dependencies\":{\"@types/eslint\":\"9.6.1\",\"@types/estree\":\"1.0.9\"},\"optional\":true},\"@types/eslint@9.6.1\":{\"dependencies\":{\"@types/estree\":\"1.0.9\",\"@types/json-schema\":\"7.0.15\"},\"optional\":true},\"@types/estree@1.0.8\":{},\"@types/estree@1.0.9\":{\"optional\":true},\"@types/geojson@7946.0.15\":{},\"@types/hast@3.0.4\":{\"dependencies\":{\"@types/unist\":\"3.0.3\"}},\"@types/json-schema@7.0.15\":{\"optional\":true},\"@types/mdast@4.0.4\":{\"dependencies\":{\"@types/unist\":\"3.0.3\"}},\"@types/ms@2.1.0\":{},\"@types/node@12.20.55\":{},\"@types/node@20.19.39\":{\"dependencies\":{\"undici-types\":\"6.21.0\"},\"optional\":true},\"@types/node@22.15.33\":{\"dependencies\":{\"undici-types\":\"6.21.0\"}},\"@types/node@22.19.17\":{\"dependencies\":{\"undici-types\":\"6.21.0\"},\"optional\":true},\"@types/node@24.10.2\":{\"dependencies\":{\"undici-types\":\"7.16.0\"},\"optional\":true},\"@types/react-dom@19.2.3(@types/react@19.2.7)\":{\"dependencies\":{\"@types/react\":\"19.2.7\"}},\"@types/react@19.2.7\":{\"dependencies\":{\"csstype\":\"3.2.3\"}},\"@types/sinonjs__fake-timers@8.1.5\":{\"optional\":true},\"@types/statuses@2.0.6\":{\"optional\":true},\"@types/tough-cookie@4.0.5\":{\"optional\":true},\"@types/trusted-types@2.0.7\":{\"optional\":true},\"@types/unist@3.0.3\":{},\"@types/whatwg-mimetype@3.0.2\":{\"optional\":true},\"@types/which@2.0.2\":{\"optional\":true},\"@types/ws@8.18.1\":{\"dependencies\":{\"@types/node\":\"22.19.17\"},\"optional\":true},\"@types/yauzl@2.10.3\":{\"dependencies\":{\"@types/node\":\"22.19.17\"},\"optional\":true},\"@ungap/structured-clone@1.2.1\":{},\"@vitejs/plugin-react@6.0.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@rolldown/pluginutils\":\"1.0.0-rc.7\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"@vitest/bro
wser@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)\":{\"dependencies\":{\"@testing-library/dom\":\"10.4.1\",\"@testing-library/user-event\":\"14.6.1(@testing-library/dom@10.4.1)\",\"@vitest/mocker\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/utils\":\"3.2.4\",\"magic-string\":\"0.30.21\",\"sirv\":\"3.0.2\",\"tinyrainbow\":\"2.0.0\",\"vitest\":\"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@26.1.0)(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"ws\":\"8.20.0\"},\"optionalDependencies\":{\"playwright\":\"1.55.0\",\"webdriverio\":\"9.2.1\"},\"transitivePeerDependencies\":[\"bufferutil\",\"msw\",\"utf-8-validate\",\"vite\"],\"optional\":true},\"@vitest/browser@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)\":{\"dependencies\":{\"@testing-library/dom\":\"10.4.1\",\"@testing-library/user-event\":\"14.6.1(@testing-library/dom@10.4.1)\",\"@vitest/mocker\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/utils\":\"3.2.4\",\"magic-string\":\"0.30.21\",\"sirv\":\"3.0.2\",\"tinyrainbow\":\"2.0.0\",\"vitest\":\"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(
msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"ws\":\"8.20.0\"},\"optionalDependencies\":{\"playwright\":\"1.55.0\",\"webdriverio\":\"9.2.1\"},\"transitivePeerDependencies\":[\"bufferutil\",\"msw\",\"utf-8-validate\",\"vite\"],\"optional\":true},\"@vitest/browser@4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@4.1.5)\":{\"dependencies\":{\"@blazediff/core\":\"1.9.1\",\"@vitest/mocker\":\"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/utils\":\"4.1.5\",\"magic-string\":\"0.30.21\",\"pngjs\":\"7.0.0\",\"sirv\":\"3.0.2\",\"tinyrainbow\":\"3.1.0\",\"vitest\":\"4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"ws\":\"8.20.0\"},\"transitivePeerDependencies\":[\"bufferutil\",\"msw\",\"utf-8-validate\",\"vite\"]},\"@vitest/coverage-v8@3.2.4(@vitest/browser@3.2.4)(vitest@3.2.4)\":{\"dependencies\":{\"@ampproject/remapping\":\"2.3.0\",\"@bcoe/v8-coverage\":\"1.0.2\",\"ast-v8-to-istanbul\":\"0.3.4\",\"debug\":\"4.4.1\",\"istanbul-lib-coverage\":\"3.2.2\",\"istanbul-lib-report\":\"3.0.1\",\"istanbul-lib-source-maps\":\"5.0.6\",\"istanbul-reports\":\"3.2.0\",\"magic-string\":\"0.30.18\",\"magicast\":\"0.3.5\",\"std-env\":\"3.9.0\",\"test-exclude\":\"7.0.1\",\"tinyrainbow\":\"2.0.0\",\"vitest\":\"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2
)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"},\"optionalDependencies\":{\"@vitest/browser\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@vitest/coverage-v8@4.1.5(@vitest/browser@4.1.5)(vitest@4.1.5)\":{\"dependencies\":{\"@bcoe/v8-coverage\":\"1.0.2\",\"@vitest/utils\":\"4.1.5\",\"ast-v8-to-istanbul\":\"1.0.0\",\"istanbul-lib-coverage\":\"3.2.2\",\"istanbul-lib-report\":\"3.0.1\",\"istanbul-reports\":\"3.2.0\",\"magicast\":\"0.5.2\",\"obug\":\"2.1.1\",\"std-env\":\"4.1.0\",\"tinyrainbow\":\"3.1.0\",\"vitest\":\"4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\"},\"optionalDependencies\":{\"@vitest/browser\":\"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@4.1.5)\"}},\"@vitest/expect@3.2.4\":{\"dependencies\":{\"@types/chai\":\"5.2.2\",\"@vitest/spy\":\"3.2.4\",\"@vitest/utils\":\"3.2.4\",\"chai\":\"5.3.3\",\"tinyrainbow\":\"2.0.0\"}},\"@vitest/expect@4.0.18\":{\"dependencies\":{\"@standard-schema/spec\":\"1.0.0\",\"@types/chai\":\"5.2.3\",\"@vitest/spy\":\"4.0.18\",\"@vitest/utils\":\"4.0.18\",\"chai\":\"6.2.2\",\"tinyrainbow\":\"3.1.0\"}},\"@vitest/expect@4.1.5\":{\"dependencies\":{\"@standard-schema/spec\":\"1.1.0\",\"@types/chai\":\"5.2.3\",\"@vitest/spy\":\"4.1.5\",\"@vitest/utils\":\"4.1.5\",\"chai\":\"6.2.2\",\"tinyrainbow\":\"3.1.0\"}},\"@vitest/mocker@3.2.4(msw@2.10.2(@types/node@24
.10.2)(typescript@5.8.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@vitest/spy\":\"3.2.4\",\"estree-walker\":\"3.0.3\",\"magic-string\":\"0.30.21\"},\"optionalDependencies\":{\"msw\":\"2.10.2(@types/node@24.10.2)(typescript@5.8.3)\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"@vitest/mocker@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@vitest/spy\":\"3.2.4\",\"estree-walker\":\"3.0.3\",\"magic-string\":\"0.30.21\"},\"optionalDependencies\":{\"msw\":\"2.10.2(@types/node@24.10.2)(typescript@5.9.3)\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"@vitest/mocker@4.0.18(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@vitest/spy\":\"4.0.18\",\"estree-walker\":\"3.0.3\",\"magic-string\":\"0.30.21\"},\"optionalDependencies\":{\"msw\":\"2.10.2(@types/node@24.10.2)(typescript@5.9.3)\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"@vitest/mocker@4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@vitest/spy\":\"4.1.5\",\"estree-walker\":\"3.0.3\",\"magic-string\":\"0.30.21\"},\"optionalDependencies\":{\"msw\":\"2.10.2(@types/node@22.15.33)(typescript@5.8.3)\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedd
ed@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"@vitest/pretty-format@3.2.4\":{\"dependencies\":{\"tinyrainbow\":\"2.0.0\"}},\"@vitest/pretty-format@4.0.18\":{\"dependencies\":{\"tinyrainbow\":\"3.1.0\"}},\"@vitest/pretty-format@4.1.5\":{\"dependencies\":{\"tinyrainbow\":\"3.1.0\"}},\"@vitest/runner@3.2.4\":{\"dependencies\":{\"@vitest/utils\":\"3.2.4\",\"pathe\":\"2.0.3\",\"strip-literal\":\"3.0.0\"}},\"@vitest/runner@4.0.18\":{\"dependencies\":{\"@vitest/utils\":\"4.0.18\",\"pathe\":\"2.0.3\"}},\"@vitest/runner@4.1.5\":{\"dependencies\":{\"@vitest/utils\":\"4.1.5\",\"pathe\":\"2.0.3\"}},\"@vitest/snapshot@3.2.4\":{\"dependencies\":{\"@vitest/pretty-format\":\"3.2.4\",\"magic-string\":\"0.30.21\",\"pathe\":\"2.0.3\"}},\"@vitest/snapshot@4.0.18\":{\"dependencies\":{\"@vitest/pretty-format\":\"4.0.18\",\"magic-string\":\"0.30.21\",\"pathe\":\"2.0.3\"}},\"@vitest/snapshot@4.1.5\":{\"dependencies\":{\"@vitest/pretty-format\":\"4.1.5\",\"@vitest/utils\":\"4.1.5\",\"magic-string\":\"0.30.21\",\"pathe\":\"2.0.3\"}},\"@vitest/spy@3.2.4\":{\"dependencies\":{\"tinyspy\":\"4.0.3\"}},\"@vitest/spy@4.0.18\":{},\"@vitest/spy@4.1.5\":{},\"@vitest/utils@3.2.4\":{\"dependencies\":{\"@vitest/pretty-format\":\"3.2.4\",\"loupe\":\"3.2.1\",\"tinyrainbow\":\"2.0.0\"}},\"@vitest/utils@4.0.18\":{\"dependencies\":{\"@vitest/pretty-format\":\"4.0.18\",\"tinyrainbow\":\"3.1.0\"}},\"@vitest/utils@4.1.5\":{\"dependencies\":{\"@vitest/pretty-format\":\"4.1.5\",\"convert-source-map\":\"2.0.0\",\"tinyrainbow\":\"3.1.0\"}},\"@wdio/config@9.1.3\":{\"dependencies\":{\"@wdio/logger\":\"9.1.3\",\"@wdio/types\":\"9.1.3\",\"@wdio/utils\":\"9.1.3\",\"decamelize\":\"6.0.1\",\"deepmerge-ts\":\"7.1.5\",\"glob\":\"10.5.0\",\"import-meta-resolve\":\"4.2.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\",\"supports-color\"],\"optional\":true},\"@wdio/logger@8.38.0\":{\"dependencies\":{\"chalk\":\"5.6.2\",\"loglevel\":\"1.9.2\",\"loglevel-plugin-prefix\":
\"0.8.4\",\"strip-ansi\":\"7.2.0\"},\"optional\":true},\"@wdio/logger@9.1.3\":{\"dependencies\":{\"chalk\":\"5.6.2\",\"loglevel\":\"1.9.2\",\"loglevel-plugin-prefix\":\"0.8.4\",\"strip-ansi\":\"7.2.0\"},\"optional\":true},\"@wdio/protocols@9.2.0\":{\"optional\":true},\"@wdio/repl@9.0.8\":{\"dependencies\":{\"@types/node\":\"20.19.39\"},\"optional\":true},\"@wdio/types@9.1.3\":{\"dependencies\":{\"@types/node\":\"20.19.39\"},\"optional\":true},\"@wdio/utils@9.1.3\":{\"dependencies\":{\"@puppeteer/browsers\":\"2.13.1\",\"@wdio/logger\":\"9.1.3\",\"@wdio/types\":\"9.1.3\",\"decamelize\":\"6.0.1\",\"deepmerge-ts\":\"7.1.5\",\"edgedriver\":\"5.6.1\",\"geckodriver\":\"4.5.1\",\"get-port\":\"7.2.0\",\"import-meta-resolve\":\"4.2.0\",\"locate-app\":\"2.5.0\",\"safaridriver\":\"0.1.2\",\"split2\":\"4.2.0\",\"wait-port\":\"1.1.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\",\"supports-color\"],\"optional\":true},\"@webassemblyjs/ast@1.14.1\":{\"dependencies\":{\"@webassemblyjs/helper-numbers\":\"1.13.2\",\"@webassemblyjs/helper-wasm-bytecode\":\"1.13.2\"},\"optional\":true},\"@webassemblyjs/floating-point-hex-parser@1.13.2\":{\"optional\":true},\"@webassemblyjs/helper-api-error@1.13.2\":{\"optional\":true},\"@webassemblyjs/helper-buffer@1.14.1\":{\"optional\":true},\"@webassemblyjs/helper-numbers@1.13.2\":{\"dependencies\":{\"@webassemblyjs/floating-point-hex-parser\":\"1.13.2\",\"@webassemblyjs/helper-api-error\":\"1.13.2\",\"@xtuc/long\":\"4.2.2\"},\"optional\":true},\"@webassemblyjs/helper-wasm-bytecode@1.13.2\":{\"optional\":true},\"@webassemblyjs/helper-wasm-section@1.14.1\":{\"dependencies\":{\"@webassemblyjs/ast\":\"1.14.1\",\"@webassemblyjs/helper-buffer\":\"1.14.1\",\"@webassemblyjs/helper-wasm-bytecode\":\"1.13.2\",\"@webassemblyjs/wasm-gen\":\"1.14.1\"},\"optional\":true},\"@webassemblyjs/ieee754@1.13.2\":{\"dependencies\":{\"@xtuc/ieee754\":\"1.2.0\"},\"optional\":true},\"@webassemblyjs/leb128@1.13.2\":{\"depende
ncies\":{\"@xtuc/long\":\"4.2.2\"},\"optional\":true},\"@webassemblyjs/utf8@1.13.2\":{\"optional\":true},\"@webassemblyjs/wasm-edit@1.14.1\":{\"dependencies\":{\"@webassemblyjs/ast\":\"1.14.1\",\"@webassemblyjs/helper-buffer\":\"1.14.1\",\"@webassemblyjs/helper-wasm-bytecode\":\"1.13.2\",\"@webassemblyjs/helper-wasm-section\":\"1.14.1\",\"@webassemblyjs/wasm-gen\":\"1.14.1\",\"@webassemblyjs/wasm-opt\":\"1.14.1\",\"@webassemblyjs/wasm-parser\":\"1.14.1\",\"@webassemblyjs/wast-printer\":\"1.14.1\"},\"optional\":true},\"@webassemblyjs/wasm-gen@1.14.1\":{\"dependencies\":{\"@webassemblyjs/ast\":\"1.14.1\",\"@webassemblyjs/helper-wasm-bytecode\":\"1.13.2\",\"@webassemblyjs/ieee754\":\"1.13.2\",\"@webassemblyjs/leb128\":\"1.13.2\",\"@webassemblyjs/utf8\":\"1.13.2\"},\"optional\":true},\"@webassemblyjs/wasm-opt@1.14.1\":{\"dependencies\":{\"@webassemblyjs/ast\":\"1.14.1\",\"@webassemblyjs/helper-buffer\":\"1.14.1\",\"@webassemblyjs/wasm-gen\":\"1.14.1\",\"@webassemblyjs/wasm-parser\":\"1.14.1\"},\"optional\":true},\"@webassemblyjs/wasm-parser@1.14.1\":{\"dependencies\":{\"@webassemblyjs/ast\":\"1.14.1\",\"@webassemblyjs/helper-api-error\":\"1.13.2\",\"@webassemblyjs/helper-wasm-bytecode\":\"1.13.2\",\"@webassemblyjs/ieee754\":\"1.13.2\",\"@webassemblyjs/leb128\":\"1.13.2\",\"@webassemblyjs/utf8\":\"1.13.2\"},\"optional\":true},\"@webassemblyjs/wast-printer@1.14.1\":{\"dependencies\":{\"@webassemblyjs/ast\":\"1.14.1\",\"@xtuc/long\":\"4.2.2\"},\"optional\":true},\"@xtuc/ieee754@1.2.0\":{\"optional\":true},\"@xtuc/long@4.2.2\":{\"optional\":true},\"@yarnpkg/lockfile@1.1.0\":{},\"@yarnpkg/parsers@3.0.2\":{\"dependencies\":{\"js-yaml\":\"3.14.1\",\"tslib\":\"2.8.1\"}},\"@zip.js/zip.js@2.8.26\":{\"optional\":true},\"@zkochan/js-yaml@0.0.7\":{\"dependencies\":{\"argparse\":\"2.0.1\"}},\"abort-controller@3.0.0\":{\"dependencies\":{\"event-target-shim\":\"5.0.1\"},\"optional\":true},\"acorn@8.16.0\":{},\"agent-base@7.1.3\":{},\"agent-base@7.1.4\":{\"optional\":true},\"ajv-formats
@2.1.1(ajv@8.20.0)\":{\"optionalDependencies\":{\"ajv\":\"8.20.0\"},\"optional\":true},\"ajv-keywords@5.1.0(ajv@8.20.0)\":{\"dependencies\":{\"ajv\":\"8.20.0\",\"fast-deep-equal\":\"3.1.3\"},\"optional\":true},\"ajv@8.17.1\":{\"dependencies\":{\"fast-deep-equal\":\"3.1.3\",\"fast-uri\":\"3.0.3\",\"json-schema-traverse\":\"1.0.0\",\"require-from-string\":\"2.0.2\"}},\"ajv@8.20.0\":{\"dependencies\":{\"fast-deep-equal\":\"3.1.3\",\"fast-uri\":\"3.1.2\",\"json-schema-traverse\":\"1.0.0\",\"require-from-string\":\"2.0.2\"},\"optional\":true},\"ansi-colors@4.1.3\":{},\"ansi-regex@5.0.1\":{},\"ansi-regex@6.1.0\":{},\"ansi-regex@6.2.2\":{\"optional\":true},\"ansi-styles@4.3.0\":{\"dependencies\":{\"color-convert\":\"2.0.1\"}},\"ansi-styles@5.2.0\":{},\"ansi-styles@6.2.1\":{},\"ansis@4.1.0\":{},\"anymatch@3.1.3\":{\"dependencies\":{\"normalize-path\":\"3.0.0\",\"picomatch\":\"2.3.1\"}},\"archiver-utils@5.0.2\":{\"dependencies\":{\"glob\":\"10.5.0\",\"graceful-fs\":\"4.2.11\",\"is-stream\":\"2.0.1\",\"lazystream\":\"1.0.1\",\"lodash\":\"4.18.1\",\"normalize-path\":\"3.0.0\",\"readable-stream\":\"4.7.0\"},\"optional\":true},\"archiver@7.0.1\":{\"dependencies\":{\"archiver-utils\":\"5.0.2\",\"async\":\"3.2.6\",\"buffer-crc32\":\"1.0.0\",\"readable-stream\":\"4.7.0\",\"readdir-glob\":\"1.1.3\",\"tar-stream\":\"3.2.0\",\"zip-stream\":\"6.0.1\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\"],\"optional\":true},\"argparse@1.0.10\":{\"dependencies\":{\"sprintf-js\":\"1.0.3\"}},\"argparse@2.0.1\":{},\"aria-query@5.3.0\":{\"dependencies\":{\"dequal\":\"2.0.3\"}},\"aria-query@5.3.2\":{\"optional\":true},\"array-union@2.1.0\":{},\"assertion-error@2.0.1\":{},\"ast-types@0.13.4\":{\"dependencies\":{\"tslib\":\"2.8.1\"},\"optional\":true},\"ast-v8-to-istanbul@0.3.4\":{\"dependencies\":{\"@jridgewell/trace-mapping\":\"0.3.30\",\"estree-walker\":\"3.0.3\",\"js-tokens\":\"9.0.1\"}},\"ast-v8-to-istanbul@1.0.0\":{\"dependencies\":{\"@jridgewell/
trace-mapping\":\"0.3.31\",\"estree-walker\":\"3.0.3\",\"js-tokens\":\"10.0.0\"}},\"async@3.2.6\":{\"optional\":true},\"asynckit@0.4.0\":{},\"axios@1.11.0\":{\"dependencies\":{\"follow-redirects\":\"1.15.11\",\"form-data\":\"4.0.4\",\"proxy-from-env\":\"1.1.0\"},\"transitivePeerDependencies\":[\"debug\"]},\"b4a@1.8.1\":{\"optional\":true},\"babel-dead-code-elimination@1.0.12\":{\"dependencies\":{\"@babel/core\":\"7.28.5\",\"@babel/parser\":\"7.28.5\",\"@babel/traverse\":\"7.28.5\",\"@babel/types\":\"7.28.5\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"bail@2.0.2\":{},\"balanced-match@1.0.2\":{},\"bare-events@2.8.2\":{\"optional\":true},\"bare-fs@4.7.1\":{\"dependencies\":{\"bare-events\":\"2.8.2\",\"bare-path\":\"3.0.0\",\"bare-stream\":\"2.13.1(bare-events@2.8.2)\",\"bare-url\":\"2.4.3\",\"fast-fifo\":\"1.3.2\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"react-native-b4a\"],\"optional\":true},\"bare-os@3.9.1\":{\"optional\":true},\"bare-path@3.0.0\":{\"dependencies\":{\"bare-os\":\"3.9.1\"},\"optional\":true},\"bare-stream@2.13.1(bare-events@2.8.2)\":{\"dependencies\":{\"streamx\":\"2.25.0\",\"teex\":\"1.0.1\"},\"optionalDependencies\":{\"bare-events\":\"2.8.2\"},\"transitivePeerDependencies\":[\"react-native-b4a\"],\"optional\":true},\"bare-url@2.4.3\":{\"dependencies\":{\"bare-path\":\"3.0.0\"},\"optional\":true},\"base64-js@1.5.1\":{},\"baseline-browser-mapping@2.10.27\":{\"optional\":true},\"basic-ftp@5.3.1\":{\"optional\":true},\"better-path-resolve@1.0.0\":{\"dependencies\":{\"is-windows\":\"1.0.2\"}},\"better-sqlite3@12.9.0\":{\"dependencies\":{\"bindings\":\"1.5.0\",\"prebuild-install\":\"7.1.3\"}},\"bidi-js@1.0.3\":{\"dependencies\":{\"require-from-string\":\"2.0.2\"}},\"binary-extensions@2.3.0\":{},\"bindings@1.5.0\":{\"dependencies\":{\"file-uri-to-path\":\"1.0.0\"}},\"bl@4.1.0\":{\"dependencies\":{\"buffer\":\"5.7.1\",\"inherits\":\"2.0.4\",\"readable-stream\":\"3.6.2\"}},\"blake3-wasm@2.1.5\":{},\"boolbase@1.0.0\":{},\"brac
e-expansion@2.0.2\":{\"dependencies\":{\"balanced-match\":\"1.0.2\"}},\"brace-expansion@2.1.0\":{\"dependencies\":{\"balanced-match\":\"1.0.2\"},\"optional\":true},\"braces@3.0.3\":{\"dependencies\":{\"fill-range\":\"7.1.1\"}},\"browserslist@4.25.3\":{\"dependencies\":{\"caniuse-lite\":\"1.0.30001737\",\"electron-to-chromium\":\"1.5.211\",\"node-releases\":\"2.0.19\",\"update-browserslist-db\":\"1.1.3(browserslist@4.25.3)\"}},\"browserslist@4.28.2\":{\"dependencies\":{\"baseline-browser-mapping\":\"2.10.27\",\"caniuse-lite\":\"1.0.30001792\",\"electron-to-chromium\":\"1.5.352\",\"node-releases\":\"2.0.38\",\"update-browserslist-db\":\"1.2.3(browserslist@4.28.2)\"},\"optional\":true},\"buffer-builder@0.2.0\":{\"optional\":true},\"buffer-crc32@0.2.13\":{\"optional\":true},\"buffer-crc32@1.0.0\":{\"optional\":true},\"buffer-from@1.1.2\":{\"optional\":true},\"buffer@5.7.1\":{\"dependencies\":{\"base64-js\":\"1.5.1\",\"ieee754\":\"1.2.1\"}},\"buffer@6.0.3\":{\"dependencies\":{\"base64-js\":\"1.5.1\",\"ieee754\":\"1.2.1\"},\"optional\":true},\"cac@6.7.14\":{},\"call-bind-apply-helpers@1.0.2\":{\"dependencies\":{\"es-errors\":\"1.3.0\",\"function-bind\":\"1.1.2\"}},\"caniuse-lite@1.0.30001737\":{},\"caniuse-lite@1.0.30001792\":{\"optional\":true},\"ccount@2.0.1\":{},\"chai@5.3.3\":{\"dependencies\":{\"assertion-error\":\"2.0.1\",\"check-error\":\"2.1.1\",\"deep-eql\":\"5.0.2\",\"loupe\":\"3.2.1\",\"pathval\":\"2.0.1\"}},\"chai@6.2.2\":{},\"chalk@4.1.2\":{\"dependencies\":{\"ansi-styles\":\"4.3.0\",\"supports-color\":\"7.2.0\"}},\"chalk@5.6.2\":{\"optional\":true},\"character-entities-html4@2.1.0\":{},\"character-entities-legacy@3.0.0\":{},\"character-entities@2.0.2\":{},\"chardet@2.1.0\":{},\"check-error@2.1.1\":{},\"cheerio-select@2.1.0\":{\"dependencies\":{\"boolbase\":\"1.0.0\",\"css-select\":\"5.1.0\",\"css-what\":\"6.1.0\",\"domelementtype\":\"2.3.0\",\"domhandler\":\"5.0.3\",\"domutils\":\"3.2.2\"}},\"cheerio@1.1.2\":{\"dependencies\":{\"cheerio-select\":\"2.1.0\",\"
dom-serializer\":\"2.0.0\",\"domhandler\":\"5.0.3\",\"domutils\":\"3.2.2\",\"encoding-sniffer\":\"0.2.1\",\"htmlparser2\":\"10.0.0\",\"parse5\":\"7.3.0\",\"parse5-htmlparser2-tree-adapter\":\"7.1.0\",\"parse5-parser-stream\":\"7.1.2\",\"undici\":\"7.16.0\",\"whatwg-mimetype\":\"4.0.0\"}},\"cheerio@1.2.0\":{\"dependencies\":{\"cheerio-select\":\"2.1.0\",\"dom-serializer\":\"2.0.0\",\"domhandler\":\"5.0.3\",\"domutils\":\"3.2.2\",\"encoding-sniffer\":\"0.2.1\",\"htmlparser2\":\"10.1.0\",\"parse5\":\"7.3.0\",\"parse5-htmlparser2-tree-adapter\":\"7.1.0\",\"parse5-parser-stream\":\"7.1.2\",\"undici\":\"7.25.0\",\"whatwg-mimetype\":\"4.0.0\"},\"optional\":true},\"chevrotain-allstar@0.3.1(chevrotain@11.0.3)\":{\"dependencies\":{\"chevrotain\":\"11.0.3\",\"lodash-es\":\"4.17.21\"}},\"chevrotain@11.0.3\":{\"dependencies\":{\"@chevrotain/cst-dts-gen\":\"11.0.3\",\"@chevrotain/gast\":\"11.0.3\",\"@chevrotain/regexp-to-ast\":\"11.0.3\",\"@chevrotain/types\":\"11.0.3\",\"@chevrotain/utils\":\"11.0.3\",\"lodash-es\":\"4.17.21\"}},\"chokidar@3.6.0\":{\"dependencies\":{\"anymatch\":\"3.1.3\",\"braces\":\"3.0.3\",\"glob-parent\":\"5.1.2\",\"is-binary-path\":\"2.1.0\",\"is-glob\":\"4.0.3\",\"normalize-path\":\"3.0.0\",\"readdirp\":\"3.6.0\"},\"optionalDependencies\":{\"fsevents\":\"2.3.3\"}},\"chownr@1.1.4\":{},\"chownr@2.0.0\":{},\"chrome-trace-event@1.0.4\":{\"optional\":true},\"ci-info@3.9.0\":{},\"cli-cursor@3.1.0\":{\"dependencies\":{\"restore-cursor\":\"3.1.0\"}},\"cli-spinners@2.6.1\":{},\"cli-spinners@2.9.2\":{},\"cli-width@4.1.0\":{\"optional\":true},\"cliui@8.0.1\":{\"dependencies\":{\"string-width\":\"4.2.3\",\"strip-ansi\":\"6.0.1\",\"wrap-ansi\":\"7.0.0\"}},\"clone@1.0.4\":{},\"color-convert@2.0.1\":{\"dependencies\":{\"color-name\":\"1.1.4\"}},\"color-name@1.1.4\":{},\"colorjs.io@0.5.2\":{\"optional\":true},\"combined-stream@1.0.8\":{\"dependencies\":{\"delayed-stream\":\"1.0.0\"}},\"comma-separated-tokens@2.0.3\":{},\"commander@2.20.3\":{\"optional\":true},\"commander@
7.2.0\":{},\"commander@8.3.0\":{},\"commander@9.5.0\":{\"optional\":true},\"compress-commons@6.0.2\":{\"dependencies\":{\"crc-32\":\"1.2.2\",\"crc32-stream\":\"6.0.0\",\"is-stream\":\"2.0.1\",\"normalize-path\":\"3.0.0\",\"readable-stream\":\"4.7.0\"},\"optional\":true},\"confbox@0.1.8\":{},\"confbox@0.2.2\":{},\"convert-source-map@2.0.0\":{},\"cookie-es@3.1.1\":{},\"cookie@0.7.2\":{\"optional\":true},\"cookie@1.0.2\":{},\"core-js@3.46.0\":{},\"core-util-is@1.0.3\":{\"optional\":true},\"cose-base@1.0.3\":{\"dependencies\":{\"layout-base\":\"1.0.2\"}},\"cose-base@2.2.0\":{\"dependencies\":{\"layout-base\":\"2.0.1\"}},\"crc-32@1.2.2\":{\"optional\":true},\"crc32-stream@6.0.0\":{\"dependencies\":{\"crc-32\":\"1.2.2\",\"readable-stream\":\"4.7.0\"},\"optional\":true},\"cross-spawn@7.0.6\":{\"dependencies\":{\"path-key\":\"3.1.1\",\"shebang-command\":\"2.0.0\",\"which\":\"2.0.2\"}},\"css-select@5.1.0\":{\"dependencies\":{\"boolbase\":\"1.0.0\",\"css-what\":\"6.1.0\",\"domhandler\":\"5.0.3\",\"domutils\":\"3.2.2\",\"nth-check\":\"2.1.1\"}},\"css-shorthand-properties@1.1.2\":{\"optional\":true},\"css-tree@3.1.0\":{\"dependencies\":{\"mdn-data\":\"2.12.2\",\"source-map-js\":\"1.2.1\"}},\"css-value@0.0.1\":{\"optional\":true},\"css-what@6.1.0\":{},\"cssstyle@4.3.1\":{\"dependencies\":{\"@asamuzakjp/css-color\":\"3.1.4\",\"rrweb-cssom\":\"0.8.0\"}},\"cssstyle@5.3.4(postcss@8.5.14)\":{\"dependencies\":{\"@asamuzakjp/css-color\":\"4.1.0\",\"@csstools/css-syntax-patches-for-csstree\":\"1.0.14(postcss@8.5.14)\",\"css-tree\":\"3.1.0\"},\"transitivePeerDependencies\":[\"postcss\"]},\"csstype@3.2.3\":{},\"cytoscape-cose-bilkent@4.1.0(cytoscape@3.30.4)\":{\"dependencies\":{\"cose-base\":\"1.0.3\",\"cytoscape\":\"3.30.4\"}},\"cytoscape-fcose@2.2.0(cytoscape@3.30.4)\":{\"dependencies\":{\"cose-base\":\"2.2.0\",\"cytoscape\":\"3.30.4\"}},\"cytoscape@3.30.4\":{},\"d3-array@2.12.1\":{\"dependencies\":{\"internmap\":\"1.0.1\"}},\"d3-array@3.2.4\":{\"dependencies\":{\"internmap\":\"2.0.3\"}
},\"d3-axis@3.0.0\":{},\"d3-brush@3.0.0\":{\"dependencies\":{\"d3-dispatch\":\"3.0.1\",\"d3-drag\":\"3.0.0\",\"d3-interpolate\":\"3.0.1\",\"d3-selection\":\"3.0.0\",\"d3-transition\":\"3.0.1(d3-selection@3.0.0)\"}},\"d3-chord@3.0.1\":{\"dependencies\":{\"d3-path\":\"3.1.0\"}},\"d3-color@3.1.0\":{},\"d3-contour@4.0.2\":{\"dependencies\":{\"d3-array\":\"3.2.4\"}},\"d3-delaunay@6.0.4\":{\"dependencies\":{\"delaunator\":\"5.0.1\"}},\"d3-dispatch@3.0.1\":{},\"d3-drag@3.0.0\":{\"dependencies\":{\"d3-dispatch\":\"3.0.1\",\"d3-selection\":\"3.0.0\"}},\"d3-dsv@3.0.1\":{\"dependencies\":{\"commander\":\"7.2.0\",\"iconv-lite\":\"0.6.3\",\"rw\":\"1.3.3\"}},\"d3-ease@3.0.1\":{},\"d3-fetch@3.0.1\":{\"dependencies\":{\"d3-dsv\":\"3.0.1\"}},\"d3-force@3.0.0\":{\"dependencies\":{\"d3-dispatch\":\"3.0.1\",\"d3-quadtree\":\"3.0.1\",\"d3-timer\":\"3.0.1\"}},\"d3-format@3.1.0\":{},\"d3-geo@3.1.1\":{\"dependencies\":{\"d3-array\":\"3.2.4\"}},\"d3-hierarchy@3.1.2\":{},\"d3-interpolate@3.0.1\":{\"dependencies\":{\"d3-color\":\"3.1.0\"}},\"d3-path@1.0.9\":{},\"d3-path@3.1.0\":{},\"d3-polygon@3.0.1\":{},\"d3-quadtree@3.0.1\":{},\"d3-random@3.0.1\":{},\"d3-sankey@0.12.3\":{\"dependencies\":{\"d3-array\":\"2.12.1\",\"d3-shape\":\"1.3.7\"}},\"d3-scale-chromatic@3.1.0\":{\"dependencies\":{\"d3-color\":\"3.1.0\",\"d3-interpolate\":\"3.0.1\"}},\"d3-scale@4.0.2\":{\"dependencies\":{\"d3-array\":\"3.2.4\",\"d3-format\":\"3.1.0\",\"d3-interpolate\":\"3.0.1\",\"d3-time\":\"3.1.0\",\"d3-time-format\":\"4.1.0\"}},\"d3-selection@3.0.0\":{},\"d3-shape@1.3.7\":{\"dependencies\":{\"d3-path\":\"1.0.9\"}},\"d3-shape@3.2.0\":{\"dependencies\":{\"d3-path\":\"3.1.0\"}},\"d3-time-format@4.1.0\":{\"dependencies\":{\"d3-time\":\"3.1.0\"}},\"d3-time@3.1.0\":{\"dependencies\":{\"d3-array\":\"3.2.4\"}},\"d3-timer@3.0.1\":{},\"d3-transition@3.0.1(d3-selection@3.0.0)\":{\"dependencies\":{\"d3-color\":\"3.1.0\",\"d3-dispatch\":\"3.0.1\",\"d3-ease\":\"3.0.1\",\"d3-interpolate\":\"3.0.1\",\"d3-selection\":\"3.0.0\",\"d3-ti
mer\":\"3.0.1\"}},\"d3-zoom@3.0.0\":{\"dependencies\":{\"d3-dispatch\":\"3.0.1\",\"d3-drag\":\"3.0.0\",\"d3-interpolate\":\"3.0.1\",\"d3-selection\":\"3.0.0\",\"d3-transition\":\"3.0.1(d3-selection@3.0.0)\"}},\"d3@7.9.0\":{\"dependencies\":{\"d3-array\":\"3.2.4\",\"d3-axis\":\"3.0.0\",\"d3-brush\":\"3.0.0\",\"d3-chord\":\"3.0.1\",\"d3-color\":\"3.1.0\",\"d3-contour\":\"4.0.2\",\"d3-delaunay\":\"6.0.4\",\"d3-dispatch\":\"3.0.1\",\"d3-drag\":\"3.0.0\",\"d3-dsv\":\"3.0.1\",\"d3-ease\":\"3.0.1\",\"d3-fetch\":\"3.0.1\",\"d3-force\":\"3.0.0\",\"d3-format\":\"3.1.0\",\"d3-geo\":\"3.1.1\",\"d3-hierarchy\":\"3.1.2\",\"d3-interpolate\":\"3.0.1\",\"d3-path\":\"3.1.0\",\"d3-polygon\":\"3.0.1\",\"d3-quadtree\":\"3.0.1\",\"d3-random\":\"3.0.1\",\"d3-scale\":\"4.0.2\",\"d3-scale-chromatic\":\"3.1.0\",\"d3-selection\":\"3.0.0\",\"d3-shape\":\"3.2.0\",\"d3-time\":\"3.1.0\",\"d3-time-format\":\"4.1.0\",\"d3-timer\":\"3.0.1\",\"d3-transition\":\"3.0.1(d3-selection@3.0.0)\",\"d3-zoom\":\"3.0.0\"}},\"dagre-d3-es@7.0.13\":{\"dependencies\":{\"d3\":\"7.9.0\",\"lodash-es\":\"4.17.21\"}},\"data-uri-to-buffer@4.0.1\":{\"optional\":true},\"data-uri-to-buffer@6.0.2\":{\"optional\":true},\"data-urls@5.0.0\":{\"dependencies\":{\"whatwg-mimetype\":\"4.0.0\",\"whatwg-url\":\"14.2.0\"}},\"data-urls@6.0.0\":{\"dependencies\":{\"whatwg-mimetype\":\"4.0.0\",\"whatwg-url\":\"15.1.0\"}},\"dayjs@1.11.19\":{},\"debug@4.4.1\":{\"dependencies\":{\"ms\":\"2.1.3\"}},\"debug@4.4.3\":{\"dependencies\":{\"ms\":\"2.1.3\"}},\"decamelize@6.0.1\":{\"optional\":true},\"decimal.js@10.6.0\":{},\"decode-named-character-reference@1.0.2\":{\"dependencies\":{\"character-entities\":\"2.0.2\"}},\"decompress-response@6.0.0\":{\"dependencies\":{\"mimic-response\":\"3.1.0\"}},\"deep-eql@5.0.2\":{},\"deep-extend@0.6.0\":{},\"deepmerge-ts@7.1.5\":{\"optional\":true},\"defaults@1.0.4\":{\"dependencies\":{\"clone\":\"1.0.4\"}},\"define-lazy-prop@2.0.0\":{},\"degenerator@5.0.1\":{\"dependencies\":{\"ast-types\":\"0.13.4\",\"escodege
n\":\"2.1.0\",\"esprima\":\"4.0.1\"},\"optional\":true},\"delaunator@5.0.1\":{\"dependencies\":{\"robust-predicates\":\"3.0.2\"}},\"delayed-stream@1.0.0\":{},\"dequal@2.0.3\":{},\"detect-indent@6.1.0\":{},\"detect-libc@2.1.2\":{},\"devlop@1.1.0\":{\"dependencies\":{\"dequal\":\"2.0.3\"}},\"diff@8.0.2\":{},\"dir-glob@3.0.1\":{\"dependencies\":{\"path-type\":\"4.0.0\"}},\"dom-accessibility-api@0.5.16\":{},\"dom-serializer@2.0.0\":{\"dependencies\":{\"domelementtype\":\"2.3.0\",\"domhandler\":\"5.0.3\",\"entities\":\"4.5.0\"}},\"domelementtype@2.3.0\":{},\"domhandler@5.0.3\":{\"dependencies\":{\"domelementtype\":\"2.3.0\"}},\"dompurify@3.3.1\":{\"optionalDependencies\":{\"@types/trusted-types\":\"2.0.7\"}},\"domutils@3.2.2\":{\"dependencies\":{\"dom-serializer\":\"2.0.0\",\"domelementtype\":\"2.3.0\",\"domhandler\":\"5.0.3\"}},\"dotenv-expand@11.0.7\":{\"dependencies\":{\"dotenv\":\"16.5.0\"}},\"dotenv@10.0.0\":{},\"dotenv@16.4.7\":{},\"dotenv@16.5.0\":{},\"dunder-proto@1.0.1\":{\"dependencies\":{\"call-bind-apply-helpers\":\"1.0.2\",\"es-errors\":\"1.3.0\",\"gopd\":\"1.2.0\"}},\"eastasianwidth@0.2.0\":{},\"edge-paths@3.0.5\":{\"dependencies\":{\"@types/which\":\"2.0.2\",\"which\":\"2.0.2\"},\"optional\":true},\"edgedriver@5.6.1\":{\"dependencies\":{\"@wdio/logger\":\"8.38.0\",\"@zip.js/zip.js\":\"2.8.26\",\"decamelize\":\"6.0.1\",\"edge-paths\":\"3.0.5\",\"fast-xml-parser\":\"4.5.6\",\"node-fetch\":\"3.3.2\",\"which\":\"4.0.0\"},\"optional\":true},\"electron-to-chromium@1.5.211\":{},\"electron-to-chromium@1.5.352\":{\"optional\":true},\"emoji-regex@8.0.0\":{},\"emoji-regex@9.2.2\":{},\"encoding-sniffer@0.2.1\":{\"dependencies\":{\"iconv-lite\":\"0.6.3\",\"whatwg-encoding\":\"3.1.1\"}},\"end-of-stream@1.4.5\":{\"dependencies\":{\"once\":\"1.4.0\"}},\"enhanced-resolve@5.21.0\":{\"dependencies\":{\"graceful-fs\":\"4.2.11\",\"tapable\":\"2.3.3\"}},\"enquirer@2.3.6\":{\"dependencies\":{\"ansi-colors\":\"4.1.3\"}},\"enquirer@2.4.1\":{\"dependencies\":{\"ansi-colors\":\"4.1.
3\",\"strip-ansi\":\"6.0.1\"}},\"entities@4.5.0\":{},\"entities@6.0.1\":{},\"entities@7.0.1\":{\"optional\":true},\"error-stack-parser-es@1.0.5\":{},\"es-define-property@1.0.1\":{},\"es-errors@1.3.0\":{},\"es-module-lexer@1.7.0\":{},\"es-module-lexer@2.1.0\":{},\"es-object-atoms@1.1.1\":{\"dependencies\":{\"es-errors\":\"1.3.0\"}},\"es-set-tostringtag@2.1.0\":{\"dependencies\":{\"es-errors\":\"1.3.0\",\"get-intrinsic\":\"1.3.0\",\"has-tostringtag\":\"1.0.2\",\"hasown\":\"2.0.2\"}},\"esbuild@0.25.12\":{\"optionalDependencies\":{\"@esbuild/aix-ppc64\":\"0.25.12\",\"@esbuild/android-arm\":\"0.25.12\",\"@esbuild/android-arm64\":\"0.25.12\",\"@esbuild/android-x64\":\"0.25.12\",\"@esbuild/darwin-arm64\":\"0.25.12\",\"@esbuild/darwin-x64\":\"0.25.12\",\"@esbuild/freebsd-arm64\":\"0.25.12\",\"@esbuild/freebsd-x64\":\"0.25.12\",\"@esbuild/linux-arm\":\"0.25.12\",\"@esbuild/linux-arm64\":\"0.25.12\",\"@esbuild/linux-ia32\":\"0.25.12\",\"@esbuild/linux-loong64\":\"0.25.12\",\"@esbuild/linux-mips64el\":\"0.25.12\",\"@esbuild/linux-ppc64\":\"0.25.12\",\"@esbuild/linux-riscv64\":\"0.25.12\",\"@esbuild/linux-s390x\":\"0.25.12\",\"@esbuild/linux-x64\":\"0.25.12\",\"@esbuild/netbsd-arm64\":\"0.25.12\",\"@esbuild/netbsd-x64\":\"0.25.12\",\"@esbuild/openbsd-arm64\":\"0.25.12\",\"@esbuild/openbsd-x64\":\"0.25.12\",\"@esbuild/openharmony-arm64\":\"0.25.12\",\"@esbuild/sunos-x64\":\"0.25.12\",\"@esbuild/win32-arm64\":\"0.25.12\",\"@esbuild/win32-ia32\":\"0.25.12\",\"@esbuild/win32-x64\":\"0.25.12\"}},\"esbuild@0.27.3\":{\"optionalDependencies\":{\"@esbuild/aix-ppc64\":\"0.27.3\",\"@esbuild/android-arm\":\"0.27.3\",\"@esbuild/android-arm64\":\"0.27.3\",\"@esbuild/android-x64\":\"0.27.3\",\"@esbuild/darwin-arm64\":\"0.27.3\",\"@esbuild/darwin-x64\":\"0.27.3\",\"@esbuild/freebsd-arm64\":\"0.27.3\",\"@esbuild/freebsd-x64\":\"0.27.3\",\"@esbuild/linux-arm\":\"0.27.3\",\"@esbuild/linux-arm64\":\"0.27.3\",\"@esbuild/linux-ia32\":\"0.27.3\",\"@esbuild/linux-loong64\":\"0.27.3\",\"@esbuild/linux-
mips64el\":\"0.27.3\",\"@esbuild/linux-ppc64\":\"0.27.3\",\"@esbuild/linux-riscv64\":\"0.27.3\",\"@esbuild/linux-s390x\":\"0.27.3\",\"@esbuild/linux-x64\":\"0.27.3\",\"@esbuild/netbsd-arm64\":\"0.27.3\",\"@esbuild/netbsd-x64\":\"0.27.3\",\"@esbuild/openbsd-arm64\":\"0.27.3\",\"@esbuild/openbsd-x64\":\"0.27.3\",\"@esbuild/openharmony-arm64\":\"0.27.3\",\"@esbuild/sunos-x64\":\"0.27.3\",\"@esbuild/win32-arm64\":\"0.27.3\",\"@esbuild/win32-ia32\":\"0.27.3\",\"@esbuild/win32-x64\":\"0.27.3\"}},\"escalade@3.2.0\":{},\"escape-string-regexp@1.0.5\":{},\"escape-string-regexp@5.0.0\":{},\"escodegen@2.1.0\":{\"dependencies\":{\"esprima\":\"4.0.1\",\"estraverse\":\"5.3.0\",\"esutils\":\"2.0.3\"},\"optionalDependencies\":{\"source-map\":\"0.6.1\"},\"optional\":true},\"eslint-scope@5.1.1\":{\"dependencies\":{\"esrecurse\":\"4.3.0\",\"estraverse\":\"4.3.0\"},\"optional\":true},\"esprima@4.0.1\":{},\"esrecurse@4.3.0\":{\"dependencies\":{\"estraverse\":\"5.3.0\"},\"optional\":true},\"estraverse@4.3.0\":{\"optional\":true},\"estraverse@5.3.0\":{\"optional\":true},\"estree-walker@3.0.3\":{\"dependencies\":{\"@types/estree\":\"1.0.8\"}},\"esutils@2.0.3\":{\"optional\":true},\"event-target-shim@5.0.1\":{\"optional\":true},\"events-universal@1.0.1\":{\"dependencies\":{\"bare-events\":\"2.8.2\"},\"transitivePeerDependencies\":[\"bare-abort-controller\"],\"optional\":true},\"events@3.3.0\":{\"optional\":true},\"expand-template@2.0.3\":{},\"expect-type@1.2.2\":{},\"expect-type@1.3.0\":{},\"exsolve@1.0.8\":{},\"extend@3.0.2\":{},\"extendable-error@0.1.7\":{},\"extract-zip@2.0.1\":{\"dependencies\":{\"debug\":\"4.4.3\",\"get-stream\":\"5.2.0\",\"yauzl\":\"2.10.0\"},\"optionalDependencies\":{\"@types/yauzl\":\"2.10.3\"},\"transitivePeerDependencies\":[\"supports-color\"],\"optional\":true},\"fast-deep-equal@2.0.1\":{\"optional\":true},\"fast-deep-equal@3.1.3\":{},\"fast-fifo@1.3.2\":{\"optional\":true},\"fast-glob@3.3.3\":{\"dependencies\":{\"@nodelib/fs.stat\":\"2.0.5\",\"@nodelib/fs.walk\":
\"1.2.8\",\"glob-parent\":\"5.1.2\",\"merge2\":\"1.4.1\",\"micromatch\":\"4.0.8\"}},\"fast-uri@3.0.3\":{},\"fast-uri@3.1.2\":{\"optional\":true},\"fast-xml-parser@4.5.6\":{\"dependencies\":{\"strnum\":\"1.1.2\"},\"optional\":true},\"fastq@1.17.1\":{\"dependencies\":{\"reusify\":\"1.0.4\"}},\"fault@2.0.1\":{\"dependencies\":{\"format\":\"0.2.2\"}},\"fd-slicer@1.1.0\":{\"dependencies\":{\"pend\":\"1.2.0\"},\"optional\":true},\"fdir@6.5.0(picomatch@4.0.4)\":{\"optionalDependencies\":{\"picomatch\":\"4.0.4\"}},\"fetch-blob@3.2.0\":{\"dependencies\":{\"node-domexception\":\"1.0.0\",\"web-streams-polyfill\":\"3.3.3\"},\"optional\":true},\"fetchdts@0.1.7\":{},\"fflate@0.4.8\":{},\"figures@3.2.0\":{\"dependencies\":{\"escape-string-regexp\":\"1.0.5\"}},\"file-uri-to-path@1.0.0\":{},\"fill-range@7.1.1\":{\"dependencies\":{\"to-regex-range\":\"5.0.1\"}},\"find-up@4.1.0\":{\"dependencies\":{\"locate-path\":\"5.0.0\",\"path-exists\":\"4.0.0\"}},\"flat@5.0.2\":{},\"follow-redirects@1.15.11\":{},\"foreground-child@3.3.1\":{\"dependencies\":{\"cross-spawn\":\"7.0.6\",\"signal-exit\":\"4.1.0\"}},\"form-data@4.0.4\":{\"dependencies\":{\"asynckit\":\"0.4.0\",\"combined-stream\":\"1.0.8\",\"es-set-tostringtag\":\"2.1.0\",\"hasown\":\"2.0.2\",\"mime-types\":\"2.1.35\"}},\"format@0.2.2\":{},\"formdata-polyfill@4.0.10\":{\"dependencies\":{\"fetch-blob\":\"3.2.0\"},\"optional\":true},\"front-matter@4.0.2\":{\"dependencies\":{\"js-yaml\":\"3.14.1\"}},\"fs-constants@1.0.0\":{},\"fs-extra@11.3.1\":{\"dependencies\":{\"graceful-fs\":\"4.2.11\",\"jsonfile\":\"6.2.0\",\"universalify\":\"2.0.1\"}},\"fs-extra@7.0.1\":{\"dependencies\":{\"graceful-fs\":\"4.2.11\",\"jsonfile\":\"4.0.0\",\"universalify\":\"0.1.2\"}},\"fs-extra@8.1.0\":{\"dependencies\":{\"graceful-fs\":\"4.2.11\",\"jsonfile\":\"4.0.0\",\"universalify\":\"0.1.2\"}},\"fs-minipass@2.1.0\":{\"dependencies\":{\"minipass\":\"3.3.6\"}},\"fsevents@2.3.2\":{\"optional\":true},\"fsevents@2.3.3\":{\"optional\":true},\"function-bind@1.1.2\":{},
\"geckodriver@4.5.1\":{\"dependencies\":{\"@wdio/logger\":\"9.1.3\",\"@zip.js/zip.js\":\"2.8.26\",\"decamelize\":\"6.0.1\",\"http-proxy-agent\":\"7.0.2\",\"https-proxy-agent\":\"7.0.6\",\"node-fetch\":\"3.3.2\",\"tar-fs\":\"3.1.2\",\"which\":\"4.0.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\",\"supports-color\"],\"optional\":true},\"gensync@1.0.0-beta.2\":{},\"get-caller-file@2.0.5\":{},\"get-intrinsic@1.3.0\":{\"dependencies\":{\"call-bind-apply-helpers\":\"1.0.2\",\"es-define-property\":\"1.0.1\",\"es-errors\":\"1.3.0\",\"es-object-atoms\":\"1.1.1\",\"function-bind\":\"1.1.2\",\"get-proto\":\"1.0.1\",\"gopd\":\"1.2.0\",\"has-symbols\":\"1.1.0\",\"hasown\":\"2.0.2\",\"math-intrinsics\":\"1.1.0\"}},\"get-port@7.2.0\":{\"optional\":true},\"get-proto@1.0.1\":{\"dependencies\":{\"dunder-proto\":\"1.0.1\",\"es-object-atoms\":\"1.1.1\"}},\"get-stream@5.2.0\":{\"dependencies\":{\"pump\":\"3.0.4\"},\"optional\":true},\"get-tsconfig@4.14.0\":{\"dependencies\":{\"resolve-pkg-maps\":\"1.0.0\"},\"optional\":true},\"get-uri@6.0.5\":{\"dependencies\":{\"basic-ftp\":\"5.3.1\",\"data-uri-to-buffer\":\"6.0.2\",\"debug\":\"4.4.3\"},\"transitivePeerDependencies\":[\"supports-color\"],\"optional\":true},\"github-from-package@0.0.0\":{},\"github-slugger@2.0.0\":{},\"glob-parent@5.1.2\":{\"dependencies\":{\"is-glob\":\"4.0.3\"}},\"glob-to-regexp@0.4.1\":{\"optional\":true},\"glob@10.4.5\":{\"dependencies\":{\"foreground-child\":\"3.3.1\",\"jackspeak\":\"3.4.3\",\"minimatch\":\"9.0.5\",\"minipass\":\"7.1.2\",\"package-json-from-dist\":\"1.0.1\",\"path-scurry\":\"1.11.1\"}},\"glob@10.5.0\":{\"dependencies\":{\"foreground-child\":\"3.3.1\",\"jackspeak\":\"3.4.3\",\"minimatch\":\"9.0.9\",\"minipass\":\"7.1.3\",\"package-json-from-dist\":\"1.0.1\",\"path-scurry\":\"1.11.1\"},\"optional\":true},\"globals@15.15.0\":{},\"globby@11.1.0\":{\"dependencies\":{\"array-union\":\"2.1.0\",\"dir-glob\":\"3.0.1\",\"fast-glob\":\"3.3.3\",\"ignore\":\"5
.3.2\",\"merge2\":\"1.4.1\",\"slash\":\"3.0.0\"}},\"gopd@1.2.0\":{},\"graceful-fs@4.2.11\":{},\"grapheme-splitter@1.0.4\":{\"optional\":true},\"graphql@16.14.0\":{\"optional\":true},\"h3@2.0.1-rc.20\":{\"dependencies\":{\"rou3\":\"0.8.1\",\"srvx\":\"0.11.15\"}},\"hachure-fill@0.5.2\":{},\"happy-dom@18.0.1\":{\"dependencies\":{\"@types/node\":\"20.19.39\",\"@types/whatwg-mimetype\":\"3.0.2\",\"whatwg-mimetype\":\"3.0.0\"},\"optional\":true},\"has-flag@4.0.0\":{},\"has-symbols@1.1.0\":{},\"has-tostringtag@1.0.2\":{\"dependencies\":{\"has-symbols\":\"1.1.0\"}},\"hasown@2.0.2\":{\"dependencies\":{\"function-bind\":\"1.1.2\"}},\"hast-util-embedded@3.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-is-element\":\"3.0.0\"}},\"hast-util-from-html@2.0.3\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"devlop\":\"1.1.0\",\"hast-util-from-parse5\":\"8.0.3\",\"parse5\":\"7.3.0\",\"vfile\":\"6.0.3\",\"vfile-message\":\"4.0.2\"}},\"hast-util-from-parse5@8.0.3\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/unist\":\"3.0.3\",\"devlop\":\"1.1.0\",\"hastscript\":\"9.0.1\",\"property-information\":\"7.1.0\",\"vfile\":\"6.0.3\",\"vfile-location\":\"5.0.3\",\"web-namespaces\":\"2.0.1\"}},\"hast-util-has-property@3.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hast-util-heading-rank@3.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hast-util-is-body-ok-link@3.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hast-util-is-element@3.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hast-util-minify-whitespace@1.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-embedded\":\"3.0.0\",\"hast-util-is-element\":\"3.0.0\",\"hast-util-whitespace\":\"3.0.0\",\"unist-util-is\":\"6.0.0\"}},\"hast-util-parse-selector@4.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hast-util-phrasing@3.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-embedded\":\"3.0.0\",\"hast-util-has-property\":\"3.0.0\",\"hast-util-is-body-ok-link\
":\"3.0.1\",\"hast-util-is-element\":\"3.0.0\"}},\"hast-util-raw@9.1.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/unist\":\"3.0.3\",\"@ungap/structured-clone\":\"1.2.1\",\"hast-util-from-parse5\":\"8.0.3\",\"hast-util-to-parse5\":\"8.0.0\",\"html-void-elements\":\"3.0.0\",\"mdast-util-to-hast\":\"13.2.0\",\"parse5\":\"7.3.0\",\"unist-util-position\":\"5.0.0\",\"unist-util-visit\":\"5.0.0\",\"vfile\":\"6.0.3\",\"web-namespaces\":\"2.0.1\",\"zwitch\":\"2.0.4\"}},\"hast-util-sanitize@5.0.2\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@ungap/structured-clone\":\"1.2.1\",\"unist-util-position\":\"5.0.0\"}},\"hast-util-to-html@9.0.5\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/unist\":\"3.0.3\",\"ccount\":\"2.0.1\",\"comma-separated-tokens\":\"2.0.3\",\"hast-util-whitespace\":\"3.0.0\",\"html-void-elements\":\"3.0.0\",\"mdast-util-to-hast\":\"13.2.0\",\"property-information\":\"7.1.0\",\"space-separated-tokens\":\"2.0.2\",\"stringify-entities\":\"4.0.4\",\"zwitch\":\"2.0.4\"}},\"hast-util-to-mdast@10.1.2\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/mdast\":\"4.0.4\",\"@ungap/structured-clone\":\"1.2.1\",\"hast-util-phrasing\":\"3.0.1\",\"hast-util-to-html\":\"9.0.5\",\"hast-util-to-text\":\"4.0.2\",\"hast-util-whitespace\":\"3.0.0\",\"mdast-util-phrasing\":\"4.1.0\",\"mdast-util-to-hast\":\"13.2.0\",\"mdast-util-to-string\":\"4.0.0\",\"rehype-minify-whitespace\":\"6.0.2\",\"trim-trailing-lines\":\"2.1.0\",\"unist-util-position\":\"5.0.0\",\"unist-util-visit\":\"5.0.0\"}},\"hast-util-to-parse5@8.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"comma-separated-tokens\":\"2.0.3\",\"devlop\":\"1.1.0\",\"property-information\":\"6.5.0\",\"space-separated-tokens\":\"2.0.2\",\"web-namespaces\":\"2.0.1\",\"zwitch\":\"2.0.4\"}},\"hast-util-to-string@3.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hast-util-to-text@4.0.2\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/unist\":\"3.0.3\",\"hast-util-is-element\":\"3.0.0\",\"
unist-util-find-after\":\"5.0.0\"}},\"hast-util-whitespace@3.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hastscript@9.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"comma-separated-tokens\":\"2.0.3\",\"hast-util-parse-selector\":\"4.0.0\",\"property-information\":\"7.1.0\",\"space-separated-tokens\":\"2.0.2\"}},\"headers-polyfill@4.0.3\":{\"optional\":true},\"highlight.js@11.11.1\":{},\"html-encoding-sniffer@4.0.0\":{\"dependencies\":{\"whatwg-encoding\":\"3.1.1\"}},\"html-escaper@2.0.2\":{},\"html-void-elements@3.0.0\":{},\"htmlfy@0.3.2\":{\"optional\":true},\"htmlparser2@10.0.0\":{\"dependencies\":{\"domelementtype\":\"2.3.0\",\"domhandler\":\"5.0.3\",\"domutils\":\"3.2.2\",\"entities\":\"6.0.1\"}},\"htmlparser2@10.1.0\":{\"dependencies\":{\"domelementtype\":\"2.3.0\",\"domhandler\":\"5.0.3\",\"domutils\":\"3.2.2\",\"entities\":\"7.0.1\"},\"optional\":true},\"http-proxy-agent@7.0.2\":{\"dependencies\":{\"agent-base\":\"7.1.3\",\"debug\":\"4.4.3\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"https-proxy-agent@7.0.2\":{\"dependencies\":{\"agent-base\":\"7.1.3\",\"debug\":\"4.4.3\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"https-proxy-agent@7.0.6\":{\"dependencies\":{\"agent-base\":\"7.1.3\",\"debug\":\"4.4.3\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"human-id@4.1.1\":{},\"iconv-lite@0.6.3\":{\"dependencies\":{\"safer-buffer\":\"2.1.2\"}},\"ieee754@1.2.1\":{},\"ignore@5.3.2\":{},\"immediate@3.0.6\":{\"optional\":true},\"immutable@5.1.5\":{\"optional\":true},\"import-meta-resolve@4.2.0\":{\"optional\":true},\"inherits@2.0.4\":{},\"ini@1.3.8\":{},\"ini@4.1.3\":{},\"internmap@1.0.1\":{},\"internmap@2.0.3\":{},\"ip-address@10.2.0\":{\"optional\":true},\"is-binary-path@2.1.0\":{\"dependencies\":{\"binary-extensions\":\"2.3.0\"}},\"is-docker@2.2.1\":{},\"is-extglob@2.1.1\":{},\"is-fullwidth-code-point@3.0.0\":{},\"is-glob@4.0.3\":{\"dependencies\":{\"is-extglob\":\"2.1.1\"}},\"is-interactive@1.0.0\":{},\"is-node-proc
ess@1.2.0\":{\"optional\":true},\"is-number@7.0.0\":{},\"is-plain-obj@4.1.0\":{},\"is-potential-custom-element-name@1.0.1\":{},\"is-stream@2.0.1\":{\"optional\":true},\"is-subdir@1.2.0\":{\"dependencies\":{\"better-path-resolve\":\"1.0.0\"}},\"is-unicode-supported@0.1.0\":{},\"is-windows@1.0.2\":{},\"is-wsl@2.2.0\":{\"dependencies\":{\"is-docker\":\"2.2.1\"}},\"isarray@1.0.0\":{\"optional\":true},\"isbot@5.1.28\":{},\"isexe@2.0.0\":{},\"isexe@3.1.5\":{\"optional\":true},\"istanbul-lib-coverage@3.2.2\":{},\"istanbul-lib-report@3.0.1\":{\"dependencies\":{\"istanbul-lib-coverage\":\"3.2.2\",\"make-dir\":\"4.0.0\",\"supports-color\":\"7.2.0\"}},\"istanbul-lib-source-maps@5.0.6\":{\"dependencies\":{\"@jridgewell/trace-mapping\":\"0.3.31\",\"debug\":\"4.4.3\",\"istanbul-lib-coverage\":\"3.2.2\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"istanbul-reports@3.2.0\":{\"dependencies\":{\"html-escaper\":\"2.0.2\",\"istanbul-lib-report\":\"3.0.1\"}},\"jackspeak@3.4.3\":{\"dependencies\":{\"@isaacs/cliui\":\"8.0.2\"},\"optionalDependencies\":{\"@pkgjs/parseargs\":\"0.11.0\"}},\"jest-diff@30.1.1\":{\"dependencies\":{\"@jest/diff-sequences\":\"30.0.1\",\"@jest/get-type\":\"30.1.0\",\"chalk\":\"4.1.2\",\"pretty-format\":\"30.0.5\"}},\"jest-worker@27.5.1\":{\"dependencies\":{\"@types/node\":\"22.19.17\",\"merge-stream\":\"2.0.0\",\"supports-color\":\"8.1.1\"},\"optional\":true},\"jiti@2.6.1\":{},\"js-tokens@10.0.0\":{},\"js-tokens@4.0.0\":{},\"js-tokens@9.0.1\":{},\"js-yaml@3.14.1\":{\"dependencies\":{\"argparse\":\"1.0.10\",\"esprima\":\"4.0.1\"}},\"js-yaml@4.1.1\":{\"dependencies\":{\"argparse\":\"2.0.1\"}},\"jsdom@26.1.0\":{\"dependencies\":{\"cssstyle\":\"4.3.1\",\"data-urls\":\"5.0.0\",\"decimal.js\":\"10.6.0\",\"html-encoding-sniffer\":\"4.0.0\",\"http-proxy-agent\":\"7.0.2\",\"https-proxy-agent\":\"7.0.6\",\"is-potential-custom-element-name\":\"1.0.1\",\"nwsapi\":\"2.2.20\",\"parse5\":\"7.3.0\",\"rrweb-cssom\":\"0.8.0\",\"saxes\":\"6.0.0\",\"symbol-tree\":\"3.2.4\"
,\"tough-cookie\":\"5.1.2\",\"w3c-xmlserializer\":\"5.0.0\",\"webidl-conversions\":\"7.0.0\",\"whatwg-encoding\":\"3.1.1\",\"whatwg-mimetype\":\"4.0.0\",\"whatwg-url\":\"14.2.0\",\"ws\":\"8.18.3\",\"xml-name-validator\":\"5.0.0\"},\"transitivePeerDependencies\":[\"bufferutil\",\"supports-color\",\"utf-8-validate\"]},\"jsdom@27.3.0(postcss@8.5.14)\":{\"dependencies\":{\"@acemir/cssom\":\"0.9.28\",\"@asamuzakjp/dom-selector\":\"6.7.6\",\"cssstyle\":\"5.3.4(postcss@8.5.14)\",\"data-urls\":\"6.0.0\",\"decimal.js\":\"10.6.0\",\"html-encoding-sniffer\":\"4.0.0\",\"http-proxy-agent\":\"7.0.2\",\"https-proxy-agent\":\"7.0.6\",\"is-potential-custom-element-name\":\"1.0.1\",\"parse5\":\"8.0.0\",\"saxes\":\"6.0.0\",\"symbol-tree\":\"3.2.4\",\"tough-cookie\":\"6.0.0\",\"w3c-xmlserializer\":\"5.0.0\",\"webidl-conversions\":\"8.0.0\",\"whatwg-encoding\":\"3.1.1\",\"whatwg-mimetype\":\"4.0.0\",\"whatwg-url\":\"15.1.0\",\"ws\":\"8.18.3\",\"xml-name-validator\":\"5.0.0\"},\"transitivePeerDependencies\":[\"bufferutil\",\"postcss\",\"supports-color\",\"utf-8-validate\"]},\"jsesc@3.1.0\":{},\"json-parse-even-better-errors@2.3.1\":{\"optional\":true},\"json-schema-to-ts@3.1.1\":{\"dependencies\":{\"@babel/runtime\":\"7.28.4\",\"ts-algebra\":\"2.0.0\"}},\"json-schema-traverse@1.0.0\":{},\"json5@2.2.3\":{},\"jsonc-parser@3.2.0\":{},\"jsonfile@4.0.0\":{\"optionalDependencies\":{\"graceful-fs\":\"4.2.11\"}},\"jsonfile@6.2.0\":{\"dependencies\":{\"universalify\":\"2.0.1\"},\"optionalDependencies\":{\"graceful-fs\":\"4.2.11\"}},\"jszip@3.10.1\":{\"dependencies\":{\"lie\":\"3.3.0\",\"pako\":\"1.0.11\",\"readable-stream\":\"2.3.8\",\"setimmediate\":\"1.0.5\"},\"optional\":true},\"katex@0.16.22\":{\"dependencies\":{\"commander\":\"8.3.0\"}},\"khroma@2.1.0\":{},\"kleur@4.1.5\":{},\"kolorist@1.8.0\":{},\"kysely@0.28.7\":{},\"langium@3.3.1\":{\"dependencies\":{\"chevrotain\":\"11.0.3\",\"chevrotain-allstar\":\"0.3.1(chevrotain@11.0.3)\",\"vscode-languageserver\":\"9.0.1\",\"vscode-languageserver-te
xtdocument\":\"1.0.12\",\"vscode-uri\":\"3.0.8\"}},\"layout-base@1.0.2\":{},\"layout-base@2.0.1\":{},\"lazystream@1.0.1\":{\"dependencies\":{\"readable-stream\":\"2.3.8\"},\"optional\":true},\"lie@3.3.0\":{\"dependencies\":{\"immediate\":\"3.0.6\"},\"optional\":true},\"lightningcss-android-arm64@1.32.0\":{\"optional\":true},\"lightningcss-darwin-arm64@1.32.0\":{\"optional\":true},\"lightningcss-darwin-x64@1.32.0\":{\"optional\":true},\"lightningcss-freebsd-x64@1.32.0\":{\"optional\":true},\"lightningcss-linux-arm-gnueabihf@1.32.0\":{\"optional\":true},\"lightningcss-linux-arm64-gnu@1.32.0\":{\"optional\":true},\"lightningcss-linux-arm64-musl@1.32.0\":{\"optional\":true},\"lightningcss-linux-x64-gnu@1.32.0\":{\"optional\":true},\"lightningcss-linux-x64-musl@1.32.0\":{\"optional\":true},\"lightningcss-win32-arm64-msvc@1.32.0\":{\"optional\":true},\"lightningcss-win32-x64-msvc@1.32.0\":{\"optional\":true},\"lightningcss@1.32.0\":{\"dependencies\":{\"detect-libc\":\"2.1.2\"},\"optionalDependencies\":{\"lightningcss-android-arm64\":\"1.32.0\",\"lightningcss-darwin-arm64\":\"1.32.0\",\"lightningcss-darwin-x64\":\"1.32.0\",\"lightningcss-freebsd-x64\":\"1.32.0\",\"lightningcss-linux-arm-gnueabihf\":\"1.32.0\",\"lightningcss-linux-arm64-gnu\":\"1.32.0\",\"lightningcss-linux-arm64-musl\":\"1.32.0\",\"lightningcss-linux-x64-gnu\":\"1.32.0\",\"lightningcss-linux-x64-musl\":\"1.32.0\",\"lightningcss-win32-arm64-msvc\":\"1.32.0\",\"lightningcss-win32-x64-msvc\":\"1.32.0\"}},\"lines-and-columns@2.0.3\":{},\"loader-runner@4.3.2\":{\"optional\":true},\"local-pkg@1.1.2\":{\"dependencies\":{\"mlly\":\"1.8.0\",\"pkg-types\":\"2.3.0\",\"quansync\":\"0.2.11\"}},\"locate-app@2.5.0\":{\"dependencies\":{\"@promptbook/utils\":\"0.69.5\",\"type-fest\":\"4.26.0\",\"userhome\":\"1.0.1\"},\"optional\":true},\"locate-path@5.0.0\":{\"dependencies\":{\"p-locate\":\"4.1.0\"}},\"lodash-es@4.17.21\":{},\"lodash.clonedeep@4.5.0\":{\"optional\":true},\"lodash.startcase@4.4.0\":{},\"lodash.zip@4.2.0\":{
\"optional\":true},\"lodash@4.18.1\":{\"optional\":true},\"log-symbols@4.1.0\":{\"dependencies\":{\"chalk\":\"4.1.2\",\"is-unicode-supported\":\"0.1.0\"}},\"loglevel-plugin-prefix@0.8.4\":{\"optional\":true},\"loglevel@1.9.2\":{\"optional\":true},\"long@5.3.2\":{},\"longest-streak@3.1.0\":{},\"loupe@3.2.1\":{},\"lowlight@3.3.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"devlop\":\"1.1.0\",\"highlight.js\":\"11.11.1\"}},\"lru-cache@10.4.3\":{},\"lru-cache@11.2.4\":{},\"lru-cache@5.1.1\":{\"dependencies\":{\"yallist\":\"3.1.1\"}},\"lru-cache@7.18.3\":{\"optional\":true},\"lucide-react@0.544.0(react@19.2.0)\":{\"dependencies\":{\"react\":\"19.2.0\"}},\"lz-string@1.5.0\":{},\"magic-string@0.30.18\":{\"dependencies\":{\"@jridgewell/sourcemap-codec\":\"1.5.5\"}},\"magic-string@0.30.21\":{\"dependencies\":{\"@jridgewell/sourcemap-codec\":\"1.5.5\"}},\"magicast@0.3.5\":{\"dependencies\":{\"@babel/parser\":\"7.28.5\",\"@babel/types\":\"7.28.5\",\"source-map-js\":\"1.2.1\"}},\"magicast@0.5.2\":{\"dependencies\":{\"@babel/parser\":\"7.29.3\",\"@babel/types\":\"7.29.0\",\"source-map-js\":\"1.2.1\"}},\"make-dir@4.0.0\":{\"dependencies\":{\"semver\":\"7.7.3\"}},\"markdown-table@3.0.4\":{},\"marked@16.4.2\":{},\"math-intrinsics@1.1.0\":{},\"mdast-util-find-and-replace@3.0.2\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"escape-string-regexp\":\"5.0.0\",\"unist-util-is\":\"6.0.0\",\"unist-util-visit-parents\":\"6.0.1\"}},\"mdast-util-from-markdown@2.0.2\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"@types/unist\":\"3.0.3\",\"decode-named-character-reference\":\"1.0.2\",\"devlop\":\"1.1.0\",\"mdast-util-to-string\":\"4.0.0\",\"micromark\":\"4.0.1\",\"micromark-util-decode-numeric-character-reference\":\"2.0.2\",\"micromark-util-decode-string\":\"2.0.1\",\"micromark-util-normalize-identifier\":\"2.0.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\",\"unist-util-stringify-position\":\"4.0.0\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"
mdast-util-frontmatter@2.0.1\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"devlop\":\"1.1.0\",\"escape-string-regexp\":\"5.0.0\",\"mdast-util-from-markdown\":\"2.0.2\",\"mdast-util-to-markdown\":\"2.1.2\",\"micromark-extension-frontmatter\":\"2.0.0\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"mdast-util-gfm-autolink-literal@2.0.1\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"ccount\":\"2.0.1\",\"devlop\":\"1.1.0\",\"mdast-util-find-and-replace\":\"3.0.2\",\"micromark-util-character\":\"2.1.1\"}},\"mdast-util-gfm-footnote@2.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"devlop\":\"1.1.0\",\"mdast-util-from-markdown\":\"2.0.2\",\"mdast-util-to-markdown\":\"2.1.2\",\"micromark-util-normalize-identifier\":\"2.0.1\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"mdast-util-gfm-strikethrough@2.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"mdast-util-from-markdown\":\"2.0.2\",\"mdast-util-to-markdown\":\"2.1.2\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"mdast-util-gfm-table@2.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"devlop\":\"1.1.0\",\"markdown-table\":\"3.0.4\",\"mdast-util-from-markdown\":\"2.0.2\",\"mdast-util-to-markdown\":\"2.1.2\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"mdast-util-gfm-task-list-item@2.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"devlop\":\"1.1.0\",\"mdast-util-from-markdown\":\"2.0.2\",\"mdast-util-to-markdown\":\"2.1.2\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"mdast-util-gfm@3.0.0\":{\"dependencies\":{\"mdast-util-from-markdown\":\"2.0.2\",\"mdast-util-gfm-autolink-literal\":\"2.0.1\",\"mdast-util-gfm-footnote\":\"2.0.0\",\"mdast-util-gfm-strikethrough\":\"2.0.0\",\"mdast-util-gfm-table\":\"2.0.0\",\"mdast-util-gfm-task-list-item\":\"2.0.0\",\"mdast-util-to-markdown\":\"2.1.2\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"mdast-util-phrasing@4.1.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"unist-util-is\":\"6.0.0\"}},\"
mdast-util-to-hast@13.2.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/mdast\":\"4.0.4\",\"@ungap/structured-clone\":\"1.2.1\",\"devlop\":\"1.1.0\",\"micromark-util-sanitize-uri\":\"2.0.1\",\"trim-lines\":\"3.0.1\",\"unist-util-position\":\"5.0.0\",\"unist-util-visit\":\"5.0.0\",\"vfile\":\"6.0.3\"}},\"mdast-util-to-markdown@2.1.2\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"@types/unist\":\"3.0.3\",\"longest-streak\":\"3.1.0\",\"mdast-util-phrasing\":\"4.1.0\",\"mdast-util-to-string\":\"4.0.0\",\"micromark-util-classify-character\":\"2.0.1\",\"micromark-util-decode-string\":\"2.0.1\",\"unist-util-visit\":\"5.0.0\",\"zwitch\":\"2.0.4\"}},\"mdast-util-to-string@4.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\"}},\"mdn-data@2.12.2\":{},\"merge-stream@2.0.0\":{\"optional\":true},\"merge2@1.4.1\":{},\"mermaid@11.12.1\":{\"dependencies\":{\"@braintree/sanitize-url\":\"7.1.1\",\"@iconify/utils\":\"3.0.2\",\"@mermaid-js/parser\":\"0.6.3\",\"@types/d3\":\"7.4.3\",\"cytoscape\":\"3.30.4\",\"cytoscape-cose-bilkent\":\"4.1.0(cytoscape@3.30.4)\",\"cytoscape-fcose\":\"2.2.0(cytoscape@3.30.4)\",\"d3\":\"7.9.0\",\"d3-sankey\":\"0.12.3\",\"dagre-d3-es\":\"7.0.13\",\"dayjs\":\"1.11.19\",\"dompurify\":\"3.3.1\",\"katex\":\"0.16.22\",\"khroma\":\"2.1.0\",\"lodash-es\":\"4.17.21\",\"marked\":\"16.4.2\",\"roughjs\":\"4.6.6\",\"stylis\":\"4.3.6\",\"ts-dedent\":\"2.2.0\",\"uuid\":\"11.1.0\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"micromark-core-commonmark@2.0.2\":{\"dependencies\":{\"decode-named-character-reference\":\"1.0.2\",\"devlop\":\"1.1.0\",\"micromark-factory-destination\":\"2.0.1\",\"micromark-factory-label\":\"2.0.1\",\"micromark-factory-space\":\"2.0.1\",\"micromark-factory-title\":\"2.0.1\",\"micromark-factory-whitespace\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-chunked\":\"2.0.1\",\"micromark-util-classify-character\":\"2.0.1\",\"micromark-util-html-tag-name\":\"2.0.1\",\"micromark-util-normalize-identifier\":\"2.
0.1\",\"micromark-util-resolve-all\":\"2.0.1\",\"micromark-util-subtokenize\":\"2.0.3\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-frontmatter@2.0.0\":{\"dependencies\":{\"fault\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm-autolink-literal@2.1.0\":{\"dependencies\":{\"micromark-util-character\":\"2.1.1\",\"micromark-util-sanitize-uri\":\"2.0.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm-footnote@2.1.0\":{\"dependencies\":{\"devlop\":\"1.1.0\",\"micromark-core-commonmark\":\"2.0.2\",\"micromark-factory-space\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-normalize-identifier\":\"2.0.1\",\"micromark-util-sanitize-uri\":\"2.0.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm-strikethrough@2.1.0\":{\"dependencies\":{\"devlop\":\"1.1.0\",\"micromark-util-chunked\":\"2.0.1\",\"micromark-util-classify-character\":\"2.0.1\",\"micromark-util-resolve-all\":\"2.0.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm-table@2.1.1\":{\"dependencies\":{\"devlop\":\"1.1.0\",\"micromark-factory-space\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm-tagfilter@2.0.0\":{\"dependencies\":{\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm-task-list-item@2.1.0\":{\"dependencies\":{\"devlop\":\"1.1.0\",\"micromark-factory-space\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm@3.0.0\":{\"dependencies\":{\"micromark-extension-gfm-autolink-literal\":\"2.1.0\",\"micromark-extension-gfm-footnote\":\"2.1.0\",\"micromark-extension-gfm-strikethrough\":\"2.
1.0\",\"micromark-extension-gfm-table\":\"2.1.1\",\"micromark-extension-gfm-tagfilter\":\"2.0.0\",\"micromark-extension-gfm-task-list-item\":\"2.1.0\",\"micromark-util-combine-extensions\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-factory-destination@2.0.1\":{\"dependencies\":{\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-factory-label@2.0.1\":{\"dependencies\":{\"devlop\":\"1.1.0\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-factory-space@2.0.1\":{\"dependencies\":{\"micromark-util-character\":\"2.1.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-factory-title@2.0.1\":{\"dependencies\":{\"micromark-factory-space\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-factory-whitespace@2.0.1\":{\"dependencies\":{\"micromark-factory-space\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-util-character@2.1.1\":{\"dependencies\":{\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-util-chunked@2.0.1\":{\"dependencies\":{\"micromark-util-symbol\":\"2.0.1\"}},\"micromark-util-classify-character@2.0.1\":{\"dependencies\":{\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-util-combine-extensions@2.0.1\":{\"dependencies\":{\"micromark-util-chunked\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-util-decode-numeric-character-reference@2.0.2\":{\"dependencies\":{\"micromark-util-symbol\":\"2.0.1\"}},\"micromark-util-decode-string@2.0.1\":{\"dependencies\":{\"decode-named-character-reference\":\"1.0.2\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-decode-numeric-character-reference\":\"2.0.2\",\"micromark-u
til-symbol\":\"2.0.1\"}},\"micromark-util-encode@2.0.1\":{},\"micromark-util-html-tag-name@2.0.1\":{},\"micromark-util-normalize-identifier@2.0.1\":{\"dependencies\":{\"micromark-util-symbol\":\"2.0.1\"}},\"micromark-util-resolve-all@2.0.1\":{\"dependencies\":{\"micromark-util-types\":\"2.0.1\"}},\"micromark-util-sanitize-uri@2.0.1\":{\"dependencies\":{\"micromark-util-character\":\"2.1.1\",\"micromark-util-encode\":\"2.0.1\",\"micromark-util-symbol\":\"2.0.1\"}},\"micromark-util-subtokenize@2.0.3\":{\"dependencies\":{\"devlop\":\"1.1.0\",\"micromark-util-chunked\":\"2.0.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-util-symbol@2.0.1\":{},\"micromark-util-types@2.0.1\":{},\"micromark@4.0.1\":{\"dependencies\":{\"@types/debug\":\"4.1.12\",\"debug\":\"4.4.3\",\"decode-named-character-reference\":\"1.0.2\",\"devlop\":\"1.1.0\",\"micromark-core-commonmark\":\"2.0.2\",\"micromark-factory-space\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-chunked\":\"2.0.1\",\"micromark-util-combine-extensions\":\"2.0.1\",\"micromark-util-decode-numeric-character-reference\":\"2.0.2\",\"micromark-util-encode\":\"2.0.1\",\"micromark-util-normalize-identifier\":\"2.0.1\",\"micromark-util-resolve-all\":\"2.0.1\",\"micromark-util-sanitize-uri\":\"2.0.1\",\"micromark-util-subtokenize\":\"2.0.3\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"micromatch@4.0.8\":{\"dependencies\":{\"braces\":\"3.0.3\",\"picomatch\":\"2.3.1\"}},\"mime-db@1.52.0\":{},\"mime-types@2.1.35\":{\"dependencies\":{\"mime-db\":\"1.52.0\"}},\"mimic-fn@2.1.0\":{},\"mimic-response@3.1.0\":{},\"miniflare@4.20260504.0\":{\"dependencies\":{\"@cspotcode/source-map-support\":\"0.8.1\",\"sharp\":\"0.34.5\",\"undici\":\"7.24.8\",\"workerd\":\"1.20260504.1\",\"ws\":\"8.18.0\",\"youch\":\"4.1.0-beta.10\"},\"transitivePeerDependencies\":[\"bufferutil\",\"utf-8-validate\"]},\"minimatch@5.1.9\
":{\"dependencies\":{\"brace-expansion\":\"2.1.0\"},\"optional\":true},\"minimatch@9.0.3\":{\"dependencies\":{\"brace-expansion\":\"2.0.2\"}},\"minimatch@9.0.5\":{\"dependencies\":{\"brace-expansion\":\"2.0.2\"}},\"minimatch@9.0.9\":{\"dependencies\":{\"brace-expansion\":\"2.1.0\"},\"optional\":true},\"minimist@1.2.8\":{},\"minipass@3.3.6\":{\"dependencies\":{\"yallist\":\"4.0.0\"}},\"minipass@5.0.0\":{},\"minipass@7.1.2\":{},\"minipass@7.1.3\":{\"optional\":true},\"minizlib@2.1.2\":{\"dependencies\":{\"minipass\":\"3.3.6\",\"yallist\":\"4.0.0\"}},\"mkdirp-classic@0.5.3\":{},\"mkdirp@1.0.4\":{},\"mlly@1.8.0\":{\"dependencies\":{\"acorn\":\"8.16.0\",\"pathe\":\"2.0.3\",\"pkg-types\":\"1.3.1\",\"ufo\":\"1.6.1\"}},\"mri@1.2.0\":{},\"mrmime@2.0.1\":{},\"ms@2.1.3\":{},\"msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3)\":{\"dependencies\":{\"@bundled-es-modules/cookie\":\"2.0.1\",\"@bundled-es-modules/statuses\":\"1.0.1\",\"@bundled-es-modules/tough-cookie\":\"0.1.6\",\"@inquirer/confirm\":\"5.1.21(@types/node@22.15.33)\",\"@mswjs/interceptors\":\"0.39.8\",\"@open-draft/deferred-promise\":\"2.2.0\",\"@open-draft/until\":\"2.1.0\",\"@types/cookie\":\"0.6.0\",\"@types/statuses\":\"2.0.6\",\"graphql\":\"16.14.0\",\"headers-polyfill\":\"4.0.3\",\"is-node-process\":\"1.2.0\",\"outvariant\":\"1.4.3\",\"path-to-regexp\":\"6.3.0\",\"picocolors\":\"1.1.1\",\"strict-event-emitter\":\"0.5.1\",\"type-fest\":\"4.41.0\",\"yargs\":\"17.7.2\"},\"optionalDependencies\":{\"typescript\":\"5.8.3\"},\"transitivePeerDependencies\":[\"@types/node\"],\"optional\":true},\"msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3)\":{\"dependencies\":{\"@bundled-es-modules/cookie\":\"2.0.1\",\"@bundled-es-modules/statuses\":\"1.0.1\",\"@bundled-es-modules/tough-cookie\":\"0.1.6\",\"@inquirer/confirm\":\"5.1.21(@types/node@24.10.2)\",\"@mswjs/interceptors\":\"0.39.8\",\"@open-draft/deferred-promise\":\"2.2.0\",\"@open-draft/until\":\"2.1.0\",\"@types/cookie\":\"0.6.0\",\"@types/statuses\":\"2.0.6\",\"grap
hql\":\"16.14.0\",\"headers-polyfill\":\"4.0.3\",\"is-node-process\":\"1.2.0\",\"outvariant\":\"1.4.3\",\"path-to-regexp\":\"6.3.0\",\"picocolors\":\"1.1.1\",\"strict-event-emitter\":\"0.5.1\",\"type-fest\":\"4.41.0\",\"yargs\":\"17.7.2\"},\"optionalDependencies\":{\"typescript\":\"5.8.3\"},\"transitivePeerDependencies\":[\"@types/node\"],\"optional\":true},\"msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3)\":{\"dependencies\":{\"@bundled-es-modules/cookie\":\"2.0.1\",\"@bundled-es-modules/statuses\":\"1.0.1\",\"@bundled-es-modules/tough-cookie\":\"0.1.6\",\"@inquirer/confirm\":\"5.1.21(@types/node@24.10.2)\",\"@mswjs/interceptors\":\"0.39.8\",\"@open-draft/deferred-promise\":\"2.2.0\",\"@open-draft/until\":\"2.1.0\",\"@types/cookie\":\"0.6.0\",\"@types/statuses\":\"2.0.6\",\"graphql\":\"16.14.0\",\"headers-polyfill\":\"4.0.3\",\"is-node-process\":\"1.2.0\",\"outvariant\":\"1.4.3\",\"path-to-regexp\":\"6.3.0\",\"picocolors\":\"1.1.1\",\"strict-event-emitter\":\"0.5.1\",\"type-fest\":\"4.41.0\",\"yargs\":\"17.7.2\"},\"optionalDependencies\":{\"typescript\":\"5.9.3\"},\"transitivePeerDependencies\":[\"@types/node\"],\"optional\":true},\"mute-stream@2.0.0\":{\"optional\":true},\"nanoid@3.3.11\":{},\"napi-build-utils@2.0.0\":{},\"neo-async@2.6.2\":{\"optional\":true},\"netmask@2.1.1\":{\"optional\":true},\"node-abi@3.89.0\":{\"dependencies\":{\"semver\":\"7.7.3\"}},\"node-domexception@1.0.0\":{\"optional\":true},\"node-fetch@3.3.2\":{\"dependencies\":{\"data-uri-to-buffer\":\"4.0.1\",\"fetch-blob\":\"3.2.0\",\"formdata-polyfill\":\"4.0.10\"},\"optional\":true},\"node-machine-id@1.1.12\":{},\"node-releases@2.0.19\":{},\"node-releases@2.0.38\":{\"optional\":true},\"normalize-path@3.0.0\":{},\"npm-run-path@4.0.1\":{\"dependencies\":{\"path-key\":\"3.1.1\"}},\"nth-check@2.1.1\":{\"dependencies\":{\"boolbase\":\"1.0.0\"}},\"nwsapi@2.2.20\":{},\"nx-cloud@19.1.0\":{\"dependencies\":{\"@nrwl/nx-cloud\":\"19.1.0\",\"axios\":\"1.11.0\",\"chalk\":\"4.1.2\",\"dotenv\":\"10.0.0\",\
"fs-extra\":\"11.3.1\",\"ini\":\"4.1.3\",\"node-machine-id\":\"1.1.12\",\"open\":\"8.4.2\",\"tar\":\"6.2.1\",\"yargs-parser\":\"22.0.0\"},\"transitivePeerDependencies\":[\"debug\"]},\"nx@21.4.1\":{\"dependencies\":{\"@napi-rs/wasm-runtime\":\"0.2.4\",\"@yarnpkg/lockfile\":\"1.1.0\",\"@yarnpkg/parsers\":\"3.0.2\",\"@zkochan/js-yaml\":\"0.0.7\",\"axios\":\"1.11.0\",\"chalk\":\"4.1.2\",\"cli-cursor\":\"3.1.0\",\"cli-spinners\":\"2.6.1\",\"cliui\":\"8.0.1\",\"dotenv\":\"16.4.7\",\"dotenv-expand\":\"11.0.7\",\"enquirer\":\"2.3.6\",\"figures\":\"3.2.0\",\"flat\":\"5.0.2\",\"front-matter\":\"4.0.2\",\"ignore\":\"5.3.2\",\"jest-diff\":\"30.1.1\",\"jsonc-parser\":\"3.2.0\",\"lines-and-columns\":\"2.0.3\",\"minimatch\":\"9.0.3\",\"node-machine-id\":\"1.1.12\",\"npm-run-path\":\"4.0.1\",\"open\":\"8.4.2\",\"ora\":\"5.3.0\",\"resolve.exports\":\"2.0.3\",\"semver\":\"7.7.2\",\"string-width\":\"4.2.3\",\"tar-stream\":\"2.2.0\",\"tmp\":\"0.2.5\",\"tree-kill\":\"1.2.2\",\"tsconfig-paths\":\"4.2.0\",\"tslib\":\"2.8.1\",\"yaml\":\"2.8.1\",\"yargs\":\"17.7.2\",\"yargs-parser\":\"21.1.1\"},\"optionalDependencies\":{\"@nx/nx-darwin-arm64\":\"21.4.1\",\"@nx/nx-darwin-x64\":\"21.4.1\",\"@nx/nx-freebsd-x64\":\"21.4.1\",\"@nx/nx-linux-arm-gnueabihf\":\"21.4.1\",\"@nx/nx-linux-arm64-gnu\":\"21.4.1\",\"@nx/nx-linux-arm64-musl\":\"21.4.1\",\"@nx/nx-linux-x64-gnu\":\"21.4.1\",\"@nx/nx-linux-x64-musl\":\"21.4.1\",\"@nx/nx-win32-arm64-msvc\":\"21.4.1\",\"@nx/nx-win32-x64-msvc\":\"21.4.1\"},\"transitivePeerDependencies\":[\"debug\"]},\"obug@2.1.1\":{},\"once@1.4.0\":{\"dependencies\":{\"wrappy\":\"1.0.2\"}},\"onetime@5.1.2\":{\"dependencies\":{\"mimic-fn\":\"2.1.0\"}},\"oniguruma-parser@0.12.1\":{},\"oniguruma-to-es@4.3.3\":{\"dependencies\":{\"oniguruma-parser\":\"0.12.1\",\"regex\":\"6.0.1\",\"regex-recursion\":\"6.0.2\"}},\"open@8.4.2\":{\"dependencies\":{\"define-lazy-prop\":\"2.0.0\",\"is-docker\":\"2.2.1\",\"is-wsl\":\"2.2.0\"}},\"ora@5.3.0\":{\"dependencies\":{\"bl\":\"4.1.0\",\"chalk\":\"4
.1.2\",\"cli-cursor\":\"3.1.0\",\"cli-spinners\":\"2.9.2\",\"is-interactive\":\"1.0.0\",\"log-symbols\":\"4.1.0\",\"strip-ansi\":\"6.0.1\",\"wcwidth\":\"1.0.1\"}},\"outdent@0.5.0\":{},\"outvariant@1.4.3\":{\"optional\":true},\"oxlint@1.26.0\":{\"optionalDependencies\":{\"@oxlint/darwin-arm64\":\"1.26.0\",\"@oxlint/darwin-x64\":\"1.26.0\",\"@oxlint/linux-arm64-gnu\":\"1.26.0\",\"@oxlint/linux-arm64-musl\":\"1.26.0\",\"@oxlint/linux-x64-gnu\":\"1.26.0\",\"@oxlint/linux-x64-musl\":\"1.26.0\",\"@oxlint/win32-arm64\":\"1.26.0\",\"@oxlint/win32-x64\":\"1.26.0\"}},\"p-filter@2.1.0\":{\"dependencies\":{\"p-map\":\"2.1.0\"}},\"p-limit@2.3.0\":{\"dependencies\":{\"p-try\":\"2.2.0\"}},\"p-locate@4.1.0\":{\"dependencies\":{\"p-limit\":\"2.3.0\"}},\"p-map@2.1.0\":{},\"p-map@7.0.4\":{},\"p-try@2.2.0\":{},\"pac-proxy-agent@7.2.0\":{\"dependencies\":{\"@tootallnate/quickjs-emscripten\":\"0.23.0\",\"agent-base\":\"7.1.4\",\"debug\":\"4.4.3\",\"get-uri\":\"6.0.5\",\"http-proxy-agent\":\"7.0.2\",\"https-proxy-agent\":\"7.0.6\",\"pac-resolver\":\"7.0.1\",\"socks-proxy-agent\":\"8.0.5\"},\"transitivePeerDependencies\":[\"supports-color\"],\"optional\":true},\"pac-resolver@7.0.1\":{\"dependencies\":{\"degenerator\":\"5.0.1\",\"netmask\":\"2.1.1\"},\"optional\":true},\"package-json-from-dist@1.0.1\":{},\"package-manager-detector@0.2.11\":{\"dependencies\":{\"quansync\":\"0.2.11\"}},\"package-manager-detector@1.5.0\":{},\"pako@1.0.11\":{\"optional\":true},\"parse5-htmlparser2-tree-adapter@7.1.0\":{\"dependencies\":{\"domhandler\":\"5.0.3\",\"parse5\":\"7.3.0\"}},\"parse5-parser-stream@7.1.2\":{\"dependencies\":{\"parse5\":\"7.3.0\"}},\"parse5@7.3.0\":{\"dependencies\":{\"entities\":\"6.0.1\"}},\"parse5@8.0.0\":{\"dependencies\":{\"entities\":\"6.0.1\"}},\"path-data-parser@0.1.0\":{},\"path-exists@4.0.0\":{},\"path-key@3.1.1\":{},\"path-scurry@1.11.1\":{\"dependencies\":{\"lru-cache\":\"10.4.3\",\"minipass\":\"7.1.2\"}},\"path-to-regexp@6.3.0\":{},\"path-type@4.0.0\":{},\"pathe@2.0.3\":{},\
"pathval@2.0.1\":{},\"pend@1.2.0\":{\"optional\":true},\"picocolors@1.1.1\":{},\"picomatch@2.3.1\":{},\"picomatch@4.0.3\":{},\"picomatch@4.0.4\":{},\"pify@4.0.1\":{},\"pkg-types@1.3.1\":{\"dependencies\":{\"confbox\":\"0.1.8\",\"mlly\":\"1.8.0\",\"pathe\":\"2.0.3\"}},\"pkg-types@2.3.0\":{\"dependencies\":{\"confbox\":\"0.2.2\",\"exsolve\":\"1.0.8\",\"pathe\":\"2.0.3\"}},\"playwright-core@1.55.0\":{\"optional\":true},\"playwright@1.55.0\":{\"dependencies\":{\"playwright-core\":\"1.55.0\"},\"optionalDependencies\":{\"fsevents\":\"2.3.2\"},\"optional\":true},\"pngjs@7.0.0\":{},\"points-on-curve@0.2.0\":{},\"points-on-path@0.2.1\":{\"dependencies\":{\"path-data-parser\":\"0.1.0\",\"points-on-curve\":\"0.2.0\"}},\"postcss@8.5.14\":{\"dependencies\":{\"nanoid\":\"3.3.11\",\"picocolors\":\"1.1.1\",\"source-map-js\":\"1.2.1\"}},\"postcss@8.5.6\":{\"dependencies\":{\"nanoid\":\"3.3.11\",\"picocolors\":\"1.1.1\",\"source-map-js\":\"1.2.1\"}},\"posthog-js@1.321.2\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/api-logs\":\"0.208.0\",\"@opentelemetry/exporter-logs-otlp-http\":\"0.208.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/resources\":\"2.4.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/sdk-logs\":\"0.208.0(@opentelemetry/api@1.9.0)\",\"@posthog/core\":\"1.9.1\",\"@posthog/types\":\"1.321.2\",\"core-js\":\"3.46.0\",\"dompurify\":\"3.3.1\",\"fflate\":\"0.4.8\",\"preact\":\"10.28.2\",\"query-selector-shadow-dom\":\"1.0.1\",\"web-vitals\":\"4.2.4\"}},\"preact@10.28.2\":{},\"prebuild-install@7.1.3\":{\"dependencies\":{\"detect-libc\":\"2.1.2\",\"expand-template\":\"2.0.3\",\"github-from-package\":\"0.0.0\",\"minimist\":\"1.2.8\",\"mkdirp-classic\":\"0.5.3\",\"napi-build-utils\":\"2.0.0\",\"node-abi\":\"3.89.0\",\"pump\":\"3.0.3\",\"rc\":\"1.2.8\",\"simple-get\":\"4.0.1\",\"tar-fs\":\"2.1.4\",\"tunnel-agent\":\"0.6.0\"}},\"prettier@2.8.8\":{},\"prettier@3.6.2\":{},\"pretty-format@27.5.1\":{\"dependencies\":{\"ansi-regex\":\"5.0.1\",\"ansi-styles\":\"5.2
.0\",\"react-is\":\"17.0.2\"}},\"pretty-format@30.0.5\":{\"dependencies\":{\"@jest/schemas\":\"30.0.5\",\"ansi-styles\":\"5.2.0\",\"react-is\":\"18.3.1\"}},\"process-nextick-args@2.0.1\":{\"optional\":true},\"process@0.11.10\":{\"optional\":true},\"progress@2.0.3\":{\"optional\":true},\"property-information@6.5.0\":{},\"property-information@7.1.0\":{},\"protobufjs@7.5.4\":{\"dependencies\":{\"@protobufjs/aspromise\":\"1.1.2\",\"@protobufjs/base64\":\"1.1.2\",\"@protobufjs/codegen\":\"2.0.4\",\"@protobufjs/eventemitter\":\"1.1.0\",\"@protobufjs/fetch\":\"1.1.0\",\"@protobufjs/float\":\"1.0.2\",\"@protobufjs/inquire\":\"1.1.0\",\"@protobufjs/path\":\"1.1.2\",\"@protobufjs/pool\":\"1.1.0\",\"@protobufjs/utf8\":\"1.1.0\",\"@types/node\":\"22.15.33\",\"long\":\"5.3.2\"}},\"proxy-agent@6.5.0\":{\"dependencies\":{\"agent-base\":\"7.1.4\",\"debug\":\"4.4.3\",\"http-proxy-agent\":\"7.0.2\",\"https-proxy-agent\":\"7.0.6\",\"lru-cache\":\"7.18.3\",\"pac-proxy-agent\":\"7.2.0\",\"proxy-from-env\":\"1.1.0\",\"socks-proxy-agent\":\"8.0.5\"},\"transitivePeerDependencies\":[\"supports-color\"],\"optional\":true},\"proxy-from-env@1.1.0\":{},\"psl@1.15.0\":{\"dependencies\":{\"punycode\":\"2.3.1\"},\"optional\":true},\"pump@3.0.3\":{\"dependencies\":{\"end-of-stream\":\"1.4.5\",\"once\":\"1.4.0\"}},\"pump@3.0.4\":{\"dependencies\":{\"end-of-stream\":\"1.4.5\",\"once\":\"1.4.0\"},\"optional\":true},\"punycode@2.3.1\":{},\"quansync@0.2.11\":{},\"query-selector-shadow-dom@1.0.1\":{},\"querystringify@2.2.0\":{\"optional\":true},\"queue-microtask@1.2.3\":{},\"rc@1.2.8\":{\"dependencies\":{\"deep-extend\":\"0.6.0\",\"ini\":\"1.3.8\",\"minimist\":\"1.2.8\",\"strip-json-comments\":\"2.0.1\"}},\"react-dom@19.2.0(react@19.2.0)\":{\"dependencies\":{\"react\":\"19.2.0\",\"scheduler\":\"0.27.0\"}},\"react-is@17.0.2\":{},\"react-is@18.3.1\":{},\"react@19.2.0\":{},\"read-yaml-file@1.1.0\":{\"dependencies\":{\"graceful-fs\":\"4.2.11\",\"js-yaml\":\"3.14.1\",\"pify\":\"4.0.1\",\"strip-bom\":\"3.0.0\"
}},\"readable-stream@2.3.8\":{\"dependencies\":{\"core-util-is\":\"1.0.3\",\"inherits\":\"2.0.4\",\"isarray\":\"1.0.0\",\"process-nextick-args\":\"2.0.1\",\"safe-buffer\":\"5.1.2\",\"string_decoder\":\"1.1.1\",\"util-deprecate\":\"1.0.2\"},\"optional\":true},\"readable-stream@3.6.2\":{\"dependencies\":{\"inherits\":\"2.0.4\",\"string_decoder\":\"1.3.0\",\"util-deprecate\":\"1.0.2\"}},\"readable-stream@4.7.0\":{\"dependencies\":{\"abort-controller\":\"3.0.0\",\"buffer\":\"6.0.3\",\"events\":\"3.3.0\",\"process\":\"0.11.10\",\"string_decoder\":\"1.3.0\"},\"optional\":true},\"readdir-glob@1.1.3\":{\"dependencies\":{\"minimatch\":\"5.1.9\"},\"optional\":true},\"readdirp@3.6.0\":{\"dependencies\":{\"picomatch\":\"2.3.1\"}},\"regex-recursion@6.0.2\":{\"dependencies\":{\"regex-utilities\":\"2.3.0\"}},\"regex-utilities@2.3.0\":{},\"regex@6.0.1\":{\"dependencies\":{\"regex-utilities\":\"2.3.0\"}},\"rehype-autolink-headings@7.1.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@ungap/structured-clone\":\"1.2.1\",\"hast-util-heading-rank\":\"3.0.0\",\"hast-util-is-element\":\"3.0.0\",\"unified\":\"11.0.5\",\"unist-util-visit\":\"5.0.0\"}},\"rehype-highlight@7.0.2\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-to-text\":\"4.0.2\",\"lowlight\":\"3.3.0\",\"unist-util-visit\":\"5.0.0\",\"vfile\":\"6.0.3\"}},\"rehype-minify-whitespace@6.0.2\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-minify-whitespace\":\"1.0.1\"}},\"rehype-parse@9.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-from-html\":\"2.0.3\",\"unified\":\"11.0.5\"}},\"rehype-raw@7.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-raw\":\"9.1.0\",\"vfile\":\"6.0.3\"}},\"rehype-remark@10.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/mdast\":\"4.0.4\",\"hast-util-to-mdast\":\"10.1.2\",\"unified\":\"11.0.5\",\"vfile\":\"6.0.3\"}},\"rehype-sanitize@6.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-sanitize\":\"5.0.2\"}},\"rehype-slug@6.0.0\":{\
"dependencies\":{\"@types/hast\":\"3.0.4\",\"github-slugger\":\"2.0.0\",\"hast-util-heading-rank\":\"3.0.0\",\"hast-util-to-string\":\"3.0.1\",\"unist-util-visit\":\"5.0.0\"}},\"rehype-stringify@10.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-to-html\":\"9.0.5\",\"unified\":\"11.0.5\"}},\"remark-frontmatter@5.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"mdast-util-frontmatter\":\"2.0.1\",\"micromark-extension-frontmatter\":\"2.0.0\",\"unified\":\"11.0.5\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"remark-gfm@4.0.1\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"mdast-util-gfm\":\"3.0.0\",\"micromark-extension-gfm\":\"3.0.0\",\"remark-parse\":\"11.0.0\",\"remark-stringify\":\"11.0.0\",\"unified\":\"11.0.5\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"remark-parse@11.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"mdast-util-from-markdown\":\"2.0.2\",\"micromark-util-types\":\"2.0.1\",\"unified\":\"11.0.5\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"remark-rehype@11.1.2\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/mdast\":\"4.0.4\",\"mdast-util-to-hast\":\"13.2.0\",\"unified\":\"11.0.5\",\"vfile\":\"6.0.3\"}},\"remark-stringify@11.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"mdast-util-to-markdown\":\"2.1.2\",\"unified\":\"11.0.5\"}},\"require-directory@2.1.1\":{},\"require-from-string@2.0.2\":{},\"requires-port@1.0.0\":{\"optional\":true},\"resolve-from@5.0.0\":{},\"resolve-pkg-maps@1.0.0\":{\"optional\":true},\"resolve.exports@2.0.3\":{},\"resq@1.11.0\":{\"dependencies\":{\"fast-deep-equal\":\"2.0.1\"},\"optional\":true},\"restore-cursor@3.1.0\":{\"dependencies\":{\"onetime\":\"5.1.2\",\"signal-exit\":\"3.0.7\"}},\"reusify@1.0.4\":{},\"rgb2hex@0.2.5\":{\"optional\":true},\"robust-predicates@3.0.2\":{},\"rolldown@1.0.0-rc.17\":{\"dependencies\":{\"@oxc-project/types\":\"0.127.0\",\"@rolldown/pluginutils\":\"1.0.0-rc.17\"},\"optionalDependencies\":{\"@rolldown/binding-android-
arm64\":\"1.0.0-rc.17\",\"@rolldown/binding-darwin-arm64\":\"1.0.0-rc.17\",\"@rolldown/binding-darwin-x64\":\"1.0.0-rc.17\",\"@rolldown/binding-freebsd-x64\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-arm-gnueabihf\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-arm64-gnu\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-arm64-musl\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-ppc64-gnu\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-s390x-gnu\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-x64-gnu\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-x64-musl\":\"1.0.0-rc.17\",\"@rolldown/binding-openharmony-arm64\":\"1.0.0-rc.17\",\"@rolldown/binding-wasm32-wasi\":\"1.0.0-rc.17\",\"@rolldown/binding-win32-arm64-msvc\":\"1.0.0-rc.17\",\"@rolldown/binding-win32-x64-msvc\":\"1.0.0-rc.17\"}},\"rollup@4.53.2\":{\"dependencies\":{\"@types/estree\":\"1.0.8\"},\"optionalDependencies\":{\"@rollup/rollup-android-arm-eabi\":\"4.53.2\",\"@rollup/rollup-android-arm64\":\"4.53.2\",\"@rollup/rollup-darwin-arm64\":\"4.53.2\",\"@rollup/rollup-darwin-x64\":\"4.53.2\",\"@rollup/rollup-freebsd-arm64\":\"4.53.2\",\"@rollup/rollup-freebsd-x64\":\"4.53.2\",\"@rollup/rollup-linux-arm-gnueabihf\":\"4.53.2\",\"@rollup/rollup-linux-arm-musleabihf\":\"4.53.2\",\"@rollup/rollup-linux-arm64-gnu\":\"4.53.2\",\"@rollup/rollup-linux-arm64-musl\":\"4.53.2\",\"@rollup/rollup-linux-loong64-gnu\":\"4.53.2\",\"@rollup/rollup-linux-ppc64-gnu\":\"4.53.2\",\"@rollup/rollup-linux-riscv64-gnu\":\"4.53.2\",\"@rollup/rollup-linux-riscv64-musl\":\"4.53.2\",\"@rollup/rollup-linux-s390x-gnu\":\"4.53.2\",\"@rollup/rollup-linux-x64-gnu\":\"4.53.2\",\"@rollup/rollup-linux-x64-musl\":\"4.53.2\",\"@rollup/rollup-openharmony-arm64\":\"4.53.2\",\"@rollup/rollup-win32-arm64-msvc\":\"4.53.2\",\"@rollup/rollup-win32-ia32-msvc\":\"4.53.2\",\"@rollup/rollup-win32-x64-gnu\":\"4.53.2\",\"@rollup/rollup-win32-x64-msvc\":\"4.53.2\",\"fsevents\":\"2.3.3\"}},\"rou3@0.8.1\":{},\"roughjs@4.6.6\":{\"dependencies\":{\"hachure-fill\":\"0.5.2\",\"path-data-parser\"
:\"0.1.0\",\"points-on-curve\":\"0.2.0\",\"points-on-path\":\"0.2.1\"}},\"rrweb-cssom@0.8.0\":{},\"run-parallel@1.2.0\":{\"dependencies\":{\"queue-microtask\":\"1.2.3\"}},\"rw@1.3.3\":{},\"rxjs@7.8.2\":{\"dependencies\":{\"tslib\":\"2.8.1\"},\"optional\":true},\"safaridriver@0.1.2\":{\"optional\":true},\"safe-buffer@5.1.2\":{\"optional\":true},\"safe-buffer@5.2.1\":{},\"safer-buffer@2.1.2\":{},\"sass-embedded-android-arm64@1.89.2\":{\"optional\":true},\"sass-embedded-android-arm@1.89.2\":{\"optional\":true},\"sass-embedded-android-riscv64@1.89.2\":{\"optional\":true},\"sass-embedded-android-x64@1.89.2\":{\"optional\":true},\"sass-embedded-darwin-arm64@1.89.2\":{\"optional\":true},\"sass-embedded-darwin-x64@1.89.2\":{\"optional\":true},\"sass-embedded-linux-arm64@1.89.2\":{\"optional\":true},\"sass-embedded-linux-arm@1.89.2\":{\"optional\":true},\"sass-embedded-linux-musl-arm64@1.89.2\":{\"optional\":true},\"sass-embedded-linux-musl-arm@1.89.2\":{\"optional\":true},\"sass-embedded-linux-musl-riscv64@1.89.2\":{\"optional\":true},\"sass-embedded-linux-musl-x64@1.89.2\":{\"optional\":true},\"sass-embedded-linux-riscv64@1.89.2\":{\"optional\":true},\"sass-embedded-linux-x64@1.89.2\":{\"optional\":true},\"sass-embedded-win32-arm64@1.89.2\":{\"optional\":true},\"sass-embedded-win32-x64@1.89.2\":{\"optional\":true},\"sass-embedded@1.89.2\":{\"dependencies\":{\"@bufbuild/protobuf\":\"2.12.0\",\"buffer-builder\":\"0.2.0\",\"colorjs.io\":\"0.5.2\",\"immutable\":\"5.1.5\",\"rxjs\":\"7.8.2\",\"supports-color\":\"8.1.1\",\"sync-child-process\":\"1.0.2\",\"varint\":\"6.0.0\"},\"optionalDependencies\":{\"sass-embedded-android-arm\":\"1.89.2\",\"sass-embedded-android-arm64\":\"1.89.2\",\"sass-embedded-android-riscv64\":\"1.89.2\",\"sass-embedded-android-x64\":\"1.89.2\",\"sass-embedded-darwin-arm64\":\"1.89.2\",\"sass-embedded-darwin-x64\":\"1.89.2\",\"sass-embedded-linux-arm\":\"1.89.2\",\"sass-embedded-linux-arm64\":\"1.89.2\",\"sass-embedded-linux-musl-arm\":\"1.89.2\",\"sass-emb
edded-linux-musl-arm64\":\"1.89.2\",\"sass-embedded-linux-musl-riscv64\":\"1.89.2\",\"sass-embedded-linux-musl-x64\":\"1.89.2\",\"sass-embedded-linux-riscv64\":\"1.89.2\",\"sass-embedded-linux-x64\":\"1.89.2\",\"sass-embedded-win32-arm64\":\"1.89.2\",\"sass-embedded-win32-x64\":\"1.89.2\"},\"optional\":true},\"saxes@6.0.0\":{\"dependencies\":{\"xmlchars\":\"2.2.0\"}},\"scheduler@0.27.0\":{},\"schema-utils@4.3.3\":{\"dependencies\":{\"@types/json-schema\":\"7.0.15\",\"ajv\":\"8.20.0\",\"ajv-formats\":\"2.1.1(ajv@8.20.0)\",\"ajv-keywords\":\"5.1.0(ajv@8.20.0)\"},\"optional\":true},\"semver@6.3.1\":{},\"semver@7.7.2\":{},\"semver@7.7.3\":{},\"semver@7.7.4\":{\"optional\":true},\"serialize-error@11.0.3\":{\"dependencies\":{\"type-fest\":\"2.19.0\"},\"optional\":true},\"seroval-plugins@1.5.4(seroval@1.5.4)\":{\"dependencies\":{\"seroval\":\"1.5.4\"}},\"seroval@1.5.4\":{},\"setimmediate@1.0.5\":{\"optional\":true},\"sharp@0.34.5\":{\"dependencies\":{\"@img/colour\":\"1.1.0\",\"detect-libc\":\"2.1.2\",\"semver\":\"7.7.3\"},\"optionalDependencies\":{\"@img/sharp-darwin-arm64\":\"0.34.5\",\"@img/sharp-darwin-x64\":\"0.34.5\",\"@img/sharp-libvips-darwin-arm64\":\"1.2.4\",\"@img/sharp-libvips-darwin-x64\":\"1.2.4\",\"@img/sharp-libvips-linux-arm\":\"1.2.4\",\"@img/sharp-libvips-linux-arm64\":\"1.2.4\",\"@img/sharp-libvips-linux-ppc64\":\"1.2.4\",\"@img/sharp-libvips-linux-riscv64\":\"1.2.4\",\"@img/sharp-libvips-linux-s390x\":\"1.2.4\",\"@img/sharp-libvips-linux-x64\":\"1.2.4\",\"@img/sharp-libvips-linuxmusl-arm64\":\"1.2.4\",\"@img/sharp-libvips-linuxmusl-x64\":\"1.2.4\",\"@img/sharp-linux-arm\":\"0.34.5\",\"@img/sharp-linux-arm64\":\"0.34.5\",\"@img/sharp-linux-ppc64\":\"0.34.5\",\"@img/sharp-linux-riscv64\":\"0.34.5\",\"@img/sharp-linux-s390x\":\"0.34.5\",\"@img/sharp-linux-x64\":\"0.34.5\",\"@img/sharp-linuxmusl-arm64\":\"0.34.5\",\"@img/sharp-linuxmusl-x64\":\"0.34.5\",\"@img/sharp-wasm32\":\"0.34.5\",\"@img/sharp-win32-arm64\":\"0.34.5\",\"@img/sharp-win32-ia32\":\"0.34.
5\",\"@img/sharp-win32-x64\":\"0.34.5\"}},\"shebang-command@2.0.0\":{\"dependencies\":{\"shebang-regex\":\"3.0.0\"}},\"shebang-regex@3.0.0\":{},\"shiki@3.15.0\":{\"dependencies\":{\"@shikijs/core\":\"3.15.0\",\"@shikijs/engine-javascript\":\"3.15.0\",\"@shikijs/engine-oniguruma\":\"3.15.0\",\"@shikijs/langs\":\"3.15.0\",\"@shikijs/themes\":\"3.15.0\",\"@shikijs/types\":\"3.15.0\",\"@shikijs/vscode-textmate\":\"10.0.2\",\"@types/hast\":\"3.0.4\"}},\"siginfo@2.0.0\":{},\"signal-exit@3.0.7\":{},\"signal-exit@4.1.0\":{},\"simple-concat@1.0.1\":{},\"simple-get@4.0.1\":{\"dependencies\":{\"decompress-response\":\"6.0.0\",\"once\":\"1.4.0\",\"simple-concat\":\"1.0.1\"}},\"sirv@3.0.2\":{\"dependencies\":{\"@polka/url\":\"1.0.0-next.29\",\"mrmime\":\"2.0.1\",\"totalist\":\"3.0.1\"}},\"slash@3.0.0\":{},\"smart-buffer@4.2.0\":{\"optional\":true},\"socks-proxy-agent@8.0.5\":{\"dependencies\":{\"agent-base\":\"7.1.4\",\"debug\":\"4.4.3\",\"socks\":\"2.8.8\"},\"transitivePeerDependencies\":[\"supports-color\"],\"optional\":true},\"socks@2.8.8\":{\"dependencies\":{\"ip-address\":\"10.2.0\",\"smart-buffer\":\"4.2.0\"},\"optional\":true},\"source-map-js@1.2.1\":{},\"source-map-support@0.5.21\":{\"dependencies\":{\"buffer-from\":\"1.1.2\",\"source-map\":\"0.6.1\"},\"optional\":true},\"source-map@0.6.1\":{\"optional\":true},\"source-map@0.7.6\":{},\"space-separated-tokens@2.0.2\":{},\"spacetrim@0.11.59\":{\"optional\":true},\"spawndamnit@3.0.1\":{\"dependencies\":{\"cross-spawn\":\"7.0.6\",\"signal-exit\":\"4.1.0\"}},\"split2@4.2.0\":{\"optional\":true},\"sprintf-js@1.0.3\":{},\"srvx@0.11.15\":{},\"stackback@0.0.2\":{},\"statuses@2.0.2\":{\"optional\":true},\"std-env@3.10.0\":{},\"std-env@3.9.0\":{},\"std-env@4.1.0\":{},\"streamx@2.25.0\":{\"dependencies\":{\"events-universal\":\"1.0.1\",\"fast-fifo\":\"1.3.2\",\"text-decoder\":\"1.2.7\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"react-native-b4a\"],\"optional\":true},\"strict-event-emitter@0.5.1\":{\"optional\":true
},\"string-width@4.2.3\":{\"dependencies\":{\"emoji-regex\":\"8.0.0\",\"is-fullwidth-code-point\":\"3.0.0\",\"strip-ansi\":\"6.0.1\"}},\"string-width@5.1.2\":{\"dependencies\":{\"eastasianwidth\":\"0.2.0\",\"emoji-regex\":\"9.2.2\",\"strip-ansi\":\"7.1.2\"}},\"string_decoder@1.1.1\":{\"dependencies\":{\"safe-buffer\":\"5.1.2\"},\"optional\":true},\"string_decoder@1.3.0\":{\"dependencies\":{\"safe-buffer\":\"5.2.1\"}},\"stringify-entities@4.0.4\":{\"dependencies\":{\"character-entities-html4\":\"2.1.0\",\"character-entities-legacy\":\"3.0.0\"}},\"strip-ansi@6.0.1\":{\"dependencies\":{\"ansi-regex\":\"5.0.1\"}},\"strip-ansi@7.1.2\":{\"dependencies\":{\"ansi-regex\":\"6.1.0\"}},\"strip-ansi@7.2.0\":{\"dependencies\":{\"ansi-regex\":\"6.2.2\"},\"optional\":true},\"strip-bom@3.0.0\":{},\"strip-json-comments@2.0.1\":{},\"strip-literal@3.0.0\":{\"dependencies\":{\"js-tokens\":\"9.0.1\"}},\"strnum@1.1.2\":{\"optional\":true},\"stylis@4.3.6\":{},\"supports-color@10.2.2\":{},\"supports-color@7.2.0\":{\"dependencies\":{\"has-flag\":\"4.0.0\"}},\"supports-color@8.1.1\":{\"dependencies\":{\"has-flag\":\"4.0.0\"},\"optional\":true},\"symbol-tree@3.2.4\":{},\"sync-child-process@1.0.2\":{\"dependencies\":{\"sync-message-port\":\"1.2.0\"},\"optional\":true},\"sync-message-port@1.2.0\":{\"optional\":true},\"tailwindcss@4.2.4\":{},\"tapable@2.3.3\":{},\"tar-fs@2.1.4\":{\"dependencies\":{\"chownr\":\"1.1.4\",\"mkdirp-classic\":\"0.5.3\",\"pump\":\"3.0.3\",\"tar-stream\":\"2.2.0\"}},\"tar-fs@3.1.2\":{\"dependencies\":{\"pump\":\"3.0.4\",\"tar-stream\":\"3.2.0\"},\"optionalDependencies\":{\"bare-fs\":\"4.7.1\",\"bare-path\":\"3.0.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\"],\"optional\":true},\"tar-stream@2.2.0\":{\"dependencies\":{\"bl\":\"4.1.0\",\"end-of-stream\":\"1.4.5\",\"fs-constants\":\"1.0.0\",\"inherits\":\"2.0.4\",\"readable-stream\":\"3.6.2\"}},\"tar-stream@3.2.0\":{\"dependencies\":{\"b4a\":\"1.8.1\",\"bare-fs\":\"4.7.1
\",\"fast-fifo\":\"1.3.2\",\"streamx\":\"2.25.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\"],\"optional\":true},\"tar@6.2.1\":{\"dependencies\":{\"chownr\":\"2.0.0\",\"fs-minipass\":\"2.1.0\",\"minipass\":\"5.0.0\",\"minizlib\":\"2.1.2\",\"mkdirp\":\"1.0.4\",\"yallist\":\"4.0.0\"}},\"teex@1.0.1\":{\"dependencies\":{\"streamx\":\"2.25.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"react-native-b4a\"],\"optional\":true},\"term-size@2.2.1\":{},\"terser-webpack-plugin@5.5.0(esbuild@0.27.3)(webpack@5.99.9(esbuild@0.27.3))\":{\"dependencies\":{\"@jridgewell/trace-mapping\":\"0.3.31\",\"jest-worker\":\"27.5.1\",\"schema-utils\":\"4.3.3\",\"terser\":\"5.36.0\",\"webpack\":\"5.99.9(esbuild@0.27.3)\"},\"optionalDependencies\":{\"esbuild\":\"0.27.3\"},\"optional\":true},\"terser@5.36.0\":{\"dependencies\":{\"@jridgewell/source-map\":\"0.3.11\",\"acorn\":\"8.16.0\",\"commander\":\"2.20.3\",\"source-map-support\":\"0.5.21\"},\"optional\":true},\"test-exclude@7.0.1\":{\"dependencies\":{\"@istanbuljs/schema\":\"0.1.3\",\"glob\":\"10.4.5\",\"minimatch\":\"9.0.5\"}},\"text-decoder@1.2.7\":{\"dependencies\":{\"b4a\":\"1.8.1\"},\"transitivePeerDependencies\":[\"react-native-b4a\"],\"optional\":true},\"tinybench@2.9.0\":{},\"tinyexec@0.3.2\":{},\"tinyexec@1.0.2\":{},\"tinyglobby@0.2.14\":{\"dependencies\":{\"fdir\":\"6.5.0(picomatch@4.0.4)\",\"picomatch\":\"4.0.4\"}},\"tinyglobby@0.2.15\":{\"dependencies\":{\"fdir\":\"6.5.0(picomatch@4.0.4)\",\"picomatch\":\"4.0.4\"}},\"tinyglobby@0.2.16\":{\"dependencies\":{\"fdir\":\"6.5.0(picomatch@4.0.4)\",\"picomatch\":\"4.0.4\"}},\"tinypool@1.1.1\":{},\"tinyrainbow@2.0.0\":{},\"tinyrainbow@3.0.3\":{},\"tinyrainbow@3.1.0\":{},\"tinyspy@4.0.3\":{},\"tldts-core@6.1.52\":{},\"tldts-core@7.0.19\":{},\"tldts@6.1.52\":{\"dependencies\":{\"tldts-core\":\"6.1.52\"}},\"tldts@7.0.19\":{\"dependencies\":{\"tldts-core\":\"7.0.19\"}},\"tmp@0.2.5\":{},\"to-regex-range@5.0.1\":{\"dependencie
s\":{\"is-number\":\"7.0.0\"}},\"totalist@3.0.1\":{},\"tough-cookie@4.1.4\":{\"dependencies\":{\"psl\":\"1.15.0\",\"punycode\":\"2.3.1\",\"universalify\":\"0.2.0\",\"url-parse\":\"1.5.10\"},\"optional\":true},\"tough-cookie@5.1.2\":{\"dependencies\":{\"tldts\":\"6.1.52\"}},\"tough-cookie@6.0.0\":{\"dependencies\":{\"tldts\":\"7.0.19\"}},\"tr46@5.1.1\":{\"dependencies\":{\"punycode\":\"2.3.1\"}},\"tr46@6.0.0\":{\"dependencies\":{\"punycode\":\"2.3.1\"}},\"tree-kill@1.2.2\":{},\"trim-lines@3.0.1\":{},\"trim-trailing-lines@2.1.0\":{},\"trough@2.2.0\":{},\"ts-algebra@2.0.0\":{},\"ts-dedent@2.2.0\":{},\"tsconfig-paths@4.2.0\":{\"dependencies\":{\"json5\":\"2.2.3\",\"minimist\":\"1.2.8\",\"strip-bom\":\"3.0.0\"}},\"tslib@2.8.1\":{},\"tsx@4.20.5\":{\"dependencies\":{\"esbuild\":\"0.25.12\",\"get-tsconfig\":\"4.14.0\"},\"optionalDependencies\":{\"fsevents\":\"2.3.3\"},\"optional\":true},\"tunnel-agent@0.6.0\":{\"dependencies\":{\"safe-buffer\":\"5.2.1\"}},\"type-fest@2.19.0\":{\"optional\":true},\"type-fest@4.26.0\":{\"optional\":true},\"type-fest@4.41.0\":{\"optional\":true},\"typescript@5.8.3\":{},\"typescript@5.9.3\":{},\"ufo@1.6.1\":{},\"undici-types@6.21.0\":{},\"undici-types@7.16.0\":{\"optional\":true},\"undici@7.16.0\":{},\"undici@7.24.8\":{},\"undici@7.25.0\":{\"optional\":true},\"unenv@2.0.0-rc.24\":{\"dependencies\":{\"pathe\":\"2.0.3\"}},\"unified@11.0.5\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"bail\":\"2.0.2\",\"devlop\":\"1.1.0\",\"extend\":\"3.0.2\",\"is-plain-obj\":\"4.1.0\",\"trough\":\"2.2.0\",\"vfile\":\"6.0.3\"}},\"unist-util-find-after@5.0.0\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"unist-util-is\":\"6.0.0\"}},\"unist-util-is@6.0.0\":{\"dependencies\":{\"@types/unist\":\"3.0.3\"}},\"unist-util-position@5.0.0\":{\"dependencies\":{\"@types/unist\":\"3.0.3\"}},\"unist-util-stringify-position@4.0.0\":{\"dependencies\":{\"@types/unist\":\"3.0.3\"}},\"unist-util-visit-parents@6.0.1\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"unist-util-
is\":\"6.0.0\"}},\"unist-util-visit@5.0.0\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"unist-util-is\":\"6.0.0\",\"unist-util-visit-parents\":\"6.0.1\"}},\"universalify@0.1.2\":{},\"universalify@0.2.0\":{\"optional\":true},\"universalify@2.0.1\":{},\"unplugin@3.0.0\":{\"dependencies\":{\"@jridgewell/remapping\":\"2.3.5\",\"picomatch\":\"4.0.3\",\"webpack-virtual-modules\":\"0.6.2\"}},\"update-browserslist-db@1.1.3(browserslist@4.25.3)\":{\"dependencies\":{\"browserslist\":\"4.25.3\",\"escalade\":\"3.2.0\",\"picocolors\":\"1.1.1\"}},\"update-browserslist-db@1.2.3(browserslist@4.28.2)\":{\"dependencies\":{\"browserslist\":\"4.28.2\",\"escalade\":\"3.2.0\",\"picocolors\":\"1.1.1\"},\"optional\":true},\"url-parse@1.5.10\":{\"dependencies\":{\"querystringify\":\"2.2.0\",\"requires-port\":\"1.0.0\"},\"optional\":true},\"urlpattern-polyfill@10.1.0\":{\"optional\":true},\"use-sync-external-store@1.6.0(react@19.2.0)\":{\"dependencies\":{\"react\":\"19.2.0\"}},\"userhome@1.0.1\":{\"optional\":true},\"util-deprecate@1.0.2\":{},\"uuid@11.1.0\":{},\"varint@6.0.0\":{\"optional\":true},\"vfile-location@5.0.3\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"vfile\":\"6.0.3\"}},\"vfile-message@4.0.2\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"unist-util-stringify-position\":\"4.0.0\"}},\"vfile@6.0.3\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"vfile-message\":\"4.0.2\"}},\"vite-node@3.2.4(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\":{\"dependencies\":{\"cac\":\"6.7.14\",\"debug\":\"4.4.3\",\"es-module-lexer\":\"1.7.0\",\"pathe\":\"2.0.3\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"},\"transitivePeerDependencies\":[\"@types/node\",\"jiti\",\"less\",\"lightningcss\",\"sass\",\"sass-embedded\",\"stylus\",\"sugarss\",\"supports-color\",\"terser\",\"tsx\",\"yaml\"]},\"vite-plugin-static-copy@4.1.0(vite@8.0.10(@t
ypes/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"chokidar\":\"3.6.0\",\"p-map\":\"7.0.4\",\"picocolors\":\"1.1.1\",\"tinyglobby\":\"0.2.16\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\":{\"dependencies\":{\"esbuild\":\"0.25.12\",\"fdir\":\"6.5.0(picomatch@4.0.4)\",\"picomatch\":\"4.0.4\",\"postcss\":\"8.5.6\",\"rollup\":\"4.53.2\",\"tinyglobby\":\"0.2.16\"},\"optionalDependencies\":{\"@types/node\":\"24.10.2\",\"fsevents\":\"2.3.3\",\"jiti\":\"2.6.1\",\"lightningcss\":\"1.32.0\",\"sass-embedded\":\"1.89.2\",\"terser\":\"5.36.0\",\"tsx\":\"4.20.5\",\"yaml\":\"2.8.1\"}},\"vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\":{\"dependencies\":{\"lightningcss\":\"1.32.0\",\"picomatch\":\"4.0.4\",\"postcss\":\"8.5.14\",\"rolldown\":\"1.0.0-rc.17\",\"tinyglobby\":\"0.2.16\"},\"optionalDependencies\":{\"@types/node\":\"22.15.33\",\"esbuild\":\"0.27.3\",\"fsevents\":\"2.3.3\",\"jiti\":\"2.6.1\",\"sass-embedded\":\"1.89.2\",\"terser\":\"5.36.0\",\"tsx\":\"4.20.5\",\"yaml\":\"2.8.1\"}},\"vitefu@1.1.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"optionalDependencies\":{\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"vitest@3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@26.1.0)(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\":{\"dependencies\":{\"@types/chai\":\"5.2.2\",\"@vitest/expect\":\"3.2.4\",\
"@vitest/mocker\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/pretty-format\":\"3.2.4\",\"@vitest/runner\":\"3.2.4\",\"@vitest/snapshot\":\"3.2.4\",\"@vitest/spy\":\"3.2.4\",\"@vitest/utils\":\"3.2.4\",\"chai\":\"5.3.3\",\"debug\":\"4.4.1\",\"expect-type\":\"1.2.2\",\"magic-string\":\"0.30.18\",\"pathe\":\"2.0.3\",\"picomatch\":\"4.0.3\",\"std-env\":\"3.9.0\",\"tinybench\":\"2.9.0\",\"tinyexec\":\"0.3.2\",\"tinyglobby\":\"0.2.14\",\"tinypool\":\"1.1.1\",\"tinyrainbow\":\"2.0.0\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"vite-node\":\"3.2.4(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"why-is-node-running\":\"2.3.0\"},\"optionalDependencies\":{\"@types/debug\":\"4.1.12\",\"@types/node\":\"24.10.2\",\"@vitest/browser\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)\",\"happy-dom\":\"18.0.1\",\"jsdom\":\"26.1.0\"},\"transitivePeerDependencies\":[\"jiti\",\"less\",\"lightningcss\",\"msw\",\"sass\",\"sass-embedded\",\"stylus\",\"sugarss\",\"supports-color\",\"terser\",\"tsx\",\"yaml\"]},\"vitest@3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\":{\"dependencies\":{\"@types/chai\":\"5.2.2\",\"@vitest/expect\":\"3.2.4\",\"@vitest/mocker\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)
(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/pretty-format\":\"3.2.4\",\"@vitest/runner\":\"3.2.4\",\"@vitest/snapshot\":\"3.2.4\",\"@vitest/spy\":\"3.2.4\",\"@vitest/utils\":\"3.2.4\",\"chai\":\"5.3.3\",\"debug\":\"4.4.1\",\"expect-type\":\"1.2.2\",\"magic-string\":\"0.30.18\",\"pathe\":\"2.0.3\",\"picomatch\":\"4.0.3\",\"std-env\":\"3.9.0\",\"tinybench\":\"2.9.0\",\"tinyexec\":\"0.3.2\",\"tinyglobby\":\"0.2.14\",\"tinypool\":\"1.1.1\",\"tinyrainbow\":\"2.0.0\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"vite-node\":\"3.2.4(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"why-is-node-running\":\"2.3.0\"},\"optionalDependencies\":{\"@types/debug\":\"4.1.12\",\"@types/node\":\"24.10.2\",\"@vitest/browser\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)\",\"happy-dom\":\"18.0.1\",\"jsdom\":\"27.3.0(postcss@8.5.14)\"},\"transitivePeerDependencies\":[\"jiti\",\"less\",\"lightningcss\",\"msw\",\"sass\",\"sass-embedded\",\"stylus\",\"sugarss\",\"supports-color\",\"terser\",\"tsx\",\"yaml\"]},\"vitest@4.0.18(@opentelemetry/api@1.9.0)(@types/node@24.10.2)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\":{\"dependencies\":{\"@vitest/expect\":\"4.0.18\",\"@vitest/mocker\":\"4.0.18(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/pretty-format\":\"4.0.18\",\"@vitest/runner\":\"4.0.18\",\"@vitest/snapshot\":\"4.0.18\",\"
@vitest/spy\":\"4.0.18\",\"@vitest/utils\":\"4.0.18\",\"es-module-lexer\":\"1.7.0\",\"expect-type\":\"1.2.2\",\"magic-string\":\"0.30.21\",\"obug\":\"2.1.1\",\"pathe\":\"2.0.3\",\"picomatch\":\"4.0.3\",\"std-env\":\"3.10.0\",\"tinybench\":\"2.9.0\",\"tinyexec\":\"1.0.2\",\"tinyglobby\":\"0.2.15\",\"tinyrainbow\":\"3.0.3\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"why-is-node-running\":\"2.3.0\"},\"optionalDependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@types/node\":\"24.10.2\",\"happy-dom\":\"18.0.1\",\"jsdom\":\"27.3.0(postcss@8.5.14)\"},\"transitivePeerDependencies\":[\"jiti\",\"less\",\"lightningcss\",\"msw\",\"sass\",\"sass-embedded\",\"stylus\",\"sugarss\",\"terser\",\"tsx\",\"yaml\"]},\"vitest@4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@vitest/expect\":\"4.1.5\",\"@vitest/mocker\":\"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/pretty-format\":\"4.1.5\",\"@vitest/runner\":\"4.1.5\",\"@vitest/snapshot\":\"4.1.5\",\"@vitest/spy\":\"4.1.5\",\"@vitest/utils\":\"4.1.5\",\"es-module-lexer\":\"2.1.0\",\"expect-type\":\"1.3.0\",\"magic-string\":\"0.30.21\",\"obug\":\"2.1.1\",\"pathe\":\"2.0.3\",\"picomatch\":\"4.0.4\",\"std-env\":\"4.1.0\",\"tinybench\":\"2.9.0\",\"tinyexec\":\"1.0.2\",\"tinyglobby\":\"0.2.16\",\"tinyrainbow\":\"3.1.0\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"why-is-node-running\":\"2.3.0\"},\"optionalDependencies\":{\"@opentelemetry/api\"
:\"1.9.0\",\"@types/node\":\"22.15.33\",\"@vitest/coverage-v8\":\"4.1.5(@vitest/browser@4.1.5)(vitest@4.1.5)\",\"happy-dom\":\"18.0.1\",\"jsdom\":\"27.3.0(postcss@8.5.14)\"},\"transitivePeerDependencies\":[\"msw\"]},\"vscode-jsonrpc@8.2.0\":{},\"vscode-languageserver-protocol@3.17.5\":{\"dependencies\":{\"vscode-jsonrpc\":\"8.2.0\",\"vscode-languageserver-types\":\"3.17.5\"}},\"vscode-languageserver-textdocument@1.0.12\":{},\"vscode-languageserver-types@3.17.5\":{},\"vscode-languageserver@9.0.1\":{\"dependencies\":{\"vscode-languageserver-protocol\":\"3.17.5\"}},\"vscode-uri@3.0.8\":{},\"w3c-xmlserializer@5.0.0\":{\"dependencies\":{\"xml-name-validator\":\"5.0.0\"}},\"wait-port@1.1.0\":{\"dependencies\":{\"chalk\":\"4.1.2\",\"commander\":\"9.5.0\",\"debug\":\"4.4.3\"},\"transitivePeerDependencies\":[\"supports-color\"],\"optional\":true},\"watchpack@2.5.1\":{\"dependencies\":{\"glob-to-regexp\":\"0.4.1\",\"graceful-fs\":\"4.2.11\"},\"optional\":true},\"wcwidth@1.0.1\":{\"dependencies\":{\"defaults\":\"1.0.4\"}},\"web-namespaces@2.0.1\":{},\"web-streams-polyfill@3.3.3\":{\"optional\":true},\"web-vitals@4.2.4\":{},\"web-vitals@5.1.0\":{},\"webdriver@9.2.0\":{\"dependencies\":{\"@types/node\":\"20.19.39\",\"@types/ws\":\"8.18.1\",\"@wdio/config\":\"9.1.3\",\"@wdio/logger\":\"9.1.3\",\"@wdio/protocols\":\"9.2.0\",\"@wdio/types\":\"9.1.3\",\"@wdio/utils\":\"9.1.3\",\"deepmerge-ts\":\"7.1.5\",\"ws\":\"8.20.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"bufferutil\",\"react-native-b4a\",\"supports-color\",\"utf-8-validate\"],\"optional\":true},\"webdriverio@9.2.1\":{\"dependencies\":{\"@types/node\":\"20.19.39\",\"@types/sinonjs__fake-timers\":\"8.1.5\",\"@wdio/config\":\"9.1.3\",\"@wdio/logger\":\"9.1.3\",\"@wdio/protocols\":\"9.2.0\",\"@wdio/repl\":\"9.0.8\",\"@wdio/types\":\"9.1.3\",\"@wdio/utils\":\"9.1.3\",\"archiver\":\"7.0.1\",\"aria-query\":\"5.3.2\",\"cheerio\":\"1.2.0\",\"css-shorthand-properties\":\"1.1.2\",\"css-value\":\"0.0.1
\",\"grapheme-splitter\":\"1.0.4\",\"htmlfy\":\"0.3.2\",\"import-meta-resolve\":\"4.2.0\",\"is-plain-obj\":\"4.1.0\",\"jszip\":\"3.10.1\",\"lodash.clonedeep\":\"4.5.0\",\"lodash.zip\":\"4.2.0\",\"minimatch\":\"9.0.9\",\"query-selector-shadow-dom\":\"1.0.1\",\"resq\":\"1.11.0\",\"rgb2hex\":\"0.2.5\",\"serialize-error\":\"11.0.3\",\"urlpattern-polyfill\":\"10.1.0\",\"webdriver\":\"9.2.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"bufferutil\",\"react-native-b4a\",\"supports-color\",\"utf-8-validate\"],\"optional\":true},\"webidl-conversions@7.0.0\":{},\"webidl-conversions@8.0.0\":{},\"webpack-sources@3.4.1\":{\"optional\":true},\"webpack-virtual-modules@0.6.2\":{},\"webpack@5.99.9(esbuild@0.27.3)\":{\"dependencies\":{\"@types/eslint-scope\":\"3.7.7\",\"@types/estree\":\"1.0.9\",\"@types/json-schema\":\"7.0.15\",\"@webassemblyjs/ast\":\"1.14.1\",\"@webassemblyjs/wasm-edit\":\"1.14.1\",\"@webassemblyjs/wasm-parser\":\"1.14.1\",\"acorn\":\"8.16.0\",\"browserslist\":\"4.28.2\",\"chrome-trace-event\":\"1.0.4\",\"enhanced-resolve\":\"5.21.0\",\"es-module-lexer\":\"1.7.0\",\"eslint-scope\":\"5.1.1\",\"events\":\"3.3.0\",\"glob-to-regexp\":\"0.4.1\",\"graceful-fs\":\"4.2.11\",\"json-parse-even-better-errors\":\"2.3.1\",\"loader-runner\":\"4.3.2\",\"mime-types\":\"2.1.35\",\"neo-async\":\"2.6.2\",\"schema-utils\":\"4.3.3\",\"tapable\":\"2.3.3\",\"terser-webpack-plugin\":\"5.5.0(esbuild@0.27.3)(webpack@5.99.9(esbuild@0.27.3))\",\"watchpack\":\"2.5.1\",\"webpack-sources\":\"3.4.1\"},\"transitivePeerDependencies\":[\"@swc/core\",\"esbuild\",\"uglify-js\"],\"optional\":true},\"whatwg-encoding@3.1.1\":{\"dependencies\":{\"iconv-lite\":\"0.6.3\"}},\"whatwg-mimetype@3.0.0\":{\"optional\":true},\"whatwg-mimetype@4.0.0\":{},\"whatwg-url@14.2.0\":{\"dependencies\":{\"tr46\":\"5.1.1\",\"webidl-conversions\":\"7.0.0\"}},\"whatwg-url@15.1.0\":{\"dependencies\":{\"tr46\":\"6.0.0\",\"webidl-conversions\":\"8.0.0\"}},\"which@2.0.2\":{\"dependencies\":{\"isex
e\":\"2.0.0\"}},\"which@4.0.0\":{\"dependencies\":{\"isexe\":\"3.1.5\"},\"optional\":true},\"why-is-node-running@2.3.0\":{\"dependencies\":{\"siginfo\":\"2.0.0\",\"stackback\":\"0.0.2\"}},\"workerd@1.20260504.1\":{\"optionalDependencies\":{\"@cloudflare/workerd-darwin-64\":\"1.20260504.1\",\"@cloudflare/workerd-darwin-arm64\":\"1.20260504.1\",\"@cloudflare/workerd-linux-64\":\"1.20260504.1\",\"@cloudflare/workerd-linux-arm64\":\"1.20260504.1\",\"@cloudflare/workerd-windows-64\":\"1.20260504.1\"}},\"wrangler@4.88.0\":{\"dependencies\":{\"@cloudflare/kv-asset-handler\":\"0.5.0\",\"@cloudflare/unenv-preset\":\"2.16.1(unenv@2.0.0-rc.24)(workerd@1.20260504.1)\",\"blake3-wasm\":\"2.1.5\",\"esbuild\":\"0.27.3\",\"miniflare\":\"4.20260504.0\",\"path-to-regexp\":\"6.3.0\",\"unenv\":\"2.0.0-rc.24\",\"workerd\":\"1.20260504.1\"},\"optionalDependencies\":{\"fsevents\":\"2.3.3\"},\"transitivePeerDependencies\":[\"bufferutil\",\"utf-8-validate\"]},\"wrap-ansi@6.2.0\":{\"dependencies\":{\"ansi-styles\":\"4.3.0\",\"string-width\":\"4.2.3\",\"strip-ansi\":\"6.0.1\"},\"optional\":true},\"wrap-ansi@7.0.0\":{\"dependencies\":{\"ansi-styles\":\"4.3.0\",\"string-width\":\"4.2.3\",\"strip-ansi\":\"6.0.1\"}},\"wrap-ansi@8.1.0\":{\"dependencies\":{\"ansi-styles\":\"6.2.1\",\"string-width\":\"5.1.2\",\"strip-ansi\":\"7.1.2\"}},\"wrappy@1.0.2\":{},\"ws@8.18.0\":{},\"ws@8.18.3\":{},\"ws@8.20.0\":{},\"xml-name-validator@5.0.0\":{},\"xmlbuilder2@4.0.3\":{\"dependencies\":{\"@oozcitak/dom\":\"2.0.2\",\"@oozcitak/infra\":\"2.0.2\",\"@oozcitak/util\":\"10.0.0\",\"js-yaml\":\"4.1.1\"}},\"xmlchars@2.2.0\":{},\"y18n@5.0.8\":{},\"yallist@3.1.1\":{},\"yallist@4.0.0\":{},\"yaml@2.8.1\":{},\"yargs-parser@21.1.1\":{},\"yargs-parser@22.0.0\":{},\"yargs@17.7.2\":{\"dependencies\":{\"cliui\":\"8.0.1\",\"escalade\":\"3.2.0\",\"get-caller-file\":\"2.0.5\",\"require-directory\":\"2.1.1\",\"string-width\":\"4.2.3\",\"y18n\":\"5.0.8\",\"yargs-parser\":\"21.1.1\"}},\"yauzl@2.10.0\":{\"dependencies\":{\"buffer-crc32
\":\"0.2.13\",\"fd-slicer\":\"1.1.0\"},\"optional\":true},\"yoctocolors-cjs@2.1.3\":{\"optional\":true},\"youch-core@0.3.3\":{\"dependencies\":{\"@poppinss/exception\":\"1.2.2\",\"error-stack-parser-es\":\"1.0.5\"}},\"youch@4.1.0-beta.10\":{\"dependencies\":{\"@poppinss/colors\":\"4.1.5\",\"@poppinss/dumper\":\"0.6.5\",\"@speed-highlight/core\":\"1.2.12\",\"cookie\":\"1.0.2\",\"youch-core\":\"0.3.3\"}},\"zip-stream@6.0.1\":{\"dependencies\":{\"archiver-utils\":\"5.0.2\",\"compress-commons\":\"6.0.2\",\"readable-stream\":\"4.7.0\"},\"optional\":true},\"zod@3.25.76\":{},\"zwitch@2.0.4\":{}}}"
  },
  {
    "path": "packages/engine/benches/json_pointer_crud/main.rs",
    "content": "use std::sync::Arc;\nuse std::time::Duration;\n\nuse criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion};\nuse lix_engine::{\n    storage_bench, Backend, CreateVersionOptions, Engine, MergeVersionOptions, MergeVersionOutcome,\n    SessionContext, SwitchVersionOptions,\n};\nuse rusqlite::{params, Connection, OptionalExtension};\nuse serde_json::Value as JsonValue;\nuse tempfile::TempDir;\nuse tokio::runtime::Runtime;\n\n#[path = \"../storage/rocksdb_backend.rs\"]\nmod rocksdb_backend;\n#[path = \"../storage/sqlite_backend.rs\"]\nmod sqlite_backend;\n\nuse rocksdb_backend::RocksDbBenchBackend;\nuse sqlite_backend::SqliteBenchBackend;\n\nconst JSON_POINTER_SCHEMA_JSON: &str =\n    include_str!(\"../../../plugin-json-v2/schema/json_pointer.json\");\nconst PNPM_LOCK_JSON: &str = include_str!(\"../fixtures/pnpm-lock.fixture.json\");\nconst BASELINE_ROWS: usize = 100;\nconst SMOKE_ROWS: usize = 1_000;\nconst SCALE_ROWS: usize = 10_000;\nconst CHUNK_SIZE: usize = 500;\nconst CHANGE_ROW_DENOMINATOR: usize = 10;\n\n#[derive(Clone)]\nstruct PointerRow {\n    path: String,\n    value_json: String,\n    updated_value_json: String,\n}\n\n#[derive(Clone, Copy)]\nenum LixBackendProfile {\n    Sqlite,\n    RocksDb,\n}\n\nimpl LixBackendProfile {\n    fn name(self) -> &'static str {\n        match self {\n            Self::Sqlite => \"lix_sqlite\",\n            Self::RocksDb => \"lix_rocksdb\",\n        }\n    }\n\n    fn backend_label(self) -> &'static str {\n        match self {\n            Self::Sqlite => \"sqlite\",\n            Self::RocksDb => \"rocksdb\",\n        }\n    }\n}\n\nstruct RawSqliteFixture {\n    conn: Connection,\n    _dir: TempDir,\n}\n\nstruct LixFixture {\n    session: SessionContext,\n}\n\nfn json_pointer_crud_benches(c: &mut Criterion) {\n    let runtime = tokio::runtime::Builder::new_current_thread()\n        .enable_all()\n        .build()\n        .expect(\"create tokio runtime for json_pointer CRUD 
benchmarks\");\n    let rows = fixture_rows();\n\n    bench_raw_sqlite(c, &rows, BASELINE_ROWS, \"baseline\");\n    bench_raw_storage(c, &runtime, &rows, BASELINE_ROWS, \"baseline\");\n    bench_lix(c, &runtime, &rows, BASELINE_ROWS, \"baseline\");\n    bench_raw_sqlite(c, &rows, SMOKE_ROWS, \"smoke\");\n    bench_raw_storage(c, &runtime, &rows, SMOKE_ROWS, \"smoke\");\n    bench_lix(c, &runtime, &rows, SMOKE_ROWS, \"smoke\");\n    bench_raw_sqlite(c, &rows, SCALE_ROWS, \"scale\");\n    bench_raw_storage(c, &runtime, &rows, SCALE_ROWS, \"scale\");\n    bench_lix(c, &runtime, &rows, SCALE_ROWS, \"scale\");\n}\n\nfn bench_raw_sqlite(c: &mut Criterion, all_rows: &[PointerRow], row_count: usize, label: &str) {\n    let rows = all_rows[..row_count].to_vec();\n    let mut group = c.benchmark_group(format!(\"json_pointer_crud/raw_sqlite/{label}\"));\n    group.sample_size(if row_count <= SMOKE_ROWS { 20 } else { 11 });\n    group.warm_up_time(Duration::from_millis(250));\n    group.measurement_time(Duration::from_secs(1));\n\n    group.bench_function(format!(\"insert_all_rows/{}\", row_label(row_count)), |b| {\n        b.iter_batched(\n            prepare_raw_sqlite_empty,\n            |fixture| black_box(raw_sqlite_insert_all(fixture, &rows)),\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\n        format!(\"select_all_path_value/{}\", row_label(row_count)),\n        |b| {\n            b.iter_batched(\n                || prepare_raw_sqlite_seeded(&rows),\n                |fixture| black_box(raw_sqlite_select_all(fixture, row_count)),\n                BatchSize::LargeInput,\n            )\n        },\n    );\n\n    group.bench_function(format!(\"select_one_by_pk/{}\", row_label(row_count)), |b| {\n        b.iter_batched(\n            || prepare_raw_sqlite_seeded(&rows),\n            |fixture| black_box(raw_sqlite_select_one_by_pk(fixture, pick_pk_row(&rows))),\n            BatchSize::LargeInput,\n        )\n    });\n\n    
group.bench_function(format!(\"update_all_values/{}\", row_label(row_count)), |b| {\n        b.iter_batched(\n            || prepare_raw_sqlite_seeded(&rows),\n            |fixture| black_box(raw_sqlite_update_all(fixture, row_count)),\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(format!(\"update_one_by_pk/{}\", row_label(row_count)), |b| {\n        b.iter_batched(\n            || prepare_raw_sqlite_seeded(&rows),\n            |fixture| black_box(raw_sqlite_update_one_by_pk(fixture, pick_pk_row(&rows))),\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(format!(\"delete_all_rows/{}\", row_label(row_count)), |b| {\n        b.iter_batched(\n            || prepare_raw_sqlite_seeded(&rows),\n            |fixture| black_box(raw_sqlite_delete_all(fixture, row_count)),\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(format!(\"delete_one_by_pk/{}\", row_label(row_count)), |b| {\n        b.iter_batched(\n            || prepare_raw_sqlite_seeded(&rows),\n            |fixture| black_box(raw_sqlite_delete_one_by_pk(fixture, pick_pk_row(&rows))),\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn bench_raw_storage(\n    c: &mut Criterion,\n    runtime: &Runtime,\n    all_rows: &[PointerRow],\n    row_count: usize,\n    label: &str,\n) {\n    let rows = all_rows[..row_count].to_vec();\n    let storage_rows = storage_rows(&rows);\n    let change_rows = changed_row_count(row_count);\n    for profile in [LixBackendProfile::Sqlite, LixBackendProfile::RocksDb] {\n        let mut group = c.benchmark_group(format!(\n            \"json_pointer_crud/raw_storage_{}/{label}\",\n            profile.backend_label()\n        ));\n        group.sample_size(10);\n        group.warm_up_time(Duration::from_millis(250));\n        group.measurement_time(Duration::from_secs(1));\n\n        group.bench_function(\n            
format!(\"write_root_all_rows/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || {\n                        runtime\n                            .block_on(\n                                storage_bench::prepare_json_pointer_tracked_state_write_root(\n                                    &storage_rows,\n                                ),\n                            )\n                            .expect(\"prepare json_pointer raw storage write root\")\n                    },\n                    |fixture| {\n                        let backend = raw_storage_backend(profile);\n                        black_box(\n                            runtime\n                                .block_on(storage_bench::tracked_state_write_root_prepared(\n                                    &backend, &fixture,\n                                ))\n                                .expect(\"json_pointer raw storage write root\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"get_many_exact_keys/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || {\n                        let backend = raw_storage_backend(profile);\n                        let fixture = runtime\n                            .block_on(storage_bench::prepare_json_pointer_tracked_state_read(\n                                &backend,\n                                &storage_rows,\n                            ))\n                            .expect(\"prepare json_pointer raw storage get_many\");\n                        (backend, fixture)\n                    },\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(\n                                    
storage_bench::json_pointer_tracked_state_get_many_prepared(\n                                        &backend, &fixture,\n                                    ),\n                                )\n                                .expect(\"json_pointer raw storage get_many\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"get_many_missing_keys/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || prepare_raw_storage_read(runtime, profile, &storage_rows),\n                    |(backend, fixture)| {\n                        black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::json_pointer_tracked_state_get_many_missing_prepared(\n                                    &backend, &fixture,\n                                ),\n                            )\n                            .expect(\"json_pointer raw storage get_many missing\"),\n                    )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(format!(\"scan_keys_only/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || prepare_raw_storage_read(runtime, profile, &storage_rows),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::json_pointer_tracked_state_scan_keys_only_prepared(\n                                    &backend, &fixture,\n                                ),\n                            )\n                            .expect(\"json_pointer raw storage scan keys\"),\n                    )\n                },\n                BatchSize::LargeInput,\n     
       )\n        });\n\n        group.bench_function(format!(\"scan_headers_only/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || prepare_raw_storage_read(runtime, profile, &storage_rows),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::json_pointer_tracked_state_scan_headers_only_prepared(\n                                    &backend, &fixture,\n                                ),\n                            )\n                            .expect(\"json_pointer raw storage scan headers\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n\n        group.bench_function(format!(\"scan_full_rows/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || prepare_raw_storage_read(runtime, profile, &storage_rows),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::json_pointer_tracked_state_scan_full_rows_prepared(\n                                    &backend, &fixture,\n                                ),\n                            )\n                            .expect(\"json_pointer raw storage scan\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n\n        group.bench_function(format!(\"prefix_scan_schema/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || prepare_raw_storage_read(runtime, profile, &storage_rows),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::json_pointer_tracked_state_prefix_scan_schema_prepared(\n    
                                &backend, &fixture,\n                                ),\n                            )\n                            .expect(\"json_pointer raw storage prefix schema scan\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n\n        group.bench_function(\n            format!(\"prefix_scan_schema_file_null/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || prepare_raw_storage_read(runtime, profile, &storage_rows),\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(\n                                    storage_bench::json_pointer_tracked_state_prefix_scan_schema_file_null_prepared(\n                                        &backend, &fixture,\n                                    ),\n                                )\n                                .expect(\"json_pointer raw storage prefix schema file null scan\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"write_delta_10pct_updates/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || {\n                        let backend = raw_storage_backend(profile);\n                        let fixture = runtime\n                            .block_on(\n                                storage_bench::prepare_json_pointer_tracked_state_update_rows(\n                                    &backend,\n                                    &storage_rows,\n                                    change_rows,\n                                ),\n                            )\n                            .expect(\"prepare json_pointer raw storage delta update\");\n                        
(backend, fixture)\n                    },\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(storage_bench::tracked_state_update_existing_prepared(\n                                    &backend, &fixture,\n                                ))\n                                .expect(\"json_pointer raw storage delta update\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"write_tombstone_10pct_deletes/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || {\n                        let backend = raw_storage_backend(profile);\n                        let fixture = runtime\n                            .block_on(\n                                storage_bench::prepare_json_pointer_tracked_state_tombstone_rows(\n                                    &backend,\n                                    &storage_rows,\n                                    change_rows,\n                                ),\n                            )\n                            .expect(\"prepare json_pointer raw storage tombstones\");\n                        (backend, fixture)\n                    },\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(storage_bench::tracked_state_update_existing_prepared(\n                                    &backend, &fixture,\n                                ))\n                                .expect(\"json_pointer raw storage tombstones\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            
format!(\"changed_keys_update_10pct/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || {\n                        let backend = raw_storage_backend(profile);\n                        let fixture = runtime\n                            .block_on(\n                                storage_bench::prepare_json_pointer_tracked_state_diff_update_rows(\n                                    &backend,\n                                    &storage_rows,\n                                    change_rows,\n                                ),\n                            )\n                            .expect(\"prepare json_pointer raw storage changed keys\");\n                        (backend, fixture)\n                    },\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(\n                                    storage_bench::json_pointer_tracked_state_changed_keys_prepared(\n                                        &backend, &fixture,\n                                    ),\n                                )\n                                .expect(\"json_pointer raw storage changed keys\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"changed_keys_delta_chain_10x1pct/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || {\n                        let backend = raw_storage_backend(profile);\n                        let fixture = runtime\n                            .block_on(\n                                storage_bench::prepare_json_pointer_tracked_state_diff_delta_chain(\n                                    &backend,\n                                    &storage_rows,\n                                    10,\n       
                             (row_count / 100).max(1),\n                                ),\n                            )\n                            .expect(\"prepare json_pointer raw storage delta-chain changed keys\");\n                        (backend, fixture)\n                    },\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(\n                                    storage_bench::json_pointer_tracked_state_changed_keys_prepared(\n                                        &backend, &fixture,\n                                    ),\n                                )\n                                .expect(\"json_pointer raw storage delta-chain changed keys\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"materialize_delta_chain_10x1pct/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || {\n                        let backend = raw_storage_backend(profile);\n                        let fixture = runtime\n                            .block_on(\n                                storage_bench::prepare_json_pointer_tracked_state_materialize_delta_chain(\n                                    &backend,\n                                    &storage_rows,\n                                    10,\n                                    (row_count / 100).max(1),\n                                ),\n                            )\n                            .expect(\"prepare json_pointer raw storage materialize delta chain\");\n                        (backend, fixture)\n                    },\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                
.block_on(storage_bench::tracked_state_materialize_root_prepared(\n                                    &backend, &fixture,\n                                ))\n                                .expect(\"json_pointer raw storage materialize delta chain\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.finish();\n    }\n}\n\nfn bench_lix(\n    c: &mut Criterion,\n    runtime: &Runtime,\n    all_rows: &[PointerRow],\n    row_count: usize,\n    label: &str,\n) {\n    let rows = all_rows[..row_count].to_vec();\n    let change_rows = changed_row_count(row_count);\n    for profile in [LixBackendProfile::Sqlite, LixBackendProfile::RocksDb] {\n        let mut group = c.benchmark_group(format!(\"json_pointer_crud/{}/{label}\", profile.name()));\n        // Both branches of the former `if row_count <= SMOKE_ROWS` were 11; use the constant directly.\n        group.sample_size(11);\n        group.warm_up_time(Duration::from_millis(250));\n        group.measurement_time(Duration::from_secs(1));\n\n        group.bench_function(format!(\"insert_all_rows/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || runtime.block_on(prepare_lix_empty(profile)),\n                |fixture| black_box(runtime.block_on(lix_insert_all(fixture, &rows))),\n                BatchSize::LargeInput,\n            )\n        });\n\n        group.bench_function(\n            format!(\"select_all_path_value/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n                    |fixture| black_box(runtime.block_on(lix_select_all(fixture, row_count))),\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(format!(\"select_one_by_pk/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || 
runtime.block_on(prepare_lix_seeded(profile, &rows)),\n                |fixture| {\n                    black_box(runtime.block_on(lix_select_one_by_pk(fixture, pick_pk_row(&rows))))\n                },\n                BatchSize::LargeInput,\n            )\n        });\n\n        group.bench_function(format!(\"update_all_values/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n                |fixture| black_box(runtime.block_on(lix_update_all(fixture, row_count))),\n                BatchSize::LargeInput,\n            )\n        });\n\n        group.bench_function(format!(\"update_one_by_pk/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n                |fixture| {\n                    black_box(runtime.block_on(lix_update_one_by_pk(fixture, pick_pk_row(&rows))))\n                },\n                BatchSize::LargeInput,\n            )\n        });\n\n        group.bench_function(format!(\"delete_all_rows/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n                |fixture| black_box(runtime.block_on(lix_delete_all(fixture, row_count))),\n                BatchSize::LargeInput,\n            )\n        });\n\n        group.bench_function(format!(\"delete_one_by_pk/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n                |fixture| {\n                    black_box(runtime.block_on(lix_delete_one_by_pk(fixture, pick_pk_row(&rows))))\n                },\n                BatchSize::LargeInput,\n            )\n        });\n\n        group.bench_function(format!(\"create_version/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || runtime.block_on(prepare_lix_seeded(profile, 
&rows)),\n                |fixture| black_box(runtime.block_on(lix_create_version(fixture))),\n                BatchSize::LargeInput,\n            )\n        });\n\n        group.bench_function(\n            format!(\n                \"merge_version_fast_forward_10pct_updates/{}\",\n                row_label(row_count)\n            ),\n            |b| {\n                b.iter_batched(\n                    || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n                    |fixture| {\n                        black_box(runtime.block_on(lix_merge_version_fast_forward(\n                            fixture,\n                            &rows,\n                            change_rows,\n                        )))\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\n                \"merge_version_divergent_10pct_updates/{}\",\n                row_label(row_count)\n            ),\n            |b| {\n                b.iter_batched(\n                    || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n                    |fixture| {\n                        black_box(runtime.block_on(lix_merge_version_divergent(\n                            fixture,\n                            &rows,\n                            change_rows,\n                        )))\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.finish();\n    }\n}\n\nfn prepare_raw_sqlite_empty() -> RawSqliteFixture {\n    let dir = TempDir::new().expect(\"create raw sqlite tempdir\");\n    let conn = Connection::open(dir.path().join(\"json-pointer-crud.sqlite\"))\n        .expect(\"open raw sqlite json_pointer CRUD db\");\n    conn.execute_batch(\n        \"\n        PRAGMA journal_mode = WAL;\n        PRAGMA synchronous = NORMAL;\n        PRAGMA temp_store = MEMORY;\n        PRAGMA foreign_keys = 
ON;\n        CREATE TABLE json_pointer (\n            path TEXT NOT NULL PRIMARY KEY,\n            value TEXT NOT NULL\n        ) WITHOUT ROWID;\n        \",\n    )\n    .expect(\"configure raw sqlite json_pointer CRUD db\");\n    RawSqliteFixture { conn, _dir: dir }\n}\n\nfn prepare_raw_sqlite_seeded(rows: &[PointerRow]) -> RawSqliteFixture {\n    let fixture = prepare_raw_sqlite_empty();\n    raw_sqlite_seed(&fixture.conn, rows);\n    fixture\n}\n\nfn raw_sqlite_seed(conn: &Connection, rows: &[PointerRow]) {\n    conn.execute_batch(\"BEGIN IMMEDIATE\")\n        .expect(\"begin raw sqlite seed\");\n    {\n        let mut statement = conn\n            .prepare_cached(\n                \"INSERT INTO json_pointer (path, value) VALUES (?1, ?2)\n                 ON CONFLICT(path) DO UPDATE SET value = excluded.value\",\n            )\n            .expect(\"prepare raw sqlite seed insert\");\n        for row in rows {\n            statement\n                .execute(params![row.path.as_str(), row.value_json.as_str()])\n                .expect(\"insert raw sqlite seed row\");\n        }\n    }\n    conn.execute_batch(\"COMMIT\")\n        .expect(\"commit raw sqlite seed\");\n}\n\nfn raw_sqlite_insert_all(fixture: RawSqliteFixture, rows: &[PointerRow]) -> usize {\n    raw_sqlite_seed(&fixture.conn, rows);\n    rows.len()\n}\n\nfn raw_sqlite_select_all(fixture: RawSqliteFixture, expected_rows: usize) -> usize {\n    let mut statement = fixture\n        .conn\n        .prepare_cached(\"SELECT path, value FROM json_pointer ORDER BY path\")\n        .expect(\"prepare raw sqlite select all\");\n    let count = statement\n        .query_map([], |_| Ok(()))\n        .expect(\"raw sqlite select all\")\n        .count();\n    assert_eq!(count, expected_rows);\n    count\n}\n\nfn raw_sqlite_select_one_by_pk(fixture: RawSqliteFixture, row: &PointerRow) -> usize {\n    let mut statement = fixture\n        .conn\n        .prepare_cached(\"SELECT path, value FROM json_pointer WHERE 
path = ?1\")\n        .expect(\"prepare raw sqlite select by pk\");\n    let found = statement\n        .query_row(params![row.path.as_str()], |_| Ok(()))\n        .optional()\n        .expect(\"raw sqlite select by pk\")\n        .is_some();\n    assert!(found);\n    usize::from(found)\n}\n\nfn raw_sqlite_update_all(fixture: RawSqliteFixture, expected_rows: usize) -> usize {\n    let affected = fixture\n        .conn\n        .execute(\n            \"UPDATE json_pointer SET value = ?1\",\n            params![r#\"{\"updated\":true}\"#],\n        )\n        .expect(\"raw sqlite update all\");\n    assert_eq!(affected, expected_rows);\n    affected\n}\n\nfn raw_sqlite_update_one_by_pk(fixture: RawSqliteFixture, row: &PointerRow) -> usize {\n    let affected = fixture\n        .conn\n        .execute(\n            \"UPDATE json_pointer SET value = ?1 WHERE path = ?2\",\n            params![row.updated_value_json.as_str(), row.path.as_str()],\n        )\n        .expect(\"raw sqlite update by pk\");\n    assert_eq!(affected, 1);\n    affected\n}\n\nfn raw_sqlite_delete_all(fixture: RawSqliteFixture, expected_rows: usize) -> usize {\n    let affected = fixture\n        .conn\n        .execute(\"DELETE FROM json_pointer\", [])\n        .expect(\"raw sqlite delete all\");\n    assert_eq!(affected, expected_rows);\n    affected\n}\n\nfn raw_sqlite_delete_one_by_pk(fixture: RawSqliteFixture, row: &PointerRow) -> usize {\n    let affected = fixture\n        .conn\n        .execute(\n            \"DELETE FROM json_pointer WHERE path = ?1\",\n            params![row.path.as_str()],\n        )\n        .expect(\"raw sqlite delete by pk\");\n    assert_eq!(affected, 1);\n    affected\n}\n\nasync fn prepare_lix_empty(profile: LixBackendProfile) -> LixFixture {\n    let engine = match profile {\n        LixBackendProfile::Sqlite => {\n            let backend =\n                SqliteBenchBackend::tempfile().expect(\"create sqlite json_pointer CRUD backend\");\n            
Engine::initialize(Box::new(backend.clone()))\n                .await\n                .expect(\"initialize sqlite json_pointer CRUD Lix backend\");\n            Engine::new(Box::new(backend))\n                .await\n                .expect(\"open sqlite json_pointer CRUD Lix engine\")\n        }\n        LixBackendProfile::RocksDb => {\n            let backend =\n                RocksDbBenchBackend::new().expect(\"create rocksdb json_pointer CRUD backend\");\n            Engine::initialize(Box::new(backend.clone()))\n                .await\n                .expect(\"initialize rocksdb json_pointer CRUD Lix backend\");\n            Engine::new(Box::new(backend))\n                .await\n                .expect(\"open rocksdb json_pointer CRUD Lix engine\")\n        }\n    };\n    let setup_session = engine\n        .open_workspace_session()\n        .await\n        .expect(\"open json_pointer CRUD Lix setup workspace session\");\n    register_json_pointer_schema(&setup_session).await;\n    let session = engine\n        .open_workspace_session()\n        .await\n        .expect(\"open json_pointer CRUD Lix benchmark workspace session\");\n    LixFixture { session }\n}\n\nasync fn prepare_lix_seeded(profile: LixBackendProfile, rows: &[PointerRow]) -> LixFixture {\n    let fixture = prepare_lix_empty(profile).await;\n    insert_lix_rows(&fixture.session, rows).await;\n    fixture\n}\n\nasync fn register_json_pointer_schema(session: &SessionContext) {\n    let sql = format!(\n        \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked)\n         VALUES (lix_json('{}'), false, false)\",\n        sql_string(JSON_POINTER_SCHEMA_JSON)\n    );\n    let affected = session\n        .execute(&sql, &[])\n        .await\n        .expect(\"register json_pointer schema\")\n        .rows_affected();\n    assert_eq!(affected, 1);\n}\n\nasync fn lix_insert_all(fixture: LixFixture, rows: &[PointerRow]) -> usize {\n    insert_lix_rows(&fixture.session, 
rows).await;\n    rows.len()\n}\n\nasync fn insert_lix_rows(session: &SessionContext, rows: &[PointerRow]) {\n    for chunk in rows.chunks(CHUNK_SIZE) {\n        let mut sql = String::from(\"INSERT INTO json_pointer (path, value) VALUES \");\n        for (index, row) in chunk.iter().enumerate() {\n            if index > 0 {\n                sql.push(',');\n            }\n            sql.push_str(&format!(\n                \"('{}', lix_json('{}'))\",\n                sql_string(row.path.as_str()),\n                sql_string(row.value_json.as_str())\n            ));\n        }\n        let affected = session\n            .execute(&sql, &[])\n            .await\n            .expect(\"insert json_pointer rows\")\n            .rows_affected();\n        assert_eq!(affected as usize, chunk.len());\n    }\n}\n\nasync fn lix_select_all(fixture: LixFixture, expected_rows: usize) -> usize {\n    let result = fixture\n        .session\n        .execute(\"SELECT path, value FROM json_pointer ORDER BY path\", &[])\n        .await\n        .expect(\"select all json_pointer rows\");\n    assert_eq!(result.len(), expected_rows);\n    result.len()\n}\n\nasync fn lix_select_one_by_pk(fixture: LixFixture, row: &PointerRow) -> usize {\n    let sql = format!(\n        \"SELECT path, value FROM json_pointer WHERE path = '{}'\",\n        sql_string(row.path.as_str())\n    );\n    let result = fixture\n        .session\n        .execute(&sql, &[])\n        .await\n        .expect(\"select json_pointer row by path\");\n    assert_eq!(result.len(), 1);\n    result.len()\n}\n\nasync fn lix_update_all(fixture: LixFixture, expected_rows: usize) -> usize {\n    let affected = fixture\n        .session\n        .execute(\n            r#\"UPDATE json_pointer SET value = lix_json('{\"updated\":true}')\"#,\n            &[],\n        )\n        .await\n        .expect(\"update all json_pointer rows\")\n        .rows_affected() as usize;\n    assert_eq!(affected, expected_rows);\n    
affected\n}\n\nasync fn lix_update_one_by_pk(fixture: LixFixture, row: &PointerRow) -> usize {\n    let sql = format!(\n        \"UPDATE json_pointer SET value = lix_json('{}') WHERE path = '{}'\",\n        sql_string(row.updated_value_json.as_str()),\n        sql_string(row.path.as_str())\n    );\n    let affected = fixture\n        .session\n        .execute(&sql, &[])\n        .await\n        .expect(\"update json_pointer row by path\")\n        .rows_affected() as usize;\n    assert_eq!(affected, 1);\n    affected\n}\n\nasync fn lix_delete_all(fixture: LixFixture, expected_rows: usize) -> usize {\n    let affected = fixture\n        .session\n        .execute(\"DELETE FROM json_pointer\", &[])\n        .await\n        .expect(\"delete all json_pointer rows\")\n        .rows_affected() as usize;\n    assert_eq!(affected, expected_rows);\n    affected\n}\n\nasync fn lix_delete_one_by_pk(fixture: LixFixture, row: &PointerRow) -> usize {\n    let sql = format!(\n        \"DELETE FROM json_pointer WHERE path = '{}'\",\n        sql_string(row.path.as_str())\n    );\n    let affected = fixture\n        .session\n        .execute(&sql, &[])\n        .await\n        .expect(\"delete json_pointer row by path\")\n        .rows_affected() as usize;\n    assert_eq!(affected, 1);\n    affected\n}\n\nasync fn lix_create_version(fixture: LixFixture) -> String {\n    create_lix_version(&fixture.session).await\n}\n\nasync fn create_lix_version(session: &SessionContext) -> String {\n    let receipt = session\n        .create_version(CreateVersionOptions {\n            id: Some(\"bench-draft\".to_string()),\n            name: \"bench draft\".to_string(),\n            from_commit_id: None,\n        })\n        .await\n        .expect(\"create json_pointer benchmark version\");\n    receipt.id\n}\n\nasync fn lix_merge_version_fast_forward(\n    fixture: LixFixture,\n    rows: &[PointerRow],\n    change_rows: usize,\n) -> usize {\n    let main_id = fixture\n        .session\n        
.active_version_id()\n        .await\n        .expect(\"load active json_pointer main version id\");\n    let draft_id = create_lix_version(&fixture.session).await;\n    let (draft_session, _) = fixture\n        .session\n        .switch_version(SwitchVersionOptions {\n            version_id: draft_id.clone(),\n        })\n        .await\n        .expect(\"switch to json_pointer draft version\");\n    update_lix_rows_by_pk(&draft_session, &rows[..change_rows], \"source\").await;\n    let (main_session, _) = draft_session\n        .switch_version(SwitchVersionOptions {\n            version_id: main_id,\n        })\n        .await\n        .expect(\"switch back to main version\");\n    let receipt = main_session\n        .merge_version(MergeVersionOptions {\n            source_version_id: draft_id,\n        })\n        .await\n        .expect(\"merge fast-forward json_pointer draft\");\n    assert_eq!(receipt.outcome, MergeVersionOutcome::FastForward);\n    assert_eq!(receipt.change_stats.total, change_rows);\n    receipt.change_stats.total\n}\n\nasync fn lix_merge_version_divergent(\n    fixture: LixFixture,\n    rows: &[PointerRow],\n    change_rows: usize,\n) -> usize {\n    let main_id = fixture\n        .session\n        .active_version_id()\n        .await\n        .expect(\"load active json_pointer main version id\");\n    let draft_id = create_lix_version(&fixture.session).await;\n    let (draft_session, _) = fixture\n        .session\n        .switch_version(SwitchVersionOptions {\n            version_id: draft_id.clone(),\n        })\n        .await\n        .expect(\"switch to json_pointer draft version\");\n    update_lix_rows_by_pk(&draft_session, &rows[..change_rows], \"source\").await;\n    let (main_session, _) = draft_session\n        .switch_version(SwitchVersionOptions {\n            version_id: main_id,\n        })\n        .await\n        .expect(\"switch back to main version\");\n    update_lix_rows_by_pk(&main_session, 
&rows[change_rows..change_rows * 2], \"target\").await;\n    let receipt = main_session\n        .merge_version(MergeVersionOptions {\n            source_version_id: draft_id,\n        })\n        .await\n        .expect(\"merge divergent json_pointer draft\");\n    assert_eq!(receipt.outcome, MergeVersionOutcome::MergeCommitted);\n    assert_eq!(receipt.change_stats.total, change_rows);\n    receipt.change_stats.total\n}\n\nasync fn update_lix_rows_by_pk(session: &SessionContext, rows: &[PointerRow], side: &str) {\n    for row in rows {\n        let value = serde_json::json!({\n            \"updated\": true,\n            \"side\": side,\n            \"path\": row.path,\n        })\n        .to_string();\n        let sql = format!(\n            \"UPDATE json_pointer SET value = lix_json('{}') WHERE path = '{}'\",\n            sql_string(value.as_str()),\n            sql_string(row.path.as_str())\n        );\n        let affected = session\n            .execute(&sql, &[])\n            .await\n            .expect(\"update json_pointer row by path\")\n            .rows_affected();\n        assert_eq!(affected, 1);\n    }\n}\n\nfn fixture_rows() -> Vec<PointerRow> {\n    let root: JsonValue = serde_json::from_str(PNPM_LOCK_JSON).expect(\"pnpm lock JSON fixture\");\n    let mut rows = Vec::new();\n    flatten_json(\"\", &root, &mut rows);\n    assert!(\n        rows.len() >= SCALE_ROWS,\n        \"pnpm lock fixture should have at least {SCALE_ROWS} pointer rows, got {}\",\n        rows.len()\n    );\n    rows\n}\n\nfn storage_rows(rows: &[PointerRow]) -> Vec<storage_bench::JsonPointerStorageRow> {\n    rows.iter()\n        .map(|row| storage_bench::JsonPointerStorageRow {\n            path: row.path.clone(),\n            value_json: row.value_json.clone(),\n            updated_value_json: row.updated_value_json.clone(),\n        })\n        .collect()\n}\n\nfn pick_pk_row(rows: &[PointerRow]) -> &PointerRow {\n    &rows[rows.len() / 2]\n}\n\nfn 
raw_storage_backend(profile: LixBackendProfile) -> Arc<dyn Backend + Send + Sync> {\n    match profile {\n        LixBackendProfile::Sqlite => {\n            Arc::new(SqliteBenchBackend::tempfile().expect(\"create sqlite raw storage backend\"))\n        }\n        LixBackendProfile::RocksDb => {\n            Arc::new(RocksDbBenchBackend::new().expect(\"create rocksdb raw storage backend\"))\n        }\n    }\n}\n\nfn prepare_raw_storage_read(\n    runtime: &Runtime,\n    profile: LixBackendProfile,\n    rows: &[storage_bench::JsonPointerStorageRow],\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::JsonPointerTrackedStateReadFixture,\n) {\n    let backend = raw_storage_backend(profile);\n    let fixture = runtime\n        .block_on(storage_bench::prepare_json_pointer_tracked_state_read(\n            &backend, rows,\n        ))\n        .expect(\"prepare json_pointer raw storage read\");\n    (backend, fixture)\n}\n\nfn flatten_json(path: &str, value: &JsonValue, rows: &mut Vec<PointerRow>) {\n    rows.push(PointerRow {\n        path: path.to_string(),\n        value_json: value.to_string(),\n        updated_value_json: updated_value_for(path),\n    });\n\n    match value {\n        JsonValue::Array(items) => {\n            for (index, item) in items.iter().enumerate() {\n                let child_path = format!(\"{path}/{}\", index);\n                flatten_json(&child_path, item, rows);\n            }\n        }\n        JsonValue::Object(map) => {\n            for (key, child) in map {\n                let child_path = format!(\"{path}/{}\", escape_pointer_token(key));\n                flatten_json(&child_path, child, rows);\n            }\n        }\n        JsonValue::Null | JsonValue::Bool(_) | JsonValue::Number(_) | JsonValue::String(_) => {}\n    }\n}\n\nfn updated_value_for(path: &str) -> String {\n    serde_json::json!({\n        \"updated\": true,\n        \"path\": path,\n    })\n    .to_string()\n}\n\nfn escape_pointer_token(token: &str) 
-> String {\n    token.replace('~', \"~0\").replace('/', \"~1\")\n}\n\nfn sql_string(value: &str) -> String {\n    value.replace('\\'', \"''\")\n}\n\nfn row_label(rows: usize) -> String {\n    if rows >= 1_000 {\n        format!(\"{}k\", rows / 1_000)\n    } else {\n        rows.to_string()\n    }\n}\n\nfn changed_row_count(rows: usize) -> usize {\n    (rows / CHANGE_ROW_DENOMINATOR).max(1)\n}\n\ncriterion_group!(benches, json_pointer_crud_benches);\ncriterion_main!(benches);\n"
  },
  {
    "path": "packages/engine/benches/json_pointer_physical/main.rs",
    "content": "use std::sync::Arc;\nuse std::time::Duration;\n\nuse criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion};\nuse lix_engine::{storage_bench, Backend};\nuse rusqlite::{params, Connection, OptionalExtension};\nuse serde_json::Value as JsonValue;\nuse tempfile::TempDir;\nuse tokio::runtime::Runtime;\n\n#[path = \"../storage/rocksdb_backend.rs\"]\nmod rocksdb_backend;\n#[path = \"../storage/sqlite_backend.rs\"]\nmod sqlite_backend;\n\nuse rocksdb_backend::RocksDbBenchBackend;\nuse sqlite_backend::SqliteBenchBackend;\n\nconst PNPM_LOCK_JSON: &str = include_str!(\"../fixtures/pnpm-lock.fixture.json\");\nconst BASELINE_ROWS: usize = 100;\nconst SMOKE_ROWS: usize = 1_000;\nconst SCALE_ROWS: usize = 10_000;\nconst CHANGE_ROW_DENOMINATOR: usize = 10;\n\n#[derive(Clone)]\nstruct PointerRow {\n    path: String,\n    value_json: String,\n    updated_value_json: String,\n}\n\nstruct RawSqliteFixture {\n    conn: Connection,\n    _dir: TempDir,\n}\n\n#[derive(Clone, Copy)]\nenum BackendProfile {\n    Sqlite,\n    RocksDb,\n}\n\nimpl BackendProfile {\n    fn label(self) -> &'static str {\n        match self {\n            Self::Sqlite => \"sqlite\",\n            Self::RocksDb => \"rocksdb\",\n        }\n    }\n}\n\nfn json_pointer_physical_benches(c: &mut Criterion) {\n    let runtime = tokio::runtime::Builder::new_current_thread()\n        .enable_all()\n        .build()\n        .expect(\"create tokio runtime for json_pointer physical benchmarks\");\n    let rows = fixture_rows();\n\n    bench_raw_sqlite(c, &rows, BASELINE_ROWS, \"baseline\");\n    bench_physical(c, &runtime, &rows, BASELINE_ROWS, \"baseline\");\n    bench_raw_sqlite(c, &rows, SMOKE_ROWS, \"smoke\");\n    bench_physical(c, &runtime, &rows, SMOKE_ROWS, \"smoke\");\n    bench_raw_sqlite(c, &rows, SCALE_ROWS, \"scale\");\n    bench_physical(c, &runtime, &rows, SCALE_ROWS, \"scale\");\n}\n\nfn bench_raw_sqlite(c: &mut Criterion, all_rows: &[PointerRow], row_count: usize, 
label: &str) {\n    let rows = all_rows[..row_count].to_vec();\n    let change_rows = changed_row_count(row_count);\n    let mut group = c.benchmark_group(format!(\"json_pointer_physical/raw_sqlite/{label}\"));\n    group.sample_size(10);\n    group.warm_up_time(Duration::from_millis(250));\n    group.measurement_time(Duration::from_secs(1));\n\n    group.bench_function(\n        format!(\"write_root_all_rows/{}\", row_label(row_count)),\n        |b| {\n            b.iter_batched(\n                prepare_raw_sqlite_empty,\n                |fixture| black_box(raw_sqlite_insert_all(fixture, &rows)),\n                BatchSize::LargeInput,\n            )\n        },\n    );\n\n    group.bench_function(\n        format!(\"get_many_exact_keys/{}\", row_label(row_count)),\n        |b| {\n            b.iter_batched(\n                || prepare_raw_sqlite_seeded(&rows),\n                |fixture| black_box(raw_sqlite_get_many_exact(fixture, &rows)),\n                BatchSize::LargeInput,\n            )\n        },\n    );\n\n    group.bench_function(\n        format!(\"get_many_missing_keys/{}\", row_label(row_count)),\n        |b| {\n            b.iter_batched(\n                || prepare_raw_sqlite_seeded(&rows),\n                |fixture| black_box(raw_sqlite_get_many_missing(fixture, row_count)),\n                BatchSize::LargeInput,\n            )\n        },\n    );\n\n    group.bench_function(\n        format!(\"exists_many_exact_keys/{}\", row_label(row_count)),\n        |b| {\n            b.iter_batched(\n                || prepare_raw_sqlite_seeded(&rows),\n                |fixture| black_box(raw_sqlite_exists_many(fixture, &rows)),\n                BatchSize::LargeInput,\n            )\n        },\n    );\n\n    group.bench_function(format!(\"scan_keys_only/{}\", row_label(row_count)), |b| {\n        b.iter_batched(\n            || prepare_raw_sqlite_seeded(&rows),\n            |fixture| black_box(raw_sqlite_scan_keys_only(fixture, row_count)),\n            
BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(format!(\"scan_headers_only/{}\", row_label(row_count)), |b| {\n        b.iter_batched(\n            || prepare_raw_sqlite_seeded(&rows),\n            |fixture| black_box(raw_sqlite_scan_keys_only(fixture, row_count)),\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(format!(\"scan_full_rows/{}\", row_label(row_count)), |b| {\n        b.iter_batched(\n            || prepare_raw_sqlite_seeded(&rows),\n            |fixture| black_box(raw_sqlite_scan_full_rows(fixture, row_count)),\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\n        format!(\"prefix_scan_schema/{}\", row_label(row_count)),\n        |b| {\n            b.iter_batched(\n                || prepare_raw_sqlite_seeded(&rows),\n                |fixture| black_box(raw_sqlite_scan_full_rows(fixture, row_count)),\n                BatchSize::LargeInput,\n            )\n        },\n    );\n\n    group.bench_function(\n        format!(\"prefix_scan_schema_file_null/{}\", row_label(row_count)),\n        |b| {\n            b.iter_batched(\n                || prepare_raw_sqlite_seeded(&rows),\n                |fixture| black_box(raw_sqlite_scan_full_rows(fixture, row_count)),\n                BatchSize::LargeInput,\n            )\n        },\n    );\n\n    group.bench_function(\n        format!(\"write_delta_10pct_updates/{}\", row_label(row_count)),\n        |b| {\n            b.iter_batched(\n                || prepare_raw_sqlite_seeded(&rows),\n                |fixture| black_box(raw_sqlite_update_first_rows(fixture, &rows, change_rows)),\n                BatchSize::LargeInput,\n            )\n        },\n    );\n\n    group.bench_function(\n        format!(\"write_tombstone_10pct_deletes/{}\", row_label(row_count)),\n        |b| {\n            b.iter_batched(\n                || prepare_raw_sqlite_seeded(&rows),\n                |fixture| 
black_box(raw_sqlite_delete_first_rows(fixture, &rows, change_rows)),\n                BatchSize::LargeInput,\n            )\n        },\n    );\n\n    group.finish();\n}\n\nfn bench_physical(\n    c: &mut Criterion,\n    runtime: &Runtime,\n    all_rows: &[PointerRow],\n    row_count: usize,\n    label: &str,\n) {\n    let rows = all_rows[..row_count].to_vec();\n    let storage_rows = storage_rows(&rows);\n    let change_rows = changed_row_count(row_count);\n\n    for profile in [BackendProfile::Sqlite, BackendProfile::RocksDb] {\n        let mut group =\n            c.benchmark_group(format!(\"json_pointer_physical/{}/{label}\", profile.label()));\n        group.sample_size(10);\n        group.warm_up_time(Duration::from_millis(250));\n        group.measurement_time(Duration::from_secs(1));\n\n        group.bench_function(\n            format!(\"write_root_all_rows/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || {\n                        runtime\n                            .block_on(\n                                storage_bench::prepare_json_pointer_tracked_state_write_root(\n                                    &storage_rows,\n                                ),\n                            )\n                            .expect(\"prepare json_pointer physical write root\")\n                    },\n                    |fixture| {\n                        let backend = physical_backend(profile);\n                        black_box(\n                            runtime\n                                .block_on(storage_bench::tracked_state_write_root_prepared(\n                                    &backend, &fixture,\n                                ))\n                                .expect(\"json_pointer physical write root\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n     
       format!(\"get_many_exact_keys/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || prepare_physical_read(runtime, profile, &storage_rows),\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(\n                                    storage_bench::json_pointer_tracked_state_get_many_prepared(\n                                        &backend, &fixture,\n                                    ),\n                                )\n                                .expect(\"json_pointer physical get_many\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"get_many_missing_keys/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || prepare_physical_read(runtime, profile, &storage_rows),\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(\n                                    storage_bench::json_pointer_tracked_state_get_many_missing_prepared(\n                                        &backend, &fixture,\n                                    ),\n                                )\n                                .expect(\"json_pointer physical get_many missing\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(format!(\"scan_keys_only/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || prepare_physical_read(runtime, profile, &storage_rows),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n              
              .block_on(\n                                storage_bench::json_pointer_tracked_state_scan_keys_only_prepared(\n                                    &backend, &fixture,\n                                ),\n                            )\n                            .expect(\"json_pointer physical scan keys\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n\n        group.bench_function(\n            format!(\"scan_headers_only/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || prepare_physical_read(runtime, profile, &storage_rows),\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(\n                                    storage_bench::json_pointer_tracked_state_scan_headers_only_prepared(\n                                        &backend, &fixture,\n                                    ),\n                                )\n                                .expect(\"json_pointer physical scan headers\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(format!(\"scan_full_rows/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || prepare_physical_read(runtime, profile, &storage_rows),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::json_pointer_tracked_state_scan_full_rows_prepared(\n                                    &backend, &fixture,\n                                ),\n                            )\n                            .expect(\"json_pointer physical scan full rows\"),\n                    )\n                },\n            
    BatchSize::LargeInput,\n            )\n        });\n\n        group.bench_function(format!(\"prefix_scan_schema/{}\", row_label(row_count)), |b| {\n            b.iter_batched(\n                || prepare_physical_read(runtime, profile, &storage_rows),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::json_pointer_tracked_state_prefix_scan_schema_prepared(\n                                    &backend, &fixture,\n                                ),\n                            )\n                            .expect(\"json_pointer physical prefix schema scan\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n\n        group.bench_function(\n            format!(\"prefix_scan_schema_file_null/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || prepare_physical_read(runtime, profile, &storage_rows),\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(\n                                    storage_bench::json_pointer_tracked_state_prefix_scan_schema_file_null_prepared(\n                                        &backend, &fixture,\n                                    ),\n                                )\n                                .expect(\"json_pointer physical prefix schema file null scan\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"write_delta_10pct_updates/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || {\n                        let backend = physical_backend(profile);\n                   
     let fixture = runtime\n                            .block_on(\n                                storage_bench::prepare_json_pointer_tracked_state_update_rows(\n                                    &backend,\n                                    &storage_rows,\n                                    change_rows,\n                                ),\n                            )\n                            .expect(\"prepare json_pointer physical delta update\");\n                        (backend, fixture)\n                    },\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(storage_bench::tracked_state_update_existing_prepared(\n                                    &backend, &fixture,\n                                ))\n                                .expect(\"json_pointer physical delta update\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"write_tombstone_10pct_deletes/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || {\n                        let backend = physical_backend(profile);\n                        let fixture = runtime\n                            .block_on(\n                                storage_bench::prepare_json_pointer_tracked_state_tombstone_rows(\n                                    &backend,\n                                    &storage_rows,\n                                    change_rows,\n                                ),\n                            )\n                            .expect(\"prepare json_pointer physical tombstones\");\n                        (backend, fixture)\n                    },\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n    
                            .block_on(storage_bench::tracked_state_update_existing_prepared(\n                                    &backend, &fixture,\n                                ))\n                                .expect(\"json_pointer physical tombstones\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"changed_keys_update_10pct/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || {\n                        let backend = physical_backend(profile);\n                        let fixture = runtime\n                            .block_on(\n                                storage_bench::prepare_json_pointer_tracked_state_diff_update_rows(\n                                    &backend,\n                                    &storage_rows,\n                                    change_rows,\n                                ),\n                            )\n                            .expect(\"prepare json_pointer physical changed keys\");\n                        (backend, fixture)\n                    },\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(\n                                    storage_bench::json_pointer_tracked_state_changed_keys_prepared(\n                                        &backend, &fixture,\n                                    ),\n                                )\n                                .expect(\"json_pointer physical changed keys\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"changed_keys_delta_chain_10x1pct/{}\", row_label(row_count)),\n            |b| {\n                
b.iter_batched(\n                    || {\n                        let backend = physical_backend(profile);\n                        let fixture = runtime\n                            .block_on(\n                                storage_bench::prepare_json_pointer_tracked_state_diff_delta_chain(\n                                    &backend,\n                                    &storage_rows,\n                                    10,\n                                    (row_count / 100).max(1),\n                                ),\n                            )\n                            .expect(\"prepare json_pointer physical delta-chain changed keys\");\n                        (backend, fixture)\n                    },\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(\n                                    storage_bench::json_pointer_tracked_state_changed_keys_prepared(\n                                        &backend, &fixture,\n                                    ),\n                                )\n                                .expect(\"json_pointer physical delta-chain changed keys\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"materialize_delta_chain_10x1pct/{}\", row_label(row_count)),\n            |b| {\n                b.iter_batched(\n                    || {\n                        let backend = physical_backend(profile);\n                        let fixture = runtime\n                            .block_on(\n                                storage_bench::prepare_json_pointer_tracked_state_materialize_delta_chain(\n                                    &backend,\n                                    &storage_rows,\n                                    10,\n                                    
(row_count / 100).max(1),\n                                ),\n                            )\n                            .expect(\"prepare json_pointer physical materialize delta chain\");\n                        (backend, fixture)\n                    },\n                    |(backend, fixture)| {\n                        black_box(\n                            runtime\n                                .block_on(storage_bench::tracked_state_materialize_root_prepared(\n                                    &backend, &fixture,\n                                ))\n                                .expect(\"json_pointer physical materialize delta chain\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.finish();\n    }\n}\n\nfn fixture_rows() -> Vec<PointerRow> {\n    let root: JsonValue = serde_json::from_str(PNPM_LOCK_JSON).expect(\"pnpm lock JSON fixture\");\n    let mut rows = Vec::new();\n    flatten_json(\"\", &root, &mut rows);\n    assert!(\n        rows.len() >= SCALE_ROWS,\n        \"pnpm lock fixture should have at least {SCALE_ROWS} pointer rows, got {}\",\n        rows.len()\n    );\n    rows\n}\n\nfn prepare_raw_sqlite_empty() -> RawSqliteFixture {\n    let dir = TempDir::new().expect(\"create raw sqlite tempdir\");\n    let conn = Connection::open(dir.path().join(\"json-pointer-physical.sqlite\"))\n        .expect(\"open raw sqlite json_pointer physical db\");\n    conn.execute_batch(\n        \"\n        PRAGMA journal_mode = WAL;\n        PRAGMA synchronous = NORMAL;\n        PRAGMA temp_store = MEMORY;\n        PRAGMA foreign_keys = ON;\n        CREATE TABLE json_pointer (\n            path TEXT NOT NULL PRIMARY KEY,\n            value TEXT NOT NULL\n        ) WITHOUT ROWID;\n        \",\n    )\n    .expect(\"configure raw sqlite json_pointer physical db\");\n    RawSqliteFixture { conn, _dir: dir }\n}\n\nfn prepare_raw_sqlite_seeded(rows: 
&[PointerRow]) -> RawSqliteFixture {\n    let fixture = prepare_raw_sqlite_empty();\n    raw_sqlite_seed(&fixture.conn, rows);\n    fixture\n}\n\nfn raw_sqlite_seed(conn: &Connection, rows: &[PointerRow]) {\n    conn.execute_batch(\"BEGIN IMMEDIATE\")\n        .expect(\"begin raw sqlite seed\");\n    {\n        let mut statement = conn\n            .prepare_cached(\n                \"INSERT INTO json_pointer (path, value) VALUES (?1, ?2)\n                 ON CONFLICT(path) DO UPDATE SET value = excluded.value\",\n            )\n            .expect(\"prepare raw sqlite seed insert\");\n        for row in rows {\n            statement\n                .execute(params![row.path.as_str(), row.value_json.as_str()])\n                .expect(\"insert raw sqlite seed row\");\n        }\n    }\n    conn.execute_batch(\"COMMIT\")\n        .expect(\"commit raw sqlite seed\");\n}\n\nfn raw_sqlite_insert_all(fixture: RawSqliteFixture, rows: &[PointerRow]) -> usize {\n    raw_sqlite_seed(&fixture.conn, rows);\n    rows.len()\n}\n\nfn raw_sqlite_get_many_exact(fixture: RawSqliteFixture, rows: &[PointerRow]) -> usize {\n    let mut statement = fixture\n        .conn\n        .prepare_cached(\"SELECT value FROM json_pointer WHERE path = ?1\")\n        .expect(\"prepare raw sqlite exact get\");\n    let mut found = 0;\n    for row in rows {\n        if statement\n            .query_row(params![row.path.as_str()], |_| Ok(()))\n            .optional()\n            .expect(\"raw sqlite exact get\")\n            .is_some()\n        {\n            found += 1;\n        }\n    }\n    assert_eq!(found, rows.len());\n    found\n}\n\nfn raw_sqlite_get_many_missing(fixture: RawSqliteFixture, row_count: usize) -> usize {\n    let mut statement = fixture\n        .conn\n        .prepare_cached(\"SELECT value FROM json_pointer WHERE path = ?1\")\n        .expect(\"prepare raw sqlite missing get\");\n    let mut found = 0;\n    for index in 0..row_count {\n        let missing_path = 
format!(\"/__missing/{index}\");\n        if statement\n            .query_row(params![missing_path.as_str()], |_| Ok(()))\n            .optional()\n            .expect(\"raw sqlite missing get\")\n            .is_some()\n        {\n            found += 1;\n        }\n    }\n    assert_eq!(found, 0);\n    found\n}\n\nfn raw_sqlite_exists_many(fixture: RawSqliteFixture, rows: &[PointerRow]) -> usize {\n    let mut statement = fixture\n        .conn\n        .prepare_cached(\"SELECT 1 FROM json_pointer WHERE path = ?1\")\n        .expect(\"prepare raw sqlite exists\");\n    let mut found = 0;\n    for row in rows {\n        if statement\n            .query_row(params![row.path.as_str()], |_| Ok(()))\n            .optional()\n            .expect(\"raw sqlite exists\")\n            .is_some()\n        {\n            found += 1;\n        }\n    }\n    assert_eq!(found, rows.len());\n    found\n}\n\nfn raw_sqlite_scan_keys_only(fixture: RawSqliteFixture, expected_rows: usize) -> usize {\n    let mut statement = fixture\n        .conn\n        .prepare_cached(\"SELECT path FROM json_pointer ORDER BY path\")\n        .expect(\"prepare raw sqlite keys scan\");\n    let count = statement\n        .query_map([], |_| Ok(()))\n        .expect(\"raw sqlite keys scan\")\n        .count();\n    assert_eq!(count, expected_rows);\n    count\n}\n\nfn raw_sqlite_scan_full_rows(fixture: RawSqliteFixture, expected_rows: usize) -> usize {\n    let mut statement = fixture\n        .conn\n        .prepare_cached(\"SELECT path, value FROM json_pointer ORDER BY path\")\n        .expect(\"prepare raw sqlite full scan\");\n    let count = statement\n        .query_map([], |_| Ok(()))\n        .expect(\"raw sqlite full scan\")\n        .count();\n    assert_eq!(count, expected_rows);\n    count\n}\n\nfn raw_sqlite_update_first_rows(\n    fixture: RawSqliteFixture,\n    rows: &[PointerRow],\n    change_rows: usize,\n) -> usize {\n    fixture\n        .conn\n        .execute_batch(\"BEGIN 
IMMEDIATE\")\n        .expect(\"begin raw sqlite update\");\n    let mut affected = 0;\n    {\n        let mut statement = fixture\n            .conn\n            .prepare_cached(\"UPDATE json_pointer SET value = ?1 WHERE path = ?2\")\n            .expect(\"prepare raw sqlite update\");\n        for row in &rows[..change_rows] {\n            affected += statement\n                .execute(params![row.updated_value_json.as_str(), row.path.as_str()])\n                .expect(\"raw sqlite update\");\n        }\n    }\n    fixture\n        .conn\n        .execute_batch(\"COMMIT\")\n        .expect(\"commit raw sqlite update\");\n    assert_eq!(affected, change_rows);\n    affected\n}\n\nfn raw_sqlite_delete_first_rows(\n    fixture: RawSqliteFixture,\n    rows: &[PointerRow],\n    change_rows: usize,\n) -> usize {\n    fixture\n        .conn\n        .execute_batch(\"BEGIN IMMEDIATE\")\n        .expect(\"begin raw sqlite delete\");\n    let mut affected = 0;\n    {\n        let mut statement = fixture\n            .conn\n            .prepare_cached(\"DELETE FROM json_pointer WHERE path = ?1\")\n            .expect(\"prepare raw sqlite delete\");\n        for row in &rows[..change_rows] {\n            affected += statement\n                .execute(params![row.path.as_str()])\n                .expect(\"raw sqlite delete\");\n        }\n    }\n    fixture\n        .conn\n        .execute_batch(\"COMMIT\")\n        .expect(\"commit raw sqlite delete\");\n    assert_eq!(affected, change_rows);\n    affected\n}\n\nfn storage_rows(rows: &[PointerRow]) -> Vec<storage_bench::JsonPointerStorageRow> {\n    rows.iter()\n        .map(|row| storage_bench::JsonPointerStorageRow {\n            path: row.path.clone(),\n            value_json: row.value_json.clone(),\n            updated_value_json: row.updated_value_json.clone(),\n        })\n        .collect()\n}\n\nfn physical_backend(profile: BackendProfile) -> Arc<dyn Backend + Send + Sync> {\n    match profile {\n        
BackendProfile::Sqlite => {\n            Arc::new(SqliteBenchBackend::tempfile().expect(\"create sqlite physical backend\"))\n        }\n        BackendProfile::RocksDb => {\n            Arc::new(RocksDbBenchBackend::new().expect(\"create rocksdb physical backend\"))\n        }\n    }\n}\n\nfn prepare_physical_read(\n    runtime: &Runtime,\n    profile: BackendProfile,\n    rows: &[storage_bench::JsonPointerStorageRow],\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::JsonPointerTrackedStateReadFixture,\n) {\n    let backend = physical_backend(profile);\n    let fixture = runtime\n        .block_on(storage_bench::prepare_json_pointer_tracked_state_read(\n            &backend, rows,\n        ))\n        .expect(\"prepare json_pointer physical read\");\n    (backend, fixture)\n}\n\nfn flatten_json(path: &str, value: &JsonValue, rows: &mut Vec<PointerRow>) {\n    rows.push(PointerRow {\n        path: path.to_string(),\n        value_json: value.to_string(),\n        updated_value_json: updated_value_for(path),\n    });\n\n    match value {\n        JsonValue::Array(items) => {\n            for (index, item) in items.iter().enumerate() {\n                let child_path = format!(\"{path}/{}\", index);\n                flatten_json(&child_path, item, rows);\n            }\n        }\n        JsonValue::Object(map) => {\n            for (key, child) in map {\n                let child_path = format!(\"{path}/{}\", escape_pointer_token(key));\n                flatten_json(&child_path, child, rows);\n            }\n        }\n        JsonValue::Null | JsonValue::Bool(_) | JsonValue::Number(_) | JsonValue::String(_) => {}\n    }\n}\n\nfn updated_value_for(path: &str) -> String {\n    serde_json::json!({\n        \"updated\": true,\n        \"path\": path,\n    })\n    .to_string()\n}\n\nfn escape_pointer_token(token: &str) -> String {\n    token.replace('~', \"~0\").replace('/', \"~1\")\n}\n\nfn row_label(rows: usize) -> String {\n    if rows >= 1_000 {\n    
    format!(\"{}k\", rows / 1_000)\n    } else {\n        rows.to_string()\n    }\n}\n\nfn changed_row_count(rows: usize) -> usize {\n    (rows / CHANGE_ROW_DENOMINATOR).max(1)\n}\n\ncriterion_group!(benches, json_pointer_physical_benches);\ncriterion_main!(benches);\n"
  },
  {
    "path": "packages/engine/benches/optimization9_sql2/json_pointer.schema.json",
    "content": "{\n  \"x-lix-key\": \"json_pointer\",\n  \"x-lix-primary-key\": [\n    \"/path\"\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"path\": {\n      \"type\": \"string\",\n      \"description\": \"RFC 6901 JSON Pointer path (empty string for root).\"\n    },\n    \"value\": {\n      \"anyOf\": [\n        {\n          \"type\": \"object\"\n        },\n        {\n          \"type\": \"array\"\n        },\n        {\n          \"type\": \"string\"\n        },\n        {\n          \"type\": \"number\"\n        },\n        {\n          \"type\": \"boolean\"\n        },\n        {\n          \"type\": \"null\"\n        }\n      ]\n    }\n  },\n  \"required\": [\n    \"path\",\n    \"value\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/benches/optimization9_sql2/main.rs",
    "content": "use std::time::Duration;\n\nuse criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion};\nuse lix_engine::{optimization9_sql2_bench, Engine, SessionContext, Value};\nuse serde_json::Value as JsonValue;\nuse tokio::runtime::Runtime;\n\n#[path = \"../storage/rocksdb_backend.rs\"]\nmod rocksdb_backend;\n#[path = \"../storage/sqlite_backend.rs\"]\nmod sqlite_backend;\n\nuse rocksdb_backend::RocksDbBenchBackend;\nuse sqlite_backend::SqliteBenchBackend;\n\nconst JSON_POINTER_SCHEMA_JSON: &str = include_str!(\"json_pointer.schema.json\");\nconst PNPM_LOCK_JSON: &str = include_str!(\"pnpm-lock.fixture.json\");\nconst ROW_COUNT: usize = 1_000;\nconst INSERT_ROWS: usize = 500;\nconst CHUNK_SIZE: usize = 500;\n\n#[derive(Clone)]\nstruct PointerRow {\n    path: String,\n    value_json: String,\n    updated_value_json: String,\n}\n\n#[derive(Clone, Copy)]\nenum LixBackendProfile {\n    Sqlite,\n    RocksDb,\n}\n\nimpl LixBackendProfile {\n    fn name(self) -> &'static str {\n        match self {\n            Self::Sqlite => \"lix_sqlite\",\n            Self::RocksDb => \"lix_rocksdb\",\n        }\n    }\n}\n\nstruct LixFixture {\n    session: SessionContext,\n}\n\nfn optimization9_sql2_benches(c: &mut Criterion) {\n    let runtime = tokio::runtime::Builder::new_current_thread()\n        .enable_all()\n        .build()\n        .expect(\"create tokio runtime for optimization9 sql2 benchmarks\");\n    let rows = fixture_rows();\n\n    for profile in [LixBackendProfile::Sqlite, LixBackendProfile::RocksDb] {\n        bench_smoke_crud(c, &runtime, profile, &rows);\n        bench_planning_only(c, &runtime, profile, &rows);\n        bench_execute_preplanned(c, &runtime, profile, &rows);\n        bench_e2e_literal(c, &runtime, profile, &rows);\n        bench_e2e_parameterized(c, &runtime, profile, &rows);\n    }\n}\n\nfn bench_smoke_crud(\n    c: &mut Criterion,\n    runtime: &Runtime,\n    profile: LixBackendProfile,\n    all_rows: 
&[PointerRow],\n) {\n    let rows = all_rows[..ROW_COUNT].to_vec();\n    let mut group = c.benchmark_group(format!(\"optimization9_sql2/smoke_crud/{}\", profile.name()));\n    configure_group(&mut group);\n\n    group.bench_function(\"insert_all_rows/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_empty(profile)),\n            |fixture| {\n                insert_lix_rows_blocking(runtime, &fixture.session, &rows);\n                black_box(rows.len())\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_all_path_value/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                let result = runtime\n                    .block_on(\n                        fixture\n                            .session\n                            .execute(\"SELECT path, value FROM json_pointer ORDER BY path\", &[]),\n                    )\n                    .expect(\"smoke select all\");\n                assert_eq!(result.len(), ROW_COUNT);\n                black_box(result.len())\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_one_by_pk/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                let sql = select_one_literal_sql(pick_pk_row(&rows));\n                let result = runtime\n                    .block_on(fixture.session.execute(&sql, &[]))\n                    .expect(\"smoke select one\");\n                assert_eq!(result.len(), 1);\n                black_box(result.len())\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"update_all_values/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                let 
affected = runtime\n                    .block_on(fixture.session.execute(\n                        r#\"UPDATE json_pointer SET value = lix_json('{\"updated\":true}')\"#,\n                        &[],\n                    ))\n                    .expect(\"smoke update all\")\n                    .rows_affected();\n                assert_eq!(affected as usize, ROW_COUNT);\n                black_box(affected)\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"update_one_by_pk/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                let sql = update_one_literal_sql(pick_pk_row(&rows));\n                let affected = runtime\n                    .block_on(fixture.session.execute(&sql, &[]))\n                    .expect(\"smoke update one\")\n                    .rows_affected();\n                assert_eq!(affected, 1);\n                black_box(affected)\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"delete_all_rows/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                let affected = runtime\n                    .block_on(fixture.session.execute(\"DELETE FROM json_pointer\", &[]))\n                    .expect(\"smoke delete all\")\n                    .rows_affected();\n                assert_eq!(affected as usize, ROW_COUNT);\n                black_box(affected)\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"delete_one_by_pk/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                let sql = delete_one_literal_sql(pick_pk_row(&rows));\n                let affected = runtime\n                    .block_on(fixture.session.execute(&sql, 
&[]))\n                    .expect(\"smoke delete one\")\n                    .rows_affected();\n                assert_eq!(affected, 1);\n                black_box(affected)\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn bench_planning_only(\n    c: &mut Criterion,\n    runtime: &Runtime,\n    profile: LixBackendProfile,\n    all_rows: &[PointerRow],\n) {\n    let rows = all_rows[..ROW_COUNT].to_vec();\n    let mut group = c.benchmark_group(format!(\n        \"optimization9_sql2/planning_only/{}\",\n        profile.name()\n    ));\n    configure_group(&mut group);\n\n    group.bench_function(\"select_all_path_value/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                black_box(runtime.block_on(optimization9_sql2_bench::plan_read_only(\n                    &fixture.session,\n                    \"SELECT path, value FROM json_pointer ORDER BY path\",\n                )))\n                .expect(\"plan select all\")\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_one_by_pk/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                let sql = select_one_literal_sql(pick_pk_row(&rows));\n                black_box(runtime.block_on(optimization9_sql2_bench::plan_read_only(\n                    &fixture.session,\n                    &sql,\n                )))\n                .expect(\"plan select one\")\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"insert_500_values/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_empty(profile)),\n            |fixture| {\n                let sql = insert_literal_sql(&rows[..INSERT_ROWS]);\n                
black_box(runtime.block_on(optimization9_sql2_bench::plan_write_only(\n                    &fixture.session,\n                    &sql,\n                )))\n                .expect(\"plan insert\")\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"update_all_values/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                black_box(runtime.block_on(optimization9_sql2_bench::plan_write_only(\n                    &fixture.session,\n                    r#\"UPDATE json_pointer SET value = lix_json('{\"updated\":true}')\"#,\n                )))\n                .expect(\"plan update all\")\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"delete_all_rows/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                black_box(runtime.block_on(optimization9_sql2_bench::plan_write_only(\n                    &fixture.session,\n                    \"DELETE FROM json_pointer\",\n                )))\n                .expect(\"plan delete all\")\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn bench_execute_preplanned(\n    c: &mut Criterion,\n    runtime: &Runtime,\n    profile: LixBackendProfile,\n    all_rows: &[PointerRow],\n) {\n    let rows = all_rows[..ROW_COUNT].to_vec();\n    let mut group = c.benchmark_group(format!(\n        \"optimization9_sql2/execute_preplanned/{}\",\n        profile.name()\n    ));\n    configure_group(&mut group);\n\n    group.bench_function(\"select_all_path_value/1k\", |b| {\n        b.iter_batched(\n            || {\n                let fixture = runtime.block_on(prepare_lix_seeded(profile, &rows));\n                runtime\n                    .block_on(optimization9_sql2_bench::prepare_read_plan(\n            
            &fixture.session,\n                        \"SELECT path, value FROM json_pointer ORDER BY path\",\n                    ))\n                    .expect(\"prepare select all plan\")\n            },\n            |plan| {\n                let result = runtime\n                    .block_on(optimization9_sql2_bench::execute_read_plan(plan, &[]))\n                    .expect(\"execute select all plan\");\n                assert_eq!(result.rows.len(), ROW_COUNT);\n                black_box(result.rows.len())\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_one_by_pk/1k\", |b| {\n        b.iter_batched(\n            || {\n                let fixture = runtime.block_on(prepare_lix_seeded(profile, &rows));\n                let sql = select_one_parameterized_sql();\n                runtime\n                    .block_on(optimization9_sql2_bench::prepare_read_plan(\n                        &fixture.session,\n                        sql,\n                    ))\n                    .expect(\"prepare select one plan\")\n            },\n            |plan| {\n                let params = vec![Value::Text(pick_pk_row(&rows).path.clone())];\n                let result = runtime\n                    .block_on(optimization9_sql2_bench::execute_read_plan(plan, &params))\n                    .expect(\"execute select one plan\");\n                assert_eq!(result.rows.len(), 1);\n                black_box(result.rows.len())\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn bench_e2e_literal(\n    c: &mut Criterion,\n    runtime: &Runtime,\n    profile: LixBackendProfile,\n    all_rows: &[PointerRow],\n) {\n    let rows = all_rows[..ROW_COUNT].to_vec();\n    let mut group = c.benchmark_group(format!(\"optimization9_sql2/e2e_literal/{}\", profile.name()));\n    configure_group(&mut group);\n\n    group.bench_function(\"select_one_by_pk/1k\", |b| {\n        
b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                let sql = select_one_literal_sql(pick_pk_row(&rows));\n                let result = runtime\n                    .block_on(fixture.session.execute(&sql, &[]))\n                    .expect(\"literal select one\");\n                assert_eq!(result.len(), 1);\n                black_box(result.len())\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"update_one_by_pk/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                let sql = update_one_literal_sql(pick_pk_row(&rows));\n                let affected = runtime\n                    .block_on(fixture.session.execute(&sql, &[]))\n                    .expect(\"literal update one\")\n                    .rows_affected();\n                assert_eq!(affected, 1);\n                black_box(affected)\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"delete_one_by_pk/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                let sql = delete_one_literal_sql(pick_pk_row(&rows));\n                let affected = runtime\n                    .block_on(fixture.session.execute(&sql, &[]))\n                    .expect(\"literal delete one\")\n                    .rows_affected();\n                assert_eq!(affected, 1);\n                black_box(affected)\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn bench_e2e_parameterized(\n    c: &mut Criterion,\n    runtime: &Runtime,\n    profile: LixBackendProfile,\n    all_rows: &[PointerRow],\n) {\n    let rows = all_rows[..ROW_COUNT].to_vec();\n    let mut group = c.benchmark_group(format!(\n        
\"optimization9_sql2/e2e_parameterized/{}\",\n        profile.name()\n    ));\n    configure_group(&mut group);\n\n    group.bench_function(\"select_one_by_pk/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                let row = pick_pk_row(&rows);\n                let result = runtime\n                    .block_on(fixture.session.execute(\n                        select_one_parameterized_sql(),\n                        &[Value::Text(row.path.clone())],\n                    ))\n                    .expect(\"parameterized select one\");\n                assert_eq!(result.len(), 1);\n                black_box(result.len())\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"update_one_by_pk/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                let row = pick_pk_row(&rows);\n                let affected = runtime\n                    .block_on(fixture.session.execute(\n                        \"UPDATE json_pointer SET value = lix_json($1) WHERE path = $2\",\n                        &[\n                            Value::Text(row.updated_value_json.clone()),\n                            Value::Text(row.path.clone()),\n                        ],\n                    ))\n                    .expect(\"parameterized update one\")\n                    .rows_affected();\n                assert_eq!(affected, 1);\n                black_box(affected)\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"delete_one_by_pk/1k\", |b| {\n        b.iter_batched(\n            || runtime.block_on(prepare_lix_seeded(profile, &rows)),\n            |fixture| {\n                let row = pick_pk_row(&rows);\n                let affected = runtime\n                    .block_on(fixture.session.execute(\n          
              \"DELETE FROM json_pointer WHERE path = $1\",\n                        &[Value::Text(row.path.clone())],\n                    ))\n                    .expect(\"parameterized delete one\")\n                    .rows_affected();\n                assert_eq!(affected, 1);\n                black_box(affected)\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn configure_group(group: &mut criterion::BenchmarkGroup<'_, criterion::measurement::WallTime>) {\n    group.sample_size(11);\n    group.warm_up_time(Duration::from_millis(250));\n    group.measurement_time(Duration::from_secs(1));\n}\n\nasync fn prepare_lix_empty(profile: LixBackendProfile) -> LixFixture {\n    let engine = match profile {\n        LixBackendProfile::Sqlite => {\n            let backend =\n                SqliteBenchBackend::tempfile().expect(\"create sqlite optimization9 backend\");\n            Engine::initialize(Box::new(backend.clone()))\n                .await\n                .expect(\"initialize sqlite optimization9 backend\");\n            Engine::new(Box::new(backend))\n                .await\n                .expect(\"open sqlite optimization9 engine\")\n        }\n        LixBackendProfile::RocksDb => {\n            let backend = RocksDbBenchBackend::new().expect(\"create rocksdb optimization9 backend\");\n            Engine::initialize(Box::new(backend.clone()))\n                .await\n                .expect(\"initialize rocksdb optimization9 backend\");\n            Engine::new(Box::new(backend))\n                .await\n                .expect(\"open rocksdb optimization9 engine\")\n        }\n    };\n    let setup_session = engine\n        .open_workspace_session()\n        .await\n        .expect(\"open optimization9 setup workspace session\");\n    register_json_pointer_schema(&setup_session).await;\n    let session = engine\n        .open_workspace_session()\n        .await\n        .expect(\"open optimization9 
benchmark workspace session\");\n    LixFixture { session }\n}\n\nasync fn prepare_lix_seeded(profile: LixBackendProfile, rows: &[PointerRow]) -> LixFixture {\n    let fixture = prepare_lix_empty(profile).await;\n    insert_lix_rows(&fixture.session, rows).await;\n    fixture\n}\n\nasync fn register_json_pointer_schema(session: &SessionContext) {\n    let sql = format!(\n        \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked)\n         VALUES (lix_json('{}'), false, false)\",\n        sql_string(JSON_POINTER_SCHEMA_JSON)\n    );\n    let affected = session\n        .execute(&sql, &[])\n        .await\n        .expect(\"register json_pointer schema\")\n        .rows_affected();\n    assert_eq!(affected, 1);\n}\n\nasync fn insert_lix_rows(session: &SessionContext, rows: &[PointerRow]) {\n    for chunk in rows.chunks(CHUNK_SIZE) {\n        let sql = insert_literal_sql(chunk);\n        let affected = session\n            .execute(&sql, &[])\n            .await\n            .expect(\"insert json_pointer rows\")\n            .rows_affected();\n        assert_eq!(affected as usize, chunk.len());\n    }\n}\n\nfn insert_lix_rows_blocking(runtime: &Runtime, session: &SessionContext, rows: &[PointerRow]) {\n    runtime.block_on(insert_lix_rows(session, rows));\n}\n\nfn fixture_rows() -> Vec<PointerRow> {\n    let root: JsonValue = serde_json::from_str(PNPM_LOCK_JSON).expect(\"pnpm lock JSON fixture\");\n    let mut rows = Vec::new();\n    flatten_json(\"\", &root, &mut rows);\n    assert!(\n        rows.len() >= ROW_COUNT,\n        \"pnpm lock fixture should have at least {ROW_COUNT} pointer rows, got {}\",\n        rows.len()\n    );\n    rows\n}\n\nfn flatten_json(path: &str, value: &JsonValue, rows: &mut Vec<PointerRow>) {\n    rows.push(PointerRow {\n        path: path.to_string(),\n        value_json: value.to_string(),\n        updated_value_json: updated_value_for(path),\n    });\n\n    match value {\n        JsonValue::Array(items) => {\n  
          for (index, item) in items.iter().enumerate() {\n                let child_path = format!(\"{path}/{}\", index);\n                flatten_json(&child_path, item, rows);\n            }\n        }\n        JsonValue::Object(map) => {\n            for (key, child) in map {\n                let child_path = format!(\"{path}/{}\", escape_pointer_token(key));\n                flatten_json(&child_path, child, rows);\n            }\n        }\n        JsonValue::Null | JsonValue::Bool(_) | JsonValue::Number(_) | JsonValue::String(_) => {}\n    }\n}\n\nfn insert_literal_sql(rows: &[PointerRow]) -> String {\n    let mut sql = String::from(\"INSERT INTO json_pointer (path, value) VALUES \");\n    for (index, row) in rows.iter().enumerate() {\n        if index > 0 {\n            sql.push(',');\n        }\n        sql.push_str(&format!(\n            \"('{}', lix_json('{}'))\",\n            sql_string(row.path.as_str()),\n            sql_string(row.value_json.as_str())\n        ));\n    }\n    sql\n}\n\nfn select_one_literal_sql(row: &PointerRow) -> String {\n    format!(\n        \"SELECT path, value FROM json_pointer WHERE path = '{}'\",\n        sql_string(row.path.as_str())\n    )\n}\n\nfn select_one_parameterized_sql() -> &'static str {\n    \"SELECT path, value FROM json_pointer WHERE path = $1\"\n}\n\nfn update_one_literal_sql(row: &PointerRow) -> String {\n    format!(\n        \"UPDATE json_pointer SET value = lix_json('{}') WHERE path = '{}'\",\n        sql_string(row.updated_value_json.as_str()),\n        sql_string(row.path.as_str())\n    )\n}\n\nfn delete_one_literal_sql(row: &PointerRow) -> String {\n    format!(\n        \"DELETE FROM json_pointer WHERE path = '{}'\",\n        sql_string(row.path.as_str())\n    )\n}\n\nfn pick_pk_row(rows: &[PointerRow]) -> &PointerRow {\n    &rows[rows.len() / 2]\n}\n\nfn updated_value_for(path: &str) -> String {\n    serde_json::json!({\n        \"updated\": true,\n        \"path\": path,\n    })\n    
.to_string()\n}\n\nfn escape_pointer_token(token: &str) -> String {\n    token.replace('~', \"~0\").replace('/', \"~1\")\n}\n\nfn sql_string(value: &str) -> String {\n    value.replace('\\'', \"''\")\n}\n\ncriterion_group!(benches, optimization9_sql2_benches);\ncriterion_main!(benches);\n"
  },
  {
    "path": "packages/engine/benches/optimization9_sql2/pnpm-lock.fixture.json",
    "content": "{\"lockfileVersion\":\"9.0\",\"settings\":{\"autoInstallPeers\":true,\"excludeLinksFromLockfile\":false},\"importers\":{\".\":{\"devDependencies\":{\"@changesets/cli\":{\"specifier\":\"^2.29.7\",\"version\":\"2.29.7(@types/node@24.10.2)\"},\"@vitest/coverage-v8\":{\"specifier\":\"^3.1.1\",\"version\":\"3.2.4(@vitest/browser@3.2.4)(vitest@3.2.4)\"},\"nx\":{\"specifier\":\"^21.0.0\",\"version\":\"21.4.1\"},\"nx-cloud\":{\"specifier\":\"^19.1.0\",\"version\":\"19.1.0\"},\"vitest\":{\"specifier\":\"^3.1.1\",\"version\":\"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}}},\"packages/js-kysely\":{\"dependencies\":{\"json-schema-to-ts\":{\"specifier\":\"^3.1.1\",\"version\":\"3.1.1\"},\"kysely\":{\"specifier\":\"^0.28.7\",\"version\":\"0.28.7\"}},\"devDependencies\":{\"@lix-js/sdk\":{\"specifier\":\"workspace:*\",\"version\":\"link:../js-sdk\"},\"typescript\":{\"specifier\":\"^5.5.4\",\"version\":\"5.9.3\"},\"vitest\":{\"specifier\":\"^4.0.18\",\"version\":\"4.0.18(@opentelemetry/api@1.9.0)(@types/node@24.10.2)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}}},\"packages/js-sdk\":{\"devDependencies\":{\"better-sqlite3\":{\"specifier\":\"^12.9.0\",\"version\":\"12.9.0\"},\"typescript\":{\"specifier\":\"^5.5.4\",\"version\":\"5.9.3\"},\"vitest\":{\"specifier\":\"^4.0.18\",\"version\":\"4.0.18(@opentelemetry/api@1.9.0)(@types/node@24.10.2)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}}},\"packages/react-utils\":{\"devDependencies\":{\"@lix-js/kysely\":{\
"specifier\":\"workspace:*\",\"version\":\"link:../js-kysely\"},\"@lix-js/sdk\":{\"specifier\":\"workspace:*\",\"version\":\"link:../js-sdk\"},\"@testing-library/react\":{\"specifier\":\"^16.3.0\",\"version\":\"16.3.0(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.7))(@types/react@19.2.7)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\"},\"@types/react\":{\"specifier\":\"^19.1.8\",\"version\":\"19.2.7\"},\"@vitest/coverage-v8\":{\"specifier\":\"^3.2.4\",\"version\":\"3.2.4(@vitest/browser@3.2.4)(vitest@3.2.4)\"},\"https-proxy-agent\":{\"specifier\":\"7.0.2\",\"version\":\"7.0.2\"},\"jsdom\":{\"specifier\":\"^26.1.0\",\"version\":\"26.1.0\"},\"oxlint\":{\"specifier\":\"^1.14.0\",\"version\":\"1.26.0\"},\"prettier\":{\"specifier\":\"^3.3.3\",\"version\":\"3.6.2\"},\"react\":{\"specifier\":\"19.2.0\",\"version\":\"19.2.0\"},\"react-dom\":{\"specifier\":\"19.2.0\",\"version\":\"19.2.0(react@19.2.0)\"},\"typescript\":{\"specifier\":\"^5.5.4\",\"version\":\"5.8.3\"},\"vitest\":{\"specifier\":\"^3.2.4\",\"version\":\"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@26.1.0)(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}}},\"packages/website\":{\"dependencies\":{\"@cloudflare/vite-plugin\":{\"specifier\":\"^1.36.0\",\"version\":\"1.36.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(workerd@1.20260504.1)(wrangler@4.88.0)\"},\"@lix-js/plugin-json\":{\"specifier\":\"1.0.1\",\"version\":\"1.0.1(tslib@2.8.1)\"},\"@lix-js/sdk\":{\"specifier\":\"workspace:*\",\"version\":\"link:../js-sdk\"},\"@opral/markdown-wc\":{\"specifier\":\"0.9.0\",\"version\":\"0.9.0\"},\"@tailwindcss/vite\":{\"specifier\":\"^4.2.4\",\"version\":\"4.2.4(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(ya
ml@2.8.1))\"},\"@tanstack/react-router\":{\"specifier\":\"^1.169.2\",\"version\":\"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\"},\"@tanstack/react-start\":{\"specifier\":\"^1.167.64\",\"version\":\"1.167.64(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\"},\"@tanstack/router-plugin\":{\"specifier\":\"^1.167.34\",\"version\":\"1.167.34(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\"},\"lucide-react\":{\"specifier\":\"^0.544.0\",\"version\":\"0.544.0(react@19.2.0)\"},\"posthog-js\":{\"specifier\":\"^1.321.2\",\"version\":\"1.321.2\"},\"react\":{\"specifier\":\"^19.2.0\",\"version\":\"19.2.0\"},\"react-dom\":{\"specifier\":\"^19.2.0\",\"version\":\"19.2.0(react@19.2.0)\"},\"shiki\":{\"specifier\":\"^3.2.2\",\"version\":\"3.15.0\"},\"tailwindcss\":{\"specifier\":\"^4.2.4\",\"version\":\"4.2.4\"}},\"devDependencies\":{\"@testing-library/dom\":{\"specifier\":\"^10.4.0\",\"version\":\"10.4.1\"},\"@testing-library/react\":{\"specifier\":\"^16.2.0\",\"version\":\"16.3.0(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.7))(@types/react@19.2.7)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\"},\"@types/node\":{\"specifier\":\"^22.10.2\",\"version\":\"22.15.33\"},\"@types/react\":{\"specifier\":\"^19.2.0\",\"version\":\"19.2.7\"},\"@types/react-dom\":{\"specifier\":\"^19.2.0\",\"version\":\"19.2.3(@types/react@19.2.7)\"},\"@vitejs/plugin-react\":{\"specifier\":\"^6.0.1\",\"version\":\"6.0.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\"},\"@vitest/browser\":{\"specifier\":\"^4.1.5\",\"version\":\"4.1.5(msw@2.10.2(@types/node@22.15.3
3)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@4.1.5)\"},\"@vitest/coverage-v8\":{\"specifier\":\"^4.1.5\",\"version\":\"4.1.5(@vitest/browser@4.1.5)(vitest@4.1.5)\"},\"jsdom\":{\"specifier\":\"^27.0.0\",\"version\":\"27.3.0(postcss@8.5.14)\"},\"prettier\":{\"specifier\":\"^3.6.0\",\"version\":\"3.6.2\"},\"typescript\":{\"specifier\":\"^5.7.2\",\"version\":\"5.8.3\"},\"vite\":{\"specifier\":\"^8.0.10\",\"version\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"},\"vite-plugin-static-copy\":{\"specifier\":\"^4.1.0\",\"version\":\"4.1.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\"},\"vitest\":{\"specifier\":\"^4.1.5\",\"version\":\"4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\"},\"web-vitals\":{\"specifier\":\"^5.1.0\",\"version\":\"5.1.0\"},\"wrangler\":{\"specifier\":\"^4.88.0\",\"version\":\"4.88.0\"}}}},\"packages\":{\"@acemir/cssom@0.9.28\":{\"resolution\":{\"integrity\":\"sha512-LuS6IVEivI75vKN8S04qRD+YySP0RmU/cV8UNukhQZvprxF+76Z43TNo/a08eCodaGhT1Us8etqS1ZRY9/Or0A==\"}},\"@ampproject/remapping@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-30iZtAPgz+LTIYoeivqYo853f02jBYSd5uGnGpkFV0M3xOt9aN73erkgYAmZU43x4VfqcnLxW9Kpg3R5LC4YYw==\"},\"engines\":{\"node\":\">=6.0.0\"}},\"@antfu/install-pkg@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-MGQsmw10ZyI+EJo45CdSER4zEb+p31LpDAFp2Z3gkSd1yqVZGi0Ebx++YTEMonJy4oChEMLsxZ64j8FH6sSqtQ==\"}},\"@antfu/utils@9.3.0\":{\"resolution\":{\"integrity\":\"sha512-9hFT4RauhcUzqOE4f1+frMKLZrgNog5b06I7VmZQV1BkvwvqrbC8EBZf3L1eEL2AK
b6rNKjER0sEvJiSP1FXEA==\"}},\"@asamuzakjp/css-color@3.1.4\":{\"resolution\":{\"integrity\":\"sha512-SeuBV4rnjpFNjI8HSgKUwteuFdkHwkboq31HWzznuqgySQir+jSTczoWVVL4jvOjKjuH80fMDG0Fvg1Sb+OJsA==\"}},\"@asamuzakjp/css-color@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-9xiBAtLn4aNsa4mDnpovJvBn72tNEIACyvlqaNJ+ADemR+yeMJWnBudOi2qGDviJa7SwcDOU/TRh5dnET7qk0w==\"}},\"@asamuzakjp/dom-selector@6.7.6\":{\"resolution\":{\"integrity\":\"sha512-hBaJER6A9MpdG3WgdlOolHmbOYvSk46y7IQN/1+iqiCuUu6iWdQrs9DGKF8ocqsEqWujWf/V7b7vaDgiUmIvUg==\"}},\"@asamuzakjp/nwsapi@2.3.9\":{\"resolution\":{\"integrity\":\"sha512-n8GuYSrI9bF7FFZ/SjhwevlHc8xaVlb/7HmHelnc/PZXBD2ZR49NnN9sMMuDdEGPeeRQ5d0hqlSlEpgCX3Wl0Q==\"}},\"@babel/code-frame@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-cjQ7ZlQ0Mv3b47hABuTevyTuYN4i+loJKGeV9flcCgIK37cCXRh+L1bd3iBHlynerhQ7BhCkn2BPbQUL+rGqFg==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/compat-data@7.28.0\":{\"resolution\":{\"integrity\":\"sha512-60X7qkglvrap8mn1lh2ebxXdZYtUcpd7gsmy9kLaBJ4i/WdY8PqTSdxyA8qraikqKQK5C1KRBKXqznrVapyNaw==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/core@7.28.5\":{\"resolution\":{\"integrity\":\"sha512-e7jT4DxYvIDLk1ZHmU/m/mB19rex9sv0c2ftBtjSBv+kVM/902eh0fINUzD7UwLLNR+jU585GxUJ8/EBfAM5fw==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/generator@7.28.5\":{\"resolution\":{\"integrity\":\"sha512-3EwLFhZ38J4VyIP6WNtt2kUdW9dokXA9Cr4IVIFHuCpZ3H8/YFOl5JjZHisrn1fATPBmKKqXzDFvh9fUwHz6CQ==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-compilation-targets@7.27.2\":{\"resolution\":{\"integrity\":\"sha512-2+1thGUUWWjLTYTHZWK1n8Yga0ijBz1XAhUXcKy81rd5g6yh7hGqMp45v7cadSbEHc9G3OTv45SyneRN3ps4DQ==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-globals@7.28.0\":{\"resolution\":{\"integrity\":\"sha512-+W6cISkXFa1jXsDEdYA8HeevQT/FULhxzR99pxphltZcVaugps53THCeiWA8SguxxpSp3gKPiuYfSWopkLQ4hw==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-module-imports@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-0gSFWUPNXNopqtIPQvlD5WgXYI5GY2kP2cCv
oT8kczjbfcfuIljTbcWrulD1CIPIX2gt1wghbDy08yE1p+/r3w==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-module-transforms@7.28.3\":{\"resolution\":{\"integrity\":\"sha512-gytXUbs8k2sXS9PnQptz5o0QnpLL51SwASIORY6XaBKF88nsOT0Zw9szLqlSGQDP/4TljBAD5y98p2U1fqkdsw==\"},\"engines\":{\"node\":\">=6.9.0\"},\"peerDependencies\":{\"@babel/core\":\"^7.0.0\"}},\"@babel/helper-plugin-utils@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-1gn1Up5YXka3YYAHGKpbideQ5Yjf1tDa9qYcgysz+cNCXukyLl6DjPXhD3VRwSb8c0J9tA4b2+rHEZtc6R0tlw==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-string-parser@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-validator-identifier@7.28.5\":{\"resolution\":{\"integrity\":\"sha512-qSs4ifwzKJSV39ucNjsvc6WVHs6b7S03sOh2OcHF9UHfVPqWWALUsNUVzhSBiItjRZoLHx7nIarVjqKVusUZ1Q==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helper-validator-option@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-YvjJow9FxbhFFKDSuFnVCe2WxXk1zWc22fFePVNEaWJEu8IrZVlda6N0uHwzZrUM1il7NC9Mlp4MaJYbYd9JSg==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/helpers@7.28.4\":{\"resolution\":{\"integrity\":\"sha512-HFN59MmQXGHVyYadKLVumYsA9dBFun/ldYxipEjzA4196jpLZd8UjEEBLkbEkvfYreDqJhZxYAWFPtrfhNpj4w==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/parser@7.28.5\":{\"resolution\":{\"integrity\":\"sha512-KKBU1VGYR7ORr3At5HAtUQ+TV3SzRCXmA/8OdDZiLDBIZxVyzXuztPjfLd3BV1PRAQGCMWWSHYhL0F8d5uHBDQ==\"},\"engines\":{\"node\":\">=6.0.0\"},\"hasBin\":true},\"@babel/parser@7.29.3\":{\"resolution\":{\"integrity\":\"sha512-b3ctpQwp+PROvU/cttc4OYl4MzfJUWy6FZg+PMXfzmt/+39iHVF0sDfqay8TQM3JA2EUOyKcFZt75jWriQijsA==\"},\"engines\":{\"node\":\">=6.0.0\"},\"hasBin\":true},\"@babel/plugin-syntax-jsx@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-y8YTNIeKoyhGd9O0Jiyzyyqk8gdjnumGTQPsz0xOZOQ2RmkVJeZ1vmmfIvFEKqucBG6axJGBZDE/7iI5suUI/w==\"},\"engines\":{\"node\":\">=6.9.
0\"},\"peerDependencies\":{\"@babel/core\":\"^7.0.0-0\"}},\"@babel/plugin-syntax-typescript@7.27.1\":{\"resolution\":{\"integrity\":\"sha512-xfYCBMxveHrRMnAWl1ZlPXOZjzkN82THFvLhQhFXFt81Z5HnN+EtUkZhv/zcKpmT3fzmWZB0ywiBrbC3vogbwQ==\"},\"engines\":{\"node\":\">=6.9.0\"},\"peerDependencies\":{\"@babel/core\":\"^7.0.0-0\"}},\"@babel/runtime@7.28.4\":{\"resolution\":{\"integrity\":\"sha512-Q/N6JNWvIvPnLDvjlE1OUBLPQHH6l3CltCEsHIujp45zQUSSh8K+gHnaEX45yAT1nyngnINhvWtzN+Nb9D8RAQ==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/template@7.27.2\":{\"resolution\":{\"integrity\":\"sha512-LPDZ85aEJyYSd18/DkjNh4/y1ntkE5KwUHWTiqgRxruuZL2F1yuHligVHLvcHY2vMHXttKFpJn6LwfI7cw7ODw==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/traverse@7.28.5\":{\"resolution\":{\"integrity\":\"sha512-TCCj4t55U90khlYkVV/0TfkJkAkUg3jZFA3Neb7unZT8CPok7iiRfaX0F+WnqWqt7OxhOn0uBKXCw4lbL8W0aQ==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/types@7.28.5\":{\"resolution\":{\"integrity\":\"sha512-qQ5m48eI/MFLQ5PxQj4PFaprjyCTLI37ElWMmNs0K8Lk3dVeOdNpB3ks8jc7yM5CDmVC73eMVk/trk3fgmrUpA==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@babel/types@7.29.0\":{\"resolution\":{\"integrity\":\"sha512-LwdZHpScM4Qz8Xw2iKSzS+cfglZzJGvofQICy7W7v4caru4EaAmyUuO6BGrbyQ2mYV11W0U8j5mBhd14dd3B0A==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"@bcoe/v8-coverage@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-6zABk/ECA/QYSCQ1NGiVwwbQerUCZ+TQbp64Q3AgmfNvurHH0j8TtXa1qbShXA6qqkpAj4V5W8pP6mLe1mcMqA==\"},\"engines\":{\"node\":\">=18\"}},\"@blazediff/core@1.9.1\":{\"resolution\":{\"integrity\":\"sha512-ehg3jIkYKulZh+8om/O25vkvSsXXwC+skXmyA87FFx6A/45eqOkZsBltMw/TVteb0mloiGT8oGRTcjRAz66zaA==\"}},\"@braintree/sanitize-url@7.1.1\":{\"resolution\":{\"integrity\":\"sha512-i1L7noDNxtFyL5DmZafWy1wRVhGehQmzZaz1HiN5e7iylJMSZR7ekOV7NsIqa5qBldlLrsKv4HbgFUVlQrz8Mw==\"}},\"@bufbuild/protobuf@2.12.0\":{\"resolution\":{\"integrity\":\"sha512-B/XlCaFIP8LOwzo+bz5uFzATYokcwCKQcghqnlfwSmM5eX/qTkvDBnDPs+gXtX/RyjxJ4DRikECcPJbyALA8FA==\"}},\"@bundled-es-modules/cookie@
2.0.1\":{\"resolution\":{\"integrity\":\"sha512-8o+5fRPLNbjbdGRRmJj3h6Hh1AQJf2dk3qQ/5ZFb+PXkRNiSoMGGUKlsgLfrxneb72axVJyIYji64E2+nNfYyw==\"}},\"@bundled-es-modules/statuses@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-yn7BklA5acgcBr+7w064fGV+SGIFySjCKpqjcWgBAIfrAkY+4GQTJJHQMeT3V/sgz23VTEVV8TtOmkvJAhFVfg==\"}},\"@bundled-es-modules/tough-cookie@0.1.6\":{\"resolution\":{\"integrity\":\"sha512-dvMHbL464C0zI+Yqxbz6kZ5TOEp7GLW+pry/RWndAR8MJQAXZ2rPmIs8tziTZjeIyhSNZgZbCePtfSbdWqStJw==\"}},\"@changesets/apply-release-plan@7.0.13\":{\"resolution\":{\"integrity\":\"sha512-BIW7bofD2yAWoE8H4V40FikC+1nNFEKBisMECccS16W1rt6qqhNTBDmIw5HaqmMgtLNz9e7oiALiEUuKrQ4oHg==\"}},\"@changesets/assemble-release-plan@6.0.9\":{\"resolution\":{\"integrity\":\"sha512-tPgeeqCHIwNo8sypKlS3gOPmsS3wP0zHt67JDuL20P4QcXiw/O4Hl7oXiuLnP9yg+rXLQ2sScdV1Kkzde61iSQ==\"}},\"@changesets/changelog-git@0.2.1\":{\"resolution\":{\"integrity\":\"sha512-x/xEleCFLH28c3bQeQIyeZf8lFXyDFVn1SgcBiR2Tw/r4IAWlk1fzxCEZ6NxQAjF2Nwtczoen3OA2qR+UawQ8Q==\"}},\"@changesets/cli@2.29.7\":{\"resolution\":{\"integrity\":\"sha512-R7RqWoaksyyKXbKXBTbT4REdy22yH81mcFK6sWtqSanxUCbUi9Uf+6aqxZtDQouIqPdem2W56CdxXgsxdq7FLQ==\"},\"hasBin\":true},\"@changesets/config@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-bd+3Ap2TKXxljCggI0mKPfzCQKeV/TU4yO2h2C6vAihIo8tzseAn2e7klSuiyYYXvgu53zMN1OeYMIQkaQoWnA==\"}},\"@changesets/errors@0.2.0\":{\"resolution\":{\"integrity\":\"sha512-6BLOQUscTpZeGljvyQXlWOItQyU71kCdGz7Pi8H8zdw6BI0g3m43iL4xKUVPWtG+qrrL9DTjpdn8eYuCQSRpow==\"}},\"@changesets/get-dependents-graph@2.1.3\":{\"resolution\":{\"integrity\":\"sha512-gphr+v0mv2I3Oxt19VdWRRUxq3sseyUpX9DaHpTUmLj92Y10AGy+XOtV+kbM6L/fDcpx7/ISDFK6T8A/P3lOdQ==\"}},\"@changesets/get-release-plan@4.0.13\":{\"resolution\":{\"integrity\":\"sha512-DWG1pus72FcNeXkM12tx+xtExyH/c9I1z+2aXlObH3i9YA7+WZEVaiHzHl03thpvAgWTRaH64MpfHxozfF7Dvg==\"}},\"@changesets/get-version-range-type@0.4.0\":{\"resolution\":{\"integrity\":\"sha512-hwawtob9DryoGTpixy1D3ZXbGgJu1Rhr+ySH2PvTLHvkZuQ7sRT4oQwM
h0hbqZH1weAooedEjRsbrWcGLCeyVQ==\"}},\"@changesets/git@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-BXANzRFkX+XcC1q/d27NKvlJ1yf7PSAgi8JG6dt8EfbHFHi4neau7mufcSca5zRhwOL8j9s6EqsxmT+s+/E6Sw==\"}},\"@changesets/logger@0.1.1\":{\"resolution\":{\"integrity\":\"sha512-OQtR36ZlnuTxKqoW4Sv6x5YIhOmClRd5pWsjZsddYxpWs517R0HkyiefQPIytCVh4ZcC5x9XaG8KTdd5iRQUfg==\"}},\"@changesets/parse@0.4.1\":{\"resolution\":{\"integrity\":\"sha512-iwksMs5Bf/wUItfcg+OXrEpravm5rEd9Bf4oyIPL4kVTmJQ7PNDSd6MDYkpSJR1pn7tz/k8Zf2DhTCqX08Ou+Q==\"}},\"@changesets/pre@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-HaL/gEyFVvkf9KFg6484wR9s0qjAXlZ8qWPDkTyKF6+zqjBe/I2mygg3MbpZ++hdi0ToqNUF8cjj7fBy0dg8Ug==\"}},\"@changesets/read@0.6.5\":{\"resolution\":{\"integrity\":\"sha512-UPzNGhsSjHD3Veb0xO/MwvasGe8eMyNrR/sT9gR8Q3DhOQZirgKhhXv/8hVsI0QpPjR004Z9iFxoJU6in3uGMg==\"}},\"@changesets/should-skip-package@0.1.2\":{\"resolution\":{\"integrity\":\"sha512-qAK/WrqWLNCP22UDdBTMPH5f41elVDlsNyat180A33dWxuUDyNpg6fPi/FyTZwRriVjg0L8gnjJn2F9XAoF0qw==\"}},\"@changesets/types@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-LDQvVDv5Kb50ny2s25Fhm3d9QSZimsoUGBsUioj6MC3qbMUCuC8GPIvk/M6IvXx3lYhAs0lwWUQLb+VIEUCECw==\"}},\"@changesets/types@6.1.0\":{\"resolution\":{\"integrity\":\"sha512-rKQcJ+o1nKNgeoYRHKOS07tAMNd3YSN0uHaJOZYjBAgxfV7TUE7JE+z4BzZdQwb5hKaYbayKN5KrYV7ODb2rAA==\"}},\"@changesets/write@0.4.0\":{\"resolution\":{\"integrity\":\"sha512-CdTLvIOPiCNuH71pyDu3rA+Q0n65cmAbXnwWH84rKGiFumFzkmHNT8KHTMEchcxN+Kl8I54xGUhJ7l3E7X396Q==\"}},\"@chevrotain/cst-dts-gen@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-BvIKpRLeS/8UbfxXxgC33xOumsacaeCKAjAeLyOn7Pcp95HiRbrpl14S+9vaZLolnbssPIUuiUd8IvgkRyt6NQ==\"}},\"@chevrotain/gast@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-+qNfcoNk70PyS/uxmj3li5NiECO+2YKZZQMbmjTqRI3Qchu8Hig/Q9vgkHpI3alNjr7M+a2St5pw5w5F6NL5/Q==\"}},\"@chevrotain/regexp-to-ast@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-1fMHaBZxLFvWI067AVbGJav1eRY7N8DDvYCTwGBiE/ytKBgP8azTdgyrKyWZ9Mfh09eHWb5PgTSO8wi7U824RA==\"}},\"@ch
evrotain/types@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-gsiM3G8b58kZC2HaWR50gu6Y1440cHiJ+i3JUvcp/35JchYejb2+5MVeJK0iKThYpAa/P2PYFV4hoi44HD+aHQ==\"}},\"@chevrotain/utils@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-YslZMgtJUyuMbZ+aKvfF3x1f5liK4mWNxghFRv7jqRR9C3R3fAOGTTKvxXDa2Y1s9zSbcpuO0cAxDYsc9SrXoQ==\"}},\"@cloudflare/kv-asset-handler@0.5.0\":{\"resolution\":{\"integrity\":\"sha512-jxQYkj8dSIzc0cD6cMMNdOc1UVjqSqu8BZdor5s8cGjW2I8BjODt/kWPVdY+u9zj3ms75Q5qaZgnxUad83+eAg==\"},\"engines\":{\"node\":\">=22.0.0\"}},\"@cloudflare/unenv-preset@2.16.1\":{\"resolution\":{\"integrity\":\"sha512-ECxObrMfyTl5bhQf/lZCXwo5G6xX9IAUo+nDMKK4SZ8m4Jvvxp52vilxyySSWh2YTZz8+HQ07qGH/2rEom1vDw==\"},\"peerDependencies\":{\"unenv\":\"2.0.0-rc.24\",\"workerd\":\">1.20260305.0 <2.0.0-0\"},\"peerDependenciesMeta\":{\"workerd\":{\"optional\":true}}},\"@cloudflare/vite-plugin@1.36.0\":{\"resolution\":{\"integrity\":\"sha512-Rkfa3wAbJ1lqCquWX453x4YlngO+OjNmCQvjb4D5JyMW7KprX6fEJE1NQ06giJDonEz0306EASELF93pRADibA==\"},\"peerDependencies\":{\"vite\":\"^6.1.0 || ^7.0.0 || 
^8.0.0\",\"wrangler\":\"^4.88.0\"}},\"@cloudflare/workerd-darwin-64@1.20260504.1\":{\"resolution\":{\"integrity\":\"sha512-IOMjYoftNRXabFt+QzY2Bo2mR2TNl8xsGvE0HnQ+K0S2c61VOUGUkr9gpJjnwrJ65yA9Qed4xfg0RRqXHO+nfA==\"},\"engines\":{\"node\":\">=16\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@cloudflare/workerd-darwin-arm64@1.20260504.1\":{\"resolution\":{\"integrity\":\"sha512-7iMXxIU0N5KklZpQm2kuwTm0XtrpHXNqhejJyGquky8gSTnm31zBdutjMekH8VRr6ckbvZIl6lvqXzXdfOEojg==\"},\"engines\":{\"node\":\">=16\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@cloudflare/workerd-linux-64@1.20260504.1\":{\"resolution\":{\"integrity\":\"sha512-YLB0EH5FQV++oWlalFgPF3p2Bp3dn/D6RWNMw0ukEC8gKnNX6o61A+dlFUl8hRD35ja1zKRxGFUojs4U2+MoJA==\"},\"engines\":{\"node\":\">=16\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@cloudflare/workerd-linux-arm64@1.20260504.1\":{\"resolution\":{\"integrity\":\"sha512-FAh/82jDXDArfn9xDih6f/IJfF2SHXBb4nFeQAyHyvXrn18zM6Q3yl2Vj0U7LybbNbmu7TNGghwaM2NoSQS+0A==\"},\"engines\":{\"node\":\">=16\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@cloudflare/workerd-windows-64@1.20260504.1\":{\"resolution\":{\"integrity\":\"sha512-QUg/B3dfrK/KHHHhiJzdkLkTg5mG7lA3t8iplbBoUa3XKCLOHOOXhbU4WSYlLqg8YnsQ6XLZ1HVA99fmZhJh7A==\"},\"engines\":{\"node\":\">=16\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@cspotcode/source-map-support@0.8.1\":{\"resolution\":{\"integrity\":\"sha512-IchNf6dN4tHoMFIn/7OE8LWZ19Y6q/67Bmf6vnGREv8RSbBVb9LPJxEcnwrcwX6ixSvaiGoomAUvu4YSxXrVgw==\"},\"engines\":{\"node\":\">=12\"}},\"@csstools/color-helpers@5.1.0\":{\"resolution\":{\"integrity\":\"sha512-S11EXWJyy0Mz5SYvRmY8nJYTFFd1LCNV+7cXyAgQtOOuzb4EsgfqDufL+9esx72/eLhsRdGZwaldu/h+E4t4BA==\"},\"engines\":{\"node\":\">=18\"}},\"@csstools/css-calc@2.1.4\":{\"resolution\":{\"integrity\":\"sha512-3N8oaj+0juUw/1H3YwmDDJXCgTB1gKU6Hc/bB502u9zR0q2vd786XJH9QfrKIEgFlZmhZiq6epXl4rHqhzsIgQ==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@csstools/css-parser-algorithms\":\"^3.0.5\",\"@csstools/css-tokenizer\":\"^3.0.4\"}},\
"@csstools/css-color-parser@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-nbtKwh3a6xNVIp/VRuXV64yTKnb1IjTAEEh3irzS+HkKjAOYLTGNb9pmVNntZ8iVBHcWDA2Dof0QtPgFI1BaTA==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@csstools/css-parser-algorithms\":\"^3.0.5\",\"@csstools/css-tokenizer\":\"^3.0.4\"}},\"@csstools/css-parser-algorithms@3.0.5\":{\"resolution\":{\"integrity\":\"sha512-DaDeUkXZKjdGhgYaHNJTV9pV7Y9B3b644jCLs9Upc3VeNGg6LWARAT6O+Q+/COo+2gg/bM5rhpMAtf70WqfBdQ==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@csstools/css-tokenizer\":\"^3.0.4\"}},\"@csstools/css-syntax-patches-for-csstree@1.0.14\":{\"resolution\":{\"integrity\":\"sha512-zSlIxa20WvMojjpCSy8WrNpcZ61RqfTfX3XTaOeVlGJrt/8HF3YbzgFZa01yTbT4GWQLwfTcC3EB8i3XnB647Q==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"postcss\":\"^8.4\"}},\"@csstools/css-tokenizer@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-Vd/9EVDiu6PPJt9yAh6roZP6El1xHrdvIVGjyBsHR0RYwNHgL7FJPyIIW4fANJNG6FtyZfvlRPpFI4ZM/lubvw==\"},\"engines\":{\"node\":\">=18\"}},\"@emnapi/core@1.10.0\":{\"resolution\":{\"integrity\":\"sha512-yq6OkJ4p82CAfPl0u9mQebQHKPJkY7WrIuk205cTYnYe+k2Z8YBh11FrbRG/H6ihirqcacOgl2BIO8oyMQLeXw==\"}},\"@emnapi/core@1.4.5\":{\"resolution\":{\"integrity\":\"sha512-XsLw1dEOpkSX/WucdqUhPWP7hDxSvZiY+fsUC14h+FtQ2Ifni4znbBt8punRX+Uj2JG/uDb8nEHVKvrVlvdZ5Q==\"}},\"@emnapi/runtime@1.10.0\":{\"resolution\":{\"integrity\":\"sha512-ewvYlk86xUoGI0zQRNq/mC+16R1QeDlKQy21Ki3oSYXNgLb45GV1P6A0M+/s6nyCuNDqe5VpaY84BzXGwVbwFA==\"}},\"@emnapi/runtime@1.4.5\":{\"resolution\":{\"integrity\":\"sha512-++LApOtY0pEEz1zrd9vy1/zXVaVJJ/EbAF3u0fXIzPJEDtnITsBGbbK0EkM72amhl/R5b+5xx0Y/QhcVOpuulg==\"}},\"@emnapi/wasi-threads@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-PJR+bOmMOPH8AtcTGAyYNiuJ3/Fcoj2XN/gBEWzDIKh254XO+mM9XoXHk5GNEhodxeMznbg7BlRojVbKN+gC6g==\"}},\"@emnapi/wasi-threads@1.2.1\":{\"resolution\":{\"integrity\":\"sha512-uTII7OYF+/Mes/MrcIOYp5yOtSMLBWSIoLPpcgwipoiKbli6k322tcoFsxoIIxPDqW01SQGAgko4EzZi2BNv2w==\"}},\
"@esbuild/aix-ppc64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-Hhmwd6CInZ3dwpuGTF8fJG6yoWmsToE+vYgD4nytZVxcu1ulHpUQRAB1UJ8+N1Am3Mz4+xOByoQoSZf4D+CpkA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ppc64\"],\"os\":[\"aix\"]},\"@esbuild/aix-ppc64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-9fJMTNFTWZMh5qwrBItuziu834eOCUcEqymSH7pY+zoMVEZg3gcPuBNxH1EvfVYe9h0x/Ptw8KBzv7qxb7l8dg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ppc64\"],\"os\":[\"aix\"]},\"@esbuild/android-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-6AAmLG7zwD1Z159jCKPvAxZd4y/VTO0VkprYy+3N2FtJ8+BQWFXU+OxARIwA46c5tdD9SsKGZ/1ocqBS/gAKHg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"android\"]},\"@esbuild/android-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-YdghPYUmj/FX2SYKJ0OZxf+iaKgMsKHVPF1MAq/P8WirnSpCStzKJFjOjzsW0QQ7oIAiccHdcqjbHmJxRb/dmg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"android\"]},\"@esbuild/android-arm@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-VJ+sKvNA/GE7Ccacc9Cha7bpS8nyzVv0jdVgwNDaR4gDMC/2TTRc33Ip8qrNYUcpkOHUT5OZ0bUcNNVZQ9RLlg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm\"],\"os\":[\"android\"]},\"@esbuild/android-arm@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-i5D1hPY7GIQmXlXhs2w8AWHhenb00+GxjxRncS2ZM7YNVGNfaMxgzSGuO8o8SJzRc/oZwU2bcScvVERk03QhzA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm\"],\"os\":[\"android\"]},\"@esbuild/android-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-5jbb+2hhDHx5phYR2By8GTWEzn6I9UqR11Kwf22iKbNpYrsmRB18aX/9ivc5cabcUiAT/wM+YIZ6SG9QO6a8kg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"android\"]},\"@esbuild/android-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-IN/0BNTkHtk8lkOM8JWAYFg4ORxBkZQf9zXiEOfERX/CzxW3Vg1ewAhU7QSWQpVIzTW+b8Xy+lGzdYXV6UZObQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"android\"]},\"@esbuild/darwin-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-N3zl+lxHCifgIlcMUP5016ESkeQjLj/959R
xxNYIthIg+CQHInujFuXeWbWMgnTo4cp5XVHqFPmpyu9J65C1Yg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@esbuild/darwin-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-Re491k7ByTVRy0t3EKWajdLIr0gz2kKKfzafkth4Q8A5n1xTHrkqZgLLjFEHVD+AXdUGgQMq+Godfq45mGpCKg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@esbuild/darwin-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-HQ9ka4Kx21qHXwtlTUVbKJOAnmG1ipXhdWTmNXiPzPfWKpXqASVcWdnf2bnL73wgjNrFXAa3yYvBSd9pzfEIpA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@esbuild/darwin-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-vHk/hA7/1AckjGzRqi6wbo+jaShzRowYip6rt6q7VYEDX4LEy1pZfDpdxCBnGtl+A5zq8iXDcyuxwtv3hNtHFg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@esbuild/freebsd-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-gA0Bx759+7Jve03K1S0vkOu5Lg/85dou3EseOGUes8flVOGxbhDDh/iZaoek11Y8mtyKPGF3vP8XhnkDEAmzeg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"freebsd\"]},\"@esbuild/freebsd-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-ipTYM2fjt3kQAYOvo6vcxJx3nBYAzPjgTCk7QEgZG8AUO3ydUhvelmhrbOheMnGOlaSFUoHXB6un+A7q4ygY9w==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"freebsd\"]},\"@esbuild/freebsd-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-TGbO26Yw2xsHzxtbVFGEXBFH0FRAP7gtcPE7P5yP7wGy7cXK2oO7RyOhL5NLiqTlBh47XhmIUXuGciXEqYFfBQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"@esbuild/freebsd-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-dDk0X87T7mI6U3K9VjWtHOXqwAMJBNN2r7bejDsc+j03SEjtD9HrOl8gVFByeM0aJksoUuUVU9TBaZa2rgj0oA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"@esbuild/linux-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-8bwX7a8FghIgrupcxb4aUmYDLp8pX06rGh5HqDT7bB+8Rdells6mHvrFHHW2JAOPZUbnjUpKTLg6ECyzvas2AQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"a
rm64\"],\"os\":[\"linux\"]},\"@esbuild/linux-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-sZOuFz/xWnZ4KH3YfFrKCf1WyPZHakVzTiqji3WDc0BCl2kBwiJLCXpzLzUBLgmp4veFZdvN5ChW4Eq/8Fc2Fg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@esbuild/linux-arm@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-lPDGyC1JPDou8kGcywY0YILzWlhhnRjdof3UlcoqYmS9El818LLfJJc3PXXgZHrHCAKs/Z2SeZtDJr5MrkxtOw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@esbuild/linux-arm@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-s6nPv2QkSupJwLYyfS+gwdirm0ukyTFNl3KTgZEAiJDd+iHZcbTPPcWCcRYH+WlNbwChgH2QkE9NSlNrMT8Gfw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@esbuild/linux-ia32@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-0y9KrdVnbMM2/vG8KfU0byhUN+EFCny9+8g202gYqSSVMonbsCfLjUO+rCci7pM0WBEtz+oK/PIwHkzxkyharA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ia32\"],\"os\":[\"linux\"]},\"@esbuild/linux-ia32@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-yGlQYjdxtLdh0a3jHjuwOrxQjOZYD/C9PfdbgJJF3TIZWnm/tMd/RcNiLngiu4iwcBAOezdnSLAwQDPqTmtTYg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ia32\"],\"os\":[\"linux\"]},\"@esbuild/linux-loong64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-h///Lr5a9rib/v1GGqXVGzjL4TMvVTv+s1DPoxQdz7l/AYv6LDSxdIwzxkrPW438oUXiDtwM10o9PmwS/6Z0Ng==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"loong64\"],\"os\":[\"linux\"]},\"@esbuild/linux-loong64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-WO60Sn8ly3gtzhyjATDgieJNet/KqsDlX5nRC5Y3oTFcS1l0KWba+SEa9Ja1GfDqSF1z6hif/SkpQJbL63cgOA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"loong64\"],\"os\":[\"linux\"]},\"@esbuild/linux-mips64el@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-iyRrM1Pzy9GFMDLsXn1iHUm18nhKnNMWscjmp4+hpafcZjrr2WbT//d20xaGljXDBYHqRcl8HnxbX6uaA/eGVw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"mips64el\"],\"os\":[\"linux\"]},\"@esbuild/linux-mips64el@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-APs
ymYA6sGcZ4pD6k+UxbDjOFSvPWyZhjaiPyl/f79xKxwTnrn5QUnXR5prvetuaSMsb4jgeHewIDCIWljrSxw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"mips64el\"],\"os\":[\"linux\"]},\"@esbuild/linux-ppc64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-9meM/lRXxMi5PSUqEXRCtVjEZBGwB7P/D4yT8UG/mwIdze2aV4Vo6U5gD3+RsoHXKkHCfSxZKzmDssVlRj1QQA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ppc64\"],\"os\":[\"linux\"]},\"@esbuild/linux-ppc64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-eizBnTeBefojtDb9nSh4vvVQ3V9Qf9Df01PfawPcRzJH4gFSgrObw+LveUyDoKU3kxi5+9RJTCWlj4FjYXVPEA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ppc64\"],\"os\":[\"linux\"]},\"@esbuild/linux-riscv64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-Zr7KR4hgKUpWAwb1f3o5ygT04MzqVrGEGXGLnj15YQDJErYu/BGg+wmFlIDOdJp0PmB0lLvxFIOXZgFRrdjR0w==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"@esbuild/linux-riscv64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-3Emwh0r5wmfm3ssTWRQSyVhbOHvqegUDRd0WhmXKX2mkHJe1SFCMJhagUleMq+Uci34wLSipf8Lagt4LlpRFWQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"@esbuild/linux-s390x@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-MsKncOcgTNvdtiISc/jZs/Zf8d0cl/t3gYWX8J9ubBnVOwlk65UIEEvgBORTiljloIWnBzLs4qhzPkJcitIzIg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"s390x\"],\"os\":[\"linux\"]},\"@esbuild/linux-s390x@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-pBHUx9LzXWBc7MFIEEL0yD/ZVtNgLytvx60gES28GcWMqil8ElCYR4kvbV2BDqsHOvVDRrOxGySBM9Fcv744hw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"s390x\"],\"os\":[\"linux\"]},\"@esbuild/linux-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-uqZMTLr/zR/ed4jIGnwSLkaHmPjOjJvnm6TVVitAa08SLS9Z0VM8wIRx7gWbJB5/J54YuIMInDquWyYvQLZkgw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@esbuild/linux-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-Czi8yzXUWIQYAtL/2y6vogER8pvcsOsk5cpwL4Gk5nJqH5UZiVByIY8Eorm5R13gq+DQKYg0+JyQoytLQas4dA==\"},\"engines\":{\
"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@esbuild/netbsd-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-xXwcTq4GhRM7J9A8Gv5boanHhRa/Q9KLVmcyXHCTaM4wKfIpWkdXiMog/KsnxzJ0A1+nD+zoecuzqPmCRyBGjg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"netbsd\"]},\"@esbuild/netbsd-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-sDpk0RgmTCR/5HguIZa9n9u+HVKf40fbEUt+iTzSnCaGvY9kFP0YKBWZtJaraonFnqef5SlJ8/TiPAxzyS+UoA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"netbsd\"]},\"@esbuild/netbsd-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-Ld5pTlzPy3YwGec4OuHh1aCVCRvOXdH8DgRjfDy/oumVovmuSzWfnSJg+VtakB9Cm0gxNO9BzWkj6mtO1FMXkQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"netbsd\"]},\"@esbuild/netbsd-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-P14lFKJl/DdaE00LItAukUdZO5iqNH7+PjoBm+fLQjtxfcfFE20Xf5CrLsmZdq5LFFZzb5JMZ9grUwvtVYzjiA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"netbsd\"]},\"@esbuild/openbsd-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-fF96T6KsBo/pkQI950FARU9apGNTSlZGsv1jZBAlcLL1MLjLNIWPBkj5NlSz8aAzYKg+eNqknrUJ24QBybeR5A==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"openbsd\"]},\"@esbuild/openbsd-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-AIcMP77AvirGbRl/UZFTq5hjXK+2wC7qFRGoHSDrZ5v5b8DK/GYpXW3CPRL53NkvDqb9D+alBiC/dV0Fb7eJcw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"openbsd\"]},\"@esbuild/openbsd-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-MZyXUkZHjQxUvzK7rN8DJ3SRmrVrke8ZyRusHlP+kuwqTcfWLyqMOE3sScPPyeIXN/mDJIfGXvcMqCgYKekoQw==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"openbsd\"]},\"@esbuild/openbsd-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-DnW2sRrBzA+YnE70LKqnM3P+z8vehfJWHXECbwBmH/CU51z6FiqTQTHFenPlHmo3a8UgpLyH3PT+87OViOh1AQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"openbsd\"]},\"@esbuild/openharmony-arm64@0.25.12\":{\"re
solution\":{\"integrity\":\"sha512-rm0YWsqUSRrjncSXGA7Zv78Nbnw4XL6/dzr20cyrQf7ZmRcsovpcRBdhD43Nuk3y7XIoW2OxMVvwuRvk9XdASg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"openharmony\"]},\"@esbuild/openharmony-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-NinAEgr/etERPTsZJ7aEZQvvg/A6IsZG/LgZy+81wON2huV7SrK3e63dU0XhyZP4RKGyTm7aOgmQk0bGp0fy2g==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"openharmony\"]},\"@esbuild/sunos-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-3wGSCDyuTHQUzt0nV7bocDy72r2lI33QL3gkDNGkod22EsYl04sMf0qLb8luNKTOmgF/eDEDP5BFNwoBKH441w==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"sunos\"]},\"@esbuild/sunos-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-PanZ+nEz+eWoBJ8/f8HKxTTD172SKwdXebZ0ndd953gt1HRBbhMsaNqjTyYLGLPdoWHy4zLU7bDVJztF5f3BHA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"sunos\"]},\"@esbuild/win32-arm64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-rMmLrur64A7+DKlnSuwqUdRKyd3UE7oPJZmnljqEptesKM8wx9J8gx5u0+9Pq0fQQW8vqeKebwNXdfOyP+8Bsg==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@esbuild/win32-arm64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-B2t59lWWYrbRDw/tjiWOuzSsFh1Y/E95ofKz7rIVYSQkUYBjfSgf6oeYPNWHToFRr2zx52JKApIcAS/D5TUBnA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@esbuild/win32-ia32@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-HkqnmmBoCbCwxUKKNPBixiWDGCpQGVsrQfJoVGYLPT41XWF8lHuE5N6WhVia2n4o5QK5M4tYr21827fNhi4byQ==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ia32\"],\"os\":[\"win32\"]},\"@esbuild/win32-ia32@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-QLKSFeXNS8+tHW7tZpMtjlNb7HKau0QDpwm49u0vUp9y1WOF+PEzkU84y9GqYaAVW8aH8f3GcBck26jh54cX4Q==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"ia32\"],\"os\":[\"win32\"]},\"@esbuild/win32-x64@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-alJC0uCZpTFrSL0CCDjcgleBXPnCrEAhTBILpeAp7M/OFgoqtAetfBzX0xM00MUsVVPpV
jlPuMbREqnZCXaTnA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@esbuild/win32-x64@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-4uJGhsxuptu3OcpVAzli+/gWusVGwZZHTlS63hh++ehExkVT8SgiEf7/uC/PclrPPkLhZqGgCTjd0VWLo6xMqA==\"},\"engines\":{\"node\":\">=18\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@iconify/types@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-+wluvCrRhXrhyOmRDJ3q8mux9JkKy5SJ/v8ol2tu4FVjyYvtEzkc/3pK15ET6RKg4b4w4BmTk1+gsCUhf21Ykg==\"}},\"@iconify/utils@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-EfJS0rLfVuRuJRn4psJHtK2A9TqVnkxPpHY6lYHiB9+8eSuudsxbwMiavocG45ujOo6FJ+CIRlRnlOGinzkaGQ==\"}},\"@img/colour@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-Td76q7j57o/tLVdgS746cYARfSyxk8iEfRxewL9h4OMzYhbW4TAcppl0mT4eyqXddh6L/jwoM75mo7ixa/pCeQ==\"},\"engines\":{\"node\":\">=18\"}},\"@img/sharp-darwin-arm64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-imtQ3WMJXbMY4fxb/Ndp6HBTNVtWCUI0WdobyheGf5+ad6xX8VIDO8u2xE4qc/fr08CKG/7dDseFtn6M6g/r3w==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@img/sharp-darwin-x64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-YNEFAF/4KQ/PeW0N+r+aVVsoIY0/qxxikF2SWdp+NRkmMB7y9LBZAVqQ4yhGCm/H3H270OSykqmQMKLBhBJDEw==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || 
>=21.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@img/sharp-libvips-darwin-arm64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-zqjjo7RatFfFoP0MkQ51jfuFZBnVE2pRiaydKJ1G/rHZvnsrHAOcQALIi9sA5co5xenQdTugCvtb1cuf78Vf4g==\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@img/sharp-libvips-darwin-x64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-1IOd5xfVhlGwX+zXv2N93k0yMONvUlANylbJw1eTah8K/Jtpi15KC+WSiaX/nBmbm2HxRM1gZ0nSdjSsrZbGKg==\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@img/sharp-libvips-linux-arm64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-excjX8DfsIcJ10x1Kzr4RcWe1edC9PquDRRPx3YVCvQv+U5p7Yin2s32ftzikXojb1PIFc/9Mt28/y+iRklkrw==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linux-arm@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-bFI7xcKFELdiNCVov8e44Ia4u2byA+l3XtsAj+Q8tfCwO6BQ8iDojYdvoPMqsKDkuoOo+X6HZA0s0q11ANMQ8A==\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linux-ppc64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-FMuvGijLDYG6lW+b/UvyilUWu5Ayu+3r2d1S8notiGCIyYU/76eig1UfMmkZ7vwgOrzKzlQbFSuQfgm7GYUPpA==\"},\"cpu\":[\"ppc64\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linux-riscv64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-oVDbcR4zUC0ce82teubSm+x6ETixtKZBh/qbREIOcI3cULzDyb18Sr/Wcyx7NRQeQzOiHTNbZFF1UwPS2scyGA==\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linux-s390x@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-qmp9VrzgPgMoGZyPvrQHqk02uyjA0/QrTO26Tqk6l4ZV0MPWIW6LTkqOIov+J1yEu7MbFQaDpwdwJKhbJvuRxQ==\"},\"cpu\":[\"s390x\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linux-x64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-tJxiiLsmHc9Ax1bz3oaOYBURTXGIRDODBqhveVHonrHJ9/+k89qbLl0bcJns+e4t4rvaNBxaEZsFtSfAdquPrw==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linuxmusl-arm64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-FVQHuwx1IIuNow9QAbYUzJ+En8KcVm9Lk5+uGUQJHaZmMECZmOlix9HnH7n1TRkXMS0pGxIJokIVB9SuqZGGXw==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@img/sharp-libvips-linuxmusl-x
64@1.2.4\":{\"resolution\":{\"integrity\":\"sha512-+LpyBk7L44ZIXwz/VYfglaX/okxezESc6UxDSoyo2Ks6Jxc4Y7sGjpgU9s4PMgqgjj1gZCylTieNamqA1MF7Dg==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@img/sharp-linux-arm64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-bKQzaJRY/bkPOXyKx5EVup7qkaojECG6NLYswgktOZjaXecSAeCWiZwwiFf3/Y+O1HrauiE3FVsGxFg8c24rZg==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@img/sharp-linux-arm@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-9dLqsvwtg1uuXBGZKsxem9595+ujv0sJ6Vi8wcTANSFpwV/GONat5eCkzQo/1O6zRIkh0m/8+5BjrRr7jDUSZw==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@img/sharp-linux-ppc64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-7zznwNaqW6YtsfrGGDA6BRkISKAAE1Jo0QdpNYXNMHu2+0dTrPflTLNkpc8l7MUP5M16ZJcUvysVWWrMefZquA==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"ppc64\"],\"os\":[\"linux\"]},\"@img/sharp-linux-riscv64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-51gJuLPTKa7piYPaVs8GmByo7/U7/7TZOq+cnXJIHZKavIRHAP77e3N2HEl3dgiqdD/w0yUfiJnII77PuDDFdw==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"@img/sharp-linux-s390x@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-nQtCk0PdKfho3eC5MrbQoigJ2gd1CgddUMkabUj+rBevs8tZ2cULOx46E7oyX+04WGfABgIwmMC0VqieTiR4jg==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"s390x\"],\"os\":[\"linux\"]},\"@img/sharp-linux-x64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-MEzd8HPKxVxVenwAa+JRPwEC7QFjoPWuS5NZnBt6B3pu7EG2Ge0id1oLHZpPJdn3OQK+BQDiw9zStiHBTJQQQQ==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@img/sharp-linuxmusl-arm64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-fprJR6GtRsMt6Kyfq44IsChVZeGN97gTD331weR1ex1c1rypDEABN6Tm2xa1wE6lYb5DdEnk03NZPqA7Id21yg==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || 
>=21.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@img/sharp-linuxmusl-x64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-Jg8wNT1MUzIvhBFxViqrEhWDGzqymo3sV7z7ZsaWbZNDLXRJZoRGrjulp60YYtV4wfY8VIKcWidjojlLcWrd8Q==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@img/sharp-wasm32@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-OdWTEiVkY2PHwqkbBI8frFxQQFekHaSSkUIJkwzclWZe64O1X4UlUjqqqLaPbUpMOQk6FBu/HtlGXNblIs0huw==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"wasm32\"]},\"@img/sharp-win32-arm64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-WQ3AgWCWYSb2yt+IG8mnC6Jdk9Whs7O0gxphblsLvdhSpSTtmu69ZG1Gkb6NuvxsNACwiPV6cNSZNzt0KPsw7g==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@img/sharp-win32-ia32@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-FV9m/7NmeCmSHDD5j4+4pNI8Cp3aW+JvLoXcTUo0IqyjSfAZJ8dIUmijx1qaJsIiU+Hosw6xM5KijAWRJCSgNg==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"},\"cpu\":[\"ia32\"],\"os\":[\"win32\"]},\"@img/sharp-win32-x64@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-+29YMsqY2/9eFEiW93eqWnuLcWcufowXewwSNIT6UwZdUUCrM3oFjMWH/Z6/TMmb4hlFenmfAVbpWeup2jryCw==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || 
>=21.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@inquirer/ansi@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-S8qNSZiYzFd0wAcyG5AXCvUHC5Sr7xpZ9wZ2py9XR88jUz8wooStVx5M6dRzczbBWjic9NP7+rY0Xi7qqK/aMQ==\"},\"engines\":{\"node\":\">=18\"}},\"@inquirer/confirm@5.1.21\":{\"resolution\":{\"integrity\":\"sha512-KR8edRkIsUayMXV+o3Gv+q4jlhENF9nMYUZs9PA2HzrXeHI8M5uDag70U7RJn9yyiMZSbtF5/UexBtAVtZGSbQ==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@types/node\":\">=18\"},\"peerDependenciesMeta\":{\"@types/node\":{\"optional\":true}}},\"@inquirer/core@10.3.2\":{\"resolution\":{\"integrity\":\"sha512-43RTuEbfP8MbKzedNqBrlhhNKVwoK//vUFNW3Q3vZ88BLcrs4kYpGg+B2mm5p2K/HfygoCxuKwJJiv8PbGmE0A==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@types/node\":\">=18\"},\"peerDependenciesMeta\":{\"@types/node\":{\"optional\":true}}},\"@inquirer/external-editor@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-Oau4yL24d2B5IL4ma4UpbQigkVhzPDXLoqy1ggK4gnHg/stmkffJE4oOXHXF3uz0UEpywG68KcyXsyYpA1Re/Q==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@types/node\":\">=18\"},\"peerDependenciesMeta\":{\"@types/node\":{\"optional\":true}}},\"@inquirer/figures@1.0.15\":{\"resolution\":{\"integrity\":\"sha512-t2IEY+unGHOzAaVM5Xx6DEWKeXlDDcNPeDyUpsRc6CUhBfU3VQOEl+Vssh7VNp1dR8MdUJBWhuObjXCsVpjN5g==\"},\"engines\":{\"node\":\">=18\"}},\"@inquirer/type@3.0.10\":{\"resolution\":{\"integrity\":\"sha512-BvziSRxfz5Ov8ch0z/n3oijRSEcEsHnhggm4xFZe93DHcUCTlutlq9Ox4SVENAfcRD22UQq7T/atg9Wr3k09eA==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@types/node\":\">=18\"},\"peerDependenciesMeta\":{\"@types/node\":{\"optional\":true}}},\"@isaacs/cliui@8.0.2\":{\"resolution\":{\"integrity\":\"sha512-O8jcjabXaleOG9DQ0+ARXWZBTfnP4WNAqzuiJK7ll44AmxGKv/J2M4TPjxjY3znBCfvBXFzucm1twdyFybFqEA==\"},\"engines\":{\"node\":\">=12\"}},\"@istanbuljs/schema@0.1.3\":{\"resolution\":{\"integrity\":\"sha512-ZXRY4jNvVgSVQ8DL3LTcakaAtXwTVUxE81hslsyD2AtoXW/wVob10HkOJ1X/pAlcI7D+2YoZKg5do8G/w6RYgA=
=\"},\"engines\":{\"node\":\">=8\"}},\"@jest/diff-sequences@30.0.1\":{\"resolution\":{\"integrity\":\"sha512-n5H8QLDJ47QqbCNn5SuFjCRDrOLEZ0h8vAHCK5RL9Ls7Xa8AQLa/YxAc9UjFqoEDM48muwtBGjtMY5cr0PLDCw==\"},\"engines\":{\"node\":\"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0\"}},\"@jest/get-type@30.1.0\":{\"resolution\":{\"integrity\":\"sha512-eMbZE2hUnx1WV0pmURZY9XoXPkUYjpc55mb0CrhtdWLtzMQPFvu/rZkTLZFTsdaVQa+Tr4eWAteqcUzoawq/uA==\"},\"engines\":{\"node\":\"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0\"}},\"@jest/schemas@30.0.5\":{\"resolution\":{\"integrity\":\"sha512-DmdYgtezMkh3cpU8/1uyXakv3tJRcmcXxBOcO0tbaozPwpmh4YMsnWrQm9ZmZMfa5ocbxzbFk6O4bDPEc/iAnA==\"},\"engines\":{\"node\":\"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0\"}},\"@jridgewell/gen-mapping@0.3.13\":{\"resolution\":{\"integrity\":\"sha512-2kkt/7niJ6MgEPxF0bYdQ6etZaA+fQvDcLKckhy1yIQOzaoKjBBjSj63/aLVjYE3qhRt5dvM+uUyfCg6UKCBbA==\"}},\"@jridgewell/remapping@2.3.5\":{\"resolution\":{\"integrity\":\"sha512-LI9u/+laYG4Ds1TDKSJW2YPrIlcVYOwi2fUC6xB43lueCjgxV4lffOCZCtYFiH6TNOX+tQKXx97T4IKHbhyHEQ==\"}},\"@jridgewell/resolve-uri@3.1.2\":{\"resolution\":{\"integrity\":\"sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==\"},\"engines\":{\"node\":\">=6.0.0\"}},\"@jridgewell/source-map@0.3.11\":{\"resolution\":{\"integrity\":\"sha512-ZMp1V8ZFcPG5dIWnQLr3NSI1MiCU7UETdS/A0G8V/XWHvJv3ZsFqutJn1Y5RPmAPX6F3BiE397OqveU/9NCuIA==\"}},\"@jridgewell/sourcemap-codec@1.5.5\":{\"resolution\":{\"integrity\":\"sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==\"}},\"@jridgewell/trace-mapping@0.3.30\":{\"resolution\":{\"integrity\":\"sha512-GQ7Nw5G2lTu/BtHTKfXhKHok2WGetd4XYcVKGx00SjAk8GMwgJM3zr6zORiPGuOE+/vkc90KtTosSSvaCjKb2Q==\"}},\"@jridgewell/trace-mapping@0.3.31\":{\"resolution\":{\"integrity\":\"sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw==\"}},\"@jridgewell/trace-mapping@0.3.9\":{\"resolution\":{\"int
egrity\":\"sha512-3Belt6tdc8bPgAtbcmdtNJlirVoTmEb5e2gC94PnkwEW9jI6CAHUeoG85tjWP5WquqfavoMtMwiG4P926ZKKuQ==\"}},\"@jsonjoy.com/buffers@17.63.0\":{\"resolution\":{\"integrity\":\"sha512-IZB5WQRVNPEbuqouOQxZHl59AL6/ff+gmM20+xAx4SRX6DjZnQAxs03pQ2J6g5ssN+pzmShrBuGeksjlcZ3HCw==\"},\"engines\":{\"node\":\">=10.0\"},\"peerDependencies\":{\"tslib\":\"2\"}},\"@jsonjoy.com/codegen@17.63.0\":{\"resolution\":{\"integrity\":\"sha512-vQ18JiRQ8YfZQwzwCQs88rR5eGuy6AFfu+anz9RTvHQs9L4AE8dGA/mLzu6teh6CiSQTo2TNOQbqRh4Vy+7LEQ==\"},\"engines\":{\"node\":\">=10.0\"},\"peerDependencies\":{\"tslib\":\"2\"}},\"@jsonjoy.com/json-pointer@17.63.0\":{\"resolution\":{\"integrity\":\"sha512-wAW7rQsGW2zWtE+77cXU8lXsoXYCKa9eHptK3a2CCoNTm5YpPA3dev6LuEyaTDYKdF4DTjtwREv2PpjJidHE5w==\"},\"engines\":{\"node\":\">=10.0\"},\"peerDependencies\":{\"tslib\":\"2\"}},\"@jsonjoy.com/util@17.63.0\":{\"resolution\":{\"integrity\":\"sha512-AhpTIOFvuixKwem4d+ey4In78KJLCrDIUyp0IQ8xgpbs0IjNPTTfT3nXXbYMgJGxjegmqa9otl9nqbCvxOaiXw==\"},\"engines\":{\"node\":\">=10.0\"},\"peerDependencies\":{\"tslib\":\"2\"}},\"@lix-js/plugin-json@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-pCqzG08D8jLtVy8RnITPZIy92XNlRAJWLrlRrzh3ttwS/PWM/iXiOPPuzvb23MoFhYxerzJ8uDGXhEXfVagY2w==\"}},\"@lix-js/sdk@0.5.1\":{\"resolution\":{\"integrity\":\"sha512-FiDGp6BznOLdzNOCUC5OvTJ6KfdKGk8wd5edD1dhU46quS4vi4EkHjS/N+12PSpCfl/p3wBWSQD6vzvZcIHTFg==\"},\"engines\":{\"node\":\">=22\"}},\"@lix-js/server-protocol-schema@0.1.1\":{\"resolution\":{\"integrity\":\"sha512-jBeALB6prAbtr5q4vTuxnRZZv1M2rKe8iNqRQhFJ4Tv7150unEa0vKyz0hs8Gl3fUGsWaNJBh3J8++fpbrpRBQ==\"}},\"@manypkg/find-root@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-mki5uBvhHzO8kYYix/WRy2WX8S3B5wdVSc9D6KcU5lQNglP2yt58/VfLuAK49glRXChosY8ap2oJ1qgma3GUVA==\"}},\"@manypkg/get-packages@1.1.3\":{\"resolution\":{\"integrity\":\"sha512-fo+QhuU3qE/2TQMQmbVMqaQ6EWbMhi4ABWP+O4AM1NqPBuy0OrApV5LO6BrrgnhtAHS2NH6RrVk9OL181tTi8A==\"}},\"@marcbachmann/cel-js@2.5.2\":{\"resolution\":{\"integrity\":\"sha512-QnvFBFQ+2
T8gX4H4pmcgIfs3gXwfhRjv7hYoRRDLwKeXxgPEZ+zvExe1pGtPs8xPWHu4ng0CmllNpVHWi4kB9A==\"},\"engines\":{\"node\":\">=20.19.0\"}},\"@mermaid-js/parser@0.6.3\":{\"resolution\":{\"integrity\":\"sha512-lnjOhe7zyHjc+If7yT4zoedx2vo4sHaTmtkl1+or8BRTnCtDmcTpAjpzDSfCZrshM5bCoz0GyidzadJAH1xobA==\"}},\"@mswjs/interceptors@0.39.8\":{\"resolution\":{\"integrity\":\"sha512-2+BzZbjRO7Ct61k8fMNHEtoKjeWI9pIlHFTqBwZ5icHpqszIgEZbjb1MW5Z0+bITTCTl3gk4PDBxs9tA/csXvA==\"},\"engines\":{\"node\":\">=18\"}},\"@napi-rs/wasm-runtime@0.2.4\":{\"resolution\":{\"integrity\":\"sha512-9zESzOO5aDByvhIAsOy9TbpZ0Ur2AJbUI7UT73kcUTS2mxAMHOBaa1st/jAymNoCtvrit99kkzT1FZuXVcgfIQ==\"}},\"@napi-rs/wasm-runtime@1.1.4\":{\"resolution\":{\"integrity\":\"sha512-3NQNNgA1YSlJb/kMH1ildASP9HW7/7kYnRI2szWJaofaS1hWmbGI4H+d3+22aGzXXN9IJ+n+GiFVcGipJP18ow==\"},\"peerDependencies\":{\"@emnapi/core\":\"^1.7.1\",\"@emnapi/runtime\":\"^1.7.1\"}},\"@nodelib/fs.scandir@2.1.5\":{\"resolution\":{\"integrity\":\"sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==\"},\"engines\":{\"node\":\">= 8\"}},\"@nodelib/fs.stat@2.0.5\":{\"resolution\":{\"integrity\":\"sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A==\"},\"engines\":{\"node\":\">= 8\"}},\"@nodelib/fs.walk@1.2.8\":{\"resolution\":{\"integrity\":\"sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg==\"},\"engines\":{\"node\":\">= 
8\"}},\"@nrwl/nx-cloud@19.1.0\":{\"resolution\":{\"integrity\":\"sha512-krngXVPfX0Zf6+zJDtcI59/Pt3JfcMPMZ9C/+/x6rvz4WGgyv1s0MI4crEUM0Lx5ZpS4QI0WNDCFVQSfGEBXUg==\"}},\"@nx/nx-darwin-arm64@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-9BbkQnxGEDNX2ESbW4Zdrq1i09y6HOOgTuGbMJuy4e8F8rU/motMUqOpwmFgLHkLgPNZiOC2VXht3or/kQcpOg==\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@nx/nx-darwin-x64@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-dnkmap1kc6aLV8CW1ihjsieZyaDDjlIB5QA2reTCLNSdTV446K6Fh0naLdaoG4ZkF27zJA/qBOuAaLzRHFJp3g==\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@nx/nx-freebsd-x64@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-RpxDBGOPeDqJjpbV7F3lO/w1aIKfLyG/BM0OpJfTgFVpUIl50kMj5M1m4W9A8kvYkfOD9pDbUaWszom7d57yjg==\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"@nx/nx-linux-arm-gnueabihf@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-2OyBoag2738XWmWK3ZLBuhaYb7XmzT3f8HzomggLDJoDhwDekjgRoNbTxogAAj6dlXSeuPjO81BSlIfXQcth3w==\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@nx/nx-linux-arm64-gnu@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-2pg7/zjBDioUWJ3OY8Ixqy64eokKT5sh4iq1bk22bxOCf676aGrAu6khIxy4LBnPIdO0ZOK7KCJ7xOFP4phZqA==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@nx/nx-linux-arm64-musl@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-whNxh12au/inQtkZju1ZfXSqDS0hCh/anzVCXfLYWFstdwv61XiRmFCSHeN0gRDthlncXFdgKoT1bGG5aMYLtA==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@nx/nx-linux-x64-gnu@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-UHw57rzLio0AUDXV3l+xcxT3LjuXil7SHj+H8aYmXTpXktctQU2eYGOs5ATqJ1avVQRSejJugHF0i8oLErC28A==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@nx/nx-linux-x64-musl@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-qqE2Gy/DwOLIyePjM7GLHp/nDLZJnxHmqTeCiTQCp/BdbmqjRkSUz5oL+Uua0SNXaTu5hjAfvjXAhSTgBwVO6g==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@nx/nx-win32-arm64-msvc@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-NtEzMiRrSm2DdL4ntoDdjeze8DBrfZvLtx3Dq6+XmOhwnigR6umfWfZ6jbluZpuSQcxzQNVifqirdaQKYaYwDQ==\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\
"]},\"@nx/nx-win32-x64-msvc@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-gpG+Y4G/mxGrfkUls6IZEuuBxRaKLMSEoVFLMb9JyyaLEDusn+HJ1m90XsOedjNLBHGMFigsd/KCCsXfFn4njg==\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@oozcitak/dom@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-GjpKhkSYC3Mj4+lfwEyI1dqnsKTgwGy48ytZEhm4A/xnH/8z9M3ZVXKr/YGQi3uCLs1AEBS+x5T2JPiueEDW8w==\"},\"engines\":{\"node\":\">=20.0\"}},\"@oozcitak/infra@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-2g+E7hoE2dgCz/APPOEK5s3rMhJvNxSMBrP+U+j1OWsIbtSpWxxlUjq1lU8RIsFJNYv7NMlnVsCuHcUzJW+8vA==\"},\"engines\":{\"node\":\">=20.0\"}},\"@oozcitak/url@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-ZKfET8Ak1wsLAiLWNfFkZc/BraDccuTJKR6svTYc7sVjbR+Iu0vtXdiDMY4o6jaFl5TW2TlS7jbLl4VovtAJWQ==\"},\"engines\":{\"node\":\">=20.0\"}},\"@oozcitak/util@10.0.0\":{\"resolution\":{\"integrity\":\"sha512-hAX0pT/73190NLqBPPWSdBVGtbY6VOhWYK3qqHqtXQ1gK7kS2yz4+ivsN07hpJ6I3aeMtKP6J6npsEKOAzuTLA==\"},\"engines\":{\"node\":\">=20.0\"}},\"@open-draft/deferred-promise@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-CecwLWx3rhxVQF6V4bAgPS5t+So2sTbPgAzafKkVizyi7tlwpcFpdFqq+wqF2OwNBmqFuu6tOyouTuxgpMfzmA==\"}},\"@open-draft/logger@0.3.0\":{\"resolution\":{\"integrity\":\"sha512-X2g45fzhxH238HKO4xbSr7+wBS8Fvw6ixhTDuvLd5mqh6bJJCFAPwU9mPDxbcrRtfxv4u5IHCEH77BmxvXmmxQ==\"}},\"@open-draft/until@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-U69T3ItWHvLwGg5eJ0n3I62nWuE6ilHlmz7zM0npLBRvPRd7e6NYmg54vvRtP5mZG7kZqZCFVdsTWo7BPtBujg==\"}},\"@opentelemetry/api-logs@0.208.0\":{\"resolution\":{\"integrity\":\"sha512-CjruKY9V6NMssL/T1kAFgzosF1v9o6oeN+aX5JB/C/xPNtmgIJqcXHG7fA82Ou1zCpWGl4lROQUKwUNE1pMCyg==\"},\"engines\":{\"node\":\">=8.0.0\"}},\"@opentelemetry/api@1.9.0\":{\"resolution\":{\"integrity\":\"sha512-3giAOQvZiH5F9bMlMiv8+GSPMeqg0dbaeo58/0SlA9sxSqZhnUtxzX9/2FzyhS9sWQf5S0GJE0AKBrFqjpeYcg==\"},\"engines\":{\"node\":\">=8.0.0\"}},\"@opentelemetry/core@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-FuabnnUm8LflnieVxs6eP7Z383hgQU4W1e3KJS6aOG3RxWxcHyBxH8fDMHN
gu/gFx/M2jvTOW/4/PHhLz6bjWw==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.0.0 <1.10.0\"}},\"@opentelemetry/core@2.4.0\":{\"resolution\":{\"integrity\":\"sha512-KtcyFHssTn5ZgDu6SXmUznS80OFs/wN7y6MyFRRcKU6TOw8hNcGxKvt8hsdaLJfhzUszNSjURetq5Qpkad14Gw==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.0.0 <1.10.0\"}},\"@opentelemetry/exporter-logs-otlp-http@0.208.0\":{\"resolution\":{\"integrity\":\"sha512-jOv40Bs9jy9bZVLo/i8FwUiuCvbjWDI+ZW13wimJm4LjnlwJxGgB+N/VWOZUTpM+ah/awXeQqKdNlpLf2EjvYg==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\"^1.3.0\"}},\"@opentelemetry/otlp-exporter-base@0.208.0\":{\"resolution\":{\"integrity\":\"sha512-gMd39gIfVb2OgxldxUtOwGJYSH8P1kVFFlJLuut32L6KgUC4gl1dMhn+YC2mGn0bDOiQYSk/uHOdSjuKp58vvA==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\"^1.3.0\"}},\"@opentelemetry/otlp-transformer@0.208.0\":{\"resolution\":{\"integrity\":\"sha512-DCFPY8C6lAQHUNkzcNT9R+qYExvsk6C5Bto2pbNxgicpcSWbe2WHShLxkOxIdNcBiYPdVHv/e7vH7K6TI+C+fQ==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\"^1.3.0\"}},\"@opentelemetry/resources@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-1pNQf/JazQTMA0BiO5NINUzH0cbLbbl7mntLa4aJNmCCXSj0q03T5ZXXL0zw4G55TjdL9Tz32cznGClf+8zr5A==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.3.0 <1.10.0\"}},\"@opentelemetry/resources@2.4.0\":{\"resolution\":{\"integrity\":\"sha512-RWvGLj2lMDZd7M/5tjkI/2VHMpXebLgPKvBUd9LRasEWR2xAynDwEYZuLvY9P2NGG73HF07jbbgWX2C9oavcQg==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.3.0 
<1.10.0\"}},\"@opentelemetry/sdk-logs@0.208.0\":{\"resolution\":{\"integrity\":\"sha512-QlAyL1jRpOeaqx7/leG1vJMp84g0xKP6gJmfELBpnI4O/9xPX+Hu5m1POk9Kl+veNkyth5t19hRlN6tNY1sjbA==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.4.0 <1.10.0\"}},\"@opentelemetry/sdk-metrics@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-G5KYP6+VJMZzpGipQw7Giif48h6SGQ2PFKEYCybeXJsOCB4fp8azqMAAzE5lnnHK3ZVwYQrgmFbsUJO/zOnwGw==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.9.0 <1.10.0\"}},\"@opentelemetry/sdk-trace-base@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-xWQgL0Bmctsalg6PaXExmzdedSp3gyKV8mQBwK/j9VGdCDu2fmXIb2gAehBKbkXCpJ4HPkgv3QfoJWRT4dHWbw==\"},\"engines\":{\"node\":\"^18.19.0 || >=20.6.0\"},\"peerDependencies\":{\"@opentelemetry/api\":\">=1.3.0 <1.10.0\"}},\"@opentelemetry/semantic-conventions@1.38.0\":{\"resolution\":{\"integrity\":\"sha512-kocjix+/sSggfJhwXqClZ3i9Y/MI0fp7b+g7kCRm6psy2dsf8uApTRclwG18h8Avm7C9+fnt+O36PspJ/OzoWg==\"},\"engines\":{\"node\":\">=14\"}},\"@opral/markdown-wc@0.9.0\":{\"resolution\":{\"integrity\":\"sha512-m5I3WklqED3mTcUOR3J9CRFIttMYsCmSCZnZYXNdL0Oj0EtSVWXPetPhKsHTEK+MrWPaqfsiKIFq6+l7dKgtNg==\"},\"peerDependencies\":{\"@tiptap/core\":\"^3.0.0\"},\"peerDependenciesMeta\":{\"@tiptap/core\":{\"optional\":true}}},\"@opral/zettel-ast@0.1.0\":{\"resolution\":{\"integrity\":\"sha512-pZDiecYrpSxw7miv4ZSufCRB9sqFMXRa0Rf+LQcoEEh0VOBI6beOmvB+iXmWJ7vxMQINuS7yfsvm5ZyrTm/W5A==\"},\"engines\":{\"node\":\">=20\"}},\"@oxc-project/types@0.127.0\":{\"resolution\":{\"integrity\":\"sha512-aIYXQBo4lCbO4z0R3FHeucQHpF46l2LbMdxRvqvuRuW2OxdnSkcng5B8+K12spgLDj93rtN3+J2Vac/TIO+ciQ==\"}},\"@oxlint/darwin-arm64@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-kTmm1opqyn7iZopWHO3Ml4D/44pA5eknZBepgxCnTaPrW8XgCEUI85Q5AvOOvoNve8NziTYb8ax+CyuGJIgn/Q==\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@oxlint/darwin-x64@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-/hMfZ9j7ZzVPR
mMm02PHNc6MIMk0QYv5VowZJRIp40YLqLPvFfGNGZBj8e1fDVgZMFEGWDQK3yrt1uBKxXAK4Q==\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@oxlint/linux-arm64-gnu@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-iv4wdrwdCa8bhJxOpKlvfxqTs0LgW5tKBUMvH9B13zREHm1xT9JRZ8cQbbKiyC6LNdggwu5S6TSvODgAu7/DlA==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@oxlint/linux-arm64-musl@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-a3gTbnN1JzedxqYeGTkg38BAs/r3Krd2DPNs/MF7nnHthT3RzkPUk47isMePLuNc4e/Weljn7m2m/Onx22tiNg==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@oxlint/linux-x64-gnu@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-cCAyqyuKpFImjlgiBuuwSF+aDBW2h19/aCmHMTMSp6KXwhoQK7/Xx7/EhZKP5wiQJzVUYq5fXr0D8WmpLGsjRg==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@oxlint/linux-x64-musl@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-8VOJ4vQo0G1tNdaghxrWKjKZGg73tv+FoMDrtNYuUesqBHZN68FkYCsgPwEsacLhCmtoZrkF3ePDWDuWEpDyAg==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@oxlint/win32-arm64@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-N8KUtzP6gfEHKvaIBZCS9g8wRfqV5v55a/B8iJjIEhtMehcEM+UX+aYRsQ4dy5oBCrK3FEp4Yy/jHgb0moLm3Q==\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@oxlint/win32-x64@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-7tCyG0laduNQ45vzB9blVEGq/6DOvh7AFmiUAana8mTp0zIKQQmwJ21RqhazH0Rk7O6lL7JYzKcu+zaJHGpRLA==\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@pkgjs/parseargs@0.11.0\":{\"resolution\":{\"integrity\":\"sha512-+1VkjdD0QBLPodGrJUeqarH8VAIvQODIbwh9XpP5Syisf7YoQgsJKPNFoqqLQlu+VQ/tVSshMR6loPMn8U+dPg==\"},\"engines\":{\"node\":\">=14\"}},\"@polka/url@1.0.0-next.29\":{\"resolution\":{\"integrity\":\"sha512-wwQAWhWSuHaag8c4q/KN/vCoeOJYshAIvMQwD4GpSb3OiZklFfvAgmj0VCBBImRpuF/aFgIRzllXlVX93Jevww==\"}},\"@poppinss/colors@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-FvdDqtcRCtz6hThExcFOgW0cWX+xwSMWcRuQe5ZEb2m7cVQOAVZOIMt+/v9RxGiD9/OY16qJBXK4CVKWAPalBw==\"}},\"@poppinss/dumper@0.6.5\":{\"resolution\":{\"integrity\":\"sha512-NBdYIb90J7LfOI32dOewKI1r7wnkiH6m920puQ3qHUeZkxNkQiFnXVWoE6YtFSv6QOiPPf7ys6i+
HWWecDz7sw==\"}},\"@poppinss/exception@1.2.2\":{\"resolution\":{\"integrity\":\"sha512-m7bpKCD4QMlFCjA/nKTs23fuvoVFoA83brRKmObCUNmi/9tVu8Ve3w4YQAnJu4q3Tjf5fr685HYIC/IA2zHRSg==\"}},\"@posthog/core@1.9.1\":{\"resolution\":{\"integrity\":\"sha512-kRb1ch2dhQjsAapZmu6V66551IF2LnCbc1rnrQqnR7ArooVyJN9KOPXre16AJ3ObJz2eTfuP7x25BMyS2Y5Exw==\"}},\"@posthog/types@1.321.2\":{\"resolution\":{\"integrity\":\"sha512-nsMeHlVNlTB68JyV3/0+5FDreiTpUCStDH8ZUH/Hfsbw1howyf9a7DyURTwwhXdnyO0DksEFUIX+4IKCJs/H9g==\"}},\"@promptbook/utils@0.69.5\":{\"resolution\":{\"integrity\":\"sha512-xm5Ti/Hp3o4xHrsK9Yy3MS6KbDxYbq485hDsFvxqaNA7equHLPdo8H8faTitTeb14QCDfLW4iwCxdVYu5sn6YQ==\"}},\"@protobufjs/aspromise@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-j+gKExEuLmKwvz3OgROXtrJ2UG2x8Ch2YZUxahh+s1F2HZ+wAceUNLkvy6zKCPVRkU++ZWQrdxsUeQXmcg4uoQ==\"}},\"@protobufjs/base64@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-AZkcAA5vnN/v4PDqKyMR5lx7hZttPDgClv83E//FMNhR2TMcLUhfRUBHCmSl0oi9zMgDDqRUJkSxO3wm85+XLg==\"}},\"@protobufjs/codegen@2.0.4\":{\"resolution\":{\"integrity\":\"sha512-YyFaikqM5sH0ziFZCN3xDC7zeGaB/d0IUb9CATugHWbd1FRFwWwt4ld4OYMPWu5a3Xe01mGAULCdqhMlPl29Jg==\"}},\"@protobufjs/eventemitter@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-j9ednRT81vYJ9OfVuXG6ERSTdEL1xVsNgqpkxMsbIabzSo3goCjDIveeGv5d03om39ML71RdmrGNjG5SReBP/Q==\"}},\"@protobufjs/fetch@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-lljVXpqXebpsijW71PZaCYeIcE5on1w5DlQy5WH6GLbFryLUrBD4932W/E2BSpfRJWseIL4v/KPgBFxDOIdKpQ==\"}},\"@protobufjs/float@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-Ddb+kVXlXst9d+R9PfTIxh1EdNkgoRe5tOX6t01f1lYWOvJnSPDBlG241QLzcyPdoNTsblLUdujGSE4RzrTZGQ==\"}},\"@protobufjs/inquire@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-kdSefcPdruJiFMVSbn801t4vFK7KB/5gd2fYvrxhuJYg8ILrmn9SKSX2tZdV6V+ksulWqS7aXjBcRXl3wHoD9Q==\"}},\"@protobufjs/path@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-6JOcJ5Tm08dOHAbdR3GrvP+yUUfkjG5ePsHYczMFLq3ZmMkAD98cDgcT2iA1lJ9NVwFd4tH/iSSoe44YWkltEA==\"}},\"@protobufjs/pool@1.1.0\":{\"resoluti
on\":{\"integrity\":\"sha512-0kELaGSIDBKvcgS4zkjz1PeddatrjYcmMWOlAuAPwAeccUrPHdUqo/J6LiymHHEiJT5NrF1UVwxY14f+fy4WQw==\"}},\"@protobufjs/utf8@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-Vvn3zZrhQZkkBE8LSuW3em98c0FwgO4nxzv6OdSxPKJIEKY2bGbHn+mhGIPerzI4twdxaP8/0+06HBpwf345Lw==\"}},\"@puppeteer/browsers@2.13.1\":{\"resolution\":{\"integrity\":\"sha512-zmS4RTK9fbrc++WlAJhxYbfz3IjDeOmkK/CwwbLmk7ydfS9e2CiEeRJHEPvjDVElO/bwXbidwGA37Bsm6LzCnQ==\"},\"engines\":{\"node\":\">=18\"},\"hasBin\":true},\"@rolldown/binding-android-arm64@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-s70pVGhw4zqGeFnXWvAzJDlvxhlRollagdCCKRgOsgUOH3N1l0LIxf83AtGzmb5SiVM4Hjl5HyarMRfdfj3DaQ==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"arm64\"],\"os\":[\"android\"]},\"@rolldown/binding-darwin-arm64@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-4ksWc9n0mhlZpZ9PMZgTGjeOPRu8MB1Z3Tz0Mo02eWfWCHMW1zN82Qz/pL/rC+yQa+8ZnutMF0JjJe7PjwasYw==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@rolldown/binding-darwin-x64@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-SUSDOI6WwUVNcWxd02QEBjLdY1VPHvlEkw6T/8nYG322iYWCTxRb1vzk4E+mWWYehTp7ERibq54LSJGjmouOsw==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@rolldown/binding-freebsd-x64@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-hwnz3nw9dbJ05EDO/PvcjaaewqqDy7Y1rn1UO81l8iIK1GjenME75dl16ajbvSSMfv66WXSRCYKIqfgq2KCfxw==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"@rolldown/binding-linux-arm-gnueabihf@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-IS+W7epTcwANmFSQFrS1SivEXHtl1JtuQA9wlxrZTcNi6mx+FDOYrakGevvvTwgj2JvWiK8B29/qD9BELZPyXQ==\"},\"engines\":{\"node\":\"^20.19.0 || 
>=22.12.0\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@rolldown/binding-linux-arm64-gnu@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-e6usGaHKW5BMNZOymS1UcEYGowQMWcgZ71Z17Sl/h2+ZziNJ1a9n3Zvcz6LdRyIW5572wBCTH/Z+bKuZouGk9Q==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@rolldown/binding-linux-arm64-musl@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-b/CgbwAJpmrRLp02RPfhbudf5tZnN9nsPWK82znefso832etkem8H7FSZwxrOI9djcdTP7U6YfNhbRnh7djErg==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@rolldown/binding-linux-ppc64-gnu@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-4EII1iNGRUN5WwGbF/kOh/EIkoDN9HsupgLQoXfY+D1oyJm7/F4t5PYU5n8SWZgG0FEwakyM8pGgwcBYruGTlA==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"ppc64\"],\"os\":[\"linux\"]},\"@rolldown/binding-linux-s390x-gnu@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-AH8oq3XqQo4IibpVXvPeLDI5pzkpYn0WiZAfT05kFzoJ6tQNzwRdDYQ45M8I/gslbodRZwW8uxLhbSBbkv96rA==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"s390x\"],\"os\":[\"linux\"]},\"@rolldown/binding-linux-x64-gnu@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-cLnjV3xfo7KslbU41Z7z8BH/E1y5mzUYzAqih1d1MDaIGZRCMqTijqLv76/P7fyHuvUcfGsIpqCdddbxLLK9rA==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@rolldown/binding-linux-x64-musl@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-0phclDw1spsL7dUB37sIARuis2tAgomCJXAHZlpt8PXZ4Ba0dRP1e+66lsRqrfhISeN9bEGNjQs+T/Fbd7oYGw==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@rolldown/binding-openharmony-arm64@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-0ag/hEgXOwgw4t8QyQvUCxvEg+V0KBcA6YuOx9g0r02MprutRF5dyljgm3EmR02O292UX7UeS6HzWHAl6KgyhA==\"},\"engines\":{\"node\":\"^20.19.0 || 
>=22.12.0\"},\"cpu\":[\"arm64\"],\"os\":[\"openharmony\"]},\"@rolldown/binding-wasm32-wasi@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-LEXei6vo0E5wTGwpkJ4KoT3OZJRnglwldt5ziLzOlc6qqb55z4tWNq2A+PFqCJuvWWdP53CVhG1Z9NtToDPJrA==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"wasm32\"]},\"@rolldown/binding-win32-arm64-msvc@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-gUmyzBl3SPMa6hrqFUth9sVfcLBlYsbMzBx5PlexMroZStgzGqlZ26pYG89rBb45Mnia+oil6YAIFeEWGWhoZA==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@rolldown/binding-win32-x64-msvc@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-3hkiolcUAvPB9FLb3UZdfjVVNWherN1f/skkGWJP/fgSQhYUZpSIRr0/I8ZK9TkF3F7kxvJAk0+IcKvPHk9qQg==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@rolldown/pluginutils@1.0.0-beta.40\":{\"resolution\":{\"integrity\":\"sha512-s3GeJKSQOwBlzdUrj4ISjJj5SfSh+aqn0wjOar4Bx95iV1ETI7F6S/5hLcfAxZ9kXDcyrAkxPlqmd1ZITttf+w==\"}},\"@rolldown/pluginutils@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-n8iosDOt6Ig1UhJ2AYqoIhHWh/isz0xpicHTzpKBeotdVsTEcxsSA/i3EVM7gQAj0rU27OLAxCjzlj15IWY7bg==\"}},\"@rolldown/pluginutils@1.0.0-rc.7\":{\"resolution\":{\"integrity\":\"sha512-qujRfC8sFVInYSPPMLQByRh7zhwkGFS4+tyMQ83srV1qrxL4g8E2tyxVVyxd0+8QeBM1mIk9KbWxkegRr76XzA==\"}},\"@rollup/rollup-android-arm-eabi@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-yDPzwsgiFO26RJA4nZo8I+xqzh7sJTZIWQOxn+/XOdPE31lAvLIYCKqjV+lNH/vxE2L2iH3plKxDCRK6i+CwhA==\"},\"cpu\":[\"arm\"],\"os\":[\"android\"]},\"@rollup/rollup-android-arm64@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-k8FontTxIE7b0/OGKeSN5B6j25EuppBcWM33Z19JoVT7UTXFSo3D9CdU39wGTeb29NO3XxpMNauh09B+Ibw+9g==\"},\"cpu\":[\"arm64\"],\"os\":[\"android\"]},\"@rollup/rollup-darwin-arm64@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-A6s4gJpomNBtJ2yioj8bflM2oogDwzUiMl2yNJ2v9E7++sHrSrsQ29fOfn5DM/iCzpWcebNYEdXpaK4tr2RhfQ==\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"
]},\"@rollup/rollup-darwin-x64@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-e6XqVmXlHrBlG56obu9gDRPW3O3hLxpwHpLsBJvuI8qqnsrtSZ9ERoWUXtPOkY8c78WghyPHZdmPhHLWNdAGEw==\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@rollup/rollup-freebsd-arm64@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-v0E9lJW8VsrwPux5Qe5CwmH/CF/2mQs6xU1MF3nmUxmZUCHazCjLgYvToOk+YuuUqLQBio1qkkREhxhc656ViA==\"},\"cpu\":[\"arm64\"],\"os\":[\"freebsd\"]},\"@rollup/rollup-freebsd-x64@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-ClAmAPx3ZCHtp6ysl4XEhWU69GUB1D+s7G9YjHGhIGCSrsg00nEGRRZHmINYxkdoJehde8VIsDC5t9C0gb6yqA==\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"@rollup/rollup-linux-arm-gnueabihf@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-EPlb95nUsz6Dd9Qy13fI5kUPXNSljaG9FiJ4YUGU1O/Q77i5DYFW5KR8g1OzTcdZUqQQ1KdDqsTohdFVwCwjqg==\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-arm-musleabihf@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-BOmnVW+khAUX+YZvNfa0tGTEMVVEerOxN0pDk2E6N6DsEIa2Ctj48FOMfNDdrwinocKaC7YXUZ1pHlKpnkja/Q==\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-arm64-gnu@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-Xt2byDZ+6OVNuREgBXr4+CZDJtrVso5woFtpKdGPhpTPHcNG7D8YXeQzpNbFRxzTVqJf7kvPMCub/pcGUWgBjA==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-arm64-musl@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-+LdZSldy/I9N8+klim/Y1HsKbJ3BbInHav5qE9Iy77dtHC/pibw1SR/fXlWyAk0ThnpRKoODwnAuSjqxFRDHUQ==\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-loong64-gnu@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-8ms8sjmyc1jWJS6WdNSA23rEfdjWB30LH8Wqj0Cqvv7qSHnvw6kgMMXRdop6hkmGPlyYBdRPkjJnj3KCUHV/uQ==\"},\"cpu\":[\"loong64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-ppc64-gnu@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-3HRQLUQbpBDMmzoxPJYd3W6vrVHOo2cVW8RUo87Xz0JPJcBLBr5kZ1pGcQAhdZgX9VV7NbGNipah1omKKe23/g==\"},\"cpu\":[\"ppc64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-riscv64-gnu@4.53.2\":{\"resolution\":{\"integrit
y\":\"sha512-fMjKi+ojnmIvhk34gZP94vjogXNNUKMEYs+EDaB/5TG/wUkoeua7p7VCHnE6T2Tx+iaghAqQX8teQzcvrYpaQA==\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-riscv64-musl@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-XuGFGU+VwUUV5kLvoAdi0Wz5Xbh2SrjIxCtZj6Wq8MDp4bflb/+ThZsVxokM7n0pcbkEr2h5/pzqzDYI7cCgLQ==\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-s390x-gnu@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-w6yjZF0P+NGzWR3AXWX9zc0DNEGdtvykB03uhonSHMRa+oWA6novflo2WaJr6JZakG2ucsyb+rvhrKac6NIy+w==\"},\"cpu\":[\"s390x\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-x64-gnu@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-yo8d6tdfdeBArzC7T/PnHd7OypfI9cbuZzPnzLJIyKYFhAQ8SvlkKtKBMbXDxe1h03Rcr7u++nFS7tqXz87Gtw==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@rollup/rollup-linux-x64-musl@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-ah59c1YkCxKExPP8O9PwOvs+XRLKwh/mV+3YdKqQ5AMQ0r4M4ZDuOrpWkUaqO7fzAHdINzV9tEVu8vNw48z0lA==\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@rollup/rollup-openharmony-arm64@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-4VEd19Wmhr+Zy7hbUsFZ6YXEiP48hE//KPLCSVNY5RMGX2/7HZ+QkN55a3atM1C/BZCGIgqN+xrVgtdak2S9+A==\"},\"cpu\":[\"arm64\"],\"os\":[\"openharmony\"]},\"@rollup/rollup-win32-arm64-msvc@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-IlbHFYc/pQCgew/d5fslcy1KEaYVCJ44G8pajugd8VoOEI8ODhtb/j8XMhLpwHCMB3yk2J07ctup10gpw2nyMA==\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@rollup/rollup-win32-ia32-msvc@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-lNlPEGgdUfSzdCWU176ku/dQRnA7W+Gp8d+cWv73jYrb8uT7HTVVxq62DUYxjbaByuf1Yk0RIIAbDzp+CnOTFg==\"},\"cpu\":[\"ia32\"],\"os\":[\"win32\"]},\"@rollup/rollup-win32-x64-gnu@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-S6YojNVrHybQis2lYov1sd+uj7K0Q05NxHcGktuMMdIQ2VixGwAfbJ23NnlvvVV1bdpR2m5MsNBViHJKcA4ADw==\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@rollup/rollup-win32-x64-msvc@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-k+/Rkcyx//P6fetPoLMb8pBeqJBNGx81uuf7iljX9++yNBVRDQgD04L+SV
XmXmh5ZP4/WOp4mWF0kmi06PW2tA==\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@shikijs/core@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-8TOG6yG557q+fMsSVa8nkEDOZNTSxjbbR8l6lF2gyr6Np+jrPlslqDxQkN6rMXCECQ3isNPZAGszAfYoJOPGlg==\"}},\"@shikijs/engine-javascript@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-ZedbOFpopibdLmvTz2sJPJgns8Xvyabe2QbmqMTz07kt1pTzfEvKZc5IqPVO/XFiEbbNyaOpjPBkkr1vlwS+qg==\"}},\"@shikijs/engine-oniguruma@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-HnqFsV11skAHvOArMZdLBZZApRSYS4LSztk2K3016Y9VCyZISnlYUYsL2hzlS7tPqKHvNqmI5JSUJZprXloMvA==\"}},\"@shikijs/langs@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-WpRvEFvkVvO65uKYW4Rzxs+IG0gToyM8SARQMtGGsH4GDMNZrr60qdggXrFOsdfOVssG/QQGEl3FnJ3EZ+8w8A==\"}},\"@shikijs/themes@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-8ow2zWb1IDvCKjYb0KiLNrK4offFdkfNVPXb1OZykpLCzRU6j+efkY+Y7VQjNlNFXonSw+4AOdGYtmqykDbRiQ==\"}},\"@shikijs/types@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-BnP+y/EQnhihgHy4oIAN+6FFtmfTekwOLsQbRw9hOKwqgNy8Bdsjq8B05oAt/ZgvIWWFrshV71ytOrlPfYjIJw==\"}},\"@shikijs/vscode-textmate@10.0.2\":{\"resolution\":{\"integrity\":\"sha512-83yeghZ2xxin3Nj8z1NMd/NCuca+gsYXswywDy5bHvwlWL8tpTQmzGeUuHd9FC3E/SBEMvzJRwWEOz5gGes9Qg==\"}},\"@sinclair/typebox@0.34.40\":{\"resolution\":{\"integrity\":\"sha512-gwBNIP8ZAYev/ORDWW0QvxdwPXwxBtLsdsJgSc7eDIRt8ubP+rxUBzPsrwnu16fgEF8Bx4lh/+mvQvJzcTM6Kw==\"}},\"@sindresorhus/is@7.1.1\":{\"resolution\":{\"integrity\":\"sha512-rO92VvpgMc3kfiTjGT52LEtJ8Yc5kCWhZjLQ3LwlA4pSgPpQO7bVpYXParOD8Jwf+cVQECJo3yP/4I8aZtUQTQ==\"},\"engines\":{\"node\":\">=18\"}},\"@speed-highlight/core@1.2.12\":{\"resolution\":{\"integrity\":\"sha512-uilwrK0Ygyri5dToHYdZSjcvpS2ZwX0w5aSt3GCEN9hrjxWCoeV4Z2DTXuxjwbntaLQIEEAlCeNQss5SoHvAEA==\"}},\"@sqlite.org/sqlite-wasm@3.50.4-build1\":{\"resolution\":{\"integrity\":\"sha512-Qig2Wso7gPkU1PtXwFzndh+CTRzrIFxVGqv6eCetjU7YqxlHItj+GvQYwYTppCRgAPawtRN/4AJcEgB9xDHGug==\"},\"hasBin\":true},\"@standard-schema/spec@1.0.0\":{\"resolution\":{\"integrity\":\"sha51
2-m2bOd0f2RT9k8QJx1JN85cZYyH1RqFBdlwtkSlf4tBDYLCiiZnv1fIIwacK6cqwXavOydf0NPToMQgpKq+dVlA==\"}},\"@standard-schema/spec@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w==\"}},\"@tailwindcss/node@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-Ai7+yQPxz3ddrDQzFfBKdHEVBg0w3Zl83jnjuwxnZOsnH9pGn93QHQtpU0p/8rYWxvbFZHneni6p1BSLK4DkGA==\"}},\"@tailwindcss/oxide-android-arm64@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-e7MOr1SAn9U8KlZzPi1ZXGZHeC5anY36qjNwmZv9pOJ8E4Q6jmD1vyEHkQFmNOIN7twGPEMXRHmitN4zCMN03g==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"arm64\"],\"os\":[\"android\"]},\"@tailwindcss/oxide-darwin-arm64@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-tSC/Kbqpz/5/o/C2sG7QvOxAKqyd10bq+ypZNf+9Fi2TvbVbv1zNpcEptcsU7DPROaSbVgUXmrzKhurFvo5eDg==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"@tailwindcss/oxide-darwin-x64@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-yPyUXn3yO/ufR6+Kzv0t4fCg2qNr90jxXc5QqBpjlPNd0NqyDXcmQb/6weunH/MEDXW5dhyEi+agTDiqa3WsGg==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"@tailwindcss/oxide-freebsd-x64@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-BoMIB4vMQtZsXdGLVc2z+P9DbETkiopogfWZKbWwM8b/1Vinbs4YcUwo+kM/KeLkX3Ygrf4/PsRndKaYhS8Eiw==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"@tailwindcss/oxide-linux-arm-gnueabihf@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-7pIHBLTHYRAlS7V22JNuTh33yLH4VElwKtB3bwchK/UaKUPpQ0lPQiOWcbm4V3WP2I6fNIJ23vABIvoy2izdwA==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"@tailwindcss/oxide-linux-arm64-gnu@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-+E4wxJ0ZGOzSH325reXTWB48l42i93kQqMvDyz5gqfRzRZ7faNhnmvlV4EPGJU3QJM/3Ab5jhJ5pCRUsKn6OQw==\"},\"engines\":{\"node\":\">= 
20\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@tailwindcss/oxide-linux-arm64-musl@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-bBADEGAbo4ASnppIziaQJelekCxdMaxisrk+fB7Thit72IBnALp9K6ffA2G4ruj90G9XRS2VQ6q2bCKbfFV82g==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"@tailwindcss/oxide-linux-x64-gnu@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-7Mx25E4WTfnht0TVRTyC00j3i0M+EeFe7wguMDTlX4mRxafznw0CA8WJkFjWYH5BlgELd1kSjuU2JiPnNZbJDA==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@tailwindcss/oxide-linux-x64-musl@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-2wwJRF7nyhOR0hhHoChc04xngV3iS+akccHTGtz965FwF0up4b2lOdo6kI1EbDaEXKgvcrFBYcYQQ/rrnWFVfA==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"@tailwindcss/oxide-wasm32-wasi@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-FQsqApeor8Fo6gUEklzmaa9994orJZZDBAlQpK2Mq+DslRKFJeD6AjHpBQ0kZFQohVr8o85PPh8eOy86VlSCmw==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"wasm32\"],\"bundledDependencies\":[\"@napi-rs/wasm-runtime\",\"@emnapi/core\",\"@emnapi/runtime\",\"@tybys/wasm-util\",\"@emnapi/wasi-threads\",\"tslib\"]},\"@tailwindcss/oxide-win32-arm64-msvc@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-L9BXqxC4ToVgwMFqj3pmZRqyHEztulpUJzCxUtLjobMCzTPsGt1Fa9enKbOpY2iIyVtaHNeNvAK8ERP/64sqGQ==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"@tailwindcss/oxide-win32-x64-msvc@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-ESlKG0EpVJQwRjXDDa9rLvhEAh0mhP1sF7sap9dNZT0yyl9SAG6T7gdP09EH0vIv0UNTlo6jPWyujD6559fZvw==\"},\"engines\":{\"node\":\">= 20\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"@tailwindcss/oxide@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-9El/iI069DKDSXwTvB9J4BwdO5JhRrOweGaK25taBAvBXyXqJAX+Jqdvs8r8gKpsI/1m0LeJLyQYTf/WLrBT1Q==\"},\"engines\":{\"node\":\">= 
20\"}},\"@tailwindcss/vite@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-pCvohwOCspk3ZFn6eJzrrX3g4n2JY73H6MmYC87XfGPyTty4YsCjYTMArRZm/zOI8dIt3+EcrLHAFPe5A4bgtw==\"},\"peerDependencies\":{\"vite\":\"^5.2.0 || ^6 || ^7 || ^8\"}},\"@tanstack/history@1.161.6\":{\"resolution\":{\"integrity\":\"sha512-NaOGLRrddszbQj9upGat6HG/4TKvXLvu+osAIgfxPYA+eIvYKv8GKDJOrY2D3/U9MRnKfMWD7bU4jeD4xmqyIg==\"},\"engines\":{\"node\":\">=20.19\"}},\"@tanstack/react-router@1.169.2\":{\"resolution\":{\"integrity\":\"sha512-OJM7Kguc7ERnweaNRWsyWgIKcl3z23rD1B4jaxjzd9RGdnzpt2HfrWa9rggbT0Hfzhfo4D2ZmsfoTme035tniQ==\"},\"engines\":{\"node\":\">=20.19\"},\"peerDependencies\":{\"react\":\">=18.0.0 || >=19.0.0\",\"react-dom\":\">=18.0.0 || >=19.0.0\"}},\"@tanstack/react-start-client@1.166.48\":{\"resolution\":{\"integrity\":\"sha512-6fqwCwe6v+Nvtdf6vg6gxs/0gCXyZEHF18EslNeG/kca2wnXYFuXRhqGJjJaEgMk3WF4IE9mUgFuBSAOY3P7nQ==\"},\"engines\":{\"node\":\">=22.12.0\"},\"peerDependencies\":{\"react\":\">=18.0.0 || >=19.0.0\",\"react-dom\":\">=18.0.0 || >=19.0.0\"}},\"@tanstack/react-start-rsc@0.0.43\":{\"resolution\":{\"integrity\":\"sha512-2RCa8Caw/HKrHi9pxmUvsiUrBtjddeBiP93e7OYQOCL3rHxoMD9CSscwT9/ziCaqnIOuBFbKWgvRTahR4jSfsw==\"},\"engines\":{\"node\":\">=22.12.0\"},\"peerDependencies\":{\"@rspack/core\":\">=2.0.0-0\",\"@vitejs/plugin-rsc\":\">=0.5.20\",\"react\":\">=18.0.0 || >=19.0.0\",\"react-dom\":\">=18.0.0 || >=19.0.0\",\"react-server-dom-rspack\":\">=0.0.2\"},\"peerDependenciesMeta\":{\"@rspack/core\":{\"optional\":true},\"@vitejs/plugin-rsc\":{\"optional\":true},\"react-server-dom-rspack\":{\"optional\":true}}},\"@tanstack/react-start-server@1.166.52\":{\"resolution\":{\"integrity\":\"sha512-46Gx+byIndYywUtyna5h3qatHipJkPFqo/miexfuYPgeVAI6ypQzsw7wxF194H6VAP43m2q+fdLPBXStufoOGw==\"},\"engines\":{\"node\":\">=22.12.0\"},\"peerDependencies\":{\"react\":\">=18.0.0 || >=19.0.0\",\"react-dom\":\">=18.0.0 || 
>=19.0.0\"}},\"@tanstack/react-start@1.167.64\":{\"resolution\":{\"integrity\":\"sha512-gxtesUkHIZmKR/OEFAx6ifedIs7UM1cG5B/TJhcs6c/BrJpjeQIrkF9/GmWRpslaWCpo3tXA2IOxNSH49KFhoA==\"},\"engines\":{\"node\":\">=22.12.0\"},\"peerDependencies\":{\"@rsbuild/core\":\"^2.0.0\",\"@vitejs/plugin-rsc\":\"*\",\"react\":\">=18.0.0 || >=19.0.0\",\"react-dom\":\">=18.0.0 || >=19.0.0\",\"vite\":\">=7.0.0\"},\"peerDependenciesMeta\":{\"@rsbuild/core\":{\"optional\":true},\"@vitejs/plugin-rsc\":{\"optional\":true},\"vite\":{\"optional\":true}}},\"@tanstack/react-store@0.9.3\":{\"resolution\":{\"integrity\":\"sha512-y2iHd/N9OkoQbFJLUX1T9vbc2O9tjH0pQRgTcx1/Nz4IlwLvkgpuglXUx+mXt0g5ZDFrEeDnONPqkbfxXJKwRg==\"},\"peerDependencies\":{\"react\":\"^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0\",\"react-dom\":\"^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0\"}},\"@tanstack/router-core@1.169.2\":{\"resolution\":{\"integrity\":\"sha512-5sm0DJF1A7Mz+9gy4Gz/lLovNailK3yot4vYvz9MkBUPw26uLnhQiR8hSCYxucjE0wD6Mdlc5l+Z0/XTlZ7xHw==\"},\"engines\":{\"node\":\">=20.19\"}},\"@tanstack/router-generator@1.166.41\":{\"resolution\":{\"integrity\":\"sha512-XpnkVvk9AlCtw5vggJsnSx3MdKGk8Asopwy9wUFAqFAHqlrRJzV9PoZ5kGkNEJMOYYcMTriJLN4D+kyXRUJpDQ==\"},\"engines\":{\"node\":\">=20.19\"}},\"@tanstack/router-plugin@1.167.34\":{\"resolution\":{\"integrity\":\"sha512-hU0Cuw79Yo6FGPBB0mW9Ik8bnTzmnUKtbgbvmIzeFdK3wKBPS4+xN7kcxVaBqXfP6xR3PFkIf2SSoYsiuLjVtg==\"},\"engines\":{\"node\":\">=20.19\"},\"peerDependencies\":{\"@rsbuild/core\":\">=1.0.2 || ^2.0.0\",\"@tanstack/react-router\":\"^1.169.2\",\"vite\":\">=5.0.0 || >=6.0.0 || >=7.0.0 || >=8.0.0\",\"vite-plugin-solid\":\"^2.11.10 || 
^3.0.0-0\",\"webpack\":\">=5.92.0\"},\"peerDependenciesMeta\":{\"@rsbuild/core\":{\"optional\":true},\"@tanstack/react-router\":{\"optional\":true},\"vite\":{\"optional\":true},\"vite-plugin-solid\":{\"optional\":true},\"webpack\":{\"optional\":true}}},\"@tanstack/router-utils@1.161.8\":{\"resolution\":{\"integrity\":\"sha512-xyiLWEKjfBAVhauDSSjXxyf7s8elU6SM+V050sbkofvGmIIvkwPFtDsX7Gvwh14kBd6iCwAT+RiPvXTxAptY0Q==\"},\"engines\":{\"node\":\">=20.19\"}},\"@tanstack/start-client-core@1.168.2\":{\"resolution\":{\"integrity\":\"sha512-/bckv9k/yxY4VmSY2V2MeX7NBsS5uqGvdSPs5WIvW3Uv35DXPrdiumKXTNJeZRNRMtxrM+YfxQPjXLx3C7ykvg==\"},\"engines\":{\"node\":\">=22.12.0\"}},\"@tanstack/start-fn-stubs@1.161.6\":{\"resolution\":{\"integrity\":\"sha512-Y6QSlGiLga8cHfvxGGaonXIlt2bIUTVdH6AMjmpMp7+ANNCp+N96GQbjjhLye3JkaxDfP68x5iZA8NK4imgRig==\"},\"engines\":{\"node\":\">=22.12.0\"}},\"@tanstack/start-plugin-core@1.169.19\":{\"resolution\":{\"integrity\":\"sha512-z3/Tkytb6eRQKDnFU31QLimwrcVyDi9uHMtUQKmJkxQg+Bz85di+MxMrbnvd8XXP9OHcFlWK8HpG/HpVncZq4Q==\"},\"engines\":{\"node\":\">=22.12.0\"},\"peerDependencies\":{\"@rsbuild/core\":\"^2.0.0\",\"vite\":\">=7.0.0\"},\"peerDependenciesMeta\":{\"@rsbuild/core\":{\"optional\":true},\"vite\":{\"optional\":true}}},\"@tanstack/start-server-core@1.167.30\":{\"resolution\":{\"integrity\":\"sha512-GC0PXzYYSEwfAOC2NxGXFUyYvfbSjVoqnIrzJsyInKd8xQxGEQaVdrebbyx9TV5cj7A5e7EJcWAsf3G3wRDQBw==\"},\"engines\":{\"node\":\">=22.12.0\"}},\"@tanstack/start-storage-context@1.166.35\":{\"resolution\":{\"integrity\":\"sha512-ZKDkKiorJrKwfEHjatEwRHG7EP3raJPhh6CSl4CFmHW0naIvwaW5gQcxcT8IlHtoGDLYDAjBEcSr3MZyXgqmOA==\"},\"engines\":{\"node\":\">=22.12.0\"}},\"@tanstack/store@0.9.3\":{\"resolution\":{\"integrity\":\"sha512-8reSzl/qGWGGVKhBoxXPMWzATSbZLZFWhwBAFO9NAyp0TxzfBP0mIrGb8CP8KrQTmvzXlR/vFPPUrHTLBGyFyw==\"}},\"@tanstack/virtual-file-routes@1.161.7\":{\"resolution\":{\"integrity\":\"sha512-olW33+Cn+bsCsZKPwEGhlkqS6w3M2slFv11JIobdnCFKMLG97oAI2kWKdx5/zsywTL8flpnoIgaZZPlQTF
YhdQ==\"},\"engines\":{\"node\":\">=20.19\"},\"hasBin\":true},\"@testing-library/dom@10.4.1\":{\"resolution\":{\"integrity\":\"sha512-o4PXJQidqJl82ckFaXUeoAW+XysPLauYI43Abki5hABd853iMhitooc6znOnczgbTYmEP6U6/y1ZyKAIsvMKGg==\"},\"engines\":{\"node\":\">=18\"}},\"@testing-library/react@16.3.0\":{\"resolution\":{\"integrity\":\"sha512-kFSyxiEDwv1WLl2fgsq6pPBbw5aWKrsY2/noi1Id0TK0UParSF62oFQFGHXIyaG4pp2tEub/Zlel+fjjZILDsw==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"@testing-library/dom\":\"^10.0.0\",\"@types/react\":\"^18.0.0 || ^19.0.0\",\"@types/react-dom\":\"^18.0.0 || ^19.0.0\",\"react\":\"^18.0.0 || ^19.0.0\",\"react-dom\":\"^18.0.0 || ^19.0.0\"},\"peerDependenciesMeta\":{\"@types/react\":{\"optional\":true},\"@types/react-dom\":{\"optional\":true}}},\"@testing-library/user-event@14.6.1\":{\"resolution\":{\"integrity\":\"sha512-vq7fv0rnt+QTXgPxr5Hjc210p6YKq2kmdziLgnsZGgLJ9e6VAShx1pACLuRjd/AS/sr7phAR58OIIpf0LlmQNw==\"},\"engines\":{\"node\":\">=12\",\"npm\":\">=6\"},\"peerDependencies\":{\"@testing-library/dom\":\">=7.21.4\"}},\"@tootallnate/quickjs-emscripten@0.23.0\":{\"resolution\":{\"integrity\":\"sha512-C5Mc6rdnsaJDjO3UpGW/CQTHtCKaYlScZTly4JIu97Jxo/odCiH0ITnDXSJPTOrEKk/ycSZ0AOgTmkDtkOsvIA==\"}},\"@tybys/wasm-util@0.10.2\":{\"resolution\":{\"integrity\":\"sha512-RoBvJ2X0wuKlWFIjrwffGw1IqZHKQqzIchKaadZZfnNpsAYp2mM0h36JtPCjNDAHGgYez/15uMBpfGwchhiMgg==\"}},\"@tybys/wasm-util@0.9.0\":{\"resolution\":{\"integrity\":\"sha512-6+7nlbMVX/PVDCwaIQ8nTOPveOcFLSt8GcXdx8hD0bt39uWxYT88uXzqTd4fTvqta7oeUJqudepapKNt2DYJFw==\"}},\"@types/aria-query@5.0.4\":{\"resolution\":{\"integrity\":\"sha512-rfT93uj5s0PRL7EzccGMs3brplhcrghnDoV26NqKhCAS1hVo+WdNsPvE/yb6ilfr5hi2MEk6d5EWJTKdxg8jVw==\"}},\"@types/chai@5.2.2\":{\"resolution\":{\"integrity\":\"sha512-8kB30R7Hwqf40JPiKhVzodJs2Qc1ZJ5zuT3uzw5Hq/dhNCl3G3l83jfpdI1e20BP348+fV7VIL/+FxaXkqBmWg==\"}},\"@types/chai@5.2.3\":{\"resolution\":{\"integrity\":\"sha512-Mw558oeA9fFbv65/y4mHtXDs9bPnFMZAL/jxdPFUpOHHIXX91mcgEHbS5Lahr+pwZFR
8A7GQleRWeI6cGFC2UA==\"}},\"@types/cookie@0.6.0\":{\"resolution\":{\"integrity\":\"sha512-4Kh9a6B2bQciAhf7FSuMRRkUWecJgJu9nPnx3yzpsfXX/c50REIqpHY4C82bXP90qrLtXtkDxTZosYO3UpOwlA==\"}},\"@types/d3-array@3.2.1\":{\"resolution\":{\"integrity\":\"sha512-Y2Jn2idRrLzUfAKV2LyRImR+y4oa2AntrgID95SHJxuMUrkNXmanDSed71sRNZysveJVt1hLLemQZIady0FpEg==\"}},\"@types/d3-axis@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-pYeijfZuBd87T0hGn0FO1vQ/cgLk6E1ALJjfkC0oJ8cbwkZl3TpgS8bVBLZN+2jjGgg38epgxb2zmoGtSfvgMw==\"}},\"@types/d3-brush@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-nH60IZNNxEcrh6L1ZSMNA28rj27ut/2ZmI3r96Zd+1jrZD++zD3LsMIjWlvg4AYrHn/Pqz4CF3veCxGjtbqt7A==\"}},\"@types/d3-chord@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-LFYWWd8nwfwEmTZG9PfQxd17HbNPksHBiJHaKuY1XeqscXacsS2tyoo6OdRsjf+NQYeB6XrNL3a25E3gH69lcg==\"}},\"@types/d3-color@3.1.3\":{\"resolution\":{\"integrity\":\"sha512-iO90scth9WAbmgv7ogoq57O9YpKmFBbmoEoCHDB2xMBY0+/KVrqAaCDyCE16dUspeOvIxFFRI+0sEtqDqy2b4A==\"}},\"@types/d3-contour@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-BjzLgXGnCWjUSYGfH1cpdo41/hgdWETu4YxpezoztawmqsvCeep+8QGfiY6YbDvfgHz/DkjeIkkZVJavB4a3rg==\"}},\"@types/d3-delaunay@6.0.4\":{\"resolution\":{\"integrity\":\"sha512-ZMaSKu4THYCU6sV64Lhg6qjf1orxBthaC161plr5KuPHo3CNm8DTHiLw/5Eq2b6TsNP0W0iJrUOFscY6Q450Hw==\"}},\"@types/d3-dispatch@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-4fvZhzMeeuBJYZXRXrRIQnvUYfyXwYmLsdiN7XXmVNQKKw1cM8a5WdID0g1hVFZDqT9ZqZEY5pD44p24VS7iZQ==\"}},\"@types/d3-drag@3.0.7\":{\"resolution\":{\"integrity\":\"sha512-HE3jVKlzU9AaMazNufooRJ5ZpWmLIoc90A37WU2JMmeq28w1FQqCZswHZ3xR+SuxYftzHq6WU6KJHvqxKzTxxQ==\"}},\"@types/d3-dsv@3.0.7\":{\"resolution\":{\"integrity\":\"sha512-n6QBF9/+XASqcKK6waudgL0pf/S5XHPPI8APyMLLUHd8NqouBGLsU8MgtO7NINGtPBtk9Kko/W4ea0oAspwh9g==\"}},\"@types/d3-ease@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-NcV1JjO5oDzoK26oMzbILE6HW7uVXOHLQvHshBUW4UMdZGfiY6v5BeQwh9a9tCzv+CeefZQHJt5SRgK154RtiA==\"}},\"@types/d3-fetch@3.0.7\":{\"resolution\":{\"integrity\":\"sha512
-fTAfNmxSb9SOWNB9IoG5c8Hg6R+AzUHDRlsXsDZsNp6sxAEOP0tkP3gKkNSO/qmHPoBFTxNrjDprVHDQDvo5aA==\"}},\"@types/d3-force@3.0.10\":{\"resolution\":{\"integrity\":\"sha512-ZYeSaCF3p73RdOKcjj+swRlZfnYpK1EbaDiYICEEp5Q6sUiqFaFQ9qgoshp5CzIyyb/yD09kD9o2zEltCexlgw==\"}},\"@types/d3-format@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-fALi2aI6shfg7vM5KiR1wNJnZ7r6UuggVqtDA+xiEdPZQwy/trcQaHnwShLuLdta2rTymCNpxYTiMZX/e09F4g==\"}},\"@types/d3-geo@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-856sckF0oP/diXtS4jNsiQw/UuK5fQG8l/a9VVLeSouf1/PPbBE1i1W852zVwKwYCBkFJJB7nCFTbk6UMEXBOQ==\"}},\"@types/d3-hierarchy@3.1.7\":{\"resolution\":{\"integrity\":\"sha512-tJFtNoYBtRtkNysX1Xq4sxtjK8YgoWUNpIiUee0/jHGRwqvzYxkq0hGVbbOGSz+JgFxxRu4K8nb3YpG3CMARtg==\"}},\"@types/d3-interpolate@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-mgLPETlrpVV1YRJIglr4Ez47g7Yxjl1lj7YKsiMCb27VJH9W8NVM6Bb9d8kkpG/uAQS5AmbA48q2IAolKKo1MA==\"}},\"@types/d3-path@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-P2dlU/q51fkOc/Gfl3Ul9kicV7l+ra934qBFXCFhrZMOL6du1TM0pm1ThYvENukyOn5h9v+yMJ9Fn5JK4QozrQ==\"}},\"@types/d3-polygon@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-ZuWOtMaHCkN9xoeEMr1ubW2nGWsp4nIql+OPQRstu4ypeZ+zk3YKqQT0CXVe/PYqrKpZAi+J9mTs05TKwjXSRA==\"}},\"@types/d3-quadtree@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-oUzyO1/Zm6rsxKRHA1vH0NEDG58HrT5icx/azi9MF1TWdtttWl0UIUsjEQBBh+SIkrpd21ZjEv7ptxWys1ncsg==\"}},\"@types/d3-random@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-Imagg1vJ3y76Y2ea0871wpabqp613+8/r0mCLEBfdtqC7xMSfj9idOnmBYyMoULfHePJyxMAw3nWhJxzc+LFwQ==\"}},\"@types/d3-scale-chromatic@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-iWMJgwkK7yTRmWqRB5plb1kadXyQ5Sj8V/zYlFGMUBbIPKQScw+Dku9cAAMgJG+z5GYDoMjWGLVOvjghDEFnKQ==\"}},\"@types/d3-scale@4.0.8\":{\"resolution\":{\"integrity\":\"sha512-gkK1VVTr5iNiYJ7vWDI+yUFFlszhNMtVeneJ6lUTKPjprsvLLI9/tgEGiXJOnlINJA8FyA88gfnQsHbybVZrYQ==\"}},\"@types/d3-selection@3.0.11\":{\"resolution\":{\"integrity\":\"sha512-bhAXu23DJWsrI45xafYpkQ4NtcKMwWnAC/vKrd2l+nxMFuvOT3XMYTIj2opv8
vq8AO5Yh7Qac/nSeP/3zjTK0w==\"}},\"@types/d3-shape@3.1.7\":{\"resolution\":{\"integrity\":\"sha512-VLvUQ33C+3J+8p+Daf+nYSOsjB4GXp19/S/aGo60m9h1v6XaxjiT82lKVWJCfzhtuZ3yD7i/TPeC/fuKLLOSmg==\"}},\"@types/d3-time-format@4.0.3\":{\"resolution\":{\"integrity\":\"sha512-5xg9rC+wWL8kdDj153qZcsJ0FWiFt0J5RB6LYUNZjwSnesfblqrI/bJ1wBdJ8OQfncgbJG5+2F+qfqnqyzYxyg==\"}},\"@types/d3-time@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-yuzZug1nkAAaBlBBikKZTgzCeA+k1uy4ZFwWANOfKw5z5LRhV0gNA7gNkKm7HoK+HRN0wX3EkxGk0fpbWhmB7g==\"}},\"@types/d3-timer@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-Ps3T8E8dZDam6fUyNiMkekK3XUsaUEik+idO9/YjPtfj2qruF8tFBXS7XhtE4iIXBLxhmLjP3SXpLhVf21I9Lw==\"}},\"@types/d3-transition@3.0.9\":{\"resolution\":{\"integrity\":\"sha512-uZS5shfxzO3rGlu0cC3bjmMFKsXv+SmZZcgp0KD22ts4uGXp5EVYGzu/0YdwZeKmddhcAccYtREJKkPfXkZuCg==\"}},\"@types/d3-zoom@3.0.8\":{\"resolution\":{\"integrity\":\"sha512-iqMC4/YlFCSlO8+2Ii1GGGliCAY4XdeG748w5vQUbevlbDu0zSjH/+jojorQVBK/se0j6DUFNPBGSqD3YWYnDw==\"}},\"@types/d3@7.4.3\":{\"resolution\":{\"integrity\":\"sha512-lZXZ9ckh5R8uiFVt8ogUNf+pIrK4EsWrx2Np75WvF/eTpJ0FMHNhjXk8CKEx/+gpHbNQyJWehbFaTvqmHWB3ww==\"}},\"@types/debug@4.1.12\":{\"resolution\":{\"integrity\":\"sha512-vIChWdVG3LG1SMxEvI/AK+FWJthlrqlTu7fbrlywTkkaONwk/UAGaULXRlf8vkzFBLVm0zkMdCquhL5aOjhXPQ==\"}},\"@types/deep-eql@4.0.2\":{\"resolution\":{\"integrity\":\"sha512-c9h9dVVMigMPc4bwTvC5dxqtqJZwQPePsWjPlpSOnojbor6pGqdk541lfA7AqFQr5pB1BRdq0juY9db81BwyFw==\"}},\"@types/eslint-scope@3.7.7\":{\"resolution\":{\"integrity\":\"sha512-MzMFlSLBqNF2gcHWO0G1vP/YQyfvrxZ0bF+u7mzUdZ1/xK4A4sru+nraZz5i3iEIk1l1uyicaDVTB4QbbEkAYg==\"}},\"@types/eslint@9.6.1\":{\"resolution\":{\"integrity\":\"sha512-FXx2pKgId/WyYo2jXw63kk7/+TY7u7AziEJxJAnSFzHlqTAS3Ync6SvgYAN/k4/PQpnnVuzoMuVnByKK2qp0ag==\"}},\"@types/estree@1.0.8\":{\"resolution\":{\"integrity\":\"sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==\"}},\"@types/estree@1.0.9\":{\"resolution\":{\"integrity\":\"sh
a512-GhdPgy1el4/ImP05X05Uw4cw2/M93BCUmnEvWZNStlCzEKME4Fkk+YpoA5OiHNQmoS7Cafb8Xa3Pya8m1Qrzeg==\"}},\"@types/geojson@7946.0.15\":{\"resolution\":{\"integrity\":\"sha512-9oSxFzDCT2Rj6DfcHF8G++jxBKS7mBqXl5xrRW+Kbvjry6Uduya2iiwqHPhVXpasAVMBYKkEPGgKhd3+/HZ6xA==\"}},\"@types/hast@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-WPs+bbQw5aCj+x6laNGWLH3wviHtoCv/P3+otBhbOhJgG8qtpdAMlTCxLtsTWA7LH1Oh/bFCHsBn0TPS5m30EQ==\"}},\"@types/json-schema@7.0.15\":{\"resolution\":{\"integrity\":\"sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA==\"}},\"@types/mdast@4.0.4\":{\"resolution\":{\"integrity\":\"sha512-kGaNbPh1k7AFzgpud/gMdvIm5xuECykRR+JnWKQno9TAXVa6WIVCGTPvYGekIDL4uwCZQSYbUxNBSb1aUo79oA==\"}},\"@types/ms@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-GsCCIZDE/p3i96vtEqx+7dBUGXrc7zeSK3wwPHIaRThS+9OhWIXRqzs4d6k1SVU8g91DrNRWxWUGhp5KXQb2VA==\"}},\"@types/node@12.20.55\":{\"resolution\":{\"integrity\":\"sha512-J8xLz7q2OFulZ2cyGTLE1TbbZcjpno7FaN6zdJNrgAdrJ+DZzh/uFR6YrTb4C+nXakvud8Q4+rbhoIWlYQbUFQ==\"}},\"@types/node@20.19.39\":{\"resolution\":{\"integrity\":\"sha512-orrrD74MBUyK8jOAD/r0+lfa1I2MO6I+vAkmAWzMYbCcgrN4lCrmK52gRFQq/JRxfYPfonkr4b0jcY7Olqdqbw==\"}},\"@types/node@22.15.33\":{\"resolution\":{\"integrity\":\"sha512-wzoocdnnpSxZ+6CjW4ADCK1jVmd1S/J3ArNWfn8FDDQtRm8dkDg7TA+mvek2wNrfCgwuZxqEOiB9B1XCJ6+dbw==\"}},\"@types/node@22.19.17\":{\"resolution\":{\"integrity\":\"sha512-wGdMcf+vPYM6jikpS/qhg6WiqSV/OhG+jeeHT/KlVqxYfD40iYJf9/AE1uQxVWFvU7MipKRkRv8NSHiCGgPr8Q==\"}},\"@types/node@24.10.2\":{\"resolution\":{\"integrity\":\"sha512-WOhQTZ4G8xZ1tjJTvKOpyEVSGgOTvJAfDK3FNFgELyaTpzhdgHVHeqW8V+UJvzF5BT+/B54T/1S2K6gd9c7bbA==\"}},\"@types/react-dom@19.2.3\":{\"resolution\":{\"integrity\":\"sha512-jp2L/eY6fn+KgVVQAOqYItbF0VY/YApe5Mz2F0aykSO8gx31bYCZyvSeYxCHKvzHG5eZjc+zyaS5BrBWya2+kQ==\"},\"peerDependencies\":{\"@types/react\":\"^19.2.0\"}},\"@types/react@19.2.7\":{\"resolution\":{\"integrity\":\"sha512-MWtvHrGZLFttgeEj28VXHxpmwYbor/ATPYbBfSFZEIRK0ec
CFLl2Qo55z52Hss+UV9CRN7trSeq1zbgx7YDWWg==\"}},\"@types/sinonjs__fake-timers@8.1.5\":{\"resolution\":{\"integrity\":\"sha512-mQkU2jY8jJEF7YHjHvsQO8+3ughTL1mcnn96igfhONmR+fUPSKIkefQYpSe8bsly2Ep7oQbn/6VG5/9/0qcArQ==\"}},\"@types/statuses@2.0.6\":{\"resolution\":{\"integrity\":\"sha512-xMAgYwceFhRA2zY+XbEA7mxYbA093wdiW8Vu6gZPGWy9cmOyU9XesH1tNcEWsKFd5Vzrqx5T3D38PWx1FIIXkA==\"}},\"@types/tough-cookie@4.0.5\":{\"resolution\":{\"integrity\":\"sha512-/Ad8+nIOV7Rl++6f1BdKxFSMgmoqEoYbHRpPcx3JEfv8VRsQe9Z4mCXeJBzxs7mbHY/XOZZuXlRNfhpVPbs6ZA==\"}},\"@types/trusted-types@2.0.7\":{\"resolution\":{\"integrity\":\"sha512-ScaPdn1dQczgbl0QFTeTOmVHFULt394XJgOQNoyVhZ6r2vLnMLJfBPd53SB52T/3G36VI1/g2MZaX0cwDuXsfw==\"}},\"@types/unist@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-ko/gIFJRv177XgZsZcBwnqJN5x/Gien8qNOn0D5bQU/zAzVf9Zt3BlcUiLqhV9y4ARk0GbT3tnUiPNgnTXzc/Q==\"}},\"@types/whatwg-mimetype@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-c2AKvDT8ToxLIOUlN51gTiHXflsfIFisS4pO7pDPoKouJCESkhZnEy623gwP9laCy5lnLDAw1vAzu2vM2YLOrA==\"}},\"@types/which@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-113D3mDkZDjo+EeUEHCFy0qniNc1ZpecGiAU7WSo7YDoSzolZIQKpYFHrPpjkB2nuyahcKfrmLXeQlh7gqJYdw==\"}},\"@types/ws@8.18.1\":{\"resolution\":{\"integrity\":\"sha512-ThVF6DCVhA8kUGy+aazFQ4kXQ7E1Ty7A3ypFOe0IcJV8O/M511G99AW24irKrW56Wt44yG9+ij8FaqoBGkuBXg==\"}},\"@types/yauzl@2.10.3\":{\"resolution\":{\"integrity\":\"sha512-oJoftv0LSuaDZE3Le4DbKX+KS9G36NzOeSap90UIK0yMA/NhKJhqlSGtNDORNRaIbQfzjXDrQa0ytJ6mNRGz/Q==\"}},\"@ungap/structured-clone@1.2.1\":{\"resolution\":{\"integrity\":\"sha512-fEzPV3hSkSMltkw152tJKNARhOupqbH96MZWyRjNaYZOMIzbrTeQDG+MTc6Mr2pgzFQzFxAfmhGDNP5QK++2ZA==\"},\"deprecated\":\"Potential CWE-502 - Update to 1.3.1 or higher\"},\"@vitejs/plugin-react@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-l9X/E3cDb+xY3SWzlG1MOGt2usfEHGMNIaegaUGFsLkb3RCn/k8/TOXBcab+OndDI4TBtktT8/9BwwW8Vi9KUQ==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"peerDependencies\":{\"@rolldown/plugin-babel\":\"^0.1.7 || 
^0.2.0\",\"babel-plugin-react-compiler\":\"^1.0.0\",\"vite\":\"^8.0.0\"},\"peerDependenciesMeta\":{\"@rolldown/plugin-babel\":{\"optional\":true},\"babel-plugin-react-compiler\":{\"optional\":true}}},\"@vitest/browser@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-tJxiPrWmzH8a+w9nLKlQMzAKX/7VjFs50MWgcAj7p9XQ7AQ9/35fByFYptgPELyLw+0aixTnC4pUWV+APcZ/kw==\"},\"peerDependencies\":{\"playwright\":\"*\",\"safaridriver\":\"*\",\"vitest\":\"3.2.4\",\"webdriverio\":\"^7.0.0 || ^8.0.0 || ^9.0.0\"},\"peerDependenciesMeta\":{\"playwright\":{\"optional\":true},\"safaridriver\":{\"optional\":true},\"webdriverio\":{\"optional\":true}}},\"@vitest/browser@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-iCDGI8c4yg+xmjUg2VsygdAUSIIB4x5Rht/P68OXy1hPELKXHDkzh87lkuTcdYmemRChDkEpB426MmDjzC0ziA==\"},\"peerDependencies\":{\"vitest\":\"4.1.5\"}},\"@vitest/coverage-v8@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-EyF9SXU6kS5Ku/U82E259WSnvg6c8KTjppUncuNdm5QHpe17mwREHnjDzozC8x9MZ0xfBUFSaLkRv4TMA75ALQ==\"},\"peerDependencies\":{\"@vitest/browser\":\"3.2.4\",\"vitest\":\"3.2.4\"},\"peerDependenciesMeta\":{\"@vitest/browser\":{\"optional\":true}}},\"@vitest/coverage-v8@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-38C0/Ddb7HcRG0Z4/DUem8x57d2p9jYgp18mkaYswEOQBGsI1CG4f/hjm0ZCeaJfWhSZ4k7jgs29V1Zom7Ki9A==\"},\"peerDependencies\":{\"@vitest/browser\":\"4.1.5\",\"vitest\":\"4.1.5\"},\"peerDependenciesMeta\":{\"@vitest/browser\":{\"optional\":true}}},\"@vitest/expect@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-Io0yyORnB6sikFlt8QW5K7slY4OjqNX9jmJQ02QDda8lyM6B5oNgVWoSoKPac8/kgnCUzuHQKrSLtu/uOqqrig==\"}},\"@vitest/expect@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-8sCWUyckXXYvx4opfzVY03EOiYVxyNrHS5QxX3DAIi5dpJAAkyJezHCP77VMX4HKA2LDT/Jpfo8i2r5BE3GnQQ==\"}},\"@vitest/expect@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-PWBaRY5JoKuRnHlUHfpV/KohFylaDZTupcXN1H9vYryNLOnitSw60Mw9IAE2r67NbwwzBw/Cc/8q9BK3kIX8Kw==\"}},\"@vitest/mocker@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-46ryTE9RZO/rfDd7pEqF
l7etuyzekzEhUbTW3BvmeO/BcCMEgq59BKhek3dXDWgAj4oMK6OZi+vRr1wPW6qjEQ==\"},\"peerDependencies\":{\"msw\":\"^2.4.9\",\"vite\":\"^5.0.0 || ^6.0.0 || ^7.0.0-0\"},\"peerDependenciesMeta\":{\"msw\":{\"optional\":true},\"vite\":{\"optional\":true}}},\"@vitest/mocker@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-HhVd0MDnzzsgevnOWCBj5Otnzobjy5wLBe4EdeeFGv8luMsGcYqDuFRMcttKWZA5vVO8RFjexVovXvAM4JoJDQ==\"},\"peerDependencies\":{\"msw\":\"^2.4.9\",\"vite\":\"^6.0.0 || ^7.0.0-0\"},\"peerDependenciesMeta\":{\"msw\":{\"optional\":true},\"vite\":{\"optional\":true}}},\"@vitest/mocker@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-/x2EmFC4mT4NNzqvC3fmesuV97w5FC903KPmey4gsnJiMQ3Be1IlDKVaDaG8iqaLFHqJ2FVEkxZk5VmeLjIItw==\"},\"peerDependencies\":{\"msw\":\"^2.4.9\",\"vite\":\"^6.0.0 || ^7.0.0 || ^8.0.0\"},\"peerDependenciesMeta\":{\"msw\":{\"optional\":true},\"vite\":{\"optional\":true}}},\"@vitest/pretty-format@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-IVNZik8IVRJRTr9fxlitMKeJeXFFFN0JaB9PHPGQ8NKQbGpfjlTx9zO4RefN8gp7eqjNy8nyK3NZmBzOPeIxtA==\"}},\"@vitest/pretty-format@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-P24GK3GulZWC5tz87ux0m8OADrQIUVDPIjjj65vBXYG17ZeU3qD7r+MNZ1RNv4l8CGU2vtTRqixrOi9fYk/yKw==\"}},\"@vitest/pretty-format@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-7I3q6l5qr03dVfMX2wCo9FxwSJbPdwKjy2uu/YPpU3wfHvIL4QHwVRp57OfGrDFeUJ8/8QdfBKIV12FTtLn00g==\"}},\"@vitest/runner@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-oukfKT9Mk41LreEW09vt45f8wx7DordoWUZMYdY/cyAk7w5TWkTRCNZYF7sX7n2wB7jyGAl74OxgwhPgKaqDMQ==\"}},\"@vitest/runner@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-rpk9y12PGa22Jg6g5M3UVVnTS7+zycIGk9ZNGN+m6tZHKQb7jrP7/77WfZy13Y/EUDd52NDsLRQhYKtv7XfPQw==\"}},\"@vitest/runner@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-2D+o7Pr82IEO46YPpoA/YU0neeyr6FTerQb5Ro7BUnBuv6NQtT/kmVnczngiMEBhzgqz2UZYl5gArejsyERDSQ==\"}},\"@vitest/snapshot@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-dEYtS7qQP2CjU27QBC5oUOxLE/v5eLkGqPE0ZKEIDGMs4vKWe7IjgLOeauHsR0D5YuuycGRO5oSRXnwn
mA78fQ==\"}},\"@vitest/snapshot@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-PCiV0rcl7jKQjbgYqjtakly6T1uwv/5BQ9SwBLekVg/EaYeQFPiXcgrC2Y7vDMA8dM1SUEAEV82kgSQIlXNMvA==\"}},\"@vitest/snapshot@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-zypXEt4KH/XgKGPUz4eC2AvErYx0My5hfL8oDb1HzGFpEk1P62bxSohdyOmvz+d9UJwanI68MKwr2EquOaOgMQ==\"}},\"@vitest/spy@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-vAfasCOe6AIK70iP5UD11Ac4siNUNJ9i/9PZ3NKx07sG6sUxeag1LWdNrMWeKKYBLlzuK+Gn65Yd5nyL6ds+nw==\"}},\"@vitest/spy@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-cbQt3PTSD7P2OARdVW3qWER5EGq7PHlvE+QfzSC0lbwO+xnt7+XH06ZzFjFRgzUX//JmpxrCu92VdwvEPlWSNw==\"}},\"@vitest/spy@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-2lNOsh6+R2Idnf1TCZqSwYlKN2E/iDlD8sgU59kYVl+OMDmvldO1VDk39smRfpUNwYpNRVn3w4YfuC7KfbBnkQ==\"}},\"@vitest/utils@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-fB2V0JFrQSMsCo9HiSq3Ezpdv4iYaXRG1Sx8edX3MwxfyNn83mKiGzOcH+Fkxt4MHxr3y42fQi1oeAInqgX2QA==\"}},\"@vitest/utils@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-msMRKLMVLWygpK3u2Hybgi4MNjcYJvwTb0Ru09+fOyCXIgT5raYP041DRRdiJiI3k/2U6SEbAETB3YtBrUkCFA==\"}},\"@vitest/utils@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-76wdkrmfXfqGjueGgnb45ITPyUi1ycZ4IHgC2bhPDUfWHklY/q3MdLOAB+TF1e6xfl8NxNY0ZYaPCFNWSsw3Ug==\"}},\"@wdio/config@9.1.3\":{\"resolution\":{\"integrity\":\"sha512-fozjb5Jl26QqQoZ2lJc8uZwzK2iKKmIfNIdNvx5JmQt78ybShiPuWWgu/EcHYDvAiZwH76K59R1Gp4lNmmEDew==\"},\"engines\":{\"node\":\">=18.20.0\"}},\"@wdio/logger@8.38.0\":{\"resolution\":{\"integrity\":\"sha512-kcHL86RmNbcQP+Gq/vQUGlArfU6IIcbbnNp32rRIraitomZow+iEoc519rdQmSVusDozMS5DZthkgDdxK+vz6Q==\"},\"engines\":{\"node\":\"^16.13 || 
>=18\"}},\"@wdio/logger@9.1.3\":{\"resolution\":{\"integrity\":\"sha512-cumRMK/gE1uedBUw3WmWXOQ7HtB6DR8EyKQioUz2P0IJtRRpglMBdZV7Svr3b++WWawOuzZHMfbTkJQmaVt8Gw==\"},\"engines\":{\"node\":\">=18.20.0\"}},\"@wdio/protocols@9.2.0\":{\"resolution\":{\"integrity\":\"sha512-lSdKCwLtqMxSIW+cl8au21GlNkvmLNGgyuGYdV/lFdWflmMYH1zusruM6Km6Kpv2VUlWySjjGknYhe7XVTOeMw==\"}},\"@wdio/repl@9.0.8\":{\"resolution\":{\"integrity\":\"sha512-3iubjl4JX5zD21aFxZwQghqC3lgu+mSs8c3NaiYYNCC+IT5cI/8QuKlgh9s59bu+N3gG988jqMJeCYlKuUv/iw==\"},\"engines\":{\"node\":\">=18.20.0\"}},\"@wdio/types@9.1.3\":{\"resolution\":{\"integrity\":\"sha512-oQrzLQBqn/+HXSJJo01NEfeKhzwuDdic7L8PDNxv5ySKezvmLDYVboQfoSDRtpAdfAZCcxuU9L4Jw7iTf6WV3g==\"},\"engines\":{\"node\":\">=18.20.0\"}},\"@wdio/utils@9.1.3\":{\"resolution\":{\"integrity\":\"sha512-dYeOzq9MTh8jYRZhzo/DYyn+cKrhw7h0/5hgyXkbyk/wHwF/uLjhATPmfaCr9+MARSEdiF7wwU8iRy/V0jfsLg==\"},\"engines\":{\"node\":\">=18.20.0\"}},\"@webassemblyjs/ast@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-nuBEDgQfm1ccRp/8bCQrx1frohyufl4JlbMMZ4P1wpeOfDhF6FQkxZJ1b/e+PLwr6X1Nhw6OLme5usuBWYBvuQ==\"}},\"@webassemblyjs/floating-point-hex-parser@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-6oXyTOzbKxGH4steLbLNOu71Oj+C8Lg34n6CqRvqfS2O71BxY6ByfMDRhBytzknj9yGUPVJ1qIKhRlAwO1AovA==\"}},\"@webassemblyjs/helper-api-error@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-U56GMYxy4ZQCbDZd6JuvvNV/WFildOjsaWD3Tzzvmw/mas3cXzRJPMjP83JqEsgSbyrmaGjBfDtV7KDXV9UzFQ==\"}},\"@webassemblyjs/helper-buffer@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-jyH7wtcHiKssDtFPRB+iQdxlDf96m0E39yb0k5uJVhFGleZFoNw1c4aeIcVUPPbXUVJ94wwnMOAqUHyzoEPVMA==\"}},\"@webassemblyjs/helper-numbers@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-FE8aCmS5Q6eQYcV3gI35O4J789wlQA+7JrqTTpJqn5emA4U2hvwJmvFRC0HODS+3Ye6WioDklgd6scJ3+PLnEA==\"}},\"@webassemblyjs/helper-wasm-bytecode@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-3QbLKy93F0EAIXLh0ogEVR6rOubA9AoZ+WRYhNbFyuB70j3dRdwH9g+qXhLAO0kiYGlg3TxDV+I4rQTr/YNXkA==\"}},\"@webassembl
yjs/helper-wasm-section@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-ds5mXEqTJ6oxRoqjhWDU83OgzAYjwsCV8Lo/N+oRsNDmx/ZDpqalmrtgOMkHwxsG0iI//3BwWAErYRHtgn0dZw==\"}},\"@webassemblyjs/ieee754@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-4LtOzh58S/5lX4ITKxnAK2USuNEvpdVV9AlgGQb8rJDHaLeHciwG4zlGr0j/SNWlr7x3vO1lDEsuePvtcDNCkw==\"}},\"@webassemblyjs/leb128@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-Lde1oNoIdzVzdkNEAWZ1dZ5orIbff80YPdHx20mrHwHrVNNTjNr8E3xz9BdpcGqRQbAEa+fkrCb+fRFTl/6sQw==\"}},\"@webassemblyjs/utf8@1.13.2\":{\"resolution\":{\"integrity\":\"sha512-3NQWGjKTASY1xV5m7Hr0iPeXD9+RDobLll3T9d2AO+g3my8xy5peVyjSag4I50mR1bBSN/Ct12lo+R9tJk0NZQ==\"}},\"@webassemblyjs/wasm-edit@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-RNJUIQH/J8iA/1NzlE4N7KtyZNHi3w7at7hDjvRNm5rcUXa00z1vRz3glZoULfJ5mpvYhLybmVcwcjGrC1pRrQ==\"}},\"@webassemblyjs/wasm-gen@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-AmomSIjP8ZbfGQhumkNvgC33AY7qtMCXnN6bL2u2Js4gVCg8fp735aEiMSBbDR7UQIj90n4wKAFUSEd0QN2Ukg==\"}},\"@webassemblyjs/wasm-opt@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-PTcKLUNvBqnY2U6E5bdOQcSM+oVP/PmrDY9NzowJjislEjwP/C4an2303MCVS2Mg9d3AJpIGdUFIQQWbPds0Sw==\"}},\"@webassemblyjs/wasm-parser@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-JLBl+KZ0R5qB7mCnud/yyX08jWFw5MsoalJ1pQ4EdFlgj9VdXKGuENGsiCIjegI1W7p91rUlcB/LB5yRJKNTcQ==\"}},\"@webassemblyjs/wast-printer@1.14.1\":{\"resolution\":{\"integrity\":\"sha512-kPSSXE6De1XOR820C90RIo2ogvZG+c3KiHzqUoO/F34Y2shGzesfqv7o57xrxovZJH/MetF5UjroJ/R/3isoiw==\"}},\"@xtuc/ieee754@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-DX8nKgqcGwsc0eJSqYt5lwP4DH5FlHnmuWWBRy7X0NcaGR0ZtuyeESgMwTYVEtxmsNGY+qit4QYT/MIYTOTPeA==\"}},\"@xtuc/long@4.2.2\":{\"resolution\":{\"integrity\":\"sha512-NuHqBY1PB/D8xU6s/thBgOAiAP7HOYDQ32+BFZILJ8ivkUkAHQnWfn6WhL79Owj1qmUnoN/YPhktdIoucipkAQ==\"}},\"@yarnpkg/lockfile@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-GpSwvyXOcOOlV70vbnzjj4fW5xW/FdUF6nQEt1ENy7m4ZCczi1+/buVUPAqmGfqznsORNFzUMjctTIp8a9tuCQ==\"}},\"@yarnpkg/parser
s@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-/HcYgtUSiJiot/XWGLOlGxPYUG65+/31V8oqk17vZLW1xlCoR4PampyePljOxY2n8/3jz9+tIFzICsyGujJZoA==\"},\"engines\":{\"node\":\">=18.12.0\"}},\"@zip.js/zip.js@2.8.26\":{\"resolution\":{\"integrity\":\"sha512-RQ4h9F6DOiHxpdocUDrOl6xBM+yOtz+LkUol47AVWcfebGBDpZ7w7Xvz9PS24JgXvLGiXXzSAfdCdVy1tPlaFA==\"},\"engines\":{\"bun\":\">=0.7.0\",\"deno\":\">=1.0.0\",\"node\":\">=18.0.0\"}},\"@zkochan/js-yaml@0.0.7\":{\"resolution\":{\"integrity\":\"sha512-nrUSn7hzt7J6JWgWGz78ZYI8wj+gdIJdk0Ynjpp8l+trkn58Uqsf6RYrYkEK+3X18EX+TNdtJI0WxAtc+L84SQ==\"},\"hasBin\":true},\"abort-controller@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-h8lQ8tacZYnR3vNQTgibj+tODHI5/+l06Au2Pcriv/Gmet0eaj4TwWH41sO9wnHDiQsEj19q0drzdWdeAHtweg==\"},\"engines\":{\"node\":\">=6.5\"}},\"acorn@8.16.0\":{\"resolution\":{\"integrity\":\"sha512-UVJyE9MttOsBQIDKw1skb9nAwQuR5wuGD3+82K6JgJlm/Y+KI92oNsMNGZCYdDsVtRHSak0pcV5Dno5+4jh9sw==\"},\"engines\":{\"node\":\">=0.4.0\"},\"hasBin\":true},\"agent-base@7.1.3\":{\"resolution\":{\"integrity\":\"sha512-jRR5wdylq8CkOe6hei19GGZnxM6rBGwFl3Bg0YItGDimvjGtAvdZk4Pu6Cl4u4Igsws4a1fd1Vq3ezrhn4KmFw==\"},\"engines\":{\"node\":\">= 14\"}},\"agent-base@7.1.4\":{\"resolution\":{\"integrity\":\"sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ==\"},\"engines\":{\"node\":\">= 
14\"}},\"ajv-formats@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-Wx0Kx52hxE7C18hkMEggYlEifqWZtYaRgouJor+WMdPnQyEK13vgEWyVNup7SoeeoLMsr4kf5h6dOW11I15MUA==\"},\"peerDependencies\":{\"ajv\":\"^8.0.0\"},\"peerDependenciesMeta\":{\"ajv\":{\"optional\":true}}},\"ajv-keywords@5.1.0\":{\"resolution\":{\"integrity\":\"sha512-YCS/JNFAUyr5vAuhk1DWm1CBxRHW9LbJ2ozWeemrIqpbsqKjHVxYPyi5GC0rjZIT5JxJ3virVTS8wk4i/Z+krw==\"},\"peerDependencies\":{\"ajv\":\"^8.8.2\"}},\"ajv@8.17.1\":{\"resolution\":{\"integrity\":\"sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g==\"}},\"ajv@8.20.0\":{\"resolution\":{\"integrity\":\"sha512-Thbli+OlOj+iMPYFBVBfJ3OmCAnaSyNn4M1vz9T6Gka5Jt9ba/HIR56joy65tY6kx/FCF5VXNB819Y7/GUrBGA==\"}},\"ansi-colors@4.1.3\":{\"resolution\":{\"integrity\":\"sha512-/6w/C21Pm1A7aZitlI5Ni/2J6FFQN8i1Cvz3kHABAAbw93v/NlvKdVOqz7CCWz/3iv/JplRSEEZ83XION15ovw==\"},\"engines\":{\"node\":\">=6\"}},\"ansi-regex@5.0.1\":{\"resolution\":{\"integrity\":\"sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==\"},\"engines\":{\"node\":\">=8\"}},\"ansi-regex@6.1.0\":{\"resolution\":{\"integrity\":\"sha512-7HSX4QQb4CspciLpVFwyRe79O3xsIZDDLER21kERQ71oaPodF8jL725AgJMFAYbooIqolJoRLuM81SpeUkpkvA==\"},\"engines\":{\"node\":\">=12\"}},\"ansi-regex@6.2.2\":{\"resolution\":{\"integrity\":\"sha512-Bq3SmSpyFHaWjPk8If9yc6svM8c56dB5BAtW4Qbw5jHTwwXXcTLoRMkpDJp6VL0XzlWaCHTXrkFURMYmD0sLqg==\"},\"engines\":{\"node\":\">=12\"}},\"ansi-styles@4.3.0\":{\"resolution\":{\"integrity\":\"sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==\"},\"engines\":{\"node\":\">=8\"}},\"ansi-styles@5.2.0\":{\"resolution\":{\"integrity\":\"sha512-Cxwpt2SfTzTtXcfOlzGEee8O+c+MmUgGrNiBcXnuWxuFJHe6a5Hz7qwhwe5OgaSYI0IJvkLqWX1ASG+cJOkEiA==\"},\"engines\":{\"node\":\">=10\"}},\"ansi-styles@6.2.1\":{\"resolution\":{\"integrity\":\"sha512-bN798gFfQX+viw3R7yrGWRqnrN2oRkEkUjjl4JNn4E8GxxbjtG3FbrEIIY3l8/hrwUwIe
CZvi4QuOTP4MErVug==\"},\"engines\":{\"node\":\">=12\"}},\"ansis@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-BGcItUBWSMRgOCe+SVZJ+S7yTRG0eGt9cXAHev72yuGcY23hnLA7Bky5L/xLyPINoSN95geovfBkqoTlNZYa7w==\"},\"engines\":{\"node\":\">=14\"}},\"anymatch@3.1.3\":{\"resolution\":{\"integrity\":\"sha512-KMReFUr0B4t+D+OBkjR3KYqvocp2XaSzO55UcB6mgQMd3KbcE+mWTyvVV7D/zsdEbNnV6acZUutkiHQXvTr1Rw==\"},\"engines\":{\"node\":\">= 8\"}},\"archiver-utils@5.0.2\":{\"resolution\":{\"integrity\":\"sha512-wuLJMmIBQYCsGZgYLTy5FIB2pF6Lfb6cXMSF8Qywwk3t20zWnAi7zLcQFdKQmIB8wyZpY5ER38x08GbwtR2cLA==\"},\"engines\":{\"node\":\">= 14\"}},\"archiver@7.0.1\":{\"resolution\":{\"integrity\":\"sha512-ZcbTaIqJOfCc03QwD468Unz/5Ir8ATtvAHsK+FdXbDIbGfihqh9mrvdcYunQzqn4HrvWWaFyaxJhGZagaJJpPQ==\"},\"engines\":{\"node\":\">= 14\"}},\"argparse@1.0.10\":{\"resolution\":{\"integrity\":\"sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg==\"}},\"argparse@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==\"}},\"aria-query@5.3.0\":{\"resolution\":{\"integrity\":\"sha512-b0P0sZPKtyu8HkeRAfCq0IfURZK+SuwMjY1UXGBU27wpAiTwQAIlq56IbIO+ytk/JjS1fMR14ee5WBBfKi5J6A==\"}},\"aria-query@5.3.2\":{\"resolution\":{\"integrity\":\"sha512-COROpnaoap1E2F000S62r6A60uHZnmlvomhfyT2DlTcrY1OrBKn2UhH7qn5wTC9zMvD0AY7csdPSNwKP+7WiQw==\"},\"engines\":{\"node\":\">= 
0.4\"}},\"array-union@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-HGyxoOTYUyCM6stUe6EJgnd4EoewAI7zMdfqO+kGjnlZmBDz/cR5pf8r/cR4Wq60sL/p0IkcjUEEPwS3GFrIyw==\"},\"engines\":{\"node\":\">=8\"}},\"assertion-error@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-Izi8RQcffqCeNVgFigKli1ssklIbpHnCYc6AknXGYoB6grJqyeby7jv12JUQgmTAnIDnbck1uxksT4dzN3PWBA==\"},\"engines\":{\"node\":\">=12\"}},\"ast-types@0.13.4\":{\"resolution\":{\"integrity\":\"sha512-x1FCFnFifvYDDzTaLII71vG5uvDwgtmDTEVWAxrgeiR8VjMONcCXJx7E+USjDtHlwFmt9MysbqgF9b9Vjr6w+w==\"},\"engines\":{\"node\":\">=4\"}},\"ast-v8-to-istanbul@0.3.4\":{\"resolution\":{\"integrity\":\"sha512-cxrAnZNLBnQwBPByK4CeDaw5sWZtMilJE/Q3iDA0aamgaIVNDF9T6K2/8DfYDZEejZ2jNnDrG9m8MY72HFd0KA==\"}},\"ast-v8-to-istanbul@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-1fSfIwuDICFA4LKkCzRPO7F0hzFf0B7+Xqrl27ynQaa+Rh0e1Es0v6kWHPott3lU10AyAr7oKHa65OppjLn3Rg==\"}},\"async@3.2.6\":{\"resolution\":{\"integrity\":\"sha512-htCUDlxyyCLMgaM3xXg0C0LW2xqfuQ6p05pCEIsXuyQ+a1koYKTuBMzRNwmybfLgvJDMd0r1LTn4+E0Ti6C2AA==\"}},\"asynckit@0.4.0\":{\"resolution\":{\"integrity\":\"sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==\"}},\"axios@1.11.0\":{\"resolution\":{\"integrity\":\"sha512-1Lx3WLFQWm3ooKDYZD1eXmoGO9fxYQjrycfHFC8P0sCfQVXyROp0p9PFWBehewBOdCwHc+f/b8I0fMto5eSfwA==\"}},\"b4a@1.8.1\":{\"resolution\":{\"integrity\":\"sha512-aiqre1Nr0B/6DgE2N5vwTc+2/oQZ4Wh1t4NznYY4E00y8LCt6NqdRv81so00oo27D8MVKTpUa/MwUUtBLXCoDw==\"},\"peerDependencies\":{\"react-native-b4a\":\"*\"},\"peerDependenciesMeta\":{\"react-native-b4a\":{\"optional\":true}}},\"babel-dead-code-elimination@1.0.12\":{\"resolution\":{\"integrity\":\"sha512-GERT7L2TiYcYDtYk1IpD+ASAYXjKbLTDPhBtYj7X1NuRMDTMtAx9kyBenub1Ev41lo91OHCKdmP+egTDmfQ7Ig==\"}},\"bail@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-0xO6mYd7JB2YesxDKplafRpsiOzPt9V02ddPCLbY1xYGPOX24NTyN50qnUxgCPcSoYMhKpAuBTjQoRZCAkUDRw==\"}},\"balanced-match@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-3oSeU
O0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==\"}},\"bare-events@2.8.2\":{\"resolution\":{\"integrity\":\"sha512-riJjyv1/mHLIPX4RwiK+oW9/4c3TEUeORHKefKAKnZ5kyslbN+HXowtbaVEqt4IMUB7OXlfixcs6gsFeo/jhiQ==\"},\"peerDependencies\":{\"bare-abort-controller\":\"*\"},\"peerDependenciesMeta\":{\"bare-abort-controller\":{\"optional\":true}}},\"bare-fs@4.7.1\":{\"resolution\":{\"integrity\":\"sha512-WDRsyVN52eAx/lBamKD6uyw8H4228h/x0sGGGegOamM2cd7Pag88GfMQalobXI+HaEUxpCkbKQUDOQqt9wawRw==\"},\"engines\":{\"bare\":\">=1.16.0\"},\"peerDependencies\":{\"bare-buffer\":\"*\"},\"peerDependenciesMeta\":{\"bare-buffer\":{\"optional\":true}}},\"bare-os@3.9.1\":{\"resolution\":{\"integrity\":\"sha512-6M5XjcnsygQNPMCMPXSK379xrJFiZ/AEMNBmFEmQW8d/789VQATvriyi5r0HYTL9TkQ26rn3kgdTG3aisbrXkQ==\"},\"engines\":{\"bare\":\">=1.14.0\"}},\"bare-path@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-tyfW2cQcB5NN8Saijrhqn0Zh7AnFNsnczRcuWODH0eYAXBsJ5gVxAUuNr7tsHSC6IZ77cA0SitzT+s47kot8Mw==\"}},\"bare-stream@2.13.1\":{\"resolution\":{\"integrity\":\"sha512-Vp0cnjYyrEC4whYTymQ+YZi6pBpfiICZO3cfRG8sy67ZNWe951urv1x4eW1BKNngw3U+3fPYb5JQvHbCtxH7Ow==\"},\"peerDependencies\":{\"bare-abort-controller\":\"*\",\"bare-buffer\":\"*\",\"bare-events\":\"*\"},\"peerDependenciesMeta\":{\"bare-abort-controller\":{\"optional\":true},\"bare-buffer\":{\"optional\":true},\"bare-events\":{\"optional\":true}}},\"bare-url@2.4.3\":{\"resolution\":{\"integrity\":\"sha512-Kccpc7ACfXaxfeInfqKcZtW4pT5YBn1mesc4sCsun6sRwtbJ4h+sNOaksUpYEJUKfN65YWC6Bw2OJEFiKxq8nQ==\"}},\"base64-js@1.5.1\":{\"resolution\":{\"integrity\":\"sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==\"}},\"baseline-browser-mapping@2.10.27\":{\"resolution\":{\"integrity\":\"sha512-zEs/ufmZoUd7WftKpKyXaT6RFxpQ5Qm9xytKRHvJfxFV9DFJkZph9RvJ1LcOUi0Z1ZVijMte65JbILeV+8QQEA==\"},\"engines\":{\"node\":\">=6.0.0\"},\"hasBin\":true},\"basic-ftp@5.3.1\":{\"resolution\":{\"integrity\":\"sha512-bopVNp6u
gyA150DDuZfPFdt1KZ5a94ZDiwX4hMgZDzF+GttD80lEy8kj98kbyhLXnPvhtIo93mdnLIjpCAeeOw==\"},\"engines\":{\"node\":\">=10.0.0\"}},\"better-path-resolve@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-pbnl5XzGBdrFU/wT4jqmJVPn2B6UHPBOhzMQkY/SPUPB6QtUXtmBHBIwCbXJol93mOpGMnQyP/+BB19q04xj7g==\"},\"engines\":{\"node\":\">=4\"}},\"better-sqlite3@12.9.0\":{\"resolution\":{\"integrity\":\"sha512-wqUv4Gm3toFpHDQmaKD4QhZm3g1DjUBI0yzS4UBl6lElUmXFYdTQmmEDpAFa5o8FiFiymURypEnfVHzILKaxqQ==\"},\"engines\":{\"node\":\"20.x || 22.x || 23.x || 24.x || 25.x\"}},\"bidi-js@1.0.3\":{\"resolution\":{\"integrity\":\"sha512-RKshQI1R3YQ+n9YJz2QQ147P66ELpa1FQEg20Dk8oW9t2KgLbpDLLp9aGZ7y8WHSshDknG0bknqGw5/tyCs5tw==\"}},\"binary-extensions@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-Ceh+7ox5qe7LJuLHoY0feh3pHuUDHAcRUeyL2VYghZwfpkNIy/+8Ocg0a3UuSoYzavmylwuLWQOf3hl0jjMMIw==\"},\"engines\":{\"node\":\">=8\"}},\"bindings@1.5.0\":{\"resolution\":{\"integrity\":\"sha512-p2q/t/mhvuOj/UeLlV6566GD/guowlr0hHxClI0W9m7MWYkL1F0hLo+0Aexs9HSPCtR1SXQ0TD3MMKrXZajbiQ==\"}},\"bl@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w==\"}},\"blake3-wasm@2.1.5\":{\"resolution\":{\"integrity\":\"sha512-F1+K8EbfOZE49dtoPtmxUQrpXaBIl3ICvasLh+nJta0xkz+9kF/7uet9fLnwKqhDrmj6g+6K3Tw9yQPUg2ka5g==\"}},\"boolbase@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-JZOSA7Mo9sNGB8+UjSgzdLtokWAky1zbztM3WRLCbZ70/3cTANmQmOdR7y2g+J0e2WXywy1yS468tY+IruqEww==\"}},\"brace-expansion@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==\"}},\"brace-expansion@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-TN1kCZAgdgweJhWWpgKYrQaMNHcDULHkWwQIspdtjV4Y5aurRdZpjAqn6yX3FPqTA9ngHCc4hJxMAMgGfve85w==\"}},\"braces@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==\"},\"engines\":{\"node\":\">=8\"}},\"browserslist@
4.25.3\":{\"resolution\":{\"integrity\":\"sha512-cDGv1kkDI4/0e5yON9yM5G/0A5u8sf5TnmdX5C9qHzI9PPu++sQ9zjm1k9NiOrf3riY4OkK0zSGqfvJyJsgCBQ==\"},\"engines\":{\"node\":\"^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7\"},\"hasBin\":true},\"browserslist@4.28.2\":{\"resolution\":{\"integrity\":\"sha512-48xSriZYYg+8qXna9kwqjIVzuQxi+KYWp2+5nCYnYKPTr0LvD89Jqk2Or5ogxz0NUMfIjhh2lIUX/LyX9B4oIg==\"},\"engines\":{\"node\":\"^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7\"},\"hasBin\":true},\"buffer-builder@0.2.0\":{\"resolution\":{\"integrity\":\"sha512-7VPMEPuYznPSoR21NE1zvd2Xna6c/CloiZCfcMXR1Jny6PjX0N4Nsa38zcBFo/FMK+BlA+FLKbJCQ0i2yxp+Xg==\"}},\"buffer-crc32@0.2.13\":{\"resolution\":{\"integrity\":\"sha512-VO9Ht/+p3SN7SKWqcrgEzjGbRSJYTx+Q1pTQC0wrWqHx0vpJraQ6GtHx8tvcg1rlK1byhU5gccxgOgj7B0TDkQ==\"}},\"buffer-crc32@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-Db1SbgBS/fg/392AblrMJk97KggmvYhr4pB5ZIMTWtaivCPMWLkmb7m21cJvpvgK+J3nsU2CmmixNBZx4vFj/w==\"},\"engines\":{\"node\":\">=8.0.0\"}},\"buffer-from@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ==\"}},\"buffer@5.7.1\":{\"resolution\":{\"integrity\":\"sha512-EHcyIPBQ4BSGlvjB16k5KgAJ27CIsHY/2JBmCRReo48y9rQ3MaUzWX3KVlBa4U7MyX02HdVj0K7C3WaB3ju7FQ==\"}},\"buffer@6.0.3\":{\"resolution\":{\"integrity\":\"sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA==\"}},\"cac@6.7.14\":{\"resolution\":{\"integrity\":\"sha512-b6Ilus+c3RrdDk+JhLKUAQfzzgLEPy6wcXqS7f/xe1EETvsDP6GORG7SFuOs6cID5YkqchW/LXZbX5bc8j7ZcQ==\"},\"engines\":{\"node\":\">=8\"}},\"call-bind-apply-helpers@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==\"},\"engines\":{\"node\":\">= 
0.4\"}},\"caniuse-lite@1.0.30001737\":{\"resolution\":{\"integrity\":\"sha512-BiloLiXtQNrY5UyF0+1nSJLXUENuhka2pzy2Fx5pGxqavdrxSCW4U6Pn/PoG3Efspi2frRbHpBV2XsrPE6EDlw==\"}},\"caniuse-lite@1.0.30001792\":{\"resolution\":{\"integrity\":\"sha512-hVLMUZFgR4JJ6ACt1uEESvQN1/dBVqPAKY0hgrV70eN3391K6juAfTjKZLKvOMsx8PxA7gsY1/tLMMTcfFLLpw==\"}},\"ccount@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-eyrF0jiFpY+3drT6383f1qhkbGsLSifNAjA61IUjZjmLCWjItY6LB9ft9YhoDgwfmclB2zhu51Lc7+95b8NRAg==\"}},\"chai@5.3.3\":{\"resolution\":{\"integrity\":\"sha512-4zNhdJD/iOjSH0A05ea+Ke6MU5mmpQcbQsSOkgdaUMJ9zTlDTD/GYlwohmIE2u0gaxHYiVHEn1Fw9mZ/ktJWgw==\"},\"engines\":{\"node\":\">=18\"}},\"chai@6.2.2\":{\"resolution\":{\"integrity\":\"sha512-NUPRluOfOiTKBKvWPtSD4PhFvWCqOi0BGStNWs57X9js7XGTprSmFoz5F0tWhR4WPjNeR9jXqdC7/UpSJTnlRg==\"},\"engines\":{\"node\":\">=18\"}},\"chalk@4.1.2\":{\"resolution\":{\"integrity\":\"sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==\"},\"engines\":{\"node\":\">=10\"}},\"chalk@5.6.2\":{\"resolution\":{\"integrity\":\"sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA==\"},\"engines\":{\"node\":\"^12.17.0 || ^14.13 || 
>=16.0.0\"}},\"character-entities-html4@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-1v7fgQRj6hnSwFpq1Eu0ynr/CDEw0rXo2B61qXrLNdHZmPKgb7fqS1a2JwF0rISo9q77jDI8VMEHoApn8qDoZA==\"}},\"character-entities-legacy@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-RpPp0asT/6ufRm//AJVwpViZbGM/MkjQFxJccQRHmISF/22NBtsHqAWmL+/pmkPWoIUJdWyeVleTl1wydHATVQ==\"}},\"character-entities@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-shx7oQ0Awen/BRIdkjkvz54PnEEI/EjwXDSIZp86/KKdbafHh1Df/RYGBhn4hbe2+uKC9FnT5UCEdyPz3ai9hQ==\"}},\"chardet@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-bNFETTG/pM5ryzQ9Ad0lJOTa6HWD/YsScAR3EnCPZRPlQh77JocYktSHOUHelyhm8IARL+o4c4F1bP5KVOjiRA==\"}},\"check-error@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-OAlb+T7V4Op9OwdkjmguYRqncdlx5JiofwOAUkmTF+jNdHwzTaTs4sRAGpzLF3oOz5xAyDGrPgeIDFQmDOTiJw==\"},\"engines\":{\"node\":\">= 16\"}},\"cheerio-select@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-9v9kG0LvzrlcungtnJtpGNxY+fzECQKhK4EGJX2vByejiMX84MFNQw4UxPJl3bFbTMw+Dfs37XaIkCwTZfLh4g==\"}},\"cheerio@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-IkxPpb5rS/d1IiLbHMgfPuS0FgiWTtFIm/Nj+2woXDLTZ7fOT2eqzgYbdMlLweqlHbsZjxEChoVK+7iph7jyQg==\"},\"engines\":{\"node\":\">=20.18.1\"}},\"cheerio@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-WDrybc/gKFpTYQutKIK6UvfcuxijIZfMfXaYm8NMsPQxSYvf+13fXUJ4rztGGbJcBQ/GF55gvrZ0Bc0bj/mqvg==\"},\"engines\":{\"node\":\">=20.18.1\"}},\"chevrotain-allstar@0.3.1\":{\"resolution\":{\"integrity\":\"sha512-b7g+y9A0v4mxCW1qUhf3BSVPg+/NvGErk/dOkrDaHA0nQIQGAtrOjlX//9OQtRlSCy+x9rfB5N8yC71lH1nvMw==\"},\"peerDependencies\":{\"chevrotain\":\"^11.0.0\"}},\"chevrotain@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-ci2iJH6LeIkvP9eJW6gpueU8cnZhv85ELY8w8WiFtNjMHA5ad6pQLaJo9mEly/9qUyCpvqX8/POVUTf18/HFdw==\"}},\"chokidar@3.6.0\":{\"resolution\":{\"integrity\":\"sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw==\"},\"engines\":{\"node\":\">= 
8.10.0\"}},\"chownr@1.1.4\":{\"resolution\":{\"integrity\":\"sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg==\"}},\"chownr@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-bIomtDF5KGpdogkLd9VspvFzk9KfpyyGlS8YFVZl7TGPBHL5snIOnxeshwVgPteQ9b4Eydl+pVbIyE1DcvCWgQ==\"},\"engines\":{\"node\":\">=10\"}},\"chrome-trace-event@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-rNjApaLzuwaOTjCiT8lSDdGN1APCiqkChLMJxJPWLunPAt5fy8xgU9/jNOchV84wfIxrA0lRQB7oCT8jrn/wrQ==\"},\"engines\":{\"node\":\">=6.0\"}},\"ci-info@3.9.0\":{\"resolution\":{\"integrity\":\"sha512-NIxF55hv4nSqQswkAeiOi1r83xy8JldOFDTWiug55KBu9Jnblncd2U6ViHmYgHf01TPZS77NJBhBMKdWj9HQMQ==\"},\"engines\":{\"node\":\">=8\"}},\"cli-cursor@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-I/zHAwsKf9FqGoXM4WWRACob9+SNukZTd94DWF57E4toouRulbCxcUh6RKUEOQlYTHJnzkPMySvPNaaSLNfLZw==\"},\"engines\":{\"node\":\">=8\"}},\"cli-spinners@2.6.1\":{\"resolution\":{\"integrity\":\"sha512-x/5fWmGMnbKQAaNwN+UZlV79qBLM9JFnJuJ03gIi5whrob0xV0ofNVHy9DhwGdsMJQc2OKv0oGmLzvaqvAVv+g==\"},\"engines\":{\"node\":\">=6\"}},\"cli-spinners@2.9.2\":{\"resolution\":{\"integrity\":\"sha512-ywqV+5MmyL4E7ybXgKys4DugZbX0FC6LnwrhjuykIjnK9k8OQacQ7axGKnjDXWNhns0xot3bZI5h55H8yo9cJg==\"},\"engines\":{\"node\":\">=6\"}},\"cli-width@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-ouuZd4/dm2Sw5Gmqy6bGyNNNe1qt9RpmxveLSO7KcgsTnU7RXfsw+/bukWGo1abgBiMAic068rclZsO4IWmmxQ==\"},\"engines\":{\"node\":\">= 
12\"}},\"cliui@8.0.1\":{\"resolution\":{\"integrity\":\"sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ==\"},\"engines\":{\"node\":\">=12\"}},\"clone@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-JQHZ2QMW6l3aH/j6xCqQThY/9OH4D/9ls34cgkUBiEeocRTU04tHfKPBsUK1PqZCUQM7GiA0IIXJSuXHI64Kbg==\"},\"engines\":{\"node\":\">=0.8\"}},\"color-convert@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==\"},\"engines\":{\"node\":\">=7.0.0\"}},\"color-name@1.1.4\":{\"resolution\":{\"integrity\":\"sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==\"}},\"colorjs.io@0.5.2\":{\"resolution\":{\"integrity\":\"sha512-twmVoizEW7ylZSN32OgKdXRmo1qg+wT5/6C3xu5b9QsWzSFAhHLn2xd8ro0diCsKfCj1RdaTP/nrcW+vAoQPIw==\"}},\"combined-stream@1.0.8\":{\"resolution\":{\"integrity\":\"sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==\"},\"engines\":{\"node\":\">= 0.8\"}},\"comma-separated-tokens@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-Fu4hJdvzeylCfQPp9SGWidpzrMs7tTrlu6Vb8XGaRGck8QSNZJJp538Wrb60Lax4fPwR64ViY468OIUTbRlGZg==\"}},\"commander@2.20.3\":{\"resolution\":{\"integrity\":\"sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==\"}},\"commander@7.2.0\":{\"resolution\":{\"integrity\":\"sha512-QrWXB+ZQSVPmIWIhtEO9H+gwHaMGYiF5ChvoJ+K9ZGHG/sVsa6yiesAD1GC/x46sET00Xlwo1u49RVVVzvcSkw==\"},\"engines\":{\"node\":\">= 10\"}},\"commander@8.3.0\":{\"resolution\":{\"integrity\":\"sha512-OkTL9umf+He2DZkUq8f8J9of7yL6RJKI24dVITBmNfZBmri9zYZQrKkuXiKhyfPSu8tUhnVBB1iKXevvnlR4Ww==\"},\"engines\":{\"node\":\">= 12\"}},\"commander@9.5.0\":{\"resolution\":{\"integrity\":\"sha512-KRs7WVDKg86PWiuAqhDrAQnTXZKraVcCc6vFdL14qrZ/DcWwuRo7VoiYXalXO7S5GKpqYiVEwCbgFDfxNHKJBQ==\"},\"engines\":{\"node\":\"^12.20.0 || 
>=14\"}},\"compress-commons@6.0.2\":{\"resolution\":{\"integrity\":\"sha512-6FqVXeETqWPoGcfzrXb37E50NP0LXT8kAMu5ooZayhWWdgEY4lBEEcbQNXtkuKQsGduxiIcI4gOTsxTmuq/bSg==\"},\"engines\":{\"node\":\">= 14\"}},\"confbox@0.1.8\":{\"resolution\":{\"integrity\":\"sha512-RMtmw0iFkeR4YV+fUOSucriAQNb9g8zFR52MWCtl+cCZOFRNL6zeB395vPzFhEjjn4fMxXudmELnl/KF/WrK6w==\"}},\"confbox@0.2.2\":{\"resolution\":{\"integrity\":\"sha512-1NB+BKqhtNipMsov4xI/NnhCKp9XG9NamYp5PVm9klAT0fsrNPjaFICsCFhNhwZJKNh7zB/3q8qXz0E9oaMNtQ==\"}},\"convert-source-map@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-Kvp459HrV2FEJ1CAsi1Ku+MY3kasH19TFykTz2xWmMeq6bk2NU3XXvfJ+Q61m0xktWwt+1HSYf3JZsTms3aRJg==\"}},\"cookie-es@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-UaXxwISYJPTr9hwQxMFYZ7kNhSXboMXP+Z3TRX6f1/NyaGPfuNUZOWP1pUEb75B2HjfklIYLVRfWiFZJyC6Npg==\"}},\"cookie@0.7.2\":{\"resolution\":{\"integrity\":\"sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w==\"},\"engines\":{\"node\":\">= 0.6\"}},\"cookie@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-9Kr/j4O16ISv8zBBhJoi4bXOYNTkFLOqSL3UDB0njXxCXNezjeyVrJyGOWtgfs/q2km1gwBcfH8q1yEGoMYunA==\"},\"engines\":{\"node\":\">=18\"}},\"core-js@3.46.0\":{\"resolution\":{\"integrity\":\"sha512-vDMm9B0xnqqZ8uSBpZ8sNtRtOdmfShrvT6h2TuQGLs0Is+cR0DYbj/KWP6ALVNbWPpqA/qPLoOuppJN07humpA==\"}},\"core-util-is@1.0.3\":{\"resolution\":{\"integrity\":\"sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==\"}},\"cose-base@1.0.3\":{\"resolution\":{\"integrity\":\"sha512-s9whTXInMSgAp/NVXVNuVxVKzGH2qck3aQlVHxDCdAEPgtMKwc4Wq6/QKhgdEdgbLSi9rBTAcPoRa6JpiG4ksg==\"}},\"cose-base@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-AzlgcsCbUMymkADOJtQm3wO9S3ltPfYOFD5033keQn9NJzIbtnZj+UdBJe7DYml/8TdbtHJW3j58SOnKhWY/5g==\"}},\"crc-32@1.2.2\":{\"resolution\":{\"integrity\":\"sha512-ROmzCKrTnOwybPcJApAA6WBWij23HVfGVNKqqrZpuyZOHqK2CwHSvpGuyt/UNNvaIjEd8X5IFGp4Mh+Ie1IHJQ==\"},\"engines\":{\"node\":\">=0.8\"},\"hasBin\":true},\"
crc32-stream@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-piICUB6ei4IlTv1+653yq5+KoqfBYmj9bw6LqXoOneTMDXk5nM1qt12mFW1caG3LlJXEKW1Bp0WggEmIfQB34g==\"},\"engines\":{\"node\":\">= 14\"}},\"cross-spawn@7.0.6\":{\"resolution\":{\"integrity\":\"sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==\"},\"engines\":{\"node\":\">= 8\"}},\"css-select@5.1.0\":{\"resolution\":{\"integrity\":\"sha512-nwoRF1rvRRnnCqqY7updORDsuqKzqYJ28+oSMaJMMgOauh3fvwHqMS7EZpIPqK8GL+g9mKxF1vP/ZjSeNjEVHg==\"}},\"css-shorthand-properties@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-C2AugXIpRGQTxaCW0N7n5jD/p5irUmCrwl03TrnMFBHDbdq44CFWR2zO7rK9xPN4Eo3pUxC4vQzQgbIpzrD1PQ==\"}},\"css-tree@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-0eW44TGN5SQXU1mWSkKwFstI/22X2bG1nYzZTYMAWjylYURhse752YgbE4Cx46AC+bAvI+/dYTPRk1LqSUnu6w==\"},\"engines\":{\"node\":\"^10 || ^12.20.0 || ^14.13.0 || >=15.0.0\"}},\"css-value@0.0.1\":{\"resolution\":{\"integrity\":\"sha512-FUV3xaJ63buRLgHrLQVlVgQnQdR4yqdLGaDu7g8CQcWjInDfM9plBTPI9FRfpahju1UBSaMckeb2/46ApS/V1Q==\"}},\"css-what@6.1.0\":{\"resolution\":{\"integrity\":\"sha512-HTUrgRJ7r4dsZKU6GjmpfRK1O76h97Z8MfS1G0FozR+oF2kG6Vfe8JE6zwrkbxigziPHinCJ+gCPjA9EaBDtRw==\"},\"engines\":{\"node\":\">= 
6\"}},\"cssstyle@4.3.1\":{\"resolution\":{\"integrity\":\"sha512-ZgW+Jgdd7i52AaLYCriF8Mxqft0gD/R9i9wi6RWBhs1pqdPEzPjym7rvRKi397WmQFf3SlyUsszhw+VVCbx79Q==\"},\"engines\":{\"node\":\">=18\"}},\"cssstyle@5.3.4\":{\"resolution\":{\"integrity\":\"sha512-KyOS/kJMEq5O9GdPnaf82noigg5X5DYn0kZPJTaAsCUaBizp6Xa1y9D4Qoqf/JazEXWuruErHgVXwjN5391ZJw==\"},\"engines\":{\"node\":\">=20\"}},\"csstype@3.2.3\":{\"resolution\":{\"integrity\":\"sha512-z1HGKcYy2xA8AGQfwrn0PAy+PB7X/GSj3UVJW9qKyn43xWa+gl5nXmU4qqLMRzWVLFC8KusUX8T/0kCiOYpAIQ==\"}},\"cytoscape-cose-bilkent@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-wgQlVIUJF13Quxiv5e1gstZ08rnZj2XaLHGoFMYXz7SkNfCDOOteKBE6SYRfA9WxxI/iBc3ajfDoc6hb/MRAHQ==\"},\"peerDependencies\":{\"cytoscape\":\"^3.2.0\"}},\"cytoscape-fcose@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-ki1/VuRIHFCzxWNrsshHYPs6L7TvLu3DL+TyIGEsRcvVERmxokbf5Gdk7mFxZnTdiGtnA4cfSmjZJMviqSuZrQ==\"},\"peerDependencies\":{\"cytoscape\":\"^3.2.0\"}},\"cytoscape@3.30.4\":{\"resolution\":{\"integrity\":\"sha512-OxtlZwQl1WbwMmLiyPSEBuzeTIQnwZhJYYWFzZ2PhEHVFwpeaqNIkUzSiso00D98qk60l8Gwon2RP304d3BJ1A==\"},\"engines\":{\"node\":\">=0.10\"}},\"d3-array@2.12.1\":{\"resolution\":{\"integrity\":\"sha512-B0ErZK/66mHtEsR1TkPEEkwdy+WDesimkM5gpZr5Dsg54BiTA5RXtYW5qTLIAcekaS9xfZrzBLF/OAkB3Qn1YQ==\"}},\"d3-array@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-tdQAmyA18i4J7wprpYq8ClcxZy3SC31QMeByyCFyRt7BVHdREQZ5lpzoe5mFEYZUWe+oq8HBvk9JjpibyEV4Jg==\"},\"engines\":{\"node\":\">=12\"}},\"d3-axis@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-IH5tgjV4jE/GhHkRV0HiVYPDtvfjHQlQfJHs0usq7M30XcSBvOotpmH1IgkcXsO/5gEQZD43B//fc7SRT5S+xw==\"},\"engines\":{\"node\":\">=12\"}},\"d3-brush@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-ALnjWlVYkXsVIGlOsuWH1+3udkYFI48Ljihfnh8FZPF2QS9o+PzGLBslO0PjzVoHLZ2KCVgAM8NVkXPJB2aNnQ==\"},\"engines\":{\"node\":\">=12\"}},\"d3-chord@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-VE5S6TNa+j8msksl7HwjxMHDM2yNK3XCkusIlpX5kwauBfXuyLAtNg9jCp/iHH61tgI4sb6R/EIMWCqEIdjT/g==\"},\"engines\"
:{\"node\":\">=12\"}},\"d3-color@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-zg/chbXyeBtMQ1LbD/WSoW2DpC3I0mpmPdW+ynRTj/x2DAWYrIY7qeZIHidozwV24m4iavr15lNwIwLxRmOxhA==\"},\"engines\":{\"node\":\">=12\"}},\"d3-contour@4.0.2\":{\"resolution\":{\"integrity\":\"sha512-4EzFTRIikzs47RGmdxbeUvLWtGedDUNkTcmzoeyg4sP/dvCexO47AaQL7VKy/gul85TOxw+IBgA8US2xwbToNA==\"},\"engines\":{\"node\":\">=12\"}},\"d3-delaunay@6.0.4\":{\"resolution\":{\"integrity\":\"sha512-mdjtIZ1XLAM8bm/hx3WwjfHt6Sggek7qH043O8KEjDXN40xi3vx/6pYSVTwLjEgiXQTbvaouWKynLBiUZ6SK6A==\"},\"engines\":{\"node\":\">=12\"}},\"d3-dispatch@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-rzUyPU/S7rwUflMyLc1ETDeBj0NRuHKKAcvukozwhshr6g6c5d8zh4c2gQjY2bZ0dXeGLWc1PF174P2tVvKhfg==\"},\"engines\":{\"node\":\">=12\"}},\"d3-drag@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-pWbUJLdETVA8lQNJecMxoXfH6x+mO2UQo8rSmZ+QqxcbyA3hfeprFgIT//HW2nlHChWeIIMwS2Fq+gEARkhTkg==\"},\"engines\":{\"node\":\">=12\"}},\"d3-dsv@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-UG6OvdI5afDIFP9w4G0mNq50dSOsXHJaRE8arAS5o9ApWnIElp8GZw1Dun8vP8OyHOZ/QJUKUJwxiiCCnUwm+Q==\"},\"engines\":{\"node\":\">=12\"},\"hasBin\":true},\"d3-ease@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-wR/XK3D3XcLIZwpbvQwQ5fK+8Ykds1ip7A2Txe0yxncXSdq1L9skcG7blcedkOX+ZcgxGAmLX1FrRGbADwzi0w==\"},\"engines\":{\"node\":\">=12\"}},\"d3-fetch@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-kpkQIM20n3oLVBKGg6oHrUchHM3xODkTzjMoj7aWQFq5QEM+R6E4WkzT5+tojDY7yjez8KgCBRoj4aEr99Fdqw==\"},\"engines\":{\"node\":\">=12\"}},\"d3-force@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-zxV/SsA+U4yte8051P4ECydjD/S+qeYtnaIyAs9tgHCqfguma/aAQDjo85A9Z6EKhBirHRJHXIgJUlffT4wdLg==\"},\"engines\":{\"node\":\">=12\"}},\"d3-format@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-YyUI6AEuY/Wpt8KWLgZHsIU86atmikuoOmCfommt0LYHiQSPjvX2AcFc38PX0CBpr2RCyZhjex+NS/LPOv6YqA==\"},\"engines\":{\"node\":\">=12\"}},\"d3-geo@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-637ln3gXKXOwhalDzinUgY83KzNWZRKbYubaG+fGVuc/dxO64RRljtCTnf5e
cMyE1RIdtqpkVcq0IbtU2S8j2Q==\"},\"engines\":{\"node\":\">=12\"}},\"d3-hierarchy@3.1.2\":{\"resolution\":{\"integrity\":\"sha512-FX/9frcub54beBdugHjDCdikxThEqjnR93Qt7PvQTOHxyiNCAlvMrHhclk3cD5VeAaq9fxmfRp+CnWw9rEMBuA==\"},\"engines\":{\"node\":\">=12\"}},\"d3-interpolate@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-3bYs1rOD33uo8aqJfKP3JWPAibgw8Zm2+L9vBKEHJ2Rg+viTR7o5Mmv5mZcieN+FRYaAOWX5SJATX6k1PWz72g==\"},\"engines\":{\"node\":\">=12\"}},\"d3-path@1.0.9\":{\"resolution\":{\"integrity\":\"sha512-VLaYcn81dtHVTjEHd8B+pbe9yHWpXKZUC87PzoFmsFrJqgFwDe/qxfp5MlfsfM1V5E/iVt0MmEbWQ7FVIXh/bg==\"}},\"d3-path@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-p3KP5HCf/bvjBSSKuXid6Zqijx7wIfNW+J/maPs+iwR35at5JCbLUT0LzF1cnjbCHWhqzQTIN2Jpe8pRebIEFQ==\"},\"engines\":{\"node\":\">=12\"}},\"d3-polygon@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-3vbA7vXYwfe1SYhED++fPUQlWSYTTGmFmQiany/gdbiWgU/iEyQzyymwL9SkJjFFuCS4902BSzewVGsHHmHtXg==\"},\"engines\":{\"node\":\">=12\"}},\"d3-quadtree@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-04xDrxQTDTCFwP5H6hRhsRcb9xxv2RzkcsygFzmkSIOJy3PeRJP7sNk3VRIbKXcog561P9oU0/rVH6vDROAgUw==\"},\"engines\":{\"node\":\">=12\"}},\"d3-random@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-FXMe9GfxTxqd5D6jFsQ+DJ8BJS4E/fT5mqqdjovykEB2oFbTMDVdg1MGFxfQW+FBOGoB++k8swBrgwSHT1cUXQ==\"},\"engines\":{\"node\":\">=12\"}},\"d3-sankey@0.12.3\":{\"resolution\":{\"integrity\":\"sha512-nQhsBRmM19Ax5xEIPLMY9ZmJ/cDvd1BG3UVvt5h3WRxKg5zGRbvnteTyWAbzeSvlh3tW7ZEmq4VwR5mB3tutmQ==\"}},\"d3-scale-chromatic@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-A3s5PWiZ9YCXFye1o246KoscMWqf8BsD9eRiJ3He7C9OBaxKhAd5TFCdEx/7VbKtxxTsu//1mMJFrEt572cEyQ==\"},\"engines\":{\"node\":\">=12\"}},\"d3-scale@4.0.2\":{\"resolution\":{\"integrity\":\"sha512-GZW464g1SH7ag3Y7hXjf8RoUuAFIqklOAq3MRl4OaWabTFJY9PN/E1YklhXLh+OQ3fM9yS2nOkCoS+WLZ6kvxQ==\"},\"engines\":{\"node\":\">=12\"}},\"d3-selection@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-fmTRWbNMmsmWq6xJV8D19U/gw/bwrHfNXxrIN+HfZgnzqTHp9jOmKMhsTUjXOJnZOdZY9Q
28y4yebKzqDKlxlQ==\"},\"engines\":{\"node\":\">=12\"}},\"d3-shape@1.3.7\":{\"resolution\":{\"integrity\":\"sha512-EUkvKjqPFUAZyOlhY5gzCxCeI0Aep04LwIRpsZ/mLFelJiUfnK56jo5JMDSE7yyP2kLSb6LtF+S5chMk7uqPqw==\"}},\"d3-shape@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-SaLBuwGm3MOViRq2ABk3eLoxwZELpH6zhl3FbAoJ7Vm1gofKx6El1Ib5z23NUEhF9AsGl7y+dzLe5Cw2AArGTA==\"},\"engines\":{\"node\":\">=12\"}},\"d3-time-format@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-dJxPBlzC7NugB2PDLwo9Q8JiTR3M3e4/XANkreKSUxF8vvXKqm1Yfq4Q5dl8budlunRVlUUaDUgFt7eA8D6NLg==\"},\"engines\":{\"node\":\">=12\"}},\"d3-time@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-VqKjzBLejbSMT4IgbmVgDjpkYrNWUYJnbCGo874u7MMKIWsILRX+OpX/gTk8MqjpT1A/c6HY2dCA77ZN0lkQ2Q==\"},\"engines\":{\"node\":\">=12\"}},\"d3-timer@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-ndfJ/JxxMd3nw31uyKoY2naivF+r29V+Lc0svZxe1JvvIRmi8hUsrMvdOwgS1o6uBHmiz91geQ0ylPP0aj1VUA==\"},\"engines\":{\"node\":\">=12\"}},\"d3-transition@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-ApKvfjsSR6tg06xrL434C0WydLr7JewBB3V+/39RMHsaXTOG0zmt/OAXeng5M5LBm0ojmxJrpomQVZ1aPvBL4w==\"},\"engines\":{\"node\":\">=12\"},\"peerDependencies\":{\"d3-selection\":\"2 - 3\"}},\"d3-zoom@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-b8AmV3kfQaqWAuacbPuNbL6vahnOJflOhexLzMMNLga62+/nh0JzvJ0aO/5a5MVgUFGS7Hu1P9P03o3fJkDCyw==\"},\"engines\":{\"node\":\">=12\"}},\"d3@7.9.0\":{\"resolution\":{\"integrity\":\"sha512-e1U46jVP+w7Iut8Jt8ri1YsPOvFpg46k+K8TpCb0P+zjCkjkPnV7WzfDJzMHy1LnA+wj5pLT1wjO901gLXeEhA==\"},\"engines\":{\"node\":\">=12\"}},\"dagre-d3-es@7.0.13\":{\"resolution\":{\"integrity\":\"sha512-efEhnxpSuwpYOKRm/L5KbqoZmNNukHa/Flty4Wp62JRvgH2ojwVgPgdYyr4twpieZnyRDdIH7PY2mopX26+j2Q==\"}},\"data-uri-to-buffer@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A==\"},\"engines\":{\"node\":\">= 
12\"}},\"data-uri-to-buffer@6.0.2\":{\"resolution\":{\"integrity\":\"sha512-7hvf7/GW8e86rW0ptuwS3OcBGDjIi6SZva7hCyWC0yYry2cOPmLIjXAUHI6DK2HsnwJd9ifmt57i8eV2n4YNpw==\"},\"engines\":{\"node\":\">= 14\"}},\"data-urls@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-ZYP5VBHshaDAiVZxjbRVcFJpc+4xGgT0bK3vzy1HLN8jTO975HEbuYzZJcHoQEY5K1a0z8YayJkyVETa08eNTg==\"},\"engines\":{\"node\":\">=18\"}},\"data-urls@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-BnBS08aLUM+DKamupXs3w2tJJoqU+AkaE/+6vQxi/G/DPmIZFJJp9Dkb1kM03AZx8ADehDUZgsNxju3mPXZYIA==\"},\"engines\":{\"node\":\">=20\"}},\"dayjs@1.11.19\":{\"resolution\":{\"integrity\":\"sha512-t5EcLVS6QPBNqM2z8fakk/NKel+Xzshgt8FFKAn+qwlD1pzZWxh0nVCrvFK7ZDb6XucZeF9z8C7CBWTRIVApAw==\"}},\"debug@4.4.1\":{\"resolution\":{\"integrity\":\"sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ==\"},\"engines\":{\"node\":\">=6.0\"},\"peerDependencies\":{\"supports-color\":\"*\"},\"peerDependenciesMeta\":{\"supports-color\":{\"optional\":true}}},\"debug@4.4.3\":{\"resolution\":{\"integrity\":\"sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==\"},\"engines\":{\"node\":\">=6.0\"},\"peerDependencies\":{\"supports-color\":\"*\"},\"peerDependenciesMeta\":{\"supports-color\":{\"optional\":true}}},\"decamelize@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-G7Cqgaelq68XHJNGlZ7lrNQyhZGsFqpwtGFexqUv4IQdjKoSYF7ipZ9UuTJZUSQXFj/XaoBLuEVIVqr8EJngEQ==\"},\"engines\":{\"node\":\"^12.20.0 || ^14.13.1 || 
>=16.0.0\"}},\"decimal.js@10.6.0\":{\"resolution\":{\"integrity\":\"sha512-YpgQiITW3JXGntzdUmyUR1V812Hn8T1YVXhCu+wO3OpS4eU9l4YdD3qjyiKdV6mvV29zapkMeD390UVEf2lkUg==\"}},\"decode-named-character-reference@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-O8x12RzrUF8xyVcY0KJowWsmaJxQbmy0/EtnNtHRpsOcT7dFk5W598coHqBVpmWo1oQQfsCqfCmkZN5DJrZVdg==\"}},\"decompress-response@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-aW35yZM6Bb/4oJlZncMH2LCoZtJXTRxES17vE3hoRiowU2kWHaJKFkSBDnDR+cm9J+9QhXmREyIfv0pji9ejCQ==\"},\"engines\":{\"node\":\">=10\"}},\"deep-eql@5.0.2\":{\"resolution\":{\"integrity\":\"sha512-h5k/5U50IJJFpzfL6nO9jaaumfjO/f2NjK/oYB2Djzm4p9L+3T9qWpZqZ2hAbLPuuYq9wrU08WQyBTL5GbPk5Q==\"},\"engines\":{\"node\":\">=6\"}},\"deep-extend@0.6.0\":{\"resolution\":{\"integrity\":\"sha512-LOHxIOaPYdHlJRtCQfDIVZtfw/ufM8+rVj649RIHzcm/vGwQRXFt6OPqIFWsm2XEMrNIEtWR64sY1LEKD2vAOA==\"},\"engines\":{\"node\":\">=4.0.0\"}},\"deepmerge-ts@7.1.5\":{\"resolution\":{\"integrity\":\"sha512-HOJkrhaYsweh+W+e74Yn7YStZOilkoPb6fycpwNLKzSPtruFs48nYis0zy5yJz1+ktUhHxoRDJ27RQAWLIJVJw==\"},\"engines\":{\"node\":\">=16.0.0\"}},\"defaults@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-eFuaLoy/Rxalv2kr+lqMlUnrDWV+3j4pljOIJgLIhI058IQfWJ7vXhyEIHu+HtC738klGALYxOKDO0bQP3tg8A==\"}},\"define-lazy-prop@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-Ds09qNh8yw3khSjiJjiUInaGX9xlqZDY7JVryGxdxV7NPeuqQfplOpQ66yJFZut3jLa5zOwkXw1g9EI2uKh4Og==\"},\"engines\":{\"node\":\">=8\"}},\"degenerator@5.0.1\":{\"resolution\":{\"integrity\":\"sha512-TllpMR/t0M5sqCXfj85i4XaAzxmS5tVA16dqvdkMwGmzI+dXLXnw3J+3Vdv7VKw+ThlTMboK6i9rnZ6Nntj5CQ==\"},\"engines\":{\"node\":\">= 
14\"}},\"delaunator@5.0.1\":{\"resolution\":{\"integrity\":\"sha512-8nvh+XBe96aCESrGOqMp/84b13H9cdKbG5P2ejQCh4d4sK9RL4371qou9drQjMhvnPmhWl5hnmqbEE0fXr9Xnw==\"}},\"delayed-stream@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ==\"},\"engines\":{\"node\":\">=0.4.0\"}},\"dequal@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA==\"},\"engines\":{\"node\":\">=6\"}},\"detect-indent@6.1.0\":{\"resolution\":{\"integrity\":\"sha512-reYkTUJAZb9gUuZ2RvVCNhVHdg62RHnJ7WJl8ftMi4diZ6NWlciOzQN88pUhSELEwflJht4oQDv0F0BMlwaYtA==\"},\"engines\":{\"node\":\">=8\"}},\"detect-libc@2.1.2\":{\"resolution\":{\"integrity\":\"sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==\"},\"engines\":{\"node\":\">=8\"}},\"devlop@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-RWmIqhcFf1lRYBvNmr7qTNuyCt/7/ns2jbpp1+PalgE/rDQcBT0fioSMUpJ93irlUhC5hrg4cYqe6U+0ImW0rA==\"}},\"diff@8.0.2\":{\"resolution\":{\"integrity\":\"sha512-sSuxWU5j5SR9QQji/o2qMvqRNYRDOcBTgsJ/DeCf4iSN4gW+gNMXM7wFIP+fdXZxoNiAnHUTGjCr+TSWXdRDKg==\"},\"engines\":{\"node\":\">=0.3.1\"}},\"dir-glob@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==\"},\"engines\":{\"node\":\">=8\"}},\"dom-accessibility-api@0.5.16\":{\"resolution\":{\"integrity\":\"sha512-X7BJ2yElsnOJ30pZF4uIIDfBEVgF4XEBxL9Bxhy6dnrm5hkzqmsWHGTiHqRiITNhMyFLyAiWndIJP7Z1NTteDg==\"}},\"dom-serializer@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-wIkAryiqt/nV5EQKqQpo3SToSOV9J0DnbJqwK7Wv/Trc92zIAYZ4FlMu+JPFW1DfGFt81ZTCGgDEabffXeLyJg==\"}},\"domelementtype@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-OLETBj6w0OsagBwdXnPdN0cnMfF9opN69co+7ZrbfPGrdpPVNBUj02spi6B1N7wChLQiPn4CSH/zJvXw56gmHw==\"}},\"domhandler@5.0.3\":{\"resolution\":{\"integrity\":\"sha512-cgwlv/1iFQiFnU96XXgROh8xTeetsnJi
DsTc7TYCLFd9+/WNkIqPTxiM/8pSd8VIrhXGTf1Ny1q1hquVqDJB5w==\"},\"engines\":{\"node\":\">= 4\"}},\"dompurify@3.3.1\":{\"resolution\":{\"integrity\":\"sha512-qkdCKzLNtrgPFP1Vo+98FRzJnBRGe4ffyCea9IwHB1fyxPOeNTHpLKYGd4Uk9xvNoH0ZoOjwZxNptyMwqrId1Q==\"}},\"domutils@3.2.2\":{\"resolution\":{\"integrity\":\"sha512-6kZKyUajlDuqlHKVX1w7gyslj9MPIXzIFiz/rGu35uC1wMi+kMhQwGhl4lt9unC9Vb9INnY9Z3/ZA3+FhASLaw==\"}},\"dotenv-expand@11.0.7\":{\"resolution\":{\"integrity\":\"sha512-zIHwmZPRshsCdpMDyVsqGmgyP0yT8GAgXUnkdAoJisxvf33k7yO6OuoKmcTGuXPWSsm8Oh88nZicRLA9Y0rUeA==\"},\"engines\":{\"node\":\">=12\"}},\"dotenv@10.0.0\":{\"resolution\":{\"integrity\":\"sha512-rlBi9d8jpv9Sf1klPjNfFAuWDjKLwTIJJ/VxtoTwIR6hnZxcEOQCZg2oIL3MWBYw5GpUDKOEnND7LXTbIpQ03Q==\"},\"engines\":{\"node\":\">=10\"}},\"dotenv@16.4.7\":{\"resolution\":{\"integrity\":\"sha512-47qPchRCykZC03FhkYAhrvwU4xDBFIj1QPqaarj6mdM/hgUzfPHcpkHJOn3mJAufFeeAxAzeGsr5X0M4k6fLZQ==\"},\"engines\":{\"node\":\">=12\"}},\"dotenv@16.5.0\":{\"resolution\":{\"integrity\":\"sha512-m/C+AwOAr9/W1UOIZUo232ejMNnJAJtYQjUbHoNTBNTJSvqzzDh7vnrei3o3r3m9blf6ZoDkvcw0VmozNRFJxg==\"},\"engines\":{\"node\":\">=12\"}},\"dunder-proto@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==\"},\"engines\":{\"node\":\">= 
0.4\"}},\"eastasianwidth@0.2.0\":{\"resolution\":{\"integrity\":\"sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA==\"}},\"edge-paths@3.0.5\":{\"resolution\":{\"integrity\":\"sha512-sB7vSrDnFa4ezWQk9nZ/n0FdpdUuC6R1EOrlU3DL+bovcNFK28rqu2emmAUjujYEJTWIgQGqgVVWUZXMnc8iWg==\"},\"engines\":{\"node\":\">=14.0.0\"}},\"edgedriver@5.6.1\":{\"resolution\":{\"integrity\":\"sha512-3Ve9cd5ziLByUdigw6zovVeWJjVs8QHVmqOB0sJ0WNeVPcwf4p18GnxMmVvlFmYRloUwf5suNuorea4QzwBIOA==\"},\"hasBin\":true},\"electron-to-chromium@1.5.211\":{\"resolution\":{\"integrity\":\"sha512-IGBvimJkotaLzFnwIVgW9/UD/AOJ2tByUmeOrtqBfACSbAw5b1G0XpvdaieKyc7ULmbwXVx+4e4Be8pOPBrYkw==\"}},\"electron-to-chromium@1.5.352\":{\"resolution\":{\"integrity\":\"sha512-9wHk8x6dyuimoe18EdiDPWKExNdxYqo4fn4FwOVVper6RxT3cmpBwBkWWfSOCYJjQdIco/nPhJhNLmn4Ufg1Yg==\"}},\"emoji-regex@8.0.0\":{\"resolution\":{\"integrity\":\"sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==\"}},\"emoji-regex@9.2.2\":{\"resolution\":{\"integrity\":\"sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==\"}},\"encoding-sniffer@0.2.1\":{\"resolution\":{\"integrity\":\"sha512-5gvq20T6vfpekVtqrYQsSCFZ1wEg5+wW0/QaZMWkFr6BqD3NfKs0rLCx4rrVlSWJeZb5NBJgVLswK/w2MWU+Gw==\"}},\"end-of-stream@1.4.5\":{\"resolution\":{\"integrity\":\"sha512-ooEGc6HP26xXq/N+GCGOT0JKCLDGrq2bQUZrQ7gyrJiZANJ/8YDTxTpQBXGMn+WbIQXNVpyWymm7KYVICQnyOg==\"}},\"enhanced-resolve@5.21.0\":{\"resolution\":{\"integrity\":\"sha512-otxSQPw4lkOZWkHpB3zaEQs6gWYEsmX4xQF68ElXC/TWvGxGMSGOvoNbaLXm6/cS/fSfHtsEdw90y20PCd+sCA==\"},\"engines\":{\"node\":\">=10.13.0\"}},\"enquirer@2.3.6\":{\"resolution\":{\"integrity\":\"sha512-yjNnPr315/FjS4zIsUxYguYUPP2e1NK4d7E7ZOLiyYCcbFBiTMyID+2wvm2w6+pZ/odMA7cRkjhsPbltwBOrLg==\"},\"engines\":{\"node\":\">=8.6\"}},\"enquirer@2.4.1\":{\"resolution\":{\"integrity\":\"sha512-rRqJg/6gd538VHvR3PSrdRBb/1Vy2YfzHqzvbhGIQpDRKIa4FgV/54b5Q1xYSxOOwKvjXweS2
6E0Q+nAMwp2pQ==\"},\"engines\":{\"node\":\">=8.6\"}},\"entities@4.5.0\":{\"resolution\":{\"integrity\":\"sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw==\"},\"engines\":{\"node\":\">=0.12\"}},\"entities@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-aN97NXWF6AWBTahfVOIrB/NShkzi5H7F9r1s9mD3cDj4Ko5f2qhhVoYMibXF7GlLveb/D2ioWay8lxI97Ven3g==\"},\"engines\":{\"node\":\">=0.12\"}},\"entities@7.0.1\":{\"resolution\":{\"integrity\":\"sha512-TWrgLOFUQTH994YUyl1yT4uyavY5nNB5muff+RtWaqNVCAK408b5ZnnbNAUEWLTCpum9w6arT70i1XdQ4UeOPA==\"},\"engines\":{\"node\":\">=0.12\"}},\"error-stack-parser-es@1.0.5\":{\"resolution\":{\"integrity\":\"sha512-5qucVt2XcuGMcEGgWI7i+yZpmpByQ8J1lHhcL7PwqCwu9FPP3VUXzT4ltHe5i2z9dePwEHcDVOAfSnHsOlCXRA==\"}},\"es-define-property@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==\"},\"engines\":{\"node\":\">= 0.4\"}},\"es-errors@1.3.0\":{\"resolution\":{\"integrity\":\"sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==\"},\"engines\":{\"node\":\">= 0.4\"}},\"es-module-lexer@1.7.0\":{\"resolution\":{\"integrity\":\"sha512-jEQoCwk8hyb2AZziIOLhDqpm5+2ww5uIE6lkO/6jcOCusfk6LhMHpXXfBLXTZ7Ydyt0j4VoUQv6uGNYbdW+kBA==\"}},\"es-module-lexer@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-n27zTYMjYu1aj4MjCWzSP7G9r75utsaoc8m61weK+W8JMBGGQybd43GstCXZ3WNmSFtGT9wi59qQTW6mhTR5LQ==\"}},\"es-object-atoms@1.1.1\":{\"resolution\":{\"integrity\":\"sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==\"},\"engines\":{\"node\":\">= 0.4\"}},\"es-set-tostringtag@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==\"},\"engines\":{\"node\":\">= 
0.4\"}},\"esbuild@0.25.12\":{\"resolution\":{\"integrity\":\"sha512-bbPBYYrtZbkt6Os6FiTLCTFxvq4tt3JKall1vRwshA3fdVztsLAatFaZobhkBC8/BrPetoa0oksYoKXoG4ryJg==\"},\"engines\":{\"node\":\">=18\"},\"hasBin\":true},\"esbuild@0.27.3\":{\"resolution\":{\"integrity\":\"sha512-8VwMnyGCONIs6cWue2IdpHxHnAjzxnw2Zr7MkVxB2vjmQ2ivqGFb4LEG3SMnv0Gb2F/G/2yA8zUaiL1gywDCCg==\"},\"engines\":{\"node\":\">=18\"},\"hasBin\":true},\"escalade@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA==\"},\"engines\":{\"node\":\">=6\"}},\"escape-string-regexp@1.0.5\":{\"resolution\":{\"integrity\":\"sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg==\"},\"engines\":{\"node\":\">=0.8.0\"}},\"escape-string-regexp@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-/veY75JbMK4j1yjvuUxuVsiS/hr/4iHs9FTT6cgTexxdE0Ly/glccBAkloH/DofkjRbZU3bnoj38mOmhkZ0lHw==\"},\"engines\":{\"node\":\">=12\"}},\"escodegen@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-2NlIDTwUWJN0mRPQOdtQBzbUHvdGY2P1VXSyU83Q3xKxM7WHX2Ql8dKq782Q9TgQUNOLEzEYu9bzLNj1q88I5w==\"},\"engines\":{\"node\":\">=6.0\"},\"hasBin\":true},\"eslint-scope@5.1.1\":{\"resolution\":{\"integrity\":\"sha512-2NxwbF/hZ0KpepYN0cNbo+FN6XoK7GaHlQhgx/hIZl6Va0bF45RQOOwhLIy8lQDbuCiadSLCBnH2CFYquit5bw==\"},\"engines\":{\"node\":\">=8.0.0\"}},\"esprima@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A==\"},\"engines\":{\"node\":\">=4\"},\"hasBin\":true},\"esrecurse@4.3.0\":{\"resolution\":{\"integrity\":\"sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==\"},\"engines\":{\"node\":\">=4.0\"}},\"estraverse@4.3.0\":{\"resolution\":{\"integrity\":\"sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw==\"},\"engines\":{\"node\":\">=4.0\"}},\"estraverse@5.3.0\":{\"resolution\":{\"integrity
\":\"sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==\"},\"engines\":{\"node\":\">=4.0\"}},\"estree-walker@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-7RUKfXgSMMkzt6ZuXmqapOurLGPPfgj6l9uRZ7lRGolvk0y2yocc35LdcxKC5PQZdn2DMqioAQ2NoWcrTKmm6g==\"}},\"esutils@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"event-target-shim@5.0.1\":{\"resolution\":{\"integrity\":\"sha512-i/2XbnSz/uxRCU6+NdVJgKWDTM427+MqYbkQzD321DuCQJUqOuJKIA0IM2+W2xtYHdKOmZ4dR6fExsd4SXL+WQ==\"},\"engines\":{\"node\":\">=6\"}},\"events-universal@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-LUd5euvbMLpwOF8m6ivPCbhQeSiYVNb8Vs0fQ8QjXo0JTkEHpz8pxdQf0gStltaPpw0Cca8b39KxvK9cfKRiAw==\"}},\"events@3.3.0\":{\"resolution\":{\"integrity\":\"sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q==\"},\"engines\":{\"node\":\">=0.8.x\"}},\"expand-template@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-XYfuKMvj4O35f/pOXLObndIRvyQ+/+6AhODh+OKWj9S9498pHHn/IMszH+gt0fBCRWMNfk1ZSp5x3AifmnI2vg==\"},\"engines\":{\"node\":\">=6\"}},\"expect-type@1.2.2\":{\"resolution\":{\"integrity\":\"sha512-JhFGDVJ7tmDJItKhYgJCGLOWjuK9vPxiXoUFLwLDc99NlmklilbiQJwoctZtt13+xMw91MCk/REan6MWHqDjyA==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"expect-type@1.3.0\":{\"resolution\":{\"integrity\":\"sha512-knvyeauYhqjOYvQ66MznSMs83wmHrCycNEN6Ao+2AeYEfxUIkuiVxdEa1qlGEPK+We3n0THiDciYSsCcgW/DoA==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"exsolve@1.0.8\":{\"resolution\":{\"integrity\":\"sha512-LmDxfWXwcTArk8fUEnOfSZpHOJ6zOMUJKOtFLFqJLoKJetuQG874Uc7/Kki7zFLzYybmZhp1M7+98pfMqeX8yA==\"}},\"extend@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g==\"}},\"extendable-error@0.1.7\":{\"resolution\":{\"integrity\":\"sha512-UOiS2in6/Q0FK0R0q6UY9vYpQ21mr/Qn1KOnte7vs
ACuNJf514WvCCUHSRCPcgjPT2bAhNIJdlE6bVap1GKmeg==\"}},\"extract-zip@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-GDhU9ntwuKyGXdZBUgTIe+vXnWj0fppUEtMDL0+idd5Sta8TGpHssn/eusA9mrPr9qNDym6SxAYZjNvCn/9RBg==\"},\"engines\":{\"node\":\">= 10.17.0\"},\"hasBin\":true},\"fast-deep-equal@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-bCK/2Z4zLidyB4ReuIsvALH6w31YfAQDmXMqMx6FyfHqvBxtjC0eRumeSu4Bs3XtXwpyIywtSTrVT99BxY1f9w==\"}},\"fast-deep-equal@3.1.3\":{\"resolution\":{\"integrity\":\"sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==\"}},\"fast-fifo@1.3.2\":{\"resolution\":{\"integrity\":\"sha512-/d9sfos4yxzpwkDkuN7k2SqFKtYNmCTzgfEpz82x34IM9/zc8KGxQoXg1liNC/izpRM/MBdt44Nmx41ZWqk+FQ==\"}},\"fast-glob@3.3.3\":{\"resolution\":{\"integrity\":\"sha512-7MptL8U0cqcFdzIzwOTHoilX9x5BrNqye7Z/LuC7kCMRio1EMSyqRK3BEAUD7sXRq4iT4AzTVuZdhgQ2TCvYLg==\"},\"engines\":{\"node\":\">=8.6.0\"}},\"fast-uri@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-aLrHthzCjH5He4Z2H9YZ+v6Ujb9ocRuW6ZzkJQOrTxleEijANq4v1TsaPaVG1PZcuurEzrLcWRyYBYXD5cEiaw==\"}},\"fast-uri@3.1.2\":{\"resolution\":{\"integrity\":\"sha512-rVjf7ArG3LTk+FS6Yw81V1DLuZl1bRbNrev6Tmd/9RaroeeRRJhAt7jg/6YFxbvAQXUCavSoZhPPj6oOx+5KjQ==\"}},\"fast-xml-parser@4.5.6\":{\"resolution\":{\"integrity\":\"sha512-Yd4vkROfJf8AuJrDIVMVmYfULKmIJszVsMv7Vo71aocsKgFxpdlpSHXSaInvyYfgw2PRuObQSW2GFpVMUjxu9A==\"},\"hasBin\":true},\"fastq@1.17.1\":{\"resolution\":{\"integrity\":\"sha512-sRVD3lWVIXWg6By68ZN7vho9a1pQcN/WBFaAAsDDFzlJjvoGx0P8z7V1t72grFJfJhu3YPZBuu25f7Kaw2jN1w==\"}},\"fault@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-WtySTkS4OKev5JtpHXnib4Gxiurzh5NCGvWrFaZ34m6JehfTUhKZvn9njTfw48t6JumVQOmrKqpmGcdwxnhqBQ==\"}},\"fd-slicer@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-cE1qsB/VwyQozZ+q1dGxR8LBYNZeofhEdUNGSMbQD3Gw2lAzX9Zb3uIU6Ebc/Fmyjo9AWWfnn0AUCHqtevs/8g==\"}},\"fdir@6.5.0\":{\"resolution\":{\"integrity\":\"sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==\"},\"
engines\":{\"node\":\">=12.0.0\"},\"peerDependencies\":{\"picomatch\":\"^3 || ^4\"},\"peerDependenciesMeta\":{\"picomatch\":{\"optional\":true}}},\"fetch-blob@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ==\"},\"engines\":{\"node\":\"^12.20 || >= 14.13\"}},\"fetchdts@0.1.7\":{\"resolution\":{\"integrity\":\"sha512-YoZjBdafyLIop9lSxXVI33oLD5kN31q4Td+CasofLLYeLXRFeOsuOw0Uo+XNRi9PZlbfdlN2GmRtm4tCEQ9/KA==\"}},\"fflate@0.4.8\":{\"resolution\":{\"integrity\":\"sha512-FJqqoDBR00Mdj9ppamLa/Y7vxm+PRmNWA67N846RvsoYVMKB4q3y/de5PA7gUmRMYK/8CMz2GDZQmCRN1wBcWA==\"}},\"figures@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-yaduQFRKLXYOGgEn6AZau90j3ggSOyiqXU0F9JZfeXYhNa+Jk4X+s45A2zg5jns87GAFa34BBm2kXw4XpNcbdg==\"},\"engines\":{\"node\":\">=8\"}},\"file-uri-to-path@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-0Zt+s3L7Vf1biwWZ29aARiVYLx7iMGnEUl9x33fbB/j3jR81u/O2LbqK+Bm1CDSNDKVtJ/YjwY7TUd5SkeLQLw==\"}},\"fill-range@7.1.1\":{\"resolution\":{\"integrity\":\"sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==\"},\"engines\":{\"node\":\">=8\"}},\"find-up@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==\"},\"engines\":{\"node\":\">=8\"}},\"flat@5.0.2\":{\"resolution\":{\"integrity\":\"sha512-b6suED+5/3rTpUBdG1gupIl8MPFCAMA0QXwmljLhvCUKcUvdE4gWky9zpuGCcXHOsz4J9wPGNWq6OKpmIzz3hQ==\"},\"hasBin\":true},\"follow-redirects@1.15.11\":{\"resolution\":{\"integrity\":\"sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ==\"},\"engines\":{\"node\":\">=4.0\"},\"peerDependencies\":{\"debug\":\"*\"},\"peerDependenciesMeta\":{\"debug\":{\"optional\":true}}},\"foreground-child@3.3.1\":{\"resolution\":{\"integrity\":\"sha512-gIXjKqtFuWEgzFRJA9WCQeSJLZDjgJUOMCMzxtvFq/37KojM1BFGufqsCy0r4qSQmYLsZYMeyRqzIWOMup03sw==\"},\"engines\":{\"node\":\">=14\
"}},\"form-data@4.0.4\":{\"resolution\":{\"integrity\":\"sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==\"},\"engines\":{\"node\":\">= 6\"}},\"format@0.2.2\":{\"resolution\":{\"integrity\":\"sha512-wzsgA6WOq+09wrU1tsJ09udeR/YZRaeArL9e1wPbFg3GG2yDnC2ldKpxs4xunpFF9DgqCqOIra3bc1HWrJ37Ww==\"},\"engines\":{\"node\":\">=0.4.x\"}},\"formdata-polyfill@4.0.10\":{\"resolution\":{\"integrity\":\"sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g==\"},\"engines\":{\"node\":\">=12.20.0\"}},\"front-matter@4.0.2\":{\"resolution\":{\"integrity\":\"sha512-I8ZuJ/qG92NWX8i5x1Y8qyj3vizhXS31OxjKDu3LKP+7/qBgfIKValiZIEwoVoJKUHlhWtYrktkxV1XsX+pPlg==\"}},\"fs-constants@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-y6OAwoSIf7FyjMIv94u+b5rdheZEjzR63GTyZJm5qh4Bi+2YgwLCcI/fPFZkL5PSixOt6ZNKm+w+Hfp/Bciwow==\"}},\"fs-extra@11.3.1\":{\"resolution\":{\"integrity\":\"sha512-eXvGGwZ5CL17ZSwHWd3bbgk7UUpF6IFHtP57NYYakPvHOs8GDgDe5KJI36jIJzDkJ6eJjuzRA8eBQb6SkKue0g==\"},\"engines\":{\"node\":\">=14.14\"}},\"fs-extra@7.0.1\":{\"resolution\":{\"integrity\":\"sha512-YJDaCJZEnBmcbw13fvdAM9AwNOJwOzrE4pqMqBq5nFiEqXUqHwlK4B+3pUw6JNvfSPtX05xFHtYy/1ni01eGCw==\"},\"engines\":{\"node\":\">=6 <7 || >=8\"}},\"fs-extra@8.1.0\":{\"resolution\":{\"integrity\":\"sha512-yhlQgA6mnOJUKOsRUFsgJdQCvkKhcz8tlZG5HBQfReYZy46OwLcY+Zia0mtdHsOo9y/hP+CxMN0TU9QxoOtG4g==\"},\"engines\":{\"node\":\">=6 <7 || >=8\"}},\"fs-minipass@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-V/JgOLFCS+R6Vcq0slCuaeWEdNC3ouDlJMNIsacH2VtALiu9mV4LPrHc5cDl8k5aw6J8jwgWWpiTo5RYhmIzvg==\"},\"engines\":{\"node\":\">= 8\"}},\"fsevents@2.3.2\":{\"resolution\":{\"integrity\":\"sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==\"},\"engines\":{\"node\":\"^8.16.0 || ^10.6.0 || 
>=11.0.0\"},\"os\":[\"darwin\"]},\"fsevents@2.3.3\":{\"resolution\":{\"integrity\":\"sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==\"},\"engines\":{\"node\":\"^8.16.0 || ^10.6.0 || >=11.0.0\"},\"os\":[\"darwin\"]},\"function-bind@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==\"}},\"geckodriver@4.5.1\":{\"resolution\":{\"integrity\":\"sha512-lGCRqPMuzbRNDWJOQcUqhNqPvNsIFu6yzXF8J/6K3WCYFd2r5ckbeF7h1cxsnjA7YLSEiWzERCt6/gjZ3tW0ug==\"},\"engines\":{\"node\":\"^16.13 || >=18 || >=20\"},\"hasBin\":true},\"gensync@1.0.0-beta.2\":{\"resolution\":{\"integrity\":\"sha512-3hN7NaskYvMDLQY55gnW3NQ+mesEAepTqlg+VEbj7zzqEMBVNhzcGYYeqFo/TlYz6eQiFcp1HcsCZO+nGgS8zg==\"},\"engines\":{\"node\":\">=6.9.0\"}},\"get-caller-file@2.0.5\":{\"resolution\":{\"integrity\":\"sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==\"},\"engines\":{\"node\":\"6.* || 8.* || >= 10.*\"}},\"get-intrinsic@1.3.0\":{\"resolution\":{\"integrity\":\"sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==\"},\"engines\":{\"node\":\">= 0.4\"}},\"get-port@7.2.0\":{\"resolution\":{\"integrity\":\"sha512-afP4W205ONCuMoPBqcR6PSXnzX35KTcJygfJfcp+QY+uwm3p20p1YczWXhlICIzGMCxYBQcySEcOgsJcrkyobg==\"},\"engines\":{\"node\":\">=16\"}},\"get-proto@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==\"},\"engines\":{\"node\":\">= 
0.4\"}},\"get-stream@5.2.0\":{\"resolution\":{\"integrity\":\"sha512-nBF+F1rAZVCu/p7rjzgA+Yb4lfYXrpl7a6VmJrU8wF9I1CKvP/QwPNZHnOlwbTkY6dvtFIzFMSyQXbLoTQPRpA==\"},\"engines\":{\"node\":\">=8\"}},\"get-tsconfig@4.14.0\":{\"resolution\":{\"integrity\":\"sha512-yTb+8DXzDREzgvYmh6s9vHsSVCHeC0G3PI5bEXNBHtmshPnO+S5O7qgLEOn0I5QvMy6kpZN8K1NKGyilLb93wA==\"}},\"get-uri@6.0.5\":{\"resolution\":{\"integrity\":\"sha512-b1O07XYq8eRuVzBNgJLstU6FYc1tS6wnMtF1I1D9lE8LxZSOGZ7LhxN54yPP6mGw5f2CkXY2BQUL9Fx41qvcIg==\"},\"engines\":{\"node\":\">= 14\"}},\"github-from-package@0.0.0\":{\"resolution\":{\"integrity\":\"sha512-SyHy3T1v2NUXn29OsWdxmK6RwHD+vkj3v8en8AOBZ1wBQ/hCAQ5bAQTD02kW4W9tUp/3Qh6J8r9EvntiyCmOOw==\"}},\"github-slugger@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-IaOQ9puYtjrkq7Y0Ygl9KDZnrf/aiUJYUpVf89y8kyaxbRG7Y1SrX/jaumrv81vc61+kiMempujsM3Yw7w5qcw==\"}},\"glob-parent@5.1.2\":{\"resolution\":{\"integrity\":\"sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==\"},\"engines\":{\"node\":\">= 6\"}},\"glob-to-regexp@0.4.1\":{\"resolution\":{\"integrity\":\"sha512-lkX1HJXwyMcprw/5YUZc2s7DrpAiHB21/V+E1rHUrVNokkvB6bqMzT0VfV6/86ZNabt1k14YOIaT7nDvOX3Iiw==\"}},\"glob@10.4.5\":{\"resolution\":{\"integrity\":\"sha512-7Bv8RF0k6xjo7d4A/PxYLbUCfb6c+Vpd2/mB2yRDlew7Jb5hEXiCD9ibfO7wpk8i4sevK6DFny9h7EYbM3/sHg==\"},\"deprecated\":\"Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me\",\"hasBin\":true},\"glob@10.5.0\":{\"resolution\":{\"integrity\":\"sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg==\"},\"deprecated\":\"Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. 
Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me\",\"hasBin\":true},\"globals@15.15.0\":{\"resolution\":{\"integrity\":\"sha512-7ACyT3wmyp3I61S4fG682L0VA2RGD9otkqGJIwNUMF1SWUombIIk+af1unuDYgMm082aHYwD+mzJvv9Iu8dsgg==\"},\"engines\":{\"node\":\">=18\"}},\"globby@11.1.0\":{\"resolution\":{\"integrity\":\"sha512-jhIXaOzy1sb8IyocaruWSn1TjmnBVs8Ayhcy83rmxNJ8q2uWKCAj3CnJY+KpGSXCueAPc0i05kVvVKtP1t9S3g==\"},\"engines\":{\"node\":\">=10\"}},\"gopd@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==\"},\"engines\":{\"node\":\">= 0.4\"}},\"graceful-fs@4.2.11\":{\"resolution\":{\"integrity\":\"sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==\"}},\"grapheme-splitter@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-bzh50DW9kTPM00T8y4o8vQg89Di9oLJVLW/KaOGIXJWP/iqCN6WKYkbNOF04vFLJhwcpYUh9ydh/+5vpOqV4YQ==\"}},\"graphql@16.14.0\":{\"resolution\":{\"integrity\":\"sha512-BBvQ/406p+4CZbTpCbVPSxfzrZrbnuWSP1ELYgyS6B+hNeKzgrdB4JczCa5VZUBQrDa9hUngm0KnexY6pJRN5Q==\"},\"engines\":{\"node\":\"^12.22.0 || ^14.16.0 || ^16.0.0 || 
>=17.0.0\"}},\"h3@2.0.1-rc.20\":{\"resolution\":{\"integrity\":\"sha512-28ljodXuUp0fZovdiSRq4G9OgrxCztrJe5VdYzXAB7ueRvI7pIUqLU14Xi3XqdYJ/khXjfpUOOD2EQa6CmBgsg==\"},\"engines\":{\"node\":\">=20.11.1\"},\"hasBin\":true,\"peerDependencies\":{\"crossws\":\"^0.4.1\"},\"peerDependenciesMeta\":{\"crossws\":{\"optional\":true}}},\"hachure-fill@0.5.2\":{\"resolution\":{\"integrity\":\"sha512-3GKBOn+m2LX9iq+JC1064cSFprJY4jL1jCXTcpnfER5HYE2l/4EfWSGzkPa/ZDBmYI0ZOEj5VHV/eKnPGkHuOg==\"}},\"happy-dom@18.0.1\":{\"resolution\":{\"integrity\":\"sha512-qn+rKOW7KWpVTtgIUi6RVmTBZJSe2k0Db0vh1f7CWrWclkkc7/Q+FrOfkZIb2eiErLyqu5AXEzE7XthO9JVxRA==\"},\"engines\":{\"node\":\">=20.0.0\"}},\"has-flag@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==\"},\"engines\":{\"node\":\">=8\"}},\"has-symbols@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==\"},\"engines\":{\"node\":\">= 0.4\"}},\"has-tostringtag@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==\"},\"engines\":{\"node\":\">= 0.4\"}},\"hasown@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==\"},\"engines\":{\"node\":\">= 
0.4\"}},\"hast-util-embedded@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-naH8sld4Pe2ep03qqULEtvYr7EjrLK2QHY8KJR6RJkTUjPGObe1vnx585uzem2hGra+s1q08DZZpfgDVYRbaXA==\"}},\"hast-util-from-html@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-CUSRHXyKjzHov8yKsQjGOElXy/3EKpyX56ELnkHH34vDVw1N1XSQ1ZcAvTyAPtGqLTuKP/uxM+aLkSPqF/EtMw==\"}},\"hast-util-from-parse5@8.0.3\":{\"resolution\":{\"integrity\":\"sha512-3kxEVkEKt0zvcZ3hCRYI8rqrgwtlIOFMWkbclACvjlDw8Li9S2hk/d51OI0nr/gIpdMHNepwgOKqZ/sy0Clpyg==\"}},\"hast-util-has-property@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-MNilsvEKLFpV604hwfhVStK0usFY/QmM5zX16bo7EjnAEGofr5YyI37kzopBlZJkHD4t887i+q/C8/tr5Q94cA==\"}},\"hast-util-heading-rank@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-EJKb8oMUXVHcWZTDepnr+WNbfnXKFNf9duMesmr4S8SXTJBJ9M4Yok08pu9vxdJwdlGRhVumk9mEhkEvKGifwA==\"}},\"hast-util-is-body-ok-link@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-0qpnzOBLztXHbHQenVB8uNuxTnm/QBFUOmdOSsEn7GnBtyY07+ENTWVFBAnXd/zEgd9/SUG3lRY7hSIBWRgGpQ==\"}},\"hast-util-is-element@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-Val9mnv2IWpLbNPqc/pUem+a7Ipj2aHacCwgNfTiK0vJKl0LF+4Ba4+v1oPHFpf3bLYmreq0/l3Gud9S5OH42g==\"}},\"hast-util-minify-whitespace@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-L96fPOVpnclQE0xzdWb/D12VT5FabA7SnZOUMtL1DbXmYiHJMXZvFkIZfiMmTCNJHUeO2K9UYNXoVyfz+QHuOw==\"}},\"hast-util-parse-selector@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-wkQCkSYoOGCRKERFWcxMVMOcYE2K1AaNLU8DXS9arxnLOUEWbOXKXiJUNzEpqZ3JOKpnha3jkFrumEjVliDe7A==\"}},\"hast-util-phrasing@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-6h60VfI3uBQUxHqTyMymMZnEbNl1XmEGtOxxKYL7stY2o601COo62AWAYBQR9lZbYXYSBoxag8UpPRXK+9fqSQ==\"}},\"hast-util-raw@9.1.0\":{\"resolution\":{\"integrity\":\"sha512-Y8/SBAHkZGoNkpzqqfCldijcuUKh7/su31kEBp67cFY09Wy0mTRgtsLYsiIxMJxlu0f6AA5SUTbDR8K0rxnbUw==\"}},\"hast-util-sanitize@5.0.2\":{\"resolution\":{\"integrity\":\"sha512-3yTWghByc50aGS7JlGhk61SPenfE/p1oaFeNwkOOyrscaOkMGrcW9+Cy/QAIOBpZxP1yqDIzFMR0+Np0i0+usg==\"}},\"hast-util-to-ht
ml@9.0.5\":{\"resolution\":{\"integrity\":\"sha512-OguPdidb+fbHQSU4Q4ZiLKnzWo8Wwsf5bZfbvu7//a9oTYoqD/fWpe96NuHkoS9h0ccGOTe0C4NGXdtS0iObOw==\"}},\"hast-util-to-mdast@10.1.2\":{\"resolution\":{\"integrity\":\"sha512-FiCRI7NmOvM4y+f5w32jPRzcxDIz+PUqDwEqn1A+1q2cdp3B8Gx7aVrXORdOKjMNDQsD1ogOr896+0jJHW1EFQ==\"}},\"hast-util-to-parse5@8.0.0\":{\"resolution\":{\"integrity\":\"sha512-3KKrV5ZVI8if87DVSi1vDeByYrkGzg4mEfeu4alwgmmIeARiBLKCZS2uw5Gb6nU9x9Yufyj3iudm6i7nl52PFw==\"}},\"hast-util-to-string@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-XelQVTDWvqcl3axRfI0xSeoVKzyIFPwsAGSLIsKdJKQMXDYJS4WYrBNF/8J7RdhIcFI2BOHgAifggsvsxp/3+A==\"}},\"hast-util-to-text@4.0.2\":{\"resolution\":{\"integrity\":\"sha512-KK6y/BN8lbaq654j7JgBydev7wuNMcID54lkRav1P0CaE1e47P72AWWPiGKXTJU271ooYzcvTAn/Zt0REnvc7A==\"}},\"hast-util-whitespace@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-88JUN06ipLwsnv+dVn+OIYOvAuvBMy/Qoi6O7mQHxdPXpjy+Cd6xRkWwux7DKO+4sYILtLBRIKgsdpS2gQc7qw==\"}},\"hastscript@9.0.1\":{\"resolution\":{\"integrity\":\"sha512-g7df9rMFX/SPi34tyGCyUBREQoKkapwdY/T04Qn9TDWfHhAYt4/I0gMVirzK5wEzeUqIjEB+LXC/ypb7Aqno5w==\"}},\"headers-polyfill@4.0.3\":{\"resolution\":{\"integrity\":\"sha512-IScLbePpkvO846sIwOtOTDjutRMWdXdJmXdMvk6gCBHxFO8d+QKOQedyZSxFTTFYRSmlgSTDtXqqq4pcenBXLQ==\"}},\"highlight.js@11.11.1\":{\"resolution\":{\"integrity\":\"sha512-Xwwo44whKBVCYoliBQwaPvtd/2tYFkRQtXDWj1nackaV2JPXx3L0+Jvd8/qCJ2p+ML0/XVkJ2q+Mr+UVdpJK5w==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"html-encoding-sniffer@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-Y22oTqIU4uuPgEemfz7NDJz6OeKf12Lsu+QC+s3BVpda64lTiMYCyGwg5ki4vFxkMwQdeZDl2adZoqUgdFuTgQ==\"},\"engines\":{\"node\":\">=18\"}},\"html-escaper@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-H2iMtd0I4Mt5eYiapRdIDjp+XzelXQ0tFE4JS7YFwFevXXMmOp9myNrUvCg0D6ws8iqkRPBfKHgbwig1SmlLfg==\"}},\"html-void-elements@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-bEqo66MRXsUGxWHV5IP0PUiAWwoEjba4VCzg0LjFJBpchPaTfyfCKTG6bc5F8ucKec3q5y6qOdGyYTSBEvhCrg==\"}},\"htmlfy@0.3.2\":{\
"resolution\":{\"integrity\":\"sha512-FsxzfpeDYRqn1emox9VpxMPfGjADoUmmup8D604q497R0VNxiXs4ZZTN2QzkaMA5C9aHGUoe1iQRVSm+HK9xuA==\"}},\"htmlparser2@10.0.0\":{\"resolution\":{\"integrity\":\"sha512-TwAZM+zE5Tq3lrEHvOlvwgj1XLWQCtaaibSN11Q+gGBAS7Y1uZSWwXXRe4iF6OXnaq1riyQAPFOBtYc77Mxq0g==\"}},\"htmlparser2@10.1.0\":{\"resolution\":{\"integrity\":\"sha512-VTZkM9GWRAtEpveh7MSF6SjjrpNVNNVJfFup7xTY3UpFtm67foy9HDVXneLtFVt4pMz5kZtgNcvCniNFb1hlEQ==\"}},\"http-proxy-agent@7.0.2\":{\"resolution\":{\"integrity\":\"sha512-T1gkAiYYDWYx3V5Bmyu7HcfcvL7mUrTWiM6yOfa3PIphViJ/gFPbvidQ+veqSOHci/PxBcDabeUNCzpOODJZig==\"},\"engines\":{\"node\":\">= 14\"}},\"https-proxy-agent@7.0.2\":{\"resolution\":{\"integrity\":\"sha512-NmLNjm6ucYwtcUmL7JQC1ZQ57LmHP4lT15FQ8D61nak1rO6DH+fz5qNK2Ap5UN4ZapYICE3/0KodcLYSPsPbaA==\"},\"engines\":{\"node\":\">= 14\"}},\"https-proxy-agent@7.0.6\":{\"resolution\":{\"integrity\":\"sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw==\"},\"engines\":{\"node\":\">= 14\"}},\"human-id@4.1.1\":{\"resolution\":{\"integrity\":\"sha512-3gKm/gCSUipeLsRYZbbdA1BD83lBoWUkZ7G9VFrhWPAU76KwYo5KR8V28bpoPm/ygy0x5/GCbpRQdY7VLYCoIg==\"},\"hasBin\":true},\"iconv-lite@0.6.3\":{\"resolution\":{\"integrity\":\"sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"ieee754@1.2.1\":{\"resolution\":{\"integrity\":\"sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==\"}},\"ignore@5.3.2\":{\"resolution\":{\"integrity\":\"sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g==\"},\"engines\":{\"node\":\">= 
4\"}},\"immediate@3.0.6\":{\"resolution\":{\"integrity\":\"sha512-XXOFtyqDjNDAQxVfYxuF7g9Il/IbWmmlQg2MYKOH8ExIT1qg6xc4zyS3HaEEATgs1btfzxq15ciUiY7gjSXRGQ==\"}},\"immutable@5.1.5\":{\"resolution\":{\"integrity\":\"sha512-t7xcm2siw+hlUM68I+UEOK+z84RzmN59as9DZ7P1l0994DKUWV7UXBMQZVxaoMSRQ+PBZbHCOoBt7a2wxOMt+A==\"}},\"import-meta-resolve@4.2.0\":{\"resolution\":{\"integrity\":\"sha512-Iqv2fzaTQN28s/FwZAoFq0ZSs/7hMAHJVX+w8PZl3cY19Pxk6jFFalxQoIfW2826i/fDLXv8IiEZRIT0lDuWcg==\"}},\"inherits@2.0.4\":{\"resolution\":{\"integrity\":\"sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==\"}},\"ini@1.3.8\":{\"resolution\":{\"integrity\":\"sha512-JV/yugV2uzW5iMRSiZAyDtQd+nxtUnjeLt0acNdw98kKLrvuRVyB80tsREOE7yvGVgalhZ6RNXCmEHkUKBKxew==\"}},\"ini@4.1.3\":{\"resolution\":{\"integrity\":\"sha512-X7rqawQBvfdjS10YU1y1YVreA3SsLrW9dX2CewP2EbBJM4ypVNLDkO5y04gejPwKIY9lR+7r9gn3rFPt/kmWFg==\"},\"engines\":{\"node\":\"^14.17.0 || ^16.13.0 || >=18.0.0\"}},\"internmap@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-lDB5YccMydFBtasVtxnZ3MRBHuaoE8GKsppq+EchKL2U4nK/DmEpPHNH8MZe5HkMtpSiTSOZwfN0tzYjO/lJEw==\"}},\"internmap@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-5Hh7Y1wQbvY5ooGgPbDaL5iYLAPzMTUrjMulskHLH6wnv/A+1q5rgEaiuqEjB+oxGXIVZs1FF+R/KPN3ZSQYYg==\"},\"engines\":{\"node\":\">=12\"}},\"ip-address@10.2.0\":{\"resolution\":{\"integrity\":\"sha512-/+S6j4E9AHvW9SWMSEY9Xfy66O5PWvVEJ08O0y5JGyEKQpojb0K0GKpz/v5HJ/G0vi3D2sjGK78119oXZeE0qA==\"},\"engines\":{\"node\":\">= 
12\"}},\"is-binary-path@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw==\"},\"engines\":{\"node\":\">=8\"}},\"is-docker@2.2.1\":{\"resolution\":{\"integrity\":\"sha512-F+i2BKsFrH66iaUFc0woD8sLy8getkwTwtOBjvs56Cx4CgJDeKQeqfz8wAYiSb8JOprWhHH5p77PbmYCvvUuXQ==\"},\"engines\":{\"node\":\">=8\"},\"hasBin\":true},\"is-extglob@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"is-fullwidth-code-point@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==\"},\"engines\":{\"node\":\">=8\"}},\"is-glob@4.0.3\":{\"resolution\":{\"integrity\":\"sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"is-interactive@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-2HvIEKRoqS62guEC+qBjpvRubdX910WCMuJTZ+I9yvqKU2/12eSL549HMwtabb4oupdj2sMP50k+XJfB/8JE6w==\"},\"engines\":{\"node\":\">=8\"}},\"is-node-process@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-Vg4o6/fqPxIjtxgUH5QLJhwZ7gW5diGCVlXpuUfELC62CuxM1iHcRe51f2W1FDy04Ai4KJkagKjx3XaqyfRKXw==\"}},\"is-number@7.0.0\":{\"resolution\":{\"integrity\":\"sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==\"},\"engines\":{\"node\":\">=0.12.0\"}},\"is-plain-obj@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-+Pgi+vMuUNkJyExiMBt5IlFoMyKnr5zhJ4Uspz58WOhBF5QoIZkFyNHIbBAtHwzVAgk5RtndVNsDRN61/mmDqg==\"},\"engines\":{\"node\":\">=12\"}},\"is-potential-custom-element-name@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-bCYeRA2rVibKZd+s2625gGnGF/t7DSqDs4dP7CrLA1m7jKWz6pps0LpYLJN8Q64HtmPKJ1hrN3nzPNKFEKOUiQ==\"}},\"is-stream@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1
VLMO4n7OI6p7RbngDg==\"},\"engines\":{\"node\":\">=8\"}},\"is-subdir@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-2AT6j+gXe/1ueqbW6fLZJiIw3F8iXGJtt0yDrZaBhAZEG1raiTxKWU+IPqMCzQAXOUCKdA4UDMgacKH25XG2Cw==\"},\"engines\":{\"node\":\">=4\"}},\"is-unicode-supported@0.1.0\":{\"resolution\":{\"integrity\":\"sha512-knxG2q4UC3u8stRGyAVJCOdxFmv5DZiRcdlIaAQXAbSfJya+OhopNotLQrstBhququ4ZpuKbDc/8S6mgXgPFPw==\"},\"engines\":{\"node\":\">=10\"}},\"is-windows@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-eXK1UInq2bPmjyX6e3VHIzMLobc4J94i4AWn+Hpq3OU5KkrRC96OAcR3PRJ/pGu6m8TRnBHP9dkXQVsT/COVIA==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"is-wsl@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww==\"},\"engines\":{\"node\":\">=8\"}},\"isarray@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==\"}},\"isbot@5.1.28\":{\"resolution\":{\"integrity\":\"sha512-qrOp4g3xj8YNse4biorv6O5ZShwsJM0trsoda4y7j/Su7ZtTTfVXFzbKkpgcSoDrHS8FcTuUwcU04YimZlZOxw==\"},\"engines\":{\"node\":\">=18\"}},\"isexe@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==\"}},\"isexe@3.1.5\":{\"resolution\":{\"integrity\":\"sha512-6B3tLtFqtQS4ekarvLVMZ+X+VlvQekbe4taUkf/rhVO3d/h0M2rfARm/pXLcPEsjjMsFgrFgSrhQIxcSVrBz8w==\"},\"engines\":{\"node\":\">=18\"}},\"istanbul-lib-coverage@3.2.2\":{\"resolution\":{\"integrity\":\"sha512-O8dpsF+r0WV/8MNRKfnmrtCWhuKjxrq2w+jpzBL5UZKTi2LeVWnWOmWRxFlesJONmc+wLAGvKQZEOanko0LFTg==\"},\"engines\":{\"node\":\">=8\"}},\"istanbul-lib-report@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-GCfE1mtsHGOELCU8e/Z7YWzpmybrx/+dSTfLrvY8qRmaY6zXTKWn6WQIjaAFw069icm6GVMNkgu0NzI4iPZUNw==\"},\"engines\":{\"node\":\">=10\"}},\"istanbul-lib-source-maps@5.0.6\":{\"resolution\":{\"integrity\":\"sha512-yg2d+Em4KizZC5niWhQaIomgf5WlL4vOOjZ5xGCmF8SnPE/mDWWXgvRExdcpCgh9
lLRRa1/fSYp2ymmbJ1pI+A==\"},\"engines\":{\"node\":\">=10\"}},\"istanbul-reports@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-HGYWWS/ehqTV3xN10i23tkPkpH46MLCIMFNCaaKNavAXTF1RkqxawEPtnjnGZ6XKSInBKkiOA5BKS+aZiY3AvA==\"},\"engines\":{\"node\":\">=8\"}},\"jackspeak@3.4.3\":{\"resolution\":{\"integrity\":\"sha512-OGlZQpz2yfahA/Rd1Y8Cd9SIEsqvXkLVoSw/cgwhnhFMDbsQFeZYoJJ7bIZBS9BcamUW96asq/npPWugM+RQBw==\"}},\"jest-diff@30.1.1\":{\"resolution\":{\"integrity\":\"sha512-LUU2Gx8EhYxpdzTR6BmjL1ifgOAQJQELTHOiPv9KITaKjZvJ9Jmgigx01tuZ49id37LorpGc9dPBPlXTboXScw==\"},\"engines\":{\"node\":\"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0\"}},\"jest-worker@27.5.1\":{\"resolution\":{\"integrity\":\"sha512-7vuh85V5cdDofPyxn58nrPjBktZo0u9x1g8WtjQol+jZDaE+fhN+cIvTj11GndBnMnyfrUOG1sZQxCdjKh+DKg==\"},\"engines\":{\"node\":\">= 10.13.0\"}},\"jiti@2.6.1\":{\"resolution\":{\"integrity\":\"sha512-ekilCSN1jwRvIbgeg/57YFh8qQDNbwDb9xT/qu2DAHbFFZUicIl4ygVaAvzveMhMVr3LnpSKTNnwt8PoOfmKhQ==\"},\"hasBin\":true},\"js-tokens@10.0.0\":{\"resolution\":{\"integrity\":\"sha512-lM/UBzQmfJRo9ABXbPWemivdCW8V2G8FHaHdypQaIy523snUjog0W71ayWXTjiR+ixeMyVHN2XcpnTd/liPg/Q==\"}},\"js-tokens@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==\"}},\"js-tokens@9.0.1\":{\"resolution\":{\"integrity\":\"sha512-mxa9E9ITFOt0ban3j6L5MpjwegGz6lBQmM1IJkWeBZGcMxto50+eWdjC/52xDbS2vy0k7vIMK0Fe2wfL9OQSpQ==\"}},\"js-yaml@3.14.1\":{\"resolution\":{\"integrity\":\"sha512-okMH7OXXJ7YrN9Ok3/SXrnu4iX9yOk+25nqX4imS2npuvTYDmo/QEZoqwZkYaIDk3jVvBOTOIEgEhaLOynBS9g==\"},\"hasBin\":true},\"js-yaml@4.1.1\":{\"resolution\":{\"integrity\":\"sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==\"},\"hasBin\":true},\"jsdom@26.1.0\":{\"resolution\":{\"integrity\":\"sha512-Cvc9WUhxSMEo4McES3P7oK3QaXldCfNWp7pl2NNeiIFlCoLr3kfq9kb1fxftiwk1FLV7CvpvDfonxtzUDeSOPg==\"},\"engines\":{\"node\":\">=18\"},\"peerDependencies\":{\"canvas\":\"^3
.0.0\"},\"peerDependenciesMeta\":{\"canvas\":{\"optional\":true}}},\"jsdom@27.3.0\":{\"resolution\":{\"integrity\":\"sha512-GtldT42B8+jefDUC4yUKAvsaOrH7PDHmZxZXNgF2xMmymjUbRYJvpAybZAKEmXDGTM0mCsz8duOa4vTm5AY2Kg==\"},\"engines\":{\"node\":\"^20.19.0 || ^22.12.0 || >=24.0.0\"},\"peerDependencies\":{\"canvas\":\"^3.0.0\"},\"peerDependenciesMeta\":{\"canvas\":{\"optional\":true}}},\"jsesc@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-/sM3dO2FOzXjKQhJuo0Q173wf2KOo8t4I8vHy6lF9poUp7bKT0/NHE8fPX23PwfhnykfqnC2xRxOnVw5XuGIaA==\"},\"engines\":{\"node\":\">=6\"},\"hasBin\":true},\"json-parse-even-better-errors@2.3.1\":{\"resolution\":{\"integrity\":\"sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w==\"}},\"json-schema-to-ts@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-+DWg8jCJG2TEnpy7kOm/7/AxaYoaRbjVB4LFZLySZlWn8exGs3A4OLJR966cVvU26N7X9TWxl+Jsw7dzAqKT6g==\"},\"engines\":{\"node\":\">=16\"}},\"json-schema-traverse@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==\"}},\"json5@2.2.3\":{\"resolution\":{\"integrity\":\"sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg==\"},\"engines\":{\"node\":\">=6\"},\"hasBin\":true},\"jsonc-parser@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-gfFQZrcTc8CnKXp6Y4/CBT3fTc0OVuDofpre4aEeEpSBPV5X5v4+Vmx+8snU7RLPrNHPKSgLxGo9YuQzz20o+w==\"}},\"jsonfile@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-m6F1R3z8jjlf2imQHS2Qez5sjKWQzbuuhuJ/FKYFRZvPE3PuHcSMVZzfsLhGVOkfd20obL5SWEBew5ShlquNxg==\"}},\"jsonfile@6.2.0\":{\"resolution\":{\"integrity\":\"sha512-FGuPw30AdOIUTRMC2OMRtQV+jkVj2cfPqSeWXv1NEAJ1qZ5zb1X6z1mFhbfOB/iy3ssJCD+3KuZ8r8C3uVFlAg==\"}},\"jszip@3.10.1\":{\"resolution\":{\"integrity\":\"sha512-xXDvecyTpGLrqFrvkrUSoxxfJI5AH7U8zxxtVclpsUtMCq4JQ290LY8AW5c7Ggnr/Y/oK+bQMbqK2qmtk3pN4g==\"}},\"katex@0.16.22\":{\"resolution\":{\"integrity\":\"sha512-XCHRdUw4lf3SKBaJe4EvgqIuWwkPSo
9XoeO8GjQW94Bp7TWv9hNhzZjZ+OH9yf1UmLygb7DIT5GSFQiyt16zYg==\"},\"hasBin\":true},\"khroma@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-Ls993zuzfayK269Svk9hzpeGUKob/sIgZzyHYdjQoAdQetRKpOLj+k/QQQ/6Qi0Yz65mlROrfd+Ev+1+7dz9Kw==\"}},\"kleur@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-o+NO+8WrRiQEE4/7nwRJhN1HWpVmJm511pBHUxPLtp0BUISzlBplORYSmTclCnJvQq2tKu/sgl3xVpkc7ZWuQQ==\"},\"engines\":{\"node\":\">=6\"}},\"kolorist@1.8.0\":{\"resolution\":{\"integrity\":\"sha512-Y+60/zizpJ3HRH8DCss+q95yr6145JXZo46OTpFvDZWLfRCE4qChOyk1b26nMaNpfHHgxagk9dXT5OP0Tfe+dQ==\"}},\"kysely@0.28.7\":{\"resolution\":{\"integrity\":\"sha512-u/cAuTL4DRIiO2/g4vNGRgklEKNIj5Q3CG7RoUB5DV5SfEC2hMvPxKi0GWPmnzwL2ryIeud2VTcEEmqzTzEPNw==\"},\"engines\":{\"node\":\">=20.0.0\"}},\"langium@3.3.1\":{\"resolution\":{\"integrity\":\"sha512-QJv/h939gDpvT+9SiLVlY7tZC3xB2qK57v0J04Sh9wpMb6MP1q8gB21L3WIo8T5P1MSMg3Ep14L7KkDCFG3y4w==\"},\"engines\":{\"node\":\">=16.0.0\"}},\"layout-base@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-8h2oVEZNktL4BH2JCOI90iD1yXwL6iNW7KcCKT2QZgQJR2vbqDsldCTPRU9NifTCqHZci57XvQQ15YTu+sTYPg==\"}},\"layout-base@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-dp3s92+uNI1hWIpPGH3jK2kxE2lMjdXdr+DH8ynZHpd6PUlH6x6cbuXnoMmiNumznqaNO31xu9e79F0uuZ0JFg==\"}},\"lazystream@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-b94GiNHQNy6JNTrt5w6zNyffMrNkXZb3KTkCZJb2V1xaEGCk093vkZ2jk3tpaeP33/OiXC+WvK9AxUebnf5nbw==\"},\"engines\":{\"node\":\">= 0.6.3\"}},\"lie@3.3.0\":{\"resolution\":{\"integrity\":\"sha512-UaiMJzeWRlEujzAuw5LokY1L5ecNQYZKfmyZ9L7wDHb/p5etKaxXhohBcrw0EYby+G/NA52vRSN4N39dxHAIwQ==\"}},\"lightningcss-android-arm64@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-YK7/ClTt4kAK0vo6w3X+Pnm0D2cf2vPHbhOXdoNti1Ga0al1P4TBZhwjATvjNwLEBCnKvjJc2jQgHXH0NEwlAg==\"},\"engines\":{\"node\":\">= 
12.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"android\"]},\"lightningcss-darwin-arm64@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-RzeG9Ju5bag2Bv1/lwlVJvBE3q6TtXskdZLLCyfg5pt+HLz9BqlICO7LZM7VHNTTn/5PRhHFBSjk5lc4cmscPQ==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"lightningcss-darwin-x64@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-U+QsBp2m/s2wqpUYT/6wnlagdZbtZdndSmut/NJqlCcMLTWp5muCrID+K5UJ6jqD2BFshejCYXniPDbNh73V8w==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"lightningcss-freebsd-x64@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-JCTigedEksZk3tHTTthnMdVfGf61Fky8Ji2E4YjUTEQX14xiy/lTzXnu1vwiZe3bYe0q+SpsSH/CTeDXK6WHig==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"freebsd\"]},\"lightningcss-linux-arm-gnueabihf@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-x6rnnpRa2GL0zQOkt6rts3YDPzduLpWvwAF6EMhXFVZXD4tPrBkEFqzGowzCsIWsPjqSK+tyNEODUBXeeVHSkw==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"lightningcss-linux-arm64-gnu@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-0nnMyoyOLRJXfbMOilaSRcLH3Jw5z9HDNGfT/gwCPgaDjnx0i8w7vBzFLFR1f6CMLKF8gVbebmkUN3fa/kQJpQ==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"lightningcss-linux-arm64-musl@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-UpQkoenr4UJEzgVIYpI80lDFvRmPVg6oqboNHfoH4CQIfNA+HOrZ7Mo7KZP02dC6LjghPQJeBsvXhJod/wnIBg==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"lightningcss-linux-x64-gnu@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-V7Qr52IhZmdKPVr+Vtw8o+WLsQJYCTd8loIfpDaMRWGUZfBOYEJeyJIkqGIDMZPwPx24pUMfwSxxI8phr/MbOA==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"lightningcss-linux-x64-musl@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-bYcLp+Vb0awsiXg/80uCRezCYHNg1/l3mt0gzHnWV9XP1W5sKa5/TCdGWaR/zBM2PeF/HbsQv/j2URNOiVuxWg==\"},\"engines\":{\"node\":\">= 
12.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"lightningcss-win32-arm64-msvc@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-8SbC8BR40pS6baCM8sbtYDSwEVQd4JlFTOlaD3gWGHfThTcABnNDBda6eTZeqbofalIJhFx0qKzgHJmcPTnGdw==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"lightningcss-win32-x64-msvc@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-Amq9B/SoZYdDi1kFrojnoqPLxYhQ4Wo5XiL8EVJrVsB8ARoC1PWW6VGtT0WKCemjy8aC+louJnjS7U18x3b06Q==\"},\"engines\":{\"node\":\">= 12.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"lightningcss@1.32.0\":{\"resolution\":{\"integrity\":\"sha512-NXYBzinNrblfraPGyrbPoD19C1h9lfI/1mzgWYvXUTe414Gz/X1FD2XBZSZM7rRTrMA8JL3OtAaGifrIKhQ5yQ==\"},\"engines\":{\"node\":\">= 12.0.0\"}},\"lines-and-columns@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-cNOjgCnLB+FnvWWtyRTzmB3POJ+cXxTA81LoW7u8JdmhfXzriropYwpjShnz1QLLWsQwY7nIxoDmcPTwphDK9w==\"},\"engines\":{\"node\":\"^12.20.0 || ^14.13.1 || >=16.0.0\"}},\"loader-runner@4.3.2\":{\"resolution\":{\"integrity\":\"sha512-DFEqQ3ihfS9blba08cLfYf1NRAIEm+dDjic073DRDc3/JspI/8wYmtDsHwd3+4hwvdxSK7PGaElfTmm0awWJ4w==\"},\"engines\":{\"node\":\">=6.11.5\"}},\"local-pkg@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-arhlxbFRmoQHl33a0Zkle/YWlmNwoyt6QNZEIJcqNbdrsix5Lvc4HyyI3EnwxTYlZYc32EbYrQ8SzEZ7dqgg9A==\"},\"engines\":{\"node\":\">=14\"}},\"locate-app@2.5.0\":{\"resolution\":{\"integrity\":\"sha512-xIqbzPMBYArJRmPGUZD9CzV9wOqmVtQnaAn3wrj3s6WYW0bQvPI7x+sPYUGmDTYMHefVK//zc6HEYZ1qnxIK+Q==\"}},\"locate-path@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==\"},\"engines\":{\"node\":\">=8\"}},\"lodash-es@4.17.21\":{\"resolution\":{\"integrity\":\"sha512-mKnC+QJ9pWVzv+C4/U3rRsHapFfHvQFoFB92e52xeyGMcX6/OlIl78je1u8vePzYZSkkogMPJ2yjxxsb89cxyw==\"}},\"lodash.clonedeep@4.5.0\":{\"resolution\":{\"integrity\":\"sha512-H5ZhCF25riFd9uB5UCkVKo61m3S/xZk1x4wA6yp/L3RFP6Z/eHH1ymQcGLo7J3GMPfm0V/7m1tryHuGVxpqEBQ==\"}},\"lodash
.startcase@4.4.0\":{\"resolution\":{\"integrity\":\"sha512-+WKqsK294HMSc2jEbNgpHpd0JfIBhp7rEV4aqXWqFr6AlXov+SlcgB1Fv01y2kGe3Gc8nMW7VA0SrGuSkRfIEg==\"}},\"lodash.zip@4.2.0\":{\"resolution\":{\"integrity\":\"sha512-C7IOaBBK/0gMORRBd8OETNx3kmOkgIWIPvyDpZSCTwUrpYmgZwJkjZeOD8ww4xbOUOs4/attY+pciKvadNfFbg==\"}},\"lodash@4.18.1\":{\"resolution\":{\"integrity\":\"sha512-dMInicTPVE8d1e5otfwmmjlxkZoUpiVLwyeTdUsi/Caj/gfzzblBcCE5sRHV/AsjuCmxWrte2TNGSYuCeCq+0Q==\"}},\"log-symbols@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-8XPvpAA8uyhfteu8pIvQxpJZ7SYYdpUivZpGy6sFsBuKRY/7rQGavedeB8aK+Zkyq6upMFVL/9AW6vOYzfRyLg==\"},\"engines\":{\"node\":\">=10\"}},\"loglevel-plugin-prefix@0.8.4\":{\"resolution\":{\"integrity\":\"sha512-WpG9CcFAOjz/FtNht+QJeGpvVl/cdR6P0z6OcXSkr8wFJOsV2GRj2j10JLfjuA4aYkcKCNIEqRGCyTife9R8/g==\"}},\"loglevel@1.9.2\":{\"resolution\":{\"integrity\":\"sha512-HgMmCqIJSAKqo68l0rS2AanEWfkxaZ5wNiEFb5ggm08lDs9Xl2KxBlX3PTcaD2chBM1gXAYf491/M2Rv8Jwayg==\"},\"engines\":{\"node\":\">= 0.6.0\"}},\"long@5.3.2\":{\"resolution\":{\"integrity\":\"sha512-mNAgZ1GmyNhD7AuqnTG3/VQ26o760+ZYBPKjPvugO8+nLbYfX6TVpJPseBvopbdY+qpZ/lKUnmEc1LeZYS3QAA==\"}},\"longest-streak@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-9Ri+o0JYgehTaVBBDoMqIl8GXtbWg711O3srftcHhZ0dqnETqLaoIK0x17fUw9rFSlK/0NlsKe0Ahhyl5pXE2g==\"}},\"loupe@3.2.1\":{\"resolution\":{\"integrity\":\"sha512-CdzqowRJCeLU72bHvWqwRBBlLcMEtIvGrlvef74kMnV2AolS9Y8xUv1I0U/MNAWMhBlKIoyuEgoJ0t/bbwHbLQ==\"}},\"lowlight@3.3.0\":{\"resolution\":{\"integrity\":\"sha512-0JNhgFoPvP6U6lE/UdVsSq99tn6DhjjpAj5MxG49ewd2mOBVtwWYIT8ClyABhq198aXXODMU6Ox8DrGy/CpTZQ==\"}},\"lru-cache@10.4.3\":{\"resolution\":{\"integrity\":\"sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ==\"}},\"lru-cache@11.2.4\":{\"resolution\":{\"integrity\":\"sha512-B5Y16Jr9LB9dHVkh6ZevG+vAbOsNOYCX+sXvFWFu7B3Iz5mijW3zdbMyhsh8ANd2mSWBYdJgnqi+mL7/LrOPYg==\"},\"engines\":{\"node\":\"20 || 
>=22\"}},\"lru-cache@5.1.1\":{\"resolution\":{\"integrity\":\"sha512-KpNARQA3Iwv+jTA0utUVVbrh+Jlrr1Fv0e56GGzAFOXN7dk/FviaDW8LHmK52DlcH4WP2n6gI8vN1aesBFgo9w==\"}},\"lru-cache@7.18.3\":{\"resolution\":{\"integrity\":\"sha512-jumlc0BIUrS3qJGgIkWZsyfAM7NCWiBcCDhnd+3NNM5KbBmLTgHVfWBcg6W+rLUsIpzpERPsvwUP7CckAQSOoA==\"},\"engines\":{\"node\":\">=12\"}},\"lucide-react@0.544.0\":{\"resolution\":{\"integrity\":\"sha512-t5tS44bqd825zAW45UQxpG2CvcC4urOwn2TrwSH8u+MjeE+1NnWl6QqeQ/6NdjMqdOygyiT9p3Ev0p1NJykxjw==\"},\"peerDependencies\":{\"react\":\"^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0\"}},\"lz-string@1.5.0\":{\"resolution\":{\"integrity\":\"sha512-h5bgJWpxJNswbU7qCrV0tIKQCaS3blPDrqKWx+QxzuzL1zGUzij9XCWLrSLsJPu5t+eWA/ycetzYAO5IOMcWAQ==\"},\"hasBin\":true},\"magic-string@0.30.18\":{\"resolution\":{\"integrity\":\"sha512-yi8swmWbO17qHhwIBNeeZxTceJMeBvWJaId6dyvTSOwTipqeHhMhOrz6513r1sOKnpvQ7zkhlG8tPrpilwTxHQ==\"}},\"magic-string@0.30.21\":{\"resolution\":{\"integrity\":\"sha512-vd2F4YUyEXKGcLHoq+TEyCjxueSeHnFxyyjNp80yg0XV4vUhnDer/lvvlqM/arB5bXQN5K2/3oinyCRyx8T2CQ==\"}},\"magicast@0.3.5\":{\"resolution\":{\"integrity\":\"sha512-L0WhttDl+2BOsybvEOLK7fW3UA0OQ0IQ2d6Zl2x/a6vVRs3bAY0ECOSHHeL5jD+SbOpOCUEi0y1DgHEn9Qn1AQ==\"}},\"magicast@0.5.2\":{\"resolution\":{\"integrity\":\"sha512-E3ZJh4J3S9KfwdjZhe2afj6R9lGIN5Pher1pF39UGrXRqq/VDaGVIGN13BjHd2u8B61hArAGOnso7nBOouW3TQ==\"}},\"make-dir@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-hXdUTZYIVOt1Ex//jAQi+wTZZpUpwBj/0QsOzqegb3rGMMeJiSEu5xLHnYfBrRV4RH2+OCSOO95Is/7x1WJ4bw==\"},\"engines\":{\"node\":\">=10\"}},\"markdown-table@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-wiYz4+JrLyb/DqW2hkFJxP7Vd7JuTDm77fvbM8VfEQdmSMqcImWeeRbHwZjBjIFki/VaMK2BhFi7oUUZeM5bqw==\"}},\"marked@16.4.2\":{\"resolution\":{\"integrity\":\"sha512-TI3V8YYWvkVf3KJe1dRkpnjs68JUPyEa5vjKrp1XEEJUAOaQc+Qj+L1qWbPd0SJuAdQkFU0h73sXXqwDYxsiDA==\"},\"engines\":{\"node\":\">= 
20\"},\"hasBin\":true},\"math-intrinsics@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==\"},\"engines\":{\"node\":\">= 0.4\"}},\"mdast-util-find-and-replace@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-Tmd1Vg/m3Xz43afeNxDIhWRtFZgM2VLyaf4vSTYwudTyeuTneoL3qtWMA5jeLyz/O1vDJmmV4QuScFCA2tBPwg==\"}},\"mdast-util-from-markdown@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-uZhTV/8NBuw0WHkPTrCqDOl0zVe1BIng5ZtHoDk49ME1qqcjYmmLmOf0gELgcRMxN4w2iuIeVso5/6QymSrgmA==\"}},\"mdast-util-frontmatter@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-LRqI9+wdgC25P0URIJY9vwocIzCcksduHQ9OF2joxQoyTNVduwLAFUzjoopuRJbJAReaKrNQKAZKL3uCMugWJA==\"}},\"mdast-util-gfm-autolink-literal@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-5HVP2MKaP6L+G6YaxPNjuL0BPrq9orG3TsrZ9YXbA3vDw/ACI4MEsnoDpn6ZNm7GnZgtAcONJyPhOP8tNJQavQ==\"}},\"mdast-util-gfm-footnote@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-5jOT2boTSVkMnQ7LTrd6n/18kqwjmuYqo7JUPe+tRCY6O7dAuTFMtTPauYYrMPpox9hlN0uOx/FL8XvEfG9/mQ==\"}},\"mdast-util-gfm-strikethrough@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-mKKb915TF+OC5ptj5bJ7WFRPdYtuHv0yTRxK2tJvi+BDqbkiG7h7u/9SI89nRAYcmap2xHQL9D+QG/6wSrTtXg==\"}},\"mdast-util-gfm-table@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-78UEvebzz/rJIxLvE7ZtDd/vIQ0RHv+3Mh5DR96p7cS7HsBhYIICDBCu8csTNWNO6tBWfqXPWekRuj2FNOGOZg==\"}},\"mdast-util-gfm-task-list-item@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-IrtvNvjxC1o06taBAVJznEnkiHxLFTzgonUdy8hzFVeDun0uTjxxrRGVaNFqkU1wJR3RBPEfsxmU6jDWPofrTQ==\"}},\"mdast-util-gfm@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-dgQEX5Amaq+DuUqf26jJqSK9qgixgd6rYDHAv4aTBuA92cTknZlKpPfa86Z/s8Dj8xsAQpFfBmPUHWJBWqS4Bw==\"}},\"mdast-util-phrasing@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-TqICwyvJJpBwvGAMZjj4J2n0X8QWp21b9l0o7eXyVJ25YNWYbJDVIyD1bZXE6WtV6RmKJVYmQAKWa0zWOABz2w==\"}},\"mdast-util-to-hast@13.2.0\":{\"resolution\":{\"integrity\":\"sha512-QGYKEuUsYT9ykKBCMOEDLsU5JRObWQusAolFM
eko/tYPufNkRffBAQjIE+99jbA87xv6FgmjLtwjh9wBWajwAA==\"}},\"mdast-util-to-markdown@2.1.2\":{\"resolution\":{\"integrity\":\"sha512-xj68wMTvGXVOKonmog6LwyJKrYXZPvlwabaryTjLh9LuvovB/KAH+kvi8Gjj+7rJjsFi23nkUxRQv1KqSroMqA==\"}},\"mdast-util-to-string@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-0H44vDimn51F0YwvxSJSm0eCDOJTRlmN0R1yBh4HLj9wiV1Dn0QoXGbvFAWj2hSItVTlCmBF1hqKlIyUBVFLPg==\"}},\"mdn-data@2.12.2\":{\"resolution\":{\"integrity\":\"sha512-IEn+pegP1aManZuckezWCO+XZQDplx1366JoVhTpMpBB1sPey/SbveZQUosKiKiGYjg1wH4pMlNgXbCiYgihQA==\"}},\"merge-stream@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==\"}},\"merge2@1.4.1\":{\"resolution\":{\"integrity\":\"sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==\"},\"engines\":{\"node\":\">= 8\"}},\"mermaid@11.12.1\":{\"resolution\":{\"integrity\":\"sha512-UlIZrRariB11TY1RtTgUWp65tphtBv4CSq7vyS2ZZ2TgoMjs2nloq+wFqxiwcxlhHUvs7DPGgMjs2aeQxz5h9g==\"}},\"micromark-core-commonmark@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-FKjQKbxd1cibWMM1P9N+H8TwlgGgSkWZMmfuVucLCHaYqeSvJ0hFeHsIa65pA2nYbes0f8LDHPMrd9X7Ujxg9w==\"}},\"micromark-extension-frontmatter@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-C4AkuM3dA58cgZha7zVnuVxBhDsbttIMiytjgsM2XbHAB2faRVaHRle40558FBN+DJcrLNCoqG5mlrpdU4cRtg==\"}},\"micromark-extension-gfm-autolink-literal@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-oOg7knzhicgQ3t4QCjCWgTmfNhvQbDDnJeVu9v81r7NltNCVmhPy1fJRX27pISafdjL+SVc4d3l48Gb6pbRypw==\"}},\"micromark-extension-gfm-footnote@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-/yPhxI1ntnDNsiHtzLKYnE3vf9JZ6cAisqVDauhp4CEHxlb4uoOTxOCJ+9s51bIB8U1N1FJ1RXOKTIlD5B/gqw==\"}},\"micromark-extension-gfm-strikethrough@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-ADVjpOOkjz1hhkZLlBiYA9cR2Anf8F4HqZUO6e5eDcPQd0Txw5fxLzzxnEkSkfnD0wziSGiv7sYhk/ktvbf1uw==\"}},\"micromark-extension-gfm-table@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-t2OU/
dXXioARrC6yWfJ4hqB7rct14e8f7m0cbI5hUmDyyIlwv5vEtooptH8INkbLzOatzKuVbQmAYcbWoyz6Dg==\"}},\"micromark-extension-gfm-tagfilter@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-xHlTOmuCSotIA8TW1mDIM6X2O1SiX5P9IuDtqGonFhEK0qgRI4yeC6vMxEV2dgyr2TiD+2PQ10o+cOhdVAcwfg==\"}},\"micromark-extension-gfm-task-list-item@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-qIBZhqxqI6fjLDYFTBIa4eivDMnP+OZqsNwmQ3xNLE4Cxwc+zfQEfbs6tzAo2Hjq+bh6q5F+Z8/cksrLFYWQQw==\"}},\"micromark-extension-gfm@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-vsKArQsicm7t0z2GugkCKtZehqUm31oeGBV/KVSorWSy8ZlNAv7ytjFhvaryUiCUJYqs+NoE6AFhpQvBTM6Q4w==\"}},\"micromark-factory-destination@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-Xe6rDdJlkmbFRExpTOmRj9N3MaWmbAgdpSrBQvCFqhezUn4AHqJHbaEnfbVYYiexVSs//tqOdY/DxhjdCiJnIA==\"}},\"micromark-factory-label@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-VFMekyQExqIW7xIChcXn4ok29YE3rnuyveW3wZQWWqF4Nv9Wk5rgJ99KzPvHjkmPXF93FXIbBp6YdW3t71/7Vg==\"}},\"micromark-factory-space@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-zRkxjtBxxLd2Sc0d+fbnEunsTj46SWXgXciZmHq0kDYGnck/ZSGj9/wULTV95uoeYiK5hRXP2mJ98Uo4cq/LQg==\"}},\"micromark-factory-title@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-5bZ+3CjhAd9eChYTHsjy6TGxpOFSKgKKJPJxr293jTbfry2KDoWkhBb6TcPVB4NmzaPhMs1Frm9AZH7OD4Cjzw==\"}},\"micromark-factory-whitespace@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-Ob0nuZ3PKt/n0hORHyvoD9uZhr+Za8sFoP+OnMcnWK5lngSzALgQYKMr9RJVOWLqQYuyn6ulqGWSXdwf6F80lQ==\"}},\"micromark-util-character@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-wv8tdUTJ3thSFFFJKtpYKOYiGP2+v96Hvk4Tu8KpCAsTMs6yi+nVmGh1syvSCsaxz45J6Jbw+9DD6g97+NV67Q==\"}},\"micromark-util-chunked@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-QUNFEOPELfmvv+4xiNg2sRYeS/P84pTW0TCgP5zc9FpXetHY0ab7SxKyAQCNCc1eK0459uoLI1y5oO5Vc1dbhA==\"}},\"micromark-util-classify-character@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-K0kHzM6afW/MbeWYWLjoHQv1sgg2Q9EccHEDzSkxiP/EaagNzCm7T/WMKZ3rjMbvIpvBiZgwR3dKMygtA4mG1Q==\"}},\"micromark-util-combine-exte
nsions@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-OnAnH8Ujmy59JcyZw8JSbK9cGpdVY44NKgSM7E9Eh7DiLS2E9RNQf0dONaGDzEG9yjEl5hcqeIsj4hfRkLH/Bg==\"}},\"micromark-util-decode-numeric-character-reference@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-ccUbYk6CwVdkmCQMyr64dXz42EfHGkPQlBj5p7YVGzq8I7CtjXZJrubAYezf7Rp+bjPseiROqe7G6foFd+lEuw==\"}},\"micromark-util-decode-string@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-nDV/77Fj6eH1ynwscYTOsbK7rR//Uj0bZXBwJZRfaLEJ1iGBR6kIfNmlNqaqJf649EP0F3NWNdeJi03elllNUQ==\"}},\"micromark-util-encode@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-c3cVx2y4KqUnwopcO9b/SCdo2O67LwJJ/UyqGfbigahfegL9myoEFoDYZgkT7f36T0bLrM9hZTAaAyH+PCAXjw==\"}},\"micromark-util-html-tag-name@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-2cNEiYDhCWKI+Gs9T0Tiysk136SnR13hhO8yW6BGNyhOC4qYFnwF1nKfD3HFAIXA5c45RrIG1ub11GiXeYd1xA==\"}},\"micromark-util-normalize-identifier@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-sxPqmo70LyARJs0w2UclACPUUEqltCkJ6PhKdMIDuJ3gSf/Q+/GIe3WKl0Ijb/GyH9lOpUkRAO2wp0GVkLvS9Q==\"}},\"micromark-util-resolve-all@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-VdQyxFWFT2/FGJgwQnJYbe1jjQoNTS4RjglmSjTUlpUMa95Htx9NHeYW4rGDJzbjvCsl9eLjMQwGeElsqmzcHg==\"}},\"micromark-util-sanitize-uri@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-9N9IomZ/YuGGZZmQec1MbgxtlgougxTodVwDzzEouPKo3qFWvymFHWcnDi2vzV1ff6kas9ucW+o3yzJK9YB1AQ==\"}},\"micromark-util-subtokenize@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-VXJJuNxYWSoYL6AJ6OQECCFGhIU2GGHMw8tahogePBrjkG8aCCas3ibkp7RnVOSTClg2is05/R7maAhF1XyQMg==\"}},\"micromark-util-symbol@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-vs5t8Apaud9N28kgCrRUdEed4UJ+wWNvicHLPxCa9ENlYuAY31M0ETy5y1vA33YoNPDFTghEbnh6efaE8h4x0Q==\"}},\"micromark-util-types@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-534m2WhVTddrcKVepwmVEVnUAmtrx9bfIjNoQHRqfnvdaHQiFytEhJoTgpWJvDEXCO5gLTQh3wYC1PgOJA4NSQ==\"}},\"micromark@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-eBPdkcoCNvYcxQOAKAlceo5SNdzZWfF+FcSupREAzdAh9rRmE239CEQAiTwIgblwnoM8zzj
35sZ5ZwvSEOF6Kw==\"}},\"micromatch@4.0.8\":{\"resolution\":{\"integrity\":\"sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA==\"},\"engines\":{\"node\":\">=8.6\"}},\"mime-db@1.52.0\":{\"resolution\":{\"integrity\":\"sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==\"},\"engines\":{\"node\":\">= 0.6\"}},\"mime-types@2.1.35\":{\"resolution\":{\"integrity\":\"sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==\"},\"engines\":{\"node\":\">= 0.6\"}},\"mimic-fn@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg==\"},\"engines\":{\"node\":\">=6\"}},\"mimic-response@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-z0yWI+4FDrrweS8Zmt4Ej5HdJmky15+L2e6Wgn3+iK5fWzb6T3fhNFq2+MeTRb064c6Wr4N/wv0DzQTjNzHNGQ==\"},\"engines\":{\"node\":\">=10\"}},\"miniflare@4.20260504.0\":{\"resolution\":{\"integrity\":\"sha512-HeI/HLx+rbeo/UB4qb6NsNcFdUVD7xDzyCexZJTVtFMlfpfexUKEDmdeTRRpzeHrJseZFGua+v9JO1kfPublUw==\"},\"engines\":{\"node\":\">=22.0.0\"},\"hasBin\":true},\"minimatch@5.1.9\":{\"resolution\":{\"integrity\":\"sha512-7o1wEA2RyMP7Iu7GNba9vc0RWWGACJOCZBJX2GJWip0ikV+wcOsgVuY9uE8CPiyQhkGFSlhuSkZPavN7u1c2Fw==\"},\"engines\":{\"node\":\">=10\"}},\"minimatch@9.0.3\":{\"resolution\":{\"integrity\":\"sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg==\"},\"engines\":{\"node\":\">=16 || 14 >=14.17\"}},\"minimatch@9.0.5\":{\"resolution\":{\"integrity\":\"sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow==\"},\"engines\":{\"node\":\">=16 || 14 >=14.17\"}},\"minimatch@9.0.9\":{\"resolution\":{\"integrity\":\"sha512-OBwBN9AL4dqmETlpS2zasx+vTeWclWzkblfZk7KTA5j3jeOONz/tRCnZomUyvNg83wL5Zv9Ss6HMJXAgL8R2Yg==\"},\"engines\":{\"node\":\">=16 || 14 
>=14.17\"}},\"minimist@1.2.8\":{\"resolution\":{\"integrity\":\"sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA==\"}},\"minipass@3.3.6\":{\"resolution\":{\"integrity\":\"sha512-DxiNidxSEK+tHG6zOIklvNOwm3hvCrbUrdtzY74U6HKTJxvIDfOUL5W5P2Ghd3DTkhhKPYGqeNUIh5qcM4YBfw==\"},\"engines\":{\"node\":\">=8\"}},\"minipass@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-3FnjYuehv9k6ovOEbyOswadCDPX1piCfhV8ncmYtHOjuPwylVWsghTLo7rabjC3Rx5xD4HDx8Wm1xnMF7S5qFQ==\"},\"engines\":{\"node\":\">=8\"}},\"minipass@7.1.2\":{\"resolution\":{\"integrity\":\"sha512-qOOzS1cBTWYF4BH8fVePDBOO9iptMnGUEZwNc/cMWnTV2nVLZ7VoNWEPHkYczZA0pdoA7dl6e7FL659nX9S2aw==\"},\"engines\":{\"node\":\">=16 || 14 >=14.17\"}},\"minipass@7.1.3\":{\"resolution\":{\"integrity\":\"sha512-tEBHqDnIoM/1rXME1zgka9g6Q2lcoCkxHLuc7ODJ5BxbP5d4c2Z5cGgtXAku59200Cx7diuHTOYfSBD8n6mm8A==\"},\"engines\":{\"node\":\">=16 || 14 >=14.17\"}},\"minizlib@2.1.2\":{\"resolution\":{\"integrity\":\"sha512-bAxsR8BVfj60DWXHE3u30oHzfl4G7khkSuPW+qvpd7jFRHm7dLxOjUk1EHACJ/hxLY8phGJ0YhYHZo7jil7Qdg==\"},\"engines\":{\"node\":\">= 
8\"}},\"mkdirp-classic@0.5.3\":{\"resolution\":{\"integrity\":\"sha512-gKLcREMhtuZRwRAfqP3RFW+TK4JqApVBtOIftVgjuABpAtpxhPGaDcfvbhNvD0B8iD1oUr/txX35NjcaY6Ns/A==\"}},\"mkdirp@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw==\"},\"engines\":{\"node\":\">=10\"},\"hasBin\":true},\"mlly@1.8.0\":{\"resolution\":{\"integrity\":\"sha512-l8D9ODSRWLe2KHJSifWGwBqpTZXIXTeo8mlKjY+E2HAakaTeNpqAyBZ8GSqLzHgw4XmHmC8whvpjJNMbFZN7/g==\"}},\"mri@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-tzzskb3bG8LvYGFF/mDTpq3jpI6Q9wc3LEmBaghu+DdCssd1FakN7Bc0hVNmEyGq1bq3RgfkCb3cmQLpNPOroA==\"},\"engines\":{\"node\":\">=4\"}},\"mrmime@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-Y3wQdFg2Va6etvQ5I82yUhGdsKrcYox6p7FfL1LbK2J4V01F9TGlepTIhnK24t7koZibmg82KGglhA1XK5IsLQ==\"},\"engines\":{\"node\":\">=10\"}},\"ms@2.1.3\":{\"resolution\":{\"integrity\":\"sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==\"}},\"msw@2.10.2\":{\"resolution\":{\"integrity\":\"sha512-RCKM6IZseZQCWcSWlutdf590M8nVfRHG1ImwzOtwz8IYxgT4zhUO0rfTcTvDGiaFE0Rhcc+h43lcF3Jc9gFtwQ==\"},\"engines\":{\"node\":\">=18\"},\"hasBin\":true,\"peerDependencies\":{\"typescript\":\">= 4.8.x\"},\"peerDependenciesMeta\":{\"typescript\":{\"optional\":true}}},\"mute-stream@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-WWdIxpyjEn+FhQJQQv9aQAYlHoNVdzIzUySNV1gHUPDSdZJ3yZn7pAAbQcV7B56Mvu881q9FZV+0Vx2xC44VWA==\"},\"engines\":{\"node\":\"^18.17.0 || >=20.5.0\"}},\"nanoid@3.3.11\":{\"resolution\":{\"integrity\":\"sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w==\"},\"engines\":{\"node\":\"^10 || ^12 || ^13.7 || ^14 || 
>=15.0.1\"},\"hasBin\":true},\"napi-build-utils@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-GEbrYkbfF7MoNaoh2iGG84Mnf/WZfB0GdGEsM8wz7Expx/LlWf5U8t9nvJKXSp3qr5IsEbK04cBGhol/KwOsWA==\"}},\"neo-async@2.6.2\":{\"resolution\":{\"integrity\":\"sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw==\"}},\"netmask@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-eonl3sLUha+S1GzTPxychyhnUzKyeQkZ7jLjKrBagJgPla13F+uQ71HgpFefyHgqrjEbCPkDArxYsjY8/+gLKA==\"},\"engines\":{\"node\":\">= 0.4.0\"}},\"node-abi@3.89.0\":{\"resolution\":{\"integrity\":\"sha512-6u9UwL0HlAl21+agMN3YAMXcKByMqwGx+pq+P76vii5f7hTPtKDp08/H9py6DY+cfDw7kQNTGEj/rly3IgbNQA==\"},\"engines\":{\"node\":\">=10\"}},\"node-domexception@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ==\"},\"engines\":{\"node\":\">=10.5.0\"},\"deprecated\":\"Use your platform's native DOMException instead\"},\"node-fetch@3.3.2\":{\"resolution\":{\"integrity\":\"sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA==\"},\"engines\":{\"node\":\"^12.20.0 || ^14.13.1 || 
>=16.0.0\"}},\"node-machine-id@1.1.12\":{\"resolution\":{\"integrity\":\"sha512-QNABxbrPa3qEIfrE6GOJ7BYIuignnJw7iQ2YPbc3Nla1HzRJjXzZOiikfF8m7eAMfichLt3M4VgLOetqgDmgGQ==\"}},\"node-releases@2.0.19\":{\"resolution\":{\"integrity\":\"sha512-xxOWJsBKtzAq7DY0J+DTzuz58K8e7sJbdgwkbMWQe8UYB6ekmsQ45q0M/tJDsGaZmbC+l7n57UV8Hl5tHxO9uw==\"}},\"node-releases@2.0.38\":{\"resolution\":{\"integrity\":\"sha512-3qT/88Y3FbH/Kx4szpQQ4HzUbVrHPKTLVpVocKiLfoYvw9XSGOX2FmD2d6DrXbVYyAQTF2HeF6My8jmzx7/CRw==\"}},\"normalize-path@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"npm-run-path@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-S48WzZW777zhNIrn7gxOlISNAqi9ZC/uQFnRdbeIHhZhCA6UqpkOT8T1G7BvfdgP4Er8gF4sUbaS0i7QvIfCWw==\"},\"engines\":{\"node\":\">=8\"}},\"nth-check@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-lqjrjmaOoAnWfMmBPL+XNnynZh2+swxiX3WUE0s4yEHI6m+AwrK2UZOimIRl3X/4QctVqS8AiZjFqyOGrMXb/w==\"}},\"nwsapi@2.2.20\":{\"resolution\":{\"integrity\":\"sha512-/ieB+mDe4MrrKMT8z+mQL8klXydZWGR5Dowt4RAGKbJ3kIGEx3X4ljUo+6V73IXtUPWgfOlU5B9MlGxFO5T+cA==\"}},\"nx-cloud@19.1.0\":{\"resolution\":{\"integrity\":\"sha512-f24vd5/57/MFSXNMfkerdDiK0EvScGOKO71iOWgJNgI1xVweDRmOA/EfjnPMRd5m+pnoPs/4A7DzuwSW0jZVyw==\"},\"hasBin\":true},\"nx@21.4.1\":{\"resolution\":{\"integrity\":\"sha512-nD8NjJGYk5wcqiATzlsLauvyrSHV2S2YmM2HBIKqTTwVP2sey07MF3wDB9U2BwxIjboahiITQ6pfqFgB79TF2A==\"},\"hasBin\":true,\"peerDependencies\":{\"@swc-node/register\":\"^1.8.0\",\"@swc/core\":\"^1.3.85\"},\"peerDependenciesMeta\":{\"@swc-node/register\":{\"optional\":true},\"@swc/core\":{\"optional\":true}}},\"obug@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-uTqF9MuPraAQ+IsnPf366RG4cP9RtUi7MLO1N3KEc+wb0a6yKpeL0lmk2IB1jY5KHPAlTc6T/JRdC/YqxHNwkQ==\"}},\"once@1.4.0\":{\"resolution\":{\"integrity\":\"sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==\"}},\"onetime@5
.1.2\":{\"resolution\":{\"integrity\":\"sha512-kbpaSSGJTWdAY5KPVeMOKXSrPtr8C8C7wodJbcsd51jRnmD+GZu8Y0VoU6Dm5Z4vWr0Ig/1NKuWRKf7j5aaYSg==\"},\"engines\":{\"node\":\">=6\"}},\"oniguruma-parser@0.12.1\":{\"resolution\":{\"integrity\":\"sha512-8Unqkvk1RYc6yq2WBYRj4hdnsAxVze8i7iPfQr8e4uSP3tRv0rpZcbGUDvxfQQcdwHt/e9PrMvGCsa8OqG9X3w==\"}},\"oniguruma-to-es@4.3.3\":{\"resolution\":{\"integrity\":\"sha512-rPiZhzC3wXwE59YQMRDodUwwT9FZ9nNBwQQfsd1wfdtlKEyCdRV0avrTcSZ5xlIvGRVPd/cx6ZN45ECmS39xvg==\"}},\"open@8.4.2\":{\"resolution\":{\"integrity\":\"sha512-7x81NCL719oNbsq/3mh+hVrAWmFuEYUqrq/Iw3kUzH8ReypT9QQ0BLoJS7/G9k6N81XjW4qHWtjWwe/9eLy1EQ==\"},\"engines\":{\"node\":\">=12\"}},\"ora@5.3.0\":{\"resolution\":{\"integrity\":\"sha512-zAKMgGXUim0Jyd6CXK9lraBnD3H5yPGBPPOkC23a2BG6hsm4Zu6OQSjQuEtV0BHDf4aKHcUFvJiGRrFuW3MG8g==\"},\"engines\":{\"node\":\">=10\"}},\"outdent@0.5.0\":{\"resolution\":{\"integrity\":\"sha512-/jHxFIzoMXdqPzTaCpFzAAWhpkSjZPF4Vsn6jAfNpmbH/ymsmd7Qc6VE9BGn0L6YMj6uwpQLxCECpus4ukKS9Q==\"}},\"outvariant@1.4.3\":{\"resolution\":{\"integrity\":\"sha512-+Sl2UErvtsoajRDKCE5/dBz4DIvHXQQnAxtQTF04OJxY0+DyZXSo5P5Bb7XYWOh81syohlYL24hbDwxedPUJCA==\"}},\"oxlint@1.26.0\":{\"resolution\":{\"integrity\":\"sha512-KRpL+SMi07JQyggv5ldIF+wt2pnrKm8NLW0B+8bK+0HZsLmH9/qGA+qMWie5Vf7lnlMBllJmsuzHaKFEGY3rIA==\"},\"engines\":{\"node\":\"^20.19.0 || 
>=22.12.0\"},\"hasBin\":true,\"peerDependencies\":{\"oxlint-tsgolint\":\">=0.4.0\"},\"peerDependenciesMeta\":{\"oxlint-tsgolint\":{\"optional\":true}}},\"p-filter@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-ZBxxZ5sL2HghephhpGAQdoskxplTwr7ICaehZwLIlfL6acuVgZPm8yBNuRAFBGEqtD/hmUeq9eqLg2ys9Xr/yw==\"},\"engines\":{\"node\":\">=8\"}},\"p-limit@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==\"},\"engines\":{\"node\":\">=6\"}},\"p-locate@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==\"},\"engines\":{\"node\":\">=8\"}},\"p-map@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-y3b8Kpd8OAN444hxfBbFfj1FY/RjtTd8tzYwhUqNYXx0fXx2iX4maP4Qr6qhIKbQXI02wTLAda4fYUbDagTUFw==\"},\"engines\":{\"node\":\">=6\"}},\"p-map@7.0.4\":{\"resolution\":{\"integrity\":\"sha512-tkAQEw8ysMzmkhgw8k+1U/iPhWNhykKnSk4Rd5zLoPJCuJaGRPo6YposrZgaxHKzDHdDWWZvE/Sk7hsL2X/CpQ==\"},\"engines\":{\"node\":\">=18\"}},\"p-try@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ==\"},\"engines\":{\"node\":\">=6\"}},\"pac-proxy-agent@7.2.0\":{\"resolution\":{\"integrity\":\"sha512-TEB8ESquiLMc0lV8vcd5Ql/JAKAoyzHFXaStwjkzpOpC5Yv+pIzLfHvjTSdf3vpa2bMiUQrg9i6276yn8666aA==\"},\"engines\":{\"node\":\">= 14\"}},\"pac-resolver@7.0.1\":{\"resolution\":{\"integrity\":\"sha512-5NPgf87AT2STgwa2ntRMr45jTKrYBGkVU36yT0ig/n/GMAa3oPqhZfIQ2kMEimReg0+t9kZViDVZ83qfVUlckg==\"},\"engines\":{\"node\":\">= 
14\"}},\"package-json-from-dist@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw==\"}},\"package-manager-detector@0.2.11\":{\"resolution\":{\"integrity\":\"sha512-BEnLolu+yuz22S56CU1SUKq3XC3PkwD5wv4ikR4MfGvnRVcmzXR9DwSlW2fEamyTPyXHomBJRzgapeuBvRNzJQ==\"}},\"package-manager-detector@1.5.0\":{\"resolution\":{\"integrity\":\"sha512-uBj69dVlYe/+wxj8JOpr97XfsxH/eumMt6HqjNTmJDf/6NO9s+0uxeOneIz3AsPt2m6y9PqzDzd3ATcU17MNfw==\"}},\"pako@1.0.11\":{\"resolution\":{\"integrity\":\"sha512-4hLB8Py4zZce5s4yd9XzopqwVv/yGNhV1Bl8NTmCq1763HeK2+EwVTv+leGeL13Dnh2wfbqowVPXCIO0z4taYw==\"}},\"parse5-htmlparser2-tree-adapter@7.1.0\":{\"resolution\":{\"integrity\":\"sha512-ruw5xyKs6lrpo9x9rCZqZZnIUntICjQAd0Wsmp396Ul9lN/h+ifgVV1x1gZHi8euej6wTfpqX8j+BFQxF0NS/g==\"}},\"parse5-parser-stream@7.1.2\":{\"resolution\":{\"integrity\":\"sha512-JyeQc9iwFLn5TbvvqACIF/VXG6abODeB3Fwmv/TGdLk2LfbWkaySGY72at4+Ty7EkPZj854u4CrICqNk2qIbow==\"}},\"parse5@7.3.0\":{\"resolution\":{\"integrity\":\"sha512-IInvU7fabl34qmi9gY8XOVxhYyMyuH2xUNpb2q8/Y+7552KlejkRvqvD19nMoUW/uQGGbqNpA6Tufu5FL5BZgw==\"}},\"parse5@8.0.0\":{\"resolution\":{\"integrity\":\"sha512-9m4m5GSgXjL4AjumKzq1Fgfp3Z8rsvjRNbnkVwfu2ImRqE5D0LnY2QfDen18FSY9C573YU5XxSapdHZTZ2WolA==\"}},\"path-data-parser@0.1.0\":{\"resolution\":{\"integrity\":\"sha512-NOnmBpt5Y2RWbuv0LMzsayp3lVylAHLPUTut412ZA3l+C4uw4ZVkQbjShYCQ8TCpUMdPapr4YjUqLYD6v68j+w==\"}},\"path-exists@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==\"},\"engines\":{\"node\":\">=8\"}},\"path-key@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==\"},\"engines\":{\"node\":\">=8\"}},\"path-scurry@1.11.1\":{\"resolution\":{\"integrity\":\"sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA==\"},\"engines\":{\"nod
e\":\">=16 || 14 >=14.18\"}},\"path-to-regexp@6.3.0\":{\"resolution\":{\"integrity\":\"sha512-Yhpw4T9C6hPpgPeA28us07OJeqZ5EzQTkbfwuhsUg0c237RomFoETJgmp2sa3F/41gfLE6G5cqcYwznmeEeOlQ==\"}},\"path-type@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==\"},\"engines\":{\"node\":\">=8\"}},\"pathe@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w==\"}},\"pathval@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-//nshmD55c46FuFw26xV/xFAaB5HF9Xdap7HJBBnrKdAd6/GxDBaNA1870O79+9ueg61cZLSVc+OaFlfmObYVQ==\"},\"engines\":{\"node\":\">= 14.16\"}},\"pend@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-F3asv42UuXchdzt+xXqfW1OGlVBe+mxa2mqI0pg5yAHZPvFmY3Y6drSf/GQ1A86WgWEN9Kzh/WrgKa6iGcHXLg==\"}},\"picocolors@1.1.1\":{\"resolution\":{\"integrity\":\"sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==\"}},\"picomatch@2.3.1\":{\"resolution\":{\"integrity\":\"sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==\"},\"engines\":{\"node\":\">=8.6\"}},\"picomatch@4.0.3\":{\"resolution\":{\"integrity\":\"sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==\"},\"engines\":{\"node\":\">=12\"}},\"picomatch@4.0.4\":{\"resolution\":{\"integrity\":\"sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==\"},\"engines\":{\"node\":\">=12\"}},\"pify@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g==\"},\"engines\":{\"node\":\">=6\"}},\"pkg-types@1.3.1\":{\"resolution\":{\"integrity\":\"sha512-/Jm5M4RvtBFVkKWRu2BLUTNP8/M2a+UwuAX+ae4770q1qVGtfjG+WTCupoZixokjmHiry8uI+dlY8KXYV5HVVQ==\"}},\"pkg-types@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-SIqCzDRg0s9npO5XQ3tNZioRY1uK06lA41ynBC1YmFT
mnY6FjUjVt6s4LoADmwoig1qqD0oK8h1p/8mlMx8Oig==\"}},\"playwright-core@1.55.0\":{\"resolution\":{\"integrity\":\"sha512-GvZs4vU3U5ro2nZpeiwyb0zuFaqb9sUiAJuyrWpcGouD8y9/HLgGbNRjIph7zU9D3hnPaisMl9zG9CgFi/biIg==\"},\"engines\":{\"node\":\">=18\"},\"hasBin\":true},\"playwright@1.55.0\":{\"resolution\":{\"integrity\":\"sha512-sdCWStblvV1YU909Xqx0DhOjPZE4/5lJsIS84IfN9dAZfcl/CIZ5O8l3o0j7hPMjDvqoTF8ZUcc+i/GL5erstA==\"},\"engines\":{\"node\":\">=18\"},\"hasBin\":true},\"pngjs@7.0.0\":{\"resolution\":{\"integrity\":\"sha512-LKWqWJRhstyYo9pGvgor/ivk2w94eSjE3RGVuzLGlr3NmD8bf7RcYGze1mNdEHRP6TRP6rMuDHk5t44hnTRyow==\"},\"engines\":{\"node\":\">=14.19.0\"}},\"points-on-curve@0.2.0\":{\"resolution\":{\"integrity\":\"sha512-0mYKnYYe9ZcqMCWhUjItv/oHjvgEsfKvnUTg8sAtnHr3GVy7rGkXCb6d5cSyqrWqL4k81b9CPg3urd+T7aop3A==\"}},\"points-on-path@0.2.1\":{\"resolution\":{\"integrity\":\"sha512-25ClnWWuw7JbWZcgqY/gJ4FQWadKxGWk+3kR/7kD0tCaDtPPMj7oHu2ToLaVhfpnHrZzYby2w6tUA0eOIuUg8g==\"}},\"postcss@8.5.14\":{\"resolution\":{\"integrity\":\"sha512-SoSL4+OSEtR99LHFZQiJLkT59C5B1amGO1NzTwj7TT1qCUgUO6hxOvzkOYxD+vMrXBM3XJIKzokoERdqQq/Zmg==\"},\"engines\":{\"node\":\"^10 || ^12 || >=14\"}},\"postcss@8.5.6\":{\"resolution\":{\"integrity\":\"sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==\"},\"engines\":{\"node\":\"^10 || ^12 || >=14\"}},\"posthog-js@1.321.2\":{\"resolution\":{\"integrity\":\"sha512-h5852d9lYmSNjKWvjDkrmO9/awUU3jayNBEoEBUuMAdfDPc4yYYdxBJeDBxYnCFm6RjCLy4O+vmcwuCRC67EXA==\"}},\"preact@10.28.2\":{\"resolution\":{\"integrity\":\"sha512-lbteaWGzGHdlIuiJ0l2Jq454m6kcpI1zNje6d8MlGAFlYvP2GO4ibnat7P74Esfz4sPTdM6UxtTwh/d3pwM9JA==\"}},\"prebuild-install@7.1.3\":{\"resolution\":{\"integrity\":\"sha512-8Mf2cbV7x1cXPUILADGI3wuhfqWvtiLA1iclTDbFRZkgRQS0NqsPZphna9V+HyTEadheuPmjaJMsbzKQFOzLug==\"},\"engines\":{\"node\":\">=10\"},\"deprecated\":\"No longer maintained. 
Please contact the author of the relevant native addon; alternatives are available.\",\"hasBin\":true},\"prettier@2.8.8\":{\"resolution\":{\"integrity\":\"sha512-tdN8qQGvNjw4CHbY+XXk0JgCXn9QiF21a55rBe5LJAU+kDyC4WQn4+awm2Xfk2lQMk5fKup9XgzTZtGkjBdP9Q==\"},\"engines\":{\"node\":\">=10.13.0\"},\"hasBin\":true},\"prettier@3.6.2\":{\"resolution\":{\"integrity\":\"sha512-I7AIg5boAr5R0FFtJ6rCfD+LFsWHp81dolrFD8S79U9tb8Az2nGrJncnMSnys+bpQJfRUzqs9hnA81OAA3hCuQ==\"},\"engines\":{\"node\":\">=14\"},\"hasBin\":true},\"pretty-format@27.5.1\":{\"resolution\":{\"integrity\":\"sha512-Qb1gy5OrP5+zDf2Bvnzdl3jsTf1qXVMazbvCoKhtKqVs4/YK4ozX4gKQJJVyNe+cajNPn0KoC0MC3FUmaHWEmQ==\"},\"engines\":{\"node\":\"^10.13.0 || ^12.13.0 || ^14.15.0 || >=15.0.0\"}},\"pretty-format@30.0.5\":{\"resolution\":{\"integrity\":\"sha512-D1tKtYvByrBkFLe2wHJl2bwMJIiT8rW+XA+TiataH79/FszLQMrpGEvzUVkzPau7OCO0Qnrhpe87PqtOAIB8Yw==\"},\"engines\":{\"node\":\"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0\"}},\"process-nextick-args@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==\"}},\"process@0.11.10\":{\"resolution\":{\"integrity\":\"sha512-cdGef/drWFoydD1JsMzuFf8100nZl+GT+yacc2bEced5f9Rjk4z+WtFUTBu9PhOi9j/jfmBPu0mMEY4wIdAF8A==\"},\"engines\":{\"node\":\">= 
0.6.0\"}},\"progress@2.0.3\":{\"resolution\":{\"integrity\":\"sha512-7PiHtLll5LdnKIMw100I+8xJXR5gW2QwWYkT6iJva0bXitZKa/XMrSbdmg3r2Xnaidz9Qumd0VPaMrZlF9V9sA==\"},\"engines\":{\"node\":\">=0.4.0\"}},\"property-information@6.5.0\":{\"resolution\":{\"integrity\":\"sha512-PgTgs/BlvHxOu8QuEN7wi5A0OmXaBcHpmCSTehcs6Uuu9IkDIEo13Hy7n898RHfrQ49vKCoGeWZSaAK01nwVig==\"}},\"property-information@7.1.0\":{\"resolution\":{\"integrity\":\"sha512-TwEZ+X+yCJmYfL7TPUOcvBZ4QfoT5YenQiJuX//0th53DE6w0xxLEtfK3iyryQFddXuvkIk51EEgrJQ0WJkOmQ==\"}},\"protobufjs@7.5.4\":{\"resolution\":{\"integrity\":\"sha512-CvexbZtbov6jW2eXAvLukXjXUW1TzFaivC46BpWc/3BpcCysb5Vffu+B3XHMm8lVEuy2Mm4XGex8hBSg1yapPg==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"proxy-agent@6.5.0\":{\"resolution\":{\"integrity\":\"sha512-TmatMXdr2KlRiA2CyDu8GqR8EjahTG3aY3nXjdzFyoZbmB8hrBsTyMezhULIXKnC0jpfjlmiZ3+EaCzoInSu/A==\"},\"engines\":{\"node\":\">= 14\"}},\"proxy-from-env@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg==\"}},\"psl@1.15.0\":{\"resolution\":{\"integrity\":\"sha512-JZd3gMVBAVQkSs6HdNZo9Sdo0LNcQeMNP3CozBJb3JYC/QUYZTnKxP+f8oWRX4rHP5EurWxqAHTSwUCjlNKa1w==\"}},\"pump@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-todwxLMY7/heScKmntwQG8CXVkWUOdYxIvY2s0VWAAMh/nd8SoYiRaKjlr7+iCs984f2P8zvrfWcDDYVb73NfA==\"}},\"pump@3.0.4\":{\"resolution\":{\"integrity\":\"sha512-VS7sjc6KR7e1ukRFhQSY5LM2uBWAUPiOPa/A3mkKmiMwSmRFUITt0xuj+/lesgnCv+dPIEYlkzrcyXgquIHMcA==\"}},\"punycode@2.3.1\":{\"resolution\":{\"integrity\":\"sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg==\"},\"engines\":{\"node\":\">=6\"}},\"quansync@0.2.11\":{\"resolution\":{\"integrity\":\"sha512-AifT7QEbW9Nri4tAwR5M/uzpBuqfZf+zwaEM/QkzEjj7NBuFD2rBuy0K3dE+8wltbezDV7JMA0WfnCPYRSYbXA==\"}},\"query-selector-shadow-dom@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-lT5yCqEBgfoMYpf3F2xQRK7zEr1rhIIZuceDK6+xRkJQ4NMbHTwXqk4NkwDwQMNqXgG9r9fyHnzwNVs6zV5
KRw==\"}},\"querystringify@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-FIqgj2EUvTa7R50u0rGsyTftzjYmv/a3hO345bZNrqabNqjtgiDMgmo4mkUjd+nzU5oF3dClKqFIPUKybUyqoQ==\"}},\"queue-microtask@1.2.3\":{\"resolution\":{\"integrity\":\"sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==\"}},\"rc@1.2.8\":{\"resolution\":{\"integrity\":\"sha512-y3bGgqKj3QBdxLbLkomlohkvsA8gdAiUQlSBJnBhfn+BPxg4bc62d8TcBW15wavDfgexCgccckhcZvywyQYPOw==\"},\"hasBin\":true},\"react-dom@19.2.0\":{\"resolution\":{\"integrity\":\"sha512-UlbRu4cAiGaIewkPyiRGJk0imDN2T3JjieT6spoL2UeSf5od4n5LB/mQ4ejmxhCFT1tYe8IvaFulzynWovsEFQ==\"},\"peerDependencies\":{\"react\":\"^19.2.0\"}},\"react-is@17.0.2\":{\"resolution\":{\"integrity\":\"sha512-w2GsyukL62IJnlaff/nRegPQR94C/XXamvMWmSHRJ4y7Ts/4ocGRmTHvOs8PSE6pB3dWOrD/nueuU5sduBsQ4w==\"}},\"react-is@18.3.1\":{\"resolution\":{\"integrity\":\"sha512-/LLMVyas0ljjAtoYiPqYiL8VWXzUUdThrmU5+n20DZv+a+ClRoevUzw5JxU+Ieh5/c87ytoTBV9G1FiKfNJdmg==\"}},\"react@19.2.0\":{\"resolution\":{\"integrity\":\"sha512-tmbWg6W31tQLeB5cdIBOicJDJRR2KzXsV7uSK9iNfLWQ5bIZfxuPEHp7M8wiHyHnn0DD1i7w3Zmin0FtkrwoCQ==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"read-yaml-file@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-VIMnQi/Z4HT2Fxuwg5KrY174U1VdUIASQVWXXyqtNRtxSr9IYkn1rsI6Tb6HsrHCmB7gVpNwX6JxPTHcH6IoTA==\"},\"engines\":{\"node\":\">=6\"}},\"readable-stream@2.3.8\":{\"resolution\":{\"integrity\":\"sha512-8p0AUk4XODgIewSi0l8Epjs+EVnWiK7NoDIEGU0HhE7+ZyY8D1IMY7odu5lRrFXGg71L15KG8QrPmum45RTtdA==\"}},\"readable-stream@3.6.2\":{\"resolution\":{\"integrity\":\"sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA==\"},\"engines\":{\"node\":\">= 6\"}},\"readable-stream@4.7.0\":{\"resolution\":{\"integrity\":\"sha512-oIGGmcpTLwPga8Bn6/Z75SVaH1z5dUut2ibSyAMVhmUggWpmDn2dapB0n7f8nwaSiRtepAsfJyfXIO5DCVAODg==\"},\"engines\":{\"node\":\"^12.22.0 || ^14.17.0 || 
>=16.0.0\"}},\"readdir-glob@1.1.3\":{\"resolution\":{\"integrity\":\"sha512-v05I2k7xN8zXvPD9N+z/uhXPaj0sUFCe2rcWZIpBsqxfP7xXFQ0tipAd/wjj1YxWyWtUS5IDJpOG82JKt2EAVA==\"}},\"readdirp@3.6.0\":{\"resolution\":{\"integrity\":\"sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA==\"},\"engines\":{\"node\":\">=8.10.0\"}},\"regex-recursion@6.0.2\":{\"resolution\":{\"integrity\":\"sha512-0YCaSCq2VRIebiaUviZNs0cBz1kg5kVS2UKUfNIx8YVs1cN3AV7NTctO5FOKBA+UT2BPJIWZauYHPqJODG50cg==\"}},\"regex-utilities@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-8VhliFJAWRaUiVvREIiW2NXXTmHs4vMNnSzuJVhscgmGav3g9VDxLrQndI3dZZVVdp0ZO/5v0xmX516/7M9cng==\"}},\"regex@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-uorlqlzAKjKQZ5P+kTJr3eeJGSVroLKoHmquUj4zHWuR+hEyNqlXsSKlYYF5F4NI6nl7tWCs0apKJ0lmfsXAPA==\"}},\"rehype-autolink-headings@7.1.0\":{\"resolution\":{\"integrity\":\"sha512-rItO/pSdvnvsP4QRB1pmPiNHUskikqtPojZKJPPPAVx9Hj8i8TwMBhofrrAYRhYOOBZH9tgmG5lPqDLuIWPWmw==\"}},\"rehype-highlight@7.0.2\":{\"resolution\":{\"integrity\":\"sha512-k158pK7wdC2qL3M5NcZROZ2tR/l7zOzjxXd5VGdcfIyoijjQqpHd3JKtYSBDpDZ38UI2WJWuFAtkMDxmx5kstA==\"}},\"rehype-minify-whitespace@6.0.2\":{\"resolution\":{\"integrity\":\"sha512-Zk0pyQ06A3Lyxhe9vGtOtzz3Z0+qZ5+7icZ/PL/2x1SHPbKao5oB/g/rlc6BCTajqBb33JcOe71Ye1oFsuYbnw==\"}},\"rehype-parse@9.0.1\":{\"resolution\":{\"integrity\":\"sha512-ksCzCD0Fgfh7trPDxr2rSylbwq9iYDkSn8TCDmEJ49ljEUBxDVCzCHv7QNzZOfODanX4+bWQ4WZqLCRWYLfhag==\"}},\"rehype-raw@7.0.0\":{\"resolution\":{\"integrity\":\"sha512-/aE8hCfKlQeA8LmyeyQvQF3eBiLRGNlfBJEvWH7ivp9sBqs7TNqBL5X3v157rM4IFETqDnIOO+z5M/biZbo9Ww==\"}},\"rehype-remark@10.0.1\":{\"resolution\":{\"integrity\":\"sha512-EmDndlb5NVwXGfUa4c9GPK+lXeItTilLhE6ADSaQuHr4JUlKw9MidzGzx4HpqZrNCt6vnHmEifXQiiA+CEnjYQ==\"}},\"rehype-sanitize@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-CsnhKNsyI8Tub6L4sm5ZFsme4puGfc6pYylvXo1AeqaGbjOYyzNv3qZPwvs0oMJ39eryyeOdmxwUIo94IpEhqg==\"}},\"rehype-slug@6.0.0\":{\"resolution\":{\"integrity\":
\"sha512-lWyvf/jwu+oS5+hL5eClVd3hNdmwM1kAC0BUvEGD19pajQMIzcNUd/k9GsfQ+FfECvX+JE+e9/btsKH0EjJT6A==\"}},\"rehype-stringify@10.0.1\":{\"resolution\":{\"integrity\":\"sha512-k9ecfXHmIPuFVI61B9DeLPN0qFHfawM6RsuX48hoqlaKSF61RskNjSm1lI8PhBEM0MRdLxVVm4WmTqJQccH9mA==\"}},\"remark-frontmatter@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-XTFYvNASMe5iPN0719nPrdItC9aU0ssC4v14mH1BCi1u0n1gAocqcujWUrByftZTbLhRtiKRyjYTSIOcr69UVQ==\"}},\"remark-gfm@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-1quofZ2RQ9EWdeN34S79+KExV1764+wCUGop5CPL1WGdD0ocPpu91lzPGbwWMECpEpd42kJGQwzRfyov9j4yNg==\"}},\"remark-parse@11.0.0\":{\"resolution\":{\"integrity\":\"sha512-FCxlKLNGknS5ba/1lmpYijMUzX2esxW5xQqjWxw2eHFfS2MSdaHVINFmhjo+qN1WhZhNimq0dZATN9pH0IDrpA==\"}},\"remark-rehype@11.1.2\":{\"resolution\":{\"integrity\":\"sha512-Dh7l57ianaEoIpzbp0PC9UKAdCSVklD8E5Rpw7ETfbTl3FqcOOgq5q2LVDhgGCkaBv7p24JXikPdvhhmHvKMsw==\"}},\"remark-stringify@11.0.0\":{\"resolution\":{\"integrity\":\"sha512-1OSmLd3awB/t8qdoEOMazZkNsfVTeY4fTsgzcQFdXNq8ToTN4ZGwrMnlda4K6smTFKD+GRV6O48i6Z4iKgPPpw==\"}},\"require-directory@2.1.1\":{\"resolution\":{\"integrity\":\"sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"require-from-string@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"requires-port@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-KigOCHcocU3XODJxsu8i/j8T9tzT4adHiecwORRQ0ZZFcp7ahwXuRU1m+yuO90C5ZUyGeGfocHDI14M3L3yDAQ==\"}},\"resolve-from@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw==\"},\"engines\":{\"node\":\">=8\"}},\"resolve-pkg-maps@1.0.0\":{\"resolution\":{\"integrity\":\"sha512-seS2Tj26TBVOC2NIc2rOe2y2ZO7efxITtLZcGSOnHHNOQ7CkiUBfw0Iw2ck6xkIhPwLhKNLS8BO+hEpngQlqzw==\"}},\"resolve.exports@2.0.3\":{\"resolution\":{\"
integrity\":\"sha512-OcXjMsGdhL4XnbShKpAcSqPMzQoYkYyhbEaeSko47MjRP9NfEQMhZkXL1DoFlt9LWQn4YttrdnV6X2OiyzBi+A==\"},\"engines\":{\"node\":\">=10\"}},\"resq@1.11.0\":{\"resolution\":{\"integrity\":\"sha512-G10EBz+zAAy3zUd/CDoBbXRL6ia9kOo3xRHrMDsHljI0GDkhYlyjwoCx5+3eCC4swi1uCoZQhskuJkj7Gp57Bw==\"}},\"restore-cursor@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-l+sSefzHpj5qimhFSE5a8nufZYAM3sBSVMAPtYkmC+4EH2anSGaEMXSD0izRQbu9nfyQ9y5JrVmp7E8oZrUjvA==\"},\"engines\":{\"node\":\">=8\"}},\"reusify@1.0.4\":{\"resolution\":{\"integrity\":\"sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==\"},\"engines\":{\"iojs\":\">=1.0.0\",\"node\":\">=0.10.0\"}},\"rgb2hex@0.2.5\":{\"resolution\":{\"integrity\":\"sha512-22MOP1Rh7sAo1BZpDG6R5RFYzR2lYEgwq7HEmyW2qcsOqR2lQKmn+O//xV3YG/0rrhMC6KVX2hU+ZXuaw9a5bw==\"}},\"robust-predicates@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-IXgzBWvWQwE6PrDI05OvmXUIruQTcoMDzRsOd5CDvHCVLcLHMTSYvOK5Cm46kWqlV3yAbuSpBZdJ5oP5OUoStg==\"}},\"rolldown@1.0.0-rc.17\":{\"resolution\":{\"integrity\":\"sha512-ZrT53oAKrtA4+YtBWPQbtPOxIbVDbxT0orcYERKd63VJTF13zPcgXTvD4843L8pcsI7M6MErt8QtON6lrB9tyA==\"},\"engines\":{\"node\":\"^20.19.0 || 
>=22.12.0\"},\"hasBin\":true},\"rollup@4.53.2\":{\"resolution\":{\"integrity\":\"sha512-MHngMYwGJVi6Fmnk6ISmnk7JAHRNF0UkuucA0CUW3N3a4KnONPEZz+vUanQP/ZC/iY1Qkf3bwPWzyY84wEks1g==\"},\"engines\":{\"node\":\">=18.0.0\",\"npm\":\">=8.0.0\"},\"hasBin\":true},\"rou3@0.8.1\":{\"resolution\":{\"integrity\":\"sha512-ePa+XGk00/3HuCqrEnK3LxJW7I0SdNg6EFzKUJG73hMAdDcOUC/i/aSz7LSDwLrGr33kal/rqOGydzwl6U7zBA==\"}},\"roughjs@4.6.6\":{\"resolution\":{\"integrity\":\"sha512-ZUz/69+SYpFN/g/lUlo2FXcIjRkSu3nDarreVdGGndHEBJ6cXPdKguS8JGxwj5HA5xIbVKSmLgr5b3AWxtRfvQ==\"}},\"rrweb-cssom@0.8.0\":{\"resolution\":{\"integrity\":\"sha512-guoltQEx+9aMf2gDZ0s62EcV8lsXR+0w8915TC3ITdn2YueuNjdAYh/levpU9nFaoChh9RUS5ZdQMrKfVEN9tw==\"}},\"run-parallel@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==\"}},\"rw@1.3.3\":{\"resolution\":{\"integrity\":\"sha512-PdhdWy89SiZogBLaw42zdeqtRJ//zFd2PgQavcICDUgJT5oW10QCRKbJ6bg4r0/UY2M6BWd5tkxuGFRvCkgfHQ==\"}},\"rxjs@7.8.2\":{\"resolution\":{\"integrity\":\"sha512-dhKf903U/PQZY6boNNtAGdWbG85WAbjT/1xYoZIC7FAY0yWapOBQVsVrDl58W86//e1VpMNBtRV4MaXfdMySFA==\"}},\"safaridriver@0.1.2\":{\"resolution\":{\"integrity\":\"sha512-4R309+gWflJktzPXBQCobbWEHlzC4aK3a+Ov3tz2Ib2aBxiwd11phkdIBH1l0EO22x24CJMUQkpKFumRriCSRg==\"}},\"safe-buffer@5.1.2\":{\"resolution\":{\"integrity\":\"sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==\"}},\"safe-buffer@5.2.1\":{\"resolution\":{\"integrity\":\"sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==\"}},\"safer-buffer@2.1.2\":{\"resolution\":{\"integrity\":\"sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==\"}},\"sass-embedded-android-arm64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-+pq7a7AUpItNyPu61sRlP6G2A8pSPpyazASb+8AK2pVlFayCSPAEgpwpCE9A2/Xj86xJZeMizzKUHxM2CBCUxA==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm
64\"],\"os\":[\"android\"]},\"sass-embedded-android-arm@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-oHAPTboBHRZlDBhyRB6dvDKh4KvFs+DZibDHXbkSI6dBZxMTT+Yb2ivocHnctVGucKTLQeT7+OM5DjWHyynL/A==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm\"],\"os\":[\"android\"]},\"sass-embedded-android-riscv64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-HfJJWp/S6XSYvlGAqNdakeEMPOdhBkj2s2lN6SHnON54rahKem+z9pUbCriUJfM65Z90lakdGuOfidY61R9TYg==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"riscv64\"],\"os\":[\"android\"]},\"sass-embedded-android-x64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-BGPzq53VH5z5HN8de6jfMqJjnRe1E6sfnCWFd4pK+CAiuM7iw5Fx6BQZu3ikfI1l2GY0y6pRXzsVLdp/j4EKEA==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"android\"]},\"sass-embedded-darwin-arm64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-UCm3RL/tzMpG7DsubARsvGUNXC5pgfQvP+RRFJo9XPIi6elopY5B6H4m9dRYDpHA+scjVthdiDwkPYr9+S/KGw==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"darwin\"]},\"sass-embedded-darwin-x64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-D9WxtDY5VYtMApXRuhQK9VkPHB8R79NIIR6xxVlN2MIdEid/TZWi1MHNweieETXhWGrKhRKglwnHxxyKdJYMnA==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"darwin\"]},\"sass-embedded-linux-arm64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-2N4WW5LLsbtrWUJ7iTpjvhajGIbmDR18ZzYRywHdMLpfdPApuHPMDF5CYzHbS+LLx2UAx7CFKBnj5LLjY6eFgQ==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"sass-embedded-linux-arm@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-leP0t5U4r95dc90o8TCWfxNXwMAsQhpWxTkdtySDpngoqtTy3miMd7EYNYd1znI0FN1CBaUvbdCMbnbPwygDlA==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"sass-embedded-linux-musl-arm64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-nTyuaBX6U1A/cG7WJh0pKD1gY8hbg1m2SnzsyoFG+exQ0lBX/lwTLHq3nyhF+0atv7YYhYKbmfz+sjPP8CZ9lw==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"linux\"]},\"s
ass-embedded-linux-musl-arm@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-Z6gG2FiVEEdxYHRi2sS5VIYBmp17351bWtOCUZ/thBM66+e70yiN6Eyqjz80DjL8haRUegNQgy9ZJqsLAAmr9g==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm\"],\"os\":[\"linux\"]},\"sass-embedded-linux-musl-riscv64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-N6oul+qALO0SwGY8JW7H/Vs0oZIMrRMBM4GqX3AjM/6y8JsJRxkAwnfd0fDyK+aICMFarDqQonQNIx99gdTZqw==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"sass-embedded-linux-musl-x64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-K+FmWcdj/uyP8GiG9foxOCPfb5OAZG0uSVq80DKgVSC0U44AdGjvAvVZkrgFEcZ6cCqlNC2JfYmslB5iqdL7tg==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"sass-embedded-linux-riscv64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-g9nTbnD/3yhOaskeqeBQETbtfDQWRgsjHok6bn7DdAuwBsyrR3JlSFyqKc46pn9Xxd9SQQZU8AzM4IR+sY0A0w==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"riscv64\"],\"os\":[\"linux\"]},\"sass-embedded-linux-x64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-Ax7dKvzncyQzIl4r7012KCMBvJzOz4uwSNoyoM5IV6y5I1f5hEwI25+U4WfuTqdkv42taCMgpjZbh9ERr6JVMQ==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"linux\"]},\"sass-embedded-win32-arm64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-j96iJni50ZUsfD6tRxDQE2QSYQ2WrfHxeiyAXf41Kw0V4w5KYR/Sf6rCZQLMTUOHnD16qTMVpQi20LQSqf4WGg==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"arm64\"],\"os\":[\"win32\"]},\"sass-embedded-win32-x64@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-cS2j5ljdkQsb4PaORiClaVYynE9OAPZG/XjbOMxpQmjRIf7UroY4PEIH+Waf+y47PfXFX9SyxhYuw2NIKGbEng==\"},\"engines\":{\"node\":\">=14.0.0\"},\"cpu\":[\"x64\"],\"os\":[\"win32\"]},\"sass-embedded@1.89.2\":{\"resolution\":{\"integrity\":\"sha512-Ack2K8rc57kCFcYlf3HXpZEJFNUX8xd8DILldksREmYXQkRHI879yy8q4mRDJgrojkySMZqmmmW1NxrFxMsYaA==\"},\"engines\":{\"node\":\">=16.0.0\"},\"hasBin\":true},\"saxes@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-xAg7SOnEhrm5zI3
puOOKyy1OMcMlIJZYNJY7xLBwSze0UjhPLnWfj2GF2EpT0jmzaJKIWKHLsaSSajf35bcYnA==\"},\"engines\":{\"node\":\">=v12.22.7\"}},\"scheduler@0.27.0\":{\"resolution\":{\"integrity\":\"sha512-eNv+WrVbKu1f3vbYJT/xtiF5syA5HPIMtf9IgY/nKg0sWqzAUEvqY/xm7OcZc/qafLx/iO9FgOmeSAp4v5ti/Q==\"}},\"schema-utils@4.3.3\":{\"resolution\":{\"integrity\":\"sha512-eflK8wEtyOE6+hsaRVPxvUKYCpRgzLqDTb8krvAsRIwOGlHoSgYLgBXoubGgLd2fT41/OUYdb48v4k4WWHQurA==\"},\"engines\":{\"node\":\">= 10.13.0\"}},\"semver@6.3.1\":{\"resolution\":{\"integrity\":\"sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA==\"},\"hasBin\":true},\"semver@7.7.2\":{\"resolution\":{\"integrity\":\"sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA==\"},\"engines\":{\"node\":\">=10\"},\"hasBin\":true},\"semver@7.7.3\":{\"resolution\":{\"integrity\":\"sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q==\"},\"engines\":{\"node\":\">=10\"},\"hasBin\":true},\"semver@7.7.4\":{\"resolution\":{\"integrity\":\"sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA==\"},\"engines\":{\"node\":\">=10\"},\"hasBin\":true},\"serialize-error@11.0.3\":{\"resolution\":{\"integrity\":\"sha512-2G2y++21dhj2R7iHAdd0FIzjGwuKZld+7Pl/bTU6YIkrC2ZMbVUjm+luj6A6V34Rv9XfKJDKpTWu9W4Gse1D9g==\"},\"engines\":{\"node\":\">=14.16\"}},\"seroval-plugins@1.5.4\":{\"resolution\":{\"integrity\":\"sha512-S0xQPhUTefAhNvNWFg0c1J8qJArHt5KdtJ/cFAofo06KD1MVSeFWyl4iiu+ApDIuw0WhjpOfCdgConOfAnLgkw==\"},\"engines\":{\"node\":\">=10\"},\"peerDependencies\":{\"seroval\":\"^1.0\"}},\"seroval@1.5.4\":{\"resolution\":{\"integrity\":\"sha512-46uFvgrXTVxZcUorgSSRZ4y+ieqLLQRMlG4bnCZKW3qI6BZm7Rg4ntMW4p1mILEEBZWrFlcpp0AyIIlM6jD9iw==\"},\"engines\":{\"node\":\">=10\"}},\"setimmediate@1.0.5\":{\"resolution\":{\"integrity\":\"sha512-MATJdZp8sLqDl/68LfQmbP8zKPLQNV6BIZoIgrscFDQ+RsvK/BxeDQOgyxKKoh0y/8h3BqVFnCqQ/gd+reiIXA==\"}},\"sharp
@0.34.5\":{\"resolution\":{\"integrity\":\"sha512-Ou9I5Ft9WNcCbXrU9cMgPBcCK8LiwLqcbywW3t4oDV37n1pzpuNLsYiAV8eODnjbtQlSDwZ2cUEeQz4E54Hltg==\"},\"engines\":{\"node\":\"^18.17.0 || ^20.3.0 || >=21.0.0\"}},\"shebang-command@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==\"},\"engines\":{\"node\":\">=8\"}},\"shebang-regex@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==\"},\"engines\":{\"node\":\">=8\"}},\"shiki@3.15.0\":{\"resolution\":{\"integrity\":\"sha512-kLdkY6iV3dYbtPwS9KXU7mjfmDm25f5m0IPNFnaXO7TBPcvbUOY72PYXSuSqDzwp+vlH/d7MXpHlKO/x+QoLXw==\"}},\"siginfo@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-ybx0WO1/8bSBLEWXZvEd7gMW3Sn3JFlW3TvX1nREbDLRNQNaeNN8WK0meBwPdAaOI7TtRRRJn/Es1zhrrCHu7g==\"}},\"signal-exit@3.0.7\":{\"resolution\":{\"integrity\":\"sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ==\"}},\"signal-exit@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw==\"},\"engines\":{\"node\":\">=14\"}},\"simple-concat@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-cSFtAPtRhljv69IK0hTVZQ+OfE9nePi/rtJmw5UjHeVyVroEqJXP1sFztKUy1qU+xvz3u/sfYJLa947b7nAN2Q==\"}},\"simple-get@4.0.1\":{\"resolution\":{\"integrity\":\"sha512-brv7p5WgH0jmQJr1ZDDfKDOSeWWg+OVypG99A/5vYGPqJ6pxiaHLy8nxtFjBA7oMa01ebA9gfh1uMCFqOuXxvA==\"}},\"sirv@3.0.2\":{\"resolution\":{\"integrity\":\"sha512-2wcC/oGxHis/BoHkkPwldgiPSYcpZK3JU28WoMVv55yHJgcZ8rlXvuG9iZggz+sU1d4bRgIGASwyWqjxu3FM0g==\"},\"engines\":{\"node\":\">=18\"}},\"slash@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-g9Q1haeby36OSStwb4ntCGGGaKsaVSjQ68fBxoQcutl5fS1vuY18H3wSt3jFyFtrkx+Kz0V1G85A4MyAdDMi2Q==\"},\"engines\":{\"node\":\">=8\"}},\"smart-buffer@4.2.0\":{\"resolution\":{\"integrity\":\"sha512-94hK0Hh8rPqQl2xXc3HsaBoOXKV20MToPkcXvwbISWL
Es+64sBq5kFgn2kJDHb1Pry9yrP0dxrCI9RRci7RXKg==\"},\"engines\":{\"node\":\">= 6.0.0\",\"npm\":\">= 3.0.0\"}},\"socks-proxy-agent@8.0.5\":{\"resolution\":{\"integrity\":\"sha512-HehCEsotFqbPW9sJ8WVYB6UbmIMv7kUUORIF2Nncq4VQvBfNBLibW9YZR5dlYCSUhwcD628pRllm7n+E+YTzJw==\"},\"engines\":{\"node\":\">= 14\"}},\"socks@2.8.8\":{\"resolution\":{\"integrity\":\"sha512-NlGELfPrgX2f1TAAcz0WawlLn+0r3FyhhCRpFFK2CemXenPYvzMWWZINv3eDNo9ucdwme7oCHRY0Jnbs4aIkog==\"},\"engines\":{\"node\":\">= 10.0.0\",\"npm\":\">= 3.0.0\"}},\"source-map-js@1.2.1\":{\"resolution\":{\"integrity\":\"sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"source-map-support@0.5.21\":{\"resolution\":{\"integrity\":\"sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w==\"}},\"source-map@0.6.1\":{\"resolution\":{\"integrity\":\"sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"source-map@0.7.6\":{\"resolution\":{\"integrity\":\"sha512-i5uvt8C3ikiWeNZSVZNWcfZPItFQOsYTUAOkcUPGd8DqDy1uOUikjt5dG+uRlwyvR108Fb9DOd4GvXfT0N2/uQ==\"},\"engines\":{\"node\":\">= 12\"}},\"space-separated-tokens@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-PEGlAwrG8yXGXRjW32fGbg66JAlOAwbObuqVoJpv/mRgoWDQfgH1wDPvtzWyUSNAXBGSk8h755YDbbcEy3SH2Q==\"}},\"spacetrim@0.11.59\":{\"resolution\":{\"integrity\":\"sha512-lLYsktklSRKprreOm7NXReW8YiX2VBjbgmXYEziOoGf/qsJqAEACaDvoTtUOycwjpaSh+bT8eu0KrJn7UNxiCg==\"}},\"spawndamnit@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-MmnduQUuHCoFckZoWnXsTg7JaiLBJrKFj9UI2MbRPGaJeVpsLcVBu6P/IGZovziM/YBsellCmsprgNA+w0CzVg==\"}},\"split2@4.2.0\":{\"resolution\":{\"integrity\":\"sha512-UcjcJOWknrNkF6PLX83qcHM6KHgVKNkV62Y8a5uYDVv9ydGQVwAHMKqHdJje1VTWpljG0WYpCDhrCdAOYH4TWg==\"},\"engines\":{\"node\":\">= 
10.x\"}},\"sprintf-js@1.0.3\":{\"resolution\":{\"integrity\":\"sha512-D9cPgkvLlV3t3IzL0D0YLvGA9Ahk4PcvVwUbN0dSGr1aP0Nrt4AEnTUbuGvquEC0mA64Gqt1fzirlRs5ibXx8g==\"}},\"srvx@0.11.15\":{\"resolution\":{\"integrity\":\"sha512-iXsux0UcOjdvs0LCMa2Ws3WwcDUozA3JN3BquNXkaFPP7TpRqgunKdEgoZ/uwb1J6xaYHfxtz9Twlh6yzwM6Tg==\"},\"engines\":{\"node\":\">=20.16.0\"},\"hasBin\":true},\"stackback@0.0.2\":{\"resolution\":{\"integrity\":\"sha512-1XMJE5fQo1jGH6Y/7ebnwPOBEkIEnT4QF32d5R1+VXdXveM0IBMJt8zfaxX1P3QhVwrYe+576+jkANtSS2mBbw==\"}},\"statuses@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-DvEy55V3DB7uknRo+4iOGT5fP1slR8wQohVdknigZPMpMstaKJQWhwiYBACJE3Ul2pTnATihhBYnRhZQHGBiRw==\"},\"engines\":{\"node\":\">= 0.8\"}},\"std-env@3.10.0\":{\"resolution\":{\"integrity\":\"sha512-5GS12FdOZNliM5mAOxFRg7Ir0pWz8MdpYm6AY6VPkGpbA7ZzmbzNcBJQ0GPvvyWgcY7QAhCgf9Uy89I03faLkg==\"}},\"std-env@3.9.0\":{\"resolution\":{\"integrity\":\"sha512-UGvjygr6F6tpH7o2qyqR6QYpwraIjKSdtzyBdyytFOHmPZY917kwdwLG0RbOjWOnKmnm3PeHjaoLLMie7kPLQw==\"}},\"std-env@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-Rq7ybcX2RuC55r9oaPVEW7/xu3tj8u4GeBYHBWCychFtzMIr86A7e3PPEBPT37sHStKX3+TiX/Fr/ACmJLVlLQ==\"}},\"streamx@2.25.0\":{\"resolution\":{\"integrity\":\"sha512-0nQuG6jf1w+wddNEEXCF4nTg3LtufWINB5eFEN+5TNZW7KWJp6x87+JFL43vaAUPyCfH1wID+mNVyW6OHtFamg==\"}},\"strict-event-emitter@0.5.1\":{\"resolution\":{\"integrity\":\"sha512-vMgjE/GGEPEFnhFub6pa4FmJBRBVOLpIII2hvCZ8Kzb7K0hlHo7mQv6xYrBvCL2LtAIBwFUK8wvuJgTVSQ5MFQ==\"}},\"string-width@4.2.3\":{\"resolution\":{\"integrity\":\"sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==\"},\"engines\":{\"node\":\">=8\"}},\"string-width@5.1.2\":{\"resolution\":{\"integrity\":\"sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==\"},\"engines\":{\"node\":\">=12\"}},\"string_decoder@1.1.1\":{\"resolution\":{\"integrity\":\"sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfw
g==\"}},\"string_decoder@1.3.0\":{\"resolution\":{\"integrity\":\"sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA==\"}},\"stringify-entities@4.0.4\":{\"resolution\":{\"integrity\":\"sha512-IwfBptatlO+QCJUo19AqvrPNqlVMpW9YEL2LIVY+Rpv2qsjCGxaDLNRgeGsQWJhfItebuJhsGSLjaBbNSQ+ieg==\"}},\"strip-ansi@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==\"},\"engines\":{\"node\":\">=8\"}},\"strip-ansi@7.1.2\":{\"resolution\":{\"integrity\":\"sha512-gmBGslpoQJtgnMAvOVqGZpEz9dyoKTCzy2nfz/n8aIFhN/jCE/rCmcxabB6jOOHV+0WNnylOxaxBQPSvcWklhA==\"},\"engines\":{\"node\":\">=12\"}},\"strip-ansi@7.2.0\":{\"resolution\":{\"integrity\":\"sha512-yDPMNjp4WyfYBkHnjIRLfca1i6KMyGCtsVgoKe/z1+6vukgaENdgGBZt+ZmKPc4gavvEZ5OgHfHdrazhgNyG7w==\"},\"engines\":{\"node\":\">=12\"}},\"strip-bom@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-vavAMRXOgBVNF6nyEEmL3DBK19iRpDcoIwW+swQ+CbGiu7lju6t+JklA1MHweoWtadgt4ISVUsXLyDq34ddcwA==\"},\"engines\":{\"node\":\">=4\"}},\"strip-json-comments@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-4gB8na07fecVVkOI6Rs4e7T6NOTki5EmL7TUduTs6bu3EdnSycntVJ4re8kgZA+wx9IueI2Y11bfbgwtzuE0KQ==\"},\"engines\":{\"node\":\">=0.10.0\"}},\"strip-literal@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-TcccoMhJOM3OebGhSBEmp3UZ2SfDMZUEBdRA/9ynfLi8yYajyWX3JiXArcJt4Umh4vISpspkQIY8ZZoCqjbviA==\"}},\"strnum@1.1.2\":{\"resolution\":{\"integrity\":\"sha512-vrN+B7DBIoTTZjnPNewwhx6cBA/H+IS7rfW68n7XxC1y7uoiGQBxaKzqucGUgavX15dJgiGztLJ8vxuEzwqBdA==\"}},\"stylis@4.3.6\":{\"resolution\":{\"integrity\":\"sha512-yQ3rwFWRfwNUY7H5vpU0wfdkNSnvnJinhF9830Swlaxl03zsOjCfmX0ugac+3LtK0lYSgwL/KXc8oYL3mG4YFQ==\"}},\"supports-color@10.2.2\":{\"resolution\":{\"integrity\":\"sha512-SS+jx45GF1QjgEXQx4NJZV9ImqmO2NPz5FNsIHrsDjh2YsHnawpan7SNQ1o8NuhrbHZy9AZhIoCUiCeaW/C80g==\"},\"engines\":{\"node\":\">=18\"}},\"supports-color@7.2.0\":{\"resolution\":{\"integrity\":\"sha512-qpCAvRl9stuOHveKsn7
HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==\"},\"engines\":{\"node\":\">=8\"}},\"supports-color@8.1.1\":{\"resolution\":{\"integrity\":\"sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q==\"},\"engines\":{\"node\":\">=10\"}},\"symbol-tree@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-9QNk5KwDF+Bvz+PyObkmSYjI5ksVUYtjW7AU22r2NKcfLJcXp96hkDWU3+XndOsUb+AQ9QhfzfCT2O+CNWT5Tw==\"}},\"sync-child-process@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-8lD+t2KrrScJ/7KXCSyfhT3/hRq78rC0wBFqNJXv3mZyn6hW2ypM05JmlSvtqRbeq6jqA94oHbxAr2vYsJ8vDA==\"},\"engines\":{\"node\":\">=16.0.0\"}},\"sync-message-port@1.2.0\":{\"resolution\":{\"integrity\":\"sha512-gAQ9qrUN/UCypHtGFbbe7Rc/f9bzO88IwrG8TDo/aMKAApKyD6E3W4Cm0EfhfBb6Z6SKt59tTCTfD+n1xmAvMg==\"},\"engines\":{\"node\":\">=16.0.0\"}},\"tailwindcss@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-HhKppgO81FQof5m6TEnuBWCZGgfRAWbaeOaGT00KOy/Pf/j6oUihdvBpA7ltCeAvZpFhW3j0PTclkxsd4IXYDA==\"}},\"tapable@2.3.3\":{\"resolution\":{\"integrity\":\"sha512-uxc/zpqFg6x7C8vOE7lh6Lbda8eEL9zmVm/PLeTPBRhh1xCgdWaQ+J1CUieGpIfm2HdtsUpRv+HshiasBMcc6A==\"},\"engines\":{\"node\":\">=6\"}},\"tar-fs@2.1.4\":{\"resolution\":{\"integrity\":\"sha512-mDAjwmZdh7LTT6pNleZ05Yt65HC3E+NiQzl672vQG38jIrehtJk/J3mNwIg+vShQPcLF/LV7CMnDW6vjj6sfYQ==\"}},\"tar-fs@3.1.2\":{\"resolution\":{\"integrity\":\"sha512-QGxxTxxyleAdyM3kpFs14ymbYmNFrfY+pHj7Z8FgtbZ7w2//VAgLMac7sT6nRpIHjppXO2AwwEOg0bPFVRcmXw==\"}},\"tar-stream@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-ujeqbceABgwMZxEJnk2HDY2DlnUZ+9oEcb1KzTVfYHio0UE6dG71n60d8D2I4qNvleWrrXpmjpt7vZeF1LnMZQ==\"},\"engines\":{\"node\":\">=6\"}},\"tar-stream@3.2.0\":{\"resolution\":{\"integrity\":\"sha512-ojzvCvVaNp6aOTFmG7jaRD0meowIAuPc3cMMhSgKiVWws1GyHbGd/xvnyuRKcKlMpt3qvxx6r0hreCNITP9hIg==\"}},\"tar@6.2.1\":{\"resolution\":{\"integrity\":\"sha512-DZ4yORTwrbTj/7MZYq2w+/ZFdI6OZ/f9SFHR+71gIVUZhOQPHzVCLpvRnPgyaMpfWxxk/4ONva3GQSyNIKRv6A==\"},\"engines\":{\"node\":\">=10\"},\
"deprecated\":\"Old versions of tar are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me\"},\"teex@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-eYE6iEI62Ni1H8oIa7KlDU6uQBtqr4Eajni3wX7rpfXD8ysFx8z0+dri+KWEPWpBsxXfxu58x/0jvTVT1ekOSg==\"}},\"term-size@2.2.1\":{\"resolution\":{\"integrity\":\"sha512-wK0Ri4fOGjv/XPy8SBHZChl8CM7uMc5VML7SqiQ0zG7+J5Vr+RMQDoHa2CNT6KHUnTGIXH34UDMkPzAUyapBZg==\"},\"engines\":{\"node\":\">=8\"}},\"terser-webpack-plugin@5.5.0\":{\"resolution\":{\"integrity\":\"sha512-UYhptBwhWvfIjKd/UuFo6D8uq9xpGLDK+z8EDsj/zWhrTaH34cKEbrkMKfV5YWqGBvAYA3tlzZbs2R+qYrbQJA==\"},\"engines\":{\"node\":\">= 10.13.0\"},\"peerDependencies\":{\"@swc/core\":\"*\",\"esbuild\":\"*\",\"uglify-js\":\"*\",\"webpack\":\"^5.1.0\"},\"peerDependenciesMeta\":{\"@swc/core\":{\"optional\":true},\"esbuild\":{\"optional\":true},\"uglify-js\":{\"optional\":true}}},\"terser@5.36.0\":{\"resolution\":{\"integrity\":\"sha512-IYV9eNMuFAV4THUspIRXkLakHnV6XO7FEdtKjf/mDyrnqUg9LnlOn6/RwRvM9SZjR4GUq8Nk8zj67FzVARr74w==\"},\"engines\":{\"node\":\">=10\"},\"hasBin\":true},\"test-exclude@7.0.1\":{\"resolution\":{\"integrity\":\"sha512-pFYqmTw68LXVjeWJMST4+borgQP2AyMNbg1BpZh9LbyhUeNkeaPF9gzfPGUAnSMV3qPYdWUwDIjjCLiSDOl7vg==\"},\"engines\":{\"node\":\">=18\"}},\"text-decoder@1.2.7\":{\"resolution\":{\"integrity\":\"sha512-vlLytXkeP4xvEq2otHeJfSQIRyWxo/oZGEbXrtEEF9Hnmrdly59sUbzZ/QgyWuLYHctCHxFF4tRQZNQ9k60ExQ==\"}},\"tinybench@2.9.0\":{\"resolution\":{\"integrity\":\"sha512-0+DUvqWMValLmha6lr4kD8iAMK1HzV0/aKnCtWb9v9641TnP/MFb7Pc2bxoxQjTXAErryXVgUOfv2YqNllqGeg==\"}},\"tinyexec@0.3.2\":{\"resolution\":{\"integrity\":\"sha512-KQQR9yN7R5+OSwaK0XQoj22pwHoTlgYqmUscPYoknOoWCWfj/5/ABTMRi69FrKU5ffPVh5QcFikpWJI/P1ocHA==\"}},\"tinyexec@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-W/KYk+NFhkmsYpuHq5JykngiOCnxeVL8v8dFnqxSD8qEEdRfXk1SDM6JzNqcERbcG
Yj9tMrDQBYV9cjgnunFIg==\"},\"engines\":{\"node\":\">=18\"}},\"tinyglobby@0.2.14\":{\"resolution\":{\"integrity\":\"sha512-tX5e7OM1HnYr2+a2C/4V0htOcSQcoSTH9KgJnVvNm5zm/cyEWKJ7j7YutsH9CxMdtOkkLFy2AHrMci9IM8IPZQ==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"tinyglobby@0.2.15\":{\"resolution\":{\"integrity\":\"sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"tinyglobby@0.2.16\":{\"resolution\":{\"integrity\":\"sha512-pn99VhoACYR8nFHhxqix+uvsbXineAasWm5ojXoN8xEwK5Kd3/TrhNn1wByuD52UxWRLy8pu+kRMniEi6Eq9Zg==\"},\"engines\":{\"node\":\">=12.0.0\"}},\"tinypool@1.1.1\":{\"resolution\":{\"integrity\":\"sha512-Zba82s87IFq9A9XmjiX5uZA/ARWDrB03OHlq+Vw1fSdt0I+4/Kutwy8BP4Y/y/aORMo61FQ0vIb5j44vSo5Pkg==\"},\"engines\":{\"node\":\"^18.0.0 || >=20.0.0\"}},\"tinyrainbow@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-op4nsTR47R6p0vMUUoYl/a+ljLFVtlfaXkLQmqfLR1qHma1h/ysYk4hEXZ880bf2CYgTskvTa/e196Vd5dDQXw==\"},\"engines\":{\"node\":\">=14.0.0\"}},\"tinyrainbow@3.0.3\":{\"resolution\":{\"integrity\":\"sha512-PSkbLUoxOFRzJYjjxHJt9xro7D+iilgMX/C9lawzVuYiIdcihh9DXmVibBe8lmcFrRi/VzlPjBxbN7rH24q8/Q==\"},\"engines\":{\"node\":\">=14.0.0\"}},\"tinyrainbow@3.1.0\":{\"resolution\":{\"integrity\":\"sha512-Bf+ILmBgretUrdJxzXM0SgXLZ3XfiaUuOj/IKQHuTXip+05Xn+uyEYdVg0kYDipTBcLrCVyUzAPz7QmArb0mmw==\"},\"engines\":{\"node\":\">=14.0.0\"}},\"tinyspy@4.0.3\":{\"resolution\":{\"integrity\":\"sha512-t2T/WLB2WRgZ9EpE4jgPJ9w+i66UZfDc8wHh0xrwiRNN+UwH98GIJkTeZqX9rg0i0ptwzqW+uYeIF0T4F8LR7A==\"},\"engines\":{\"node\":\">=14.0.0\"}},\"tldts-core@6.1.52\":{\"resolution\":{\"integrity\":\"sha512-j4OxQI5rc1Ve/4m/9o2WhWSC4jGc4uVbCINdOEJRAraCi0YqTqgMcxUx7DbmuP0G3PCixoof/RZB0Q5Kh9tagw==\"}},\"tldts-core@7.0.19\":{\"resolution\":{\"integrity\":\"sha512-lJX2dEWx0SGH4O6p+7FPwYmJ/bu1JbcGJ8RLaG9b7liIgZ85itUVEPbMtWRVrde/0fnDPEPHW10ZsKW3kVsE9A==\"}},\"tldts@6.1.52\":{\"resolution\":{\"integrity\":\"sha512-fgrDJXDjbAverY6XnIt0lNfv8A0cf7maTEaZxNykL
GsLG7XP+5xhjBTrt/ieAsFjAlZ+G5nmXomLcZDkxXnDzw==\"},\"hasBin\":true},\"tldts@7.0.19\":{\"resolution\":{\"integrity\":\"sha512-8PWx8tvC4jDB39BQw1m4x8y5MH1BcQ5xHeL2n7UVFulMPH/3Q0uiamahFJ3lXA0zO2SUyRXuVVbWSDmstlt9YA==\"},\"hasBin\":true},\"tmp@0.2.5\":{\"resolution\":{\"integrity\":\"sha512-voyz6MApa1rQGUxT3E+BK7/ROe8itEx7vD8/HEvt4xwXucvQ5G5oeEiHkmHZJuBO21RpOf+YYm9MOivj709jow==\"},\"engines\":{\"node\":\">=14.14\"}},\"to-regex-range@5.0.1\":{\"resolution\":{\"integrity\":\"sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==\"},\"engines\":{\"node\":\">=8.0\"}},\"totalist@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-sf4i37nQ2LBx4m3wB74y+ubopq6W/dIzXg0FDGjsYnZHVa1Da8FH853wlL2gtUhg+xJXjfk3kUZS3BRoQeoQBQ==\"},\"engines\":{\"node\":\">=6\"}},\"tough-cookie@4.1.4\":{\"resolution\":{\"integrity\":\"sha512-Loo5UUvLD9ScZ6jh8beX1T6sO1w2/MpCRpEP7V280GKMVUQ0Jzar2U3UJPsrdbziLEMMhu3Ujnq//rhiFuIeag==\"},\"engines\":{\"node\":\">=6\"}},\"tough-cookie@5.1.2\":{\"resolution\":{\"integrity\":\"sha512-FVDYdxtnj0G6Qm/DhNPSb8Ju59ULcup3tuJxkFb5K8Bv2pUXILbf0xZWU8PX8Ov19OXljbUyveOFwRMwkXzO+A==\"},\"engines\":{\"node\":\">=16\"}},\"tough-cookie@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-kXuRi1mtaKMrsLUxz3sQYvVl37B0Ns6MzfrtV5DvJceE9bPyspOqk9xxv7XbZWcfLWbFmm997vl83qUWVJA64w==\"},\"engines\":{\"node\":\">=16\"}},\"tr46@5.1.1\":{\"resolution\":{\"integrity\":\"sha512-hdF5ZgjTqgAntKkklYw0R03MG2x/bSzTtkxmIRw/sTNV8YXsCJ1tfLAX23lhxhHJlEf3CRCOCGGWw3vI3GaSPw==\"},\"engines\":{\"node\":\">=18\"}},\"tr46@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-bLVMLPtstlZ4iMQHpFHTR7GAGj2jxi8Dg0s2h2MafAE4uSWF98FC/3MomU51iQAMf8/qDUbKWf5GxuvvVcXEhw==\"},\"engines\":{\"node\":\">=20\"}},\"tree-kill@1.2.2\":{\"resolution\":{\"integrity\":\"sha512-L0Orpi8qGpRG//Nd+H90vFB+3iHnue1zSSGmNOOCh1GLJ7rUKVwV2HvijphGQS2UmhUZewS9VgvxYIdgr+fG1A==\"},\"hasBin\":true},\"trim-lines@3.0.1\":{\"resolution\":{\"integrity\":\"sha512-kRj8B+YHZCc9kQYdWfJB2/oUl9rA99qbowYYBtr4ui4mZyAQ2JpvVBd/6
U2YloATfqBhBTSMhTpgBHtU0Mf3Rg==\"}},\"trim-trailing-lines@2.1.0\":{\"resolution\":{\"integrity\":\"sha512-5UR5Biq4VlVOtzqkm2AZlgvSlDJtME46uV0br0gENbwN4l5+mMKT4b9gJKqWtuL2zAIqajGJGuvbCbcAJUZqBg==\"}},\"trough@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-tmMpK00BjZiUyVyvrBK7knerNgmgvcV/KLVyuma/SC+TQN167GrMRciANTz09+k3zW8L8t60jWO1GpfkZdjTaw==\"}},\"ts-algebra@2.0.0\":{\"resolution\":{\"integrity\":\"sha512-FPAhNPFMrkwz76P7cdjdmiShwMynZYN6SgOujD1urY4oNm80Ou9oMdmbR45LotcKOXoy7wSmHkRFE6Mxbrhefw==\"}},\"ts-dedent@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-q5W7tVM71e2xjHZTlgfTDoPF/SmqKG5hddq9SzR49CH2hayqRKJtQ4mtRlSxKaJlR/+9rEM+mnBHf7I2/BQcpQ==\"},\"engines\":{\"node\":\">=6.10\"}},\"tsconfig-paths@4.2.0\":{\"resolution\":{\"integrity\":\"sha512-NoZ4roiN7LnbKn9QqE1amc9DJfzvZXxF4xDavcOWt1BPkdx+m+0gJuPM+S0vCe7zTJMYUP0R8pO2XMr+Y8oLIg==\"},\"engines\":{\"node\":\">=6\"}},\"tslib@2.8.1\":{\"resolution\":{\"integrity\":\"sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==\"}},\"tsx@4.20.5\":{\"resolution\":{\"integrity\":\"sha512-+wKjMNU9w/EaQayHXb7WA7ZaHY6hN8WgfvHNQ3t1PnU91/7O8TcTnIhCDYTZwnt8JsO9IBqZ30Ln1r7pPF52Aw==\"},\"engines\":{\"node\":\">=18.0.0\"},\"hasBin\":true},\"tunnel-agent@0.6.0\":{\"resolution\":{\"integrity\":\"sha512-McnNiV1l8RYeY8tBgEpuodCC1mLUdbSN+CYBL7kJsJNInOP8UjDDEwdk6Mw60vdLLrr5NHKZhMAOSrR2NZuQ+w==\"}},\"type-fest@2.19.0\":{\"resolution\":{\"integrity\":\"sha512-RAH822pAdBgcNMAfWnCBU3CFZcfZ/i1eZjwFU/dsLKumyuuP3niueg2UAukXYF0E2AAoc82ZSSf9J0WQBinzHA==\"},\"engines\":{\"node\":\">=12.20\"}},\"type-fest@4.26.0\":{\"resolution\":{\"integrity\":\"sha512-OduNjVJsFbifKb57UqZ2EMP1i4u64Xwow3NYXUtBbD4vIwJdQd4+xl8YDou1dlm4DVrtwT/7Ky8z8WyCULVfxw==\"},\"engines\":{\"node\":\">=16\"}},\"type-fest@4.41.0\":{\"resolution\":{\"integrity\":\"sha512-TeTSQ6H5YHvpqVwBRcnLDCBnDOHWYu7IvGbHT6N8AOymcr9PJGjc1GTtiWZTYg0NCgYwvnYWEkVChQAr9bjfwA==\"},\"engines\":{\"node\":\">=16\"}},\"typescript@5.8.3\":{\"resolution\":{\"integrity\":\"s
ha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ==\"},\"engines\":{\"node\":\">=14.17\"},\"hasBin\":true},\"typescript@5.9.3\":{\"resolution\":{\"integrity\":\"sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==\"},\"engines\":{\"node\":\">=14.17\"},\"hasBin\":true},\"ufo@1.6.1\":{\"resolution\":{\"integrity\":\"sha512-9a4/uxlTWJ4+a5i0ooc1rU7C7YOw3wT+UGqdeNNHWnOF9qcMBgLRS+4IYUqbczewFx4mLEig6gawh7X6mFlEkA==\"}},\"undici-types@6.21.0\":{\"resolution\":{\"integrity\":\"sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==\"}},\"undici-types@7.16.0\":{\"resolution\":{\"integrity\":\"sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw==\"}},\"undici@7.16.0\":{\"resolution\":{\"integrity\":\"sha512-QEg3HPMll0o3t2ourKwOeUAZ159Kn9mx5pnzHRQO8+Wixmh88YdZRiIwat0iNzNNXn0yoEtXJqFpyW7eM8BV7g==\"},\"engines\":{\"node\":\">=20.18.1\"}},\"undici@7.24.8\":{\"resolution\":{\"integrity\":\"sha512-6KQ/+QxK49Z/p3HO6E5ZCZWNnCasyZLa5ExaVYyvPxUwKtbCPMKELJOqh7EqOle0t9cH/7d2TaaTRRa6Nhs4YQ==\"},\"engines\":{\"node\":\">=20.18.1\"}},\"undici@7.25.0\":{\"resolution\":{\"integrity\":\"sha512-xXnp4kTyor2Zq+J1FfPI6Eq3ew5h6Vl0F/8d9XU5zZQf1tX9s2Su1/3PiMmUANFULpmksxkClamIZcaUqryHsQ==\"},\"engines\":{\"node\":\">=20.18.1\"}},\"unenv@2.0.0-rc.24\":{\"resolution\":{\"integrity\":\"sha512-i7qRCmY42zmCwnYlh9H2SvLEypEFGye5iRmEMKjcGi7zk9UquigRjFtTLz0TYqr0ZGLZhaMHl/foy1bZR+Cwlw==\"}},\"unified@11.0.5\":{\"resolution\":{\"integrity\":\"sha512-xKvGhPWw3k84Qjh8bI3ZeJjqnyadK+GEFtazSfZv/rKeTkTjOJho6mFqh2SM96iIcZokxiOpg78GazTSg8+KHA==\"}},\"unist-util-find-after@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-amQa0Ep2m6hE2g72AugUItjbuM8X8cGQnFoHk0pGfrFeT9GZhzN5SW8nRsiGKK7Aif4CrACPENkA6P/Lw6fHGQ==\"}},\"unist-util-is@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-2qCTHimwdxLfz+YzdGfkqNlH0tLi9xjTnHddPmJwtIG9MGsdbutfTc4P+haPD7l7Cjxf/WZj+we5qfVPvvxf
Yw==\"}},\"unist-util-position@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-fucsC7HjXvkB5R3kTCO7kUjRdrS0BJt3M/FPxmHMBOm8JQi2BsHAHFsy27E0EolP8rp0NzXsJ+jNPyDWvOJZPA==\"}},\"unist-util-stringify-position@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-0ASV06AAoKCDkS2+xw5RXJywruurpbC4JZSm7nr7MOt1ojAzvyyaO+UxZf18j8FCF6kmzCZKcAgN/yu2gm2XgQ==\"}},\"unist-util-visit-parents@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-L/PqWzfTP9lzzEa6CKs0k2nARxTdZduw3zyh8d2NVBnsyvHjSX4TWse388YrrQKbvI8w20fGjGlhgT96WwKykw==\"}},\"unist-util-visit@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-MR04uvD+07cwl/yhVuVWAtw+3GOR/knlL55Nd/wAdblk27GCVt3lqpTivy/tkJcZoNPzTwS1Y+KMojlLDhoTzg==\"}},\"universalify@0.1.2\":{\"resolution\":{\"integrity\":\"sha512-rBJeI5CXAlmy1pV+617WB9J63U6XcazHHF2f2dbJix4XzpUF0RS3Zbj0FGIOCAva5P/d/GBOYaACQ1w+0azUkg==\"},\"engines\":{\"node\":\">= 4.0.0\"}},\"universalify@0.2.0\":{\"resolution\":{\"integrity\":\"sha512-CJ1QgKmNg3CwvAv/kOFmtnEN05f0D/cn9QntgNOQlQF9dgvVTHj3t+8JPdjqawCHk7V/KA+fbUqzZ9XWhcqPUg==\"},\"engines\":{\"node\":\">= 4.0.0\"}},\"universalify@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw==\"},\"engines\":{\"node\":\">= 10.0.0\"}},\"unplugin@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-0Mqk3AT2TZCXWKdcoaufeXNukv2mTrEZExeXlHIOZXdqYoHHr4n51pymnwV8x2BOVxwXbK2HLlI7usrqMpycdg==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"}},\"update-browserslist-db@1.1.3\":{\"resolution\":{\"integrity\":\"sha512-UxhIZQ+QInVdunkDAaiazvvT/+fXL5Osr0JZlJulepYu6Jd7qJtDZjlur0emRlT71EN3ScPoE7gvsuIKKNavKw==\"},\"hasBin\":true,\"peerDependencies\":{\"browserslist\":\">= 4.21.0\"}},\"update-browserslist-db@1.2.3\":{\"resolution\":{\"integrity\":\"sha512-Js0m9cx+qOgDxo0eMiFGEueWztz+d4+M3rGlmKPT+T4IS/jP4ylw3Nwpu6cpTTP8R1MAC1kF4VbdLt3ARf209w==\"},\"hasBin\":true,\"peerDependencies\":{\"browserslist\":\">= 
4.21.0\"}},\"url-parse@1.5.10\":{\"resolution\":{\"integrity\":\"sha512-WypcfiRhfeUP9vvF0j6rw0J3hrWrw6iZv3+22h6iRMJ/8z1Tj6XfLP4DsUix5MhMPnXpiHDoKyoZ/bdCkwBCiQ==\"}},\"urlpattern-polyfill@10.1.0\":{\"resolution\":{\"integrity\":\"sha512-IGjKp/o0NL3Bso1PymYURCJxMPNAf/ILOpendP9f5B6e1rTJgdgiOvgfoT8VxCAdY+Wisb9uhGaJJf3yZ2V9nw==\"}},\"use-sync-external-store@1.6.0\":{\"resolution\":{\"integrity\":\"sha512-Pp6GSwGP/NrPIrxVFAIkOQeyw8lFenOHijQWkUTrDvrF4ALqylP2C/KCkeS9dpUM3KvYRQhna5vt7IL95+ZQ9w==\"},\"peerDependencies\":{\"react\":\"^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0\"}},\"userhome@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-5cnLm4gseXjAclKowC4IjByaGsjtAoV6PrOQOljplNB54ReUYJP8HdAFq2muHinSDAh09PPX/uXDPfdxRHvuSA==\"},\"engines\":{\"node\":\">= 0.8.0\"}},\"util-deprecate@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==\"}},\"uuid@11.1.0\":{\"resolution\":{\"integrity\":\"sha512-0/A9rDy9P7cJ+8w1c9WD9V//9Wj15Ce2MPz8Ri6032usz+NfePxx5AcN3bN+r6ZL6jEo066/yNYB3tn4pQEx+A==\"},\"hasBin\":true},\"varint@6.0.0\":{\"resolution\":{\"integrity\":\"sha512-cXEIW6cfr15lFv563k4GuVuW/fiwjknytD37jIOLSdSWuOI6WnO/oKwmP2FQTU2l01LP8/M5TSAJpzUaGe3uWg==\"}},\"vfile-location@5.0.3\":{\"resolution\":{\"integrity\":\"sha512-5yXvWDEgqeiYiBe1lbxYF7UMAIm/IcopxMHrMQDq3nvKcjPKIhZklUKL+AE7J7uApI4kwe2snsK+eI6UTj9EHg==\"}},\"vfile-message@4.0.2\":{\"resolution\":{\"integrity\":\"sha512-jRDZ1IMLttGj41KcZvlrYAaI3CfqpLpfpf+Mfig13viT6NKvRzWZ+lXz0Y5D60w6uJIBAOGq9mSHf0gktF0duw==\"}},\"vfile@6.0.3\":{\"resolution\":{\"integrity\":\"sha512-KzIbH/9tXat2u30jf+smMwFCsno4wHVdNmzFyL+T/L3UGqqk6JKfVqOFOZEpZSHADH1k40ab6NUIXZq422ov3Q==\"}},\"vite-node@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-EbKSKh+bh1E1IFxeO0pg1n4dvoOTt0UDiXMd/qn++r98+jPO1xtJilvXldeuQ8giIB5IkpjCgMleHMNEsGH6pg==\"},\"engines\":{\"node\":\"^18.0.0 || ^20.0.0 || 
>=22.0.0\"},\"hasBin\":true},\"vite-plugin-static-copy@4.1.0\":{\"resolution\":{\"integrity\":\"sha512-9XOarNV7LgP0KBB7AApxdgFikLXx3daZdqjC3AevYsL6MrUH62zphonLUs2a6LZc1HN1GY+vQdheZ8VVJb6dQQ==\"},\"engines\":{\"node\":\"^22.0.0 || >=24.0.0\"},\"peerDependencies\":{\"vite\":\"^6.0.0 || ^7.0.0 || ^8.0.0\"}},\"vite@7.2.7\":{\"resolution\":{\"integrity\":\"sha512-ITcnkFeR3+fI8P1wMgItjGrR10170d8auB4EpMLPqmx6uxElH3a/hHGQabSHKdqd4FXWO1nFIp9rRn7JQ34ACQ==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"hasBin\":true,\"peerDependencies\":{\"@types/node\":\"^20.19.0 || >=22.12.0\",\"jiti\":\">=1.21.0\",\"less\":\"^4.0.0\",\"lightningcss\":\"^1.21.0\",\"sass\":\"^1.70.0\",\"sass-embedded\":\"^1.70.0\",\"stylus\":\">=0.54.8\",\"sugarss\":\"^5.0.0\",\"terser\":\"^5.16.0\",\"tsx\":\"^4.8.1\",\"yaml\":\"^2.4.2\"},\"peerDependenciesMeta\":{\"@types/node\":{\"optional\":true},\"jiti\":{\"optional\":true},\"less\":{\"optional\":true},\"lightningcss\":{\"optional\":true},\"sass\":{\"optional\":true},\"sass-embedded\":{\"optional\":true},\"stylus\":{\"optional\":true},\"sugarss\":{\"optional\":true},\"terser\":{\"optional\":true},\"tsx\":{\"optional\":true},\"yaml\":{\"optional\":true}}},\"vite@8.0.10\":{\"resolution\":{\"integrity\":\"sha512-rZuUu9j6J5uotLDs+cAA4O5H4K1SfPliUlQwqa6YEwSrWDZzP4rhm00oJR5snMewjxF5V/K3D4kctsUTsIU9Mw==\"},\"engines\":{\"node\":\"^20.19.0 || >=22.12.0\"},\"hasBin\":true,\"peerDependencies\":{\"@types/node\":\"^20.19.0 || >=22.12.0\",\"@vitejs/devtools\":\"^0.1.0\",\"esbuild\":\"^0.27.0 || 
^0.28.0\",\"jiti\":\">=1.21.0\",\"less\":\"^4.0.0\",\"sass\":\"^1.70.0\",\"sass-embedded\":\"^1.70.0\",\"stylus\":\">=0.54.8\",\"sugarss\":\"^5.0.0\",\"terser\":\"^5.16.0\",\"tsx\":\"^4.8.1\",\"yaml\":\"^2.4.2\"},\"peerDependenciesMeta\":{\"@types/node\":{\"optional\":true},\"@vitejs/devtools\":{\"optional\":true},\"esbuild\":{\"optional\":true},\"jiti\":{\"optional\":true},\"less\":{\"optional\":true},\"sass\":{\"optional\":true},\"sass-embedded\":{\"optional\":true},\"stylus\":{\"optional\":true},\"sugarss\":{\"optional\":true},\"terser\":{\"optional\":true},\"tsx\":{\"optional\":true},\"yaml\":{\"optional\":true}}},\"vitefu@1.1.1\":{\"resolution\":{\"integrity\":\"sha512-B/Fegf3i8zh0yFbpzZ21amWzHmuNlLlmJT6n7bu5e+pCHUKQIfXSYokrqOBGEMMe9UG2sostKQF9mml/vYaWJQ==\"},\"peerDependencies\":{\"vite\":\"^3.0.0 || ^4.0.0 || ^5.0.0 || ^6.0.0 || ^7.0.0-beta.0\"},\"peerDependenciesMeta\":{\"vite\":{\"optional\":true}}},\"vitest@3.2.4\":{\"resolution\":{\"integrity\":\"sha512-LUCP5ev3GURDysTWiP47wRRUpLKMOfPh+yKTx3kVIEiu5KOMeqzpnYNsKyOoVrULivR8tLcks4+lga33Whn90A==\"},\"engines\":{\"node\":\"^18.0.0 || ^20.0.0 || >=22.0.0\"},\"hasBin\":true,\"peerDependencies\":{\"@edge-runtime/vm\":\"*\",\"@types/debug\":\"^4.1.12\",\"@types/node\":\"^18.0.0 || ^20.0.0 || >=22.0.0\",\"@vitest/browser\":\"3.2.4\",\"@vitest/ui\":\"3.2.4\",\"happy-dom\":\"*\",\"jsdom\":\"*\"},\"peerDependenciesMeta\":{\"@edge-runtime/vm\":{\"optional\":true},\"@types/debug\":{\"optional\":true},\"@types/node\":{\"optional\":true},\"@vitest/browser\":{\"optional\":true},\"@vitest/ui\":{\"optional\":true},\"happy-dom\":{\"optional\":true},\"jsdom\":{\"optional\":true}}},\"vitest@4.0.18\":{\"resolution\":{\"integrity\":\"sha512-hOQuK7h0FGKgBAas7v0mSAsnvrIgAvWmRFjmzpJ7SwFHH3g1k2u37JtYwOwmEKhK6ZO3v9ggDBBm0La1LCK4uQ==\"},\"engines\":{\"node\":\"^20.0.0 || ^22.0.0 || >=24.0.0\"},\"hasBin\":true,\"peerDependencies\":{\"@edge-runtime/vm\":\"*\",\"@opentelemetry/api\":\"^1.9.0\",\"@types/node\":\"^20.0.0 || ^22.0.0 || 
>=24.0.0\",\"@vitest/browser-playwright\":\"4.0.18\",\"@vitest/browser-preview\":\"4.0.18\",\"@vitest/browser-webdriverio\":\"4.0.18\",\"@vitest/ui\":\"4.0.18\",\"happy-dom\":\"*\",\"jsdom\":\"*\"},\"peerDependenciesMeta\":{\"@edge-runtime/vm\":{\"optional\":true},\"@opentelemetry/api\":{\"optional\":true},\"@types/node\":{\"optional\":true},\"@vitest/browser-playwright\":{\"optional\":true},\"@vitest/browser-preview\":{\"optional\":true},\"@vitest/browser-webdriverio\":{\"optional\":true},\"@vitest/ui\":{\"optional\":true},\"happy-dom\":{\"optional\":true},\"jsdom\":{\"optional\":true}}},\"vitest@4.1.5\":{\"resolution\":{\"integrity\":\"sha512-9Xx1v3/ih3m9hN+SbfkUyy0JAs72ap3r7joc87XL6jwF0jGg6mFBvQ1SrwaX+h8BlkX6Hz9shdd1uo6AF+ZGpg==\"},\"engines\":{\"node\":\"^20.0.0 || ^22.0.0 || >=24.0.0\"},\"hasBin\":true,\"peerDependencies\":{\"@edge-runtime/vm\":\"*\",\"@opentelemetry/api\":\"^1.9.0\",\"@types/node\":\"^20.0.0 || ^22.0.0 || >=24.0.0\",\"@vitest/browser-playwright\":\"4.1.5\",\"@vitest/browser-preview\":\"4.1.5\",\"@vitest/browser-webdriverio\":\"4.1.5\",\"@vitest/coverage-istanbul\":\"4.1.5\",\"@vitest/coverage-v8\":\"4.1.5\",\"@vitest/ui\":\"4.1.5\",\"happy-dom\":\"*\",\"jsdom\":\"*\",\"vite\":\"^6.0.0 || ^7.0.0 || 
^8.0.0\"},\"peerDependenciesMeta\":{\"@edge-runtime/vm\":{\"optional\":true},\"@opentelemetry/api\":{\"optional\":true},\"@types/node\":{\"optional\":true},\"@vitest/browser-playwright\":{\"optional\":true},\"@vitest/browser-preview\":{\"optional\":true},\"@vitest/browser-webdriverio\":{\"optional\":true},\"@vitest/coverage-istanbul\":{\"optional\":true},\"@vitest/coverage-v8\":{\"optional\":true},\"@vitest/ui\":{\"optional\":true},\"happy-dom\":{\"optional\":true},\"jsdom\":{\"optional\":true}}},\"vscode-jsonrpc@8.2.0\":{\"resolution\":{\"integrity\":\"sha512-C+r0eKJUIfiDIfwJhria30+TYWPtuHJXHtI7J0YlOmKAo7ogxP20T0zxB7HZQIFhIyvoBPwWskjxrvAtfjyZfA==\"},\"engines\":{\"node\":\">=14.0.0\"}},\"vscode-languageserver-protocol@3.17.5\":{\"resolution\":{\"integrity\":\"sha512-mb1bvRJN8SVznADSGWM9u/b07H7Ecg0I3OgXDuLdn307rl/J3A9YD6/eYOssqhecL27hK1IPZAsaqh00i/Jljg==\"}},\"vscode-languageserver-textdocument@1.0.12\":{\"resolution\":{\"integrity\":\"sha512-cxWNPesCnQCcMPeenjKKsOCKQZ/L6Tv19DTRIGuLWe32lyzWhihGVJ/rcckZXJxfdKCFvRLS3fpBIsV/ZGX4zA==\"}},\"vscode-languageserver-types@3.17.5\":{\"resolution\":{\"integrity\":\"sha512-Ld1VelNuX9pdF39h2Hgaeb5hEZM2Z3jUrrMgWQAu82jMtZp7p3vJT3BzToKtZI7NgQssZje5o0zryOrhQvzQAg==\"}},\"vscode-languageserver@9.0.1\":{\"resolution\":{\"integrity\":\"sha512-woByF3PDpkHFUreUa7Hos7+pUWdeWMXRd26+ZX2A8cFx6v/JPTtd4/uN0/jB6XQHYaOlHbio03NTHCqrgG5n7g==\"},\"hasBin\":true},\"vscode-uri@3.0.8\":{\"resolution\":{\"integrity\":\"sha512-AyFQ0EVmsOZOlAnxoFOGOq1SQDWAB7C6aqMGS23svWAllfOaxbuFvcT8D1i8z3Gyn8fraVeZNNmN6e9bxxXkKw==\"}},\"w3c-xmlserializer@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-o8qghlI8NZHU1lLPrpi2+Uq7abh4GGPpYANlalzWxyWteJOCsr/P+oPBA49TOLu5FTZO4d3F9MnWJfiMo4BkmA==\"},\"engines\":{\"node\":\">=18\"}},\"wait-port@1.1.0\":{\"resolution\":{\"integrity\":\"sha512-3e04qkoN3LxTMLakdqeWth8nih8usyg+sf1Bgdf9wwUkp05iuK1eSY/QpLvscT/+F/gA89+LpUmmgBtesbqI2Q==\"},\"engines\":{\"node\":\">=10\"},\"hasBin\":true},\"watchpack@2.5.1\":{\"resolution\":{\"integrity\
":\"sha512-Zn5uXdcFNIA1+1Ei5McRd+iRzfhENPCe7LeABkJtNulSxjma+l7ltNx55BWZkRlwRnpOgHqxnjyaDgJnNXnqzg==\"},\"engines\":{\"node\":\">=10.13.0\"}},\"wcwidth@1.0.1\":{\"resolution\":{\"integrity\":\"sha512-XHPEwS0q6TaxcvG85+8EYkbiCux2XtWG2mkc47Ng2A77BQu9+DqIOJldST4HgPkuea7dvKSj5VgX3P1d4rW8Tg==\"}},\"web-namespaces@2.0.1\":{\"resolution\":{\"integrity\":\"sha512-bKr1DkiNa2krS7qxNtdrtHAmzuYGFQLiQ13TsorsdT6ULTkPLKuu5+GsFpDlg6JFjUTwX2DyhMPG2be8uPrqsQ==\"}},\"web-streams-polyfill@3.3.3\":{\"resolution\":{\"integrity\":\"sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw==\"},\"engines\":{\"node\":\">= 8\"}},\"web-vitals@4.2.4\":{\"resolution\":{\"integrity\":\"sha512-r4DIlprAGwJ7YM11VZp4R884m0Vmgr6EAKe3P+kO0PPj3Unqyvv59rczf6UiGcb9Z8QxZVcqKNwv/g0WNdWwsw==\"}},\"web-vitals@5.1.0\":{\"resolution\":{\"integrity\":\"sha512-ArI3kx5jI0atlTtmV0fWU3fjpLmq/nD3Zr1iFFlJLaqa5wLBkUSzINwBPySCX/8jRyjlmy1Volw1kz1g9XE4Jg==\"}},\"webdriver@9.2.0\":{\"resolution\":{\"integrity\":\"sha512-UrhuHSLq4m3OgncvX75vShfl5w3gmjAy8LvLb6/L6V+a+xcqMRelFx/DQ72Mr84F4m8Li6wjtebrOH1t9V/uOQ==\"},\"engines\":{\"node\":\">=18.20.0\"}},\"webdriverio@9.2.1\":{\"resolution\":{\"integrity\":\"sha512-AI7xzqTmFiU7oAx4fpEF1U1MA7smhCPVDeM0gxPqG5qWepzib3WDX2SsRtcmhdVW+vLJ3m4bf8rAXxZ2M1msWA==\"},\"engines\":{\"node\":\">=18.20.0\"},\"peerDependencies\":{\"puppeteer-core\":\"^22.3.0\"},\"peerDependenciesMeta\":{\"puppeteer-core\":{\"optional\":true}}},\"webidl-conversions@7.0.0\":{\"resolution\":{\"integrity\":\"sha512-VwddBukDzu71offAQR975unBIGqfKZpM+8ZX6ySk8nYhVoo5CYaZyzt3YBvYtRtO+aoGlqxPg/B87NGVZ/fu6g==\"},\"engines\":{\"node\":\">=12\"}},\"webidl-conversions@8.0.0\":{\"resolution\":{\"integrity\":\"sha512-n4W4YFyz5JzOfQeA8oN7dUYpR+MBP3PIUsn2jLjWXwK5ASUzt0Jc/A5sAUZoCYFJRGF0FBKJ+1JjN43rNdsQzA==\"},\"engines\":{\"node\":\">=20\"}},\"webpack-sources@3.4.1\":{\"resolution\":{\"integrity\":\"sha512-eACpxRN02yaawnt+uUNIF7Qje6A9zArxBbcAJjK1PK3S9Ycg5jIuJ8pW4q8EMnwNZCEGltcjkRx1QzOxOkKD8A==\"
},\"engines\":{\"node\":\">=10.13.0\"}},\"webpack-virtual-modules@0.6.2\":{\"resolution\":{\"integrity\":\"sha512-66/V2i5hQanC51vBQKPH4aI8NMAcBW59FVBs+rC7eGHupMyfn34q7rZIE+ETlJ+XTevqfUhVVBgSUNSW2flEUQ==\"}},\"webpack@5.99.9\":{\"resolution\":{\"integrity\":\"sha512-brOPwM3JnmOa+7kd3NsmOUOwbDAj8FT9xDsG3IW0MgbN9yZV7Oi/s/+MNQ/EcSMqw7qfoRyXPoeEWT8zLVdVGg==\"},\"engines\":{\"node\":\">=10.13.0\"},\"hasBin\":true,\"peerDependencies\":{\"webpack-cli\":\"*\"},\"peerDependenciesMeta\":{\"webpack-cli\":{\"optional\":true}}},\"whatwg-encoding@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-6qN4hJdMwfYBtE3YBTTHhoeuUrDBPZmbQaxWAqSALV/MeEnR5z1xd8UKud2RAkFoPkmB+hli1TZSnyi84xz1vQ==\"},\"engines\":{\"node\":\">=18\"},\"deprecated\":\"Use @exodus/bytes instead for a more spec-conformant and faster implementation\"},\"whatwg-mimetype@3.0.0\":{\"resolution\":{\"integrity\":\"sha512-nt+N2dzIutVRxARx1nghPKGv1xHikU7HKdfafKkLNLindmPU/ch3U31NOCGGA/dmPcmb1VlofO0vnKAcsm0o/Q==\"},\"engines\":{\"node\":\">=12\"}},\"whatwg-mimetype@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-QaKxh0eNIi2mE9p2vEdzfagOKHCcj1pJ56EEHGQOVxp8r9/iszLUUV7v89x9O1p/T+NlTM5W7jW6+cz4Fq1YVg==\"},\"engines\":{\"node\":\">=18\"}},\"whatwg-url@14.2.0\":{\"resolution\":{\"integrity\":\"sha512-De72GdQZzNTUBBChsXueQUnPKDkg/5A5zp7pFDuQAj5UFoENpiACU0wlCvzpAGnTkj++ihpKwKyYewn/XNUbKw==\"},\"engines\":{\"node\":\">=18\"}},\"whatwg-url@15.1.0\":{\"resolution\":{\"integrity\":\"sha512-2ytDk0kiEj/yu90JOAp44PVPUkO9+jVhyf+SybKlRHSDlvOOZhdPIrr7xTH64l4WixO2cP+wQIcgujkGBPPz6g==\"},\"engines\":{\"node\":\">=20\"}},\"which@2.0.2\":{\"resolution\":{\"integrity\":\"sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==\"},\"engines\":{\"node\":\">= 8\"},\"hasBin\":true},\"which@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-GlaYyEb07DPxYCKhKzplCWBJtvxZcZMrL+4UkrTSJHHPyZU4mYYTv3qaOe77H7EODLSSopAUFAc6W8U4yqvscg==\"},\"engines\":{\"node\":\"^16.13.0 || 
>=18.0.0\"},\"hasBin\":true},\"why-is-node-running@2.3.0\":{\"resolution\":{\"integrity\":\"sha512-hUrmaWBdVDcxvYqnyh09zunKzROWjbZTiNy8dBEjkS7ehEDQibXJ7XvlmtbwuTclUiIyN+CyXQD4Vmko8fNm8w==\"},\"engines\":{\"node\":\">=8\"},\"hasBin\":true},\"workerd@1.20260504.1\":{\"resolution\":{\"integrity\":\"sha512-AQTXSHbYNP9tLPgJNn0TmizyE4aDh2VuZZXlTAL0uu4fbCY436NAnQSJIzZbaFHM3DnAtVs9G8tkiJztSdYqDg==\"},\"engines\":{\"node\":\">=16\"},\"hasBin\":true},\"wrangler@4.88.0\":{\"resolution\":{\"integrity\":\"sha512-f470QwbeT/JM1S0duq+sLtkss7UBxIFDtYHgujv9tdQUyA/dLGDq51am0rqrsuFtCi97lTM1P5sqtt8xra1AlA==\"},\"engines\":{\"node\":\">=22.0.0\"},\"hasBin\":true,\"peerDependencies\":{\"@cloudflare/workers-types\":\"^4.20260504.1\"},\"peerDependenciesMeta\":{\"@cloudflare/workers-types\":{\"optional\":true}}},\"wrap-ansi@6.2.0\":{\"resolution\":{\"integrity\":\"sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA==\"},\"engines\":{\"node\":\">=8\"}},\"wrap-ansi@7.0.0\":{\"resolution\":{\"integrity\":\"sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==\"},\"engines\":{\"node\":\">=10\"}},\"wrap-ansi@8.1.0\":{\"resolution\":{\"integrity\":\"sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ==\"},\"engines\":{\"node\":\">=12\"}},\"wrappy@1.0.2\":{\"resolution\":{\"integrity\":\"sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==\"}},\"ws@8.18.0\":{\"resolution\":{\"integrity\":\"sha512-8VbfWfHLbbwu3+N6OKsOMpBdT4kXPDDB9cJk2bJ6mh9ucxdlnNvH1e+roYkKmN9Nxw2yjz7VzeO9oOz2zJ04Pw==\"},\"engines\":{\"node\":\">=10.0.0\"},\"peerDependencies\":{\"bufferutil\":\"^4.0.1\",\"utf-8-validate\":\">=5.0.2\"},\"peerDependenciesMeta\":{\"bufferutil\":{\"optional\":true},\"utf-8-validate\":{\"optional\":true}}},\"ws@8.18.3\":{\"resolution\":{\"integrity\":\"sha512-PEIGCY5tSlUt50cqyMXfCzX+oOPqN0vuGqWzbcJ2xvnkzkq46oOpz7dQaTDBdfICb4N14+GARUDw2XV2
N4tvzg==\"},\"engines\":{\"node\":\">=10.0.0\"},\"peerDependencies\":{\"bufferutil\":\"^4.0.1\",\"utf-8-validate\":\">=5.0.2\"},\"peerDependenciesMeta\":{\"bufferutil\":{\"optional\":true},\"utf-8-validate\":{\"optional\":true}}},\"ws@8.20.0\":{\"resolution\":{\"integrity\":\"sha512-sAt8BhgNbzCtgGbt2OxmpuryO63ZoDk/sqaB/znQm94T4fCEsy/yV+7CdC1kJhOU9lboAEU7R3kquuycDoibVA==\"},\"engines\":{\"node\":\">=10.0.0\"},\"peerDependencies\":{\"bufferutil\":\"^4.0.1\",\"utf-8-validate\":\">=5.0.2\"},\"peerDependenciesMeta\":{\"bufferutil\":{\"optional\":true},\"utf-8-validate\":{\"optional\":true}}},\"xml-name-validator@5.0.0\":{\"resolution\":{\"integrity\":\"sha512-EvGK8EJ3DhaHfbRlETOWAS5pO9MZITeauHKJyb8wyajUfQUenkIg2MvLDTZ4T/TgIcm3HU0TFBgWWboAZ30UHg==\"},\"engines\":{\"node\":\">=18\"}},\"xmlbuilder2@4.0.3\":{\"resolution\":{\"integrity\":\"sha512-bx8Q1STctnNaaDymWnkfQLKofs0mGNN7rLLapJlGuV3VlvegD7Ls4ggMjE3aUSWItCCzU0PEv45lI87iSigiCA==\"},\"engines\":{\"node\":\">=20.0\"}},\"xmlchars@2.2.0\":{\"resolution\":{\"integrity\":\"sha512-JZnDKK8B0RCDw84FNdDAIpZK+JuJw+s7Lz8nksI7SIuU3UXJJslUthsi+uWBUYOwPFwW7W7PRLRfUKpxjtjFCw==\"}},\"y18n@5.0.8\":{\"resolution\":{\"integrity\":\"sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA==\"},\"engines\":{\"node\":\">=10\"}},\"yallist@3.1.1\":{\"resolution\":{\"integrity\":\"sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g==\"}},\"yallist@4.0.0\":{\"resolution\":{\"integrity\":\"sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==\"}},\"yaml@2.8.1\":{\"resolution\":{\"integrity\":\"sha512-lcYcMxX2PO9XMGvAJkJ3OsNMw+/7FKes7/hgerGUYWIoWu5j/+YQqcZr5JnPZWzOsEBgMbSbiSTn/dv/69Mkpw==\"},\"engines\":{\"node\":\">= 
14.6\"},\"hasBin\":true},\"yargs-parser@21.1.1\":{\"resolution\":{\"integrity\":\"sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw==\"},\"engines\":{\"node\":\">=12\"}},\"yargs-parser@22.0.0\":{\"resolution\":{\"integrity\":\"sha512-rwu/ClNdSMpkSrUb+d6BRsSkLUq1fmfsY6TOpYzTwvwkg1/NRG85KBy3kq++A8LKQwX6lsu+aWad+2khvuXrqw==\"},\"engines\":{\"node\":\"^20.19.0 || ^22.12.0 || >=23\"}},\"yargs@17.7.2\":{\"resolution\":{\"integrity\":\"sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w==\"},\"engines\":{\"node\":\">=12\"}},\"yauzl@2.10.0\":{\"resolution\":{\"integrity\":\"sha512-p4a9I6X6nu6IhoGmBqAcbJy1mlC4j27vEPZX9F4L4/vZT3Lyq1VkFHw/V/PUcB9Buo+DG3iHkT0x3Qya58zc3g==\"}},\"yoctocolors-cjs@2.1.3\":{\"resolution\":{\"integrity\":\"sha512-U/PBtDf35ff0D8X8D0jfdzHYEPFxAI7jJlxZXwCSez5M3190m+QobIfh+sWDWSHMCWWJN2AWamkegn6vr6YBTw==\"},\"engines\":{\"node\":\">=18\"}},\"youch-core@0.3.3\":{\"resolution\":{\"integrity\":\"sha512-ho7XuGjLaJ2hWHoK8yFnsUGy2Y5uDpqSTq1FkHLK4/oqKtyUU1AFbOOxY4IpC9f0fTLjwYbslUz0Po5BpD1wrA==\"}},\"youch@4.1.0-beta.10\":{\"resolution\":{\"integrity\":\"sha512-rLfVLB4FgQneDr0dv1oddCVZmKjcJ6yX6mS4pU82Mq/Dt9a3cLZQ62pDBL4AUO+uVrCvtWz3ZFUL2HFAFJ/BXQ==\"}},\"zip-stream@6.0.1\":{\"resolution\":{\"integrity\":\"sha512-zK7YHHz4ZXpW89AHXUPbQVGKI7uvkd3hzusTdotCg1UxyaVtg0zFJSTfW/Dq5f7OBBVnq6cZIaC8Ti4hb6dtCA==\"},\"engines\":{\"node\":\">= 
14\"}},\"zod@3.25.76\":{\"resolution\":{\"integrity\":\"sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ==\"}},\"zwitch@2.0.4\":{\"resolution\":{\"integrity\":\"sha512-bXE4cR/kVZhKZX/RjPEflHaKVhUVl85noU3v6b8apfQEc1x4A+zBxjZ4lN8LqGd6WZ3dl98pY4o717VFmoPp+A==\"}}},\"snapshots\":{\"@acemir/cssom@0.9.28\":{},\"@ampproject/remapping@2.3.0\":{\"dependencies\":{\"@jridgewell/gen-mapping\":\"0.3.13\",\"@jridgewell/trace-mapping\":\"0.3.30\"}},\"@antfu/install-pkg@1.1.0\":{\"dependencies\":{\"package-manager-detector\":\"1.5.0\",\"tinyexec\":\"1.0.2\"}},\"@antfu/utils@9.3.0\":{},\"@asamuzakjp/css-color@3.1.4\":{\"dependencies\":{\"@csstools/css-calc\":\"2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-color-parser\":\"3.1.0(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-parser-algorithms\":\"3.0.5(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-tokenizer\":\"3.0.4\",\"lru-cache\":\"10.4.3\"}},\"@asamuzakjp/css-color@4.1.0\":{\"dependencies\":{\"@csstools/css-calc\":\"2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-color-parser\":\"3.1.0(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-parser-algorithms\":\"3.0.5(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-tokenizer\":\"3.0.4\",\"lru-cache\":\"11.2.4\"}},\"@asamuzakjp/dom-selector@6.7.6\":{\"dependencies\":{\"@asamuzakjp/nwsapi\":\"2.3.9\",\"bidi-js\":\"1.0.3\",\"css-tree\":\"3.1.0\",\"is-potential-custom-element-name\":\"1.0.1\",\"lru-cache\":\"11.2.4\"}},\"@asamuzakjp/nwsapi@2.3.9\":{},\"@babel/code-frame@7.27.1\":{\"dependencies\":{\"@babel/helper-validator-identifier\":\"7.28.5\",\"js-tokens\":\"4.0.0\",\"picocolors\":\"1.1.1\"}},\"@babel/compat-data@7.28.0\":{},\"@babel/core@7.28.5\
":{\"dependencies\":{\"@babel/code-frame\":\"7.27.1\",\"@babel/generator\":\"7.28.5\",\"@babel/helper-compilation-targets\":\"7.27.2\",\"@babel/helper-module-transforms\":\"7.28.3(@babel/core@7.28.5)\",\"@babel/helpers\":\"7.28.4\",\"@babel/parser\":\"7.28.5\",\"@babel/template\":\"7.27.2\",\"@babel/traverse\":\"7.28.5\",\"@babel/types\":\"7.28.5\",\"@jridgewell/remapping\":\"2.3.5\",\"convert-source-map\":\"2.0.0\",\"debug\":\"4.4.3\",\"gensync\":\"1.0.0-beta.2\",\"json5\":\"2.2.3\",\"semver\":\"6.3.1\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@babel/generator@7.28.5\":{\"dependencies\":{\"@babel/parser\":\"7.28.5\",\"@babel/types\":\"7.28.5\",\"@jridgewell/gen-mapping\":\"0.3.13\",\"@jridgewell/trace-mapping\":\"0.3.31\",\"jsesc\":\"3.1.0\"}},\"@babel/helper-compilation-targets@7.27.2\":{\"dependencies\":{\"@babel/compat-data\":\"7.28.0\",\"@babel/helper-validator-option\":\"7.27.1\",\"browserslist\":\"4.25.3\",\"lru-cache\":\"5.1.1\",\"semver\":\"6.3.1\"}},\"@babel/helper-globals@7.28.0\":{},\"@babel/helper-module-imports@7.27.1\":{\"dependencies\":{\"@babel/traverse\":\"7.28.5\",\"@babel/types\":\"7.28.5\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@babel/helper-module-transforms@7.28.3(@babel/core@7.28.5)\":{\"dependencies\":{\"@babel/core\":\"7.28.5\",\"@babel/helper-module-imports\":\"7.27.1\",\"@babel/helper-validator-identifier\":\"7.28.5\",\"@babel/traverse\":\"7.28.5\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@babel/helper-plugin-utils@7.27.1\":{},\"@babel/helper-string-parser@7.27.1\":{},\"@babel/helper-validator-identifier@7.28.5\":{},\"@babel/helper-validator-option@7.27.1\":{},\"@babel/helpers@7.28.4\":{\"dependencies\":{\"@babel/template\":\"7.27.2\",\"@babel/types\":\"7.28.5\"}},\"@babel/parser@7.28.5\":{\"dependencies\":{\"@babel/types\":\"7.28.5\"}},\"@babel/parser@7.29.3\":{\"dependencies\":{\"@babel/types\":\"7.29.0\"}},\"@babel/plugin-syntax-jsx@7.27.1(@babel/core@7.28.5)\":{\"dependencies\":{\"@babel/
core\":\"7.28.5\",\"@babel/helper-plugin-utils\":\"7.27.1\"}},\"@babel/plugin-syntax-typescript@7.27.1(@babel/core@7.28.5)\":{\"dependencies\":{\"@babel/core\":\"7.28.5\",\"@babel/helper-plugin-utils\":\"7.27.1\"}},\"@babel/runtime@7.28.4\":{},\"@babel/template@7.27.2\":{\"dependencies\":{\"@babel/code-frame\":\"7.27.1\",\"@babel/parser\":\"7.28.5\",\"@babel/types\":\"7.28.5\"}},\"@babel/traverse@7.28.5\":{\"dependencies\":{\"@babel/code-frame\":\"7.27.1\",\"@babel/generator\":\"7.28.5\",\"@babel/helper-globals\":\"7.28.0\",\"@babel/parser\":\"7.28.5\",\"@babel/template\":\"7.27.2\",\"@babel/types\":\"7.28.5\",\"debug\":\"4.4.3\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@babel/types@7.28.5\":{\"dependencies\":{\"@babel/helper-string-parser\":\"7.27.1\",\"@babel/helper-validator-identifier\":\"7.28.5\"}},\"@babel/types@7.29.0\":{\"dependencies\":{\"@babel/helper-string-parser\":\"7.27.1\",\"@babel/helper-validator-identifier\":\"7.28.5\"}},\"@bcoe/v8-coverage@1.0.2\":{},\"@blazediff/core@1.9.1\":{},\"@braintree/sanitize-url@7.1.1\":{},\"@bufbuild/protobuf@2.12.0\":{\"optional\":true},\"@bundled-es-modules/cookie@2.0.1\":{\"dependencies\":{\"cookie\":\"0.7.2\"},\"optional\":true},\"@bundled-es-modules/statuses@1.0.1\":{\"dependencies\":{\"statuses\":\"2.0.2\"},\"optional\":true},\"@bundled-es-modules/tough-cookie@0.1.6\":{\"dependencies\":{\"@types/tough-cookie\":\"4.0.5\",\"tough-cookie\":\"4.1.4\"},\"optional\":true},\"@changesets/apply-release-plan@7.0.13\":{\"dependencies\":{\"@changesets/config\":\"3.1.1\",\"@changesets/get-version-range-type\":\"0.4.0\",\"@changesets/git\":\"3.0.4\",\"@changesets/should-skip-package\":\"0.1.2\",\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\",\"detect-indent\":\"6.1.0\",\"fs-extra\":\"7.0.1\",\"lodash.startcase\":\"4.4.0\",\"outdent\":\"0.5.0\",\"prettier\":\"2.8.8\",\"resolve-from\":\"5.0.0\",\"semver\":\"7.7.3\"}},\"@changesets/assemble-release-plan@6.0.9\":{\"dependencies\":{\"@changesets/err
ors\":\"0.2.0\",\"@changesets/get-dependents-graph\":\"2.1.3\",\"@changesets/should-skip-package\":\"0.1.2\",\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\",\"semver\":\"7.7.3\"}},\"@changesets/changelog-git@0.2.1\":{\"dependencies\":{\"@changesets/types\":\"6.1.0\"}},\"@changesets/cli@2.29.7(@types/node@24.10.2)\":{\"dependencies\":{\"@changesets/apply-release-plan\":\"7.0.13\",\"@changesets/assemble-release-plan\":\"6.0.9\",\"@changesets/changelog-git\":\"0.2.1\",\"@changesets/config\":\"3.1.1\",\"@changesets/errors\":\"0.2.0\",\"@changesets/get-dependents-graph\":\"2.1.3\",\"@changesets/get-release-plan\":\"4.0.13\",\"@changesets/git\":\"3.0.4\",\"@changesets/logger\":\"0.1.1\",\"@changesets/pre\":\"2.0.2\",\"@changesets/read\":\"0.6.5\",\"@changesets/should-skip-package\":\"0.1.2\",\"@changesets/types\":\"6.1.0\",\"@changesets/write\":\"0.4.0\",\"@inquirer/external-editor\":\"1.0.1(@types/node@24.10.2)\",\"@manypkg/get-packages\":\"1.1.3\",\"ansi-colors\":\"4.1.3\",\"ci-info\":\"3.9.0\",\"enquirer\":\"2.4.1\",\"fs-extra\":\"7.0.1\",\"mri\":\"1.2.0\",\"p-limit\":\"2.3.0\",\"package-manager-detector\":\"0.2.11\",\"picocolors\":\"1.1.1\",\"resolve-from\":\"5.0.0\",\"semver\":\"7.7.3\",\"spawndamnit\":\"3.0.1\",\"term-size\":\"2.2.1\"},\"transitivePeerDependencies\":[\"@types/node\"]},\"@changesets/config@3.1.1\":{\"dependencies\":{\"@changesets/errors\":\"0.2.0\",\"@changesets/get-dependents-graph\":\"2.1.3\",\"@changesets/logger\":\"0.1.1\",\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\",\"fs-extra\":\"7.0.1\",\"micromatch\":\"4.0.8\"}},\"@changesets/errors@0.2.0\":{\"dependencies\":{\"extendable-error\":\"0.1.7\"}},\"@changesets/get-dependents-graph@2.1.3\":{\"dependencies\":{\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\",\"picocolors\":\"1.1.1\",\"semver\":\"7.7.3\"}},\"@changesets/get-release-plan@4.0.13\":{\"dependencies\":{\"@changesets/assemble-release-plan\":\"6.0.9\",\"@changesets/config\":\"3.1.
1\",\"@changesets/pre\":\"2.0.2\",\"@changesets/read\":\"0.6.5\",\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\"}},\"@changesets/get-version-range-type@0.4.0\":{},\"@changesets/git@3.0.4\":{\"dependencies\":{\"@changesets/errors\":\"0.2.0\",\"@manypkg/get-packages\":\"1.1.3\",\"is-subdir\":\"1.2.0\",\"micromatch\":\"4.0.8\",\"spawndamnit\":\"3.0.1\"}},\"@changesets/logger@0.1.1\":{\"dependencies\":{\"picocolors\":\"1.1.1\"}},\"@changesets/parse@0.4.1\":{\"dependencies\":{\"@changesets/types\":\"6.1.0\",\"js-yaml\":\"3.14.1\"}},\"@changesets/pre@2.0.2\":{\"dependencies\":{\"@changesets/errors\":\"0.2.0\",\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\",\"fs-extra\":\"7.0.1\"}},\"@changesets/read@0.6.5\":{\"dependencies\":{\"@changesets/git\":\"3.0.4\",\"@changesets/logger\":\"0.1.1\",\"@changesets/parse\":\"0.4.1\",\"@changesets/types\":\"6.1.0\",\"fs-extra\":\"7.0.1\",\"p-filter\":\"2.1.0\",\"picocolors\":\"1.1.1\"}},\"@changesets/should-skip-package@0.1.2\":{\"dependencies\":{\"@changesets/types\":\"6.1.0\",\"@manypkg/get-packages\":\"1.1.3\"}},\"@changesets/types@4.1.0\":{},\"@changesets/types@6.1.0\":{},\"@changesets/write@0.4.0\":{\"dependencies\":{\"@changesets/types\":\"6.1.0\",\"fs-extra\":\"7.0.1\",\"human-id\":\"4.1.1\",\"prettier\":\"2.8.8\"}},\"@chevrotain/cst-dts-gen@11.0.3\":{\"dependencies\":{\"@chevrotain/gast\":\"11.0.3\",\"@chevrotain/types\":\"11.0.3\",\"lodash-es\":\"4.17.21\"}},\"@chevrotain/gast@11.0.3\":{\"dependencies\":{\"@chevrotain/types\":\"11.0.3\",\"lodash-es\":\"4.17.21\"}},\"@chevrotain/regexp-to-ast@11.0.3\":{},\"@chevrotain/types@11.0.3\":{},\"@chevrotain/utils@11.0.3\":{},\"@cloudflare/kv-asset-handler@0.5.0\":{},\"@cloudflare/unenv-preset@2.16.1(unenv@2.0.0-rc.24)(workerd@1.20260504.1)\":{\"dependencies\":{\"unenv\":\"2.0.0-rc.24\"},\"optionalDependencies\":{\"workerd\":\"1.20260504.1\"}},\"@cloudflare/vite-plugin@1.36.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedd
ed@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(workerd@1.20260504.1)(wrangler@4.88.0)\":{\"dependencies\":{\"@cloudflare/unenv-preset\":\"2.16.1(unenv@2.0.0-rc.24)(workerd@1.20260504.1)\",\"miniflare\":\"4.20260504.0\",\"unenv\":\"2.0.0-rc.24\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"wrangler\":\"4.88.0\",\"ws\":\"8.18.0\"},\"transitivePeerDependencies\":[\"bufferutil\",\"utf-8-validate\",\"workerd\"]},\"@cloudflare/workerd-darwin-64@1.20260504.1\":{\"optional\":true},\"@cloudflare/workerd-darwin-arm64@1.20260504.1\":{\"optional\":true},\"@cloudflare/workerd-linux-64@1.20260504.1\":{\"optional\":true},\"@cloudflare/workerd-linux-arm64@1.20260504.1\":{\"optional\":true},\"@cloudflare/workerd-windows-64@1.20260504.1\":{\"optional\":true},\"@cspotcode/source-map-support@0.8.1\":{\"dependencies\":{\"@jridgewell/trace-mapping\":\"0.3.9\"}},\"@csstools/color-helpers@5.1.0\":{},\"@csstools/css-calc@2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\":{\"dependencies\":{\"@csstools/css-parser-algorithms\":\"3.0.5(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-tokenizer\":\"3.0.4\"}},\"@csstools/css-color-parser@3.1.0(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\":{\"dependencies\":{\"@csstools/color-helpers\":\"5.1.0\",\"@csstools/css-calc\":\"2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-parser-algorithms\":\"3.0.5(@csstools/css-tokenizer@3.0.4)\",\"@csstools/css-tokenizer\":\"3.0.4\"}},\"@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4)\":{\"dependencies\":{\"@csstools/css-tokenizer\":\"3.0.4\"}},\"@csstools/css-syntax-patches-for-csstree@1.0.14(postcss@8.5.14)\":{\"dependencies\":{\"postcss\":\"8.5.14\"}},\"@csstools/css-tokenizer@3.0.4\":{},\"@emnapi/core@1.10.0\":{\
"dependencies\":{\"@emnapi/wasi-threads\":\"1.2.1\",\"tslib\":\"2.8.1\"},\"optional\":true},\"@emnapi/core@1.4.5\":{\"dependencies\":{\"@emnapi/wasi-threads\":\"1.0.4\",\"tslib\":\"2.8.1\"}},\"@emnapi/runtime@1.10.0\":{\"dependencies\":{\"tslib\":\"2.8.1\"},\"optional\":true},\"@emnapi/runtime@1.4.5\":{\"dependencies\":{\"tslib\":\"2.8.1\"}},\"@emnapi/wasi-threads@1.0.4\":{\"dependencies\":{\"tslib\":\"2.8.1\"}},\"@emnapi/wasi-threads@1.2.1\":{\"dependencies\":{\"tslib\":\"2.8.1\"},\"optional\":true},\"@esbuild/aix-ppc64@0.25.12\":{\"optional\":true},\"@esbuild/aix-ppc64@0.27.3\":{\"optional\":true},\"@esbuild/android-arm64@0.25.12\":{\"optional\":true},\"@esbuild/android-arm64@0.27.3\":{\"optional\":true},\"@esbuild/android-arm@0.25.12\":{\"optional\":true},\"@esbuild/android-arm@0.27.3\":{\"optional\":true},\"@esbuild/android-x64@0.25.12\":{\"optional\":true},\"@esbuild/android-x64@0.27.3\":{\"optional\":true},\"@esbuild/darwin-arm64@0.25.12\":{\"optional\":true},\"@esbuild/darwin-arm64@0.27.3\":{\"optional\":true},\"@esbuild/darwin-x64@0.25.12\":{\"optional\":true},\"@esbuild/darwin-x64@0.27.3\":{\"optional\":true},\"@esbuild/freebsd-arm64@0.25.12\":{\"optional\":true},\"@esbuild/freebsd-arm64@0.27.3\":{\"optional\":true},\"@esbuild/freebsd-x64@0.25.12\":{\"optional\":true},\"@esbuild/freebsd-x64@0.27.3\":{\"optional\":true},\"@esbuild/linux-arm64@0.25.12\":{\"optional\":true},\"@esbuild/linux-arm64@0.27.3\":{\"optional\":true},\"@esbuild/linux-arm@0.25.12\":{\"optional\":true},\"@esbuild/linux-arm@0.27.3\":{\"optional\":true},\"@esbuild/linux-ia32@0.25.12\":{\"optional\":true},\"@esbuild/linux-ia32@0.27.3\":{\"optional\":true},\"@esbuild/linux-loong64@0.25.12\":{\"optional\":true},\"@esbuild/linux-loong64@0.27.3\":{\"optional\":true},\"@esbuild/linux-mips64el@0.25.12\":{\"optional\":true},\"@esbuild/linux-mips64el@0.27.3\":{\"optional\":true},\"@esbuild/linux-ppc64@0.25.12\":{\"optional\":true},\"@esbuild/linux-ppc64@0.27.3\":{\"optional\":true},\"@esbuild/linux
-riscv64@0.25.12\":{\"optional\":true},\"@esbuild/linux-riscv64@0.27.3\":{\"optional\":true},\"@esbuild/linux-s390x@0.25.12\":{\"optional\":true},\"@esbuild/linux-s390x@0.27.3\":{\"optional\":true},\"@esbuild/linux-x64@0.25.12\":{\"optional\":true},\"@esbuild/linux-x64@0.27.3\":{\"optional\":true},\"@esbuild/netbsd-arm64@0.25.12\":{\"optional\":true},\"@esbuild/netbsd-arm64@0.27.3\":{\"optional\":true},\"@esbuild/netbsd-x64@0.25.12\":{\"optional\":true},\"@esbuild/netbsd-x64@0.27.3\":{\"optional\":true},\"@esbuild/openbsd-arm64@0.25.12\":{\"optional\":true},\"@esbuild/openbsd-arm64@0.27.3\":{\"optional\":true},\"@esbuild/openbsd-x64@0.25.12\":{\"optional\":true},\"@esbuild/openbsd-x64@0.27.3\":{\"optional\":true},\"@esbuild/openharmony-arm64@0.25.12\":{\"optional\":true},\"@esbuild/openharmony-arm64@0.27.3\":{\"optional\":true},\"@esbuild/sunos-x64@0.25.12\":{\"optional\":true},\"@esbuild/sunos-x64@0.27.3\":{\"optional\":true},\"@esbuild/win32-arm64@0.25.12\":{\"optional\":true},\"@esbuild/win32-arm64@0.27.3\":{\"optional\":true},\"@esbuild/win32-ia32@0.25.12\":{\"optional\":true},\"@esbuild/win32-ia32@0.27.3\":{\"optional\":true},\"@esbuild/win32-x64@0.25.12\":{\"optional\":true},\"@esbuild/win32-x64@0.27.3\":{\"optional\":true},\"@iconify/types@2.0.0\":{},\"@iconify/utils@3.0.2\":{\"dependencies\":{\"@antfu/install-pkg\":\"1.1.0\",\"@antfu/utils\":\"9.3.0\",\"@iconify/types\":\"2.0.0\",\"debug\":\"4.4.3\",\"globals\":\"15.15.0\",\"kolorist\":\"1.8.0\",\"local-pkg\":\"1.1.2\",\"mlly\":\"1.8.0\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@img/colour@1.1.0\":{},\"@img/sharp-darwin-arm64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-darwin-arm64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-darwin-x64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-darwin-x64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-libvips-darwin-arm64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-darwin-x64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linux
-arm64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linux-arm@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linux-ppc64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linux-riscv64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linux-s390x@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linux-x64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linuxmusl-arm64@1.2.4\":{\"optional\":true},\"@img/sharp-libvips-linuxmusl-x64@1.2.4\":{\"optional\":true},\"@img/sharp-linux-arm64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linux-arm64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linux-arm@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linux-arm\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linux-ppc64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linux-ppc64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linux-riscv64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linux-riscv64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linux-s390x@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linux-s390x\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linux-x64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linux-x64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linuxmusl-arm64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linuxmusl-arm64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-linuxmusl-x64@0.34.5\":{\"optionalDependencies\":{\"@img/sharp-libvips-linuxmusl-x64\":\"1.2.4\"},\"optional\":true},\"@img/sharp-wasm32@0.34.5\":{\"dependencies\":{\"@emnapi/runtime\":\"1.10.0\"},\"optional\":true},\"@img/sharp-win32-arm64@0.34.5\":{\"optional\":true},\"@img/sharp-win32-ia32@0.34.5\":{\"optional\":true},\"@img/sharp-win32-x64@0.34.5\":{\"optional\":true},\"@inquirer/ansi@1.0.2\":{\"optional\":true},\"@inquirer/confirm@5.1.21(@types/node@22.15.33)\":{\"dependencies\":{\"@inquirer/core\":\"10.3.2(@types/node@22.15.33)\",\"@inquirer/type\":\"3.0.10(@types/node@22.15.33)\"},\"optionalDependencies\":{\"@types/node\
":\"22.15.33\"},\"optional\":true},\"@inquirer/confirm@5.1.21(@types/node@24.10.2)\":{\"dependencies\":{\"@inquirer/core\":\"10.3.2(@types/node@24.10.2)\",\"@inquirer/type\":\"3.0.10(@types/node@24.10.2)\"},\"optionalDependencies\":{\"@types/node\":\"24.10.2\"},\"optional\":true},\"@inquirer/core@10.3.2(@types/node@22.15.33)\":{\"dependencies\":{\"@inquirer/ansi\":\"1.0.2\",\"@inquirer/figures\":\"1.0.15\",\"@inquirer/type\":\"3.0.10(@types/node@22.15.33)\",\"cli-width\":\"4.1.0\",\"mute-stream\":\"2.0.0\",\"signal-exit\":\"4.1.0\",\"wrap-ansi\":\"6.2.0\",\"yoctocolors-cjs\":\"2.1.3\"},\"optionalDependencies\":{\"@types/node\":\"22.15.33\"},\"optional\":true},\"@inquirer/core@10.3.2(@types/node@24.10.2)\":{\"dependencies\":{\"@inquirer/ansi\":\"1.0.2\",\"@inquirer/figures\":\"1.0.15\",\"@inquirer/type\":\"3.0.10(@types/node@24.10.2)\",\"cli-width\":\"4.1.0\",\"mute-stream\":\"2.0.0\",\"signal-exit\":\"4.1.0\",\"wrap-ansi\":\"6.2.0\",\"yoctocolors-cjs\":\"2.1.3\"},\"optionalDependencies\":{\"@types/node\":\"24.10.2\"},\"optional\":true},\"@inquirer/external-editor@1.0.1(@types/node@24.10.2)\":{\"dependencies\":{\"chardet\":\"2.1.0\",\"iconv-lite\":\"0.6.3\"},\"optionalDependencies\":{\"@types/node\":\"24.10.2\"}},\"@inquirer/figures@1.0.15\":{\"optional\":true},\"@inquirer/type@3.0.10(@types/node@22.15.33)\":{\"optionalDependencies\":{\"@types/node\":\"22.15.33\"},\"optional\":true},\"@inquirer/type@3.0.10(@types/node@24.10.2)\":{\"optionalDependencies\":{\"@types/node\":\"24.10.2\"},\"optional\":true},\"@isaacs/cliui@8.0.2\":{\"dependencies\":{\"string-width\":\"5.1.2\",\"string-width-cjs\":\"string-width@4.2.3\",\"strip-ansi\":\"7.1.2\",\"strip-ansi-cjs\":\"strip-ansi@6.0.1\",\"wrap-ansi\":\"8.1.0\",\"wrap-ansi-cjs\":\"wrap-ansi@7.0.0\"}},\"@istanbuljs/schema@0.1.3\":{},\"@jest/diff-sequences@30.0.1\":{},\"@jest/get-type@30.1.0\":{},\"@jest/schemas@30.0.5\":{\"dependencies\":{\"@sinclair/typebox\":\"0.34.40\"}},\"@jridgewell/gen-mapping@0.3.13\":{\"dependencies\":{
\"@jridgewell/sourcemap-codec\":\"1.5.5\",\"@jridgewell/trace-mapping\":\"0.3.31\"}},\"@jridgewell/remapping@2.3.5\":{\"dependencies\":{\"@jridgewell/gen-mapping\":\"0.3.13\",\"@jridgewell/trace-mapping\":\"0.3.31\"}},\"@jridgewell/resolve-uri@3.1.2\":{},\"@jridgewell/source-map@0.3.11\":{\"dependencies\":{\"@jridgewell/gen-mapping\":\"0.3.13\",\"@jridgewell/trace-mapping\":\"0.3.31\"},\"optional\":true},\"@jridgewell/sourcemap-codec@1.5.5\":{},\"@jridgewell/trace-mapping@0.3.30\":{\"dependencies\":{\"@jridgewell/resolve-uri\":\"3.1.2\",\"@jridgewell/sourcemap-codec\":\"1.5.5\"}},\"@jridgewell/trace-mapping@0.3.31\":{\"dependencies\":{\"@jridgewell/resolve-uri\":\"3.1.2\",\"@jridgewell/sourcemap-codec\":\"1.5.5\"}},\"@jridgewell/trace-mapping@0.3.9\":{\"dependencies\":{\"@jridgewell/resolve-uri\":\"3.1.2\",\"@jridgewell/sourcemap-codec\":\"1.5.5\"}},\"@jsonjoy.com/buffers@17.63.0(tslib@2.8.1)\":{\"dependencies\":{\"tslib\":\"2.8.1\"}},\"@jsonjoy.com/codegen@17.63.0(tslib@2.8.1)\":{\"dependencies\":{\"tslib\":\"2.8.1\"}},\"@jsonjoy.com/json-pointer@17.63.0(tslib@2.8.1)\":{\"dependencies\":{\"@jsonjoy.com/util\":\"17.63.0(tslib@2.8.1)\",\"tslib\":\"2.8.1\"}},\"@jsonjoy.com/util@17.63.0(tslib@2.8.1)\":{\"dependencies\":{\"@jsonjoy.com/buffers\":\"17.63.0(tslib@2.8.1)\",\"@jsonjoy.com/codegen\":\"17.63.0(tslib@2.8.1)\",\"tslib\":\"2.8.1\"}},\"@lix-js/plugin-json@1.0.1(tslib@2.8.1)\":{\"dependencies\":{\"@jsonjoy.com/json-pointer\":\"17.63.0(tslib@2.8.1)\",\"@lix-js/sdk\":\"0.5.1\"},\"transitivePeerDependencies\":[\"tslib\"]},\"@lix-js/sdk@0.5.1\":{\"dependencies\":{\"@lix-js/server-protocol-schema\":\"0.1.1\",\"@marcbachmann/cel-js\":\"2.5.2\",\"@opral/zettel-ast\":\"0.1.0\",\"@sqlite.org/sqlite-wasm\":\"3.50.4-build1\",\"ajv\":\"8.17.1\",\"chevrotain\":\"11.0.3\",\"kysely\":\"0.28.7\",\"uuid\":\"11.1.0\"}},\"@lix-js/server-protocol-schema@0.1.1\":{},\"@manypkg/find-root@1.1.0\":{\"dependencies\":{\"@babel/runtime\":\"7.28.4\",\"@types/node\":\"12.20.55\",\"find-up\":\"
4.1.0\",\"fs-extra\":\"8.1.0\"}},\"@manypkg/get-packages@1.1.3\":{\"dependencies\":{\"@babel/runtime\":\"7.28.4\",\"@changesets/types\":\"4.1.0\",\"@manypkg/find-root\":\"1.1.0\",\"fs-extra\":\"8.1.0\",\"globby\":\"11.1.0\",\"read-yaml-file\":\"1.1.0\"}},\"@marcbachmann/cel-js@2.5.2\":{},\"@mermaid-js/parser@0.6.3\":{\"dependencies\":{\"langium\":\"3.3.1\"}},\"@mswjs/interceptors@0.39.8\":{\"dependencies\":{\"@open-draft/deferred-promise\":\"2.2.0\",\"@open-draft/logger\":\"0.3.0\",\"@open-draft/until\":\"2.1.0\",\"is-node-process\":\"1.2.0\",\"outvariant\":\"1.4.3\",\"strict-event-emitter\":\"0.5.1\"},\"optional\":true},\"@napi-rs/wasm-runtime@0.2.4\":{\"dependencies\":{\"@emnapi/core\":\"1.4.5\",\"@emnapi/runtime\":\"1.4.5\",\"@tybys/wasm-util\":\"0.9.0\"}},\"@napi-rs/wasm-runtime@1.1.4(@emnapi/core@1.10.0)(@emnapi/runtime@1.10.0)\":{\"dependencies\":{\"@emnapi/core\":\"1.10.0\",\"@emnapi/runtime\":\"1.10.0\",\"@tybys/wasm-util\":\"0.10.2\"},\"optional\":true},\"@nodelib/fs.scandir@2.1.5\":{\"dependencies\":{\"@nodelib/fs.stat\":\"2.0.5\",\"run-parallel\":\"1.2.0\"}},\"@nodelib/fs.stat@2.0.5\":{},\"@nodelib/fs.walk@1.2.8\":{\"dependencies\":{\"@nodelib/fs.scandir\":\"2.1.5\",\"fastq\":\"1.17.1\"}},\"@nrwl/nx-cloud@19.1.0\":{\"dependencies\":{\"nx-cloud\":\"19.1.0\"},\"transitivePeerDependencies\":[\"debug\"]},\"@nx/nx-darwin-arm64@21.4.1\":{\"optional\":true},\"@nx/nx-darwin-x64@21.4.1\":{\"optional\":true},\"@nx/nx-freebsd-x64@21.4.1\":{\"optional\":true},\"@nx/nx-linux-arm-gnueabihf@21.4.1\":{\"optional\":true},\"@nx/nx-linux-arm64-gnu@21.4.1\":{\"optional\":true},\"@nx/nx-linux-arm64-musl@21.4.1\":{\"optional\":true},\"@nx/nx-linux-x64-gnu@21.4.1\":{\"optional\":true},\"@nx/nx-linux-x64-musl@21.4.1\":{\"optional\":true},\"@nx/nx-win32-arm64-msvc@21.4.1\":{\"optional\":true},\"@nx/nx-win32-x64-msvc@21.4.1\":{\"optional\":true},\"@oozcitak/dom@2.0.2\":{\"dependencies\":{\"@oozcitak/infra\":\"2.0.2\",\"@oozcitak/url\":\"3.0.0\",\"@oozcitak/util\":\"10.0.0\"}},\"@o
ozcitak/infra@2.0.2\":{\"dependencies\":{\"@oozcitak/util\":\"10.0.0\"}},\"@oozcitak/url@3.0.0\":{\"dependencies\":{\"@oozcitak/infra\":\"2.0.2\",\"@oozcitak/util\":\"10.0.0\"}},\"@oozcitak/util@10.0.0\":{},\"@open-draft/deferred-promise@2.2.0\":{\"optional\":true},\"@open-draft/logger@0.3.0\":{\"dependencies\":{\"is-node-process\":\"1.2.0\",\"outvariant\":\"1.4.3\"},\"optional\":true},\"@open-draft/until@2.1.0\":{\"optional\":true},\"@opentelemetry/api-logs@0.208.0\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\"}},\"@opentelemetry/api@1.9.0\":{},\"@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/semantic-conventions\":\"1.38.0\"}},\"@opentelemetry/core@2.4.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/semantic-conventions\":\"1.38.0\"}},\"@opentelemetry/exporter-logs-otlp-http@0.208.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/api-logs\":\"0.208.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/otlp-exporter-base\":\"0.208.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/otlp-transformer\":\"0.208.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/sdk-logs\":\"0.208.0(@opentelemetry/api@1.9.0)\"}},\"@opentelemetry/otlp-exporter-base@0.208.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/otlp-transformer\":\"0.208.0(@opentelemetry/api@1.9.0)\"}},\"@opentelemetry/otlp-transformer@0.208.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/api-logs\":\"0.208.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/resources\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/sdk-logs\":\"0.208.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/sdk-metrics\":\"2.2.0(@opentelemetry/api@1
.9.0)\",\"@opentelemetry/sdk-trace-base\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"protobufjs\":\"7.5.4\"}},\"@opentelemetry/resources@2.2.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/semantic-conventions\":\"1.38.0\"}},\"@opentelemetry/resources@2.4.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/core\":\"2.4.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/semantic-conventions\":\"1.38.0\"}},\"@opentelemetry/sdk-logs@0.208.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/api-logs\":\"0.208.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/resources\":\"2.2.0(@opentelemetry/api@1.9.0)\"}},\"@opentelemetry/sdk-metrics@2.2.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/resources\":\"2.2.0(@opentelemetry/api@1.9.0)\"}},\"@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0)\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/core\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/resources\":\"2.2.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/semantic-conventions\":\"1.38.0\"}},\"@opentelemetry/semantic-conventions@1.38.0\":{},\"@opral/markdown-wc@0.9.0\":{\"dependencies\":{\"mermaid\":\"11.12.1\",\"rehype-autolink-headings\":\"7.1.0\",\"rehype-highlight\":\"7.0.2\",\"rehype-parse\":\"9.0.1\",\"rehype-raw\":\"7.0.0\",\"rehype-remark\":\"10.0.1\",\"rehype-sanitize\":\"6.0.0\",\"rehype-slug\":\"6.0.0\",\"rehype-stringify\":\"10.0.1\",\"remark-frontmatter\":\"5.0.0\",\"remark-gfm\":\"4.0.1\",\"remark-parse\":\"11.0.0\",\"remark-rehype\":\"11.1.2\",\"remark-stringify\":\"11.0.0\",\"unified\":\"11.0.5\",\"unist-util-visit\":\"5.0.0\",\"yaml\":\"2.8.1\"},\"transitivePeerDependencies\":[\"supports
-color\"]},\"@opral/zettel-ast@0.1.0\":{\"dependencies\":{\"@sinclair/typebox\":\"0.34.40\"}},\"@oxc-project/types@0.127.0\":{},\"@oxlint/darwin-arm64@1.26.0\":{\"optional\":true},\"@oxlint/darwin-x64@1.26.0\":{\"optional\":true},\"@oxlint/linux-arm64-gnu@1.26.0\":{\"optional\":true},\"@oxlint/linux-arm64-musl@1.26.0\":{\"optional\":true},\"@oxlint/linux-x64-gnu@1.26.0\":{\"optional\":true},\"@oxlint/linux-x64-musl@1.26.0\":{\"optional\":true},\"@oxlint/win32-arm64@1.26.0\":{\"optional\":true},\"@oxlint/win32-x64@1.26.0\":{\"optional\":true},\"@pkgjs/parseargs@0.11.0\":{\"optional\":true},\"@polka/url@1.0.0-next.29\":{},\"@poppinss/colors@4.1.5\":{\"dependencies\":{\"kleur\":\"4.1.5\"}},\"@poppinss/dumper@0.6.5\":{\"dependencies\":{\"@poppinss/colors\":\"4.1.5\",\"@sindresorhus/is\":\"7.1.1\",\"supports-color\":\"10.2.2\"}},\"@poppinss/exception@1.2.2\":{},\"@posthog/core@1.9.1\":{\"dependencies\":{\"cross-spawn\":\"7.0.6\"}},\"@posthog/types@1.321.2\":{},\"@promptbook/utils@0.69.5\":{\"dependencies\":{\"spacetrim\":\"0.11.59\"},\"optional\":true},\"@protobufjs/aspromise@1.1.2\":{},\"@protobufjs/base64@1.1.2\":{},\"@protobufjs/codegen@2.0.4\":{},\"@protobufjs/eventemitter@1.1.0\":{},\"@protobufjs/fetch@1.1.0\":{\"dependencies\":{\"@protobufjs/aspromise\":\"1.1.2\",\"@protobufjs/inquire\":\"1.1.0\"}},\"@protobufjs/float@1.0.2\":{},\"@protobufjs/inquire@1.1.0\":{},\"@protobufjs/path@1.1.2\":{},\"@protobufjs/pool@1.1.0\":{},\"@protobufjs/utf8@1.1.0\":{},\"@puppeteer/browsers@2.13.1\":{\"dependencies\":{\"debug\":\"4.4.3\",\"extract-zip\":\"2.0.1\",\"progress\":\"2.0.3\",\"proxy-agent\":\"6.5.0\",\"semver\":\"7.7.4\",\"tar-fs\":\"3.1.2\",\"yargs\":\"17.7.2\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\",\"supports-color\"],\"optional\":true},\"@rolldown/binding-android-arm64@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-darwin-arm64@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-darwin-x64@1.0.0-rc.17\":{\
"optional\":true},\"@rolldown/binding-freebsd-x64@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-arm-gnueabihf@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-arm64-gnu@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-arm64-musl@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-ppc64-gnu@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-s390x-gnu@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-x64-gnu@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-linux-x64-musl@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-openharmony-arm64@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-wasm32-wasi@1.0.0-rc.17\":{\"dependencies\":{\"@emnapi/core\":\"1.10.0\",\"@emnapi/runtime\":\"1.10.0\",\"@napi-rs/wasm-runtime\":\"1.1.4(@emnapi/core@1.10.0)(@emnapi/runtime@1.10.0)\"},\"optional\":true},\"@rolldown/binding-win32-arm64-msvc@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/binding-win32-x64-msvc@1.0.0-rc.17\":{\"optional\":true},\"@rolldown/pluginutils@1.0.0-beta.40\":{},\"@rolldown/pluginutils@1.0.0-rc.17\":{},\"@rolldown/pluginutils@1.0.0-rc.7\":{},\"@rollup/rollup-android-arm-eabi@4.53.2\":{\"optional\":true},\"@rollup/rollup-android-arm64@4.53.2\":{\"optional\":true},\"@rollup/rollup-darwin-arm64@4.53.2\":{\"optional\":true},\"@rollup/rollup-darwin-x64@4.53.2\":{\"optional\":true},\"@rollup/rollup-freebsd-arm64@4.53.2\":{\"optional\":true},\"@rollup/rollup-freebsd-x64@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-arm-gnueabihf@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-arm-musleabihf@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-arm64-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-arm64-musl@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-loong64-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-ppc64-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-riscv64-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-riscv64-musl@4.53.2\":{\"optional\"
:true},\"@rollup/rollup-linux-s390x-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-x64-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-linux-x64-musl@4.53.2\":{\"optional\":true},\"@rollup/rollup-openharmony-arm64@4.53.2\":{\"optional\":true},\"@rollup/rollup-win32-arm64-msvc@4.53.2\":{\"optional\":true},\"@rollup/rollup-win32-ia32-msvc@4.53.2\":{\"optional\":true},\"@rollup/rollup-win32-x64-gnu@4.53.2\":{\"optional\":true},\"@rollup/rollup-win32-x64-msvc@4.53.2\":{\"optional\":true},\"@shikijs/core@3.15.0\":{\"dependencies\":{\"@shikijs/types\":\"3.15.0\",\"@shikijs/vscode-textmate\":\"10.0.2\",\"@types/hast\":\"3.0.4\",\"hast-util-to-html\":\"9.0.5\"}},\"@shikijs/engine-javascript@3.15.0\":{\"dependencies\":{\"@shikijs/types\":\"3.15.0\",\"@shikijs/vscode-textmate\":\"10.0.2\",\"oniguruma-to-es\":\"4.3.3\"}},\"@shikijs/engine-oniguruma@3.15.0\":{\"dependencies\":{\"@shikijs/types\":\"3.15.0\",\"@shikijs/vscode-textmate\":\"10.0.2\"}},\"@shikijs/langs@3.15.0\":{\"dependencies\":{\"@shikijs/types\":\"3.15.0\"}},\"@shikijs/themes@3.15.0\":{\"dependencies\":{\"@shikijs/types\":\"3.15.0\"}},\"@shikijs/types@3.15.0\":{\"dependencies\":{\"@shikijs/vscode-textmate\":\"10.0.2\",\"@types/hast\":\"3.0.4\"}},\"@shikijs/vscode-textmate@10.0.2\":{},\"@sinclair/typebox@0.34.40\":{},\"@sindresorhus/is@7.1.1\":{},\"@speed-highlight/core@1.2.12\":{},\"@sqlite.org/sqlite-wasm@3.50.4-build1\":{},\"@standard-schema/spec@1.0.0\":{},\"@standard-schema/spec@1.1.0\":{},\"@tailwindcss/node@4.2.4\":{\"dependencies\":{\"@jridgewell/remapping\":\"2.3.5\",\"enhanced-resolve\":\"5.21.0\",\"jiti\":\"2.6.1\",\"lightningcss\":\"1.32.0\",\"magic-string\":\"0.30.21\",\"source-map-js\":\"1.2.1\",\"tailwindcss\":\"4.2.4\"}},\"@tailwindcss/oxide-android-arm64@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-darwin-arm64@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-darwin-x64@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-freebsd-x64@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-linu
x-arm-gnueabihf@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-linux-arm64-gnu@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-linux-arm64-musl@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-linux-x64-gnu@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-linux-x64-musl@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-wasm32-wasi@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-win32-arm64-msvc@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide-win32-x64-msvc@4.2.4\":{\"optional\":true},\"@tailwindcss/oxide@4.2.4\":{\"optionalDependencies\":{\"@tailwindcss/oxide-android-arm64\":\"4.2.4\",\"@tailwindcss/oxide-darwin-arm64\":\"4.2.4\",\"@tailwindcss/oxide-darwin-x64\":\"4.2.4\",\"@tailwindcss/oxide-freebsd-x64\":\"4.2.4\",\"@tailwindcss/oxide-linux-arm-gnueabihf\":\"4.2.4\",\"@tailwindcss/oxide-linux-arm64-gnu\":\"4.2.4\",\"@tailwindcss/oxide-linux-arm64-musl\":\"4.2.4\",\"@tailwindcss/oxide-linux-x64-gnu\":\"4.2.4\",\"@tailwindcss/oxide-linux-x64-musl\":\"4.2.4\",\"@tailwindcss/oxide-wasm32-wasi\":\"4.2.4\",\"@tailwindcss/oxide-win32-arm64-msvc\":\"4.2.4\",\"@tailwindcss/oxide-win32-x64-msvc\":\"4.2.4\"}},\"@tailwindcss/vite@4.2.4(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@tailwindcss/node\":\"4.2.4\",\"@tailwindcss/oxide\":\"4.2.4\",\"tailwindcss\":\"4.2.4\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"@tanstack/history@1.161.6\":{},\"@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\":{\"dependencies\":{\"@tanstack/history\":\"1.161.6\",\"@tanstack/react-store\":\"0.9.3(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/router-core\":\"1.169.2\",\"isbot\":\"5.1.28\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\"}},\"@tanstack/react-start-client@1.166.48(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\":{\"dependencies\":{\"@ta
nstack/react-router\":\"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/start-client-core\":\"1.168.2\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\"}},\"@tanstack/react-start-rsc@0.0.43(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\":{\"dependencies\":{\"@tanstack/react-router\":\"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/react-start-server\":\"1.166.52(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/router-utils\":\"1.161.8\",\"@tanstack/start-client-core\":\"1.168.2\",\"@tanstack/start-fn-stubs\":\"1.161.6\",\"@tanstack/start-plugin-core\":\"1.169.19(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\",\"@tanstack/start-server-core\":\"1.167.30\",\"@tanstack/start-storage-context\":\"1.166.35\",\"pathe\":\"2.0.3\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\"},\"transitivePeerDependencies\":[\"@rsbuild/core\",\"crossws\",\"supports-color\",\"vite\",\"vite-plugin-solid\",\"webpack\"]},\"@tanstack/react-start-server@1.166.52(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\":{\"dependencies\":{\"@tanstack/history\":\"1.161.6\",\"@tanstack/react-router\":\"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/start-client-core\":\"1.168.2\",\"@tanstack/start-server-core\":\"1.167.30\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\"},\"transitivePeerDependencies\":[\"crossws\"]},\"@tanstack/react-start@1.167.64(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1
)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\":{\"dependencies\":{\"@tanstack/react-router\":\"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/react-start-client\":\"1.166.48(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/react-start-rsc\":\"0.0.43(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\",\"@tanstack/react-start-server\":\"1.166.52(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"@tanstack/router-utils\":\"1.161.8\",\"@tanstack/start-client-core\":\"1.168.2\",\"@tanstack/start-plugin-core\":\"1.169.19(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\",\"@tanstack/start-server-core\":\"1.167.30\",\"pathe\":\"2.0.3\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\"},\"optionalDependencies\":{\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"},\"transitivePeerDependencies\":[\"@rspack/core\",\"crossws\",\"react-server-dom-rspack\",\"supports-color\",\"vite-plugin-solid\",\"webpack\"]},\"@tanstack/react-store@0.9.3(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\":{\"dependencies\":{\"@tanstack/store\":\"0.9.3\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\",\"use-sync-external-store\":\"1.6.0(react@19.2.0)\"}},\"@tanstack/router-core@1.169.2\":{\"dependencies\":{\"@tanstack/history\":\"1.161.6\",\"cookie-es\":\"3.1.1\",\"seroval\":\"1.5.4\",\"seroval-plugins\":\"1.5.4(seroval@1.5.4)\"}},\"@tanstack/router-generator@1.166.41\":{\"dependencies\":{\"@babel/types\":\"7.28.5\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/router-utils\":\"1.161.
8\",\"@tanstack/virtual-file-routes\":\"1.161.7\",\"jiti\":\"2.6.1\",\"magic-string\":\"0.30.21\",\"prettier\":\"3.6.2\",\"zod\":\"3.25.76\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@tanstack/router-plugin@1.167.34(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\":{\"dependencies\":{\"@babel/core\":\"7.28.5\",\"@babel/plugin-syntax-jsx\":\"7.27.1(@babel/core@7.28.5)\",\"@babel/plugin-syntax-typescript\":\"7.27.1(@babel/core@7.28.5)\",\"@babel/template\":\"7.27.2\",\"@babel/traverse\":\"7.28.5\",\"@babel/types\":\"7.28.5\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/router-generator\":\"1.166.41\",\"@tanstack/router-utils\":\"1.161.8\",\"@tanstack/virtual-file-routes\":\"1.161.7\",\"chokidar\":\"3.6.0\",\"unplugin\":\"3.0.0\",\"zod\":\"3.25.76\"},\"optionalDependencies\":{\"@tanstack/react-router\":\"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"webpack\":\"5.99.9(esbuild@0.27.3)\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@tanstack/router-utils@1.161.8\":{\"dependencies\":{\"@babel/core\":\"7.28.5\",\"@babel/generator\":\"7.28.5\",\"@babel/parser\":\"7.28.5\",\"@babel/types\":\"7.28.5\",\"ansis\":\"4.1.0\",\"babel-dead-code-elimination\":\"1.0.12\",\"diff\":\"8.0.2\",\"pathe\":\"2.0.3\",\"tinyglobby\":\"0.2.16\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@tanstack/start-client-core@1.168.2\":{\"dependencies\":{\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/start-fn-stubs\":\"1.161.6\",\"@tanstack/start-storage-context\":\"1.166.35\",\"seroval\":\"1.5.4\"}},\"@tanstack/start-fn-stubs@1.161.6\":{},\"@tanstack/start-plugin-core@1.169.19(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0
))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\":{\"dependencies\":{\"@babel/code-frame\":\"7.27.1\",\"@babel/core\":\"7.28.5\",\"@babel/types\":\"7.28.5\",\"@rolldown/pluginutils\":\"1.0.0-beta.40\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/router-generator\":\"1.166.41\",\"@tanstack/router-plugin\":\"1.167.34(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))\",\"@tanstack/router-utils\":\"1.161.8\",\"@tanstack/start-client-core\":\"1.168.2\",\"@tanstack/start-server-core\":\"1.167.30\",\"cheerio\":\"1.1.2\",\"exsolve\":\"1.0.8\",\"lightningcss\":\"1.32.0\",\"pathe\":\"2.0.3\",\"picomatch\":\"4.0.3\",\"seroval\":\"1.5.4\",\"source-map\":\"0.7.6\",\"srvx\":\"0.11.15\",\"tinyglobby\":\"0.2.16\",\"ufo\":\"1.6.1\",\"vitefu\":\"1.1.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"xmlbuilder2\":\"4.0.3\",\"zod\":\"3.25.76\"},\"optionalDependencies\":{\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"},\"transitivePeerDependencies\":[\"@tanstack/react-router\",\"crossws\",\"supports-color\",\"vite-plugin-solid\",\"webpack\"]},\"@tanstack/start-server-core@1.167.30\":{\"dependencies\":{\"@tanstack/history\":\"1.161.6\",\"@tanstack/router-core\":\"1.169.2\",\"@tanstack/start-client-core\":\"1.168.2\",\"@tanstack/start-storage-context\":\"1.166.35\",\"fetchdts\":\"0.1.7\",\"h3-v2\":\"h3@2.0.1-rc.20\",\"seroval\":\"1.5.4\"},\"transitivePeerDependencies\":[\"crossws\"]},\"@tanstack/start-storage-context@1.166.35\":{\"dependencies\":{\"@tanstack/router-core\":\"1.169.2\"}},\"@tanstack/store@0.9.3\":{},\"@ta
nstack/virtual-file-routes@1.161.7\":{},\"@testing-library/dom@10.4.1\":{\"dependencies\":{\"@babel/code-frame\":\"7.27.1\",\"@babel/runtime\":\"7.28.4\",\"@types/aria-query\":\"5.0.4\",\"aria-query\":\"5.3.0\",\"dom-accessibility-api\":\"0.5.16\",\"lz-string\":\"1.5.0\",\"picocolors\":\"1.1.1\",\"pretty-format\":\"27.5.1\"}},\"@testing-library/react@16.3.0(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.7))(@types/react@19.2.7)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)\":{\"dependencies\":{\"@babel/runtime\":\"7.28.4\",\"@testing-library/dom\":\"10.4.1\",\"react\":\"19.2.0\",\"react-dom\":\"19.2.0(react@19.2.0)\"},\"optionalDependencies\":{\"@types/react\":\"19.2.7\",\"@types/react-dom\":\"19.2.3(@types/react@19.2.7)\"}},\"@testing-library/user-event@14.6.1(@testing-library/dom@10.4.1)\":{\"dependencies\":{\"@testing-library/dom\":\"10.4.1\"},\"optional\":true},\"@tootallnate/quickjs-emscripten@0.23.0\":{\"optional\":true},\"@tybys/wasm-util@0.10.2\":{\"dependencies\":{\"tslib\":\"2.8.1\"},\"optional\":true},\"@tybys/wasm-util@0.9.0\":{\"dependencies\":{\"tslib\":\"2.8.1\"}},\"@types/aria-query@5.0.4\":{},\"@types/chai@5.2.2\":{\"dependencies\":{\"@types/deep-eql\":\"4.0.2\"}},\"@types/chai@5.2.3\":{\"dependencies\":{\"@types/deep-eql\":\"4.0.2\",\"assertion-error\":\"2.0.1\"}},\"@types/cookie@0.6.0\":{\"optional\":true},\"@types/d3-array@3.2.1\":{},\"@types/d3-axis@3.0.6\":{\"dependencies\":{\"@types/d3-selection\":\"3.0.11\"}},\"@types/d3-brush@3.0.6\":{\"dependencies\":{\"@types/d3-selection\":\"3.0.11\"}},\"@types/d3-chord@3.0.6\":{},\"@types/d3-color@3.1.3\":{},\"@types/d3-contour@3.0.6\":{\"dependencies\":{\"@types/d3-array\":\"3.2.1\",\"@types/geojson\":\"7946.0.15\"}},\"@types/d3-delaunay@6.0.4\":{},\"@types/d3-dispatch@3.0.6\":{},\"@types/d3-drag@3.0.7\":{\"dependencies\":{\"@types/d3-selection\":\"3.0.11\"}},\"@types/d3-dsv@3.0.7\":{},\"@types/d3-ease@3.0.2\":{},\"@types/d3-fetch@3.0.7\":{\"dependencies\":{\"@types/d3-dsv\":\"3.
0.7\"}},\"@types/d3-force@3.0.10\":{},\"@types/d3-format@3.0.4\":{},\"@types/d3-geo@3.1.0\":{\"dependencies\":{\"@types/geojson\":\"7946.0.15\"}},\"@types/d3-hierarchy@3.1.7\":{},\"@types/d3-interpolate@3.0.4\":{\"dependencies\":{\"@types/d3-color\":\"3.1.3\"}},\"@types/d3-path@3.1.0\":{},\"@types/d3-polygon@3.0.2\":{},\"@types/d3-quadtree@3.0.6\":{},\"@types/d3-random@3.0.3\":{},\"@types/d3-scale-chromatic@3.1.0\":{},\"@types/d3-scale@4.0.8\":{\"dependencies\":{\"@types/d3-time\":\"3.0.4\"}},\"@types/d3-selection@3.0.11\":{},\"@types/d3-shape@3.1.7\":{\"dependencies\":{\"@types/d3-path\":\"3.1.0\"}},\"@types/d3-time-format@4.0.3\":{},\"@types/d3-time@3.0.4\":{},\"@types/d3-timer@3.0.2\":{},\"@types/d3-transition@3.0.9\":{\"dependencies\":{\"@types/d3-selection\":\"3.0.11\"}},\"@types/d3-zoom@3.0.8\":{\"dependencies\":{\"@types/d3-interpolate\":\"3.0.4\",\"@types/d3-selection\":\"3.0.11\"}},\"@types/d3@7.4.3\":{\"dependencies\":{\"@types/d3-array\":\"3.2.1\",\"@types/d3-axis\":\"3.0.6\",\"@types/d3-brush\":\"3.0.6\",\"@types/d3-chord\":\"3.0.6\",\"@types/d3-color\":\"3.1.3\",\"@types/d3-contour\":\"3.0.6\",\"@types/d3-delaunay\":\"6.0.4\",\"@types/d3-dispatch\":\"3.0.6\",\"@types/d3-drag\":\"3.0.7\",\"@types/d3-dsv\":\"3.0.7\",\"@types/d3-ease\":\"3.0.2\",\"@types/d3-fetch\":\"3.0.7\",\"@types/d3-force\":\"3.0.10\",\"@types/d3-format\":\"3.0.4\",\"@types/d3-geo\":\"3.1.0\",\"@types/d3-hierarchy\":\"3.1.7\",\"@types/d3-interpolate\":\"3.0.4\",\"@types/d3-path\":\"3.1.0\",\"@types/d3-polygon\":\"3.0.2\",\"@types/d3-quadtree\":\"3.0.6\",\"@types/d3-random\":\"3.0.3\",\"@types/d3-scale\":\"4.0.8\",\"@types/d3-scale-chromatic\":\"3.1.0\",\"@types/d3-selection\":\"3.0.11\",\"@types/d3-shape\":\"3.1.7\",\"@types/d3-time\":\"3.0.4\",\"@types/d3-time-format\":\"4.0.3\",\"@types/d3-timer\":\"3.0.2\",\"@types/d3-transition\":\"3.0.9\",\"@types/d3-zoom\":\"3.0.8\"}},\"@types/debug@4.1.12\":{\"dependencies\":{\"@types/ms\":\"2.1.0\"}},\"@types/deep-eql@4.0.2\":{},\"@types/eslint
-scope@3.7.7\":{\"dependencies\":{\"@types/eslint\":\"9.6.1\",\"@types/estree\":\"1.0.9\"},\"optional\":true},\"@types/eslint@9.6.1\":{\"dependencies\":{\"@types/estree\":\"1.0.9\",\"@types/json-schema\":\"7.0.15\"},\"optional\":true},\"@types/estree@1.0.8\":{},\"@types/estree@1.0.9\":{\"optional\":true},\"@types/geojson@7946.0.15\":{},\"@types/hast@3.0.4\":{\"dependencies\":{\"@types/unist\":\"3.0.3\"}},\"@types/json-schema@7.0.15\":{\"optional\":true},\"@types/mdast@4.0.4\":{\"dependencies\":{\"@types/unist\":\"3.0.3\"}},\"@types/ms@2.1.0\":{},\"@types/node@12.20.55\":{},\"@types/node@20.19.39\":{\"dependencies\":{\"undici-types\":\"6.21.0\"},\"optional\":true},\"@types/node@22.15.33\":{\"dependencies\":{\"undici-types\":\"6.21.0\"}},\"@types/node@22.19.17\":{\"dependencies\":{\"undici-types\":\"6.21.0\"},\"optional\":true},\"@types/node@24.10.2\":{\"dependencies\":{\"undici-types\":\"7.16.0\"},\"optional\":true},\"@types/react-dom@19.2.3(@types/react@19.2.7)\":{\"dependencies\":{\"@types/react\":\"19.2.7\"}},\"@types/react@19.2.7\":{\"dependencies\":{\"csstype\":\"3.2.3\"}},\"@types/sinonjs__fake-timers@8.1.5\":{\"optional\":true},\"@types/statuses@2.0.6\":{\"optional\":true},\"@types/tough-cookie@4.0.5\":{\"optional\":true},\"@types/trusted-types@2.0.7\":{\"optional\":true},\"@types/unist@3.0.3\":{},\"@types/whatwg-mimetype@3.0.2\":{\"optional\":true},\"@types/which@2.0.2\":{\"optional\":true},\"@types/ws@8.18.1\":{\"dependencies\":{\"@types/node\":\"22.19.17\"},\"optional\":true},\"@types/yauzl@2.10.3\":{\"dependencies\":{\"@types/node\":\"22.19.17\"},\"optional\":true},\"@ungap/structured-clone@1.2.1\":{},\"@vitejs/plugin-react@6.0.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@rolldown/pluginutils\":\"1.0.0-rc.7\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"@vitest/bro
wser@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)\":{\"dependencies\":{\"@testing-library/dom\":\"10.4.1\",\"@testing-library/user-event\":\"14.6.1(@testing-library/dom@10.4.1)\",\"@vitest/mocker\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/utils\":\"3.2.4\",\"magic-string\":\"0.30.21\",\"sirv\":\"3.0.2\",\"tinyrainbow\":\"2.0.0\",\"vitest\":\"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@26.1.0)(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"ws\":\"8.20.0\"},\"optionalDependencies\":{\"playwright\":\"1.55.0\",\"webdriverio\":\"9.2.1\"},\"transitivePeerDependencies\":[\"bufferutil\",\"msw\",\"utf-8-validate\",\"vite\"],\"optional\":true},\"@vitest/browser@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)\":{\"dependencies\":{\"@testing-library/dom\":\"10.4.1\",\"@testing-library/user-event\":\"14.6.1(@testing-library/dom@10.4.1)\",\"@vitest/mocker\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/utils\":\"3.2.4\",\"magic-string\":\"0.30.21\",\"sirv\":\"3.0.2\",\"tinyrainbow\":\"2.0.0\",\"vitest\":\"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(
msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"ws\":\"8.20.0\"},\"optionalDependencies\":{\"playwright\":\"1.55.0\",\"webdriverio\":\"9.2.1\"},\"transitivePeerDependencies\":[\"bufferutil\",\"msw\",\"utf-8-validate\",\"vite\"],\"optional\":true},\"@vitest/browser@4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@4.1.5)\":{\"dependencies\":{\"@blazediff/core\":\"1.9.1\",\"@vitest/mocker\":\"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/utils\":\"4.1.5\",\"magic-string\":\"0.30.21\",\"pngjs\":\"7.0.0\",\"sirv\":\"3.0.2\",\"tinyrainbow\":\"3.1.0\",\"vitest\":\"4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"ws\":\"8.20.0\"},\"transitivePeerDependencies\":[\"bufferutil\",\"msw\",\"utf-8-validate\",\"vite\"]},\"@vitest/coverage-v8@3.2.4(@vitest/browser@3.2.4)(vitest@3.2.4)\":{\"dependencies\":{\"@ampproject/remapping\":\"2.3.0\",\"@bcoe/v8-coverage\":\"1.0.2\",\"ast-v8-to-istanbul\":\"0.3.4\",\"debug\":\"4.4.1\",\"istanbul-lib-coverage\":\"3.2.2\",\"istanbul-lib-report\":\"3.0.1\",\"istanbul-lib-source-maps\":\"5.0.6\",\"istanbul-reports\":\"3.2.0\",\"magic-string\":\"0.30.18\",\"magicast\":\"0.3.5\",\"std-env\":\"3.9.0\",\"test-exclude\":\"7.0.1\",\"tinyrainbow\":\"2.0.0\",\"vitest\":\"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2
)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"},\"optionalDependencies\":{\"@vitest/browser\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"@vitest/coverage-v8@4.1.5(@vitest/browser@4.1.5)(vitest@4.1.5)\":{\"dependencies\":{\"@bcoe/v8-coverage\":\"1.0.2\",\"@vitest/utils\":\"4.1.5\",\"ast-v8-to-istanbul\":\"1.0.0\",\"istanbul-lib-coverage\":\"3.2.2\",\"istanbul-lib-report\":\"3.0.1\",\"istanbul-reports\":\"3.2.0\",\"magicast\":\"0.5.2\",\"obug\":\"2.1.1\",\"std-env\":\"4.1.0\",\"tinyrainbow\":\"3.1.0\",\"vitest\":\"4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\"},\"optionalDependencies\":{\"@vitest/browser\":\"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@4.1.5)\"}},\"@vitest/expect@3.2.4\":{\"dependencies\":{\"@types/chai\":\"5.2.2\",\"@vitest/spy\":\"3.2.4\",\"@vitest/utils\":\"3.2.4\",\"chai\":\"5.3.3\",\"tinyrainbow\":\"2.0.0\"}},\"@vitest/expect@4.0.18\":{\"dependencies\":{\"@standard-schema/spec\":\"1.0.0\",\"@types/chai\":\"5.2.3\",\"@vitest/spy\":\"4.0.18\",\"@vitest/utils\":\"4.0.18\",\"chai\":\"6.2.2\",\"tinyrainbow\":\"3.1.0\"}},\"@vitest/expect@4.1.5\":{\"dependencies\":{\"@standard-schema/spec\":\"1.1.0\",\"@types/chai\":\"5.2.3\",\"@vitest/spy\":\"4.1.5\",\"@vitest/utils\":\"4.1.5\",\"chai\":\"6.2.2\",\"tinyrainbow\":\"3.1.0\"}},\"@vitest/mocker@3.2.4(msw@2.10.2(@types/node@24
.10.2)(typescript@5.8.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@vitest/spy\":\"3.2.4\",\"estree-walker\":\"3.0.3\",\"magic-string\":\"0.30.21\"},\"optionalDependencies\":{\"msw\":\"2.10.2(@types/node@24.10.2)(typescript@5.8.3)\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"@vitest/mocker@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@vitest/spy\":\"3.2.4\",\"estree-walker\":\"3.0.3\",\"magic-string\":\"0.30.21\"},\"optionalDependencies\":{\"msw\":\"2.10.2(@types/node@24.10.2)(typescript@5.9.3)\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"@vitest/mocker@4.0.18(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@vitest/spy\":\"4.0.18\",\"estree-walker\":\"3.0.3\",\"magic-string\":\"0.30.21\"},\"optionalDependencies\":{\"msw\":\"2.10.2(@types/node@24.10.2)(typescript@5.9.3)\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"@vitest/mocker@4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@vitest/spy\":\"4.1.5\",\"estree-walker\":\"3.0.3\",\"magic-string\":\"0.30.21\"},\"optionalDependencies\":{\"msw\":\"2.10.2(@types/node@22.15.33)(typescript@5.8.3)\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedd
ed@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"@vitest/pretty-format@3.2.4\":{\"dependencies\":{\"tinyrainbow\":\"2.0.0\"}},\"@vitest/pretty-format@4.0.18\":{\"dependencies\":{\"tinyrainbow\":\"3.1.0\"}},\"@vitest/pretty-format@4.1.5\":{\"dependencies\":{\"tinyrainbow\":\"3.1.0\"}},\"@vitest/runner@3.2.4\":{\"dependencies\":{\"@vitest/utils\":\"3.2.4\",\"pathe\":\"2.0.3\",\"strip-literal\":\"3.0.0\"}},\"@vitest/runner@4.0.18\":{\"dependencies\":{\"@vitest/utils\":\"4.0.18\",\"pathe\":\"2.0.3\"}},\"@vitest/runner@4.1.5\":{\"dependencies\":{\"@vitest/utils\":\"4.1.5\",\"pathe\":\"2.0.3\"}},\"@vitest/snapshot@3.2.4\":{\"dependencies\":{\"@vitest/pretty-format\":\"3.2.4\",\"magic-string\":\"0.30.21\",\"pathe\":\"2.0.3\"}},\"@vitest/snapshot@4.0.18\":{\"dependencies\":{\"@vitest/pretty-format\":\"4.0.18\",\"magic-string\":\"0.30.21\",\"pathe\":\"2.0.3\"}},\"@vitest/snapshot@4.1.5\":{\"dependencies\":{\"@vitest/pretty-format\":\"4.1.5\",\"@vitest/utils\":\"4.1.5\",\"magic-string\":\"0.30.21\",\"pathe\":\"2.0.3\"}},\"@vitest/spy@3.2.4\":{\"dependencies\":{\"tinyspy\":\"4.0.3\"}},\"@vitest/spy@4.0.18\":{},\"@vitest/spy@4.1.5\":{},\"@vitest/utils@3.2.4\":{\"dependencies\":{\"@vitest/pretty-format\":\"3.2.4\",\"loupe\":\"3.2.1\",\"tinyrainbow\":\"2.0.0\"}},\"@vitest/utils@4.0.18\":{\"dependencies\":{\"@vitest/pretty-format\":\"4.0.18\",\"tinyrainbow\":\"3.1.0\"}},\"@vitest/utils@4.1.5\":{\"dependencies\":{\"@vitest/pretty-format\":\"4.1.5\",\"convert-source-map\":\"2.0.0\",\"tinyrainbow\":\"3.1.0\"}},\"@wdio/config@9.1.3\":{\"dependencies\":{\"@wdio/logger\":\"9.1.3\",\"@wdio/types\":\"9.1.3\",\"@wdio/utils\":\"9.1.3\",\"decamelize\":\"6.0.1\",\"deepmerge-ts\":\"7.1.5\",\"glob\":\"10.5.0\",\"import-meta-resolve\":\"4.2.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\",\"supports-color\"],\"optional\":true},\"@wdio/logger@8.38.0\":{\"dependencies\":{\"chalk\":\"5.6.2\",\"loglevel\":\"1.9.2\",\"loglevel-plugin-prefix\":
\"0.8.4\",\"strip-ansi\":\"7.2.0\"},\"optional\":true},\"@wdio/logger@9.1.3\":{\"dependencies\":{\"chalk\":\"5.6.2\",\"loglevel\":\"1.9.2\",\"loglevel-plugin-prefix\":\"0.8.4\",\"strip-ansi\":\"7.2.0\"},\"optional\":true},\"@wdio/protocols@9.2.0\":{\"optional\":true},\"@wdio/repl@9.0.8\":{\"dependencies\":{\"@types/node\":\"20.19.39\"},\"optional\":true},\"@wdio/types@9.1.3\":{\"dependencies\":{\"@types/node\":\"20.19.39\"},\"optional\":true},\"@wdio/utils@9.1.3\":{\"dependencies\":{\"@puppeteer/browsers\":\"2.13.1\",\"@wdio/logger\":\"9.1.3\",\"@wdio/types\":\"9.1.3\",\"decamelize\":\"6.0.1\",\"deepmerge-ts\":\"7.1.5\",\"edgedriver\":\"5.6.1\",\"geckodriver\":\"4.5.1\",\"get-port\":\"7.2.0\",\"import-meta-resolve\":\"4.2.0\",\"locate-app\":\"2.5.0\",\"safaridriver\":\"0.1.2\",\"split2\":\"4.2.0\",\"wait-port\":\"1.1.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\",\"supports-color\"],\"optional\":true},\"@webassemblyjs/ast@1.14.1\":{\"dependencies\":{\"@webassemblyjs/helper-numbers\":\"1.13.2\",\"@webassemblyjs/helper-wasm-bytecode\":\"1.13.2\"},\"optional\":true},\"@webassemblyjs/floating-point-hex-parser@1.13.2\":{\"optional\":true},\"@webassemblyjs/helper-api-error@1.13.2\":{\"optional\":true},\"@webassemblyjs/helper-buffer@1.14.1\":{\"optional\":true},\"@webassemblyjs/helper-numbers@1.13.2\":{\"dependencies\":{\"@webassemblyjs/floating-point-hex-parser\":\"1.13.2\",\"@webassemblyjs/helper-api-error\":\"1.13.2\",\"@xtuc/long\":\"4.2.2\"},\"optional\":true},\"@webassemblyjs/helper-wasm-bytecode@1.13.2\":{\"optional\":true},\"@webassemblyjs/helper-wasm-section@1.14.1\":{\"dependencies\":{\"@webassemblyjs/ast\":\"1.14.1\",\"@webassemblyjs/helper-buffer\":\"1.14.1\",\"@webassemblyjs/helper-wasm-bytecode\":\"1.13.2\",\"@webassemblyjs/wasm-gen\":\"1.14.1\"},\"optional\":true},\"@webassemblyjs/ieee754@1.13.2\":{\"dependencies\":{\"@xtuc/ieee754\":\"1.2.0\"},\"optional\":true},\"@webassemblyjs/leb128@1.13.2\":{\"depende
ncies\":{\"@xtuc/long\":\"4.2.2\"},\"optional\":true},\"@webassemblyjs/utf8@1.13.2\":{\"optional\":true},\"@webassemblyjs/wasm-edit@1.14.1\":{\"dependencies\":{\"@webassemblyjs/ast\":\"1.14.1\",\"@webassemblyjs/helper-buffer\":\"1.14.1\",\"@webassemblyjs/helper-wasm-bytecode\":\"1.13.2\",\"@webassemblyjs/helper-wasm-section\":\"1.14.1\",\"@webassemblyjs/wasm-gen\":\"1.14.1\",\"@webassemblyjs/wasm-opt\":\"1.14.1\",\"@webassemblyjs/wasm-parser\":\"1.14.1\",\"@webassemblyjs/wast-printer\":\"1.14.1\"},\"optional\":true},\"@webassemblyjs/wasm-gen@1.14.1\":{\"dependencies\":{\"@webassemblyjs/ast\":\"1.14.1\",\"@webassemblyjs/helper-wasm-bytecode\":\"1.13.2\",\"@webassemblyjs/ieee754\":\"1.13.2\",\"@webassemblyjs/leb128\":\"1.13.2\",\"@webassemblyjs/utf8\":\"1.13.2\"},\"optional\":true},\"@webassemblyjs/wasm-opt@1.14.1\":{\"dependencies\":{\"@webassemblyjs/ast\":\"1.14.1\",\"@webassemblyjs/helper-buffer\":\"1.14.1\",\"@webassemblyjs/wasm-gen\":\"1.14.1\",\"@webassemblyjs/wasm-parser\":\"1.14.1\"},\"optional\":true},\"@webassemblyjs/wasm-parser@1.14.1\":{\"dependencies\":{\"@webassemblyjs/ast\":\"1.14.1\",\"@webassemblyjs/helper-api-error\":\"1.13.2\",\"@webassemblyjs/helper-wasm-bytecode\":\"1.13.2\",\"@webassemblyjs/ieee754\":\"1.13.2\",\"@webassemblyjs/leb128\":\"1.13.2\",\"@webassemblyjs/utf8\":\"1.13.2\"},\"optional\":true},\"@webassemblyjs/wast-printer@1.14.1\":{\"dependencies\":{\"@webassemblyjs/ast\":\"1.14.1\",\"@xtuc/long\":\"4.2.2\"},\"optional\":true},\"@xtuc/ieee754@1.2.0\":{\"optional\":true},\"@xtuc/long@4.2.2\":{\"optional\":true},\"@yarnpkg/lockfile@1.1.0\":{},\"@yarnpkg/parsers@3.0.2\":{\"dependencies\":{\"js-yaml\":\"3.14.1\",\"tslib\":\"2.8.1\"}},\"@zip.js/zip.js@2.8.26\":{\"optional\":true},\"@zkochan/js-yaml@0.0.7\":{\"dependencies\":{\"argparse\":\"2.0.1\"}},\"abort-controller@3.0.0\":{\"dependencies\":{\"event-target-shim\":\"5.0.1\"},\"optional\":true},\"acorn@8.16.0\":{},\"agent-base@7.1.3\":{},\"agent-base@7.1.4\":{\"optional\":true},\"ajv-formats
@2.1.1(ajv@8.20.0)\":{\"optionalDependencies\":{\"ajv\":\"8.20.0\"},\"optional\":true},\"ajv-keywords@5.1.0(ajv@8.20.0)\":{\"dependencies\":{\"ajv\":\"8.20.0\",\"fast-deep-equal\":\"3.1.3\"},\"optional\":true},\"ajv@8.17.1\":{\"dependencies\":{\"fast-deep-equal\":\"3.1.3\",\"fast-uri\":\"3.0.3\",\"json-schema-traverse\":\"1.0.0\",\"require-from-string\":\"2.0.2\"}},\"ajv@8.20.0\":{\"dependencies\":{\"fast-deep-equal\":\"3.1.3\",\"fast-uri\":\"3.1.2\",\"json-schema-traverse\":\"1.0.0\",\"require-from-string\":\"2.0.2\"},\"optional\":true},\"ansi-colors@4.1.3\":{},\"ansi-regex@5.0.1\":{},\"ansi-regex@6.1.0\":{},\"ansi-regex@6.2.2\":{\"optional\":true},\"ansi-styles@4.3.0\":{\"dependencies\":{\"color-convert\":\"2.0.1\"}},\"ansi-styles@5.2.0\":{},\"ansi-styles@6.2.1\":{},\"ansis@4.1.0\":{},\"anymatch@3.1.3\":{\"dependencies\":{\"normalize-path\":\"3.0.0\",\"picomatch\":\"2.3.1\"}},\"archiver-utils@5.0.2\":{\"dependencies\":{\"glob\":\"10.5.0\",\"graceful-fs\":\"4.2.11\",\"is-stream\":\"2.0.1\",\"lazystream\":\"1.0.1\",\"lodash\":\"4.18.1\",\"normalize-path\":\"3.0.0\",\"readable-stream\":\"4.7.0\"},\"optional\":true},\"archiver@7.0.1\":{\"dependencies\":{\"archiver-utils\":\"5.0.2\",\"async\":\"3.2.6\",\"buffer-crc32\":\"1.0.0\",\"readable-stream\":\"4.7.0\",\"readdir-glob\":\"1.1.3\",\"tar-stream\":\"3.2.0\",\"zip-stream\":\"6.0.1\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\"],\"optional\":true},\"argparse@1.0.10\":{\"dependencies\":{\"sprintf-js\":\"1.0.3\"}},\"argparse@2.0.1\":{},\"aria-query@5.3.0\":{\"dependencies\":{\"dequal\":\"2.0.3\"}},\"aria-query@5.3.2\":{\"optional\":true},\"array-union@2.1.0\":{},\"assertion-error@2.0.1\":{},\"ast-types@0.13.4\":{\"dependencies\":{\"tslib\":\"2.8.1\"},\"optional\":true},\"ast-v8-to-istanbul@0.3.4\":{\"dependencies\":{\"@jridgewell/trace-mapping\":\"0.3.30\",\"estree-walker\":\"3.0.3\",\"js-tokens\":\"9.0.1\"}},\"ast-v8-to-istanbul@1.0.0\":{\"dependencies\":{\"@jridgewell/
trace-mapping\":\"0.3.31\",\"estree-walker\":\"3.0.3\",\"js-tokens\":\"10.0.0\"}},\"async@3.2.6\":{\"optional\":true},\"asynckit@0.4.0\":{},\"axios@1.11.0\":{\"dependencies\":{\"follow-redirects\":\"1.15.11\",\"form-data\":\"4.0.4\",\"proxy-from-env\":\"1.1.0\"},\"transitivePeerDependencies\":[\"debug\"]},\"b4a@1.8.1\":{\"optional\":true},\"babel-dead-code-elimination@1.0.12\":{\"dependencies\":{\"@babel/core\":\"7.28.5\",\"@babel/parser\":\"7.28.5\",\"@babel/traverse\":\"7.28.5\",\"@babel/types\":\"7.28.5\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"bail@2.0.2\":{},\"balanced-match@1.0.2\":{},\"bare-events@2.8.2\":{\"optional\":true},\"bare-fs@4.7.1\":{\"dependencies\":{\"bare-events\":\"2.8.2\",\"bare-path\":\"3.0.0\",\"bare-stream\":\"2.13.1(bare-events@2.8.2)\",\"bare-url\":\"2.4.3\",\"fast-fifo\":\"1.3.2\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"react-native-b4a\"],\"optional\":true},\"bare-os@3.9.1\":{\"optional\":true},\"bare-path@3.0.0\":{\"dependencies\":{\"bare-os\":\"3.9.1\"},\"optional\":true},\"bare-stream@2.13.1(bare-events@2.8.2)\":{\"dependencies\":{\"streamx\":\"2.25.0\",\"teex\":\"1.0.1\"},\"optionalDependencies\":{\"bare-events\":\"2.8.2\"},\"transitivePeerDependencies\":[\"react-native-b4a\"],\"optional\":true},\"bare-url@2.4.3\":{\"dependencies\":{\"bare-path\":\"3.0.0\"},\"optional\":true},\"base64-js@1.5.1\":{},\"baseline-browser-mapping@2.10.27\":{\"optional\":true},\"basic-ftp@5.3.1\":{\"optional\":true},\"better-path-resolve@1.0.0\":{\"dependencies\":{\"is-windows\":\"1.0.2\"}},\"better-sqlite3@12.9.0\":{\"dependencies\":{\"bindings\":\"1.5.0\",\"prebuild-install\":\"7.1.3\"}},\"bidi-js@1.0.3\":{\"dependencies\":{\"require-from-string\":\"2.0.2\"}},\"binary-extensions@2.3.0\":{},\"bindings@1.5.0\":{\"dependencies\":{\"file-uri-to-path\":\"1.0.0\"}},\"bl@4.1.0\":{\"dependencies\":{\"buffer\":\"5.7.1\",\"inherits\":\"2.0.4\",\"readable-stream\":\"3.6.2\"}},\"blake3-wasm@2.1.5\":{},\"boolbase@1.0.0\":{},\"brac
e-expansion@2.0.2\":{\"dependencies\":{\"balanced-match\":\"1.0.2\"}},\"brace-expansion@2.1.0\":{\"dependencies\":{\"balanced-match\":\"1.0.2\"},\"optional\":true},\"braces@3.0.3\":{\"dependencies\":{\"fill-range\":\"7.1.1\"}},\"browserslist@4.25.3\":{\"dependencies\":{\"caniuse-lite\":\"1.0.30001737\",\"electron-to-chromium\":\"1.5.211\",\"node-releases\":\"2.0.19\",\"update-browserslist-db\":\"1.1.3(browserslist@4.25.3)\"}},\"browserslist@4.28.2\":{\"dependencies\":{\"baseline-browser-mapping\":\"2.10.27\",\"caniuse-lite\":\"1.0.30001792\",\"electron-to-chromium\":\"1.5.352\",\"node-releases\":\"2.0.38\",\"update-browserslist-db\":\"1.2.3(browserslist@4.28.2)\"},\"optional\":true},\"buffer-builder@0.2.0\":{\"optional\":true},\"buffer-crc32@0.2.13\":{\"optional\":true},\"buffer-crc32@1.0.0\":{\"optional\":true},\"buffer-from@1.1.2\":{\"optional\":true},\"buffer@5.7.1\":{\"dependencies\":{\"base64-js\":\"1.5.1\",\"ieee754\":\"1.2.1\"}},\"buffer@6.0.3\":{\"dependencies\":{\"base64-js\":\"1.5.1\",\"ieee754\":\"1.2.1\"},\"optional\":true},\"cac@6.7.14\":{},\"call-bind-apply-helpers@1.0.2\":{\"dependencies\":{\"es-errors\":\"1.3.0\",\"function-bind\":\"1.1.2\"}},\"caniuse-lite@1.0.30001737\":{},\"caniuse-lite@1.0.30001792\":{\"optional\":true},\"ccount@2.0.1\":{},\"chai@5.3.3\":{\"dependencies\":{\"assertion-error\":\"2.0.1\",\"check-error\":\"2.1.1\",\"deep-eql\":\"5.0.2\",\"loupe\":\"3.2.1\",\"pathval\":\"2.0.1\"}},\"chai@6.2.2\":{},\"chalk@4.1.2\":{\"dependencies\":{\"ansi-styles\":\"4.3.0\",\"supports-color\":\"7.2.0\"}},\"chalk@5.6.2\":{\"optional\":true},\"character-entities-html4@2.1.0\":{},\"character-entities-legacy@3.0.0\":{},\"character-entities@2.0.2\":{},\"chardet@2.1.0\":{},\"check-error@2.1.1\":{},\"cheerio-select@2.1.0\":{\"dependencies\":{\"boolbase\":\"1.0.0\",\"css-select\":\"5.1.0\",\"css-what\":\"6.1.0\",\"domelementtype\":\"2.3.0\",\"domhandler\":\"5.0.3\",\"domutils\":\"3.2.2\"}},\"cheerio@1.1.2\":{\"dependencies\":{\"cheerio-select\":\"2.1.0\",\"
dom-serializer\":\"2.0.0\",\"domhandler\":\"5.0.3\",\"domutils\":\"3.2.2\",\"encoding-sniffer\":\"0.2.1\",\"htmlparser2\":\"10.0.0\",\"parse5\":\"7.3.0\",\"parse5-htmlparser2-tree-adapter\":\"7.1.0\",\"parse5-parser-stream\":\"7.1.2\",\"undici\":\"7.16.0\",\"whatwg-mimetype\":\"4.0.0\"}},\"cheerio@1.2.0\":{\"dependencies\":{\"cheerio-select\":\"2.1.0\",\"dom-serializer\":\"2.0.0\",\"domhandler\":\"5.0.3\",\"domutils\":\"3.2.2\",\"encoding-sniffer\":\"0.2.1\",\"htmlparser2\":\"10.1.0\",\"parse5\":\"7.3.0\",\"parse5-htmlparser2-tree-adapter\":\"7.1.0\",\"parse5-parser-stream\":\"7.1.2\",\"undici\":\"7.25.0\",\"whatwg-mimetype\":\"4.0.0\"},\"optional\":true},\"chevrotain-allstar@0.3.1(chevrotain@11.0.3)\":{\"dependencies\":{\"chevrotain\":\"11.0.3\",\"lodash-es\":\"4.17.21\"}},\"chevrotain@11.0.3\":{\"dependencies\":{\"@chevrotain/cst-dts-gen\":\"11.0.3\",\"@chevrotain/gast\":\"11.0.3\",\"@chevrotain/regexp-to-ast\":\"11.0.3\",\"@chevrotain/types\":\"11.0.3\",\"@chevrotain/utils\":\"11.0.3\",\"lodash-es\":\"4.17.21\"}},\"chokidar@3.6.0\":{\"dependencies\":{\"anymatch\":\"3.1.3\",\"braces\":\"3.0.3\",\"glob-parent\":\"5.1.2\",\"is-binary-path\":\"2.1.0\",\"is-glob\":\"4.0.3\",\"normalize-path\":\"3.0.0\",\"readdirp\":\"3.6.0\"},\"optionalDependencies\":{\"fsevents\":\"2.3.3\"}},\"chownr@1.1.4\":{},\"chownr@2.0.0\":{},\"chrome-trace-event@1.0.4\":{\"optional\":true},\"ci-info@3.9.0\":{},\"cli-cursor@3.1.0\":{\"dependencies\":{\"restore-cursor\":\"3.1.0\"}},\"cli-spinners@2.6.1\":{},\"cli-spinners@2.9.2\":{},\"cli-width@4.1.0\":{\"optional\":true},\"cliui@8.0.1\":{\"dependencies\":{\"string-width\":\"4.2.3\",\"strip-ansi\":\"6.0.1\",\"wrap-ansi\":\"7.0.0\"}},\"clone@1.0.4\":{},\"color-convert@2.0.1\":{\"dependencies\":{\"color-name\":\"1.1.4\"}},\"color-name@1.1.4\":{},\"colorjs.io@0.5.2\":{\"optional\":true},\"combined-stream@1.0.8\":{\"dependencies\":{\"delayed-stream\":\"1.0.0\"}},\"comma-separated-tokens@2.0.3\":{},\"commander@2.20.3\":{\"optional\":true},\"commander@
7.2.0\":{},\"commander@8.3.0\":{},\"commander@9.5.0\":{\"optional\":true},\"compress-commons@6.0.2\":{\"dependencies\":{\"crc-32\":\"1.2.2\",\"crc32-stream\":\"6.0.0\",\"is-stream\":\"2.0.1\",\"normalize-path\":\"3.0.0\",\"readable-stream\":\"4.7.0\"},\"optional\":true},\"confbox@0.1.8\":{},\"confbox@0.2.2\":{},\"convert-source-map@2.0.0\":{},\"cookie-es@3.1.1\":{},\"cookie@0.7.2\":{\"optional\":true},\"cookie@1.0.2\":{},\"core-js@3.46.0\":{},\"core-util-is@1.0.3\":{\"optional\":true},\"cose-base@1.0.3\":{\"dependencies\":{\"layout-base\":\"1.0.2\"}},\"cose-base@2.2.0\":{\"dependencies\":{\"layout-base\":\"2.0.1\"}},\"crc-32@1.2.2\":{\"optional\":true},\"crc32-stream@6.0.0\":{\"dependencies\":{\"crc-32\":\"1.2.2\",\"readable-stream\":\"4.7.0\"},\"optional\":true},\"cross-spawn@7.0.6\":{\"dependencies\":{\"path-key\":\"3.1.1\",\"shebang-command\":\"2.0.0\",\"which\":\"2.0.2\"}},\"css-select@5.1.0\":{\"dependencies\":{\"boolbase\":\"1.0.0\",\"css-what\":\"6.1.0\",\"domhandler\":\"5.0.3\",\"domutils\":\"3.2.2\",\"nth-check\":\"2.1.1\"}},\"css-shorthand-properties@1.1.2\":{\"optional\":true},\"css-tree@3.1.0\":{\"dependencies\":{\"mdn-data\":\"2.12.2\",\"source-map-js\":\"1.2.1\"}},\"css-value@0.0.1\":{\"optional\":true},\"css-what@6.1.0\":{},\"cssstyle@4.3.1\":{\"dependencies\":{\"@asamuzakjp/css-color\":\"3.1.4\",\"rrweb-cssom\":\"0.8.0\"}},\"cssstyle@5.3.4(postcss@8.5.14)\":{\"dependencies\":{\"@asamuzakjp/css-color\":\"4.1.0\",\"@csstools/css-syntax-patches-for-csstree\":\"1.0.14(postcss@8.5.14)\",\"css-tree\":\"3.1.0\"},\"transitivePeerDependencies\":[\"postcss\"]},\"csstype@3.2.3\":{},\"cytoscape-cose-bilkent@4.1.0(cytoscape@3.30.4)\":{\"dependencies\":{\"cose-base\":\"1.0.3\",\"cytoscape\":\"3.30.4\"}},\"cytoscape-fcose@2.2.0(cytoscape@3.30.4)\":{\"dependencies\":{\"cose-base\":\"2.2.0\",\"cytoscape\":\"3.30.4\"}},\"cytoscape@3.30.4\":{},\"d3-array@2.12.1\":{\"dependencies\":{\"internmap\":\"1.0.1\"}},\"d3-array@3.2.4\":{\"dependencies\":{\"internmap\":\"2.0.3\"}
},\"d3-axis@3.0.0\":{},\"d3-brush@3.0.0\":{\"dependencies\":{\"d3-dispatch\":\"3.0.1\",\"d3-drag\":\"3.0.0\",\"d3-interpolate\":\"3.0.1\",\"d3-selection\":\"3.0.0\",\"d3-transition\":\"3.0.1(d3-selection@3.0.0)\"}},\"d3-chord@3.0.1\":{\"dependencies\":{\"d3-path\":\"3.1.0\"}},\"d3-color@3.1.0\":{},\"d3-contour@4.0.2\":{\"dependencies\":{\"d3-array\":\"3.2.4\"}},\"d3-delaunay@6.0.4\":{\"dependencies\":{\"delaunator\":\"5.0.1\"}},\"d3-dispatch@3.0.1\":{},\"d3-drag@3.0.0\":{\"dependencies\":{\"d3-dispatch\":\"3.0.1\",\"d3-selection\":\"3.0.0\"}},\"d3-dsv@3.0.1\":{\"dependencies\":{\"commander\":\"7.2.0\",\"iconv-lite\":\"0.6.3\",\"rw\":\"1.3.3\"}},\"d3-ease@3.0.1\":{},\"d3-fetch@3.0.1\":{\"dependencies\":{\"d3-dsv\":\"3.0.1\"}},\"d3-force@3.0.0\":{\"dependencies\":{\"d3-dispatch\":\"3.0.1\",\"d3-quadtree\":\"3.0.1\",\"d3-timer\":\"3.0.1\"}},\"d3-format@3.1.0\":{},\"d3-geo@3.1.1\":{\"dependencies\":{\"d3-array\":\"3.2.4\"}},\"d3-hierarchy@3.1.2\":{},\"d3-interpolate@3.0.1\":{\"dependencies\":{\"d3-color\":\"3.1.0\"}},\"d3-path@1.0.9\":{},\"d3-path@3.1.0\":{},\"d3-polygon@3.0.1\":{},\"d3-quadtree@3.0.1\":{},\"d3-random@3.0.1\":{},\"d3-sankey@0.12.3\":{\"dependencies\":{\"d3-array\":\"2.12.1\",\"d3-shape\":\"1.3.7\"}},\"d3-scale-chromatic@3.1.0\":{\"dependencies\":{\"d3-color\":\"3.1.0\",\"d3-interpolate\":\"3.0.1\"}},\"d3-scale@4.0.2\":{\"dependencies\":{\"d3-array\":\"3.2.4\",\"d3-format\":\"3.1.0\",\"d3-interpolate\":\"3.0.1\",\"d3-time\":\"3.1.0\",\"d3-time-format\":\"4.1.0\"}},\"d3-selection@3.0.0\":{},\"d3-shape@1.3.7\":{\"dependencies\":{\"d3-path\":\"1.0.9\"}},\"d3-shape@3.2.0\":{\"dependencies\":{\"d3-path\":\"3.1.0\"}},\"d3-time-format@4.1.0\":{\"dependencies\":{\"d3-time\":\"3.1.0\"}},\"d3-time@3.1.0\":{\"dependencies\":{\"d3-array\":\"3.2.4\"}},\"d3-timer@3.0.1\":{},\"d3-transition@3.0.1(d3-selection@3.0.0)\":{\"dependencies\":{\"d3-color\":\"3.1.0\",\"d3-dispatch\":\"3.0.1\",\"d3-ease\":\"3.0.1\",\"d3-interpolate\":\"3.0.1\",\"d3-selection\":\"3.0.0\",\"d3-ti
mer\":\"3.0.1\"}},\"d3-zoom@3.0.0\":{\"dependencies\":{\"d3-dispatch\":\"3.0.1\",\"d3-drag\":\"3.0.0\",\"d3-interpolate\":\"3.0.1\",\"d3-selection\":\"3.0.0\",\"d3-transition\":\"3.0.1(d3-selection@3.0.0)\"}},\"d3@7.9.0\":{\"dependencies\":{\"d3-array\":\"3.2.4\",\"d3-axis\":\"3.0.0\",\"d3-brush\":\"3.0.0\",\"d3-chord\":\"3.0.1\",\"d3-color\":\"3.1.0\",\"d3-contour\":\"4.0.2\",\"d3-delaunay\":\"6.0.4\",\"d3-dispatch\":\"3.0.1\",\"d3-drag\":\"3.0.0\",\"d3-dsv\":\"3.0.1\",\"d3-ease\":\"3.0.1\",\"d3-fetch\":\"3.0.1\",\"d3-force\":\"3.0.0\",\"d3-format\":\"3.1.0\",\"d3-geo\":\"3.1.1\",\"d3-hierarchy\":\"3.1.2\",\"d3-interpolate\":\"3.0.1\",\"d3-path\":\"3.1.0\",\"d3-polygon\":\"3.0.1\",\"d3-quadtree\":\"3.0.1\",\"d3-random\":\"3.0.1\",\"d3-scale\":\"4.0.2\",\"d3-scale-chromatic\":\"3.1.0\",\"d3-selection\":\"3.0.0\",\"d3-shape\":\"3.2.0\",\"d3-time\":\"3.1.0\",\"d3-time-format\":\"4.1.0\",\"d3-timer\":\"3.0.1\",\"d3-transition\":\"3.0.1(d3-selection@3.0.0)\",\"d3-zoom\":\"3.0.0\"}},\"dagre-d3-es@7.0.13\":{\"dependencies\":{\"d3\":\"7.9.0\",\"lodash-es\":\"4.17.21\"}},\"data-uri-to-buffer@4.0.1\":{\"optional\":true},\"data-uri-to-buffer@6.0.2\":{\"optional\":true},\"data-urls@5.0.0\":{\"dependencies\":{\"whatwg-mimetype\":\"4.0.0\",\"whatwg-url\":\"14.2.0\"}},\"data-urls@6.0.0\":{\"dependencies\":{\"whatwg-mimetype\":\"4.0.0\",\"whatwg-url\":\"15.1.0\"}},\"dayjs@1.11.19\":{},\"debug@4.4.1\":{\"dependencies\":{\"ms\":\"2.1.3\"}},\"debug@4.4.3\":{\"dependencies\":{\"ms\":\"2.1.3\"}},\"decamelize@6.0.1\":{\"optional\":true},\"decimal.js@10.6.0\":{},\"decode-named-character-reference@1.0.2\":{\"dependencies\":{\"character-entities\":\"2.0.2\"}},\"decompress-response@6.0.0\":{\"dependencies\":{\"mimic-response\":\"3.1.0\"}},\"deep-eql@5.0.2\":{},\"deep-extend@0.6.0\":{},\"deepmerge-ts@7.1.5\":{\"optional\":true},\"defaults@1.0.4\":{\"dependencies\":{\"clone\":\"1.0.4\"}},\"define-lazy-prop@2.0.0\":{},\"degenerator@5.0.1\":{\"dependencies\":{\"ast-types\":\"0.13.4\",\"escodege
n\":\"2.1.0\",\"esprima\":\"4.0.1\"},\"optional\":true},\"delaunator@5.0.1\":{\"dependencies\":{\"robust-predicates\":\"3.0.2\"}},\"delayed-stream@1.0.0\":{},\"dequal@2.0.3\":{},\"detect-indent@6.1.0\":{},\"detect-libc@2.1.2\":{},\"devlop@1.1.0\":{\"dependencies\":{\"dequal\":\"2.0.3\"}},\"diff@8.0.2\":{},\"dir-glob@3.0.1\":{\"dependencies\":{\"path-type\":\"4.0.0\"}},\"dom-accessibility-api@0.5.16\":{},\"dom-serializer@2.0.0\":{\"dependencies\":{\"domelementtype\":\"2.3.0\",\"domhandler\":\"5.0.3\",\"entities\":\"4.5.0\"}},\"domelementtype@2.3.0\":{},\"domhandler@5.0.3\":{\"dependencies\":{\"domelementtype\":\"2.3.0\"}},\"dompurify@3.3.1\":{\"optionalDependencies\":{\"@types/trusted-types\":\"2.0.7\"}},\"domutils@3.2.2\":{\"dependencies\":{\"dom-serializer\":\"2.0.0\",\"domelementtype\":\"2.3.0\",\"domhandler\":\"5.0.3\"}},\"dotenv-expand@11.0.7\":{\"dependencies\":{\"dotenv\":\"16.5.0\"}},\"dotenv@10.0.0\":{},\"dotenv@16.4.7\":{},\"dotenv@16.5.0\":{},\"dunder-proto@1.0.1\":{\"dependencies\":{\"call-bind-apply-helpers\":\"1.0.2\",\"es-errors\":\"1.3.0\",\"gopd\":\"1.2.0\"}},\"eastasianwidth@0.2.0\":{},\"edge-paths@3.0.5\":{\"dependencies\":{\"@types/which\":\"2.0.2\",\"which\":\"2.0.2\"},\"optional\":true},\"edgedriver@5.6.1\":{\"dependencies\":{\"@wdio/logger\":\"8.38.0\",\"@zip.js/zip.js\":\"2.8.26\",\"decamelize\":\"6.0.1\",\"edge-paths\":\"3.0.5\",\"fast-xml-parser\":\"4.5.6\",\"node-fetch\":\"3.3.2\",\"which\":\"4.0.0\"},\"optional\":true},\"electron-to-chromium@1.5.211\":{},\"electron-to-chromium@1.5.352\":{\"optional\":true},\"emoji-regex@8.0.0\":{},\"emoji-regex@9.2.2\":{},\"encoding-sniffer@0.2.1\":{\"dependencies\":{\"iconv-lite\":\"0.6.3\",\"whatwg-encoding\":\"3.1.1\"}},\"end-of-stream@1.4.5\":{\"dependencies\":{\"once\":\"1.4.0\"}},\"enhanced-resolve@5.21.0\":{\"dependencies\":{\"graceful-fs\":\"4.2.11\",\"tapable\":\"2.3.3\"}},\"enquirer@2.3.6\":{\"dependencies\":{\"ansi-colors\":\"4.1.3\"}},\"enquirer@2.4.1\":{\"dependencies\":{\"ansi-colors\":\"4.1.
3\",\"strip-ansi\":\"6.0.1\"}},\"entities@4.5.0\":{},\"entities@6.0.1\":{},\"entities@7.0.1\":{\"optional\":true},\"error-stack-parser-es@1.0.5\":{},\"es-define-property@1.0.1\":{},\"es-errors@1.3.0\":{},\"es-module-lexer@1.7.0\":{},\"es-module-lexer@2.1.0\":{},\"es-object-atoms@1.1.1\":{\"dependencies\":{\"es-errors\":\"1.3.0\"}},\"es-set-tostringtag@2.1.0\":{\"dependencies\":{\"es-errors\":\"1.3.0\",\"get-intrinsic\":\"1.3.0\",\"has-tostringtag\":\"1.0.2\",\"hasown\":\"2.0.2\"}},\"esbuild@0.25.12\":{\"optionalDependencies\":{\"@esbuild/aix-ppc64\":\"0.25.12\",\"@esbuild/android-arm\":\"0.25.12\",\"@esbuild/android-arm64\":\"0.25.12\",\"@esbuild/android-x64\":\"0.25.12\",\"@esbuild/darwin-arm64\":\"0.25.12\",\"@esbuild/darwin-x64\":\"0.25.12\",\"@esbuild/freebsd-arm64\":\"0.25.12\",\"@esbuild/freebsd-x64\":\"0.25.12\",\"@esbuild/linux-arm\":\"0.25.12\",\"@esbuild/linux-arm64\":\"0.25.12\",\"@esbuild/linux-ia32\":\"0.25.12\",\"@esbuild/linux-loong64\":\"0.25.12\",\"@esbuild/linux-mips64el\":\"0.25.12\",\"@esbuild/linux-ppc64\":\"0.25.12\",\"@esbuild/linux-riscv64\":\"0.25.12\",\"@esbuild/linux-s390x\":\"0.25.12\",\"@esbuild/linux-x64\":\"0.25.12\",\"@esbuild/netbsd-arm64\":\"0.25.12\",\"@esbuild/netbsd-x64\":\"0.25.12\",\"@esbuild/openbsd-arm64\":\"0.25.12\",\"@esbuild/openbsd-x64\":\"0.25.12\",\"@esbuild/openharmony-arm64\":\"0.25.12\",\"@esbuild/sunos-x64\":\"0.25.12\",\"@esbuild/win32-arm64\":\"0.25.12\",\"@esbuild/win32-ia32\":\"0.25.12\",\"@esbuild/win32-x64\":\"0.25.12\"}},\"esbuild@0.27.3\":{\"optionalDependencies\":{\"@esbuild/aix-ppc64\":\"0.27.3\",\"@esbuild/android-arm\":\"0.27.3\",\"@esbuild/android-arm64\":\"0.27.3\",\"@esbuild/android-x64\":\"0.27.3\",\"@esbuild/darwin-arm64\":\"0.27.3\",\"@esbuild/darwin-x64\":\"0.27.3\",\"@esbuild/freebsd-arm64\":\"0.27.3\",\"@esbuild/freebsd-x64\":\"0.27.3\",\"@esbuild/linux-arm\":\"0.27.3\",\"@esbuild/linux-arm64\":\"0.27.3\",\"@esbuild/linux-ia32\":\"0.27.3\",\"@esbuild/linux-loong64\":\"0.27.3\",\"@esbuild/linux-
mips64el\":\"0.27.3\",\"@esbuild/linux-ppc64\":\"0.27.3\",\"@esbuild/linux-riscv64\":\"0.27.3\",\"@esbuild/linux-s390x\":\"0.27.3\",\"@esbuild/linux-x64\":\"0.27.3\",\"@esbuild/netbsd-arm64\":\"0.27.3\",\"@esbuild/netbsd-x64\":\"0.27.3\",\"@esbuild/openbsd-arm64\":\"0.27.3\",\"@esbuild/openbsd-x64\":\"0.27.3\",\"@esbuild/openharmony-arm64\":\"0.27.3\",\"@esbuild/sunos-x64\":\"0.27.3\",\"@esbuild/win32-arm64\":\"0.27.3\",\"@esbuild/win32-ia32\":\"0.27.3\",\"@esbuild/win32-x64\":\"0.27.3\"}},\"escalade@3.2.0\":{},\"escape-string-regexp@1.0.5\":{},\"escape-string-regexp@5.0.0\":{},\"escodegen@2.1.0\":{\"dependencies\":{\"esprima\":\"4.0.1\",\"estraverse\":\"5.3.0\",\"esutils\":\"2.0.3\"},\"optionalDependencies\":{\"source-map\":\"0.6.1\"},\"optional\":true},\"eslint-scope@5.1.1\":{\"dependencies\":{\"esrecurse\":\"4.3.0\",\"estraverse\":\"4.3.0\"},\"optional\":true},\"esprima@4.0.1\":{},\"esrecurse@4.3.0\":{\"dependencies\":{\"estraverse\":\"5.3.0\"},\"optional\":true},\"estraverse@4.3.0\":{\"optional\":true},\"estraverse@5.3.0\":{\"optional\":true},\"estree-walker@3.0.3\":{\"dependencies\":{\"@types/estree\":\"1.0.8\"}},\"esutils@2.0.3\":{\"optional\":true},\"event-target-shim@5.0.1\":{\"optional\":true},\"events-universal@1.0.1\":{\"dependencies\":{\"bare-events\":\"2.8.2\"},\"transitivePeerDependencies\":[\"bare-abort-controller\"],\"optional\":true},\"events@3.3.0\":{\"optional\":true},\"expand-template@2.0.3\":{},\"expect-type@1.2.2\":{},\"expect-type@1.3.0\":{},\"exsolve@1.0.8\":{},\"extend@3.0.2\":{},\"extendable-error@0.1.7\":{},\"extract-zip@2.0.1\":{\"dependencies\":{\"debug\":\"4.4.3\",\"get-stream\":\"5.2.0\",\"yauzl\":\"2.10.0\"},\"optionalDependencies\":{\"@types/yauzl\":\"2.10.3\"},\"transitivePeerDependencies\":[\"supports-color\"],\"optional\":true},\"fast-deep-equal@2.0.1\":{\"optional\":true},\"fast-deep-equal@3.1.3\":{},\"fast-fifo@1.3.2\":{\"optional\":true},\"fast-glob@3.3.3\":{\"dependencies\":{\"@nodelib/fs.stat\":\"2.0.5\",\"@nodelib/fs.walk\":
\"1.2.8\",\"glob-parent\":\"5.1.2\",\"merge2\":\"1.4.1\",\"micromatch\":\"4.0.8\"}},\"fast-uri@3.0.3\":{},\"fast-uri@3.1.2\":{\"optional\":true},\"fast-xml-parser@4.5.6\":{\"dependencies\":{\"strnum\":\"1.1.2\"},\"optional\":true},\"fastq@1.17.1\":{\"dependencies\":{\"reusify\":\"1.0.4\"}},\"fault@2.0.1\":{\"dependencies\":{\"format\":\"0.2.2\"}},\"fd-slicer@1.1.0\":{\"dependencies\":{\"pend\":\"1.2.0\"},\"optional\":true},\"fdir@6.5.0(picomatch@4.0.4)\":{\"optionalDependencies\":{\"picomatch\":\"4.0.4\"}},\"fetch-blob@3.2.0\":{\"dependencies\":{\"node-domexception\":\"1.0.0\",\"web-streams-polyfill\":\"3.3.3\"},\"optional\":true},\"fetchdts@0.1.7\":{},\"fflate@0.4.8\":{},\"figures@3.2.0\":{\"dependencies\":{\"escape-string-regexp\":\"1.0.5\"}},\"file-uri-to-path@1.0.0\":{},\"fill-range@7.1.1\":{\"dependencies\":{\"to-regex-range\":\"5.0.1\"}},\"find-up@4.1.0\":{\"dependencies\":{\"locate-path\":\"5.0.0\",\"path-exists\":\"4.0.0\"}},\"flat@5.0.2\":{},\"follow-redirects@1.15.11\":{},\"foreground-child@3.3.1\":{\"dependencies\":{\"cross-spawn\":\"7.0.6\",\"signal-exit\":\"4.1.0\"}},\"form-data@4.0.4\":{\"dependencies\":{\"asynckit\":\"0.4.0\",\"combined-stream\":\"1.0.8\",\"es-set-tostringtag\":\"2.1.0\",\"hasown\":\"2.0.2\",\"mime-types\":\"2.1.35\"}},\"format@0.2.2\":{},\"formdata-polyfill@4.0.10\":{\"dependencies\":{\"fetch-blob\":\"3.2.0\"},\"optional\":true},\"front-matter@4.0.2\":{\"dependencies\":{\"js-yaml\":\"3.14.1\"}},\"fs-constants@1.0.0\":{},\"fs-extra@11.3.1\":{\"dependencies\":{\"graceful-fs\":\"4.2.11\",\"jsonfile\":\"6.2.0\",\"universalify\":\"2.0.1\"}},\"fs-extra@7.0.1\":{\"dependencies\":{\"graceful-fs\":\"4.2.11\",\"jsonfile\":\"4.0.0\",\"universalify\":\"0.1.2\"}},\"fs-extra@8.1.0\":{\"dependencies\":{\"graceful-fs\":\"4.2.11\",\"jsonfile\":\"4.0.0\",\"universalify\":\"0.1.2\"}},\"fs-minipass@2.1.0\":{\"dependencies\":{\"minipass\":\"3.3.6\"}},\"fsevents@2.3.2\":{\"optional\":true},\"fsevents@2.3.3\":{\"optional\":true},\"function-bind@1.1.2\":{},
\"geckodriver@4.5.1\":{\"dependencies\":{\"@wdio/logger\":\"9.1.3\",\"@zip.js/zip.js\":\"2.8.26\",\"decamelize\":\"6.0.1\",\"http-proxy-agent\":\"7.0.2\",\"https-proxy-agent\":\"7.0.6\",\"node-fetch\":\"3.3.2\",\"tar-fs\":\"3.1.2\",\"which\":\"4.0.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\",\"supports-color\"],\"optional\":true},\"gensync@1.0.0-beta.2\":{},\"get-caller-file@2.0.5\":{},\"get-intrinsic@1.3.0\":{\"dependencies\":{\"call-bind-apply-helpers\":\"1.0.2\",\"es-define-property\":\"1.0.1\",\"es-errors\":\"1.3.0\",\"es-object-atoms\":\"1.1.1\",\"function-bind\":\"1.1.2\",\"get-proto\":\"1.0.1\",\"gopd\":\"1.2.0\",\"has-symbols\":\"1.1.0\",\"hasown\":\"2.0.2\",\"math-intrinsics\":\"1.1.0\"}},\"get-port@7.2.0\":{\"optional\":true},\"get-proto@1.0.1\":{\"dependencies\":{\"dunder-proto\":\"1.0.1\",\"es-object-atoms\":\"1.1.1\"}},\"get-stream@5.2.0\":{\"dependencies\":{\"pump\":\"3.0.4\"},\"optional\":true},\"get-tsconfig@4.14.0\":{\"dependencies\":{\"resolve-pkg-maps\":\"1.0.0\"},\"optional\":true},\"get-uri@6.0.5\":{\"dependencies\":{\"basic-ftp\":\"5.3.1\",\"data-uri-to-buffer\":\"6.0.2\",\"debug\":\"4.4.3\"},\"transitivePeerDependencies\":[\"supports-color\"],\"optional\":true},\"github-from-package@0.0.0\":{},\"github-slugger@2.0.0\":{},\"glob-parent@5.1.2\":{\"dependencies\":{\"is-glob\":\"4.0.3\"}},\"glob-to-regexp@0.4.1\":{\"optional\":true},\"glob@10.4.5\":{\"dependencies\":{\"foreground-child\":\"3.3.1\",\"jackspeak\":\"3.4.3\",\"minimatch\":\"9.0.5\",\"minipass\":\"7.1.2\",\"package-json-from-dist\":\"1.0.1\",\"path-scurry\":\"1.11.1\"}},\"glob@10.5.0\":{\"dependencies\":{\"foreground-child\":\"3.3.1\",\"jackspeak\":\"3.4.3\",\"minimatch\":\"9.0.9\",\"minipass\":\"7.1.3\",\"package-json-from-dist\":\"1.0.1\",\"path-scurry\":\"1.11.1\"},\"optional\":true},\"globals@15.15.0\":{},\"globby@11.1.0\":{\"dependencies\":{\"array-union\":\"2.1.0\",\"dir-glob\":\"3.0.1\",\"fast-glob\":\"3.3.3\",\"ignore\":\"5
.3.2\",\"merge2\":\"1.4.1\",\"slash\":\"3.0.0\"}},\"gopd@1.2.0\":{},\"graceful-fs@4.2.11\":{},\"grapheme-splitter@1.0.4\":{\"optional\":true},\"graphql@16.14.0\":{\"optional\":true},\"h3@2.0.1-rc.20\":{\"dependencies\":{\"rou3\":\"0.8.1\",\"srvx\":\"0.11.15\"}},\"hachure-fill@0.5.2\":{},\"happy-dom@18.0.1\":{\"dependencies\":{\"@types/node\":\"20.19.39\",\"@types/whatwg-mimetype\":\"3.0.2\",\"whatwg-mimetype\":\"3.0.0\"},\"optional\":true},\"has-flag@4.0.0\":{},\"has-symbols@1.1.0\":{},\"has-tostringtag@1.0.2\":{\"dependencies\":{\"has-symbols\":\"1.1.0\"}},\"hasown@2.0.2\":{\"dependencies\":{\"function-bind\":\"1.1.2\"}},\"hast-util-embedded@3.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-is-element\":\"3.0.0\"}},\"hast-util-from-html@2.0.3\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"devlop\":\"1.1.0\",\"hast-util-from-parse5\":\"8.0.3\",\"parse5\":\"7.3.0\",\"vfile\":\"6.0.3\",\"vfile-message\":\"4.0.2\"}},\"hast-util-from-parse5@8.0.3\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/unist\":\"3.0.3\",\"devlop\":\"1.1.0\",\"hastscript\":\"9.0.1\",\"property-information\":\"7.1.0\",\"vfile\":\"6.0.3\",\"vfile-location\":\"5.0.3\",\"web-namespaces\":\"2.0.1\"}},\"hast-util-has-property@3.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hast-util-heading-rank@3.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hast-util-is-body-ok-link@3.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hast-util-is-element@3.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hast-util-minify-whitespace@1.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-embedded\":\"3.0.0\",\"hast-util-is-element\":\"3.0.0\",\"hast-util-whitespace\":\"3.0.0\",\"unist-util-is\":\"6.0.0\"}},\"hast-util-parse-selector@4.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hast-util-phrasing@3.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-embedded\":\"3.0.0\",\"hast-util-has-property\":\"3.0.0\",\"hast-util-is-body-ok-link\
":\"3.0.1\",\"hast-util-is-element\":\"3.0.0\"}},\"hast-util-raw@9.1.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/unist\":\"3.0.3\",\"@ungap/structured-clone\":\"1.2.1\",\"hast-util-from-parse5\":\"8.0.3\",\"hast-util-to-parse5\":\"8.0.0\",\"html-void-elements\":\"3.0.0\",\"mdast-util-to-hast\":\"13.2.0\",\"parse5\":\"7.3.0\",\"unist-util-position\":\"5.0.0\",\"unist-util-visit\":\"5.0.0\",\"vfile\":\"6.0.3\",\"web-namespaces\":\"2.0.1\",\"zwitch\":\"2.0.4\"}},\"hast-util-sanitize@5.0.2\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@ungap/structured-clone\":\"1.2.1\",\"unist-util-position\":\"5.0.0\"}},\"hast-util-to-html@9.0.5\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/unist\":\"3.0.3\",\"ccount\":\"2.0.1\",\"comma-separated-tokens\":\"2.0.3\",\"hast-util-whitespace\":\"3.0.0\",\"html-void-elements\":\"3.0.0\",\"mdast-util-to-hast\":\"13.2.0\",\"property-information\":\"7.1.0\",\"space-separated-tokens\":\"2.0.2\",\"stringify-entities\":\"4.0.4\",\"zwitch\":\"2.0.4\"}},\"hast-util-to-mdast@10.1.2\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/mdast\":\"4.0.4\",\"@ungap/structured-clone\":\"1.2.1\",\"hast-util-phrasing\":\"3.0.1\",\"hast-util-to-html\":\"9.0.5\",\"hast-util-to-text\":\"4.0.2\",\"hast-util-whitespace\":\"3.0.0\",\"mdast-util-phrasing\":\"4.1.0\",\"mdast-util-to-hast\":\"13.2.0\",\"mdast-util-to-string\":\"4.0.0\",\"rehype-minify-whitespace\":\"6.0.2\",\"trim-trailing-lines\":\"2.1.0\",\"unist-util-position\":\"5.0.0\",\"unist-util-visit\":\"5.0.0\"}},\"hast-util-to-parse5@8.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"comma-separated-tokens\":\"2.0.3\",\"devlop\":\"1.1.0\",\"property-information\":\"6.5.0\",\"space-separated-tokens\":\"2.0.2\",\"web-namespaces\":\"2.0.1\",\"zwitch\":\"2.0.4\"}},\"hast-util-to-string@3.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hast-util-to-text@4.0.2\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/unist\":\"3.0.3\",\"hast-util-is-element\":\"3.0.0\",\"
unist-util-find-after\":\"5.0.0\"}},\"hast-util-whitespace@3.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\"}},\"hastscript@9.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"comma-separated-tokens\":\"2.0.3\",\"hast-util-parse-selector\":\"4.0.0\",\"property-information\":\"7.1.0\",\"space-separated-tokens\":\"2.0.2\"}},\"headers-polyfill@4.0.3\":{\"optional\":true},\"highlight.js@11.11.1\":{},\"html-encoding-sniffer@4.0.0\":{\"dependencies\":{\"whatwg-encoding\":\"3.1.1\"}},\"html-escaper@2.0.2\":{},\"html-void-elements@3.0.0\":{},\"htmlfy@0.3.2\":{\"optional\":true},\"htmlparser2@10.0.0\":{\"dependencies\":{\"domelementtype\":\"2.3.0\",\"domhandler\":\"5.0.3\",\"domutils\":\"3.2.2\",\"entities\":\"6.0.1\"}},\"htmlparser2@10.1.0\":{\"dependencies\":{\"domelementtype\":\"2.3.0\",\"domhandler\":\"5.0.3\",\"domutils\":\"3.2.2\",\"entities\":\"7.0.1\"},\"optional\":true},\"http-proxy-agent@7.0.2\":{\"dependencies\":{\"agent-base\":\"7.1.3\",\"debug\":\"4.4.3\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"https-proxy-agent@7.0.2\":{\"dependencies\":{\"agent-base\":\"7.1.3\",\"debug\":\"4.4.3\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"https-proxy-agent@7.0.6\":{\"dependencies\":{\"agent-base\":\"7.1.3\",\"debug\":\"4.4.3\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"human-id@4.1.1\":{},\"iconv-lite@0.6.3\":{\"dependencies\":{\"safer-buffer\":\"2.1.2\"}},\"ieee754@1.2.1\":{},\"ignore@5.3.2\":{},\"immediate@3.0.6\":{\"optional\":true},\"immutable@5.1.5\":{\"optional\":true},\"import-meta-resolve@4.2.0\":{\"optional\":true},\"inherits@2.0.4\":{},\"ini@1.3.8\":{},\"ini@4.1.3\":{},\"internmap@1.0.1\":{},\"internmap@2.0.3\":{},\"ip-address@10.2.0\":{\"optional\":true},\"is-binary-path@2.1.0\":{\"dependencies\":{\"binary-extensions\":\"2.3.0\"}},\"is-docker@2.2.1\":{},\"is-extglob@2.1.1\":{},\"is-fullwidth-code-point@3.0.0\":{},\"is-glob@4.0.3\":{\"dependencies\":{\"is-extglob\":\"2.1.1\"}},\"is-interactive@1.0.0\":{},\"is-node-proc
ess@1.2.0\":{\"optional\":true},\"is-number@7.0.0\":{},\"is-plain-obj@4.1.0\":{},\"is-potential-custom-element-name@1.0.1\":{},\"is-stream@2.0.1\":{\"optional\":true},\"is-subdir@1.2.0\":{\"dependencies\":{\"better-path-resolve\":\"1.0.0\"}},\"is-unicode-supported@0.1.0\":{},\"is-windows@1.0.2\":{},\"is-wsl@2.2.0\":{\"dependencies\":{\"is-docker\":\"2.2.1\"}},\"isarray@1.0.0\":{\"optional\":true},\"isbot@5.1.28\":{},\"isexe@2.0.0\":{},\"isexe@3.1.5\":{\"optional\":true},\"istanbul-lib-coverage@3.2.2\":{},\"istanbul-lib-report@3.0.1\":{\"dependencies\":{\"istanbul-lib-coverage\":\"3.2.2\",\"make-dir\":\"4.0.0\",\"supports-color\":\"7.2.0\"}},\"istanbul-lib-source-maps@5.0.6\":{\"dependencies\":{\"@jridgewell/trace-mapping\":\"0.3.31\",\"debug\":\"4.4.3\",\"istanbul-lib-coverage\":\"3.2.2\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"istanbul-reports@3.2.0\":{\"dependencies\":{\"html-escaper\":\"2.0.2\",\"istanbul-lib-report\":\"3.0.1\"}},\"jackspeak@3.4.3\":{\"dependencies\":{\"@isaacs/cliui\":\"8.0.2\"},\"optionalDependencies\":{\"@pkgjs/parseargs\":\"0.11.0\"}},\"jest-diff@30.1.1\":{\"dependencies\":{\"@jest/diff-sequences\":\"30.0.1\",\"@jest/get-type\":\"30.1.0\",\"chalk\":\"4.1.2\",\"pretty-format\":\"30.0.5\"}},\"jest-worker@27.5.1\":{\"dependencies\":{\"@types/node\":\"22.19.17\",\"merge-stream\":\"2.0.0\",\"supports-color\":\"8.1.1\"},\"optional\":true},\"jiti@2.6.1\":{},\"js-tokens@10.0.0\":{},\"js-tokens@4.0.0\":{},\"js-tokens@9.0.1\":{},\"js-yaml@3.14.1\":{\"dependencies\":{\"argparse\":\"1.0.10\",\"esprima\":\"4.0.1\"}},\"js-yaml@4.1.1\":{\"dependencies\":{\"argparse\":\"2.0.1\"}},\"jsdom@26.1.0\":{\"dependencies\":{\"cssstyle\":\"4.3.1\",\"data-urls\":\"5.0.0\",\"decimal.js\":\"10.6.0\",\"html-encoding-sniffer\":\"4.0.0\",\"http-proxy-agent\":\"7.0.2\",\"https-proxy-agent\":\"7.0.6\",\"is-potential-custom-element-name\":\"1.0.1\",\"nwsapi\":\"2.2.20\",\"parse5\":\"7.3.0\",\"rrweb-cssom\":\"0.8.0\",\"saxes\":\"6.0.0\",\"symbol-tree\":\"3.2.4\"
,\"tough-cookie\":\"5.1.2\",\"w3c-xmlserializer\":\"5.0.0\",\"webidl-conversions\":\"7.0.0\",\"whatwg-encoding\":\"3.1.1\",\"whatwg-mimetype\":\"4.0.0\",\"whatwg-url\":\"14.2.0\",\"ws\":\"8.18.3\",\"xml-name-validator\":\"5.0.0\"},\"transitivePeerDependencies\":[\"bufferutil\",\"supports-color\",\"utf-8-validate\"]},\"jsdom@27.3.0(postcss@8.5.14)\":{\"dependencies\":{\"@acemir/cssom\":\"0.9.28\",\"@asamuzakjp/dom-selector\":\"6.7.6\",\"cssstyle\":\"5.3.4(postcss@8.5.14)\",\"data-urls\":\"6.0.0\",\"decimal.js\":\"10.6.0\",\"html-encoding-sniffer\":\"4.0.0\",\"http-proxy-agent\":\"7.0.2\",\"https-proxy-agent\":\"7.0.6\",\"is-potential-custom-element-name\":\"1.0.1\",\"parse5\":\"8.0.0\",\"saxes\":\"6.0.0\",\"symbol-tree\":\"3.2.4\",\"tough-cookie\":\"6.0.0\",\"w3c-xmlserializer\":\"5.0.0\",\"webidl-conversions\":\"8.0.0\",\"whatwg-encoding\":\"3.1.1\",\"whatwg-mimetype\":\"4.0.0\",\"whatwg-url\":\"15.1.0\",\"ws\":\"8.18.3\",\"xml-name-validator\":\"5.0.0\"},\"transitivePeerDependencies\":[\"bufferutil\",\"postcss\",\"supports-color\",\"utf-8-validate\"]},\"jsesc@3.1.0\":{},\"json-parse-even-better-errors@2.3.1\":{\"optional\":true},\"json-schema-to-ts@3.1.1\":{\"dependencies\":{\"@babel/runtime\":\"7.28.4\",\"ts-algebra\":\"2.0.0\"}},\"json-schema-traverse@1.0.0\":{},\"json5@2.2.3\":{},\"jsonc-parser@3.2.0\":{},\"jsonfile@4.0.0\":{\"optionalDependencies\":{\"graceful-fs\":\"4.2.11\"}},\"jsonfile@6.2.0\":{\"dependencies\":{\"universalify\":\"2.0.1\"},\"optionalDependencies\":{\"graceful-fs\":\"4.2.11\"}},\"jszip@3.10.1\":{\"dependencies\":{\"lie\":\"3.3.0\",\"pako\":\"1.0.11\",\"readable-stream\":\"2.3.8\",\"setimmediate\":\"1.0.5\"},\"optional\":true},\"katex@0.16.22\":{\"dependencies\":{\"commander\":\"8.3.0\"}},\"khroma@2.1.0\":{},\"kleur@4.1.5\":{},\"kolorist@1.8.0\":{},\"kysely@0.28.7\":{},\"langium@3.3.1\":{\"dependencies\":{\"chevrotain\":\"11.0.3\",\"chevrotain-allstar\":\"0.3.1(chevrotain@11.0.3)\",\"vscode-languageserver\":\"9.0.1\",\"vscode-languageserver-te
xtdocument\":\"1.0.12\",\"vscode-uri\":\"3.0.8\"}},\"layout-base@1.0.2\":{},\"layout-base@2.0.1\":{},\"lazystream@1.0.1\":{\"dependencies\":{\"readable-stream\":\"2.3.8\"},\"optional\":true},\"lie@3.3.0\":{\"dependencies\":{\"immediate\":\"3.0.6\"},\"optional\":true},\"lightningcss-android-arm64@1.32.0\":{\"optional\":true},\"lightningcss-darwin-arm64@1.32.0\":{\"optional\":true},\"lightningcss-darwin-x64@1.32.0\":{\"optional\":true},\"lightningcss-freebsd-x64@1.32.0\":{\"optional\":true},\"lightningcss-linux-arm-gnueabihf@1.32.0\":{\"optional\":true},\"lightningcss-linux-arm64-gnu@1.32.0\":{\"optional\":true},\"lightningcss-linux-arm64-musl@1.32.0\":{\"optional\":true},\"lightningcss-linux-x64-gnu@1.32.0\":{\"optional\":true},\"lightningcss-linux-x64-musl@1.32.0\":{\"optional\":true},\"lightningcss-win32-arm64-msvc@1.32.0\":{\"optional\":true},\"lightningcss-win32-x64-msvc@1.32.0\":{\"optional\":true},\"lightningcss@1.32.0\":{\"dependencies\":{\"detect-libc\":\"2.1.2\"},\"optionalDependencies\":{\"lightningcss-android-arm64\":\"1.32.0\",\"lightningcss-darwin-arm64\":\"1.32.0\",\"lightningcss-darwin-x64\":\"1.32.0\",\"lightningcss-freebsd-x64\":\"1.32.0\",\"lightningcss-linux-arm-gnueabihf\":\"1.32.0\",\"lightningcss-linux-arm64-gnu\":\"1.32.0\",\"lightningcss-linux-arm64-musl\":\"1.32.0\",\"lightningcss-linux-x64-gnu\":\"1.32.0\",\"lightningcss-linux-x64-musl\":\"1.32.0\",\"lightningcss-win32-arm64-msvc\":\"1.32.0\",\"lightningcss-win32-x64-msvc\":\"1.32.0\"}},\"lines-and-columns@2.0.3\":{},\"loader-runner@4.3.2\":{\"optional\":true},\"local-pkg@1.1.2\":{\"dependencies\":{\"mlly\":\"1.8.0\",\"pkg-types\":\"2.3.0\",\"quansync\":\"0.2.11\"}},\"locate-app@2.5.0\":{\"dependencies\":{\"@promptbook/utils\":\"0.69.5\",\"type-fest\":\"4.26.0\",\"userhome\":\"1.0.1\"},\"optional\":true},\"locate-path@5.0.0\":{\"dependencies\":{\"p-locate\":\"4.1.0\"}},\"lodash-es@4.17.21\":{},\"lodash.clonedeep@4.5.0\":{\"optional\":true},\"lodash.startcase@4.4.0\":{},\"lodash.zip@4.2.0\":{
\"optional\":true},\"lodash@4.18.1\":{\"optional\":true},\"log-symbols@4.1.0\":{\"dependencies\":{\"chalk\":\"4.1.2\",\"is-unicode-supported\":\"0.1.0\"}},\"loglevel-plugin-prefix@0.8.4\":{\"optional\":true},\"loglevel@1.9.2\":{\"optional\":true},\"long@5.3.2\":{},\"longest-streak@3.1.0\":{},\"loupe@3.2.1\":{},\"lowlight@3.3.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"devlop\":\"1.1.0\",\"highlight.js\":\"11.11.1\"}},\"lru-cache@10.4.3\":{},\"lru-cache@11.2.4\":{},\"lru-cache@5.1.1\":{\"dependencies\":{\"yallist\":\"3.1.1\"}},\"lru-cache@7.18.3\":{\"optional\":true},\"lucide-react@0.544.0(react@19.2.0)\":{\"dependencies\":{\"react\":\"19.2.0\"}},\"lz-string@1.5.0\":{},\"magic-string@0.30.18\":{\"dependencies\":{\"@jridgewell/sourcemap-codec\":\"1.5.5\"}},\"magic-string@0.30.21\":{\"dependencies\":{\"@jridgewell/sourcemap-codec\":\"1.5.5\"}},\"magicast@0.3.5\":{\"dependencies\":{\"@babel/parser\":\"7.28.5\",\"@babel/types\":\"7.28.5\",\"source-map-js\":\"1.2.1\"}},\"magicast@0.5.2\":{\"dependencies\":{\"@babel/parser\":\"7.29.3\",\"@babel/types\":\"7.29.0\",\"source-map-js\":\"1.2.1\"}},\"make-dir@4.0.0\":{\"dependencies\":{\"semver\":\"7.7.3\"}},\"markdown-table@3.0.4\":{},\"marked@16.4.2\":{},\"math-intrinsics@1.1.0\":{},\"mdast-util-find-and-replace@3.0.2\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"escape-string-regexp\":\"5.0.0\",\"unist-util-is\":\"6.0.0\",\"unist-util-visit-parents\":\"6.0.1\"}},\"mdast-util-from-markdown@2.0.2\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"@types/unist\":\"3.0.3\",\"decode-named-character-reference\":\"1.0.2\",\"devlop\":\"1.1.0\",\"mdast-util-to-string\":\"4.0.0\",\"micromark\":\"4.0.1\",\"micromark-util-decode-numeric-character-reference\":\"2.0.2\",\"micromark-util-decode-string\":\"2.0.1\",\"micromark-util-normalize-identifier\":\"2.0.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\",\"unist-util-stringify-position\":\"4.0.0\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"
mdast-util-frontmatter@2.0.1\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"devlop\":\"1.1.0\",\"escape-string-regexp\":\"5.0.0\",\"mdast-util-from-markdown\":\"2.0.2\",\"mdast-util-to-markdown\":\"2.1.2\",\"micromark-extension-frontmatter\":\"2.0.0\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"mdast-util-gfm-autolink-literal@2.0.1\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"ccount\":\"2.0.1\",\"devlop\":\"1.1.0\",\"mdast-util-find-and-replace\":\"3.0.2\",\"micromark-util-character\":\"2.1.1\"}},\"mdast-util-gfm-footnote@2.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"devlop\":\"1.1.0\",\"mdast-util-from-markdown\":\"2.0.2\",\"mdast-util-to-markdown\":\"2.1.2\",\"micromark-util-normalize-identifier\":\"2.0.1\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"mdast-util-gfm-strikethrough@2.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"mdast-util-from-markdown\":\"2.0.2\",\"mdast-util-to-markdown\":\"2.1.2\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"mdast-util-gfm-table@2.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"devlop\":\"1.1.0\",\"markdown-table\":\"3.0.4\",\"mdast-util-from-markdown\":\"2.0.2\",\"mdast-util-to-markdown\":\"2.1.2\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"mdast-util-gfm-task-list-item@2.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"devlop\":\"1.1.0\",\"mdast-util-from-markdown\":\"2.0.2\",\"mdast-util-to-markdown\":\"2.1.2\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"mdast-util-gfm@3.0.0\":{\"dependencies\":{\"mdast-util-from-markdown\":\"2.0.2\",\"mdast-util-gfm-autolink-literal\":\"2.0.1\",\"mdast-util-gfm-footnote\":\"2.0.0\",\"mdast-util-gfm-strikethrough\":\"2.0.0\",\"mdast-util-gfm-table\":\"2.0.0\",\"mdast-util-gfm-task-list-item\":\"2.0.0\",\"mdast-util-to-markdown\":\"2.1.2\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"mdast-util-phrasing@4.1.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"unist-util-is\":\"6.0.0\"}},\"
mdast-util-to-hast@13.2.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/mdast\":\"4.0.4\",\"@ungap/structured-clone\":\"1.2.1\",\"devlop\":\"1.1.0\",\"micromark-util-sanitize-uri\":\"2.0.1\",\"trim-lines\":\"3.0.1\",\"unist-util-position\":\"5.0.0\",\"unist-util-visit\":\"5.0.0\",\"vfile\":\"6.0.3\"}},\"mdast-util-to-markdown@2.1.2\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"@types/unist\":\"3.0.3\",\"longest-streak\":\"3.1.0\",\"mdast-util-phrasing\":\"4.1.0\",\"mdast-util-to-string\":\"4.0.0\",\"micromark-util-classify-character\":\"2.0.1\",\"micromark-util-decode-string\":\"2.0.1\",\"unist-util-visit\":\"5.0.0\",\"zwitch\":\"2.0.4\"}},\"mdast-util-to-string@4.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\"}},\"mdn-data@2.12.2\":{},\"merge-stream@2.0.0\":{\"optional\":true},\"merge2@1.4.1\":{},\"mermaid@11.12.1\":{\"dependencies\":{\"@braintree/sanitize-url\":\"7.1.1\",\"@iconify/utils\":\"3.0.2\",\"@mermaid-js/parser\":\"0.6.3\",\"@types/d3\":\"7.4.3\",\"cytoscape\":\"3.30.4\",\"cytoscape-cose-bilkent\":\"4.1.0(cytoscape@3.30.4)\",\"cytoscape-fcose\":\"2.2.0(cytoscape@3.30.4)\",\"d3\":\"7.9.0\",\"d3-sankey\":\"0.12.3\",\"dagre-d3-es\":\"7.0.13\",\"dayjs\":\"1.11.19\",\"dompurify\":\"3.3.1\",\"katex\":\"0.16.22\",\"khroma\":\"2.1.0\",\"lodash-es\":\"4.17.21\",\"marked\":\"16.4.2\",\"roughjs\":\"4.6.6\",\"stylis\":\"4.3.6\",\"ts-dedent\":\"2.2.0\",\"uuid\":\"11.1.0\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"micromark-core-commonmark@2.0.2\":{\"dependencies\":{\"decode-named-character-reference\":\"1.0.2\",\"devlop\":\"1.1.0\",\"micromark-factory-destination\":\"2.0.1\",\"micromark-factory-label\":\"2.0.1\",\"micromark-factory-space\":\"2.0.1\",\"micromark-factory-title\":\"2.0.1\",\"micromark-factory-whitespace\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-chunked\":\"2.0.1\",\"micromark-util-classify-character\":\"2.0.1\",\"micromark-util-html-tag-name\":\"2.0.1\",\"micromark-util-normalize-identifier\":\"2.
0.1\",\"micromark-util-resolve-all\":\"2.0.1\",\"micromark-util-subtokenize\":\"2.0.3\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-frontmatter@2.0.0\":{\"dependencies\":{\"fault\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm-autolink-literal@2.1.0\":{\"dependencies\":{\"micromark-util-character\":\"2.1.1\",\"micromark-util-sanitize-uri\":\"2.0.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm-footnote@2.1.0\":{\"dependencies\":{\"devlop\":\"1.1.0\",\"micromark-core-commonmark\":\"2.0.2\",\"micromark-factory-space\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-normalize-identifier\":\"2.0.1\",\"micromark-util-sanitize-uri\":\"2.0.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm-strikethrough@2.1.0\":{\"dependencies\":{\"devlop\":\"1.1.0\",\"micromark-util-chunked\":\"2.0.1\",\"micromark-util-classify-character\":\"2.0.1\",\"micromark-util-resolve-all\":\"2.0.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm-table@2.1.1\":{\"dependencies\":{\"devlop\":\"1.1.0\",\"micromark-factory-space\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm-tagfilter@2.0.0\":{\"dependencies\":{\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm-task-list-item@2.1.0\":{\"dependencies\":{\"devlop\":\"1.1.0\",\"micromark-factory-space\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-extension-gfm@3.0.0\":{\"dependencies\":{\"micromark-extension-gfm-autolink-literal\":\"2.1.0\",\"micromark-extension-gfm-footnote\":\"2.1.0\",\"micromark-extension-gfm-strikethrough\":\"2.
1.0\",\"micromark-extension-gfm-table\":\"2.1.1\",\"micromark-extension-gfm-tagfilter\":\"2.0.0\",\"micromark-extension-gfm-task-list-item\":\"2.1.0\",\"micromark-util-combine-extensions\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-factory-destination@2.0.1\":{\"dependencies\":{\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-factory-label@2.0.1\":{\"dependencies\":{\"devlop\":\"1.1.0\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-factory-space@2.0.1\":{\"dependencies\":{\"micromark-util-character\":\"2.1.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-factory-title@2.0.1\":{\"dependencies\":{\"micromark-factory-space\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-factory-whitespace@2.0.1\":{\"dependencies\":{\"micromark-factory-space\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-util-character@2.1.1\":{\"dependencies\":{\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-util-chunked@2.0.1\":{\"dependencies\":{\"micromark-util-symbol\":\"2.0.1\"}},\"micromark-util-classify-character@2.0.1\":{\"dependencies\":{\"micromark-util-character\":\"2.1.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-util-combine-extensions@2.0.1\":{\"dependencies\":{\"micromark-util-chunked\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-util-decode-numeric-character-reference@2.0.2\":{\"dependencies\":{\"micromark-util-symbol\":\"2.0.1\"}},\"micromark-util-decode-string@2.0.1\":{\"dependencies\":{\"decode-named-character-reference\":\"1.0.2\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-decode-numeric-character-reference\":\"2.0.2\",\"micromark-u
til-symbol\":\"2.0.1\"}},\"micromark-util-encode@2.0.1\":{},\"micromark-util-html-tag-name@2.0.1\":{},\"micromark-util-normalize-identifier@2.0.1\":{\"dependencies\":{\"micromark-util-symbol\":\"2.0.1\"}},\"micromark-util-resolve-all@2.0.1\":{\"dependencies\":{\"micromark-util-types\":\"2.0.1\"}},\"micromark-util-sanitize-uri@2.0.1\":{\"dependencies\":{\"micromark-util-character\":\"2.1.1\",\"micromark-util-encode\":\"2.0.1\",\"micromark-util-symbol\":\"2.0.1\"}},\"micromark-util-subtokenize@2.0.3\":{\"dependencies\":{\"devlop\":\"1.1.0\",\"micromark-util-chunked\":\"2.0.1\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"}},\"micromark-util-symbol@2.0.1\":{},\"micromark-util-types@2.0.1\":{},\"micromark@4.0.1\":{\"dependencies\":{\"@types/debug\":\"4.1.12\",\"debug\":\"4.4.3\",\"decode-named-character-reference\":\"1.0.2\",\"devlop\":\"1.1.0\",\"micromark-core-commonmark\":\"2.0.2\",\"micromark-factory-space\":\"2.0.1\",\"micromark-util-character\":\"2.1.1\",\"micromark-util-chunked\":\"2.0.1\",\"micromark-util-combine-extensions\":\"2.0.1\",\"micromark-util-decode-numeric-character-reference\":\"2.0.2\",\"micromark-util-encode\":\"2.0.1\",\"micromark-util-normalize-identifier\":\"2.0.1\",\"micromark-util-resolve-all\":\"2.0.1\",\"micromark-util-sanitize-uri\":\"2.0.1\",\"micromark-util-subtokenize\":\"2.0.3\",\"micromark-util-symbol\":\"2.0.1\",\"micromark-util-types\":\"2.0.1\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"micromatch@4.0.8\":{\"dependencies\":{\"braces\":\"3.0.3\",\"picomatch\":\"2.3.1\"}},\"mime-db@1.52.0\":{},\"mime-types@2.1.35\":{\"dependencies\":{\"mime-db\":\"1.52.0\"}},\"mimic-fn@2.1.0\":{},\"mimic-response@3.1.0\":{},\"miniflare@4.20260504.0\":{\"dependencies\":{\"@cspotcode/source-map-support\":\"0.8.1\",\"sharp\":\"0.34.5\",\"undici\":\"7.24.8\",\"workerd\":\"1.20260504.1\",\"ws\":\"8.18.0\",\"youch\":\"4.1.0-beta.10\"},\"transitivePeerDependencies\":[\"bufferutil\",\"utf-8-validate\"]},\"minimatch@5.1.9\
":{\"dependencies\":{\"brace-expansion\":\"2.1.0\"},\"optional\":true},\"minimatch@9.0.3\":{\"dependencies\":{\"brace-expansion\":\"2.0.2\"}},\"minimatch@9.0.5\":{\"dependencies\":{\"brace-expansion\":\"2.0.2\"}},\"minimatch@9.0.9\":{\"dependencies\":{\"brace-expansion\":\"2.1.0\"},\"optional\":true},\"minimist@1.2.8\":{},\"minipass@3.3.6\":{\"dependencies\":{\"yallist\":\"4.0.0\"}},\"minipass@5.0.0\":{},\"minipass@7.1.2\":{},\"minipass@7.1.3\":{\"optional\":true},\"minizlib@2.1.2\":{\"dependencies\":{\"minipass\":\"3.3.6\",\"yallist\":\"4.0.0\"}},\"mkdirp-classic@0.5.3\":{},\"mkdirp@1.0.4\":{},\"mlly@1.8.0\":{\"dependencies\":{\"acorn\":\"8.16.0\",\"pathe\":\"2.0.3\",\"pkg-types\":\"1.3.1\",\"ufo\":\"1.6.1\"}},\"mri@1.2.0\":{},\"mrmime@2.0.1\":{},\"ms@2.1.3\":{},\"msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3)\":{\"dependencies\":{\"@bundled-es-modules/cookie\":\"2.0.1\",\"@bundled-es-modules/statuses\":\"1.0.1\",\"@bundled-es-modules/tough-cookie\":\"0.1.6\",\"@inquirer/confirm\":\"5.1.21(@types/node@22.15.33)\",\"@mswjs/interceptors\":\"0.39.8\",\"@open-draft/deferred-promise\":\"2.2.0\",\"@open-draft/until\":\"2.1.0\",\"@types/cookie\":\"0.6.0\",\"@types/statuses\":\"2.0.6\",\"graphql\":\"16.14.0\",\"headers-polyfill\":\"4.0.3\",\"is-node-process\":\"1.2.0\",\"outvariant\":\"1.4.3\",\"path-to-regexp\":\"6.3.0\",\"picocolors\":\"1.1.1\",\"strict-event-emitter\":\"0.5.1\",\"type-fest\":\"4.41.0\",\"yargs\":\"17.7.2\"},\"optionalDependencies\":{\"typescript\":\"5.8.3\"},\"transitivePeerDependencies\":[\"@types/node\"],\"optional\":true},\"msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3)\":{\"dependencies\":{\"@bundled-es-modules/cookie\":\"2.0.1\",\"@bundled-es-modules/statuses\":\"1.0.1\",\"@bundled-es-modules/tough-cookie\":\"0.1.6\",\"@inquirer/confirm\":\"5.1.21(@types/node@24.10.2)\",\"@mswjs/interceptors\":\"0.39.8\",\"@open-draft/deferred-promise\":\"2.2.0\",\"@open-draft/until\":\"2.1.0\",\"@types/cookie\":\"0.6.0\",\"@types/statuses\":\"2.0.6\",\"grap
hql\":\"16.14.0\",\"headers-polyfill\":\"4.0.3\",\"is-node-process\":\"1.2.0\",\"outvariant\":\"1.4.3\",\"path-to-regexp\":\"6.3.0\",\"picocolors\":\"1.1.1\",\"strict-event-emitter\":\"0.5.1\",\"type-fest\":\"4.41.0\",\"yargs\":\"17.7.2\"},\"optionalDependencies\":{\"typescript\":\"5.8.3\"},\"transitivePeerDependencies\":[\"@types/node\"],\"optional\":true},\"msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3)\":{\"dependencies\":{\"@bundled-es-modules/cookie\":\"2.0.1\",\"@bundled-es-modules/statuses\":\"1.0.1\",\"@bundled-es-modules/tough-cookie\":\"0.1.6\",\"@inquirer/confirm\":\"5.1.21(@types/node@24.10.2)\",\"@mswjs/interceptors\":\"0.39.8\",\"@open-draft/deferred-promise\":\"2.2.0\",\"@open-draft/until\":\"2.1.0\",\"@types/cookie\":\"0.6.0\",\"@types/statuses\":\"2.0.6\",\"graphql\":\"16.14.0\",\"headers-polyfill\":\"4.0.3\",\"is-node-process\":\"1.2.0\",\"outvariant\":\"1.4.3\",\"path-to-regexp\":\"6.3.0\",\"picocolors\":\"1.1.1\",\"strict-event-emitter\":\"0.5.1\",\"type-fest\":\"4.41.0\",\"yargs\":\"17.7.2\"},\"optionalDependencies\":{\"typescript\":\"5.9.3\"},\"transitivePeerDependencies\":[\"@types/node\"],\"optional\":true},\"mute-stream@2.0.0\":{\"optional\":true},\"nanoid@3.3.11\":{},\"napi-build-utils@2.0.0\":{},\"neo-async@2.6.2\":{\"optional\":true},\"netmask@2.1.1\":{\"optional\":true},\"node-abi@3.89.0\":{\"dependencies\":{\"semver\":\"7.7.3\"}},\"node-domexception@1.0.0\":{\"optional\":true},\"node-fetch@3.3.2\":{\"dependencies\":{\"data-uri-to-buffer\":\"4.0.1\",\"fetch-blob\":\"3.2.0\",\"formdata-polyfill\":\"4.0.10\"},\"optional\":true},\"node-machine-id@1.1.12\":{},\"node-releases@2.0.19\":{},\"node-releases@2.0.38\":{\"optional\":true},\"normalize-path@3.0.0\":{},\"npm-run-path@4.0.1\":{\"dependencies\":{\"path-key\":\"3.1.1\"}},\"nth-check@2.1.1\":{\"dependencies\":{\"boolbase\":\"1.0.0\"}},\"nwsapi@2.2.20\":{},\"nx-cloud@19.1.0\":{\"dependencies\":{\"@nrwl/nx-cloud\":\"19.1.0\",\"axios\":\"1.11.0\",\"chalk\":\"4.1.2\",\"dotenv\":\"10.0.0\",\
"fs-extra\":\"11.3.1\",\"ini\":\"4.1.3\",\"node-machine-id\":\"1.1.12\",\"open\":\"8.4.2\",\"tar\":\"6.2.1\",\"yargs-parser\":\"22.0.0\"},\"transitivePeerDependencies\":[\"debug\"]},\"nx@21.4.1\":{\"dependencies\":{\"@napi-rs/wasm-runtime\":\"0.2.4\",\"@yarnpkg/lockfile\":\"1.1.0\",\"@yarnpkg/parsers\":\"3.0.2\",\"@zkochan/js-yaml\":\"0.0.7\",\"axios\":\"1.11.0\",\"chalk\":\"4.1.2\",\"cli-cursor\":\"3.1.0\",\"cli-spinners\":\"2.6.1\",\"cliui\":\"8.0.1\",\"dotenv\":\"16.4.7\",\"dotenv-expand\":\"11.0.7\",\"enquirer\":\"2.3.6\",\"figures\":\"3.2.0\",\"flat\":\"5.0.2\",\"front-matter\":\"4.0.2\",\"ignore\":\"5.3.2\",\"jest-diff\":\"30.1.1\",\"jsonc-parser\":\"3.2.0\",\"lines-and-columns\":\"2.0.3\",\"minimatch\":\"9.0.3\",\"node-machine-id\":\"1.1.12\",\"npm-run-path\":\"4.0.1\",\"open\":\"8.4.2\",\"ora\":\"5.3.0\",\"resolve.exports\":\"2.0.3\",\"semver\":\"7.7.2\",\"string-width\":\"4.2.3\",\"tar-stream\":\"2.2.0\",\"tmp\":\"0.2.5\",\"tree-kill\":\"1.2.2\",\"tsconfig-paths\":\"4.2.0\",\"tslib\":\"2.8.1\",\"yaml\":\"2.8.1\",\"yargs\":\"17.7.2\",\"yargs-parser\":\"21.1.1\"},\"optionalDependencies\":{\"@nx/nx-darwin-arm64\":\"21.4.1\",\"@nx/nx-darwin-x64\":\"21.4.1\",\"@nx/nx-freebsd-x64\":\"21.4.1\",\"@nx/nx-linux-arm-gnueabihf\":\"21.4.1\",\"@nx/nx-linux-arm64-gnu\":\"21.4.1\",\"@nx/nx-linux-arm64-musl\":\"21.4.1\",\"@nx/nx-linux-x64-gnu\":\"21.4.1\",\"@nx/nx-linux-x64-musl\":\"21.4.1\",\"@nx/nx-win32-arm64-msvc\":\"21.4.1\",\"@nx/nx-win32-x64-msvc\":\"21.4.1\"},\"transitivePeerDependencies\":[\"debug\"]},\"obug@2.1.1\":{},\"once@1.4.0\":{\"dependencies\":{\"wrappy\":\"1.0.2\"}},\"onetime@5.1.2\":{\"dependencies\":{\"mimic-fn\":\"2.1.0\"}},\"oniguruma-parser@0.12.1\":{},\"oniguruma-to-es@4.3.3\":{\"dependencies\":{\"oniguruma-parser\":\"0.12.1\",\"regex\":\"6.0.1\",\"regex-recursion\":\"6.0.2\"}},\"open@8.4.2\":{\"dependencies\":{\"define-lazy-prop\":\"2.0.0\",\"is-docker\":\"2.2.1\",\"is-wsl\":\"2.2.0\"}},\"ora@5.3.0\":{\"dependencies\":{\"bl\":\"4.1.0\",\"chalk\":\"4
.1.2\",\"cli-cursor\":\"3.1.0\",\"cli-spinners\":\"2.9.2\",\"is-interactive\":\"1.0.0\",\"log-symbols\":\"4.1.0\",\"strip-ansi\":\"6.0.1\",\"wcwidth\":\"1.0.1\"}},\"outdent@0.5.0\":{},\"outvariant@1.4.3\":{\"optional\":true},\"oxlint@1.26.0\":{\"optionalDependencies\":{\"@oxlint/darwin-arm64\":\"1.26.0\",\"@oxlint/darwin-x64\":\"1.26.0\",\"@oxlint/linux-arm64-gnu\":\"1.26.0\",\"@oxlint/linux-arm64-musl\":\"1.26.0\",\"@oxlint/linux-x64-gnu\":\"1.26.0\",\"@oxlint/linux-x64-musl\":\"1.26.0\",\"@oxlint/win32-arm64\":\"1.26.0\",\"@oxlint/win32-x64\":\"1.26.0\"}},\"p-filter@2.1.0\":{\"dependencies\":{\"p-map\":\"2.1.0\"}},\"p-limit@2.3.0\":{\"dependencies\":{\"p-try\":\"2.2.0\"}},\"p-locate@4.1.0\":{\"dependencies\":{\"p-limit\":\"2.3.0\"}},\"p-map@2.1.0\":{},\"p-map@7.0.4\":{},\"p-try@2.2.0\":{},\"pac-proxy-agent@7.2.0\":{\"dependencies\":{\"@tootallnate/quickjs-emscripten\":\"0.23.0\",\"agent-base\":\"7.1.4\",\"debug\":\"4.4.3\",\"get-uri\":\"6.0.5\",\"http-proxy-agent\":\"7.0.2\",\"https-proxy-agent\":\"7.0.6\",\"pac-resolver\":\"7.0.1\",\"socks-proxy-agent\":\"8.0.5\"},\"transitivePeerDependencies\":[\"supports-color\"],\"optional\":true},\"pac-resolver@7.0.1\":{\"dependencies\":{\"degenerator\":\"5.0.1\",\"netmask\":\"2.1.1\"},\"optional\":true},\"package-json-from-dist@1.0.1\":{},\"package-manager-detector@0.2.11\":{\"dependencies\":{\"quansync\":\"0.2.11\"}},\"package-manager-detector@1.5.0\":{},\"pako@1.0.11\":{\"optional\":true},\"parse5-htmlparser2-tree-adapter@7.1.0\":{\"dependencies\":{\"domhandler\":\"5.0.3\",\"parse5\":\"7.3.0\"}},\"parse5-parser-stream@7.1.2\":{\"dependencies\":{\"parse5\":\"7.3.0\"}},\"parse5@7.3.0\":{\"dependencies\":{\"entities\":\"6.0.1\"}},\"parse5@8.0.0\":{\"dependencies\":{\"entities\":\"6.0.1\"}},\"path-data-parser@0.1.0\":{},\"path-exists@4.0.0\":{},\"path-key@3.1.1\":{},\"path-scurry@1.11.1\":{\"dependencies\":{\"lru-cache\":\"10.4.3\",\"minipass\":\"7.1.2\"}},\"path-to-regexp@6.3.0\":{},\"path-type@4.0.0\":{},\"pathe@2.0.3\":{},\
"pathval@2.0.1\":{},\"pend@1.2.0\":{\"optional\":true},\"picocolors@1.1.1\":{},\"picomatch@2.3.1\":{},\"picomatch@4.0.3\":{},\"picomatch@4.0.4\":{},\"pify@4.0.1\":{},\"pkg-types@1.3.1\":{\"dependencies\":{\"confbox\":\"0.1.8\",\"mlly\":\"1.8.0\",\"pathe\":\"2.0.3\"}},\"pkg-types@2.3.0\":{\"dependencies\":{\"confbox\":\"0.2.2\",\"exsolve\":\"1.0.8\",\"pathe\":\"2.0.3\"}},\"playwright-core@1.55.0\":{\"optional\":true},\"playwright@1.55.0\":{\"dependencies\":{\"playwright-core\":\"1.55.0\"},\"optionalDependencies\":{\"fsevents\":\"2.3.2\"},\"optional\":true},\"pngjs@7.0.0\":{},\"points-on-curve@0.2.0\":{},\"points-on-path@0.2.1\":{\"dependencies\":{\"path-data-parser\":\"0.1.0\",\"points-on-curve\":\"0.2.0\"}},\"postcss@8.5.14\":{\"dependencies\":{\"nanoid\":\"3.3.11\",\"picocolors\":\"1.1.1\",\"source-map-js\":\"1.2.1\"}},\"postcss@8.5.6\":{\"dependencies\":{\"nanoid\":\"3.3.11\",\"picocolors\":\"1.1.1\",\"source-map-js\":\"1.2.1\"}},\"posthog-js@1.321.2\":{\"dependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@opentelemetry/api-logs\":\"0.208.0\",\"@opentelemetry/exporter-logs-otlp-http\":\"0.208.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/resources\":\"2.4.0(@opentelemetry/api@1.9.0)\",\"@opentelemetry/sdk-logs\":\"0.208.0(@opentelemetry/api@1.9.0)\",\"@posthog/core\":\"1.9.1\",\"@posthog/types\":\"1.321.2\",\"core-js\":\"3.46.0\",\"dompurify\":\"3.3.1\",\"fflate\":\"0.4.8\",\"preact\":\"10.28.2\",\"query-selector-shadow-dom\":\"1.0.1\",\"web-vitals\":\"4.2.4\"}},\"preact@10.28.2\":{},\"prebuild-install@7.1.3\":{\"dependencies\":{\"detect-libc\":\"2.1.2\",\"expand-template\":\"2.0.3\",\"github-from-package\":\"0.0.0\",\"minimist\":\"1.2.8\",\"mkdirp-classic\":\"0.5.3\",\"napi-build-utils\":\"2.0.0\",\"node-abi\":\"3.89.0\",\"pump\":\"3.0.3\",\"rc\":\"1.2.8\",\"simple-get\":\"4.0.1\",\"tar-fs\":\"2.1.4\",\"tunnel-agent\":\"0.6.0\"}},\"prettier@2.8.8\":{},\"prettier@3.6.2\":{},\"pretty-format@27.5.1\":{\"dependencies\":{\"ansi-regex\":\"5.0.1\",\"ansi-styles\":\"5.2
.0\",\"react-is\":\"17.0.2\"}},\"pretty-format@30.0.5\":{\"dependencies\":{\"@jest/schemas\":\"30.0.5\",\"ansi-styles\":\"5.2.0\",\"react-is\":\"18.3.1\"}},\"process-nextick-args@2.0.1\":{\"optional\":true},\"process@0.11.10\":{\"optional\":true},\"progress@2.0.3\":{\"optional\":true},\"property-information@6.5.0\":{},\"property-information@7.1.0\":{},\"protobufjs@7.5.4\":{\"dependencies\":{\"@protobufjs/aspromise\":\"1.1.2\",\"@protobufjs/base64\":\"1.1.2\",\"@protobufjs/codegen\":\"2.0.4\",\"@protobufjs/eventemitter\":\"1.1.0\",\"@protobufjs/fetch\":\"1.1.0\",\"@protobufjs/float\":\"1.0.2\",\"@protobufjs/inquire\":\"1.1.0\",\"@protobufjs/path\":\"1.1.2\",\"@protobufjs/pool\":\"1.1.0\",\"@protobufjs/utf8\":\"1.1.0\",\"@types/node\":\"22.15.33\",\"long\":\"5.3.2\"}},\"proxy-agent@6.5.0\":{\"dependencies\":{\"agent-base\":\"7.1.4\",\"debug\":\"4.4.3\",\"http-proxy-agent\":\"7.0.2\",\"https-proxy-agent\":\"7.0.6\",\"lru-cache\":\"7.18.3\",\"pac-proxy-agent\":\"7.2.0\",\"proxy-from-env\":\"1.1.0\",\"socks-proxy-agent\":\"8.0.5\"},\"transitivePeerDependencies\":[\"supports-color\"],\"optional\":true},\"proxy-from-env@1.1.0\":{},\"psl@1.15.0\":{\"dependencies\":{\"punycode\":\"2.3.1\"},\"optional\":true},\"pump@3.0.3\":{\"dependencies\":{\"end-of-stream\":\"1.4.5\",\"once\":\"1.4.0\"}},\"pump@3.0.4\":{\"dependencies\":{\"end-of-stream\":\"1.4.5\",\"once\":\"1.4.0\"},\"optional\":true},\"punycode@2.3.1\":{},\"quansync@0.2.11\":{},\"query-selector-shadow-dom@1.0.1\":{},\"querystringify@2.2.0\":{\"optional\":true},\"queue-microtask@1.2.3\":{},\"rc@1.2.8\":{\"dependencies\":{\"deep-extend\":\"0.6.0\",\"ini\":\"1.3.8\",\"minimist\":\"1.2.8\",\"strip-json-comments\":\"2.0.1\"}},\"react-dom@19.2.0(react@19.2.0)\":{\"dependencies\":{\"react\":\"19.2.0\",\"scheduler\":\"0.27.0\"}},\"react-is@17.0.2\":{},\"react-is@18.3.1\":{},\"react@19.2.0\":{},\"read-yaml-file@1.1.0\":{\"dependencies\":{\"graceful-fs\":\"4.2.11\",\"js-yaml\":\"3.14.1\",\"pify\":\"4.0.1\",\"strip-bom\":\"3.0.0\"
}},\"readable-stream@2.3.8\":{\"dependencies\":{\"core-util-is\":\"1.0.3\",\"inherits\":\"2.0.4\",\"isarray\":\"1.0.0\",\"process-nextick-args\":\"2.0.1\",\"safe-buffer\":\"5.1.2\",\"string_decoder\":\"1.1.1\",\"util-deprecate\":\"1.0.2\"},\"optional\":true},\"readable-stream@3.6.2\":{\"dependencies\":{\"inherits\":\"2.0.4\",\"string_decoder\":\"1.3.0\",\"util-deprecate\":\"1.0.2\"}},\"readable-stream@4.7.0\":{\"dependencies\":{\"abort-controller\":\"3.0.0\",\"buffer\":\"6.0.3\",\"events\":\"3.3.0\",\"process\":\"0.11.10\",\"string_decoder\":\"1.3.0\"},\"optional\":true},\"readdir-glob@1.1.3\":{\"dependencies\":{\"minimatch\":\"5.1.9\"},\"optional\":true},\"readdirp@3.6.0\":{\"dependencies\":{\"picomatch\":\"2.3.1\"}},\"regex-recursion@6.0.2\":{\"dependencies\":{\"regex-utilities\":\"2.3.0\"}},\"regex-utilities@2.3.0\":{},\"regex@6.0.1\":{\"dependencies\":{\"regex-utilities\":\"2.3.0\"}},\"rehype-autolink-headings@7.1.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@ungap/structured-clone\":\"1.2.1\",\"hast-util-heading-rank\":\"3.0.0\",\"hast-util-is-element\":\"3.0.0\",\"unified\":\"11.0.5\",\"unist-util-visit\":\"5.0.0\"}},\"rehype-highlight@7.0.2\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-to-text\":\"4.0.2\",\"lowlight\":\"3.3.0\",\"unist-util-visit\":\"5.0.0\",\"vfile\":\"6.0.3\"}},\"rehype-minify-whitespace@6.0.2\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-minify-whitespace\":\"1.0.1\"}},\"rehype-parse@9.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-from-html\":\"2.0.3\",\"unified\":\"11.0.5\"}},\"rehype-raw@7.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-raw\":\"9.1.0\",\"vfile\":\"6.0.3\"}},\"rehype-remark@10.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/mdast\":\"4.0.4\",\"hast-util-to-mdast\":\"10.1.2\",\"unified\":\"11.0.5\",\"vfile\":\"6.0.3\"}},\"rehype-sanitize@6.0.0\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-sanitize\":\"5.0.2\"}},\"rehype-slug@6.0.0\":{\
"dependencies\":{\"@types/hast\":\"3.0.4\",\"github-slugger\":\"2.0.0\",\"hast-util-heading-rank\":\"3.0.0\",\"hast-util-to-string\":\"3.0.1\",\"unist-util-visit\":\"5.0.0\"}},\"rehype-stringify@10.0.1\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"hast-util-to-html\":\"9.0.5\",\"unified\":\"11.0.5\"}},\"remark-frontmatter@5.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"mdast-util-frontmatter\":\"2.0.1\",\"micromark-extension-frontmatter\":\"2.0.0\",\"unified\":\"11.0.5\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"remark-gfm@4.0.1\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"mdast-util-gfm\":\"3.0.0\",\"micromark-extension-gfm\":\"3.0.0\",\"remark-parse\":\"11.0.0\",\"remark-stringify\":\"11.0.0\",\"unified\":\"11.0.5\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"remark-parse@11.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"mdast-util-from-markdown\":\"2.0.2\",\"micromark-util-types\":\"2.0.1\",\"unified\":\"11.0.5\"},\"transitivePeerDependencies\":[\"supports-color\"]},\"remark-rehype@11.1.2\":{\"dependencies\":{\"@types/hast\":\"3.0.4\",\"@types/mdast\":\"4.0.4\",\"mdast-util-to-hast\":\"13.2.0\",\"unified\":\"11.0.5\",\"vfile\":\"6.0.3\"}},\"remark-stringify@11.0.0\":{\"dependencies\":{\"@types/mdast\":\"4.0.4\",\"mdast-util-to-markdown\":\"2.1.2\",\"unified\":\"11.0.5\"}},\"require-directory@2.1.1\":{},\"require-from-string@2.0.2\":{},\"requires-port@1.0.0\":{\"optional\":true},\"resolve-from@5.0.0\":{},\"resolve-pkg-maps@1.0.0\":{\"optional\":true},\"resolve.exports@2.0.3\":{},\"resq@1.11.0\":{\"dependencies\":{\"fast-deep-equal\":\"2.0.1\"},\"optional\":true},\"restore-cursor@3.1.0\":{\"dependencies\":{\"onetime\":\"5.1.2\",\"signal-exit\":\"3.0.7\"}},\"reusify@1.0.4\":{},\"rgb2hex@0.2.5\":{\"optional\":true},\"robust-predicates@3.0.2\":{},\"rolldown@1.0.0-rc.17\":{\"dependencies\":{\"@oxc-project/types\":\"0.127.0\",\"@rolldown/pluginutils\":\"1.0.0-rc.17\"},\"optionalDependencies\":{\"@rolldown/binding-android-
arm64\":\"1.0.0-rc.17\",\"@rolldown/binding-darwin-arm64\":\"1.0.0-rc.17\",\"@rolldown/binding-darwin-x64\":\"1.0.0-rc.17\",\"@rolldown/binding-freebsd-x64\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-arm-gnueabihf\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-arm64-gnu\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-arm64-musl\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-ppc64-gnu\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-s390x-gnu\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-x64-gnu\":\"1.0.0-rc.17\",\"@rolldown/binding-linux-x64-musl\":\"1.0.0-rc.17\",\"@rolldown/binding-openharmony-arm64\":\"1.0.0-rc.17\",\"@rolldown/binding-wasm32-wasi\":\"1.0.0-rc.17\",\"@rolldown/binding-win32-arm64-msvc\":\"1.0.0-rc.17\",\"@rolldown/binding-win32-x64-msvc\":\"1.0.0-rc.17\"}},\"rollup@4.53.2\":{\"dependencies\":{\"@types/estree\":\"1.0.8\"},\"optionalDependencies\":{\"@rollup/rollup-android-arm-eabi\":\"4.53.2\",\"@rollup/rollup-android-arm64\":\"4.53.2\",\"@rollup/rollup-darwin-arm64\":\"4.53.2\",\"@rollup/rollup-darwin-x64\":\"4.53.2\",\"@rollup/rollup-freebsd-arm64\":\"4.53.2\",\"@rollup/rollup-freebsd-x64\":\"4.53.2\",\"@rollup/rollup-linux-arm-gnueabihf\":\"4.53.2\",\"@rollup/rollup-linux-arm-musleabihf\":\"4.53.2\",\"@rollup/rollup-linux-arm64-gnu\":\"4.53.2\",\"@rollup/rollup-linux-arm64-musl\":\"4.53.2\",\"@rollup/rollup-linux-loong64-gnu\":\"4.53.2\",\"@rollup/rollup-linux-ppc64-gnu\":\"4.53.2\",\"@rollup/rollup-linux-riscv64-gnu\":\"4.53.2\",\"@rollup/rollup-linux-riscv64-musl\":\"4.53.2\",\"@rollup/rollup-linux-s390x-gnu\":\"4.53.2\",\"@rollup/rollup-linux-x64-gnu\":\"4.53.2\",\"@rollup/rollup-linux-x64-musl\":\"4.53.2\",\"@rollup/rollup-openharmony-arm64\":\"4.53.2\",\"@rollup/rollup-win32-arm64-msvc\":\"4.53.2\",\"@rollup/rollup-win32-ia32-msvc\":\"4.53.2\",\"@rollup/rollup-win32-x64-gnu\":\"4.53.2\",\"@rollup/rollup-win32-x64-msvc\":\"4.53.2\",\"fsevents\":\"2.3.3\"}},\"rou3@0.8.1\":{},\"roughjs@4.6.6\":{\"dependencies\":{\"hachure-fill\":\"0.5.2\",\"path-data-parser\"
:\"0.1.0\",\"points-on-curve\":\"0.2.0\",\"points-on-path\":\"0.2.1\"}},\"rrweb-cssom@0.8.0\":{},\"run-parallel@1.2.0\":{\"dependencies\":{\"queue-microtask\":\"1.2.3\"}},\"rw@1.3.3\":{},\"rxjs@7.8.2\":{\"dependencies\":{\"tslib\":\"2.8.1\"},\"optional\":true},\"safaridriver@0.1.2\":{\"optional\":true},\"safe-buffer@5.1.2\":{\"optional\":true},\"safe-buffer@5.2.1\":{},\"safer-buffer@2.1.2\":{},\"sass-embedded-android-arm64@1.89.2\":{\"optional\":true},\"sass-embedded-android-arm@1.89.2\":{\"optional\":true},\"sass-embedded-android-riscv64@1.89.2\":{\"optional\":true},\"sass-embedded-android-x64@1.89.2\":{\"optional\":true},\"sass-embedded-darwin-arm64@1.89.2\":{\"optional\":true},\"sass-embedded-darwin-x64@1.89.2\":{\"optional\":true},\"sass-embedded-linux-arm64@1.89.2\":{\"optional\":true},\"sass-embedded-linux-arm@1.89.2\":{\"optional\":true},\"sass-embedded-linux-musl-arm64@1.89.2\":{\"optional\":true},\"sass-embedded-linux-musl-arm@1.89.2\":{\"optional\":true},\"sass-embedded-linux-musl-riscv64@1.89.2\":{\"optional\":true},\"sass-embedded-linux-musl-x64@1.89.2\":{\"optional\":true},\"sass-embedded-linux-riscv64@1.89.2\":{\"optional\":true},\"sass-embedded-linux-x64@1.89.2\":{\"optional\":true},\"sass-embedded-win32-arm64@1.89.2\":{\"optional\":true},\"sass-embedded-win32-x64@1.89.2\":{\"optional\":true},\"sass-embedded@1.89.2\":{\"dependencies\":{\"@bufbuild/protobuf\":\"2.12.0\",\"buffer-builder\":\"0.2.0\",\"colorjs.io\":\"0.5.2\",\"immutable\":\"5.1.5\",\"rxjs\":\"7.8.2\",\"supports-color\":\"8.1.1\",\"sync-child-process\":\"1.0.2\",\"varint\":\"6.0.0\"},\"optionalDependencies\":{\"sass-embedded-android-arm\":\"1.89.2\",\"sass-embedded-android-arm64\":\"1.89.2\",\"sass-embedded-android-riscv64\":\"1.89.2\",\"sass-embedded-android-x64\":\"1.89.2\",\"sass-embedded-darwin-arm64\":\"1.89.2\",\"sass-embedded-darwin-x64\":\"1.89.2\",\"sass-embedded-linux-arm\":\"1.89.2\",\"sass-embedded-linux-arm64\":\"1.89.2\",\"sass-embedded-linux-musl-arm\":\"1.89.2\",\"sass-emb
edded-linux-musl-arm64\":\"1.89.2\",\"sass-embedded-linux-musl-riscv64\":\"1.89.2\",\"sass-embedded-linux-musl-x64\":\"1.89.2\",\"sass-embedded-linux-riscv64\":\"1.89.2\",\"sass-embedded-linux-x64\":\"1.89.2\",\"sass-embedded-win32-arm64\":\"1.89.2\",\"sass-embedded-win32-x64\":\"1.89.2\"},\"optional\":true},\"saxes@6.0.0\":{\"dependencies\":{\"xmlchars\":\"2.2.0\"}},\"scheduler@0.27.0\":{},\"schema-utils@4.3.3\":{\"dependencies\":{\"@types/json-schema\":\"7.0.15\",\"ajv\":\"8.20.0\",\"ajv-formats\":\"2.1.1(ajv@8.20.0)\",\"ajv-keywords\":\"5.1.0(ajv@8.20.0)\"},\"optional\":true},\"semver@6.3.1\":{},\"semver@7.7.2\":{},\"semver@7.7.3\":{},\"semver@7.7.4\":{\"optional\":true},\"serialize-error@11.0.3\":{\"dependencies\":{\"type-fest\":\"2.19.0\"},\"optional\":true},\"seroval-plugins@1.5.4(seroval@1.5.4)\":{\"dependencies\":{\"seroval\":\"1.5.4\"}},\"seroval@1.5.4\":{},\"setimmediate@1.0.5\":{\"optional\":true},\"sharp@0.34.5\":{\"dependencies\":{\"@img/colour\":\"1.1.0\",\"detect-libc\":\"2.1.2\",\"semver\":\"7.7.3\"},\"optionalDependencies\":{\"@img/sharp-darwin-arm64\":\"0.34.5\",\"@img/sharp-darwin-x64\":\"0.34.5\",\"@img/sharp-libvips-darwin-arm64\":\"1.2.4\",\"@img/sharp-libvips-darwin-x64\":\"1.2.4\",\"@img/sharp-libvips-linux-arm\":\"1.2.4\",\"@img/sharp-libvips-linux-arm64\":\"1.2.4\",\"@img/sharp-libvips-linux-ppc64\":\"1.2.4\",\"@img/sharp-libvips-linux-riscv64\":\"1.2.4\",\"@img/sharp-libvips-linux-s390x\":\"1.2.4\",\"@img/sharp-libvips-linux-x64\":\"1.2.4\",\"@img/sharp-libvips-linuxmusl-arm64\":\"1.2.4\",\"@img/sharp-libvips-linuxmusl-x64\":\"1.2.4\",\"@img/sharp-linux-arm\":\"0.34.5\",\"@img/sharp-linux-arm64\":\"0.34.5\",\"@img/sharp-linux-ppc64\":\"0.34.5\",\"@img/sharp-linux-riscv64\":\"0.34.5\",\"@img/sharp-linux-s390x\":\"0.34.5\",\"@img/sharp-linux-x64\":\"0.34.5\",\"@img/sharp-linuxmusl-arm64\":\"0.34.5\",\"@img/sharp-linuxmusl-x64\":\"0.34.5\",\"@img/sharp-wasm32\":\"0.34.5\",\"@img/sharp-win32-arm64\":\"0.34.5\",\"@img/sharp-win32-ia32\":\"0.34.
5\",\"@img/sharp-win32-x64\":\"0.34.5\"}},\"shebang-command@2.0.0\":{\"dependencies\":{\"shebang-regex\":\"3.0.0\"}},\"shebang-regex@3.0.0\":{},\"shiki@3.15.0\":{\"dependencies\":{\"@shikijs/core\":\"3.15.0\",\"@shikijs/engine-javascript\":\"3.15.0\",\"@shikijs/engine-oniguruma\":\"3.15.0\",\"@shikijs/langs\":\"3.15.0\",\"@shikijs/themes\":\"3.15.0\",\"@shikijs/types\":\"3.15.0\",\"@shikijs/vscode-textmate\":\"10.0.2\",\"@types/hast\":\"3.0.4\"}},\"siginfo@2.0.0\":{},\"signal-exit@3.0.7\":{},\"signal-exit@4.1.0\":{},\"simple-concat@1.0.1\":{},\"simple-get@4.0.1\":{\"dependencies\":{\"decompress-response\":\"6.0.0\",\"once\":\"1.4.0\",\"simple-concat\":\"1.0.1\"}},\"sirv@3.0.2\":{\"dependencies\":{\"@polka/url\":\"1.0.0-next.29\",\"mrmime\":\"2.0.1\",\"totalist\":\"3.0.1\"}},\"slash@3.0.0\":{},\"smart-buffer@4.2.0\":{\"optional\":true},\"socks-proxy-agent@8.0.5\":{\"dependencies\":{\"agent-base\":\"7.1.4\",\"debug\":\"4.4.3\",\"socks\":\"2.8.8\"},\"transitivePeerDependencies\":[\"supports-color\"],\"optional\":true},\"socks@2.8.8\":{\"dependencies\":{\"ip-address\":\"10.2.0\",\"smart-buffer\":\"4.2.0\"},\"optional\":true},\"source-map-js@1.2.1\":{},\"source-map-support@0.5.21\":{\"dependencies\":{\"buffer-from\":\"1.1.2\",\"source-map\":\"0.6.1\"},\"optional\":true},\"source-map@0.6.1\":{\"optional\":true},\"source-map@0.7.6\":{},\"space-separated-tokens@2.0.2\":{},\"spacetrim@0.11.59\":{\"optional\":true},\"spawndamnit@3.0.1\":{\"dependencies\":{\"cross-spawn\":\"7.0.6\",\"signal-exit\":\"4.1.0\"}},\"split2@4.2.0\":{\"optional\":true},\"sprintf-js@1.0.3\":{},\"srvx@0.11.15\":{},\"stackback@0.0.2\":{},\"statuses@2.0.2\":{\"optional\":true},\"std-env@3.10.0\":{},\"std-env@3.9.0\":{},\"std-env@4.1.0\":{},\"streamx@2.25.0\":{\"dependencies\":{\"events-universal\":\"1.0.1\",\"fast-fifo\":\"1.3.2\",\"text-decoder\":\"1.2.7\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"react-native-b4a\"],\"optional\":true},\"strict-event-emitter@0.5.1\":{\"optional\":true
},\"string-width@4.2.3\":{\"dependencies\":{\"emoji-regex\":\"8.0.0\",\"is-fullwidth-code-point\":\"3.0.0\",\"strip-ansi\":\"6.0.1\"}},\"string-width@5.1.2\":{\"dependencies\":{\"eastasianwidth\":\"0.2.0\",\"emoji-regex\":\"9.2.2\",\"strip-ansi\":\"7.1.2\"}},\"string_decoder@1.1.1\":{\"dependencies\":{\"safe-buffer\":\"5.1.2\"},\"optional\":true},\"string_decoder@1.3.0\":{\"dependencies\":{\"safe-buffer\":\"5.2.1\"}},\"stringify-entities@4.0.4\":{\"dependencies\":{\"character-entities-html4\":\"2.1.0\",\"character-entities-legacy\":\"3.0.0\"}},\"strip-ansi@6.0.1\":{\"dependencies\":{\"ansi-regex\":\"5.0.1\"}},\"strip-ansi@7.1.2\":{\"dependencies\":{\"ansi-regex\":\"6.1.0\"}},\"strip-ansi@7.2.0\":{\"dependencies\":{\"ansi-regex\":\"6.2.2\"},\"optional\":true},\"strip-bom@3.0.0\":{},\"strip-json-comments@2.0.1\":{},\"strip-literal@3.0.0\":{\"dependencies\":{\"js-tokens\":\"9.0.1\"}},\"strnum@1.1.2\":{\"optional\":true},\"stylis@4.3.6\":{},\"supports-color@10.2.2\":{},\"supports-color@7.2.0\":{\"dependencies\":{\"has-flag\":\"4.0.0\"}},\"supports-color@8.1.1\":{\"dependencies\":{\"has-flag\":\"4.0.0\"},\"optional\":true},\"symbol-tree@3.2.4\":{},\"sync-child-process@1.0.2\":{\"dependencies\":{\"sync-message-port\":\"1.2.0\"},\"optional\":true},\"sync-message-port@1.2.0\":{\"optional\":true},\"tailwindcss@4.2.4\":{},\"tapable@2.3.3\":{},\"tar-fs@2.1.4\":{\"dependencies\":{\"chownr\":\"1.1.4\",\"mkdirp-classic\":\"0.5.3\",\"pump\":\"3.0.3\",\"tar-stream\":\"2.2.0\"}},\"tar-fs@3.1.2\":{\"dependencies\":{\"pump\":\"3.0.4\",\"tar-stream\":\"3.2.0\"},\"optionalDependencies\":{\"bare-fs\":\"4.7.1\",\"bare-path\":\"3.0.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\"],\"optional\":true},\"tar-stream@2.2.0\":{\"dependencies\":{\"bl\":\"4.1.0\",\"end-of-stream\":\"1.4.5\",\"fs-constants\":\"1.0.0\",\"inherits\":\"2.0.4\",\"readable-stream\":\"3.6.2\"}},\"tar-stream@3.2.0\":{\"dependencies\":{\"b4a\":\"1.8.1\",\"bare-fs\":\"4.7.1
\",\"fast-fifo\":\"1.3.2\",\"streamx\":\"2.25.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"react-native-b4a\"],\"optional\":true},\"tar@6.2.1\":{\"dependencies\":{\"chownr\":\"2.0.0\",\"fs-minipass\":\"2.1.0\",\"minipass\":\"5.0.0\",\"minizlib\":\"2.1.2\",\"mkdirp\":\"1.0.4\",\"yallist\":\"4.0.0\"}},\"teex@1.0.1\":{\"dependencies\":{\"streamx\":\"2.25.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"react-native-b4a\"],\"optional\":true},\"term-size@2.2.1\":{},\"terser-webpack-plugin@5.5.0(esbuild@0.27.3)(webpack@5.99.9(esbuild@0.27.3))\":{\"dependencies\":{\"@jridgewell/trace-mapping\":\"0.3.31\",\"jest-worker\":\"27.5.1\",\"schema-utils\":\"4.3.3\",\"terser\":\"5.36.0\",\"webpack\":\"5.99.9(esbuild@0.27.3)\"},\"optionalDependencies\":{\"esbuild\":\"0.27.3\"},\"optional\":true},\"terser@5.36.0\":{\"dependencies\":{\"@jridgewell/source-map\":\"0.3.11\",\"acorn\":\"8.16.0\",\"commander\":\"2.20.3\",\"source-map-support\":\"0.5.21\"},\"optional\":true},\"test-exclude@7.0.1\":{\"dependencies\":{\"@istanbuljs/schema\":\"0.1.3\",\"glob\":\"10.4.5\",\"minimatch\":\"9.0.5\"}},\"text-decoder@1.2.7\":{\"dependencies\":{\"b4a\":\"1.8.1\"},\"transitivePeerDependencies\":[\"react-native-b4a\"],\"optional\":true},\"tinybench@2.9.0\":{},\"tinyexec@0.3.2\":{},\"tinyexec@1.0.2\":{},\"tinyglobby@0.2.14\":{\"dependencies\":{\"fdir\":\"6.5.0(picomatch@4.0.4)\",\"picomatch\":\"4.0.4\"}},\"tinyglobby@0.2.15\":{\"dependencies\":{\"fdir\":\"6.5.0(picomatch@4.0.4)\",\"picomatch\":\"4.0.4\"}},\"tinyglobby@0.2.16\":{\"dependencies\":{\"fdir\":\"6.5.0(picomatch@4.0.4)\",\"picomatch\":\"4.0.4\"}},\"tinypool@1.1.1\":{},\"tinyrainbow@2.0.0\":{},\"tinyrainbow@3.0.3\":{},\"tinyrainbow@3.1.0\":{},\"tinyspy@4.0.3\":{},\"tldts-core@6.1.52\":{},\"tldts-core@7.0.19\":{},\"tldts@6.1.52\":{\"dependencies\":{\"tldts-core\":\"6.1.52\"}},\"tldts@7.0.19\":{\"dependencies\":{\"tldts-core\":\"7.0.19\"}},\"tmp@0.2.5\":{},\"to-regex-range@5.0.1\":{\"dependencie
s\":{\"is-number\":\"7.0.0\"}},\"totalist@3.0.1\":{},\"tough-cookie@4.1.4\":{\"dependencies\":{\"psl\":\"1.15.0\",\"punycode\":\"2.3.1\",\"universalify\":\"0.2.0\",\"url-parse\":\"1.5.10\"},\"optional\":true},\"tough-cookie@5.1.2\":{\"dependencies\":{\"tldts\":\"6.1.52\"}},\"tough-cookie@6.0.0\":{\"dependencies\":{\"tldts\":\"7.0.19\"}},\"tr46@5.1.1\":{\"dependencies\":{\"punycode\":\"2.3.1\"}},\"tr46@6.0.0\":{\"dependencies\":{\"punycode\":\"2.3.1\"}},\"tree-kill@1.2.2\":{},\"trim-lines@3.0.1\":{},\"trim-trailing-lines@2.1.0\":{},\"trough@2.2.0\":{},\"ts-algebra@2.0.0\":{},\"ts-dedent@2.2.0\":{},\"tsconfig-paths@4.2.0\":{\"dependencies\":{\"json5\":\"2.2.3\",\"minimist\":\"1.2.8\",\"strip-bom\":\"3.0.0\"}},\"tslib@2.8.1\":{},\"tsx@4.20.5\":{\"dependencies\":{\"esbuild\":\"0.25.12\",\"get-tsconfig\":\"4.14.0\"},\"optionalDependencies\":{\"fsevents\":\"2.3.3\"},\"optional\":true},\"tunnel-agent@0.6.0\":{\"dependencies\":{\"safe-buffer\":\"5.2.1\"}},\"type-fest@2.19.0\":{\"optional\":true},\"type-fest@4.26.0\":{\"optional\":true},\"type-fest@4.41.0\":{\"optional\":true},\"typescript@5.8.3\":{},\"typescript@5.9.3\":{},\"ufo@1.6.1\":{},\"undici-types@6.21.0\":{},\"undici-types@7.16.0\":{\"optional\":true},\"undici@7.16.0\":{},\"undici@7.24.8\":{},\"undici@7.25.0\":{\"optional\":true},\"unenv@2.0.0-rc.24\":{\"dependencies\":{\"pathe\":\"2.0.3\"}},\"unified@11.0.5\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"bail\":\"2.0.2\",\"devlop\":\"1.1.0\",\"extend\":\"3.0.2\",\"is-plain-obj\":\"4.1.0\",\"trough\":\"2.2.0\",\"vfile\":\"6.0.3\"}},\"unist-util-find-after@5.0.0\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"unist-util-is\":\"6.0.0\"}},\"unist-util-is@6.0.0\":{\"dependencies\":{\"@types/unist\":\"3.0.3\"}},\"unist-util-position@5.0.0\":{\"dependencies\":{\"@types/unist\":\"3.0.3\"}},\"unist-util-stringify-position@4.0.0\":{\"dependencies\":{\"@types/unist\":\"3.0.3\"}},\"unist-util-visit-parents@6.0.1\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"unist-util-
is\":\"6.0.0\"}},\"unist-util-visit@5.0.0\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"unist-util-is\":\"6.0.0\",\"unist-util-visit-parents\":\"6.0.1\"}},\"universalify@0.1.2\":{},\"universalify@0.2.0\":{\"optional\":true},\"universalify@2.0.1\":{},\"unplugin@3.0.0\":{\"dependencies\":{\"@jridgewell/remapping\":\"2.3.5\",\"picomatch\":\"4.0.3\",\"webpack-virtual-modules\":\"0.6.2\"}},\"update-browserslist-db@1.1.3(browserslist@4.25.3)\":{\"dependencies\":{\"browserslist\":\"4.25.3\",\"escalade\":\"3.2.0\",\"picocolors\":\"1.1.1\"}},\"update-browserslist-db@1.2.3(browserslist@4.28.2)\":{\"dependencies\":{\"browserslist\":\"4.28.2\",\"escalade\":\"3.2.0\",\"picocolors\":\"1.1.1\"},\"optional\":true},\"url-parse@1.5.10\":{\"dependencies\":{\"querystringify\":\"2.2.0\",\"requires-port\":\"1.0.0\"},\"optional\":true},\"urlpattern-polyfill@10.1.0\":{\"optional\":true},\"use-sync-external-store@1.6.0(react@19.2.0)\":{\"dependencies\":{\"react\":\"19.2.0\"}},\"userhome@1.0.1\":{\"optional\":true},\"util-deprecate@1.0.2\":{},\"uuid@11.1.0\":{},\"varint@6.0.0\":{\"optional\":true},\"vfile-location@5.0.3\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"vfile\":\"6.0.3\"}},\"vfile-message@4.0.2\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"unist-util-stringify-position\":\"4.0.0\"}},\"vfile@6.0.3\":{\"dependencies\":{\"@types/unist\":\"3.0.3\",\"vfile-message\":\"4.0.2\"}},\"vite-node@3.2.4(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\":{\"dependencies\":{\"cac\":\"6.7.14\",\"debug\":\"4.4.3\",\"es-module-lexer\":\"1.7.0\",\"pathe\":\"2.0.3\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"},\"transitivePeerDependencies\":[\"@types/node\",\"jiti\",\"less\",\"lightningcss\",\"sass\",\"sass-embedded\",\"stylus\",\"sugarss\",\"supports-color\",\"terser\",\"tsx\",\"yaml\"]},\"vite-plugin-static-copy@4.1.0(vite@8.0.10(@t
ypes/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"chokidar\":\"3.6.0\",\"p-map\":\"7.0.4\",\"picocolors\":\"1.1.1\",\"tinyglobby\":\"0.2.16\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\":{\"dependencies\":{\"esbuild\":\"0.25.12\",\"fdir\":\"6.5.0(picomatch@4.0.4)\",\"picomatch\":\"4.0.4\",\"postcss\":\"8.5.6\",\"rollup\":\"4.53.2\",\"tinyglobby\":\"0.2.16\"},\"optionalDependencies\":{\"@types/node\":\"24.10.2\",\"fsevents\":\"2.3.3\",\"jiti\":\"2.6.1\",\"lightningcss\":\"1.32.0\",\"sass-embedded\":\"1.89.2\",\"terser\":\"5.36.0\",\"tsx\":\"4.20.5\",\"yaml\":\"2.8.1\"}},\"vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\":{\"dependencies\":{\"lightningcss\":\"1.32.0\",\"picomatch\":\"4.0.4\",\"postcss\":\"8.5.14\",\"rolldown\":\"1.0.0-rc.17\",\"tinyglobby\":\"0.2.16\"},\"optionalDependencies\":{\"@types/node\":\"22.15.33\",\"esbuild\":\"0.27.3\",\"fsevents\":\"2.3.3\",\"jiti\":\"2.6.1\",\"sass-embedded\":\"1.89.2\",\"terser\":\"5.36.0\",\"tsx\":\"4.20.5\",\"yaml\":\"2.8.1\"}},\"vitefu@1.1.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"optionalDependencies\":{\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\"}},\"vitest@3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@26.1.0)(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\":{\"dependencies\":{\"@types/chai\":\"5.2.2\",\"@vitest/expect\":\"3.2.4\",\
"@vitest/mocker\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/pretty-format\":\"3.2.4\",\"@vitest/runner\":\"3.2.4\",\"@vitest/snapshot\":\"3.2.4\",\"@vitest/spy\":\"3.2.4\",\"@vitest/utils\":\"3.2.4\",\"chai\":\"5.3.3\",\"debug\":\"4.4.1\",\"expect-type\":\"1.2.2\",\"magic-string\":\"0.30.18\",\"pathe\":\"2.0.3\",\"picomatch\":\"4.0.3\",\"std-env\":\"3.9.0\",\"tinybench\":\"2.9.0\",\"tinyexec\":\"0.3.2\",\"tinyglobby\":\"0.2.14\",\"tinypool\":\"1.1.1\",\"tinyrainbow\":\"2.0.0\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"vite-node\":\"3.2.4(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"why-is-node-running\":\"2.3.0\"},\"optionalDependencies\":{\"@types/debug\":\"4.1.12\",\"@types/node\":\"24.10.2\",\"@vitest/browser\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)\",\"happy-dom\":\"18.0.1\",\"jsdom\":\"26.1.0\"},\"transitivePeerDependencies\":[\"jiti\",\"less\",\"lightningcss\",\"msw\",\"sass\",\"sass-embedded\",\"stylus\",\"sugarss\",\"supports-color\",\"terser\",\"tsx\",\"yaml\"]},\"vitest@3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\":{\"dependencies\":{\"@types/chai\":\"5.2.2\",\"@vitest/expect\":\"3.2.4\",\"@vitest/mocker\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)
(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/pretty-format\":\"3.2.4\",\"@vitest/runner\":\"3.2.4\",\"@vitest/snapshot\":\"3.2.4\",\"@vitest/spy\":\"3.2.4\",\"@vitest/utils\":\"3.2.4\",\"chai\":\"5.3.3\",\"debug\":\"4.4.1\",\"expect-type\":\"1.2.2\",\"magic-string\":\"0.30.18\",\"pathe\":\"2.0.3\",\"picomatch\":\"4.0.3\",\"std-env\":\"3.9.0\",\"tinybench\":\"2.9.0\",\"tinyexec\":\"0.3.2\",\"tinyglobby\":\"0.2.14\",\"tinypool\":\"1.1.1\",\"tinyrainbow\":\"2.0.0\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"vite-node\":\"3.2.4(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"why-is-node-running\":\"2.3.0\"},\"optionalDependencies\":{\"@types/debug\":\"4.1.12\",\"@types/node\":\"24.10.2\",\"@vitest/browser\":\"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)\",\"happy-dom\":\"18.0.1\",\"jsdom\":\"27.3.0(postcss@8.5.14)\"},\"transitivePeerDependencies\":[\"jiti\",\"less\",\"lightningcss\",\"msw\",\"sass\",\"sass-embedded\",\"stylus\",\"sugarss\",\"supports-color\",\"terser\",\"tsx\",\"yaml\"]},\"vitest@4.0.18(@opentelemetry/api@1.9.0)(@types/node@24.10.2)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\":{\"dependencies\":{\"@vitest/expect\":\"4.0.18\",\"@vitest/mocker\":\"4.0.18(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/pretty-format\":\"4.0.18\",\"@vitest/runner\":\"4.0.18\",\"@vitest/snapshot\":\"4.0.18\",\"
@vitest/spy\":\"4.0.18\",\"@vitest/utils\":\"4.0.18\",\"es-module-lexer\":\"1.7.0\",\"expect-type\":\"1.2.2\",\"magic-string\":\"0.30.21\",\"obug\":\"2.1.1\",\"pathe\":\"2.0.3\",\"picomatch\":\"4.0.3\",\"std-env\":\"3.10.0\",\"tinybench\":\"2.9.0\",\"tinyexec\":\"1.0.2\",\"tinyglobby\":\"0.2.15\",\"tinyrainbow\":\"3.0.3\",\"vite\":\"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"why-is-node-running\":\"2.3.0\"},\"optionalDependencies\":{\"@opentelemetry/api\":\"1.9.0\",\"@types/node\":\"24.10.2\",\"happy-dom\":\"18.0.1\",\"jsdom\":\"27.3.0(postcss@8.5.14)\"},\"transitivePeerDependencies\":[\"jiti\",\"less\",\"lightningcss\",\"msw\",\"sass\",\"sass-embedded\",\"stylus\",\"sugarss\",\"terser\",\"tsx\",\"yaml\"]},\"vitest@4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\":{\"dependencies\":{\"@vitest/expect\":\"4.1.5\",\"@vitest/mocker\":\"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))\",\"@vitest/pretty-format\":\"4.1.5\",\"@vitest/runner\":\"4.1.5\",\"@vitest/snapshot\":\"4.1.5\",\"@vitest/spy\":\"4.1.5\",\"@vitest/utils\":\"4.1.5\",\"es-module-lexer\":\"2.1.0\",\"expect-type\":\"1.3.0\",\"magic-string\":\"0.30.21\",\"obug\":\"2.1.1\",\"pathe\":\"2.0.3\",\"picomatch\":\"4.0.4\",\"std-env\":\"4.1.0\",\"tinybench\":\"2.9.0\",\"tinyexec\":\"1.0.2\",\"tinyglobby\":\"0.2.16\",\"tinyrainbow\":\"3.1.0\",\"vite\":\"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)\",\"why-is-node-running\":\"2.3.0\"},\"optionalDependencies\":{\"@opentelemetry/api\"
:\"1.9.0\",\"@types/node\":\"22.15.33\",\"@vitest/coverage-v8\":\"4.1.5(@vitest/browser@4.1.5)(vitest@4.1.5)\",\"happy-dom\":\"18.0.1\",\"jsdom\":\"27.3.0(postcss@8.5.14)\"},\"transitivePeerDependencies\":[\"msw\"]},\"vscode-jsonrpc@8.2.0\":{},\"vscode-languageserver-protocol@3.17.5\":{\"dependencies\":{\"vscode-jsonrpc\":\"8.2.0\",\"vscode-languageserver-types\":\"3.17.5\"}},\"vscode-languageserver-textdocument@1.0.12\":{},\"vscode-languageserver-types@3.17.5\":{},\"vscode-languageserver@9.0.1\":{\"dependencies\":{\"vscode-languageserver-protocol\":\"3.17.5\"}},\"vscode-uri@3.0.8\":{},\"w3c-xmlserializer@5.0.0\":{\"dependencies\":{\"xml-name-validator\":\"5.0.0\"}},\"wait-port@1.1.0\":{\"dependencies\":{\"chalk\":\"4.1.2\",\"commander\":\"9.5.0\",\"debug\":\"4.4.3\"},\"transitivePeerDependencies\":[\"supports-color\"],\"optional\":true},\"watchpack@2.5.1\":{\"dependencies\":{\"glob-to-regexp\":\"0.4.1\",\"graceful-fs\":\"4.2.11\"},\"optional\":true},\"wcwidth@1.0.1\":{\"dependencies\":{\"defaults\":\"1.0.4\"}},\"web-namespaces@2.0.1\":{},\"web-streams-polyfill@3.3.3\":{\"optional\":true},\"web-vitals@4.2.4\":{},\"web-vitals@5.1.0\":{},\"webdriver@9.2.0\":{\"dependencies\":{\"@types/node\":\"20.19.39\",\"@types/ws\":\"8.18.1\",\"@wdio/config\":\"9.1.3\",\"@wdio/logger\":\"9.1.3\",\"@wdio/protocols\":\"9.2.0\",\"@wdio/types\":\"9.1.3\",\"@wdio/utils\":\"9.1.3\",\"deepmerge-ts\":\"7.1.5\",\"ws\":\"8.20.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"bufferutil\",\"react-native-b4a\",\"supports-color\",\"utf-8-validate\"],\"optional\":true},\"webdriverio@9.2.1\":{\"dependencies\":{\"@types/node\":\"20.19.39\",\"@types/sinonjs__fake-timers\":\"8.1.5\",\"@wdio/config\":\"9.1.3\",\"@wdio/logger\":\"9.1.3\",\"@wdio/protocols\":\"9.2.0\",\"@wdio/repl\":\"9.0.8\",\"@wdio/types\":\"9.1.3\",\"@wdio/utils\":\"9.1.3\",\"archiver\":\"7.0.1\",\"aria-query\":\"5.3.2\",\"cheerio\":\"1.2.0\",\"css-shorthand-properties\":\"1.1.2\",\"css-value\":\"0.0.1
\",\"grapheme-splitter\":\"1.0.4\",\"htmlfy\":\"0.3.2\",\"import-meta-resolve\":\"4.2.0\",\"is-plain-obj\":\"4.1.0\",\"jszip\":\"3.10.1\",\"lodash.clonedeep\":\"4.5.0\",\"lodash.zip\":\"4.2.0\",\"minimatch\":\"9.0.9\",\"query-selector-shadow-dom\":\"1.0.1\",\"resq\":\"1.11.0\",\"rgb2hex\":\"0.2.5\",\"serialize-error\":\"11.0.3\",\"urlpattern-polyfill\":\"10.1.0\",\"webdriver\":\"9.2.0\"},\"transitivePeerDependencies\":[\"bare-abort-controller\",\"bare-buffer\",\"bufferutil\",\"react-native-b4a\",\"supports-color\",\"utf-8-validate\"],\"optional\":true},\"webidl-conversions@7.0.0\":{},\"webidl-conversions@8.0.0\":{},\"webpack-sources@3.4.1\":{\"optional\":true},\"webpack-virtual-modules@0.6.2\":{},\"webpack@5.99.9(esbuild@0.27.3)\":{\"dependencies\":{\"@types/eslint-scope\":\"3.7.7\",\"@types/estree\":\"1.0.9\",\"@types/json-schema\":\"7.0.15\",\"@webassemblyjs/ast\":\"1.14.1\",\"@webassemblyjs/wasm-edit\":\"1.14.1\",\"@webassemblyjs/wasm-parser\":\"1.14.1\",\"acorn\":\"8.16.0\",\"browserslist\":\"4.28.2\",\"chrome-trace-event\":\"1.0.4\",\"enhanced-resolve\":\"5.21.0\",\"es-module-lexer\":\"1.7.0\",\"eslint-scope\":\"5.1.1\",\"events\":\"3.3.0\",\"glob-to-regexp\":\"0.4.1\",\"graceful-fs\":\"4.2.11\",\"json-parse-even-better-errors\":\"2.3.1\",\"loader-runner\":\"4.3.2\",\"mime-types\":\"2.1.35\",\"neo-async\":\"2.6.2\",\"schema-utils\":\"4.3.3\",\"tapable\":\"2.3.3\",\"terser-webpack-plugin\":\"5.5.0(esbuild@0.27.3)(webpack@5.99.9(esbuild@0.27.3))\",\"watchpack\":\"2.5.1\",\"webpack-sources\":\"3.4.1\"},\"transitivePeerDependencies\":[\"@swc/core\",\"esbuild\",\"uglify-js\"],\"optional\":true},\"whatwg-encoding@3.1.1\":{\"dependencies\":{\"iconv-lite\":\"0.6.3\"}},\"whatwg-mimetype@3.0.0\":{\"optional\":true},\"whatwg-mimetype@4.0.0\":{},\"whatwg-url@14.2.0\":{\"dependencies\":{\"tr46\":\"5.1.1\",\"webidl-conversions\":\"7.0.0\"}},\"whatwg-url@15.1.0\":{\"dependencies\":{\"tr46\":\"6.0.0\",\"webidl-conversions\":\"8.0.0\"}},\"which@2.0.2\":{\"dependencies\":{\"isex
e\":\"2.0.0\"}},\"which@4.0.0\":{\"dependencies\":{\"isexe\":\"3.1.5\"},\"optional\":true},\"why-is-node-running@2.3.0\":{\"dependencies\":{\"siginfo\":\"2.0.0\",\"stackback\":\"0.0.2\"}},\"workerd@1.20260504.1\":{\"optionalDependencies\":{\"@cloudflare/workerd-darwin-64\":\"1.20260504.1\",\"@cloudflare/workerd-darwin-arm64\":\"1.20260504.1\",\"@cloudflare/workerd-linux-64\":\"1.20260504.1\",\"@cloudflare/workerd-linux-arm64\":\"1.20260504.1\",\"@cloudflare/workerd-windows-64\":\"1.20260504.1\"}},\"wrangler@4.88.0\":{\"dependencies\":{\"@cloudflare/kv-asset-handler\":\"0.5.0\",\"@cloudflare/unenv-preset\":\"2.16.1(unenv@2.0.0-rc.24)(workerd@1.20260504.1)\",\"blake3-wasm\":\"2.1.5\",\"esbuild\":\"0.27.3\",\"miniflare\":\"4.20260504.0\",\"path-to-regexp\":\"6.3.0\",\"unenv\":\"2.0.0-rc.24\",\"workerd\":\"1.20260504.1\"},\"optionalDependencies\":{\"fsevents\":\"2.3.3\"},\"transitivePeerDependencies\":[\"bufferutil\",\"utf-8-validate\"]},\"wrap-ansi@6.2.0\":{\"dependencies\":{\"ansi-styles\":\"4.3.0\",\"string-width\":\"4.2.3\",\"strip-ansi\":\"6.0.1\"},\"optional\":true},\"wrap-ansi@7.0.0\":{\"dependencies\":{\"ansi-styles\":\"4.3.0\",\"string-width\":\"4.2.3\",\"strip-ansi\":\"6.0.1\"}},\"wrap-ansi@8.1.0\":{\"dependencies\":{\"ansi-styles\":\"6.2.1\",\"string-width\":\"5.1.2\",\"strip-ansi\":\"7.1.2\"}},\"wrappy@1.0.2\":{},\"ws@8.18.0\":{},\"ws@8.18.3\":{},\"ws@8.20.0\":{},\"xml-name-validator@5.0.0\":{},\"xmlbuilder2@4.0.3\":{\"dependencies\":{\"@oozcitak/dom\":\"2.0.2\",\"@oozcitak/infra\":\"2.0.2\",\"@oozcitak/util\":\"10.0.0\",\"js-yaml\":\"4.1.1\"}},\"xmlchars@2.2.0\":{},\"y18n@5.0.8\":{},\"yallist@3.1.1\":{},\"yallist@4.0.0\":{},\"yaml@2.8.1\":{},\"yargs-parser@21.1.1\":{},\"yargs-parser@22.0.0\":{},\"yargs@17.7.2\":{\"dependencies\":{\"cliui\":\"8.0.1\",\"escalade\":\"3.2.0\",\"get-caller-file\":\"2.0.5\",\"require-directory\":\"2.1.1\",\"string-width\":\"4.2.3\",\"y18n\":\"5.0.8\",\"yargs-parser\":\"21.1.1\"}},\"yauzl@2.10.0\":{\"dependencies\":{\"buffer-crc32
\":\"0.2.13\",\"fd-slicer\":\"1.1.0\"},\"optional\":true},\"yoctocolors-cjs@2.1.3\":{\"optional\":true},\"youch-core@0.3.3\":{\"dependencies\":{\"@poppinss/exception\":\"1.2.2\",\"error-stack-parser-es\":\"1.0.5\"}},\"youch@4.1.0-beta.10\":{\"dependencies\":{\"@poppinss/colors\":\"4.1.5\",\"@poppinss/dumper\":\"0.6.5\",\"@speed-highlight/core\":\"1.2.12\",\"cookie\":\"1.0.2\",\"youch-core\":\"0.3.3\"}},\"zip-stream@6.0.1\":{\"dependencies\":{\"archiver-utils\":\"5.0.2\",\"compress-commons\":\"6.0.2\",\"readable-stream\":\"4.7.0\"},\"optional\":true},\"zod@3.25.76\":{},\"zwitch@2.0.4\":{}}}"
  },
  {
    "path": "packages/engine/benches/physical_layout/backend_kv.rs",
    "content": "use std::sync::Arc;\n\nuse criterion::{black_box, BatchSize, Criterion};\nuse lix_engine::storage_bench::{self, StorageBenchSelectivity};\nuse lix_engine::Backend;\nuse tokio::runtime::Runtime;\n\nuse crate::{Args, RocksDbBenchBackend, SqliteBenchBackend};\n\ntype BackendFactory = fn() -> Arc<dyn Backend + Send + Sync>;\n\n#[derive(Clone, Copy)]\nstruct BackendProfile {\n    name: &'static str,\n    create: BackendFactory,\n}\n\npub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) {\n    for profile in physical_backends() {\n        bench_fast(c, runtime, args, profile);\n        bench_full(c, runtime, args, profile);\n    }\n}\n\nfn bench_fast(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) {\n    let mut group = c.benchmark_group(format!(\"physical_layout/backend_kv/fast/{}\", profile.name));\n\n    group.bench_function(\"write_batch_put/10k\", |b| {\n        b.iter_batched(\n            || (profile.create)(),\n            |backend| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_write_kv_batch_puts(\n                            backend, args.rows,\n                        ))\n                        .expect(\"physical_layout/backend_kv write_batch_put succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"mixed_put_delete/10k\", |b| {\n        b.iter_batched(\n            || (profile.create)(),\n            |backend| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_write_kv_batch_mixed_put_delete(\n                            backend, args.rows,\n                        ))\n                        .expect(\"physical_layout/backend_kv mixed_put_delete succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    
group.bench_function(\"get_values_hit/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, profile, args.rows),\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_get_values_hits_prepared(\n                            &fixture, args.rows,\n                        ))\n                        .expect(\"physical_layout/backend_kv get_values_hit succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"scan_keys_prefix/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, profile, args.rows),\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_scan_keys_prefix_prepared(\n                            &fixture, args.rows,\n                        ))\n                        .expect(\"physical_layout/backend_kv scan_keys_prefix succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn bench_full(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) {\n    let mut group = c.benchmark_group(format!(\"physical_layout/backend_kv/full/{}\", profile.name));\n\n    for rows in [1_000usize, 10_000, 50_000] {\n        group.bench_function(format!(\"write_batch_put/{}\", label(rows)), |b| {\n            b.iter_batched(\n                || (profile.create)(),\n                |backend| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::storage_api_write_kv_batch_puts(\n                                backend, rows,\n                            ))\n                            .expect(\"physical_layout/backend_kv full write_batch_put succeeds\"),\n                    )\n                },\n                
BatchSize::LargeInput,\n            )\n        });\n    }\n\n    group.bench_function(\"write_batch_value_size_1k/10k\", |b| {\n        b.iter_batched(\n            || (profile.create)(),\n            |backend| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_write_kv_batch_value_size(\n                            backend, args.rows, 1024,\n                        ))\n                        .expect(\"physical_layout/backend_kv value_size_1k succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"get_values_miss/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, profile, args.rows),\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_get_values_misses_prepared(\n                            &fixture, args.rows,\n                        ))\n                        .expect(\"physical_layout/backend_kv get_values_miss succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"get_values_mixed_hit_miss/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, profile, args.rows),\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(\n                            storage_bench::storage_api_get_values_mixed_hit_miss_prepared(\n                                &fixture, args.rows,\n                            ),\n                        )\n                        .expect(\"physical_layout/backend_kv get_values_mixed succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"scan_keys_after_pages/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, profile, 
args.rows),\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_scan_keys_after_pages_prepared(\n                            &fixture, 1024,\n                        ))\n                        .expect(\"physical_layout/backend_kv scan_keys_after_pages succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    for selectivity in [\n        StorageBenchSelectivity::Percent1,\n        StorageBenchSelectivity::Percent10,\n    ] {\n        let label = match selectivity {\n            StorageBenchSelectivity::Percent1 => \"1pct\",\n            StorageBenchSelectivity::Percent10 => \"10pct\",\n            StorageBenchSelectivity::Percent100 => \"100pct\",\n        };\n        group.bench_function(format!(\"scan_keys_selective_prefix_{label}/10k\"), |b| {\n            b.iter_batched(\n                || prepare_selective_scan(runtime, profile, args.rows, selectivity),\n                |fixture| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::storage_api_scan_keys_selective_prefix_prepared(\n                                    &fixture,\n                                    selectivity,\n                                ),\n                            )\n                            .expect(\"physical_layout/backend_kv selective scan succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    group.finish();\n}\n\nfn prepare_read(\n    runtime: &Runtime,\n    profile: BackendProfile,\n    rows: usize,\n) -> storage_bench::StorageApiFixture {\n    runtime\n        .block_on(storage_bench::prepare_storage_api_read(\n            (profile.create)(),\n            rows,\n        ))\n        .expect(\"prepare physical_layout/backend_kv read\")\n}\n\nfn 
prepare_selective_scan(\n    runtime: &Runtime,\n    profile: BackendProfile,\n    rows: usize,\n    selectivity: StorageBenchSelectivity,\n) -> storage_bench::StorageApiFixture {\n    runtime\n        .block_on(storage_bench::prepare_storage_api_selective_scan(\n            (profile.create)(),\n            rows,\n            selectivity,\n        ))\n        .expect(\"prepare physical_layout/backend_kv selective scan\")\n}\n\nfn physical_backends() -> [BackendProfile; 2] {\n    [\n        BackendProfile {\n            name: \"sqlite_tempfile\",\n            create: sqlite_tempfile_backend,\n        },\n        BackendProfile {\n            name: \"rocksdb_tempdir\",\n            create: rocksdb_backend,\n        },\n    ]\n}\n\nfn sqlite_tempfile_backend() -> Arc<dyn Backend + Send + Sync> {\n    Arc::new(SqliteBenchBackend::tempfile().expect(\"create sqlite tempfile bench backend\"))\n}\n\nfn rocksdb_backend() -> Arc<dyn Backend + Send + Sync> {\n    Arc::new(RocksDbBenchBackend::new().expect(\"create rocksdb bench backend\"))\n}\n\nfn label(rows: usize) -> &'static str {\n    match rows {\n        1_000 => \"1k\",\n        10_000 => \"10k\",\n        50_000 => \"50k\",\n        _ => \"rows\",\n    }\n}\n"
  },
  {
    "path": "packages/engine/benches/physical_layout/changelog.rs",
    "content": "use std::sync::Arc;\nuse std::time::Duration;\n\nuse criterion::{black_box, BatchSize, Criterion};\nuse lix_engine::storage_bench::{self, StorageBenchConfig};\nuse lix_engine::Backend;\nuse tokio::runtime::Runtime;\n\nuse crate::{Args, RocksDbBenchBackend, SqliteBenchBackend};\n\ntype BackendFactory = fn() -> Arc<dyn Backend + Send + Sync>;\n\n#[derive(Clone, Copy)]\nstruct BackendProfile {\n    name: &'static str,\n    create: BackendFactory,\n}\n\npub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) {\n    for profile in physical_backends() {\n        bench_smoke(c, runtime, args, profile);\n        bench_fast(c, runtime, args, profile);\n        bench_full(c, runtime, args, profile);\n    }\n}\n\nfn bench_smoke(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) {\n    let smoke = args.config().with_rows(1_000);\n    let mut group = c.benchmark_group(format!(\"physical_layout/changelog/smoke/{}\", profile.name));\n    group.sample_size(10);\n    group.warm_up_time(Duration::from_millis(250));\n    group.measurement_time(Duration::from_secs(1));\n\n    group.bench_function(\"append_changes/1k\", |b| {\n        b.iter_batched(\n            || prepare_append(runtime, smoke, profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_append_changes_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/changelog smoke append_changes succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"scan_change_set/1k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, smoke, profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        
.block_on(storage_bench::changelog_scan_change_set_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/changelog smoke scan_change_set succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"load_changes_hit/1k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, smoke, profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_load_changes_hit_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/changelog smoke load_changes_hit succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn bench_fast(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) {\n    let mut group = c.benchmark_group(format!(\"physical_layout/changelog/fast/{}\", profile.name));\n\n    group.bench_function(\"append_changes/10k\", |b| {\n        b.iter_batched(\n            || prepare_append(runtime, args.config(), profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_append_changes_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/changelog append_changes succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"scan_change_set/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args.config(), profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        
.block_on(storage_bench::changelog_scan_change_set_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/changelog scan_change_set succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"load_changes_hit/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args.config(), profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_load_changes_hit_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/changelog load_changes_hit succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn bench_full(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) {\n    let mut group = c.benchmark_group(format!(\"physical_layout/changelog/full/{}\", profile.name));\n\n    for rows in [1_000usize, 10_000, 50_000] {\n        let config = args.config().with_rows(rows);\n        group.bench_function(format!(\"append_changes/{}\", label(rows)), |b| {\n            b.iter_batched(\n                || prepare_append(runtime, config, profile),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::changelog_append_changes_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"physical_layout/changelog full append succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n        group.bench_function(format!(\"scan_change_set/{}\", label(rows)), |b| {\n            b.iter_batched(\n     
           || prepare_read(runtime, config, profile),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::changelog_scan_change_set_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"physical_layout/changelog full scan_change_set succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n        group.bench_function(format!(\"scan_all/{}\", label(rows)), |b| {\n            b.iter_batched(\n                || prepare_read(runtime, config, profile),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::changelog_scan_all_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"physical_layout/changelog full scan_all succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n        group.bench_function(format!(\"load_changes_hit/{}\", label(rows)), |b| {\n            b.iter_batched(\n                || prepare_read(runtime, config, profile),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::changelog_load_changes_hit_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"physical_layout/changelog full load_changes_hit succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n        group.bench_function(format!(\"load_changes_miss/{}\", label(rows)), |b| {\n            b.iter_batched(\n                || 
prepare_read(runtime, config, profile),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::changelog_load_changes_miss_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"physical_layout/changelog full load_changes_miss succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    group.finish();\n}\n\nfn prepare_append(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::ChangelogAppendFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_changelog_append_changes(config))\n        .expect(\"prepare physical_layout/changelog append\");\n    (backend, fixture)\n}\n\nfn prepare_read(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::ChangelogReadFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_changelog_read(&backend, config))\n        .expect(\"prepare physical_layout/changelog read\");\n    (backend, fixture)\n}\n\nfn physical_backends() -> [BackendProfile; 2] {\n    [\n        BackendProfile {\n            name: \"sqlite_tempfile\",\n            create: sqlite_tempfile_backend,\n        },\n        BackendProfile {\n            name: \"rocksdb_tempdir\",\n            create: rocksdb_backend,\n        },\n    ]\n}\n\nfn sqlite_tempfile_backend() -> Arc<dyn Backend + Send + Sync> {\n    Arc::new(SqliteBenchBackend::tempfile().expect(\"create sqlite tempfile bench backend\"))\n}\n\nfn rocksdb_backend() -> Arc<dyn Backend + Send + Sync> {\n    
Arc::new(RocksDbBenchBackend::new().expect(\"create rocksdb bench backend\"))\n}\n\nfn label(rows: usize) -> &'static str {\n    match rows {\n        1_000 => \"1k\",\n        10_000 => \"10k\",\n        50_000 => \"50k\",\n        _ => \"rows\",\n    }\n}\n"
  },
  {
    "path": "packages/engine/benches/physical_layout/json_store.rs",
    "content": "use std::sync::Arc;\n\nuse criterion::{black_box, BatchSize, Criterion};\nuse lix_engine::storage_bench::{\n    self, JsonStorePayloadShape, JsonStoreProjectionShape, JsonStoreReadFixture,\n};\nuse lix_engine::Backend;\nuse tokio::runtime::Runtime;\n\nuse crate::{Args, RocksDbBenchBackend, SqliteBenchBackend};\n\ntype BackendFactory = fn() -> Arc<dyn Backend + Send + Sync>;\n\n#[derive(Clone, Copy)]\nstruct BackendProfile {\n    name: &'static str,\n    create: BackendFactory,\n}\n\npub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) {\n    for profile in physical_backends() {\n        bench_fast(c, runtime, args, profile);\n        bench_full(c, runtime, args, profile);\n    }\n}\n\nfn bench_fast(c: &mut Criterion, runtime: &Runtime, _args: Args, profile: BackendProfile) {\n    let mut group = c.benchmark_group(format!(\"physical_layout/json_store/fast/{}\", profile.name));\n\n    group.bench_function(\"write_unique_1k/10k\", |b| {\n        b.iter_batched(\n            || prepare_write(runtime, JsonStorePayloadShape::SmallRaw1k, 10_000),\n            |fixture| {\n                let backend = (profile.create)();\n                black_box(\n                    runtime\n                        .block_on(storage_bench::json_store_write_prepared(&backend, &fixture))\n                        .expect(\"physical_layout/json_store write_unique_1k succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"write_same_1k/10k\", |b| {\n        b.iter_batched(\n            || prepare_write_dedupe(runtime, JsonStorePayloadShape::SmallRaw1k, 10_000),\n            |fixture| {\n                let backend = (profile.create)();\n                black_box(\n                    runtime\n                        .block_on(storage_bench::json_store_write_prepared(&backend, &fixture))\n                        .expect(\"physical_layout/json_store write_same_1k succeeds\"),\n       
         )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"read_bytes_1k/10k\", |b| {\n        b.iter_batched(\n            || {\n                prepare_read(\n                    runtime,\n                    profile,\n                    JsonStorePayloadShape::SmallRaw1k,\n                    10_000,\n                    JsonStoreProjectionShape::TopLevelTarget,\n                )\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::json_store_read_bytes_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/json_store read_bytes_1k succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn bench_full(c: &mut Criterion, runtime: &Runtime, _args: Args, profile: BackendProfile) {\n    let mut group = c.benchmark_group(format!(\"physical_layout/json_store/full/{}\", profile.name));\n\n    for (name, shape, rows, dedupe) in [\n        (\n            \"write_unique_1k/10k\",\n            JsonStorePayloadShape::SmallRaw1k,\n            10_000usize,\n            false,\n        ),\n        (\n            \"write_same_1k/10k\",\n            JsonStorePayloadShape::SmallRaw1k,\n            10_000,\n            true,\n        ),\n        (\n            \"write_unique_16k/1k\",\n            JsonStorePayloadShape::MediumStructured16k,\n            1_000,\n            false,\n        ),\n        (\n            \"write_same_16k/1k\",\n            JsonStorePayloadShape::MediumStructured16k,\n            1_000,\n            true,\n        ),\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    if dedupe {\n                        prepare_write_dedupe(runtime, shape, rows)\n                    } 
else {\n                        prepare_write(runtime, shape, rows)\n                    }\n                },\n                |fixture| {\n                    let backend = (profile.create)();\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::json_store_write_prepared(&backend, &fixture))\n                            .expect(\"physical_layout/json_store full write succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    for (name, shape, rows) in [\n        (\n            \"read_bytes_1k/10k\",\n            JsonStorePayloadShape::SmallRaw1k,\n            10_000usize,\n        ),\n        (\n            \"read_bytes_16k/1k\",\n            JsonStorePayloadShape::MediumStructured16k,\n            1_000,\n        ),\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    prepare_read(\n                        runtime,\n                        profile,\n                        shape,\n                        rows,\n                        JsonStoreProjectionShape::TopLevelTarget,\n                    )\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::json_store_read_bytes_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"physical_layout/json_store full read_bytes succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    group.bench_function(\"read_projection_top_level_128k/50\", |b| {\n        b.iter_batched(\n            || {\n                prepare_read(\n                    runtime,\n                    profile,\n                    
JsonStorePayloadShape::LargeStructured128k,\n                    50,\n                    JsonStoreProjectionShape::TopLevelTarget,\n                )\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::json_store_read_projection_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/json_store projection succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"write_against_base_object_update_1_of_1000/50\", |b| {\n        b.iter_batched(\n            || {\n                let backend = (profile.create)();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_json_store_base_update_object(\n                        &backend, 50,\n                    ))\n                    .expect(\"prepare physical_layout/json_store base update object\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(\n                            storage_bench::json_store_write_against_base_object_prepared(\n                                &backend, &fixture,\n                            ),\n                        )\n                        .expect(\"physical_layout/json_store base update object succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn prepare_write(\n    runtime: &Runtime,\n    shape: JsonStorePayloadShape,\n    rows: usize,\n) -> storage_bench::JsonStoreWriteFixture {\n    runtime\n        .block_on(storage_bench::prepare_json_store_write(shape, rows))\n        .expect(\"prepare physical_layout/json_store write\")\n}\n\nfn prepare_write_dedupe(\n    runtime: &Runtime,\n 
   shape: JsonStorePayloadShape,\n    rows: usize,\n) -> storage_bench::JsonStoreWriteFixture {\n    runtime\n        .block_on(storage_bench::prepare_json_store_write_dedupe(shape, rows))\n        .expect(\"prepare physical_layout/json_store write dedupe\")\n}\n\nfn prepare_read(\n    runtime: &Runtime,\n    profile: BackendProfile,\n    shape: JsonStorePayloadShape,\n    rows: usize,\n    projection: JsonStoreProjectionShape,\n) -> (Arc<dyn Backend + Send + Sync>, JsonStoreReadFixture) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_json_store_projection_read(\n            &backend, shape, rows, projection,\n        ))\n        .expect(\"prepare physical_layout/json_store read\");\n    (backend, fixture)\n}\n\nfn physical_backends() -> [BackendProfile; 2] {\n    [\n        BackendProfile {\n            name: \"sqlite_tempfile\",\n            create: sqlite_tempfile_backend,\n        },\n        BackendProfile {\n            name: \"rocksdb_tempdir\",\n            create: rocksdb_backend,\n        },\n    ]\n}\n\nfn sqlite_tempfile_backend() -> Arc<dyn Backend + Send + Sync> {\n    Arc::new(SqliteBenchBackend::tempfile().expect(\"create sqlite tempfile bench backend\"))\n}\n\nfn rocksdb_backend() -> Arc<dyn Backend + Send + Sync> {\n    Arc::new(RocksDbBenchBackend::new().expect(\"create rocksdb bench backend\"))\n}\n"
  },
  {
    "path": "packages/engine/benches/physical_layout/main.rs",
    "content": "use criterion::{criterion_group, criterion_main, Criterion};\nuse lix_engine::storage_bench::{\n    StorageBenchConfig, StorageBenchKeyPattern, StorageBenchSelectivity, StorageBenchUpdateFraction,\n};\n\n#[path = \"../storage/rocksdb_backend.rs\"]\nmod rocksdb_backend;\n#[path = \"../storage/sqlite_backend.rs\"]\nmod sqlite_backend;\n\nmod backend_kv;\nmod changelog;\nmod json_store;\nmod tracked_state;\nmod workflow;\n\nuse rocksdb_backend::RocksDbBenchBackend;\nuse sqlite_backend::SqliteBenchBackend;\n\nconst BENCH_ROWS: usize = 10_000;\nconst BENCH_BLOB_BYTES: usize = 1024;\nconst BENCH_STATE_PAYLOAD_BYTES: usize = 256;\n\n#[derive(Debug, Clone, Copy)]\npub(crate) struct Args {\n    pub(crate) rows: usize,\n    pub(crate) blob_bytes: usize,\n    pub(crate) state_payload_bytes: usize,\n}\n\nimpl Default for Args {\n    fn default() -> Self {\n        Self {\n            rows: BENCH_ROWS,\n            blob_bytes: BENCH_BLOB_BYTES,\n            state_payload_bytes: BENCH_STATE_PAYLOAD_BYTES,\n        }\n    }\n}\n\nimpl Args {\n    pub(crate) fn config(self) -> StorageBenchConfig {\n        StorageBenchConfig {\n            rows: self.rows,\n            blob_bytes: self.blob_bytes,\n            state_payload_bytes: self.state_payload_bytes,\n            key_pattern: StorageBenchKeyPattern::Sequential,\n            selectivity: StorageBenchSelectivity::Percent100,\n            update_fraction: StorageBenchUpdateFraction::Percent100,\n        }\n    }\n}\n\nfn physical_layout_benches(c: &mut Criterion) {\n    let runtime = tokio::runtime::Builder::new_current_thread()\n        .enable_all()\n        .build()\n        .expect(\"create tokio runtime for physical layout benchmarks\");\n    let args = Args::default();\n\n    backend_kv::bench(c, &runtime, args);\n    changelog::bench(c, &runtime, args);\n    tracked_state::bench(c, &runtime, args);\n    json_store::bench(c, &runtime, args);\n    workflow::bench(c, &runtime, 
args);\n}\n\ncriterion_group!(benches, physical_layout_benches);\ncriterion_main!(benches);\n"
  },
  {
    "path": "packages/engine/benches/physical_layout/tracked_state.rs",
    "content": "use std::sync::Arc;\nuse std::time::Duration;\n\nuse criterion::{black_box, BatchSize, Criterion};\nuse lix_engine::storage_bench::{self, StorageBenchConfig, StorageBenchSelectivity};\nuse lix_engine::Backend;\nuse tokio::runtime::Runtime;\n\nuse crate::{Args, RocksDbBenchBackend, SqliteBenchBackend};\n\ntype BackendFactory = fn() -> Arc<dyn Backend + Send + Sync>;\n\n#[derive(Clone, Copy)]\nstruct BackendProfile {\n    name: &'static str,\n    create: BackendFactory,\n}\n\npub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) {\n    for profile in physical_backends() {\n        bench_smoke(c, runtime, args, profile);\n        bench_fast(c, runtime, args, profile);\n        bench_full(c, runtime, args, profile);\n    }\n}\n\nfn bench_smoke(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) {\n    let smoke = args\n        .config()\n        .with_rows(1_000)\n        .with_state_payload_bytes(1024);\n    let mut group = c.benchmark_group(format!(\n        \"physical_layout/tracked_state/smoke/{}\",\n        profile.name\n    ));\n    group.sample_size(10);\n    group.warm_up_time(Duration::from_millis(250));\n    group.measurement_time(Duration::from_secs(1));\n\n    group.bench_function(\"write_root_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_write_root(runtime, smoke, profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_write_root_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/tracked_state smoke write_root succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"scan_headers_only_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, smoke, profile),\n            
|(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_headers_only_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/tracked_state smoke headers succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"scan_full_rows_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, smoke, profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_full_rows_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/tracked_state smoke full rows succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"scan_file_header_selective_10pct_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || {\n                prepare_read_file_selective(\n                    runtime,\n                    smoke.with_selectivity(StorageBenchSelectivity::Percent10),\n                    profile,\n                )\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(\n                            storage_bench::tracked_state_scan_file_header_selective_prepared(\n                                &backend, &fixture,\n                            ),\n                        )\n                        .expect(\"physical_layout/tracked_state smoke file headers succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"diff_update_1pct_payload_1k/1k\", |b| {\n        b.iter_batched(\n     
       || prepare_diff_update_rows(runtime, smoke, profile, 10),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_diff_commits_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/tracked_state smoke diff succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn bench_fast(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) {\n    let mut group = c.benchmark_group(format!(\n        \"physical_layout/tracked_state/fast/{}\",\n        profile.name\n    ));\n\n    group.bench_function(\"write_root/10k\", |b| {\n        b.iter_batched(\n            || prepare_write_root(runtime, args.config(), profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_write_root_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/tracked_state write_root succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"write_root_payload_1k/10k\", |b| {\n        b.iter_batched(\n            || {\n                prepare_write_root(\n                    runtime,\n                    args.config().with_state_payload_bytes(1024),\n                    profile,\n                )\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_write_root_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/tracked_state write_root_payload_1k succeeds\"),\n 
               )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"update_existing_1pct/10k\", |b| {\n        b.iter_batched(\n            || prepare_update_rows(runtime, args.config(), profile, args.rows / 100),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_update_existing_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/tracked_state update_existing_1pct succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"update_existing_10pct/10k\", |b| {\n        b.iter_batched(\n            || prepare_update_rows(runtime, args.config(), profile, args.rows / 10),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_update_existing_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/tracked_state update_existing_10pct succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"tombstone_10pct/10k\", |b| {\n        b.iter_batched(\n            || prepare_tombstone_rows(runtime, args.config(), profile, args.rows / 10),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_update_existing_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/tracked_state tombstone_10pct succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    
group.bench_function(\"read_point_hit/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args.config(), profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_read_point_hit_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/tracked_state read_point_hit succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"scan_headers_only/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args.config(), profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_headers_only_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/tracked_state scan_headers_only succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"scan_full_rows/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args.config(), profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_full_rows_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/tracked_state scan_full_rows succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"scan_file_header_selective_10pct_payload_1k/10k\", |b| {\n        b.iter_batched(\n            || {\n                prepare_read_file_selective(\n                    runtime,\n                    
args.config()\n                        .with_state_payload_bytes(1024)\n                        .with_selectivity(StorageBenchSelectivity::Percent10),\n                    profile,\n                )\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(\n                            storage_bench::tracked_state_scan_file_header_selective_prepared(\n                                &backend, &fixture,\n                            ),\n                        )\n                        .expect(\"physical_layout/tracked_state file header scan succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"diff_update_1pct/10k\", |b| {\n        b.iter_batched(\n            || prepare_diff_update_rows(runtime, args.config(), profile, args.rows / 100),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_diff_commits_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/tracked_state diff_update_1pct succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn bench_full(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) {\n    let mut group = c.benchmark_group(format!(\n        \"physical_layout/tracked_state/full/{}\",\n        profile.name\n    ));\n\n    for rows in [1_000usize, 10_000, 50_000] {\n        let config = args.config().with_rows(rows);\n        group.bench_function(format!(\"write_root/{}\", label(rows)), |b| {\n            b.iter_batched(\n                || prepare_write_root(runtime, config, profile),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n          
                  .block_on(storage_bench::tracked_state_write_root_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"physical_layout/tracked_state full write_root succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n        group.bench_function(format!(\"read_point_hit/{}\", label(rows)), |b| {\n            b.iter_batched(\n                || prepare_read(runtime, config, profile),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_read_point_hit_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"physical_layout/tracked_state full point_hit succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n        group.bench_function(format!(\"scan_headers_only/{}\", label(rows)), |b| {\n            b.iter_batched(\n                || prepare_read(runtime, config, profile),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_scan_headers_only_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"physical_layout/tracked_state full headers succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n        group.bench_function(format!(\"scan_full_rows/{}\", label(rows)), |b| {\n            b.iter_batched(\n                || prepare_read(runtime, config, profile),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                          
  .block_on(storage_bench::tracked_state_scan_full_rows_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"physical_layout/tracked_state full full_rows succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    for (name, config) in [\n        (\n            \"write_root_payload_1k/10k\",\n            args.config().with_state_payload_bytes(1024),\n        ),\n        (\n            \"write_root_payload_16k/1k\",\n            args.config()\n                .with_rows(1_000)\n                .with_state_payload_bytes(16 * 1024),\n        ),\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_write_root(runtime, config, profile),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_write_root_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"physical_layout/tracked_state full payload write succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    for (name, changed_rows, tombstone) in [\n        (\"diff_equal/10k\", 0usize, false),\n        (\"diff_update_1pct/10k\", args.rows / 100, false),\n        (\"diff_update_10pct/10k\", args.rows / 10, false),\n        (\"diff_tombstone_10pct/10k\", args.rows / 10, true),\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    if changed_rows == 0 {\n                        prepare_diff_equal(runtime, args.config(), profile)\n                    } else if tombstone {\n                        prepare_diff_tombstone_rows(runtime, args.config(), profile, changed_rows)\n   
                 } else {\n                        prepare_diff_update_rows(runtime, args.config(), profile, changed_rows)\n                    }\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_diff_commits_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"physical_layout/tracked_state full diff succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    group.finish();\n}\n\nfn prepare_write_root(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateWriteRootFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_write_root(config))\n        .expect(\"prepare physical_layout/tracked_state write root\");\n    (backend, fixture)\n}\n\nfn prepare_read(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateReadFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_read(&backend, config))\n        .expect(\"prepare physical_layout/tracked_state read\");\n    (backend, fixture)\n}\n\nfn prepare_read_file_selective(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateReadFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_read_file_selective(\n            &backend, config,\n        ))\n        
.expect(\"prepare physical_layout/tracked_state file-selective read\");\n    (backend, fixture)\n}\n\nfn prepare_update_rows(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n    rows: usize,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateUpdateFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_update_rows(\n            &backend, config, rows,\n        ))\n        .expect(\"prepare physical_layout/tracked_state update rows\");\n    (backend, fixture)\n}\n\nfn prepare_tombstone_rows(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n    rows: usize,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateUpdateFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_tombstone_rows(\n            &backend, config, rows,\n        ))\n        .expect(\"prepare physical_layout/tracked_state tombstone rows\");\n    (backend, fixture)\n}\n\nfn prepare_diff_equal(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateDiffFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_diff_equal(\n            &backend, config,\n        ))\n        .expect(\"prepare physical_layout/tracked_state diff equal\");\n    (backend, fixture)\n}\n\nfn prepare_diff_update_rows(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n    rows: usize,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateDiffFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_diff_update_rows(\n         
   &backend, config, rows,\n        ))\n        .expect(\"prepare physical_layout/tracked_state diff update\");\n    (backend, fixture)\n}\n\nfn prepare_diff_tombstone_rows(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n    rows: usize,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateDiffFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_diff_tombstone_rows(\n            &backend, config, rows,\n        ))\n        .expect(\"prepare physical_layout/tracked_state diff tombstone\");\n    (backend, fixture)\n}\n\nfn physical_backends() -> [BackendProfile; 2] {\n    [\n        BackendProfile {\n            name: \"sqlite_tempfile\",\n            create: sqlite_tempfile_backend,\n        },\n        BackendProfile {\n            name: \"rocksdb_tempdir\",\n            create: rocksdb_backend,\n        },\n    ]\n}\n\nfn sqlite_tempfile_backend() -> Arc<dyn Backend + Send + Sync> {\n    Arc::new(SqliteBenchBackend::tempfile().expect(\"create sqlite tempfile bench backend\"))\n}\n\nfn rocksdb_backend() -> Arc<dyn Backend + Send + Sync> {\n    Arc::new(RocksDbBenchBackend::new().expect(\"create rocksdb bench backend\"))\n}\n\nfn label(rows: usize) -> &'static str {\n    match rows {\n        1_000 => \"1k\",\n        10_000 => \"10k\",\n        50_000 => \"50k\",\n        _ => \"rows\",\n    }\n}\n"
  },
  {
    "path": "packages/engine/benches/physical_layout/workflow.rs",
    "content": "use std::sync::Arc;\nuse std::time::Duration;\n\nuse criterion::{black_box, BatchSize, Criterion};\nuse lix_engine::storage_bench::{self, StorageBenchConfig};\nuse lix_engine::Backend;\nuse tokio::runtime::Runtime;\n\nuse crate::{Args, RocksDbBenchBackend, SqliteBenchBackend};\n\ntype BackendFactory = fn() -> Arc<dyn Backend + Send + Sync>;\n\n#[derive(Clone, Copy)]\nstruct BackendProfile {\n    name: &'static str,\n    create: BackendFactory,\n}\n\npub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) {\n    for profile in physical_backends() {\n        bench_smoke(c, runtime, args, profile);\n        bench_fast(c, runtime, args, profile);\n        bench_full(c, runtime, args, profile);\n    }\n}\n\nfn bench_smoke(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) {\n    let smoke = args\n        .config()\n        .with_rows(1_000)\n        .with_state_payload_bytes(1024);\n    let mut group = c.benchmark_group(format!(\"physical_layout/workflow/smoke/{}\", profile.name));\n    group.sample_size(10);\n    group.warm_up_time(Duration::from_millis(250));\n    group.measurement_time(Duration::from_secs(1));\n\n    group.bench_function(\"insert_tracked_commit_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_insert_tracked_commit(runtime, smoke, profile),\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(run_insert_tracked_commit(fixture))\n                        .expect(\"physical_layout/workflow smoke insert succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"update_tracked_commit_1pct_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_update_tracked_commit(runtime, smoke, profile, 10),\n            |fixture| {\n                black_box(\n                    runtime\n                        
.block_on(run_update_tracked_commit(fixture))\n                        .expect(\"physical_layout/workflow smoke update succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"diff_update_1pct_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_diff_update(runtime, smoke, profile, 10),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_diff_commits_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow smoke diff succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_tracked_commit_point_hit_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_select_tracked_commit(runtime, smoke, profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_read_point_hit_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow smoke point select succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_tracked_commit_headers_only_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_select_tracked_commit(runtime, smoke, profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_headers_only_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow smoke header select succeeds\"),\n                
)\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_tracked_commit_full_rows_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_select_tracked_commit(runtime, smoke, profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_full_rows_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow smoke full-row select succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\n        \"select_tracked_commit_file_selective_10pct_payload_1k/1k\",\n        |b| {\n            b.iter_batched(\n                || {\n                    prepare_select_tracked_commit_file_selective(\n                        runtime,\n                        smoke.with_selectivity(storage_bench::StorageBenchSelectivity::Percent10),\n                        profile,\n                    )\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::tracked_state_scan_file_header_selective_prepared(\n                                    &backend, &fixture,\n                                ),\n                            )\n                            .expect(\n                                \"physical_layout/workflow smoke file-selective select succeeds\",\n                            ),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        },\n    );\n\n    group.bench_function(\"select_after_1pct_update_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_select_after_update(runtime, smoke, profile, 10),\n            
|(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_full_rows_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow smoke select-after-update succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_delta_chain_10x1pct_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_select_delta_chain(runtime, smoke, profile, 10, 10),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_full_rows_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow smoke select delta chain succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_materialized_delta_chain_10x1pct_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_select_materialized_delta_chain(runtime, smoke, profile, 10, 10),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_full_rows_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\n                            \"physical_layout/workflow smoke select materialized delta chain succeeds\",\n                        ),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"diff_delta_chain_10x1pct_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_diff_delta_chain(runtime, smoke, profile, 10, 10),\n            |(backend, 
fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_diff_commits_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow smoke diff delta chain succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"materialize_delta_chain_10x1pct_payload_1k/1k\", |b| {\n        b.iter_batched(\n            || prepare_materialize_delta_chain(runtime, smoke, profile, 10, 10),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_materialize_root_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow smoke materialize delta chain succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn bench_fast(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) {\n    let mut group = c.benchmark_group(format!(\"physical_layout/workflow/fast/{}\", profile.name));\n\n    group.bench_function(\"insert_tracked_commit_payload_1k/10k\", |b| {\n        b.iter_batched(\n            || {\n                prepare_insert_tracked_commit(\n                    runtime,\n                    args.config().with_state_payload_bytes(1024),\n                    profile,\n                )\n            },\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(run_insert_tracked_commit(fixture))\n                        .expect(\"physical_layout/workflow insert tracked commit succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    
group.bench_function(\"update_tracked_commit_1pct/10k\", |b| {\n        b.iter_batched(\n            || prepare_update_tracked_commit(runtime, args.config(), profile, args.rows / 100),\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(run_update_tracked_commit(fixture))\n                        .expect(\"physical_layout/workflow update tracked commit succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"diff_update_1pct/10k\", |b| {\n        b.iter_batched(\n            || prepare_diff_update(runtime, args.config(), profile, args.rows / 100),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_diff_commits_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow diff update succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_tracked_commit_point_hit/10k\", |b| {\n        b.iter_batched(\n            || prepare_select_tracked_commit(runtime, args.config(), profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_read_point_hit_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow point select succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_tracked_commit_headers_only/10k\", |b| {\n        b.iter_batched(\n            || prepare_select_tracked_commit(runtime, args.config(), profile),\n            |(backend, fixture)| {\n                black_box(\n       
             runtime\n                        .block_on(storage_bench::tracked_state_scan_headers_only_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow header select succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_tracked_commit_full_rows/10k\", |b| {\n        b.iter_batched(\n            || prepare_select_tracked_commit(runtime, args.config(), profile),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_full_rows_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow full-row select succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_after_1pct_update/10k\", |b| {\n        b.iter_batched(\n            || prepare_select_after_update(runtime, args.config(), profile, args.rows / 100),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_full_rows_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow select-after-update succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"select_delta_chain_10x1pct/10k\", |b| {\n        b.iter_batched(\n            || prepare_select_delta_chain(runtime, args.config(), profile, 10, args.rows / 100),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_full_rows_prepared(\n      
                      &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow select delta chain succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"diff_delta_chain_10x1pct/10k\", |b| {\n        b.iter_batched(\n            || prepare_diff_delta_chain(runtime, args.config(), profile, 10, args.rows / 100),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_diff_commits_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"physical_layout/workflow diff delta chain succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn bench_full(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) {\n    let mut group = c.benchmark_group(format!(\"physical_layout/workflow/full/{}\", profile.name));\n\n    for (name, config) in [\n        (\"insert_tracked_commit_no_payload/10k\", args.config()),\n        (\n            \"insert_tracked_commit_payload_1k/10k\",\n            args.config().with_state_payload_bytes(1024),\n        ),\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_insert_tracked_commit(runtime, config, profile),\n                |fixture| {\n                    black_box(\n                        runtime\n                            .block_on(run_insert_tracked_commit(fixture))\n                            .expect(\"physical_layout/workflow full insert succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    for (name, changed_rows, tombstone) in [\n        (\"update_tracked_commit_1pct/10k\", args.rows / 100, false),\n        
(\"update_tracked_commit_10pct/10k\", args.rows / 10, false),\n        (\"delete_tracked_commit_10pct/10k\", args.rows / 10, true),\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    if tombstone {\n                        prepare_delete_tracked_commit(runtime, args.config(), profile, changed_rows)\n                    } else {\n                        prepare_update_tracked_commit(runtime, args.config(), profile, changed_rows)\n                    }\n                },\n                |fixture| {\n                    black_box(\n                        runtime\n                            .block_on(run_update_tracked_commit(fixture))\n                            .expect(\"physical_layout/workflow full update/delete succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    group.finish();\n}\n\nstruct InsertTrackedCommitFixture {\n    backend: Arc<dyn Backend + Send + Sync>,\n    changelog: storage_bench::ChangelogAppendFixture,\n    tracked_state: storage_bench::TrackedStateWriteRootFixture,\n}\n\nstruct UpdateTrackedCommitFixture {\n    backend: Arc<dyn Backend + Send + Sync>,\n    changelog: storage_bench::ChangelogAppendFixture,\n    tracked_state: storage_bench::TrackedStateUpdateFixture,\n}\n\nasync fn run_insert_tracked_commit(\n    fixture: InsertTrackedCommitFixture,\n) -> Result<\n    (\n        storage_bench::StorageBenchReport,\n        storage_bench::StorageBenchReport,\n    ),\n    lix_engine::LixError,\n> {\n    let changelog =\n        storage_bench::changelog_append_changes_prepared(&fixture.backend, &fixture.changelog)\n            .await?;\n    let tracked_state =\n        storage_bench::tracked_state_write_root_prepared(&fixture.backend, &fixture.tracked_state)\n            .await?;\n    Ok((changelog, tracked_state))\n}\n\nasync fn run_update_tracked_commit(\n    fixture: 
UpdateTrackedCommitFixture,\n) -> Result<\n    (\n        storage_bench::StorageBenchReport,\n        storage_bench::StorageBenchReport,\n    ),\n    lix_engine::LixError,\n> {\n    let changelog =\n        storage_bench::changelog_append_changes_prepared(&fixture.backend, &fixture.changelog)\n            .await?;\n    let tracked_state = storage_bench::tracked_state_update_existing_prepared(\n        &fixture.backend,\n        &fixture.tracked_state,\n    )\n    .await?;\n    Ok((changelog, tracked_state))\n}\n\nfn prepare_insert_tracked_commit(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n) -> InsertTrackedCommitFixture {\n    let backend = (profile.create)();\n    let changelog = runtime\n        .block_on(storage_bench::prepare_changelog_append_changes(config))\n        .expect(\"prepare physical_layout/workflow insert changelog\");\n    let tracked_state = runtime\n        .block_on(storage_bench::prepare_tracked_state_write_root(config))\n        .expect(\"prepare physical_layout/workflow insert tracked_state\");\n    InsertTrackedCommitFixture {\n        backend,\n        changelog,\n        tracked_state,\n    }\n}\n\nfn prepare_update_tracked_commit(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n    changed_rows: usize,\n) -> UpdateTrackedCommitFixture {\n    let backend = (profile.create)();\n    let changelog = runtime\n        .block_on(storage_bench::prepare_changelog_append_changes(\n            config.with_rows(changed_rows),\n        ))\n        .expect(\"prepare physical_layout/workflow update changelog\");\n    let tracked_state = runtime\n        .block_on(storage_bench::prepare_tracked_state_update_rows(\n            &backend,\n            config,\n            changed_rows,\n        ))\n        .expect(\"prepare physical_layout/workflow update tracked_state\");\n    UpdateTrackedCommitFixture {\n        backend,\n        changelog,\n        tracked_state,\n    
}\n}\n\nfn prepare_delete_tracked_commit(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n    changed_rows: usize,\n) -> UpdateTrackedCommitFixture {\n    let backend = (profile.create)();\n    let changelog = runtime\n        .block_on(storage_bench::prepare_changelog_append_tombstones(\n            config.with_rows(changed_rows),\n        ))\n        .expect(\"prepare physical_layout/workflow delete changelog\");\n    let tracked_state = runtime\n        .block_on(storage_bench::prepare_tracked_state_tombstone_rows(\n            &backend,\n            config,\n            changed_rows,\n        ))\n        .expect(\"prepare physical_layout/workflow delete tracked_state\");\n    UpdateTrackedCommitFixture {\n        backend,\n        changelog,\n        tracked_state,\n    }\n}\n\nfn prepare_diff_update(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n    changed_rows: usize,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateDiffFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_diff_update_rows(\n            &backend,\n            config,\n            changed_rows,\n        ))\n        .expect(\"prepare physical_layout/workflow diff update\");\n    (backend, fixture)\n}\n\nfn prepare_select_tracked_commit(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateReadFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_read(&backend, config))\n        .expect(\"prepare physical_layout/workflow select tracked commit\");\n    (backend, fixture)\n}\n\nfn prepare_select_tracked_commit_file_selective(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n) -> 
(\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateReadFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_read_file_selective(\n            &backend, config,\n        ))\n        .expect(\"prepare physical_layout/workflow file-selective select\");\n    (backend, fixture)\n}\n\nfn prepare_select_after_update(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n    changed_rows: usize,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateReadFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_read_after_update_rows(\n            &backend,\n            config,\n            changed_rows,\n        ))\n        .expect(\"prepare physical_layout/workflow select after update\");\n    (backend, fixture)\n}\n\nfn prepare_select_delta_chain(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n    delta_commits: usize,\n    updated_rows_per_commit: usize,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateReadFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_read_delta_chain(\n            &backend,\n            config,\n            delta_commits,\n            updated_rows_per_commit,\n        ))\n        .expect(\"prepare physical_layout/workflow select delta chain\");\n    (backend, fixture)\n}\n\nfn prepare_select_materialized_delta_chain(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n    delta_commits: usize,\n    updated_rows_per_commit: usize,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateReadFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(\n            
storage_bench::prepare_tracked_state_read_materialized_delta_chain(\n                &backend,\n                config,\n                delta_commits,\n                updated_rows_per_commit,\n            ),\n        )\n        .expect(\"prepare physical_layout/workflow select materialized delta chain\");\n    (backend, fixture)\n}\n\nfn prepare_diff_delta_chain(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n    delta_commits: usize,\n    updated_rows_per_commit: usize,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateDiffFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_diff_delta_chain(\n            &backend,\n            config,\n            delta_commits,\n            updated_rows_per_commit,\n        ))\n        .expect(\"prepare physical_layout/workflow diff delta chain\");\n    (backend, fixture)\n}\n\nfn prepare_materialize_delta_chain(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n    profile: BackendProfile,\n    delta_commits: usize,\n    updated_rows_per_commit: usize,\n) -> (\n    Arc<dyn Backend + Send + Sync>,\n    storage_bench::TrackedStateMaterializeFixture,\n) {\n    let backend = (profile.create)();\n    let fixture = runtime\n        .block_on(\n            storage_bench::prepare_tracked_state_materialize_delta_chain(\n                &backend,\n                config,\n                delta_commits,\n                updated_rows_per_commit,\n            ),\n        )\n        .expect(\"prepare physical_layout/workflow materialize delta chain\");\n    (backend, fixture)\n}\n\nfn physical_backends() -> [BackendProfile; 2] {\n    [\n        BackendProfile {\n            name: \"sqlite_tempfile\",\n            create: sqlite_tempfile_backend,\n        },\n        BackendProfile {\n            name: \"rocksdb_tempdir\",\n            create: rocksdb_backend,\n        },\n    
]\n}\n\nfn sqlite_tempfile_backend() -> Arc<dyn Backend + Send + Sync> {\n    Arc::new(SqliteBenchBackend::tempfile().expect(\"create sqlite tempfile bench backend\"))\n}\n\nfn rocksdb_backend() -> Arc<dyn Backend + Send + Sync> {\n    Arc::new(RocksDbBenchBackend::new().expect(\"create rocksdb bench backend\"))\n}\n"
  },
  {
    "path": "packages/engine/benches/storage/README.md",
    "content": "# Engine Storage Benchmarks\n\nThese Criterion benchmarks measure engine-owned storage layers directly,\nwithout going through SQL or the SDK:\n\n- `tracked_state`\n- `untracked_state`\n- `changelog`\n- `binary_cas`\n- `json_store`\n- `storage/api`\n\nThe benchmark target uses `codspeed-criterion-compat`, so it works with normal\n`cargo bench` and with CodSpeed.\n\n## Run\n\n```bash\ncargo bench -p lix_engine --features storage-benches --bench storage\n```\n\nRun one benchmark by filter:\n\n```bash\ncargo bench -p lix_engine --features storage-benches --bench storage -- \\\n  storage/tracked_state/read_point_hit/10k\n```\n\nCodSpeed:\n\n```bash\ncargo codspeed build -p lix_engine --features storage-benches --bench storage\ncargo codspeed run\n```\n\nStorage accounting report:\n\n```bash\ncargo test -p lix_engine --features storage-benches storage_accounting -- --ignored --nocapture\n```\n\n## Benchmarks\n\nThe checked-in baseline size is stable: `10k` logical rows or blobs, with\n`1KiB` binary payloads for Binary CAS and small JSON payloads for state rows.\nLarge payload variants intentionally use fewer rows so a full benchmark run\ndoes not allocate multi-gigabyte 
fixtures.\n\n```text\nstorage/tracked_state/write_root/10k\nstorage/tracked_state/read_point_hit/10k\nstorage/tracked_state/read_point_miss/10k\nstorage/tracked_state/scan_all/10k\nstorage/tracked_state/scan_schema/10k\nstorage/tracked_state/scan_file/10k\nstorage/tracked_state/update_existing/10k\nstorage/untracked_state/write_rows/10k\nstorage/untracked_state/read_point_hit/10k\nstorage/untracked_state/read_point_miss/10k\nstorage/untracked_state/scan_all/10k\nstorage/untracked_state/scan_version/10k\nstorage/untracked_state/scan_schema/10k\nstorage/untracked_state/overwrite_existing/10k\nstorage/changelog/append_changes/10k\nstorage/changelog/load_change_hit/10k\nstorage/changelog/load_change_miss/10k\nstorage/changelog/scan_all/10k\nstorage/changelog/scan_limit_100/10k\nstorage/changelog/scan_change_set/10k\ncommit_graph/change_history_from_commit/10k\nstorage/binary_cas/write_blobs_1k/10k\nstorage/binary_cas/read_blob_hit_1k/10k\nstorage/binary_cas/read_blob_miss_1k/10k\nstorage/binary_cas/write_duplicate_payload_1k/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_put/1\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_put/10\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_put/100\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_put/1k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_put/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_mixed_put_delete/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_multi_namespace/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_duplicate_keys/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_value_size/64b\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_value_size/1k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_value_size/16k\nstorage/api/{in_memory,sql
ite_tempfile,rocksdb_tempdir}/write_kv_batch_value_size/128k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/transaction_write_and_commit/1\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/transaction_write_and_commit/100\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/transaction_write_and_commit/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/transaction_rollback_after_write/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_hit/100\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_hit/1k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_hit/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_miss/100\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_miss/1k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_miss/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_mixed_hit_miss/100\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_mixed_hit_miss/1k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_mixed_hit_miss/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_duplicate_keys/100\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_duplicate_keys/1k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_duplicate_keys/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_multi_namespace/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_prefix/100\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_prefix/1k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_prefix/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_after_pages/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_small_limit_of_large_range/100_of_10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_empty_range/10k\nstorage
/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_prefix_selectivity_1pct/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_prefix_selectivity_10pct/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_prefix_selectivity_100pct/10k\nstorage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/transaction_commit_empty\n```\n\nAdditional high-signal variants are registered for:\n\n- batch sizes: `1`, `10`, `100`, `1k`, `10k`\n- state payload sizes: `small/10k`, `1k/10k`, `16k/1k`, `128k/100`\n- binary payload sizes: `small/10k`, `1k/10k`, `16k/1k`, `128k/100`\n- changelog shared JSON payloads: shared snapshot, shared metadata, and shared\n  snapshot+metadata workloads for measuring JsonStore writer dedupe\n- key distribution: `sequential_keys`, `random_keys`\n- scan selectivity: `1pct`, `10pct`, `100pct`\n- projection-aware scans: file-selective header scans that omit\n  `snapshot_content`, including `1KiB` out-of-line snapshot variants\n- point-read scaling: `100` point reads over `1k`, `10k`, and `100k` rows\n- update shape: update/overwrite `10pct`, update/overwrite all, append or insert new keys\n- prolly-style tracked-state cases: single-row update in `10k`/`100k` roots,\n  single-row append in `10k`/`100k` roots, tombstone/delete writes, and root\n  diff traversal for equal/update/delete shapes\n- partial snapshot-content update baselines: one logical field changed in a\n  `1KiB` snapshot over `100k` rows and a `16KiB` snapshot over `10k` rows\n- Binary CAS dedupe: unique payloads, all duplicate payloads, half duplicate payloads\n\nThe ignored `storage_accounting` test prints deterministic byte/chunk tables\nfor the tracked-state physical format: primary tree, header-covering by-file\ntree, and snapshot CAS.\n"
  },
  {
    "path": "packages/engine/benches/storage/backend.rs",
    "content": "use async_trait::async_trait;\nuse lix_engine::{\n    Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest,\n    BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch,\n    BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats,\n    BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, LixError,\n};\nuse std::collections::BTreeMap;\nuse std::sync::{Arc, Mutex};\n\ntype Store = BTreeMap<(String, Vec<u8>), Vec<u8>>;\n\n#[derive(Clone, Default)]\npub(crate) struct BenchBackend {\n    store: Arc<Mutex<Store>>,\n}\n\npub(crate) struct BenchTransaction {\n    store: Arc<Mutex<Store>>,\n    finalized: bool,\n}\n\nimpl BenchBackend {\n    pub(crate) fn new() -> Arc<dyn Backend + Send + Sync> {\n        Arc::new(Self::default())\n    }\n}\n\n#[async_trait]\nimpl Backend for BenchBackend {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        Ok(Box::new(BenchTransaction {\n            store: Arc::clone(&self.store),\n            finalized: false,\n        }))\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        Ok(Box::new(BenchTransaction {\n            store: Arc::clone(&self.store),\n            finalized: false,\n        }))\n    }\n}\n\n#[async_trait]\nimpl BackendReadTransaction for BenchTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        let store = self.lock_store()?;\n        let mut groups = Vec::with_capacity(request.groups.len());\n        for group in request.groups {\n            let namespace = group.namespace.clone();\n            let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0);\n            let mut present = 
Vec::with_capacity(group.keys.len());\n            for key in group.keys {\n                if let Some(value) = store.get(&(namespace.clone(), key)) {\n                    values.push(value);\n                    present.push(true);\n                } else {\n                    values.push([]);\n                    present.push(false);\n                }\n            }\n            groups.push(BackendKvValueGroup::new(\n                namespace,\n                values.finish(),\n                present,\n            ));\n        }\n        Ok(BackendKvValueBatch { groups })\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        let store = self.lock_store()?;\n        let mut groups = Vec::with_capacity(request.groups.len());\n        for group in request.groups {\n            let namespace = group.namespace.clone();\n            let exists = group\n                .keys\n                .into_iter()\n                .map(|key| store.contains_key(&(namespace.clone(), key)))\n                .collect();\n            groups.push(BackendKvExistsGroup { namespace, exists });\n        }\n        Ok(BackendKvExistsBatch { groups })\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        let store = self.lock_store()?;\n        Ok(scan_store_keys(&store, request))\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        let store = self.lock_store()?;\n        Ok(scan_store_values(&store, request))\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        let store = self.lock_store()?;\n        Ok(scan_store_entries(&store, request))\n    }\n\n    async fn rollback(mut self: Box<Self>) 
-> Result<(), LixError> {\n        self.finalized = true;\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl BackendWriteTransaction for BenchTransaction {\n    async fn write_kv_batch(\n        &mut self,\n        batch: BackendKvWriteBatch,\n    ) -> Result<BackendKvWriteStats, LixError> {\n        let mut store = self.lock_store()?;\n        let mut stats = BackendKvWriteStats::default();\n        for group in batch.groups {\n            let namespace = group.namespace().to_string();\n            for index in 0..group.put_count() {\n                let key = group.put_key(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put key\")\n                })?;\n                let value = group.put_value(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put value\")\n                })?;\n                stats.puts += 1;\n                stats.bytes_written += key.len() + value.len();\n                store.insert((namespace.clone(), key.to_vec()), value.to_vec());\n            }\n            for index in 0..group.delete_count() {\n                let key = group.delete_key(index).ok_or_else(|| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        \"backend write batch missing delete key\",\n                    )\n                })?;\n                stats.deletes += 1;\n                stats.bytes_written += key.len();\n                store.remove(&(namespace.clone(), key.to_vec()));\n            }\n        }\n        Ok(stats)\n    }\n\n    async fn commit(mut self: Box<Self>) -> Result<(), LixError> {\n        self.finalized = true;\n        Ok(())\n    }\n}\n\nimpl BenchTransaction {\n    fn lock_store(&self) -> Result<std::sync::MutexGuard<'_, Store>, LixError> {\n        self.store\n            .lock()\n            .map_err(|_| LixError::new(\"LIX_ERROR_UNKNOWN\", \"bench store mutex 
poisoned\"))\n    }\n}\n\nfn scan_store_keys(store: &Store, request: BackendKvScanRequest) -> BackendKvKeyPage {\n    let start_key = scan_start_key(&request);\n    let lower_bound = (request.namespace.clone(), start_key);\n    let mut keys = BytePageBuilder::new();\n    let mut count = 0;\n    let mut resume_after_candidate = None;\n    for ((row_namespace, key), _value) in store.range(lower_bound..) {\n        if row_namespace != &request.namespace {\n            break;\n        }\n        if let Some(after) = request.after.as_deref() {\n            if key.as_slice() <= after {\n                continue;\n            }\n        }\n        if !key_matches_range(key, &request.range) {\n            break;\n        }\n        if count < request.limit {\n            resume_after_candidate = Some(key.clone());\n            keys.push(key);\n        }\n        count += 1;\n        if count > request.limit {\n            break;\n        }\n    }\n    let resume_after = (count > request.limit)\n        .then_some(resume_after_candidate)\n        .flatten();\n    BackendKvKeyPage {\n        keys: keys.finish(),\n        resume_after,\n    }\n}\n\nfn scan_store_values(store: &Store, request: BackendKvScanRequest) -> BackendKvValuePage {\n    let start_key = scan_start_key(&request);\n    let lower_bound = (request.namespace.clone(), start_key);\n    let mut values = BytePageBuilder::new();\n    let mut count = 0;\n    let mut resume_after_candidate = None;\n    for ((row_namespace, key), value) in store.range(lower_bound..) 
{\n        if row_namespace != &request.namespace {\n            break;\n        }\n        if let Some(after) = request.after.as_deref() {\n            if key.as_slice() <= after {\n                continue;\n            }\n        }\n        if !key_matches_range(key, &request.range) {\n            break;\n        }\n        if count < request.limit {\n            resume_after_candidate = Some(key.clone());\n            values.push(value);\n        }\n        count += 1;\n        if count > request.limit {\n            break;\n        }\n    }\n    let resume_after = (count > request.limit)\n        .then_some(resume_after_candidate)\n        .flatten();\n    BackendKvValuePage {\n        values: values.finish(),\n        resume_after,\n    }\n}\n\nfn scan_store_entries(store: &Store, request: BackendKvScanRequest) -> BackendKvEntryPage {\n    let start_key = scan_start_key(&request);\n    let lower_bound = (request.namespace.clone(), start_key);\n    let mut keys = BytePageBuilder::new();\n    let mut values = BytePageBuilder::new();\n    let mut count = 0;\n    let mut resume_after_candidate = None;\n    for ((row_namespace, key), value) in store.range(lower_bound..) 
{\n        if row_namespace != &request.namespace {\n            break;\n        }\n        if let Some(after) = request.after.as_deref() {\n            if key.as_slice() <= after {\n                continue;\n            }\n        }\n        if !key_matches_range(key, &request.range) {\n            break;\n        }\n        if count < request.limit {\n            resume_after_candidate = Some(key.clone());\n            keys.push(key);\n            values.push(value);\n        }\n        count += 1;\n        if count > request.limit {\n            break;\n        }\n    }\n    let resume_after = (count > request.limit)\n        .then_some(resume_after_candidate)\n        .flatten();\n    BackendKvEntryPage {\n        keys: keys.finish(),\n        values: values.finish(),\n        resume_after,\n    }\n}\n\nfn key_matches_range(key: &[u8], range: &BackendKvScanRange) -> bool {\n    match range {\n        BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix),\n        BackendKvScanRange::Range { start, end } => key >= start.as_slice() && key < end.as_slice(),\n    }\n}\n\nfn scan_start_key(request: &BackendKvScanRequest) -> Vec<u8> {\n    let range_start = match &request.range {\n        BackendKvScanRange::Prefix(prefix) => prefix.as_slice(),\n        BackendKvScanRange::Range { start, .. } => start.as_slice(),\n    };\n    match request.after.as_deref() {\n        Some(after) if after > range_start => after.to_vec(),\n        _ => range_start.to_vec(),\n    }\n}\n"
  },
  {
    "path": "packages/engine/benches/storage/binary_cas.rs",
    "content": "use lix_engine::storage_bench::{self, StorageBenchConfig};\n\nuse crate::{Args, BenchBackend};\nuse criterion::{black_box, BatchSize, Criterion};\nuse tokio::runtime::Runtime;\n\npub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) {\n    let mut group = c.benchmark_group(\"storage/binary_cas\");\n    group.bench_function(\"write_blobs_1k/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_binary_cas_write_blobs(config(&args)))\n                    .expect(\"prepare binary_cas/write_blobs\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::binary_cas_write_blobs_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"binary_cas/write_blobs succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"read_blob_hit_1k/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::binary_cas_read_blob_hit_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"binary_cas/read_blob_hit succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"read_blob_miss_1k/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        
.block_on(storage_bench::binary_cas_read_blob_miss_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"binary_cas/read_blob_miss succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"write_duplicate_payload_1k/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_binary_cas_write_duplicate_payload(\n                        config(&args),\n                    ))\n                    .expect(\"prepare binary_cas/write_duplicate_payload\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::binary_cas_write_blobs_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"binary_cas/write_duplicate_payload succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"write_half_duplicate_payload_1k/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(\n                        storage_bench::prepare_binary_cas_write_half_duplicate_payload(config(\n                            &args,\n                        )),\n                    )\n                    .expect(\"prepare binary_cas/write_half_duplicate_payload\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::binary_cas_write_blobs_prepared(\n                            &backend, 
&fixture,\n                        ))\n                        .expect(\"binary_cas/write_half_duplicate_payload succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    for rows in [1, 10, 100, 1_000] {\n        let name = format!(\"write_blobs_1k/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_binary_cas_write_blobs(\n                            config(&args).with_rows(rows),\n                        ))\n                        .expect(\"prepare binary_cas/write_blobs batch\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::binary_cas_write_blobs_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"binary_cas/write_blobs batch succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, bytes, rows) in [\n        (\"small\", 16, 10_000),\n        (\"1k\", 1024, 10_000),\n        (\"16k\", 16 * 1024, 1_000),\n        (\"128k\", 128 * 1024, 100),\n    ] {\n        let name = format!(\"write_blobs_payload_{label}/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_binary_cas_write_blobs(\n                            config(&args).with_blob_bytes(bytes).with_rows(rows),\n                        ))\n                        .expect(\"prepare binary_cas/write_blobs 
payload\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::binary_cas_write_blobs_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"binary_cas/write_blobs payload succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    group.finish();\n}\n\nfn prepare_read(\n    runtime: &Runtime,\n    args: Args,\n) -> (\n    std::sync::Arc<dyn lix_engine::Backend + Send + Sync>,\n    lix_engine::storage_bench::BinaryCasReadFixture,\n) {\n    let backend = BenchBackend::new();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_binary_cas_read(\n            &backend,\n            config(&args),\n        ))\n        .expect(\"prepare binary_cas/read\");\n    (backend, fixture)\n}\n\nfn config(args: &Args) -> StorageBenchConfig {\n    args.config()\n}\n"
  },
  {
    "path": "packages/engine/benches/storage/changelog.rs",
    "content": "use lix_engine::storage_bench::{\n    self, StorageBenchConfig, StorageBenchKeyPattern, StorageBenchSelectivity,\n};\n\nuse crate::{Args, BenchBackend};\nuse criterion::{black_box, BatchSize, Criterion};\nuse tokio::runtime::Runtime;\n\npub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) {\n    let mut group = c.benchmark_group(\"storage/changelog\");\n    group.bench_function(\"encode_only/full_row/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(storage_bench::prepare_changelog_codec(config(&args)))\n                    .expect(\"prepare changelog/encode_only\")\n            },\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_encode_only_prepared(&fixture))\n                        .expect(\"changelog/encode_only succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"decode_only/full_row/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(storage_bench::prepare_changelog_codec(config(&args)))\n                    .expect(\"prepare changelog/decode_only\")\n            },\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_decode_only_prepared(&fixture))\n                        .expect(\"changelog/decode_only succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"append_changes/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_changelog_append_changes(config(\n                        &args,\n                    )))\n               
     .expect(\"prepare changelog/append_changes\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_append_changes_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"changelog/append_changes succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"load_changes_hit/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_load_changes_hit_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"changelog/load_changes_hit succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"load_changes_miss/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_load_changes_miss_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"changelog/load_changes_miss succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"scan_all/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_scan_all_prepared(\n                            &backend, &fixture,\n                        ))\n  
                      .expect(\"changelog/scan_all succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"scan_full_changes/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_scan_full_changes_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"changelog/scan_full_changes succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    for (label, bytes, rows, row_label) in\n        [(\"1k\", 1024, 10_000, \"10k\"), (\"16k\", 16 * 1024, 1_000, \"1k\")]\n    {\n        let config = config(&args)\n            .with_state_payload_bytes(bytes)\n            .with_rows(rows);\n        let name = format!(\"scan_full_changes_payload_{label}/{row_label}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_read_with(runtime, config),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::changelog_scan_full_changes_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"changelog/scan_full_changes payload succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    group.bench_function(\"scan_limit_100/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_scan_limit_100_prepared(\n                            &backend, 
&fixture,\n                        ))\n                        .expect(\"changelog/scan_limit_100 succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"scan_change_set/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_scan_change_set_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"changelog/scan_change_set succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    for rows in [1, 10, 100, 1_000] {\n        let name = format!(\"append_changes/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_changelog_append_changes(\n                            config(&args).with_rows(rows),\n                        ))\n                        .expect(\"prepare changelog/append_changes batch\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::changelog_append_changes_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"changelog/append_changes batch succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, bytes, rows) in [\n        (\"small\", 0, 10_000),\n        (\"1k\", 1024, 10_000),\n        (\"16k\", 16 * 1024, 1_000),\n        (\"128k\", 128 * 1024, 
100),\n    ] {\n        let name = format!(\"append_changes_payload_{label}/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_changelog_append_changes(\n                            config(&args)\n                                .with_state_payload_bytes(bytes)\n                                .with_rows(rows),\n                        ))\n                        .expect(\"prepare changelog/append_changes payload\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::changelog_append_changes_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"changelog/append_changes payload succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    group.bench_function(\"append_changes_metadata_1k/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_changelog_append_metadata(\n                        config(&args).with_state_payload_bytes(1024),\n                    ))\n                    .expect(\"prepare changelog/append metadata\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_append_changes_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"changelog/append metadata 
succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    for (label, bytes, rows) in [(\"1k\", 1024, 10_000), (\"16k\", 16 * 1024, 1_000)] {\n        let name = format!(\"append_changes_shared_payload_{label}/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_changelog_append_shared_payload(\n                            config(&args)\n                                .with_state_payload_bytes(bytes)\n                                .with_rows(rows),\n                        ))\n                        .expect(\"prepare changelog/append shared payload\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::changelog_append_changes_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"changelog/append shared payload succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, bytes, rows) in [(\"1k\", 1024, 10_000), (\"16k\", 16 * 1024, 1_000)] {\n        let name = format!(\"append_changes_shared_metadata_{label}/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_changelog_append_shared_metadata(\n                            config(&args)\n                                .with_state_payload_bytes(bytes)\n                                .with_rows(rows),\n                       
 ))\n                        .expect(\"prepare changelog/append shared metadata\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::changelog_append_changes_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"changelog/append shared metadata succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    group.bench_function(\"append_changes_shared_payload_and_metadata_1k/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(\n                        storage_bench::prepare_changelog_append_shared_payload_and_metadata(\n                            config(&args).with_state_payload_bytes(1024),\n                        ),\n                    )\n                    .expect(\"prepare changelog/append shared payload and metadata\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_append_changes_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"changelog/append shared payload and metadata succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"append_changes_tombstone/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_changelog_append_tombstones(config(\n                        
&args,\n                    )))\n                    .expect(\"prepare changelog/append tombstones\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_append_changes_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"changelog/append tombstones succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"append_changes_composite_entity_id/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(\n                        storage_bench::prepare_changelog_append_composite_entity_ids(config(&args)),\n                    )\n                    .expect(\"prepare changelog/append composite entity ids\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_append_changes_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"changelog/append composite entity ids succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    for (label, selectivity) in [\n        (\"1pct\", StorageBenchSelectivity::Percent1),\n        (\"10pct\", StorageBenchSelectivity::Percent10),\n        (\"100pct\", StorageBenchSelectivity::Percent100),\n    ] {\n        let name = format!(\"scan_schema_selectivity_{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n          
              .block_on(storage_bench::prepare_changelog_read_with_selectivity(\n                            &backend,\n                            config(&args).with_selectivity(selectivity),\n                        ))\n                        .expect(\"prepare changelog/scan schema selectivity\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::changelog_scan_schema_prepared(\n                                &backend,\n                                &fixture,\n                                selectivity,\n                            ))\n                            .expect(\"changelog/scan schema selectivity succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    group.bench_function(\"scan_entity_history/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_changelog_read_entity_history(\n                        &backend,\n                        config(&args),\n                    ))\n                    .expect(\"prepare changelog/scan entity history\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::changelog_scan_entity_history_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"changelog/scan entity history succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    for (label, key_pattern) in [\n        (\"sequential_keys\", StorageBenchKeyPattern::Sequential),\n        (\"random_keys\", 
StorageBenchKeyPattern::Random),\n    ] {\n        let name = format!(\"append_changes_{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_changelog_append_changes(\n                            config(&args).with_key_pattern(key_pattern),\n                        ))\n                        .expect(\"prepare changelog/append_changes key pattern\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::changelog_append_changes_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"changelog/append_changes key pattern succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    group.finish();\n}\n\nfn prepare_read(\n    runtime: &Runtime,\n    args: Args,\n) -> (\n    std::sync::Arc<dyn lix_engine::Backend + Send + Sync>,\n    lix_engine::storage_bench::ChangelogReadFixture,\n) {\n    let backend = BenchBackend::new();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_changelog_read(\n            &backend,\n            config(&args),\n        ))\n        .expect(\"prepare changelog/read\");\n    (backend, fixture)\n}\n\nfn prepare_read_with(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n) -> (\n    std::sync::Arc<dyn lix_engine::Backend + Send + Sync>,\n    lix_engine::storage_bench::ChangelogReadFixture,\n) {\n    let backend = BenchBackend::new();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_changelog_read(&backend, config))\n        .expect(\"prepare changelog/read variant\");\n    
(backend, fixture)\n}\n\nfn config(args: &Args) -> StorageBenchConfig {\n    args.config()\n}\n"
  },
  {
    "path": "packages/engine/benches/storage/commit_graph.rs",
    "content": "use lix_engine::storage_bench::{self, StorageBenchConfig};\n\nuse crate::{Args, BenchBackend};\nuse criterion::{black_box, BatchSize, Criterion};\nuse tokio::runtime::Runtime;\n\npub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) {\n    // Prefixed with \"storage/\" to match the sibling bench groups\n    // (storage/binary_cas, storage/changelog, storage/json_store).\n    let mut group = c.benchmark_group(\"storage/commit_graph\");\n    group.bench_function(\"change_history_from_commit/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(\n                            storage_bench::commit_graph_change_history_from_commit_prepared(\n                                &backend, &fixture,\n                            ),\n                        )\n                        .expect(\"commit_graph/change_history_from_commit succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.finish();\n}\n\nfn prepare_read(\n    runtime: &Runtime,\n    args: Args,\n) -> (\n    std::sync::Arc<dyn lix_engine::Backend + Send + Sync>,\n    lix_engine::storage_bench::CommitGraphReadFixture,\n) {\n    let backend = BenchBackend::new();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_commit_graph_read(\n            &backend,\n            config(&args),\n        ))\n        .expect(\"prepare commit_graph/read\");\n    (backend, fixture)\n}\n\nfn config(args: &Args) -> StorageBenchConfig {\n    args.config()\n}\n"
  },
  {
    "path": "packages/engine/benches/storage/json_store.rs",
    "content": "use lix_engine::storage_bench::{\n    self, JsonStorePayloadShape, JsonStoreProjectionShape, JsonStoreReadFixture,\n};\n\nuse crate::{Args, BenchBackend};\nuse criterion::{black_box, BatchSize, Criterion};\nuse tokio::runtime::Runtime;\n\npub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, _args: Args) {\n    let mut group = c.benchmark_group(\"storage/json_store\");\n\n    for (name, shape, rows) in [\n        (\n            \"write/small_raw_1k/1000\",\n            JsonStorePayloadShape::SmallRaw1k,\n            1_000,\n        ),\n        (\n            \"write/medium_structured_16k/200\",\n            JsonStorePayloadShape::MediumStructured16k,\n            200,\n        ),\n        (\n            \"write/large_structured_128k/50\",\n            JsonStorePayloadShape::LargeStructured128k,\n            50,\n        ),\n        (\n            \"write/large_array_128k/50\",\n            JsonStorePayloadShape::LargeArray128k,\n            50,\n        ),\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_json_store_write(shape, rows))\n                        .expect(\"prepare json_store/write\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::json_store_write_prepared(&backend, &fixture))\n                            .expect(\"json_store/write succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    group.bench_function(\"write/dedupe_same_16k/1000\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n 
                   .block_on(storage_bench::prepare_json_store_write_dedupe(\n                        JsonStorePayloadShape::MediumStructured16k,\n                        1_000,\n                    ))\n                    .expect(\"prepare json_store/write_dedupe\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::json_store_write_prepared(&backend, &fixture))\n                        .expect(\"json_store/write_dedupe succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"write/against_base_object_update_1_of_1000/50\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_json_store_base_update_object(\n                        &backend, 50,\n                    ))\n                    .expect(\"prepare json_store/base_update_object\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(\n                            storage_bench::json_store_write_against_base_object_prepared(\n                                &backend, &fixture,\n                            ),\n                        )\n                        .expect(\"json_store/base_update_object succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"write/against_base_array_update_1_of_1000/50\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_json_store_base_update_array(\n                        
&backend, 50,\n                    ))\n                    .expect(\"prepare json_store/base_update_array\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::json_store_write_against_base_array_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"json_store/base_update_array succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    for (name, shape, rows) in [\n        (\n            \"read_bytes/small_raw_1k/1000\",\n            JsonStorePayloadShape::SmallRaw1k,\n            1_000,\n        ),\n        (\n            \"read_bytes/medium_structured_16k/200\",\n            JsonStorePayloadShape::MediumStructured16k,\n            200,\n        ),\n        (\n            \"read_bytes/large_structured_128k/50\",\n            JsonStorePayloadShape::LargeStructured128k,\n            50,\n        ),\n        (\n            \"read_bytes/large_array_128k/50\",\n            JsonStorePayloadShape::LargeArray128k,\n            50,\n        ),\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    prepare_read(\n                        runtime,\n                        shape,\n                        rows,\n                        JsonStoreProjectionShape::TopLevelTarget,\n                    )\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::json_store_read_bytes_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"json_store/read_bytes succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n       
     )\n        });\n    }\n\n    for (name, shape, rows) in [\n        (\n            \"read_value/small_raw_1k/1000\",\n            JsonStorePayloadShape::SmallRaw1k,\n            1_000,\n        ),\n        (\n            \"read_value/large_structured_128k/50\",\n            JsonStorePayloadShape::LargeStructured128k,\n            50,\n        ),\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    prepare_read(\n                        runtime,\n                        shape,\n                        rows,\n                        JsonStoreProjectionShape::TopLevelTarget,\n                    )\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::json_store_read_value_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"json_store/read_value succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    for (name, shape, projection, rows) in [\n        (\n            \"read_projection/top_level_1_prop_1k/1000\",\n            JsonStorePayloadShape::SmallRaw1k,\n            JsonStoreProjectionShape::TopLevelTarget,\n            1_000,\n        ),\n        (\n            \"read_projection/top_level_1_prop_128k/50\",\n            JsonStorePayloadShape::LargeStructured128k,\n            JsonStoreProjectionShape::TopLevelTarget,\n            50,\n        ),\n        (\n            \"read_projection/top_level_10_props_128k/50\",\n            JsonStorePayloadShape::LargeStructured128k,\n            JsonStoreProjectionShape::TopLevelTenProps,\n            50,\n        ),\n        (\n            \"read_projection/nested_prop_128k/50\",\n            JsonStorePayloadShape::LargeStructured128k,\n            
JsonStoreProjectionShape::NestedTarget,\n            50,\n        ),\n        (\n            \"read_projection/array_item_1_of_1000/50\",\n            JsonStorePayloadShape::LargeArray128k,\n            JsonStoreProjectionShape::ArrayItem999,\n            50,\n        ),\n        (\n            \"read_projection/filter_prop_status_128k/50\",\n            JsonStorePayloadShape::LargeStructured128k,\n            JsonStoreProjectionShape::Status,\n            50,\n        ),\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_read(runtime, shape, rows, projection),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::json_store_read_projection_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"json_store/read_projection succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    group.finish();\n}\n\nfn prepare_read(\n    runtime: &Runtime,\n    shape: JsonStorePayloadShape,\n    rows: usize,\n    projection: JsonStoreProjectionShape,\n) -> (\n    std::sync::Arc<dyn lix_engine::Backend + Send + Sync>,\n    JsonStoreReadFixture,\n) {\n    let backend = BenchBackend::new();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_json_store_projection_read(\n            &backend, shape, rows, projection,\n        ))\n        .expect(\"prepare json_store/read\");\n    (backend, fixture)\n}\n"
  },
  {
    "path": "packages/engine/benches/storage/main.rs",
    "content": "use criterion::{criterion_group, criterion_main, Criterion};\nuse lix_engine::storage_bench::{\n    StorageBenchConfig, StorageBenchKeyPattern, StorageBenchSelectivity, StorageBenchUpdateFraction,\n};\n\nmod backend;\nmod binary_cas;\nmod changelog;\nmod commit_graph;\nmod json_store;\nmod rocksdb_backend;\nmod sqlite_backend;\nmod storage_api;\nmod tracked_state;\nmod untracked_state;\n\nuse backend::BenchBackend;\nuse rocksdb_backend::RocksDbBenchBackend;\nuse sqlite_backend::SqliteBenchBackend;\n\nconst BENCH_ROWS: usize = 10_000;\nconst BENCH_BLOB_BYTES: usize = 1024;\nconst BENCH_STATE_PAYLOAD_BYTES: usize = 256;\n\n#[derive(Debug, Clone, Copy)]\npub(crate) struct Args {\n    pub(crate) rows: usize,\n    pub(crate) blob_bytes: usize,\n    pub(crate) state_payload_bytes: usize,\n}\n\nimpl Default for Args {\n    fn default() -> Self {\n        Self {\n            rows: BENCH_ROWS,\n            blob_bytes: BENCH_BLOB_BYTES,\n            state_payload_bytes: BENCH_STATE_PAYLOAD_BYTES,\n        }\n    }\n}\n\nimpl Args {\n    pub(crate) fn config(self) -> StorageBenchConfig {\n        StorageBenchConfig {\n            rows: self.rows,\n            blob_bytes: self.blob_bytes,\n            state_payload_bytes: self.state_payload_bytes,\n            key_pattern: StorageBenchKeyPattern::Sequential,\n            selectivity: StorageBenchSelectivity::Percent100,\n            update_fraction: StorageBenchUpdateFraction::Percent100,\n        }\n    }\n}\n\nfn storage_benches(c: &mut Criterion) {\n    let runtime = tokio::runtime::Builder::new_current_thread()\n        .enable_all()\n        .build()\n        .expect(\"create tokio runtime for storage benchmarks\");\n    let args = Args::default();\n\n    storage_api::bench(c, &runtime, args);\n    tracked_state::bench(c, &runtime, args);\n    tracked_state::bench_fast(c, &runtime, args);\n    untracked_state::bench(c, &runtime, args);\n    changelog::bench(c, &runtime, args);\n    commit_graph::bench(c, 
&runtime, args);\n    binary_cas::bench(c, &runtime, args);\n    json_store::bench(c, &runtime, args);\n}\n\ncriterion_group!(benches, storage_benches);\ncriterion_main!(benches);\n"
  },
  {
    "path": "packages/engine/benches/storage/rocksdb_backend.rs",
    "content": "use std::collections::{BTreeMap, BTreeSet};\nuse std::path::Path;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse lix_engine::{\n    Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest,\n    BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch,\n    BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats,\n    BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, LixError,\n};\nuse rocksdb::{Direction, IteratorMode, Options, WriteBatch, DB};\nuse tempfile::TempDir;\n\n#[derive(Clone)]\npub(crate) struct RocksDbBenchBackend {\n    inner: Arc<RocksDbBenchInner>,\n}\n\nstruct RocksDbBenchInner {\n    db: DB,\n    _dir: TempDir,\n}\n\npub(crate) struct RocksDbBenchTransaction {\n    inner: Arc<RocksDbBenchInner>,\n    pending: BTreeMap<Vec<u8>, PendingWrite>,\n}\n\nenum PendingWrite {\n    Put(Vec<u8>),\n    Delete,\n}\n\nimpl RocksDbBenchBackend {\n    pub(crate) fn new() -> Result<Self, LixError> {\n        let dir = TempDir::new().map_err(io_error)?;\n        let db = open_rocksdb(dir.path())?;\n        Ok(Self {\n            inner: Arc::new(RocksDbBenchInner { db, _dir: dir }),\n        })\n    }\n\n    #[allow(dead_code)]\n    pub(crate) fn path(&self) -> &Path {\n        self.inner._dir.path()\n    }\n}\n\n#[async_trait]\nimpl Backend for RocksDbBenchBackend {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        Ok(Box::new(RocksDbBenchTransaction {\n            inner: Arc::clone(&self.inner),\n            pending: BTreeMap::new(),\n        }))\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        Ok(Box::new(RocksDbBenchTransaction {\n            inner: Arc::clone(&self.inner),\n            pending: BTreeMap::new(),\n        }))\n    
}\n}\n\n#[async_trait]\nimpl BackendReadTransaction for RocksDbBenchTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        let mut groups = Vec::with_capacity(request.groups.len());\n        for group in request.groups {\n            let namespace = group.namespace.clone();\n            let mut resolved_values = vec![None; group.keys.len()];\n            let mut committed_keys = Vec::new();\n            let mut committed_positions = Vec::new();\n            for (position, key) in group.keys.into_iter().enumerate() {\n                let encoded_key = encode_key(namespace.as_str(), &key);\n                match self.pending.get(&encoded_key) {\n                    Some(PendingWrite::Put(value)) => {\n                        resolved_values[position] = Some(value.clone())\n                    }\n                    Some(PendingWrite::Delete) => {}\n                    None => {\n                        committed_positions.push(position);\n                        committed_keys.push(encoded_key);\n                    }\n                }\n            }\n            let committed_values = self.inner.db.multi_get(committed_keys);\n            for (position, value) in committed_positions.into_iter().zip(committed_values) {\n                match value.map_err(rocksdb_error)? 
{\n                    Some(value) => resolved_values[position] = Some(value),\n                    None => {}\n                }\n            }\n            let mut values = BytePageBuilder::with_capacity(resolved_values.len(), 0);\n            let mut present = Vec::with_capacity(resolved_values.len());\n            for value in resolved_values {\n                if let Some(value) = value {\n                    values.push(value);\n                    present.push(true);\n                } else {\n                    values.push([]);\n                    present.push(false);\n                }\n            }\n            groups.push(BackendKvValueGroup::new(\n                namespace,\n                values.finish(),\n                present,\n            ));\n        }\n        Ok(BackendKvValueBatch { groups })\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        rocksdb_get_exists_many(&self.inner.db, &self.pending, request)\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        rocksdb_scan_keys(&self.inner.db, &self.pending, request)\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        rocksdb_scan_values(&self.inner.db, &self.pending, request)\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        rocksdb_scan_entries(&self.inner.db, &self.pending, request)\n    }\n\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl BackendWriteTransaction for RocksDbBenchTransaction {\n    async fn write_kv_batch(\n        &mut self,\n        batch: BackendKvWriteBatch,\n    ) -> Result<BackendKvWriteStats, 
LixError> {\n        let mut stats = BackendKvWriteStats::default();\n        for group in batch.groups {\n            let namespace = group.namespace().to_string();\n            for index in 0..group.put_count() {\n                let key = group.put_key(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put key\")\n                })?;\n                let value = group.put_value(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put value\")\n                })?;\n                stats.puts += 1;\n                stats.bytes_written += key.len() + value.len();\n                self.pending.insert(\n                    encode_key(namespace.as_str(), key),\n                    PendingWrite::Put(value.to_vec()),\n                );\n            }\n            for index in 0..group.delete_count() {\n                let key = group.delete_key(index).ok_or_else(|| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        \"backend write batch missing delete key\",\n                    )\n                })?;\n                stats.deletes += 1;\n                stats.bytes_written += key.len();\n                self.pending\n                    .insert(encode_key(namespace.as_str(), key), PendingWrite::Delete);\n            }\n        }\n        Ok(stats)\n    }\n\n    async fn commit(self: Box<Self>) -> Result<(), LixError> {\n        let mut write_batch = WriteBatch::default();\n        for (key, write) in self.pending {\n            match write {\n                PendingWrite::Put(value) => write_batch.put(key, value),\n                PendingWrite::Delete => write_batch.delete(key),\n            }\n        }\n        self.inner.db.write(write_batch).map_err(rocksdb_error)?;\n        Ok(())\n    }\n}\n\nfn open_rocksdb(path: &Path) -> Result<DB, LixError> {\n    let mut options = 
Options::default();\n    options.create_if_missing(true);\n    options.set_use_fsync(false);\n    options.set_write_buffer_size(64 * 1024 * 1024);\n    DB::open(&options, path).map_err(rocksdb_error)\n}\n\nfn rocksdb_get_exists_many(\n    db: &DB,\n    pending: &BTreeMap<Vec<u8>, PendingWrite>,\n    request: BackendKvGetRequest,\n) -> Result<BackendKvExistsBatch, LixError> {\n    let mut groups = Vec::with_capacity(request.groups.len());\n    for group in request.groups {\n        let namespace = group.namespace.clone();\n        let mut exists = vec![false; group.keys.len()];\n        let mut committed = Vec::new();\n\n        for (position, key) in group.keys.into_iter().enumerate() {\n            let encoded_key = encode_key(namespace.as_str(), &key);\n            match pending.get(&encoded_key) {\n                Some(PendingWrite::Put(_)) => exists[position] = true,\n                Some(PendingWrite::Delete) => {}\n                None => {\n                    committed.push((encoded_key, position));\n                }\n            }\n        }\n\n        fill_committed_exists(db, &mut exists, committed)?;\n        groups.push(BackendKvExistsGroup { namespace, exists });\n    }\n\n    Ok(BackendKvExistsBatch { groups })\n}\n\nfn fill_committed_exists(\n    db: &DB,\n    exists: &mut [bool],\n    mut committed: Vec<(Vec<u8>, usize)>,\n) -> Result<(), LixError> {\n    if committed.is_empty() {\n        return Ok(());\n    }\n\n    committed.sort_by(|left, right| left.0.cmp(&right.0));\n    let mut iter = db.raw_iterator();\n    iter.seek(&committed[0].0);\n\n    for (target_key, position) in committed {\n        while iter.valid() {\n            let Some(current_key) = iter.key() else {\n                break;\n            };\n            if current_key >= target_key.as_slice() {\n                break;\n            }\n            iter.next();\n        }\n\n        if !iter.valid() {\n            iter.status().map_err(rocksdb_error)?;\n            break;\n     
   }\n\n        if iter\n            .key()\n            .is_some_and(|current_key| current_key == target_key.as_slice())\n        {\n            exists[position] = true;\n        }\n    }\n\n    iter.status().map_err(rocksdb_error)?;\n    Ok(())\n}\n\nfn rocksdb_scan_keys(\n    db: &DB,\n    pending: &BTreeMap<Vec<u8>, PendingWrite>,\n    request: BackendKvScanRequest,\n) -> Result<BackendKvKeyPage, LixError> {\n    let bounds = ScanBounds::new(&request);\n    if pending.is_empty() {\n        return rocksdb_scan_committed_keys(db, request, bounds);\n    }\n\n    let mut merged = BTreeSet::new();\n    let mut iter = db.raw_iterator();\n    iter.seek(&bounds.start_encoded);\n    while iter.valid() {\n        let Some(encoded_key) = iter.key() else {\n            break;\n        };\n        if !bounds.contains_encoded(encoded_key) {\n            break;\n        }\n        let logical_key = decode_key(&request.namespace, encoded_key)?;\n        if !key_after_cursor(&request, &logical_key) {\n            iter.next();\n            continue;\n        }\n        merged.insert(logical_key);\n        iter.next();\n    }\n    iter.status().map_err(rocksdb_error)?;\n\n    for (encoded_key, write) in\n        pending.range(bounds.start_encoded.clone()..bounds.end_encoded.clone())\n    {\n        if !bounds.contains_encoded(encoded_key) {\n            continue;\n        }\n        let logical_key = decode_key(&request.namespace, encoded_key)?;\n        if !key_in_range(&logical_key, &request.range) || !key_after_cursor(&request, &logical_key)\n        {\n            continue;\n        }\n        match write {\n            PendingWrite::Put(_) => {\n                merged.insert(logical_key);\n            }\n            PendingWrite::Delete => {\n                merged.remove(&logical_key);\n            }\n        }\n    }\n    Ok(key_page_from_iter(merged, request.limit))\n}\n\nfn rocksdb_scan_values(\n    db: &DB,\n    pending: &BTreeMap<Vec<u8>, PendingWrite>,\n    request: 
BackendKvScanRequest,\n) -> Result<BackendKvValuePage, LixError> {\n    let bounds = ScanBounds::new(&request);\n    if pending.is_empty() {\n        return rocksdb_scan_committed_values(db, request, bounds);\n    }\n\n    let mut merged = BTreeMap::new();\n    for item in db.iterator(IteratorMode::From(\n        &bounds.start_encoded,\n        Direction::Forward,\n    )) {\n        let (encoded_key, value) = item.map_err(rocksdb_error)?;\n        let encoded_key = encoded_key.as_ref();\n        if !bounds.contains_encoded(encoded_key) {\n            break;\n        }\n        let logical_key = decode_key(&request.namespace, encoded_key)?;\n        if !key_after_cursor(&request, &logical_key) {\n            continue;\n        }\n        merged.insert(logical_key, value.to_vec());\n    }\n    overlay_pending_values(&mut merged, pending, &request, &bounds)?;\n    Ok(value_page_from_iter(merged, request.limit))\n}\n\nfn rocksdb_scan_entries(\n    db: &DB,\n    pending: &BTreeMap<Vec<u8>, PendingWrite>,\n    request: BackendKvScanRequest,\n) -> Result<BackendKvEntryPage, LixError> {\n    let bounds = ScanBounds::new(&request);\n    if pending.is_empty() {\n        return rocksdb_scan_committed_entries(db, request, bounds);\n    }\n    let mut merged = BTreeMap::new();\n    for item in db.iterator(IteratorMode::From(\n        &bounds.start_encoded,\n        Direction::Forward,\n    )) {\n        let (key, value) = item.map_err(rocksdb_error)?;\n        let key = key.as_ref();\n        if !bounds.contains_encoded(key) {\n            break;\n        }\n        let logical_key = decode_key(&request.namespace, key)?;\n        if !key_after_cursor(&request, &logical_key) {\n            continue;\n        }\n        merged.insert(logical_key, value.to_vec());\n    }\n    overlay_pending_values(&mut merged, pending, &request, &bounds)?;\n    Ok(entry_page_from_iter(merged, request.limit))\n}\n\nstruct ScanBounds {\n    start_encoded: Vec<u8>,\n    end_encoded: Vec<u8>,\n    
namespace_prefix: Vec<u8>,\n}\n\nimpl ScanBounds {\n    fn new(request: &BackendKvScanRequest) -> Self {\n        let start = scan_start_key(request);\n        let start_encoded = encode_key(&request.namespace, &start);\n        let end = scan_end_key(&request.range);\n        let end_encoded = end\n            .as_ref()\n            .map(|end| encode_key(&request.namespace, end))\n            .unwrap_or_else(|| namespace_end_key(&request.namespace));\n        let namespace_prefix = namespace_prefix(&request.namespace);\n        Self {\n            start_encoded,\n            end_encoded,\n            namespace_prefix,\n        }\n    }\n\n    fn contains_encoded(&self, encoded_key: &[u8]) -> bool {\n        encoded_key < self.end_encoded.as_slice()\n            && encoded_key.starts_with(self.namespace_prefix.as_slice())\n    }\n}\n\nfn rocksdb_scan_committed_keys(\n    db: &DB,\n    request: BackendKvScanRequest,\n    bounds: ScanBounds,\n) -> Result<BackendKvKeyPage, LixError> {\n    let mut keys = BytePageBuilder::new();\n    let mut count = 0;\n    let mut resume_after_candidate = None;\n    let mut iter = db.raw_iterator();\n    iter.seek(&bounds.start_encoded);\n    while iter.valid() {\n        let Some(encoded_key) = iter.key() else {\n            break;\n        };\n        if !bounds.contains_encoded(encoded_key) {\n            break;\n        }\n        let logical_key = decode_key(&request.namespace, encoded_key)?;\n        if !key_after_cursor(&request, &logical_key) {\n            iter.next();\n            continue;\n        }\n        if count < request.limit {\n            resume_after_candidate = Some(logical_key.clone());\n            keys.push(&logical_key);\n        }\n        count += 1;\n        if count > request.limit {\n            break;\n        }\n        iter.next();\n    }\n    iter.status().map_err(rocksdb_error)?;\n    let resume_after = (count > request.limit)\n        .then_some(resume_after_candidate)\n        .flatten();\n    
Ok(BackendKvKeyPage {\n        keys: keys.finish(),\n        resume_after,\n    })\n}\n\nfn rocksdb_scan_committed_values(\n    db: &DB,\n    request: BackendKvScanRequest,\n    bounds: ScanBounds,\n) -> Result<BackendKvValuePage, LixError> {\n    let mut values = BytePageBuilder::new();\n    let mut count = 0;\n    let mut resume_after_candidate = None;\n    for item in db.iterator(IteratorMode::From(\n        &bounds.start_encoded,\n        Direction::Forward,\n    )) {\n        let (encoded_key, value) = item.map_err(rocksdb_error)?;\n        let encoded_key = encoded_key.as_ref();\n        if !bounds.contains_encoded(encoded_key) {\n            break;\n        }\n        let logical_key = decode_key(&request.namespace, encoded_key)?;\n        if !key_after_cursor(&request, &logical_key) {\n            continue;\n        }\n        if count < request.limit {\n            resume_after_candidate = Some(logical_key);\n            values.push(value.as_ref());\n        }\n        count += 1;\n        if count > request.limit {\n            break;\n        }\n    }\n    let resume_after = (count > request.limit)\n        .then_some(resume_after_candidate)\n        .flatten();\n    Ok(BackendKvValuePage {\n        values: values.finish(),\n        resume_after,\n    })\n}\n\nfn rocksdb_scan_committed_entries(\n    db: &DB,\n    request: BackendKvScanRequest,\n    bounds: ScanBounds,\n) -> Result<BackendKvEntryPage, LixError> {\n    let mut keys = BytePageBuilder::new();\n    let mut values = BytePageBuilder::new();\n    let mut count = 0;\n    let mut resume_after_candidate = None;\n    for item in db.iterator(IteratorMode::From(\n        &bounds.start_encoded,\n        Direction::Forward,\n    )) {\n        let (key, value) = item.map_err(rocksdb_error)?;\n        let key = key.as_ref();\n        if !bounds.contains_encoded(key) {\n            break;\n        }\n        let logical_key = decode_key(&request.namespace, key)?;\n        if !key_after_cursor(&request, 
&logical_key) {\n            continue;\n        }\n        if count < request.limit {\n            resume_after_candidate = Some(logical_key.clone());\n            keys.push(&logical_key);\n            values.push(value.as_ref());\n        }\n        count += 1;\n        if count > request.limit {\n            break;\n        }\n    }\n    let resume_after = (count > request.limit)\n        .then_some(resume_after_candidate)\n        .flatten();\n    Ok(BackendKvEntryPage {\n        keys: keys.finish(),\n        values: values.finish(),\n        resume_after,\n    })\n}\n\nfn overlay_pending_values(\n    merged: &mut BTreeMap<Vec<u8>, Vec<u8>>,\n    pending: &BTreeMap<Vec<u8>, PendingWrite>,\n    request: &BackendKvScanRequest,\n    bounds: &ScanBounds,\n) -> Result<(), LixError> {\n    for (encoded_key, write) in\n        pending.range(bounds.start_encoded.clone()..bounds.end_encoded.clone())\n    {\n        if !bounds.contains_encoded(encoded_key) {\n            continue;\n        }\n        let logical_key = decode_key(&request.namespace, encoded_key)?;\n        if !key_in_range(&logical_key, &request.range) || !key_after_cursor(request, &logical_key) {\n            continue;\n        }\n        match write {\n            PendingWrite::Put(value) => {\n                merged.insert(logical_key, value.clone());\n            }\n            PendingWrite::Delete => {\n                merged.remove(&logical_key);\n            }\n        }\n    }\n    Ok(())\n}\n\nfn key_page_from_iter(\n    keys_iter: impl IntoIterator<Item = Vec<u8>>,\n    limit: usize,\n) -> BackendKvKeyPage {\n    let mut keys = BytePageBuilder::new();\n    let mut count = 0;\n    let mut resume_after_candidate = None;\n    for key in keys_iter {\n        if count < limit {\n            resume_after_candidate = Some(key.clone());\n            keys.push(&key);\n        }\n        count += 1;\n        if count > limit {\n            break;\n        }\n    }\n    let resume_after = (count > 
limit).then_some(resume_after_candidate).flatten();\n    BackendKvKeyPage {\n        keys: keys.finish(),\n        resume_after,\n    }\n}\n\nfn value_page_from_iter(\n    values_iter: impl IntoIterator<Item = (Vec<u8>, Vec<u8>)>,\n    limit: usize,\n) -> BackendKvValuePage {\n    let mut values = BytePageBuilder::new();\n    let mut count = 0;\n    let mut resume_after_candidate = None;\n    for (key, value) in values_iter {\n        if count < limit {\n            resume_after_candidate = Some(key);\n            values.push(&value);\n        }\n        count += 1;\n        if count > limit {\n            break;\n        }\n    }\n    let resume_after = (count > limit).then_some(resume_after_candidate).flatten();\n    BackendKvValuePage {\n        values: values.finish(),\n        resume_after,\n    }\n}\n\nfn entry_page_from_iter(\n    entries_iter: impl IntoIterator<Item = (Vec<u8>, Vec<u8>)>,\n    limit: usize,\n) -> BackendKvEntryPage {\n    let mut keys = BytePageBuilder::new();\n    let mut values = BytePageBuilder::new();\n    let mut count = 0;\n    let mut resume_after_candidate = None;\n    for (key, value) in entries_iter {\n        if count < limit {\n            resume_after_candidate = Some(key.clone());\n            keys.push(&key);\n            values.push(&value);\n        }\n        count += 1;\n        if count > limit {\n            break;\n        }\n    }\n    let resume_after = (count > limit).then_some(resume_after_candidate).flatten();\n    BackendKvEntryPage {\n        keys: keys.finish(),\n        values: values.finish(),\n        resume_after,\n    }\n}\n\nfn scan_start_key(request: &BackendKvScanRequest) -> Vec<u8> {\n    let range_start = match &request.range {\n        BackendKvScanRange::Prefix(prefix) => prefix.as_slice(),\n        BackendKvScanRange::Range { start, .. 
} => start.as_slice(),\n    };\n    match request.after.as_deref() {\n        Some(after) if after > range_start => after.to_vec(),\n        _ => range_start.to_vec(),\n    }\n}\n\nfn scan_end_key(range: &BackendKvScanRange) -> Option<Vec<u8>> {\n    match range {\n        BackendKvScanRange::Prefix(prefix) => prefix_end(prefix),\n        BackendKvScanRange::Range { end, .. } => Some(end.clone()),\n    }\n}\n\nfn key_in_range(key: &[u8], range: &BackendKvScanRange) -> bool {\n    match range {\n        BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix),\n        BackendKvScanRange::Range { start, end } => key >= start.as_slice() && key < end.as_slice(),\n    }\n}\n\nfn key_after_cursor(request: &BackendKvScanRequest, key: &[u8]) -> bool {\n    request.after.as_deref().is_none_or(|after| key > after)\n}\n\nfn encode_key(namespace: &str, key: &[u8]) -> Vec<u8> {\n    let namespace = namespace.as_bytes();\n    let len = u32::try_from(namespace.len()).expect(\"bench namespace fits u32\");\n    let mut encoded = Vec::with_capacity(4 + namespace.len() + key.len());\n    encoded.extend_from_slice(&len.to_be_bytes());\n    encoded.extend_from_slice(namespace);\n    encoded.extend_from_slice(key);\n    encoded\n}\n\nfn namespace_prefix(namespace: &str) -> Vec<u8> {\n    encode_key(namespace, &[])\n}\n\nfn namespace_end_key(namespace: &str) -> Vec<u8> {\n    let mut end = namespace_prefix(namespace);\n    end.push(0xFF);\n    end\n}\n\nfn decode_key(namespace: &str, encoded: &[u8]) -> Result<Vec<u8>, LixError> {\n    let prefix = namespace_prefix(namespace);\n    encoded\n        .strip_prefix(prefix.as_slice())\n        .map(|key| key.to_vec())\n        .ok_or_else(|| LixError::new(\"LIX_ERROR_UNKNOWN\", \"rocksdb bench key prefix mismatch\"))\n}\n\nfn prefix_end(prefix: &[u8]) -> Option<Vec<u8>> {\n    let mut end = prefix.to_vec();\n    for index in (0..end.len()).rev() {\n        if end[index] != u8::MAX {\n            end[index] += 1;\n            
end.truncate(index + 1);\n            return Some(end);\n        }\n    }\n    None\n}\n\nfn rocksdb_error(error: rocksdb::Error) -> LixError {\n    LixError::new(\n        \"LIX_ERROR_UNKNOWN\",\n        format!(\"rocksdb bench backend: {error}\"),\n    )\n}\n\nfn io_error(error: std::io::Error) -> LixError {\n    LixError::new(\n        \"LIX_ERROR_UNKNOWN\",\n        format!(\"rocksdb bench backend: {error}\"),\n    )\n}\n"
  },
  {
    "path": "packages/engine/benches/storage/sqlite_backend.rs",
    "content": "use std::sync::{Arc, Mutex};\n\nuse async_trait::async_trait;\nuse lix_engine::{\n    Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest,\n    BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch,\n    BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats,\n    BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, LixError,\n};\nuse rusqlite::{params, Connection, OptionalExtension};\nuse std::path::{Path, PathBuf};\nuse tempfile::TempDir;\n\n#[derive(Clone)]\npub(crate) struct SqliteBenchBackend {\n    connection: Arc<Mutex<Connection>>,\n    #[allow(dead_code)]\n    path: Option<Arc<PathBuf>>,\n    _temp_dir: Option<Arc<TempDir>>,\n}\n\npub(crate) struct SqliteBenchTransaction {\n    connection: Arc<Mutex<Connection>>,\n    finalized: bool,\n}\n\nimpl SqliteBenchBackend {\n    pub(crate) fn tempfile() -> Result<Self, LixError> {\n        let temp_dir = Arc::new(TempDir::new().map_err(|error| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"sqlite bench tempdir: {error}\"),\n            )\n        })?);\n        let path = Arc::new(temp_dir.path().join(\"bench.sqlite\"));\n        let connection = Connection::open(path.as_path()).map_err(sqlite_error)?;\n        configure_connection(&connection)?;\n        Ok(Self {\n            connection: Arc::new(Mutex::new(connection)),\n            path: Some(path),\n            _temp_dir: Some(temp_dir),\n        })\n    }\n\n    #[allow(dead_code)]\n    pub(crate) fn path(&self) -> Option<&Path> {\n        self.path.as_deref().map(PathBuf::as_path)\n    }\n\n    fn lock_connection(&self) -> Result<std::sync::MutexGuard<'_, Connection>, LixError> {\n        self.connection\n            .lock()\n            .map_err(|_| LixError::new(\"LIX_ERROR_UNKNOWN\", \"sqlite bench connection poisoned\"))\n    }\n}\n\nfn configure_connection(connection: &Connection) 
-> Result<(), LixError> {\n    connection\n        .execute_batch(\n            \"\n            PRAGMA journal_mode = WAL;\n            PRAGMA synchronous = NORMAL;\n            PRAGMA temp_store = MEMORY;\n            PRAGMA foreign_keys = ON;\n            CREATE TABLE kv (\n                namespace TEXT NOT NULL,\n                key BLOB NOT NULL,\n                value BLOB NOT NULL,\n                PRIMARY KEY (namespace, key)\n            ) WITHOUT ROWID;\n            \",\n        )\n        .map_err(sqlite_error)?;\n    Ok(())\n}\n\n#[async_trait]\nimpl Backend for SqliteBenchBackend {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        let connection = self.lock_connection()?;\n        connection\n            .execute_batch(\"BEGIN DEFERRED\")\n            .map_err(sqlite_error)?;\n        drop(connection);\n        Ok(Box::new(SqliteBenchTransaction {\n            connection: Arc::clone(&self.connection),\n            finalized: false,\n        }))\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        let connection = self.lock_connection()?;\n        connection\n            .execute_batch(\"BEGIN IMMEDIATE\")\n            .map_err(sqlite_error)?;\n        drop(connection);\n        Ok(Box::new(SqliteBenchTransaction {\n            connection: Arc::clone(&self.connection),\n            finalized: false,\n        }))\n    }\n}\n\n#[async_trait]\nimpl BackendReadTransaction for SqliteBenchTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        let connection = self.lock_connection()?;\n        let mut statement = connection\n            .prepare_cached(\"SELECT value FROM kv WHERE namespace = ?1 AND key = ?2\")\n            .map_err(sqlite_error)?;\n        
let mut groups = Vec::with_capacity(request.groups.len());\n        for group in request.groups {\n            let namespace = group.namespace.clone();\n            let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0);\n            let mut present = Vec::with_capacity(group.keys.len());\n            for key in group.keys {\n                let value = statement\n                    .query_row(params![namespace.as_str(), key.as_slice()], |row| {\n                        row.get::<_, Vec<u8>>(0)\n                    })\n                    .optional()\n                    .map_err(sqlite_error)?;\n                if let Some(value) = value {\n                    values.push(value);\n                    present.push(true);\n                } else {\n                    values.push([]);\n                    present.push(false);\n                }\n            }\n            groups.push(BackendKvValueGroup::new(\n                namespace,\n                values.finish(),\n                present,\n            ));\n        }\n        Ok(BackendKvValueBatch { groups })\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        let connection = self.lock_connection()?;\n        let mut statement = connection\n            .prepare_cached(\"SELECT 1 FROM kv WHERE namespace = ?1 AND key = ?2\")\n            .map_err(sqlite_error)?;\n        let mut groups = Vec::with_capacity(request.groups.len());\n        for group in request.groups {\n            let namespace = group.namespace.clone();\n            let mut exists = Vec::with_capacity(group.keys.len());\n            for key in group.keys {\n                exists.push(\n                    statement\n                        .query_row(params![namespace.as_str(), key.as_slice()], |_| Ok(()))\n                        .optional()\n                        .map_err(sqlite_error)?\n                        
.is_some(),\n                );\n            }\n            groups.push(BackendKvExistsGroup { namespace, exists });\n        }\n        Ok(BackendKvExistsBatch { groups })\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        let connection = self.lock_connection()?;\n        sqlite_scan_keys(&connection, request)\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        let connection = self.lock_connection()?;\n        sqlite_scan_values(&connection, request)\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        let connection = self.lock_connection()?;\n        sqlite_scan_entries(&connection, request)\n    }\n\n    async fn rollback(mut self: Box<Self>) -> Result<(), LixError> {\n        self.lock_connection()?\n            .execute_batch(\"ROLLBACK\")\n            .map_err(sqlite_error)?;\n        self.finalized = true;\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl BackendWriteTransaction for SqliteBenchTransaction {\n    async fn write_kv_batch(\n        &mut self,\n        batch: BackendKvWriteBatch,\n    ) -> Result<BackendKvWriteStats, LixError> {\n        let connection = self.lock_connection()?;\n        let mut put_statement = connection\n            .prepare_cached(\n                \"\n                INSERT INTO kv (namespace, key, value)\n                VALUES (?1, ?2, ?3)\n                ON CONFLICT(namespace, key) DO UPDATE SET value = excluded.value\n                \",\n            )\n            .map_err(sqlite_error)?;\n        let mut delete_statement = connection\n            .prepare_cached(\"DELETE FROM kv WHERE namespace = ?1 AND key = ?2\")\n            .map_err(sqlite_error)?;\n        let mut stats = BackendKvWriteStats::default();\n        
for group in batch.groups {\n            let namespace = group.namespace().to_string();\n            for index in 0..group.put_count() {\n                let key = group.put_key(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put key\")\n                })?;\n                let value = group.put_value(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put value\")\n                })?;\n                put_statement\n                    .execute(params![namespace.as_str(), key, value])\n                    .map_err(sqlite_error)?;\n                stats.puts += 1;\n                stats.bytes_written += key.len() + value.len();\n            }\n            for index in 0..group.delete_count() {\n                let key = group.delete_key(index).ok_or_else(|| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        \"backend write batch missing delete key\",\n                    )\n                })?;\n                delete_statement\n                    .execute(params![namespace.as_str(), key])\n                    .map_err(sqlite_error)?;\n                stats.deletes += 1;\n                stats.bytes_written += key.len();\n            }\n        }\n        Ok(stats)\n    }\n\n    async fn commit(mut self: Box<Self>) -> Result<(), LixError> {\n        self.lock_connection()?\n            .execute_batch(\"COMMIT\")\n            .map_err(sqlite_error)?;\n        self.finalized = true;\n        Ok(())\n    }\n}\n\nimpl SqliteBenchTransaction {\n    fn lock_connection(&self) -> Result<std::sync::MutexGuard<'_, Connection>, LixError> {\n        self.connection\n            .lock()\n            .map_err(|_| LixError::new(\"LIX_ERROR_UNKNOWN\", \"sqlite bench connection poisoned\"))\n    }\n}\n\nimpl Drop for SqliteBenchTransaction {\n    fn drop(&mut self) {\n        if !self.finalized {\n   
         if let Ok(connection) = self.connection.lock() {\n                let _ = connection.execute_batch(\"ROLLBACK\");\n            }\n        }\n    }\n}\n\nfn sqlite_scan_keys(\n    connection: &Connection,\n    request: BackendKvScanRequest,\n) -> Result<BackendKvKeyPage, LixError> {\n    let start = scan_start_key(&request);\n    let end = scan_end_key(&request.range);\n    let limit = sqlite_fetch_limit(request.limit)?;\n    let mut statement = connection\n        .prepare_cached(\n            \"\n            SELECT key FROM kv\n            WHERE namespace = ?1\n              AND (?2 IS NULL OR key > ?2)\n              AND key >= ?3\n              AND (?4 IS NULL OR key < ?4)\n            ORDER BY key\n            LIMIT ?5\n            \",\n        )\n        .map_err(sqlite_error)?;\n    let mut cursor = statement\n        .query(params![\n            request.namespace.as_str(),\n            request.after.as_deref(),\n            start.as_slice(),\n            end.as_deref(),\n            limit,\n        ])\n        .map_err(sqlite_error)?;\n    let mut keys = BytePageBuilder::new();\n    let mut count = 0;\n    let mut resume_after_candidate = None;\n    while let Some(row) = cursor.next().map_err(sqlite_error)? 
{\n        let key = row.get::<_, Vec<u8>>(0).map_err(sqlite_error)?;\n        if count < request.limit {\n            resume_after_candidate = Some(key.clone());\n            keys.push(&key);\n        }\n        count += 1;\n    }\n    let resume_after = (count > request.limit)\n        .then_some(resume_after_candidate)\n        .flatten();\n    Ok(BackendKvKeyPage {\n        keys: keys.finish(),\n        resume_after,\n    })\n}\n\nfn sqlite_scan_values(\n    connection: &Connection,\n    request: BackendKvScanRequest,\n) -> Result<BackendKvValuePage, LixError> {\n    let start = scan_start_key(&request);\n    let end = scan_end_key(&request.range);\n    let limit = sqlite_fetch_limit(request.limit)?;\n    let mut statement = connection\n        .prepare_cached(\n            \"\n            SELECT key, value FROM kv\n            WHERE namespace = ?1\n              AND (?2 IS NULL OR key > ?2)\n              AND key >= ?3\n              AND (?4 IS NULL OR key < ?4)\n            ORDER BY key\n            LIMIT ?5\n            \",\n        )\n        .map_err(sqlite_error)?;\n    let mut cursor = statement\n        .query(params![\n            request.namespace.as_str(),\n            request.after.as_deref(),\n            start.as_slice(),\n            end.as_deref(),\n            limit,\n        ])\n        .map_err(sqlite_error)?;\n    let mut values = BytePageBuilder::new();\n    let mut count = 0;\n    let mut resume_after_candidate = None;\n    while let Some(row) = cursor.next().map_err(sqlite_error)? 
{\n        if count < request.limit {\n            resume_after_candidate = Some(row.get::<_, Vec<u8>>(0).map_err(sqlite_error)?);\n            let value = row.get::<_, Vec<u8>>(1).map_err(sqlite_error)?;\n            values.push(&value);\n        }\n        count += 1;\n    }\n    let resume_after = (count > request.limit)\n        .then_some(resume_after_candidate)\n        .flatten();\n    Ok(BackendKvValuePage {\n        values: values.finish(),\n        resume_after,\n    })\n}\n\nfn sqlite_scan_entries(\n    connection: &Connection,\n    request: BackendKvScanRequest,\n) -> Result<BackendKvEntryPage, LixError> {\n    let start = scan_start_key(&request);\n    let end = scan_end_key(&request.range);\n    let limit = sqlite_fetch_limit(request.limit)?;\n    let mut statement = connection\n        .prepare_cached(\n            \"\n            SELECT key, value FROM kv\n            WHERE namespace = ?1\n              AND (?2 IS NULL OR key > ?2)\n              AND key >= ?3\n              AND (?4 IS NULL OR key < ?4)\n            ORDER BY key\n            LIMIT ?5\n            \",\n        )\n        .map_err(sqlite_error)?;\n    let mut cursor = statement\n        .query(params![\n            request.namespace.as_str(),\n            request.after.as_deref(),\n            start.as_slice(),\n            end.as_deref(),\n            limit,\n        ])\n        .map_err(sqlite_error)?;\n    let mut keys = BytePageBuilder::new();\n    let mut values = BytePageBuilder::new();\n    let mut count = 0;\n    let mut resume_after_candidate = None;\n    while let Some(row) = cursor.next().map_err(sqlite_error)? 
{\n        let key = row.get::<_, Vec<u8>>(0).map_err(sqlite_error)?;\n        if count < request.limit {\n            let value = row.get::<_, Vec<u8>>(1).map_err(sqlite_error)?;\n            resume_after_candidate = Some(key.clone());\n            keys.push(&key);\n            values.push(&value);\n        }\n        count += 1;\n    }\n    let resume_after = (count > request.limit)\n        .then_some(resume_after_candidate)\n        .flatten();\n    Ok(BackendKvEntryPage {\n        keys: keys.finish(),\n        values: values.finish(),\n        resume_after,\n    })\n}\n\nfn sqlite_fetch_limit(limit: usize) -> Result<i64, LixError> {\n    if limit == usize::MAX {\n        return Ok(i64::MAX);\n    }\n    let fetch_limit = limit.checked_add(1).ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"storage scan limit overflow while checking for next page\",\n        )\n    })?;\n    i64::try_from(fetch_limit).map_err(|_| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"storage scan limit does not fit into sqlite i64\",\n        )\n    })\n}\n\nfn scan_start_key(request: &BackendKvScanRequest) -> Vec<u8> {\n    let range_start = match &request.range {\n        BackendKvScanRange::Prefix(prefix) => prefix.as_slice(),\n        BackendKvScanRange::Range { start, .. } => start.as_slice(),\n    };\n    match request.after.as_deref() {\n        Some(after) if after > range_start => after.to_vec(),\n        _ => range_start.to_vec(),\n    }\n}\n\nfn scan_end_key(range: &BackendKvScanRange) -> Option<Vec<u8>> {\n    match range {\n        BackendKvScanRange::Prefix(prefix) => prefix_end(prefix),\n        BackendKvScanRange::Range { end, .. 
} => Some(end.clone()),\n    }\n}\n\nfn prefix_end(prefix: &[u8]) -> Option<Vec<u8>> {\n    let mut end = prefix.to_vec();\n    for index in (0..end.len()).rev() {\n        if end[index] != u8::MAX {\n            end[index] += 1;\n            end.truncate(index + 1);\n            return Some(end);\n        }\n    }\n    None\n}\n\nfn sqlite_error(error: rusqlite::Error) -> LixError {\n    LixError::new(\n        \"LIX_ERROR_UNKNOWN\",\n        format!(\"sqlite bench backend: {error}\"),\n    )\n}\n"
  },
  {
    "path": "packages/engine/benches/storage/storage_api.rs",
    "content": "use std::sync::Arc;\n\nuse criterion::{black_box, BatchSize, Criterion};\nuse lix_engine::storage_bench::{self, StorageApiFixture, StorageBenchSelectivity};\nuse lix_engine::Backend;\nuse tokio::runtime::Runtime;\n\nuse crate::{Args, BenchBackend, RocksDbBenchBackend, SqliteBenchBackend};\n\ntype BackendFactory = fn() -> Arc<dyn Backend + Send + Sync>;\n\n#[derive(Clone, Copy)]\nstruct BackendProfile {\n    name: &'static str,\n    create: BackendFactory,\n}\n\npub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) {\n    for profile in [\n        BackendProfile {\n            name: \"in_memory\",\n            create: in_memory_backend,\n        },\n        BackendProfile {\n            name: \"sqlite_tempfile\",\n            create: sqlite_tempfile_backend,\n        },\n        BackendProfile {\n            name: \"rocksdb_tempdir\",\n            create: rocksdb_backend,\n        },\n    ] {\n        bench_backend(c, runtime, args, profile);\n    }\n}\n\nfn bench_backend(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) {\n    let mut group = c.benchmark_group(format!(\"storage/api/{}\", profile.name));\n\n    for rows in [1usize, 10, 100, 1_000, args.rows] {\n        group.bench_function(\n            format!(\"write_kv_batch_put/{rows_label}\", rows_label = label(rows)),\n            |b| {\n                b.iter_batched(\n                    || (profile.create)(),\n                    |backend| {\n                        black_box(\n                            runtime\n                                .block_on(storage_bench::storage_api_write_kv_batch_puts(\n                                    backend, rows,\n                                ))\n                                .expect(\"storage/api write_kv_batch_put succeeds\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n    }\n\n    
group.bench_function(\"write_kv_batch_mixed_put_delete/10k\", |b| {\n        b.iter_batched(\n            || (profile.create)(),\n            |backend| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_write_kv_batch_mixed_put_delete(\n                            backend, args.rows,\n                        ))\n                        .expect(\"storage/api write_kv_batch_mixed_put_delete succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"write_kv_batch_multi_namespace/10k\", |b| {\n        b.iter_batched(\n            || (profile.create)(),\n            |backend| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_write_kv_batch_multi_namespace(\n                            backend, args.rows,\n                        ))\n                        .expect(\"storage/api write_kv_batch_multi_namespace succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"write_kv_batch_duplicate_keys/10k\", |b| {\n        b.iter_batched(\n            || (profile.create)(),\n            |backend| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_write_kv_batch_duplicate_keys(\n                            backend, args.rows,\n                        ))\n                        .expect(\"storage/api write_kv_batch_duplicate_keys succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    for (label, rows, value_bytes) in [\n        (\"64b\", args.rows, 64usize),\n        (\"1k\", args.rows, 1_024),\n        (\"16k\", 1_000, 16 * 1024),\n        (\"128k\", 100, 128 * 1024),\n    ] {\n        group.bench_function(format!(\"write_kv_batch_value_size/{label}\"), |b| {\n       
     b.iter_batched(\n                || (profile.create)(),\n                |backend| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::storage_api_write_kv_batch_value_size(\n                                backend,\n                                rows,\n                                value_bytes,\n                            ))\n                            .expect(\"storage/api write_kv_batch_value_size succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    for rows in [1usize, 100, args.rows] {\n        group.bench_function(\n            format!(\n                \"transaction_write_and_commit/{rows_label}\",\n                rows_label = label(rows)\n            ),\n            |b| {\n                b.iter_batched(\n                    || (profile.create)(),\n                    |backend| {\n                        black_box(\n                            runtime\n                                .block_on(storage_bench::storage_api_write_and_commit(\n                                    backend, rows,\n                                ))\n                                .expect(\"storage/api transaction_write_and_commit succeeds\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n    }\n\n    group.bench_function(\"transaction_rollback_after_write/10k\", |b| {\n        b.iter_batched(\n            || (profile.create)(),\n            |backend| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_rollback_after_write(\n                            backend, args.rows,\n                        ))\n                        .expect(\"storage/api transaction_rollback_after_write succeeds\"),\n                )\n            },\n            
BatchSize::LargeInput,\n        )\n    });\n\n    for reads in [100usize, 1_000, args.rows] {\n        group.bench_function(\n            format!(\"get_values_hit/{reads_label}\", reads_label = label(reads)),\n            |b| {\n                b.iter_batched(\n                    || prepare_read(runtime, args.rows, profile.create),\n                    |fixture| {\n                        black_box(\n                            runtime\n                                .block_on(storage_bench::storage_api_get_values_hits_prepared(\n                                    &fixture, reads,\n                                ))\n                                .expect(\"storage/api get_values_hit succeeds\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"exists_many/{reads_label}\", reads_label = label(reads)),\n            |b| {\n                b.iter_batched(\n                    || prepare_read(runtime, args.rows, profile.create),\n                    |fixture| {\n                        black_box(\n                            runtime\n                                .block_on(storage_bench::storage_api_exists_many_prepared(\n                                    &fixture, reads,\n                                ))\n                                .expect(\"storage/api exists_many succeeds\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\"get_values_miss/{reads_label}\", reads_label = label(reads)),\n            |b| {\n                b.iter_batched(\n                    || prepare_read(runtime, args.rows, profile.create),\n                    |fixture| {\n                        black_box(\n                            runtime\n                                
.block_on(storage_bench::storage_api_get_values_misses_prepared(\n                                    &fixture, reads,\n                                ))\n                                .expect(\"storage/api get_values_miss succeeds\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\n                \"get_values_mixed_hit_miss/{reads_label}\",\n                reads_label = label(reads)\n            ),\n            |b| {\n                b.iter_batched(\n                    || prepare_read(runtime, args.rows, profile.create),\n                    |fixture| {\n                        black_box(\n                            runtime\n                                .block_on(\n                                    storage_bench::storage_api_get_values_mixed_hit_miss_prepared(\n                                        &fixture, reads,\n                                    ),\n                                )\n                                .expect(\"storage/api get_values_mixed_hit_miss succeeds\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        group.bench_function(\n            format!(\n                \"get_values_duplicate_keys/{reads_label}\",\n                reads_label = label(reads)\n            ),\n            |b| {\n                b.iter_batched(\n                    || prepare_read(runtime, args.rows, profile.create),\n                    |fixture| {\n                        black_box(\n                            runtime\n                                .block_on(\n                                    storage_bench::storage_api_get_values_duplicate_keys_prepared(\n                                        &fixture, reads,\n                                    ),\n                                )\n          
                      .expect(\"storage/api get_values_duplicate_keys succeeds\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n    }\n\n    group.bench_function(\"get_values_multi_namespace/10k\", |b| {\n        b.iter_batched(\n            || (profile.create)(),\n            |backend| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_get_values_multi_namespace(\n                            backend, args.rows,\n                        ))\n                        .expect(\"storage/api get_values_multi_namespace succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    for limit in [100usize, 1_000, args.rows] {\n        group.bench_function(\n            format!(\"scan_keys_prefix/{limit_label}\", limit_label = label(limit)),\n            |b| {\n                b.iter_batched(\n                    || prepare_read(runtime, args.rows, profile.create),\n                    |fixture| {\n                        black_box(\n                            runtime\n                                .block_on(storage_bench::storage_api_scan_keys_prefix_prepared(\n                                    &fixture, limit,\n                                ))\n                                .expect(\"storage/api scan_keys_prefix succeeds\"),\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n    }\n\n    group.bench_function(\"scan_keys_after_pages/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args.rows, profile.create),\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_scan_keys_after_pages_prepared(\n                            &fixture, 100,\n            
            ))\n                        .expect(\"storage/api scan_keys_after_pages succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"scan_keys_small_limit_of_large_range/100_of_10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args.rows, profile.create),\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_scan_keys_prefix_prepared(\n                            &fixture, 100,\n                        ))\n                        .expect(\"storage/api scan_keys_small_limit_of_large_range succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"scan_keys_empty_range/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args.rows, profile.create),\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_scan_keys_empty_range_prepared(\n                            &fixture,\n                        ))\n                        .expect(\"storage/api scan_keys_empty_range succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    for (label, selectivity) in [\n        (\"1pct\", StorageBenchSelectivity::Percent1),\n        (\"10pct\", StorageBenchSelectivity::Percent10),\n        (\"100pct\", StorageBenchSelectivity::Percent100),\n    ] {\n        group.bench_function(format!(\"scan_keys_prefix_selectivity_{label}/10k\"), |b| {\n            b.iter_batched(\n                || prepare_selective_scan(runtime, args.rows, selectivity, profile.create),\n                |fixture| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                
storage_bench::storage_api_scan_keys_selective_prefix_prepared(\n                                    &fixture,\n                                    selectivity,\n                                ),\n                            )\n                            .expect(\"storage/api scan_keys_prefix_selectivity succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    group.bench_function(\"transaction_commit_empty\", |b| {\n        b.iter_batched(\n            || (profile.create)(),\n            |backend| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::storage_api_transaction_commit_empty(backend))\n                        .expect(\"storage/api transaction_commit_empty succeeds\"),\n                )\n            },\n            BatchSize::SmallInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn prepare_read(\n    runtime: &Runtime,\n    rows: usize,\n    create_backend: BackendFactory,\n) -> StorageApiFixture {\n    let backend = create_backend();\n    runtime\n        .block_on(storage_bench::prepare_storage_api_read(backend, rows))\n        .expect(\"prepare storage/api read fixture\")\n}\n\nfn prepare_selective_scan(\n    runtime: &Runtime,\n    rows: usize,\n    selectivity: StorageBenchSelectivity,\n    create_backend: BackendFactory,\n) -> StorageApiFixture {\n    let backend = create_backend();\n    runtime\n        .block_on(storage_bench::prepare_storage_api_selective_scan(\n            backend,\n            rows,\n            selectivity,\n        ))\n        .expect(\"prepare storage/api selective scan fixture\")\n}\n\nfn in_memory_backend() -> Arc<dyn Backend + Send + Sync> {\n    BenchBackend::new()\n}\n\nfn sqlite_tempfile_backend() -> Arc<dyn Backend + Send + Sync> {\n    Arc::new(SqliteBenchBackend::tempfile().expect(\"create sqlite tempfile bench backend\"))\n}\n\nfn rocksdb_backend() -> Arc<dyn Backend + 
Send + Sync> {\n    Arc::new(RocksDbBenchBackend::new().expect(\"create rocksdb bench backend\"))\n}\n\nfn label(rows: usize) -> String {\n    if rows >= 1_000 {\n        format!(\"{}k\", rows / 1_000)\n    } else {\n        rows.to_string()\n    }\n}\n"
  },
  {
    "path": "packages/engine/benches/storage/tracked_state.rs",
    "content": "use lix_engine::storage_bench::{\n    self, StorageBenchConfig, StorageBenchKeyPattern, StorageBenchSelectivity,\n    StorageBenchUpdateFraction,\n};\n\nuse crate::{Args, BenchBackend};\nuse criterion::{black_box, BatchSize, Criterion};\nuse tokio::runtime::Runtime;\n\npub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) {\n    let mut group = c.benchmark_group(\"storage/tracked_state\");\n    group.bench_function(\"write_root/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_tracked_state_write_root(config(\n                        &args,\n                    )))\n                    .expect(\"prepare tracked_state/write_root\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_write_root_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state/write_root succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"read_point_hit/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_read_point_hit_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state/read_point_hit succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"read_point_miss/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, 
fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_read_point_miss_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state/read_point_miss succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"scan_all/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_all_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state/scan_all succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"scan_keys_only/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_keys_only_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state/scan_keys_only succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"scan_headers_only/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_headers_only_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state/scan_headers_only succeeds\"),\n                )\n            
},\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"scan_full_rows/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_full_rows_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state/scan_full_rows succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    for (label, bytes, rows, row_label) in\n        [(\"1k\", 1024, 10_000, \"10k\"), (\"16k\", 16 * 1024, 1_000, \"1k\")]\n    {\n        let config = config(&args)\n            .with_state_payload_bytes(bytes)\n            .with_rows(rows);\n        let name = format!(\"scan_keys_only_payload_{label}/{row_label}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_read_with(runtime, args, config),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_scan_keys_only_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/scan_keys_only payload succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n        let name = format!(\"scan_headers_only_payload_{label}/{row_label}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_read_with(runtime, args, config),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_scan_headers_only_prepared(\n                                
&backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/scan_headers_only payload succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n        let name = format!(\"scan_full_rows_payload_{label}/{row_label}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_read_with(runtime, args, config),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_scan_full_rows_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/scan_full_rows payload succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    group.bench_function(\"scan_schema/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_schema_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state/scan_schema succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"scan_file/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_scan_file_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state/scan_file succeeds\"),\n                )\n            },\n            
BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"update_existing/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_tracked_state_update(\n                        &backend,\n                        config(&args),\n                    ))\n                    .expect(\"prepare tracked_state/update_existing\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_update_existing_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state/update_existing succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    for rows in [1, 10, 100, 1_000] {\n        let name = format!(\"write_root/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_tracked_state_write_root(\n                            config(&args).with_rows(rows),\n                        ))\n                        .expect(\"prepare tracked_state/write_root batch\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_write_root_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/write_root batch succeeds\"),\n                    )\n                },\n                
BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, bytes, rows) in [\n        (\"small\", 0, 10_000),\n        (\"1k\", 1024, 10_000),\n        (\"16k\", 16 * 1024, 1_000),\n        (\"128k\", 128 * 1024, 100),\n    ] {\n        let name = format!(\"write_root_payload_{label}/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_tracked_state_write_root(\n                            config(&args)\n                                .with_state_payload_bytes(bytes)\n                                .with_rows(rows),\n                        ))\n                        .expect(\"prepare tracked_state/write_root payload\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_write_root_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/write_root payload succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, key_pattern) in [\n        (\"sequential_keys\", StorageBenchKeyPattern::Sequential),\n        (\"random_keys\", StorageBenchKeyPattern::Random),\n    ] {\n        let name = format!(\"write_root_{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_tracked_state_write_root(\n                            config(&args).with_key_pattern(key_pattern),\n                      
  ))\n                        .expect(\"prepare tracked_state/write_root key pattern\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_write_root_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/write_root key pattern succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, selectivity) in [\n        (\"1pct\", StorageBenchSelectivity::Percent1),\n        (\"10pct\", StorageBenchSelectivity::Percent10),\n        (\"100pct\", StorageBenchSelectivity::Percent100),\n    ] {\n        let name = format!(\"scan_schema_selectivity_{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_read_with(runtime, args, config(&args).with_selectivity(selectivity)),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_scan_schema_selective_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/scan_schema selectivity succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, selectivity) in [\n        (\"1pct\", StorageBenchSelectivity::Percent1),\n        (\"10pct\", StorageBenchSelectivity::Percent10),\n    ] {\n        let name = format!(\"scan_file_selectivity_payload_1k_{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    prepare_read_file_selective_with(\n                       
 runtime,\n                        args,\n                        config(&args)\n                            .with_state_payload_bytes(1024)\n                            .with_selectivity(selectivity),\n                    )\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_scan_file_selective_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/scan_file payload selectivity succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, selectivity) in [\n        (\"1pct\", StorageBenchSelectivity::Percent1),\n        (\"10pct\", StorageBenchSelectivity::Percent10),\n        (\"100pct\", StorageBenchSelectivity::Percent100),\n    ] {\n        let name = format!(\"scan_file_selectivity_{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    prepare_read_file_selective_with(\n                        runtime,\n                        args,\n                        config(&args).with_selectivity(selectivity),\n                    )\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_scan_file_selective_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/scan_file selectivity succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, selectivity) in [\n        (\"1pct\", StorageBenchSelectivity::Percent1),\n        (\"10pct\", 
StorageBenchSelectivity::Percent10),\n        (\"100pct\", StorageBenchSelectivity::Percent100),\n    ] {\n        let name = format!(\"scan_file_header_selectivity_{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    prepare_read_file_selective_with(\n                        runtime,\n                        args,\n                        config(&args).with_selectivity(selectivity),\n                    )\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::tracked_state_scan_file_header_selective_prepared(\n                                    &backend, &fixture,\n                                ),\n                            )\n                            .expect(\"tracked_state/scan_file header selectivity succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, selectivity) in [\n        (\"1pct\", StorageBenchSelectivity::Percent1),\n        (\"10pct\", StorageBenchSelectivity::Percent10),\n    ] {\n        let name = format!(\"scan_file_header_selectivity_payload_1k_{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    prepare_read_file_selective_with(\n                        runtime,\n                        args,\n                        config(&args)\n                            .with_state_payload_bytes(1024)\n                            .with_selectivity(selectivity),\n                    )\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::tracked_state_scan_file_header_selective_prepared(\n      
                              &backend, &fixture,\n                                ),\n                            )\n                            .expect(\"tracked_state/scan_file header payload selectivity succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for rows in [1_000, 10_000, 100_000] {\n        let name = format!(\"read_point_hit_100_reads/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_read_with(runtime, args, config(&args).with_rows(rows)),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::tracked_state_read_point_hit_constant_prepared(\n                                    &backend, &fixture, 100,\n                                ),\n                            )\n                            .expect(\"tracked_state/read_point_hit scaling succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, fraction) in [\n        (\n            \"update_10pct_existing\",\n            StorageBenchUpdateFraction::Percent10,\n        ),\n        (\n            \"update_all_existing\",\n            StorageBenchUpdateFraction::Percent100,\n        ),\n    ] {\n        let name = format!(\"{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_tracked_state_update(\n                            &backend,\n                            config(&args).with_update_fraction(fraction),\n                        ))\n                        .expect(\"prepare tracked_state/update shape\");\n              
      (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_update_existing_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/update shape succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for rows in [10_000, 100_000] {\n        let name = format!(\"update_1_existing/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_tracked_state_update_rows(\n                            &backend,\n                            config(&args).with_rows(rows),\n                            1,\n                        ))\n                        .expect(\"prepare tracked_state/update_1_existing\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_update_existing_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/update_1_existing succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, rows, payload_bytes) in [\n        (\"partial_snapshot_update_1_payload_1k\", 100_000, 1024),\n        (\"partial_snapshot_update_1_payload_16k\", 10_000, 16 * 1024),\n    ] {\n        let name = format!(\"{label}/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n   
             || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(\n                            storage_bench::prepare_tracked_state_partial_snapshot_update_rows(\n                                &backend,\n                                config(&args)\n                                    .with_rows(rows)\n                                    .with_state_payload_bytes(payload_bytes),\n                                1,\n                            ),\n                        )\n                        .expect(\"prepare tracked_state/partial_snapshot_update\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_update_existing_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/partial_snapshot_update succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    group.bench_function(\"append_new_child_commit/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_tracked_state_append_child(\n                        &backend,\n                        config(&args),\n                    ))\n                    .expect(\"prepare tracked_state/append_new_child_commit\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_update_existing_prepared(\n                            &backend, &fixture,\n                        ))\n                      
  .expect(\"tracked_state/append_new_child_commit succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    for rows in [10_000, 100_000] {\n        let name = format!(\"append_1_new_child_commit/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_tracked_state_append_child_rows(\n                            &backend,\n                            config(&args).with_rows(rows),\n                            1,\n                        ))\n                        .expect(\"prepare tracked_state/append_1_new_child_commit\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_update_existing_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/append_1_new_child_commit succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, rows) in [(\"delete_1\", 1), (\"delete_10pct\", args.rows / 10)] {\n        let name = format!(\"{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_tracked_state_tombstone_rows(\n                            &backend,\n                            config(&args),\n                            rows,\n                        ))\n                        .expect(\"prepare tracked_state/delete tombstones\");\n                    
(backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_update_existing_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/delete tombstones succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    group.bench_function(\"diff_equal/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_tracked_state_diff_equal(\n                        &backend,\n                        config(&args),\n                    ))\n                    .expect(\"prepare tracked_state/diff_equal\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_diff_commits_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state/diff_equal succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    for (label, changed_rows) in [\n        (\"diff_update_1\", 1),\n        (\"diff_update_10pct\", args.rows / 10),\n        (\"diff_delete_1\", 1),\n        (\"diff_delete_10pct\", args.rows / 10),\n    ] {\n        let name = format!(\"{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let config = config(&args);\n                    let fixture = if label.starts_with(\"diff_delete\") {\n                        runtime\n     
                       .block_on(storage_bench::prepare_tracked_state_diff_tombstone_rows(\n                                &backend,\n                                config,\n                                changed_rows,\n                            ))\n                            .expect(\"prepare tracked_state/diff_delete\")\n                    } else {\n                        runtime\n                            .block_on(storage_bench::prepare_tracked_state_diff_update_rows(\n                                &backend,\n                                config,\n                                changed_rows,\n                            ))\n                            .expect(\"prepare tracked_state/diff_update\")\n                    };\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_diff_commits_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state/diff shape succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    group.finish();\n}\n\npub(crate) fn bench_fast(c: &mut Criterion, runtime: &Runtime, args: Args) {\n    let mut group = c.benchmark_group(\"storage/tracked_state_fast\");\n\n    group.bench_function(\"write_root_payload_small/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_tracked_state_write_root(\n                        config(&args).with_state_payload_bytes(0),\n                    ))\n                    .expect(\"prepare tracked_state_fast/write_root_payload_small\");\n                (backend, fixture)\n            },\n            |(backend, 
fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_write_root_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state_fast/write_root_payload_small succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    for (label, bytes, rows) in [(\"1k\", 1024, 10_000), (\"16k\", 16 * 1024, 1_000)] {\n        let name = format!(\"write_root_payload_{label}/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_tracked_state_write_root(\n                            config(&args)\n                                .with_state_payload_bytes(bytes)\n                                .with_rows(rows),\n                        ))\n                        .expect(\"prepare tracked_state_fast/write_root payload\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::tracked_state_write_root_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"tracked_state_fast/write_root payload succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    for name in [\n        \"scan_keys_only_payload_1k/10k\",\n        \"scan_headers_only_payload_1k/10k\",\n        \"scan_full_rows_payload_1k/10k\",\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_read_with(runtime, args, 
config(&args).with_state_payload_bytes(1024)),\n                |(backend, fixture)| {\n                    let result = match name {\n                        \"scan_keys_only_payload_1k/10k\" => {\n                            runtime.block_on(storage_bench::tracked_state_scan_keys_only_prepared(\n                                &backend, &fixture,\n                            ))\n                        }\n                        \"scan_headers_only_payload_1k/10k\" => runtime.block_on(\n                            storage_bench::tracked_state_scan_headers_only_prepared(\n                                &backend, &fixture,\n                            ),\n                        ),\n                        \"scan_full_rows_payload_1k/10k\" => {\n                            runtime.block_on(storage_bench::tracked_state_scan_full_rows_prepared(\n                                &backend, &fixture,\n                            ))\n                        }\n                        _ => unreachable!(\"tracked_state_fast payload scan name is static\"),\n                    };\n                    black_box(result.expect(\"tracked_state_fast payload scan succeeds\"))\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n\n    group.bench_function(\"scan_file_header_selectivity_payload_1k_10pct/10k\", |b| {\n        b.iter_batched(\n            || {\n                prepare_read_file_selective_with(\n                    runtime,\n                    args,\n                    config(&args)\n                        .with_state_payload_bytes(1024)\n                        .with_selectivity(StorageBenchSelectivity::Percent10),\n                )\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(\n                            storage_bench::tracked_state_scan_file_header_selective_prepared(\n                                &backend, 
&fixture,\n                            ),\n                        )\n                        .expect(\"tracked_state_fast/file header scan succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"read_point_hit_100_reads/10k\", |b| {\n        b.iter_batched(\n            || prepare_read_with(runtime, args, config(&args).with_rows(10_000)),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(\n                            storage_bench::tracked_state_read_point_hit_constant_prepared(\n                                &backend, &fixture, 100,\n                            ),\n                        )\n                        .expect(\"tracked_state_fast/read_point_hit_100_reads succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"update_1_existing/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_tracked_state_update_rows(\n                        &backend,\n                        config(&args).with_rows(10_000),\n                        1,\n                    ))\n                    .expect(\"prepare tracked_state_fast/update_1_existing\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_update_existing_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state_fast/update_1_existing succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    
group.bench_function(\"partial_snapshot_update_1_payload_1k/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(\n                        storage_bench::prepare_tracked_state_partial_snapshot_update_rows(\n                            &backend,\n                            config(&args)\n                                .with_rows(10_000)\n                                .with_state_payload_bytes(1024),\n                            1,\n                        ),\n                    )\n                    .expect(\"prepare tracked_state_fast/partial_snapshot_update\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::tracked_state_update_existing_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"tracked_state_fast/partial_snapshot_update succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.finish();\n}\n\nfn prepare_read(\n    runtime: &Runtime,\n    args: Args,\n) -> (\n    std::sync::Arc<dyn lix_engine::Backend + Send + Sync>,\n    lix_engine::storage_bench::TrackedStateReadFixture,\n) {\n    let backend = BenchBackend::new();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_read(\n            &backend,\n            config(&args),\n        ))\n        .expect(\"prepare tracked_state/read\");\n    (backend, fixture)\n}\n\nfn prepare_read_with(\n    runtime: &Runtime,\n    args: Args,\n    config: StorageBenchConfig,\n) -> (\n    std::sync::Arc<dyn lix_engine::Backend + Send + Sync>,\n    lix_engine::storage_bench::TrackedStateReadFixture,\n) {\n    let _ = args;\n    let backend = BenchBackend::new();\n    let fixture = runtime\n    
    .block_on(storage_bench::prepare_tracked_state_read(&backend, config))\n        .expect(\"prepare tracked_state/read variant\");\n    (backend, fixture)\n}\n\nfn prepare_read_file_selective_with(\n    runtime: &Runtime,\n    args: Args,\n    config: StorageBenchConfig,\n) -> (\n    std::sync::Arc<dyn lix_engine::Backend + Send + Sync>,\n    lix_engine::storage_bench::TrackedStateReadFixture,\n) {\n    let _ = args;\n    let backend = BenchBackend::new();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_tracked_state_read_file_selective(\n            &backend, config,\n        ))\n        .expect(\"prepare tracked_state/read file-selective variant\");\n    (backend, fixture)\n}\n\nfn config(args: &Args) -> StorageBenchConfig {\n    args.config()\n}\n"
  },
  {
    "path": "packages/engine/benches/storage/untracked_state.rs",
    "content": "use lix_engine::storage_bench::{\n    self, StorageBenchConfig, StorageBenchKeyPattern, StorageBenchSelectivity,\n    StorageBenchUpdateFraction,\n};\n\nuse crate::{Args, BenchBackend};\nuse criterion::{black_box, BatchSize, Criterion};\nuse tokio::runtime::Runtime;\n\npub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) {\n    let mut group = c.benchmark_group(\"storage/untracked_state\");\n    group.bench_function(\"write_rows/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_untracked_state_write_rows(config(\n                        &args,\n                    )))\n                    .expect(\"prepare untracked_state/write_rows\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::untracked_state_write_rows_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"untracked_state/write_rows succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"read_point_hit/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::untracked_state_read_point_hit_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"untracked_state/read_point_hit succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"read_point_miss/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n           
 |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::untracked_state_read_point_miss_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"untracked_state/read_point_miss succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"scan_all/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::untracked_state_scan_all_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"untracked_state/scan_all succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"scan_keys_only/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::untracked_state_scan_keys_only_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"untracked_state/scan_keys_only succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"scan_headers_only/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::untracked_state_scan_headers_only_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"untracked_state/scan_headers_only succeeds\"),\n       
         )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"scan_full_rows/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::untracked_state_scan_full_rows_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"untracked_state/scan_full_rows succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    for (label, bytes, rows, row_label) in\n        [(\"1k\", 1024, 10_000, \"10k\"), (\"16k\", 16 * 1024, 1_000, \"1k\")]\n    {\n        let config = config(&args)\n            .with_state_payload_bytes(bytes)\n            .with_rows(rows);\n        let name = format!(\"scan_keys_only_payload_{label}/{row_label}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_read_with(runtime, config),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::untracked_state_scan_keys_only_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"untracked_state/scan_keys_only payload succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n        let name = format!(\"scan_headers_only_payload_{label}/{row_label}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_read_with(runtime, config),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::untracked_state_scan_headers_only_prepared(\n              
                  &backend, &fixture,\n                            ))\n                            .expect(\"untracked_state/scan_headers_only payload succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n        let name = format!(\"scan_full_rows_payload_{label}/{row_label}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_read_with(runtime, config),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::untracked_state_scan_full_rows_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"untracked_state/scan_full_rows payload succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    group.bench_function(\"scan_version/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::untracked_state_scan_version_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"untracked_state/scan_version succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"scan_schema/10k\", |b| {\n        b.iter_batched(\n            || prepare_read(runtime, args),\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::untracked_state_scan_schema_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"untracked_state/scan_schema succeeds\"),\n                
)\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.bench_function(\"overwrite_existing/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_untracked_state_overwrite(\n                        &backend,\n                        config(&args),\n                    ))\n                    .expect(\"prepare untracked_state/overwrite_existing\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::untracked_state_overwrite_existing_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"untracked_state/overwrite_existing succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    for rows in [1, 10, 100, 1_000] {\n        let name = format!(\"write_rows/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_untracked_state_write_rows(\n                            config(&args).with_rows(rows),\n                        ))\n                        .expect(\"prepare untracked_state/write_rows batch\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::untracked_state_write_rows_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"untracked_state/write_rows batch succeeds\"),\n                   
 )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, bytes, rows) in [\n        (\"small\", 0, 10_000),\n        (\"1k\", 1024, 10_000),\n        (\"16k\", 16 * 1024, 1_000),\n        (\"128k\", 128 * 1024, 100),\n    ] {\n        let name = format!(\"write_rows_payload_{label}/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_untracked_state_write_rows(\n                            config(&args)\n                                .with_state_payload_bytes(bytes)\n                                .with_rows(rows),\n                        ))\n                        .expect(\"prepare untracked_state/write_rows payload\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::untracked_state_write_rows_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"untracked_state/write_rows payload succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, key_pattern) in [\n        (\"sequential_keys\", StorageBenchKeyPattern::Sequential),\n        (\"random_keys\", StorageBenchKeyPattern::Random),\n    ] {\n        let name = format!(\"write_rows_{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_untracked_state_write_rows(\n                            
config(&args).with_key_pattern(key_pattern),\n                        ))\n                        .expect(\"prepare untracked_state/write_rows key pattern\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::untracked_state_write_rows_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"untracked_state/write_rows key pattern succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, selectivity) in [\n        (\"1pct\", StorageBenchSelectivity::Percent1),\n        (\"10pct\", StorageBenchSelectivity::Percent10),\n        (\"100pct\", StorageBenchSelectivity::Percent100),\n    ] {\n        let name = format!(\"scan_schema_selectivity_{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_read_with(runtime, config(&args).with_selectivity(selectivity)),\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::untracked_state_scan_schema_selective_prepared(\n                                    &backend, &fixture,\n                                ),\n                            )\n                            .expect(\"untracked_state/scan_schema selectivity succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for rows in [1_000, 10_000, 100_000] {\n        let name = format!(\"read_point_hit_100_reads/{rows}\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || prepare_read_with(runtime, config(&args).with_rows(rows)),\n   
             |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(\n                                storage_bench::untracked_state_read_point_hit_constant_prepared(\n                                    &backend, &fixture, 100,\n                                ),\n                            )\n                            .expect(\"untracked_state/read_point_hit scaling succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    for (label, fraction) in [\n        (\"overwrite_10pct\", StorageBenchUpdateFraction::Percent10),\n        (\"overwrite_all\", StorageBenchUpdateFraction::Percent100),\n    ] {\n        let name = format!(\"{label}/10k\");\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    let backend = BenchBackend::new();\n                    let fixture = runtime\n                        .block_on(storage_bench::prepare_untracked_state_overwrite(\n                            &backend,\n                            config(&args).with_update_fraction(fraction),\n                        ))\n                        .expect(\"prepare untracked_state/overwrite shape\");\n                    (backend, fixture)\n                },\n                |(backend, fixture)| {\n                    black_box(\n                        runtime\n                            .block_on(storage_bench::untracked_state_overwrite_existing_prepared(\n                                &backend, &fixture,\n                            ))\n                            .expect(\"untracked_state/overwrite shape succeeds\"),\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        });\n    }\n    group.bench_function(\"insert_new_keys/10k\", |b| {\n        b.iter_batched(\n            || {\n                let backend = 
BenchBackend::new();\n                let fixture = runtime\n                    .block_on(storage_bench::prepare_untracked_state_insert_new_keys(\n                        &backend,\n                        config(&args),\n                    ))\n                    .expect(\"prepare untracked_state/insert_new_keys\");\n                (backend, fixture)\n            },\n            |(backend, fixture)| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::untracked_state_write_rows_prepared(\n                            &backend, &fixture,\n                        ))\n                        .expect(\"untracked_state/insert_new_keys succeeds\"),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n    group.finish();\n}\n\nfn prepare_read(\n    runtime: &Runtime,\n    args: Args,\n) -> (\n    std::sync::Arc<dyn lix_engine::Backend + Send + Sync>,\n    lix_engine::storage_bench::UntrackedStateReadFixture,\n) {\n    let backend = BenchBackend::new();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_untracked_state_read(\n            &backend,\n            config(&args),\n        ))\n        .expect(\"prepare untracked_state/read\");\n    (backend, fixture)\n}\n\nfn prepare_read_with(\n    runtime: &Runtime,\n    config: StorageBenchConfig,\n) -> (\n    std::sync::Arc<dyn lix_engine::Backend + Send + Sync>,\n    lix_engine::storage_bench::UntrackedStateReadFixture,\n) {\n    let backend = BenchBackend::new();\n    let fixture = runtime\n        .block_on(storage_bench::prepare_untracked_state_read(\n            &backend, config,\n        ))\n        .expect(\"prepare untracked_state/read variant\");\n    (backend, fixture)\n}\n\nfn config(args: &Args) -> StorageBenchConfig {\n    args.config()\n}\n"
  },
  {
    "path": "packages/engine/benches/transaction/main.rs",
    "content": "use async_trait::async_trait;\nuse criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion};\nuse lix_engine::storage_bench::{self, TransactionAccountingReport};\nuse lix_engine::{\n    Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvGetRequest, BackendKvKeyPage,\n    BackendKvScanRequest, BackendKvValueBatch, BackendKvValuePage, BackendKvWriteBatch,\n    BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, LixError,\n};\nuse std::collections::{BTreeMap, HashSet};\nuse std::sync::OnceLock;\nuse std::sync::{Arc, Mutex};\nuse std::time::Duration;\nuse tokio::runtime::Runtime;\n\n#[path = \"../storage/backend.rs\"]\nmod backend;\n\nuse backend::BenchBackend;\n\nconst ENTITY_ROWS: usize = 10_000;\nconst LARGE_ENTITY_ROWS: usize = 1_000;\nconst UPDATE_ROWS_SMALL: usize = 1;\nconst UPDATE_ROWS_BATCH: usize = 100;\nconst SCALING_ROWS: &[usize] = &[1_000, 2_000, 5_000, 10_000, 20_000];\n\nfn transaction_benches(c: &mut Criterion) {\n    let runtime = tokio::runtime::Builder::new_current_thread()\n        .enable_all()\n        .build()\n        .expect(\"create tokio runtime for transaction benchmarks\");\n    let mut group = c.benchmark_group(\"transaction\");\n\n    group.bench_function(\"open_empty\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(storage_bench::prepare_transaction_commit_empty(\n                        BenchBackend::new(),\n                    ))\n                    .expect(\"prepare transaction/open_empty\")\n            },\n            |fixture| {\n                black_box(\n                    runtime\n                        .block_on(storage_bench::transaction_open_empty_prepared(&fixture))\n                        .unwrap_or_else(|error| panic!(\"transaction/open_empty succeeds: {error}\")),\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    
group.bench_function(\"stage_only_entities_no_payload/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_no_payload(\n                            BenchBackend::new(),\n                            ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\"prepare transaction/stage_only_entities_no_payload\")\n            },\n            |fixture| {\n                stage_only(\n                    &runtime,\n                    fixture,\n                    \"transaction/stage_only_entities_no_payload\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"stage_only_entities_payload_1k_unique/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_payload_1k_unique(\n                            BenchBackend::new(),\n                            ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\"prepare transaction/stage_only_entities_payload_1k_unique\")\n            },\n            |fixture| {\n                stage_only(\n                    &runtime,\n                    fixture,\n                    \"transaction/stage_only_entities_payload_1k_unique\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"commit_only_entities_no_payload/10k\", |b| {\n        b.iter_batched(\n            || {\n                let fixture = runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_no_payload(\n                            BenchBackend::new(),\n                            ENTITY_ROWS,\n                        ),\n                   
 )\n                    .expect(\"prepare transaction/commit_only_entities_no_payload fixture\");\n                runtime\n                    .block_on(storage_bench::prepare_transaction_commit_only(fixture))\n                    .expect(\"prepare transaction/commit_only_entities_no_payload\")\n            },\n            |fixture| {\n                commit_only(\n                    &runtime,\n                    fixture,\n                    \"transaction/commit_only_entities_no_payload\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"commit_only_entities_payload_1k_same/10k\", |b| {\n        b.iter_batched(\n            || {\n                let fixture = runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_payload_1k_same(\n                            BenchBackend::new(),\n                            ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\"prepare transaction/commit_only_entities_payload_1k_same fixture\");\n                runtime\n                    .block_on(storage_bench::prepare_transaction_commit_only(fixture))\n                    .expect(\"prepare transaction/commit_only_entities_payload_1k_same\")\n            },\n            |fixture| {\n                commit_only(\n                    &runtime,\n                    fixture,\n                    \"transaction/commit_only_entities_payload_1k_same\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"commit_only_entities_payload_1k_unique/10k\", |b| {\n        b.iter_batched(\n            || {\n                let fixture = runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_payload_1k_unique(\n                            BenchBackend::new(),\n                            
ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\"prepare transaction/commit_only_entities_payload_1k_unique fixture\");\n                runtime\n                    .block_on(storage_bench::prepare_transaction_commit_only(fixture))\n                    .expect(\"prepare transaction/commit_only_entities_payload_1k_unique\")\n            },\n            |fixture| {\n                commit_only(\n                    &runtime,\n                    fixture,\n                    \"transaction/commit_only_entities_payload_1k_unique\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"accounting_entities_no_payload/10k\", |b| {\n        b.iter_batched(\n            || {\n                prepare_accounting(&runtime, |backend| {\n                    storage_bench::prepare_transaction_commit_entities_no_payload(\n                        backend,\n                        ENTITY_ROWS,\n                    )\n                })\n            },\n            |fixture| {\n                accounting(\n                    &runtime,\n                    fixture,\n                    \"transaction/accounting_entities_no_payload\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"accounting_entities_payload_1k_unique/10k\", |b| {\n        b.iter_batched(\n            || {\n                prepare_accounting(&runtime, |backend| {\n                    storage_bench::prepare_transaction_commit_entities_payload_1k_unique(\n                        backend,\n                        ENTITY_ROWS,\n                    )\n                })\n            },\n            |fixture| {\n                accounting(\n                    &runtime,\n                    fixture,\n                    \"transaction/accounting_entities_payload_1k_unique\",\n                )\n            },\n            
BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"accounting_entities_payload_1k_same/10k\", |b| {\n        b.iter_batched(\n            || {\n                prepare_accounting(&runtime, |backend| {\n                    storage_bench::prepare_transaction_commit_entities_payload_1k_same(\n                        backend,\n                        ENTITY_ROWS,\n                    )\n                })\n            },\n            |fixture| {\n                accounting(\n                    &runtime,\n                    fixture,\n                    \"transaction/accounting_entities_payload_1k_same\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"accounting_untracked_payload_1k_same/10k\", |b| {\n        b.iter_batched(\n            || {\n                prepare_accounting(&runtime, |backend| {\n                    storage_bench::prepare_transaction_commit_untracked_payload_1k_same(\n                        backend,\n                        ENTITY_ROWS,\n                    )\n                })\n            },\n            |fixture| {\n                accounting(\n                    &runtime,\n                    fixture,\n                    \"transaction/accounting_untracked_payload_1k_same\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"stage_plus_commit_empty\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(storage_bench::prepare_transaction_commit_empty(\n                        BenchBackend::new(),\n                    ))\n                    .expect(\"prepare transaction/stage_plus_commit_empty\")\n            },\n            |fixture| commit(&runtime, fixture, \"transaction/stage_plus_commit_empty\"),\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"stage_plus_commit_schema_only/1\", |b| 
{\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(storage_bench::prepare_transaction_commit_schema_only(\n                        BenchBackend::new(),\n                    ))\n                    .expect(\"prepare transaction/stage_plus_commit_schema_only\")\n            },\n            |fixture| {\n                commit(\n                    &runtime,\n                    fixture,\n                    \"transaction/stage_plus_commit_schema_only\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"stage_plus_commit_entities_no_payload/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_no_payload(\n                            BenchBackend::new(),\n                            ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\"prepare transaction/stage_plus_commit_entities_no_payload\")\n            },\n            |fixture| {\n                commit(\n                    &runtime,\n                    fixture,\n                    \"transaction/stage_plus_commit_entities_no_payload\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"stage_plus_commit_entities_payload_1k_unique/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_payload_1k_unique(\n                            BenchBackend::new(),\n                            ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\"prepare transaction/stage_plus_commit_entities_payload_1k_unique\")\n            },\n            |fixture| {\n                commit(\n              
      &runtime,\n                    fixture,\n                    \"transaction/stage_plus_commit_entities_payload_1k_unique\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"stage_plus_commit_entities_payload_1k_same/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_payload_1k_same(\n                            BenchBackend::new(),\n                            ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\"prepare transaction/stage_plus_commit_entities_payload_1k_same\")\n            },\n            |fixture| {\n                commit(\n                    &runtime,\n                    fixture,\n                    \"transaction/stage_plus_commit_entities_payload_1k_same\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"stage_plus_commit_entities_payload_1k_half_duplicate/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_payload_1k_half_duplicate(\n                            BenchBackend::new(),\n                            ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\n                        \"prepare transaction/stage_plus_commit_entities_payload_1k_half_duplicate\",\n                    )\n            },\n            |fixture| {\n                commit(\n                    &runtime,\n                    fixture,\n                    \"transaction/stage_plus_commit_entities_payload_1k_half_duplicate\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    
group.bench_function(\"stage_plus_commit_entities_metadata_1k_same/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_metadata_1k_same(\n                            BenchBackend::new(),\n                            ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\"prepare transaction/stage_plus_commit_entities_metadata_1k_same\")\n            },\n            |fixture| {\n                commit(\n                    &runtime,\n                    fixture,\n                    \"transaction/stage_plus_commit_entities_metadata_1k_same\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"stage_plus_commit_entities_payload_16k_unique/1k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_payload_16k_unique(\n                            BenchBackend::new(),\n                            LARGE_ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\"prepare transaction/stage_plus_commit_entities_payload_16k_unique\")\n            },\n            |fixture| {\n                commit(\n                    &runtime,\n                    fixture,\n                    \"transaction/stage_plus_commit_entities_payload_16k_unique\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\"stage_plus_commit_untracked_payload_1k_same/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_untracked_payload_1k_same(\n                            BenchBackend::new(),\n                  
          ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\"prepare transaction/stage_plus_commit_untracked_payload_1k_same\")\n            },\n            |fixture| {\n                commit(\n                    &runtime,\n                    fixture,\n                    \"transaction/stage_plus_commit_untracked_payload_1k_same\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    group.bench_function(\n        \"stage_plus_commit_update_1_existing_payload_1k/root_10k\",\n        |b| {\n            b.iter_batched(\n                || {\n                    runtime\n                        .block_on(\n                            storage_bench::prepare_transaction_update_existing_payload_1k(\n                                BenchBackend::new(),\n                                ENTITY_ROWS,\n                                UPDATE_ROWS_SMALL,\n                            ),\n                        )\n                        .expect(\n                            \"prepare transaction/stage_plus_commit_update_1_existing_payload_1k\",\n                        )\n                },\n                |fixture| {\n                    commit(\n                        &runtime,\n                        fixture,\n                        \"transaction/stage_plus_commit_update_1_existing_payload_1k\",\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        },\n    );\n\n    group.bench_function(\n        \"stage_plus_commit_update_100_existing_payload_1k/root_10k\",\n        |b| {\n            b.iter_batched(\n                || {\n                    runtime\n                        .block_on(\n                            storage_bench::prepare_transaction_update_existing_payload_1k(\n                                BenchBackend::new(),\n                                ENTITY_ROWS,\n                                UPDATE_ROWS_BATCH,\n  
                          ),\n                        )\n                        .expect(\n                            \"prepare transaction/stage_plus_commit_update_100_existing_payload_1k\",\n                        )\n                },\n                |fixture| {\n                    commit(\n                        &runtime,\n                        fixture,\n                        \"transaction/stage_plus_commit_update_100_existing_payload_1k\",\n                    )\n                },\n                BatchSize::LargeInput,\n            )\n        },\n    );\n\n    group.finish();\n\n    let mut io_group = c.benchmark_group(\"transaction_io_100us\");\n\n    io_group.bench_function(\"stage_plus_commit_entities_no_payload/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_no_payload(\n                            latency_backend(),\n                            ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\"prepare transaction_io_100us/stage_plus_commit_entities_no_payload\")\n            },\n            |fixture| {\n                commit(\n                    &runtime,\n                    fixture,\n                    \"transaction_io_100us/stage_plus_commit_entities_no_payload\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    io_group.bench_function(\"stage_plus_commit_entities_payload_1k_same/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_payload_1k_same(\n                            latency_backend(),\n                            ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\n                        \"prepare 
transaction_io_100us/stage_plus_commit_entities_payload_1k_same\",\n                    )\n            },\n            |fixture| {\n                commit(\n                    &runtime,\n                    fixture,\n                    \"transaction_io_100us/stage_plus_commit_entities_payload_1k_same\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    io_group.bench_function(\"stage_plus_commit_entities_payload_1k_unique/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_payload_1k_unique(\n                            latency_backend(),\n                            ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\n                        \"prepare transaction_io_100us/stage_plus_commit_entities_payload_1k_unique\",\n                    )\n            },\n            |fixture| {\n                commit(\n                    &runtime,\n                    fixture,\n                    \"transaction_io_100us/stage_plus_commit_entities_payload_1k_unique\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    io_group.bench_function(\"stage_plus_commit_untracked_payload_1k_same/10k\", |b| {\n        b.iter_batched(\n            || {\n                runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_untracked_payload_1k_same(\n                            latency_backend(),\n                            ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\n                        \"prepare transaction_io_100us/stage_plus_commit_untracked_payload_1k_same\",\n                    )\n            },\n            |fixture| {\n                commit(\n                    &runtime,\n                    
fixture,\n                    \"transaction_io_100us/stage_plus_commit_untracked_payload_1k_same\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    io_group.bench_function(\"commit_only_entities_payload_1k_same/10k\", |b| {\n        b.iter_batched(\n            || {\n                let fixture = runtime\n                    .block_on(\n                        storage_bench::prepare_transaction_commit_entities_payload_1k_same(\n                            latency_backend(),\n                            ENTITY_ROWS,\n                        ),\n                    )\n                    .expect(\n                        \"prepare transaction_io_100us/commit_only_entities_payload_1k_same fixture\",\n                    );\n                runtime\n                    .block_on(storage_bench::prepare_transaction_commit_only(fixture))\n                    .expect(\"prepare transaction_io_100us/commit_only_entities_payload_1k_same\")\n            },\n            |fixture| {\n                commit_only(\n                    &runtime,\n                    fixture,\n                    \"transaction_io_100us/commit_only_entities_payload_1k_same\",\n                )\n            },\n            BatchSize::LargeInput,\n        )\n    });\n\n    io_group.finish();\n\n    let mut scaling_group = c.benchmark_group(\"transaction_scaling\");\n    for &rows in SCALING_ROWS {\n        let label = row_count_label(rows);\n\n        scaling_group.bench_function(\n            format!(\"stage_only_entities_no_payload/{label}\"),\n            |b| {\n                b.iter_batched(\n                    || {\n                        runtime\n                            .block_on(\n                                storage_bench::prepare_transaction_commit_entities_no_payload(\n                                    BenchBackend::new(),\n                                    rows,\n                                ),\n                            
)\n                            .unwrap_or_else(|error| {\n                                panic!(\n                                    \"prepare transaction_scaling/stage_only_entities_no_payload/{label}: {error}\"\n                                )\n                            })\n                    },\n                    |fixture| {\n                        stage_only(\n                            &runtime,\n                            fixture,\n                            \"transaction_scaling/stage_only_entities_no_payload\",\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        scaling_group.bench_function(\n            format!(\"commit_only_entities_no_payload/{label}\"),\n            |b| {\n                b.iter_batched(\n                    || {\n                        let fixture = runtime\n                            .block_on(\n                                storage_bench::prepare_transaction_commit_entities_no_payload(\n                                    BenchBackend::new(),\n                                    rows,\n                                ),\n                            )\n                            .unwrap_or_else(|error| {\n                                panic!(\n                                    \"prepare transaction_scaling/commit_only_entities_no_payload/{label} fixture: {error}\"\n                                )\n                            });\n                        runtime\n                            .block_on(storage_bench::prepare_transaction_commit_only(fixture))\n                            .unwrap_or_else(|error| {\n                                panic!(\n                                    \"prepare transaction_scaling/commit_only_entities_no_payload/{label}: {error}\"\n                                )\n                            })\n                    },\n                    |fixture| {\n                        
commit_only(\n                            &runtime,\n                            fixture,\n                            \"transaction_scaling/commit_only_entities_no_payload\",\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        scaling_group.bench_function(\n            format!(\"stage_plus_commit_entities_payload_1k_same/{label}\"),\n            |b| {\n                b.iter_batched(\n                    || {\n                        runtime\n                            .block_on(\n                                storage_bench::prepare_transaction_commit_entities_payload_1k_same(\n                                    BenchBackend::new(),\n                                    rows,\n                                ),\n                            )\n                            .unwrap_or_else(|error| {\n                                panic!(\n                                    \"prepare transaction_scaling/stage_plus_commit_entities_payload_1k_same/{label}: {error}\"\n                                )\n                            })\n                    },\n                    |fixture| {\n                        commit(\n                            &runtime,\n                            fixture,\n                            \"transaction_scaling/stage_plus_commit_entities_payload_1k_same\",\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n\n        scaling_group.bench_function(\n            format!(\"stage_plus_commit_entities_payload_1k_unique/{label}\"),\n            |b| {\n                b.iter_batched(\n                    || {\n                        runtime\n                            .block_on(\n                                storage_bench::prepare_transaction_commit_entities_payload_1k_unique(\n                                    BenchBackend::new(),\n           
                         rows,\n                                ),\n                            )\n                            .unwrap_or_else(|error| {\n                                panic!(\n                                    \"prepare transaction_scaling/stage_plus_commit_entities_payload_1k_unique/{label}: {error}\"\n                                )\n                            })\n                    },\n                    |fixture| {\n                        commit(\n                            &runtime,\n                            fixture,\n                            \"transaction_scaling/stage_plus_commit_entities_payload_1k_unique\",\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n    }\n    scaling_group.finish();\n\n    let mut scaling_io_group = c.benchmark_group(\"transaction_scaling_io_100us\");\n    for &rows in SCALING_ROWS {\n        let label = row_count_label(rows);\n        scaling_io_group.bench_function(\n            format!(\"stage_plus_commit_entities_payload_1k_same/{label}\"),\n            |b| {\n                b.iter_batched(\n                    || {\n                        runtime\n                            .block_on(\n                                storage_bench::prepare_transaction_commit_entities_payload_1k_same(\n                                    latency_backend(),\n                                    rows,\n                                ),\n                            )\n                            .unwrap_or_else(|error| {\n                                panic!(\n                                    \"prepare transaction_scaling_io_100us/stage_plus_commit_entities_payload_1k_same/{label}: {error}\"\n                                )\n                            })\n                    },\n                    |fixture| {\n                        commit(\n                            &runtime,\n                            
fixture,\n                            \"transaction_scaling_io_100us/stage_plus_commit_entities_payload_1k_same\",\n                        )\n                    },\n                    BatchSize::LargeInput,\n                )\n            },\n        );\n    }\n    scaling_io_group.finish();\n}\n\n/// Renders a row count as a short label, e.g. 10_000 becomes 10k.\nfn row_count_label(rows: usize) -> String {\n    if rows % 1_000 == 0 {\n        format!(\"{}k\", rows / 1_000)\n    } else {\n        rows.to_string()\n    }\n}\n\n/// Runs the full stage-plus-commit path for a prepared fixture and returns its report.\nfn commit(\n    runtime: &Runtime,\n    fixture: storage_bench::TransactionBenchFixture,\n    label: &str,\n) -> storage_bench::StorageBenchReport {\n    black_box(\n        runtime\n            .block_on(storage_bench::transaction_commit_prepared(&fixture))\n            .unwrap_or_else(|error| panic!(\"{label} succeeds: {error}\")),\n    )\n}\n\n/// Runs only the staging phase for a prepared fixture.\nfn stage_only(\n    runtime: &Runtime,\n    fixture: storage_bench::TransactionBenchFixture,\n    label: &str,\n) -> storage_bench::StorageBenchReport {\n    black_box(\n        runtime\n            .block_on(storage_bench::transaction_stage_only_prepared(&fixture))\n            .unwrap_or_else(|error| panic!(\"{label} succeeds: {error}\")),\n    )\n}\n\n/// Runs only the commit phase for a fixture whose changes were already staged during setup.\nfn commit_only(\n    runtime: &Runtime,\n    fixture: storage_bench::TransactionCommitOnlyFixture,\n    label: &str,\n) -> storage_bench::StorageBenchReport {\n    black_box(\n        runtime\n            .block_on(storage_bench::transaction_commit_only_prepared(fixture))\n            .unwrap_or_else(|error| panic!(\"{label} succeeds: {error}\")),\n    )\n}\n\n/// Wraps the in-memory bench backend with the fixed delays used by the io_100us groups.\nfn latency_backend() -> Arc<dyn Backend + Send + Sync> {\n    Arc::new(LatencyBackend {\n        inner: BenchBackend::new(),\n        read_delay: Duration::from_micros(100),\n        write_delay: Duration::from_micros(250),\n        commit_delay: Duration::from_micros(500),\n    })\n}\n\n/// A prepared fixture plus the accounting handle used to snapshot its storage writes.\nstruct AccountingFixture {\n    fixture: storage_bench::TransactionBenchFixture,\n    storage: Arc<StorageAccounting>,\n}\n\n/// Prepares a fixture on a counting backend, then resets the storage and bench\n/// counters so only the measured commit is recorded.\nfn prepare_accounting<F, Fut>(runtime: &Runtime, 
prepare: F) -> AccountingFixture\nwhere\n    F: FnOnce(Arc<dyn Backend + Send + Sync>) -> Fut,\n    Fut: std::future::Future<Output = Result<storage_bench::TransactionBenchFixture, LixError>>,\n{\n    let (backend, storage) = CountingBackend::new(BenchBackend::new());\n    let fixture = runtime\n        .block_on(prepare(backend))\n        .expect(\"prepare transaction accounting fixture\");\n    storage.reset();\n    storage_bench::reset_transaction_bench_counters();\n    AccountingFixture { fixture, storage }\n}\n\nfn accounting(\n    runtime: &Runtime,\n    fixture: AccountingFixture,\n    label: &str,\n) -> TransactionAccountingReport {\n    runtime\n        .block_on(storage_bench::transaction_commit_prepared(&fixture.fixture))\n        .unwrap_or_else(|error| panic!(\"{label} succeeds: {error}\"));\n    let storage = fixture.storage.snapshot();\n    let report = TransactionAccountingReport {\n        counters: storage_bench::transaction_bench_counters(),\n        storage_write_batches: storage.write_batches,\n        kv_puts_by_namespace: storage.kv_puts_by_namespace,\n        bytes_by_namespace: storage.bytes_by_namespace,\n    };\n    print_accounting_once(label, &report);\n    black_box(report)\n}\n\nstatic PRINTED_ACCOUNTING_LABELS: OnceLock<Mutex<HashSet<String>>> = OnceLock::new();\n\nfn print_accounting_once(label: &str, report: &TransactionAccountingReport) {\n    if std::env::var(\"LIX_BENCH_PRINT_ACCOUNTING\").ok().as_deref() != Some(\"1\") {\n        return;\n    }\n    let labels = PRINTED_ACCOUNTING_LABELS.get_or_init(|| Mutex::new(HashSet::new()));\n    let mut labels = labels\n        .lock()\n        .expect(\"printed accounting label mutex should lock\");\n    if !labels.insert(label.to_string()) {\n        return;\n    }\n    eprintln!(\"{label}: {report:#?}\");\n}\n\n#[derive(Default)]\nstruct StorageAccounting {\n    inner: Mutex<StorageAccountingSnapshot>,\n}\n\n#[derive(Default)]\nstruct StorageAccountingSnapshot {\n    write_batches: 
usize,\n    kv_puts_by_namespace: BTreeMap<String, usize>,\n    bytes_by_namespace: BTreeMap<String, usize>,\n}\n\nimpl StorageAccounting {\n    fn reset(&self) {\n        *self\n            .inner\n            .lock()\n            .expect(\"storage accounting mutex should lock\") = StorageAccountingSnapshot::default();\n    }\n\n    fn record_write_batch(&self, batch: &BackendKvWriteBatch) {\n        let mut inner = self\n            .inner\n            .lock()\n            .expect(\"storage accounting mutex should lock\");\n        inner.write_batches += 1;\n        for group in &batch.groups {\n            let namespace = group.namespace().to_string();\n            for index in 0..group.put_count() {\n                let Some(key) = group.put_key(index) else {\n                    continue;\n                };\n                let Some(value) = group.put_value(index) else {\n                    continue;\n                };\n                *inner\n                    .kv_puts_by_namespace\n                    .entry(namespace.clone())\n                    .or_default() += 1;\n                *inner\n                    .bytes_by_namespace\n                    .entry(namespace.clone())\n                    .or_default() += key.len() + value.len();\n            }\n            for index in 0..group.delete_count() {\n                let Some(key) = group.delete_key(index) else {\n                    continue;\n                };\n                *inner\n                    .bytes_by_namespace\n                    .entry(namespace.clone())\n                    .or_default() += key.len();\n            }\n        }\n    }\n\n    fn snapshot(&self) -> StorageAccountingSnapshot {\n        let inner = self\n            .inner\n            .lock()\n            .expect(\"storage accounting mutex should lock\");\n        StorageAccountingSnapshot {\n            write_batches: inner.write_batches,\n            kv_puts_by_namespace: inner.kv_puts_by_namespace.clone(),\n       
     bytes_by_namespace: inner.bytes_by_namespace.clone(),\n        }\n    }\n}\n\n/// Backend wrapper that forwards every call to the inner backend while recording\n/// each write batch into the shared StorageAccounting.\nstruct CountingBackend {\n    inner: Arc<dyn Backend + Send + Sync>,\n    accounting: Arc<StorageAccounting>,\n}\n\nimpl CountingBackend {\n    fn new(\n        inner: Arc<dyn Backend + Send + Sync>,\n    ) -> (Arc<dyn Backend + Send + Sync>, Arc<StorageAccounting>) {\n        let accounting = Arc::new(StorageAccounting::default());\n        (\n            Arc::new(Self {\n                inner,\n                accounting: Arc::clone(&accounting),\n            }),\n            accounting,\n        )\n    }\n}\n\n/// Backend wrapper that injects fixed per-operation delays to simulate I/O latency.\nstruct LatencyBackend {\n    inner: Arc<dyn Backend + Send + Sync>,\n    read_delay: Duration,\n    write_delay: Duration,\n    commit_delay: Duration,\n}\n\nimpl LatencyBackend {\n    fn delay(duration: Duration) {\n        if !duration.is_zero() {\n            // Intentionally blocks the thread: the benchmarks drive these futures\n            // with block_on, so this models synchronous storage latency.\n            std::thread::sleep(duration);\n        }\n    }\n}\n\n#[async_trait]\nimpl Backend for LatencyBackend {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        let transaction = self.inner.begin_read_transaction().await?;\n        Ok(Box::new(LatencyReadTransaction {\n            transaction,\n            read_delay: self.read_delay,\n        }))\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        let transaction = self.inner.begin_write_transaction().await?;\n        Ok(Box::new(LatencyWriteTransaction {\n            transaction,\n            read_delay: self.read_delay,\n            write_delay: self.write_delay,\n            commit_delay: self.commit_delay,\n        }))\n    }\n}\n\nstruct LatencyReadTransaction {\n    transaction: Box<dyn BackendReadTransaction + Send + Sync + 'static>,\n    read_delay: Duration,\n}\n\n#[async_trait]\nimpl BackendReadTransaction for LatencyReadTransaction {\n    async fn 
get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        LatencyBackend::delay(self.read_delay);\n        self.transaction.get_values(request).await\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        LatencyBackend::delay(self.read_delay);\n        self.transaction.exists_many(request).await\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        LatencyBackend::delay(self.read_delay);\n        self.transaction.scan_keys(request).await\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        LatencyBackend::delay(self.read_delay);\n        self.transaction.scan_values(request).await\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        LatencyBackend::delay(self.read_delay);\n        self.transaction.scan_entries(request).await\n    }\n\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        self.transaction.rollback().await\n    }\n}\n\nstruct LatencyWriteTransaction {\n    transaction: Box<dyn BackendWriteTransaction + Send + Sync + 'static>,\n    read_delay: Duration,\n    write_delay: Duration,\n    commit_delay: Duration,\n}\n\n#[async_trait]\nimpl BackendReadTransaction for LatencyWriteTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        LatencyBackend::delay(self.read_delay);\n        self.transaction.get_values(request).await\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n      
  LatencyBackend::delay(self.read_delay);\n        self.transaction.exists_many(request).await\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        LatencyBackend::delay(self.read_delay);\n        self.transaction.scan_keys(request).await\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        LatencyBackend::delay(self.read_delay);\n        self.transaction.scan_values(request).await\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        LatencyBackend::delay(self.read_delay);\n        self.transaction.scan_entries(request).await\n    }\n\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        self.transaction.rollback().await\n    }\n}\n\n#[async_trait]\nimpl BackendWriteTransaction for LatencyWriteTransaction {\n    async fn write_kv_batch(\n        &mut self,\n        batch: BackendKvWriteBatch,\n    ) -> Result<BackendKvWriteStats, LixError> {\n        LatencyBackend::delay(self.write_delay);\n        self.transaction.write_kv_batch(batch).await\n    }\n\n    async fn commit(self: Box<Self>) -> Result<(), LixError> {\n        LatencyBackend::delay(self.commit_delay);\n        self.transaction.commit().await\n    }\n}\n\n#[async_trait]\nimpl Backend for CountingBackend {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        let transaction = self.inner.begin_read_transaction().await?;\n        Ok(Box::new(CountingReadTransaction { transaction }))\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        let transaction = 
self.inner.begin_write_transaction().await?;\n        Ok(Box::new(CountingWriteTransaction {\n            transaction,\n            accounting: Arc::clone(&self.accounting),\n        }))\n    }\n}\n\nstruct CountingReadTransaction {\n    transaction: Box<dyn BackendReadTransaction + Send + Sync + 'static>,\n}\n\n#[async_trait]\nimpl BackendReadTransaction for CountingReadTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        self.transaction.get_values(request).await\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        self.transaction.exists_many(request).await\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        self.transaction.scan_keys(request).await\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        self.transaction.scan_values(request).await\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        self.transaction.scan_entries(request).await\n    }\n\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        self.transaction.rollback().await\n    }\n}\n\nstruct CountingWriteTransaction {\n    transaction: Box<dyn BackendWriteTransaction + Send + Sync + 'static>,\n    accounting: Arc<StorageAccounting>,\n}\n\n#[async_trait]\nimpl BackendReadTransaction for CountingWriteTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        self.transaction.get_values(request).await\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: 
BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        self.transaction.exists_many(request).await\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        self.transaction.scan_keys(request).await\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        self.transaction.scan_values(request).await\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        self.transaction.scan_entries(request).await\n    }\n\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        self.transaction.rollback().await\n    }\n}\n\n#[async_trait]\nimpl BackendWriteTransaction for CountingWriteTransaction {\n    async fn write_kv_batch(\n        &mut self,\n        batch: BackendKvWriteBatch,\n    ) -> Result<BackendKvWriteStats, LixError> {\n        self.accounting.record_write_batch(&batch);\n        self.transaction.write_kv_batch(batch).await\n    }\n\n    async fn commit(self: Box<Self>) -> Result<(), LixError> {\n        self.transaction.commit().await\n    }\n}\n\ncriterion_group!(benches, transaction_benches);\ncriterion_main!(benches);\n"
  },
  {
    "path": "packages/engine/src/backend/kv.rs",
    "content": "#[derive(Debug, Clone, PartialEq, Eq)]\npub struct BytePage {\n    bytes: Vec<u8>,\n    offsets: Vec<u32>,\n}\n\nimpl BytePage {\n    pub fn new() -> Self {\n        Self {\n            bytes: Vec::new(),\n            offsets: vec![0],\n        }\n    }\n\n    pub fn len(&self) -> usize {\n        self.offsets.len().saturating_sub(1)\n    }\n\n    pub fn is_empty(&self) -> bool {\n        self.len() == 0\n    }\n\n    pub fn get(&self, index: usize) -> Option<&[u8]> {\n        let start = usize::try_from(*self.offsets.get(index)?).ok()?;\n        let end = usize::try_from(*self.offsets.get(index + 1)?).ok()?;\n        self.bytes.get(start..end)\n    }\n\n    pub fn iter(&self) -> BytePageIter<'_> {\n        BytePageIter {\n            page: self,\n            index: 0,\n        }\n    }\n}\n\n// A derived `Default` would leave `offsets` empty instead of seeding the\n// leading 0 offset that `new()` establishes; delegate so the invariant holds.\nimpl Default for BytePage {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\npub struct BytePageIter<'a> {\n    page: &'a BytePage,\n    index: usize,\n}\n\nimpl<'a> Iterator for BytePageIter<'a> {\n    type Item = &'a [u8];\n\n    fn next(&mut self) -> Option<Self::Item> {\n        let value = self.page.get(self.index)?;\n        self.index += 1;\n        Some(value)\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct BytePageBuilder {\n    bytes: Vec<u8>,\n    offsets: Vec<u32>,\n}\n\n// Same invariant as `BytePage`: a freshly built builder must start with a\n// 0 offset, so `Default` delegates to `new()` rather than being derived.\nimpl Default for BytePageBuilder {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl BytePageBuilder {\n    pub fn new() -> Self {\n        Self {\n            bytes: Vec::new(),\n            offsets: vec![0],\n        }\n    }\n\n    pub fn with_capacity(items: usize, bytes: usize) -> Self {\n        let mut offsets = Vec::with_capacity(items.saturating_add(1));\n        offsets.push(0);\n        Self {\n            bytes: Vec::with_capacity(bytes),\n            offsets,\n        }\n    }\n\n    pub fn from_page(page: BytePage) -> Self {\n        Self {\n            bytes: page.bytes,\n            offsets: page.offsets,\n        }\n    }\n\n    pub fn len(&self) -> usize {\n        self.offsets.len().saturating_sub(1)\n    }\n\n    pub fn is_empty(&self) -> bool {\n        self.len() == 0\n    }\n\n    
pub fn get(&self, index: usize) -> Option<&[u8]> {\n        let start = usize::try_from(*self.offsets.get(index)?).ok()?;\n        let end = usize::try_from(*self.offsets.get(index + 1)?).ok()?;\n        self.bytes.get(start..end)\n    }\n\n    pub fn push(&mut self, value: impl AsRef<[u8]>) {\n        let value = value.as_ref();\n        self.bytes.extend_from_slice(value);\n        let end = u32::try_from(self.bytes.len()).expect(\"byte page exceeds u32 offset capacity\");\n        self.offsets.push(end);\n    }\n\n    pub fn finish(self) -> BytePage {\n        BytePage {\n            bytes: self.bytes,\n            offsets: self.offsets,\n        }\n    }\n}\n\n/// Ordered byte range for backend KV scans.\n///\n/// Ranges are half-open: `start <= key < end`. `Prefix` is explicit because it\n/// is a common access pattern and lets each backend choose the safest\n/// implementation for its storage engine.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub enum BackendKvScanRange {\n    Prefix(Vec<u8>),\n    Range { start: Vec<u8>, end: Vec<u8> },\n}\n\nimpl BackendKvScanRange {\n    pub fn prefix(prefix: impl Into<Vec<u8>>) -> Self {\n        Self::Prefix(prefix.into())\n    }\n\n    pub fn range(start: impl Into<Vec<u8>>, end: impl Into<Vec<u8>>) -> Self {\n        Self::Range {\n            start: start.into(),\n            end: end.into(),\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct BackendKvGetRequest {\n    pub groups: Vec<BackendKvGetGroup>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct BackendKvGetGroup {\n    pub namespace: String,\n    pub keys: Vec<Vec<u8>>,\n}\n\nimpl BackendKvGetGroup {\n    pub fn namespace(&self) -> &str {\n        &self.namespace\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct BackendKvValueBatch {\n    pub groups: Vec<BackendKvValueGroup>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct BackendKvValueGroup {\n    namespace: String,\n    values: BytePage,\n    present: 
Vec<bool>,\n}\n\nimpl BackendKvValueGroup {\n    pub fn new(namespace: impl Into<String>, values: BytePage, present: Vec<bool>) -> Self {\n        assert_eq!(\n            values.len(),\n            present.len(),\n            \"backend value batch must have one value slot per presence bit\"\n        );\n        Self {\n            namespace: namespace.into(),\n            values,\n            present,\n        }\n    }\n\n    pub fn namespace(&self) -> &str {\n        &self.namespace\n    }\n\n    pub fn len(&self) -> usize {\n        self.present.len()\n    }\n\n    pub fn is_empty(&self) -> bool {\n        self.present.is_empty()\n    }\n\n    pub fn value(&self, index: usize) -> Option<Option<&[u8]>> {\n        let present = *self.present.get(index)?;\n        if present {\n            Some(Some(\n                self.values\n                    .get(index)\n                    .expect(\"backend value batch invariant violated\"),\n            ))\n        } else {\n            Some(None)\n        }\n    }\n\n    pub fn values_iter(&self) -> impl Iterator<Item = Option<&[u8]>> {\n        (0..self.len()).filter_map(|index| self.value(index))\n    }\n\n    pub fn into_parts(self) -> (String, BytePage, Vec<bool>) {\n        (self.namespace, self.values, self.present)\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct BackendKvExistsBatch {\n    pub groups: Vec<BackendKvExistsGroup>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct BackendKvExistsGroup {\n    pub namespace: String,\n    pub exists: Vec<bool>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct BackendKvScanRequest {\n    pub namespace: String,\n    pub range: BackendKvScanRange,\n    pub after: Option<Vec<u8>>,\n    pub limit: usize,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct BackendKvKeyPage {\n    pub keys: BytePage,\n    pub resume_after: Option<Vec<u8>>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct BackendKvValuePage {\n    pub values: 
BytePage,\n    pub resume_after: Option<Vec<u8>>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct BackendKvEntryPage {\n    pub keys: BytePage,\n    pub values: BytePage,\n    pub resume_after: Option<Vec<u8>>,\n}\n\nimpl BackendKvEntryPage {\n    pub fn len(&self) -> usize {\n        self.keys.len()\n    }\n\n    pub fn is_empty(&self) -> bool {\n        self.keys.is_empty()\n    }\n\n    pub fn key(&self, index: usize) -> Option<&[u8]> {\n        self.keys.get(index)\n    }\n\n    pub fn value(&self, index: usize) -> Option<&[u8]> {\n        self.values.get(index)\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\npub struct BackendKvWriteBatch {\n    pub groups: Vec<BackendKvWriteGroup>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct BackendKvWriteGroup {\n    namespace: String,\n    put_keys: BytePageBuilder,\n    put_values: BytePageBuilder,\n    deletes: BytePageBuilder,\n}\n\nimpl BackendKvWriteGroup {\n    pub fn new(namespace: impl Into<String>) -> Self {\n        Self {\n            namespace: namespace.into(),\n            put_keys: BytePageBuilder::new(),\n            put_values: BytePageBuilder::new(),\n            deletes: BytePageBuilder::new(),\n        }\n    }\n\n    pub fn from_pages(\n        namespace: impl Into<String>,\n        put_keys: BytePage,\n        put_values: BytePage,\n        deletes: BytePage,\n    ) -> Self {\n        assert_eq!(\n            put_keys.len(),\n            put_values.len(),\n            \"backend write batch must have one value per put key\"\n        );\n        Self {\n            namespace: namespace.into(),\n            put_keys: BytePageBuilder::from_page(put_keys),\n            put_values: BytePageBuilder::from_page(put_values),\n            deletes: BytePageBuilder::from_page(deletes),\n        }\n    }\n\n    pub fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {\n        self.put_keys.push(key);\n        self.put_values.push(value);\n    }\n\n    pub fn 
delete(&mut self, key: impl AsRef<[u8]>) {\n        self.deletes.push(key);\n    }\n\n    pub fn namespace(&self) -> &str {\n        &self.namespace\n    }\n\n    pub fn put_count(&self) -> usize {\n        self.put_keys.len()\n    }\n\n    pub fn delete_count(&self) -> usize {\n        self.deletes.len()\n    }\n\n    pub fn put_key(&self, index: usize) -> Option<&[u8]> {\n        self.put_keys.get(index)\n    }\n\n    pub fn put_value(&self, index: usize) -> Option<&[u8]> {\n        self.put_values.get(index)\n    }\n\n    pub fn delete_key(&self, index: usize) -> Option<&[u8]> {\n        self.deletes.get(index)\n    }\n\n    pub fn into_parts(self) -> (String, BytePage, BytePage, BytePage) {\n        (\n            self.namespace,\n            self.put_keys.finish(),\n            self.put_values.finish(),\n            self.deletes.finish(),\n        )\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]\npub struct BackendKvWriteStats {\n    pub puts: usize,\n    pub deletes: usize,\n    pub bytes_written: usize,\n}\n"
  },
  {
    "path": "packages/engine/src/backend/mod.rs",
    "content": "mod kv;\n#[cfg(test)]\npub(crate) mod testing;\nmod types;\n\npub use kv::{\n    BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetGroup,\n    BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest,\n    BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch,\n    BackendKvWriteGroup, BackendKvWriteStats, BytePage, BytePageBuilder,\n};\npub use types::{Backend, BackendReadTransaction, BackendWriteTransaction};\n"
  },
  {
    "path": "packages/engine/src/backend/testing.rs",
    "content": "use std::collections::BTreeMap;\nuse std::sync::{Arc, Mutex};\n\nuse async_trait::async_trait;\n\nuse crate::backend::{\n    Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest,\n    BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch,\n    BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats,\n    BackendReadTransaction, BackendWriteTransaction, BytePageBuilder,\n};\nuse crate::LixError;\n\ntype KvMap = BTreeMap<(String, Vec<u8>), Vec<u8>>;\n\n/// In-memory backend for unit tests that need backend KV semantics without SQL.\n///\n/// SQL execution intentionally returns an error so new tests do not accidentally\n/// couple to raw SQL while exercising storage-facing APIs.\n#[derive(Debug, Clone, Default)]\npub(crate) struct UnitTestBackend {\n    kv: Arc<Mutex<KvMap>>,\n}\n\nimpl UnitTestBackend {\n    pub(crate) fn new() -> Self {\n        Self::default()\n    }\n}\n\n#[async_trait]\nimpl Backend for UnitTestBackend {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        let snapshot = self\n            .kv\n            .lock()\n            .map_err(|_| lock_error(\"unit test backend kv\"))?\n            .clone();\n        Ok(Box::new(UnitTestTransaction {\n            parent: Arc::clone(&self.kv),\n            kv: snapshot,\n        }))\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        let snapshot = self\n            .kv\n            .lock()\n            .map_err(|_| lock_error(\"unit test backend kv\"))?\n            .clone();\n        Ok(Box::new(UnitTestTransaction {\n            parent: Arc::clone(&self.kv),\n            kv: snapshot,\n        }))\n    }\n}\n\nstruct UnitTestTransaction {\n    parent: Arc<Mutex<KvMap>>,\n    kv: 
KvMap,\n}\n\n#[async_trait]\nimpl BackendReadTransaction for UnitTestTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        let mut groups = Vec::with_capacity(request.groups.len());\n        for group in request.groups {\n            let namespace = group.namespace.clone();\n            let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0);\n            let mut present = Vec::with_capacity(group.keys.len());\n            for key in group.keys {\n                if let Some(value) = self.kv.get(&(namespace.clone(), key)) {\n                    values.push(value);\n                    present.push(true);\n                } else {\n                    values.push([]);\n                    present.push(false);\n                }\n            }\n            groups.push(BackendKvValueGroup::new(\n                namespace,\n                values.finish(),\n                present,\n            ));\n        }\n        Ok(BackendKvValueBatch { groups })\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        let mut groups = Vec::with_capacity(request.groups.len());\n        for group in request.groups {\n            let namespace = group.namespace.clone();\n            let exists = group\n                .keys\n                .into_iter()\n                .map(|key| self.kv.contains_key(&(namespace.clone(), key)))\n                .collect();\n            groups.push(BackendKvExistsGroup { namespace, exists });\n        }\n        Ok(BackendKvExistsBatch { groups })\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        Ok(scan_map_keys(&self.kv, request))\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> 
Result<BackendKvValuePage, LixError> {\n        Ok(scan_map_values(&self.kv, request))\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        Ok(scan_map_entries(&self.kv, request))\n    }\n\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl BackendWriteTransaction for UnitTestTransaction {\n    async fn write_kv_batch(\n        &mut self,\n        batch: BackendKvWriteBatch,\n    ) -> Result<BackendKvWriteStats, LixError> {\n        let mut stats = BackendKvWriteStats::default();\n        for group in batch.groups {\n            let namespace = group.namespace().to_string();\n            for index in 0..group.put_count() {\n                let key = group.put_key(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put key\")\n                })?;\n                let value = group.put_value(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put value\")\n                })?;\n                stats.puts += 1;\n                stats.bytes_written += key.len() + value.len();\n                self.kv\n                    .insert((namespace.clone(), key.to_vec()), value.to_vec());\n            }\n            for index in 0..group.delete_count() {\n                let key = group.delete_key(index).ok_or_else(|| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        \"backend write batch missing delete key\",\n                    )\n                })?;\n                stats.deletes += 1;\n                stats.bytes_written += key.len();\n                self.kv.remove(&(namespace.clone(), key.to_vec()));\n            }\n        }\n        Ok(stats)\n    }\n\n    async fn commit(self: Box<Self>) -> Result<(), LixError> {\n        
*self\n            .parent\n            .lock()\n            .map_err(|_| lock_error(\"unit test backend kv\"))? = self.kv;\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl Backend for Arc<UnitTestBackend> {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        self.as_ref().begin_read_transaction().await\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        self.as_ref().begin_write_transaction().await\n    }\n}\n\nfn scan_pairs<'a>(\n    kv: &'a KvMap,\n    namespace: &str,\n    range: &BackendKvScanRange,\n    limit: Option<usize>,\n) -> Vec<(&'a Vec<u8>, &'a Vec<u8>)> {\n    let pairs = kv\n        .iter()\n        .filter(|((candidate_namespace, key), _)| {\n            candidate_namespace == namespace && key_matches_range(key, range)\n        })\n        .collect::<Vec<_>>();\n    let mut pairs = pairs;\n    pairs.sort_by(|left, right| left.0 .1.cmp(&right.0 .1));\n    if let Some(limit) = limit {\n        pairs.truncate(limit);\n    }\n    pairs\n        .into_iter()\n        .map(|((_, key), value)| (key, value))\n        .collect()\n}\n\npub(crate) fn scan_map_keys(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvKeyPage {\n    let pairs = scan_filtered_pairs(kv, &request);\n    let has_more = pairs.len() > request.limit;\n    let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n    let mut resume_after = None;\n    for (index, (key, _)) in pairs.into_iter().enumerate() {\n        if index >= request.limit {\n            break;\n        }\n        resume_after = Some(key.clone());\n        keys.push(key);\n    }\n    let resume_after = has_more.then_some(resume_after).flatten();\n    BackendKvKeyPage {\n        keys: keys.finish(),\n        resume_after,\n    }\n}\n\npub(crate) fn scan_map_values(kv: &KvMap, request: 
BackendKvScanRequest) -> BackendKvValuePage {\n    let pairs = scan_filtered_pairs(kv, &request);\n    let has_more = pairs.len() > request.limit;\n    let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n    let mut resume_after = None;\n    for (index, (key, value)) in pairs.into_iter().enumerate() {\n        if index >= request.limit {\n            break;\n        }\n        resume_after = Some(key.clone());\n        values.push(value);\n    }\n    let resume_after = has_more.then_some(resume_after).flatten();\n    BackendKvValuePage {\n        values: values.finish(),\n        resume_after,\n    }\n}\n\npub(crate) fn scan_map_entries(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvEntryPage {\n    let pairs = scan_filtered_pairs(kv, &request);\n    let has_more = pairs.len() > request.limit;\n    let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n    let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n    let mut resume_after = None;\n    for (index, (key, value)) in pairs.into_iter().enumerate() {\n        if index >= request.limit {\n            break;\n        }\n        resume_after = Some(key.clone());\n        keys.push(key);\n        values.push(value);\n    }\n    let resume_after = has_more.then_some(resume_after).flatten();\n    BackendKvEntryPage {\n        keys: keys.finish(),\n        values: values.finish(),\n        resume_after,\n    }\n}\n\nfn scan_filtered_pairs<'a>(\n    kv: &'a KvMap,\n    request: &BackendKvScanRequest,\n) -> Vec<(&'a Vec<u8>, &'a Vec<u8>)> {\n    // Apply the `after` filter before any limit: truncating the raw scan first\n    // can drop rows that belong on this page and end pagination prematurely\n    // (e.g. keys a..e with limit 2 and after = b would lose e). Callers cap\n    // the page at `limit` and use the surplus to detect more rows.\n    scan_pairs(kv, &request.namespace, &request.range, None)\n        .into_iter()\n        .filter(|(key, _)| {\n            request\n                .after\n                .as_deref()\n                .is_none_or(|after| 
key.as_slice() > after)\n        })\n        .collect()\n}\n\nfn key_matches_range(key: &[u8], range: &BackendKvScanRange) -> bool {\n    match range {\n        BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix),\n        BackendKvScanRange::Range { start, end } => start.as_slice() <= key && key < end.as_slice(),\n    }\n}\n\nfn lock_error(name: &str) -> LixError {\n    LixError::new(\"LIX_ERROR_UNKNOWN\", format!(\"{name} lock poisoned\"))\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::backend::{\n        BackendKvGetGroup, BackendKvGetRequest, BackendKvScanRequest, BackendKvWriteBatch,\n        BackendKvWriteGroup,\n    };\n\n    async fn put(\n        transaction: &mut (dyn BackendWriteTransaction + Send + Sync),\n        namespace: &str,\n        key: &[u8],\n        value: &[u8],\n    ) {\n        transaction\n            .write_kv_batch(BackendKvWriteBatch {\n                groups: {\n                    let mut group = BackendKvWriteGroup::new(namespace);\n                    group.put(key, value);\n                    vec![group]\n                },\n            })\n            .await\n            .expect(\"put should succeed\");\n    }\n\n    async fn delete(\n        transaction: &mut (dyn BackendWriteTransaction + Send + Sync),\n        namespace: &str,\n        key: &[u8],\n    ) {\n        transaction\n            .write_kv_batch(BackendKvWriteBatch {\n                groups: {\n                    let mut group = BackendKvWriteGroup::new(namespace);\n                    group.delete(key);\n                    vec![group]\n                },\n            })\n            .await\n            .expect(\"delete should succeed\");\n    }\n\n    async fn get(backend: &UnitTestBackend, namespace: &str, key: &[u8]) -> Option<Vec<u8>> {\n        let mut transaction = backend\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        let result = transaction\n         
   .get_values(BackendKvGetRequest {\n                groups: vec![BackendKvGetGroup {\n                    namespace: namespace.to_string(),\n                    keys: vec![key.to_vec()],\n                }],\n            })\n            .await\n            .expect(\"get should succeed\");\n        transaction\n            .rollback()\n            .await\n            .expect(\"rollback should succeed\");\n        result\n            .groups\n            .into_iter()\n            .next()\n            .and_then(|group| group.value(0).flatten().map(<[u8]>::to_vec))\n    }\n\n    async fn scan(\n        backend: &UnitTestBackend,\n        namespace: &str,\n        range: BackendKvScanRange,\n        limit: usize,\n    ) -> BackendKvEntryPage {\n        let mut transaction = backend\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        let result = transaction\n            .scan_entries(BackendKvScanRequest {\n                namespace: namespace.to_string(),\n                range,\n                after: None,\n                limit,\n            })\n            .await\n            .expect(\"scan should succeed\");\n        transaction\n            .rollback()\n            .await\n            .expect(\"rollback should succeed\");\n        result\n    }\n\n    fn assert_entries(page: &BackendKvEntryPage, expected: &[(&[u8], &[u8])]) {\n        assert_eq!(page.len(), expected.len());\n        for (index, (key, value)) in expected.iter().enumerate() {\n            assert_eq!(page.key(index).expect(\"key exists\"), *key);\n            assert_eq!(page.value(index).expect(\"value exists\"), *value);\n        }\n    }\n\n    async fn scan_entries_request(\n        backend: &UnitTestBackend,\n        after: Option<&[u8]>,\n        limit: usize,\n    ) -> BackendKvEntryPage {\n        let mut transaction = backend\n            .begin_read_transaction()\n            .await\n            .expect(\"read 
transaction should open\");\n        let result = transaction\n            .scan_entries(BackendKvScanRequest {\n                namespace: \"ns\".to_string(),\n                range: BackendKvScanRange::prefix(Vec::new()),\n                after: after.map(Vec::from),\n                limit,\n            })\n            .await\n            .expect(\"scan should succeed\");\n        transaction\n            .rollback()\n            .await\n            .expect(\"rollback should succeed\");\n        result\n    }\n\n    async fn scan_keys_request(\n        backend: &UnitTestBackend,\n        after: Option<&[u8]>,\n        limit: usize,\n    ) -> BackendKvKeyPage {\n        let mut transaction = backend\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        let result = transaction\n            .scan_keys(BackendKvScanRequest {\n                namespace: \"ns\".to_string(),\n                range: BackendKvScanRange::prefix(Vec::new()),\n                after: after.map(Vec::from),\n                limit,\n            })\n            .await\n            .expect(\"scan should succeed\");\n        transaction\n            .rollback()\n            .await\n            .expect(\"rollback should succeed\");\n        result\n    }\n\n    #[tokio::test]\n    async fn committed_put_is_visible_to_backend_reads() {\n        let backend = UnitTestBackend::new();\n        let mut transaction = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        put(transaction.as_mut(), \"live_state\", b\"key\", b\"value\").await;\n        transaction.commit().await.expect(\"commit should succeed\");\n\n        assert_eq!(\n            get(&backend, \"live_state\", b\"key\").await,\n            Some(b\"value\".to_vec())\n        );\n    }\n\n    #[tokio::test]\n    async fn rollback_discards_puts() {\n        let backend = UnitTestBackend::new();\n       
 let mut transaction = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        put(transaction.as_mut(), \"live_state\", b\"key\", b\"value\").await;\n        transaction\n            .rollback()\n            .await\n            .expect(\"rollback should succeed\");\n\n        assert_eq!(get(&backend, \"live_state\", b\"key\").await, None);\n    }\n\n    #[tokio::test]\n    async fn close_is_idempotent_and_does_not_destroy_data() {\n        let backend = UnitTestBackend::new();\n        let mut transaction = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        put(transaction.as_mut(), \"live_state\", b\"key\", b\"value\").await;\n        transaction.commit().await.expect(\"commit should succeed\");\n\n        backend.close().await.expect(\"first close should succeed\");\n        backend.close().await.expect(\"second close should succeed\");\n\n        assert_eq!(\n            get(&backend, \"live_state\", b\"key\").await,\n            Some(b\"value\".to_vec())\n        );\n    }\n\n    #[tokio::test]\n    async fn delete_removes_key_on_commit() {\n        let backend = UnitTestBackend::new();\n        let mut seed = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"seed transaction should open\");\n        put(seed.as_mut(), \"live_state\", b\"key\", b\"value\").await;\n        seed.commit().await.expect(\"seed commit should succeed\");\n\n        let mut transaction = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"delete transaction should open\");\n        delete(transaction.as_mut(), \"live_state\", b\"key\").await;\n        transaction.commit().await.expect(\"commit should succeed\");\n\n        assert_eq!(get(&backend, \"live_state\", b\"key\").await, None);\n    }\n\n    #[tokio::test]\n    async fn 
prefix_scan_returns_lexicographic_order_with_limit() {\n        let backend = UnitTestBackend::new();\n        let mut transaction = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        put(transaction.as_mut(), \"ns\", b\"b/2\", b\"2\").await;\n        put(transaction.as_mut(), \"ns\", b\"a/2\", b\"2\").await;\n        put(transaction.as_mut(), \"ns\", b\"a/1\", b\"1\").await;\n        put(transaction.as_mut(), \"other\", b\"a/0\", b\"0\").await;\n        transaction.commit().await.unwrap();\n\n        let pairs = scan(&backend, \"ns\", BackendKvScanRange::prefix(b\"a/\"), 1).await;\n        assert_entries(&pairs, &[(b\"a/1\", b\"1\")]);\n    }\n\n    #[tokio::test]\n    async fn scan_sets_resume_after_only_when_more_rows_exist() {\n        let backend = UnitTestBackend::new();\n        let mut transaction = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        put(transaction.as_mut(), \"ns\", b\"a\", b\"1\").await;\n        put(transaction.as_mut(), \"ns\", b\"b\", b\"2\").await;\n        put(transaction.as_mut(), \"ns\", b\"c\", b\"3\").await;\n        transaction.commit().await.unwrap();\n\n        let first_page = scan_entries_request(&backend, None, 2).await;\n        assert_entries(&first_page, &[(b\"a\", b\"1\"), (b\"b\", b\"2\")]);\n        assert_eq!(first_page.resume_after, Some(b\"b\".to_vec()));\n\n        let second_page =\n            scan_entries_request(&backend, first_page.resume_after.as_deref(), 2).await;\n        assert_entries(&second_page, &[(b\"c\", b\"3\")]);\n        assert_eq!(second_page.resume_after, None);\n    }\n\n    #[tokio::test]\n    async fn scan_exact_page_size_has_no_resume_after() {\n        let backend = UnitTestBackend::new();\n        let mut transaction = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n  
      put(transaction.as_mut(), \"ns\", b\"a\", b\"1\").await;\n        put(transaction.as_mut(), \"ns\", b\"b\", b\"2\").await;\n        transaction.commit().await.unwrap();\n\n        let page = scan_entries_request(&backend, None, 2).await;\n        assert_entries(&page, &[(b\"a\", b\"1\"), (b\"b\", b\"2\")]);\n        assert_eq!(page.resume_after, None);\n    }\n\n    #[tokio::test]\n    async fn key_only_scan_omits_values() {\n        let backend = UnitTestBackend::new();\n        let mut transaction = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        put(transaction.as_mut(), \"ns\", b\"a\", b\"1\").await;\n        put(transaction.as_mut(), \"ns\", b\"b\", b\"2\").await;\n        transaction.commit().await.unwrap();\n\n        let page = scan_keys_request(&backend, None, 2).await;\n        assert_eq!(page.keys.iter().collect::<Vec<_>>(), vec![b\"a\", b\"b\"]);\n        assert_eq!(page.resume_after, None);\n    }\n\n    #[tokio::test]\n    async fn existence_get_omits_values() {\n        let backend = UnitTestBackend::new();\n        let mut transaction = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        put(transaction.as_mut(), \"ns\", b\"a\", b\"1\").await;\n        transaction.commit().await.unwrap();\n\n        let mut transaction = backend\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        let result = transaction\n            .exists_many(BackendKvGetRequest {\n                groups: vec![BackendKvGetGroup {\n                    namespace: \"ns\".to_string(),\n                    keys: vec![b\"a\".to_vec(), b\"missing\".to_vec()],\n                }],\n            })\n            .await\n            .expect(\"existence get should succeed\");\n        transaction\n            .rollback()\n            .await\n            
.expect(\"rollback should succeed\");\n\n        assert_eq!(result.groups[0].exists, vec![true, false]);\n    }\n\n    #[tokio::test]\n    async fn range_scan_is_half_open() {\n        let backend = UnitTestBackend::new();\n        let mut transaction = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        put(transaction.as_mut(), \"ns\", b\"a\", b\"a\").await;\n        put(transaction.as_mut(), \"ns\", b\"b\", b\"b\").await;\n        put(transaction.as_mut(), \"ns\", b\"c\", b\"c\").await;\n        transaction.commit().await.unwrap();\n\n        let pairs = scan(\n            &backend,\n            \"ns\",\n            BackendKvScanRange::range(b\"a\", b\"c\"),\n            usize::MAX,\n        )\n        .await;\n        assert_entries(&pairs, &[(b\"a\", b\"a\"), (b\"b\", b\"b\")]);\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/backend/types.rs",
    "content": "use async_trait::async_trait;\n\nuse crate::backend::{\n    BackendKvEntryPage, BackendKvExistsBatch, BackendKvGetRequest, BackendKvKeyPage,\n    BackendKvScanRequest, BackendKvValueBatch, BackendKvValuePage, BackendKvWriteBatch,\n    BackendKvWriteStats,\n};\nuse crate::LixError;\n\n#[async_trait]\npub trait Backend: Send + Sync {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError>;\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError>;\n\n    /// Releases physical resources held by this backend handle.\n    ///\n    /// This is a resource lifecycle operation, not a durability boundary and\n    /// not a destructive operation. Successful write transactions are durable\n    /// when their commit returns; callers should not rely on `close` to save\n    /// data. Implementations that do not own external resources may keep the\n    /// default no-op behavior.\n    async fn close(&self) -> Result<(), LixError> {\n        Ok(())\n    }\n\n    /// Destroys the physical storage target represented by this backend.\n    ///\n    /// This is a persistence lifecycle operation, not a logical SQL operation.\n    ///\n    /// Callers should treat the backend as the authority for what constitutes\n    /// the full storage target. 
For example:\n    ///\n    /// - native SQLite may delete the main database file plus WAL/SHM sidecars\n    /// - wasm/opfs SQLite may clear the persisted OPFS target\n    /// - Postgres may drop or clear the configured schema/database target\n    ///\n    /// Callers must not attempt to infer or delete backend-owned physical\n    /// artifacts themselves.\n    ///\n    /// Implementations may choose not to support destroy if the backend\n    /// instance does not have enough information or authority to remove its\n    /// target.\n    async fn destroy(&self) -> Result<(), LixError> {\n        Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"destroy is not supported by this backend\".to_string(),\n            hint: None,\n            details: None,\n        })\n    }\n}\n\n#[async_trait]\npub trait BackendReadTransaction: Send + Sync {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError>;\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError>;\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError>;\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError>;\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError>;\n\n    async fn rollback(self: Box<Self>) -> Result<(), LixError>;\n}\n\n#[async_trait]\npub trait BackendWriteTransaction: BackendReadTransaction {\n    async fn write_kv_batch(\n        &mut self,\n        batch: BackendKvWriteBatch,\n    ) -> Result<BackendKvWriteStats, LixError>;\n\n    async fn commit(self: Box<Self>) -> Result<(), LixError>;\n}\n"
  },
  {
    "path": "packages/engine/src/binary_cas/chunking.rs",
    "content": "const FASTCDC_MIN_CHUNK_BYTES: usize = 16 * 1024;\nconst FASTCDC_AVG_CHUNK_BYTES: usize = 64 * 1024;\nconst FASTCDC_MAX_CHUNK_BYTES: usize = 256 * 1024;\nconst SINGLE_CHUNK_FAST_PATH_MAX_BYTES: usize = 64 * 1024;\n\n#[allow(dead_code)]\npub(crate) fn should_materialize_chunk_cas(data: &[u8]) -> bool {\n    data.len() > SINGLE_CHUNK_FAST_PATH_MAX_BYTES\n}\n\npub(crate) fn fastcdc_chunk_ranges(data: &[u8]) -> Vec<(usize, usize)> {\n    if data.is_empty() {\n        return Vec::new();\n    }\n    if data.len() <= SINGLE_CHUNK_FAST_PATH_MAX_BYTES {\n        return vec![(0, data.len())];\n    }\n\n    fastcdc::v2020::FastCDC::new(\n        data,\n        FASTCDC_MIN_CHUNK_BYTES as u32,\n        FASTCDC_AVG_CHUNK_BYTES as u32,\n        FASTCDC_MAX_CHUNK_BYTES as u32,\n    )\n    .map(|chunk| {\n        let start = chunk.offset as usize;\n        let end = start + (chunk.length as usize);\n        (start, end)\n    })\n    .collect()\n}\n"
  },
  {
    "path": "packages/engine/src/binary_cas/codec.rs",
    "content": "use crate::LixError;\n\n// Binary CAS physical rows:\n// - manifest:       BCM2 | kind:u8 | blob_size:u64 | kind payload\n//   - empty payload:   []\n//   - single payload:  chunk_hash:[u8;32]\n//   - chunked payload: chunk_count:u32\n// - manifest chunk: BCC1 | chunk_hash:[u8;32] | uncompressed_len:u64\n// - chunk:          BCK1 | codec:u8 | uncompressed_len:u64 | payload:[u8]\nconst MANIFEST_MAGIC: &[u8; 4] = b\"BCM2\";\nconst MANIFEST_CHUNK_MAGIC: &[u8; 4] = b\"BCC1\";\nconst CHUNK_MAGIC: &[u8; 4] = b\"BCK1\";\nconst MANIFEST_KIND_EMPTY: u8 = 0;\nconst MANIFEST_KIND_SINGLE_CHUNK: u8 = 1;\nconst MANIFEST_KIND_CHUNKED: u8 = 2;\nconst CHUNK_CODEC_RAW_TAG: u8 = 0;\nconst HASH_BYTES: usize = 32;\nconst MANIFEST_HEADER_BYTES: usize = 4 + 1 + 8;\nconst EMPTY_MANIFEST_BYTES: usize = MANIFEST_HEADER_BYTES;\nconst SINGLE_CHUNK_MANIFEST_BYTES: usize = MANIFEST_HEADER_BYTES + HASH_BYTES;\nconst CHUNKED_MANIFEST_BYTES: usize = MANIFEST_HEADER_BYTES + 4;\nconst MANIFEST_CHUNK_BYTES: usize = 4 + HASH_BYTES + 8;\nconst CHUNK_HEADER_BYTES: usize = 4 + 1 + 8;\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) enum BinaryChunkCodec {\n    Raw,\n}\n\nimpl BinaryChunkCodec {\n    fn tag(self) -> u8 {\n        match self {\n            Self::Raw => CHUNK_CODEC_RAW_TAG,\n        }\n    }\n\n    fn from_tag(tag: u8) -> Result<Self, LixError> {\n        match tag {\n            CHUNK_CODEC_RAW_TAG => Ok(Self::Raw),\n            other => Err(codec_error(format!(\n                \"unsupported binary CAS chunk codec tag {other}\"\n            ))),\n        }\n    }\n}\n\n#[derive(Debug, Clone)]\npub(crate) struct EncodedBinaryChunkPayload {\n    pub(crate) codec: BinaryChunkCodec,\n    pub(crate) data: Vec<u8>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) enum BinaryCasManifest {\n    Empty {\n        size_bytes: u64,\n    },\n    SingleChunk {\n        size_bytes: u64,\n        chunk_hash: [u8; HASH_BYTES],\n    },\n    Chunked {\n        size_bytes: 
u64,\n        chunk_count: u32,\n    },\n}\n\nimpl BinaryCasManifest {\n    pub(crate) fn size_bytes(&self) -> u64 {\n        match self {\n            Self::Empty { size_bytes }\n            | Self::SingleChunk { size_bytes, .. }\n            | Self::Chunked { size_bytes, .. } => *size_bytes,\n        }\n    }\n}\n\n#[cfg(test)]\npub(crate) fn binary_blob_hash_hex(data: &[u8]) -> String {\n    crate::common::stable_content_fingerprint_hex(data)\n}\n\npub(crate) fn binary_blob_hash_bytes(data: &[u8]) -> [u8; HASH_BYTES] {\n    *blake3::hash(data).as_bytes()\n}\n\npub(crate) fn hash_hex_to_bytes(hash_hex: &str, label: &str) -> Result<[u8; HASH_BYTES], LixError> {\n    if hash_hex.len() != HASH_BYTES * 2 {\n        return Err(codec_error(format!(\n            \"{label} hash must be {} hex characters, got {}\",\n            HASH_BYTES * 2,\n            hash_hex.len()\n        )));\n    }\n\n    let mut out = [0u8; HASH_BYTES];\n    let bytes = hash_hex.as_bytes();\n    for index in 0..HASH_BYTES {\n        out[index] =\n            (hex_value(bytes[index * 2], label)? << 4) | hex_value(bytes[index * 2 + 1], label)?;\n    }\n    Ok(out)\n}\n\npub(crate) fn hash_bytes_to_hex(bytes: &[u8; HASH_BYTES]) -> String {\n    blake3::Hash::from_bytes(*bytes).to_hex().to_string()\n}\n\npub(crate) fn encode_binary_cas_manifest(manifest: &BinaryCasManifest) -> Vec<u8> {\n    let capacity = match manifest {\n        BinaryCasManifest::Empty { .. } => EMPTY_MANIFEST_BYTES,\n        BinaryCasManifest::SingleChunk { .. } => SINGLE_CHUNK_MANIFEST_BYTES,\n        BinaryCasManifest::Chunked { .. 
} => CHUNKED_MANIFEST_BYTES,\n    };\n    let mut out = Vec::with_capacity(capacity);\n    out.extend_from_slice(MANIFEST_MAGIC);\n    match manifest {\n        BinaryCasManifest::Empty { size_bytes } => {\n            out.push(MANIFEST_KIND_EMPTY);\n            out.extend_from_slice(&size_bytes.to_be_bytes());\n        }\n        BinaryCasManifest::SingleChunk {\n            size_bytes,\n            chunk_hash,\n        } => {\n            out.push(MANIFEST_KIND_SINGLE_CHUNK);\n            out.extend_from_slice(&size_bytes.to_be_bytes());\n            out.extend_from_slice(chunk_hash);\n        }\n        BinaryCasManifest::Chunked {\n            size_bytes,\n            chunk_count,\n        } => {\n            out.push(MANIFEST_KIND_CHUNKED);\n            out.extend_from_slice(&size_bytes.to_be_bytes());\n            out.extend_from_slice(&chunk_count.to_be_bytes());\n        }\n    }\n    out\n}\n\npub(crate) fn decode_binary_cas_manifest(bytes: &[u8]) -> Result<BinaryCasManifest, LixError> {\n    if bytes.len() < MANIFEST_HEADER_BYTES {\n        return Err(codec_error(format!(\n            \"binary CAS manifest must be at least {MANIFEST_HEADER_BYTES} bytes, got {}\",\n            bytes.len()\n        )));\n    }\n    require_magic(bytes, MANIFEST_MAGIC, \"binary CAS manifest\")?;\n    let size_bytes = u64::from_be_bytes(bytes[5..13].try_into().expect(\"fixed slice\"));\n    match bytes[4] {\n        MANIFEST_KIND_EMPTY => {\n            require_len(bytes, EMPTY_MANIFEST_BYTES, \"binary CAS empty manifest\")?;\n            Ok(BinaryCasManifest::Empty { size_bytes })\n        }\n        MANIFEST_KIND_SINGLE_CHUNK => {\n            require_len(\n                bytes,\n                SINGLE_CHUNK_MANIFEST_BYTES,\n                \"binary CAS single-chunk manifest\",\n            )?;\n            let chunk_hash = bytes[13..45].try_into().expect(\"fixed slice\");\n            Ok(BinaryCasManifest::SingleChunk {\n                size_bytes,\n                
chunk_hash,\n            })\n        }\n        MANIFEST_KIND_CHUNKED => {\n            require_len(bytes, CHUNKED_MANIFEST_BYTES, \"binary CAS chunked manifest\")?;\n            let chunk_count = u32::from_be_bytes(bytes[13..17].try_into().expect(\"fixed slice\"));\n            Ok(BinaryCasManifest::Chunked {\n                size_bytes,\n                chunk_count,\n            })\n        }\n        other => Err(codec_error(format!(\n            \"unsupported binary CAS manifest kind {other}\"\n        ))),\n    }\n}\n\npub(crate) fn encode_binary_cas_manifest_chunk(\n    chunk_hash: &[u8; HASH_BYTES],\n    chunk_size: u64,\n) -> Vec<u8> {\n    let mut out = Vec::with_capacity(MANIFEST_CHUNK_BYTES);\n    out.extend_from_slice(MANIFEST_CHUNK_MAGIC);\n    out.extend_from_slice(chunk_hash);\n    out.extend_from_slice(&chunk_size.to_be_bytes());\n    out\n}\n\npub(crate) fn decode_binary_cas_manifest_chunk(\n    bytes: &[u8],\n) -> Result<([u8; HASH_BYTES], u64), LixError> {\n    if bytes.len() != MANIFEST_CHUNK_BYTES {\n        return Err(codec_error(format!(\n            \"binary CAS manifest chunk must be {MANIFEST_CHUNK_BYTES} bytes, got {}\",\n            bytes.len()\n        )));\n    }\n    require_magic(bytes, MANIFEST_CHUNK_MAGIC, \"binary CAS manifest chunk\")?;\n    let chunk_hash = bytes[4..36].try_into().expect(\"fixed slice\");\n    let chunk_size = u64::from_be_bytes(bytes[36..44].try_into().expect(\"fixed slice\"));\n    Ok((chunk_hash, chunk_size))\n}\n\npub(crate) fn encode_binary_cas_chunk(\n    codec: BinaryChunkCodec,\n    uncompressed_len: u64,\n    payload: &[u8],\n) -> Vec<u8> {\n    let mut out = Vec::with_capacity(CHUNK_HEADER_BYTES + payload.len());\n    out.extend_from_slice(CHUNK_MAGIC);\n    out.push(codec.tag());\n    out.extend_from_slice(&uncompressed_len.to_be_bytes());\n    out.extend_from_slice(payload);\n    out\n}\n\npub(crate) fn decode_binary_cas_chunk(\n    bytes: &[u8],\n) -> Result<(BinaryChunkCodec, u64, &[u8]), LixError> 
{\n    if bytes.len() < CHUNK_HEADER_BYTES {\n        return Err(codec_error(format!(\n            \"binary CAS chunk must be at least {CHUNK_HEADER_BYTES} bytes, got {}\",\n            bytes.len()\n        )));\n    }\n    require_magic(bytes, CHUNK_MAGIC, \"binary CAS chunk\")?;\n    let codec = BinaryChunkCodec::from_tag(bytes[4])?;\n    let uncompressed_len = u64::from_be_bytes(bytes[5..13].try_into().expect(\"fixed slice\"));\n    Ok((codec, uncompressed_len, &bytes[CHUNK_HEADER_BYTES..]))\n}\n\nfn require_magic(bytes: &[u8], expected: &[u8; 4], label: &str) -> Result<(), LixError> {\n    if &bytes[..4] == expected {\n        return Ok(());\n    }\n    Err(codec_error(format!(\n        \"{label} has unsupported binary format\"\n    )))\n}\n\nfn require_len(bytes: &[u8], expected: usize, label: &str) -> Result<(), LixError> {\n    if bytes.len() == expected {\n        return Ok(());\n    }\n    Err(codec_error(format!(\n        \"{label} must be {expected} bytes, got {}\",\n        bytes.len()\n    )))\n}\n\nfn hex_value(byte: u8, label: &str) -> Result<u8, LixError> {\n    match byte {\n        b'0'..=b'9' => Ok(byte - b'0'),\n        b'a'..=b'f' => Ok(byte - b'a' + 10),\n        b'A'..=b'F' => Ok(byte - b'A' + 10),\n        _ => Err(codec_error(format!(\"{label} hash contains non-hex bytes\"))),\n    }\n}\n\nfn codec_error(message: String) -> LixError {\n    LixError::new(\"LIX_ERROR_UNKNOWN\", message)\n}\n\npub(crate) fn encode_binary_chunk_payload(chunk_data: &[u8]) -> EncodedBinaryChunkPayload {\n    EncodedBinaryChunkPayload {\n        codec: BinaryChunkCodec::Raw,\n        data: chunk_data.to_vec(),\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn manifests_roundtrip_fixed_binary_rows() {\n        let chunk_hash = binary_blob_hash_bytes(b\"chunk\");\n        let cases = vec![\n            (\n                BinaryCasManifest::Empty { size_bytes: 0 },\n                EMPTY_MANIFEST_BYTES,\n            ),\n            (\n  
              BinaryCasManifest::SingleChunk {\n                    size_bytes: 42,\n                    chunk_hash,\n                },\n                SINGLE_CHUNK_MANIFEST_BYTES,\n            ),\n            (\n                BinaryCasManifest::Chunked {\n                    size_bytes: 42,\n                    chunk_count: 7,\n                },\n                CHUNKED_MANIFEST_BYTES,\n            ),\n        ];\n        for (manifest, expected_len) in cases {\n            let encoded = encode_binary_cas_manifest(&manifest);\n            assert_eq!(encoded.len(), expected_len);\n            assert_eq!(decode_binary_cas_manifest(&encoded).unwrap(), manifest);\n        }\n    }\n\n    #[test]\n    fn manifest_chunk_roundtrips_fixed_binary_row() {\n        let hash = binary_blob_hash_bytes(b\"chunk\");\n        let encoded = encode_binary_cas_manifest_chunk(&hash, 1024);\n        assert_eq!(encoded.len(), MANIFEST_CHUNK_BYTES);\n        assert_eq!(\n            decode_binary_cas_manifest_chunk(&encoded).unwrap(),\n            (hash, 1024)\n        );\n    }\n\n    #[test]\n    fn chunk_roundtrips_payload_as_remaining_bytes() {\n        let payload = b\"hello payload\";\n        let encoded = encode_binary_cas_chunk(BinaryChunkCodec::Raw, payload.len() as u64, payload);\n        assert_eq!(&encoded[..4], CHUNK_MAGIC);\n        let (codec, uncompressed_len, decoded_payload) = decode_binary_cas_chunk(&encoded).unwrap();\n        assert_eq!(codec, BinaryChunkCodec::Raw);\n        assert_eq!(uncompressed_len, payload.len() as u64);\n        assert_eq!(decoded_payload, payload);\n    }\n\n    #[test]\n    fn wrong_magic_is_rejected() {\n        let mut encoded = encode_binary_cas_manifest(&BinaryCasManifest::Empty { size_bytes: 0 });\n        encoded[0] = b'X';\n        let error = decode_binary_cas_manifest(&encoded).unwrap_err();\n        assert!(error.message.contains(\"unsupported binary format\"));\n    }\n\n    #[test]\n    fn 
hex_hashes_roundtrip_to_32_byte_keys() {\n        let hash_hex = binary_blob_hash_hex(b\"blob\");\n        let hash_bytes = hash_hex_to_bytes(&hash_hex, \"test\").unwrap();\n        assert_eq!(hash_bytes.len(), 32);\n        assert_eq!(hash_bytes_to_hex(&hash_bytes), hash_hex);\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/binary_cas/context.rs",
    "content": "use async_trait::async_trait;\n\nuse crate::binary_cas::{\n    BlobBytesBatch, BlobExistsBatch, BlobHash, BlobMetadataBatch, BlobWrite, BlobWriteReceipt,\n};\nuse crate::storage::{StorageReader, StorageWriteSet};\nuse crate::LixError;\nuse std::collections::HashSet;\n\n#[async_trait]\npub(crate) trait BlobDataReader: Send + Sync {\n    async fn load_bytes_many(&self, hashes: &[BlobHash]) -> Result<BlobBytesBatch, LixError>;\n}\n\n/// Long-lived Binary CAS context factory.\n///\n/// The context does not own storage. Callers explicitly provide a KV store via\n/// `reader(...)` or `writer(...)`, keeping backend and transaction ownership at\n/// the execution layer.\npub(crate) struct BinaryCasContext;\n\nimpl BinaryCasContext {\n    pub(crate) fn new() -> Self {\n        Self\n    }\n\n    /// Creates a Binary CAS reader over any storage reader.\n    ///\n    /// The reader can be a read transaction or the active write transaction\n    /// when reads must participate in transaction-local visibility.\n    pub(crate) fn reader<S>(&self, store: S) -> BinaryCasStoreReader<S>\n    where\n        S: StorageReader,\n    {\n        BinaryCasStoreReader { store }\n    }\n\n    pub(crate) fn writer<'a>(&self, writes: &'a mut StorageWriteSet) -> BinaryCasWriter<'a> {\n        BinaryCasWriter::new(writes)\n    }\n}\n\n#[async_trait]\nimpl<S> BlobDataReader for BinaryCasStoreReader<S>\nwhere\n    S: StorageReader + Clone + Send + Sync,\n{\n    async fn load_bytes_many(&self, hashes: &[BlobHash]) -> Result<BlobBytesBatch, LixError> {\n        let mut reader = BinaryCasStoreReader {\n            store: self.store.clone(),\n        };\n        BinaryCasStoreReader::load_bytes_many(&mut reader, hashes).await\n    }\n}\n\n/// Binary CAS reader over a caller-supplied KV store.\npub(crate) struct BinaryCasStoreReader<S> {\n    store: S,\n}\n\nimpl<S> BinaryCasStoreReader<S>\nwhere\n    S: StorageReader,\n{\n    #[allow(dead_code)]\n    pub(crate) async fn exists_many(\n   
     &mut self,\n        hashes: &[BlobHash],\n    ) -> Result<BlobExistsBatch, LixError> {\n        crate::binary_cas::kv::exists_many(&mut self.store, hashes).await\n    }\n\n    #[allow(dead_code)]\n    pub(crate) async fn load_metadata_many(\n        &mut self,\n        hashes: &[BlobHash],\n    ) -> Result<BlobMetadataBatch, LixError> {\n        crate::binary_cas::kv::load_metadata_many(&mut self.store, hashes).await\n    }\n\n    pub(crate) async fn load_bytes_many(\n        &mut self,\n        hashes: &[BlobHash],\n    ) -> Result<BlobBytesBatch, LixError> {\n        crate::binary_cas::kv::load_bytes_many(&mut self.store, hashes).await\n    }\n\n    #[cfg(feature = \"storage-benches\")]\n    pub(crate) async fn count_blob_manifests(&mut self) -> Result<usize, LixError> {\n        crate::binary_cas::kv::count_manifests(&mut self.store).await\n    }\n}\n\n/// Transaction-scoped Binary CAS writer.\n///\n/// This type does not begin, commit, or roll back transactions. It only writes\n/// CAS data into the transaction supplied by the caller.\npub(crate) struct BinaryCasWriter<'a> {\n    writes: &'a mut StorageWriteSet,\n    blob_hashes: HashSet<[u8; 32]>,\n    chunk_keys: HashSet<Vec<u8>>,\n}\n\nimpl<'a> BinaryCasWriter<'a> {\n    fn new(writes: &'a mut StorageWriteSet) -> Self {\n        Self {\n            writes,\n            blob_hashes: HashSet::new(),\n            chunk_keys: HashSet::new(),\n        }\n    }\n\n    pub(crate) fn stage_bytes(&mut self, bytes: &[u8]) -> Result<BlobWriteReceipt, LixError> {\n        crate::binary_cas::kv::stage_blob_write(\n            self.writes,\n            &mut self.blob_hashes,\n            &mut self.chunk_keys,\n            &BlobWrite { bytes },\n        )\n    }\n\n    #[allow(dead_code)]\n    pub(crate) fn stage_many(\n        &mut self,\n        writes: &[BlobWrite<'_>],\n    ) -> Result<Vec<BlobWriteReceipt>, LixError> {\n        writes\n            .iter()\n            .map(|write| {\n                
crate::binary_cas::kv::stage_blob_write(\n                    self.writes,\n                    &mut self.blob_hashes,\n                    &mut self.chunk_keys,\n                    write,\n                )\n            })\n            .collect()\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/binary_cas/kv.rs",
    "content": "#![allow(dead_code)]\n\nuse crate::binary_cas::chunking::fastcdc_chunk_ranges;\nuse crate::binary_cas::codec::{\n    decode_binary_cas_chunk, decode_binary_cas_manifest, decode_binary_cas_manifest_chunk,\n    encode_binary_cas_chunk, encode_binary_cas_manifest, encode_binary_cas_manifest_chunk,\n    encode_binary_chunk_payload, BinaryCasManifest, BinaryChunkCodec,\n};\nuse crate::binary_cas::{\n    BlobBytesBatch, BlobExistsBatch, BlobHash, BlobLayout, BlobMetadata, BlobMetadataBatch,\n    BlobWrite, BlobWriteReceipt,\n};\nuse crate::storage::{\n    KvGetGroup, KvGetRequest, KvScanRange, KvScanRequest, StorageReader, StorageWriteSet,\n};\nuse crate::LixError;\nuse std::collections::{HashMap, HashSet};\n\npub(crate) const BINARY_CAS_MANIFEST_NAMESPACE: &str = \"binary_cas.manifest\";\npub(crate) const BINARY_CAS_MANIFEST_CHUNK_NAMESPACE: &str = \"binary_cas.manifest_chunk\";\npub(crate) const BINARY_CAS_CHUNK_NAMESPACE: &str = \"binary_cas.chunk\";\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct KvBlobManifestChunk {\n    pub(crate) chunk_hash: [u8; 32],\n    pub(crate) chunk_size: u64,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct KvChunk {\n    pub(crate) codec: BinaryChunkCodec,\n    pub(crate) uncompressed_len: u64,\n    pub(crate) data: Vec<u8>,\n}\n\npub(crate) async fn load_manifest(\n    store: &mut impl StorageReader,\n    blob_hash: BlobHash,\n) -> Result<Option<BinaryCasManifest>, LixError> {\n    let Some(bytes) = get_one(\n        store,\n        BINARY_CAS_MANIFEST_NAMESPACE,\n        manifest_key(blob_hash),\n    )\n    .await?\n    else {\n        return Ok(None);\n    };\n    decode_binary_cas_manifest(&bytes).map(Some)\n}\n\n#[cfg(feature = \"storage-benches\")]\npub(crate) async fn count_manifests(store: &mut impl StorageReader) -> Result<usize, LixError> {\n    Ok(scan_all_values(\n        store,\n        BINARY_CAS_MANIFEST_NAMESPACE,\n        KvScanRange::Prefix(Vec::new()),\n    )\n    .await?\n   
 .len())\n}\n\npub(crate) fn stage_manifest(\n    writes: &mut StorageWriteSet,\n    blob_hash: BlobHash,\n    manifest: &BinaryCasManifest,\n) {\n    writes.put(\n        BINARY_CAS_MANIFEST_NAMESPACE,\n        manifest_key(blob_hash),\n        encode_binary_cas_manifest(manifest),\n    );\n}\n\npub(crate) async fn scan_manifest_chunks(\n    store: &mut impl StorageReader,\n    blob_hash: BlobHash,\n) -> Result<Vec<KvBlobManifestChunk>, LixError> {\n    scan_all_values(\n        store,\n        BINARY_CAS_MANIFEST_CHUNK_NAMESPACE,\n        KvScanRange::Prefix(manifest_chunk_prefix(blob_hash)),\n    )\n    .await?\n    .into_iter()\n    .map(|value| {\n        let (chunk_hash, chunk_size) = decode_binary_cas_manifest_chunk(&value)?;\n        Ok(KvBlobManifestChunk {\n            chunk_hash,\n            chunk_size,\n        })\n    })\n    .collect()\n}\n\npub(crate) fn stage_manifest_chunk(\n    writes: &mut StorageWriteSet,\n    blob_hash: BlobHash,\n    chunk_index: u64,\n    chunk: &KvBlobManifestChunk,\n) {\n    writes.put(\n        BINARY_CAS_MANIFEST_CHUNK_NAMESPACE,\n        manifest_chunk_key(blob_hash, chunk_index),\n        encode_binary_cas_manifest_chunk(&chunk.chunk_hash, chunk.chunk_size),\n    );\n}\n\npub(crate) async fn load_chunk(\n    store: &mut impl StorageReader,\n    chunk_hash: BlobHash,\n) -> Result<Option<KvChunk>, LixError> {\n    let Some(bytes) = get_one(store, BINARY_CAS_CHUNK_NAMESPACE, chunk_key(chunk_hash)).await?\n    else {\n        return Ok(None);\n    };\n    let (codec, uncompressed_len, payload) = decode_binary_cas_chunk(&bytes)?;\n    Ok(Some(KvChunk {\n        codec,\n        uncompressed_len,\n        data: payload.to_vec(),\n    }))\n}\n\npub(crate) fn stage_chunk(writes: &mut StorageWriteSet, chunk_hash: BlobHash, chunk: &KvChunk) {\n    writes.put(\n        BINARY_CAS_CHUNK_NAMESPACE,\n        chunk_key(chunk_hash),\n        encode_binary_cas_chunk(chunk.codec, chunk.uncompressed_len, &chunk.data),\n    );\n}\n\nasync 
fn get_one(\n    store: &mut impl StorageReader,\n    namespace: &str,\n    key: Vec<u8>,\n) -> Result<Option<Vec<u8>>, LixError> {\n    Ok(store\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: namespace.to_string(),\n                keys: vec![key],\n            }],\n        })\n        .await?\n        .groups\n        .into_iter()\n        .next()\n        .and_then(|group| group.single_value_owned()))\n}\n\nasync fn scan_all_values(\n    store: &mut impl StorageReader,\n    namespace: &str,\n    range: KvScanRange,\n) -> Result<Vec<Vec<u8>>, LixError> {\n    let page = store\n        .scan_values(KvScanRequest {\n            namespace: namespace.to_string(),\n            range,\n            after: None,\n            limit: usize::MAX,\n        })\n        .await?\n        .values;\n    Ok(page.iter().map(<[u8]>::to_vec).collect())\n}\n\npub(crate) async fn load_metadata_many(\n    store: &mut impl StorageReader,\n    hashes: &[BlobHash],\n) -> Result<BlobMetadataBatch, LixError> {\n    if hashes.is_empty() {\n        return Ok(BlobMetadataBatch::new(Vec::new()));\n    }\n    let rows = store\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: BINARY_CAS_MANIFEST_NAMESPACE.to_string(),\n                keys: hashes.iter().map(|hash| manifest_key(*hash)).collect(),\n            }],\n        })\n        .await?\n        .groups\n        .into_iter()\n        .next()\n        .map(|group| {\n            group\n                .values_iter()\n                .map(|value| value.map(<[u8]>::to_vec))\n                .collect::<Vec<_>>()\n        })\n        .unwrap_or_default();\n    if rows.len() != hashes.len() {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\n                \"binary CAS metadata read expected {} rows, got {}\",\n                hashes.len(),\n                rows.len()\n            ),\n        
));\n    }\n    let entries = rows\n        .into_iter()\n        .zip(hashes.iter().copied())\n        .map(|(row, hash)| {\n            row.map(|bytes| {\n                let manifest = decode_binary_cas_manifest(&bytes)?;\n                metadata_from_manifest(hash, manifest)\n            })\n            .transpose()\n        })\n        .collect::<Result<Vec<_>, _>>()?;\n    Ok(BlobMetadataBatch::new(entries))\n}\n\npub(crate) async fn exists_many(\n    store: &mut impl StorageReader,\n    hashes: &[BlobHash],\n) -> Result<BlobExistsBatch, LixError> {\n    Ok(BlobExistsBatch::new(\n        load_metadata_many(store, hashes)\n            .await?\n            .into_vec()\n            .into_iter()\n            .map(|metadata| metadata.is_some())\n            .collect(),\n    ))\n}\n\npub(crate) async fn load_bytes_many(\n    store: &mut impl StorageReader,\n    hashes: &[BlobHash],\n) -> Result<BlobBytesBatch, LixError> {\n    let metadata = load_metadata_many(store, hashes).await?.into_vec();\n    let mut chunked_manifests = Vec::new();\n    let mut requested_chunks = Vec::new();\n    let mut seen_chunks = HashSet::new();\n\n    for (index, metadata) in metadata.iter().enumerate() {\n        let Some(metadata) = metadata else {\n            continue;\n        };\n        match &metadata.layout {\n            BlobLayout::Empty => {}\n            BlobLayout::SingleChunk { chunk_hash } => {\n                if seen_chunks.insert(*chunk_hash) {\n                    requested_chunks.push(*chunk_hash);\n                }\n            }\n            BlobLayout::Chunked { chunk_count } => {\n                let manifest_chunks = scan_manifest_chunks(store, metadata.hash).await?;\n                if manifest_chunks.len() != *chunk_count as usize {\n                    return Err(LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        format!(\n                            \"binary CAS blob '{}' expected {} chunks, found {}\",\n               
             metadata.hash.to_hex(),\n                            chunk_count,\n                            manifest_chunks.len()\n                        ),\n                    ));\n                }\n                for manifest_chunk in &manifest_chunks {\n                    let chunk_hash = BlobHash::from_bytes(manifest_chunk.chunk_hash);\n                    if seen_chunks.insert(chunk_hash) {\n                        requested_chunks.push(chunk_hash);\n                    }\n                }\n                chunked_manifests.push((index, manifest_chunks));\n            }\n        }\n    }\n\n    let chunk_rows = load_chunk_rows(store, &requested_chunks).await?;\n    let chunk_rows_by_hash = requested_chunks\n        .into_iter()\n        .zip(chunk_rows.into_iter())\n        .collect::<HashMap<_, _>>();\n    let chunked_manifests_by_index = chunked_manifests\n        .into_iter()\n        .collect::<HashMap<usize, Vec<KvBlobManifestChunk>>>();\n\n    let entries = metadata\n        .into_iter()\n        .enumerate()\n        .map(|(index, metadata)| {\n            metadata\n                .map(|metadata| {\n                    assemble_blob_bytes(\n                        &metadata,\n                        &chunk_rows_by_hash,\n                        chunked_manifests_by_index.get(&index),\n                    )\n                })\n                .transpose()\n        })\n        .collect::<Result<Vec<_>, _>>()?;\n    Ok(BlobBytesBatch::new(entries))\n}\n\nasync fn load_chunk_rows(\n    store: &mut impl StorageReader,\n    hashes: &[BlobHash],\n) -> Result<Vec<Option<Vec<u8>>>, LixError> {\n    if hashes.is_empty() {\n        return Ok(Vec::new());\n    }\n    Ok(store\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: BINARY_CAS_CHUNK_NAMESPACE.to_string(),\n                keys: hashes.iter().map(|hash| chunk_key(*hash)).collect(),\n            }],\n        })\n        .await?\n        .groups\n   
     .into_iter()\n        .next()\n        .map(|group| {\n            group\n                .values_iter()\n                .map(|value| value.map(<[u8]>::to_vec))\n                .collect::<Vec<_>>()\n        })\n        .unwrap_or_default())\n}\n\nfn assemble_blob_bytes(\n    metadata: &BlobMetadata,\n    chunk_rows_by_hash: &HashMap<BlobHash, Option<Vec<u8>>>,\n    chunked_manifest: Option<&Vec<KvBlobManifestChunk>>,\n) -> Result<Vec<u8>, LixError> {\n    let expected_blob_size = persisted_size_to_usize(metadata.size_bytes, \"binary CAS blob\")?;\n    let bytes = match &metadata.layout {\n        BlobLayout::Empty => {\n            if metadata.hash != BlobHash::from_content(&[]) {\n                return Err(LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\n                        \"binary CAS blob '{}' failed content-address verification\",\n                        metadata.hash.to_hex()\n                    ),\n                ));\n            }\n            Vec::new()\n        }\n        BlobLayout::SingleChunk { chunk_hash } => {\n            let chunk = decode_chunk_from_map(\n                chunk_rows_by_hash,\n                metadata.hash,\n                *chunk_hash,\n                expected_blob_size,\n            )?;\n            if *chunk_hash != metadata.hash && BlobHash::from_content(&chunk) != metadata.hash {\n                return Err(LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\n                        \"binary CAS blob '{}' failed content-address verification\",\n                        metadata.hash.to_hex()\n                    ),\n                ));\n            }\n            chunk\n        }\n        BlobLayout::Chunked { chunk_count } => {\n            let Some(manifest_chunks) = chunked_manifest else {\n                return Err(LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\n                        
\"binary CAS blob '{}' missing chunk manifest\",\n                        metadata.hash.to_hex()\n                    ),\n                ));\n            };\n            if manifest_chunks.len() != *chunk_count as usize {\n                return Err(LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\n                        \"binary CAS blob '{}' expected {} chunks, found {}\",\n                        metadata.hash.to_hex(),\n                        chunk_count,\n                        manifest_chunks.len()\n                    ),\n                ));\n            }\n            let mut out = Vec::with_capacity(expected_blob_size);\n            for manifest_chunk in manifest_chunks {\n                let chunk_hash = BlobHash::from_bytes(manifest_chunk.chunk_hash);\n                let expected_chunk_size =\n                    persisted_size_to_usize(manifest_chunk.chunk_size, \"binary CAS chunk\")?;\n                let chunk = decode_chunk_from_map(\n                    chunk_rows_by_hash,\n                    metadata.hash,\n                    chunk_hash,\n                    expected_chunk_size,\n                )?;\n                out.extend_from_slice(&chunk);\n            }\n            if out.len() != expected_blob_size {\n                return Err(LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\n                        \"binary CAS blob '{}' expected {} bytes, decoded {} bytes\",\n                        metadata.hash.to_hex(),\n                        expected_blob_size,\n                        out.len()\n                    ),\n                ));\n            }\n            if BlobHash::from_content(&out) != metadata.hash {\n                return Err(LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\n                        \"binary CAS blob '{}' failed content-address verification\",\n                        
metadata.hash.to_hex()\n                    ),\n                ));\n            }\n            out\n        }\n    };\n    Ok(bytes)\n}\n\nfn decode_chunk_from_map(\n    chunk_rows_by_hash: &HashMap<BlobHash, Option<Vec<u8>>>,\n    blob_hash: BlobHash,\n    chunk_hash: BlobHash,\n    expected_chunk_size: usize,\n) -> Result<Vec<u8>, LixError> {\n    let Some(Some(chunk_bytes)) = chunk_rows_by_hash.get(&chunk_hash) else {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\n                \"binary CAS chunk '{}' is missing for blob '{}'\",\n                chunk_hash.to_hex(),\n                blob_hash.to_hex()\n            ),\n        ));\n    };\n    decode_and_verify_chunk(chunk_bytes, expected_chunk_size, blob_hash, chunk_hash)\n}\n\nfn decode_and_verify_chunk(\n    chunk_bytes: &[u8],\n    expected_chunk_size: usize,\n    blob_hash: BlobHash,\n    chunk_hash: BlobHash,\n) -> Result<Vec<u8>, LixError> {\n    let (codec, uncompressed_len, chunk_payload) = decode_binary_cas_chunk(chunk_bytes)?;\n    if uncompressed_len != expected_chunk_size as u64 {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\n                \"binary CAS chunk '{}' for blob '{}' expected {} uncompressed bytes, row says {}\",\n                chunk_hash.to_hex(),\n                blob_hash.to_hex(),\n                expected_chunk_size,\n                uncompressed_len\n            ),\n        ));\n    }\n    let BinaryChunkCodec::Raw = codec;\n    if chunk_payload.len() != expected_chunk_size {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\n                \"binary CAS chunk '{}' for blob '{}' expected {} decoded bytes, got {}\",\n                chunk_hash.to_hex(),\n                blob_hash.to_hex(),\n                expected_chunk_size,\n                chunk_payload.len()\n            ),\n        ));\n    }\n    if BlobHash::from_content(chunk_payload) 
!= chunk_hash {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\n                \"binary CAS chunk '{}' for blob '{}' failed content-address verification\",\n                chunk_hash.to_hex(),\n                blob_hash.to_hex()\n            ),\n        ));\n    }\n    Ok(chunk_payload.to_vec())\n}\n\npub(crate) fn stage_blob_write(\n    writes: &mut StorageWriteSet,\n    blob_hashes: &mut HashSet<[u8; 32]>,\n    chunk_keys: &mut HashSet<Vec<u8>>,\n    write: &BlobWrite<'_>,\n) -> Result<BlobWriteReceipt, LixError> {\n    let blob_hash = BlobHash::from_content(write.bytes);\n    let chunk_ranges = fastcdc_chunk_ranges(write.bytes);\n    let layout = match chunk_ranges.as_slice() {\n        [] => BlobLayout::Empty,\n        [(start, end)] => BlobLayout::SingleChunk {\n            chunk_hash: BlobHash::from_content(&write.bytes[*start..*end]),\n        },\n        _ => BlobLayout::Chunked {\n            chunk_count: u32::try_from(chunk_ranges.len()).map_err(|_| {\n                LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    \"binary CAS blob has too many chunks for manifest\".to_string(),\n                )\n            })?,\n        },\n    };\n    let receipt = BlobWriteReceipt {\n        hash: blob_hash,\n        size_bytes: write.bytes.len() as u64,\n        layout: layout.clone(),\n    };\n    if !blob_hashes.insert(blob_hash.into_bytes()) {\n        return Ok(receipt);\n    }\n\n    match &layout {\n        BlobLayout::Empty => {\n            stage_manifest(\n                writes,\n                blob_hash,\n                &BinaryCasManifest::Empty { size_bytes: 0 },\n            );\n        }\n        BlobLayout::SingleChunk { chunk_hash } => {\n            let chunk_hash = *chunk_hash;\n            stage_manifest(\n                writes,\n                blob_hash,\n                &BinaryCasManifest::SingleChunk {\n                    size_bytes: write.bytes.len() as 
u64,\n                    chunk_hash: chunk_hash.into_bytes(),\n                },\n            );\n            if chunk_keys.insert(chunk_key(chunk_hash)) {\n                let encoded_chunk = encode_binary_chunk_payload(write.bytes);\n                stage_chunk(\n                    writes,\n                    chunk_hash,\n                    &KvChunk {\n                        codec: encoded_chunk.codec,\n                        uncompressed_len: write.bytes.len() as u64,\n                        data: encoded_chunk.data,\n                    },\n                );\n            }\n        }\n        BlobLayout::Chunked { chunk_count } => {\n            stage_manifest(\n                writes,\n                blob_hash,\n                &BinaryCasManifest::Chunked {\n                    size_bytes: write.bytes.len() as u64,\n                    chunk_count: *chunk_count,\n                },\n            );\n\n            for (chunk_index, (start, end)) in chunk_ranges.into_iter().enumerate() {\n                let chunk_data = &write.bytes[start..end];\n                let chunk_hash = BlobHash::from_content(chunk_data);\n                let chunk_key = chunk_key(chunk_hash);\n                if chunk_keys.insert(chunk_key.clone()) {\n                    let encoded_chunk = encode_binary_chunk_payload(chunk_data);\n                    stage_chunk(\n                        writes,\n                        chunk_hash,\n                        &KvChunk {\n                            codec: encoded_chunk.codec,\n                            uncompressed_len: chunk_data.len() as u64,\n                            data: encoded_chunk.data,\n                        },\n                    );\n                }\n\n                stage_manifest_chunk(\n                    writes,\n                    blob_hash,\n                    chunk_index as u64,\n                    &KvBlobManifestChunk {\n                        chunk_hash: *chunk_hash.as_bytes(),\n              
          chunk_size: chunk_data.len() as u64,\n                    },\n                );\n            }\n        }\n    }\n    Ok(receipt)\n}\n\nfn metadata_from_manifest(\n    hash: BlobHash,\n    manifest: BinaryCasManifest,\n) -> Result<BlobMetadata, LixError> {\n    let size_bytes = manifest.size_bytes();\n    let layout = match manifest {\n        BinaryCasManifest::Empty { size_bytes } => {\n            if size_bytes != 0 {\n                return Err(LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\n                        \"binary CAS empty blob '{}' has nonzero size {size_bytes}\",\n                        hash.to_hex()\n                    ),\n                ));\n            }\n            BlobLayout::Empty\n        }\n        BinaryCasManifest::SingleChunk { chunk_hash, .. } => BlobLayout::SingleChunk {\n            chunk_hash: BlobHash::from_bytes(chunk_hash),\n        },\n        BinaryCasManifest::Chunked { chunk_count, .. } => BlobLayout::Chunked { chunk_count },\n    };\n    Ok(BlobMetadata {\n        hash,\n        size_bytes,\n        layout,\n    })\n}\n\nfn manifest_key(blob_hash: BlobHash) -> Vec<u8> {\n    blob_hash.as_bytes().to_vec()\n}\n\nfn manifest_chunk_prefix(blob_hash: BlobHash) -> Vec<u8> {\n    blob_hash.as_bytes().to_vec()\n}\n\nfn manifest_chunk_key(blob_hash: BlobHash, chunk_index: u64) -> Vec<u8> {\n    let mut out = Vec::with_capacity(40);\n    out.extend_from_slice(blob_hash.as_bytes());\n    out.extend_from_slice(&chunk_index.to_be_bytes());\n    out\n}\n\nfn chunk_key(chunk_hash: BlobHash) -> Vec<u8> {\n    chunk_hash.as_bytes().to_vec()\n}\n\nfn persisted_size_to_usize(size: u64, label: &str) -> Result<usize, LixError> {\n    usize::try_from(size).map_err(|_| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"{label} size {size} does not fit in this runtime\"),\n        )\n    })\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use 
crate::backend::testing::UnitTestBackend;\n    use crate::binary_cas::BinaryCasContext;\n    use crate::storage::{StorageContext, StorageWriteSet};\n\n    fn stage_blob_to_writes(writes: &mut StorageWriteSet, data: &[u8]) {\n        let mut writer = BinaryCasContext::new().writer(writes);\n        writer.stage_bytes(data).expect(\"blob write should persist\");\n    }\n\n    #[tokio::test]\n    async fn stores_manifest_chunks_in_scan_order() {\n        let storage = StorageContext::new(std::sync::Arc::new(UnitTestBackend::new()));\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let blob_hash = BlobHash::from_content(b\"blob-a\");\n        let chunk_a_hash = BlobHash::from_content(b\"chunk-a\").into_bytes();\n        let chunk_b_hash = BlobHash::from_content(b\"chunk-b\").into_bytes();\n\n        {\n            let mut writes = StorageWriteSet::new();\n            stage_manifest(\n                &mut writes,\n                blob_hash,\n                &BinaryCasManifest::Chunked {\n                    size_bytes: 12,\n                    chunk_count: 2,\n                },\n            );\n            stage_manifest_chunk(\n                &mut writes,\n                blob_hash,\n                1,\n                &KvBlobManifestChunk {\n                    chunk_hash: chunk_b_hash,\n                    chunk_size: 6,\n                },\n            );\n            stage_manifest_chunk(\n                &mut writes,\n                blob_hash,\n                0,\n                &KvBlobManifestChunk {\n                    chunk_hash: chunk_a_hash,\n                    chunk_size: 6,\n                },\n            );\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"manifest writes should apply\");\n        }\n        transaction.commit().await.expect(\"commit should 
succeed\");\n\n        let mut store = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        assert_eq!(\n            load_manifest(&mut store, blob_hash)\n                .await\n                .expect(\"manifest should load\"),\n            Some(BinaryCasManifest::Chunked {\n                size_bytes: 12,\n                chunk_count: 2,\n            })\n        );\n        let mut store = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        assert_eq!(\n            scan_manifest_chunks(&mut store, blob_hash)\n                .await\n                .expect(\"manifest chunks should scan\"),\n            vec![\n                KvBlobManifestChunk {\n                    chunk_hash: chunk_a_hash,\n                    chunk_size: 6,\n                },\n                KvBlobManifestChunk {\n                    chunk_hash: chunk_b_hash,\n                    chunk_size: 6,\n                },\n            ]\n        );\n    }\n\n    #[tokio::test]\n    async fn stores_encoded_chunks_by_chunk_hash() {\n        let storage = StorageContext::new(std::sync::Arc::new(UnitTestBackend::new()));\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let chunk = KvChunk {\n            codec: BinaryChunkCodec::Raw,\n            uncompressed_len: 5,\n            data: b\"hello\".to_vec(),\n        };\n        let chunk_hash = BlobHash::from_content(b\"chunk-a\");\n\n        {\n            let mut writes = StorageWriteSet::new();\n            stage_chunk(&mut writes, chunk_hash, &chunk);\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"chunk should apply\");\n        }\n        transaction.commit().await.expect(\"commit should succeed\");\n\n        
let mut store = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        assert_eq!(\n            load_chunk(&mut store, chunk_hash)\n                .await\n                .expect(\"chunk should load\"),\n            Some(chunk)\n        );\n    }\n\n    #[test]\n    fn binary_hash_keys_are_compact_and_manifest_chunks_sort_by_index() {\n        let blob_hash = BlobHash::from_content(b\"blob\");\n        let manifest_key = manifest_key(blob_hash);\n        let chunk_key = chunk_key(BlobHash::from_content(b\"chunk\"));\n        let first = manifest_chunk_key(blob_hash, 1);\n        let second = manifest_chunk_key(blob_hash, 2);\n        let later = manifest_chunk_key(blob_hash, 10);\n\n        assert_eq!(manifest_key.len(), 32);\n        assert_eq!(chunk_key.len(), 32);\n        assert_eq!(first.len(), 40);\n        assert!(first < second);\n        assert!(second < later);\n    }\n\n    #[tokio::test]\n    async fn public_kv_api_roundtrips_blob_bytes() {\n        let storage = StorageContext::new(std::sync::Arc::new(UnitTestBackend::new()));\n        let data = b\"hello chunked kv cas\";\n        let blob_hash = BlobHash::from_content(data);\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        {\n            let mut writes = StorageWriteSet::new();\n            stage_blob_to_writes(&mut writes, data);\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"blob write should apply\");\n        }\n        transaction.commit().await.expect(\"commit should succeed\");\n\n        let mut store = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        assert_eq!(\n            load_bytes_many(&mut store, &[blob_hash])\n                .await\n          
      .expect(\"blob should load\")\n                .into_vec(),\n            vec![Some(data.to_vec())]\n        );\n        let mut store = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        assert_eq!(\n            load_manifest(&mut store, blob_hash)\n                .await\n                .expect(\"manifest should load\"),\n            Some(BinaryCasManifest::SingleChunk {\n                size_bytes: data.len() as u64,\n                chunk_hash: BlobHash::from_content(data).into_bytes(),\n            })\n        );\n        let mut store = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        assert_eq!(\n            scan_manifest_chunks(&mut store, blob_hash)\n                .await\n                .expect(\"single-chunk blob should not spill manifest chunks\"),\n            Vec::<KvBlobManifestChunk>::new()\n        );\n        let mut store = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        assert_eq!(\n            exists_many(&mut store, &[blob_hash])\n                .await\n                .expect(\"blob exists should succeed\")\n                .into_vec(),\n            vec![true]\n        );\n    }\n\n    #[tokio::test]\n    async fn read_rejects_chunk_bytes_that_do_not_match_manifest_hash() {\n        let storage = StorageContext::new(std::sync::Arc::new(UnitTestBackend::new()));\n        let data = b\"same length\";\n        let corrupted = b\"SAME length\";\n        let blob_hash = BlobHash::from_content(data);\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let mut writes = StorageWriteSet::new();\n            stage_blob_to_writes(&mut writes, data);\n            writes\n          
      .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"blob write should apply\");\n        }\n        transaction.commit().await.expect(\"commit should succeed\");\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let mut writes = StorageWriteSet::new();\n            writes.put(\n                BINARY_CAS_CHUNK_NAMESPACE,\n                chunk_key(blob_hash),\n                encode_binary_cas_chunk(BinaryChunkCodec::Raw, corrupted.len() as u64, corrupted),\n            );\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"corrupt chunk should overwrite\");\n        }\n        transaction.commit().await.expect(\"commit should succeed\");\n\n        let mut store = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        let error = load_bytes_many(&mut store, &[blob_hash])\n            .await\n            .expect_err(\"corrupt chunk should be rejected\");\n        assert!(error\n            .message\n            .contains(\"failed content-address verification\"));\n    }\n\n    #[tokio::test]\n    async fn read_rejects_manifest_that_assembles_wrong_blob_hash() {\n        let storage = StorageContext::new(std::sync::Arc::new(UnitTestBackend::new()));\n        let expected = b\"expected bytes\";\n        let substituted = b\"different byte\";\n        assert_eq!(expected.len(), substituted.len());\n        let expected_blob_hash = BlobHash::from_content(expected);\n        let substituted_chunk_hash = BlobHash::from_content(substituted);\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let mut writes = StorageWriteSet::new();\n           
 stage_manifest(\n                &mut writes,\n                expected_blob_hash,\n                &BinaryCasManifest::Chunked {\n                    size_bytes: expected.len() as u64,\n                    chunk_count: 1,\n                },\n            );\n            stage_manifest_chunk(\n                &mut writes,\n                expected_blob_hash,\n                0,\n                &KvBlobManifestChunk {\n                    chunk_hash: BlobHash::from_content(substituted).into_bytes(),\n                    chunk_size: substituted.len() as u64,\n                },\n            );\n            stage_chunk(\n                &mut writes,\n                substituted_chunk_hash,\n                &KvChunk {\n                    codec: BinaryChunkCodec::Raw,\n                    uncompressed_len: substituted.len() as u64,\n                    data: substituted.to_vec(),\n                },\n            );\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"wrong manifest fixture should apply\");\n        }\n        transaction.commit().await.expect(\"commit should succeed\");\n\n        let mut store = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        let error = load_bytes_many(&mut store, &[expected_blob_hash])\n            .await\n            .expect_err(\"wrong assembled blob should be rejected\");\n        assert!(error\n            .message\n            .contains(\"failed content-address verification\"));\n    }\n\n    #[tokio::test]\n    async fn public_kv_api_roundtrips_empty_blob() {\n        let storage = StorageContext::new(std::sync::Arc::new(UnitTestBackend::new()));\n        let data = b\"\";\n        let blob_hash = BlobHash::from_content(data);\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should 
open\");\n\n        {\n            let mut writes = StorageWriteSet::new();\n            stage_blob_to_writes(&mut writes, data);\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"blob write should apply\");\n        }\n        transaction.commit().await.expect(\"commit should succeed\");\n\n        let mut store = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        assert_eq!(\n            load_bytes_many(&mut store, &[blob_hash])\n                .await\n                .expect(\"empty blob should load\")\n                .into_vec(),\n            vec![Some(Vec::new())]\n        );\n        let mut store = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        assert_eq!(\n            scan_manifest_chunks(&mut store, blob_hash)\n                .await\n                .expect(\"empty blob chunks should scan\"),\n            Vec::<KvBlobManifestChunk>::new()\n        );\n    }\n\n    #[tokio::test]\n    async fn public_kv_api_roundtrips_multi_chunk_blob() {\n        let storage = StorageContext::new(std::sync::Arc::new(UnitTestBackend::new()));\n        let data = (0..600_000)\n            .map(|index| (index % 251) as u8)\n            .collect::<Vec<_>>();\n        let blob_hash = BlobHash::from_content(&data);\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        {\n            let mut writes = StorageWriteSet::new();\n            stage_blob_to_writes(&mut writes, &data);\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"blob write should apply\");\n        }\n        transaction.commit().await.expect(\"commit should succeed\");\n\n        let mut store = 
storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        assert_eq!(\n            load_bytes_many(&mut store, &[blob_hash])\n                .await\n                .expect(\"large blob should load\")\n                .into_vec(),\n            vec![Some(data.clone())]\n        );\n        let mut store = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        assert!(\n            scan_manifest_chunks(&mut store, blob_hash)\n                .await\n                .expect(\"large blob chunks should scan\")\n                .len()\n                > 1\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/binary_cas/mod.rs",
    "content": "mod chunking;\nmod codec;\nmod context;\npub(crate) mod kv;\nmod types;\n\npub(crate) use context::{BinaryCasContext, BlobDataReader};\npub(crate) use types::{\n    BlobBytesBatch, BlobExistsBatch, BlobHash, BlobLayout, BlobMetadata, BlobMetadataBatch,\n    BlobWrite, BlobWriteReceipt,\n};\n"
  },
  {
    "path": "packages/engine/src/binary_cas/types.rs",
    "content": "use crate::binary_cas::codec::{binary_blob_hash_bytes, hash_bytes_to_hex, hash_hex_to_bytes};\nuse crate::LixError;\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, PartialOrd, Ord)]\npub(crate) struct BlobHash([u8; 32]);\n\nimpl BlobHash {\n    pub(crate) fn from_bytes(bytes: [u8; 32]) -> Self {\n        Self(bytes)\n    }\n\n    pub(crate) fn from_content(content: &[u8]) -> Self {\n        Self(binary_blob_hash_bytes(content))\n    }\n\n    pub(crate) fn from_hex(hash_hex: &str) -> Result<Self, LixError> {\n        Ok(Self(hash_hex_to_bytes(hash_hex, \"binary CAS blob\")?))\n    }\n\n    pub(crate) fn to_hex(self) -> String {\n        hash_bytes_to_hex(&self.0)\n    }\n\n    pub(crate) fn as_bytes(&self) -> &[u8; 32] {\n        &self.0\n    }\n\n    pub(crate) fn into_bytes(self) -> [u8; 32] {\n        self.0\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) enum BlobLayout {\n    Empty,\n    SingleChunk { chunk_hash: BlobHash },\n    Chunked { chunk_count: u32 },\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct BlobMetadata {\n    pub(crate) hash: BlobHash,\n    pub(crate) size_bytes: u64,\n    pub(crate) layout: BlobLayout,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct BlobExistsBatch {\n    entries: Vec<bool>,\n}\n\nimpl BlobExistsBatch {\n    pub(crate) fn new(entries: Vec<bool>) -> Self {\n        Self { entries }\n    }\n\n    #[allow(dead_code)]\n    pub(crate) fn get(&self, index: usize) -> bool {\n        self.entries.get(index).copied().unwrap_or(false)\n    }\n\n    #[allow(dead_code)]\n    pub(crate) fn into_vec(self) -> Vec<bool> {\n        self.entries\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct BlobMetadataBatch {\n    entries: Vec<Option<BlobMetadata>>,\n}\n\nimpl BlobMetadataBatch {\n    pub(crate) fn new(entries: Vec<Option<BlobMetadata>>) -> Self {\n        Self { entries }\n    }\n\n    #[allow(dead_code)]\n    pub(crate) fn get(&self, index: usize) 
-> Option<&BlobMetadata> {\n        self.entries.get(index).and_then(Option::as_ref)\n    }\n\n    pub(crate) fn into_vec(self) -> Vec<Option<BlobMetadata>> {\n        self.entries\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct BlobBytesBatch {\n    entries: Vec<Option<Vec<u8>>>,\n}\n\nimpl BlobBytesBatch {\n    pub(crate) fn new(entries: Vec<Option<Vec<u8>>>) -> Self {\n        Self { entries }\n    }\n\n    #[allow(dead_code)]\n    pub(crate) fn get(&self, index: usize) -> Option<&[u8]> {\n        self.entries\n            .get(index)\n            .and_then(Option::as_ref)\n            .map(Vec::as_slice)\n    }\n\n    pub(crate) fn into_vec(self) -> Vec<Option<Vec<u8>>> {\n        self.entries\n    }\n}\n\n#[derive(Debug, Clone, Copy)]\npub(crate) struct BlobWrite<'a> {\n    pub(crate) bytes: &'a [u8],\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct BlobWriteReceipt {\n    pub(crate) hash: BlobHash,\n    pub(crate) size_bytes: u64,\n    pub(crate) layout: BlobLayout,\n}\n"
  },
  {
    "path": "packages/engine/src/catalog/context.rs",
    "content": "use std::collections::BTreeMap;\n\nuse serde_json::Value as JsonValue;\n\nuse crate::catalog::SchemaCatalogFact;\nuse crate::domain::{committed_row_is_exact_version_scoped, Domain};\nuse crate::live_state::MaterializedLiveStateRow;\nuse crate::live_state::{LiveStateFilter, LiveStateReader, LiveStateScanRequest};\nuse crate::schema::schema_key_from_definition;\nuse crate::{LixError, NullableKeyFilter};\n\nconst REGISTERED_SCHEMA_KEY: &str = \"lix_registered_schema\";\n\n/// Engine schema visibility boundary.\n///\n/// SQL planning receives a schema snapshot from live state. System schemas are\n/// seeded as ordinary `lix_registered_schema` rows during initialization, so\n/// runtime schema visibility has one source of truth.\npub(crate) struct CatalogContext;\n\nimpl CatalogContext {\n    pub(crate) fn new() -> Self {\n        Self\n    }\n\n    /// Loads schema definitions for SQL surface planning at `version_id`.\n    ///\n    /// SQL surfaces are a read-planning projection over the active untracked\n    /// schema catalog. 
Validation must use `schema_facts_for_domain` instead so\n    /// schema durability remains explicit.\n    pub(crate) async fn schema_jsons_for_sql_read_planning<R>(\n        &self,\n        live_state: &R,\n        version_id: &str,\n    ) -> Result<Vec<JsonValue>, LixError>\n    where\n        R: LiveStateReader + ?Sized,\n    {\n        let facts = self\n            .schema_facts_for_domain(live_state, &Domain::schema_catalog(version_id, true))\n            .await?;\n        let mut schemas = BTreeMap::<String, JsonValue>::new();\n        for fact in facts {\n            let schema_key = fact.catalog_key().schema_key.clone();\n            if schemas\n                .insert(schema_key.clone(), fact.schema().clone())\n                .is_some()\n            {\n                return Err(LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\n                        \"SQL surface schema '{}' is visible from more than one schema catalog fact\",\n                        schema_key\n                    ),\n                )\n                .with_hint(\"SQL entity surfaces are named by schema_key. 
Keep exactly one visible schema per schema_key for SQL planning.\"));\n            }\n        }\n        Ok(schemas.into_values().collect())\n    }\n\n    /// Loads schema facts reachable from a row domain.\n    pub(crate) async fn schema_facts_for_domain<R>(\n        &self,\n        live_state: &R,\n        domain: &Domain,\n    ) -> Result<Vec<SchemaCatalogFact>, LixError>\n    where\n        R: LiveStateReader + ?Sized,\n    {\n        let mut facts = Vec::new();\n        for schema_domain in domain.schema_catalog_domains() {\n            let rows = live_state\n                .scan_rows(&LiveStateScanRequest {\n                    filter: LiveStateFilter {\n                        schema_keys: vec![REGISTERED_SCHEMA_KEY.to_string()],\n                        version_ids: vec![schema_domain.version_id().to_string()],\n                        file_ids: vec![NullableKeyFilter::Null],\n                        untracked: Some(schema_domain.untracked()),\n                        include_tombstones: false,\n                        ..LiveStateFilter::default()\n                    },\n                    ..LiveStateScanRequest::default()\n                })\n                .await?;\n            for row in rows\n                .into_iter()\n                .filter(|row| row_belongs_to_schema_catalog_domain(row, &schema_domain))\n            {\n                let Some((key, schema)) = decode_registered_schema_row(&row)? 
else {\n                    continue;\n                };\n                facts.push(SchemaCatalogFact::new(schema_domain.clone(), key, schema));\n            }\n        }\n        Ok(facts)\n    }\n}\n\nfn row_belongs_to_schema_catalog_domain(row: &MaterializedLiveStateRow, domain: &Domain) -> bool {\n    row.schema_key == REGISTERED_SCHEMA_KEY\n        && row.file_id.is_none()\n        && row.snapshot_content.is_some()\n        && row.version_id == domain.version_id()\n        && row.untracked == domain.untracked()\n        && committed_row_is_exact_version_scoped(row, domain.version_id())\n}\n\nfn decode_registered_schema_row(\n    row: &MaterializedLiveStateRow,\n) -> Result<Option<(crate::schema::SchemaKey, JsonValue)>, LixError> {\n    if row.schema_key != REGISTERED_SCHEMA_KEY {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\n                \"expected lix_registered_schema row, got schema_key={}\",\n                row.schema_key\n            ),\n        ));\n    }\n\n    let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n        return Ok(None);\n    };\n\n    let snapshot: JsonValue = serde_json::from_str(snapshot_content).map_err(|err| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"invalid registered schema snapshot JSON: {err}\"),\n        )\n    })?;\n    let schema = snapshot.get(\"value\").cloned().ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"registered schema snapshot missing value\",\n        )\n    })?;\n    let key = schema_key_from_definition(&schema)?;\n    Ok(Some((key, schema)))\n}\n\n#[cfg(test)]\nmod tests {\n    use async_trait::async_trait;\n    use serde_json::json;\n\n    use super::*;\n    use crate::live_state::LiveStateRowRequest;\n    use crate::GLOBAL_VERSION_ID;\n\n    #[tokio::test]\n    async fn visible_schemas_are_loaded_from_registered_schema_rows() {\n        let context = 
CatalogContext::new();\n\n        let schemas = context\n            .schema_jsons_for_sql_read_planning(\n                &RowsLiveStateReader::new(vec![\n                    registered_schema_row(\"lix_registered_schema\"),\n                    registered_schema_row(\"lix_key_value\"),\n                ]),\n                \"global\",\n            )\n            .await\n            .expect(\"schema visibility should load\");\n\n        assert!(schemas.iter().any(|schema| {\n            schema.get(\"x-lix-key\").and_then(JsonValue::as_str) == Some(\"lix_registered_schema\")\n        }));\n        assert!(schemas.iter().any(|schema| {\n            schema.get(\"x-lix-key\").and_then(JsonValue::as_str) == Some(\"lix_key_value\")\n        }));\n    }\n\n    #[tokio::test]\n    async fn visible_schemas_include_registered_schema_rows() {\n        let context = CatalogContext::new();\n\n        let schemas = context\n            .schema_jsons_for_sql_read_planning(\n                &RowsLiveStateReader::new(vec![registered_schema_row(\"engine_dynamic_schema\")]),\n                \"global\",\n            )\n            .await\n            .expect(\"schema visibility should load\");\n\n        assert!(schemas.iter().any(|schema| {\n            schema.get(\"x-lix-key\").and_then(JsonValue::as_str) == Some(\"engine_dynamic_schema\")\n        }));\n    }\n\n    #[tokio::test]\n    async fn sql_read_planning_rejects_multiple_visible_schemas_for_same_surface() {\n        let context = CatalogContext::new();\n        let error = context\n            .schema_jsons_for_sql_read_planning(\n                &RowsLiveStateReader::new(vec![\n                    registered_schema_row(\"engine_dynamic_schema\"),\n                    registered_schema_row(\"engine_dynamic_schema\"),\n                ]),\n                \"global\",\n            )\n            .await\n            .expect_err(\"SQL surfaces must not choose a schema identity implicitly\");\n\n        assert_eq!(error.code, 
LixError::CODE_SCHEMA_DEFINITION);\n        assert!(error.message.contains(\"SQL surface schema\"));\n    }\n\n    #[tokio::test]\n    async fn tracked_domain_sees_tracked_seed_schemas_but_not_user_untracked_schemas() {\n        let context = CatalogContext::new();\n        let mut seed_schema = registered_schema_row(\"lix_key_value\");\n        seed_schema.untracked = false;\n\n        let facts = context\n            .schema_facts_for_domain(\n                &RowsLiveStateReader::new(vec![\n                    seed_schema,\n                    registered_schema_row(\"engine_dynamic_schema\"),\n                ]),\n                &Domain::schema_catalog(\"global\", false),\n            )\n            .await\n            .expect(\"schema visibility should load\");\n        let schemas = facts\n            .iter()\n            .map(SchemaCatalogFact::schema)\n            .collect::<Vec<_>>();\n\n        assert!(schemas.iter().any(|schema| {\n            schema.get(\"x-lix-key\").and_then(JsonValue::as_str) == Some(\"lix_key_value\")\n        }));\n        assert!(!schemas.iter().any(|schema| {\n            schema.get(\"x-lix-key\").and_then(JsonValue::as_str) == Some(\"engine_dynamic_schema\")\n        }));\n    }\n\n    #[tokio::test]\n    async fn tracked_domain_does_not_see_untracked_seed_schemas() {\n        let context = CatalogContext::new();\n\n        let facts = context\n            .schema_facts_for_domain(\n                &RowsLiveStateReader::new(vec![registered_schema_row(\"lix_key_value\")]),\n                &Domain::schema_catalog(\"global\", false),\n            )\n            .await\n            .expect(\"schema visibility should load\");\n        let schemas = facts\n            .iter()\n            .map(SchemaCatalogFact::schema)\n            .collect::<Vec<_>>();\n\n        assert!(!schemas.iter().any(|schema| {\n            schema.get(\"x-lix-key\").and_then(JsonValue::as_str) == Some(\"lix_key_value\")\n        }));\n    }\n\n    
#[tokio::test]\n    async fn visible_schemas_ignore_projected_global_schema_rows_for_version_scope() {\n        let context = CatalogContext::new();\n        let mut global_only = registered_schema_row(\"global_only_schema\");\n        global_only.global = true;\n        global_only.version_id = \"main\".to_string();\n\n        let schemas = context\n            .schema_jsons_for_sql_read_planning(\n                &RowsLiveStateReader::new(vec![global_only]),\n                \"main\",\n            )\n            .await\n            .expect(\"schema visibility should load\");\n\n        assert!(schemas.is_empty());\n    }\n\n    #[tokio::test]\n    async fn schema_facts_post_filter_non_catalog_rows_even_if_reader_returns_them() {\n        let context = CatalogContext::new();\n        let valid_schema = registered_schema_row(\"valid_schema\");\n        let mut file_scoped_schema = registered_schema_row(\"file_scoped_schema\");\n        file_scoped_schema.file_id = Some(\"file-a\".to_string());\n        let mut tombstoned_schema = registered_schema_row(\"tombstoned_schema\");\n        tombstoned_schema.snapshot_content = None;\n\n        let facts = context\n            .schema_facts_for_domain(\n                &RowsLiveStateReader::new(vec![\n                    valid_schema,\n                    file_scoped_schema,\n                    tombstoned_schema,\n                ]),\n                &Domain::schema_catalog(\"global\", true),\n            )\n            .await\n            .expect(\"schema facts should load\");\n        let schema_keys = facts\n            .iter()\n            .filter_map(|fact| fact.schema().get(\"x-lix-key\").and_then(JsonValue::as_str))\n            .collect::<Vec<_>>();\n\n        assert_eq!(schema_keys, vec![\"valid_schema\"]);\n    }\n\n    #[tokio::test]\n    async fn visible_schemas_are_empty_when_no_schema_rows_are_visible() {\n        let context = CatalogContext::new();\n\n        let schemas = context\n            
.schema_jsons_for_sql_read_planning(&RowsLiveStateReader::new(Vec::new()), \"global\")\n            .await\n            .expect(\"schema visibility should load\");\n\n        assert!(schemas.is_empty());\n    }\n\n    struct RowsLiveStateReader {\n        rows: Vec<MaterializedLiveStateRow>,\n    }\n\n    impl RowsLiveStateReader {\n        fn new(rows: Vec<MaterializedLiveStateRow>) -> Self {\n            Self { rows }\n        }\n    }\n\n    #[async_trait]\n    impl LiveStateReader for RowsLiveStateReader {\n        async fn scan_rows(\n            &self,\n            request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(self\n                .rows\n                .iter()\n                .filter(|row| {\n                    request.filter.schema_keys.is_empty()\n                        || request.filter.schema_keys.contains(&row.schema_key)\n                })\n                .filter(|row| {\n                    request.filter.version_ids.is_empty()\n                        || request.filter.version_ids.contains(&row.version_id)\n                })\n                .filter(|row| {\n                    request\n                        .filter\n                        .untracked\n                        .is_none_or(|untracked| row.untracked == untracked)\n                })\n                .cloned()\n                .collect())\n        }\n\n        async fn load_row(\n            &self,\n            request: &LiveStateRowRequest,\n        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n            Ok(self\n                .rows\n                .iter()\n                .find(|row| {\n                    row.schema_key == request.schema_key\n                        && row.version_id == request.version_id\n                        && row.entity_id == request.entity_id\n                })\n                .cloned())\n        }\n    }\n\n    fn registered_schema_row(schema_key: &str) -> 
MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: registered_schema_entity_id(schema_key),\n            file_id: None,\n            schema_key: REGISTERED_SCHEMA_KEY.to_string(),\n            version_id: GLOBAL_VERSION_ID.to_string(),\n            metadata: None,\n            deleted: false,\n            change_id: Some(\"change-registered-schema\".to_string()),\n            commit_id: None,\n            global: true,\n            untracked: true,\n            created_at: \"2026-04-23T00:00:00Z\".to_string(),\n            updated_at: \"2026-04-23T01:00:00Z\".to_string(),\n            snapshot_content: Some(\n                json!({\n                    \"value\": {\n                        \"x-lix-key\": schema_key,\n                        \"type\": \"object\",\n                        \"properties\": {\n                            \"id\": { \"type\": \"string\" }\n                        },\n                        \"required\": [\"id\"],\n                        \"additionalProperties\": false\n                    }\n                })\n                .to_string(),\n            ),\n        }\n    }\n\n    fn registered_schema_entity_id(schema_key: &str) -> crate::entity_identity::EntityIdentity {\n        crate::entity_identity::EntityIdentity::from_primary_key_paths(\n            &json!({\n                \"value\": {\n                    \"x-lix-key\": schema_key,\n                }\n            }),\n            &[vec![\"value\".to_string(), \"x-lix-key\".to_string()]],\n        )\n        .expect(\"registered schema identity should derive\")\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/catalog/mod.rs",
    "content": "mod context;\nmod schema;\nmod snapshot;\n\npub(crate) use context::CatalogContext;\npub(crate) use schema::{\n    ForeignKeyPlan, SchemaCatalogFact, SchemaCatalogKey, SchemaPlan, SchemaPlanId,\n    StateForeignKeyPlan,\n};\npub(crate) use snapshot::{CatalogSnapshot, StateDeleteReferencePlan};\n"
  },
  {
    "path": "packages/engine/src/catalog/schema.rs",
    "content": "pub(crate) use super::snapshot::{\n    ForeignKeyPlan, SchemaCatalogFact, SchemaCatalogKey, SchemaPlan, SchemaPlanId,\n    StateForeignKeyPlan,\n};\n"
  },
  {
    "path": "packages/engine/src/catalog/snapshot.rs",
    "content": "use std::{collections::BTreeMap, sync::Arc};\n\nuse jsonschema::JSONSchema;\nuse serde_json::{Map as JsonMap, Value as JsonValue};\n\nuse crate::common::{format_json_pointer, parse_json_pointer};\nuse crate::domain::{Domain, DomainSchemaIdentity};\nuse crate::entity_identity::canonical_json_text;\nuse crate::functions::FunctionProviderHandle;\nuse crate::schema::{compile_lix_schema, validate_schema_amendment, SchemaKey};\nuse crate::LixError;\n\n#[derive(Default)]\npub(crate) struct CatalogSnapshot {\n    entries: Vec<CatalogEntry>,\n    plans: Vec<SchemaPlan>,\n    by_key: BTreeMap<SchemaCatalogKey, SchemaPlanId>,\n    by_identity: BTreeMap<DomainSchemaIdentity, SchemaPlanId>,\n    delete_references_by_target: BTreeMap<SchemaCatalogKey, Vec<DeleteReferencePlan>>,\n    state_delete_references: Vec<StateDeleteReferencePlan>,\n    fingerprint: CatalogFingerprint,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct CatalogEntry {\n    identity: DomainSchemaIdentity,\n    key: SchemaCatalogKey,\n    schema: JsonValue,\n}\n\n#[derive(Debug, Clone, Default, PartialEq, Eq, PartialOrd, Ord, Hash)]\npub(crate) struct CatalogFingerprint(String);\n\nimpl std::fmt::Debug for CatalogSnapshot {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"CatalogSnapshot\")\n            .field(\"plan_count\", &self.plans.len())\n            .field(\"keys\", &self.by_key.keys().collect::<Vec<_>>())\n            .finish()\n    }\n}\n\nimpl CatalogSnapshot {\n    #[cfg(test)]\n    pub(crate) fn from_visible_schemas(visible_schemas: &[JsonValue]) -> Result<Self, LixError> {\n        let mut catalog = Self::default();\n        for schema in visible_schemas {\n            let key = crate::schema::schema_key_from_definition(schema)?;\n            let catalog_key = SchemaCatalogKey::from_schema_key(key);\n            let identity = DomainSchemaIdentity::new(\n                Domain::schema_catalog(crate::GLOBAL_VERSION_ID, true),\n  
              catalog_key.schema_key.clone(),\n            );\n            catalog.remember_schema_identity(identity, catalog_key, schema.clone())?;\n        }\n        catalog.rebuild_plans()?;\n        Ok(catalog)\n    }\n\n    pub(crate) fn from_schema_facts(facts: &[SchemaCatalogFact]) -> Result<Self, LixError> {\n        let entries = facts\n            .iter()\n            .map(|fact| CatalogEntry {\n                identity: fact.identity.clone(),\n                key: fact.catalog_key.clone(),\n                schema: fact.schema.clone(),\n            })\n            .collect::<Vec<_>>();\n        Self::from_entries(entries)\n    }\n\n    #[cfg(test)]\n    pub(crate) fn fingerprint(&self) -> &CatalogFingerprint {\n        &self.fingerprint\n    }\n\n    pub(crate) fn schema(&self, schema_key: &str) -> Option<&JsonValue> {\n        self.plan_for_key(schema_key)\n            .map(|(_, plan)| plan.schema.as_ref())\n    }\n\n    pub(crate) fn insert_schema_for_domain(\n        &mut self,\n        domain: Domain,\n        key: SchemaKey,\n        schema: JsonValue,\n    ) -> Result<SchemaPlanId, LixError> {\n        let key = SchemaCatalogKey::from_schema_key(key);\n        let identity = DomainSchemaIdentity::new(domain, key.schema_key.clone());\n        // Apply the insert to a rebuilt candidate, then rebuild once more so\n        // plan ids and derived indexes reflect the new entry set.\n        let mut candidate = Self::from_entries(self.entries.clone())?;\n        let plan_id = candidate.remember_schema_identity(identity.clone(), key, schema)?;\n        *self = Self::from_entries(candidate.entries)?;\n        Ok(self.by_identity.get(&identity).copied().unwrap_or(plan_id))\n    }\n\n    fn from_entries(entries: Vec<CatalogEntry>) -> Result<Self, LixError> {\n        let mut catalog = Self::default();\n        for entry in entries {\n            catalog.remember_schema_identity(entry.identity, entry.key, entry.schema)?;\n        }\n        catalog.rebuild_plans()?;\n        Ok(catalog)\n  
  }\n\n    fn remember_schema_identity(\n        &mut self,\n        identity: DomainSchemaIdentity,\n        key: SchemaCatalogKey,\n        schema: JsonValue,\n    ) -> Result<SchemaPlanId, LixError> {\n        if let Some(existing) = self.by_identity.get(&identity).copied() {\n            let existing_entry = &self.entries[existing.index()];\n            if existing_entry.key == key && existing_entry.schema == schema {\n                return Ok(existing);\n            }\n            if existing_entry.key == key {\n                validate_schema_amendment(&existing_entry.schema, &schema)?;\n                self.entries[existing.index()].schema = schema;\n                return Ok(existing);\n            }\n            return Err(LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\"schema '{}' is already registered with a different definition in the same schema domain\", key.schema_key),\n            ));\n        }\n        if let Some(existing) = self.by_key.get(&key).copied() {\n            let existing_entry = &self.entries[existing.index()];\n            if existing_entry.identity == identity {\n                return Ok(existing);\n            }\n            return Err(LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\"schema '{}' is visible from more than one schema domain\", existing_entry.key.schema_key),\n            )\n            .with_hint(\"Schema references store schema_key, but not the schema domain. 
Remove the duplicate tracked/untracked schema registration or use a distinct schema key.\"));\n        }\n\n        let plan_id = SchemaPlanId(self.entries.len() as u32);\n        self.by_key.insert(key.clone(), plan_id);\n        self.by_identity.insert(identity.clone(), plan_id);\n        self.entries.push(CatalogEntry {\n            identity,\n            key,\n            schema,\n        });\n        Ok(plan_id)\n    }\n\n    fn rebuild_plans(&mut self) -> Result<(), LixError> {\n        let schema_index = self\n            .entries\n            .iter()\n            .map(|entry| (entry.key.clone(), &entry.schema))\n            .collect::<BTreeMap<_, _>>();\n        let plans = self\n            .entries\n            .iter()\n            .map(|entry| {\n                SchemaPlan::compile(\n                    entry.key.clone(),\n                    entry.schema.clone(),\n                    &self.by_key,\n                    &schema_index,\n                )\n            })\n            .collect::<Result<Vec<_>, _>>()?;\n        self.plans = plans;\n        self.rebuild_delete_plans();\n        self.fingerprint = self.compute_fingerprint()?;\n        Ok(())\n    }\n\n    fn rebuild_delete_plans(&mut self) {\n        let mut delete_references_by_target =\n            BTreeMap::<SchemaCatalogKey, Vec<DeleteReferencePlan>>::new();\n        let mut state_delete_references = Vec::<StateDeleteReferencePlan>::new();\n        for source_plan in &self.plans {\n            for foreign_key in &source_plan.foreign_keys {\n                delete_references_by_target\n                    .entry(foreign_key.referenced_schema.clone())\n                    .or_default()\n                    .push(DeleteReferencePlan {\n                        source_key: source_plan.key.clone(),\n                        foreign_key: foreign_key.clone(),\n                    });\n            }\n            for foreign_key in &source_plan.state_foreign_keys {\n                
state_delete_references.push(StateDeleteReferencePlan {\n                    source_key: source_plan.key.clone(),\n                    foreign_key: foreign_key.clone(),\n                });\n            }\n        }\n        self.delete_references_by_target = delete_references_by_target;\n        self.state_delete_references = state_delete_references;\n    }\n\n    fn compute_fingerprint(&self) -> Result<CatalogFingerprint, LixError> {\n        let mut hasher = blake3::Hasher::new();\n        let mut entries = self.entries.iter().collect::<Vec<_>>();\n        entries.sort_by(|left, right| left.identity.cmp(&right.identity));\n        for entry in entries {\n            hash_fingerprint_part(&mut hasher, &entry.identity.fingerprint_component());\n            hash_fingerprint_part(&mut hasher, &entry.key.schema_key);\n            let canonical_schema = canonical_json_text(&entry.schema).map_err(|error| {\n                LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\"failed to canonicalize schema for catalog fingerprint: {error}\"),\n                )\n            })?;\n            hash_fingerprint_part(&mut hasher, &canonical_schema);\n        }\n        Ok(CatalogFingerprint(hasher.finalize().to_hex().to_string()))\n    }\n\n    #[cfg(test)]\n    pub(crate) fn contains(&self, schema_key: &str) -> bool {\n        self.plan_for_key(schema_key).is_some()\n    }\n\n    #[cfg(test)]\n    pub(crate) fn len(&self) -> usize {\n        self.plans.len()\n    }\n\n    pub(crate) fn plans(&self) -> impl Iterator<Item = &SchemaPlan> {\n        self.plans.iter()\n    }\n\n    pub(crate) fn plan(&self, plan_id: SchemaPlanId) -> Option<&SchemaPlan> {\n        self.plans.get(plan_id.index())\n    }\n\n    pub(crate) fn plan_for_key(&self, schema_key: &str) -> Option<(SchemaPlanId, &SchemaPlan)> {\n        let key = SchemaCatalogKey {\n            schema_key: schema_key.to_string(),\n        };\n        let plan_id = 
*self.by_key.get(&key)?;\n        let plan = self.plan(plan_id)?;\n        Some((plan_id, plan))\n    }\n\n    pub(crate) fn delete_plan_for_key(&self, schema_key: &str) -> DeleteValidationPlan<'_> {\n        let key = SchemaCatalogKey {\n            schema_key: schema_key.to_string(),\n        };\n        DeleteValidationPlan {\n            foreign_key_references: self\n                .delete_references_by_target\n                .get(&key)\n                .map(Vec::as_slice)\n                .unwrap_or(&[]),\n            state_foreign_key_references: self.state_delete_references.as_slice(),\n        }\n    }\n}\n\nfn hash_fingerprint_part(hasher: &mut blake3::Hasher, value: &str) {\n    hasher.update(&(value.len() as u64).to_le_bytes());\n    hasher.update(value.as_bytes());\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]\npub(crate) struct SchemaPlanId(u32);\n\nimpl SchemaPlanId {\n    fn index(self) -> usize {\n        self.0 as usize\n    }\n\n    #[cfg(test)]\n    pub(crate) fn for_test(index: u32) -> Self {\n        Self(index)\n    }\n}\n\npub(crate) type PointerGroup = Vec<Vec<String>>;\n\npub(crate) struct SchemaPlan {\n    pub(crate) key: SchemaCatalogKey,\n    pub(crate) schema: Arc<JsonValue>,\n    pub(crate) compiled_schema: JSONSchema,\n    pub(crate) defaults: DefaultPlan,\n    pub(crate) primary_key: Option<PointerGroup>,\n    pub(crate) uniques: Vec<PointerGroup>,\n    pub(crate) foreign_keys: Vec<ForeignKeyPlan>,\n    pub(crate) state_foreign_keys: Vec<StateForeignKeyPlan>,\n}\n\nimpl SchemaPlan {\n    fn compile(\n        key: SchemaCatalogKey,\n        schema: JsonValue,\n        key_index: &BTreeMap<SchemaCatalogKey, SchemaPlanId>,\n        schema_index: &BTreeMap<SchemaCatalogKey, &JsonValue>,\n    ) -> Result<Self, LixError> {\n        let compiled_schema = compile_lix_schema(&schema)?;\n        let defaults = DefaultPlan::from_schema(&schema);\n        let primary_key = primary_key_paths(&schema)?;\n        let 
uniques = pointer_groups(&schema, \"x-lix-unique\")?;\n        let foreign_keys = bind_foreign_key_plans(\n            &key,\n            &schema,\n            foreign_key_plans(&schema)?,\n            key_index,\n            schema_index,\n        )?;\n        let state_foreign_keys = state_foreign_key_plans(&schema)?;\n        Ok(Self {\n            key,\n            schema: Arc::new(schema),\n            compiled_schema,\n            defaults,\n            primary_key,\n            uniques,\n            foreign_keys,\n            state_foreign_keys,\n        })\n    }\n}\n\n#[derive(Debug, Clone, Default, PartialEq, Eq)]\npub(crate) struct DefaultPlan {\n    properties: Vec<DefaultPropertyPlan>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct DefaultPropertyPlan {\n    field_name: String,\n    default: DefaultValuePlan,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nenum DefaultValuePlan {\n    Json(JsonValue),\n    Cel(String),\n}\n\nimpl DefaultPlan {\n    fn from_schema(schema: &JsonValue) -> Self {\n        let Some(properties) = schema.get(\"properties\").and_then(JsonValue::as_object) else {\n            return Self::default();\n        };\n        let mut ordered_properties = properties.iter().collect::<Vec<_>>();\n        ordered_properties.sort_by(|(left_name, _), (right_name, _)| left_name.cmp(right_name));\n\n        let properties = ordered_properties\n            .into_iter()\n            .filter_map(|(field_name, field_schema)| {\n                if let Some(expression) = field_schema\n                    .get(\"x-lix-default\")\n                    .and_then(JsonValue::as_str)\n                {\n                    return Some(DefaultPropertyPlan {\n                        field_name: field_name.clone(),\n                        default: DefaultValuePlan::Cel(expression.to_string()),\n                    });\n                }\n                field_schema\n                    .get(\"default\")\n                    .map(|value| 
DefaultPropertyPlan {\n                        field_name: field_name.clone(),\n                        default: DefaultValuePlan::Json(value.clone()),\n                    })\n            })\n            .collect();\n        Self { properties }\n    }\n\n    pub(crate) fn apply(\n        &self,\n        snapshot: &mut JsonMap<String, JsonValue>,\n        functions: FunctionProviderHandle,\n        schema_key: &str,\n    ) -> Result<bool, LixError> {\n        let mut changed = false;\n        let mut cel_context = None::<JsonMap<String, JsonValue>>;\n        for property in &self.properties {\n            if snapshot.contains_key(&property.field_name) {\n                continue;\n            }\n            let value = match &property.default {\n                DefaultValuePlan::Json(value) => value.clone(),\n                DefaultValuePlan::Cel(expression) => {\n                    let context = cel_context.get_or_insert_with(|| snapshot.clone());\n                    crate::cel::shared_runtime()\n                        .evaluate_with_functions(expression, context, functions.clone())\n                        .map_err(|err| LixError {\n                            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                            message: format!(\n                                \"failed to evaluate x-lix-default for '{}.{}': {}\",\n                                schema_key, property.field_name, err.message\n                            ),\n                            hint: None,\n                            details: None,\n                        })?\n                }\n            };\n            snapshot.insert(property.field_name.clone(), value);\n            changed = true;\n        }\n        Ok(changed)\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct ForeignKeyPlan {\n    pub(crate) local_properties: PointerGroup,\n    pub(crate) referenced_schema: SchemaCatalogKey,\n    pub(crate) referenced_plan_id: SchemaPlanId,\n    pub(crate) 
referenced_properties: PointerGroup,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct DeleteReferencePlan {\n    pub(crate) source_key: SchemaCatalogKey,\n    pub(crate) foreign_key: ForeignKeyPlan,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\npub(crate) struct StateDeleteReferencePlan {\n    pub(crate) source_key: SchemaCatalogKey,\n    pub(crate) foreign_key: StateForeignKeyPlan,\n}\n\n#[derive(Debug, Clone, Copy)]\npub(crate) struct DeleteValidationPlan<'a> {\n    pub(crate) foreign_key_references: &'a [DeleteReferencePlan],\n    pub(crate) state_foreign_key_references: &'a [StateDeleteReferencePlan],\n}\n\nimpl DeleteValidationPlan<'_> {\n    pub(crate) fn has_committed_checks(self) -> bool {\n        !self.foreign_key_references.is_empty() || !self.state_foreign_key_references.is_empty()\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct UnboundForeignKeyPlan {\n    local_properties: PointerGroup,\n    referenced_schema: SchemaCatalogKey,\n    referenced_properties: PointerGroup,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\npub(crate) struct StateForeignKeyPlan {\n    /// Slot [0] in `x-lix-state-foreign-keys`: local pointer to the target entity_id.\n    pub(crate) entity_id_property: Vec<String>,\n    /// Slot [1] in `x-lix-state-foreign-keys`: local pointer to the target schema_key.\n    pub(crate) schema_key_property: Vec<String>,\n    /// Slot [2] in `x-lix-state-foreign-keys`: local pointer to the target file_id.\n    pub(crate) file_id_property: Vec<String>,\n}\n\nimpl StateForeignKeyPlan {\n    pub(crate) fn local_properties(&self) -> PointerGroup {\n        vec![\n            self.entity_id_property.clone(),\n            self.schema_key_property.clone(),\n            self.file_id_property.clone(),\n        ]\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]\npub(crate) struct SchemaCatalogKey {\n    pub(crate) schema_key: String,\n}\n\nimpl SchemaCatalogKey {\n    
pub(crate) fn from_schema_key(key: SchemaKey) -> Self {\n        Self {\n            schema_key: key.schema_key,\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct SchemaCatalogFact {\n    identity: DomainSchemaIdentity,\n    catalog_key: SchemaCatalogKey,\n    schema: JsonValue,\n}\n\nimpl SchemaCatalogFact {\n    pub(crate) fn new(domain: Domain, key: SchemaKey, schema: JsonValue) -> Self {\n        let catalog_key = SchemaCatalogKey::from_schema_key(key);\n        let identity = DomainSchemaIdentity::new(domain, catalog_key.schema_key.clone());\n        Self {\n            identity,\n            catalog_key,\n            schema,\n        }\n    }\n\n    pub(crate) fn schema(&self) -> &JsonValue {\n        &self.schema\n    }\n\n    pub(crate) fn catalog_key(&self) -> &SchemaCatalogKey {\n        &self.catalog_key\n    }\n}\n\nfn primary_key_paths(schema: &JsonValue) -> Result<Option<Vec<Vec<String>>>, LixError> {\n    let Some(primary_key) = schema.get(\"x-lix-primary-key\") else {\n        return Ok(None);\n    };\n    let primary_key = primary_key.as_array().ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            \"schema x-lix-primary-key must be an array of JSON Pointers\",\n        )\n    })?;\n    primary_key\n        .iter()\n        .enumerate()\n        .map(|(index, pointer)| {\n            let pointer = pointer.as_str().ok_or_else(|| {\n                LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\"schema x-lix-primary-key entry at index {index} must be a string\"),\n                )\n            })?;\n            parse_json_pointer(pointer)\n        })\n        .collect::<Result<Vec<_>, _>>()\n        .map(Some)\n}\n\nfn pointer_groups(schema: &JsonValue, field: &str) -> Result<Vec<PointerGroup>, LixError> {\n    let Some(value) = schema.get(field) else {\n        return Ok(Vec::new());\n    };\n    let groups = value\n    
    .as_array()\n        .map(|groups| groups.iter().collect::<Vec<_>>())\n        .unwrap_or_default();\n    groups\n        .into_iter()\n        .map(|group| {\n            let group = group.as_array().ok_or_else(|| {\n                LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\"schema {field} must contain arrays of JSON Pointers\"),\n                )\n            })?;\n            group\n                .iter()\n                .enumerate()\n                .map(|(index, pointer)| {\n                    let pointer = pointer.as_str().ok_or_else(|| {\n                        LixError::new(\n                            LixError::CODE_SCHEMA_DEFINITION,\n                            format!(\"schema {field} entry at index {index} must be a string\"),\n                        )\n                    })?;\n                    parse_json_pointer(pointer)\n                })\n                .collect::<Result<Vec<_>, _>>()\n        })\n        .collect()\n}\n\nfn foreign_key_plans(schema: &JsonValue) -> Result<Vec<UnboundForeignKeyPlan>, LixError> {\n    let Some(value) = schema.get(\"x-lix-foreign-keys\") else {\n        return Ok(Vec::new());\n    };\n    let Some(foreign_keys) = value.as_array() else {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            \"schema x-lix-foreign-keys must be an array\",\n        ));\n    };\n\n    foreign_keys\n        .iter()\n        .enumerate()\n        .map(|(index, foreign_key)| {\n            let object = foreign_key.as_object().ok_or_else(|| {\n                LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\"x-lix-foreign-keys[{index}] must be an object\"),\n                )\n            })?;\n            let references = object\n                .get(\"references\")\n                .and_then(JsonValue::as_object)\n                .ok_or_else(|| {\n                    
LixError::new(\n                        LixError::CODE_SCHEMA_DEFINITION,\n                        format!(\"x-lix-foreign-keys[{index}].references must be an object\"),\n                    )\n                })?;\n            let referenced_schema_key = references\n                .get(\"schemaKey\")\n                .and_then(JsonValue::as_str)\n                .ok_or_else(|| {\n                    LixError::new(\n                        LixError::CODE_SCHEMA_DEFINITION,\n                        format!(\n                            \"x-lix-foreign-keys[{index}].references.schemaKey must be a string\"\n                        ),\n                    )\n                })?\n                .to_string();\n            let local_properties = pointer_array(\n                object.get(\"properties\"),\n                &format!(\"x-lix-foreign-keys[{index}].properties\"),\n            )?;\n            let referenced_properties = pointer_array(\n                references.get(\"properties\"),\n                &format!(\"x-lix-foreign-keys[{index}].references.properties\"),\n            )?;\n            if local_properties.len() != referenced_properties.len() {\n                return Err(LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\n                        \"x-lix-foreign-keys[{index}] properties and references.properties must have the same length\"\n                    ),\n                ));\n            }\n            Ok(UnboundForeignKeyPlan {\n                local_properties,\n                referenced_schema: SchemaCatalogKey {\n                    schema_key: referenced_schema_key,\n                },\n                referenced_properties,\n            })\n        })\n        .collect()\n}\n\nfn bind_foreign_key_plans(\n    source_key: &SchemaCatalogKey,\n    source_schema: &JsonValue,\n    unbound_foreign_keys: Vec<UnboundForeignKeyPlan>,\n    key_index: &BTreeMap<SchemaCatalogKey, SchemaPlanId>,\n    
schema_index: &BTreeMap<SchemaCatalogKey, &JsonValue>,\n) -> Result<Vec<ForeignKeyPlan>, LixError> {\n    unbound_foreign_keys\n        .into_iter()\n        .map(|foreign_key| {\n            if foreign_key.referenced_schema.schema_key == \"lix_state\" {\n                return Err(LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\n                        \"foreign key on schema '{}' must not reference schemaKey 'lix_state'; use x-lix-state-foreign-keys with pointers ordered as [entity_id, schema_key, file_id]\",\n                        source_key.schema_key\n                    ),\n                ));\n            }\n\n            let referenced_plan_id =\n                *key_index.get(&foreign_key.referenced_schema).ok_or_else(|| {\n                    LixError::new(\n                        LixError::CODE_SCHEMA_DEFINITION,\n                        format!(\n                            \"foreign key on schema '{}' references missing schema '{}'\",\n                            source_key.schema_key,\n                            foreign_key.referenced_schema.schema_key,\n                        ),\n                    )\n                })?;\n            let target_schema = schema_index\n                .get(&foreign_key.referenced_schema)\n                .copied()\n                .ok_or_else(|| {\n                    LixError::new(\n                        LixError::CODE_SCHEMA_DEFINITION,\n                        format!(\n                            \"foreign key on schema '{}' references missing schema '{}'\",\n                            source_key.schema_key,\n                            foreign_key.referenced_schema.schema_key,\n                        ),\n                    )\n                })?;\n\n            for (local_pointer, referenced_pointer) in foreign_key\n                .local_properties\n                .iter()\n                
.zip(foreign_key.referenced_properties.iter())\n            {\n                let local_field =\n                    schema_field_at_pointer(source_schema, local_pointer).map_err(|detail| {\n                        LixError::new(\n                            LixError::CODE_SCHEMA_DEFINITION,\n                            format!(\n                                \"foreign key on schema '{}' references missing local property '{}': {detail}\",\n                                source_key.schema_key,\n                                format_json_pointer(local_pointer)\n                            ),\n                        )\n                    })?;\n                let referenced_field =\n                    schema_field_at_pointer(target_schema, referenced_pointer).map_err(\n                        |detail| {\n                            LixError::new(\n                                LixError::CODE_SCHEMA_DEFINITION,\n                                format!(\n                                    \"foreign key on schema '{}' references missing target property '{}.{}': {detail}\",\n                                    source_key.schema_key,\n                                    foreign_key.referenced_schema.schema_key,\n                                    format_json_pointer(referenced_pointer)\n                                ),\n                            )\n                        },\n                    )?;\n                validate_foreign_key_field_types(\n                    source_key,\n                    &foreign_key.referenced_schema,\n                    local_pointer,\n                    local_field,\n                    referenced_pointer,\n                    referenced_field,\n                )?;\n            }\n\n            if !schema_properties_are_keyed(target_schema, &foreign_key.referenced_properties)? 
{\n                return Err(LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\n                        \"foreign key on schema '{}' references '{}.{}', but referenced properties must match the target primary key or a unique constraint\",\n                        source_key.schema_key,\n                        foreign_key.referenced_schema.schema_key,\n                        format_pointer_group(&foreign_key.referenced_properties)\n                    ),\n                ));\n            }\n\n            Ok(ForeignKeyPlan {\n                local_properties: foreign_key.local_properties,\n                referenced_schema: foreign_key.referenced_schema,\n                referenced_plan_id,\n                referenced_properties: foreign_key.referenced_properties,\n            })\n        })\n        .collect()\n}\n\nfn schema_field_at_pointer<'a>(\n    schema: &'a JsonValue,\n    pointer: &[String],\n) -> Result<&'a JsonValue, String> {\n    if pointer.is_empty() {\n        return Err(\"empty pointer does not name a field\".to_string());\n    }\n    let mut current = schema;\n    for segment in pointer {\n        let properties = current\n            .get(\"properties\")\n            .and_then(JsonValue::as_object)\n            .ok_or_else(|| {\n                format!(\n                    \"schema segment before '{}' has no object properties\",\n                    segment\n                )\n            })?;\n        current = properties\n            .get(segment)\n            .ok_or_else(|| format!(\"property '{}' does not exist\", segment))?;\n    }\n    Ok(current)\n}\n\nfn validate_foreign_key_field_types(\n    source_key: &SchemaCatalogKey,\n    referenced_key: &SchemaCatalogKey,\n    local_pointer: &[String],\n    local_field: &JsonValue,\n    referenced_pointer: &[String],\n    referenced_field: &JsonValue,\n) -> Result<(), LixError> {\n    let local_type = 
compatible_json_schema_type(local_field).ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\n                \"foreign key on schema '{}' local property '{}' must declare an explicit JSON Schema type\",\n                source_key.schema_key,\n                format_json_pointer(local_pointer)\n            ),\n        )\n    })?;\n    let referenced_type = compatible_json_schema_type(referenced_field).ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\n                \"foreign key on schema '{}' target property '{}.{}' must declare an explicit JSON Schema type\",\n                source_key.schema_key,\n                referenced_key.schema_key,\n                format_json_pointer(referenced_pointer)\n            ),\n        )\n    })?;\n    if local_type != referenced_type {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\n                \"foreign key on schema '{}' has incompatible field types: local '{}' is {}, but target '{}.{}' is {}\",\n                source_key.schema_key,\n                format_json_pointer(local_pointer),\n                local_type,\n                referenced_key.schema_key,\n                format_json_pointer(referenced_pointer),\n                referenced_type\n            ),\n        ));\n    }\n    Ok(())\n}\n\nfn compatible_json_schema_type(field_schema: &JsonValue) -> Option<JsonValue> {\n    match field_schema.get(\"type\")? 
{\n        JsonValue::Array(types) => {\n            let non_null_types = types\n                .iter()\n                .filter(|value| value.as_str() != Some(\"null\"))\n                .cloned()\n                .collect::<Vec<_>>();\n            match non_null_types.as_slice() {\n                [] => None,\n                [single] => Some(single.clone()),\n                _ => Some(JsonValue::Array(non_null_types)),\n            }\n        }\n        value => Some(value.clone()),\n    }\n}\n\nfn schema_properties_are_keyed(\n    target_schema: &JsonValue,\n    referenced_properties: &[Vec<String>],\n) -> Result<bool, LixError> {\n    if let Some(primary_key) = primary_key_paths(target_schema)? {\n        if primary_key == referenced_properties {\n            return Ok(true);\n        }\n    }\n    Ok(pointer_groups(target_schema, \"x-lix-unique\")?\n        .iter()\n        .any(|unique_group| unique_group == referenced_properties))\n}\n\nfn state_foreign_key_plans(schema: &JsonValue) -> Result<Vec<StateForeignKeyPlan>, LixError> {\n    let Some(value) = schema.get(\"x-lix-state-foreign-keys\") else {\n        return Ok(Vec::new());\n    };\n    let Some(foreign_keys) = value.as_array() else {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            \"schema x-lix-state-foreign-keys must be an array\",\n        ));\n    };\n\n    foreign_keys\n        .iter()\n        .enumerate()\n        .map(|(index, foreign_key)| {\n            let local_properties = pointer_array(\n                Some(foreign_key),\n                &format!(\"x-lix-state-foreign-keys[{index}]\"),\n            )?;\n            if local_properties.len() != 3 {\n                return Err(LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\n                        \"x-lix-state-foreign-keys[{index}] must contain exactly three JSON Pointers ordered as [entity_id, schema_key, file_id]\"\n                   
 ),\n                ));\n            }\n            Ok(StateForeignKeyPlan {\n                entity_id_property: local_properties[0].clone(),\n                schema_key_property: local_properties[1].clone(),\n                file_id_property: local_properties[2].clone(),\n            })\n        })\n        .collect()\n}\n\nfn pointer_array(value: Option<&JsonValue>, context: &str) -> Result<PointerGroup, LixError> {\n    let Some(value) = value else {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\"{context} must be an array of JSON Pointers\"),\n        ));\n    };\n    let Some(array) = value.as_array() else {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\"{context} must be an array of JSON Pointers\"),\n        ));\n    };\n    array\n        .iter()\n        .enumerate()\n        .map(|(index, pointer)| {\n            let pointer = pointer.as_str().ok_or_else(|| {\n                LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\"{context}[{index}] must be a string\"),\n                )\n            })?;\n            parse_json_pointer(pointer)\n        })\n        .collect()\n}\n\nfn format_pointer_group(paths: &[Vec<String>]) -> String {\n    paths\n        .iter()\n        .map(|path| format_json_pointer(path))\n        .collect::<Vec<_>>()\n        .join(\",\")\n}\n\n#[cfg(test)]\nmod tests {\n    use serde_json::json;\n\n    use super::*;\n\n    #[test]\n    fn catalog_rejects_same_schema_key_from_multiple_domains() {\n        let tracked = SchemaCatalogFact::new(\n            Domain::schema_catalog(\"main\", false),\n            SchemaKey::new(\"example_schema\"),\n            schema_json(\"example_schema\"),\n        );\n        let untracked = SchemaCatalogFact::new(\n            Domain::schema_catalog(\"main\", true),\n            SchemaKey::new(\"example_schema\"),\n            
schema_json(\"example_schema\"),\n        );\n\n        let error = CatalogSnapshot::from_schema_facts(&[tracked, untracked])\n            .expect_err(\"same schema key in two reachable domains is ambiguous\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);\n        assert!(error.message.contains(\"more than one schema domain\"));\n    }\n\n    #[test]\n    fn insert_schema_for_domain_is_atomic_when_binding_fails() {\n        let mut catalog = CatalogSnapshot::from_schema_facts(&[SchemaCatalogFact::new(\n            Domain::schema_catalog(\"main\", false),\n            SchemaKey::new(\"base_schema\"),\n            schema_json(\"base_schema\"),\n        )])\n        .expect(\"base catalog should bind\");\n\n        let error = catalog\n            .insert_schema_for_domain(\n                Domain::schema_catalog(\"main\", false),\n                SchemaKey::new(\"bad_child_schema\"),\n                child_schema_json(\"bad_child_schema\", \"missing_parent_schema\"),\n            )\n            .expect_err(\"schema with missing FK target should fail\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);\n        assert!(catalog.contains(\"base_schema\"));\n        assert!(\n            !catalog.contains(\"bad_child_schema\"),\n            \"failed catalog insert must not publish a partially bound schema\"\n        );\n    }\n\n    #[test]\n    fn catalog_fingerprint_is_independent_of_fact_order() {\n        let parent = SchemaCatalogFact::new(\n            Domain::schema_catalog(\"main\", false),\n            SchemaKey::new(\"parent_schema\"),\n            schema_json(\"parent_schema\"),\n        );\n        let child = SchemaCatalogFact::new(\n            Domain::schema_catalog(\"main\", false),\n            SchemaKey::new(\"child_schema\"),\n            child_schema_json(\"child_schema\", \"parent_schema\"),\n        );\n\n        let parent_first = CatalogSnapshot::from_schema_facts(&[parent.clone(), child.clone()])\n      
      .expect(\"parent-first facts should bind\");\n        let child_first = CatalogSnapshot::from_schema_facts(&[child, parent])\n            .expect(\"child-first facts should bind as the same domain snapshot\");\n\n        assert_eq!(parent_first.fingerprint(), child_first.fingerprint());\n    }\n\n    #[test]\n    fn delete_plan_has_no_committed_checks_for_unreferenced_schema() {\n        let catalog = CatalogSnapshot::from_schema_facts(&[SchemaCatalogFact::new(\n            Domain::schema_catalog(\"main\", false),\n            SchemaKey::new(\"standalone_schema\"),\n            schema_json(\"standalone_schema\"),\n        )])\n        .expect(\"catalog should bind\");\n\n        let delete_plan = catalog.delete_plan_for_key(\"standalone_schema\");\n\n        assert!(!delete_plan.has_committed_checks());\n        assert!(delete_plan.foreign_key_references.is_empty());\n        assert!(delete_plan.state_foreign_key_references.is_empty());\n    }\n\n    #[test]\n    fn delete_plan_indexes_foreign_keys_by_referenced_schema() {\n        let parent = SchemaCatalogFact::new(\n            Domain::schema_catalog(\"main\", false),\n            SchemaKey::new(\"parent_schema\"),\n            schema_json(\"parent_schema\"),\n        );\n        let child = SchemaCatalogFact::new(\n            Domain::schema_catalog(\"main\", false),\n            SchemaKey::new(\"child_schema\"),\n            child_schema_json(\"child_schema\", \"parent_schema\"),\n        );\n        let catalog =\n            CatalogSnapshot::from_schema_facts(&[parent, child]).expect(\"catalog should bind\");\n\n        let parent_delete_plan = catalog.delete_plan_for_key(\"parent_schema\");\n        let child_delete_plan = catalog.delete_plan_for_key(\"child_schema\");\n\n        assert!(parent_delete_plan.has_committed_checks());\n        assert_eq!(parent_delete_plan.foreign_key_references.len(), 1);\n        assert_eq!(\n            parent_delete_plan.foreign_key_references[0]\n                
.source_key\n                .schema_key,\n            \"child_schema\"\n        );\n        assert!(!child_delete_plan.has_committed_checks());\n    }\n\n    #[test]\n    fn delete_plan_conservatively_applies_state_foreign_keys_to_every_schema() {\n        let target = SchemaCatalogFact::new(\n            Domain::schema_catalog(\"main\", false),\n            SchemaKey::new(\"target_schema\"),\n            schema_json(\"target_schema\"),\n        );\n        let source = SchemaCatalogFact::new(\n            Domain::schema_catalog(\"main\", false),\n            SchemaKey::new(\"state_fk_schema\"),\n            state_fk_schema_json(\"state_fk_schema\"),\n        );\n        let catalog =\n            CatalogSnapshot::from_schema_facts(&[target, source]).expect(\"catalog should bind\");\n\n        let target_delete_plan = catalog.delete_plan_for_key(\"target_schema\");\n\n        assert!(target_delete_plan.has_committed_checks());\n        assert_eq!(target_delete_plan.state_foreign_key_references.len(), 1);\n        assert_eq!(\n            target_delete_plan.state_foreign_key_references[0]\n                .source_key\n                .schema_key,\n            \"state_fk_schema\"\n        );\n    }\n\n    fn schema_json(schema_key: &str) -> JsonValue {\n        json!({\n            \"x-lix-key\": schema_key,\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn child_schema_json(schema_key: &str, parent_schema_key: &str) -> JsonValue {\n        json!({\n            \"x-lix-key\": schema_key,\n            \"x-lix-primary-key\": [\"/id\"],\n            \"x-lix-foreign-keys\": [{\n                \"properties\": [\"/parent_id\"],\n                \"references\": {\n                    \"schemaKey\": parent_schema_key,\n                   
 \"properties\": [\"/id\"]\n                }\n            }],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"parent_id\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\", \"parent_id\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn state_fk_schema_json(schema_key: &str) -> JsonValue {\n        json!({\n            \"x-lix-key\": schema_key,\n            \"x-lix-primary-key\": [\"/id\"],\n            \"x-lix-state-foreign-keys\": [[\"/target_id\", \"/target_schema\", \"/target_file\"]],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"target_id\": { \"type\": \"string\" },\n                \"target_schema\": { \"type\": \"string\" },\n                \"target_file\": { \"type\": [\"string\", \"null\"] }\n            },\n            \"required\": [\"id\", \"target_id\", \"target_schema\", \"target_file\"],\n            \"additionalProperties\": false\n        })\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/cel/context.rs",
    "content": "use cel::Context;\nuse serde_json::{Map as JsonMap, Value as JsonValue};\n\nuse crate::LixError;\n\nuse super::provider::CelFunctionProvider;\nuse super::value::json_to_cel;\n\npub(crate) fn build_context_with_functions<P>(\n    variables: &JsonMap<String, JsonValue>,\n    functions: P,\n) -> Result<Context<'static>, LixError>\nwhere\n    P: CelFunctionProvider,\n{\n    let mut context = Context::default();\n\n    let uuid_functions = functions.clone();\n    context.add_function(\"lix_uuid_v7\", move || uuid_functions.call_uuid_v7());\n    let timestamp_functions = functions.clone();\n    context.add_function(\"lix_timestamp\", move || {\n        timestamp_functions.call_timestamp()\n    });\n\n    for (name, value) in variables {\n        let cel_value = json_to_cel(value)?;\n        context.add_variable_from_value(name.clone(), cel_value);\n    }\n\n    Ok(context)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::build_context_with_functions;\n    use crate::cel::CelFunctionProvider;\n    use cel::Program;\n    use serde_json::Map as JsonMap;\n\n    #[test]\n    fn registers_lix_uuid_v7_function() {\n        let context = build_context_with_functions(&JsonMap::new(), fixed_functions())\n            .expect(\"build context\");\n        let program = Program::compile(\"lix_uuid_v7()\").expect(\"compile CEL\");\n        let value = program.execute(&context).expect(\"execute CEL\");\n        let as_json = value.json().expect(\"to json\");\n        assert!(as_json.as_str().is_some());\n    }\n\n    #[test]\n    fn errors_on_unknown_variables() {\n        let context = build_context_with_functions(&JsonMap::new(), fixed_functions())\n            .expect(\"build context\");\n        let program = Program::compile(\"missing_var == null\").expect(\"compile CEL\");\n        let err = program\n            .execute(&context)\n            .expect_err(\"execute CEL should fail\");\n        assert!(err.to_string().contains(\"Undeclared reference\"));\n    }\n\n   
 #[derive(Clone)]\n    struct FixedFunctions;\n\n    impl CelFunctionProvider for FixedFunctions {\n        fn call_uuid_v7(&self) -> String {\n            \"uuid-fixed\".to_string()\n        }\n\n        fn call_timestamp(&self) -> String {\n            \"1970-01-01T00:00:00.000Z\".to_string()\n        }\n    }\n\n    fn fixed_functions() -> FixedFunctions {\n        FixedFunctions\n    }\n\n    #[test]\n    fn uses_supplied_function_provider() {\n        let context = build_context_with_functions(&JsonMap::new(), fixed_functions())\n            .expect(\"build context\");\n        let program = Program::compile(\"lix_uuid_v7()\").expect(\"compile CEL\");\n        let value = program.execute(&context).expect(\"execute CEL\");\n        assert_eq!(value.json().expect(\"to json\").as_str(), Some(\"uuid-fixed\"));\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/cel/error.rs",
    "content": "use crate::LixError;\n\npub(crate) fn cel_parse_error(expression: &str, error: impl std::fmt::Display) -> LixError {\n    LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\"failed to parse CEL expression '{expression}': {error}\"),\n        hint: None,\n        details: None,\n    }\n}\n\npub(crate) fn cel_runtime_error(expression: &str, error: impl std::fmt::Display) -> LixError {\n    LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\"failed to evaluate CEL expression '{expression}': {error}\"),\n        hint: None,\n        details: None,\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/cel/mod.rs",
    "content": "mod context;\nmod error;\nmod provider;\nmod runtime;\nmod value;\n\npub(crate) use provider::CelFunctionProvider;\npub(crate) use runtime::shared_runtime;\n"
  },
  {
    "path": "packages/engine/src/cel/provider.rs",
    "content": "/// Function source available to CEL expressions.\n///\n/// CEL is shared infrastructure for schema expressions. It should not depend\n/// on engine1 or engine runtime traits directly; callers adapt their own\n/// execution-scoped function provider to this small boundary.\npub(crate) trait CelFunctionProvider: Clone + Send + Sync + 'static {\n    fn call_uuid_v7(&self) -> String;\n    fn call_timestamp(&self) -> String;\n}\n"
  },
  {
    "path": "packages/engine/src/cel/runtime.rs",
    "content": "use std::collections::HashMap;\nuse std::sync::{Arc, OnceLock, RwLock};\n\nuse cel::Program;\nuse serde_json::{Map as JsonMap, Value as JsonValue};\n\nuse crate::LixError;\n\nuse super::context::build_context_with_functions;\nuse super::error::{cel_parse_error, cel_runtime_error};\nuse super::provider::CelFunctionProvider;\nuse super::value::cel_to_json;\n\n#[derive(Debug)]\nstruct CompiledProgram {\n    program: Program,\n}\n\n#[derive(Default)]\npub struct CelEvaluator {\n    programs: RwLock<HashMap<String, Arc<CompiledProgram>>>,\n}\n\nimpl CelEvaluator {\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    pub fn evaluate_with_functions<P>(\n        &self,\n        expression: &str,\n        variables: &JsonMap<String, JsonValue>,\n        functions: P,\n    ) -> Result<JsonValue, LixError>\n    where\n        P: CelFunctionProvider,\n    {\n        let compiled = self.compile(expression)?;\n        let context = build_context_with_functions(variables, functions)?;\n        let value = compiled\n            .program\n            .execute(&context)\n            .map_err(|error| cel_runtime_error(expression, error))?;\n        cel_to_json(&value)\n    }\n\n    fn compile(&self, expression: &str) -> Result<Arc<CompiledProgram>, LixError> {\n        if let Some(existing) = self.programs.read().unwrap().get(expression).cloned() {\n            return Ok(existing);\n        }\n\n        let program =\n            Program::compile(expression).map_err(|error| cel_parse_error(expression, error))?;\n        let compiled = Arc::new(CompiledProgram { program });\n\n        self.programs\n            .write()\n            .unwrap()\n            .insert(expression.to_string(), compiled.clone());\n\n        Ok(compiled)\n    }\n}\n\npub(crate) fn shared_runtime() -> &'static CelEvaluator {\n    static SHARED_RUNTIME: OnceLock<CelEvaluator> = OnceLock::new();\n    SHARED_RUNTIME.get_or_init(CelEvaluator::new)\n}\n\n#[cfg(test)]\nmod tests {\n    
use super::CelEvaluator;\n    use crate::cel::CelFunctionProvider;\n    use serde_json::{json, Map as JsonMap, Value as JsonValue};\n\n    #[derive(Clone)]\n    struct FixedFunctions;\n\n    impl CelFunctionProvider for FixedFunctions {\n        fn call_uuid_v7(&self) -> String {\n            \"uuid-fixed\".to_string()\n        }\n\n        fn call_timestamp(&self) -> String {\n            \"1970-01-01T00:00:00.000Z\".to_string()\n        }\n    }\n\n    fn fixed_functions() -> FixedFunctions {\n        FixedFunctions\n    }\n\n    #[test]\n    fn evaluates_basic_expressions() {\n        let evaluator = CelEvaluator::new();\n        let value = evaluator\n            .evaluate_with_functions(\"'open'\", &JsonMap::new(), fixed_functions())\n            .expect(\"evaluate CEL\");\n        assert_eq!(value, JsonValue::String(\"open\".to_string()));\n    }\n\n    #[test]\n    fn evaluates_with_variables() {\n        let evaluator = CelEvaluator::new();\n        let mut context = JsonMap::new();\n        context.insert(\"name\".to_string(), JsonValue::String(\"sample\".to_string()));\n        let value = evaluator\n            .evaluate_with_functions(\"name + '-slug'\", &context, fixed_functions())\n            .expect(\"evaluate CEL\");\n        assert_eq!(value, JsonValue::String(\"sample-slug\".to_string()));\n    }\n\n    #[test]\n    fn reports_parse_errors() {\n        let evaluator = CelEvaluator::new();\n        let err = evaluator\n            .evaluate_with_functions(\"lix_uuid_v7(\", &JsonMap::new(), fixed_functions())\n            .expect_err(\"expected parse error\");\n        assert!(err.to_string().contains(\"failed to parse CEL expression\"));\n    }\n\n    #[test]\n    fn reports_runtime_errors() {\n        let evaluator = CelEvaluator::new();\n        let err = evaluator\n            .evaluate_with_functions(\"1 / 0\", &JsonMap::new(), fixed_functions())\n            .expect_err(\"expected runtime error\");\n        assert!(err\n            
.to_string()\n            .contains(\"failed to evaluate CEL expression\"));\n    }\n\n    #[test]\n    fn supports_function_calls() {\n        let evaluator = CelEvaluator::new();\n        let value = evaluator\n            .evaluate_with_functions(\"lix_timestamp()\", &JsonMap::new(), fixed_functions())\n            .expect(\"evaluate CEL\");\n        assert_eq!(value.as_str(), Some(\"1970-01-01T00:00:00.000Z\"));\n    }\n\n    #[test]\n    fn caches_compiled_programs() {\n        let evaluator = CelEvaluator::new();\n        let mut context = JsonMap::new();\n        context.insert(\"name\".to_string(), json!(\"x\"));\n\n        let _ = evaluator\n            .evaluate_with_functions(\"name + '-slug'\", &context, fixed_functions())\n            .expect(\"first evaluation\");\n        let _ = evaluator\n            .evaluate_with_functions(\"name + '-slug'\", &context, fixed_functions())\n            .expect(\"second evaluation\");\n\n        let size = evaluator.programs.read().unwrap().len();\n        assert_eq!(size, 1);\n    }\n\n    #[test]\n    fn errors_on_unknown_variable() {\n        let evaluator = CelEvaluator::new();\n        let err = evaluator\n            .evaluate_with_functions(\"missing_var + '-slug'\", &JsonMap::new(), fixed_functions())\n            .expect_err(\"expected unknown variable error\");\n        assert!(err.to_string().contains(\"Undeclared reference\"));\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/cel/value.rs",
    "content": "use cel::Value as CelValue;\nuse serde_json::Value as JsonValue;\n\nuse crate::LixError;\n\npub fn json_to_cel(value: &JsonValue) -> Result<CelValue, LixError> {\n    cel::to_value(value).map_err(|err| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\"failed to convert JSON value to CEL value: {err}\"),\n        hint: None,\n        details: None,\n    })\n}\n\npub fn cel_to_json(value: &CelValue) -> Result<JsonValue, LixError> {\n    value.json().map_err(|err| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\"failed to convert CEL value to JSON value: {err}\"),\n        hint: None,\n        details: None,\n    })\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{cel_to_json, json_to_cel};\n    use serde_json::json;\n\n    #[test]\n    fn converts_json_scalars() {\n        let value = json!(\"hello\");\n        let cel = json_to_cel(&value).expect(\"convert to CEL\");\n        let roundtrip = cel_to_json(&cel).expect(\"convert to JSON\");\n        assert_eq!(roundtrip, value);\n    }\n\n    #[test]\n    fn converts_json_objects_and_arrays() {\n        let value = json!({\n            \"name\": \"Ada\",\n            \"flags\": [true, false],\n            \"meta\": {\n                \"count\": 1\n            }\n        });\n        let cel = json_to_cel(&value).expect(\"convert to CEL\");\n        let roundtrip = cel_to_json(&cel).expect(\"convert to JSON\");\n        assert_eq!(roundtrip, value);\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/commit_graph/context.rs",
    "content": "use std::collections::BTreeSet;\n\nuse crate::commit_graph::walker::{best_common_ancestors, walk_reachable_commits};\nuse crate::commit_graph::{\n    CommitGraphChangeHistoryEntry, CommitGraphChangeHistoryRequest, CommitGraphCommit,\n    CommitGraphEdge, CommitGraphReader, ReachableCommitGraphCommit,\n};\nuse crate::commit_store::{Change, Commit, CommitStoreContext, CommitStoreReader, LocatedChange};\nuse crate::entity_identity::EntityIdentity;\nuse crate::storage::StorageReader;\nuse crate::storage::{ScopedStorageReader, StorageReadScope};\nuse crate::LixError;\n\nconst COMMIT_SCHEMA_KEY: &str = \"lix_commit\";\n\n/// Read model for resolving commit-store commits into entity state at a head.\n///\n/// This module does not own durable storage. It reads immutable commit-store\n/// facts through a caller-provided KV store and applies commit graph rules on\n/// top.\n#[derive(Clone)]\npub(crate) struct CommitGraphContext {\n    commit_store: CommitStoreContext,\n}\n\nimpl CommitGraphContext {\n    pub(crate) fn new() -> Self {\n        Self {\n            commit_store: CommitStoreContext::new(),\n        }\n    }\n\n    /// Creates a graph reader over a caller-provided KV store.\n    pub(crate) fn reader<S>(&self, store: S) -> CommitGraphStoreReader<S>\n    where\n        S: StorageReader,\n    {\n        let read_scope = StorageReadScope::new(store);\n        CommitGraphStoreReader {\n            commit_store_reader: self.commit_store.reader(read_scope.store()),\n        }\n    }\n}\n\n/// Commit-graph reader that resolves commit-store entities at a commit head.\npub(crate) struct CommitGraphStoreReader<S>\nwhere\n    S: StorageReader,\n{\n    commit_store_reader: CommitStoreReader<ScopedStorageReader<S>>,\n}\n\nimpl<S> CommitGraphStoreReader<S>\nwhere\n    S: StorageReader,\n{\n    /// Loads and parses a `lix_commit` canonical change by commit id.\n    pub(crate) async fn load_commit(\n        &mut self,\n        commit_id: &str,\n    ) -> 
Result<Option<CommitGraphCommit>, LixError> {\n        let Some(commit) = self.commit_store_reader.load_commit(commit_id).await? else {\n            return Ok(None);\n        };\n        self.graph_commit_from_store_commit(commit).await.map(Some)\n    }\n\n    /// Loads every commit fact from the commit store.\n    ///\n    /// This is used by global commit surfaces where the caller wants the durable\n    /// graph facts themselves, not reachability from a particular version head.\n    pub(crate) async fn all_commits(&mut self) -> Result<Vec<CommitGraphCommit>, LixError> {\n        let stored_commits = self.commit_store_reader.scan_commits().await?;\n        let mut commits = Vec::new();\n        for commit in stored_commits {\n            commits.push(self.graph_commit_from_store_commit(commit).await?);\n        }\n        commits.sort_by(|left, right| left.commit_id.cmp(&right.commit_id));\n        Ok(commits)\n    }\n\n    /// Walks from `head_commit_id` through parent commits and records nearest depth.\n    pub(crate) async fn reachable_commits(\n        &mut self,\n        head_commit_id: &str,\n    ) -> Result<Vec<ReachableCommitGraphCommit>, LixError> {\n        walk_reachable_commits(self, head_commit_id).await\n    }\n\n    /// Returns the best common ancestors shared by two commit heads.\n    ///\n    /// This is the commit-DAG primitive. It can return more than one commit in\n    /// criss-cross histories. Merge code should layer an explicit merge-base\n    /// policy on top when it needs exactly one base for a three-way merge.\n    pub(crate) async fn best_common_ancestors(\n        &mut self,\n        left_commit_id: &str,\n        right_commit_id: &str,\n    ) -> Result<Vec<CommitGraphCommit>, LixError> {\n        best_common_ancestors(self, left_commit_id, right_commit_id).await\n    }\n\n    /// Resolves the single commit base to use for a three-way merge.\n    ///\n    /// This is merge policy layered over `best_common_ancestors(...)`. 
Histories\n    /// with no shared base or multiple equally good bases are rejected for now\n    /// so merge code cannot accidentally hide unsupported graph semantics.\n    pub(crate) async fn merge_base(\n        &mut self,\n        left_commit_id: &str,\n        right_commit_id: &str,\n    ) -> Result<CommitGraphCommit, LixError> {\n        let ancestors = self\n            .best_common_ancestors(left_commit_id, right_commit_id)\n            .await?;\n        match ancestors.as_slice() {\n            [] => Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\n                    \"commit_graph found no common history between '{left_commit_id}' and '{right_commit_id}'\"\n                ),\n            )),\n            [base] => Ok(base.clone()),\n            _ => Err(LixError::ambiguous_merge_base(\n                left_commit_id,\n                right_commit_id,\n                ancestors\n                    .iter()\n                    .map(|ancestor| ancestor.commit_id.clone())\n                    .collect(),\n            )),\n        }\n    }\n\n    /// Derives parent/child edges from parsed commits.\n    pub(crate) fn commit_edges(&self, commits: &[CommitGraphCommit]) -> Vec<CommitGraphEdge> {\n        commits\n            .iter()\n            .flat_map(|commit| {\n                commit.parent_commit_ids.iter().enumerate().map(\n                    |(parent_order, parent_commit_id)| CommitGraphEdge {\n                        parent_commit_id: parent_commit_id.clone(),\n                        child_commit_id: commit.commit_id.clone(),\n                        parent_order: parent_order as u32,\n                    },\n                )\n            })\n            .collect()\n    }\n\n    /// Returns canonical changes reachable from `start_commit_id`.\n    ///\n    /// This is the primitive history API. 
It reports the commit/depth where\n    /// each matching canonical change was introduced or adopted during graph\n    /// traversal and leaves row shaping to callers such as SQL providers.\n    pub(crate) async fn change_history_from_commit(\n        &mut self,\n        start_commit_id: &str,\n        request: &CommitGraphChangeHistoryRequest,\n    ) -> Result<Vec<CommitGraphChangeHistoryEntry>, LixError> {\n        let commits = self.reachable_commits(start_commit_id).await?;\n        let mut entries = Vec::new();\n        let mut seen_change_ids = BTreeSet::new();\n\n        for reachable in commits {\n            if !depth_matches(reachable.depth, request) {\n                continue;\n            }\n\n            let commit_id = reachable.commit.commit_id;\n            for change_id in reachable.commit.change_ids {\n                if !seen_change_ids.insert(change_id.clone()) {\n                    continue;\n                }\n                let change = self\n                    .load_member_canonical_change(&change_id, &commit_id)\n                    .await?;\n                if change_matches_history_request(&change.record, request) {\n                    entries.push(CommitGraphChangeHistoryEntry {\n                        located_change: change,\n                        observed_commit_id: commit_id.clone(),\n                        start_commit_id: start_commit_id.to_string(),\n                        depth: reachable.depth,\n                    });\n                }\n            }\n        }\n\n        Ok(entries)\n    }\n\n    async fn load_member_canonical_change(\n        &mut self,\n        change_id: &str,\n        source_commit_id: &str,\n    ) -> Result<LocatedChange, LixError> {\n        let change_ids = vec![change_id.to_string()];\n        self.load_canonical_changes(&change_ids)\n            .await?\n            .into_iter()\n            .next()\n            .flatten()\n            .ok_or_else(|| {\n                LixError::new(\n        
            \"LIX_ERROR_UNKNOWN\",\n                    format!(\n                        \"commit_graph commit '{source_commit_id}' references missing change '{change_id}'\"\n                    ),\n                )\n            })\n    }\n\n    async fn graph_commit_from_store_commit(\n        &mut self,\n        commit: Commit,\n    ) -> Result<CommitGraphCommit, LixError> {\n        let change_ids = self.load_commit_change_ids(&commit).await?;\n        Ok(commit_graph_commit_from_store_commit(commit, change_ids)?)\n    }\n\n    async fn load_commit_change_ids(&self, commit: &Commit) -> Result<Vec<String>, LixError> {\n        let mut change_ids = Vec::new();\n        for pack_id in 0..commit.change_pack_count {\n            let Some(changes) = self\n                .commit_store_reader\n                .load_change_pack(&commit.id, pack_id)\n                .await?\n            else {\n                return Err(missing_pack_error(\"change\", &commit.id, pack_id));\n            };\n            change_ids.extend(changes.into_iter().map(|change| change.id));\n        }\n        for pack_id in 0..commit.membership_pack_count {\n            let Some(members) = self\n                .commit_store_reader\n                .load_membership_pack(&commit.id, pack_id)\n                .await?\n            else {\n                return Err(missing_pack_error(\"membership\", &commit.id, pack_id));\n            };\n            change_ids.extend(members.into_iter().map(|locator| locator.change_id));\n        }\n        Ok(change_ids)\n    }\n\n    async fn load_canonical_changes(\n        &self,\n        change_ids: &[String],\n    ) -> Result<Vec<Option<LocatedChange>>, LixError> {\n        self.commit_store_reader\n            .load_located_changes(change_ids)\n            .await\n            .map(|changes| {\n                changes\n                    .into_iter()\n                    .map(|located| {\n                        located.map(|located| LocatedChange {\n     
                       record: canonical_change_from_store_change(located.record),\n                            source_commit_id: located.source_commit_id,\n                            source_pack_id: located.source_pack_id,\n                        })\n                    })\n                    .collect()\n            })\n    }\n}\n\n#[async_trait::async_trait]\nimpl<S> CommitGraphReader for CommitGraphStoreReader<S>\nwhere\n    S: StorageReader,\n{\n    async fn load_commit(\n        &mut self,\n        commit_id: &str,\n    ) -> Result<Option<CommitGraphCommit>, LixError> {\n        CommitGraphStoreReader::load_commit(self, commit_id).await\n    }\n\n    async fn all_commits(&mut self) -> Result<Vec<CommitGraphCommit>, LixError> {\n        CommitGraphStoreReader::all_commits(self).await\n    }\n\n    async fn reachable_commits(\n        &mut self,\n        head_commit_id: &str,\n    ) -> Result<Vec<ReachableCommitGraphCommit>, LixError> {\n        CommitGraphStoreReader::reachable_commits(self, head_commit_id).await\n    }\n\n    async fn best_common_ancestors(\n        &mut self,\n        left_commit_id: &str,\n        right_commit_id: &str,\n    ) -> Result<Vec<CommitGraphCommit>, LixError> {\n        CommitGraphStoreReader::best_common_ancestors(self, left_commit_id, right_commit_id).await\n    }\n\n    async fn merge_base(\n        &mut self,\n        left_commit_id: &str,\n        right_commit_id: &str,\n    ) -> Result<CommitGraphCommit, LixError> {\n        CommitGraphStoreReader::merge_base(self, left_commit_id, right_commit_id).await\n    }\n\n    fn commit_edges(&self, commits: &[CommitGraphCommit]) -> Vec<CommitGraphEdge> {\n        CommitGraphStoreReader::commit_edges(self, commits)\n    }\n\n    async fn change_history_from_commit(\n        &mut self,\n        start_commit_id: &str,\n        request: &CommitGraphChangeHistoryRequest,\n    ) -> Result<Vec<CommitGraphChangeHistoryEntry>, LixError> {\n        
CommitGraphStoreReader::change_history_from_commit(self, start_commit_id, request).await\n    }\n}\n\nfn depth_matches(depth: u32, request: &CommitGraphChangeHistoryRequest) -> bool {\n    request.min_depth.map_or(true, |min| depth >= min)\n        && request.max_depth.map_or(true, |max| depth <= max)\n}\n\nfn change_matches_history_request(\n    change: &Change,\n    request: &CommitGraphChangeHistoryRequest,\n) -> bool {\n    (request.include_tombstones || change.snapshot_ref.is_some())\n        && (request.entity_ids.is_empty() || request.entity_ids.contains(&change.entity_id))\n        && (request.schema_keys.is_empty() || request.schema_keys.contains(&change.schema_key))\n        && (request.file_ids.is_empty()\n            || change\n                .file_id\n                .as_ref()\n                .is_some_and(|file_id| request.file_ids.contains(file_id)))\n}\n\nfn commit_graph_commit_from_store_commit(\n    commit: Commit,\n    change_ids: Vec<String>,\n) -> Result<CommitGraphCommit, LixError> {\n    let change = commit_header_canonical_change(commit.clone());\n    Ok(CommitGraphCommit {\n        canonical_change: change.clone(),\n        change,\n        commit_id: commit.id,\n        change_ids,\n        author_account_ids: commit.author_account_ids,\n        parent_commit_ids: commit.parent_ids,\n    })\n}\n\nfn commit_header_canonical_change(commit: Commit) -> Change {\n    Change {\n        id: commit.change_id,\n        entity_id: EntityIdentity::single(&commit.id),\n        schema_key: COMMIT_SCHEMA_KEY.to_string(),\n        file_id: None,\n        snapshot_ref: None,\n        metadata_ref: None,\n        created_at: commit.created_at,\n    }\n}\n\nfn canonical_change_from_store_change(change: Change) -> Change {\n    Change {\n        id: change.id,\n        entity_id: change.entity_id,\n        schema_key: change.schema_key,\n        file_id: change.file_id,\n        snapshot_ref: change.snapshot_ref,\n        metadata_ref: 
change.metadata_ref,\n        created_at: change.created_at,\n    }\n}\n\nfn missing_pack_error(label: &str, commit_id: &str, pack_id: u32) -> LixError {\n    LixError::new(\n        LixError::CODE_INTERNAL_ERROR,\n        format!(\"commit_graph missing {label} pack ({commit_id}, {pack_id})\"),\n    )\n}\n\n#[cfg(test)]\nmod tests {\n    use std::collections::{BTreeMap, BTreeSet};\n    use std::sync::Arc;\n\n    use crate::backend::testing::UnitTestBackend;\n    use crate::commit_graph::{CommitGraphChangeHistoryRequest, CommitGraphContext};\n    use crate::commit_store::{\n        Change, ChangeLocator, ChangeRef, CommitDraftRef, CommitStoreContext,\n    };\n    use crate::storage::{StorageContext, StorageWriteSet};\n\n    #[tokio::test]\n    async fn load_commit_parses_commit_snapshot() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[commit_change(\n                \"commit-1-change\",\n                \"commit-1\",\n                &[\"change-1\", \"change-2\"],\n                &[\"parent-1\"],\n            )],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let commit = reader\n            .load_commit(\"commit-1\")\n            .await\n            .expect(\"commit load should succeed\")\n            .expect(\"commit should exist\");\n\n        assert_eq!(commit.commit_id, \"commit-1\");\n        assert_eq!(commit.change_ids, vec![\"change-1\", \"change-2\"]);\n        assert_eq!(commit.parent_commit_ids, vec![\"parent-1\"]);\n        assert_eq!(commit.change.id, \"commit-1-change\");\n    }\n\n    #[tokio::test]\n    async fn load_commit_returns_none_for_missing_commit() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let graph = 
CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n\n        let commit = reader\n            .load_commit(\"missing\")\n            .await\n            .expect(\"commit load should succeed\");\n\n        assert_eq!(commit, None);\n    }\n\n    #[tokio::test]\n    async fn all_commits_returns_parsed_commits_sorted_by_id() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                commit_change(\"commit-b-change\", \"commit-b\", &[], &[]),\n                entity_change(\"change-1\", \"entity-1\", \"example\", \"{}\"),\n                commit_change(\"commit-a-change\", \"commit-a\", &[], &[]),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let commits = reader\n            .all_commits()\n            .await\n            .expect(\"commit scan should succeed\");\n\n        assert_eq!(\n            commits\n                .iter()\n                .map(|commit| commit.commit_id.as_str())\n                .collect::<Vec<_>>(),\n            vec![\"commit-a\", \"commit-b\"]\n        );\n    }\n\n    #[tokio::test]\n    async fn commit_edges_are_derived_from_parent_commit_ids() {\n        let graph = CommitGraphContext::new();\n        let reader = graph.reader(StorageContext::new(Arc::new(UnitTestBackend::new())));\n        let commits = vec![parsed_commit(\n            \"commit-head\",\n            &[],\n            &[\"commit-left\", \"commit-right\"],\n        )];\n\n        let edges = reader.commit_edges(&commits);\n\n        assert_eq!(\n            edges\n                .iter()\n                .map(|edge| (\n                    edge.parent_commit_id.as_str(),\n                    edge.child_commit_id.as_str(),\n                    edge.parent_order,\n                ))\n                
.collect::<Vec<_>>(),\n            vec![\n                (\"commit-left\", \"commit-head\", 0),\n                (\"commit-right\", \"commit-head\", 1)\n            ]\n        );\n    }\n\n    #[tokio::test]\n    async fn change_history_from_commit_reports_matching_canonical_changes_with_depth() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                entity_change(\"change-root\", \"entity-root\", \"test_schema\", \"{}\"),\n                entity_change(\"change-head\", \"entity-head\", \"test_schema\", \"{}\"),\n                commit_change(\"commit-root-change\", \"commit-root\", &[\"change-root\"], &[]),\n                commit_change(\n                    \"commit-head-change\",\n                    \"commit-head\",\n                    &[\"change-head\"],\n                    &[\"commit-root\"],\n                ),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let history = reader\n            .change_history_from_commit(\n                \"commit-head\",\n                &CommitGraphChangeHistoryRequest {\n                    schema_keys: vec![\"test_schema\".to_string()],\n                    include_tombstones: true,\n                    ..CommitGraphChangeHistoryRequest::default()\n                },\n            )\n            .await\n            .expect(\"history should resolve\");\n\n        assert_eq!(\n            history\n                .iter()\n                .map(|entry| (\n                    entry.located_change.record.id.as_str(),\n                    entry.observed_commit_id.as_str(),\n                    entry.start_commit_id.as_str(),\n                    entry.depth\n                ))\n                .collect::<Vec<_>>(),\n            vec![\n                (\"change-head\", 
\"commit-head\", \"commit-head\", 0),\n                (\"change-root\", \"commit-root\", \"commit-head\", 1),\n            ]\n        );\n    }\n\n    #[tokio::test]\n    async fn change_history_from_commit_filters_depth_entity_file_and_tombstones() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                entity_change_with_file(\n                    \"change-file-a\",\n                    \"entity-1\",\n                    \"test_schema\",\n                    Some(\"file-a\"),\n                    \"{}\",\n                ),\n                entity_tombstone(\"change-tombstone\", \"entity-1\", \"test_schema\"),\n                entity_change_with_file(\n                    \"change-file-b\",\n                    \"entity-2\",\n                    \"test_schema\",\n                    Some(\"file-b\"),\n                    \"{}\",\n                ),\n                commit_change(\"commit-root-change\", \"commit-root\", &[\"change-file-a\"], &[]),\n                commit_change(\n                    \"commit-head-change\",\n                    \"commit-head\",\n                    &[\"change-tombstone\", \"change-file-b\"],\n                    &[\"commit-root\"],\n                ),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let history = reader\n            .change_history_from_commit(\n                \"commit-head\",\n                &CommitGraphChangeHistoryRequest {\n                    entity_ids: vec![crate::entity_identity::EntityIdentity::single(\"entity-1\")],\n                    file_ids: vec![\"file-a\".to_string()],\n                    min_depth: Some(1),\n                    max_depth: Some(1),\n                    include_tombstones: false,\n                    
..CommitGraphChangeHistoryRequest::default()\n                },\n            )\n            .await\n            .expect(\"history should resolve\");\n\n        assert_eq!(history.len(), 1);\n        assert_eq!(history[0].located_change.record.id, \"change-file-a\");\n        assert_eq!(history[0].depth, 1);\n    }\n\n    #[tokio::test]\n    async fn change_history_from_commit_includes_tombstones_when_requested() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                entity_tombstone(\"change-deleted\", \"entity-1\", \"test_schema\"),\n                commit_change(\n                    \"commit-head-change\",\n                    \"commit-head\",\n                    &[\"change-deleted\"],\n                    &[],\n                ),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let hidden = reader\n            .change_history_from_commit(\"commit-head\", &CommitGraphChangeHistoryRequest::default())\n            .await\n            .expect(\"history should resolve\");\n        let visible = reader\n            .change_history_from_commit(\n                \"commit-head\",\n                &CommitGraphChangeHistoryRequest {\n                    include_tombstones: true,\n                    ..CommitGraphChangeHistoryRequest::default()\n                },\n            )\n            .await\n            .expect(\"history should resolve\");\n\n        assert!(hidden.is_empty());\n        assert_eq!(visible.len(), 1);\n        assert_eq!(visible[0].located_change.record.id, \"change-deleted\");\n    }\n\n    #[derive(Clone)]\n    struct TestChange {\n        change: Change,\n        commit_change_ids: Vec<String>,\n        parent_commit_ids: Vec<String>,\n        author_account_ids: Vec<String>,\n    }\n\n    impl 
TestChange {\n        fn commit(\n            change_id: &str,\n            commit_id: &str,\n            change_ids: &[&str],\n            parent_commit_ids: &[&str],\n        ) -> Self {\n            Self {\n                change: Change {\n                    id: change_id.to_string(),\n                    entity_id: crate::entity_identity::EntityIdentity::single(commit_id),\n                    schema_key: super::COMMIT_SCHEMA_KEY.to_string(),\n                    file_id: None,\n                    snapshot_ref: None,\n                    metadata_ref: None,\n                    created_at: \"2026-01-01T00:00:00Z\".to_string(),\n                },\n                commit_change_ids: change_ids.iter().map(|id| id.to_string()).collect(),\n                parent_commit_ids: parent_commit_ids.iter().map(|id| id.to_string()).collect(),\n                author_account_ids: Vec::new(),\n            }\n        }\n\n        fn entity(\n            change_id: &str,\n            entity_id: &str,\n            schema_key: &str,\n            file_id: Option<&str>,\n            snapshot_content: Option<&str>,\n            created_at: &str,\n        ) -> Self {\n            Self {\n                change: Change {\n                    id: change_id.to_string(),\n                    entity_id: crate::entity_identity::EntityIdentity::single(entity_id),\n                    schema_key: schema_key.to_string(),\n                    file_id: file_id.map(str::to_string),\n                    snapshot_ref: snapshot_content.map(|content| {\n                        crate::json_store::JsonRef::from_hash(blake3::hash(content.as_bytes()))\n                    }),\n                    metadata_ref: None,\n                    created_at: created_at.to_string(),\n                },\n                commit_change_ids: Vec::new(),\n                parent_commit_ids: Vec::new(),\n                author_account_ids: Vec::new(),\n            }\n        }\n\n        fn is_commit(&self) -> bool 
{\n            self.change.schema_key == super::COMMIT_SCHEMA_KEY\n        }\n    }\n\n    async fn append_changes(storage: StorageContext, changes: &[TestChange]) {\n        let mut tx = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        let canonical_changes = changes\n            .iter()\n            .filter(|change| !change.is_commit())\n            .map(|change| change.change.clone())\n            .collect::<Vec<_>>();\n        let changes_by_id: BTreeMap<&str, &Change> = canonical_changes\n            .iter()\n            .map(|change| (change.id.as_str(), change))\n            .collect::<BTreeMap<_, _>>();\n        let mut authored_change_ids = BTreeSet::new();\n        let commit_store = CommitStoreContext::new();\n        for change in changes.iter().filter(|change| change.is_commit()) {\n            let commit = crate::commit_graph::CommitGraphCommit {\n                canonical_change: change.change.clone(),\n                change: change.change.clone(),\n                commit_id: change\n                    .change\n                    .entity_id\n                    .as_single_string()\n                    .expect(\"commit fixture should use single entity id\")\n                    .to_string(),\n                change_ids: change.commit_change_ids.clone(),\n                author_account_ids: change.author_account_ids.clone(),\n                parent_commit_ids: change.parent_commit_ids.clone(),\n            };\n            let parent_commit_ids = commit.parent_commit_ids.clone();\n            let author_account_ids = commit.author_account_ids.clone();\n            let commit_draft = CommitDraftRef {\n                id: &commit.commit_id,\n                change_id: &commit.canonical_change.id,\n                parent_ids: &parent_commit_ids,\n                author_account_ids: &author_account_ids,\n                
created_at: &commit.canonical_change.created_at,\n            };\n\n            let mut authored_changes = Vec::new();\n            let mut adopted_changes = Vec::new();\n            let mut corrupt_missing_members = Vec::new();\n            for change_id in &commit.change_ids {\n                if let Some(change) = changes_by_id.get(change_id.as_str()) {\n                    if authored_change_ids.insert(change_id.clone()) {\n                        authored_changes.push(change_ref_from_canonical(change.as_ref()));\n                    } else {\n                        adopted_changes.push(change_ref_from_canonical(change.as_ref()));\n                    }\n                } else {\n                    corrupt_missing_members.push(change_id.clone());\n                }\n            }\n\n            if corrupt_missing_members.is_empty() {\n                commit_store\n                    .writer(tx.as_mut(), &mut writes)\n                    .stage_commit_draft(commit_draft, authored_changes, adopted_changes)\n                    .await\n                    .expect(\"commit-store append should succeed\");\n            } else {\n                crate::commit_store::storage::stage_commit(\n                    &mut writes,\n                    commit_draft,\n                    authored_changes,\n                    corrupt_missing_members\n                        .into_iter()\n                        .map(|change_id| ChangeLocator {\n                            source_commit_id: \"missing-source-commit\".to_string(),\n                            source_pack_id: 0,\n                            source_ordinal: 0,\n                            change_id,\n                        })\n                        .collect(),\n                )\n                .expect(\"corrupt commit-store fixture should stage\");\n            }\n        }\n        writes\n            .apply(&mut tx.as_mut())\n            .await\n            .expect(\"writes should apply\");\n        
tx.commit().await.expect(\"commit should succeed\");\n    }\n\n    fn change_ref_from_canonical<'a>(change: crate::commit_store::ChangeRef<'a>) -> ChangeRef<'a> {\n        ChangeRef {\n            id: change.id,\n            entity_id: change.entity_id,\n            schema_key: change.schema_key,\n            file_id: change.file_id,\n            snapshot_ref: change.snapshot_ref,\n            metadata_ref: change.metadata_ref,\n            created_at: change.created_at,\n        }\n    }\n\n    fn commit_change(\n        change_id: &str,\n        commit_id: &str,\n        change_ids: &[&str],\n        parent_commit_ids: &[&str],\n    ) -> TestChange {\n        TestChange::commit(change_id, commit_id, change_ids, parent_commit_ids)\n    }\n\n    fn parsed_commit(\n        commit_id: &str,\n        change_ids: &[&str],\n        parent_commit_ids: &[&str],\n    ) -> crate::commit_graph::CommitGraphCommit {\n        let fixture = commit_change(\n            &format!(\"{commit_id}-change\"),\n            commit_id,\n            change_ids,\n            parent_commit_ids,\n        );\n        crate::commit_graph::CommitGraphCommit {\n            canonical_change: fixture.change.clone(),\n            change: fixture.change,\n            commit_id: commit_id.to_string(),\n            change_ids: change_ids\n                .iter()\n                .map(|change_id| change_id.to_string())\n                .collect(),\n            author_account_ids: Vec::new(),\n            parent_commit_ids: parent_commit_ids\n                .iter()\n                .map(|parent_id| parent_id.to_string())\n                .collect(),\n        }\n    }\n\n    fn entity_change(\n        change_id: &str,\n        entity_id: &str,\n        schema_key: &str,\n        snapshot_content: &str,\n    ) -> TestChange {\n        entity_change_at(\n            change_id,\n            entity_id,\n            schema_key,\n            snapshot_content,\n            \"2026-01-01T00:00:00Z\",\n        )\n  
  }\n\n    fn entity_change_at(\n        change_id: &str,\n        entity_id: &str,\n        schema_key: &str,\n        snapshot_content: &str,\n        created_at: &str,\n    ) -> TestChange {\n        TestChange::entity(\n            change_id,\n            entity_id,\n            schema_key,\n            None,\n            Some(snapshot_content),\n            created_at,\n        )\n    }\n\n    fn entity_change_with_file(\n        change_id: &str,\n        entity_id: &str,\n        schema_key: &str,\n        file_id: Option<&str>,\n        snapshot_content: &str,\n    ) -> TestChange {\n        TestChange::entity(\n            change_id,\n            entity_id,\n            schema_key,\n            file_id,\n            Some(snapshot_content),\n            \"2026-01-01T00:00:00Z\",\n        )\n    }\n\n    fn entity_tombstone(change_id: &str, entity_id: &str, schema_key: &str) -> TestChange {\n        TestChange::entity(\n            change_id,\n            entity_id,\n            schema_key,\n            None,\n            None,\n            \"2026-01-02T00:00:00Z\",\n        )\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/commit_graph/mod.rs",
    "content": "mod context;\nmod types;\nmod walker;\n\n#[allow(unused_imports)]\npub(crate) use context::{CommitGraphContext, CommitGraphStoreReader};\n#[allow(unused_imports)]\npub(crate) use types::{\n    CommitGraphChangeHistoryEntry, CommitGraphChangeHistoryRequest, CommitGraphCommit,\n    CommitGraphEdge, CommitGraphReader, ReachableCommitGraphCommit,\n};\n"
  },
  {
    "path": "packages/engine/src/commit_graph/types.rs",
    "content": "use crate::commit_store::{Change, LocatedChange};\nuse crate::entity_identity::EntityIdentity;\nuse crate::LixError;\n\n/// Parsed `lix_commit` entity from the changelog.\n///\n/// Commits are stored as ordinary canonical changes. The graph reader parses\n/// their snapshot so traversal code can work with explicit parent ids and the\n/// ordered canonical changes introduced relative to the first parent. A merge\n/// commit may reference existing changes from another parent instead of owning\n/// newly minted copies.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct CommitGraphCommit {\n    pub(crate) canonical_change: Change,\n    pub(crate) change: Change,\n    pub(crate) commit_id: String,\n    pub(crate) change_ids: Vec<String>,\n    pub(crate) author_account_ids: Vec<String>,\n    pub(crate) parent_commit_ids: Vec<String>,\n}\n\n/// Commit reachable from a requested graph head.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct ReachableCommitGraphCommit {\n    pub(crate) commit: CommitGraphCommit,\n    pub(crate) depth: u32,\n}\n\n/// Derived parent/child edge between two commit entities.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct CommitGraphEdge {\n    pub(crate) parent_commit_id: String,\n    pub(crate) child_commit_id: String,\n    pub(crate) parent_order: u32,\n}\n\n/// Filter for canonical change history from a chosen traversal start commit.\n#[derive(Debug, Clone, Default, PartialEq, Eq)]\npub(crate) struct CommitGraphChangeHistoryRequest {\n    pub(crate) entity_ids: Vec<EntityIdentity>,\n    pub(crate) schema_keys: Vec<String>,\n    pub(crate) file_ids: Vec<String>,\n    pub(crate) min_depth: Option<u32>,\n    pub(crate) max_depth: Option<u32>,\n    pub(crate) include_tombstones: bool,\n}\n\n/// Canonical change observed while walking commit history from a start commit.\n///\n/// `start_commit_id` is the traversal anchor requested by the caller. 
It is not\n/// necessarily a graph root or a version head.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct CommitGraphChangeHistoryEntry {\n    pub(crate) located_change: LocatedChange,\n    pub(crate) observed_commit_id: String,\n    pub(crate) start_commit_id: String,\n    pub(crate) depth: u32,\n}\n\n/// Execution-scoped reader for commit graph facts.\n///\n/// SQL surfaces consume this trait so they depend on graph semantics, not on\n/// changelog storage or traversal details.\n#[allow(dead_code)]\n#[async_trait::async_trait]\npub(crate) trait CommitGraphReader: Send + Sync {\n    #[allow(dead_code)]\n    async fn load_commit(&mut self, commit_id: &str)\n        -> Result<Option<CommitGraphCommit>, LixError>;\n\n    async fn all_commits(&mut self) -> Result<Vec<CommitGraphCommit>, LixError>;\n\n    async fn reachable_commits(\n        &mut self,\n        head_commit_id: &str,\n    ) -> Result<Vec<ReachableCommitGraphCommit>, LixError>;\n\n    /// Returns the best common ancestors shared by two commit heads.\n    ///\n    /// This is intentionally not called \"lowest common ancestor\": commit\n    /// history is a DAG, not a tree, and some histories have multiple equally\n    /// good common ancestors. 
Merge policy can require exactly one base later.\n    #[allow(dead_code)]\n    async fn best_common_ancestors(\n        &mut self,\n        left_commit_id: &str,\n        right_commit_id: &str,\n    ) -> Result<Vec<CommitGraphCommit>, LixError>;\n\n    /// Resolves the single commit base to use for a three-way merge.\n    ///\n    /// This is merge policy, not raw graph math: no common history and multiple\n    /// best common ancestors are both errors until merge has explicit support\n    /// for those cases.\n    #[allow(dead_code)]\n    async fn merge_base(\n        &mut self,\n        left_commit_id: &str,\n        right_commit_id: &str,\n    ) -> Result<CommitGraphCommit, LixError>;\n\n    fn commit_edges(&self, commits: &[CommitGraphCommit]) -> Vec<CommitGraphEdge>;\n\n    async fn change_history_from_commit(\n        &mut self,\n        start_commit_id: &str,\n        request: &CommitGraphChangeHistoryRequest,\n    ) -> Result<Vec<CommitGraphChangeHistoryEntry>, LixError>;\n}\n"
  },
  {
    "path": "packages/engine/src/commit_graph/walker.rs",
    "content": "use std::collections::{BTreeMap, BTreeSet};\n\nuse crate::commit_graph::{CommitGraphCommit, CommitGraphStoreReader, ReachableCommitGraphCommit};\nuse crate::storage::StorageReader;\nuse crate::LixError;\n\n/// Walks parent links from `head_commit_id` and returns reachable commits\n/// nearest-first.\n///\n/// The walker is intentionally storage-free. It asks `CommitGraphReader` to\n/// load parsed commit facts and owns only traversal concerns: caching, cycle\n/// detection, and nearest-depth selection.\npub(crate) async fn walk_reachable_commits<S>(\n    reader: &mut CommitGraphStoreReader<S>,\n    head_commit_id: &str,\n) -> Result<Vec<ReachableCommitGraphCommit>, LixError>\nwhere\n    S: StorageReader,\n{\n    let mut loader = CommitTraversalLoader::new(reader);\n    let mut visiting = BTreeSet::new();\n    let mut nearest_depths = BTreeMap::new();\n    loader\n        .walk_commit(head_commit_id, 0, &mut visiting, &mut nearest_depths)\n        .await?;\n\n    let mut commits = nearest_depths\n        .into_iter()\n        .map(|(commit_id, depth)| {\n            let commit = loader\n                .loaded\n                .remove(&commit_id)\n                .expect(\"visited commit should be cached\");\n            ReachableCommitGraphCommit { commit, depth }\n        })\n        .collect::<Vec<_>>();\n    commits.sort_by(|left, right| {\n        left.depth\n            .cmp(&right.depth)\n            .then_with(|| left.commit.commit_id.cmp(&right.commit.commit_id))\n    });\n    Ok(commits)\n}\n\n/// Returns the best common ancestors shared by two commit heads.\n///\n/// This is graph math, not merge policy. 
A commit is \"best\" when it is a\n/// common ancestor and no descendant of it is also a common ancestor.\n///\n/// Simple history has one best common ancestor:\n///\n/// ```text\n/// A -- B -- C   left\n///       \\\n///        D      right\n/// ```\n///\n/// `best_common_ancestors(C, D)` returns `[B]`.\n///\n/// Commit history is a DAG, not a tree, so criss-cross histories can have\n/// multiple equally good answers. Callers that need one merge base should wrap\n/// this API with an explicit policy instead of pretending the graph always has\n/// a single lowest common ancestor.\npub(crate) async fn best_common_ancestors<S>(\n    reader: &mut CommitGraphStoreReader<S>,\n    left_commit_id: &str,\n    right_commit_id: &str,\n) -> Result<Vec<CommitGraphCommit>, LixError>\nwhere\n    S: StorageReader,\n{\n    let left_reachable = walk_reachable_commits(reader, left_commit_id).await?;\n    let right_reachable = walk_reachable_commits(reader, right_commit_id).await?;\n    let right_ids = right_reachable\n        .iter()\n        .map(|reachable| reachable.commit.commit_id.clone())\n        .collect::<BTreeSet<_>>();\n    let common_ids = left_reachable\n        .iter()\n        .filter(|reachable| right_ids.contains(&reachable.commit.commit_id))\n        .map(|reachable| reachable.commit.commit_id.clone())\n        .collect::<BTreeSet<_>>();\n\n    let mut best = Vec::new();\n    for reachable in left_reachable {\n        let commit_id = &reachable.commit.commit_id;\n        if !common_ids.contains(commit_id) {\n            continue;\n        }\n\n        if has_descendant_in_set(reader, commit_id, &common_ids).await? 
{\n            continue;\n        }\n\n        best.push(reachable.commit);\n    }\n    best.sort_by(|left, right| left.commit_id.cmp(&right.commit_id));\n    Ok(best)\n}\n\nasync fn has_descendant_in_set<S>(\n    reader: &mut CommitGraphStoreReader<S>,\n    commit_id: &str,\n    candidate_descendant_ids: &BTreeSet<String>,\n) -> Result<bool, LixError>\nwhere\n    S: StorageReader,\n{\n    for candidate_descendant_id in candidate_descendant_ids {\n        if candidate_descendant_id == commit_id {\n            continue;\n        }\n        let reachable = walk_reachable_commits(reader, candidate_descendant_id).await?;\n        if reachable\n            .iter()\n            .any(|reachable| reachable.commit.commit_id == commit_id)\n        {\n            return Ok(true);\n        }\n    }\n    Ok(false)\n}\n\nstruct CommitTraversalLoader<'a, S>\nwhere\n    S: StorageReader,\n{\n    reader: &'a mut CommitGraphStoreReader<S>,\n    loaded: BTreeMap<String, CommitGraphCommit>,\n}\n\nimpl<'a, S> CommitTraversalLoader<'a, S>\nwhere\n    S: StorageReader,\n{\n    fn new(reader: &'a mut CommitGraphStoreReader<S>) -> Self {\n        Self {\n            reader,\n            loaded: BTreeMap::new(),\n        }\n    }\n\n    async fn walk_commit(\n        &mut self,\n        commit_id: &str,\n        depth: u32,\n        visiting: &mut BTreeSet<String>,\n        nearest_depths: &mut BTreeMap<String, u32>,\n    ) -> Result<(), LixError> {\n        let mut stack = vec![TraversalFrame {\n            commit_id: commit_id.to_string(),\n            depth,\n            expanded: false,\n        }];\n\n        while let Some(frame) = stack.pop() {\n            if frame.expanded {\n                visiting.remove(&frame.commit_id);\n                continue;\n            }\n\n            if visiting.contains(&frame.commit_id) {\n                return Err(LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\n                        \"commit_graph cycle 
detected at commit '{}'\",\n                        frame.commit_id\n                    ),\n                ));\n            }\n\n            if let Some(previous_depth) = nearest_depths.get(&frame.commit_id) {\n                if *previous_depth <= frame.depth {\n                    continue;\n                }\n            }\n\n            let commit = self.load_commit(&frame.commit_id).await?;\n            nearest_depths.insert(frame.commit_id.clone(), frame.depth);\n\n            visiting.insert(frame.commit_id.clone());\n            stack.push(TraversalFrame {\n                commit_id: frame.commit_id,\n                depth: frame.depth,\n                expanded: true,\n            });\n            for parent_commit_id in commit.parent_commit_ids.iter().rev() {\n                stack.push(TraversalFrame {\n                    commit_id: parent_commit_id.clone(),\n                    depth: frame.depth + 1,\n                    expanded: false,\n                });\n            }\n        }\n        Ok(())\n    }\n\n    async fn load_commit(&mut self, commit_id: &str) -> Result<CommitGraphCommit, LixError> {\n        if let Some(commit) = self.loaded.get(commit_id) {\n            return Ok(commit.clone());\n        }\n        let Some(commit) = self.reader.load_commit(commit_id).await? 
else {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"commit_graph missing commit '{commit_id}'\"),\n            ));\n        };\n        self.loaded.insert(commit_id.to_string(), commit.clone());\n        Ok(commit)\n    }\n}\n\nstruct TraversalFrame {\n    commit_id: String,\n    depth: u32,\n    expanded: bool,\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use serde_json::json;\n\n    use crate::backend::testing::UnitTestBackend;\n    use crate::commit_graph::CommitGraphContext;\n    use crate::commit_store::{Change, CommitDraftRef, CommitStoreContext};\n    use crate::storage::{StorageContext, StorageWriteSet};\n    use crate::LixError;\n\n    #[tokio::test]\n    async fn reachable_commits_returns_commits_nearest_first() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                commit_change(\"commit-root-change\", \"commit-root\", &[], &[]),\n                commit_change(\n                    \"commit-parent-change\",\n                    \"commit-parent\",\n                    &[],\n                    &[\"commit-root\"],\n                ),\n                commit_change(\"commit-head-change\", \"commit-head\", &[], &[\"commit-parent\"]),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let commits = reader\n            .reachable_commits(\"commit-head\")\n            .await\n            .expect(\"reachable commits should load\");\n\n        assert_eq!(\n            commits\n                .iter()\n                .map(|reachable| (reachable.commit.commit_id.as_str(), reachable.depth))\n                .collect::<Vec<_>>(),\n            vec![(\"commit-head\", 0), (\"commit-parent\", 1), (\"commit-root\", 2)]\n        );\n    }\n\n   
 #[tokio::test]\n    async fn reachable_commits_errors_on_missing_parent_commit() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[commit_change(\n                \"commit-head-change\",\n                \"commit-head\",\n                &[],\n                &[\"missing-parent\"],\n            )],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let error = reader\n            .reachable_commits(\"commit-head\")\n            .await\n            .expect_err(\"missing parent should fail\");\n\n        assert!(error.message.contains(\"missing-parent\"));\n    }\n\n    #[tokio::test]\n    async fn reachable_commits_errors_on_cycle() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                commit_change(\"commit-a-change\", \"commit-a\", &[], &[\"commit-b\"]),\n                commit_change(\"commit-b-change\", \"commit-b\", &[], &[\"commit-a\"]),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let error = reader\n            .reachable_commits(\"commit-a\")\n            .await\n            .expect_err(\"cycle should fail\");\n\n        assert!(error.message.contains(\"cycle\"));\n    }\n\n    #[tokio::test]\n    async fn reachable_commits_dedupes_shared_ancestors_in_diamond() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                commit_change(\"commit-root-change\", \"commit-root\", &[], &[]),\n                
commit_change(\"commit-left-change\", \"commit-left\", &[], &[\"commit-root\"]),\n                commit_change(\"commit-right-change\", \"commit-right\", &[], &[\"commit-root\"]),\n                commit_change(\n                    \"commit-head-change\",\n                    \"commit-head\",\n                    &[],\n                    &[\"commit-left\", \"commit-right\"],\n                ),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let commits = reader\n            .reachable_commits(\"commit-head\")\n            .await\n            .expect(\"reachable commits should load\");\n\n        assert_eq!(\n            commits\n                .iter()\n                .map(|reachable| (reachable.commit.commit_id.as_str(), reachable.depth))\n                .collect::<Vec<_>>(),\n            vec![\n                (\"commit-head\", 0),\n                (\"commit-left\", 1),\n                (\"commit-right\", 1),\n                (\"commit-root\", 2),\n            ]\n        );\n    }\n\n    #[tokio::test]\n    async fn reachable_commits_keeps_nearest_depth_for_multiple_paths() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                commit_change(\"commit-root-change\", \"commit-root\", &[], &[]),\n                commit_change(\n                    \"commit-parent-change\",\n                    \"commit-parent\",\n                    &[],\n                    &[\"commit-root\"],\n                ),\n                commit_change(\n                    \"commit-head-change\",\n                    \"commit-head\",\n                    &[],\n                    &[\"commit-root\", \"commit-parent\"],\n                ),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n 
       let mut reader = graph.reader(storage);\n        let commits = reader\n            .reachable_commits(\"commit-head\")\n            .await\n            .expect(\"reachable commits should load\");\n\n        assert_eq!(\n            commits\n                .iter()\n                .map(|reachable| (reachable.commit.commit_id.as_str(), reachable.depth))\n                .collect::<Vec<_>>(),\n            vec![(\"commit-head\", 0), (\"commit-parent\", 1), (\"commit-root\", 1)]\n        );\n    }\n\n    #[tokio::test]\n    async fn reachable_commits_orders_same_depth_commits_by_id() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                commit_change(\"commit-z-change\", \"commit-z\", &[], &[]),\n                commit_change(\"commit-a-change\", \"commit-a\", &[], &[]),\n                commit_change(\n                    \"commit-head-change\",\n                    \"commit-head\",\n                    &[],\n                    &[\"commit-z\", \"commit-a\"],\n                ),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let commits = reader\n            .reachable_commits(\"commit-head\")\n            .await\n            .expect(\"reachable commits should load\");\n\n        assert_eq!(\n            commits\n                .iter()\n                .map(|reachable| (reachable.commit.commit_id.as_str(), reachable.depth))\n                .collect::<Vec<_>>(),\n            vec![(\"commit-head\", 0), (\"commit-a\", 1), (\"commit-z\", 1)]\n        );\n    }\n\n    #[tokio::test]\n    async fn reachable_commits_errors_on_missing_head_commit() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let graph = 
CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n\n        let error = reader\n            .reachable_commits(\"missing-head\")\n            .await\n            .expect_err(\"missing head should fail\");\n\n        assert!(error.message.contains(\"missing-head\"));\n    }\n\n    #[tokio::test]\n    async fn best_common_ancestors_returns_nearest_common_commit_in_simple_graph() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                commit_change(\"commit-a-change\", \"commit-a\", &[], &[]),\n                commit_change(\"commit-b-change\", \"commit-b\", &[], &[\"commit-a\"]),\n                commit_change(\"commit-c-change\", \"commit-c\", &[], &[\"commit-b\"]),\n                commit_change(\"commit-d-change\", \"commit-d\", &[], &[\"commit-b\"]),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let ancestors = reader\n            .best_common_ancestors(\"commit-c\", \"commit-d\")\n            .await\n            .expect(\"best common ancestors should load\");\n\n        assert_eq!(\n            ancestors\n                .iter()\n                .map(|commit| commit.commit_id.as_str())\n                .collect::<Vec<_>>(),\n            vec![\"commit-b\"]\n        );\n    }\n\n    #[tokio::test]\n    async fn best_common_ancestors_returns_shared_fork_in_diamond_graph() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                commit_change(\"commit-root-change\", \"commit-root\", &[], &[]),\n                commit_change(\"commit-left-change\", \"commit-left\", &[], &[\"commit-root\"]),\n                
commit_change(\"commit-right-change\", \"commit-right\", &[], &[\"commit-root\"]),\n                commit_change(\n                    \"commit-left-head-change\",\n                    \"commit-left-head\",\n                    &[],\n                    &[\"commit-left\"],\n                ),\n                commit_change(\n                    \"commit-right-head-change\",\n                    \"commit-right-head\",\n                    &[],\n                    &[\"commit-right\"],\n                ),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let ancestors = reader\n            .best_common_ancestors(\"commit-left-head\", \"commit-right-head\")\n            .await\n            .expect(\"best common ancestors should load\");\n\n        assert_eq!(\n            ancestors\n                .iter()\n                .map(|commit| commit.commit_id.as_str())\n                .collect::<Vec<_>>(),\n            vec![\"commit-root\"]\n        );\n    }\n\n    #[tokio::test]\n    async fn best_common_ancestors_returns_parent_when_one_side_is_ancestor() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                commit_change(\"commit-a-change\", \"commit-a\", &[], &[]),\n                commit_change(\"commit-b-change\", \"commit-b\", &[], &[\"commit-a\"]),\n                commit_change(\"commit-c-change\", \"commit-c\", &[], &[\"commit-b\"]),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let ancestors = reader\n            .best_common_ancestors(\"commit-b\", \"commit-c\")\n            .await\n            .expect(\"best common ancestors should load\");\n\n        assert_eq!(\n            ancestors\n                
.iter()\n                .map(|commit| commit.commit_id.as_str())\n                .collect::<Vec<_>>(),\n            vec![\"commit-b\"]\n        );\n    }\n\n    #[tokio::test]\n    async fn best_common_ancestors_returns_multiple_bases_for_criss_cross_graph() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                commit_change(\"commit-root-change\", \"commit-root\", &[], &[]),\n                commit_change(\"commit-left-change\", \"commit-left\", &[], &[\"commit-root\"]),\n                commit_change(\"commit-right-change\", \"commit-right\", &[], &[\"commit-root\"]),\n                commit_change(\n                    \"commit-head-left-change\",\n                    \"commit-head-left\",\n                    &[],\n                    &[\"commit-left\", \"commit-right\"],\n                ),\n                commit_change(\n                    \"commit-head-right-change\",\n                    \"commit-head-right\",\n                    &[],\n                    &[\"commit-right\", \"commit-left\"],\n                ),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let ancestors = reader\n            .best_common_ancestors(\"commit-head-left\", \"commit-head-right\")\n            .await\n            .expect(\"best common ancestors should load\");\n\n        assert_eq!(\n            ancestors\n                .iter()\n                .map(|commit| commit.commit_id.as_str())\n                .collect::<Vec<_>>(),\n            vec![\"commit-left\", \"commit-right\"]\n        );\n    }\n\n    #[tokio::test]\n    async fn merge_base_returns_single_best_common_ancestor() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        
append_changes(\n            storage.clone(),\n            &[\n                commit_change(\"commit-a-change\", \"commit-a\", &[], &[]),\n                commit_change(\"commit-b-change\", \"commit-b\", &[], &[\"commit-a\"]),\n                commit_change(\"commit-c-change\", \"commit-c\", &[], &[\"commit-b\"]),\n                commit_change(\"commit-d-change\", \"commit-d\", &[], &[\"commit-b\"]),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let base = reader\n            .merge_base(\"commit-c\", \"commit-d\")\n            .await\n            .expect(\"single merge base should resolve\");\n\n        assert_eq!(base.commit_id, \"commit-b\");\n    }\n\n    #[tokio::test]\n    async fn merge_base_errors_when_histories_have_no_common_commit() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                commit_change(\"commit-left-change\", \"commit-left\", &[], &[]),\n                commit_change(\"commit-right-change\", \"commit-right\", &[], &[]),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let error = reader\n            .merge_base(\"commit-left\", \"commit-right\")\n            .await\n            .expect_err(\"unrelated histories should not have a merge base\");\n\n        assert!(error.message.contains(\"no common history\"));\n    }\n\n    #[tokio::test]\n    async fn merge_base_errors_when_best_common_ancestor_is_ambiguous() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        append_changes(\n            storage.clone(),\n            &[\n                commit_change(\"commit-root-change\", \"commit-root\", &[], &[]),\n    
            commit_change(\"commit-left-change\", \"commit-left\", &[], &[\"commit-root\"]),\n                commit_change(\"commit-right-change\", \"commit-right\", &[], &[\"commit-root\"]),\n                commit_change(\n                    \"commit-head-left-change\",\n                    \"commit-head-left\",\n                    &[],\n                    &[\"commit-left\", \"commit-right\"],\n                ),\n                commit_change(\n                    \"commit-head-right-change\",\n                    \"commit-head-right\",\n                    &[],\n                    &[\"commit-right\", \"commit-left\"],\n                ),\n            ],\n        )\n        .await;\n\n        let graph = CommitGraphContext::new();\n        let mut reader = graph.reader(storage);\n        let error = reader\n            .merge_base(\"commit-head-left\", \"commit-head-right\")\n            .await\n            .expect_err(\"ambiguous best common ancestors should fail\");\n\n        assert_eq!(error.code, LixError::CODE_AMBIGUOUS_MERGE_BASE);\n        assert_eq!(\n            error\n                .details\n                .as_ref()\n                .and_then(|details| details.get(\"left_commit_id\")),\n            Some(&json!(\"commit-head-left\"))\n        );\n        assert_eq!(\n            error\n                .details\n                .as_ref()\n                .and_then(|details| details.get(\"right_commit_id\")),\n            Some(&json!(\"commit-head-right\"))\n        );\n        assert_eq!(\n            error\n                .details\n                .as_ref()\n                .and_then(|details| details.get(\"candidates\")),\n            Some(&json!([\"commit-left\", \"commit-right\"]))\n        );\n    }\n\n    #[derive(Clone)]\n    struct TestCommitChange {\n        change: Change,\n        parent_commit_ids: Vec<String>,\n    }\n\n    async fn append_changes(storage: StorageContext, changes: &[TestCommitChange]) {\n        let mut tx = 
storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        let commit_store = CommitStoreContext::new();\n        for change in changes {\n            let commit_id = change\n                .change\n                .entity_id\n                .as_single_string()\n                .expect(\"commit fixture should have single id\")\n                .to_string();\n            let author_account_ids = Vec::new();\n            let commit = CommitDraftRef {\n                id: &commit_id,\n                change_id: &change.change.id,\n                parent_ids: &change.parent_commit_ids,\n                author_account_ids: &author_account_ids,\n                created_at: &change.change.created_at,\n            };\n            commit_store\n                .writer(tx.as_mut(), &mut writes)\n                .stage_commit_draft(commit, Vec::new(), Vec::new())\n                .await\n                .expect(\"commit-store fixture should append\");\n        }\n        writes\n            .apply(&mut tx.as_mut())\n            .await\n            .expect(\"writes should apply\");\n        tx.commit().await.expect(\"commit should succeed\");\n    }\n\n    fn commit_change(\n        change_id: &str,\n        commit_id: &str,\n        change_ids: &[&str],\n        parent_commit_ids: &[&str],\n    ) -> TestCommitChange {\n        let _ = change_ids;\n        TestCommitChange {\n            change: Change {\n                id: change_id.to_string(),\n                entity_id: crate::entity_identity::EntityIdentity::single(commit_id),\n                schema_key: \"lix_commit\".to_string(),\n                file_id: None,\n                snapshot_ref: None,\n                metadata_ref: None,\n                created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            },\n            parent_commit_ids: parent_commit_ids.iter().map(|id| 
id.to_string()).collect(),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/commit_store/codec.rs",
    "content": "use crate::commit_store::{\n    Change, ChangeLocator, ChangeLocatorRef, ChangeRef, Commit, StoredCommitRef,\n};\nuse crate::entity_identity::EntityIdentity;\nuse crate::json_store::JsonRef;\nuse crate::LixError;\n\nconst COMMIT_MAGIC: &[u8; 5] = b\"LXCM1\";\nconst CHANGE_MAGIC: &[u8; 5] = b\"LXCH2\";\nconst CHANGE_PACK_MAGIC: &[u8; 5] = b\"LXCP3\";\nconst MEMBERSHIP_PACK_MAGIC: &[u8; 5] = b\"LXMP1\";\nconst CHANGE_ID_FULL: u8 = 0;\nconst CHANGE_ID_COMMIT_SUFFIX: u8 = 1;\n\npub(crate) fn encode_commit_ref(commit: StoredCommitRef<'_>) -> Result<Vec<u8>, LixError> {\n    let mut bytes = Vec::new();\n    bytes.extend_from_slice(COMMIT_MAGIC);\n    write_str(&mut bytes, commit.id)?;\n    write_str(&mut bytes, commit.change_id)?;\n    write_strs(&mut bytes, commit.parent_ids.iter().map(String::as_str))?;\n    write_strs(\n        &mut bytes,\n        commit.author_account_ids.iter().map(String::as_str),\n    )?;\n    write_str(&mut bytes, commit.created_at)?;\n    bytes.extend_from_slice(&commit.change_pack_count.to_le_bytes());\n    bytes.extend_from_slice(&commit.membership_pack_count.to_le_bytes());\n    Ok(bytes)\n}\n\npub(crate) fn decode_commit(bytes: &[u8]) -> Result<Commit, LixError> {\n    let mut cursor = ByteCursor::new(bytes);\n    cursor.expect_magic(COMMIT_MAGIC, \"commit\")?;\n    let id = cursor.read_string(\"id\")?;\n    let change_id = cursor.read_string(\"change_id\")?;\n    let parent_ids = cursor.read_strings(\"parent_ids\")?;\n    let author_account_ids = cursor.read_strings(\"author_account_ids\")?;\n    let created_at = cursor.read_string(\"created_at\")?;\n    let change_pack_count = cursor.read_u32(\"change_pack_count\")?;\n    let membership_pack_count = cursor.read_u32(\"membership_pack_count\")?;\n    cursor.expect_end(\"commit\")?;\n    Ok(Commit {\n        id,\n        change_id,\n        parent_ids,\n        author_account_ids,\n        created_at,\n        change_pack_count,\n        membership_pack_count,\n    
})\n}\n\npub(crate) fn encode_change_ref(change: ChangeRef<'_>) -> Result<Vec<u8>, LixError> {\n    let mut bytes = Vec::new();\n    write_change_ref(&mut bytes, change)?;\n    Ok(bytes)\n}\n\nfn write_change_ref(bytes: &mut Vec<u8>, change: ChangeRef<'_>) -> Result<(), LixError> {\n    let entity_id = change.entity_id.as_json_array_text().map_err(|error| {\n        LixError::unknown(format!(\n            \"failed to encode commit-store change entity identity: {error}\"\n        ))\n    })?;\n\n    bytes.extend_from_slice(CHANGE_MAGIC);\n    write_str(bytes, change.id)?;\n    write_str(bytes, &entity_id)?;\n    write_str(bytes, change.schema_key)?;\n    write_optional_str(bytes, change.file_id)?;\n    write_optional_json_ref(bytes, change.snapshot_ref);\n    write_optional_json_ref(bytes, change.metadata_ref);\n    write_str(bytes, change.created_at)\n}\n\npub(crate) fn decode_change(bytes: &[u8]) -> Result<Change, LixError> {\n    let mut cursor = ByteCursor::new(bytes);\n    cursor.expect_magic(CHANGE_MAGIC, \"change\")?;\n    let id = cursor.read_string(\"id\")?;\n    let entity_id = cursor.read_string(\"entity_id\")?;\n    let entity_id = EntityIdentity::from_json_array_text(&entity_id).map_err(|error| {\n        LixError::unknown(format!(\n            \"failed to decode commit-store change entity identity: {error}\"\n        ))\n    })?;\n    let schema_key = cursor.read_string(\"schema_key\")?;\n    let file_id = cursor.read_optional_string(\"file_id\")?;\n    let snapshot_ref = cursor.read_optional_json_ref(\"snapshot_ref\")?;\n    let metadata_ref = cursor.read_optional_json_ref(\"metadata_ref\")?;\n    let created_at = cursor.read_string(\"created_at\")?;\n    cursor.expect_end(\"change\")?;\n    Ok(Change {\n        id,\n        entity_id,\n        schema_key,\n        file_id,\n        snapshot_ref,\n        metadata_ref,\n        created_at,\n    })\n}\n\npub(crate) fn encode_change_pack(\n    commit_id: &str,\n    pack_id: u32,\n    changes: 
&[ChangeRef<'_>],\n) -> Result<Vec<u8>, LixError> {\n    let mut bytes = Vec::new();\n    bytes.extend_from_slice(CHANGE_PACK_MAGIC);\n    write_var_str(&mut bytes, commit_id, \"change pack commit_id\")?;\n    bytes.extend_from_slice(&pack_id.to_le_bytes());\n    let (shapes, change_shape_indexes) = change_shapes(changes);\n    write_var_len(&mut bytes, shapes.len(), \"change pack shapes\")?;\n    for shape in &shapes {\n        write_var_str(&mut bytes, shape.schema_key, \"schema_key\")?;\n        write_optional_var_str(&mut bytes, shape.file_id, \"file_id\")?;\n    }\n    write_var_len(&mut bytes, changes.len(), \"change pack changes\")?;\n    for (change, shape_index) in changes.iter().copied().zip(change_shape_indexes) {\n        write_var_change_id(&mut bytes, commit_id, change.id)?;\n        write_var_entity_identity(&mut bytes, change.entity_id)?;\n        write_var_len(&mut bytes, shape_index, \"change shape index\")?;\n        write_optional_json_ref(&mut bytes, change.snapshot_ref);\n        write_optional_json_ref(&mut bytes, change.metadata_ref);\n        write_var_str(&mut bytes, change.created_at, \"created_at\")?;\n    }\n    Ok(bytes)\n}\n\npub(crate) fn decode_change_pack(bytes: &[u8]) -> Result<(String, u32, Vec<Change>), LixError> {\n    let mut cursor = ByteCursor::new(bytes);\n    cursor.expect_magic(CHANGE_PACK_MAGIC, \"change pack\")?;\n    let commit_id = cursor.read_var_string(\"commit_id\")?;\n    let pack_id = cursor.read_u32(\"pack_id\")?;\n    let shape_count = cursor.read_var_usize(\"shape_count\")?;\n    let mut shapes = Vec::with_capacity(shape_count);\n    for _ in 0..shape_count {\n        shapes.push(ChangeShape {\n            schema_key: cursor.read_var_string(\"schema_key\")?,\n            file_id: cursor.read_optional_var_string(\"file_id\")?,\n        });\n    }\n    let change_count = cursor.read_var_usize(\"change_count\")?;\n    let mut changes = Vec::with_capacity(change_count);\n    for _ in 0..change_count {\n        let 
id = cursor.read_var_change_id(&commit_id)?;\n        let entity_id = cursor.read_var_entity_identity()?;\n        let shape_index = cursor.read_var_usize(\"shape_index\")?;\n        let shape = shapes.get(shape_index).ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store change pack: shape index {shape_index} is out of bounds\"),\n            )\n        })?;\n        let snapshot_ref = cursor.read_optional_json_ref(\"snapshot_ref\")?;\n        let metadata_ref = cursor.read_optional_json_ref(\"metadata_ref\")?;\n        let created_at = cursor.read_var_string(\"created_at\")?;\n        changes.push(Change {\n            id,\n            entity_id,\n            schema_key: shape.schema_key.clone(),\n            file_id: shape.file_id.clone(),\n            snapshot_ref,\n            metadata_ref,\n            created_at,\n        });\n    }\n    cursor.expect_end(\"change pack\")?;\n    Ok((commit_id, pack_id, changes))\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\nstruct ChangeShapeRef<'a> {\n    schema_key: &'a str,\n    file_id: Option<&'a str>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct ChangeShape {\n    schema_key: String,\n    file_id: Option<String>,\n}\n\nfn change_shapes<'a>(changes: &'a [ChangeRef<'a>]) -> (Vec<ChangeShapeRef<'a>>, Vec<usize>) {\n    let mut shapes = Vec::new();\n    let mut shape_indexes = Vec::with_capacity(changes.len());\n    for change in changes {\n        let shape = ChangeShapeRef {\n            schema_key: change.schema_key,\n            file_id: change.file_id,\n        };\n        let shape_index = match shapes.iter().position(|candidate| *candidate == shape) {\n            Some(shape_index) => shape_index,\n            None => {\n                let shape_index = shapes.len();\n                shapes.push(shape);\n                shape_index\n            }\n        };\n        shape_indexes.push(shape_index);\n    }\n 
   (shapes, shape_indexes)\n}\n\npub(crate) fn encode_membership_pack<'a>(\n    commit_id: &str,\n    pack_id: u32,\n    members: impl IntoIterator<Item = ChangeLocatorRef<'a>>,\n) -> Result<Vec<u8>, LixError> {\n    let members = members.into_iter().collect::<Vec<_>>();\n    let mut bytes = Vec::new();\n    bytes.extend_from_slice(MEMBERSHIP_PACK_MAGIC);\n    write_str(&mut bytes, commit_id)?;\n    bytes.extend_from_slice(&pack_id.to_le_bytes());\n    write_len(&mut bytes, members.len(), \"membership pack members\")?;\n    for member in members {\n        encode_locator(&mut bytes, member)?;\n    }\n    Ok(bytes)\n}\n\npub(crate) fn decode_membership_pack(\n    bytes: &[u8],\n) -> Result<(String, u32, Vec<ChangeLocator>), LixError> {\n    let mut cursor = ByteCursor::new(bytes);\n    cursor.expect_magic(MEMBERSHIP_PACK_MAGIC, \"membership pack\")?;\n    let commit_id = cursor.read_string(\"commit_id\")?;\n    let pack_id = cursor.read_u32(\"pack_id\")?;\n    let member_count = cursor.read_u32(\"member_count\")? 
as usize;\n    let mut members = Vec::with_capacity(member_count);\n    for _ in 0..member_count {\n        members.push(decode_locator(&mut cursor)?);\n    }\n    cursor.expect_end(\"membership pack\")?;\n    Ok((commit_id, pack_id, members))\n}\n\nfn encode_locator(bytes: &mut Vec<u8>, locator: ChangeLocatorRef<'_>) -> Result<(), LixError> {\n    write_str(bytes, locator.source_commit_id)?;\n    bytes.extend_from_slice(&locator.source_pack_id.to_le_bytes());\n    bytes.extend_from_slice(&locator.source_ordinal.to_le_bytes());\n    write_str(bytes, locator.change_id)\n}\n\nfn decode_locator(cursor: &mut ByteCursor<'_>) -> Result<ChangeLocator, LixError> {\n    Ok(ChangeLocator {\n        source_commit_id: cursor.read_string(\"source_commit_id\")?,\n        source_pack_id: cursor.read_u32(\"source_pack_id\")?,\n        source_ordinal: cursor.read_u32(\"source_ordinal\")?,\n        change_id: cursor.read_string(\"change_id\")?,\n    })\n}\n\nfn write_str(bytes: &mut Vec<u8>, value: &str) -> Result<(), LixError> {\n    let len = u32::try_from(value.len()).map_err(|_| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"commit-store string field exceeds u32 length\",\n        )\n    })?;\n    bytes.extend_from_slice(&len.to_le_bytes());\n    bytes.extend_from_slice(value.as_bytes());\n    Ok(())\n}\n\nfn write_optional_str(bytes: &mut Vec<u8>, value: Option<&str>) -> Result<(), LixError> {\n    match value {\n        Some(value) => {\n            bytes.push(1);\n            write_str(bytes, value)?;\n        }\n        None => bytes.push(0),\n    }\n    Ok(())\n}\n\nfn write_optional_json_ref(bytes: &mut Vec<u8>, value: Option<&JsonRef>) {\n    match value {\n        Some(value) => {\n            bytes.push(1);\n            bytes.extend_from_slice(value.as_hash_bytes());\n        }\n        None => bytes.push(0),\n    }\n}\n\nfn write_len(bytes: &mut Vec<u8>, len: usize, field: &str) -> Result<(), LixError> {\n    let len = 
u32::try_from(len).map_err(|_| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\"commit-store {field} exceeds u32 length\"),\n        )\n    })?;\n    bytes.extend_from_slice(&len.to_le_bytes());\n    Ok(())\n}\n\nfn write_var_len(bytes: &mut Vec<u8>, len: usize, field: &str) -> Result<(), LixError> {\n    let mut value = u32::try_from(len).map_err(|_| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\"commit-store {field} exceeds u32 length\"),\n        )\n    })?;\n    while value >= 0x80 {\n        bytes.push((value as u8 & 0x7f) | 0x80);\n        value >>= 7;\n    }\n    bytes.push(value as u8);\n    Ok(())\n}\n\nfn write_var_str(bytes: &mut Vec<u8>, value: &str, field: &str) -> Result<(), LixError> {\n    write_var_len(bytes, value.len(), field)?;\n    bytes.extend_from_slice(value.as_bytes());\n    Ok(())\n}\n\nfn write_optional_var_str(\n    bytes: &mut Vec<u8>,\n    value: Option<&str>,\n    field: &str,\n) -> Result<(), LixError> {\n    match value {\n        Some(value) => {\n            bytes.push(1);\n            write_var_str(bytes, value, field)?;\n        }\n        None => bytes.push(0),\n    }\n    Ok(())\n}\n\nfn write_change_id(bytes: &mut Vec<u8>, commit_id: &str, change_id: &str) -> Result<(), LixError> {\n    if let Some(suffix) = change_id.strip_prefix(commit_id) {\n        bytes.push(CHANGE_ID_COMMIT_SUFFIX);\n        write_str(bytes, suffix)\n    } else {\n        bytes.push(CHANGE_ID_FULL);\n        write_str(bytes, change_id)\n    }\n}\n\nfn write_var_change_id(\n    bytes: &mut Vec<u8>,\n    commit_id: &str,\n    change_id: &str,\n) -> Result<(), LixError> {\n    if let Some(suffix) = change_id.strip_prefix(commit_id) {\n        bytes.push(CHANGE_ID_COMMIT_SUFFIX);\n        write_var_str(bytes, suffix, \"change_id\")\n    } else {\n        bytes.push(CHANGE_ID_FULL);\n        write_var_str(bytes, change_id, \"change_id\")\n    }\n}\n\nfn 
write_entity_identity(bytes: &mut Vec<u8>, identity: &EntityIdentity) -> Result<(), LixError> {\n    write_len(\n        bytes,\n        identity.parts.len(),\n        \"commit-store entity identity parts\",\n    )?;\n    for part in &identity.parts {\n        write_str(bytes, part)?;\n    }\n    Ok(())\n}\n\nfn write_var_entity_identity(\n    bytes: &mut Vec<u8>,\n    identity: &EntityIdentity,\n) -> Result<(), LixError> {\n    write_var_len(\n        bytes,\n        identity.parts.len(),\n        \"commit-store entity identity parts\",\n    )?;\n    for part in &identity.parts {\n        write_var_str(bytes, part, \"entity identity part\")?;\n    }\n    Ok(())\n}\n\nfn write_strs<'a>(\n    bytes: &mut Vec<u8>,\n    values: impl IntoIterator<Item = &'a str>,\n) -> Result<(), LixError> {\n    let values = values.into_iter().collect::<Vec<_>>();\n    let len = u32::try_from(values.len()).map_err(|_| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"commit-store string vector field exceeds u32 length\",\n        )\n    })?;\n    bytes.extend_from_slice(&len.to_le_bytes());\n    for value in values {\n        write_str(bytes, value)?;\n    }\n    Ok(())\n}\n\nstruct ByteCursor<'a> {\n    bytes: &'a [u8],\n    offset: usize,\n}\n\nimpl<'a> ByteCursor<'a> {\n    fn new(bytes: &'a [u8]) -> Self {\n        Self { bytes, offset: 0 }\n    }\n\n    fn expect_magic(&mut self, magic: &[u8], label: &str) -> Result<(), LixError> {\n        if self.bytes.len() < magic.len() || &self.bytes[..magic.len()] != magic {\n            return Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store {label}: invalid magic\"),\n            ));\n        }\n        self.offset = magic.len();\n        Ok(())\n    }\n\n    fn read_string(&mut self, field: &str) -> Result<String, LixError> {\n        let len = self.read_u32(field)? 
as usize;\n        let end = self.offset.checked_add(len).ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store field `{field}`: length overflow\"),\n            )\n        })?;\n        let bytes = self.bytes.get(self.offset..end).ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store field `{field}`: truncated string\"),\n            )\n        })?;\n        self.offset = end;\n        String::from_utf8(bytes.to_vec()).map_err(|error| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store field `{field}` as UTF-8: {error}\"),\n            )\n        })\n    }\n\n    fn read_strings(&mut self, field: &str) -> Result<Vec<String>, LixError> {\n        let count = self.read_u32(field)? as usize;\n        let mut values = Vec::with_capacity(count);\n        for _ in 0..count {\n            values.push(self.read_string(field)?);\n        }\n        Ok(values)\n    }\n\n    fn read_optional_string(&mut self, field: &str) -> Result<Option<String>, LixError> {\n        match self.read_u8(field)? {\n            0 => Ok(None),\n            1 => self.read_string(field).map(Some),\n            tag => Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store field `{field}`: invalid option tag {tag}\"),\n            )),\n        }\n    }\n\n    fn read_optional_json_ref(&mut self, field: &str) -> Result<Option<JsonRef>, LixError> {\n        match self.read_u8(field)? 
{\n            0 => Ok(None),\n            1 => {\n                let end = self.offset.checked_add(32).ok_or_else(|| {\n                    LixError::new(\n                        LixError::CODE_INTERNAL_ERROR,\n                        format!(\"failed to decode commit-store field `{field}`: offset overflow\"),\n                    )\n                })?;\n                let bytes = self.bytes.get(self.offset..end).ok_or_else(|| {\n                    LixError::new(\n                        LixError::CODE_INTERNAL_ERROR,\n                        format!(\"failed to decode commit-store field `{field}`: truncated ref\"),\n                    )\n                })?;\n                self.offset = end;\n                let hash = <[u8; 32]>::try_from(bytes).expect(\"json ref length was checked\");\n                Ok(Some(JsonRef::from_hash_bytes(hash)))\n            }\n            tag => Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store field `{field}`: invalid option tag {tag}\"),\n            )),\n        }\n    }\n\n    fn read_u8(&mut self, field: &str) -> Result<u8, LixError> {\n        let byte = self.bytes.get(self.offset).copied().ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store field `{field}`: truncated u8\"),\n            )\n        })?;\n        self.offset += 1;\n        Ok(byte)\n    }\n\n    fn read_u32(&mut self, field: &str) -> Result<u32, LixError> {\n        let end = self.offset.checked_add(4).ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store field `{field}`: offset overflow\"),\n            )\n        })?;\n        let bytes = self.bytes.get(self.offset..end).ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                
format!(\"failed to decode commit-store field `{field}`: truncated u32\"),\n            )\n        })?;\n        self.offset = end;\n        Ok(u32::from_le_bytes(\n            bytes\n                .try_into()\n                .expect(\"slice length was checked before u32 decode\"),\n        ))\n    }\n\n    fn read_var_usize(&mut self, field: &str) -> Result<usize, LixError> {\n        let mut value = 0u32;\n        let mut shift = 0u32;\n        for byte_index in 0..5 {\n            let byte = self.read_u8(field)?;\n            if shift == 28 && (byte & 0x80 != 0 || byte & 0x70 != 0) {\n                return Err(LixError::new(\n                    LixError::CODE_INTERNAL_ERROR,\n                    format!(\"failed to decode commit-store field `{field}`: varint exceeds u32\"),\n                ));\n            }\n            if byte_index > 0 && byte & 0x80 == 0 && byte == 0 {\n                return Err(LixError::new(\n                    LixError::CODE_INTERNAL_ERROR,\n                    format!(\"failed to decode commit-store field `{field}`: non-canonical varint\"),\n                ));\n            }\n            value |= ((byte & 0x7f) as u32) << shift;\n            if byte & 0x80 == 0 {\n                return Ok(value as usize);\n            }\n            shift += 7;\n        }\n        Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\"failed to decode commit-store field `{field}`: varint exceeds u32\"),\n        ))\n    }\n\n    fn read_var_string(&mut self, field: &str) -> Result<String, LixError> {\n        let len = self.read_var_usize(field)?;\n        let end = self.offset.checked_add(len).ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store field `{field}`: length overflow\"),\n            )\n        })?;\n        let bytes = self.bytes.get(self.offset..end).ok_or_else(|| {\n            LixError::new(\n            
    LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store field `{field}`: truncated string\"),\n            )\n        })?;\n        self.offset = end;\n        String::from_utf8(bytes.to_vec()).map_err(|error| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store field `{field}` as UTF-8: {error}\"),\n            )\n        })\n    }\n\n    fn read_optional_var_string(&mut self, field: &str) -> Result<Option<String>, LixError> {\n        match self.read_u8(field)? {\n            0 => Ok(None),\n            1 => self.read_var_string(field).map(Some),\n            tag => Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store field `{field}`: invalid option tag {tag}\"),\n            )),\n        }\n    }\n\n    fn read_change_id(&mut self, commit_id: &str) -> Result<String, LixError> {\n        let tag = self.read_u8(\"change_id tag\")?;\n        let value = self.read_string(\"change_id\")?;\n        match tag {\n            CHANGE_ID_FULL => Ok(value),\n            CHANGE_ID_COMMIT_SUFFIX => Ok(format!(\"{commit_id}{value}\")),\n            tag => Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store field `change_id`: invalid tag {tag}\"),\n            )),\n        }\n    }\n\n    fn read_var_change_id(&mut self, commit_id: &str) -> Result<String, LixError> {\n        let tag = self.read_u8(\"change_id tag\")?;\n        let value = self.read_var_string(\"change_id\")?;\n        match tag {\n            CHANGE_ID_FULL => Ok(value),\n            CHANGE_ID_COMMIT_SUFFIX => Ok(format!(\"{commit_id}{value}\")),\n            tag => Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store field `change_id`: invalid tag {tag}\"),\n            )),\n        
}\n    }\n\n    fn read_entity_identity(&mut self) -> Result<EntityIdentity, LixError> {\n        let count = self.read_u32(\"entity identity part count\")? as usize;\n        let mut parts = Vec::with_capacity(count);\n        for _ in 0..count {\n            parts.push(self.read_string(\"entity identity part\")?);\n        }\n        if parts.is_empty() {\n            return Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"failed to decode commit-store entity identity: empty identity\",\n            ));\n        }\n        Ok(EntityIdentity { parts })\n    }\n\n    fn read_var_entity_identity(&mut self) -> Result<EntityIdentity, LixError> {\n        let count = self.read_var_usize(\"entity identity part count\")?;\n        let mut parts = Vec::with_capacity(count);\n        for _ in 0..count {\n            parts.push(self.read_var_string(\"entity identity part\")?);\n        }\n        if parts.is_empty() {\n            return Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"failed to decode commit-store entity identity: empty identity\",\n            ));\n        }\n        Ok(EntityIdentity { parts })\n    }\n\n    fn expect_end(&self, label: &str) -> Result<(), LixError> {\n        if self.offset != self.bytes.len() {\n            return Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"failed to decode commit-store {label}: trailing bytes\"),\n            ));\n        }\n        Ok(())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn commit_codec_roundtrips() {\n        let commit = Commit {\n            id: \"commit-1\".to_string(),\n            change_id: \"commit-change-1\".to_string(),\n            parent_ids: vec![\"parent-1\".to_string(), \"parent-2\".to_string()],\n            author_account_ids: vec![\"author-1\".to_string()],\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            
change_pack_count: 2,\n            membership_pack_count: 1,\n        };\n\n        let encoded = encode_commit_ref(commit.as_ref()).expect(\"commit should encode\");\n        let decoded = decode_commit(&encoded).expect(\"commit should decode\");\n\n        assert_eq!(decoded, commit);\n    }\n\n    #[test]\n    fn change_codec_roundtrips() {\n        let change = Change {\n            id: \"change-1\".to_string(),\n            entity_id: EntityIdentity::single(\"entity-1\"),\n            schema_key: \"test_schema\".to_string(),\n            file_id: Some(\"file-1\".to_string()),\n            snapshot_ref: Some(JsonRef::from_hash_bytes([1; 32])),\n            metadata_ref: Some(JsonRef::from_hash_bytes([2; 32])),\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n        };\n\n        let encoded = encode_change_ref(change.as_ref()).expect(\"change should encode\");\n        let decoded = decode_change(&encoded).expect(\"change should decode\");\n\n        assert_eq!(decoded, change);\n    }\n\n    #[test]\n    fn change_codec_roundtrips_empty_optionals() {\n        let change = Change {\n            id: \"change-1\".to_string(),\n            entity_id: EntityIdentity::single(\"entity-1\"),\n            schema_key: \"test_schema\".to_string(),\n            file_id: None,\n            snapshot_ref: None,\n            metadata_ref: None,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n        };\n\n        let encoded = encode_change_ref(change.as_ref()).expect(\"change should encode\");\n        let decoded = decode_change(&encoded).expect(\"change should decode\");\n\n        assert_eq!(decoded, change);\n    }\n\n    #[test]\n    fn change_pack_compacts_shared_shape_and_commit_id_prefix() {\n        let changes = [\n            Change {\n                id: \"commit-1:change-1\".to_string(),\n                entity_id: EntityIdentity::single(\"entity-1\"),\n                schema_key: \"test_schema\".to_string(),\n                
file_id: Some(\"file-1\".to_string()),\n                snapshot_ref: Some(JsonRef::from_hash_bytes([1; 32])),\n                metadata_ref: None,\n                created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            },\n            Change {\n                id: \"external-change\".to_string(),\n                entity_id: EntityIdentity::single(\"entity-2\"),\n                schema_key: \"test_schema\".to_string(),\n                file_id: Some(\"file-1\".to_string()),\n                snapshot_ref: None,\n                metadata_ref: Some(JsonRef::from_hash_bytes([2; 32])),\n                created_at: \"2026-01-02T00:00:00Z\".to_string(),\n            },\n        ];\n\n        let encoded = encode_change_pack(\n            \"commit-1\",\n            7,\n            &changes.iter().map(Change::as_ref).collect::<Vec<_>>(),\n        )\n        .expect(\"pack should encode\");\n        let (commit_id, pack_id, decoded) =\n            decode_change_pack(&encoded).expect(\"pack should decode\");\n\n        assert_eq!(commit_id, \"commit-1\");\n        assert_eq!(pack_id, 7);\n        assert_eq!(decoded, changes);\n\n        let mut cursor = ByteCursor::new(&encoded);\n        cursor\n            .expect_magic(CHANGE_PACK_MAGIC, \"change pack\")\n            .unwrap();\n        assert_eq!(cursor.read_var_string(\"commit_id\").unwrap(), \"commit-1\");\n        assert_eq!(cursor.read_u32(\"pack_id\").unwrap(), 7);\n        assert_eq!(cursor.read_var_usize(\"shape_count\").unwrap(), 1);\n        assert_eq!(cursor.read_var_string(\"schema_key\").unwrap(), \"test_schema\");\n        assert_eq!(\n            cursor\n                .read_optional_var_string(\"file_id\")\n                .unwrap()\n                .as_deref(),\n            Some(\"file-1\")\n        );\n        assert_eq!(cursor.read_var_usize(\"change_count\").unwrap(), 2);\n        assert_eq!(\n            cursor.read_u8(\"change_id tag\").unwrap(),\n            CHANGE_ID_COMMIT_SUFFIX\n        
);\n        assert_eq!(cursor.read_var_string(\"change_id\").unwrap(), \":change-1\");\n    }\n\n    #[test]\n    fn change_pack_rejects_overlong_varint() {\n        let mut encoded = Vec::new();\n        encoded.extend_from_slice(CHANGE_PACK_MAGIC);\n        encoded.extend_from_slice(&[0x80, 0x80, 0x80, 0x80, 0x80]);\n\n        let error = decode_change_pack(&encoded).expect_err(\"overlong varint should reject\");\n        assert!(\n            error.to_string().contains(\"varint exceeds u32\"),\n            \"error should mention overlong varint: {error}\"\n        );\n    }\n\n    #[test]\n    fn change_pack_rejects_varint_above_u32() {\n        let mut encoded = Vec::new();\n        encoded.extend_from_slice(CHANGE_PACK_MAGIC);\n        encoded.extend_from_slice(&[0xff, 0xff, 0xff, 0xff, 0x1f]);\n\n        let error = decode_change_pack(&encoded).expect_err(\"too-large varint should reject\");\n        assert!(\n            error.to_string().contains(\"varint exceeds u32\"),\n            \"error should mention oversized varint: {error}\"\n        );\n    }\n\n    #[test]\n    fn change_pack_rejects_non_canonical_varint() {\n        let mut encoded = Vec::new();\n        encoded.extend_from_slice(CHANGE_PACK_MAGIC);\n        encoded.extend_from_slice(&[0x80, 0x00]);\n\n        let error = decode_change_pack(&encoded).expect_err(\"non-canonical varint should reject\");\n        assert!(\n            error.to_string().contains(\"non-canonical varint\"),\n            \"error should mention non-canonical varint: {error}\"\n        );\n    }\n\n    #[test]\n    fn change_codec_rejects_invalid_optional_tag() {\n        let change = Change {\n            id: \"change-1\".to_string(),\n            entity_id: EntityIdentity::single(\"entity-1\"),\n            schema_key: \"test_schema\".to_string(),\n            file_id: None,\n            snapshot_ref: None,\n            metadata_ref: None,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n        };\n     
   let mut encoded = encode_change_ref(change.as_ref()).expect(\"change should encode\");\n        let mut cursor = ByteCursor::new(&encoded);\n        cursor.expect_magic(CHANGE_MAGIC, \"change\").unwrap();\n        cursor.read_string(\"id\").unwrap();\n        cursor.read_string(\"entity_id\").unwrap();\n        cursor.read_string(\"schema_key\").unwrap();\n        let file_tag_offset = cursor.offset;\n        encoded[file_tag_offset] = 2;\n\n        let error = decode_change(&encoded).expect_err(\"invalid optional tag should fail\");\n        assert!(\n            error.to_string().contains(\"invalid option tag\"),\n            \"error should mention invalid tag: {error}\"\n        );\n    }\n\n    #[test]\n    fn change_codec_rejects_truncated_json_ref() {\n        let change = Change {\n            id: \"change-1\".to_string(),\n            entity_id: EntityIdentity::single(\"entity-1\"),\n            schema_key: \"test_schema\".to_string(),\n            file_id: None,\n            snapshot_ref: Some(JsonRef::from_hash_bytes([1; 32])),\n            metadata_ref: None,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n        };\n        let mut encoded = encode_change_ref(change.as_ref()).expect(\"change should encode\");\n        let mut cursor = ByteCursor::new(&encoded);\n        cursor.expect_magic(CHANGE_MAGIC, \"change\").unwrap();\n        cursor.read_string(\"id\").unwrap();\n        cursor.read_string(\"entity_id\").unwrap();\n        cursor.read_string(\"schema_key\").unwrap();\n        cursor.read_optional_string(\"file_id\").unwrap();\n        cursor.read_u8(\"snapshot_ref\").unwrap();\n        encoded.truncate(cursor.offset + 16);\n\n        let error = decode_change(&encoded).expect_err(\"truncated ref should fail\");\n        assert!(\n            error.to_string().contains(\"truncated ref\"),\n            \"error should mention truncation: {error}\"\n        );\n    }\n\n    #[test]\n    fn change_codec_rejects_trailing_bytes() 
{\n        let change = Change {\n            id: \"change-1\".to_string(),\n            entity_id: EntityIdentity::single(\"entity-1\"),\n            schema_key: \"test_schema\".to_string(),\n            file_id: None,\n            snapshot_ref: None,\n            metadata_ref: None,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n        };\n        let mut encoded = encode_change_ref(change.as_ref()).expect(\"change should encode\");\n        encoded.push(0);\n\n        let error = decode_change(&encoded).expect_err(\"trailing bytes should fail\");\n        assert!(\n            error.to_string().contains(\"trailing bytes\"),\n            \"error should mention trailing bytes: {error}\"\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/commit_store/context.rs",
    "content": "use crate::commit_store::{\n    Change, ChangeIndexEntry, ChangeLocator, ChangeRef, ChangeScanRequest, Commit, CommitDraftRef,\n    LocatedChange, StagedCommitStoreCommit,\n};\nuse crate::storage::{StorageReader, StorageWriteSet};\nuse crate::LixError;\nuse std::collections::{BTreeMap, BTreeSet};\nuse tokio::sync::Mutex;\n\n/// Canonical physical storage boundary for commits and their changes.\n#[derive(Clone, Copy, Debug, Default)]\npub(crate) struct CommitStoreContext;\n\nimpl CommitStoreContext {\n    pub(crate) fn new() -> Self {\n        Self\n    }\n\n    /// Creates a commit-store writer over read visibility and a pending write set.\n    pub(crate) fn writer<'a, S>(\n        &self,\n        store: &'a mut S,\n        writes: &'a mut StorageWriteSet,\n    ) -> CommitStoreWriter<'a, S>\n    where\n        S: StorageReader + ?Sized,\n    {\n        CommitStoreWriter { store, writes }\n    }\n\n    /// Creates a commit-store reader over a storage snapshot or transaction.\n    pub(crate) fn reader<S>(&self, store: S) -> CommitStoreReader<S>\n    where\n        S: StorageReader,\n    {\n        CommitStoreReader {\n            store: Mutex::new(store),\n        }\n    }\n\n    pub(crate) async fn load_commit_from(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        commit_id: &str,\n    ) -> Result<Option<Commit>, LixError> {\n        crate::commit_store::storage::load_commit(store, commit_id).await\n    }\n\n    pub(crate) async fn load_change_pack_from(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        commit_id: &str,\n        pack_id: u32,\n    ) -> Result<Option<Vec<Change>>, LixError> {\n        crate::commit_store::storage::load_change_pack(store, commit_id, pack_id).await\n    }\n\n    pub(crate) async fn load_membership_pack_from(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        commit_id: &str,\n        pack_id: u32,\n    ) -> 
Result<Option<Vec<ChangeLocator>>, LixError> {\n        crate::commit_store::storage::load_membership_pack(store, commit_id, pack_id).await\n    }\n}\n\n/// Commit-store reader over a storage snapshot or transaction.\npub(crate) struct CommitStoreReader<S> {\n    store: Mutex<S>,\n}\n\nimpl<S> CommitStoreReader<S>\nwhere\n    S: StorageReader,\n{\n    pub(crate) async fn load_change_index_entries(\n        &self,\n        change_ids: &[String],\n    ) -> Result<Vec<Option<crate::commit_store::ChangeIndexEntry>>, LixError> {\n        crate::commit_store::storage::load_change_index_entries(\n            &mut *self.store.lock().await,\n            change_ids,\n        )\n        .await\n    }\n\n    pub(crate) async fn load_commit(\n        &self,\n        commit_id: &str,\n    ) -> Result<Option<crate::commit_store::Commit>, LixError> {\n        crate::commit_store::storage::load_commit(&mut *self.store.lock().await, commit_id).await\n    }\n\n    pub(crate) async fn scan_commits(&self) -> Result<Vec<crate::commit_store::Commit>, LixError> {\n        crate::commit_store::storage::scan_commits(&mut *self.store.lock().await).await\n    }\n\n    pub(crate) async fn load_change_pack(\n        &self,\n        commit_id: &str,\n        pack_id: u32,\n    ) -> Result<Option<Vec<crate::commit_store::Change>>, LixError> {\n        crate::commit_store::storage::load_change_pack(\n            &mut *self.store.lock().await,\n            commit_id,\n            pack_id,\n        )\n        .await\n    }\n\n    pub(crate) async fn load_membership_pack(\n        &self,\n        commit_id: &str,\n        pack_id: u32,\n    ) -> Result<Option<Vec<crate::commit_store::ChangeLocator>>, LixError> {\n        crate::commit_store::storage::load_membership_pack(\n            &mut *self.store.lock().await,\n            commit_id,\n            pack_id,\n        )\n        .await\n    }\n\n    pub(crate) async fn load_changes(\n        &self,\n        change_ids: &[String],\n    ) -> 
Result<Vec<Option<crate::commit_store::Change>>, LixError> {\n        if change_ids.is_empty() {\n            return Ok(Vec::new());\n        }\n\n        let mut store = self.store.lock().await;\n        let entries =\n            crate::commit_store::storage::load_change_index_entries(&mut *store, change_ids)\n                .await?;\n        let mut changes = Vec::with_capacity(entries.len());\n        let mut commits_by_id = BTreeMap::new();\n        let mut packs_by_locator = BTreeMap::new();\n        for (change_id, entry) in change_ids.iter().zip(entries) {\n            changes.push(match entry {\n                Some(ChangeIndexEntry::CommitHeader { commit_id, .. }) => {\n                    if !commits_by_id.contains_key(&commit_id) {\n                        let commit =\n                            crate::commit_store::storage::load_commit(&mut *store, &commit_id)\n                                .await?;\n                        commits_by_id.insert(commit_id.clone(), commit);\n                    }\n                    commits_by_id\n                        .get(&commit_id)\n                        .cloned()\n                        .flatten()\n                        .map(commit_header_change)\n                }\n                Some(ChangeIndexEntry::PackedChange { locator }) => Some(\n                    load_change_by_locator_cached(\n                        &mut *store,\n                        &mut packs_by_locator,\n                        &locator,\n                        change_id,\n                    )\n                    .await?,\n                ),\n                None => None,\n            });\n        }\n        Ok(changes)\n    }\n\n    pub(crate) async fn load_located_changes(\n        &self,\n        change_ids: &[String],\n    ) -> Result<Vec<Option<LocatedChange>>, LixError> {\n        if change_ids.is_empty() {\n            return Ok(Vec::new());\n        }\n\n        let mut store = self.store.lock().await;\n        let 
entries =\n            crate::commit_store::storage::load_change_index_entries(&mut *store, change_ids)\n                .await?;\n        let mut changes = Vec::with_capacity(entries.len());\n        let mut commits_by_id = BTreeMap::new();\n        let mut packs_by_locator = BTreeMap::new();\n        for (change_id, entry) in change_ids.iter().zip(entries) {\n            changes.push(match entry {\n                Some(ChangeIndexEntry::CommitHeader { commit_id, .. }) => {\n                    if !commits_by_id.contains_key(&commit_id) {\n                        let commit =\n                            crate::commit_store::storage::load_commit(&mut *store, &commit_id)\n                                .await?;\n                        commits_by_id.insert(commit_id.clone(), commit);\n                    }\n                    commits_by_id\n                        .get(&commit_id)\n                        .cloned()\n                        .flatten()\n                        .map(|commit| located_commit_header_change(commit, 0))\n                }\n                Some(ChangeIndexEntry::PackedChange { locator }) => Some(LocatedChange {\n                    record: load_change_by_locator_cached(\n                        &mut *store,\n                        &mut packs_by_locator,\n                        &locator,\n                        change_id,\n                    )\n                    .await?,\n                    source_commit_id: locator.source_commit_id,\n                    source_pack_id: locator.source_pack_id,\n                }),\n                None => None,\n            });\n        }\n        Ok(changes)\n    }\n\n    pub(crate) async fn load_commit_changes(\n        &self,\n        commit_id: &str,\n    ) -> Result<Vec<crate::commit_store::Change>, LixError> {\n        let mut store = self.store.lock().await;\n        let Some(commit) =\n            crate::commit_store::storage::load_commit(&mut *store, commit_id).await?\n        else {\n      
      return Ok(Vec::new());\n        };\n\n        let mut changes = Vec::new();\n        for pack_id in 0..commit.change_pack_count {\n            let Some(mut pack_changes) =\n                crate::commit_store::storage::load_change_pack(&mut *store, commit_id, pack_id)\n                    .await?\n            else {\n                return Err(missing_pack_error(\"change\", commit_id, pack_id));\n            };\n            changes.append(&mut pack_changes);\n        }\n\n        for pack_id in 0..commit.membership_pack_count {\n            let Some(locators) =\n                crate::commit_store::storage::load_membership_pack(&mut *store, commit_id, pack_id)\n                    .await?\n            else {\n                return Err(missing_pack_error(\"membership\", commit_id, pack_id));\n            };\n            for locator in locators {\n                let change =\n                    load_change_by_locator(&mut *store, &locator, &locator.change_id).await?;\n                changes.push(change);\n            }\n        }\n\n        Ok(changes)\n    }\n\n    pub(crate) async fn scan_changes(\n        &self,\n        request: &ChangeScanRequest,\n    ) -> Result<Vec<LocatedChange>, LixError> {\n        scan_changes_from_commit_store(&mut *self.store.lock().await, request).await\n    }\n}\n\n/// Commit-store writer over read visibility and a transaction-local write set.\npub(crate) struct CommitStoreWriter<'a, S: ?Sized> {\n    store: &'a mut S,\n    writes: &'a mut StorageWriteSet,\n}\n\nstruct PendingCommitDraft<'a> {\n    commit: CommitDraftRef<'a>,\n    authored_changes: Vec<ChangeRef<'a>>,\n    adopted_changes: Vec<ChangeRef<'a>>,\n}\n\nimpl<S> CommitStoreWriter<'_, S>\nwhere\n    S: StorageReader + ?Sized,\n{\n    /// Validates and stages canonical commit-store writes for complete commits.\n    ///\n    /// Callers provide logical commit facts and borrowed change facts. 
The\n    /// commit store owns change-id uniqueness, adoption resolution, pack\n    /// locators, and physical namespace writes.\n    pub(crate) async fn stage_commit_draft<'a>(\n        &mut self,\n        commit: CommitDraftRef<'a>,\n        authored_changes: Vec<ChangeRef<'a>>,\n        adopted_changes: Vec<ChangeRef<'a>>,\n    ) -> Result<StagedCommitStoreCommit, LixError> {\n        let mut staged = self\n            .stage_commit_drafts([(commit, authored_changes, adopted_changes)])\n            .await?;\n        staged.pop().ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"commit-store staged no result for one commit draft\",\n            )\n        })\n    }\n\n    /// Validates and stages a tracked commit whose authored rows will be stored\n    /// in the tracked-state delta pack instead of a duplicate commit-store pack.\n    pub(crate) async fn stage_tracked_commit_draft<'a>(\n        &mut self,\n        commit: CommitDraftRef<'a>,\n        authored_changes: Vec<ChangeRef<'a>>,\n        adopted_changes: Vec<ChangeRef<'a>>,\n    ) -> Result<StagedCommitStoreCommit, LixError> {\n        let mut staged = self\n            .stage_tracked_commit_drafts([(commit, authored_changes, adopted_changes)])\n            .await?;\n        staged.pop().ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"commit-store staged no result for one tracked commit draft\",\n            )\n        })\n    }\n\n    /// Validates and stages multiple commit drafts as one commit-store batch.\n    pub(crate) async fn stage_commit_drafts<'a>(\n        &mut self,\n        commits: impl IntoIterator<Item = (CommitDraftRef<'a>, Vec<ChangeRef<'a>>, Vec<ChangeRef<'a>>)>,\n    ) -> Result<Vec<StagedCommitStoreCommit>, LixError> {\n        self.stage_commit_drafts_with_authored_pack(commits, true)\n            .await\n    }\n\n    /// Validates and stages multiple tracked 
commit drafts whose authored rows\n    /// will be stored in tracked-state delta packs.\n    pub(crate) async fn stage_tracked_commit_drafts<'a>(\n        &mut self,\n        commits: impl IntoIterator<Item = (CommitDraftRef<'a>, Vec<ChangeRef<'a>>, Vec<ChangeRef<'a>>)>,\n    ) -> Result<Vec<StagedCommitStoreCommit>, LixError> {\n        self.stage_commit_drafts_with_authored_pack(commits, false)\n            .await\n    }\n\n    async fn stage_commit_drafts_with_authored_pack<'a>(\n        &mut self,\n        commits: impl IntoIterator<Item = (CommitDraftRef<'a>, Vec<ChangeRef<'a>>, Vec<ChangeRef<'a>>)>,\n        write_authored_change_pack: bool,\n    ) -> Result<Vec<StagedCommitStoreCommit>, LixError> {\n        let commits = commits\n            .into_iter()\n            .map(\n                |(commit, authored_changes, adopted_changes)| PendingCommitDraft {\n                    commit,\n                    authored_changes,\n                    adopted_changes,\n                },\n            )\n            .collect::<Vec<_>>();\n        let adopted_locators = validate_stage_commits(self.store, &commits).await?;\n        let mut staged = Vec::with_capacity(commits.len());\n        for commit in commits {\n            let mut adopted_changes = Vec::with_capacity(commit.adopted_changes.len());\n            for change in &commit.adopted_changes {\n                let Some(locator) = adopted_locators.get(change.id) else {\n                    return Err(LixError::new(\n                        LixError::CODE_INTERNAL_ERROR,\n                        format!(\n                            \"validated adopted commit-store change id '{}' has no locator\",\n                            change.id\n                        ),\n                    ));\n                };\n                adopted_changes.push(locator.clone());\n            }\n            staged.push(if write_authored_change_pack {\n                crate::commit_store::storage::stage_commit(\n                  
  self.writes,\n                    commit.commit,\n                    commit.authored_changes,\n                    adopted_changes,\n                )?\n            } else {\n                crate::commit_store::storage::stage_commit_with_external_authored_pack(\n                    self.writes,\n                    commit.commit,\n                    commit.authored_changes,\n                    adopted_changes,\n                )?\n            });\n        }\n        Ok(staged)\n    }\n}\n\nasync fn validate_stage_commits<'a>(\n    store: &mut (impl StorageReader + ?Sized),\n    commits: &[PendingCommitDraft<'a>],\n) -> Result<BTreeMap<&'a str, ChangeLocator>, LixError> {\n    validate_new_changes_absent(store, commits).await?;\n    validate_adopted_changes_present(store, commits).await\n}\n\nasync fn scan_changes_from_commit_store(\n    store: &mut (impl StorageReader + ?Sized),\n    request: &ChangeScanRequest,\n) -> Result<Vec<LocatedChange>, LixError> {\n    let limit = request.limit.unwrap_or(usize::MAX);\n    let commits = crate::commit_store::storage::scan_commits(store).await?;\n    let mut changes = Vec::new();\n    for commit in commits {\n        if changes.len() >= limit {\n            break;\n        }\n        for pack_id in 0..commit.change_pack_count {\n            if changes.len() >= limit {\n                break;\n            }\n            let Some(mut pack_changes) =\n                crate::commit_store::storage::load_change_pack(store, &commit.id, pack_id).await?\n            else {\n                return Err(missing_pack_error(\"change\", &commit.id, pack_id));\n            };\n            let remaining = limit - changes.len();\n            if pack_changes.len() > remaining {\n                pack_changes.truncate(remaining);\n            }\n            changes.extend(pack_changes.into_iter().map(|record| LocatedChange {\n                record,\n                source_commit_id: commit.id.clone(),\n                source_pack_id: 
pack_id,\n            }));\n        }\n        if changes.len() < limit {\n            changes.push(located_commit_header_change(commit, 0));\n        }\n    }\n    Ok(changes)\n}\n\nasync fn load_change_by_locator(\n    store: &mut (impl StorageReader + ?Sized),\n    locator: &ChangeLocator,\n    expected_change_id: &str,\n) -> Result<Change, LixError> {\n    let Some(changes) = crate::commit_store::storage::load_change_pack(\n        store,\n        &locator.source_commit_id,\n        locator.source_pack_id,\n    )\n    .await?\n    else {\n        return Err(missing_pack_error(\n            \"change\",\n            &locator.source_commit_id,\n            locator.source_pack_id,\n        ));\n    };\n    let change = changes\n        .get(usize::try_from(locator.source_ordinal).map_err(|_| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"commit-store change locator ordinal does not fit usize\",\n            )\n        })?)\n        .ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\n                    \"commit-store change locator for '{}' points past pack '{}' in commit '{}'\",\n                    expected_change_id, locator.source_pack_id, locator.source_commit_id\n                ),\n            )\n        })?;\n    if change.id != expected_change_id || change.id != locator.change_id {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"commit-store change locator expected '{}' but found '{}'\",\n                expected_change_id, change.id\n            ),\n        ));\n    }\n    Ok(change.clone())\n}\n\nasync fn load_change_by_locator_cached(\n    store: &mut (impl StorageReader + ?Sized),\n    packs_by_locator: &mut BTreeMap<(String, u32), Vec<Change>>,\n    locator: &ChangeLocator,\n    expected_change_id: &str,\n) -> Result<Change, LixError> {\n    let key = 
(locator.source_commit_id.clone(), locator.source_pack_id);\n    if !packs_by_locator.contains_key(&key) {\n        let Some(changes) = crate::commit_store::storage::load_change_pack(\n            store,\n            &locator.source_commit_id,\n            locator.source_pack_id,\n        )\n        .await?\n        else {\n            return Err(missing_pack_error(\n                \"change\",\n                &locator.source_commit_id,\n                locator.source_pack_id,\n            ));\n        };\n        packs_by_locator.insert(key.clone(), changes);\n    }\n    let changes = packs_by_locator.get(&key).ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"commit-store change pack cache lost a loaded pack\",\n        )\n    })?;\n    let change = changes\n        .get(usize::try_from(locator.source_ordinal).map_err(|_| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"commit-store change locator ordinal does not fit usize\",\n            )\n        })?)\n        .ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\n                    \"commit-store change locator for '{}' points past pack '{}' in commit '{}'\",\n                    expected_change_id, locator.source_pack_id, locator.source_commit_id\n                ),\n            )\n        })?;\n    if change.id != expected_change_id || change.id != locator.change_id {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"commit-store change locator expected '{}' but found '{}'\",\n                expected_change_id, change.id\n            ),\n        ));\n    }\n    Ok(change.clone())\n}\n\nfn commit_header_change(commit: Commit) -> Change {\n    Change {\n        id: commit.change_id,\n        entity_id: crate::entity_identity::EntityIdentity::single(commit.id),\n        
schema_key: \"lix_commit\".to_string(),\n        file_id: None,\n        snapshot_ref: None,\n        metadata_ref: None,\n        created_at: commit.created_at,\n    }\n}\n\nfn located_commit_header_change(commit: Commit, source_pack_id: u32) -> LocatedChange {\n    let source_commit_id = commit.id.clone();\n    LocatedChange {\n        record: commit_header_change(commit),\n        source_commit_id,\n        source_pack_id,\n    }\n}\n\nfn missing_pack_error(label: &str, commit_id: &str, pack_id: u32) -> LixError {\n    LixError::new(\n        LixError::CODE_INTERNAL_ERROR,\n        format!(\"commit-store missing {label} pack ({commit_id}, {pack_id})\"),\n    )\n}\n\nasync fn validate_new_changes_absent<'a>(\n    store: &mut (impl StorageReader + ?Sized),\n    commits: &[PendingCommitDraft<'a>],\n) -> Result<(), LixError> {\n    let mut change_ids = Vec::new();\n    let mut seen_change_ids = BTreeSet::new();\n    for commit in commits {\n        if !seen_change_ids.insert(commit.commit.change_id) {\n            return Err(duplicate_change_id_error(commit.commit.change_id));\n        }\n        change_ids.push(commit.commit.change_id.to_string());\n        for change in &commit.authored_changes {\n            if !seen_change_ids.insert(change.id) {\n                return Err(duplicate_change_id_error(change.id));\n            }\n            change_ids.push(change.id.to_string());\n        }\n    }\n\n    let reader = CommitStoreContext::new().reader(&mut *store);\n    let existing_changes = reader.load_change_index_entries(&change_ids).await?;\n    for (change_id, existing) in change_ids.iter().zip(existing_changes) {\n        if existing.is_some() {\n            return Err(LixError::new(\n                LixError::CODE_UNIQUE,\n                format!(\"commit-store change id '{}' already exists\", change_id),\n            ));\n        }\n    }\n    Ok(())\n}\n\nasync fn validate_adopted_changes_present<'a>(\n    store: &mut (impl StorageReader + ?Sized),\n    
commits: &[PendingCommitDraft<'a>],\n) -> Result<BTreeMap<&'a str, ChangeLocator>, LixError> {\n    let mut expected_changes = Vec::new();\n    let mut seen_change_ids = BTreeSet::new();\n    for commit in commits {\n        for change in &commit.adopted_changes {\n            if !seen_change_ids.insert(change.id) {\n                return Err(LixError::new(\n                    LixError::CODE_UNIQUE,\n                    format!(\n                        \"adopted commit-store change id '{}' appears more than once in the same transaction\",\n                        change.id\n                    ),\n                ));\n            }\n            expected_changes.push(*change);\n        }\n    }\n    if expected_changes.is_empty() {\n        return Ok(BTreeMap::new());\n    }\n\n    let change_ids = expected_changes\n        .iter()\n        .map(|change| change.id.to_string())\n        .collect::<Vec<_>>();\n    let reader = CommitStoreContext::new().reader(&mut *store);\n    let existing_entries = reader.load_change_index_entries(&change_ids).await?;\n    let mut locators_by_change_id = BTreeMap::new();\n    for (expected, existing) in expected_changes.into_iter().zip(existing_entries) {\n        match existing {\n            Some(ChangeIndexEntry::PackedChange { locator }) => {\n                let existing_change = load_packed_change(&reader, &locator, expected.id).await?;\n                if !change_matches_ref(&existing_change, expected) {\n                    let entity_id = existing_change\n                        .entity_id\n                        .as_json_array_text()\n                        .unwrap_or_else(|_| \"<invalid entity_id>\".to_string());\n                    return Err(LixError::new(\n                        LixError::CODE_UNIQUE,\n                        format!(\n                            \"adopted commit-store change id '{}' exists with different content for schema '{}' entity '{}'\",\n                            expected.id, 
existing_change.schema_key, entity_id\n                        ),\n                    ));\n                }\n                locators_by_change_id.insert(expected.id, locator);\n            }\n            Some(ChangeIndexEntry::CommitHeader { .. }) => {\n                return Err(LixError::new(\n                    LixError::CODE_INTERNAL_ERROR,\n                    format!(\n                        \"adopted commit-store change id '{}' resolves to a commit header, not a packed state change\",\n                        expected.id\n                    ),\n                ));\n            }\n            None => {\n                return Err(LixError::new(\n                    LixError::CODE_INTERNAL_ERROR,\n                    format!(\n                        \"adopted commit-store change id '{}' does not exist\",\n                        expected.id\n                    ),\n                ));\n            }\n        }\n    }\n    Ok(locators_by_change_id)\n}\n\nasync fn load_packed_change<S>(\n    reader: &CommitStoreReader<S>,\n    locator: &ChangeLocator,\n    expected_change_id: &str,\n) -> Result<Change, LixError>\nwhere\n    S: StorageReader,\n{\n    let pack = reader\n        .load_change_pack(&locator.source_commit_id, locator.source_pack_id)\n        .await?\n        .ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\n                    \"commit-store change pack '{}:{}' for change '{}' is missing\",\n                    locator.source_commit_id, locator.source_pack_id, expected_change_id\n                ),\n            )\n        })?;\n    let change = pack\n        .get(usize::try_from(locator.source_ordinal).map_err(|_| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"commit-store change locator ordinal exceeds usize\",\n            )\n        })?)\n        .ok_or_else(|| {\n            LixError::new(\n                
LixError::CODE_INTERNAL_ERROR,\n                format!(\n                    \"commit-store change locator '{}' points past pack length\",\n                    expected_change_id\n                ),\n            )\n        })?\n        .clone();\n    if change.id != expected_change_id {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"commit-store change locator expected '{}' but loaded '{}'\",\n                expected_change_id, change.id\n            ),\n        ));\n    }\n    Ok(change)\n}\n\nfn change_matches_ref(change: &Change, expected: ChangeRef<'_>) -> bool {\n    change.id == expected.id\n        && &change.entity_id == expected.entity_id\n        && change.schema_key == expected.schema_key\n        && change.file_id.as_deref() == expected.file_id\n        && change.snapshot_ref.as_ref() == expected.snapshot_ref\n        && change.metadata_ref.as_ref() == expected.metadata_ref\n        && change.created_at == expected.created_at\n}\n\nfn duplicate_change_id_error(change_id: &str) -> LixError {\n    LixError::new(\n        LixError::CODE_UNIQUE,\n        format!(\n            \"commit-store change id '{}' appears more than once in the same transaction\",\n            change_id\n        ),\n    )\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use crate::backend::testing::UnitTestBackend;\n    use crate::commit_store::{\n        ChangeIndexEntry, ChangeLocator, CommitDraftRef, CommitStoreContext,\n    };\n    use crate::entity_identity::EntityIdentity;\n    use crate::json_store::JsonRef;\n    use crate::storage::{StorageContext, StorageWriteSet, StorageWriteTransaction};\n\n    use super::*;\n\n    #[tokio::test]\n    async fn load_changes_materializes_commit_header_and_packed_change() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n      
      .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        let parent_ids = vec![\"parent-1\".to_string()];\n        let author_account_ids = vec![\"author-1\".to_string()];\n        let commit_id = \"commit-1\".to_string();\n        let commit_change_id = \"commit-change-1\".to_string();\n        let authored_change = test_change(\"change-1\");\n\n        CommitStoreContext::new()\n            .writer(transaction.as_mut(), &mut writes)\n            .stage_commit_draft(\n                CommitDraftRef {\n                    id: &commit_id,\n                    change_id: &commit_change_id,\n                    parent_ids: &parent_ids,\n                    author_account_ids: &author_account_ids,\n                    created_at: \"2026-01-01T00:00:00Z\",\n                },\n                vec![authored_change.as_ref()],\n                Vec::new(),\n            )\n            .await\n            .expect(\"commit should stage\");\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"writes should apply\");\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let reader = CommitStoreContext::new().reader(storage.clone());\n        let index_entries = reader\n            .load_change_index_entries(&[\n                commit_change_id.clone(),\n                authored_change.id.clone(),\n                \"missing-change\".to_string(),\n            ])\n            .await\n            .expect(\"index entries should load\");\n        assert_eq!(\n            index_entries,\n            vec![\n                Some(ChangeIndexEntry::CommitHeader {\n                    commit_id: commit_id.clone(),\n                    change_id: commit_change_id.clone(),\n                }),\n                Some(ChangeIndexEntry::PackedChange {\n                    locator: ChangeLocator {\n                        source_commit_id: commit_id.clone(),\n              
          source_pack_id: 0,\n                        source_ordinal: 0,\n                        change_id: authored_change.id.clone(),\n                    },\n                }),\n                None,\n            ]\n        );\n\n        let changes = reader\n            .load_changes(&[\n                commit_change_id.clone(),\n                authored_change.id.clone(),\n                \"missing-change\".to_string(),\n            ])\n            .await\n            .expect(\"changes should load\");\n        assert_eq!(changes.len(), 3);\n\n        let header_change = changes[0]\n            .as_ref()\n            .expect(\"commit-header change should materialize\");\n        assert_eq!(header_change.id, commit_change_id);\n        assert_eq!(header_change.entity_id, EntityIdentity::single(&commit_id));\n        assert_eq!(header_change.schema_key, \"lix_commit\");\n        assert_eq!(header_change.file_id, None);\n        assert_eq!(header_change.snapshot_ref, None);\n        assert_eq!(header_change.metadata_ref, None);\n        assert_eq!(header_change.created_at, \"2026-01-01T00:00:00Z\");\n\n        assert_eq!(\n            changes[1]\n                .as_ref()\n                .expect(\"packed change should decode from change pack\"),\n            &authored_change\n        );\n        assert_eq!(changes[2], None);\n    }\n\n    #[tokio::test]\n    async fn load_commit_changes_returns_equivalent_authored_and_adopted_changes() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let authored_change = test_change(\"shared-change-1\");\n\n        stage_test_commit(\n            storage.clone(),\n            \"source-commit\",\n            \"source-commit-change\",\n            vec![authored_change.as_ref()],\n            Vec::new(),\n        )\n        .await;\n        stage_test_commit(\n            storage.clone(),\n            \"adopting-commit\",\n            \"adopting-commit-change\",\n            Vec::new(),\n   
         vec![authored_change.as_ref()],\n        )\n        .await;\n\n        let reader = CommitStoreContext::new().reader(storage.clone());\n        let source_changes = reader\n            .load_commit_changes(\"source-commit\")\n            .await\n            .expect(\"source commit changes should load\");\n        let adopting_changes = reader\n            .load_commit_changes(\"adopting-commit\")\n            .await\n            .expect(\"adopting commit changes should load\");\n\n        assert_eq!(source_changes, vec![authored_change.clone()]);\n        assert_eq!(adopting_changes, source_changes);\n        assert_eq!(\n            reader\n                .load_membership_pack(\"adopting-commit\", 0)\n                .await\n                .expect(\"membership pack should load\"),\n            Some(vec![ChangeLocator {\n                source_commit_id: \"source-commit\".to_string(),\n                source_pack_id: 0,\n                source_ordinal: 0,\n                change_id: authored_change.id.clone(),\n            }])\n        );\n    }\n\n    async fn stage_test_commit(\n        storage: StorageContext,\n        commit_id: &str,\n        commit_change_id: &str,\n        authored_changes: Vec<ChangeRef<'_>>,\n        adopted_changes: Vec<ChangeRef<'_>>,\n    ) {\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        let parent_ids = Vec::new();\n        let author_account_ids = Vec::new();\n\n        CommitStoreContext::new()\n            .writer(transaction.as_mut(), &mut writes)\n            .stage_commit_draft(\n                CommitDraftRef {\n                    id: commit_id,\n                    change_id: commit_change_id,\n                    parent_ids: &parent_ids,\n                    author_account_ids: &author_account_ids,\n                    created_at: 
\"2026-01-01T00:00:00Z\",\n                },\n                authored_changes,\n                adopted_changes,\n            )\n            .await\n            .expect(\"commit should stage\");\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"writes should apply\");\n        transaction.commit().await.expect(\"commit should persist\");\n    }\n\n    fn test_change(id: &str) -> Change {\n        Change {\n            id: id.to_string(),\n            entity_id: EntityIdentity::single(\"entity-1\"),\n            schema_key: \"test_schema\".to_string(),\n            file_id: Some(\"file-1\".to_string()),\n            snapshot_ref: Some(JsonRef::from_hash_bytes([1; 32])),\n            metadata_ref: Some(JsonRef::from_hash_bytes([2; 32])),\n            created_at: \"2026-01-02T00:00:00Z\".to_string(),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/commit_store/materialization.rs",
    "content": "use crate::commit_store::{LocatedChange, MaterializedChange};\nuse crate::json_store::{JsonLoadRequestRef, JsonReadScopeRef, JsonRef, JsonStoreReader};\nuse crate::storage::StorageReader;\nuse crate::{parse_row_metadata, LixError};\n\npub(crate) async fn materialize_change<S>(\n    json_reader: &mut JsonStoreReader<S>,\n    located: LocatedChange,\n) -> Result<MaterializedChange, LixError>\nwhere\n    S: StorageReader,\n{\n    let change = located.record;\n    let pack_ids = [located.source_pack_id];\n    let scope = JsonReadScopeRef::CommitPacks {\n        commit_id: &located.source_commit_id,\n        pack_ids: &pack_ids,\n    };\n    let snapshot_content = load_optional_json_text(\n        json_reader,\n        change.snapshot_ref.as_ref(),\n        scope,\n        \"snapshot_ref\",\n    )\n    .await?;\n    let metadata = match load_optional_json_text(\n        json_reader,\n        change.metadata_ref.as_ref(),\n        scope,\n        \"metadata_ref\",\n    )\n    .await?\n    {\n        Some(value) => Some(parse_row_metadata(\n            &value,\n            \"commit_store change metadata_ref\",\n        )?),\n        None => None,\n    };\n    Ok(MaterializedChange {\n        id: change.id,\n        entity_id: change.entity_id,\n        schema_key: change.schema_key,\n        file_id: change.file_id,\n        snapshot_content,\n        metadata,\n        created_at: change.created_at,\n    })\n}\n\nasync fn load_optional_json_text<S>(\n    json_reader: &mut JsonStoreReader<S>,\n    json_ref: Option<&JsonRef>,\n    scope: JsonReadScopeRef<'_>,\n    field: &str,\n) -> Result<Option<String>, LixError>\nwhere\n    S: StorageReader,\n{\n    let Some(json_ref) = json_ref else {\n        return Ok(None);\n    };\n    let batch = json_reader\n        .load_bytes_many(JsonLoadRequestRef {\n            refs: std::slice::from_ref(json_ref),\n            scope,\n        })\n        .await?;\n    let Some(bytes) = 
batch.into_values().next().flatten() else {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"commit_store change {field} '{}' is missing\",\n                json_ref.to_hex()\n            ),\n        ));\n    };\n    String::from_utf8(bytes).map(Some).map_err(|error| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\"commit_store change {field} is not UTF-8 JSON: {error}\"),\n        )\n    })\n}\n"
  },
  {
    "path": "packages/engine/src/commit_store/mod.rs",
    "content": "pub(crate) mod codec;\nmod context;\nmod materialization;\npub(crate) mod storage;\nmod types;\n\n#[allow(unused_imports)]\npub(crate) use context::{CommitStoreContext, CommitStoreReader, CommitStoreWriter};\n#[allow(unused_imports)]\npub(crate) use materialization::materialize_change;\n#[allow(unused_imports)]\npub(crate) use types::{\n    Change, ChangeIndexEntry, ChangeLocator, ChangeLocatorRef, ChangePack, ChangePackView,\n    ChangeRef, ChangeScanRequest, Commit, CommitDraftRef, LocatedChange, MaterializedChange,\n    MembershipPack, MembershipPackView, StagedCommitStoreCommit, StoredCommitRef,\n};\n"
  },
  {
    "path": "packages/engine/src/commit_store/storage.rs",
    "content": "use crate::commit_store::{\n    Change, ChangeIndexEntry, ChangeLocator, ChangeRef, Commit, CommitDraftRef,\n    StagedCommitStoreCommit, StoredCommitRef,\n};\nuse crate::storage::{\n    KvGetGroup, KvGetRequest, KvScanRange, KvScanRequest, StorageReader, StorageWriteSet,\n};\nuse crate::LixError;\nuse std::collections::{BTreeMap, BTreeSet};\n\npub(crate) const COMMIT_NAMESPACE: &str = \"commit_store.commit\";\npub(crate) const CHANGE_PACK_NAMESPACE: &str = \"commit_store.change_pack\";\npub(crate) const MEMBERSHIP_PACK_NAMESPACE: &str = \"commit_store.membership_pack\";\n\nconst SINGLE_PACK_ID: u32 = 0;\n\npub(crate) fn stage_commit(\n    writes: &mut StorageWriteSet,\n    commit: CommitDraftRef<'_>,\n    authored_changes: Vec<ChangeRef<'_>>,\n    adopted_changes: Vec<ChangeLocator>,\n) -> Result<StagedCommitStoreCommit, LixError> {\n    stage_commit_with_authored_pack(writes, commit, authored_changes, adopted_changes, true)\n}\n\npub(crate) fn stage_commit_with_external_authored_pack(\n    writes: &mut StorageWriteSet,\n    commit: CommitDraftRef<'_>,\n    authored_changes: Vec<ChangeRef<'_>>,\n    adopted_changes: Vec<ChangeLocator>,\n) -> Result<StagedCommitStoreCommit, LixError> {\n    stage_commit_with_authored_pack(writes, commit, authored_changes, adopted_changes, false)\n}\n\nfn stage_commit_with_authored_pack(\n    writes: &mut StorageWriteSet,\n    commit: CommitDraftRef<'_>,\n    authored_changes: Vec<ChangeRef<'_>>,\n    adopted_changes: Vec<ChangeLocator>,\n    write_authored_change_pack: bool,\n) -> Result<StagedCommitStoreCommit, LixError> {\n    let stored_commit = StoredCommitRef {\n        id: commit.id,\n        change_id: commit.change_id,\n        parent_ids: commit.parent_ids,\n        author_account_ids: commit.author_account_ids,\n        created_at: commit.created_at,\n        change_pack_count: if authored_changes.is_empty() { 0 } else { 1 },\n        membership_pack_count: if adopted_changes.is_empty() { 0 } else { 1 },\n 
   };\n\n    writes.put(\n        COMMIT_NAMESPACE,\n        commit_key(commit.id),\n        crate::commit_store::codec::encode_commit_ref(stored_commit)?,\n    );\n\n    let mut authored_locators = Vec::with_capacity(authored_changes.len());\n    if !authored_changes.is_empty() {\n        if write_authored_change_pack {\n            writes.put(\n                CHANGE_PACK_NAMESPACE,\n                pack_key(commit.id, SINGLE_PACK_ID)?,\n                crate::commit_store::codec::encode_change_pack(\n                    commit.id,\n                    SINGLE_PACK_ID,\n                    &authored_changes,\n                )?,\n            );\n        }\n        for (source_ordinal, change) in authored_changes.iter().enumerate() {\n            authored_locators.push(ChangeLocator {\n                source_commit_id: commit.id.to_string(),\n                source_pack_id: SINGLE_PACK_ID,\n                source_ordinal: u32::try_from(source_ordinal).map_err(|_| {\n                    LixError::new(\n                        LixError::CODE_INTERNAL_ERROR,\n                        \"commit-store change pack ordinal exceeds u32\",\n                    )\n                })?,\n                change_id: change.id.to_string(),\n            });\n        }\n    }\n\n    if !adopted_changes.is_empty() {\n        writes.put(\n            MEMBERSHIP_PACK_NAMESPACE,\n            pack_key(commit.id, SINGLE_PACK_ID)?,\n            crate::commit_store::codec::encode_membership_pack(\n                commit.id,\n                SINGLE_PACK_ID,\n                adopted_changes.iter().map(ChangeLocator::as_ref),\n            )?,\n        );\n    }\n\n    Ok(StagedCommitStoreCommit {\n        authored_locators,\n        adopted_locators: adopted_changes,\n    })\n}\n\npub(crate) async fn load_commit(\n    store: &mut (impl StorageReader + ?Sized),\n    commit_id: &str,\n) -> Result<Option<Commit>, LixError> {\n    let Some(bytes) = get_one(store, COMMIT_NAMESPACE, 
commit_key(commit_id)).await? else {\n        return Ok(None);\n    };\n    crate::commit_store::codec::decode_commit(&bytes).map(Some)\n}\n\npub(crate) async fn scan_commits(\n    store: &mut (impl StorageReader + ?Sized),\n) -> Result<Vec<Commit>, LixError> {\n    let page = store\n        .scan_values(KvScanRequest {\n            namespace: COMMIT_NAMESPACE.to_string(),\n            range: KvScanRange::prefix(Vec::new()),\n            after: None,\n            limit: usize::MAX,\n        })\n        .await?;\n    page.values\n        .iter()\n        .map(|bytes| crate::commit_store::codec::decode_commit(bytes))\n        .collect()\n}\n\npub(crate) async fn load_change_pack(\n    store: &mut (impl StorageReader + ?Sized),\n    commit_id: &str,\n    pack_id: u32,\n) -> Result<Option<Vec<Change>>, LixError> {\n    let Some(bytes) = get_one(store, CHANGE_PACK_NAMESPACE, pack_key(commit_id, pack_id)?).await?\n    else {\n        return load_tracked_authored_change_pack(store, commit_id, pack_id).await;\n    };\n    let (stored_commit_id, stored_pack_id, changes) =\n        crate::commit_store::codec::decode_change_pack(&bytes)?;\n    ensure_pack_identity(\n        \"change pack\",\n        commit_id,\n        pack_id,\n        &stored_commit_id,\n        stored_pack_id,\n    )?;\n    Ok(Some(changes))\n}\n\npub(crate) async fn load_tracked_authored_change_pack(\n    store: &mut (impl StorageReader + ?Sized),\n    commit_id: &str,\n    pack_id: u32,\n) -> Result<Option<Vec<Change>>, LixError> {\n    let Some(delta_entries) = crate::tracked_state::load_delta_pack(store, commit_id).await? 
else {\n        return Ok(None);\n    };\n    let mut changes_by_ordinal = BTreeMap::<u32, Change>::new();\n    for delta in delta_entries {\n        let locator = &delta.value.change_locator;\n        if locator.source_commit_id != commit_id || locator.source_pack_id != pack_id {\n            continue;\n        }\n        let ordinal = locator.source_ordinal;\n        let change = Change {\n            id: locator.change_id.clone(),\n            entity_id: delta.key.entity_id,\n            schema_key: delta.key.schema_key,\n            file_id: delta.key.file_id,\n            snapshot_ref: delta.value.snapshot_ref,\n            metadata_ref: delta.value.metadata_ref,\n            created_at: delta.value.updated_at,\n        };\n        if changes_by_ordinal.insert(ordinal, change).is_some() {\n            return Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\n                    \"tracked authored change pack ({commit_id}, {pack_id}) has duplicate ordinal {ordinal}\"\n                ),\n            ));\n        }\n    }\n    if changes_by_ordinal.is_empty() {\n        return Ok(None);\n    }\n    let mut changes = Vec::with_capacity(changes_by_ordinal.len());\n    for (expected_ordinal, (ordinal, change)) in (0u32..).zip(changes_by_ordinal) {\n        if ordinal != expected_ordinal {\n            return Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\n                    \"tracked authored change pack ({commit_id}, {pack_id}) is missing ordinal {expected_ordinal}\"\n                ),\n            ));\n        }\n        changes.push(change);\n    }\n    Ok(Some(changes))\n}\n\npub(crate) async fn load_membership_pack(\n    store: &mut (impl StorageReader + ?Sized),\n    commit_id: &str,\n    pack_id: u32,\n) -> Result<Option<Vec<ChangeLocator>>, LixError> {\n    let Some(bytes) = get_one(\n        store,\n        MEMBERSHIP_PACK_NAMESPACE,\n        pack_key(commit_id, 
pack_id)?,\n    )\n    .await?\n    else {\n        return Ok(None);\n    };\n    let (stored_commit_id, stored_pack_id, members) =\n        crate::commit_store::codec::decode_membership_pack(&bytes)?;\n    ensure_pack_identity(\n        \"membership pack\",\n        commit_id,\n        pack_id,\n        &stored_commit_id,\n        stored_pack_id,\n    )?;\n    Ok(Some(members))\n}\n\npub(crate) async fn load_change_index_entries(\n    store: &mut (impl StorageReader + ?Sized),\n    change_ids: &[String],\n) -> Result<Vec<Option<ChangeIndexEntry>>, LixError> {\n    if change_ids.is_empty() {\n        return Ok(Vec::new());\n    }\n\n    let mut unresolved = change_ids.iter().cloned().collect::<BTreeSet<_>>();\n    let mut entries_by_change_id = BTreeMap::new();\n    let commits = scan_commits(store).await?;\n    for commit in commits {\n        if unresolved.remove(&commit.change_id) {\n            entries_by_change_id.insert(\n                commit.change_id.clone(),\n                ChangeIndexEntry::CommitHeader {\n                    commit_id: commit.id.clone(),\n                    change_id: commit.change_id.clone(),\n                },\n            );\n        }\n        if unresolved.is_empty() {\n            break;\n        }\n\n        for pack_id in 0..commit.change_pack_count {\n            let Some(changes) = load_change_pack(store, &commit.id, pack_id).await? 
else {\n                return Err(LixError::new(\n                    LixError::CODE_INTERNAL_ERROR,\n                    format!(\n                        \"commit-store missing change pack ({}, {pack_id})\",\n                        commit.id\n                    ),\n                ));\n            };\n            for (source_ordinal, change) in changes.iter().enumerate() {\n                if !unresolved.remove(&change.id) {\n                    continue;\n                }\n                entries_by_change_id.insert(\n                    change.id.clone(),\n                    ChangeIndexEntry::PackedChange {\n                        locator: ChangeLocator {\n                            source_commit_id: commit.id.clone(),\n                            source_pack_id: pack_id,\n                            source_ordinal: u32::try_from(source_ordinal).map_err(|_| {\n                                LixError::new(\n                                    LixError::CODE_INTERNAL_ERROR,\n                                    \"commit-store change pack ordinal exceeds u32\",\n                                )\n                            })?,\n                            change_id: change.id.clone(),\n                        },\n                    },\n                );\n                if unresolved.is_empty() {\n                    break;\n                }\n            }\n            if unresolved.is_empty() {\n                break;\n            }\n        }\n        if unresolved.is_empty() {\n            break;\n        }\n    }\n\n    Ok(change_ids\n        .iter()\n        .map(|change_id| entries_by_change_id.get(change_id).cloned())\n        .collect())\n}\n\nasync fn get_one(\n    store: &mut (impl StorageReader + ?Sized),\n    namespace: &str,\n    key: Vec<u8>,\n) -> Result<Option<Vec<u8>>, LixError> {\n    Ok(store\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: namespace.to_string(),\n                
keys: vec![key],\n            }],\n        })\n        .await?\n        .groups\n        .into_iter()\n        .next()\n        .and_then(|group| group.single_value_owned()))\n}\n\nfn ensure_pack_identity(\n    label: &str,\n    expected_commit_id: &str,\n    expected_pack_id: u32,\n    actual_commit_id: &str,\n    actual_pack_id: u32,\n) -> Result<(), LixError> {\n    if actual_commit_id != expected_commit_id || actual_pack_id != expected_pack_id {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"commit-store {label} identity mismatch: expected ({expected_commit_id}, {expected_pack_id}), got ({actual_commit_id}, {actual_pack_id})\"\n            ),\n        ));\n    }\n    Ok(())\n}\n\nfn commit_key(commit_id: &str) -> Vec<u8> {\n    commit_id.as_bytes().to_vec()\n}\n\nfn pack_key(commit_id: &str, pack_id: u32) -> Result<Vec<u8>, LixError> {\n    let commit_id_len = u32::try_from(commit_id.len()).map_err(|_| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"commit-store pack key commit id exceeds u32 length\",\n        )\n    })?;\n    let mut key = Vec::with_capacity(8 + commit_id.len());\n    key.extend_from_slice(&commit_id_len.to_be_bytes());\n    key.extend_from_slice(commit_id.as_bytes());\n    key.extend_from_slice(&pack_id.to_be_bytes());\n    Ok(key)\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use crate::backend::testing::UnitTestBackend;\n    use crate::commit_store::CommitDraftRef;\n    use crate::entity_identity::EntityIdentity;\n    use crate::json_store::JsonRef;\n    use crate::storage::{StorageContext, StorageWriteTransaction};\n    use crate::tracked_state::{TrackedStateContext, TrackedStateDeltaRef};\n\n    use super::*;\n\n    #[tokio::test]\n    async fn stage_commit_writes_all_commit_store_namespaces() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let mut tx = storage\n          
  .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        let commit = test_commit();\n        let change = test_change(\"change-1\");\n        let adopted = ChangeLocator {\n            source_commit_id: \"source-commit\".to_string(),\n            source_pack_id: 3,\n            source_ordinal: 7,\n            change_id: \"adopted-change\".to_string(),\n        };\n\n        let staged = stage_commit(\n            &mut writes,\n            CommitDraftRef {\n                id: &commit.id,\n                change_id: &commit.change_id,\n                parent_ids: &commit.parent_ids,\n                author_account_ids: &commit.author_account_ids,\n                created_at: &commit.created_at,\n            },\n            vec![change.as_ref()],\n            vec![adopted.clone()],\n        )\n        .expect(\"commit should stage\");\n        writes\n            .apply(&mut tx.as_mut())\n            .await\n            .expect(\"writes should apply\");\n        tx.commit().await.expect(\"commit should succeed\");\n\n        assert_eq!(\n            staged.authored_locators,\n            vec![ChangeLocator {\n                source_commit_id: \"commit-1\".to_string(),\n                source_pack_id: 0,\n                source_ordinal: 0,\n                change_id: \"change-1\".to_string(),\n            }]\n        );\n        assert_eq!(staged.adopted_locators, vec![adopted.clone()]);\n\n        let mut reader = storage.clone();\n        assert_eq!(\n            load_commit(&mut reader, \"commit-1\")\n                .await\n                .expect(\"commit should load\"),\n            Some(commit)\n        );\n        assert_eq!(\n            load_change_pack(&mut reader, \"commit-1\", 0)\n                .await\n                .expect(\"change pack should load\"),\n            Some(vec![change])\n        );\n        assert_eq!(\n            
load_membership_pack(&mut reader, \"commit-1\", 0)\n                .await\n                .expect(\"membership pack should load\"),\n            Some(vec![adopted])\n        );\n\n        let index_entries = load_change_index_entries(\n            &mut reader,\n            &[\"commit-change-1\".to_string(), \"change-1\".to_string()],\n        )\n        .await\n        .expect(\"index entries should load\");\n        assert_eq!(\n            index_entries,\n            vec![\n                Some(ChangeIndexEntry::CommitHeader {\n                    commit_id: \"commit-1\".to_string(),\n                    change_id: \"commit-change-1\".to_string(),\n                }),\n                Some(ChangeIndexEntry::PackedChange {\n                    locator: ChangeLocator {\n                        source_commit_id: \"commit-1\".to_string(),\n                        source_pack_id: 0,\n                        source_ordinal: 0,\n                        change_id: \"change-1\".to_string(),\n                    },\n                }),\n            ]\n        );\n    }\n\n    #[tokio::test]\n    async fn tracked_commit_change_pack_loads_from_delta_pack() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let mut tx = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        let commit = test_commit();\n        let change = test_change(\"change-1\");\n\n        let staged = stage_commit_with_external_authored_pack(\n            &mut writes,\n            CommitDraftRef {\n                id: &commit.id,\n                change_id: &commit.change_id,\n                parent_ids: &commit.parent_ids,\n                author_account_ids: &commit.author_account_ids,\n                created_at: &commit.created_at,\n            },\n            vec![change.as_ref()],\n            Vec::new(),\n        )\n        
.expect(\"tracked commit should stage\");\n        let deltas = [TrackedStateDeltaRef {\n            change: change.as_ref(),\n            locator: staged.authored_locators[0].as_ref(),\n            created_at: \"2026-01-01T00:00:00Z\",\n            updated_at: \"2026-01-02T00:00:00Z\",\n        }];\n        TrackedStateContext::new()\n            .writer(&mut tx.as_mut(), &mut writes)\n            .stage_delta(&commit.id, None, &deltas)\n            .await\n            .expect(\"tracked delta should stage\");\n        writes\n            .apply(&mut tx.as_mut())\n            .await\n            .expect(\"writes should apply\");\n        tx.commit().await.expect(\"commit should succeed\");\n\n        let mut reader = storage.clone();\n        assert_eq!(\n            get_one(\n                &mut reader,\n                CHANGE_PACK_NAMESPACE,\n                pack_key(\"commit-1\", 0).unwrap()\n            )\n            .await\n            .expect(\"direct change pack lookup should succeed\"),\n            None\n        );\n        assert_eq!(\n            load_change_pack(&mut reader, \"commit-1\", 0)\n                .await\n                .expect(\"tracked change pack should load\"),\n            Some(vec![Change {\n                created_at: \"2026-01-02T00:00:00Z\".to_string(),\n                ..change.clone()\n            }])\n        );\n        assert_eq!(\n            load_change_index_entries(&mut reader, &[\"change-1\".to_string()])\n                .await\n                .expect(\"index entries should load\"),\n            vec![Some(ChangeIndexEntry::PackedChange {\n                locator: staged.authored_locators[0].clone(),\n            })]\n        );\n    }\n\n    #[tokio::test]\n    async fn tracked_commit_change_pack_rejects_sparse_delta_ordinals() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let mut tx = storage\n            .begin_write_transaction()\n            .await\n            
.expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        let commit = test_commit();\n        let change = test_change(\"change-1\");\n        let sparse_locator = ChangeLocator {\n            source_commit_id: commit.id.clone(),\n            source_pack_id: 0,\n            source_ordinal: 1,\n            change_id: change.id.clone(),\n        };\n        let deltas = [TrackedStateDeltaRef {\n            change: change.as_ref(),\n            locator: sparse_locator.as_ref(),\n            created_at: \"2026-01-01T00:00:00Z\",\n            updated_at: \"2026-01-02T00:00:00Z\",\n        }];\n        TrackedStateContext::new()\n            .writer(&mut tx.as_mut(), &mut writes)\n            .stage_delta(&commit.id, None, &deltas)\n            .await\n            .expect(\"tracked delta should stage\");\n        writes\n            .apply(&mut tx.as_mut())\n            .await\n            .expect(\"writes should apply\");\n        tx.commit().await.expect(\"commit should succeed\");\n\n        let mut reader = storage.clone();\n        let error = load_change_pack(&mut reader, \"commit-1\", 0)\n            .await\n            .expect_err(\"sparse tracked authored ordinals should reject\");\n        assert!(\n            error.to_string().contains(\"missing ordinal 0\"),\n            \"error should mention missing ordinal: {error}\"\n        );\n    }\n\n    fn test_commit() -> Commit {\n        Commit {\n            id: \"commit-1\".to_string(),\n            change_id: \"commit-change-1\".to_string(),\n            parent_ids: vec![\"parent-1\".to_string()],\n            author_account_ids: Vec::new(),\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            change_pack_count: 1,\n            membership_pack_count: 1,\n        }\n    }\n\n    fn test_change(id: &str) -> Change {\n        Change {\n            id: id.to_string(),\n            entity_id: EntityIdentity::single(\"entity-1\"),\n            
schema_key: \"test_schema\".to_string(),\n            file_id: None,\n            snapshot_ref: Some(JsonRef::from_hash_bytes([1; 32])),\n            metadata_ref: None,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/commit_store/types.rs",
    "content": "use crate::entity_identity::EntityIdentity;\nuse crate::json_store::JsonRef;\n\n/// Physical append/locality unit for commit metadata and derived commit SQL\n/// surfaces.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) struct Commit {\n    pub(crate) id: String,\n    pub(crate) change_id: String,\n    pub(crate) parent_ids: Vec<String>,\n    pub(crate) author_account_ids: Vec<String>,\n    pub(crate) created_at: String,\n    pub(crate) change_pack_count: u32,\n    pub(crate) membership_pack_count: u32,\n}\n\nimpl Commit {\n    pub(crate) fn as_ref(&self) -> StoredCommitRef<'_> {\n        StoredCommitRef {\n            id: &self.id,\n            change_id: &self.change_id,\n            parent_ids: &self.parent_ids,\n            author_account_ids: &self.author_account_ids,\n            created_at: &self.created_at,\n            change_pack_count: self.change_pack_count,\n            membership_pack_count: self.membership_pack_count,\n        }\n    }\n}\n\n/// Zero-copy view of stored [`Commit`] bytes.\n#[derive(Debug, Clone, Copy)]\npub(crate) struct StoredCommitRef<'a> {\n    pub(crate) id: &'a str,\n    pub(crate) change_id: &'a str,\n    pub(crate) parent_ids: &'a [String],\n    pub(crate) author_account_ids: &'a [String],\n    pub(crate) created_at: &'a str,\n    pub(crate) change_pack_count: u32,\n    pub(crate) membership_pack_count: u32,\n}\n\n/// Zero-copy view of a logical commit supplied before physical packing.\n#[derive(Debug, Clone, Copy)]\npub(crate) struct CommitDraftRef<'a> {\n    pub(crate) id: &'a str,\n    pub(crate) change_id: &'a str,\n    pub(crate) parent_ids: &'a [String],\n    pub(crate) author_account_ids: &'a [String],\n    pub(crate) created_at: &'a str,\n}\n\n/// Logical entity mutation fact stored in a commit change pack.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) struct Change {\n    pub(crate) id: String,\n    pub(crate) 
entity_id: EntityIdentity,\n    pub(crate) schema_key: String,\n    pub(crate) file_id: Option<String>,\n    pub(crate) snapshot_ref: Option<JsonRef>,\n    pub(crate) metadata_ref: Option<JsonRef>,\n    pub(crate) created_at: String,\n}\n\n/// Read-boundary view of a commit-store change with JSON refs resolved.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct MaterializedChange {\n    pub(crate) id: String,\n    pub(crate) entity_id: EntityIdentity,\n    pub(crate) schema_key: String,\n    pub(crate) file_id: Option<String>,\n    pub(crate) snapshot_content: Option<String>,\n    pub(crate) metadata: Option<String>,\n    pub(crate) created_at: String,\n}\n\n/// Commit-store change plus the physical pack that owns its JSON payloads.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct LocatedChange {\n    pub(crate) record: Change,\n    pub(crate) source_commit_id: String,\n    pub(crate) source_pack_id: u32,\n}\n\nimpl Change {\n    pub(crate) fn as_ref(&self) -> ChangeRef<'_> {\n        ChangeRef {\n            id: &self.id,\n            entity_id: &self.entity_id,\n            schema_key: &self.schema_key,\n            file_id: self.file_id.as_deref(),\n            snapshot_ref: self.snapshot_ref.as_ref(),\n            metadata_ref: self.metadata_ref.as_ref(),\n            created_at: &self.created_at,\n        }\n    }\n}\n\n/// Zero-copy view of [`Change`].\n#[derive(Debug, Clone, Copy)]\npub(crate) struct ChangeRef<'a> {\n    pub(crate) id: &'a str,\n    pub(crate) entity_id: &'a EntityIdentity,\n    pub(crate) schema_key: &'a str,\n    pub(crate) file_id: Option<&'a str>,\n    pub(crate) snapshot_ref: Option<&'a JsonRef>,\n    pub(crate) metadata_ref: Option<&'a JsonRef>,\n    pub(crate) created_at: &'a str,\n}\n\n/// Logical scan request for the `lix_change` SQL surface over commit_store.\n#[derive(Debug, Clone, Default)]\npub(crate) struct ChangeScanRequest {\n    pub(crate) limit: Option<usize>,\n}\n\n/// Commit-local physical pack of newly 
authored change payloads.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) struct ChangePack {\n    pub(crate) commit_id: String,\n    pub(crate) pack_id: u32,\n    pub(crate) changes: Vec<Change>,\n}\n\nimpl ChangePack {\n    pub(crate) fn as_view(&self) -> ChangePackView<'_> {\n        ChangePackView {\n            commit_id: &self.commit_id,\n            pack_id: self.pack_id,\n            changes: &self.changes,\n        }\n    }\n}\n\n/// Zero-copy view for a decoded [`ChangePack`].\n#[derive(Debug, Clone, Copy)]\npub(crate) struct ChangePackView<'a> {\n    pub(crate) commit_id: &'a str,\n    pub(crate) pack_id: u32,\n    pub(crate) changes: &'a [Change],\n}\n\n/// Storage location of an existing change payload.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) struct ChangeLocator {\n    pub(crate) source_commit_id: String,\n    pub(crate) source_pack_id: u32,\n    pub(crate) source_ordinal: u32,\n    pub(crate) change_id: String,\n}\n\nimpl ChangeLocator {\n    pub(crate) fn as_ref(&self) -> ChangeLocatorRef<'_> {\n        ChangeLocatorRef {\n            source_commit_id: &self.source_commit_id,\n            source_pack_id: self.source_pack_id,\n            source_ordinal: self.source_ordinal,\n            change_id: &self.change_id,\n        }\n    }\n}\n\n/// Zero-copy view of [`ChangeLocator`].\n#[derive(Debug, Clone, Copy)]\npub(crate) struct ChangeLocatorRef<'a> {\n    pub(crate) source_commit_id: &'a str,\n    pub(crate) source_pack_id: u32,\n    pub(crate) source_ordinal: u32,\n    pub(crate) change_id: &'a str,\n}\n\n/// Exact lookup entry for a derived-surface-visible change id.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) enum ChangeIndexEntry {\n    CommitHeader {\n        commit_id: String,\n        change_id: String,\n    },\n    PackedChange {\n        locator: ChangeLocator,\n    },\n}\n\n/// Commit-local physical pack 
of adopted/shared membership locators.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) struct MembershipPack {\n    pub(crate) commit_id: String,\n    pub(crate) pack_id: u32,\n    pub(crate) members: Vec<ChangeLocator>,\n}\n\nimpl MembershipPack {\n    pub(crate) fn as_view(&self) -> MembershipPackView<'_> {\n        MembershipPackView {\n            commit_id: &self.commit_id,\n            pack_id: self.pack_id,\n            members: &self.members,\n        }\n    }\n}\n\n/// Zero-copy view for a decoded [`MembershipPack`].\n#[derive(Debug, Clone, Copy)]\npub(crate) struct MembershipPackView<'a> {\n    pub(crate) commit_id: &'a str,\n    pub(crate) pack_id: u32,\n    pub(crate) members: &'a [ChangeLocator],\n}\n\n/// Locators produced while staging a commit.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct StagedCommitStoreCommit {\n    pub(crate) authored_locators: Vec<ChangeLocator>,\n    pub(crate) adopted_locators: Vec<ChangeLocator>,\n}\n"
  },
  {
    "path": "packages/engine/src/common/error.rs",
    "content": "use serde_json::{json, Value as JsonValue};\n\n/// Structured error type surfaced by Lix to every SDK binding.\n///\n/// Carries a machine-readable [`code`](Self::code), a human-readable\n/// [`message`](Self::message), and an optional [`hint`](Self::hint)\n/// suggesting how to recover. Hints follow the Postgres/rustc convention:\n/// `message` states what went wrong in factual terms, and `hint` offers a\n/// possible fix when one is known.\n///\n/// ```\n/// use lix_engine::LixError;\n///\n/// let err = LixError::new(\n///     \"LIX_ERROR_UNSUPPORTED_WRITE_EXPRESSION\",\n///     \"json(...) is not supported\",\n/// )\n/// .with_hint(\"use lix_json('...') instead\");\n///\n/// assert_eq!(err.hint(), Some(\"use lix_json('...') instead\"));\n/// ```\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct LixError {\n    pub code: String,\n    pub message: String,\n    pub hint: Option<String>,\n    pub details: Option<JsonValue>,\n}\n\nimpl LixError {\n    /// True fallback — use when no more specific category fits. 
Producing\n    /// sites should prefer the categorized codes below whenever possible;\n    /// the SDK contract is that `LIX_ERROR_UNKNOWN` is the *last* resort,\n    /// never the default.\n    pub const CODE_UNKNOWN: &'static str = \"LIX_ERROR_UNKNOWN\";\n\n    /// SQL text could not be parsed.\n    pub const CODE_PARSE_ERROR: &'static str = \"LIX_PARSE_ERROR\";\n\n    /// A SQL function name could not be resolved.\n    pub const CODE_UDF_NOT_FOUND: &'static str = \"LIX_UDF_NOT_FOUND\";\n\n    /// A SQL expression or function argument had an incompatible type.\n    pub const CODE_TYPE_MISMATCH: &'static str = \"LIX_TYPE_MISMATCH\";\n\n    /// A Lix JSON path argument used another dialect's path language instead\n    /// of Lix's canonical variadic key/index segments.\n    pub const CODE_INVALID_JSON_PATH: &'static str = \"LIX_INVALID_JSON_PATH\";\n\n    /// SQL syntax belongs to another dialect and is outside the Lix SQL\n    /// surface.\n    pub const CODE_DIALECT_UNSUPPORTED: &'static str = \"LIX_DIALECT_UNSUPPORTED\";\n\n    /// SQL parameters could not be bound to placeholders.\n    pub const CODE_BINDING_ERROR: &'static str = \"LIX_BINDING_ERROR\";\n\n    /// A caller supplied an invalid SQL parameter value or parameter list.\n    pub const CODE_INVALID_PARAM: &'static str = \"LIX_INVALID_PARAM\";\n\n    /// A SQL table or view name could not be resolved.\n    pub const CODE_TABLE_NOT_FOUND: &'static str = \"LIX_TABLE_NOT_FOUND\";\n\n    /// A SQL column name could not be resolved in the available projection.\n    pub const CODE_COLUMN_NOT_FOUND: &'static str = \"LIX_COLUMN_NOT_FOUND\";\n\n    /// A SQL write violated a primary-key, unique, NOT NULL, or other\n    /// relational constraint.\n    pub const CODE_CONSTRAINT_VIOLATION: &'static str = \"LIX_CONSTRAINT_VIOLATION\";\n\n    /// A SQL write targeted a read-only internal/component surface.\n    pub const CODE_READ_ONLY: &'static str = \"LIX_ERROR_READ_ONLY\";\n\n    /// A history table was queried 
without an explicit commit/version range.\n    pub const CODE_HISTORY_FILTER_REQUIRED: &'static str = \"LIX_HISTORY_FILTER_REQUIRED\";\n\n    /// SQL syntax is valid, but the feature is intentionally outside the Lix\n    /// SQL surface.\n    pub const CODE_UNSUPPORTED_SQL: &'static str = \"LIX_UNSUPPORTED_SQL\";\n\n    /// SQL planning succeeded far enough to produce a physical runtime shape\n    /// that the current engine target cannot execute safely.\n    pub const CODE_UNSUPPORTED_SQL_RUNTIME_PLAN: &'static str = \"LIX_UNSUPPORTED_SQL_RUNTIME_PLAN\";\n\n    /// Storage/backend IO failed while executing an operation.\n    pub const CODE_STORAGE_ERROR: &'static str = \"LIX_STORAGE_ERROR\";\n\n    /// An internal engine invariant failed.\n    pub const CODE_INTERNAL_ERROR: &'static str = \"LIX_INTERNAL_ERROR\";\n\n    /// Write-time failure where user data did not conform to a registered\n    /// schema (type mismatch, missing required field, pattern violation,\n    /// additionalProperties, etc.). Raised from the JSON-Schema validator\n    /// run over a candidate row's snapshot.\n    pub const CODE_SCHEMA_VALIDATION: &'static str = \"LIX_ERROR_SCHEMA_VALIDATION\";\n\n    /// A foreign-key constraint could not be satisfied. Covers both the\n    /// insert-side \"no matching target row\" failure and the delete-side\n    /// \"still referenced\" (restrict) failure.\n    pub const CODE_FOREIGN_KEY: &'static str = \"LIX_ERROR_FOREIGN_KEY\";\n\n    /// A row references a non-null `file_id` that has no matching `lix_file`\n    /// descriptor in the same effective version scope.\n    pub const CODE_FILE_NOT_FOUND: &'static str = \"LIX_ERROR_FILE_NOT_FOUND\";\n\n    /// A primary-key or `x-lix-unique` constraint was violated — another\n    /// row already owns the value(s) for the declared pointer group.\n    pub const CODE_UNIQUE: &'static str = \"LIX_ERROR_UNIQUE\";\n\n    /// An `INSERT ... 
VALUES (...)` expression is not supported by the\n    /// public write surface (e.g. `json(...)`, subqueries, arbitrary SQL\n    /// expressions). Users should wrap inline JSON with `lix_json(...)`.\n    pub const CODE_UNSUPPORTED_WRITE_EXPRESSION: &'static str =\n        \"LIX_ERROR_UNSUPPORTED_WRITE_EXPRESSION\";\n\n    /// The schema JSON itself (the *definition*, not a row against it) is\n    /// malformed — a missing `x-lix-key`, a JSON-Pointer without the\n    /// leading slash, a reserved-namespace collision, or any other\n    /// meta-schema validation failure.\n    pub const CODE_SCHEMA_DEFINITION: &'static str = \"LIX_ERROR_SCHEMA_DEFINITION\";\n\n    /// The logical Lix handle/session has been closed and cannot run further\n    /// operations. Close is a resource-release lifecycle boundary, not a\n    /// durability boundary.\n    pub const CODE_CLOSED: &'static str = \"LIX_ERROR_CLOSED\";\n\n    /// A merge found incompatible changes to the same tracked-state identity.\n    pub const CODE_MERGE_CONFLICT: &'static str = \"LIX_MERGE_CONFLICT\";\n\n    /// A caller referenced a version id that has no matching version ref.\n    pub const CODE_VERSION_NOT_FOUND: &'static str = \"LIX_VERSION_NOT_FOUND\";\n\n    /// A staged row's storage scope flags disagree, such as a global row not\n    /// using the reserved global version id.\n    pub const CODE_INVALID_STORAGE_SCOPE: &'static str = \"LIX_ERROR_INVALID_STORAGE_SCOPE\";\n\n    /// Merge graph analysis found multiple equally valid merge bases.\n    pub const CODE_AMBIGUOUS_MERGE_BASE: &'static str = \"LIX_AMBIGUOUS_MERGE_BASE\";\n\n    /// A merge request is well-formed but nonsensical for the commit graph,\n    /// such as merging a version into itself.\n    pub const CODE_INVALID_MERGE: &'static str = \"LIX_INVALID_MERGE\";\n\n    pub fn new(code: impl Into<String>, message: impl Into<String>) -> Self {\n        Self {\n            code: code.into(),\n            message: message.into(),\n            
hint: None,\n            details: None,\n        }\n    }\n\n    pub fn unknown(message: impl Into<String>) -> Self {\n        Self::new(\"LIX_ERROR_UNKNOWN\", message)\n    }\n\n    pub fn version_not_found(\n        version_id: impl Into<String>,\n        operation: impl Into<String>,\n        role: impl Into<String>,\n    ) -> Self {\n        let version_id = version_id.into();\n        let operation = operation.into();\n        let role = role.into();\n        Self::new(\n            Self::CODE_VERSION_NOT_FOUND,\n            format!(\"version '{version_id}' was not found\"),\n        )\n        .with_details(json!({\n            \"version_id\": version_id,\n            \"operation\": operation,\n            \"role\": role,\n        }))\n    }\n\n    pub fn ambiguous_merge_base(\n        left_commit_id: impl Into<String>,\n        right_commit_id: impl Into<String>,\n        candidates: Vec<String>,\n    ) -> Self {\n        let left_commit_id = left_commit_id.into();\n        let right_commit_id = right_commit_id.into();\n        Self::new(\n            Self::CODE_AMBIGUOUS_MERGE_BASE,\n            format!(\"ambiguous merge base between '{left_commit_id}' and '{right_commit_id}'\"),\n        )\n        .with_details(json!({\n            \"left_commit_id\": left_commit_id,\n            \"right_commit_id\": right_commit_id,\n            \"candidates\": candidates,\n        }))\n    }\n\n    pub fn invalid_self_merge(version_id: impl Into<String>) -> Self {\n        let version_id = version_id.into();\n        Self::new(\n            Self::CODE_INVALID_MERGE,\n            format!(\"cannot merge version '{version_id}' into itself\"),\n        )\n        .with_details(json!({\n            \"operation\": \"merge_version\",\n            \"target_version_id\": version_id,\n            \"source_version_id\": version_id,\n        }))\n    }\n\n    /// Attach a hint to this error. Consumers render hints alongside the\n    /// primary message (e.g. 
a CLI prints them as `hint: <text>`).\n    ///\n    /// ```\n    /// use lix_engine::LixError;\n    ///\n    /// let err = LixError::new(\"CODE\", \"boom\").with_hint(\"try this\");\n    /// assert_eq!(err.hint(), Some(\"try this\"));\n    /// ```\n    pub fn with_hint(mut self, hint: impl Into<String>) -> Self {\n        self.hint = Some(hint.into());\n        self\n    }\n\n    /// Attach machine-readable details to this error.\n    pub fn with_details(mut self, details: JsonValue) -> Self {\n        self.details = Some(details);\n        self\n    }\n\n    /// Return the attached hint, if any.\n    ///\n    /// Returns `None` when no hint was attached at the error's producer\n    /// site. This is the accessor SDK consumers should prefer over\n    /// reading the `hint` field directly — it returns `Option<&str>`,\n    /// avoiding the need for `.as_deref()` at the call site.\n    ///\n    /// ```\n    /// use lix_engine::LixError;\n    ///\n    /// let without_hint = LixError::new(\"CODE\", \"boom\");\n    /// assert_eq!(without_hint.hint(), None);\n    ///\n    /// let with_hint = LixError::new(\"CODE\", \"boom\").with_hint(\"fix it\");\n    /// assert_eq!(with_hint.hint(), Some(\"fix it\"));\n    /// ```\n    pub fn hint(&self) -> Option<&str> {\n        self.hint.as_deref()\n    }\n\n    pub fn message_with_hint(&self) -> String {\n        match self.hint() {\n            Some(hint) => format!(\"{}\\nhint: {hint}\", self.message),\n            None => self.message.clone(),\n        }\n    }\n\n    pub fn format(&self) -> String {\n        let mut s = format!(\"code: {}\\nmessage: {}\", self.code, self.message);\n        if let Some(hint) = &self.hint {\n            s.push_str(&format!(\"\\nhint: {hint}\"));\n        }\n        s\n    }\n}\n\nimpl std::fmt::Display for LixError {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"{}\", self.format())\n    }\n}\n\nimpl std::error::Error for LixError 
{}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn format_without_hint_omits_hint_line() {\n        let err = LixError::new(\"LIX_ERROR_FOO\", \"something went wrong\");\n        assert_eq!(\n            err.format(),\n            \"code: LIX_ERROR_FOO\\nmessage: something went wrong\"\n        );\n        assert!(err.hint.is_none());\n    }\n\n    #[test]\n    fn format_with_hint_appends_hint_line() {\n        let err = LixError::new(\"LIX_ERROR_FOO\", \"something went wrong\").with_hint(\"try the fix\");\n        assert_eq!(\n            err.format(),\n            \"code: LIX_ERROR_FOO\\nmessage: something went wrong\\nhint: try the fix\"\n        );\n    }\n\n    #[test]\n    fn with_hint_is_chainable_and_replaces_prior_hint() {\n        let err = LixError::new(\"LIX_ERROR_FOO\", \"desc\")\n            .with_hint(\"first\")\n            .with_hint(\"second\");\n        assert_eq!(err.hint.as_deref(), Some(\"second\"));\n    }\n\n    #[test]\n    fn new_defaults_hint_to_none() {\n        let err = LixError::new(\"CODE\", \"desc\");\n        assert_eq!(err.hint, None);\n    }\n\n    #[test]\n    fn unknown_defaults_hint_to_none() {\n        let err = LixError::unknown(\"desc\");\n        assert_eq!(err.code, \"LIX_ERROR_UNKNOWN\");\n        assert_eq!(err.hint, None);\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/common/fingerprint.rs",
    "content": "pub(crate) fn stable_content_fingerprint_hex(data: &[u8]) -> String {\n    blake3::hash(data).to_hex().to_string()\n}\n"
  },
  {
    "path": "packages/engine/src/common/fs_path.rs",
    "content": "//! Canonical Lix filesystem paths live in this module.\n//!\n//! Contract:\n//!\n//! - Canonical internal form is an absolute slash-separated Lix filesystem\n//!   path, structurally aligned with RFC 3986 `path-absolute` / RFC 8089 file\n//!   URI paths.\n//! - RFC 3986/8089 URI spelling is a boundary serialization, not the internal\n//!   identity form.\n//! - Each non-empty segment is enforced with an RFC 8264 PRECIS\n//!   `IdentifierClass` profile, case-preserved and NFC-normalized.\n//! - Percent encoding is accepted only as boundary input. Canonical internal\n//!   paths store decoded Unicode segments, never percent triplets.\n//! - Dot segments are rejected rather than rewritten because Lix paths are\n//!   stable logical identities, not URI references being resolved against a\n//!   base path.\n//!\n//! Canonicalization order:\n//!\n//! 1. Validate and decode RFC 3986 percent triplets in each segment.\n//! 2. Normalize decoded segment text to NFC.\n//! 3. Apply PRECIS IdentifierClass enforcement.\n//! 4. Reject Lix structural sentinels and separators.\n//!\n//! Fixed standard-derived rules:\n//!\n//! - Path shape follows the absolute-path grammar used by RFC 3986/RFC 8089.\n//! - Segment text follows RFC 8264 PRECIS IdentifierClass semantics.\n//! - Comparison is exact-string and case-sensitive after canonicalization.\n//!\n//! Lix profile rules:\n//!\n//! - File paths never end with `/`.\n//! - Directory paths always end with `/`.\n//! - `NUL` is rejected in all segments.\n//! - `/`, `\\`, empty segments, `.`, and `..` are rejected in all non-root\n//!   segments.\n//! - `%`, `?`, and `#` are reserved for URI boundary syntax and are rejected\n//!   in canonical internal segments.\n//! - Segments cannot begin with a combining mark.\n//! - Root is represented as the normalized directory path `/`.\n//! - Git/CLI import and ASCII-only URI serialization are boundary adapters,\n//!   not part of the core `fs_path` contract.\n//!\n//! 
Length policy:\n//!\n//! - Each canonical segment is capped at 255 bytes, matching common\n//!   filesystem component limits.\n//! - Each full canonical path is capped at 4096 bytes.\n//! - Raw boundary input is separately capped before normalization so oversized\n//!   URI spellings cannot reach Unicode processing.\n//!\n//! Runtime strategy:\n//!\n//! - This module keeps Lix structural checks local and delegates Unicode\n//!   segment validity to the PRECIS implementation.\n//! - `iref` is an RFC 3987 / RFC 3986 shape oracle in tests, not the runtime\n//!   segment authority.\n//!\n//! Glossary:\n//!\n//! - Raw input path: caller-provided path before normalization.\n//! - Normalized path: path after NFC normalization.\n//! - Canonical path: stored path after full normalization/canonicalization.\n//! - File path: canonical path naming a file, without a trailing slash.\n//! - Directory path: canonical path naming a directory, with a trailing slash.\n//! - Internal path form: the canonical Unicode-bearing representation used by\n//!   the engine.\n//! - Boundary URI form: an ASCII-only serialization used when interoperating\n//!   
with URI-only systems.\n\nuse precis_profiles::precis_core::profile::Profile;\nuse precis_profiles::UsernameCasePreserved;\nuse unicode_normalization::{char::is_combining_mark, UnicodeNormalization};\n\nuse crate::LixError;\nuse std::fmt;\nuse std::ops::Deref;\n\nconst MAX_CANONICAL_PATH_BYTES: usize = 4096;\nconst MAX_CANONICAL_PATH_SEGMENT_BYTES: usize = 255;\nconst MAX_RAW_PATH_INPUT_BYTES: usize = 16 * 1024;\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct NormalizedDirectoryPath(String);\n\nimpl NormalizedDirectoryPath {\n    #[cfg(test)]\n    pub(crate) fn try_from_path(path: &str) -> Result<Self, LixError> {\n        normalize_directory_path(path).map(Self)\n    }\n    pub(crate) fn from_normalized(path: String) -> Self {\n        Self(path)\n    }\n\n    pub(crate) fn as_str(&self) -> &str {\n        self.0.as_str()\n    }\n}\n\nimpl Deref for NormalizedDirectoryPath {\n    type Target = str;\n\n    fn deref(&self) -> &Self::Target {\n        self.as_str()\n    }\n}\n\nimpl fmt::Display for NormalizedDirectoryPath {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        f.write_str(self.as_str())\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct NormalizedFilePath(String);\n\nimpl NormalizedFilePath {\n    pub(crate) fn from_normalized(path: String) -> Self {\n        Self(path)\n    }\n\n    pub(crate) fn as_str(&self) -> &str {\n        self.0.as_str()\n    }\n}\n\nimpl Deref for NormalizedFilePath {\n    type Target = str;\n\n    fn deref(&self) -> &Self::Target {\n        self.as_str()\n    }\n}\n\nimpl fmt::Display for NormalizedFilePath {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        f.write_str(self.as_str())\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct ParsedFilePath {\n    pub(crate) normalized_path: NormalizedFilePath,\n    pub(crate) directory_path: Option<NormalizedDirectoryPath>,\n    pub(crate) name: String,\n}\n\nimpl ParsedFilePath {\n    
pub(crate) fn try_from_path(path: &str) -> Result<Self, LixError> {\n        parse_file_path(path)\n    }\n}\n\ntype PathResult<T> = Result<T, PathError>;\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\nenum PathError {\n    MissingLeadingSlash,\n    UnexpectedTrailingSlashOnFilePath,\n    MissingTrailingSlashOnDirectoryPath,\n    EmptySegment,\n    DotSegment,\n    SlashInSegment,\n    Backslash,\n    InvalidPercentEncoding,\n    InvalidPathSegmentCodePoint,\n    PathTooLong,\n    RawPathInputTooLong,\n    SegmentTooLong,\n    NulByte,\n    InvalidRootUsage,\n    #[cfg(test)]\n    InvalidDirectoryParentPath,\n}\n\nimpl PathError {\n    fn into_lix_error(self) -> LixError {\n        let (code, message, hint) = match self {\n            Self::MissingLeadingSlash => (\n                \"LIX_ERROR_PATH_MISSING_LEADING_SLASH\",\n                \"path must start with '/'\",\n                Some(\"prefix the path with '/'\"),\n            ),\n            Self::UnexpectedTrailingSlashOnFilePath => (\n                \"LIX_ERROR_PATH_UNEXPECTED_TRAILING_SLASH_ON_FILE\",\n                \"file path must not end with '/'\",\n                Some(\"remove the trailing slash or use a directory path instead\"),\n            ),\n            Self::MissingTrailingSlashOnDirectoryPath => (\n                \"LIX_ERROR_PATH_MISSING_TRAILING_SLASH_ON_DIRECTORY\",\n                \"directory path must end with '/'\",\n                Some(\"append a trailing slash or use a file path instead\"),\n            ),\n            Self::EmptySegment => (\n                \"LIX_ERROR_PATH_EMPTY_SEGMENT\",\n                \"path must not contain empty segments\",\n                Some(\"remove duplicate slashes like '//'\"),\n            ),\n            Self::DotSegment => (\n                \"LIX_ERROR_PATH_DOT_SEGMENT\",\n                \"path segment cannot be '.' or '..'\",\n                Some(\"use a real segment name instead of '.' 
or '..'\"),\n            ),\n            Self::SlashInSegment => (\n                \"LIX_ERROR_PATH_SLASH_IN_SEGMENT\",\n                \"path segment must not contain '/'\",\n                Some(\"pass a single segment name, not a full path\"),\n            ),\n            Self::Backslash => (\n                \"LIX_ERROR_PATH_BACKSLASH\",\n                \"path must not contain '\\\\'\",\n                Some(\"use '/' separators instead of '\\\\'\"),\n            ),\n            Self::InvalidPercentEncoding => (\n                \"LIX_ERROR_PATH_INVALID_PERCENT_ENCODING\",\n                \"path contains invalid percent encoding\",\n                Some(\"use valid percent triplets only for URI boundary input; '%' is not allowed in canonical path segments\"),\n            ),\n            Self::InvalidPathSegmentCodePoint => (\n                \"LIX_ERROR_PATH_INVALID_SEGMENT_CODE_POINT\",\n                \"path segment contains a character that is not allowed in canonical Lix paths\",\n                Some(\"canonical paths use RFC 8264 PRECIS IdentifierClass segments; use URI percent encoding only at boundaries\"),\n            ),\n            Self::PathTooLong => (\n                \"LIX_ERROR_PATH_TOO_LONG\",\n                \"path is too long\",\n                Some(\"keep canonical paths at or below 4096 bytes\"),\n            ),\n            Self::RawPathInputTooLong => (\n                \"LIX_ERROR_PATH_INPUT_TOO_LONG\",\n                \"path input is too long\",\n                Some(\"keep raw path input at or below 16384 bytes\"),\n            ),\n            Self::SegmentTooLong => (\n                \"LIX_ERROR_PATH_SEGMENT_TOO_LONG\",\n                \"path segment is too long\",\n                Some(\"keep each canonical path segment at or below 255 bytes\"),\n            ),\n            Self::NulByte => (\n                \"LIX_ERROR_PATH_NUL_BYTE\",\n                \"path must not contain a NUL byte\",\n                Some(\"remove 
the NUL byte from the path\"),\n            ),\n            Self::InvalidRootUsage => (\n                \"LIX_ERROR_PATH_INVALID_ROOT_USAGE\",\n                \"root '/' is only valid as a directory path\",\n                Some(\"use '/' as a directory path, never as a file path\"),\n            ),\n            #[cfg(test)]\n            Self::InvalidDirectoryParentPath => (\n                \"LIX_ERROR_PATH_INVALID_DIRECTORY_PARENT\",\n                \"directory parent path must be a normalized directory path\",\n                Some(\"pass '/' or a path ending with '/' as the parent directory\"),\n            ),\n        };\n\n        let err = LixError::new(code, message);\n        match hint {\n            Some(hint) => err.with_hint(hint),\n            None => err,\n        }\n    }\n}\n\npub(crate) fn normalize_path_segment(raw: &str) -> Result<String, LixError> {\n    normalize_path_segment_impl(raw).map_err(PathError::into_lix_error)\n}\n\nfn normalize_path_segment_impl(raw: &str) -> PathResult<String> {\n    ensure_raw_path_input_len(raw)?;\n    let normalized = raw.nfc().collect::<String>();\n    let canonical = normalize_validated_path_segment(&normalized)?;\n    if canonical == \".\" || canonical == \"..\" {\n        return Err(PathError::DotSegment);\n    }\n    Ok(canonical)\n}\n\nfn validate_path_segment_chars(normalized: &str) -> PathResult<String> {\n    if normalized.is_empty() {\n        return Err(PathError::EmptySegment);\n    }\n    if normalized.contains('\\0') {\n        return Err(PathError::NulByte);\n    }\n    if normalized.contains('/') {\n        return Err(PathError::SlashInSegment);\n    }\n    if normalized.contains('\\\\') {\n        return Err(PathError::Backslash);\n    }\n    if !segment_has_valid_percent_encoding(&normalized) {\n        return Err(PathError::InvalidPercentEncoding);\n    }\n    let decoded = decode_percent_encoded_segment(normalized)?;\n    validate_decoded_path_segment_structure(&decoded)?;\n    
Ok(decoded)\n}\n\nfn normalize_validated_path_segment(normalized: &str) -> PathResult<String> {\n    let decoded = validate_path_segment_chars(normalized)?;\n    ensure_canonical_segment_len(&decoded)?;\n    let canonical = enforce_precis_segment(&decoded)?;\n    ensure_canonical_segment_len(&canonical)?;\n    Ok(canonical)\n}\n\nfn decode_percent_encoded_segment(segment: &str) -> PathResult<String> {\n    let bytes = segment.as_bytes();\n    let mut decoded = Vec::with_capacity(segment.len());\n    let mut index = 0usize;\n\n    while index < bytes.len() {\n        if bytes[index] == b'%' {\n            decoded.push((hex_value(bytes[index + 1]) << 4) | hex_value(bytes[index + 2]));\n            index += 3;\n            continue;\n        }\n\n        let ch = segment[index..]\n            .chars()\n            .next()\n            .expect(\"slice at char boundary should yield a char\");\n        let mut utf8 = [0u8; 4];\n        decoded.extend_from_slice(ch.encode_utf8(&mut utf8).as_bytes());\n        index += ch.len_utf8();\n    }\n\n    String::from_utf8(decoded).map_err(|_| PathError::InvalidPathSegmentCodePoint)\n}\n\nfn hex_value(byte: u8) -> u8 {\n    match byte {\n        b'0'..=b'9' => byte - b'0',\n        b'a'..=b'f' => 10 + (byte - b'a'),\n        b'A'..=b'F' => 10 + (byte - b'A'),\n        _ => unreachable!(\"hex_value only called after percent validation\"),\n    }\n}\n\nfn segment_has_valid_percent_encoding(segment: &str) -> bool {\n    let bytes = segment.as_bytes();\n    let mut index = 0usize;\n    while index < bytes.len() {\n        if bytes[index] == b'%' {\n            if index + 2 >= bytes.len() {\n                return false;\n            }\n            let hi = bytes[index + 1];\n            let lo = bytes[index + 2];\n            if !hi.is_ascii_hexdigit() || !lo.is_ascii_hexdigit() {\n                return false;\n            }\n            index += 3;\n            continue;\n        }\n        index += 1;\n    }\n    true\n}\n\nfn 
validate_decoded_path_segment_structure(segment: &str) -> PathResult<()> {\n    if segment.contains('\\0') {\n        return Err(PathError::NulByte);\n    }\n    if segment.contains('/') {\n        return Err(PathError::SlashInSegment);\n    }\n    if segment.contains('\\\\') {\n        return Err(PathError::Backslash);\n    }\n    if segment.contains('%') || segment.contains('?') || segment.contains('#') {\n        return Err(PathError::InvalidPathSegmentCodePoint);\n    }\n    if segment.chars().next().is_some_and(is_combining_mark) {\n        return Err(PathError::InvalidPathSegmentCodePoint);\n    }\n    Ok(())\n}\n\nfn enforce_precis_segment(segment: &str) -> PathResult<String> {\n    UsernameCasePreserved::new()\n        .enforce(segment)\n        .map(|segment| segment.into_owned())\n        .map_err(|_| PathError::InvalidPathSegmentCodePoint)\n}\n\nfn normalize_file_path_impl(path: &str) -> PathResult<String> {\n    ensure_raw_path_input_len(path)?;\n    let normalized = path.nfc().collect::<String>();\n    if !normalized.starts_with('/') {\n        return Err(PathError::MissingLeadingSlash);\n    }\n    if normalized == \"/\" {\n        return Err(PathError::InvalidRootUsage);\n    }\n    if normalized.ends_with('/') {\n        return Err(PathError::UnexpectedTrailingSlashOnFilePath);\n    }\n    if normalized.contains('\\\\') {\n        return Err(PathError::Backslash);\n    }\n    if normalized.contains(\"//\") {\n        return Err(PathError::EmptySegment);\n    }\n    let segments = normalized\n        .split('/')\n        .filter(|segment| !segment.is_empty())\n        .collect::<Vec<_>>();\n    if segments.is_empty() {\n        return Err(PathError::EmptySegment);\n    }\n    let canonical_segments = canonicalize_path_segments(&segments)?;\n    if canonical_segments.is_empty() {\n        return Err(PathError::InvalidRootUsage);\n    }\n    let canonical = format!(\"/{}\", canonical_segments.join(\"/\"));\n    ensure_canonical_path_len(&canonical)?;\n 
   Ok(canonical)\n}\n\npub(crate) fn normalize_directory_path(path: &str) -> Result<String, LixError> {\n    normalize_directory_path_impl(path).map_err(PathError::into_lix_error)\n}\n\nfn normalize_directory_path_impl(path: &str) -> PathResult<String> {\n    ensure_raw_path_input_len(path)?;\n    let normalized = path.nfc().collect::<String>();\n    if !normalized.starts_with('/') {\n        return Err(PathError::MissingLeadingSlash);\n    }\n    if normalized.contains('\\\\') {\n        return Err(PathError::Backslash);\n    }\n    if normalized.contains(\"//\") {\n        return Err(PathError::EmptySegment);\n    }\n    if normalized == \"/\" {\n        return Ok(\"/\".to_string());\n    }\n    if !normalized.ends_with('/') {\n        return Err(PathError::MissingTrailingSlashOnDirectoryPath);\n    }\n    let segments = normalized\n        .split('/')\n        .filter(|segment| !segment.is_empty())\n        .collect::<Vec<_>>();\n    let normalized_segments = canonicalize_path_segments(&segments)?;\n    if normalized_segments.is_empty() {\n        return Ok(\"/\".to_string());\n    }\n    let canonical = format!(\"/{}/\", normalized_segments.join(\"/\"));\n    ensure_canonical_path_len(&canonical)?;\n    Ok(canonical)\n}\n\nfn canonicalize_path_segments(segments: &[&str]) -> PathResult<Vec<String>> {\n    let mut canonical_segments = Vec::with_capacity(segments.len());\n\n    for segment in segments {\n        let normalized_segment = normalize_validated_path_segment(segment)?;\n        match normalized_segment.as_str() {\n            \".\" | \"..\" => return Err(PathError::DotSegment),\n            _ => canonical_segments.push(normalized_segment),\n        }\n    }\n\n    Ok(canonical_segments)\n}\n\nfn ensure_canonical_path_len(path: &str) -> PathResult<()> {\n    if path.len() > MAX_CANONICAL_PATH_BYTES {\n        Err(PathError::PathTooLong)\n    } else {\n        Ok(())\n    }\n}\n\nfn ensure_raw_path_input_len(path: &str) -> PathResult<()> {\n    if 
path.len() > MAX_RAW_PATH_INPUT_BYTES {\n        Err(PathError::RawPathInputTooLong)\n    } else {\n        Ok(())\n    }\n}\n\nfn ensure_canonical_segment_len(segment: &str) -> PathResult<()> {\n    if segment.len() > MAX_CANONICAL_PATH_SEGMENT_BYTES {\n        Err(PathError::SegmentTooLong)\n    } else {\n        Ok(())\n    }\n}\n\npub(crate) fn parse_file_path(path: &str) -> Result<ParsedFilePath, LixError> {\n    parse_file_path_impl(path).map_err(PathError::into_lix_error)\n}\n\nfn parse_file_path_impl(path: &str) -> PathResult<ParsedFilePath> {\n    let normalized_path = normalize_file_path_impl(path)?;\n    let segments = normalized_path\n        .split('/')\n        .filter(|segment| !segment.is_empty())\n        .collect::<Vec<_>>();\n    let file_name = segments\n        .last()\n        .ok_or(PathError::InvalidRootUsage)?\n        .to_string();\n    let directory_path = if segments.len() > 1 {\n        Some(NormalizedDirectoryPath::from_normalized(format!(\n            \"/{}/\",\n            segments[..segments.len() - 1].join(\"/\")\n        )))\n    } else {\n        None\n    };\n\n    Ok(ParsedFilePath {\n        normalized_path: NormalizedFilePath::from_normalized(normalized_path),\n        directory_path,\n        name: file_name,\n    })\n}\n\npub(crate) fn directory_ancestor_paths(path: &str) -> Vec<String> {\n    ancestor_directory_paths(path)\n}\n\nfn ancestor_directory_paths(path: &str) -> Vec<String> {\n    let segments = path\n        .trim_matches('/')\n        .split('/')\n        .filter(|segment| !segment.is_empty())\n        .collect::<Vec<_>>();\n    if segments.len() <= 1 {\n        return Vec::new();\n    }\n\n    let mut ancestors = Vec::with_capacity(segments.len() - 1);\n    let mut prefix_segments: Vec<&str> = Vec::with_capacity(segments.len() - 1);\n    for segment in segments.iter().take(segments.len() - 1) {\n        prefix_segments.push(segment);\n        ancestors.push(format!(\"/{}/\", prefix_segments.join(\"/\")));\n    
}\n    ancestors\n}\n\npub(crate) fn parent_directory_path(path: &str) -> Option<String> {\n    let segments = path\n        .trim_matches('/')\n        .split('/')\n        .filter(|segment| !segment.is_empty())\n        .collect::<Vec<_>>();\n    if segments.len() <= 1 {\n        return None;\n    }\n    Some(format!(\"/{}/\", segments[..segments.len() - 1].join(\"/\")))\n}\n\npub(crate) fn directory_name_from_path(path: &str) -> Option<String> {\n    path.trim_matches('/')\n        .split('/')\n        .filter(|segment| !segment.is_empty())\n        .next_back()\n        .map(|segment| segment.to_string())\n}\n\n#[cfg(test)]\npub(crate) fn compose_directory_path(parent_path: &str, name: &str) -> Result<String, LixError> {\n    let normalized_name = normalize_path_segment_impl(name).map_err(PathError::into_lix_error)?;\n    if parent_path == \"/\" {\n        Ok(format!(\"/{normalized_name}/\"))\n    } else if parent_path.starts_with('/') && parent_path.ends_with('/') {\n        Ok(format!(\"{parent_path}{normalized_name}/\"))\n    } else {\n        Err(PathError::InvalidDirectoryParentPath.into_lix_error())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use iref::iri::Path as IriPath;\n\n    #[derive(Clone, Copy, Debug)]\n    enum NormalizationKind {\n        File,\n        Directory,\n        Segment,\n    }\n\n    #[derive(Clone, Copy, Debug)]\n    enum LixFixtureKind {\n        File,\n        Directory,\n    }\n\n    #[derive(Clone, Copy, Debug)]\n    struct RfcFixture {\n        label: &'static str,\n        input: &'static str,\n    }\n\n    #[derive(Clone, Copy, Debug)]\n    struct LixProfileFixture {\n        label: &'static str,\n        kind: LixFixtureKind,\n        input: &'static str,\n        oracle_accepts: bool,\n        expected: Result<&'static str, PathError>,\n    }\n\n    #[derive(Clone, Copy, Debug)]\n    struct NormalizationFixture {\n        label: &'static str,\n        kind: NormalizationKind,\n        input: &'static 
str,\n        expected: &'static str,\n    }\n\n    fn assert_path_error<T: fmt::Debug>(result: PathResult<T>, expected: PathError) {\n        assert_eq!(result.unwrap_err(), expected);\n    }\n\n    fn iri_oracle_accepts(path: &str) -> bool {\n        IriPath::new(path).is_ok()\n    }\n\n    fn normalize_with_kind(kind: NormalizationKind, input: &str) -> Result<String, LixError> {\n        match kind {\n            NormalizationKind::File => {\n                normalize_file_path_impl(input).map_err(PathError::into_lix_error)\n            }\n            NormalizationKind::Directory => normalize_directory_path(input),\n            NormalizationKind::Segment => normalize_path_segment(input),\n        }\n    }\n\n    fn normalize_file_path(path: &str) -> Result<String, LixError> {\n        normalize_file_path_impl(path).map_err(PathError::into_lix_error)\n    }\n\n    fn assert_lix_profile_fixture(fixture: LixProfileFixture) {\n        assert_eq!(\n            iri_oracle_accepts(fixture.input),\n            fixture.oracle_accepts,\n            \"iref oracle mismatch for {} ({})\",\n            fixture.label,\n            fixture.input\n        );\n\n        match fixture.kind {\n            LixFixtureKind::File => match fixture.expected {\n                Ok(expected) => assert_eq!(\n                    normalize_file_path(fixture.input).as_deref(),\n                    Ok(expected),\n                    \"unexpected file result for {} ({})\",\n                    fixture.label,\n                    fixture.input\n                ),\n                Err(expected) => {\n                    assert_path_error(normalize_file_path_impl(fixture.input), expected)\n                }\n            },\n            LixFixtureKind::Directory => match fixture.expected {\n                Ok(expected) => assert_eq!(\n                    normalize_directory_path(fixture.input).as_deref(),\n                    Ok(expected),\n                    \"unexpected directory result for {} 
({})\",\n                    fixture.label,\n                    fixture.input\n                ),\n                Err(expected) => {\n                    assert_path_error(normalize_directory_path_impl(fixture.input), expected)\n                }\n            },\n        }\n    }\n\n    const RFC_POSITIVE_FIXTURES: &[RfcFixture] = &[\n        RfcFixture {\n            label: \"absolute unicode file path\",\n            input: \"/unicodé/段落.md\",\n        },\n        RfcFixture {\n            label: \"path with pchar punctuation\",\n            input: \"/docs/hello:world@x!$&'()*+,;=.md\",\n        },\n    ];\n\n    const RFC_NEGATIVE_FIXTURES: &[RfcFixture] = &[\n        RfcFixture {\n            label: \"invalid percent triplet\",\n            input: \"/docs/%zz.md\",\n        },\n        RfcFixture {\n            label: \"truncated percent triplet\",\n            input: \"/docs/%2\",\n        },\n        RfcFixture {\n            label: \"raw space is not allowed in an ipath\",\n            input: \"/docs/file name.md\",\n        },\n        RfcFixture {\n            label: \"raw fragment delimiter is not part of the path grammar\",\n            input: \"/docs/#hash\",\n        },\n        RfcFixture {\n            label: \"private use code point is excluded from ucschar\",\n            input: \"/docs/\\u{E000}.md\",\n        },\n    ];\n\n    const LIX_PROFILE_POSITIVE_FIXTURES: &[LixProfileFixture] = &[\n        LixProfileFixture {\n            label: \"root directory is representable\",\n            kind: LixFixtureKind::Directory,\n            input: \"/\",\n            oracle_accepts: true,\n            expected: Ok(\"/\"),\n        },\n        LixProfileFixture {\n            label: \"directory paths require trailing slash\",\n            kind: LixFixtureKind::Directory,\n            input: \"/docs/\",\n            oracle_accepts: true,\n            expected: Ok(\"/docs/\"),\n        },\n        LixProfileFixture {\n            label: \"file paths stay 
slashless at the end\",\n            kind: LixFixtureKind::File,\n            input: \"/docs/readme.md\",\n            oracle_accepts: true,\n            expected: Ok(\"/docs/readme.md\"),\n        },\n    ];\n\n    const LIX_PROFILE_NEGATIVE_FIXTURES: &[LixProfileFixture] = &[\n        LixProfileFixture {\n            label: \"relative-looking path is valid RFC syntax but not a Lix path\",\n            kind: LixFixtureKind::File,\n            input: \"docs/readme.md\",\n            oracle_accepts: true,\n            expected: Err(PathError::MissingLeadingSlash),\n        },\n        LixProfileFixture {\n            label: \"file paths reject trailing slash even though RFC syntax allows it\",\n            kind: LixFixtureKind::File,\n            input: \"/docs/\",\n            oracle_accepts: true,\n            expected: Err(PathError::UnexpectedTrailingSlashOnFilePath),\n        },\n        LixProfileFixture {\n            label: \"directory paths reject missing trailing slash even though RFC syntax allows it\",\n            kind: LixFixtureKind::Directory,\n            input: \"/docs\",\n            oracle_accepts: true,\n            expected: Err(PathError::MissingTrailingSlashOnDirectoryPath),\n        },\n        LixProfileFixture {\n            label: \"empty segments are valid RFC paths but banned by the Lix profile\",\n            kind: LixFixtureKind::File,\n            input: \"/docs//guide.md\",\n            oracle_accepts: true,\n            expected: Err(PathError::EmptySegment),\n        },\n        LixProfileFixture {\n            label: \"root is not a valid file path\",\n            kind: LixFixtureKind::File,\n            input: \"/\",\n            oracle_accepts: true,\n            expected: Err(PathError::InvalidRootUsage),\n        },\n        LixProfileFixture {\n            label: \"percent-encoded spaces are valid URI syntax but not Lix segment identity\",\n            kind: LixFixtureKind::File,\n            input: \"/docs/%20notes.md\",\n  
          oracle_accepts: true,\n            expected: Err(PathError::InvalidPathSegmentCodePoint),\n        },\n        LixProfileFixture {\n            label: \"bidi formatting is rejected by the Lix validator even though iref accepts it\",\n            kind: LixFixtureKind::File,\n            input: \"/docs/\\u{202E}.md\",\n            oracle_accepts: true,\n            expected: Err(PathError::InvalidPathSegmentCodePoint),\n        },\n        LixProfileFixture {\n            label: \"dot segments are valid RFC syntax but banned by the Lix profile\",\n            kind: LixFixtureKind::File,\n            input: \"/docs/../guide.md\",\n            oracle_accepts: true,\n            expected: Err(PathError::DotSegment),\n        },\n    ];\n\n    const NORMALIZATION_FIXTURES: &[NormalizationFixture] = &[\n        NormalizationFixture {\n            label: \"nfc composition happens before validation\",\n            kind: NormalizationKind::File,\n            input: \"/Cafe\\u{0301}.md\",\n            expected: \"/Café.md\",\n        },\n        NormalizationFixture {\n            label: \"percent-encoded segment text is decoded before storage\",\n            kind: NormalizationKind::Directory,\n            input: \"/docs/%43afe%CC%81/\",\n            expected: \"/docs/Café/\",\n        },\n        NormalizationFixture {\n            label: \"unreserved percent encoding is decoded\",\n            kind: NormalizationKind::File,\n            input: \"/docs/%7e%41.md\",\n            expected: \"/docs/~A.md\",\n        },\n        NormalizationFixture {\n            label: \"root survives directory normalization\",\n            kind: NormalizationKind::Directory,\n            input: \"/\",\n            expected: \"/\",\n        },\n        NormalizationFixture {\n            label: \"segment normalization decodes unreserved percent triplets\",\n            kind: NormalizationKind::Segment,\n            input: \"%7ehello\",\n            expected: \"~hello\",\n        
},\n    ];\n\n    #[test]\n    fn rfc_positive_path_fixtures_agree_with_iref() {\n        for fixture in RFC_POSITIVE_FIXTURES {\n            assert!(\n                iri_oracle_accepts(fixture.input),\n                \"iref should accept {} ({})\",\n                fixture.label,\n                fixture.input\n            );\n            assert!(\n                normalize_file_path_impl(fixture.input).is_ok(),\n                \"lix should accept {} ({})\",\n                fixture.label,\n                fixture.input\n            );\n        }\n    }\n\n    #[test]\n    fn rfc_negative_path_fixtures_agree_with_iref() {\n        for fixture in RFC_NEGATIVE_FIXTURES {\n            assert!(\n                !iri_oracle_accepts(fixture.input),\n                \"iref should reject {} ({})\",\n                fixture.label,\n                fixture.input\n            );\n            assert!(\n                normalize_file_path_impl(fixture.input).is_err(),\n                \"lix should reject {} ({})\",\n                fixture.label,\n                fixture.input\n            );\n        }\n    }\n\n    #[test]\n    fn lix_profile_positive_fixtures_are_pinned() {\n        for fixture in LIX_PROFILE_POSITIVE_FIXTURES {\n            assert_lix_profile_fixture(*fixture);\n        }\n    }\n\n    #[test]\n    fn lix_profile_negative_fixtures_document_divergence_from_the_oracle() {\n        for fixture in LIX_PROFILE_NEGATIVE_FIXTURES {\n            assert_lix_profile_fixture(*fixture);\n        }\n    }\n\n    #[test]\n    fn normalization_fixture_table_covers_canonicalization_rules() {\n        for fixture in NORMALIZATION_FIXTURES {\n            assert_eq!(\n                normalize_with_kind(fixture.kind, fixture.input).as_deref(),\n                Ok(fixture.expected),\n                \"unexpected normalized value for {} ({})\",\n                fixture.label,\n                fixture.input\n            );\n        }\n    }\n\n    #[test]\n    fn 
accepts_normalized_file_paths_with_unicode_and_percent_encoding() {\n        for path in [\n            \"/docs/readme.md\",\n            \"/a/b/c.txt\",\n            \"/dash--path\",\n            \"/unicodé/段落.md\",\n            \"/docs/hello:world@x!$&'()*+,;=.md\",\n        ] {\n            assert!(\n                normalize_file_path(path).is_ok(),\n                \"expected valid path {path}\"\n            );\n        }\n    }\n\n    #[test]\n    fn rejects_structural_file_path_anomalies() {\n        assert_path_error(normalize_file_path_impl(\"/\"), PathError::InvalidRootUsage);\n        assert_path_error(\n            normalize_file_path_impl(\"/trailing/\"),\n            PathError::UnexpectedTrailingSlashOnFilePath,\n        );\n        assert_path_error(\n            normalize_file_path_impl(\"no-leading\"),\n            PathError::MissingLeadingSlash,\n        );\n        assert_path_error(\n            normalize_file_path_impl(\"/bad//double\"),\n            PathError::EmptySegment,\n        );\n    }\n\n    #[test]\n    fn rejects_file_paths_with_dot_segments() {\n        for path in [\n            \"/docs/./file\",\n            \"/docs/../file\",\n            \"/docs/%2e/file\",\n            \"/docs/%2E%2E/file\",\n        ] {\n            assert_path_error(normalize_file_path_impl(path), PathError::DotSegment);\n        }\n    }\n\n    #[test]\n    fn rejects_file_paths_with_invalid_characters() {\n        for path in [\"/docs/file?.md\", \"/docs/#hash\", \"/docs/file name.md\"] {\n            assert_path_error(\n                normalize_file_path_impl(path),\n                PathError::InvalidPathSegmentCodePoint,\n            );\n        }\n    }\n\n    #[test]\n    fn rejects_file_paths_and_segments_over_length_limits() {\n        let segment_at_limit = \"a\".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES);\n        let path_at_limit = format!(\"/{segment_at_limit}\");\n        assert_eq!(\n            normalize_file_path(&path_at_limit).as_deref(),\n  
          Ok(path_at_limit.as_str())\n        );\n\n        let segment_over_limit = \"a\".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES + 1);\n        assert_path_error(\n            normalize_file_path_impl(&format!(\"/{segment_over_limit}\")),\n            PathError::SegmentTooLong,\n        );\n        assert_path_error(\n            normalize_path_segment_impl(&segment_over_limit),\n            PathError::SegmentTooLong,\n        );\n\n        let mut segments = Vec::new();\n        let mut raw_len = 1usize;\n        while raw_len <= MAX_CANONICAL_PATH_BYTES {\n            segments.push(\"abcd\");\n            raw_len = 1 + segments.join(\"/\").len();\n        }\n        assert_path_error(\n            normalize_file_path_impl(&format!(\"/{}\", segments.join(\"/\"))),\n            PathError::PathTooLong,\n        );\n    }\n\n    #[test]\n    fn rejects_file_paths_with_private_use_and_noncharacter_code_points() {\n        for path in [\"/docs/\\u{E000}.md\", \"/docs/\\u{FDD0}.md\"] {\n            assert_path_error(\n                normalize_file_path_impl(path),\n                PathError::InvalidPathSegmentCodePoint,\n            );\n        }\n    }\n\n    #[test]\n    fn rejects_file_paths_with_bidi_formatting_characters() {\n        for path in [\"/docs/\\u{200E}.md\", \"/docs/\\u{202E}.md\"] {\n            assert_path_error(\n                normalize_file_path_impl(path),\n                PathError::InvalidPathSegmentCodePoint,\n            );\n        }\n    }\n\n    #[test]\n    fn rejects_default_ignorable_and_invisible_segment_characters() {\n        for path in [\n            \"/docs/a\\u{200B}b.md\", // ZERO WIDTH SPACE\n            \"/docs/a\\u{200C}b.md\", // ZERO WIDTH NON-JOINER\n            \"/docs/a\\u{200D}b.md\", // ZERO WIDTH JOINER\n            \"/docs/a\\u{2060}b.md\", // WORD JOINER\n            \"/docs/a\\u{00AD}b.md\", // SOFT HYPHEN\n            \"/docs/a\\u{034F}b.md\", // COMBINING GRAPHEME JOINER\n            \"/docs/a\\u{180E}b.md\", 
// MONGOLIAN VOWEL SEPARATOR\n            \"/docs/a\\u{FEFF}b.md\", // ZERO WIDTH NO-BREAK SPACE\n        ] {\n            assert_path_error(\n                normalize_file_path_impl(path),\n                PathError::InvalidPathSegmentCodePoint,\n            );\n        }\n    }\n\n    #[test]\n    fn rejects_unicode_separators_and_leading_combining_marks() {\n        for path in [\n            \"/docs/a\\u{00A0}b.md\", // NO-BREAK SPACE\n            \"/docs/a\\u{2028}b.md\", // LINE SEPARATOR\n            \"/docs/a\\u{2029}b.md\", // PARAGRAPH SEPARATOR\n            \"/docs/\\u{0301}.md\",   // COMBINING ACUTE ACCENT\n        ] {\n            assert_path_error(\n                normalize_file_path_impl(path),\n                PathError::InvalidPathSegmentCodePoint,\n            );\n        }\n    }\n\n    #[test]\n    fn validates_percent_encoding_in_file_paths() {\n        assert_eq!(\n            normalize_file_path(\"/docs/%43afe%CC%81.md\").as_deref(),\n            Ok(\"/docs/Café.md\")\n        );\n        assert_path_error(\n            normalize_file_path_impl(\"/docs/%zz.md\"),\n            PathError::InvalidPercentEncoding,\n        );\n        assert_path_error(\n            normalize_file_path_impl(\"/docs/abc%.md\"),\n            PathError::InvalidPercentEncoding,\n        );\n        assert_path_error(\n            normalize_file_path_impl(\"/docs/abc%2.md\"),\n            PathError::InvalidPercentEncoding,\n        );\n    }\n\n    #[test]\n    fn applies_segment_length_limit_to_canonical_text_not_percent_encoded_boundary_spelling() {\n        let encoded_segment_at_limit = \"%61\".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES);\n        let canonical_segment_at_limit = \"a\".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES);\n        assert_eq!(\n            normalize_file_path(&format!(\"/{encoded_segment_at_limit}\")).as_deref(),\n            Ok(format!(\"/{canonical_segment_at_limit}\").as_str())\n        );\n        assert_eq!(\n            
normalize_directory_path(&format!(\"/{encoded_segment_at_limit}/\")).as_deref(),\n            Ok(format!(\"/{canonical_segment_at_limit}/\").as_str())\n        );\n\n        let encoded_segment_over_limit = \"%61\".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES + 1);\n        assert_path_error(\n            normalize_file_path_impl(&format!(\"/{encoded_segment_over_limit}\")),\n            PathError::SegmentTooLong,\n        );\n        assert_path_error(\n            normalize_directory_path_impl(&format!(\"/{encoded_segment_over_limit}/\")),\n            PathError::SegmentTooLong,\n        );\n    }\n\n    #[test]\n    fn rejects_raw_path_input_over_length_budget_before_unicode_processing() {\n        let huge_file_path = format!(\"/{}\", \"a\".repeat(1024 * 1024));\n        assert_path_error(\n            normalize_file_path_impl(&huge_file_path),\n            PathError::RawPathInputTooLong,\n        );\n\n        let huge_directory_path = format!(\"/{}/\", \"a\".repeat(1024 * 1024));\n        assert_path_error(\n            normalize_directory_path_impl(&huge_directory_path),\n            PathError::RawPathInputTooLong,\n        );\n    }\n\n    #[test]\n    fn rejects_percent_encoded_forbidden_code_points_in_file_paths() {\n        for (path, expected) in [\n            (\"/docs/%00evil.md\", PathError::NulByte),\n            (\"/docs/%2Fevil.md\", PathError::SlashInSegment),\n            (\"/docs/%5Cevil.md\", PathError::Backslash),\n            (\"/docs/%25evil.md\", PathError::InvalidPathSegmentCodePoint),\n            (\"/docs/%3Fevil.md\", PathError::InvalidPathSegmentCodePoint),\n            (\"/docs/%23evil.md\", PathError::InvalidPathSegmentCodePoint),\n            (\n                \"/docs/%E2%80%AEevil.md\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\n                \"/docs/%E2%80%8Eevil.md\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\n                
\"/docs/%E2%81%A0evil.md\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\n                \"/docs/%C2%ADevil.md\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\n                \"/docs/%CD%8Fevil.md\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\n                \"/docs/%E1%A0%8Eevil.md\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\n                \"/docs/%EF%BB%BFevil.md\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\n                \"/docs/%EF%B7%90evil.md\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\n                \"/docs/%EE%80%80evil.md\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\"/docs/%FFevil.md\", PathError::InvalidPathSegmentCodePoint),\n        ] {\n            assert_path_error(normalize_file_path_impl(path), expected);\n        }\n    }\n\n    #[test]\n    fn rejects_percent_encoded_forbidden_code_points_in_directory_paths() {\n        for (path, expected) in [\n            (\"/docs/%00evil/\", PathError::NulByte),\n            (\"/docs/%2Fevil/\", PathError::SlashInSegment),\n            (\"/docs/%5Cevil/\", PathError::Backslash),\n            (\n                \"/docs/%E2%80%AEevil/\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\n                \"/docs/%E2%80%8Eevil/\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\n                \"/docs/%E2%81%A0evil/\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\n                \"/docs/%EF%BB%BFevil/\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\n                \"/docs/%EF%B7%90evil/\",\n                
PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\n                \"/docs/%EE%80%80evil/\",\n                PathError::InvalidPathSegmentCodePoint,\n            ),\n            (\"/docs/%FFevil/\", PathError::InvalidPathSegmentCodePoint),\n        ] {\n            assert_path_error(normalize_directory_path_impl(path), expected);\n        }\n    }\n\n    #[test]\n    fn canonicalizes_percent_encoding_in_file_paths() {\n        assert_eq!(\n            normalize_file_path(\"/docs/%7e%41%2e%2E.md\").as_deref(),\n            Ok(\"/docs/~A...md\")\n        );\n        assert_path_error(\n            normalize_file_path_impl(\"/docs/%2fkept%3aencoded\"),\n            PathError::SlashInSegment,\n        );\n    }\n\n    #[test]\n    fn normalization_is_stable_on_renormalization() {\n        let once = normalize_file_path(\"/docs/%7e/%41.md\").expect(\"first normalization\");\n        let twice = normalize_file_path(&once).expect(\"second normalization\");\n        assert_eq!(once, twice);\n    }\n\n    #[test]\n    fn accepts_and_rejects_directory_paths_like_legacy_rules() {\n        for path in [\"/\", \"/docs/\", \"/docs/guides/\", \"/unicodé/章节/\"] {\n            assert!(\n                normalize_directory_path(path).is_ok(),\n                \"expected valid directory path {path}\"\n            );\n        }\n        assert_path_error(\n            normalize_directory_path_impl(\"/file.md\"),\n            PathError::MissingTrailingSlashOnDirectoryPath,\n        );\n        assert_path_error(\n            normalize_directory_path_impl(\"/docs\"),\n            PathError::MissingTrailingSlashOnDirectoryPath,\n        );\n        assert_path_error(\n            normalize_directory_path_impl(\"/docs/ \"),\n            PathError::MissingTrailingSlashOnDirectoryPath,\n        );\n        assert_path_error(\n            normalize_directory_path_impl(\"/docs/ /\"),\n            PathError::InvalidPathSegmentCodePoint,\n        );\n        
assert_path_error(\n            normalize_directory_path_impl(\"no-leading\"),\n            PathError::MissingLeadingSlash,\n        );\n        assert_path_error(\n            normalize_directory_path_impl(\"/docs/%zz/\"),\n            PathError::InvalidPercentEncoding,\n        );\n    }\n\n    #[test]\n    fn canonicalizes_directory_paths() {\n        assert_eq!(\n            normalize_directory_path(\"/docs/%43afe%CC%81/\").as_deref(),\n            Ok(\"/docs/Café/\")\n        );\n    }\n\n    #[test]\n    fn rejects_directory_paths_and_segments_over_length_limits() {\n        let segment_at_limit = \"a\".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES);\n        let path_at_limit = format!(\"/{segment_at_limit}/\");\n        assert_eq!(\n            normalize_directory_path(&path_at_limit).as_deref(),\n            Ok(path_at_limit.as_str())\n        );\n\n        let segment_over_limit = \"a\".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES + 1);\n        assert_path_error(\n            normalize_directory_path_impl(&format!(\"/{segment_over_limit}/\")),\n            PathError::SegmentTooLong,\n        );\n\n        let mut segments = Vec::new();\n        let mut raw_len = 1usize;\n        while raw_len <= MAX_CANONICAL_PATH_BYTES {\n            segments.push(\"abcd\");\n            raw_len = 2 + segments.join(\"/\").len();\n        }\n        assert_path_error(\n            normalize_directory_path_impl(&format!(\"/{}/\", segments.join(\"/\"))),\n            PathError::PathTooLong,\n        );\n    }\n\n    #[test]\n    fn rejects_directory_paths_with_dot_segments() {\n        for path in [\"/docs/./\", \"/docs/../\", \"/docs/%2e/\", \"/docs/%2E%2E/\"] {\n            assert_path_error(normalize_directory_path_impl(path), PathError::DotSegment);\n        }\n    }\n\n    #[test]\n    fn represents_root_as_a_normalized_directory_path() {\n        let root = NormalizedDirectoryPath::try_from_path(\"/\").expect(\"root path\");\n        assert_eq!(root.as_str(), \"/\");\n        
assert_eq!(\n            root,\n            NormalizedDirectoryPath::from_normalized(\"/\".to_string())\n        );\n    }\n\n    #[test]\n    fn root_parent_and_top_level_parent_are_absent() {\n        assert_eq!(parent_directory_path(\"/\"), None);\n        assert_eq!(parent_directory_path(\"/top-level.txt\"), None);\n    }\n\n    #[test]\n    fn compose_directory_path_under_root() {\n        assert_eq!(compose_directory_path(\"/\", \"docs\").as_deref(), Ok(\"/docs/\"));\n    }\n\n    #[test]\n    fn exposes_stable_lix_errors_with_hints() {\n        let missing_leading = normalize_file_path(\"docs/readme.md\").expect_err(\"leading slash\");\n        assert_eq!(missing_leading.code, \"LIX_ERROR_PATH_MISSING_LEADING_SLASH\");\n        assert_eq!(missing_leading.hint(), Some(\"prefix the path with '/'\"));\n\n        let bad_percent = normalize_file_path(\"/docs/%zz.md\").expect_err(\"bad percent\");\n        assert_eq!(bad_percent.code, \"LIX_ERROR_PATH_INVALID_PERCENT_ENCODING\");\n        assert_eq!(\n            bad_percent.hint(),\n            Some(\"use valid percent triplets only for URI boundary input; '%' is not allowed in canonical path segments\")\n        );\n\n        let root_file = normalize_file_path(\"/\").expect_err(\"root as file\");\n        assert_eq!(root_file.code, \"LIX_ERROR_PATH_INVALID_ROOT_USAGE\");\n        assert_eq!(\n            root_file.hint(),\n            Some(\"use '/' as a directory path, never as a file path\")\n        );\n\n        let long_segment = normalize_file_path(&format!(\n            \"/{}\",\n            \"a\".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES + 1)\n        ))\n        .expect_err(\"long segment\");\n        assert_eq!(long_segment.code, \"LIX_ERROR_PATH_SEGMENT_TOO_LONG\");\n        assert_eq!(\n            long_segment.hint(),\n            Some(\"keep each canonical path segment at or below 255 bytes\")\n        );\n\n        let long_input =\n            normalize_file_path(&format!(\"/{}\", 
\"a\".repeat(MAX_RAW_PATH_INPUT_BYTES + 1)))\n                .expect_err(\"long raw input\");\n        assert_eq!(long_input.code, \"LIX_ERROR_PATH_INPUT_TOO_LONG\");\n        assert_eq!(\n            long_input.hint(),\n            Some(\"keep raw path input at or below 16384 bytes\")\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/common/identity.rs",
    "content": "use std::borrow::Borrow;\nuse std::fmt;\nuse std::ops::Deref;\n\nuse crate::LixError;\nuse serde::{Deserialize, Deserializer, Serialize, Serializer};\n\nmacro_rules! canonical_identity_type {\n    ($name:ident, $label:literal) => {\n        #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]\n        pub struct $name(String);\n\n        impl $name {\n            pub fn new(value: impl Into<String>) -> Result<Self, LixError> {\n                let value = value.into();\n                validate_non_empty_identity_value($label, value).map(Self)\n            }\n\n            pub fn as_str(&self) -> &str {\n                &self.0\n            }\n\n            pub fn into_inner(self) -> String {\n                self.0\n            }\n        }\n\n        impl TryFrom<String> for $name {\n            type Error = LixError;\n\n            fn try_from(value: String) -> Result<Self, Self::Error> {\n                Self::new(value)\n            }\n        }\n\n        impl TryFrom<&str> for $name {\n            type Error = LixError;\n\n            fn try_from(value: &str) -> Result<Self, Self::Error> {\n                Self::new(value)\n            }\n        }\n\n        impl From<$name> for String {\n            fn from(value: $name) -> Self {\n                value.0\n            }\n        }\n\n        impl Deref for $name {\n            type Target = str;\n\n            fn deref(&self) -> &Self::Target {\n                self.0.as_str()\n            }\n        }\n\n        impl AsRef<str> for $name {\n            fn as_ref(&self) -> &str {\n                self.0.as_str()\n            }\n        }\n\n        impl Borrow<str> for $name {\n            fn borrow(&self) -> &str {\n                self.0.as_str()\n            }\n        }\n\n        impl fmt::Display for $name {\n            fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n                self.0.fmt(f)\n            }\n        }\n\n        impl PartialEq<&str> for $name 
{\n            fn eq(&self, other: &&str) -> bool {\n                self.0 == *other\n            }\n        }\n\n        impl PartialEq<$name> for &str {\n            fn eq(&self, other: &$name) -> bool {\n                *self == other.0\n            }\n        }\n\n        impl Serialize for $name {\n            fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>\n            where\n                S: Serializer,\n            {\n                serializer.serialize_str(&self.0)\n            }\n        }\n\n        impl<'de> Deserialize<'de> for $name {\n            fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n            where\n                D: Deserializer<'de>,\n            {\n                let value = String::deserialize(deserializer)?;\n                Self::new(value).map_err(serde::de::Error::custom)\n            }\n        }\n    };\n}\n\ncanonical_identity_type!(EntityId, \"entity_id\");\ncanonical_identity_type!(FileId, \"file_id\");\ncanonical_identity_type!(VersionId, \"version_id\");\ncanonical_identity_type!(CanonicalSchemaKey, \"schema_key\");\ncanonical_identity_type!(CanonicalPluginKey, \"plugin_key\");\n\npub(crate) fn validate_non_empty_identity_value(\n    label: &str,\n    value: impl Into<String>,\n) -> Result<String, LixError> {\n    let value = value.into();\n    if value.is_empty() {\n        return Err(LixError::new(\n            LixError::CODE_INVALID_PARAM,\n            format!(\"{label} must be non-empty\"),\n        ));\n    }\n    Ok(value)\n}\n\npub(crate) fn json_pointer_get<'a>(\n    value: &'a serde_json::Value,\n    pointer: &[String],\n) -> Option<&'a serde_json::Value> {\n    let mut current = value;\n    for segment in pointer {\n        match current {\n            serde_json::Value::Object(object) => current = object.get(segment)?,\n            serde_json::Value::Array(array) => {\n                let index = segment.parse::<usize>().ok()?;\n                current = array.get(index)?;\n  
          }\n            _ => return None,\n        }\n    }\n    Some(current)\n}\n"
  },
  {
    "path": "packages/engine/src/common/json_pointer.rs",
    "content": "use crate::LixError;\n\npub(crate) fn parse_json_pointer(pointer: &str) -> Result<Vec<String>, LixError> {\n    if pointer.is_empty() {\n        return Ok(Vec::new());\n    }\n    if !pointer.starts_with('/') {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\"invalid JSON pointer '{pointer}'\"),\n        ));\n    }\n    pointer[1..]\n        .split('/')\n        .map(decode_json_pointer_segment)\n        .collect()\n}\n\npub(crate) fn format_json_pointer(segments: &[String]) -> String {\n    if segments.is_empty() {\n        return String::new();\n    }\n    format!(\n        \"/{}\",\n        segments\n            .iter()\n            .map(|segment| segment.replace('~', \"~0\").replace('/', \"~1\"))\n            .collect::<Vec<_>>()\n            .join(\"/\")\n    )\n}\n\npub(crate) fn top_level_property_name(pointer: &str) -> Result<Option<String>, LixError> {\n    if pointer.is_empty() {\n        return Ok(None);\n    }\n    if !pointer.starts_with('/') {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\"invalid JSON pointer '{pointer}'\"),\n        ));\n    }\n    let segment = pointer[1..].split('/').next().unwrap_or_default();\n    Ok(Some(decode_json_pointer_segment(segment)?))\n}\n\nfn decode_json_pointer_segment(segment: &str) -> Result<String, LixError> {\n    let mut out = String::new();\n    let mut chars = segment.chars();\n    while let Some(ch) = chars.next() {\n        if ch != '~' {\n            out.push(ch);\n            continue;\n        }\n        match chars.next() {\n            Some('0') => out.push('~'),\n            Some('1') => out.push('/'),\n            _ => {\n                return Err(LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    \"invalid JSON pointer escape\",\n                ))\n            }\n        }\n    }\n    Ok(out)\n}\n"
  },
  {
    "path": "packages/engine/src/common/metadata.rs",
    "content": "use crate::LixError;\n\npub(crate) fn parse_row_metadata(\n    value: &str,\n    context: impl AsRef<str>,\n) -> Result<String, LixError> {\n    let metadata = parse_row_metadata_value(value, context)?;\n    Ok(serde_json::to_string(&metadata).expect(\"serde_json::Value metadata serializes\"))\n}\n\npub(crate) fn parse_row_metadata_value(\n    value: &str,\n    context: impl AsRef<str>,\n) -> Result<serde_json::Value, LixError> {\n    let metadata = serde_json::from_str::<serde_json::Value>(value).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_INVALID_JSON\",\n            format!(\"{} metadata is invalid JSON: {error}\", context.as_ref()),\n        )\n    })?;\n    validate_row_metadata(&metadata, context)?;\n    Ok(metadata)\n}\n\npub(crate) fn validate_row_metadata(\n    metadata: &serde_json::Value,\n    context: impl AsRef<str>,\n) -> Result<(), LixError> {\n    if metadata.is_object() {\n        return Ok(());\n    }\n    Err(LixError::new(\n        LixError::CODE_SCHEMA_VALIDATION,\n        format!(\"{} metadata must be a JSON object\", context.as_ref()),\n    ))\n}\n\npub(crate) fn serialize_row_metadata(metadata: &String) -> String {\n    metadata.clone()\n}\n"
  },
  {
    "path": "packages/engine/src/common/mod.rs",
    "content": "pub(crate) mod error;\npub(crate) mod fingerprint;\npub(crate) mod fs_path;\npub(crate) mod identity;\npub(crate) mod json_pointer;\npub(crate) mod metadata;\npub(crate) mod types;\npub(crate) mod wire;\n\npub use error::LixError;\npub(crate) use fingerprint::stable_content_fingerprint_hex;\npub(crate) use fs_path::{\n    directory_ancestor_paths, directory_name_from_path, normalize_directory_path,\n    normalize_path_segment, parent_directory_path, ParsedFilePath,\n};\npub(crate) use identity::{json_pointer_get, validate_non_empty_identity_value};\npub use identity::{CanonicalPluginKey, CanonicalSchemaKey, EntityId, FileId, VersionId};\npub(crate) use json_pointer::{format_json_pointer, parse_json_pointer, top_level_property_name};\npub(crate) use metadata::{\n    parse_row_metadata, parse_row_metadata_value, serialize_row_metadata, validate_row_metadata,\n};\npub use types::{LixNotice, NullableKeyFilter, SqlQueryResult, Value, WriteReceipt};\npub use wire::{WireQueryResult, WireValue};\n"
  },
  {
    "path": "packages/engine/src/common/types.rs",
    "content": "use std::ops::Deref;\n\n#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize)]\npub enum Value {\n    Null,\n    Boolean(bool),\n    Integer(i64),\n    Real(f64),\n    Text(String),\n    Json(serde_json::Value),\n    Blob(Vec<u8>),\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub enum NullableKeyFilter<T> {\n    Any,\n    Null,\n    Value(T),\n}\n\nimpl<T> Default for NullableKeyFilter<T> {\n    fn default() -> Self {\n        Self::Any\n    }\n}\n\nimpl<T> NullableKeyFilter<T> {\n    pub fn is_any(&self) -> bool {\n        matches!(self, Self::Any)\n    }\n\n    pub fn as_value(&self) -> Option<&T> {\n        match self {\n            Self::Value(value) => Some(value),\n            Self::Any | Self::Null => None,\n        }\n    }\n\n    pub fn as_ref(&self) -> NullableKeyFilter<&T> {\n        match self {\n            Self::Any => NullableKeyFilter::Any,\n            Self::Null => NullableKeyFilter::Null,\n            Self::Value(value) => NullableKeyFilter::Value(value),\n        }\n    }\n\n    pub fn from_nullable(value: Option<T>) -> Self {\n        match value {\n            Some(value) => Self::Value(value),\n            None => Self::Null,\n        }\n    }\n}\n\nimpl<T> NullableKeyFilter<T>\nwhere\n    T: Deref,\n{\n    pub fn as_deref(&self) -> NullableKeyFilter<&T::Target> {\n        match self {\n            Self::Any => NullableKeyFilter::Any,\n            Self::Null => NullableKeyFilter::Null,\n            Self::Value(value) => NullableKeyFilter::Value(value.deref()),\n        }\n    }\n}\n\nimpl<T: PartialEq> NullableKeyFilter<T> {\n    pub fn matches(&self, candidate: Option<&T>) -> bool {\n        match self {\n            Self::Any => true,\n            Self::Null => candidate.is_none(),\n            Self::Value(expected) => candidate == Some(expected),\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize)]\npub struct SqlQueryResult 
{\n    pub rows: Vec<Vec<Value>>,\n    #[serde(default)]\n    pub columns: Vec<String>,\n    #[serde(default)]\n    pub notices: Vec<LixNotice>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub struct LixNotice {\n    pub code: String,\n    pub message: String,\n    #[serde(default, skip_serializing_if = \"Option::is_none\")]\n    pub hint: Option<String>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)]\npub struct WriteReceipt {\n    #[serde(default, skip_serializing_if = \"Option::is_none\")]\n    pub state_commit_sequence: Option<u64>,\n}\n\nimpl WriteReceipt {\n    pub fn is_empty(&self) -> bool {\n        self.state_commit_sequence.is_none()\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/common/wire.rs",
    "content": "use crate::{LixError, LixNotice, SqlQueryResult, Value};\nuse base64::Engine as _;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]\n#[serde(tag = \"kind\", rename_all = \"lowercase\")]\npub enum WireValue {\n    Null { value: () },\n    Bool { value: bool },\n    Int { value: i64 },\n    Float { value: f64 },\n    Text { value: String },\n    Json { value: serde_json::Value },\n    Blob { base64: String },\n}\n\n#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]\npub struct WireQueryResult {\n    pub rows: Vec<Vec<WireValue>>,\n    #[serde(default)]\n    pub columns: Vec<String>,\n    #[serde(default)]\n    pub notices: Vec<LixNotice>,\n}\n\nimpl WireValue {\n    pub fn try_from_engine(value: &Value) -> Result<Self, LixError> {\n        match value {\n            Value::Null => Ok(Self::Null { value: () }),\n            Value::Boolean(value) => Ok(Self::Bool { value: *value }),\n            Value::Integer(value) => Ok(Self::Int { value: *value }),\n            Value::Real(value) => {\n                if !value.is_finite() {\n                    return Err(LixError {\n                        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                        message: \"cannot encode non-finite float value to wire format\".to_string(),\n                        hint: None,\n                        details: None,\n                    });\n                }\n                Ok(Self::Float { value: *value })\n            }\n            Value::Text(value) => Ok(Self::Text {\n                value: value.clone(),\n            }),\n            Value::Json(value) => Ok(Self::Json {\n                value: value.clone(),\n            }),\n            Value::Blob(value) => Ok(Self::Blob {\n                base64: base64::engine::general_purpose::STANDARD.encode(value),\n            }),\n        }\n    }\n\n    pub fn try_into_engine(self) -> Result<Value, LixError> {\n        match self {\n            
Self::Null { .. } => Ok(Value::Null),\n            Self::Bool { value } => Ok(Value::Boolean(value)),\n            Self::Int { value } => Ok(Value::Integer(value)),\n            Self::Float { value } => {\n                if !value.is_finite() {\n                    return Err(LixError {\n                        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                        message: \"cannot decode non-finite float value from wire format\"\n                            .to_string(),\n                        hint: None,\n                        details: None,\n                    });\n                }\n                Ok(Value::Real(value))\n            }\n            Self::Text { value } => Ok(Value::Text(value)),\n            Self::Json { value } => Ok(Value::Json(value)),\n            Self::Blob { base64 } => {\n                let decoded = base64::engine::general_purpose::STANDARD\n                    .decode(base64.as_bytes())\n                    .map_err(|error| LixError {\n                        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                        message: format!(\"failed to decode wire blob base64: {error}\"),\n                        hint: None,\n                        details: None,\n                    })?;\n                Ok(Value::Blob(decoded))\n            }\n        }\n    }\n}\n\nimpl WireQueryResult {\n    pub fn try_from_engine(result: &SqlQueryResult) -> Result<Self, LixError> {\n        let mut rows = Vec::with_capacity(result.rows.len());\n        for row in &result.rows {\n            let mut wire_row = Vec::with_capacity(row.len());\n            for value in row {\n                wire_row.push(WireValue::try_from_engine(value)?);\n            }\n            rows.push(wire_row);\n        }\n        Ok(Self {\n            rows,\n            columns: result.columns.clone(),\n            notices: result.notices.clone(),\n        })\n    }\n\n    pub fn try_into_engine(self) -> Result<SqlQueryResult, LixError> {\n        let 
mut rows = Vec::with_capacity(self.rows.len());\n        for row in self.rows {\n            let mut engine_row = Vec::with_capacity(row.len());\n            for value in row {\n                engine_row.push(value.try_into_engine()?);\n            }\n            rows.push(engine_row);\n        }\n        Ok(SqlQueryResult {\n            rows,\n            columns: self.columns,\n            notices: self.notices,\n        })\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{WireQueryResult, WireValue};\n    use crate::{LixNotice, SqlQueryResult, Value};\n    use serde_json::json;\n\n    #[test]\n    fn value_roundtrip_preserves_all_variants() {\n        let original = vec![\n            Value::Null,\n            Value::Boolean(true),\n            Value::Integer(42),\n            Value::Real(1.5),\n            Value::Text(\"hello\".to_string()),\n            Value::Json(json!({\"hello\": \"world\"})),\n            Value::Blob(vec![1, 2, 3]),\n        ];\n\n        for value in original {\n            let wire = WireValue::try_from_engine(&value).expect(\"to wire should succeed\");\n            let roundtrip = wire\n                .try_into_engine()\n                .expect(\"from wire to engine should succeed\");\n            assert_eq!(roundtrip, value);\n        }\n    }\n\n    #[test]\n    fn query_result_roundtrip_preserves_rows_and_columns() {\n        let original = SqlQueryResult {\n            rows: vec![\n                vec![\n                    Value::Integer(1),\n                    Value::Text(\"a\".to_string()),\n                    Value::Blob(vec![0x41, 0x42]),\n                ],\n                vec![Value::Null, Value::Boolean(false), Value::Real(2.5)],\n            ],\n            columns: vec![\"i\".to_string(), \"t\".to_string(), \"b\".to_string()],\n            notices: vec![LixNotice {\n                code: \"LIX_TEST_NOTICE\".to_string(),\n                message: \"test notice\".to_string(),\n                hint: Some(\"test 
hint\".to_string()),\n            }],\n        };\n\n        let wire = WireQueryResult::try_from_engine(&original).expect(\"to wire should succeed\");\n        let roundtrip = wire\n            .try_into_engine()\n            .expect(\"from wire to engine should succeed\");\n        assert_eq!(roundtrip, original);\n    }\n\n    #[test]\n    fn canonical_json_uses_lowercase_kinds_only() {\n        let wire = WireQueryResult {\n            rows: vec![vec![\n                WireValue::Null { value: () },\n                WireValue::Bool { value: true },\n                WireValue::Int { value: 1 },\n                WireValue::Float { value: 1.5 },\n                WireValue::Text {\n                    value: \"hello\".to_string(),\n                },\n                WireValue::Json {\n                    value: json!({\"hello\": \"world\"}),\n                },\n                WireValue::Blob {\n                    base64: \"AQI=\".to_string(),\n                },\n            ]],\n            columns: vec![\"a\".to_string()],\n            notices: Vec::new(),\n        };\n\n        let serialized =\n            serde_json::to_string(&wire).expect(\"wire query result should serialize to json\");\n        assert!(serialized.contains(\"\\\"kind\\\":\\\"null\\\"\"));\n        assert!(serialized.contains(\"\\\"kind\\\":\\\"bool\\\"\"));\n        assert!(serialized.contains(\"\\\"kind\\\":\\\"int\\\"\"));\n        assert!(serialized.contains(\"\\\"kind\\\":\\\"float\\\"\"));\n        assert!(serialized.contains(\"\\\"kind\\\":\\\"text\\\"\"));\n        assert!(serialized.contains(\"\\\"kind\\\":\\\"json\\\"\"));\n        assert!(serialized.contains(\"\\\"kind\\\":\\\"blob\\\"\"));\n        assert!(!serialized.contains(\"\\\"kind\\\":\\\"Null\\\"\"));\n        assert!(!serialized.contains(\"\\\"kind\\\":\\\"Bool\\\"\"));\n        assert!(!serialized.contains(\"\\\"kind\\\":\\\"Integer\\\"\"));\n        assert!(!serialized.contains(\"\\\"kind\\\":\\\"Real\\\"\"));\n     
   assert!(!serialized.contains(\"\\\"kind\\\":\\\"Text\\\"\"));\n        assert!(!serialized.contains(\"\\\"kind\\\":\\\"Json\\\"\"));\n        assert!(!serialized.contains(\"\\\"kind\\\":\\\"Blob\\\"\"));\n    }\n\n    #[test]\n    fn null_shape_is_explicitly_canonical() {\n        let value = WireValue::Null { value: () };\n        let json = serde_json::to_value(value).expect(\"wire value should serialize\");\n        assert_eq!(json, json!({ \"kind\": \"null\", \"value\": null }));\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/domain.rs",
    "content": "use crate::entity_identity::EntityIdentity;\nuse crate::live_state::MaterializedLiveStateRow;\nuse crate::{NullableKeyFilter, GLOBAL_VERSION_ID};\n\n/// Validation/storage coordinate for repository facts.\n///\n/// A domain is the complete scope in which a row identity is meaningful:\n/// version, durability, and file scope. Projection methods on this type are\n/// deliberately named so callers cannot silently erase part of the coordinate.\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]\npub(crate) struct Domain {\n    version_id: String,\n    untracked: bool,\n    file_scope: DomainFileScope,\n}\n\nimpl Domain {\n    pub(crate) fn exact_file(\n        version_id: impl Into<String>,\n        untracked: bool,\n        file_id: Option<String>,\n    ) -> Self {\n        Self {\n            version_id: version_id.into(),\n            untracked,\n            file_scope: DomainFileScope::Exact(file_id),\n        }\n    }\n\n    pub(crate) fn any_file(version_id: impl Into<String>, untracked: bool) -> Self {\n        Self {\n            version_id: version_id.into(),\n            untracked,\n            file_scope: DomainFileScope::Any,\n        }\n    }\n\n    pub(crate) fn schema_catalog(version_id: impl Into<String>, untracked: bool) -> Self {\n        Self::any_file(version_id, untracked)\n    }\n\n    pub(crate) fn for_live_row(row: &MaterializedLiveStateRow) -> Self {\n        Self::exact_file(row.version_id.clone(), row.untracked, row.file_id.clone())\n    }\n\n    pub(crate) fn schema_catalog_domain(&self) -> Self {\n        // Schema definitions are version + durability scoped. 
They are not\n        // owned by a data file, so schema catalog lookup deliberately erases\n        // row file scope into `Any`.\n        Self::schema_catalog(self.version_id.clone(), self.untracked)\n    }\n\n    pub(crate) fn version_id(&self) -> &str {\n        &self.version_id\n    }\n\n    pub(crate) fn untracked(&self) -> bool {\n        self.untracked\n    }\n\n    pub(crate) fn fingerprint_component(&self) -> String {\n        let file_scope = match &self.file_scope {\n            DomainFileScope::Any => \"*\".to_string(),\n            DomainFileScope::Exact(Some(file_id)) => format!(\"={file_id}\"),\n            DomainFileScope::Exact(None) => \"=\".to_string(),\n        };\n        format!(\"{}|{}|{}\", self.version_id, self.untracked, file_scope)\n    }\n\n    #[cfg(test)]\n    pub(crate) fn file_scope(&self) -> &DomainFileScope {\n        &self.file_scope\n    }\n\n    pub(crate) fn is_exact_file(&self, file_id: &Option<String>) -> bool {\n        matches!(&self.file_scope, DomainFileScope::Exact(exact) if exact == file_id)\n    }\n\n    pub(crate) fn with_untracked(&self, untracked: bool) -> Self {\n        Self {\n            version_id: self.version_id.clone(),\n            untracked,\n            file_scope: self.file_scope.clone(),\n        }\n    }\n\n    pub(crate) fn with_file_scope(&self, file_scope: DomainFileScope) -> Self {\n        Self {\n            version_id: self.version_id.clone(),\n            untracked: self.untracked,\n            file_scope,\n        }\n    }\n\n    pub(crate) fn with_exact_file_scope(&self, file_id: Option<String>) -> Self {\n        self.with_file_scope(DomainFileScope::Exact(file_id))\n    }\n\n    pub(crate) fn file_filters(&self) -> Vec<NullableKeyFilter<String>> {\n        match &self.file_scope {\n            DomainFileScope::Any => Vec::new(),\n            DomainFileScope::Exact(file_id) => vec![nullable_filter_from_option(file_id)],\n        }\n    }\n\n    pub(crate) fn contains(&self, row: 
&MaterializedLiveStateRow) -> bool {\n        row.version_id == self.version_id\n            && row.untracked == self.untracked\n            && committed_row_is_exact_version_scoped(row, &self.version_id)\n            && match &self.file_scope {\n                DomainFileScope::Any => true,\n                DomainFileScope::Exact(file_id) => row.file_id == *file_id,\n            }\n    }\n\n    fn reachable_target_domains(&self) -> Vec<Self> {\n        if self.untracked {\n            vec![self.with_untracked(false), self.clone()]\n        } else {\n            vec![self.clone()]\n        }\n    }\n\n    fn source_domains_that_can_reach(&self) -> Vec<Self> {\n        if self.untracked {\n            vec![self.clone()]\n        } else {\n            vec![self.clone(), self.with_untracked(true)]\n        }\n    }\n\n    fn can_reach(&self, target: &Self) -> bool {\n        self.version_id == target.version_id\n            && self.file_scope == target.file_scope\n            && (self.untracked || !target.untracked)\n    }\n\n    pub(crate) fn schema_catalog_domains(&self) -> Vec<Self> {\n        self.schema_catalog_domain().reachable_target_domains()\n    }\n\n    pub(crate) fn fk_target_domains(&self) -> Vec<Self> {\n        self.reachable_target_domains()\n    }\n\n    pub(crate) fn fk_source_domains_for_target(&self) -> Vec<Self> {\n        self.source_domains_that_can_reach()\n    }\n\n    pub(crate) fn file_owner_domains(&self) -> Vec<Self> {\n        self.reachable_target_domains()\n    }\n\n    pub(crate) fn directory_parent_domains(&self) -> Vec<Self> {\n        self.reachable_target_domains()\n    }\n\n    pub(crate) fn version_descriptor_domains_for_ref_delete(&self) -> Vec<Self> {\n        self.source_domains_that_can_reach()\n    }\n\n    pub(crate) fn file_scoped_row_domains_for_file_descriptor_delete(&self) -> Vec<Self> {\n        self.source_domains_that_can_reach()\n    }\n\n    pub(crate) fn validation_scope_contains_constraint_domain(&self, target: 
&Self) -> bool {\n        self.can_reach(target)\n    }\n\n    pub(crate) fn tombstone_domain_affects_validation_scope(\n        &self,\n        validation_scope: &Self,\n    ) -> bool {\n        self.can_reach(validation_scope)\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]\npub(crate) enum DomainFileScope {\n    Any,\n    Exact(Option<String>),\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\npub(crate) struct DomainRowIdentity {\n    domain: Domain,\n    schema_key: String,\n    entity_id: EntityIdentity,\n}\n\nimpl DomainRowIdentity {\n    pub(crate) fn new(\n        domain: Domain,\n        schema_key: impl Into<String>,\n        entity_id: EntityIdentity,\n    ) -> Self {\n        Self {\n            domain,\n            schema_key: schema_key.into(),\n            entity_id,\n        }\n    }\n\n    pub(crate) fn from_live_row(row: &MaterializedLiveStateRow) -> Self {\n        Self::new(\n            Domain::for_live_row(row),\n            row.schema_key.clone(),\n            row.entity_id.clone(),\n        )\n    }\n\n    pub(crate) fn in_domain(\n        domain: Domain,\n        schema_key: impl Into<String>,\n        entity_id: EntityIdentity,\n    ) -> Self {\n        Self::new(domain, schema_key, entity_id)\n    }\n\n    #[cfg(test)]\n    pub(crate) fn exact(\n        version_id: impl Into<String>,\n        untracked: bool,\n        file_id: Option<String>,\n        schema_key: impl Into<String>,\n        entity_id: EntityIdentity,\n    ) -> Self {\n        Self::new(\n            Domain::exact_file(version_id, untracked, file_id),\n            schema_key,\n            entity_id,\n        )\n    }\n\n    pub(crate) fn with_domain(&self, domain: Domain) -> Self {\n        Self {\n            domain,\n            schema_key: self.schema_key.clone(),\n            entity_id: self.entity_id.clone(),\n        }\n    }\n\n    pub(crate) fn domain(&self) -> &Domain {\n        &self.domain\n    }\n\n    pub(crate) fn 
schema_key(&self) -> &str {\n        &self.schema_key\n    }\n\n    pub(crate) fn schema_key_owned(&self) -> String {\n        self.schema_key.clone()\n    }\n\n    pub(crate) fn entity_id(&self) -> &EntityIdentity {\n        &self.entity_id\n    }\n\n    pub(crate) fn entity_id_owned(&self) -> EntityIdentity {\n        self.entity_id.clone()\n    }\n\n    pub(crate) fn matches_parts(\n        &self,\n        domain: &Domain,\n        schema_key: &str,\n        entity_id: &EntityIdentity,\n    ) -> bool {\n        &self.domain == domain && self.schema_key == schema_key && &self.entity_id == entity_id\n    }\n\n    pub(crate) fn reachable_target_identities(&self) -> Vec<Self> {\n        self.domain\n            .fk_target_domains()\n            .into_iter()\n            .map(|domain| self.with_domain(domain))\n            .collect()\n    }\n\n    pub(crate) fn source_identities_that_can_reach(&self) -> Vec<Self> {\n        self.domain\n            .fk_source_domains_for_target()\n            .into_iter()\n            .map(|domain| self.with_domain(domain))\n            .collect()\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]\npub(crate) struct DomainSchemaIdentity {\n    domain: Domain,\n    schema_key: String,\n}\n\nimpl DomainSchemaIdentity {\n    pub(crate) fn new(domain: Domain, schema_key: impl Into<String>) -> Self {\n        Self {\n            domain: domain.schema_catalog_domain(),\n            schema_key: schema_key.into(),\n        }\n    }\n\n    pub(crate) fn fingerprint_component(&self) -> String {\n        format!(\n            \"{}|{}\",\n            self.domain.fingerprint_component(),\n            self.schema_key\n        )\n    }\n}\n\npub(crate) fn committed_row_is_exact_version_scoped(\n    row: &MaterializedLiveStateRow,\n    version_id: &str,\n) -> bool {\n    row.version_id == version_id && row.global == (row.version_id == GLOBAL_VERSION_ID)\n}\n\nfn nullable_filter_from_option(value: &Option<String>) -> 
NullableKeyFilter<String> {\n    match value {\n        Some(value) => NullableKeyFilter::Value(value.clone()),\n        None => NullableKeyFilter::Null,\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/engine.rs",
    "content": "use std::sync::Arc;\n\nuse crate::binary_cas::BinaryCasContext;\nuse crate::catalog::CatalogContext;\nuse crate::commit_graph::CommitGraphContext;\nuse crate::commit_store::CommitStoreContext;\nuse crate::entity_identity::EntityIdentity;\nuse crate::init::InitReceipt;\nuse crate::live_state::LiveStateContext;\nuse crate::live_state::LiveStateRowRequest;\nuse crate::session::SessionContext;\nuse crate::storage::{StorageContext, StorageWriteSet};\nuse crate::tracked_state::TrackedStateContext;\nuse crate::untracked_state::UntrackedStateContext;\nuse crate::version::{VersionContext, VersionRefReader};\nuse crate::GLOBAL_VERSION_ID;\nuse crate::{Backend, LixError, NullableKeyFilter};\n\n#[derive(Clone)]\npub struct Engine {\n    storage: StorageContext,\n    tracked_state: Arc<TrackedStateContext>,\n    live_state: Arc<LiveStateContext>,\n    version_ctx: Arc<VersionContext>,\n    binary_cas: Arc<BinaryCasContext>,\n    commit_store: Arc<CommitStoreContext>,\n    catalog_context: Arc<CatalogContext>,\n}\n\nimpl Engine {\n    /// Seeds an empty backend with the engine repository bootstrap facts.\n    ///\n    /// Initialization is a storage lifecycle operation, separate from runtime\n    /// construction. 
Call this before `Engine::new(...)` for a brand-new\n    /// backend.\n    pub async fn initialize(\n        backend: Box<dyn Backend + Send + Sync>,\n    ) -> Result<InitReceipt, LixError> {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::from(backend);\n        let storage = StorageContext::new(backend);\n        let commit_store = CommitStoreContext::new();\n\n        crate::init::initialize(\n            storage,\n            &commit_store,\n            &TrackedStateContext::new(),\n            &UntrackedStateContext::new(),\n        )\n        .await\n    }\n\n    /// Creates a clean DataFusion-first engine over an initialized backend.\n    ///\n    /// SessionContext, execution, and transaction overlays are layered below the\n    /// instance instead of being hidden behind a legacy boot path.\n    pub async fn new(backend: Box<dyn Backend + Send + Sync>) -> Result<Self, LixError> {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::from(backend);\n        let storage = StorageContext::new(backend);\n\n        let tracked_state = Arc::new(TrackedStateContext::new());\n        let untracked_state = Arc::new(UntrackedStateContext::new());\n        let commit_store = Arc::new(CommitStoreContext::new());\n        let commit_graph = CommitGraphContext::new();\n        let live_state = Arc::new(LiveStateContext::new(\n            tracked_state.as_ref().clone(),\n            untracked_state.as_ref().clone(),\n            commit_graph,\n        ));\n        let version_ctx = Arc::new(VersionContext::new(Arc::clone(&untracked_state)));\n        assert_initialized(storage.clone(), live_state.as_ref()).await?;\n\n        // SessionContext::execute later projects these stable state contexts into one\n        // execution-scoped SQL context, optionally wrapped by a transaction\n        // overlay for writes.\n\n        Ok(Self {\n            binary_cas: Arc::new(BinaryCasContext::new()),\n            commit_store,\n            storage,\n            tracked_state,\n    
        live_state,\n            version_ctx,\n            catalog_context: Arc::new(CatalogContext::new()),\n        })\n    }\n\n    pub(crate) fn storage(&self) -> StorageContext {\n        self.storage.clone()\n    }\n\n    /// Loads the current commit head for a version.\n    ///\n    /// This is the public engine-level form of the typed `version_ref` context:\n    /// callers should not need to know that version heads are represented as\n    /// untracked `lix_version_ref` rows in live_state.\n    pub async fn load_version_head_commit_id(\n        &self,\n        version_id: &str,\n    ) -> Result<Option<String>, LixError> {\n        let mut transaction = self.storage.begin_read_transaction().await?;\n        let result = self\n            .version_ctx\n            .ref_reader(transaction.as_mut())\n            .load_head_commit_id(version_id)\n            .await;\n        match result {\n            Ok(result) => {\n                transaction.rollback().await?;\n                Ok(result)\n            }\n            Err(error) => {\n                let _ = transaction.rollback().await;\n                Err(error)\n            }\n        }\n    }\n\n    pub async fn open_session(\n        &self,\n        active_version_id: impl Into<String>,\n    ) -> Result<SessionContext, LixError> {\n        SessionContext::open(\n            active_version_id.into(),\n            self.storage(),\n            Arc::clone(&self.live_state),\n            Arc::clone(&self.tracked_state),\n            Arc::clone(&self.binary_cas),\n            Arc::clone(&self.commit_store),\n            Arc::clone(&self.version_ctx),\n            Arc::clone(&self.catalog_context),\n        )\n        .await\n    }\n\n    pub async fn open_workspace_session(&self) -> Result<SessionContext, LixError> {\n        SessionContext::open_workspace(\n            self.storage(),\n            Arc::clone(&self.live_state),\n            Arc::clone(&self.tracked_state),\n            
Arc::clone(&self.binary_cas),\n            Arc::clone(&self.commit_store),\n            Arc::clone(&self.version_ctx),\n            Arc::clone(&self.catalog_context),\n        )\n        .await\n    }\n\n    /// Materializes the tracked serving projection root for one version from commit_store.\n    ///\n    /// This is intentionally an engine-level operation: callers should not need\n    /// to know which KV namespaces back changelog, commit graph, or tracked\n    /// state. The current version head is read from the live-state facade so\n    /// materialization uses the same moving-ref visibility as normal execution.\n    pub async fn rebuild_tracked_state_for_version(\n        &self,\n        version_id: &str,\n    ) -> Result<(), LixError> {\n        let head_commit_id = self\n            .load_version_head_commit_id(version_id)\n            .await?\n            .ok_or_else(|| {\n                LixError::version_not_found(\n                    version_id.to_string(),\n                    \"rebuild_tracked_state_for_version\",\n                    \"target\",\n                )\n            })?;\n        let storage = self.storage();\n        let mut transaction = storage.begin_write_transaction().await?;\n        let mut writes = StorageWriteSet::new();\n        let materialize_result = self\n            .tracked_state\n            .materializer(\n                transaction.as_mut(),\n                &mut writes,\n                self.commit_store.as_ref(),\n            )\n            .materialize_root_at(&head_commit_id)\n            .await;\n        if let Err(error) = materialize_result {\n            let _ = transaction.rollback().await;\n            return Err(error);\n        }\n        if let Err(error) = writes.apply(&mut transaction.as_mut()).await {\n            let _ = transaction.rollback().await;\n            return Err(error);\n        }\n        transaction.commit().await\n    }\n}\n\nasync fn assert_initialized(\n    storage: StorageContext,\n  
  live_state: &LiveStateContext,\n) -> Result<(), LixError> {\n    let mut transaction = storage.begin_read_transaction().await?;\n    let reader = live_state.reader(transaction.as_mut());\n    let result = reader\n        .load_row(&LiveStateRowRequest {\n            schema_key: \"lix_key_value\".to_string(),\n            version_id: GLOBAL_VERSION_ID.to_string(),\n            entity_id: EntityIdentity::single(\"lix_id\"),\n            file_id: NullableKeyFilter::Null,\n        })\n        .await;\n    let initialized = match result {\n        Ok(row) => {\n            transaction.rollback().await?;\n            row.is_some()\n        }\n        Err(error) => {\n            let _ = transaction.rollback().await;\n            return Err(error);\n        }\n    };\n\n    if initialized {\n        return Ok(());\n    }\n\n    Err(LixError::new(\n        \"LIX_ERROR_NOT_INITIALIZED\",\n        \"engine backend is not initialized; call Engine::initialize(...) before Engine::new(...)\",\n    ))\n}\n"
  },
  {
    "path": "packages/engine/src/entity_identity.rs",
    "content": "use serde_json::Value as JsonValue;\n\nuse crate::common::json_pointer_get;\nuse crate::LixError;\n\n/// Logical entity identity derived from a schema primary key.\n///\n/// Keep this as typed tuple data inside engine. SQL `entity_id` surfaces\n/// should use the JSON-array projection.\n#[derive(\n    Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash, serde::Serialize, serde::Deserialize,\n)]\npub(crate) struct EntityIdentity {\n    pub(crate) parts: Vec<String>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) enum EntityIdentityError {\n    EmptyPrimaryKey,\n    EmptyPrimaryKeyPath { index: usize },\n    EmptyPrimaryKeyValue { index: usize },\n    MissingPrimaryKeyValue { index: usize },\n    UnsupportedPrimaryKeyValue { index: usize },\n    InvalidEncodedEntityIdentity,\n}\n\nimpl std::fmt::Display for EntityIdentityError {\n    fn fmt(&self, formatter: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            Self::EmptyPrimaryKey => {\n                write!(formatter, \"primary key must contain at least one path\")\n            }\n            Self::EmptyPrimaryKeyPath { index } => {\n                write!(\n                    formatter,\n                    \"primary-key path at index {index} must not be empty\"\n                )\n            }\n            Self::EmptyPrimaryKeyValue { index } => {\n                write!(\n                    formatter,\n                    \"primary-key value at index {index} must not be empty\"\n                )\n            }\n            Self::MissingPrimaryKeyValue { index } => {\n                write!(formatter, \"primary-key value at index {index} is missing\")\n            }\n            Self::UnsupportedPrimaryKeyValue { index } => write!(\n                formatter,\n                \"primary-key value at index {index} must be a JSON string\"\n            ),\n            Self::InvalidEncodedEntityIdentity => {\n                write!(\n                    
formatter,\n                    \"encoded entity identity must be a non-empty JSON array of strings\"\n                )\n            }\n        }\n    }\n}\n\nimpl EntityIdentity {\n    pub(crate) fn single(value: impl Into<String>) -> Self {\n        Self {\n            parts: vec![value.into()],\n        }\n    }\n\n    #[cfg(test)]\n    pub(crate) fn tuple(parts: Vec<String>) -> Result<Self, EntityIdentityError> {\n        if parts.is_empty() {\n            return Err(EntityIdentityError::EmptyPrimaryKey);\n        }\n        if let Some((index, _)) = parts.iter().enumerate().find(|(_, part)| part.is_empty()) {\n            return Err(EntityIdentityError::EmptyPrimaryKeyValue { index });\n        }\n        Ok(Self { parts })\n    }\n\n    pub(crate) fn from_primary_key_paths(\n        snapshot: &JsonValue,\n        primary_key_paths: &[Vec<String>],\n    ) -> Result<Self, EntityIdentityError> {\n        if primary_key_paths.is_empty() {\n            return Err(EntityIdentityError::EmptyPrimaryKey);\n        }\n\n        let mut parts = Vec::with_capacity(primary_key_paths.len());\n        for (index, path) in primary_key_paths.iter().enumerate() {\n            if path.is_empty() {\n                return Err(EntityIdentityError::EmptyPrimaryKeyPath { index });\n            }\n            let Some(value) = json_pointer_get(snapshot, path) else {\n                return Err(EntityIdentityError::MissingPrimaryKeyValue { index });\n            };\n            parts.push(string_part_from_json_value(value, index)?);\n        }\n\n        Ok(Self { parts })\n    }\n\n    pub(crate) fn as_json_array_value(&self) -> Result<JsonValue, LixError> {\n        if self.parts.is_empty() {\n            return Err(LixError::unknown(\n                \"entity identity must contain at least one primary-key part\",\n            ));\n        }\n\n        Ok(JsonValue::Array(\n            self.parts\n                .iter()\n                .map(|part| 
JsonValue::String(part.clone()))\n                .collect(),\n        ))\n    }\n\n    pub(crate) fn as_json_array_text(&self) -> Result<String, LixError> {\n        serde_json::to_string(&self.as_json_array_value()?).map_err(|error| {\n            LixError::unknown(format!(\"failed to encode entity id as JSON: {error}\"))\n        })\n    }\n\n    pub(crate) fn as_single_string(&self) -> Result<&str, LixError> {\n        if self.parts.is_empty() {\n            return Err(LixError::unknown(\n                \"entity identity must contain at least one primary-key part\",\n            ));\n        }\n\n        if let [value] = self.parts.as_slice() {\n            return Ok(value.as_str());\n        }\n\n        Err(LixError::unknown(\n            \"entity identity is not a single string primary-key tuple\",\n        ))\n    }\n\n    pub(crate) fn as_single_string_owned(&self) -> Result<String, LixError> {\n        Ok(self.as_single_string()?.to_owned())\n    }\n\n    pub(crate) fn from_json_array_text(entity_id: &str) -> Result<Self, EntityIdentityError> {\n        let value = serde_json::from_str::<JsonValue>(entity_id)\n            .map_err(|_| EntityIdentityError::InvalidEncodedEntityIdentity)?;\n        Self::from_json_array_value(&value)\n    }\n\n    pub(crate) fn from_json_array_value(\n        entity_id: &JsonValue,\n    ) -> Result<Self, EntityIdentityError> {\n        let JsonValue::Array(values) = entity_id else {\n            return Err(EntityIdentityError::InvalidEncodedEntityIdentity);\n        };\n        if values.is_empty() {\n            return Err(EntityIdentityError::EmptyPrimaryKey);\n        }\n\n        let mut parts = Vec::with_capacity(values.len());\n        for (index, value) in values.iter().enumerate() {\n            parts.push(string_part_from_json_value(value, index)?);\n        }\n        Ok(Self { parts })\n    }\n}\n\nfn string_part_from_json_value(\n    value: &JsonValue,\n    index: usize,\n) -> Result<String, EntityIdentityError> 
{\n    match value {\n        JsonValue::String(value) if value.is_empty() => {\n            Err(EntityIdentityError::EmptyPrimaryKeyValue { index })\n        }\n        JsonValue::String(value) => Ok(value.clone()),\n        _ => Err(EntityIdentityError::UnsupportedPrimaryKeyValue { index }),\n    }\n}\n\npub(crate) fn canonical_json_text(value: &JsonValue) -> serde_json::Result<String> {\n    serde_json::to_string(&canonical_json_value(value))\n}\n\nfn canonical_json_value(value: &JsonValue) -> JsonValue {\n    match value {\n        JsonValue::Array(values) => {\n            JsonValue::Array(values.iter().map(canonical_json_value).collect())\n        }\n        JsonValue::Object(object) => {\n            let mut entries = object.iter().collect::<Vec<_>>();\n            entries.sort_by(|(left, _), (right, _)| left.cmp(right));\n\n            let mut canonical = serde_json::Map::new();\n            for (key, value) in entries {\n                canonical.insert(key.clone(), canonical_json_value(value));\n            }\n            JsonValue::Object(canonical)\n        }\n        _ => value.clone(),\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use serde_json::json;\n\n    use super::*;\n\n    #[test]\n    fn single_string_identity_projects_to_single_string() {\n        let identity = EntityIdentity::single(\"plain-id\");\n\n        assert_eq!(\n            identity.as_single_string().expect(\"projection should work\"),\n            \"plain-id\"\n        );\n    }\n\n    #[test]\n    fn single_identity_projects_to_json_array_entity_id() {\n        let identity = EntityIdentity::single(\"plain-id\");\n\n        assert_eq!(\n            identity\n                .as_json_array_text()\n                .expect(\"projection should work\"),\n            \"[\\\"plain-id\\\"]\"\n        );\n    }\n\n    #[test]\n    fn composite_identity_projects_to_json_array_entity_id() {\n        let identity = EntityIdentity::tuple(vec![\"namespace\".to_string(), \"42\".to_string()])\n   
         .expect(\"tuple identity\");\n\n        assert_eq!(\n            identity\n                .as_json_array_text()\n                .expect(\"projection should work\"),\n            \"[\\\"namespace\\\",\\\"42\\\"]\"\n        );\n    }\n\n    #[test]\n    fn entity_id_json_array_roundtrips() {\n        let identity = EntityIdentity::tuple(vec![\"namespace\".to_string(), \"42\".to_string()])\n            .expect(\"tuple identity\");\n        let encoded = identity\n            .as_json_array_text()\n            .expect(\"projection should work\");\n\n        assert_eq!(\n            EntityIdentity::from_json_array_text(&encoded).expect(\"decode should work\"),\n            identity\n        );\n    }\n\n    #[test]\n    fn entity_id_json_array_rejects_empty_string_part() {\n        assert_eq!(\n            EntityIdentity::from_json_array_text(\"[\\\"\\\"]\"),\n            Err(EntityIdentityError::EmptyPrimaryKeyValue { index: 0 })\n        );\n    }\n\n    #[test]\n    fn tuple_rejects_empty_string_part() {\n        assert_eq!(\n            EntityIdentity::tuple(vec![\"namespace\".to_string(), \"\".to_string()]),\n            Err(EntityIdentityError::EmptyPrimaryKeyValue { index: 1 })\n        );\n    }\n\n    #[test]\n    fn entity_id_json_array_does_not_collide_on_delimiter_like_values() {\n        let left = EntityIdentity::tuple(vec![\"a~b\".to_string(), \"c\".to_string()])\n            .expect(\"left tuple identity\");\n        let right = EntityIdentity::tuple(vec![\"a\".to_string(), \"b~c\".to_string()])\n            .expect(\"right tuple identity\");\n\n        assert_ne!(\n            left.as_json_array_text().expect(\"left should encode\"),\n            right.as_json_array_text().expect(\"right should encode\")\n        );\n    }\n\n    #[test]\n    fn composite_identity_rejects_single_string_projection() {\n        let identity = EntityIdentity::tuple(vec![\"namespace\".to_string(), \"42\".to_string()])\n            .expect(\"tuple identity\");\n\n 
       assert!(identity.as_single_string().is_err());\n    }\n\n    #[test]\n    fn composite_identity_does_not_collide_on_delimiter_like_values() {\n        let left = EntityIdentity::tuple(vec![\"a~b\".to_string(), \"1\".to_string()])\n            .expect(\"left tuple identity\");\n        let right = EntityIdentity::tuple(vec![\"a\".to_string(), \"b~1\".to_string()])\n            .expect(\"right tuple identity\");\n\n        assert_ne!(\n            left.as_json_array_text().expect(\"left should encode\"),\n            right.as_json_array_text().expect(\"right should encode\")\n        );\n    }\n\n    #[test]\n    fn from_primary_key_paths_derives_ordered_parts() {\n        let snapshot = json!({\n            \"namespace\": \"messages\",\n            \"locale\": \"en\"\n        });\n\n        let identity = EntityIdentity::from_primary_key_paths(\n            &snapshot,\n            &[vec![\"namespace\".to_string()], vec![\"locale\".to_string()]],\n        )\n        .expect(\"primary key should derive\");\n\n        assert_eq!(\n            identity,\n            EntityIdentity {\n                parts: vec![\"messages\".to_string(), \"en\".to_string()],\n            }\n        );\n    }\n\n    #[test]\n    fn entity_id_json_array_rejects_non_string_parts() {\n        assert_eq!(\n            EntityIdentity::from_json_array_text(\"[\\\"namespace\\\",42]\"),\n            Err(EntityIdentityError::UnsupportedPrimaryKeyValue { index: 1 })\n        );\n        assert_eq!(\n            EntityIdentity::from_json_array_text(\"[\\\"namespace\\\",null]\"),\n            Err(EntityIdentityError::UnsupportedPrimaryKeyValue { index: 1 })\n        );\n        assert_eq!(\n            EntityIdentity::from_json_array_text(\"[[\\\"nested\\\"]]\"),\n            Err(EntityIdentityError::UnsupportedPrimaryKeyValue { index: 0 })\n        );\n    }\n\n    #[test]\n    fn from_primary_key_paths_rejects_non_string_parts() {\n        let snapshot = json!({\n            \"namespace\": 
\"messages\",\n            \"index\": 7\n        });\n\n        assert_eq!(\n            EntityIdentity::from_primary_key_paths(\n                &snapshot,\n                &[vec![\"namespace\".to_string()], vec![\"index\".to_string()],],\n            ),\n            Err(EntityIdentityError::UnsupportedPrimaryKeyValue { index: 1 })\n        );\n    }\n\n    #[test]\n    fn from_primary_key_paths_rejects_empty_string_parts() {\n        let snapshot = json!({\n            \"namespace\": \"messages\",\n            \"id\": \"\"\n        });\n\n        assert_eq!(\n            EntityIdentity::from_primary_key_paths(\n                &snapshot,\n                &[vec![\"namespace\".to_string()], vec![\"id\".to_string()],],\n            ),\n            Err(EntityIdentityError::EmptyPrimaryKeyValue { index: 1 })\n        );\n    }\n\n    #[test]\n    fn from_primary_key_paths_rejects_nested_json_parts() {\n        let snapshot = json!({\n            \"entity_id\": [\"welcome.title\", \"en\"],\n            \"schema_key\": \"message\"\n        });\n\n        assert_eq!(\n            EntityIdentity::from_primary_key_paths(\n                &snapshot,\n                &[\n                    vec![\"entity_id\".to_string()],\n                    vec![\"schema_key\".to_string()],\n                ],\n            ),\n            Err(EntityIdentityError::UnsupportedPrimaryKeyValue { index: 0 })\n        );\n    }\n\n    #[test]\n    fn from_primary_key_paths_rejects_missing_parts() {\n        let snapshot = json!({ \"id\": \"a\" });\n\n        assert_eq!(\n            EntityIdentity::from_primary_key_paths(&snapshot, &[vec![\"missing\".to_string()]]),\n            Err(EntityIdentityError::MissingPrimaryKeyValue { index: 0 })\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/functions/context.rs",
    "content": "use crate::functions::{\n    state, DeterministicFunctionProvider, DeterministicSequence, FunctionProvider,\n    FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider,\n};\nuse crate::live_state::LiveStateReader;\nuse crate::storage::StorageWriteSet;\nuse crate::LixError;\n\n/// Execution-scoped runtime function context.\n///\n/// Lower layers should only receive function providers. This context owns the\n/// lifecycle at the session/transaction boundary: prepare the right function\n/// source before execution and persist deterministic sequence progress after\n/// successful execution.\npub(crate) struct FunctionContext {\n    functions: FunctionProviderHandle,\n    bookkeeping_timestamp: String,\n}\n\nimpl FunctionContext {\n    /// Prepares the runtime function provider for one execution.\n    ///\n    /// If deterministic mode is absent or disabled, the context uses system\n    /// functions. If enabled, it starts from the persisted sequence + 1.\n    pub(crate) async fn prepare(live_state: &dyn LiveStateReader) -> Result<Self, LixError> {\n        let mode = state::load_mode(live_state).await?;\n        let mut bookkeeping_functions = SystemFunctionProvider;\n        let bookkeeping_timestamp = bookkeeping_functions.timestamp();\n        if !mode.enabled {\n            return Ok(Self {\n                functions: SharedFunctionProvider::new(\n                    Box::new(SystemFunctionProvider) as Box<dyn FunctionProvider + Send>\n                ),\n                bookkeeping_timestamp,\n            });\n        }\n\n        let sequence = state::load_sequence(live_state).await?;\n        Ok(Self {\n            functions: SharedFunctionProvider::new(Box::new(DeterministicFunctionProvider::new(\n                sequence.next_sequence(),\n                mode.timestamp_shuffle,\n            ))\n                as Box<dyn FunctionProvider + Send>),\n            bookkeeping_timestamp,\n        })\n    }\n\n    /// Returns the 
engine-owned provider used by SQL and transaction staging.\n    pub(crate) fn provider(&self) -> FunctionProviderHandle {\n        self.functions.clone()\n    }\n\n    /// Persists deterministic sequence progress if this execution used any.\n    ///\n    /// System functions report no sequence state, so this is a no-op when\n    /// deterministic mode is disabled.\n    pub(crate) async fn stage_persist_if_needed(\n        &self,\n        writes: &mut StorageWriteSet,\n    ) -> Result<(), LixError> {\n        let Some(highest_seen) = self.functions.deterministic_sequence_persist_highest_seen()\n        else {\n            return Ok(());\n        };\n        state::stage_sequence(\n            writes,\n            DeterministicSequence { highest_seen },\n            &self.bookkeeping_timestamp,\n        )\n        .await\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use crate::backend::testing::UnitTestBackend;\n    use crate::functions::state::{DETERMINISTIC_MODE_KEY, DETERMINISTIC_SEQUENCE_KEY};\n    use crate::functions::{state::load_sequence, DeterministicSequence};\n    use crate::live_state::LiveStateContext;\n    use crate::storage::StorageContext;\n    use crate::GLOBAL_VERSION_ID;\n\n    use super::*;\n\n    fn live_state_context() -> LiveStateContext {\n        LiveStateContext::new(\n            crate::tracked_state::TrackedStateContext::new(),\n            crate::untracked_state::UntrackedStateContext::new(),\n            crate::commit_graph::CommitGraphContext::new(),\n        )\n    }\n\n    #[tokio::test]\n    async fn prepare_uses_system_functions_when_mode_missing() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let live_state = live_state_context();\n        let reader = live_state.reader(storage.clone());\n\n        let context = FunctionContext::prepare(&reader)\n            .await\n            .expect(\"runtime context should prepare\");\n\n 
       assert_eq!(\n            context\n                .provider()\n                .deterministic_sequence_persist_highest_seen(),\n            None\n        );\n    }\n\n    #[tokio::test]\n    async fn prepare_starts_deterministic_functions_at_sequence_zero() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let live_state = live_state_context();\n        crate::test_support::seed_global_version_head(storage.clone()).await;\n        write_key_value(\n            storage.clone(),\n            DETERMINISTIC_MODE_KEY,\n            serde_json::json!({\n                \"enabled\": true,\n            }),\n        )\n        .await;\n\n        let reader = live_state.reader(storage.clone());\n        let context = FunctionContext::prepare(&reader)\n            .await\n            .expect(\"runtime context should prepare\");\n        let functions = context.provider();\n\n        assert_eq!(\n            functions.call_uuid_v7(),\n            \"01920000-0000-7000-8000-000000000000\"\n        );\n        assert_eq!(functions.call_timestamp(), \"1970-01-01T00:00:00.001Z\");\n        assert_eq!(\n            context\n                .provider()\n                .deterministic_sequence_persist_highest_seen(),\n            Some(1)\n        );\n    }\n\n    #[tokio::test]\n    async fn prepare_continues_from_persisted_sequence() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let live_state = live_state_context();\n        crate::test_support::seed_global_version_head(storage.clone()).await;\n        write_key_value(\n            storage.clone(),\n            DETERMINISTIC_MODE_KEY,\n            serde_json::json!({\n                \"enabled\": true,\n            }),\n        )\n        .await;\n        write_key_value(\n            storage.clone(),\n            DETERMINISTIC_SEQUENCE_KEY,\n            
serde_json::json!(41),\n        )\n        .await;\n\n        let reader = live_state.reader(storage.clone());\n        let context = FunctionContext::prepare(&reader)\n            .await\n            .expect(\"runtime context should prepare\");\n        let functions = context.provider();\n\n        assert_eq!(\n            functions.call_uuid_v7(),\n            \"01920000-0000-7000-8000-00000000002a\"\n        );\n        assert_eq!(\n            context\n                .provider()\n                .deterministic_sequence_persist_highest_seen(),\n            Some(42)\n        );\n    }\n\n    #[tokio::test]\n    async fn persist_if_needed_writes_sequence_when_deterministic_functions_advanced() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let live_state = live_state_context();\n        crate::test_support::seed_global_version_head(storage.clone()).await;\n        write_key_value(\n            storage.clone(),\n            DETERMINISTIC_MODE_KEY,\n            serde_json::json!({\n                \"enabled\": true,\n            }),\n        )\n        .await;\n\n        let context = {\n            let reader = live_state.reader(storage.clone());\n            FunctionContext::prepare(&reader)\n                .await\n                .expect(\"runtime context should prepare\")\n        };\n        context.provider().call_uuid_v7();\n\n        let mut tx = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        context\n            .stage_persist_if_needed(&mut writes)\n            .await\n            .expect(\"sequence should stage\");\n        writes\n            .apply(&mut tx.as_mut())\n            .await\n            .expect(\"sequence should apply\");\n        tx.commit().await.expect(\"transaction should commit\");\n\n        let reader = 
live_state.reader(storage.clone());\n        let sequence = load_sequence(&reader).await.expect(\"sequence should load\");\n        assert_eq!(sequence, DeterministicSequence { highest_seen: 0 });\n    }\n\n    #[tokio::test]\n    async fn persist_if_needed_is_noop_for_system_functions() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let live_state = live_state_context();\n        let reader = live_state.reader(storage.clone());\n        let context = FunctionContext::prepare(&reader)\n            .await\n            .expect(\"runtime context should prepare\");\n\n        let tx = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        context\n            .stage_persist_if_needed(&mut writes)\n            .await\n            .expect(\"persist should no-op\");\n        assert!(writes.is_empty());\n        tx.commit().await.expect(\"transaction should commit\");\n\n        let reader = live_state.reader(storage.clone());\n        let sequence = load_sequence(&reader)\n            .await\n            .expect(\"missing sequence should load\");\n        assert_eq!(sequence, DeterministicSequence::uninitialized());\n    }\n\n    async fn write_key_value(storage: StorageContext, key: &str, value: serde_json::Value) {\n        let mut tx = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let snapshot_content = serde_json::to_string(&serde_json::json!({\n            \"key\": key,\n            \"value\": value,\n        }))\n        .expect(\"snapshot should serialize\");\n        let mut writes = StorageWriteSet::new();\n        let row = crate::untracked_state::UntrackedStateRow {\n            entity_id: crate::entity_identity::EntityIdentity::single(key),\n            schema_key: 
\"lix_key_value\".to_string(),\n            file_id: None,\n            snapshot_content: Some(snapshot_content),\n            metadata: None,\n            created_at: \"1970-01-01T00:00:00.000Z\".to_string(),\n            updated_at: \"1970-01-01T00:00:00.000Z\".to_string(),\n            global: true,\n            version_id: GLOBAL_VERSION_ID.to_string(),\n        };\n        crate::untracked_state::UntrackedStateContext::new()\n            .writer(&mut writes)\n            .stage_rows(std::iter::once(row.as_ref()))\n            .expect(\"test key-value should stage\");\n        writes\n            .apply(&mut tx.as_mut())\n            .await\n            .expect(\"test key-value should apply\");\n        tx.commit().await.expect(\"transaction should commit\");\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/functions/deterministic.rs",
    "content": "use crate::functions::FunctionProvider;\n\nconst DETERMINISTIC_UUID_COUNTER_MASK: u64 = 0x0000_FFFF_FFFF_FFFF;\n\n/// Deterministic function provider for engine execution.\n///\n/// The provider is pure runtime state: it does not load or persist the sequence\n/// itself. Session/transaction code owns that boundary so tests can decide when\n/// deterministic state is read and written.\n#[derive(Debug, Clone)]\npub(crate) struct DeterministicFunctionProvider {\n    next_sequence: i64,\n    timestamp_shuffle: bool,\n    highest_seen: Option<i64>,\n}\n\nimpl DeterministicFunctionProvider {\n    pub(crate) fn new(next_sequence: i64, timestamp_shuffle: bool) -> Self {\n        Self {\n            next_sequence,\n            timestamp_shuffle,\n            highest_seen: None,\n        }\n    }\n\n    pub(crate) fn highest_seen(&self) -> Option<i64> {\n        self.highest_seen\n    }\n\n    fn take_sequence(&mut self) -> i64 {\n        let current = self.next_sequence;\n        self.next_sequence += 1;\n        self.highest_seen = Some(current);\n        current\n    }\n}\n\nimpl FunctionProvider for DeterministicFunctionProvider {\n    fn uuid_v7(&mut self) -> String {\n        let counter = self.take_sequence();\n        let counter_bits = (counter as u64) & DETERMINISTIC_UUID_COUNTER_MASK;\n        format!(\"01920000-0000-7000-8000-{counter_bits:012x}\")\n    }\n\n    fn timestamp(&mut self) -> String {\n        let counter = self.take_sequence();\n        let millis = if self.timestamp_shuffle {\n            shuffled_timestamp_millis(counter)\n        } else {\n            counter\n        };\n        let dt = chrono::DateTime::<chrono::Utc>::from_timestamp_millis(millis)\n            .unwrap_or(chrono::DateTime::<chrono::Utc>::UNIX_EPOCH);\n        dt.to_rfc3339_opts(chrono::SecondsFormat::Millis, true)\n    }\n\n    fn deterministic_sequence_persist_highest_seen(&self) -> Option<i64> {\n        self.highest_seen()\n    }\n}\n\nfn 
shuffled_timestamp_millis(counter: i64) -> i64 {\n    const WINDOW: i64 = 1000;\n    const MULTIPLIER: i64 = 733;\n    const OFFSET: i64 = 271;\n\n    let cycle = counter.div_euclid(WINDOW);\n    let within = counter.rem_euclid(WINDOW);\n    let shuffled = (within * MULTIPLIER + OFFSET).rem_euclid(WINDOW);\n    cycle * WINDOW + shuffled\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::functions::DeterministicSequence;\n\n    #[test]\n    fn deterministic_uuid_uses_sequence_counter() {\n        let mut provider = DeterministicFunctionProvider::new(0, false);\n\n        assert_eq!(provider.uuid_v7(), \"01920000-0000-7000-8000-000000000000\");\n        assert_eq!(provider.uuid_v7(), \"01920000-0000-7000-8000-000000000001\");\n        assert_eq!(provider.highest_seen(), Some(1));\n    }\n\n    #[test]\n    fn deterministic_timestamp_uses_sequence_counter() {\n        let mut provider = DeterministicFunctionProvider::new(1, false);\n\n        assert_eq!(provider.timestamp(), \"1970-01-01T00:00:00.001Z\");\n        assert_eq!(provider.highest_seen(), Some(1));\n    }\n\n    #[test]\n    fn deterministic_timestamp_shuffle_can_be_non_monotonic() {\n        let mut provider = DeterministicFunctionProvider::new(0, true);\n        let first = provider.timestamp();\n        let second = provider.timestamp();\n\n        assert!(second < first);\n        assert_eq!(provider.highest_seen(), Some(1));\n    }\n\n    #[test]\n    fn deterministic_sequence_can_start_after_persisted_highest_seen() {\n        let sequence = DeterministicSequence { highest_seen: 41 };\n        let mut provider = DeterministicFunctionProvider::new(sequence.next_sequence(), false);\n\n        assert_eq!(provider.uuid_v7(), \"01920000-0000-7000-8000-00000000002a\");\n        assert_eq!(provider.highest_seen(), Some(42));\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/functions/mod.rs",
    "content": "//! Engine runtime function boundary.\n//!\n//! Sessions prepare one function context per execution. SQL, providers, and\n//! transaction staging receive only a function provider; deterministic mode is\n//! resolved privately inside this module.\n\nmod context;\nmod deterministic;\nmod provider;\nmod state;\nmod types;\n\npub(crate) use context::FunctionContext;\npub(crate) use deterministic::DeterministicFunctionProvider;\npub(crate) use provider::{\n    FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider,\n};\npub(crate) use types::{DeterministicMode, DeterministicSequence};\n"
  },
  {
    "path": "packages/engine/src/functions/provider.rs",
    "content": "use std::sync::{Arc, Mutex};\n\nuse crate::cel::CelFunctionProvider;\n\n/// Engine-owned runtime function provider trait.\npub(crate) trait FunctionProvider: Send {\n    fn uuid_v7(&mut self) -> String;\n    fn timestamp(&mut self) -> String;\n\n    fn deterministic_sequence_persist_highest_seen(&self) -> Option<i64> {\n        None\n    }\n}\n\npub(crate) type FunctionProviderHandle = SharedFunctionProvider<Box<dyn FunctionProvider + Send>>;\n\n/// Shareable function provider used across SQL planning, UDFs, and staging.\npub(crate) struct SharedFunctionProvider<P> {\n    inner: Arc<Mutex<P>>,\n}\n\nimpl<P> Clone for SharedFunctionProvider<P> {\n    fn clone(&self) -> Self {\n        Self {\n            inner: Arc::clone(&self.inner),\n        }\n    }\n}\n\nimpl<P> SharedFunctionProvider<P> {\n    pub(crate) fn new(provider: P) -> Self {\n        Self {\n            inner: Arc::new(Mutex::new(provider)),\n        }\n    }\n\n    fn with_lock<R>(&self, f: impl FnOnce(&P) -> R) -> R {\n        let guard = self\n            .inner\n            .lock()\n            .expect(\"engine function provider mutex poisoned\");\n        f(&guard)\n    }\n\n    fn with_lock_mut<R>(&self, f: impl FnOnce(&mut P) -> R) -> R {\n        let mut guard = self\n            .inner\n            .lock()\n            .expect(\"engine function provider mutex poisoned\");\n        f(&mut guard)\n    }\n}\n\nimpl<P> SharedFunctionProvider<P>\nwhere\n    P: FunctionProvider,\n{\n    pub(crate) fn call_uuid_v7(&self) -> String {\n        self.with_lock_mut(|provider| provider.uuid_v7())\n    }\n\n    pub(crate) fn call_timestamp(&self) -> String {\n        self.with_lock_mut(|provider| provider.timestamp())\n    }\n\n    pub(crate) fn deterministic_sequence_persist_highest_seen(&self) -> Option<i64> {\n        self.with_lock(|provider| provider.deterministic_sequence_persist_highest_seen())\n    }\n}\n\nimpl<P> CelFunctionProvider for SharedFunctionProvider<P>\nwhere\n    P: 
FunctionProvider + Send + 'static,\n{\n    fn call_uuid_v7(&self) -> String {\n        SharedFunctionProvider::call_uuid_v7(self)\n    }\n\n    fn call_timestamp(&self) -> String {\n        SharedFunctionProvider::call_timestamp(self)\n    }\n}\n\nimpl<P> FunctionProvider for SharedFunctionProvider<P>\nwhere\n    P: FunctionProvider,\n{\n    fn uuid_v7(&mut self) -> String {\n        self.call_uuid_v7()\n    }\n\n    fn timestamp(&mut self) -> String {\n        self.call_timestamp()\n    }\n\n    fn deterministic_sequence_persist_highest_seen(&self) -> Option<i64> {\n        SharedFunctionProvider::deterministic_sequence_persist_highest_seen(self)\n    }\n}\n\nimpl<T> FunctionProvider for Box<T>\nwhere\n    T: FunctionProvider + ?Sized,\n{\n    fn uuid_v7(&mut self) -> String {\n        (**self).uuid_v7()\n    }\n\n    fn timestamp(&mut self) -> String {\n        (**self).timestamp()\n    }\n\n    fn deterministic_sequence_persist_highest_seen(&self) -> Option<i64> {\n        (**self).deterministic_sequence_persist_highest_seen()\n    }\n}\n\n/// System-backed engine function provider.\n#[derive(Debug, Default, Clone, Copy)]\npub(crate) struct SystemFunctionProvider;\n\nimpl FunctionProvider for SystemFunctionProvider {\n    fn uuid_v7(&mut self) -> String {\n        uuid::Uuid::now_v7().to_string()\n    }\n\n    fn timestamp(&mut self) -> String {\n        chrono::Utc::now().to_rfc3339_opts(chrono::SecondsFormat::Millis, true)\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/functions/state.rs",
    "content": "use serde_json::Value as JsonValue;\nuse std::sync::Arc;\n\nuse crate::entity_identity::EntityIdentity;\nuse crate::functions::{DeterministicMode, DeterministicSequence};\nuse crate::json_store::NormalizedJson;\nuse crate::live_state::{LiveStateReader, LiveStateRowRequest, MaterializedLiveStateRow};\nuse crate::storage::StorageWriteSet;\nuse crate::untracked_state::UntrackedStateContext;\nuse crate::untracked_state::UntrackedStateRow;\nuse crate::GLOBAL_VERSION_ID;\nuse crate::{LixError, NullableKeyFilter};\n\npub(crate) const DETERMINISTIC_MODE_KEY: &str = \"lix_deterministic_mode\";\npub(crate) const DETERMINISTIC_SEQUENCE_KEY: &str = \"lix_deterministic_sequence_number\";\n\nconst KEY_VALUE_SCHEMA_KEY: &str = \"lix_key_value\";\n\n/// Loads deterministic-mode settings from visible live state.\n///\n/// Missing mode means deterministic execution is disabled. Malformed mode rows\n/// are errors because they would make runtime function behavior ambiguous.\npub(crate) async fn load_mode(\n    live_state: &dyn LiveStateReader,\n) -> Result<DeterministicMode, LixError> {\n    let Some(row) = load_key_value_row(live_state, DETERMINISTIC_MODE_KEY).await? else {\n        return Ok(DeterministicMode::disabled());\n    };\n    let value = key_value_payload(&row, DETERMINISTIC_MODE_KEY)?;\n    parse_mode_value(value)\n}\n\n/// Loads the persisted deterministic sequence position.\n///\n/// Missing sequence means no deterministic values have been produced yet, so\n/// execution starts at sequence zero.\npub(crate) async fn load_sequence(\n    live_state: &dyn LiveStateReader,\n) -> Result<DeterministicSequence, LixError> {\n    let Some(row) = load_key_value_row(live_state, DETERMINISTIC_SEQUENCE_KEY).await? 
else {\n        return Ok(DeterministicSequence::uninitialized());\n    };\n    let value = key_value_payload(&row, DETERMINISTIC_SEQUENCE_KEY)?;\n    parse_sequence_value(value)\n}\n\n/// Persists the highest deterministic sequence value used by an execution.\n///\n/// The row is untracked global `lix_key_value` state: it is durable local\n/// runtime state, not a changelog fact.\npub(crate) async fn stage_sequence(\n    writes: &mut StorageWriteSet,\n    sequence: DeterministicSequence,\n    timestamp: &str,\n) -> Result<(), LixError> {\n    let snapshot_content = serde_json::to_string(&serde_json::json!({\n        \"key\": DETERMINISTIC_SEQUENCE_KEY,\n        \"value\": sequence.highest_seen,\n    }))\n    .map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"deterministic sequence snapshot serialization failed: {error}\"),\n        )\n    })?;\n    let snapshot = NormalizedJson::from_arc_unchecked(Arc::from(snapshot_content.as_str()));\n    let row =\n        deterministic_key_value_row(DETERMINISTIC_SEQUENCE_KEY, snapshot.as_str(), timestamp)?;\n    UntrackedStateContext::new()\n        .writer(writes)\n        .stage_rows(std::iter::once(row.as_ref()))\n}\n\nasync fn load_key_value_row(\n    live_state: &dyn LiveStateReader,\n    key: &str,\n) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n    live_state\n        .load_row(&LiveStateRowRequest {\n            schema_key: KEY_VALUE_SCHEMA_KEY.to_string(),\n            version_id: GLOBAL_VERSION_ID.to_string(),\n            entity_id: EntityIdentity::single(key),\n            file_id: NullableKeyFilter::Null,\n        })\n        .await\n}\n\nfn key_value_payload(row: &MaterializedLiveStateRow, key: &str) -> Result<JsonValue, LixError> {\n    let snapshot_content = row.snapshot_content.as_deref().ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"deterministic key-value row '{key}' is missing 
snapshot_content\"),\n        )\n    })?;\n    let snapshot = serde_json::from_str::<JsonValue>(snapshot_content).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"deterministic key-value row '{key}' has invalid JSON: {error}\"),\n        )\n    })?;\n    let stored_key = snapshot.get(\"key\").and_then(JsonValue::as_str);\n    if stored_key != Some(key) {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"deterministic key-value row '{key}' has mismatched key field\"),\n        ));\n    }\n    snapshot.get(\"value\").cloned().ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"deterministic key-value row '{key}' is missing value\"),\n        )\n    })\n}\n\nfn parse_mode_value(value: JsonValue) -> Result<DeterministicMode, LixError> {\n    let Some(object) = value.as_object() else {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"deterministic mode value must be an object\",\n        ));\n    };\n\n    let enabled = object\n        .get(\"enabled\")\n        .and_then(JsonValue::as_bool)\n        .unwrap_or(false);\n    if !enabled {\n        return Ok(DeterministicMode::disabled());\n    }\n    let timestamp_shuffle = object\n        .get(\"timestamp_shuffle\")\n        .and_then(JsonValue::as_bool)\n        .unwrap_or(false);\n    Ok(DeterministicMode {\n        enabled,\n        timestamp_shuffle,\n    })\n}\n\nfn parse_sequence_value(value: JsonValue) -> Result<DeterministicSequence, LixError> {\n    let Some(highest_seen) = value.as_i64() else {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"deterministic sequence value must be an integer\",\n        ));\n    };\n    Ok(DeterministicSequence { highest_seen })\n}\n\nfn deterministic_key_value_row(\n    key: &str,\n    snapshot_content: &str,\n    timestamp: &str,\n) -> Result<UntrackedStateRow, 
LixError> {\n    Ok(UntrackedStateRow {\n        entity_id: crate::entity_identity::EntityIdentity::single(key),\n        schema_key: KEY_VALUE_SCHEMA_KEY.to_string(),\n        file_id: None,\n        snapshot_content: Some(snapshot_content.to_string()),\n        metadata: None,\n        created_at: timestamp.to_string(),\n        updated_at: timestamp.to_string(),\n        global: true,\n        version_id: GLOBAL_VERSION_ID.to_string(),\n    })\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use crate::backend::testing::UnitTestBackend;\n    use crate::live_state::{LiveStateContext, LiveStateRowRequest};\n    use crate::storage::StorageContext;\n\n    use super::*;\n\n    fn live_state_context() -> LiveStateContext {\n        LiveStateContext::new(\n            crate::tracked_state::TrackedStateContext::new(),\n            crate::untracked_state::UntrackedStateContext::new(),\n            crate::commit_graph::CommitGraphContext::new(),\n        )\n    }\n\n    #[tokio::test]\n    async fn missing_mode_is_disabled() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let live_state = live_state_context();\n        let reader = live_state.reader(storage.clone());\n\n        let mode = load_mode(&reader)\n            .await\n            .expect(\"missing mode should decode\");\n\n        assert_eq!(mode, DeterministicMode::disabled());\n    }\n\n    #[tokio::test]\n    async fn valid_mode_decodes_flags() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let live_state = live_state_context();\n        crate::test_support::seed_global_version_head(storage.clone()).await;\n        write_test_key_value(\n            storage.clone(),\n            DETERMINISTIC_MODE_KEY,\n            serde_json::json!({\n                \"enabled\": true,\n                \"timestamp_shuffle\": true,\n            }),\n  
      )\n        .await;\n\n        let reader = live_state.reader(storage.clone());\n        let mode = load_mode(&reader).await.expect(\"valid mode should decode\");\n\n        assert_eq!(\n            mode,\n            DeterministicMode {\n                enabled: true,\n                timestamp_shuffle: true,\n            }\n        );\n    }\n\n    #[tokio::test]\n    async fn missing_sequence_is_uninitialized() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let live_state = live_state_context();\n        let reader = live_state.reader(storage.clone());\n\n        let sequence = load_sequence(&reader)\n            .await\n            .expect(\"missing sequence should decode\");\n\n        assert_eq!(sequence, DeterministicSequence::uninitialized());\n    }\n\n    #[tokio::test]\n    async fn valid_sequence_decodes_highest_seen() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let live_state = live_state_context();\n        crate::test_support::seed_global_version_head(storage.clone()).await;\n        write_test_key_value(\n            storage.clone(),\n            DETERMINISTIC_SEQUENCE_KEY,\n            serde_json::json!(41),\n        )\n        .await;\n\n        let reader = live_state.reader(storage.clone());\n        let sequence = load_sequence(&reader)\n            .await\n            .expect(\"valid sequence should decode\");\n\n        assert_eq!(sequence, DeterministicSequence { highest_seen: 41 });\n        assert_eq!(sequence.next_sequence(), 42);\n    }\n\n    #[tokio::test]\n    async fn write_sequence_persists_untracked_global_key_value() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let live_state = live_state_context();\n        crate::test_support::seed_global_version_head(storage.clone()).await;\n  
      let mut tx = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        let mut writes = StorageWriteSet::new();\n        stage_sequence(\n            &mut writes,\n            DeterministicSequence { highest_seen: 7 },\n            \"1970-01-01T00:00:00.000Z\",\n        )\n        .await\n        .expect(\"sequence should stage\");\n        writes\n            .apply(&mut tx.as_mut())\n            .await\n            .expect(\"sequence should apply\");\n        tx.commit().await.expect(\"transaction should commit\");\n\n        let reader = live_state.reader(storage.clone());\n        let row = reader\n            .load_row(&LiveStateRowRequest {\n                schema_key: KEY_VALUE_SCHEMA_KEY.to_string(),\n                version_id: GLOBAL_VERSION_ID.to_string(),\n                entity_id: crate::entity_identity::EntityIdentity::single(\n                    DETERMINISTIC_SEQUENCE_KEY,\n                ),\n                file_id: NullableKeyFilter::Null,\n            })\n            .await\n            .expect(\"sequence row should load\")\n            .expect(\"sequence row should exist\");\n        assert!(row.untracked);\n        assert!(row.global);\n        assert_eq!(row.change_id, None);\n        assert_eq!(row.commit_id, None);\n        assert_eq!(\n            row.snapshot_content.as_deref(),\n            Some(\"{\\\"key\\\":\\\"lix_deterministic_sequence_number\\\",\\\"value\\\":7}\")\n        );\n    }\n\n    async fn write_test_key_value(storage: StorageContext, key: &str, value: JsonValue) {\n        let mut tx = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let snapshot_content = serde_json::to_string(&serde_json::json!({\n            \"key\": key,\n            \"value\": value,\n        }))\n        .expect(\"snapshot should serialize\");\n        let mut writes = 
StorageWriteSet::new();\n        let row = deterministic_key_value_row(key, &snapshot_content, \"1970-01-01T00:00:00.000Z\")\n            .expect(\"test key-value should canonicalize\");\n        UntrackedStateContext::new()\n            .writer(&mut writes)\n            .stage_rows(std::iter::once(row.as_ref()))\n            .expect(\"test key-value should stage\");\n        writes\n            .apply(&mut tx.as_mut())\n            .await\n            .expect(\"test key-value should apply\");\n        tx.commit().await.expect(\"transaction should commit\");\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/functions/types.rs",
    "content": "/// Decoded deterministic-mode setting.\n///\n/// Storage can decide where this setting lives. The type only describes the\n/// behavior the engine should apply while preparing runtime functions.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) struct DeterministicMode {\n    pub(crate) enabled: bool,\n    pub(crate) timestamp_shuffle: bool,\n}\n\nimpl DeterministicMode {\n    pub(crate) fn disabled() -> Self {\n        Self {\n            enabled: false,\n            timestamp_shuffle: false,\n        }\n    }\n}\n\n/// Persisted deterministic sequence position.\n///\n/// `highest_seen` is the last sequence value returned by the runtime provider.\n/// The next deterministic execution starts at `highest_seen + 1`.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) struct DeterministicSequence {\n    pub(crate) highest_seen: i64,\n}\n\nimpl DeterministicSequence {\n    pub(crate) fn uninitialized() -> Self {\n        Self { highest_seen: -1 }\n    }\n\n    pub(crate) fn next_sequence(self) -> i64 {\n        self.highest_seen + 1\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/init.rs",
    "content": "use crate::commit_store::{Change, CommitDraftRef, CommitStoreContext};\nuse crate::entity_identity::EntityIdentity;\nuse crate::functions::{\n    FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider,\n};\nuse crate::json_store::{JsonRef, JsonStoreContext, JsonWritePlacementRef, NormalizedJsonRef};\nuse crate::schema::{\n    registered_schema_entity_id, schema_key_from_definition, seed_schema_definitions,\n};\nuse crate::storage::{StorageContext, StorageWriteSet};\nuse crate::tracked_state::{TrackedStateContext, TrackedStateDeltaRef};\nuse crate::untracked_state::{UntrackedStateContext, UntrackedStateRow};\nuse crate::version::{VERSION_DESCRIPTOR_SCHEMA_KEY, VERSION_REF_SCHEMA_KEY};\nuse crate::LixError;\nuse crate::GLOBAL_VERSION_ID;\nuse serde_json::json;\n#[cfg(test)]\nuse std::sync::Arc;\n\nconst KEY_VALUE_SCHEMA_KEY: &str = \"lix_key_value\";\nconst LIX_ID_KEY: &str = \"lix_id\";\nconst WORKSPACE_VERSION_KEY: &str = \"lix_workspace_version_id\";\nconst REGISTERED_SCHEMA_KEY: &str = \"lix_registered_schema\";\n\n/// Pure seed plan for initializing an engine repository.\n///\n/// Tracked bootstrap facts go to the commit store. 
Moving refs such as\n/// `lix_version_ref` are seeded as untracked local state so repository heads\n/// can advance without becoming commit members.\npub(crate) struct InitSeedPlan {\n    commit: InitSeedCommit,\n    changes: Vec<InitSeedChange>,\n    untracked_rows: Vec<InitSeedLiveRow>,\n    pub(crate) receipt: InitReceipt,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct InitSeedCommit {\n    id: String,\n    change_id: String,\n    parent_ids: Vec<String>,\n    author_account_ids: Vec<String>,\n    created_at: String,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct InitSeedChange {\n    id: String,\n    entity_id: EntityIdentity,\n    schema_key: String,\n    snapshot_content: String,\n    created_at: String,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct InitSeedLiveRow {\n    entity_id: EntityIdentity,\n    schema_key: String,\n    snapshot_content: String,\n    created_at: String,\n    updated_at: String,\n    global: bool,\n    version_id: String,\n}\n\n/// Values generated while planning the initial repository seed.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct InitReceipt {\n    pub lix_id: String,\n    pub global_version_id: String,\n    pub main_version_id: String,\n    pub initial_commit_id: String,\n}\n\n/// Builds the canonical bootstrap changes for a new engine repository.\n///\n/// The initial commit tracks durable content rows. 
Version refs are moving\n/// pointers and therefore live in untracked local state instead of the commit.\npub(crate) fn plan_init_seed(functions: FunctionProviderHandle) -> Result<InitSeedPlan, LixError> {\n    let main_version_id = functions.call_uuid_v7();\n    let lix_id = functions.call_uuid_v7();\n    let initial_commit_id = functions.call_uuid_v7();\n    let timestamp = functions.call_timestamp();\n\n    let mut registered_schema_changes = Vec::new();\n    for schema in seed_schema_definitions() {\n        let key = schema_key_from_definition(schema)?;\n        registered_schema_changes.push(canonical_change(\n            functions.call_uuid_v7(),\n            registered_schema_entity_id(&key.schema_key)?,\n            REGISTERED_SCHEMA_KEY,\n            registered_schema_snapshot(schema)?,\n            &timestamp,\n        ));\n    }\n\n    let global_version_descriptor_change = canonical_change(\n        GLOBAL_VERSION_ID.to_string(),\n        EntityIdentity::single(GLOBAL_VERSION_ID),\n        VERSION_DESCRIPTOR_SCHEMA_KEY,\n        version_descriptor_snapshot(GLOBAL_VERSION_ID, \"global\", true)?,\n        &timestamp,\n    );\n    let main_version_descriptor_change = canonical_change(\n        functions.call_uuid_v7(),\n        EntityIdentity::single(&main_version_id),\n        VERSION_DESCRIPTOR_SCHEMA_KEY,\n        version_descriptor_snapshot(&main_version_id, \"main\", false)?,\n        &timestamp,\n    );\n    let kv_lix_id_change = canonical_change(\n        functions.call_uuid_v7(),\n        EntityIdentity::single(LIX_ID_KEY),\n        KEY_VALUE_SCHEMA_KEY,\n        key_value_snapshot(LIX_ID_KEY, &lix_id)?,\n        &timestamp,\n    );\n\n    let initial_commit = InitSeedCommit {\n        id: initial_commit_id.clone(),\n        change_id: functions.call_uuid_v7(),\n        parent_ids: Vec::new(),\n        author_account_ids: Vec::new(),\n        created_at: timestamp.clone(),\n    };\n    let global_version_ref_row = untracked_row(\n        
EntityIdentity::single(GLOBAL_VERSION_ID),\n        VERSION_REF_SCHEMA_KEY,\n        version_ref_snapshot(GLOBAL_VERSION_ID, &initial_commit_id)?,\n        &timestamp,\n    );\n    let main_version_ref_row = untracked_row(\n        EntityIdentity::single(&main_version_id),\n        VERSION_REF_SCHEMA_KEY,\n        version_ref_snapshot(&main_version_id, &initial_commit_id)?,\n        &timestamp,\n    );\n    let workspace_version_row = untracked_row(\n        EntityIdentity::single(WORKSPACE_VERSION_KEY),\n        KEY_VALUE_SCHEMA_KEY,\n        key_value_snapshot(WORKSPACE_VERSION_KEY, &main_version_id)?,\n        &timestamp,\n    );\n\n    Ok(InitSeedPlan {\n        commit: initial_commit,\n        changes: registered_schema_changes\n            .into_iter()\n            .chain([\n                global_version_descriptor_change,\n                main_version_descriptor_change,\n                kv_lix_id_change,\n            ])\n            .collect(),\n        untracked_rows: vec![\n            global_version_ref_row,\n            main_version_ref_row,\n            workspace_version_row,\n        ],\n        receipt: InitReceipt {\n            lix_id,\n            global_version_id: GLOBAL_VERSION_ID.to_string(),\n            main_version_id,\n            initial_commit_id,\n        },\n    })\n}\n\n/// Initializes an empty engine repository in one backend transaction.\n///\n/// The pure seed planner decides which bootstrap facts exist. 
This function is\n/// only responsible for durably writing those facts to their owning stores:\n/// commit_store for tracked changes, and live_state for the serving projection\n/// plus untracked moving refs.\npub(crate) async fn initialize(\n    storage: StorageContext,\n    commit_store: &CommitStoreContext,\n    tracked_state: &TrackedStateContext,\n    untracked_state: &UntrackedStateContext,\n) -> Result<InitReceipt, LixError> {\n    let functions = SharedFunctionProvider::new(\n        Box::new(SystemFunctionProvider) as Box<dyn FunctionProvider + Send>\n    );\n    let plan = plan_init_seed(functions)?;\n    let receipt = plan.receipt.clone();\n\n    let mut transaction = storage.begin_write_transaction().await?;\n    let mut writes = StorageWriteSet::new();\n\n    let authored_changes = plan\n        .changes\n        .iter()\n        .map(seed_change_to_commit_store_change)\n        .collect::<Result<Vec<_>, _>>()?;\n    JsonStoreContext::new().writer().stage_batch(\n        &mut writes,\n        JsonWritePlacementRef::CommitPack {\n            commit_id: &plan.commit.id,\n            pack_id: 0,\n        },\n        plan.changes\n            .iter()\n            .map(|change| NormalizedJsonRef::new(change.snapshot_content.as_str())),\n    )?;\n\n    let staged_commit = {\n        let commit = CommitDraftRef {\n            id: &plan.commit.id,\n            change_id: &plan.commit.change_id,\n            parent_ids: &plan.commit.parent_ids,\n            author_account_ids: &plan.commit.author_account_ids,\n            created_at: &plan.commit.created_at,\n        };\n        let mut writer = commit_store.writer(transaction.as_mut(), &mut writes);\n        writer\n            .stage_tracked_commit_draft(\n                commit,\n                authored_changes.iter().map(Change::as_ref).collect(),\n                Vec::new(),\n            )\n            .await?\n    };\n\n    let untracked_rows = plan\n        .untracked_rows\n        .iter()\n        
.map(untracked_state_row_from_seed)\n        .collect::<Result<Vec<_>, _>>()?;\n\n    {\n        untracked_state\n            .writer(&mut writes)\n            .stage_rows(untracked_rows.iter().map(|row| row.as_ref()))?;\n        let deltas = authored_changes\n            .iter()\n            .zip(&staged_commit.authored_locators)\n            .map(|(change, locator)| TrackedStateDeltaRef {\n                change: change.as_ref(),\n                locator: locator.as_ref(),\n                created_at: &change.created_at,\n                updated_at: &change.created_at,\n            })\n            .collect::<Vec<_>>();\n        let mut writer = tracked_state.writer(transaction.as_mut(), &mut writes);\n        writer\n            .stage_delta(&receipt.initial_commit_id, None, &deltas)\n            .await?;\n    }\n\n    writes.apply(&mut transaction.as_mut()).await?;\n    transaction.commit().await?;\n    Ok(receipt)\n}\n\nfn seed_change_to_commit_store_change(change: &InitSeedChange) -> Result<Change, LixError> {\n    Ok(Change {\n        id: change.id.clone(),\n        entity_id: change.entity_id.clone(),\n        schema_key: change.schema_key.clone(),\n        file_id: None,\n        snapshot_ref: Some(JsonRef::for_content(change.snapshot_content.as_bytes())),\n        metadata_ref: None,\n        created_at: change.created_at.clone(),\n    })\n}\n\nfn untracked_state_row_from_seed(row: &InitSeedLiveRow) -> Result<UntrackedStateRow, LixError> {\n    Ok(UntrackedStateRow {\n        entity_id: row.entity_id.clone(),\n        schema_key: row.schema_key.clone(),\n        file_id: None,\n        snapshot_content: Some(row.snapshot_content.clone()),\n        metadata: None,\n        created_at: row.created_at.clone(),\n        updated_at: row.updated_at.clone(),\n        global: row.global,\n        version_id: row.version_id.clone(),\n    })\n}\n\nfn untracked_row(\n    entity_id: EntityIdentity,\n    schema_key: &str,\n    snapshot_content: String,\n    timestamp: 
&str,\n) -> InitSeedLiveRow {\n    InitSeedLiveRow {\n        entity_id,\n        schema_key: schema_key.to_string(),\n        snapshot_content,\n        created_at: timestamp.to_string(),\n        updated_at: timestamp.to_string(),\n        global: true,\n        version_id: GLOBAL_VERSION_ID.to_string(),\n    }\n}\n\nfn canonical_change(\n    id: String,\n    entity_id: EntityIdentity,\n    schema_key: &str,\n    snapshot_content: String,\n    created_at: &str,\n) -> InitSeedChange {\n    InitSeedChange {\n        id,\n        entity_id,\n        schema_key: schema_key.to_string(),\n        snapshot_content,\n        created_at: created_at.to_string(),\n    }\n}\n\nfn version_descriptor_snapshot(id: &str, name: &str, hidden: bool) -> Result<String, LixError> {\n    encode_snapshot(json!({\n        \"id\": id,\n        \"name\": name,\n        \"hidden\": hidden,\n    }))\n}\n\nfn key_value_snapshot(key: &str, value: &str) -> Result<String, LixError> {\n    encode_snapshot(json!({\n        \"key\": key,\n        \"value\": value,\n    }))\n}\n\nfn registered_schema_snapshot(schema: &serde_json::Value) -> Result<String, LixError> {\n    encode_snapshot(json!({\n        \"value\": schema,\n    }))\n}\n\nfn version_ref_snapshot(id: &str, commit_id: &str) -> Result<String, LixError> {\n    encode_snapshot(json!({\n        \"id\": id,\n        \"commit_id\": commit_id,\n    }))\n}\n\nfn encode_snapshot(value: serde_json::Value) -> Result<String, LixError> {\n    serde_json::to_string(&value).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"engine init seed snapshot serialization failed: {error}\"),\n        )\n    })\n}\n\n#[cfg(test)]\nmod tests {\n    use serde_json::Value as JsonValue;\n\n    use super::*;\n    use crate::backend::{testing::UnitTestBackend, Backend};\n    use crate::functions::{FunctionProvider, SharedFunctionProvider};\n    use crate::storage::StorageContext;\n    use 
crate::tracked_state::TrackedStateContext;\n    use crate::untracked_state::UntrackedStateContext;\n\n    #[test]\n    fn plan_init_seed_returns_tracked_changes_and_untracked_workspace_state() {\n        let plan = plan_init_seed(test_functions()).expect(\"init seed should plan\");\n\n        assert_eq!(plan.changes.len(), seed_schema_definitions().len() + 3);\n        assert_eq!(plan.untracked_rows.len(), 3);\n        assert_eq!(plan.receipt.global_version_id, GLOBAL_VERSION_ID);\n        assert_eq!(plan.receipt.main_version_id, \"test-uuid-1\");\n        assert_eq!(plan.receipt.lix_id, \"test-uuid-2\");\n        assert_eq!(plan.receipt.initial_commit_id, \"test-uuid-3\");\n    }\n\n    #[test]\n    fn plan_init_seed_commit_header_tracks_schema_registrations_descriptor_and_lix_id_changes() {\n        let plan = plan_init_seed(test_functions()).expect(\"init seed should plan\");\n\n        assert_eq!(plan.commit.id, plan.receipt.initial_commit_id);\n        assert_eq!(plan.commit.change_id, \"test-uuid-21\");\n        assert!(plan.commit.parent_ids.is_empty());\n        assert!(plan.commit.author_account_ids.is_empty());\n        assert_eq!(plan.commit.created_at, \"test-timestamp-1\");\n\n        let change_ids = plan\n            .changes\n            .iter()\n            .map(|change| change.id.as_str())\n            .collect::<Vec<_>>();\n        assert_eq!(change_ids.len(), seed_schema_definitions().len() + 3);\n        assert!(change_ids.contains(&\"global\"));\n        assert!(!change_ids.contains(&plan.commit.change_id.as_str()));\n\n        let registered_schema_change_ids = plan\n            .changes\n            .iter()\n            .filter(|change| change.schema_key == REGISTERED_SCHEMA_KEY)\n            .map(|change| change.id.as_str())\n            .collect::<Vec<_>>();\n        for change_id in registered_schema_change_ids {\n            assert!(change_ids.contains(&change_id));\n        }\n    }\n\n    #[test]\n    fn 
plan_init_seed_registers_seed_schemas_as_initial_commit_rows() {\n        let plan = plan_init_seed(test_functions()).expect(\"init seed should plan\");\n        let registered_schema_changes = plan\n            .changes\n            .iter()\n            .filter(|change| change.schema_key == REGISTERED_SCHEMA_KEY)\n            .collect::<Vec<_>>();\n\n        assert_eq!(\n            registered_schema_changes.len(),\n            seed_schema_definitions().len()\n        );\n        assert!(registered_schema_changes.iter().any(|change| {\n            snapshot(change)\n                .pointer(\"/value/x-lix-key\")\n                .and_then(JsonValue::as_str)\n                == Some(REGISTERED_SCHEMA_KEY)\n        }));\n        assert!(registered_schema_changes.iter().any(|change| {\n            snapshot(change)\n                .pointer(\"/value/x-lix-key\")\n                .and_then(JsonValue::as_str)\n                == Some(KEY_VALUE_SCHEMA_KEY)\n        }));\n    }\n\n    #[test]\n    fn plan_init_seed_version_refs_point_to_initial_commit() {\n        let plan = plan_init_seed(test_functions()).expect(\"init seed should plan\");\n        let version_refs = plan\n            .untracked_rows\n            .iter()\n            .filter(|row| row.schema_key == VERSION_REF_SCHEMA_KEY)\n            .collect::<Vec<_>>();\n\n        assert_eq!(version_refs.len(), 2);\n        assert!(plan\n            .changes\n            .iter()\n            .all(|change| change.schema_key != VERSION_REF_SCHEMA_KEY));\n        for row in version_refs {\n            assert_eq!(row.schema_key, VERSION_REF_SCHEMA_KEY);\n            assert_eq!(row.version_id, GLOBAL_VERSION_ID);\n            let snapshot = untracked_snapshot(row);\n            assert_eq!(\n                snapshot.get(\"commit_id\").and_then(JsonValue::as_str),\n                Some(plan.receipt.initial_commit_id.as_str())\n            );\n        }\n    }\n\n    #[test]\n    fn 
plan_init_seed_workspace_version_points_to_main_version() {\n        let plan = plan_init_seed(test_functions()).expect(\"init seed should plan\");\n        let workspace_row = plan\n            .untracked_rows\n            .iter()\n            .find(|row| {\n                row.schema_key == KEY_VALUE_SCHEMA_KEY\n                    && row.entity_id\n                        == crate::entity_identity::EntityIdentity::single(WORKSPACE_VERSION_KEY)\n            })\n            .expect(\"workspace version row should exist\");\n\n        assert_eq!(workspace_row.version_id, GLOBAL_VERSION_ID);\n        assert!(workspace_row.global);\n        let snapshot = untracked_snapshot(workspace_row);\n        assert_eq!(\n            snapshot.get(\"key\").and_then(JsonValue::as_str),\n            Some(WORKSPACE_VERSION_KEY)\n        );\n        assert_eq!(\n            snapshot.get(\"value\").and_then(JsonValue::as_str),\n            Some(plan.receipt.main_version_id.as_str())\n        );\n    }\n\n    #[tokio::test]\n    async fn initialize_writes_initial_commit_through_commit_store() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend);\n        let commit_store = CommitStoreContext::new();\n        let tracked_state = TrackedStateContext::new();\n        let untracked_state = UntrackedStateContext::new();\n\n        let receipt = initialize(\n            storage.clone(),\n            &commit_store,\n            &tracked_state,\n            &untracked_state,\n        )\n        .await\n        .expect(\"engine should initialize\");\n        let reader = commit_store.reader(storage.clone());\n        let commit = reader\n            .load_commit(&receipt.initial_commit_id)\n            .await\n            .expect(\"commit should load\")\n            .expect(\"initial commit should exist\");\n\n        assert_eq!(commit.id, receipt.initial_commit_id);\n        
assert_eq!(commit.change_pack_count, 1);\n        assert_eq!(commit.membership_pack_count, 0);\n\n        let change_pack = reader\n            .load_change_pack(&commit.id, 0)\n            .await\n            .expect(\"change pack should load\")\n            .expect(\"initial change pack should exist\");\n        assert_eq!(change_pack.len(), seed_schema_definitions().len() + 3);\n        assert!(change_pack\n            .iter()\n            .all(|change| change.id != commit.change_id));\n\n        let entries = reader\n            .load_change_index_entries(&[commit.change_id.clone(), \"global\".to_string()])\n            .await\n            .expect(\"change index should load\");\n        assert!(entries[0].is_some());\n        assert!(entries[1].is_some());\n    }\n\n    fn snapshot(change: &InitSeedChange) -> JsonValue {\n        serde_json::from_str(&change.snapshot_content).expect(\"snapshot should be JSON\")\n    }\n\n    fn untracked_snapshot(row: &InitSeedLiveRow) -> JsonValue {\n        serde_json::from_str(&row.snapshot_content).expect(\"snapshot should be JSON\")\n    }\n\n    fn test_functions() -> FunctionProviderHandle {\n        SharedFunctionProvider::new(\n            Box::new(TestFunctionProvider::default()) as Box<dyn FunctionProvider + Send>\n        )\n    }\n\n    #[derive(Default)]\n    struct TestFunctionProvider {\n        uuid_count: usize,\n        timestamp_count: usize,\n    }\n\n    impl FunctionProvider for TestFunctionProvider {\n        fn uuid_v7(&mut self) -> String {\n            self.uuid_count += 1;\n            format!(\"test-uuid-{}\", self.uuid_count)\n        }\n\n        fn timestamp(&mut self) -> String {\n            self.timestamp_count += 1;\n            format!(\"test-timestamp-{}\", self.timestamp_count)\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/json_store/compression.rs",
    "content": "use crate::LixError;\n\n#[cfg(not(target_arch = \"wasm32\"))]\npub(crate) fn compress_json_payload(json_data: &[u8]) -> Result<Vec<u8>, LixError> {\n    zstd::bulk::compress(json_data, 1).map_err(|error| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\"json compression failed: {error}\"),\n        hint: None,\n        details: None,\n    })\n}\n\n#[cfg(target_arch = \"wasm32\")]\npub(crate) fn compress_json_payload(json_data: &[u8]) -> Result<Vec<u8>, LixError> {\n    Ok(ruzstd::encoding::compress_to_vec(\n        json_data,\n        ruzstd::encoding::CompressionLevel::Fastest,\n    ))\n}\n\n#[cfg(not(target_arch = \"wasm32\"))]\npub(crate) fn decode_json_zstd_payload(\n    compressed_payload: &[u8],\n    uncompressed_len: usize,\n    hash_hex: &str,\n) -> Result<Vec<u8>, LixError> {\n    zstd::bulk::decompress(compressed_payload, uncompressed_len).map_err(|error| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\"json decompression failed for ref '{hash_hex}': {error}\"),\n        hint: None,\n        details: None,\n    })\n}\n\n#[cfg(target_arch = \"wasm32\")]\npub(crate) fn decode_json_zstd_payload(\n    compressed_payload: &[u8],\n    _uncompressed_len: usize,\n    _hash_hex: &str,\n) -> Result<Vec<u8>, LixError> {\n    use std::io::Read as _;\n\n    let mut decoder =\n        ruzstd::decoding::StreamingDecoder::new(compressed_payload).map_err(|error| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\"json decompression failed: {error}\"),\n            hint: None,\n            details: None,\n        })?;\n\n    let mut output = Vec::new();\n    decoder.read_to_end(&mut output).map_err(|error| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\"json decompression failed: {error}\"),\n        hint: None,\n        details: None,\n    })?;\n    Ok(output)\n}\n\n#[cfg(test)]\nmod tests {\n  
  use super::*;\n\n    #[test]\n    fn zstd_payload_roundtrips() {\n        let json = \"zstd-friendly text \".repeat(2048);\n        let compressed = compress_json_payload(json.as_bytes()).expect(\"should compress\");\n        assert!(compressed.len() < json.len());\n\n        let hash_hex = blake3::hash(json.as_bytes()).to_hex().to_string();\n        let decoded =\n            decode_json_zstd_payload(&compressed, json.len(), &hash_hex).expect(\"should decode\");\n\n        assert_eq!(decoded, json.as_bytes());\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/json_store/context.rs",
    "content": "use crate::json_store::store;\nuse crate::json_store::types::{\n    JsonLoadBatch, JsonLoadRequestRef, JsonProjection, JsonProjectionBatch,\n    JsonProjectionLoadRequestRef, JsonRef, JsonValueBatch, JsonWritePlacementRef,\n    NormalizedJsonRef,\n};\nuse crate::storage::{KvGetGroup, StorageReader, StorageWriteSet};\nuse crate::LixError;\nuse std::collections::{HashMap, HashSet};\n\nconst PACK_LOCAL_MAX_JSON_BYTES: usize = 64 * 1024;\n\n#[derive(Debug, Clone, Copy)]\npub(crate) struct JsonStoreContext;\n\nimpl JsonStoreContext {\n    pub(crate) fn new() -> Self {\n        Self\n    }\n\n    pub(crate) fn reader<S>(&self, store: S) -> JsonStoreReader<S>\n    where\n        S: StorageReader,\n    {\n        JsonStoreReader { store }\n    }\n\n    pub(crate) fn writer(&self) -> JsonStoreWriter {\n        JsonStoreWriter::new()\n    }\n\n    pub(crate) async fn load_bytes_many(\n        &self,\n        store: &mut impl StorageReader,\n        request: JsonLoadRequestRef<'_>,\n    ) -> Result<JsonLoadBatch, LixError> {\n        store::load_json_bytes_many_in_scope(store, request.refs, request.scope)\n            .await\n            .map(JsonLoadBatch::new)\n    }\n\n    pub(crate) fn commit_pack_get_group(&self, commit_id: &str, pack_id: u32) -> KvGetGroup {\n        KvGetGroup {\n            namespace: store::JSON_PACK_NAMESPACE.to_string(),\n            keys: vec![store::pack_key(commit_id, pack_id)],\n        }\n    }\n\n    pub(crate) fn decode_pack_refs(&self, bytes: &[u8]) -> Result<Vec<JsonRef>, LixError> {\n        store::decode_json_pack_refs(bytes)\n    }\n}\n\npub(crate) struct JsonStoreReader<S> {\n    store: S,\n}\n\nimpl<S> Clone for JsonStoreReader<S>\nwhere\n    S: Clone,\n{\n    fn clone(&self) -> Self {\n        Self {\n            store: self.store.clone(),\n        }\n    }\n}\n\nimpl<S> JsonStoreReader<S>\nwhere\n    S: StorageReader,\n{\n    pub(crate) async fn load_bytes_many(\n        &mut self,\n        request: 
JsonLoadRequestRef<'_>,\n    ) -> Result<JsonLoadBatch, LixError> {\n        store::load_json_bytes_many_in_scope(&mut self.store, request.refs, request.scope)\n            .await\n            .map(JsonLoadBatch::new)\n    }\n\n    pub(crate) async fn load_values_many(\n        &mut self,\n        request: JsonLoadRequestRef<'_>,\n    ) -> Result<JsonValueBatch, LixError> {\n        let refs = request.refs;\n        let values = self\n            .load_bytes_many(request)\n            .await?\n            .into_values()\n            .into_iter()\n            .enumerate()\n            .map(|(index, bytes)| match bytes {\n                Some(bytes) => serde_json::from_slice(&bytes).map(Some).map_err(|error| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        format!(\n                            \"json ref '{}' is invalid JSON: {error}\",\n                            refs[index].to_hex()\n                        ),\n                    )\n                }),\n                None => Ok(None),\n            })\n            .collect::<Result<Vec<_>, _>>()?;\n        Ok(JsonValueBatch::new(values))\n    }\n\n    pub(crate) async fn load_projections_many(\n        &mut self,\n        request: JsonProjectionLoadRequestRef<'_>,\n    ) -> Result<JsonProjectionBatch, LixError> {\n        let values = self\n            .load_values_many(JsonLoadRequestRef {\n                refs: request.refs,\n                scope: request.scope,\n            })\n            .await?\n            .into_values()\n            .into_iter()\n            .map(|value| {\n                value.map(|value| {\n                    JsonProjection::new(\n                        request\n                            .paths\n                            .iter()\n                            .map(|path| value.pointer(path.as_str()).cloned())\n                            .collect(),\n                    )\n                })\n            })\n           
 .collect();\n        Ok(JsonProjectionBatch::new(values))\n    }\n}\n\npub(crate) struct JsonStoreWriter;\n\n#[derive(Debug, Clone, Default)]\npub(crate) struct JsonStageBatchReport {\n    pub(crate) refs: Vec<JsonRef>,\n    pub(crate) pack_indexes: HashMap<[u8; 32], usize>,\n}\n\nimpl JsonStoreWriter {\n    fn new() -> Self {\n        Self\n    }\n\n    pub(crate) fn stage_batch<'a>(\n        &mut self,\n        writes: &mut StorageWriteSet,\n        placement: JsonWritePlacementRef<'a>,\n        payloads: impl IntoIterator<Item = NormalizedJsonRef<'a>>,\n    ) -> Result<Vec<JsonRef>, LixError> {\n        self.stage_batch_report(writes, placement, payloads)\n            .map(|report| report.refs)\n    }\n\n    pub(crate) fn stage_batch_report<'a>(\n        &mut self,\n        writes: &mut StorageWriteSet,\n        placement: JsonWritePlacementRef<'a>,\n        payloads: impl IntoIterator<Item = NormalizedJsonRef<'a>>,\n    ) -> Result<JsonStageBatchReport, LixError> {\n        let mut unique_encoded = Vec::new();\n        let mut order = Vec::new();\n        let mut seen = HashSet::new();\n        for payload in payloads {\n            let encoded = match payload.trusted_json_ref() {\n                Some(json_ref) => store::encode_json_str_with_ref(payload.normalized(), json_ref)?,\n                None => store::encode_json_str(payload.normalized())?,\n            };\n            let hash: [u8; 32] = encoded\n                .json_ref\n                .as_hash_bytes()\n                .try_into()\n                .expect(\"json ref hash is fixed size\");\n            #[cfg(feature = \"storage-benches\")]\n            crate::storage_bench::record_json_store_stage_bytes(hash);\n            order.push(encoded.json_ref);\n            if seen.insert(hash) {\n                unique_encoded.push(encoded);\n            }\n        }\n\n        let pack_local = matches!(placement, JsonWritePlacementRef::CommitPack { .. 
});\n        let mut pack_indexes = HashMap::new();\n        if let JsonWritePlacementRef::CommitPack { commit_id, pack_id } = placement {\n            let pack_entries = unique_encoded\n                .iter()\n                .filter(|encoded| encoded.uncompressed_len <= PACK_LOCAL_MAX_JSON_BYTES)\n                .collect::<Vec<_>>();\n            for (index, encoded) in pack_entries.iter().enumerate() {\n                pack_indexes.insert(*encoded.json_ref.as_hash_array(), index);\n            }\n            if !pack_entries.is_empty() {\n                let encoded_pack = store::encode_json_pack(&pack_entries)?;\n                writes.put(\n                    store::JSON_PACK_NAMESPACE,\n                    store::pack_key(commit_id, pack_id),\n                    encoded_pack,\n                );\n            }\n        }\n\n        for encoded in &unique_encoded {\n            if pack_local && encoded.uncompressed_len <= PACK_LOCAL_MAX_JSON_BYTES {\n                continue;\n            }\n            writes.put(\n                store::JSON_NAMESPACE,\n                encoded.json_ref.as_hash_bytes().to_vec(),\n                store::encode_direct_json_payload(encoded),\n            );\n        }\n\n        Ok(JsonStageBatchReport {\n            refs: order,\n            pack_indexes,\n        })\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use super::*;\n    use crate::backend::testing::UnitTestBackend;\n    use crate::json_store::types::JsonReadScopeRef;\n    use crate::storage::StorageContext;\n\n    #[tokio::test]\n    async fn commit_local_batch_writes_pack_without_direct_rows() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let context = JsonStoreContext::new();\n        let first = \"{\\\"value\\\":\\\"first\\\"}\";\n        let second = \"{\\\"value\\\":\\\"second\\\"}\";\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n       
     .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        context\n            .writer()\n            .stage_batch(\n                &mut writes,\n                JsonWritePlacementRef::CommitPack {\n                    commit_id: \"commit-a\",\n                    pack_id: 0,\n                },\n                [\n                    NormalizedJsonRef::new(first),\n                    NormalizedJsonRef::new(second),\n                ],\n            )\n            .expect(\"json pack should stage\");\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"json pack should apply\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let refs = [\n            JsonRef::for_content(first.as_bytes()),\n            JsonRef::for_content(second.as_bytes()),\n        ];\n        let unknown = context\n            .reader(storage.clone())\n            .load_bytes_many(JsonLoadRequestRef {\n                refs: &refs,\n                scope: JsonReadScopeRef::OutOfBand,\n            })\n            .await\n            .expect(\"unknown load should check direct rows\");\n        assert_eq!(unknown.into_values(), vec![None, None]);\n\n        let pack_ids = [0];\n        let packed = context\n            .reader(storage.clone())\n            .load_bytes_many(JsonLoadRequestRef {\n                refs: &refs,\n                scope: JsonReadScopeRef::CommitPacks {\n                    commit_id: \"commit-a\",\n                    pack_ids: &pack_ids,\n                },\n            })\n            .await\n            .expect(\"packed load should hydrate\");\n        assert_eq!(\n            packed.into_values(),\n            vec![\n                Some(first.as_bytes().to_vec()),\n                Some(second.as_bytes().to_vec())\n            ]\n        );\n    }\n\n    #[tokio::test]\n    async fn 
commit_local_batch_dedupes_pack_payloads_but_returns_request_order() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let context = JsonStoreContext::new();\n        let first = \"{\\\"value\\\":\\\"first\\\"}\";\n        let second = \"{\\\"value\\\":\\\"second\\\"}\";\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        let staged_refs = context\n            .writer()\n            .stage_batch(\n                &mut writes,\n                JsonWritePlacementRef::CommitPack {\n                    commit_id: \"commit-a\",\n                    pack_id: 0,\n                },\n                [\n                    NormalizedJsonRef::new(first),\n                    NormalizedJsonRef::new(first),\n                    NormalizedJsonRef::new(second),\n                ],\n            )\n            .expect(\"json pack should stage\");\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"json pack should apply\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let first_ref = JsonRef::for_content(first.as_bytes());\n        let second_ref = JsonRef::for_content(second.as_bytes());\n        assert_eq!(staged_refs, vec![first_ref, first_ref, second_ref]);\n\n        let refs = [first_ref, second_ref];\n        let unknown = context\n            .reader(storage.clone())\n            .load_bytes_many(JsonLoadRequestRef {\n                refs: &refs,\n                scope: JsonReadScopeRef::OutOfBand,\n            })\n            .await\n            .expect(\"unknown load should check direct rows\");\n        assert_eq!(unknown.into_values(), vec![None, None]);\n\n        let pack_ids = [0];\n        let packed = context\n            
.reader(storage.clone())\n            .load_bytes_many(JsonLoadRequestRef {\n                refs: &refs,\n                scope: JsonReadScopeRef::CommitPacks {\n                    commit_id: \"commit-a\",\n                    pack_ids: &pack_ids,\n                },\n            })\n            .await\n            .expect(\"packed load should hydrate\");\n        assert_eq!(\n            packed.into_values(),\n            vec![\n                Some(first.as_bytes().to_vec()),\n                Some(second.as_bytes().to_vec())\n            ]\n        );\n    }\n\n    #[tokio::test]\n    async fn commit_local_batch_accepts_trusted_prehashed_payload() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let context = JsonStoreContext::new();\n        let json = \"{\\\"value\\\":\\\"prehashed\\\"}\";\n        let json_ref = JsonRef::for_content(json.as_bytes());\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        let refs = context\n            .writer()\n            .stage_batch(\n                &mut writes,\n                JsonWritePlacementRef::CommitPack {\n                    commit_id: \"commit-a\",\n                    pack_id: 0,\n                },\n                [NormalizedJsonRef::trusted_prehashed(json, json_ref)],\n            )\n            .expect(\"prehashed json should stage\");\n        assert_eq!(refs, vec![json_ref]);\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"json pack should apply\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let pack_ids = [0];\n        let packed = context\n            .reader(storage.clone())\n            .load_bytes_many(JsonLoadRequestRef {\n                refs: &refs,\n           
     scope: JsonReadScopeRef::CommitPacks {\n                    commit_id: \"commit-a\",\n                    pack_ids: &pack_ids,\n                },\n            })\n            .await\n            .expect(\"prehashed payload should hydrate\");\n        assert_eq!(packed.into_values(), vec![Some(json.as_bytes().to_vec())]);\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/json_store/encoded.rs",
    "content": "use crate::json_store::types::JsonRef;\nuse std::borrow::Cow;\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) enum JsonCodec {\n    Raw,\n    Zstd,\n}\n\npub(crate) struct EncodedJson<'a> {\n    pub(crate) json_ref: JsonRef,\n    pub(crate) codec: JsonCodec,\n    pub(crate) uncompressed_len: usize,\n    pub(crate) data: Cow<'a, [u8]>,\n}\n"
  },
  {
    "path": "packages/engine/src/json_store/mod.rs",
    "content": "pub(crate) mod compression;\npub(crate) mod context;\nmod encoded;\npub(crate) mod store;\npub(crate) mod types;\n\n#[allow(unused_imports)]\npub(crate) use context::{JsonStoreContext, JsonStoreReader, JsonStoreWriter};\npub(crate) use types::{\n    JsonLoadRequestRef, JsonReadScopeRef, JsonRef, JsonWritePlacementRef, NormalizedJson,\n    NormalizedJsonRef,\n};\n"
  },
  {
    "path": "packages/engine/src/json_store/store.rs",
    "content": "use crate::json_store::compression::{compress_json_payload, decode_json_zstd_payload};\nuse crate::json_store::encoded::{EncodedJson, JsonCodec};\nuse crate::json_store::types::{JsonReadScopeRef, JsonRef};\nuse crate::storage::{KvGetGroup, KvGetRequest, StorageReader};\nuse crate::LixError;\nuse std::borrow::Cow;\nuse std::collections::HashMap;\n\npub(crate) const JSON_NAMESPACE: &str = \"json_store.json\";\npub(crate) const JSON_PACK_NAMESPACE: &str = \"json_store.pack\";\nconst STORED_JSON_MAGIC: &[u8] = b\"lix-json:v1\";\nconst STORED_JSON_HEADER_LEN: usize = STORED_JSON_MAGIC.len() + 1 + 8;\nconst STORED_JSON_PACK_MAGIC: &[u8] = b\"lix-json-pack:v2\";\nconst STORED_JSON_PACK_ENTRY_HEADER_LEN: usize = 32 + 1 + 4 + 4 + 4;\nconst ZSTD_MIN_JSON_BYTES: usize = 16 * 1024;\nconst MIN_ZSTD_SAVINGS_BYTES: usize = 128;\n\nstruct StoredJsonPayload<'a> {\n    codec: JsonCodec,\n    uncompressed_len: usize,\n    data: &'a [u8],\n}\n\nstruct JsonPackLayout {\n    directory_start: usize,\n    payload_start: usize,\n    count: usize,\n}\n\nstruct JsonPackEntry<'a> {\n    hash: [u8; 32],\n    payload: StoredJsonPayload<'a>,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\nenum JsonHashCheck {\n    /// Hot reads trust the local storage layer and pack directory. 
Content\n    /// hashes are computed at write time; exhaustive verification belongs in\n    /// explicit integrity-check/fsck callers rather than every row scan.\n    TrustedHotRead,\n    Verify,\n}\n\nenum OrderedSinglePackProbe {\n    Hit(Vec<Option<Vec<u8>>>),\n    MissPresent(Vec<u8>),\n    MissAbsent,\n}\n\nfn raw_json_ref_for_content(json: &str) -> JsonRef {\n    JsonRef::from_hash(blake3::hash(json.as_bytes()))\n}\n\npub(crate) fn json_ref_for_content(bytes: &[u8]) -> JsonRef {\n    JsonRef::for_content(bytes)\n}\n\n#[cfg(test)]\nfn encode_json(json: &str) -> Result<EncodedJson<'_>, LixError> {\n    encode_json_for_storage(json)\n}\n\nfn encode_json_for_storage(json: &str) -> Result<EncodedJson<'_>, LixError> {\n    let raw_ref = raw_json_ref_for_content(json);\n    encode_json_for_storage_with_ref(json, raw_ref)\n}\n\nfn encode_json_for_storage_with_ref(\n    json: &str,\n    raw_ref: JsonRef,\n) -> Result<EncodedJson<'_>, LixError> {\n    let raw_data = json.as_bytes();\n\n    if raw_data.len() >= ZSTD_MIN_JSON_BYTES {\n        let compressed = compress_json_payload(raw_data)?;\n        if raw_data.len().saturating_sub(compressed.len()) >= MIN_ZSTD_SAVINGS_BYTES {\n            return Ok(EncodedJson {\n                json_ref: raw_ref,\n                codec: JsonCodec::Zstd,\n                uncompressed_len: json.len(),\n                data: Cow::Owned(compressed),\n            });\n        }\n    }\n\n    Ok(EncodedJson {\n        json_ref: raw_ref,\n        codec: JsonCodec::Raw,\n        uncompressed_len: json.len(),\n        data: Cow::Borrowed(raw_data),\n    })\n}\n\npub(crate) fn encode_json_str(json: &str) -> Result<EncodedJson<'_>, LixError> {\n    encode_json_for_storage(json)\n}\n\npub(crate) fn encode_json_str_with_ref(\n    json: &str,\n    json_ref: JsonRef,\n) -> Result<EncodedJson<'_>, LixError> {\n    debug_assert_eq!(JsonRef::for_content(json.as_bytes()), json_ref);\n    encode_json_for_storage_with_ref(json, json_ref)\n}\n\npub(crate) 
fn encode_direct_json_payload(encoded_json: &EncodedJson<'_>) -> Vec<u8> {\n    encode_stored_json_payload(encoded_json)\n}\n\npub(crate) fn pack_key(commit_id: &str, pack_id: u32) -> Vec<u8> {\n    let commit_id = commit_id.as_bytes();\n    let mut key = Vec::with_capacity(4 + commit_id.len() + 4);\n    key.extend_from_slice(&(commit_id.len() as u32).to_be_bytes());\n    key.extend_from_slice(commit_id);\n    key.extend_from_slice(&pack_id.to_be_bytes());\n    key\n}\n\npub(crate) fn decode_json_pack_refs(bytes: &[u8]) -> Result<Vec<JsonRef>, LixError> {\n    let layout = json_pack_layout(bytes)?;\n    let mut refs = Vec::with_capacity(layout.count);\n    for index in 0..layout.count {\n        refs.push(JsonRef::from_hash_bytes(\n            json_pack_entry(bytes, &layout, index)?.hash,\n        ));\n    }\n    Ok(refs)\n}\n\npub(crate) fn encode_json_pack(entries: &[&EncodedJson<'_>]) -> Result<Vec<u8>, LixError> {\n    let mut directory_len =\n        STORED_JSON_PACK_MAGIC.len() + 4 + entries.len() * STORED_JSON_PACK_ENTRY_HEADER_LEN;\n    let payload_len = entries\n        .iter()\n        .map(|entry| entry.data.as_ref().len())\n        .sum::<usize>();\n    let mut out = Vec::with_capacity(directory_len + payload_len);\n    out.extend_from_slice(STORED_JSON_PACK_MAGIC);\n    out.extend_from_slice(&(entries.len() as u32).to_be_bytes());\n\n    let mut offset = 0usize;\n    for entry in entries {\n        let data = entry.data.as_ref();\n        out.extend_from_slice(entry.json_ref.as_hash_bytes());\n        out.push(json_codec_byte(entry.codec));\n        out.extend_from_slice(&json_pack_u32(\n            entry.uncompressed_len,\n            \"uncompressed length\",\n        )?);\n        out.extend_from_slice(&json_pack_u32(offset, \"payload offset\")?);\n        out.extend_from_slice(&json_pack_u32(data.len(), \"payload length\")?);\n        offset = offset.checked_add(data.len()).ok_or_else(|| {\n            LixError::new(\n                
LixError::CODE_INTERNAL_ERROR,\n                \"json_store pack payload offset overflow\",\n            )\n        })?;\n    }\n    for entry in entries {\n        out.extend_from_slice(entry.data.as_ref());\n    }\n    directory_len = out.len() - payload_len;\n    debug_assert_eq!(\n        directory_len,\n        STORED_JSON_PACK_MAGIC.len() + 4 + entries.len() * STORED_JSON_PACK_ENTRY_HEADER_LEN\n    );\n    Ok(out)\n}\n\nfn json_pack_u32(value: usize, field: &str) -> Result<[u8; 4], LixError> {\n    let value = u32::try_from(value).map_err(|_| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\"json_store pack {field} exceeds u32\"),\n        )\n    })?;\n    Ok(value.to_be_bytes())\n}\n\npub(crate) fn encode_json_bytes_for_storage(bytes: &[u8]) -> Result<(JsonRef, Vec<u8>), LixError> {\n    let json = std::str::from_utf8(bytes).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"json bytes are invalid UTF-8: {error}\"),\n        )\n    })?;\n    let json_ref = JsonRef::from_hash(blake3::hash(bytes));\n    encode_json_str_for_storage_with_ref(json, json_ref)\n}\n\npub(crate) fn encode_json_str_for_storage_with_ref(\n    json: &str,\n    json_ref: JsonRef,\n) -> Result<(JsonRef, Vec<u8>), LixError> {\n    let encoded_json = encode_json_for_storage_with_ref(json, json_ref)?;\n    let json_ref = encoded_json.json_ref.clone();\n    Ok((json_ref, encode_stored_json_payload(&encoded_json)))\n}\n\nasync fn load_json_bytes_direct(\n    store: &mut impl StorageReader,\n    json_ref: &JsonRef,\n) -> Result<Option<Vec<u8>>, LixError> {\n    let result = store\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: JSON_NAMESPACE.to_string(),\n                keys: vec![json_ref.as_hash_bytes().to_vec()],\n            }],\n        })\n        .await?\n        .groups\n        .into_iter()\n        .next()\n        .and_then(|group| 
group.single_value_owned());\n    let Some(bytes) = result else {\n        return Ok(None);\n    };\n    let stored_payload = decode_stored_json_payload(&bytes)?;\n    let _ = store;\n    decode_json_payload(json_ref, stored_payload, JsonHashCheck::TrustedHotRead).map(Some)\n}\n\npub(crate) async fn load_json_bytes_many_in_scope(\n    store: &mut impl StorageReader,\n    json_refs: &[JsonRef],\n    scope: JsonReadScopeRef<'_>,\n) -> Result<Vec<Option<Vec<u8>>>, LixError> {\n    load_json_bytes_many_in_scope_with_hash_check(\n        store,\n        json_refs,\n        scope,\n        JsonHashCheck::TrustedHotRead,\n    )\n    .await\n}\n\npub(crate) async fn verify_json_bytes_many_in_scope(\n    store: &mut impl StorageReader,\n    json_refs: &[JsonRef],\n    scope: JsonReadScopeRef<'_>,\n) -> Result<Vec<Option<Vec<u8>>>, LixError> {\n    load_json_bytes_many_in_scope_with_hash_check(store, json_refs, scope, JsonHashCheck::Verify)\n        .await\n}\n\nasync fn load_json_bytes_many_in_scope_with_hash_check(\n    store: &mut impl StorageReader,\n    json_refs: &[JsonRef],\n    scope: JsonReadScopeRef<'_>,\n    hash_check: JsonHashCheck,\n) -> Result<Vec<Option<Vec<u8>>>, LixError> {\n    if json_refs.is_empty() {\n        return Ok(Vec::new());\n    }\n\n    let ordered_single_pack_probe = if let JsonReadScopeRef::CommitPacks {\n        commit_id,\n        pack_ids: [pack_id],\n    } = scope\n    {\n        let probe =\n            load_ordered_single_pack(store, json_refs, commit_id, *pack_id, hash_check).await?;\n        if let OrderedSinglePackProbe::Hit(values) = probe {\n            return Ok(values);\n        }\n        Some(probe)\n    } else {\n        None\n    };\n\n    let mut unique_keys = Vec::new();\n    let mut unique_refs = Vec::new();\n    let mut key_indexes = HashMap::<[u8; 32], usize>::new();\n    let mut requested_indexes = Vec::with_capacity(json_refs.len());\n    let mut has_duplicate_refs = false;\n    for json_ref in json_refs {\n        let 
hash = *json_ref.as_hash_array();\n        let index = match key_indexes.get(&hash) {\n            Some(index) => {\n                has_duplicate_refs = true;\n                *index\n            }\n            None => {\n                let index = unique_keys.len();\n                key_indexes.insert(hash, index);\n                unique_keys.push(hash.to_vec());\n                unique_refs.push(*json_ref);\n                index\n            }\n        };\n        requested_indexes.push(index);\n    }\n\n    let mut unique_values = match scope {\n        JsonReadScopeRef::OutOfBand => vec![None; unique_refs.len()],\n        JsonReadScopeRef::CommitPacks {\n            commit_id,\n            pack_ids: [pack_id],\n        } => match &ordered_single_pack_probe {\n            Some(OrderedSinglePackProbe::MissPresent(stored_pack)) => {\n                load_from_single_pack_bytes(stored_pack, &unique_refs, hash_check)?\n            }\n            Some(OrderedSinglePackProbe::MissAbsent) => vec![None; unique_refs.len()],\n            _ => {\n                let pack_ids = [*pack_id];\n                load_from_packs(store, &unique_refs, commit_id, &pack_ids, hash_check).await?\n            }\n        },\n        JsonReadScopeRef::CommitPacks {\n            commit_id,\n            pack_ids,\n        } => load_from_packs(store, &unique_refs, commit_id, pack_ids, hash_check).await?,\n    };\n\n    let missing = unique_values\n        .iter()\n        .enumerate()\n        .filter_map(|(index, value)| value.is_none().then_some(index))\n        .collect::<Vec<_>>();\n    if missing.is_empty() {\n        return Ok(json_values_in_request_order(\n            unique_values,\n            requested_indexes,\n            has_duplicate_refs,\n        ));\n    }\n\n    let result = store\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: JSON_NAMESPACE.to_string(),\n                keys: missing\n                    .iter()\n  
                  .map(|&index| unique_keys[index].clone())\n                    .collect(),\n            }],\n        })\n        .await?;\n    let group = result.groups.into_iter().next().ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"json_store batch load returned no result group\",\n        )\n    })?;\n    if group.len() != missing.len() {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"json_store batch load returned {} values for {} requested refs\",\n                group.len(),\n                missing.len()\n            ),\n        ));\n    }\n\n    for (index, stored_bytes) in group.values_iter().enumerate() {\n        let unique_index = missing[index];\n        let Some(stored_bytes) = stored_bytes else {\n            continue;\n        };\n        let stored_payload = decode_stored_json_payload(stored_bytes)?;\n        unique_values[unique_index] = Some(decode_json_payload(\n            &unique_refs[unique_index],\n            stored_payload,\n            hash_check,\n        )?);\n    }\n\n    Ok(json_values_in_request_order(\n        unique_values,\n        requested_indexes,\n        has_duplicate_refs,\n    ))\n}\n\nfn json_values_in_request_order(\n    unique_values: Vec<Option<Vec<u8>>>,\n    requested_indexes: Vec<usize>,\n    has_duplicate_refs: bool,\n) -> Vec<Option<Vec<u8>>> {\n    if !has_duplicate_refs {\n        debug_assert_eq!(requested_indexes.len(), unique_values.len());\n        debug_assert!(requested_indexes\n            .iter()\n            .copied()\n            .enumerate()\n            .all(|(request_index, unique_index)| request_index == unique_index));\n        return unique_values;\n    }\n    requested_indexes\n        .into_iter()\n        .map(|index| unique_values[index].clone())\n        .collect()\n}\n\nasync fn load_ordered_single_pack(\n    store: &mut impl 
StorageReader,\n    requested_refs: &[JsonRef],\n    commit_id: &str,\n    pack_id: u32,\n    hash_check: JsonHashCheck,\n) -> Result<OrderedSinglePackProbe, LixError> {\n    let result = store\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: JSON_PACK_NAMESPACE.to_string(),\n                keys: vec![pack_key(commit_id, pack_id)],\n            }],\n        })\n        .await?;\n    let group = result.groups.into_iter().next().ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"json_store ordered pack load returned no result group\",\n        )\n    })?;\n    if group.len() != 1 {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"json_store ordered pack load returned {} values for 1 requested pack\",\n                group.len()\n            ),\n        ));\n    }\n    let Some(stored_pack) = group.value(0).flatten() else {\n        return Ok(OrderedSinglePackProbe::MissAbsent);\n    };\n    let mut values = vec![None; requested_refs.len()];\n    if load_json_pack_values_in_request_order(stored_pack, hash_check, requested_refs, &mut values)?\n    {\n        Ok(OrderedSinglePackProbe::Hit(values))\n    } else {\n        Ok(OrderedSinglePackProbe::MissPresent(stored_pack.to_vec()))\n    }\n}\n\nfn load_from_single_pack_bytes(\n    stored_pack: &[u8],\n    unique_refs: &[JsonRef],\n    hash_check: JsonHashCheck,\n) -> Result<Vec<Option<Vec<u8>>>, LixError> {\n    let mut values = vec![None; unique_refs.len()];\n    if load_json_pack_values_in_request_order(stored_pack, hash_check, unique_refs, &mut values)? 
{\n        return Ok(values);\n    }\n    let wanted = unique_refs\n        .iter()\n        .enumerate()\n        .map(|(index, json_ref)| (*json_ref.as_hash_array(), index))\n        .collect::<HashMap<_, _>>();\n    load_json_pack_values(stored_pack, hash_check, &wanted, &mut values)?;\n    Ok(values)\n}\n\nasync fn load_from_packs(\n    store: &mut impl StorageReader,\n    unique_refs: &[JsonRef],\n    commit_id: &str,\n    pack_ids: &[u32],\n    hash_check: JsonHashCheck,\n) -> Result<Vec<Option<Vec<u8>>>, LixError> {\n    let mut values = vec![None; unique_refs.len()];\n    if pack_ids.is_empty() || unique_refs.is_empty() {\n        return Ok(values);\n    }\n    let keys = pack_ids\n        .iter()\n        .map(|&pack_id| pack_key(commit_id, pack_id))\n        .collect::<Vec<_>>();\n    let result = store\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: JSON_PACK_NAMESPACE.to_string(),\n                keys,\n            }],\n        })\n        .await?;\n    let group = result.groups.into_iter().next().ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"json_store pack load returned no result group\",\n        )\n    })?;\n    if pack_ids.len() == 1 && group.len() == 1 {\n        if let Some(stored_pack) = group.value(0).flatten() {\n            if load_json_pack_values_in_request_order(\n                stored_pack,\n                hash_check,\n                unique_refs,\n                &mut values,\n            )? 
{\n                return Ok(values);\n            }\n        }\n    }\n\n    let wanted = unique_refs\n        .iter()\n        .enumerate()\n        .map(|(index, json_ref)| (*json_ref.as_hash_array(), index))\n        .collect::<HashMap<_, _>>();\n    for stored_pack in group.values_iter().flatten() {\n        load_json_pack_values(stored_pack, hash_check, &wanted, &mut values)?;\n    }\n    Ok(values)\n}\n\nfn encode_stored_json_payload(encoded_json: &EncodedJson<'_>) -> Vec<u8> {\n    let mut out = Vec::with_capacity(STORED_JSON_HEADER_LEN + encoded_json.data.as_ref().len());\n    out.extend_from_slice(STORED_JSON_MAGIC);\n    out.push(json_codec_byte(encoded_json.codec));\n    out.extend_from_slice(&(encoded_json.uncompressed_len as u64).to_be_bytes());\n    out.extend_from_slice(encoded_json.data.as_ref());\n    out\n}\n\nfn decode_stored_json_payload(bytes: &[u8]) -> Result<StoredJsonPayload<'_>, LixError> {\n    if bytes.len() < STORED_JSON_HEADER_LEN {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"stored JSON payload is truncated\",\n        ));\n    }\n    if &bytes[..STORED_JSON_MAGIC.len()] != STORED_JSON_MAGIC {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"stored JSON payload has invalid header\",\n        ));\n    }\n    let codec = read_json_codec(bytes[STORED_JSON_MAGIC.len()])?;\n    let len_start = STORED_JSON_MAGIC.len() + 1;\n    let len_end = len_start + 8;\n    let uncompressed_len = u64::from_be_bytes(\n        bytes[len_start..len_end]\n            .try_into()\n            .expect(\"stored JSON length header is fixed size\"),\n    ) as usize;\n    Ok(StoredJsonPayload {\n        codec,\n        uncompressed_len,\n        data: &bytes[len_end..],\n    })\n}\n\nfn json_codec_byte(codec: JsonCodec) -> u8 {\n    match codec {\n        JsonCodec::Raw => 0,\n        JsonCodec::Zstd => 1,\n    }\n}\n\nfn read_json_codec(byte: u8) -> Result<JsonCodec, LixError> {\n    
match byte {\n        0 => Ok(JsonCodec::Raw),\n        1 => Ok(JsonCodec::Zstd),\n        _ => Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"stored JSON payload has unknown codec byte {byte}\"),\n        )),\n    }\n}\n\nfn decode_json_payload(\n    json_ref: &JsonRef,\n    stored_payload: StoredJsonPayload<'_>,\n    hash_check: JsonHashCheck,\n) -> Result<Vec<u8>, LixError> {\n    let data = match stored_payload.codec {\n        JsonCodec::Raw => Ok(stored_payload.data.to_vec()),\n        JsonCodec::Zstd => decode_json_zstd_payload(\n            stored_payload.data,\n            stored_payload.uncompressed_len,\n            &json_ref.to_hex(),\n        ),\n    }?;\n    if data.len() != stored_payload.uncompressed_len {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\n                \"json ref '{}' decoded to {} bytes, expected {}\",\n                json_ref.to_hex(),\n                data.len(),\n                stored_payload.uncompressed_len\n            ),\n        ));\n    }\n    if hash_check == JsonHashCheck::Verify {\n        let actual_hash = blake3::hash(&data);\n        if actual_hash.as_bytes() != json_ref.as_hash_bytes() {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"json ref '{}' hash mismatch\", json_ref.to_hex()),\n            ));\n        }\n    }\n    Ok(data)\n}\n\nfn load_json_pack_values_in_request_order(\n    bytes: &[u8],\n    hash_check: JsonHashCheck,\n    requested_refs: &[JsonRef],\n    values: &mut [Option<Vec<u8>>],\n) -> Result<bool, LixError> {\n    if values.len() < requested_refs.len() {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"json_store ordered pack load has fewer result slots than refs\",\n        ));\n    }\n    let layout = json_pack_layout(bytes)?;\n    if layout.count != requested_refs.len() {\n        return Ok(false);\n    }\n\n    
for (index, json_ref) in requested_refs.iter().enumerate() {\n        let entry = json_pack_entry(bytes, &layout, index)?;\n        if &entry.hash != json_ref.as_hash_array() {\n            for value in &mut values[..index] {\n                *value = None;\n            }\n            return Ok(false);\n        }\n        values[index] = Some(decode_json_payload(json_ref, entry.payload, hash_check)?);\n    }\n    Ok(true)\n}\n\nfn load_json_pack_values(\n    bytes: &[u8],\n    hash_check: JsonHashCheck,\n    wanted: &HashMap<[u8; 32], usize>,\n    values: &mut [Option<Vec<u8>>],\n) -> Result<(), LixError> {\n    let layout = json_pack_layout(bytes)?;\n    for index in 0..layout.count {\n        let entry = json_pack_entry(bytes, &layout, index)?;\n        let Some(&value_index) = wanted.get(&entry.hash) else {\n            continue;\n        };\n        let json_ref = JsonRef::from_hash_bytes(entry.hash);\n        values[value_index] = Some(decode_json_payload(&json_ref, entry.payload, hash_check)?);\n    }\n    Ok(())\n}\n\nfn json_pack_layout(bytes: &[u8]) -> Result<JsonPackLayout, LixError> {\n    if bytes.len() < STORED_JSON_PACK_MAGIC.len() + 4 {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"stored JSON pack is truncated\",\n        ));\n    }\n    if &bytes[..STORED_JSON_PACK_MAGIC.len()] != STORED_JSON_PACK_MAGIC {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"stored JSON pack has invalid header\",\n        ));\n    }\n    let count_start = STORED_JSON_PACK_MAGIC.len();\n    let count_end = count_start + 4;\n    let count = u32::from_be_bytes(\n        bytes[count_start..count_end]\n            .try_into()\n            .expect(\"json pack count header is fixed size\"),\n    ) as usize;\n    let directory_start = count_end;\n    let directory_len = count\n        .checked_mul(STORED_JSON_PACK_ENTRY_HEADER_LEN)\n        .ok_or_else(|| {\n            LixError::new(\n                
LixError::CODE_INTERNAL_ERROR,\n                \"json pack directory overflow\",\n            )\n        })?;\n    let payload_start = directory_start.checked_add(directory_len).ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"json pack payload offset overflow\",\n        )\n    })?;\n    if bytes.len() < payload_start {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"stored JSON pack directory is truncated\",\n        ));\n    }\n    Ok(JsonPackLayout {\n        directory_start,\n        payload_start,\n        count,\n    })\n}\n\nfn json_pack_entry<'a>(\n    bytes: &'a [u8],\n    layout: &JsonPackLayout,\n    index: usize,\n) -> Result<JsonPackEntry<'a>, LixError> {\n    if index >= layout.count {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"json pack entry index exceeds directory count\",\n        ));\n    }\n    let mut cursor = layout.directory_start + index * STORED_JSON_PACK_ENTRY_HEADER_LEN;\n    let hash: [u8; 32] = bytes[cursor..cursor + 32]\n        .try_into()\n        .expect(\"json pack hash header is fixed size\");\n    cursor += 32;\n    let codec = read_json_codec(bytes[cursor])?;\n    cursor += 1;\n    let uncompressed_len = u32::from_be_bytes(\n        bytes[cursor..cursor + 4]\n            .try_into()\n            .expect(\"json pack uncompressed length is fixed size\"),\n    ) as usize;\n    cursor += 4;\n    let offset = u32::from_be_bytes(\n        bytes[cursor..cursor + 4]\n            .try_into()\n            .expect(\"json pack payload offset is fixed size\"),\n    ) as usize;\n    cursor += 4;\n    let len = u32::from_be_bytes(\n        bytes[cursor..cursor + 4]\n            .try_into()\n            .expect(\"json pack payload length is fixed size\"),\n    ) as usize;\n    let data_start = layout.payload_start.checked_add(offset).ok_or_else(|| {\n        LixError::new(\n            
LixError::CODE_INTERNAL_ERROR,\n            \"json pack entry offset overflow\",\n        )\n    })?;\n    let data_end = data_start.checked_add(len).ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"json pack entry length overflow\",\n        )\n    })?;\n    if data_end > bytes.len() {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"stored JSON pack entry payload is truncated\",\n        ));\n    }\n    Ok(JsonPackEntry {\n        hash,\n        payload: StoredJsonPayload {\n            codec,\n            uncompressed_len,\n            data: &bytes[data_start..data_end],\n        },\n    })\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use super::*;\n    use crate::backend::testing::UnitTestBackend;\n    use crate::storage::{StorageContext, StorageWriteSet};\n\n    #[tokio::test]\n    async fn json_roundtrips_raw_payload() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let json = \"{\\\"value\\\":\\\"small\\\"}\";\n        let encoded = encode_json(json).expect(\"json should encode\");\n        assert_eq!(encoded.codec, JsonCodec::Raw);\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        writes.put(\n            JSON_NAMESPACE,\n            encoded.json_ref.as_hash_bytes().to_vec(),\n            encode_stored_json_payload(&encoded),\n        );\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"json should store\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let mut store = storage.clone();\n        assert_eq!(\n            load_json_bytes_direct(&mut store, &encoded.json_ref)\n                .await\n                
.expect(\"json should load\"),\n            Some(json.as_bytes().to_vec())\n        );\n    }\n\n    #[tokio::test]\n    async fn json_batch_load_roundtrips_in_request_order() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let first = encode_json(\"{\\\"value\\\":\\\"first\\\"}\").expect(\"first json should encode\");\n        let second = encode_json(\"{\\\"value\\\":\\\"second\\\"}\").expect(\"second json should encode\");\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        writes.put(\n            JSON_NAMESPACE,\n            first.json_ref.as_hash_bytes().to_vec(),\n            encode_stored_json_payload(&first),\n        );\n        writes.put(\n            JSON_NAMESPACE,\n            second.json_ref.as_hash_bytes().to_vec(),\n            encode_stored_json_payload(&second),\n        );\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"json should store\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let mut store = storage.clone();\n        let values = load_json_bytes_many_in_scope(\n            &mut store,\n            &[second.json_ref, first.json_ref, second.json_ref],\n            JsonReadScopeRef::OutOfBand,\n        )\n        .await\n        .expect(\"json batch should load\");\n\n        assert_eq!(\n            values,\n            vec![\n                Some(second.data.as_ref().to_vec()),\n                Some(first.data.as_ref().to_vec()),\n                Some(second.data.as_ref().to_vec()),\n            ]\n        );\n    }\n\n    #[tokio::test]\n    async fn verified_batch_load_rejects_hash_mismatch() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let requested_ref = 
JsonRef::for_content(br#\"{\"value\":\"requested\"}\"#);\n        let stored = encode_json(\"{\\\"value\\\":\\\"different\\\"}\").expect(\"stored json should encode\");\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        writes.put(\n            JSON_NAMESPACE,\n            requested_ref.as_hash_bytes().to_vec(),\n            encode_stored_json_payload(&stored),\n        );\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"json should store\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let mut store = storage.clone();\n        let trusted = load_json_bytes_many_in_scope(\n            &mut store,\n            &[requested_ref],\n            JsonReadScopeRef::OutOfBand,\n        )\n        .await\n        .expect(\"trusted hot read should not hash-check\");\n        assert_eq!(trusted, vec![Some(stored.data.as_ref().to_vec())]);\n\n        let mut store = storage.clone();\n        let error = verify_json_bytes_many_in_scope(\n            &mut store,\n            &[requested_ref],\n            JsonReadScopeRef::OutOfBand,\n        )\n        .await\n        .expect_err(\"verified read should reject mismatched content address\");\n        assert!(\n            error.to_string().contains(\"hash mismatch\"),\n            \"error should mention hash mismatch: {error}\"\n        );\n    }\n\n    #[tokio::test]\n    async fn verified_pack_load_checks_only_requested_entries() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let good = encode_json(\"{\\\"value\\\":\\\"good\\\"}\").expect(\"good json should encode\");\n        let bad_ref = JsonRef::for_content(br#\"{\"value\":\"expected\"}\"#);\n        let bad = 
encode_json_for_storage_with_ref(\"{\\\"value\\\":\\\"wrong\\\"}\", bad_ref)\n            .expect(\"bad json should encode with mismatched ref\");\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        writes.put(\n            JSON_PACK_NAMESPACE,\n            pack_key(\"commit-a\", 0),\n            encode_json_pack(&[&good, &bad]).expect(\"pack should encode\"),\n        );\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"json pack should store\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let pack_ids = [0];\n        let mut store = storage.clone();\n        let good_values = verify_json_bytes_many_in_scope(\n            &mut store,\n            &[good.json_ref],\n            JsonReadScopeRef::CommitPacks {\n                commit_id: \"commit-a\",\n                pack_ids: &pack_ids,\n            },\n        )\n        .await\n        .expect(\"unrequested bad pack entry should not be decoded\");\n        assert_eq!(good_values, vec![Some(good.data.as_ref().to_vec())]);\n\n        let mut store = storage.clone();\n        let error = verify_json_bytes_many_in_scope(\n            &mut store,\n            &[bad_ref],\n            JsonReadScopeRef::CommitPacks {\n                commit_id: \"commit-a\",\n                pack_ids: &pack_ids,\n            },\n        )\n        .await\n        .expect_err(\"requested bad pack entry should be verified\");\n        assert!(\n            error.to_string().contains(\"hash mismatch\"),\n            \"error should mention hash mismatch: {error}\"\n        );\n    }\n\n    #[test]\n    fn json_pack_directory_uses_compact_u32_fields() {\n        let first = encode_json(\"{\\\"value\\\":\\\"first\\\"}\").expect(\"first json should 
encode\");\n        let second = encode_json(\"{\\\"value\\\":\\\"second\\\"}\").expect(\"second json should encode\");\n        let pack = encode_json_pack(&[&first, &second]).expect(\"pack should encode\");\n        let payload_len = first.data.as_ref().len() + second.data.as_ref().len();\n\n        assert_eq!(STORED_JSON_PACK_ENTRY_HEADER_LEN, 32 + 1 + 4 + 4 + 4);\n        assert_eq!(\n            pack.len(),\n            STORED_JSON_PACK_MAGIC.len() + 4 + 2 * STORED_JSON_PACK_ENTRY_HEADER_LEN + payload_len\n        );\n    }\n\n    #[test]\n    fn json_pack_u32_rejects_oversized_directory_fields() {\n        let error = json_pack_u32((u32::MAX as usize) + 1, \"payload offset\")\n            .expect_err(\"oversized pack directory field should reject\");\n        assert!(\n            error.to_string().contains(\"payload offset exceeds u32\"),\n            \"error should identify oversized field: {error}\"\n        );\n    }\n\n    #[test]\n    fn ordered_pack_load_fast_path_requires_exact_pack_order() {\n        let first = encode_json(\"{\\\"value\\\":\\\"first\\\"}\").expect(\"first json should encode\");\n        let second = encode_json(\"{\\\"value\\\":\\\"second\\\"}\").expect(\"second json should encode\");\n        let pack = encode_json_pack(&[&first, &second]).expect(\"pack should encode\");\n\n        let mut values = vec![None, None];\n        let loaded = load_json_pack_values_in_request_order(\n            &pack,\n            JsonHashCheck::Verify,\n            &[first.json_ref, second.json_ref],\n            &mut values,\n        )\n        .expect(\"ordered pack load should parse\");\n        assert!(loaded);\n        assert_eq!(\n            values,\n            vec![\n                Some(first.data.as_ref().to_vec()),\n                Some(second.data.as_ref().to_vec()),\n            ]\n        );\n\n        let mut values = vec![None, None];\n        let loaded = load_json_pack_values_in_request_order(\n            &pack,\n            
JsonHashCheck::Verify,\n            &[second.json_ref, first.json_ref],\n            &mut values,\n        )\n        .expect(\"unordered refs should fall back without error\");\n        assert!(!loaded);\n        assert_eq!(values, vec![None, None]);\n    }\n\n    #[tokio::test]\n    async fn pack_batch_load_falls_back_for_unordered_refs() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let first = encode_json(\"{\\\"value\\\":\\\"first\\\"}\").expect(\"first json should encode\");\n        let second = encode_json(\"{\\\"value\\\":\\\"second\\\"}\").expect(\"second json should encode\");\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        writes.put(\n            JSON_PACK_NAMESPACE,\n            pack_key(\"commit-a\", 0),\n            encode_json_pack(&[&first, &second]).expect(\"pack should encode\"),\n        );\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"json pack should store\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let pack_ids = [0];\n        let mut store = storage.clone();\n        let values = load_json_bytes_many_in_scope(\n            &mut store,\n            &[second.json_ref, first.json_ref],\n            JsonReadScopeRef::CommitPacks {\n                commit_id: \"commit-a\",\n                pack_ids: &pack_ids,\n            },\n        )\n        .await\n        .expect(\"unordered refs should load through fallback\");\n        assert_eq!(\n            values,\n            vec![\n                Some(second.data.as_ref().to_vec()),\n                Some(first.data.as_ref().to_vec()),\n            ]\n        );\n    }\n\n    #[tokio::test]\n    async fn ordered_pack_probe_falls_back_to_direct_rows() 
{\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let packed = encode_json(\"{\\\"value\\\":\\\"packed\\\"}\").expect(\"packed json should encode\");\n        let direct = encode_json(\"{\\\"value\\\":\\\"direct\\\"}\").expect(\"direct json should encode\");\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        writes.put(\n            JSON_PACK_NAMESPACE,\n            pack_key(\"commit-a\", 0),\n            encode_json_pack(&[&packed]).expect(\"pack should encode\"),\n        );\n        writes.put(\n            JSON_NAMESPACE,\n            direct.json_ref.as_hash_bytes().to_vec(),\n            encode_stored_json_payload(&direct),\n        );\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"json rows should store\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let pack_ids = [0];\n        let mut store = storage.clone();\n        let values = load_json_bytes_many_in_scope(\n            &mut store,\n            &[direct.json_ref],\n            JsonReadScopeRef::CommitPacks {\n                commit_id: \"commit-a\",\n                pack_ids: &pack_ids,\n            },\n        )\n        .await\n        .expect(\"mismatched ordered pack probe should fall back to direct rows\");\n        assert_eq!(values, vec![Some(direct.data.as_ref().to_vec())]);\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/json_store/types.rs",
    "content": "use std::sync::Arc;\n\nuse crate::LixError;\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct NormalizedJson(Arc<str>);\n\nimpl NormalizedJson {\n    pub(crate) fn from_arc_unchecked(normalized: Arc<str>) -> Self {\n        Self(normalized)\n    }\n\n    pub(crate) fn from_value(value: &serde_json::Value, context: &str) -> Result<Self, LixError> {\n        let normalized: Arc<str> = serde_json::to_string(value)\n            .map_err(|error| {\n                LixError::new(\n                    LixError::CODE_UNKNOWN,\n                    format!(\"{context} failed to serialize as normalized JSON: {error}\"),\n                )\n            })?\n            .into();\n        Ok(Self(normalized))\n    }\n\n    pub(crate) fn as_str(&self) -> &str {\n        self.0.as_ref()\n    }\n\n    pub(crate) fn as_bytes(&self) -> &[u8] {\n        self.as_str().as_bytes()\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) struct JsonRef {\n    hash: [u8; 32],\n}\n\nimpl JsonRef {\n    pub(crate) fn from_hash(hash: blake3::Hash) -> Self {\n        Self {\n            hash: *hash.as_bytes(),\n        }\n    }\n\n    pub(crate) fn from_hash_bytes(hash: [u8; 32]) -> Self {\n        Self { hash }\n    }\n\n    pub(crate) fn for_content(bytes: &[u8]) -> Self {\n        Self::from_hash(blake3::hash(bytes))\n    }\n\n    pub(crate) fn as_hash_bytes(&self) -> &[u8] {\n        &self.hash\n    }\n\n    pub(crate) fn as_hash_array(&self) -> &[u8; 32] {\n        &self.hash\n    }\n\n    pub(crate) fn to_hex(&self) -> String {\n        self.hash.iter().map(|byte| format!(\"{byte:02x}\")).collect()\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) struct NormalizedJsonRef<'a> {\n    normalized: &'a str,\n    trusted_json_ref: Option<JsonRef>,\n}\n\nimpl<'a> NormalizedJsonRef<'a> {\n    pub(crate) fn new(normalized: &'a str) -> Self {\n        Self {\n            normalized,\n            
trusted_json_ref: None,\n        }\n    }\n\n    /// Uses a caller-owned invariant that `json_ref` was computed from\n    /// `normalized`. This avoids rehashing JSON already normalized by the\n    /// transaction staging boundary.\n    pub(crate) fn trusted_prehashed(normalized: &'a str, json_ref: JsonRef) -> Self {\n        Self {\n            normalized,\n            trusted_json_ref: Some(json_ref),\n        }\n    }\n\n    pub(crate) fn normalized(&self) -> &'a str {\n        self.normalized\n    }\n\n    pub(crate) fn trusted_json_ref(&self) -> Option<JsonRef> {\n        self.trusted_json_ref\n    }\n}\n\nimpl<'a> From<&'a NormalizedJson> for NormalizedJsonRef<'a> {\n    fn from(value: &'a NormalizedJson) -> Self {\n        Self::new(value.as_str())\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) enum JsonWritePlacementRef<'a> {\n    CommitPack { commit_id: &'a str, pack_id: u32 },\n    OutOfBand,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) enum JsonReadScopeRef<'a> {\n    OutOfBand,\n    CommitPacks {\n        commit_id: &'a str,\n        pack_ids: &'a [u32],\n    },\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) struct JsonLoadRequestRef<'a> {\n    pub(crate) refs: &'a [JsonRef],\n    pub(crate) scope: JsonReadScopeRef<'a>,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) struct JsonProjectionLoadRequestRef<'a> {\n    pub(crate) refs: &'a [JsonRef],\n    pub(crate) scope: JsonReadScopeRef<'a>,\n    pub(crate) paths: &'a [JsonProjectionPath],\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct JsonLoadBatch {\n    values: Vec<Option<Vec<u8>>>,\n}\n\nimpl JsonLoadBatch {\n    pub(crate) fn new(values: Vec<Option<Vec<u8>>>) -> Self {\n        Self { values }\n    }\n\n    pub(crate) fn values(&self) -> &[Option<Vec<u8>>] {\n        &self.values\n    }\n\n    pub(crate) fn into_values(self) -> Vec<Option<Vec<u8>>> {\n        self.values\n    }\n}\n\n#[derive(Debug, Clone, 
PartialEq)]\npub(crate) struct JsonValueBatch {\n    values: Vec<Option<serde_json::Value>>,\n}\n\nimpl JsonValueBatch {\n    pub(crate) fn new(values: Vec<Option<serde_json::Value>>) -> Self {\n        Self { values }\n    }\n\n    pub(crate) fn values(&self) -> &[Option<serde_json::Value>] {\n        &self.values\n    }\n\n    pub(crate) fn into_values(self) -> Vec<Option<serde_json::Value>> {\n        self.values\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct JsonProjectionPath(String);\n\nimpl JsonProjectionPath {\n    pub(crate) fn new(pointer: impl Into<String>) -> Self {\n        Self(pointer.into())\n    }\n\n    pub(crate) fn as_str(&self) -> &str {\n        &self.0\n    }\n}\n\n#[derive(Debug, Clone, PartialEq)]\npub(crate) struct JsonProjection {\n    values: Vec<Option<serde_json::Value>>,\n}\n\nimpl JsonProjection {\n    pub(crate) fn new(values: Vec<Option<serde_json::Value>>) -> Self {\n        Self { values }\n    }\n\n    pub(crate) fn values(&self) -> &[Option<serde_json::Value>] {\n        &self.values\n    }\n}\n\n#[derive(Debug, Clone, PartialEq)]\npub(crate) struct JsonProjectionBatch {\n    values: Vec<Option<JsonProjection>>,\n}\n\nimpl JsonProjectionBatch {\n    pub(crate) fn new(values: Vec<Option<JsonProjection>>) -> Self {\n        Self { values }\n    }\n\n    pub(crate) fn values(&self) -> &[Option<JsonProjection>] {\n        &self.values\n    }\n\n    pub(crate) fn into_values(self) -> Vec<Option<JsonProjection>> {\n        self.values\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/lib.rs",
    "content": "mod backend;\nmod binary_cas;\npub(crate) mod catalog;\npub(crate) mod cel;\npub(crate) mod commit_graph;\n#[allow(dead_code, unused_imports)]\npub(crate) mod commit_store;\nmod common;\npub(crate) mod domain;\npub mod engine;\npub(crate) mod entity_identity;\npub(crate) mod functions;\npub(crate) mod init;\n#[allow(dead_code)]\npub(crate) mod json_store;\npub(crate) mod live_state;\nmod schema;\npub mod session;\npub(crate) mod sql2;\n#[allow(dead_code, unused_imports)]\npub(crate) mod storage;\n#[cfg(feature = \"storage-benches\")]\npub mod storage_bench;\n#[cfg_attr(feature = \"storage-benches\", allow(dead_code))]\n#[cfg(any(test, feature = \"storage-benches\"))]\npub(crate) mod test_support;\npub(crate) mod tracked_state;\npub mod transaction;\npub(crate) mod untracked_state;\npub(crate) mod version;\npub mod wasm;\n\npub use schema::{\n    lix_schema_definition, lix_schema_definition_json, validate_lix_schema,\n    validate_lix_schema_definition,\n};\n\npub use backend::{\n    Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetGroup,\n    BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest,\n    BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch,\n    BackendKvWriteGroup, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction,\n    BytePage, BytePageBuilder,\n};\npub use common::LixError;\npub(crate) use common::{parse_row_metadata, parse_row_metadata_value, serialize_row_metadata};\npub use common::{CanonicalPluginKey, CanonicalSchemaKey, EntityId, FileId, VersionId};\npub use common::{LixNotice, NullableKeyFilter, SqlQueryResult, Value, WriteReceipt};\npub use common::{WireQueryResult, WireValue};\npub use engine::Engine;\npub use init::InitReceipt;\n#[cfg(feature = \"storage-benches\")]\npub use session::optimization9_sql2_bench;\npub use session::{\n    CreateVersionOptions, CreateVersionReceipt, MergeChangeStats, MergeConflict,\n    
MergeConflictChangeKind, MergeConflictKind, MergeConflictSide, MergeVersionOptions,\n    MergeVersionOutcome, MergeVersionPreview, MergeVersionPreviewOptions, MergeVersionReceipt,\n    SessionContext, SwitchVersionOptions, SwitchVersionReceipt,\n};\npub use session::{ExecuteResult, Row, RowRef, TryFromValue};\n\npub(crate) const GLOBAL_VERSION_ID: &str = \"global\";\n"
  },
  {
    "path": "packages/engine/src/live_state/context.rs",
    "content": "use async_trait::async_trait;\nuse tokio::sync::Mutex;\n\nuse crate::commit_graph::CommitGraphContext;\nuse crate::entity_identity::EntityIdentity;\nuse crate::live_state::visibility;\nuse crate::live_state::{\n    LiveStateReader, LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow,\n};\nuse crate::storage::StorageReader;\nuse crate::tracked_state::{\n    MaterializedTrackedStateRow, TrackedStateContext, TrackedStateFilter, TrackedStateProjection,\n    TrackedStateRowRequest, TrackedStateScanRequest,\n};\nuse crate::untracked_state::{\n    UntrackedStateContext, UntrackedStateRowRequest, UntrackedStateScanRequest,\n};\nuse crate::version::VERSION_REF_SCHEMA_KEY;\nuse crate::LixError;\nuse crate::NullableKeyFilter;\nuse crate::GLOBAL_VERSION_ID;\n\nconst COMMIT_SCHEMA_KEY: &str = \"lix_commit\";\nconst COMMIT_EDGE_SCHEMA_KEY: &str = \"lix_commit_edge\";\n\n/// Serving facade for visible live-state reads.\n///\n/// Live state composes the rebuildable tracked projection with the durable\n/// untracked local overlay. 
Lower stores own persistence; this facade owns the\n/// visibility rule.\npub(crate) struct LiveStateContext {\n    tracked_state: TrackedStateContext,\n    untracked_state: UntrackedStateContext,\n    commit_graph: CommitGraphContext,\n}\n\nimpl LiveStateContext {\n    pub(crate) fn new(\n        tracked_state: TrackedStateContext,\n        untracked_state: UntrackedStateContext,\n        commit_graph: CommitGraphContext,\n    ) -> Self {\n        Self {\n            tracked_state,\n            untracked_state,\n            commit_graph,\n        }\n    }\n\n    /// Creates a visible live-state reader over a caller-provided KV store.\n    pub(crate) fn reader<S>(&self, store: S) -> LiveStateStoreReader<S>\n    where\n        S: StorageReader,\n    {\n        LiveStateStoreReader {\n            store: Mutex::new(store),\n            tracked_state: self.tracked_state.clone(),\n            untracked_state: self.untracked_state,\n            commit_graph: self.commit_graph.clone(),\n        }\n    }\n}\n\n/// Visible live-state reader backed by a caller-provided KV store.\npub(crate) struct LiveStateStoreReader<S> {\n    store: Mutex<S>,\n    tracked_state: TrackedStateContext,\n    untracked_state: UntrackedStateContext,\n    commit_graph: CommitGraphContext,\n}\n\nimpl<S> LiveStateStoreReader<S>\nwhere\n    S: StorageReader,\n{\n    pub(crate) async fn scan_rows(\n        &self,\n        request: &LiveStateScanRequest,\n    ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n        let mut store = self.store.lock().await;\n        let scope = scan_scope(&mut *store, &self.untracked_state, request).await?;\n        let derived_rows =\n            scan_commit_derived_rows(&mut *store, &self.commit_graph, request, &scope).await?;\n        let mut tracked_rows = Vec::new();\n        if request.filter.untracked != Some(true) && !is_commit_derived_only_request(request) {\n            for version_id in &scope.storage_version_ids {\n                let Some(commit_id) 
=\n                    load_version_ref_commit_id(&mut *store, &self.untracked_state, version_id)\n                        .await?\n                else {\n                    continue;\n                };\n                let tracked_request = tracked_scan_request_from_live(request);\n                let source = tracked_source_from_version_id(version_id);\n                let store: &mut dyn StorageReader = &mut *store;\n                tracked_rows.extend(\n                    self.tracked_state\n                        .reader(store)\n                        .scan_rows_at_commit(&commit_id, &tracked_request)\n                        .await?\n                        .into_iter()\n                        .map(|row| project_tracked_row(row, version_id, source)),\n                );\n            }\n        }\n\n        let untracked_rows = if request.filter.untracked != Some(false) {\n            let store: &mut dyn StorageReader = &mut *store;\n            self.untracked_state\n                .reader(store)\n                .scan_rows(&untracked_scan_request_from_live(\n                    request,\n                    &scope.storage_version_ids,\n                ))\n                .await?\n                .into_iter()\n                .map(MaterializedLiveStateRow::from)\n                .collect::<Vec<_>>()\n        } else {\n            Vec::new()\n        };\n\n        let mut rows = if request.filter.untracked.is_some() {\n            tracked_rows\n                .into_iter()\n                .chain(untracked_rows)\n                .chain(derived_rows)\n                .collect()\n        } else {\n            crate::live_state::overlay::overlay_untracked_rows(tracked_rows, untracked_rows)\n                .into_iter()\n                .chain(derived_rows)\n                .collect()\n        };\n        rows = visibility::resolve_scan_rows(\n            rows,\n            &scope.projection_version_ids,\n            request.filter.include_tombstones,\n     
   );\n        if let Some(limit) = request.limit {\n            rows.truncate(limit);\n        }\n        Ok(rows)\n    }\n\n    pub(crate) async fn load_row(\n        &self,\n        request: &LiveStateRowRequest,\n    ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n        let mut store = self.store.lock().await;\n        if !version_ref_exists(&mut *store, &self.untracked_state, &request.version_id).await? {\n            return Ok(None);\n        }\n        if is_commit_derived_schema(&request.schema_key)\n            && request.file_id == NullableKeyFilter::Null\n        {\n            let scope = LiveStateScanScope {\n                storage_version_ids: vec![request.version_id.clone()],\n                projection_version_ids: vec![request.version_id.clone()],\n            };\n            let rows = scan_commit_derived_rows(\n                &mut *store,\n                &self.commit_graph,\n                &LiveStateScanRequest {\n                    filter: crate::live_state::LiveStateFilter {\n                        schema_keys: vec![request.schema_key.clone()],\n                        entity_ids: vec![request.entity_id.clone()],\n                        version_ids: vec![request.version_id.clone()],\n                        file_ids: vec![NullableKeyFilter::Null],\n                        untracked: Some(false),\n                        include_tombstones: false,\n                        ..Default::default()\n                    },\n                    limit: Some(1),\n                    ..Default::default()\n                },\n                &scope,\n            )\n            .await?;\n            if let Some(row) = rows.into_iter().next() {\n                return Ok(Some(row));\n            }\n        }\n        for candidate in load_row_candidates(request) {\n            match candidate.source {\n                LiveStateLookupSource::Untracked => {\n                    let store: &mut dyn StorageReader = &mut *store;\n               
     if let Some(row) = self\n                        .untracked_state\n                        .reader(store)\n                        .load_row(&untracked_row_request_from_live(\n                            request,\n                            &candidate.version_id,\n                        ))\n                        .await?\n                    {\n                        return Ok(Some(visibility::project_loaded_row(\n                            MaterializedLiveStateRow::from(row),\n                            &request.version_id,\n                            &candidate.version_id,\n                        )));\n                    }\n                }\n                LiveStateLookupSource::Tracked => {\n                    let Some(commit_id) = load_version_ref_commit_id(\n                        &mut *store,\n                        &self.untracked_state,\n                        &candidate.version_id,\n                    )\n                    .await?\n                    else {\n                        continue;\n                    };\n                    let store: &mut dyn StorageReader = &mut *store;\n                    let tracked_request = tracked_row_request_from_live(request);\n                    let mut rows = self\n                        .tracked_state\n                        .reader(store)\n                        .load_rows_at_commit(&commit_id, &[tracked_request])\n                        .await?;\n                    if let Some(row) = rows.pop().flatten() {\n                        return Ok(Some(project_tracked_row(\n                            row,\n                            &request.version_id,\n                            tracked_source_from_version_id(&candidate.version_id),\n                        )));\n                    }\n                }\n            }\n        }\n        Ok(None)\n    }\n}\n\n#[async_trait]\nimpl<S> LiveStateReader for LiveStateStoreReader<S>\nwhere\n    S: StorageReader + Sync,\n{\n    async fn 
scan_rows(\n        &self,\n        request: &LiveStateScanRequest,\n    ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n        LiveStateStoreReader::scan_rows(self, request).await\n    }\n\n    async fn load_row(\n        &self,\n        request: &LiveStateRowRequest,\n    ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n        LiveStateStoreReader::load_row(self, request).await\n    }\n}\n\nasync fn scan_commit_derived_rows(\n    store: &mut dyn StorageReader,\n    commit_graph: &CommitGraphContext,\n    request: &LiveStateScanRequest,\n    scope: &LiveStateScanScope,\n) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n    if request.filter.untracked == Some(true) || !request_may_include_commit_derived(request) {\n        return Ok(Vec::new());\n    }\n    if !file_filter_allows_null(&request.filter.file_ids) {\n        return Ok(Vec::new());\n    }\n\n    let version_ids = if scope.projection_version_ids.is_empty() {\n        vec![GLOBAL_VERSION_ID.to_string()]\n    } else {\n        scope.projection_version_ids.clone()\n    };\n    let mut graph = commit_graph.reader(store);\n    let commits = graph.all_commits().await?;\n    let include_commit = schema_filter_allows(&request.filter.schema_keys, COMMIT_SCHEMA_KEY);\n    let include_commit_edge =\n        schema_filter_allows(&request.filter.schema_keys, COMMIT_EDGE_SCHEMA_KEY);\n\n    let mut rows = Vec::new();\n    for version_id in &version_ids {\n        if include_commit {\n            for commit in &commits {\n                rows.push(commit_row(commit, version_id)?);\n            }\n        }\n        if include_commit_edge {\n            for edge in graph.commit_edges(&commits) {\n                rows.push(commit_edge_row(&edge, version_id)?);\n            }\n        }\n    }\n\n    rows.retain(|row| {\n        (request.filter.entity_ids.is_empty() || request.filter.entity_ids.contains(&row.entity_id))\n            && (request.filter.version_ids.is_empty()\n                || 
request.filter.version_ids.contains(&row.version_id))\n    });\n    Ok(rows)\n}\n\nfn request_may_include_commit_derived(request: &LiveStateScanRequest) -> bool {\n    request.filter.schema_keys.is_empty()\n        || request\n            .filter\n            .schema_keys\n            .iter()\n            .any(|schema_key| is_commit_derived_schema(schema_key))\n}\n\nfn is_commit_derived_only_request(request: &LiveStateScanRequest) -> bool {\n    !request.filter.schema_keys.is_empty()\n        && request\n            .filter\n            .schema_keys\n            .iter()\n            .all(|schema_key| is_commit_derived_schema(schema_key))\n}\n\nfn is_commit_derived_schema(schema_key: &str) -> bool {\n    matches!(schema_key, COMMIT_SCHEMA_KEY | COMMIT_EDGE_SCHEMA_KEY)\n}\n\nfn schema_filter_allows(schema_keys: &[String], schema_key: &str) -> bool {\n    schema_keys.is_empty() || schema_keys.iter().any(|candidate| candidate == schema_key)\n}\n\nfn file_filter_allows_null(file_ids: &[NullableKeyFilter<String>]) -> bool {\n    file_ids.is_empty()\n        || file_ids\n            .iter()\n            .any(|file_id| matches!(file_id, NullableKeyFilter::Any | NullableKeyFilter::Null))\n}\n\nfn commit_row(\n    commit: &crate::commit_graph::CommitGraphCommit,\n    version_id: &str,\n) -> Result<MaterializedLiveStateRow, LixError> {\n    let snapshot_content = serde_json::to_string(&serde_json::json!({\n        \"id\": commit.commit_id,\n    }))\n    .map_err(|error| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\"failed to encode derived lix_commit snapshot: {error}\"),\n        )\n    })?;\n    Ok(MaterializedLiveStateRow {\n        entity_id: EntityIdentity::single(commit.commit_id.clone()),\n        schema_key: COMMIT_SCHEMA_KEY.to_string(),\n        file_id: None,\n        snapshot_content: Some(snapshot_content),\n        metadata: None,\n        deleted: false,\n        created_at: commit.change.created_at.clone(),\n      
  updated_at: commit.change.created_at.clone(),\n        global: true,\n        change_id: Some(commit.change.id.clone()),\n        commit_id: Some(commit.commit_id.clone()),\n        untracked: false,\n        version_id: version_id.to_string(),\n    })\n}\n\nfn commit_edge_row(\n    edge: &crate::commit_graph::CommitGraphEdge,\n    version_id: &str,\n) -> Result<MaterializedLiveStateRow, LixError> {\n    let snapshot_content = serde_json::to_string(&serde_json::json!({\n        \"parent_id\": edge.parent_commit_id,\n        \"child_id\": edge.child_commit_id,\n        \"parent_order\": edge.parent_order,\n    }))\n    .map_err(|error| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\"failed to encode derived lix_commit_edge snapshot: {error}\"),\n        )\n    })?;\n    Ok(MaterializedLiveStateRow {\n        entity_id: EntityIdentity {\n            parts: vec![edge.parent_commit_id.clone(), edge.child_commit_id.clone()],\n        },\n        schema_key: COMMIT_EDGE_SCHEMA_KEY.to_string(),\n        file_id: None,\n        snapshot_content: Some(snapshot_content),\n        metadata: None,\n        deleted: false,\n        created_at: \"1970-01-01T00:00:00.000Z\".to_string(),\n        updated_at: \"1970-01-01T00:00:00.000Z\".to_string(),\n        global: true,\n        change_id: None,\n        commit_id: Some(edge.child_commit_id.clone()),\n        untracked: false,\n        version_id: version_id.to_string(),\n    })\n}\n\nfn tracked_scan_request_from_live(request: &LiveStateScanRequest) -> TrackedStateScanRequest {\n    TrackedStateScanRequest {\n        filter: TrackedStateFilter {\n            schema_keys: request.filter.schema_keys.clone(),\n            entity_ids: request.filter.entity_ids.clone(),\n            file_ids: request.filter.file_ids.clone(),\n            // Scan tombstones internally so version-local tombstones can hide\n            // global fallback rows before the serving facade filters them.\n        
    include_tombstones: true,\n        },\n        projection: TrackedStateProjection {\n            columns: request.projection.columns.clone(),\n        },\n        limit: None,\n    }\n}\n\nfn untracked_scan_request_from_live(\n    request: &LiveStateScanRequest,\n    version_ids: &[String],\n) -> UntrackedStateScanRequest {\n    let mut filter: crate::untracked_state::UntrackedStateFilter = request.filter.clone().into();\n    filter.version_ids = version_ids.to_vec();\n    UntrackedStateScanRequest {\n        filter,\n        projection: crate::untracked_state::UntrackedStateProjection {\n            columns: request.projection.columns.clone(),\n        },\n        limit: None,\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct LiveStateScanScope {\n    storage_version_ids: Vec<String>,\n    projection_version_ids: Vec<String>,\n}\n\nasync fn scan_scope(\n    store: &mut dyn StorageReader,\n    untracked_state: &UntrackedStateContext,\n    request: &LiveStateScanRequest,\n) -> Result<LiveStateScanScope, LixError> {\n    if request.filter.version_ids.is_empty() {\n        return Ok(LiveStateScanScope {\n            storage_version_ids: all_version_ref_ids(store, untracked_state).await?,\n            projection_version_ids: Vec::new(),\n        });\n    }\n\n    let mut projection_version_ids = Vec::new();\n    for version_id in &request.filter.version_ids {\n        if version_ref_exists(store, untracked_state, version_id).await? 
{\n            projection_version_ids.push(version_id.clone());\n        }\n    }\n\n    let storage_version_ids = visibility::expanded_version_ids(&projection_version_ids);\n    Ok(LiveStateScanScope {\n        storage_version_ids,\n        projection_version_ids,\n    })\n}\n\nasync fn all_version_ref_ids(\n    store: &mut dyn StorageReader,\n    untracked_state: &UntrackedStateContext,\n) -> Result<Vec<String>, LixError> {\n    let rows = untracked_state\n        .reader(store)\n        .scan_rows(&UntrackedStateScanRequest {\n            filter: crate::untracked_state::UntrackedStateFilter {\n                schema_keys: vec![VERSION_REF_SCHEMA_KEY.to_string()],\n                version_ids: vec![GLOBAL_VERSION_ID.to_string()],\n                ..Default::default()\n            },\n            ..Default::default()\n        })\n        .await?;\n    rows.into_iter()\n        .map(|row| row.entity_id.as_single_string_owned())\n        .collect()\n}\n\nasync fn load_version_ref_commit_id(\n    store: &mut dyn StorageReader,\n    untracked_state: &UntrackedStateContext,\n    version_id: &str,\n) -> Result<Option<String>, LixError> {\n    let Some(row) = untracked_state\n        .reader(store)\n        .load_row(&UntrackedStateRowRequest {\n            schema_key: VERSION_REF_SCHEMA_KEY.to_string(),\n            version_id: GLOBAL_VERSION_ID.to_string(),\n            entity_id: crate::entity_identity::EntityIdentity::single(version_id),\n            file_id: crate::NullableKeyFilter::Null,\n        })\n        .await?\n    else {\n        return Ok(None);\n    };\n    let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n        return Ok(None);\n    };\n    let snapshot =\n        serde_json::from_str::<serde_json::Value>(snapshot_content).map_err(|error| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"live_state version-ref snapshot parse failed: {error}\"),\n            )\n        })?;\n    
Ok(snapshot\n        .get(\"commit_id\")\n        .and_then(serde_json::Value::as_str)\n        .map(str::to_string))\n}\n\nasync fn version_ref_exists(\n    store: &mut dyn StorageReader,\n    untracked_state: &UntrackedStateContext,\n    version_id: &str,\n) -> Result<bool, LixError> {\n    Ok(\n        load_version_ref_commit_id(store, untracked_state, version_id)\n            .await?\n            .is_some(),\n    )\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\nenum TrackedRowSource {\n    Global,\n    Version,\n}\n\nfn tracked_source_from_version_id(version_id: &str) -> TrackedRowSource {\n    if version_id == GLOBAL_VERSION_ID {\n        TrackedRowSource::Global\n    } else {\n        TrackedRowSource::Version\n    }\n}\n\nfn project_tracked_row(\n    row: MaterializedTrackedStateRow,\n    view_version_id: &str,\n    source: TrackedRowSource,\n) -> MaterializedLiveStateRow {\n    MaterializedLiveStateRow {\n        entity_id: row.entity_id,\n        schema_key: row.schema_key,\n        file_id: row.file_id,\n        snapshot_content: row.snapshot_content,\n        metadata: row.metadata,\n        deleted: row.deleted,\n        created_at: row.created_at,\n        updated_at: row.updated_at,\n        global: source == TrackedRowSource::Global,\n        change_id: Some(row.change_id),\n        commit_id: Some(row.commit_id),\n        untracked: false,\n        version_id: view_version_id.to_string(),\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\nenum LiveStateLookupSource {\n    Untracked,\n    Tracked,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct LiveStateLookupCandidate {\n    source: LiveStateLookupSource,\n    version_id: String,\n}\n\nfn load_row_candidates(request: &LiveStateRowRequest) -> Vec<LiveStateLookupCandidate> {\n    let mut candidates = vec![\n        LiveStateLookupCandidate {\n            source: LiveStateLookupSource::Untracked,\n            version_id: request.version_id.clone(),\n        },\n        
LiveStateLookupCandidate {\n            source: LiveStateLookupSource::Tracked,\n            version_id: request.version_id.clone(),\n        },\n    ];\n\n    if request.version_id != GLOBAL_VERSION_ID {\n        candidates.extend([\n            LiveStateLookupCandidate {\n                source: LiveStateLookupSource::Untracked,\n                version_id: GLOBAL_VERSION_ID.to_string(),\n            },\n            LiveStateLookupCandidate {\n                source: LiveStateLookupSource::Tracked,\n                version_id: GLOBAL_VERSION_ID.to_string(),\n            },\n        ]);\n    }\n\n    candidates\n}\n\nfn untracked_row_request_from_live(\n    request: &LiveStateRowRequest,\n    version_id: &str,\n) -> crate::untracked_state::UntrackedStateRowRequest {\n    crate::untracked_state::UntrackedStateRowRequest {\n        schema_key: request.schema_key.clone(),\n        version_id: version_id.to_string(),\n        entity_id: request.entity_id.clone(),\n        file_id: request.file_id.clone(),\n    }\n}\n\nfn tracked_row_request_from_live(request: &LiveStateRowRequest) -> TrackedStateRowRequest {\n    TrackedStateRowRequest {\n        schema_key: request.schema_key.clone(),\n        entity_id: request.entity_id.clone(),\n        file_id: request.file_id.clone(),\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use super::*;\n    use crate::backend::{testing::UnitTestBackend, Backend};\n    use crate::commit_store::{CommitDraftRef, CommitStoreContext};\n    use crate::entity_identity::EntityIdentity;\n    use crate::json_store::{\n        JsonStoreContext, JsonWritePlacementRef, NormalizedJson, NormalizedJsonRef,\n    };\n    use crate::live_state::LiveStateFilter;\n    use crate::storage::{StorageContext, StorageWriteSet, StorageWriteTransaction};\n    use crate::tracked_state::{TrackedStateDeltaRef, TrackedStateScanRequest};\n    use crate::untracked_state::{MaterializedUntrackedStateRow, UntrackedStateContext};\n    use 
crate::NullableKeyFilter;\n    use serde_json::json;\n\n    const COMMIT_SCHEMA_KEY: &str = \"lix_commit\";\n\n    fn live_state_context() -> LiveStateContext {\n        LiveStateContext::new(\n            crate::tracked_state::TrackedStateContext::new(),\n            crate::untracked_state::UntrackedStateContext::new(),\n            crate::commit_graph::CommitGraphContext::new(),\n        )\n    }\n\n    async fn write_untracked_rows_to_store(\n        store: &mut (impl StorageWriteTransaction + ?Sized),\n        rows: &[MaterializedUntrackedStateRow],\n    ) {\n        let mut writes = StorageWriteSet::new();\n        let canonical_rows = rows\n            .iter()\n            .map(|row| crate::test_support::untracked_state_row_from_materialized(&mut writes, row))\n            .collect::<Result<Vec<_>, _>>()\n            .expect(\"untracked rows should canonicalize\");\n        UntrackedStateContext::new()\n            .writer(&mut writes)\n            .stage_rows(canonical_rows.iter().map(|row| row.as_ref()))\n            .expect(\"untracked rows should write\");\n        writes\n            .apply(store)\n            .await\n            .expect(\"untracked rows should apply\");\n    }\n\n    async fn write_empty_commits_to_store(\n        store: &mut (impl StorageWriteTransaction + ?Sized),\n        commit_ids: &[&str],\n    ) {\n        let mut writes = StorageWriteSet::new();\n        for commit_id in commit_ids {\n            let commit_change_id = format!(\"{commit_id}:commit\");\n            CommitStoreContext::new()\n                .writer(&mut *store, &mut writes)\n                .stage_commit_draft(\n                    CommitDraftRef {\n                        id: commit_id,\n                        change_id: &commit_change_id,\n                        parent_ids: &[],\n                        author_account_ids: &[],\n                        created_at: \"1970-01-01T00:00:00.000Z\",\n                    },\n                    Vec::new(),\n         
           Vec::new(),\n                )\n                .await\n                .expect(\"empty commit should stage\");\n        }\n        writes\n            .apply(store)\n            .await\n            .expect(\"empty commits should apply\");\n    }\n\n    async fn stage_materialized_live_rows(\n        store: &mut (impl StorageReader + ?Sized),\n        writes: &mut StorageWriteSet,\n        _json_writer: &mut crate::json_store::JsonStoreWriter,\n        rows: &[MaterializedLiveStateRow],\n    ) -> Result<(), LixError> {\n        let mut untracked_rows = Vec::new();\n        let mut tracked_rows_by_commit = std::collections::BTreeMap::<\n            String,\n            Vec<(crate::commit_store::Change, String, String)>,\n        >::new();\n        let mut parent_by_commit = std::collections::BTreeMap::<String, Option<String>>::new();\n\n        for row in rows {\n            if row.untracked {\n                let materialized = crate::untracked_state::MaterializedUntrackedStateRow::from(row);\n                let canonical = crate::test_support::untracked_state_row_from_materialized(\n                    writes,\n                    &materialized,\n                )?;\n                untracked_rows.push(canonical);\n                continue;\n            }\n            let materialized = MaterializedTrackedStateRow::try_from(row)?;\n            let commit_id = row.commit_id.clone().ok_or_else(|| {\n                LixError::new(\"LIX_ERROR_UNKNOWN\", \"test tracked row missing commit_id\")\n            })?;\n            if row.schema_key == COMMIT_SCHEMA_KEY {\n                parent_by_commit.insert(\n                    commit_id.clone(),\n                    parent_commit_id_from_test_commit_row(row)?,\n                );\n            }\n            if row.schema_key != COMMIT_SCHEMA_KEY {\n                let change = crate::test_support::tracked_change_from_materialized(&materialized)?;\n                stage_tracked_materialized_json(writes, 
&commit_id, &materialized)?;\n                tracked_rows_by_commit.entry(commit_id).or_default().push((\n                    change,\n                    materialized.created_at,\n                    materialized.updated_at,\n                ));\n            }\n        }\n\n        UntrackedStateContext::new()\n            .writer(writes)\n            .stage_rows(untracked_rows.iter().map(|row| row.as_ref()))?;\n        for (commit_id, rows) in tracked_rows_by_commit {\n            let parent_commit_id = parent_by_commit.remove(&commit_id).flatten();\n            let parent_ids = parent_commit_id\n                .as_ref()\n                .map(|parent| vec![parent.clone()])\n                .unwrap_or_default();\n            let commit_change_id = format!(\"{commit_id}:commit\");\n            let commit = CommitDraftRef {\n                id: &commit_id,\n                change_id: &commit_change_id,\n                parent_ids: &parent_ids,\n                author_account_ids: &[],\n                created_at: rows\n                    .first()\n                    .map(|(change, _, _)| change.created_at.as_str())\n                    .unwrap_or(\"1970-01-01T00:00:00.000Z\"),\n            };\n            let staged = CommitStoreContext::new()\n                .writer(&mut *store, writes)\n                .stage_tracked_commit_draft(\n                    commit,\n                    rows.iter().map(|(change, _, _)| change.as_ref()).collect(),\n                    Vec::new(),\n                )\n                .await?;\n            let deltas = rows\n                .iter()\n                .zip(&staged.authored_locators)\n                .map(\n                    |((change, created_at, updated_at), locator)| TrackedStateDeltaRef {\n                        change: change.as_ref(),\n                        locator: locator.as_ref(),\n                        created_at,\n                        updated_at,\n                    },\n                )\n              
  .collect::<Vec<_>>();\n            TrackedStateContext::new()\n                .writer(&mut *store, writes)\n                .stage_delta(&commit_id, parent_commit_id.as_deref(), &deltas)\n                .await?;\n        }\n        Ok(())\n    }\n\n    fn stage_tracked_materialized_json(\n        writes: &mut StorageWriteSet,\n        commit_id: &str,\n        row: &MaterializedTrackedStateRow,\n    ) -> Result<(), LixError> {\n        let mut payloads = Vec::new();\n        if let Some(snapshot) = row.snapshot_content.as_deref() {\n            payloads.push(NormalizedJson::from_arc_unchecked(Arc::from(snapshot)));\n        }\n        if let Some(metadata) = row.metadata.as_ref() {\n            payloads.push(NormalizedJson::from_arc_unchecked(Arc::from(\n                crate::serialize_row_metadata(metadata),\n            )));\n        }\n        JsonStoreContext::new().writer().stage_batch(\n            writes,\n            JsonWritePlacementRef::CommitPack {\n                commit_id,\n                pack_id: 0,\n            },\n            payloads\n                .iter()\n                .map(|payload| NormalizedJsonRef::from(payload)),\n        )?;\n        Ok(())\n    }\n\n    fn parent_commit_id_from_test_commit_row(\n        row: &MaterializedLiveStateRow,\n    ) -> Result<Option<String>, LixError> {\n        let Some(metadata) = row.metadata.as_deref() else {\n            return Ok(None);\n        };\n        let metadata = serde_json::from_str::<serde_json::Value>(metadata).map_err(|error| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"test commit row has invalid metadata: {error}\"),\n            )\n        })?;\n        Ok(metadata\n            .get(\"test_parents\")\n            .and_then(serde_json::Value::as_array)\n            .and_then(|parents| parents.first())\n            .and_then(serde_json::Value::as_str)\n            .map(str::to_string))\n    }\n\n    #[tokio::test]\n    async fn 
live_state_overlays_untracked_rows() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = live_state_context();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &[tracked_row_with_commit(\n                        \"tracked-value\",\n                        Some(\"change-tracked\"),\n                        \"commit-tracked\",\n                    )],\n                )\n                .await\n                .expect(\"tracked row should stage\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"tracked row should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            &[\n                version_ref_row(\"global\", \"commit-tracked\"),\n                untracked_row(\"untracked-value\"),\n            ],\n        )\n        .await;\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let rows = scan_selected_tab_at(&live_state, storage.clone(), \"global\", false)\n            .await\n            .expect(\"scan should succeed\");\n        assert_eq!(rows.len(), 1);\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"untracked-value\\\"}\")\n        );\n        assert!(rows[0].untracked);\n        assert_eq!(rows[0].change_id, None);\n\n        let loaded = live_state\n            
.reader(storage.clone())\n            .load_row(&LiveStateRowRequest {\n                schema_key: \"lix_key_value\".to_string(),\n                version_id: \"global\".to_string(),\n                entity_id: crate::entity_identity::EntityIdentity::single(\"selected-tab\"),\n                file_id: NullableKeyFilter::Null,\n            })\n            .await\n            .expect(\"load should succeed\")\n            .expect(\"overlay row should be visible\");\n        assert!(loaded.untracked);\n        assert_eq!(\n            loaded.snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"untracked-value\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn tracked_row_is_visible_without_untracked_overlay() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = live_state_context();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &[tracked_row_with_commit(\n                        \"tracked-value\",\n                        Some(\"change-tracked\"),\n                        \"commit-tracked\",\n                    )],\n                )\n                .await\n                .expect(\"tracked row should stage\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"tracked row should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            
&[version_ref_row(\"global\", \"commit-tracked\")],\n        )\n        .await;\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let loaded = load_selected_tab(&live_state, storage.clone())\n            .await\n            .expect(\"load should succeed\")\n            .expect(\"tracked row should be visible\");\n        assert!(!loaded.untracked);\n        assert_eq!(loaded.change_id.as_deref(), Some(\"change-tracked\"));\n        assert_eq!(\n            loaded.snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"tracked-value\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn deleting_untracked_row_reveals_tracked_row() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = live_state_context();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &[tracked_row_with_commit(\n                        \"tracked-value\",\n                        Some(\"change-tracked\"),\n                        \"commit-tracked\",\n                    )],\n                )\n                .await\n                .expect(\"tracked row should stage\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"tracked row should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            &[\n                version_ref_row(\"global\", 
\"commit-tracked\"),\n                untracked_row(\"untracked-value\"),\n            ],\n        )\n        .await;\n        {\n            let mut writes = StorageWriteSet::new();\n            let identity = crate::untracked_state::UntrackedStateIdentity {\n                version_id: \"global\".to_string(),\n                schema_key: \"lix_key_value\".to_string(),\n                entity_id: EntityIdentity::single(\"selected-tab\"),\n                file_id: None,\n            };\n            UntrackedStateContext::new()\n                .writer(&mut writes)\n                .stage_delete_rows(std::iter::once(identity.as_ref()));\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"untracked row should delete\");\n        }\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let loaded = load_selected_tab(&live_state, storage.clone())\n            .await\n            .expect(\"load should succeed\")\n            .expect(\"tracked row should be visible again\");\n        assert!(!loaded.untracked);\n        assert_eq!(loaded.change_id.as_deref(), Some(\"change-tracked\"));\n        assert_eq!(\n            loaded.snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"tracked-value\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn load_row_falls_back_to_global_tracked_row_for_requested_version() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = live_state_context();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let rows = [tracked_row_with_commit(\n                \"global-tracked\",\n                Some(\"change-global\"),\n                \"commit-global\",\n            )];\n   
         let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &rows,\n                )\n                .await\n                .expect(\"tracked row should stage\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"tracked row should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            &[\n                version_ref_row(\"global\", \"commit-global\"),\n                version_ref_row(\"version-a\", \"commit-version-a\"),\n            ],\n        )\n        .await;\n        write_empty_commits_to_store(transaction.as_mut(), &[\"commit-version-a\"]).await;\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let loaded = load_selected_tab_at(&live_state, storage.clone(), \"version-a\")\n            .await\n            .expect(\"load should succeed\")\n            .expect(\"global row should be visible for requested version\");\n\n        assert_eq!(loaded.version_id, \"version-a\");\n        assert!(loaded.global);\n        assert!(!loaded.untracked);\n        assert_eq!(\n            loaded.snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"global-tracked\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn main_sees_global_row_by_reading_global_root_separately() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let tracked_state = TrackedStateContext::new();\n        let live_state = LiveStateContext::new(\n            tracked_state.clone(),\n            UntrackedStateContext::new(),\n            
crate::commit_graph::CommitGraphContext::new(),\n        );\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let rows = [tracked_row_with_commit(\n                \"global-tracked\",\n                Some(\"change-global\"),\n                \"commit-global\",\n            )];\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &rows,\n                )\n                .await\n                .expect(\"global tracked row should stage\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"global tracked row should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            &[\n                version_ref_row(\"global\", \"commit-global\"),\n                version_ref_row(\"main\", \"commit-main\"),\n            ],\n        )\n        .await;\n        write_empty_commits_to_store(transaction.as_mut(), &[\"commit-main\"]).await;\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let loaded = load_selected_tab_at(&live_state, storage.clone(), \"main\")\n            .await\n            .expect(\"load should succeed\")\n            .expect(\"global row should be projected into main\");\n        assert_eq!(loaded.version_id, \"main\");\n        assert!(loaded.global);\n        assert_eq!(\n            loaded.snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"global-tracked\\\"}\")\n        );\n\n        let main_root_rows =\n            scan_tracked_root(&tracked_state, storage.clone(), 
\"commit-main\").await;\n        assert_eq!(\n            main_root_rows.len(),\n            0,\n            \"global fallback must come from the global root, not a copied main root row\"\n        );\n    }\n\n    #[tokio::test]\n    async fn load_row_prefers_requested_version_over_global() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = live_state_context();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let rows = [\n                tracked_row_with_commit(\"global-tracked\", Some(\"change-global\"), \"commit-global\"),\n                tracked_row_at_with_commit(\n                    \"version-a\",\n                    \"version-tracked\",\n                    Some(\"change-version\"),\n                    \"commit-version\",\n                ),\n            ];\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &rows,\n                )\n                .await\n                .expect(\"tracked rows should stage\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"tracked rows should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            &[\n                version_ref_row(\"global\", \"commit-global\"),\n                version_ref_row(\"version-a\", \"commit-version\"),\n            ],\n        )\n        .await;\n        transaction.commit().await.expect(\"commit should persist\");\n\n        
let loaded = load_selected_tab_at(&live_state, storage.clone(), \"version-a\")\n            .await\n            .expect(\"load should succeed\")\n            .expect(\"version row should be visible\");\n\n        assert_eq!(loaded.version_id, \"version-a\");\n        assert!(!loaded.untracked);\n        assert_eq!(\n            loaded.snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"version-tracked\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn main_override_hides_global_row() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = live_state_context();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let rows = [\n                tracked_row_with_commit(\"global-tracked\", Some(\"change-global\"), \"commit-global\"),\n                tracked_row_at_with_commit(\n                    \"main\",\n                    \"main-tracked\",\n                    Some(\"change-main\"),\n                    \"commit-main\",\n                ),\n            ];\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &rows,\n                )\n                .await\n                .expect(\"tracked rows should stage\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"tracked rows should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            &[\n                version_ref_row(\"global\", 
\"commit-global\"),\n                version_ref_row(\"main\", \"commit-main\"),\n            ],\n        )\n        .await;\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let loaded = load_selected_tab_at(&live_state, storage.clone(), \"main\")\n            .await\n            .expect(\"load should succeed\")\n            .expect(\"main row should be visible\");\n\n        assert_eq!(loaded.version_id, \"main\");\n        assert!(!loaded.global);\n        assert_eq!(\n            loaded.snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"main-tracked\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn load_row_prefers_requested_untracked_over_requested_tracked_and_global_rows() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = live_state_context();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let rows = [\n                tracked_row_with_commit(\"global-tracked\", Some(\"change-global\"), \"commit-global\"),\n                tracked_row_at_with_commit(\n                    \"version-a\",\n                    \"version-tracked\",\n                    Some(\"change-version\"),\n                    \"commit-version\",\n                ),\n            ];\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &rows,\n                )\n                .await\n                .expect(\"tracked rows should stage\");\n            }\n            writes\n                .apply(&mut 
transaction.as_mut())\n                .await\n                .expect(\"tracked rows should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            &[\n                version_ref_row(\"global\", \"commit-global\"),\n                version_ref_row(\"version-a\", \"commit-version\"),\n                untracked_row_at(\"global\", \"global-untracked\"),\n                untracked_row_at(\"version-a\", \"version-untracked\"),\n            ],\n        )\n        .await;\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let loaded = load_selected_tab_at(&live_state, storage.clone(), \"version-a\")\n            .await\n            .expect(\"load should succeed\")\n            .expect(\"version untracked row should be visible\");\n\n        assert_eq!(loaded.version_id, \"version-a\");\n        assert!(loaded.untracked);\n        assert_eq!(\n            loaded.snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"version-untracked\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn scan_rows_overlays_requested_version_over_global() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = live_state_context();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let rows = [\n                tracked_row_with_commit(\"global-tracked\", Some(\"change-global\"), \"commit-global\"),\n                tracked_row_at_with_commit(\n                    \"version-a\",\n                    \"version-tracked\",\n                    Some(\"change-version\"),\n                    \"commit-version\",\n                ),\n            ];\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = 
JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &rows,\n                )\n                .await\n                .expect(\"rows should stage\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"rows should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            &[\n                version_ref_row(\"global\", \"commit-global\"),\n                version_ref_row(\"version-a\", \"commit-version\"),\n            ],\n        )\n        .await;\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let rows = scan_selected_tab_at(&live_state, storage.clone(), \"version-a\", false)\n            .await\n            .expect(\"scan should succeed\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].version_id, \"version-a\");\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"version-tracked\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn scan_rows_projects_global_row_into_requested_version() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = live_state_context();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let rows = [tracked_row_with_commit(\n                \"global-tracked\",\n                Some(\"change-global\"),\n                \"commit-global\",\n            )];\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = 
JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &rows,\n                )\n                .await\n                .expect(\"rows should stage\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"rows should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            &[\n                version_ref_row(\"global\", \"commit-global\"),\n                version_ref_row(\"version-a\", \"commit-version-a\"),\n            ],\n        )\n        .await;\n        write_empty_commits_to_store(transaction.as_mut(), &[\"commit-version-a\"]).await;\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let rows = scan_selected_tab_at(&live_state, storage.clone(), \"version-a\", false)\n            .await\n            .expect(\"scan should succeed\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].version_id, \"version-a\");\n        assert!(rows[0].global);\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"global-tracked\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn scan_rows_does_not_project_global_rows_into_missing_version() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = live_state_context();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let rows = [tracked_row_with_commit(\n                \"global-tracked\",\n                Some(\"change-global\"),\n                
\"commit-global\",\n            )];\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &rows,\n                )\n                .await\n                .expect(\"tracked row should stage\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"tracked row should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            &[version_ref_row(\"global\", \"commit-global\")],\n        )\n        .await;\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let rows = scan_selected_tab_at(&live_state, storage.clone(), \"missing-version\", false)\n            .await\n            .expect(\"scan should succeed\");\n\n        assert_eq!(\n            rows.len(),\n            0,\n            \"global rows must not be projected into a missing version scope\"\n        );\n    }\n\n    #[tokio::test]\n    async fn winning_tombstone_hides_row_unless_tombstones_are_included() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = live_state_context();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let rows = [\n                tracked_row_with_commit(\"global-tracked\", Some(\"change-global\"), \"commit-global\"),\n                tombstone_tracked_row_at_with_commit(\n                    \"version-a\",\n                    Some(\"change-tombstone\"),\n                    \"commit-version\",\n           
     ),\n            ];\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &rows,\n                )\n                .await\n                .expect(\"rows should stage\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"rows should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            &[\n                version_ref_row(\"global\", \"commit-global\"),\n                version_ref_row(\"version-a\", \"commit-version\"),\n            ],\n        )\n        .await;\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let hidden = scan_selected_tab_at(&live_state, storage.clone(), \"version-a\", false)\n            .await\n            .expect(\"scan should succeed\");\n        assert_eq!(hidden.len(), 0);\n\n        let with_tombstone = scan_selected_tab_at(&live_state, storage.clone(), \"version-a\", true)\n            .await\n            .expect(\"scan should succeed\");\n        assert_eq!(with_tombstone.len(), 1);\n        assert_eq!(with_tombstone[0].version_id, \"version-a\");\n        assert_eq!(with_tombstone[0].snapshot_content, None);\n    }\n\n    #[tokio::test]\n    async fn main_tombstone_hides_global_row() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = live_state_context();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        {\n            let rows = [\n                
tracked_row_with_commit(\"global-tracked\", Some(\"change-global\"), \"commit-global\"),\n                tombstone_tracked_row_at_with_commit(\n                    \"main\",\n                    Some(\"change-main-tombstone\"),\n                    \"commit-main\",\n                ),\n            ];\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &rows,\n                )\n                .await\n                .expect(\"tracked rows should stage\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"tracked rows should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            &[\n                version_ref_row(\"global\", \"commit-global\"),\n                version_ref_row(\"main\", \"commit-main\"),\n            ],\n        )\n        .await;\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let hidden = scan_selected_tab_at(&live_state, storage.clone(), \"main\", false)\n            .await\n            .expect(\"scan should succeed\");\n        assert_eq!(hidden.len(), 0);\n\n        let tombstones = scan_selected_tab_at(&live_state, storage.clone(), \"main\", true)\n            .await\n            .expect(\"scan should succeed\");\n        assert_eq!(tombstones.len(), 1);\n        assert_eq!(tombstones[0].version_id, \"main\");\n        assert!(!tombstones[0].global);\n        assert_eq!(tombstones[0].snapshot_content, None);\n    }\n\n    #[tokio::test]\n    async fn writer_allows_commit_fact_to_share_the_touched_version_commit_id() {\n        let backend: Arc<dyn Backend + Send + Sync> = 
Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = live_state_context();\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        {\n            let rows = [\n                tracked_row_at_with_commit(\n                    \"version-a\",\n                    \"version-row\",\n                    Some(\"change-version\"),\n                    \"commit-version\",\n                ),\n                commit_live_state_row(\"commit-version\"),\n            ];\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &rows,\n                )\n                .await\n                .expect(\"commit facts are changelog projections, not root-local rows\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"commit fact rows should apply\");\n        }\n        write_untracked_rows_to_store(\n            transaction.as_mut(),\n            &[version_ref_row(\"version-a\", \"commit-version\")],\n        )\n        .await;\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let loaded = load_selected_tab_at(&live_state, storage.clone(), \"version-a\")\n            .await\n            .expect(\"load should succeed\")\n            .expect(\"version row should be visible\");\n        assert_eq!(\n            loaded.snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"version-row\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn writer_uses_first_parent_as_merge_root_base() {\n        let backend: 
Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let mut seed_transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"seed transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        {\n            CommitStoreContext::new()\n                .writer(&mut seed_transaction.as_mut(), &mut writes)\n                .stage_commit_draft(\n                    CommitDraftRef {\n                        id: \"parent-left\",\n                        change_id: \"parent-left:commit\",\n                        parent_ids: &[],\n                        author_account_ids: &[],\n                        created_at: \"1970-01-01T00:00:00.000Z\",\n                    },\n                    Vec::new(),\n                    Vec::new(),\n                )\n                .await\n                .expect(\"first parent commit should stage\");\n            TrackedStateContext::new()\n                .writer(&mut seed_transaction.as_mut(), &mut writes)\n                .stage_delta(\"parent-left\", None, &[])\n                .await\n                .expect(\"first parent root should exist\");\n        }\n        writes\n            .apply(&mut seed_transaction.as_mut())\n            .await\n            .expect(\"first parent root should apply\");\n        seed_transaction\n            .commit()\n            .await\n            .expect(\"seed transaction should commit\");\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        {\n            let rows = [\n                tracked_row_at_with_commit(\n                    \"version-a\",\n                    \"version-row\",\n                    Some(\"change-version\"),\n                    \"commit-merge\",\n                ),\n                
commit_live_state_row_with_parents(\n                    \"commit-merge\",\n                    &[\"parent-left\", \"parent-right\"],\n                ),\n            ];\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &rows,\n                )\n                .await\n                .expect(\"merge commit should use first parent as tracked-root base\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"merge commit rows should apply\");\n        }\n    }\n\n    #[tokio::test]\n    async fn non_global_root_does_not_store_global_rows() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let tracked_state = TrackedStateContext::new();\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        {\n            let rows = [\n                tracked_row_with_commit(\"global-tracked\", Some(\"change-global\"), \"commit-global\"),\n                tracked_row_at_with_commit(\n                    \"main\",\n                    \"main-tracked\",\n                    Some(\"change-main\"),\n                    \"commit-main\",\n                ),\n            ];\n            let mut writes = StorageWriteSet::new();\n            let mut json_writer = JsonStoreContext::new().writer();\n            {\n                stage_materialized_live_rows(\n                    transaction.as_mut(),\n                    &mut writes,\n                    &mut json_writer,\n                    &rows,\n                )\n       
         .await\n                .expect(\"tracked rows should stage\");\n            }\n            writes\n                .apply(&mut transaction.as_mut())\n                .await\n                .expect(\"tracked rows should apply\");\n        }\n        transaction.commit().await.expect(\"commit should persist\");\n\n        let global_root_rows =\n            scan_tracked_root(&tracked_state, storage.clone(), \"commit-global\").await;\n        assert_eq!(global_root_rows.len(), 1);\n        assert_eq!(\n            global_root_rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"global-tracked\\\"}\")\n        );\n\n        let main_root_rows =\n            scan_tracked_root(&tracked_state, storage.clone(), \"commit-main\").await;\n        assert_eq!(main_root_rows.len(), 1);\n        assert_eq!(\n            main_root_rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"main-tracked\\\"}\")\n        );\n    }\n\n    async fn load_selected_tab(\n        live_state: &LiveStateContext,\n        storage: StorageContext,\n    ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n        live_state\n            .reader(storage)\n            .load_row(&LiveStateRowRequest {\n                schema_key: \"lix_key_value\".to_string(),\n                version_id: \"global\".to_string(),\n                entity_id: crate::entity_identity::EntityIdentity::single(\"selected-tab\"),\n                file_id: NullableKeyFilter::Null,\n            })\n            .await\n    }\n\n    async fn load_selected_tab_at(\n        live_state: &LiveStateContext,\n        storage: StorageContext,\n        version_id: &str,\n    ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n        live_state\n            .reader(storage)\n            .load_row(&LiveStateRowRequest {\n                schema_key: \"lix_key_value\".to_string(),\n                version_id: version_id.to_string(),\n                entity_id: 
crate::entity_identity::EntityIdentity::single(\"selected-tab\"),\n                file_id: NullableKeyFilter::Null,\n            })\n            .await\n    }\n\n    async fn scan_selected_tab_at(\n        live_state: &LiveStateContext,\n        storage: StorageContext,\n        version_id: &str,\n        include_tombstones: bool,\n    ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n        live_state\n            .reader(storage)\n            .scan_rows(&LiveStateScanRequest {\n                filter: LiveStateFilter {\n                    schema_keys: vec![\"lix_key_value\".to_string()],\n                    entity_ids: vec![crate::entity_identity::EntityIdentity::single(\n                        \"selected-tab\",\n                    )],\n                    version_ids: vec![version_id.to_string()],\n                    file_ids: vec![NullableKeyFilter::Null],\n                    include_tombstones,\n                    ..LiveStateFilter::default()\n                },\n                ..LiveStateScanRequest::default()\n            })\n            .await\n    }\n\n    async fn scan_tracked_root(\n        tracked_state: &TrackedStateContext,\n        storage: StorageContext,\n        commit_id: &str,\n    ) -> Vec<MaterializedTrackedStateRow> {\n        tracked_state\n            .reader(storage)\n            .scan_rows_at_commit(\n                commit_id,\n                &TrackedStateScanRequest {\n                    filter: TrackedStateFilter {\n                        include_tombstones: true,\n                        ..Default::default()\n                    },\n                    ..Default::default()\n                },\n            )\n            .await\n            .expect(\"tracked root should scan\")\n    }\n\n    fn tracked_row_with_commit(\n        value: &str,\n        change_id: Option<&str>,\n        commit_id: &str,\n    ) -> MaterializedLiveStateRow {\n        tracked_row_at_with_commit(\"global\", value, change_id, commit_id)\n    
}\n\n    fn tracked_row_at_with_commit(\n        version_id: &str,\n        value: &str,\n        change_id: Option<&str>,\n        commit_id: &str,\n    ) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: identity(\"selected-tab\"),\n            schema_key: \"lix_key_value\".to_string(),\n            file_id: None,\n            snapshot_content: Some(format!(\"{{\\\"value\\\":\\\"{value}\\\"}}\")),\n            metadata: None,\n            deleted: false,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n            global: version_id == \"global\",\n            change_id: change_id.map(str::to_string),\n            commit_id: Some(commit_id.to_string()),\n            untracked: false,\n            version_id: version_id.to_string(),\n        }\n    }\n\n    fn tombstone_tracked_row_at_with_commit(\n        version_id: &str,\n        change_id: Option<&str>,\n        commit_id: &str,\n    ) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            snapshot_content: None,\n            deleted: true,\n            ..tracked_row_at_with_commit(version_id, \"ignored\", change_id, commit_id)\n        }\n    }\n\n    fn untracked_row(value: &str) -> MaterializedUntrackedStateRow {\n        untracked_row_at(\"global\", value)\n    }\n\n    fn untracked_row_at(version_id: &str, value: &str) -> MaterializedUntrackedStateRow {\n        MaterializedUntrackedStateRow {\n            entity_id: identity(\"selected-tab\"),\n            schema_key: \"lix_key_value\".to_string(),\n            file_id: None,\n            snapshot_content: Some(format!(\"{{\\\"value\\\":\\\"{value}\\\"}}\")),\n            metadata: None,\n            deleted: false,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n            global: version_id == \"global\",\n            version_id: 
version_id.to_string(),\n        }\n    }\n\n    fn version_ref_row(version_id: &str, commit_id: &str) -> MaterializedUntrackedStateRow {\n        MaterializedUntrackedStateRow {\n            entity_id: identity(version_id),\n            schema_key: \"lix_version_ref\".to_string(),\n            file_id: None,\n            snapshot_content: Some(\n                serde_json::to_string(&json!({\n                    \"id\": version_id,\n                    \"commit_id\": commit_id,\n                }))\n                .expect(\"version ref should serialize\"),\n            ),\n            metadata: None,\n            deleted: false,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n            global: true,\n            version_id: \"global\".to_string(),\n        }\n    }\n\n    fn commit_live_state_row(commit_id: &str) -> MaterializedLiveStateRow {\n        commit_live_state_row_with_parents(commit_id, &[])\n    }\n\n    fn commit_live_state_row_with_parents(\n        commit_id: &str,\n        parent_ids: &[&str],\n    ) -> MaterializedLiveStateRow {\n        let mut row = commit_live_state_row_with_snapshot(\n            commit_id,\n            json!({\n                \"id\": commit_id,\n            }),\n        );\n        row.metadata = Some(\n            serde_json::to_string(&json!({ \"test_parents\": parent_ids }))\n                .expect(\"test metadata should serialize\"),\n        );\n        row\n    }\n\n    fn commit_live_state_row_with_snapshot(\n        commit_id: &str,\n        snapshot: serde_json::Value,\n    ) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: identity(commit_id),\n            schema_key: COMMIT_SCHEMA_KEY.to_string(),\n            file_id: None,\n            snapshot_content: Some(\n                serde_json::to_string(&snapshot).expect(\"commit snapshot should serialize\"),\n            ),\n            metadata: 
None,\n            deleted: false,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n            global: true,\n            change_id: Some(format!(\"change-{commit_id}\")),\n            commit_id: Some(commit_id.to_string()),\n            untracked: false,\n            version_id: \"global\".to_string(),\n        }\n    }\n\n    fn identity(entity_id: &str) -> EntityIdentity {\n        EntityIdentity::single(entity_id)\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/live_state/mod.rs",
    "content": "mod context;\nmod overlay;\nmod reader;\nmod types;\nmod visibility;\n\n#[allow(unused_imports)]\npub(crate) use context::{LiveStateContext, LiveStateStoreReader};\n#[allow(unused_imports)]\npub(crate) use reader::LiveStateReader;\n#[allow(unused_imports)]\npub(crate) use types::{\n    Bound, LiveStateFilter, LiveStateProjection, LiveStateRowIdentity, LiveStateRowRequest,\n    LiveStateScanRequest, MaterializedLiveStateRow, ScanConstraint, ScanField, ScanOperator,\n};\n"
  },
  {
    "path": "packages/engine/src/live_state/overlay.rs",
    "content": "use std::collections::BTreeMap;\n\nuse crate::live_state::{LiveStateRowIdentity, MaterializedLiveStateRow};\n\n/// Applies the local untracked overlay to tracked live-state rows.\n///\n/// The visible live-state contract is \"latest local untracked row wins\" for\n/// the same version/schema/entity/file identity. This keeps SQL providers from\n/// knowing whether a visible row came from tracked changelog projection or from\n/// local untracked state.\npub(crate) fn overlay_untracked_rows(\n    tracked_rows: Vec<MaterializedLiveStateRow>,\n    untracked_rows: Vec<MaterializedLiveStateRow>,\n) -> Vec<MaterializedLiveStateRow> {\n    let mut rows_by_identity = BTreeMap::new();\n\n    for row in tracked_rows {\n        rows_by_identity.insert(LiveStateRowIdentity::from_row(&row), row);\n    }\n    for row in untracked_rows {\n        rows_by_identity.insert(LiveStateRowIdentity::from_row(&row), row);\n    }\n\n    rows_by_identity.into_values().collect()\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn untracked_row_wins_for_same_identity() {\n        let tracked = live_row(\"tracked\", false, Some(\"change-tracked\"));\n        let untracked = live_row(\"untracked\", true, None);\n\n        let rows = overlay_untracked_rows(vec![tracked], vec![untracked]);\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"untracked\\\"}\")\n        );\n        assert!(rows[0].untracked);\n        assert_eq!(rows[0].change_id, None);\n    }\n\n    #[test]\n    fn different_identities_are_preserved() {\n        let tracked = live_row(\"tracked\", false, Some(\"change-tracked\"));\n        let mut untracked = live_row(\"untracked\", true, None);\n        untracked.entity_id = crate::entity_identity::EntityIdentity::single(\"other\");\n\n        let rows = overlay_untracked_rows(vec![tracked], vec![untracked]);\n\n        assert_eq!(rows.len(), 2);\n   
 }\n\n    fn live_row(value: &str, untracked: bool, change_id: Option<&str>) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: crate::entity_identity::EntityIdentity::single(\"entity\"),\n            schema_key: \"schema\".to_string(),\n            file_id: None,\n            snapshot_content: Some(format!(\"{{\\\"value\\\":\\\"{value}\\\"}}\")),\n            metadata: None,\n            deleted: false,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n            global: true,\n            change_id: change_id.map(str::to_string),\n            commit_id: None,\n            untracked,\n            version_id: \"global\".to_string(),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/live_state/reader.rs",
    "content": "use async_trait::async_trait;\n\nuse crate::live_state::MaterializedLiveStateRow;\nuse crate::live_state::{LiveStateRowRequest, LiveStateScanRequest};\nuse crate::LixError;\n\n/// Minimal engine read model for transaction planning and SQL providers.\n///\n/// Engine only needs visible state-row reads here. Changelog freshness/catch-up\n/// should be added at this boundary later instead of leaking projection internals\n/// into sessions or SQL providers.\n#[async_trait]\npub(crate) trait LiveStateReader: Send + Sync {\n    async fn scan_rows(\n        &self,\n        request: &LiveStateScanRequest,\n    ) -> Result<Vec<MaterializedLiveStateRow>, LixError>;\n\n    async fn load_row(\n        &self,\n        request: &LiveStateRowRequest,\n    ) -> Result<Option<MaterializedLiveStateRow>, LixError>;\n}\n"
  },
  {
    "path": "packages/engine/src/live_state/types.rs",
    "content": "use crate::entity_identity::EntityIdentity;\nuse crate::tracked_state::MaterializedTrackedStateRow;\nuse crate::untracked_state::{\n    MaterializedUntrackedStateRow, UntrackedStateFilter, UntrackedStateRowRequest,\n};\nuse crate::{NullableKeyFilter, Value};\n\n/// Durable row visible through live_state reads.\n///\n/// Unlike provider write rows, live-state rows are fully hydrated facts. Missing\n/// generated fields should be caught before this type is constructed.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) struct MaterializedLiveStateRow {\n    pub(crate) entity_id: EntityIdentity,\n    pub(crate) schema_key: String,\n    pub(crate) file_id: Option<String>,\n    pub(crate) snapshot_content: Option<String>,\n    pub(crate) metadata: Option<String>,\n    pub(crate) deleted: bool,\n    pub(crate) created_at: String,\n    pub(crate) updated_at: String,\n    pub(crate) global: bool,\n    pub(crate) change_id: Option<String>,\n    pub(crate) commit_id: Option<String>,\n    pub(crate) untracked: bool,\n    pub(crate) version_id: String,\n}\n\nimpl From<MaterializedUntrackedStateRow> for MaterializedLiveStateRow {\n    fn from(row: MaterializedUntrackedStateRow) -> Self {\n        MaterializedLiveStateRow {\n            entity_id: row.entity_id,\n            schema_key: row.schema_key,\n            file_id: row.file_id,\n            snapshot_content: row.snapshot_content,\n            metadata: row.metadata,\n            deleted: row.deleted,\n            created_at: row.created_at,\n            updated_at: row.updated_at,\n            global: row.global,\n            change_id: None,\n            commit_id: None,\n            untracked: true,\n            version_id: row.version_id,\n        }\n    }\n}\n\nimpl TryFrom<&MaterializedLiveStateRow> for MaterializedTrackedStateRow {\n    type Error = crate::LixError;\n\n    fn try_from(row: &MaterializedLiveStateRow) -> Result<Self, Self::Error> {\n        if 
row.untracked {\n            return Err(crate::LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked_state cannot store untracked live-state rows\",\n            ));\n        }\n        let Some(change_id) = row.change_id.clone() else {\n            return Err(crate::LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked_state rows require change_id\",\n            ));\n        };\n        let Some(commit_id) = row.commit_id.clone() else {\n            return Err(crate::LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked_state rows require commit_id\",\n            ));\n        };\n\n        Ok(MaterializedTrackedStateRow {\n            entity_id: row.entity_id.clone(),\n            schema_key: row.schema_key.clone(),\n            file_id: row.file_id.clone(),\n            snapshot_content: row.snapshot_content.clone(),\n            metadata: row.metadata.clone(),\n            deleted: row.deleted,\n            created_at: row.created_at.clone(),\n            updated_at: row.updated_at.clone(),\n            change_id,\n            commit_id,\n        })\n    }\n}\n\nimpl From<&MaterializedLiveStateRow> for MaterializedUntrackedStateRow {\n    fn from(row: &MaterializedLiveStateRow) -> Self {\n        MaterializedUntrackedStateRow {\n            entity_id: row.entity_id.clone(),\n            schema_key: row.schema_key.clone(),\n            file_id: row.file_id.clone(),\n            snapshot_content: row.snapshot_content.clone(),\n            metadata: row.metadata.clone(),\n            deleted: row.deleted,\n            created_at: row.created_at.clone(),\n            updated_at: row.updated_at.clone(),\n            global: row.global,\n            version_id: row.version_id.clone(),\n        }\n    }\n}\n\n/// Which indexed field a live-state scan constraint applies to.\n#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize)]\npub(crate) enum ScanField {\n    
EntityId,\n    FileId,\n}\n\n/// Inclusive or exclusive range bound.\n#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize)]\npub(crate) struct Bound {\n    pub(crate) value: Value,\n    pub(crate) inclusive: bool,\n}\n\n/// SQL-free structured scan constraint.\n#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize)]\npub(crate) struct ScanConstraint {\n    pub(crate) field: ScanField,\n    pub(crate) operator: ScanOperator,\n}\n\n/// Structured scan operator aligned with the current planner/storage split.\n#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize)]\npub(crate) enum ScanOperator {\n    Eq(Value),\n    In(Vec<Value>),\n    Range {\n        lower: Option<Bound>,\n        upper: Option<Bound>,\n    },\n}\n\n/// Identity-centered filter for visible live entities.\n#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize, Default)]\npub(crate) struct LiveStateFilter {\n    #[serde(default)]\n    pub(crate) schema_keys: Vec<String>,\n    #[serde(default)]\n    pub(crate) entity_ids: Vec<EntityIdentity>,\n    #[serde(default)]\n    pub(crate) version_ids: Vec<String>,\n    #[serde(default)]\n    pub(crate) file_ids: Vec<NullableKeyFilter<String>>,\n    #[serde(default)]\n    pub(crate) untracked: Option<bool>,\n    #[serde(default)]\n    pub(crate) constraints: Vec<ScanConstraint>,\n    #[serde(default)]\n    pub(crate) include_tombstones: bool,\n}\n\nimpl From<LiveStateFilter> for UntrackedStateFilter {\n    fn from(filter: LiveStateFilter) -> Self {\n        Self {\n            schema_keys: filter.schema_keys,\n            entity_ids: filter.entity_ids,\n            version_ids: filter.version_ids,\n            file_ids: filter.file_ids,\n        }\n    }\n}\n\n/// Requested property set for a live-state scan.\n#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize, Default)]\npub(crate) struct LiveStateProjection {\n    #[serde(default)]\n    pub(crate) columns: 
Vec<String>,\n}\n\n/// First-principles scan request for engine-owned reads.\n#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize, Default)]\npub(crate) struct LiveStateScanRequest {\n    #[serde(default)]\n    pub(crate) filter: LiveStateFilter,\n    #[serde(default)]\n    pub(crate) projection: LiveStateProjection,\n    #[serde(default)]\n    pub(crate) limit: Option<usize>,\n}\n\n/// Point lookup request for one visible live-state row.\n#[derive(Debug, Clone, PartialEq)]\npub(crate) struct LiveStateRowRequest {\n    pub(crate) schema_key: String,\n    pub(crate) version_id: String,\n    pub(crate) entity_id: EntityIdentity,\n    pub(crate) file_id: NullableKeyFilter<String>,\n}\n\nimpl From<&LiveStateRowRequest> for UntrackedStateRowRequest {\n    fn from(request: &LiveStateRowRequest) -> Self {\n        Self {\n            schema_key: request.schema_key.clone(),\n            version_id: request.version_id.clone(),\n            entity_id: request.entity_id.clone(),\n            file_id: request.file_id.clone(),\n        }\n    }\n}\n\n/// Stable visible-row identity used for overlay composition.\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\npub(crate) struct LiveStateRowIdentity {\n    pub(crate) version_id: String,\n    pub(crate) schema_key: String,\n    pub(crate) entity_id: EntityIdentity,\n    pub(crate) file_id: Option<String>,\n}\n\nimpl LiveStateRowIdentity {\n    pub(crate) fn from_row(row: &MaterializedLiveStateRow) -> Self {\n        Self {\n            version_id: row.version_id.clone(),\n            schema_key: row.schema_key.clone(),\n            entity_id: row.entity_id.clone(),\n            file_id: row.file_id.clone(),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/live_state/visibility.rs",
    "content": "use std::collections::BTreeMap;\n\nuse crate::live_state::{LiveStateRowIdentity, MaterializedLiveStateRow};\nuse crate::GLOBAL_VERSION_ID;\n\n/// Expands a version-scoped storage read so global candidates are available for\n/// the visibility overlay.\npub(crate) fn expanded_version_ids(version_ids: &[String]) -> Vec<String> {\n    if version_ids.is_empty() {\n        return Vec::new();\n    }\n\n    let mut expanded = version_ids.to_vec();\n    if version_ids\n        .iter()\n        .any(|version_id| version_id != GLOBAL_VERSION_ID)\n        && !expanded\n            .iter()\n            .any(|version_id| version_id == GLOBAL_VERSION_ID)\n    {\n        expanded.push(GLOBAL_VERSION_ID.to_string());\n    }\n    expanded\n}\n\n/// Resolves raw tracked/untracked candidates into the rows visible for a scan.\n///\n/// Global rows are projected into each requested version scope, but keep\n/// `global = true`. Version-scoped rows win over projected global rows for the\n/// same identity. Tombstones participate in winning and are filtered only after\n/// visibility is resolved. 
This projection is a read concern; constraint\n/// validation remains exact storage-scope local unless a validator explicitly\n/// opts into overlay semantics.\npub(crate) fn resolve_scan_rows(\n    rows: Vec<MaterializedLiveStateRow>,\n    requested_version_ids: &[String],\n    include_tombstones: bool,\n) -> Vec<MaterializedLiveStateRow> {\n    let mut rows = project_global_rows_into_requested_versions(rows, requested_version_ids);\n    if !include_tombstones {\n        rows.retain(|row| !row.deleted);\n    }\n    rows\n}\n\n/// Resolves a row loaded through a concrete storage version into the row visible\n/// to the requested version scope.\npub(crate) fn project_loaded_row(\n    mut row: MaterializedLiveStateRow,\n    requested_version_id: &str,\n    matched_version_id: &str,\n) -> MaterializedLiveStateRow {\n    if row.global && requested_version_id != GLOBAL_VERSION_ID {\n        row.version_id = requested_version_id.to_string();\n    } else if matched_version_id == GLOBAL_VERSION_ID && requested_version_id != GLOBAL_VERSION_ID {\n        row.version_id = requested_version_id.to_string();\n    }\n    row\n}\n\nfn project_global_rows_into_requested_versions(\n    rows: Vec<MaterializedLiveStateRow>,\n    requested_version_ids: &[String],\n) -> Vec<MaterializedLiveStateRow> {\n    if requested_version_ids.is_empty() {\n        return rows;\n    }\n\n    let mut rows_by_identity = BTreeMap::<LiveStateRowIdentity, MaterializedLiveStateRow>::new();\n    for requested_version_id in requested_version_ids {\n        for row in &rows {\n            if row.version_id == GLOBAL_VERSION_ID {\n                let mut projected = row.clone();\n                projected.version_id = requested_version_id.clone();\n                rows_by_identity.insert(LiveStateRowIdentity::from_row(&projected), projected);\n            }\n        }\n        for row in rows\n            .iter()\n            .filter(|row| row.version_id == *requested_version_id)\n        {\n            
rows_by_identity.insert(LiveStateRowIdentity::from_row(row), row.clone());\n        }\n    }\n\n    rows_by_identity.into_values().collect()\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn expands_requested_version_with_global_candidates() {\n        assert_eq!(\n            expanded_version_ids(&[\"version-a\".to_string()]),\n            vec![\"version-a\".to_string(), \"global\".to_string()]\n        );\n        assert_eq!(\n            expanded_version_ids(&[\"global\".to_string()]),\n            vec![\"global\".to_string()]\n        );\n    }\n\n    #[test]\n    fn scan_projects_global_row_into_requested_version() {\n        let rows = resolve_scan_rows(\n            vec![row_at(\n                \"global\",\n                \"global-value\",\n                true,\n                Some(\"change-global\"),\n            )],\n            &[\"version-a\".to_string()],\n            false,\n        );\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].version_id, \"version-a\");\n        assert!(rows[0].global);\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"global-value\\\"}\")\n        );\n    }\n\n    #[test]\n    fn scan_prefers_requested_version_row_over_projected_global_row() {\n        let rows = resolve_scan_rows(\n            vec![\n                row_at(\"global\", \"global-value\", true, Some(\"change-global\")),\n                row_at(\"version-a\", \"version-value\", false, Some(\"change-version\")),\n            ],\n            &[\"version-a\".to_string()],\n            false,\n        );\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].version_id, \"version-a\");\n        assert!(!rows[0].global);\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"version-value\\\"}\")\n        );\n    }\n\n    #[test]\n    fn 
version_tombstone_hides_global_row_after_visibility_resolution() {\n        let rows = resolve_scan_rows(\n            vec![\n                row_at(\"global\", \"global-value\", true, Some(\"change-global\")),\n                tombstone_at(\"version-a\", false, Some(\"change-tombstone\")),\n            ],\n            &[\"version-a\".to_string()],\n            false,\n        );\n\n        assert!(rows.is_empty());\n    }\n\n    #[test]\n    fn tombstone_can_be_returned_when_requested() {\n        let rows = resolve_scan_rows(\n            vec![\n                row_at(\"global\", \"global-value\", true, Some(\"change-global\")),\n                tombstone_at(\"version-a\", false, Some(\"change-tombstone\")),\n            ],\n            &[\"version-a\".to_string()],\n            true,\n        );\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].version_id, \"version-a\");\n        assert_eq!(rows[0].snapshot_content, None);\n    }\n\n    #[test]\n    fn loaded_global_row_is_projected_into_requested_version() {\n        let row = project_loaded_row(\n            row_at(\"global\", \"global-value\", true, Some(\"change-global\")),\n            \"version-a\",\n            \"global\",\n        );\n\n        assert_eq!(row.version_id, \"version-a\");\n        assert!(row.global);\n    }\n\n    fn row_at(\n        version_id: &str,\n        value: &str,\n        global: bool,\n        change_id: Option<&str>,\n    ) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: crate::entity_identity::EntityIdentity::single(\"entity\"),\n            schema_key: \"schema\".to_string(),\n            file_id: None,\n            snapshot_content: Some(format!(\"{{\\\"value\\\":\\\"{value}\\\"}}\")),\n            metadata: None,\n            deleted: false,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n            global,\n            change_id: 
change_id.map(str::to_string),\n            commit_id: Some(\"commit\".to_string()),\n            untracked: false,\n            version_id: version_id.to_string(),\n        }\n    }\n\n    fn tombstone_at(\n        version_id: &str,\n        global: bool,\n        change_id: Option<&str>,\n    ) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            snapshot_content: None,\n            deleted: true,\n            ..row_at(version_id, \"ignored\", global, change_id)\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/plugin/archive.rs",
    "content": "use std::collections::{BTreeMap, BTreeSet};\nuse std::io::{Cursor, Read};\nuse std::path::{Component, Path};\n\nuse serde_json::Value as JsonValue;\nuse zip::read::ZipArchive;\n\nuse crate::schema::{schema_key_from_definition, validate_lix_schema_definition};\nuse crate::LixError;\n\nuse super::{parse_plugin_manifest_json, InstalledPlugin, PluginManifest};\n\n#[derive(Debug, Clone)]\npub(crate) struct ParsedPluginArchive {\n    pub manifest: PluginManifest,\n    pub schemas: Vec<JsonValue>,\n}\n\npub(crate) fn parse_plugin_archive_for_install(\n    archive_bytes: &[u8],\n) -> Result<ParsedPluginArchive, LixError> {\n    let files = read_archive_files_for_install(archive_bytes)?;\n\n    let manifest_bytes = files.get(\"manifest.json\").ok_or_else(|| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: \"Plugin archive must contain manifest.json\".to_string(),\n        hint: None,\n            details: None,\n    })?;\n    let manifest_raw = std::str::from_utf8(manifest_bytes).map_err(|error| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\"Plugin archive manifest.json must be UTF-8: {error}\"),\n        hint: None,\n            details: None,\n    })?;\n    let validated_manifest = parse_plugin_manifest_json(manifest_raw)?;\n\n    let entry_path = normalize_archive_path_for_install(&validated_manifest.manifest.entry)?;\n    let wasm_bytes = files\n        .get(&entry_path)\n        .ok_or_else(|| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\n                \"Plugin archive is missing manifest entry file '{}'\",\n                validated_manifest.manifest.entry\n            ),\n            hint: None,\n            details: None,\n        })?\n        .clone();\n    ensure_valid_plugin_wasm_for_install(&wasm_bytes)?;\n\n    let mut schemas = Vec::with_capacity(validated_manifest.manifest.schemas.len());\n    let mut seen_schema_keys = 
BTreeSet::<String>::new();\n    for schema_path in &validated_manifest.manifest.schemas {\n        let normalized_schema_path = normalize_archive_path_for_install(schema_path)?;\n        let schema_bytes = files.get(&normalized_schema_path).ok_or_else(|| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\"Plugin archive is missing schema file '{schema_path}'\"),\n            hint: None,\n            details: None,\n        })?;\n        let schema_json: JsonValue =\n            serde_json::from_slice(schema_bytes).map_err(|error| LixError {\n                code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                message: format!(\n                    \"Plugin archive schema '{schema_path}' is invalid JSON: {error}\"\n                ),\n                hint: None,\n                details: None,\n            })?;\n        validate_lix_schema_definition(&schema_json)?;\n        let schema_key = schema_key_from_definition(&schema_json)?;\n        if !seen_schema_keys.insert(schema_key.schema_key.clone()) {\n            return Err(LixError {\n                code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                message: format!(\n                    \"Plugin archive declares duplicate schema '{}'\",\n                    schema_key.schema_key\n                ),\n                hint: None,\n                details: None,\n            });\n        }\n        schemas.push(schema_json);\n    }\n\n    Ok(ParsedPluginArchive {\n        manifest: validated_manifest.manifest,\n        schemas,\n    })\n}\n\npub(crate) fn load_installed_plugin_from_archive_bytes(\n    plugin_key: &str,\n    archive_path: &str,\n    archive_bytes: &[u8],\n) -> Result<InstalledPlugin, LixError> {\n    let files = read_plugin_archive_files(archive_path, archive_bytes)?;\n    let manifest_bytes = files.get(\"manifest.json\").ok_or_else(|| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\n            
\"plugin materialization: archive '{}' is missing manifest.json\",\n            archive_path\n        ),\n        hint: None,\n            details: None,\n    })?;\n    let manifest_raw = std::str::from_utf8(manifest_bytes).map_err(|error| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\n            \"plugin materialization: archive '{}' manifest.json must be UTF-8: {error}\",\n            archive_path\n        ),\n        hint: None,\n            details: None,\n    })?;\n    let validated_manifest = parse_plugin_manifest_json(manifest_raw)?;\n    if validated_manifest.manifest.key != plugin_key {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\n                \"plugin materialization: archive '{}' key mismatch: path key '{}' vs manifest key '{}'\",\n                archive_path, plugin_key, validated_manifest.manifest.key\n            ),\n            hint: None,\n            details: None,\n        });\n    }\n\n    let entry_path =\n        normalize_plugin_archive_path_for_materialization(&validated_manifest.manifest.entry)?;\n    let wasm = files.get(&entry_path).ok_or_else(|| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\n            \"plugin materialization: archive '{}' is missing entry file '{}'\",\n            archive_path, validated_manifest.manifest.entry\n        ),\n        hint: None,\n            details: None,\n    })?;\n    ensure_valid_plugin_wasm_for_materialization(wasm)?;\n\n    let manifest = validated_manifest.manifest;\n    let content_type = manifest.file_match.content_type;\n\n    Ok(InstalledPlugin {\n        key: manifest.key,\n        runtime: manifest.runtime,\n        api_version: manifest.api_version,\n        path_glob: manifest.file_match.path_glob,\n        content_type,\n        entry: manifest.entry,\n        manifest_json: validated_manifest.normalized_json,\n        wasm: 
wasm.clone(),\n    })\n}\n\nfn read_archive_files_for_install(\n    archive_bytes: &[u8],\n) -> Result<BTreeMap<String, Vec<u8>>, LixError> {\n    if archive_bytes.is_empty() {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"Plugin archive bytes must not be empty\".to_string(),\n            hint: None,\n            details: None,\n        });\n    }\n\n    let mut archive = ZipArchive::new(Cursor::new(archive_bytes)).map_err(|error| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\"Plugin archive is not a valid zip file: {error}\"),\n        hint: None,\n        details: None,\n    })?;\n    let mut files = BTreeMap::<String, Vec<u8>>::new();\n\n    for index in 0..archive.len() {\n        let mut entry = archive.by_index(index).map_err(|error| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\"Failed to read plugin archive entry at index {index}: {error}\"),\n            hint: None,\n            details: None,\n        })?;\n        let raw_name = entry.name().to_string();\n\n        if entry.is_dir() {\n            continue;\n        }\n        if is_symlink_mode(entry.unix_mode()) {\n            return Err(LixError {\n                code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                message: format!(\"Plugin archive entry '{raw_name}' must not be a symlink\"),\n                hint: None,\n                details: None,\n            });\n        }\n\n        let normalized_path = normalize_archive_path_for_install(&raw_name)?;\n        let mut bytes = Vec::new();\n        entry.read_to_end(&mut bytes).map_err(|error| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\"Failed to read plugin archive entry '{raw_name}': {error}\"),\n            hint: None,\n            details: None,\n        })?;\n        if files.insert(normalized_path.clone(), bytes).is_some() {\n    
        return Err(LixError {\n                code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                message: format!(\"Plugin archive contains duplicate entry '{normalized_path}'\"),\n                hint: None,\n                details: None,\n            });\n        }\n    }\n\n    Ok(files)\n}\n\nfn read_plugin_archive_files(\n    archive_path: &str,\n    archive_bytes: &[u8],\n) -> Result<BTreeMap<String, Vec<u8>>, LixError> {\n    if archive_bytes.is_empty() {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\n                \"plugin materialization: archive '{}' is empty\",\n                archive_path\n            ),\n            hint: None,\n            details: None,\n        });\n    }\n\n    let mut archive = ZipArchive::new(Cursor::new(archive_bytes)).map_err(|error| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\n            \"plugin materialization: archive '{}' is not a valid zip file: {error}\",\n            archive_path\n        ),\n        hint: None,\n        details: None,\n    })?;\n    let mut files = BTreeMap::<String, Vec<u8>>::new();\n\n    for index in 0..archive.len() {\n        let mut entry = archive.by_index(index).map_err(|error| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\n                \"plugin materialization: failed to read archive '{}' entry index {}: {error}\",\n                archive_path, index\n            ),\n            hint: None,\n            details: None,\n        })?;\n\n        let entry_name = entry.name().to_string();\n        if entry.is_dir() {\n            // Skip directory entries before normalization: normalized paths\n            // never end with '/', so checking the normalized path would\n            // silently record directories as empty files.\n            continue;\n        }\n        let normalized_path = normalize_plugin_archive_path_for_materialization(&entry_name)?;\n\n        let mut bytes = Vec::new();\n        entry.read_to_end(&mut bytes).map_err(|error| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n   
         message: format!(\n                \"plugin materialization: failed to read archive '{}' entry '{}': {error}\",\n                archive_path, entry_name\n            ),\n            hint: None,\n            details: None,\n        })?;\n        files.insert(normalized_path, bytes);\n    }\n\n    Ok(files)\n}\n\nfn normalize_archive_path_for_install(path: &str) -> Result<String, LixError> {\n    if path.is_empty() {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"Plugin archive path must not be empty\".to_string(),\n            hint: None,\n            details: None,\n        });\n    }\n    if path.starts_with('/') || path.starts_with('\\\\') {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\"Plugin archive path '{path}' must be relative\"),\n            hint: None,\n            details: None,\n        });\n    }\n    if path.contains('\\\\') {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\"Plugin archive path '{path}' must use forward slash separators\"),\n            hint: None,\n            details: None,\n        });\n    }\n\n    let mut segments = Vec::<String>::new();\n    for component in Path::new(path).components() {\n        match component {\n            Component::Normal(value) => {\n                let segment = value.to_str().ok_or_else(|| LixError {\n                    code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                    message: format!(\n                        \"Plugin archive path '{path}' contains non-UTF-8 components\"\n                    ),\n                    hint: None,\n                    details: None,\n                })?;\n                if segment.is_empty() {\n                    return Err(LixError {\n                        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                        message: format!(\"Plugin archive 
path '{path}' is invalid\"),\n                        hint: None,\n                        details: None,\n                    });\n                }\n                segments.push(segment.to_string());\n            }\n            Component::CurDir | Component::ParentDir | Component::RootDir | Component::Prefix(_) => {\n                return Err(LixError {\n                    code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                    message: format!(\n                        \"Plugin archive path '{path}' must not contain traversal or absolute components\"\n                    ),\n                    hint: None,\n                    details: None,\n                })\n            }\n        }\n    }\n\n    if segments.is_empty() {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\"Plugin archive path '{path}' is invalid\"),\n            hint: None,\n            details: None,\n        });\n    }\n\n    Ok(segments.join(\"/\"))\n}\n\nfn normalize_plugin_archive_path_for_materialization(path: &str) -> Result<String, LixError> {\n    let raw_path = Path::new(path);\n    if raw_path.is_absolute() {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\n                \"plugin materialization: archive path '{}' must be relative\",\n                path\n            ),\n            hint: None,\n            details: None,\n        });\n    }\n\n    let mut normalized = Vec::new();\n    for component in raw_path.components() {\n        match component {\n            Component::Normal(part) => normalized.push(part.to_string_lossy().to_string()),\n            Component::CurDir => {}\n            Component::ParentDir | Component::RootDir | Component::Prefix(_) => {\n                return Err(LixError {\n                    code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                    message: format!(\n                        \"plugin materialization: archive 
path '{}' must not escape the archive root\",\n                        path\n                    ),\n                    hint: None,\n                    details: None,\n                });\n            }\n        }\n    }\n\n    if normalized.is_empty() {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"plugin materialization: archive path must not be empty\".to_string(),\n            hint: None,\n            details: None,\n        });\n    }\n\n    Ok(normalized.join(\"/\"))\n}\n\nfn ensure_valid_plugin_wasm_for_install(wasm_bytes: &[u8]) -> Result<(), LixError> {\n    if wasm_bytes.is_empty() {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"Plugin wasm bytes must not be empty\".to_string(),\n            hint: None,\n            details: None,\n        });\n    }\n    if wasm_bytes.len() < 8 || !wasm_bytes.starts_with(&[0x00, 0x61, 0x73, 0x6d]) {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"Plugin wasm bytes must start with a valid wasm header\".to_string(),\n            hint: None,\n            details: None,\n        });\n    }\n    Ok(())\n}\n\nfn ensure_valid_plugin_wasm_for_materialization(bytes: &[u8]) -> Result<(), LixError> {\n    const WASM_MAGIC: &[u8; 4] = b\"\\0asm\";\n    if bytes.len() < WASM_MAGIC.len() || &bytes[..WASM_MAGIC.len()] != WASM_MAGIC {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"plugin materialization: entry file must be a valid WebAssembly module\"\n                .to_string(),\n            hint: None,\n            details: None,\n        });\n    }\n\n    Ok(())\n}\n\nfn is_symlink_mode(mode: Option<u32>) -> bool {\n    const MODE_FILE_TYPE_MASK: u32 = 0o170000;\n    const MODE_SYMLINK: u32 = 0o120000;\n    mode.is_some_and(|value| (value & MODE_FILE_TYPE_MASK) == MODE_SYMLINK)\n}\n"
  },
  {
    "path": "packages/engine/src/plugin/component.rs",
    "content": "use std::sync::Arc;\n\nuse crate::common::LixError;\nuse crate::wasm::{WasmComponentInstance, WasmLimits, WasmRuntime};\n\nuse super::InstalledPlugin;\n\n#[derive(Clone)]\npub(crate) struct CachedPluginComponent {\n    pub(crate) wasm: Vec<u8>,\n    pub(crate) instance: Arc<dyn WasmComponentInstance>,\n}\n\nconst APPLY_CHANGES_EXPORTS: &[&str] = &[\"apply-changes\", \"api#apply-changes\"];\n\npub(crate) trait PluginComponentHost {\n    fn plugin_component_cache(\n        &self,\n    ) -> &std::sync::Mutex<std::collections::BTreeMap<String, CachedPluginComponent>>;\n\n    fn wasm_runtime(&self) -> &Arc<dyn WasmRuntime>;\n}\n\npub(crate) async fn load_or_init_plugin_component(\n    host: &impl PluginComponentHost,\n    plugin: &InstalledPlugin,\n) -> Result<Arc<dyn WasmComponentInstance>, LixError> {\n    {\n        let guard = host.plugin_component_cache().lock().map_err(|_| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"plugin component cache lock poisoned\".to_string(),\n            hint: None,\n            details: None,\n        })?;\n        if let Some(cached) = guard.get(&plugin.key) {\n            if cached.wasm == plugin.wasm {\n                return Ok(cached.instance.clone());\n            }\n        }\n    }\n\n    let initialized = host\n        .wasm_runtime()\n        .init_component(plugin.wasm.clone(), WasmLimits::default())\n        .await?;\n    let mut guard = host.plugin_component_cache().lock().map_err(|_| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: \"plugin component cache lock poisoned\".to_string(),\n        hint: None,\n            details: None,\n    })?;\n    if let Some(cached) = guard.get(&plugin.key) {\n        if cached.wasm == plugin.wasm {\n            return Ok(cached.instance.clone());\n        }\n    }\n    guard.insert(\n        plugin.key.clone(),\n        CachedPluginComponent {\n            wasm: plugin.wasm.clone(),\n            
instance: initialized.clone(),\n        },\n    );\n    Ok(initialized)\n}\n\npub(crate) async fn apply_changes_with_plugin(\n    host: &impl PluginComponentHost,\n    plugin: &InstalledPlugin,\n    payload: &[u8],\n) -> Result<Vec<u8>, LixError> {\n    let instance = load_or_init_plugin_component(host, plugin).await?;\n    invoke_apply_changes_export(instance.as_ref(), payload).await\n}\n\nasync fn invoke_apply_changes_export(\n    instance: &dyn WasmComponentInstance,\n    payload: &[u8],\n) -> Result<Vec<u8>, LixError> {\n    let mut errors = Vec::new();\n    for export in APPLY_CHANGES_EXPORTS {\n        match instance.call(export, payload).await {\n            Ok(output) => return Ok(output),\n            Err(error) => errors.push(format!(\"{export}: {}\", error.message)),\n        }\n    }\n\n    Err(LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\n            \"plugin materialization: failed to call apply-changes export ({})\",\n            errors.join(\"; \")\n        ),\n        hint: None,\n        details: None,\n    })\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::plugin::{InstalledPlugin, PluginRuntime};\n    use crate::wasm::WasmRuntime;\n    use async_trait::async_trait;\n    use std::sync::atomic::{AtomicUsize, Ordering};\n\n    struct TestHost {\n        wasm_runtime: Arc<dyn WasmRuntime>,\n        plugin_component_cache:\n            std::sync::Mutex<std::collections::BTreeMap<String, CachedPluginComponent>>,\n    }\n\n    impl PluginComponentHost for TestHost {\n        fn plugin_component_cache(\n            &self,\n        ) -> &std::sync::Mutex<std::collections::BTreeMap<String, CachedPluginComponent>> {\n            &self.plugin_component_cache\n        }\n\n        fn wasm_runtime(&self) -> &Arc<dyn WasmRuntime> {\n            &self.wasm_runtime\n        }\n    }\n\n    #[derive(Default)]\n    struct CountingRuntime {\n        init_calls: Arc<AtomicUsize>,\n    }\n\n    struct 
NoopComponent;\n\n    #[async_trait(?Send)]\n    impl WasmRuntime for CountingRuntime {\n        async fn init_component(\n            &self,\n            _bytes: Vec<u8>,\n            _limits: WasmLimits,\n        ) -> Result<Arc<dyn WasmComponentInstance>, LixError> {\n            self.init_calls.fetch_add(1, Ordering::SeqCst);\n            Ok(Arc::new(NoopComponent))\n        }\n    }\n\n    #[async_trait(?Send)]\n    impl WasmComponentInstance for NoopComponent {\n        async fn call(&self, _export: &str, _input: &[u8]) -> Result<Vec<u8>, LixError> {\n            Ok(Vec::new())\n        }\n    }\n\n    #[tokio::test]\n    async fn component_cache_reinitializes_when_same_key_wasm_changes() {\n        let runtime = Arc::new(CountingRuntime::default());\n        let host = TestHost {\n            wasm_runtime: runtime.clone(),\n            plugin_component_cache: std::sync::Mutex::new(Default::default()),\n        };\n        let mut plugin = InstalledPlugin {\n            key: \"k\".to_string(),\n            runtime: PluginRuntime::WasmComponentV1,\n            api_version: \"0.1.0\".to_string(),\n            path_glob: \"*.json\".to_string(),\n            content_type: None,\n            entry: \"plugin.wasm\".to_string(),\n            manifest_json: \"{}\".to_string(),\n            wasm: vec![1],\n        };\n\n        load_or_init_plugin_component(&host, &plugin)\n            .await\n            .expect(\"first init should succeed\");\n        load_or_init_plugin_component(&host, &plugin)\n            .await\n            .expect(\"second lookup should reuse cache\");\n        assert_eq!(runtime.init_calls.load(Ordering::SeqCst), 1);\n\n        plugin.wasm = vec![2];\n        load_or_init_plugin_component(&host, &plugin)\n            .await\n            .expect(\"changed wasm should reinitialize instance\");\n        assert_eq!(runtime.init_calls.load(Ordering::SeqCst), 2);\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/plugin/install.rs",
    "content": "//! Plugin install write helpers.\n//!\n//! This module owns plugin archive parsing, registered-schema staging, and the\n//! prepared write construction needed to install a plugin into the engine.\n\nuse std::collections::BTreeMap;\n\nuse async_trait::async_trait;\nuse serde_json::{json, Value as JsonValue};\n\nuse crate::catalog::{ResolvedRelation, SurfaceRegistry};\nuse crate::common::stable_content_fingerprint_hex;\nuse crate::common::{NormalizedDirectoryPath, ParsedFilePath};\nuse crate::plugin::{\n    parse_plugin_archive_for_install, plugin_storage_archive_file_id, plugin_storage_archive_path,\n    ParsedPluginArchive, PLUGIN_STORAGE_ROOT_DIRECTORY_PATH,\n};\nuse crate::schema::{schema_key_from_definition, validate_lix_schema_definition};\nuse crate::sql::{\n    ChangeBatch, CommitPreconditions, ExpectedHead, IdempotencyKey, OptionalTextPatch, PlanEffects,\n    PlannedFilesystemDescriptor, PlannedFilesystemFile, PlannedFilesystemState, PlannedStateRow,\n    PreparedWriteOperationKind, PreparedWriteStatementKind, PublicChange, ResultContract,\n    SchemaLiveTableRequirement, SemanticEffect, WriteDiagnosticContext, WriteLane, WriteMode,\n};\nuse crate::streams::{\n    state_commit_stream_changes_from_changes, StateCommitStreamOperation,\n    StateCommitStreamRuntimeMetadata,\n};\nuse crate::transaction::{\n    PreparedPublicSurfaceRegistryEffect, PreparedPublicSurfaceRegistryMutation,\n    PreparedPublicWrite, PreparedPublicWriteContract, PreparedPublicWriteExecution,\n    PreparedPublicWriteMaterialization, PreparedPublicWritePlanArtifact,\n    PreparedResolvedWritePartition, PreparedResolvedWritePlan, PreparedWriteArtifact,\n    PreparedWriteFunctionBindings, PreparedWriteStatement,\n};\nuse crate::{LixError, Value};\n\nuse crate::transaction::WriteCommand;\nconst REGISTERED_SCHEMA_STORAGE_SCHEMA_KEY: &str = \"lix_registered_schema\";\nconst FILESYSTEM_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_file_descriptor\";\nconst 
FILESYSTEM_BINARY_BLOB_REF_SCHEMA_KEY: &str = \"lix_binary_blob_ref\";\n\n#[derive(Clone)]\npub(crate) struct PluginInstallWriteContext {\n    function_bindings: PreparedWriteFunctionBindings,\n    public_surface_registry: SurfaceRegistry,\n    target_version_id: String,\n    active_account_ids: Vec<String>,\n    origin_key: Option<String>,\n}\n\nimpl PluginInstallWriteContext {\n    pub(crate) fn new(\n        function_bindings: PreparedWriteFunctionBindings,\n        public_surface_registry: SurfaceRegistry,\n        target_version_id: impl Into<String>,\n        active_account_ids: Vec<String>,\n        origin_key: Option<String>,\n    ) -> Self {\n        Self {\n            function_bindings,\n            public_surface_registry,\n            target_version_id: target_version_id.into(),\n            active_account_ids,\n            origin_key,\n        }\n    }\n\n    fn target_version_id(&self) -> &str {\n        &self.target_version_id\n    }\n}\n\n#[async_trait(?Send)]\npub(crate) trait PluginInstallWriteExecutor {\n    fn plugin_install_write_context(&self) -> PluginInstallWriteContext;\n\n    fn stage_prepared_write_statement(&mut self, statement: WriteCommand) -> Result<(), LixError>;\n\n    async fn resolve_directory_id(\n        &mut self,\n        path: &NormalizedDirectoryPath,\n    ) -> Result<Option<String>, LixError>;\n}\n\npub(crate) async fn install_plugin_archive_with_writer(\n    archive_bytes: &[u8],\n    executor: &mut dyn PluginInstallWriteExecutor,\n) -> Result<(), LixError> {\n    let parsed = parse_plugin_archive_for_install(archive_bytes)?;\n    install_plugin_with_writer(executor, &parsed, archive_bytes).await\n}\n\npub(crate) fn prepare_registered_schema_write_statement(\n    schema: &JsonValue,\n    context: &PluginInstallWriteContext,\n) -> Result<WriteCommand, LixError> {\n    prepare_registered_schema_write_statement_from_schemas(std::slice::from_ref(schema), context)\n}\n\nasync fn install_plugin_with_writer(\n    executor: &mut 
dyn PluginInstallWriteExecutor,\n    parsed: &ParsedPluginArchive,\n    archive_bytes: &[u8],\n) -> Result<(), LixError> {\n    let plugin_install_context = executor.plugin_install_write_context();\n\n    if !parsed.schemas.is_empty() {\n        executor.stage_prepared_write_statement(\n            prepare_registered_schema_write_statement_from_schemas(\n                &parsed.schemas,\n                &plugin_install_context,\n            )?,\n        )?;\n    }\n\n    let plugin_root =\n        NormalizedDirectoryPath::from_normalized(PLUGIN_STORAGE_ROOT_DIRECTORY_PATH.to_string());\n    let plugin_directory_id = executor\n        .resolve_directory_id(&plugin_root)\n        .await?\n        .ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\n                    \"plugin storage directory '{}' is missing\",\n                    PLUGIN_STORAGE_ROOT_DIRECTORY_PATH\n                ),\n            )\n        })?;\n    executor.stage_prepared_write_statement(prepare_plugin_archive_write_statement(\n        parsed,\n        archive_bytes,\n        &plugin_directory_id,\n        &plugin_install_context,\n    )?)?;\n\n    Ok(())\n}\n\n#[derive(Clone)]\nstruct RegisteredSchemaRowSpec {\n    entity_id: String,\n    registered_schema_key: String,\n    snapshot: JsonValue,\n    schema_json: JsonValue,\n}\n\nfn prepare_registered_schema_write_statement_from_schemas(\n    schemas: &[JsonValue],\n    context: &PluginInstallWriteContext,\n) -> Result<WriteCommand, LixError> {\n    let target = require_resolved_surface(\n        &context.public_surface_registry,\n        \"lix_registered_schema_by_version\",\n    )?;\n    let schema_rows = schemas\n        .iter()\n        .map(registered_schema_row_spec_from_json)\n        .collect::<Result<Vec<_>, _>>()?;\n    let intended_post_state = schema_rows\n        .iter()\n        .map(|row| registered_schema_planned_row(row, context.target_version_id()))\n        
.collect::<Vec<_>>();\n    let changes = schema_rows\n        .iter()\n        .map(|row| PublicChange {\n            entity_id: row.entity_id.clone(),\n            schema_key: REGISTERED_SCHEMA_STORAGE_SCHEMA_KEY.to_string(),\n            file_id: None,\n            plugin_key: None,\n            snapshot_content: Some(row.snapshot.to_string()),\n            metadata: None,\n            version_id: context.target_version_id().to_string(),\n            origin_key: context.origin_key.clone(),\n        })\n        .collect::<Vec<_>>();\n    let schema_live_table_requirements = schema_rows\n        .iter()\n        .map(|row| SchemaLiveTableRequirement {\n            schema_key: row.registered_schema_key.clone(),\n            schema_definition: Some(row.schema_json.clone()),\n        })\n        .collect::<Vec<_>>();\n\n    prepare_public_tracked_write_statement(\n        context,\n        target,\n        \"lix_registered_schema_by_version\",\n        intended_post_state,\n        PlannedFilesystemState::default(),\n        changes,\n        schema_live_table_requirements,\n        PreparedPublicSurfaceRegistryEffect::ApplyMutations(\n            schema_rows\n                .iter()\n                .map(\n                    |row| PreparedPublicSurfaceRegistryMutation::UpsertRegisteredSchemaSnapshot {\n                        snapshot: row.snapshot.clone(),\n                    },\n                )\n                .collect(),\n        ),\n        \"semantic.register_schema\",\n    )\n}\n\nfn prepare_plugin_archive_write_statement(\n    parsed: &ParsedPluginArchive,\n    archive_bytes: &[u8],\n    plugin_directory_id: &str,\n    context: &PluginInstallWriteContext,\n) -> Result<WriteCommand, LixError> {\n    let target = require_resolved_surface(&context.public_surface_registry, \"lix_file_by_version\")?;\n    let archive_id = plugin_storage_archive_file_id(parsed.manifest.key.as_str());\n    let archive_path = 
plugin_storage_archive_path(parsed.manifest.key.as_str())?;\n    let parsed_path = ParsedFilePath::try_from_path(&archive_path)?;\n    let descriptor = PlannedFilesystemDescriptor {\n        directory_id: plugin_directory_id.to_string(),\n        name: parsed_path.name.clone(),\n        metadata: None,\n        hidden: false,\n    };\n    let target_version_id = context.target_version_id();\n    let filesystem_state = PlannedFilesystemState {\n        files: [(\n            (archive_id.clone(), target_version_id.to_string()),\n            PlannedFilesystemFile {\n                file_id: archive_id.clone(),\n                version_id: target_version_id.to_string(),\n                untracked: false,\n                descriptor: Some(descriptor.clone()),\n                metadata_patch: OptionalTextPatch::Unchanged,\n                data: Some(archive_bytes.to_vec()),\n                deleted: false,\n            },\n        )]\n        .into_iter()\n        .collect(),\n    };\n    let intended_post_state = vec![\n        plugin_archive_file_descriptor_row(&archive_id, target_version_id, &descriptor),\n        plugin_archive_binary_blob_ref_row(&archive_id, target_version_id, archive_bytes)?,\n    ];\n    let changes = intended_post_state\n        .iter()\n        .map(planned_row_to_public_change)\n        .collect::<Result<Vec<_>, _>>()?;\n\n    prepare_public_tracked_write_statement(\n        context,\n        target,\n        \"lix_file_by_version\",\n        intended_post_state,\n        filesystem_state,\n        changes,\n        Vec::new(),\n        PreparedPublicSurfaceRegistryEffect::None,\n        \"semantic.install_plugin_archive\",\n    )\n}\n\nfn registered_schema_row_spec_from_json(\n    schema: &JsonValue,\n) -> Result<RegisteredSchemaRowSpec, LixError> {\n    validate_lix_schema_definition(schema)?;\n    let schema_key = schema_key_from_definition(schema)?;\n    Ok(RegisteredSchemaRowSpec {\n        entity_id: schema_key.entity_id(),\n        
registered_schema_key: schema_key.schema_key,\n        snapshot: json!({ \"value\": schema }),\n        schema_json: schema.clone(),\n    })\n}\n\nfn registered_schema_planned_row(\n    row: &RegisteredSchemaRowSpec,\n    target_version_id: &str,\n) -> PlannedStateRow {\n    let mut values = BTreeMap::new();\n    values.insert(\"entity_id\".to_string(), Value::Text(row.entity_id.clone()));\n    values.insert(\n        \"schema_key\".to_string(),\n        Value::Text(REGISTERED_SCHEMA_STORAGE_SCHEMA_KEY.to_string()),\n    );\n    values.insert(\"file_id\".to_string(), Value::Null);\n    values.insert(\"plugin_key\".to_string(), Value::Null);\n    values.insert(\n        \"snapshot_content\".to_string(),\n        Value::Json(row.snapshot.clone()),\n    );\n    values.insert(\n        \"version_id\".to_string(),\n        Value::Text(target_version_id.to_string()),\n    );\n    PlannedStateRow {\n        entity_id: row.entity_id.clone(),\n        schema_key: REGISTERED_SCHEMA_STORAGE_SCHEMA_KEY.to_string(),\n        version_id: Some(target_version_id.to_string()),\n        values,\n        origin_key: None,\n        tombstone: false,\n    }\n}\n\nfn plugin_archive_file_descriptor_row(\n    archive_id: &str,\n    target_version_id: &str,\n    descriptor: &PlannedFilesystemDescriptor,\n) -> PlannedStateRow {\n    let snapshot_content = json!({\n        \"id\": archive_id,\n        \"directory_id\": descriptor.directory_id,\n        \"name\": descriptor.name,\n        \"hidden\": descriptor.hidden,\n    })\n    .to_string();\n    let mut values = BTreeMap::new();\n    values.insert(\"entity_id\".to_string(), Value::Text(archive_id.to_string()));\n    values.insert(\n        \"schema_key\".to_string(),\n        Value::Text(FILESYSTEM_DESCRIPTOR_SCHEMA_KEY.to_string()),\n    );\n    values.insert(\"file_id\".to_string(), Value::Null);\n    values.insert(\"plugin_key\".to_string(), Value::Null);\n    values.insert(\n        \"snapshot_content\".to_string(),\n        
Value::Text(snapshot_content),\n    );\n    values.insert(\n        \"version_id\".to_string(),\n        Value::Text(target_version_id.to_string()),\n    );\n    PlannedStateRow {\n        entity_id: archive_id.to_string(),\n        schema_key: FILESYSTEM_DESCRIPTOR_SCHEMA_KEY.to_string(),\n        version_id: Some(target_version_id.to_string()),\n        values,\n        origin_key: None,\n        tombstone: false,\n    }\n}\n\nfn plugin_archive_binary_blob_ref_row(\n    archive_id: &str,\n    target_version_id: &str,\n    archive_bytes: &[u8],\n) -> Result<PlannedStateRow, LixError> {\n    let size_bytes = u64::try_from(archive_bytes.len()).map_err(|_| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\n                \"plugin archive '{}' exceeds supported size range\",\n                archive_id\n            ),\n        )\n    })?;\n    let snapshot_content = json!({\n        \"id\": archive_id,\n        \"blob_hash\": stable_content_fingerprint_hex(archive_bytes),\n        \"size_bytes\": size_bytes,\n    })\n    .to_string();\n    let mut values = BTreeMap::new();\n    values.insert(\"entity_id\".to_string(), Value::Text(archive_id.to_string()));\n    values.insert(\n        \"schema_key\".to_string(),\n        Value::Text(FILESYSTEM_BINARY_BLOB_REF_SCHEMA_KEY.to_string()),\n    );\n    values.insert(\"file_id\".to_string(), Value::Text(archive_id.to_string()));\n    values.insert(\"plugin_key\".to_string(), Value::Null);\n    values.insert(\n        \"snapshot_content\".to_string(),\n        Value::Text(snapshot_content),\n    );\n    values.insert(\n        \"version_id\".to_string(),\n        Value::Text(target_version_id.to_string()),\n    );\n    Ok(PlannedStateRow {\n        entity_id: archive_id.to_string(),\n        schema_key: FILESYSTEM_BINARY_BLOB_REF_SCHEMA_KEY.to_string(),\n        version_id: Some(target_version_id.to_string()),\n        values,\n        origin_key: None,\n        tombstone: false,\n    
})\n}\n\nfn prepare_public_tracked_write_statement(\n    context: &PluginInstallWriteContext,\n    target: ResolvedRelation,\n    relation_name: &str,\n    intended_post_state: Vec<PlannedStateRow>,\n    filesystem_state: PlannedFilesystemState,\n    changes: Vec<PublicChange>,\n    schema_live_table_requirements: Vec<SchemaLiveTableRequirement>,\n    public_surface_registry_effect: PreparedPublicSurfaceRegistryEffect,\n    idempotency_purpose: &str,\n) -> Result<WriteCommand, LixError> {\n    let semantic_effects =\n        semantic_plan_effects_from_changes(&changes, context.origin_key.as_deref())?;\n    let write_payload = json!({\n        \"rows\": intended_post_state.iter().map(summarize_planned_row).collect::<Vec<_>>(),\n        \"changes\": changes.iter().map(summarize_change).collect::<Vec<_>>(),\n        \"filesystem_files\": filesystem_state.files.keys().cloned().collect::<Vec<_>>(),\n    });\n    WriteCommand::build(\n        PreparedWriteStatement {\n            statement_kind: PreparedWriteStatementKind::Write,\n            result_contract: ResultContract::DmlNoReturning,\n            artifact: PreparedWriteArtifact::PublicWrite(PreparedPublicWrite {\n                contract: PreparedPublicWriteContract {\n                    operation_kind: PreparedWriteOperationKind::Insert,\n                    target,\n                    on_conflict_action: None,\n                    requested_version_id: Some(context.target_version_id().to_string()),\n                    active_account_ids: context.active_account_ids.clone(),\n                    origin_key: context.origin_key.clone(),\n                    resolved_write_plan: Some(PreparedResolvedWritePlan {\n                        partitions: vec![PreparedResolvedWritePartition {\n                            execution_mode: WriteMode::Tracked,\n                            authoritative_pre_state_rows: Vec::new(),\n                            intended_post_state,\n                            
filesystem_state,\n                        }],\n                    }),\n                },\n                execution: PreparedPublicWritePlanArtifact::Materialize(\n                    PreparedPublicWriteMaterialization {\n                        partitions: vec![PreparedPublicWriteExecution {\n                            execution_mode: WriteMode::Tracked,\n                            intended_post_state: Vec::new(),\n                            schema_live_table_requirements,\n                            change_batch: Some(ChangeBatch {\n                                changes: changes.clone(),\n                                write_lane: WriteLane::GlobalAdmin,\n                                origin_key: context.origin_key.clone(),\n                                semantic_effects: semantic_effect_markers_from_changes(&changes),\n                            }),\n                            create_preconditions: Some(CommitPreconditions {\n                                write_lane: WriteLane::GlobalAdmin,\n                                expected_head: ExpectedHead::CurrentHead,\n                                idempotency_key: semantic_idempotency_key(\n                                    idempotency_purpose,\n                                    &write_payload,\n                                )?,\n                            }),\n                            semantic_effects,\n                            persist_filesystem_payloads_before_write: false,\n                        }],\n                    },\n                ),\n            }),\n            diagnostic_context: WriteDiagnosticContext::new(vec![relation_name.to_string()]),\n            public_surface_registry_effect,\n        },\n        &context.function_bindings,\n    )\n}\n\nfn semantic_plan_effects_from_changes(\n    changes: &[PublicChange],\n    origin_key: Option<&str>,\n) -> Result<PlanEffects, LixError> {\n    Ok(PlanEffects {\n        state_commit_stream_changes: 
state_commit_stream_changes_from_changes(\n            changes,\n            StateCommitStreamOperation::Insert,\n            StateCommitStreamRuntimeMetadata::from_runtime_origin_key(origin_key),\n        )?,\n        ..PlanEffects::default()\n    })\n}\n\nfn semantic_effect_markers_from_changes(changes: &[PublicChange]) -> Vec<SemanticEffect> {\n    changes\n        .iter()\n        .map(|change| SemanticEffect {\n            effect_key: \"state.upsert\".to_string(),\n            target: format!(\n                \"{}:{}@{}\",\n                change.schema_key, change.entity_id, change.version_id\n            ),\n        })\n        .collect()\n}\n\nfn planned_row_to_public_change(row: &PlannedStateRow) -> Result<PublicChange, LixError> {\n    Ok(PublicChange {\n        entity_id: row.entity_id.clone(),\n        schema_key: row.schema_key.clone(),\n        file_id: planned_row_text_value(row, \"file_id\"),\n        plugin_key: planned_row_text_value(row, \"plugin_key\"),\n        snapshot_content: if row.tombstone {\n            None\n        } else {\n            planned_row_json_text_value(row, \"snapshot_content\")\n        },\n        metadata: planned_row_json_text_value(row, \"metadata\"),\n        version_id: row\n            .version_id\n            .clone()\n            .or_else(|| planned_row_text_value(row, \"version_id\"))\n            .ok_or_else(|| {\n                LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    \"semantic tracked write requires a concrete version_id\",\n                )\n            })?,\n        origin_key: row.origin_key.clone(),\n    })\n}\n\nfn planned_row_text_value(row: &PlannedStateRow, key: &str) -> Option<String> {\n    match row.values.get(key) {\n        Some(Value::Text(value)) => Some(value.clone()),\n        Some(Value::Integer(value)) => Some(value.to_string()),\n        Some(Value::Boolean(value)) => Some(value.to_string()),\n        Some(Value::Real(value)) => 
Some(value.to_string()),\n        _ => None,\n    }\n}\n\nfn planned_row_json_text_value(row: &PlannedStateRow, key: &str) -> Option<String> {\n    match row.values.get(key) {\n        Some(Value::Json(value)) => Some(value.to_string()),\n        _ => planned_row_text_value(row, key),\n    }\n}\n\nfn semantic_idempotency_key(\n    purpose: &str,\n    payload: &JsonValue,\n) -> Result<IdempotencyKey, LixError> {\n    let bytes = serde_json::to_vec(payload).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"semantic idempotency payload serialization failed: {error}\"),\n        )\n    })?;\n    Ok(IdempotencyKey(\n        json!({\n            \"purpose\": purpose,\n            \"fingerprint\": stable_content_fingerprint_hex(&bytes),\n        })\n        .to_string(),\n    ))\n}\n\nfn summarize_change(change: &PublicChange) -> JsonValue {\n    json!({\n        \"entity_id\": change.entity_id,\n        \"schema_key\": change.schema_key,\n        \"file_id\": change.file_id,\n        \"plugin_key\": change.plugin_key,\n        \"version_id\": change.version_id,\n        \"origin_key\": change.origin_key,\n        \"snapshot_content\": change.snapshot_content.as_ref().map(|snapshot| {\n            stable_content_fingerprint_hex(snapshot.as_bytes())\n        }),\n    })\n}\n\nfn summarize_planned_row(row: &PlannedStateRow) -> JsonValue {\n    json!({\n        \"entity_id\": row.entity_id,\n        \"schema_key\": row.schema_key,\n        \"version_id\": row.version_id,\n        \"tombstone\": row.tombstone,\n        \"values\": row\n            .values\n            .iter()\n            .map(|(key, value)| {\n                (\n                    key.clone(),\n                    match value {\n                        Value::Null => json!({ \"kind\": \"null\" }),\n                        Value::Text(text) => json!({\n                            \"kind\": \"text\",\n                            \"sha256\": 
stable_content_fingerprint_hex(text.as_bytes()),\n                            \"len\": text.len(),\n                        }),\n                        Value::Json(value) => {\n                            let encoded = value.to_string();\n                            json!({\n                                \"kind\": \"json\",\n                                \"sha256\": stable_content_fingerprint_hex(encoded.as_bytes()),\n                                \"len\": encoded.len(),\n                            })\n                        }\n                        Value::Blob(bytes) => json!({\n                            \"kind\": \"blob\",\n                            \"sha256\": stable_content_fingerprint_hex(bytes),\n                            \"len\": bytes.len(),\n                        }),\n                        Value::Integer(value) => json!({ \"kind\": \"integer\", \"value\": value }),\n                        Value::Real(value) => json!({ \"kind\": \"real\", \"value\": value }),\n                        Value::Boolean(value) => json!({ \"kind\": \"boolean\", \"value\": value }),\n                    },\n                )\n            })\n            .collect::<serde_json::Map<_, _>>(),\n    })\n}\n\nfn require_resolved_surface(\n    public_surface_registry: &SurfaceRegistry,\n    relation_name: &str,\n) -> Result<ResolvedRelation, LixError> {\n    public_surface_registry\n        .bind_relation_name(relation_name)\n        .ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"public surface '{relation_name}' is not registered\"),\n            )\n        })\n}\n"
  },
  {
    "path": "packages/engine/src/plugin/manifest.rs",
    "content": "use std::sync::OnceLock;\n\nuse globset::{Glob, GlobBuilder};\nuse jsonschema::{Draft, JSONSchema};\nuse serde::{Deserialize, Serialize};\nuse serde_json::Value as JsonValue;\n\nuse crate::LixError;\n\nstatic PLUGIN_MANIFEST_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic PLUGIN_MANIFEST_VALIDATOR: OnceLock<Result<JSONSchema, LixError>> = OnceLock::new();\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]\n#[serde(rename_all = \"kebab-case\")]\npub enum PluginRuntime {\n    WasmComponentV1,\n}\n\n#[allow(dead_code)]\nimpl PluginRuntime {\n    pub fn as_str(self) -> &'static str {\n        match self {\n            Self::WasmComponentV1 => \"wasm-component-v1\",\n        }\n    }\n\n    pub fn from_str(value: &str) -> Option<Self> {\n        match value {\n            \"wasm-component-v1\" => Some(Self::WasmComponentV1),\n            _ => None,\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]\npub struct PluginManifest {\n    pub key: String,\n    pub runtime: PluginRuntime,\n    pub api_version: String,\n    #[serde(rename = \"match\")]\n    pub file_match: PluginMatch,\n    #[serde(default)]\n    pub detect_changes: Option<DetectChangesConfig>,\n    pub entry: String,\n    pub schemas: Vec<String>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]\npub struct PluginMatch {\n    pub path_glob: String,\n    #[serde(default)]\n    pub content_type: Option<PluginContentType>,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]\n#[serde(rename_all = \"snake_case\")]\npub enum PluginContentType {\n    Text,\n    Binary,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct ValidatedPluginManifest {\n    pub manifest: PluginManifest,\n    pub normalized_json: String,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]\npub struct DetectChangesConfig {\n    #[serde(default)]\n    pub state_context: 
Option<DetectStateContextConfig>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]\npub struct DetectStateContextConfig {\n    #[serde(default)]\n    pub include_active_state: Option<bool>,\n    #[serde(default)]\n    pub columns: Option<Vec<StateContextColumn>>,\n}\n\n#[allow(dead_code)]\nimpl DetectStateContextConfig {\n    pub fn includes_active_state(&self) -> bool {\n        self.include_active_state.unwrap_or(false)\n    }\n\n    pub fn resolved_columns_or_default(&self) -> Option<Vec<StateContextColumn>> {\n        if !self.includes_active_state() {\n            return None;\n        }\n        Some(\n            self.columns\n                .clone()\n                .unwrap_or_else(|| StateContextColumn::default_active_state_columns().to_vec()),\n        )\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]\n#[serde(rename_all = \"snake_case\")]\npub enum StateContextColumn {\n    EntityId,\n    SchemaKey,\n    SchemaVersion,\n    SnapshotContent,\n    FileId,\n    PluginKey,\n    VersionId,\n    ChangeId,\n    Metadata,\n    CreatedAt,\n    UpdatedAt,\n}\n\n#[allow(dead_code)]\nimpl StateContextColumn {\n    pub const fn default_active_state_columns() -> &'static [StateContextColumn] {\n        &[\n            StateContextColumn::EntityId,\n            StateContextColumn::SchemaKey,\n            StateContextColumn::SchemaVersion,\n            StateContextColumn::SnapshotContent,\n        ]\n    }\n}\n\npub fn parse_plugin_manifest_json(raw: &str) -> Result<ValidatedPluginManifest, LixError> {\n    let manifest_json: JsonValue = serde_json::from_str(raw).map_err(|error| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\"Plugin manifest must be valid JSON: {error}\"),\n        hint: None,\n        details: None,\n    })?;\n\n    validate_plugin_manifest_json(&manifest_json)?;\n\n    let manifest: PluginManifest =\n        
serde_json::from_value(manifest_json.clone()).map_err(|error| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\"Plugin manifest does not match expected shape: {error}\"),\n            hint: None,\n            details: None,\n        })?;\n    validate_path_glob(&manifest.file_match.path_glob)?;\n\n    let normalized_json = serde_json::to_string(&manifest_json).map_err(|error| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\"Failed to normalize plugin manifest JSON: {error}\"),\n        hint: None,\n        details: None,\n    })?;\n\n    Ok(ValidatedPluginManifest {\n        manifest,\n        normalized_json,\n    })\n}\n\npub fn select_best_glob_match<'a, T, C: Copy + PartialEq>(\n    path: &str,\n    file_content_type: Option<C>,\n    candidates: &'a [T],\n    glob: impl Fn(&T) -> &str,\n    required_content_type: impl Fn(&T) -> Option<C>,\n) -> Option<&'a T> {\n    let mut selected: Option<&T> = None;\n    let mut selected_rank: Option<(u8, i32)> = None;\n\n    for candidate in candidates {\n        let pattern = glob(candidate);\n        if !glob_matches_path(pattern, path) {\n            continue;\n        }\n        if let (Some(actual_type), Some(required_type)) =\n            (file_content_type, required_content_type(candidate))\n        {\n            if actual_type != required_type {\n                continue;\n            }\n        }\n\n        let rank = glob_specificity_rank(pattern);\n        match selected_rank {\n            None => {\n                selected = Some(candidate);\n                selected_rank = Some(rank);\n            }\n            Some(existing_rank) if rank > existing_rank => {\n                selected = Some(candidate);\n                selected_rank = Some(rank);\n            }\n            _ => {}\n        }\n    }\n\n    selected\n}\n\npub fn glob_matches_path(glob: &str, path: &str) -> bool {\n    let normalized_glob = 
glob.trim();\n    let normalized_path = path.trim();\n    if normalized_glob.is_empty() || normalized_path.is_empty() {\n        return false;\n    }\n    if is_catch_all_glob(normalized_glob) {\n        return true;\n    }\n\n    GlobBuilder::new(normalized_glob)\n        .literal_separator(false)\n        .case_insensitive(true)\n        .build()\n        .map(|compiled| compiled.compile_matcher().is_match(normalized_path))\n        .unwrap_or(false)\n}\n\nfn validate_path_glob(glob: &str) -> Result<(), LixError> {\n    Glob::new(glob).map_err(|error| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: format!(\"Invalid plugin manifest: match.path_glob is invalid: {error}\"),\n        hint: None,\n        details: None,\n    })?;\n    Ok(())\n}\n\nfn validate_plugin_manifest_json(manifest: &JsonValue) -> Result<(), LixError> {\n    let validator = plugin_manifest_validator()?;\n    if let Err(errors) = validator.validate(manifest) {\n        let details = format_validation_errors(errors);\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\"Invalid plugin manifest: {details}\"),\n            hint: None,\n            details: None,\n        });\n    }\n    Ok(())\n}\n\nfn glob_specificity_rank(glob: &str) -> (u8, i32) {\n    let normalized = glob.trim();\n    if is_catch_all_glob(normalized) {\n        return (0, i32::MIN);\n    }\n    (1, glob_specificity_score(normalized))\n}\n\nfn glob_specificity_score(glob: &str) -> i32 {\n    let mut literal_chars = 0i32;\n    let mut wildcard_chars = 0i32;\n    for ch in glob.chars() {\n        match ch {\n            '*' | '?' 
| '[' | ']' | '{' | '}' => wildcard_chars += 1,\n            _ => literal_chars += 1,\n        }\n    }\n    literal_chars - wildcard_chars\n}\n\nfn is_catch_all_glob(glob: &str) -> bool {\n    glob == \"*\" || glob == \"**/*\" || glob == \"**\"\n}\n\nfn plugin_manifest_validator() -> Result<&'static JSONSchema, LixError> {\n    let result = PLUGIN_MANIFEST_VALIDATOR.get_or_init(|| {\n        let mut options = JSONSchema::options();\n        options.with_meta_schemas();\n        if plugin_manifest_schema()\n            .get(\"$schema\")\n            .and_then(JsonValue::as_str)\n            .is_some_and(|url| url == \"https://json-schema.org/draft/2020-12/schema\")\n        {\n            options.with_draft(Draft::Draft202012);\n        }\n\n        options\n            .compile(plugin_manifest_schema())\n            .map_err(|error| LixError {\n                code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                message: format!(\"Failed to compile plugin manifest schema: {error}\"),\n                hint: None,\n                details: None,\n            })\n    });\n\n    match result {\n        Ok(schema) => Ok(schema),\n        Err(error) => Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: error.message.clone(),\n            hint: None,\n            details: None,\n        }),\n    }\n}\n\nfn plugin_manifest_schema() -> &'static JsonValue {\n    PLUGIN_MANIFEST_SCHEMA.get_or_init(|| {\n        let raw = include_str!(\"./plugin_manifest.schema.json\");\n        serde_json::from_str(raw).expect(\"plugin_manifest.schema.json must be valid JSON\")\n    })\n}\n\nfn format_validation_errors<'a>(\n    errors: impl Iterator<Item = jsonschema::ValidationError<'a>>,\n) -> String {\n    let mut parts = Vec::new();\n    for error in errors {\n        let path = error.instance_path.to_string();\n        let message = error.to_string();\n        if path.is_empty() {\n            parts.push(message);\n        } else {\n            
parts.push(format!(\"{path} {message}\"));\n        }\n    }\n    if parts.is_empty() {\n        \"Unknown validation error\".to_string()\n    } else {\n        parts.join(\"; \")\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{\n        parse_plugin_manifest_json, DetectStateContextConfig, PluginContentType, StateContextColumn,\n    };\n\n    #[test]\n    fn resolved_columns_returns_none_when_active_state_is_not_enabled() {\n        let config = DetectStateContextConfig {\n            include_active_state: None,\n            columns: None,\n        };\n\n        assert_eq!(config.resolved_columns_or_default(), None);\n    }\n\n    #[test]\n    fn resolved_columns_uses_defaults_when_columns_are_omitted() {\n        let config = DetectStateContextConfig {\n            include_active_state: Some(true),\n            columns: None,\n        };\n\n        assert_eq!(\n            config.resolved_columns_or_default(),\n            Some(StateContextColumn::default_active_state_columns().to_vec())\n        );\n    }\n\n    #[test]\n    fn resolved_columns_uses_explicit_column_selection() {\n        let config = DetectStateContextConfig {\n            include_active_state: Some(true),\n            columns: Some(vec![\n                StateContextColumn::EntityId,\n                StateContextColumn::SchemaKey,\n            ]),\n        };\n\n        assert_eq!(\n            config.resolved_columns_or_default(),\n            Some(vec![\n                StateContextColumn::EntityId,\n                StateContextColumn::SchemaKey\n            ])\n        );\n    }\n\n    #[test]\n    fn parses_valid_manifest() {\n        let validated = parse_plugin_manifest_json(\n            r#\"{\n                \"key\":\"plugin_json\",\n                \"runtime\":\"wasm-component-v1\",\n                \"api_version\":\"0.1.0\",\n                \"match\":{\"path_glob\":\"*.json\"},\n                \"entry\":\"plugin.wasm\",\n                
\"schemas\":[\"schema/default.json\"]\n            }\"#,\n        )\n        .expect(\"manifest should parse\");\n\n        assert_eq!(validated.manifest.key, \"plugin_json\");\n        assert_eq!(validated.manifest.runtime.as_str(), \"wasm-component-v1\");\n        assert_eq!(validated.manifest.entry, \"plugin.wasm\");\n    }\n\n    #[test]\n    fn rejects_invalid_manifest() {\n        let err = parse_plugin_manifest_json(\n            r#\"{\n                \"runtime\":\"wasm-component-v1\",\n                \"api_version\":\"0.1.0\",\n                \"match\":{\"path_glob\":\"*.json\"},\n                \"entry\":\"plugin.wasm\",\n                \"schemas\":[\"schema/default.json\"]\n            }\"#,\n        )\n        .expect_err(\"manifest should be invalid\");\n\n        assert!(err.message.contains(\"Invalid plugin manifest\"));\n        assert!(err.message.contains(\"key\"));\n    }\n\n    #[test]\n    fn rejects_invalid_path_glob() {\n        let err = parse_plugin_manifest_json(\n            r#\"{\n                \"key\":\"plugin_markdown\",\n                \"runtime\":\"wasm-component-v1\",\n                \"api_version\":\"0.1.0\",\n                \"match\":{\"path_glob\":\"*.{md,mdx\"},\n                \"entry\":\"plugin.wasm\",\n                \"schemas\":[\"schema/default.json\"]\n            }\"#,\n        )\n        .expect_err(\"invalid glob should fail\");\n\n        assert!(err.message.contains(\"match.path_glob\"));\n    }\n\n    #[test]\n    fn parses_manifest_with_content_type_match_filter() {\n        let validated = parse_plugin_manifest_json(\n            r#\"{\n                \"key\":\"plugin_text\",\n                \"runtime\":\"wasm-component-v1\",\n                \"api_version\":\"0.1.0\",\n                \"match\":{\"path_glob\":\"**/*\", \"content_type\":\"text\"},\n                \"entry\":\"plugin.wasm\",\n                \"schemas\":[\"schema/default.json\"]\n            }\"#,\n        )\n        .expect(\"manifest 
should parse\");\n\n        assert_eq!(\n            validated.manifest.file_match.content_type,\n            Some(PluginContentType::Text)\n        );\n    }\n\n    #[test]\n    fn parses_manifest_with_active_state_columns() {\n        let validated = parse_plugin_manifest_json(\n            r#\"{\n                \"key\":\"plugin_markdown\",\n                \"runtime\":\"wasm-component-v1\",\n                \"api_version\":\"0.1.0\",\n                \"match\":{\"path_glob\":\"*.{md,mdx}\"},\n                \"entry\":\"plugin.wasm\",\n                \"schemas\":[\"schema/default.json\"],\n                \"detect_changes\": {\n                    \"state_context\": {\n                        \"include_active_state\": true,\n                        \"columns\": [\"entity_id\", \"schema_key\", \"snapshot_content\"]\n                    }\n                }\n            }\"#,\n        )\n        .expect(\"manifest should parse\");\n\n        let state_context = validated\n            .manifest\n            .detect_changes\n            .expect(\"detect_changes should be present\")\n            .state_context\n            .expect(\"state_context should be present\");\n\n        assert_eq!(state_context.include_active_state, Some(true));\n        assert_eq!(\n            state_context.columns,\n            Some(vec![\n                StateContextColumn::EntityId,\n                StateContextColumn::SchemaKey,\n                StateContextColumn::SnapshotContent\n            ])\n        );\n    }\n\n    #[test]\n    fn parses_manifest_with_active_state_and_default_columns() {\n        let validated = parse_plugin_manifest_json(\n            r#\"{\n                \"key\":\"plugin_markdown\",\n                \"runtime\":\"wasm-component-v1\",\n                \"api_version\":\"0.1.0\",\n                \"match\":{\"path_glob\":\"*.md\"},\n                \"entry\":\"plugin.wasm\",\n                \"schemas\":[\"schema/default.json\"],\n                
\"detect_changes\": {\n                    \"state_context\": {\n                        \"include_active_state\": true\n                    }\n                }\n            }\"#,\n        )\n        .expect(\"manifest should parse\");\n\n        let state_context = validated\n            .manifest\n            .detect_changes\n            .expect(\"detect_changes should be present\")\n            .state_context\n            .expect(\"state_context should be present\");\n\n        assert_eq!(\n            state_context.resolved_columns_or_default(),\n            Some(StateContextColumn::default_active_state_columns().to_vec())\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/plugin/materializer.rs",
    "content": "use std::collections::BTreeSet;\nuse std::sync::{Arc, RwLock};\n\nuse async_trait::async_trait;\n\nuse crate::common::LixError;\nuse crate::live_state::{list_installed_plugin_archive_refs, PluginArchiveRef};\nuse crate::Backend;\n\nuse super::component::{apply_changes_with_plugin, PluginComponentHost};\nuse super::{\n    load_installed_plugin_from_archive_bytes, plugin_key_from_archive_path, PluginContentType,\n    PluginRuntime,\n};\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct InstalledPlugin {\n    pub key: String,\n    pub runtime: PluginRuntime,\n    pub api_version: String,\n    pub path_glob: String,\n    pub content_type: Option<PluginContentType>,\n    pub entry: String,\n    pub manifest_json: String,\n    pub wasm: Vec<u8>,\n}\n\n#[async_trait(?Send)]\npub trait FilesystemPluginMaterializer {\n    async fn load_installed_plugins(&self) -> Result<Vec<InstalledPlugin>, LixError>;\n\n    async fn apply_plugin_changes(\n        &self,\n        plugin: &InstalledPlugin,\n        payload: &[u8],\n    ) -> Result<Vec<u8>, LixError>;\n}\n\npub(crate) trait PluginMaterializationHost: PluginComponentHost {\n    fn plugin_backend(&self) -> &Arc<dyn Backend + Send + Sync>;\n\n    fn installed_plugins_cache(&self) -> &RwLock<Option<Vec<InstalledPlugin>>>;\n}\n\npub(crate) async fn load_installed_plugins_with_runtime_cache(\n    host: &impl PluginMaterializationHost,\n) -> Result<Vec<InstalledPlugin>, LixError> {\n    if let Some(cached) = host\n        .installed_plugins_cache()\n        .read()\n        .map_err(|_| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"installed plugin cache lock poisoned\".to_string(),\n            hint: None,\n            details: None,\n        })?\n        .clone()\n    {\n        return Ok(cached);\n    }\n\n    let plugins = load_installed_plugins_from_backend(host).await?;\n    let mut guard = host\n        .installed_plugins_cache()\n        .write()\n        
.map_err(|_| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"installed plugin cache lock poisoned\".to_string(),\n            hint: None,\n            details: None,\n        })?;\n    *guard = Some(plugins.clone());\n    Ok(plugins)\n}\n\npub(crate) async fn load_installed_plugins_from_backend(\n    host: &impl PluginMaterializationHost,\n) -> Result<Vec<InstalledPlugin>, LixError> {\n    load_installed_plugins_from_backend_state(host.plugin_backend().as_ref()).await\n}\n\npub(crate) async fn load_installed_plugins_from_backend_state(\n    backend: &dyn Backend,\n) -> Result<Vec<InstalledPlugin>, LixError> {\n    let archive_refs = list_installed_plugin_archive_refs(backend).await?;\n    let mut plugins = Vec::with_capacity(archive_refs.len());\n    for archive_ref in archive_refs {\n        plugins.push(\n            load_installed_plugin_from_archive_ref_with_backend(backend, &archive_ref).await?,\n        );\n    }\n    Ok(plugins)\n}\n\npub(crate) async fn load_installed_plugin_from_archive_ref_with_backend(\n    backend: &dyn Backend,\n    archive_ref: &PluginArchiveRef,\n) -> Result<InstalledPlugin, LixError> {\n    let Some(plugin_key) = plugin_key_from_archive_path(&archive_ref.path) else {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\n                \"plugin materialization: unsupported plugin archive path '{}'\",\n                archive_ref.path\n            ),\n            hint: None,\n            details: None,\n        });\n    };\n    let binary_cas = crate::binary_cas::BinaryCasContext::new();\n    let mut reader = binary_cas.reader(backend);\n    let archive_hash = crate::binary_cas::BlobHash::from_hex(&archive_ref.blob_hash)?;\n    let archive_bytes = reader\n        .load_bytes_many(&[archive_hash])\n        .await?\n        .into_vec()\n        .into_iter()\n        .next()\n        .flatten()\n        .ok_or_else(|| LixError {\n     
       code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\n                \"plugin materialization: missing plugin archive blob '{}' for file '{}' ({})\",\n                archive_ref.blob_hash, archive_ref.path, archive_ref.file_id\n            ),\n            hint: None,\n            details: None,\n        })?;\n    if archive_bytes.is_empty() {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\n                \"plugin materialization: archive '{}' is empty\",\n                archive_ref.path\n            ),\n            hint: None,\n            details: None,\n        });\n    }\n    load_installed_plugin_from_archive_bytes(&plugin_key, &archive_ref.path, &archive_bytes)\n}\n\npub(crate) async fn list_installed_plugin_manifest_keys(\n    backend: &dyn Backend,\n) -> Result<BTreeSet<String>, LixError> {\n    Ok(load_installed_plugins_from_backend_state(backend)\n        .await?\n        .into_iter()\n        .map(|plugin| plugin.key)\n        .collect())\n}\n\n#[allow(dead_code)]\npub(crate) async fn installed_plugin_manifest_key_exists(\n    backend: &dyn Backend,\n    plugin_key: &str,\n) -> Result<bool, LixError> {\n    Ok(list_installed_plugin_manifest_keys(backend)\n        .await?\n        .contains(plugin_key))\n}\n\npub(crate) fn invalidate_installed_plugins_cache(\n    host: &impl PluginMaterializationHost,\n) -> Result<(), LixError> {\n    let mut guard = host\n        .installed_plugins_cache()\n        .write()\n        .map_err(|_| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"installed plugin cache lock poisoned\".to_string(),\n            hint: None,\n            details: None,\n        })?;\n    *guard = None;\n    let mut component_guard = host.plugin_component_cache().lock().map_err(|_| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: \"plugin component cache lock 
poisoned\".to_string(),\n        hint: None,\n        details: None,\n    })?;\n    component_guard.clear();\n    Ok(())\n}\n\n#[async_trait(?Send)]\nimpl<T> FilesystemPluginMaterializer for T\nwhere\n    T: PluginMaterializationHost,\n{\n    async fn load_installed_plugins(&self) -> Result<Vec<InstalledPlugin>, LixError> {\n        load_installed_plugins_with_runtime_cache(self).await\n    }\n\n    async fn apply_plugin_changes(\n        &self,\n        plugin: &InstalledPlugin,\n        payload: &[u8],\n    ) -> Result<Vec<u8>, LixError> {\n        apply_changes_with_plugin(self, plugin, payload).await\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::binary_cas::codec::{\n        binary_blob_hash_bytes, encode_binary_cas_chunk, encode_binary_cas_manifest,\n        encode_binary_cas_manifest_chunk, BinaryCasManifest, BinaryChunkCodec,\n    };\n    use crate::binary_cas::kv::{\n        BINARY_CAS_CHUNK_NAMESPACE, BINARY_CAS_MANIFEST_CHUNK_NAMESPACE,\n        BINARY_CAS_MANIFEST_NAMESPACE,\n    };\n    use crate::{\n        BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup,\n        BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest,\n        BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch,\n        BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePageBuilder,\n    };\n    use async_trait::async_trait;\n    use std::io::{Cursor, Write};\n    use zip::write::SimpleFileOptions;\n    use zip::{CompressionMethod, ZipWriter};\n\n    struct InstalledPluginLookupBackend {\n        archive_bytes: Vec<u8>,\n    }\n\n    struct PluginLookupTransaction {\n        archive_bytes: Vec<u8>,\n    }\n\n    #[async_trait]\n    impl Backend for InstalledPluginLookupBackend {\n        async fn begin_read_transaction(\n            &self,\n        ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n            
Ok(Box::new(PluginLookupTransaction {\n                archive_bytes: self.archive_bytes.clone(),\n            }))\n        }\n\n        async fn begin_write_transaction(\n            &self,\n        ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n            Ok(Box::new(PluginLookupTransaction {\n                archive_bytes: self.archive_bytes.clone(),\n            }))\n        }\n    }\n\n    #[async_trait]\n    impl BackendReadTransaction for PluginLookupTransaction {\n        async fn get_values(\n            &mut self,\n            request: BackendKvGetRequest,\n        ) -> Result<BackendKvValueBatch, LixError> {\n            let mut groups = Vec::with_capacity(request.groups.len());\n            for group in request.groups {\n                let namespace = group.namespace.clone();\n                let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0);\n                let mut present = Vec::with_capacity(group.keys.len());\n                for key in group.keys {\n                    if let Some(value) = test_kv_get(&self.archive_bytes, &group.namespace, &key)? {\n                        values.push(&value);\n                        present.push(true);\n                    } else {\n                        values.push(&[]);\n                        present.push(false);\n                    }\n                }\n                groups.push(BackendKvValueGroup::new(namespace, values.finish(), present));\n            }\n            Ok(BackendKvValueBatch { groups })\n        }\n\n        async fn exists_many(\n            &mut self,\n            request: BackendKvGetRequest,\n        ) -> Result<BackendKvExistsBatch, LixError> {\n            let mut groups = Vec::with_capacity(request.groups.len());\n            for group in request.groups {\n                let namespace = group.namespace.clone();\n                let exists = group\n                    .keys\n                    .iter()\n                    .map(|key| test_kv_get(&self.archive_bytes, &group.namespace, key))\n                    .collect::<Result<Vec<_>, LixError>>()?\n                    .into_iter()\n                    .map(|value| value.is_some())\n                    .collect();\n                groups.push(BackendKvExistsGroup { namespace, exists });\n            }\n            Ok(BackendKvExistsBatch { groups })\n        }\n\n        async fn scan_keys(\n            &mut self,\n            request: BackendKvScanRequest,\n        ) -> Result<BackendKvKeyPage, LixError> {\n            let entries = test_kv_scan(&self.archive_bytes, request)?;\n            Ok(BackendKvKeyPage {\n                keys: entries.keys,\n                resume_after: entries.resume_after,\n            })\n        }\n\n        async fn scan_values(\n            &mut self,\n            request: BackendKvScanRequest,\n        ) -> Result<BackendKvValuePage, LixError> {\n            let entries = test_kv_scan(&self.archive_bytes, request)?;\n            Ok(BackendKvValuePage {\n                values: entries.values,\n             
   resume_after: entries.resume_after,\n            })\n        }\n\n        async fn scan_entries(\n            &mut self,\n            request: BackendKvScanRequest,\n        ) -> Result<BackendKvEntryPage, LixError> {\n            test_kv_scan(&self.archive_bytes, request)\n        }\n\n        async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n            Ok(())\n        }\n    }\n\n    #[async_trait]\n    impl BackendWriteTransaction for PluginLookupTransaction {\n        async fn write_kv_batch(\n            &mut self,\n            _batch: BackendKvWriteBatch,\n        ) -> Result<BackendKvWriteStats, LixError> {\n            Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"plugin lookup test backend is read-only\",\n            ))\n        }\n\n        async fn commit(self: Box<Self>) -> Result<(), LixError> {\n            Ok(())\n        }\n    }\n\n    fn test_kv_get(\n        archive_bytes: &[u8],\n        namespace: &str,\n        key: &[u8],\n    ) -> Result<Option<Vec<u8>>, LixError> {\n        match (namespace, key) {\n            (BINARY_CAS_MANIFEST_NAMESPACE, key)\n                if key == binary_blob_hash_bytes(archive_bytes).as_slice() =>\n            {\n                Ok(Some(encode_binary_cas_manifest(\n                    &BinaryCasManifest::Chunked {\n                        size_bytes: archive_bytes.len() as u64,\n                        chunk_count: 1,\n                    },\n                )))\n            }\n            (BINARY_CAS_CHUNK_NAMESPACE, key)\n                if key == binary_blob_hash_bytes(archive_bytes).as_slice() =>\n            {\n                Ok(Some(encode_binary_cas_chunk(\n                    BinaryChunkCodec::Raw,\n                    archive_bytes.len() as u64,\n                    archive_bytes,\n                )))\n            }\n            _ => Ok(None),\n        }\n    }\n\n    fn test_kv_scan(\n        archive_bytes: &[u8],\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, 
LixError> {\n        if request.namespace != BINARY_CAS_MANIFEST_CHUNK_NAMESPACE {\n            return Ok(BackendKvEntryPage {\n                keys: BytePageBuilder::new().finish(),\n                values: BytePageBuilder::new().finish(),\n                resume_after: None,\n            });\n        }\n        // The archive is stored as a single raw chunk, so the chunk hash\n        // equals the whole-blob hash.\n        let blob_hash = binary_blob_hash_bytes(archive_bytes);\n        let mut key = blob_hash.to_vec();\n        key.extend_from_slice(&0u64.to_be_bytes());\n        let include = match request.range {\n            BackendKvScanRange::Prefix(prefix) => key.starts_with(&prefix),\n            BackendKvScanRange::Range { start, end } => key >= start && key < end,\n        };\n        if !include || request.after.as_deref().is_some_and(|after| key.as_slice() <= after) {\n            return Ok(BackendKvEntryPage {\n                keys: BytePageBuilder::new().finish(),\n                values: BytePageBuilder::new().finish(),\n                resume_after: None,\n            });\n        }\n        let value = encode_binary_cas_manifest_chunk(&blob_hash, archive_bytes.len() as u64);\n        let mut keys = BytePageBuilder::with_capacity(1, key.len());\n        let mut values = BytePageBuilder::with_capacity(1, value.len());\n        if request.limit > 0 {\n            keys.push(&key);\n            values.push(&value);\n        }\n        // The lone matching entry always fits on one page, so there is never\n        // a resume point to hand back to the caller.\n        Ok(BackendKvEntryPage {\n            keys: keys.finish(),\n            values: values.finish(),\n            resume_after: None,\n        })\n    }\n\n    fn build_archive(entries: &[(&str, &[u8])]) -> Vec<u8> {\n        let options = SimpleFileOptions::default().compression_method(CompressionMethod::Stored);\n        let cursor = Cursor::new(Vec::new());\n        let mut writer = ZipWriter::new(cursor);\n    
    for (path, bytes) in entries {\n            writer\n                .start_file(*path, options)\n                .expect(\"archive entry start should succeed\");\n            writer\n                .write_all(bytes)\n                .expect(\"archive entry write should succeed\");\n        }\n        writer\n            .finish()\n            .expect(\"archive finish should succeed\")\n            .into_inner()\n    }\n\n    fn build_plugin_archive(manifest_json: &str) -> Vec<u8> {\n        let wasm = [0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00];\n        build_archive(&[\n            (\"manifest.json\", manifest_json.as_bytes()),\n            (\"plugin.wasm\", &wasm),\n        ])\n    }\n\n    fn plugin_manifest_json(key: &str) -> String {\n        format!(\n            r#\"{{\n  \"key\":\"{key}\",\n  \"runtime\":\"wasm-component-v1\",\n  \"api_version\":\"0.1.0\",\n  \"match\":{{\"path_glob\":\"*.json\"}},\n  \"entry\":\"plugin.wasm\",\n  \"schemas\":[\"schema/plugin_json_schema.json\"]\n}}\"#\n        )\n    }\n\n    #[tokio::test]\n    async fn installed_plugin_manifest_key_exists_reads_installed_manifest_keys() {\n        let backend = InstalledPluginLookupBackend {\n            archive_bytes: build_plugin_archive(&plugin_manifest_json(\"plugin_json\")),\n        };\n\n        assert!(\n            installed_plugin_manifest_key_exists(&backend, \"plugin_json\")\n                .await\n                .expect(\"installed manifest key lookup should succeed\")\n        );\n        assert!(\n            !installed_plugin_manifest_key_exists(&backend, \"missing_plugin\")\n                .await\n                .expect(\"missing manifest key lookup should succeed\")\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/plugin/mod.rs",
    "content": "//! Plugin subsystem root.\n//!\n//! Phase 1 establishes `crate::plugin::*` as the owner path for plugin-domain\n//! code under concrete plugin-owned modules instead of legacy ownership-neutral\n//! buckets.\n\nmod archive;\npub(crate) mod component;\nmod manifest;\nmod materializer;\nmod storage;\n\npub(crate) use archive::{\n    load_installed_plugin_from_archive_bytes, parse_plugin_archive_for_install, ParsedPluginArchive,\n};\n#[allow(unused_imports)]\npub(crate) use manifest::{\n    glob_matches_path, parse_plugin_manifest_json, select_best_glob_match, DetectChangesConfig,\n    DetectStateContextConfig, PluginContentType, PluginManifest, PluginMatch, PluginRuntime,\n    StateContextColumn, ValidatedPluginManifest,\n};\n#[allow(unused_imports)]\npub(crate) use materializer::{\n    installed_plugin_manifest_key_exists, invalidate_installed_plugins_cache,\n    list_installed_plugin_manifest_keys, load_installed_plugins_from_backend_state,\n    load_installed_plugins_with_runtime_cache, FilesystemPluginMaterializer, InstalledPlugin,\n    PluginMaterializationHost,\n};\n#[allow(unused_imports)]\npub(crate) use storage::{\n    plugin_key_from_archive_path, plugin_storage_archive_file_id, plugin_storage_archive_path,\n    PLUGIN_ARCHIVE_FILE_EXTENSION, PLUGIN_STORAGE_ROOT_DIRECTORY_PATH,\n};\n"
  },
  {
    "path": "packages/engine/src/plugin/plugin_manifest.json",
    "content": "{\n  \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n  \"type\": \"object\",\n  \"additionalProperties\": false,\n  \"required\": [\n    \"key\",\n    \"runtime\",\n    \"api_version\",\n    \"match\",\n    \"entry\",\n    \"schemas\"\n  ],\n  \"properties\": {\n    \"key\": {\n      \"type\": \"string\",\n      \"minLength\": 1,\n      \"maxLength\": 128,\n      \"pattern\": \"^[a-z][a-z0-9_-]*$\"\n    },\n    \"runtime\": {\n      \"type\": \"string\",\n      \"enum\": [\n        \"wasm-component-v1\"\n      ]\n    },\n    \"api_version\": {\n      \"type\": \"string\",\n      \"pattern\": \"^[0-9]+\\\\.[0-9]+\\\\.[0-9]+$\"\n    },\n    \"match\": {\n      \"type\": \"object\",\n      \"additionalProperties\": false,\n      \"required\": [\"path_glob\"],\n      \"properties\": {\n        \"path_glob\": {\n          \"type\": \"string\",\n          \"minLength\": 1\n        },\n        \"content_type\": {\n          \"type\": \"string\",\n          \"enum\": [\"text\", \"binary\"]\n        }\n      }\n    },\n    \"detect_changes\": {\n      \"type\": \"object\",\n      \"additionalProperties\": false,\n      \"properties\": {\n        \"state_context\": {\n          \"type\": \"object\",\n          \"additionalProperties\": false,\n          \"properties\": {\n            \"include_active_state\": {\n              \"type\": \"boolean\"\n            },\n            \"columns\": {\n              \"type\": \"array\",\n              \"minItems\": 1,\n              \"uniqueItems\": true,\n              \"items\": {\n                \"type\": \"string\",\n                \"enum\": [\n                  \"entity_id\",\n                  \"schema_key\",\n                  \"snapshot_content\",\n                  \"file_id\",\n                  \"plugin_key\",\n                  \"version_id\",\n                  \"change_id\",\n                  \"metadata\",\n                  \"created_at\",\n                  \"updated_at\"\n             
   ]\n              },\n              \"contains\": {\n                \"const\": \"entity_id\"\n              }\n            }\n          },\n          \"allOf\": [\n            {\n              \"if\": {\n                \"properties\": {\n                  \"include_active_state\": {\n                    \"const\": true\n                  }\n                },\n                \"required\": [\n                  \"include_active_state\"\n                ]\n              },\n              \"then\": {},\n              \"else\": {\n                \"not\": {\n                  \"required\": [\n                    \"columns\"\n                  ]\n                }\n              }\n            }\n          ]\n        }\n      }\n    },\n    \"entry\": {\n      \"type\": \"string\",\n      \"minLength\": 1\n    },\n    \"schemas\": {\n      \"type\": \"array\",\n      \"minItems\": 1,\n      \"items\": {\n        \"type\": \"string\",\n        \"minLength\": 1\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "packages/engine/src/plugin/storage.rs",
    "content": "use crate::LixError;\n\npub const PLUGIN_STORAGE_ROOT_DIRECTORY_PATH: &str = \"/.lix/plugins/\";\npub const PLUGIN_ARCHIVE_FILE_EXTENSION: &str = \".lixplugin\";\n\npub fn plugin_storage_archive_file_id(plugin_key: &str) -> String {\n    format!(\"lix_plugin_archive::{plugin_key}\")\n}\n\npub fn plugin_storage_archive_path(plugin_key: &str) -> Result<String, LixError> {\n    validate_plugin_key_segment(plugin_key)?;\n    Ok(format!(\n        \"{PLUGIN_STORAGE_ROOT_DIRECTORY_PATH}{plugin_key}{PLUGIN_ARCHIVE_FILE_EXTENSION}\"\n    ))\n}\n\npub fn plugin_key_from_archive_path(path: &str) -> Option<String> {\n    let file_name = path.strip_prefix(PLUGIN_STORAGE_ROOT_DIRECTORY_PATH)?;\n    let plugin_key = file_name.strip_suffix(PLUGIN_ARCHIVE_FILE_EXTENSION)?;\n    if plugin_key.is_empty()\n        || plugin_key == \".\"\n        || plugin_key == \"..\"\n        || plugin_key.contains('/')\n        || plugin_key.contains('\\\\')\n    {\n        return None;\n    }\n    Some(plugin_key.to_string())\n}\n\nfn validate_plugin_key_segment(plugin_key: &str) -> Result<(), LixError> {\n    if plugin_key.is_empty()\n        || plugin_key == \".\"\n        || plugin_key == \"..\"\n        || plugin_key.contains('/')\n        || plugin_key.contains('\\\\')\n    {\n        return Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: format!(\n                \"plugin key '{}' must be a single relative path segment\",\n                plugin_key\n            ),\n            hint: None,\n            details: None,\n        });\n    }\n    Ok(())\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{plugin_key_from_archive_path, plugin_storage_archive_path};\n\n    #[test]\n    fn computes_storage_archive_paths() {\n        assert_eq!(\n            plugin_storage_archive_path(\"plugin_json\").expect(\"path should build\"),\n            \"/.lix/plugins/plugin_json.lixplugin\"\n        );\n    }\n\n    #[test]\n    fn 
extracts_plugin_key_from_storage_path() {\n        assert_eq!(\n            plugin_key_from_archive_path(\"/.lix/plugins/plugin_json.lixplugin\"),\n            Some(\"plugin_json\".to_string())\n        );\n        assert_eq!(\n            plugin_key_from_archive_path(\"/.lix/plugins/nested/plugin.lixplugin\"),\n            None\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/schema/annotations/defaults.rs",
    "content": "use serde_json::{Map as JsonMap, Value as JsonValue};\n\nuse crate::cel::{CelEvaluator, CelFunctionProvider};\nuse crate::LixError;\n\npub(crate) fn apply_schema_defaults<P>(\n    snapshot: &mut JsonMap<String, JsonValue>,\n    schema: &JsonValue,\n    evaluator: &CelEvaluator,\n    functions: P,\n    schema_key: &str,\n) -> Result<bool, LixError>\nwhere\n    P: CelFunctionProvider,\n{\n    apply_schema_defaults_with_context(\n        snapshot,\n        schema,\n        &snapshot.clone(),\n        evaluator,\n        functions,\n        schema_key,\n    )\n}\n\npub(crate) fn apply_schema_defaults_with_shared_runtime<P>(\n    snapshot: &mut JsonMap<String, JsonValue>,\n    schema: &JsonValue,\n    functions: P,\n    schema_key: &str,\n) -> Result<bool, LixError>\nwhere\n    P: CelFunctionProvider,\n{\n    apply_schema_defaults(\n        snapshot,\n        schema,\n        crate::cel::shared_runtime(),\n        functions,\n        schema_key,\n    )\n}\n\npub(crate) fn apply_schema_defaults_with_context<P>(\n    snapshot: &mut JsonMap<String, JsonValue>,\n    schema: &JsonValue,\n    context: &JsonMap<String, JsonValue>,\n    evaluator: &CelEvaluator,\n    functions: P,\n    schema_key: &str,\n) -> Result<bool, LixError>\nwhere\n    P: CelFunctionProvider,\n{\n    let Some(properties) = schema.get(\"properties\").and_then(|value| value.as_object()) else {\n        return Ok(false);\n    };\n    let mut ordered_properties: Vec<(&String, &JsonValue)> = properties.iter().collect();\n    ordered_properties.sort_by(|(left_name, _), (right_name, _)| left_name.cmp(right_name));\n\n    let mut changed = false;\n    for (field_name, field_schema) in ordered_properties {\n        if snapshot.contains_key(field_name) {\n            continue;\n        }\n\n        if let Some(expression) = field_schema\n            .get(\"x-lix-default\")\n            .and_then(|value| value.as_str())\n        {\n            let value = evaluator\n                
.evaluate_with_functions(expression, context, functions.clone())\n                .map_err(|err| LixError {\n                    code: \"LIX_ERROR_UNKNOWN\".to_string(),\n                    message: format!(\n                        \"failed to evaluate x-lix-default for '{}.{}': {}\",\n                        schema_key, field_name, err.message\n                    ),\n                    hint: None,\n                    details: None,\n                })?;\n            snapshot.insert(field_name.clone(), value);\n            changed = true;\n            continue;\n        }\n\n        if let Some(default_value) = field_schema.get(\"default\") {\n            snapshot.insert(field_name.clone(), default_value.clone());\n            changed = true;\n        }\n    }\n\n    Ok(changed)\n}\n\n#[cfg(test)]\nmod tests {\n    use serde_json::{json, Map as JsonMap, Value as JsonValue};\n\n    use crate::cel::{CelEvaluator, CelFunctionProvider};\n\n    use super::apply_schema_defaults_with_context;\n\n    #[test]\n    fn applies_x_lix_default_for_missing_fields() {\n        let evaluator = CelEvaluator::new();\n        let schema = json!({\n            \"properties\": {\n                \"slug\": {\n                    \"type\": \"string\",\n                    \"x-lix-default\": \"name + '-slug'\"\n                }\n            }\n        });\n        let mut snapshot = JsonMap::new();\n        snapshot.insert(\"name\".to_string(), JsonValue::String(\"sample\".to_string()));\n        let context = snapshot.clone();\n\n        let changed = apply_schema_defaults_with_context(\n            &mut snapshot,\n            &schema,\n            &context,\n            &evaluator,\n            fixed_functions(),\n            \"test_schema\",\n        )\n        .expect(\"apply defaults\");\n\n        assert!(changed);\n        assert_eq!(\n            snapshot.get(\"slug\"),\n            Some(&JsonValue::String(\"sample-slug\".to_string()))\n        );\n    
}\n\n    #[test]\n    fn x_lix_default_overrides_json_default() {\n        let evaluator = CelEvaluator::new();\n        let schema = json!({\n            \"properties\": {\n                \"status\": {\n                    \"type\": \"string\",\n                    \"default\": \"literal\",\n                    \"x-lix-default\": \"'computed'\"\n                }\n            }\n        });\n        let mut snapshot = JsonMap::new();\n        let context = snapshot.clone();\n\n        let changed = apply_schema_defaults_with_context(\n            &mut snapshot,\n            &schema,\n            &context,\n            &evaluator,\n            fixed_functions(),\n            \"test_schema\",\n        )\n        .expect(\"apply defaults\");\n\n        assert!(changed);\n        assert_eq!(\n            snapshot.get(\"status\"),\n            Some(&JsonValue::String(\"computed\".to_string()))\n        );\n    }\n\n    #[test]\n    fn does_not_default_explicit_null_values() {\n        let evaluator = CelEvaluator::new();\n        let schema = json!({\n            \"properties\": {\n                \"status\": {\n                    \"type\": \"string\",\n                    \"x-lix-default\": \"'computed'\"\n                }\n            }\n        });\n        let mut snapshot = JsonMap::new();\n        snapshot.insert(\"status\".to_string(), JsonValue::Null);\n        let context = snapshot.clone();\n\n        let changed = apply_schema_defaults_with_context(\n            &mut snapshot,\n            &schema,\n            &context,\n            &evaluator,\n            fixed_functions(),\n            \"test_schema\",\n        )\n        .expect(\"apply defaults\");\n\n        assert!(!changed);\n        assert_eq!(snapshot.get(\"status\"), Some(&JsonValue::Null));\n    }\n\n    #[test]\n    fn applies_cel_defaults_in_stable_sorted_field_order() {\n        #[derive(Clone)]\n        struct CountingFunctions {\n            next: 
std::sync::Arc<std::sync::atomic::AtomicI64>,\n        }\n\n        impl CelFunctionProvider for CountingFunctions {\n            fn call_uuid_v7(&self) -> String {\n                let current = self.next.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n                format!(\"uuid-{current}\")\n            }\n\n            fn call_timestamp(&self) -> String {\n                let current = self.next.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n                format!(\"ts-{current}\")\n            }\n        }\n\n        let evaluator = CelEvaluator::new();\n        let schema = json!({\n            \"properties\": {\n                \"z_uuid\": {\n                    \"type\": \"string\",\n                    \"x-lix-default\": \"lix_uuid_v7()\"\n                },\n                \"a_timestamp\": {\n                    \"type\": \"string\",\n                    \"x-lix-default\": \"lix_timestamp()\"\n                }\n            }\n        });\n        let mut snapshot = JsonMap::new();\n        let context = snapshot.clone();\n\n        let changed = apply_schema_defaults_with_context(\n            &mut snapshot,\n            &schema,\n            &context,\n            &evaluator,\n            CountingFunctions {\n                next: std::sync::Arc::new(std::sync::atomic::AtomicI64::new(0)),\n            },\n            \"test_schema\",\n        )\n        .expect(\"apply defaults\");\n\n        assert!(changed);\n        assert_eq!(\n            snapshot.get(\"a_timestamp\"),\n            Some(&JsonValue::String(\"ts-0\".to_string()))\n        );\n        assert_eq!(\n            snapshot.get(\"z_uuid\"),\n            Some(&JsonValue::String(\"uuid-1\".to_string()))\n        );\n    }\n\n    #[derive(Clone)]\n    struct FixedFunctions;\n\n    impl CelFunctionProvider for FixedFunctions {\n        fn call_uuid_v7(&self) -> String {\n            \"uuid-fixed\".to_string()\n        }\n\n        fn call_timestamp(&self) -> 
String {\n            \"1970-01-01T00:00:00.000Z\".to_string()\n        }\n    }\n\n    fn fixed_functions() -> FixedFunctions {\n        FixedFunctions\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/schema/annotations/mod.rs",
    "content": "pub(crate) mod defaults;\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_account.json",
    "content": "{\n  \"x-lix-key\": \"lix_account\",\n  \"x-lix-primary-key\": [\n    \"/id\"\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"id\": {\n      \"type\": \"string\",\n      \"x-lix-default\": \"lix_uuid_v7()\"\n    },\n    \"name\": {\n      \"type\": \"string\"\n    }\n  },\n  \"required\": [\n    \"id\",\n    \"name\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_active_account.json",
    "content": "{\n  \"x-lix-key\": \"lix_active_account\",\n  \"x-lix-primary-key\": [\n    \"/account_id\"\n  ],\n  \"x-lix-foreign-keys\": [\n    {\n      \"properties\": [\n        \"/account_id\"\n      ],\n      \"references\": {\n        \"schemaKey\": \"lix_account\",\n        \"properties\": [\n          \"/id\"\n        ]\n      }\n    }\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"account_id\": {\n      \"type\": \"string\"\n    }\n  },\n  \"required\": [\n    \"account_id\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_binary_blob_ref.json",
    "content": "{\n  \"x-lix-key\": \"lix_binary_blob_ref\",\n  \"description\": \"Metadata pointer from a file version to its binary payload in internal CAS storage.\",\n  \"x-lix-primary-key\": [\n    \"/id\"\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"id\": {\n      \"type\": \"string\",\n      \"description\": \"File/entity identifier (matches lix_file.id) for this binary reference row.\"\n    },\n    \"blob_hash\": {\n      \"type\": \"string\",\n      \"description\": \"BLAKE3 content hash used as the canonical CAS key for the binary payload.\"\n    },\n    \"size_bytes\": {\n      \"type\": \"integer\",\n      \"minimum\": 0,\n      \"description\": \"Logical uncompressed file size in bytes for the referenced binary payload.\"\n    }\n  },\n  \"required\": [\n    \"id\",\n    \"blob_hash\",\n    \"size_bytes\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_change.json",
    "content": "{\n  \"x-lix-key\": \"lix_change\",\n  \"description\": \"A change records one edit to a Lix entity, including what changed, when it changed, and which entity was affected.\",\n  \"x-lix-primary-key\": [\n    \"/id\"\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"id\": {\n      \"type\": \"string\",\n      \"x-lix-default\": \"lix_uuid_v7()\",\n      \"description\": \"Stable identifier for this change.\"\n    },\n    \"entity_id\": {\n      \"type\": \"array\",\n      \"description\": \"Canonical JSON primary-key tuple for the entity this change applies to, scoped by (`schema_key`, `file_id`). Values are ordered according to the target schema's `x-lix-primary-key`.\",\n      \"items\": {\n        \"type\": \"string\"\n      },\n      \"minItems\": 1\n    },\n    \"schema_key\": {\n      \"type\": \"string\",\n      \"description\": \"Schema identifier of the entity (e.g. `lix_file_descriptor`, `lix_commit`, or a user-registered key).\"\n    },\n    \"file_id\": {\n      \"type\": [\n        \"string\",\n        \"null\"\n      ],\n      \"description\": \"Filesystem-scoped file identifier when the change belongs to a file; NULL for engine-internal entities (commits, versions, settings).\"\n    },\n    \"metadata\": {\n      \"type\": [\n        \"object\",\n        \"null\"\n      ],\n      \"description\": \"Optional user-provided JSON metadata attached to the change; NULL when nothing was supplied.\"\n    },\n    \"created_at\": {\n      \"type\": \"string\",\n      \"examples\": [\n        \"2026-05-08T17:42:31.123Z\"\n      ],\n      \"description\": \"ISO-8601 timestamp at which the change was recorded (set via `lix_timestamp()` at write time).\"\n    },\n    \"snapshot_content\": {\n      \"type\": [\n        \"object\",\n        \"null\"\n      ],\n      \"description\": \"Entity JSON body at this change; NULL represents a tombstone (deletion).\"\n    }\n  },\n  \"required\": [\n    \"id\",\n    \"entity_id\",\n    
\"schema_key\",\n    \"file_id\",\n    \"created_at\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_change_author.json",
    "content": "{\n  \"x-lix-key\": \"lix_change_author\",\n  \"x-lix-primary-key\": [\n    \"/change_id\",\n    \"/account_id\"\n  ],\n  \"x-lix-foreign-keys\": [\n    {\n      \"properties\": [\n        \"/change_id\"\n      ],\n      \"references\": {\n        \"schemaKey\": \"lix_change\",\n        \"properties\": [\n          \"/id\"\n        ]\n      }\n    },\n    {\n      \"properties\": [\n        \"/account_id\"\n      ],\n      \"references\": {\n        \"schemaKey\": \"lix_account\",\n        \"properties\": [\n          \"/id\"\n        ]\n      }\n    }\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"change_id\": {\n      \"type\": \"string\"\n    },\n    \"account_id\": {\n      \"type\": \"string\"\n    }\n  },\n  \"required\": [\n    \"change_id\",\n    \"account_id\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_commit.json",
    "content": "{\n  \"x-lix-key\": \"lix_commit\",\n  \"description\": \"A commit is a stable point in project history. Versions point to commits. Use lix_commit_edge to inspect parent commits.\",\n  \"examples\": [\n    {\n      \"id\": \"commit_01jexample\"\n    }\n  ],\n  \"x-lix-primary-key\": [\n    \"/id\"\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"id\": {\n      \"type\": \"string\",\n      \"x-lix-default\": \"lix_uuid_v7()\",\n      \"description\": \"Stable identifier of this commit.\"\n    }\n  },\n  \"required\": [\n    \"id\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_commit_edge.json",
    "content": "{\n  \"x-lix-key\": \"lix_commit_edge\",\n  \"description\": \"Direct parent relationship between two commits. Merge commits have one row per parent. The first parent is useful for showing mainline history or comparing a merge commit against the commit that was checked out before the merge.\",\n  \"examples\": [\n    {\n      \"parent_id\": \"commit-main\",\n      \"child_id\": \"commit-merge\",\n      \"parent_order\": 0\n    },\n    {\n      \"parent_id\": \"commit-feature\",\n      \"child_id\": \"commit-merge\",\n      \"parent_order\": 1\n    }\n  ],\n  \"x-lix-primary-key\": [\"/child_id\", \"/parent_order\"],\n  \"x-lix-unique\": [[\"/parent_id\", \"/child_id\"]],\n  \"x-lix-foreign-keys\": [\n    {\n      \"properties\": [\"/parent_id\"],\n      \"references\": {\n        \"schemaKey\": \"lix_commit\",\n        \"properties\": [\"/id\"]\n      }\n    },\n    {\n      \"properties\": [\"/child_id\"],\n      \"references\": {\n        \"schemaKey\": \"lix_commit\",\n        \"properties\": [\"/id\"]\n      }\n    }\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"parent_id\": {\n      \"type\": \"string\",\n      \"description\": \"Identifier of the parent commit.\"\n    },\n    \"child_id\": {\n      \"type\": \"string\",\n      \"description\": \"Identifier of the child commit.\"\n    },\n    \"parent_order\": {\n      \"type\": \"integer\",\n      \"minimum\": 0,\n      \"examples\": [0, 1],\n      \"description\": \"Zero-based position of this parent in the child commit's ordered parent list. The first parent has order 0; additional merge parents have order 1, 2, and so on.\"\n    }\n  },\n  \"required\": [\"parent_id\", \"child_id\", \"parent_order\"],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_directory_descriptor.json",
    "content": "{\n  \"x-lix-key\": \"lix_directory_descriptor\",\n  \"x-lix-primary-key\": [\n    \"/id\"\n  ],\n  \"x-lix-unique\": [\n    [\n      \"/parent_id\",\n      \"/name\"\n    ]\n  ],\n  \"x-lix-foreign-keys\": [\n    {\n      \"properties\": [\n        \"/parent_id\"\n      ],\n      \"references\": {\n        \"schemaKey\": \"lix_directory_descriptor\",\n        \"properties\": [\n          \"/id\"\n        ]\n      }\n    }\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"id\": {\n      \"type\": \"string\",\n      \"x-lix-default\": \"lix_uuid_v7()\"\n    },\n    \"parent_id\": {\n      \"type\": [\n        \"string\",\n        \"null\"\n      ]\n    },\n    \"name\": {\n      \"type\": \"string\",\n      \"pattern\": \"^(?!\\\\.{1,2}$)[^/\\\\\\\\]+$\"\n    },\n    \"hidden\": {\n      \"type\": \"boolean\",\n      \"x-lix-default\": \"false\"\n    }\n  },\n  \"required\": [\n    \"id\",\n    \"parent_id\",\n    \"name\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_file_descriptor.json",
    "content": "{\n  \"x-lix-key\": \"lix_file_descriptor\",\n  \"x-lix-primary-key\": [\n    \"/id\"\n  ],\n  \"x-lix-unique\": [\n    [\n      \"/directory_id\",\n      \"/name\"\n    ]\n  ],\n  \"x-lix-foreign-keys\": [\n    {\n      \"properties\": [\n        \"/directory_id\"\n      ],\n      \"references\": {\n        \"schemaKey\": \"lix_directory_descriptor\",\n        \"properties\": [\n          \"/id\"\n        ]\n      }\n    }\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"id\": {\n      \"type\": \"string\",\n      \"x-lix-default\": \"lix_uuid_v7()\"\n    },\n    \"directory_id\": {\n      \"type\": [\n        \"string\",\n        \"null\"\n      ]\n    },\n    \"name\": {\n      \"type\": \"string\",\n      \"pattern\": \"^(?!\\\\.{1,2}$)[^/\\\\\\\\]+$\"\n    },\n    \"hidden\": {\n      \"type\": \"boolean\",\n      \"x-lix-default\": \"false\"\n    }\n  },\n  \"required\": [\n    \"id\",\n    \"directory_id\",\n    \"name\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_key_value.json",
    "content": "{\n  \"x-lix-key\": \"lix_key_value\",\n  \"x-lix-primary-key\": [\n    \"/key\"\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"key\": {\n      \"type\": \"string\"\n    },\n    \"value\": {\n      \"description\": \"Arbitrary JSON value. This field stays in the JSON domain even when different rows hold different JSON kinds.\",\n      \"anyOf\": [\n        {\n          \"type\": \"object\"\n        },\n        {\n          \"type\": \"array\"\n        },\n        {\n          \"type\": \"string\"\n        },\n        {\n          \"type\": \"number\"\n        },\n        {\n          \"type\": \"boolean\"\n        },\n        {\n          \"type\": \"null\"\n        }\n      ]\n    }\n  },\n  \"required\": [\n    \"key\",\n    \"value\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_label.json",
    "content": "{\n  \"x-lix-key\": \"lix_label\",\n  \"description\": \"Catalog of labels that can be assigned to arbitrary live Lix rows through lix_label_assignment.\",\n  \"x-lix-primary-key\": [\n    \"/id\"\n  ],\n  \"x-lix-unique\": [\n    [\n      \"/name\"\n    ]\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"id\": {\n      \"type\": \"string\",\n      \"x-lix-default\": \"lix_uuid_v7()\",\n      \"description\": \"Stable label identifier. Label assignments reference this value.\"\n    },\n    \"name\": {\n      \"type\": \"string\",\n      \"description\": \"Human-readable label name. Unique across labels.\"\n    }\n  },\n  \"required\": [\n    \"id\",\n    \"name\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_label_assignment.json",
    "content": "{\n  \"x-lix-key\": \"lix_label_assignment\",\n  \"description\": \"Mapping table that assigns a label to any live Lix row addressed by (target_entity_id, target_schema_key, target_file_id). The state foreign-key tuple is ordered as [0] target_entity_id, [1] target_schema_key, [2] target_file_id.\",\n  \"x-lix-primary-key\": [\n    \"/id\"\n  ],\n  \"x-lix-unique\": [\n    [\n      \"/target_entity_id\",\n      \"/target_schema_key\",\n      \"/target_file_id\",\n      \"/label_id\"\n    ]\n  ],\n  \"x-lix-state-foreign-keys\": [\n    [\n      \"/target_entity_id\",\n      \"/target_schema_key\",\n      \"/target_file_id\"\n    ]\n  ],\n  \"x-lix-foreign-keys\": [\n    {\n      \"properties\": [\n        \"/label_id\"\n      ],\n      \"references\": {\n        \"schemaKey\": \"lix_label\",\n        \"properties\": [\n          \"/id\"\n        ]\n      }\n    }\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"id\": {\n      \"type\": \"string\",\n      \"x-lix-default\": \"lix_uuid_v7()\",\n      \"description\": \"Stable identifier for this label assignment row.\"\n    },\n    \"target_entity_id\": {\n      \"type\": \"array\",\n      \"description\": \"Target row entity_id. This is slot [0] in x-lix-state-foreign-keys and must be the canonical JSON array of string primary-key parts.\",\n      \"items\": {\n        \"type\": \"string\"\n      },\n      \"minItems\": 1\n    },\n    \"target_schema_key\": {\n      \"type\": \"string\",\n      \"description\": \"Target row schema key. This is slot [1] in x-lix-state-foreign-keys.\"\n    },\n    \"target_file_id\": {\n      \"type\": [\n        \"string\",\n        \"null\"\n      ],\n      \"description\": \"Target row file scope. This is slot [2] in x-lix-state-foreign-keys; null targets global rows.\"\n    },\n    \"label_id\": {\n      \"type\": \"string\",\n      \"description\": \"Label assigned to the target row. 
References lix_label.id.\"\n    }\n  },\n  \"required\": [\n    \"id\",\n    \"target_entity_id\",\n    \"target_schema_key\",\n    \"target_file_id\",\n    \"label_id\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_registered_schema.json",
    "content": "{\n  \"x-lix-key\": \"lix_registered_schema\",\n  \"x-lix-primary-key\": [\n    \"/value/x-lix-key\"\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"value\": {\n      \"type\": \"object\",\n      \"properties\": {\n        \"x-lix-key\": {\n          \"type\": \"string\"\n        }\n      },\n      \"required\": [\n        \"x-lix-key\"\n      ],\n      \"additionalProperties\": true\n    }\n  },\n  \"required\": [\n    \"value\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_version_descriptor.json",
    "content": "{\n  \"x-lix-key\": \"lix_version_descriptor\",\n  \"description\": \"User-facing version metadata (name and visibility) for a branch-like version. The stable identity of a version; the matching `lix_version_ref` carries the moving head pointer. The catalog's `lix_version` surface joins this descriptor with its ref to present a single user-visible version row.\",\n  \"x-lix-primary-key\": [\n    \"/id\"\n  ],\n  \"x-lix-unique\": [\n    [\n      \"/name\"\n    ]\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"id\": {\n      \"type\": \"string\",\n      \"x-lix-default\": \"lix_uuid_v7()\",\n      \"description\": \"Stable version identifier (UUIDv7). Referenced by `lix_version_ref.id`.\"\n    },\n    \"name\": {\n      \"type\": \"string\",\n      \"description\": \"Human-readable version name (e.g. `main`, `feature-x`) shown in version listings and CLI output.\"\n    },\n    \"hidden\": {\n      \"type\": \"boolean\",\n      \"default\": false,\n      \"description\": \"When true, the version is filtered from default listings (CLI, catalog views); operations by explicit id still succeed.\"\n    }\n  },\n  \"required\": [\n    \"id\",\n    \"name\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/lix_version_ref.json",
    "content": "{\n  \"x-lix-key\": \"lix_version_ref\",\n  \"description\": \"Version head pointer. Records which commit a version should currently resolve to in the local runtime. Intentionally not part of canonical commit membership: refs may be reset client-side after sync without introducing content conflicts. Each `lix_version_descriptor.id` has exactly one `lix_version_ref` row.\",\n  \"x-lix-primary-key\": [\n    \"/id\"\n  ],\n  \"x-lix-foreign-keys\": [\n    {\n      \"properties\": [\n        \"/id\"\n      ],\n      \"references\": {\n        \"schemaKey\": \"lix_version_descriptor\",\n        \"properties\": [\n          \"/id\"\n        ]\n      }\n    },\n    {\n      \"properties\": [\n        \"/commit_id\"\n      ],\n      \"references\": {\n        \"schemaKey\": \"lix_commit\",\n        \"properties\": [\n          \"/id\"\n        ]\n      }\n    }\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"id\": {\n      \"type\": \"string\",\n      \"x-lix-default\": \"lix_uuid_v7()\",\n      \"description\": \"Version identifier whose head pointer is being stored; matches `lix_version_descriptor.id`.\"\n    },\n    \"commit_id\": {\n      \"type\": \"string\",\n      \"description\": \"Commit the version should currently resolve to in the local runtime (references `lix_commit.id`).\"\n    }\n  },\n  \"required\": [\n    \"id\",\n    \"commit_id\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/engine/src/schema/builtin/mod.rs",
    "content": "use serde_json::Value as JsonValue;\nuse std::sync::OnceLock;\n\nuse crate::schema::lix_schema_definition;\n\nconst LIX_REGISTERED_SCHEMA_KEY: &str = \"lix_registered_schema\";\nconst LIX_KEY_VALUE_SCHEMA_KEY: &str = \"lix_key_value\";\nconst LIX_ACCOUNT_SCHEMA_KEY: &str = \"lix_account\";\nconst LIX_ACTIVE_ACCOUNT_SCHEMA_KEY: &str = \"lix_active_account\";\nconst LIX_LABEL_SCHEMA_KEY: &str = \"lix_label\";\nconst LIX_LABEL_ASSIGNMENT_SCHEMA_KEY: &str = \"lix_label_assignment\";\nconst LIX_CHANGE_SCHEMA_KEY: &str = \"lix_change\";\nconst LIX_CHANGE_AUTHOR_SCHEMA_KEY: &str = \"lix_change_author\";\nconst LIX_COMMIT_SCHEMA_KEY: &str = \"lix_commit\";\nconst LIX_VERSION_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_version_descriptor\";\nconst LIX_VERSION_REF_SCHEMA_KEY: &str = \"lix_version_ref\";\nconst LIX_COMMIT_EDGE_SCHEMA_KEY: &str = \"lix_commit_edge\";\nconst LIX_FILE_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_file_descriptor\";\nconst LIX_DIRECTORY_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_directory_descriptor\";\nconst LIX_BINARY_BLOB_REF_SCHEMA_KEY: &str = \"lix_binary_blob_ref\";\n\nconst LIX_REGISTERED_SCHEMA_JSON: &str = include_str!(\"lix_registered_schema.json\");\nconst LIX_KEY_VALUE_SCHEMA_JSON: &str = include_str!(\"lix_key_value.json\");\nconst LIX_ACCOUNT_SCHEMA_JSON: &str = include_str!(\"lix_account.json\");\nconst LIX_ACTIVE_ACCOUNT_SCHEMA_JSON: &str = include_str!(\"lix_active_account.json\");\nconst LIX_LABEL_SCHEMA_JSON: &str = include_str!(\"lix_label.json\");\nconst LIX_LABEL_ASSIGNMENT_SCHEMA_JSON: &str = include_str!(\"lix_label_assignment.json\");\nconst LIX_CHANGE_SCHEMA_JSON: &str = include_str!(\"lix_change.json\");\nconst LIX_CHANGE_AUTHOR_SCHEMA_JSON: &str = include_str!(\"lix_change_author.json\");\nconst LIX_COMMIT_SCHEMA_JSON: &str = include_str!(\"lix_commit.json\");\nconst LIX_VERSION_DESCRIPTOR_SCHEMA_JSON: &str = include_str!(\"lix_version_descriptor.json\");\nconst LIX_VERSION_REF_SCHEMA_JSON: &str = 
include_str!(\"lix_version_ref.json\");\nconst LIX_COMMIT_EDGE_SCHEMA_JSON: &str = include_str!(\"lix_commit_edge.json\");\nconst LIX_FILE_DESCRIPTOR_SCHEMA_JSON: &str = include_str!(\"lix_file_descriptor.json\");\nconst LIX_DIRECTORY_DESCRIPTOR_SCHEMA_JSON: &str = include_str!(\"lix_directory_descriptor.json\");\nconst LIX_BINARY_BLOB_REF_SCHEMA_JSON: &str = include_str!(\"lix_binary_blob_ref.json\");\n\nstatic LIX_REGISTERED_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_KEY_VALUE_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_ACCOUNT_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_ACTIVE_ACCOUNT_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_LABEL_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_LABEL_ASSIGNMENT_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_CHANGE_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_CHANGE_AUTHOR_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_COMMIT_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_VERSION_DESCRIPTOR_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_VERSION_REF_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_COMMIT_EDGE_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_FILE_DESCRIPTOR_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_DIRECTORY_DESCRIPTOR_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_BINARY_BLOB_REF_SCHEMA: OnceLock<JsonValue> = OnceLock::new();\n\nconst BUILTIN_SCHEMA_KEYS: &[&str] = &[\n    LIX_REGISTERED_SCHEMA_KEY,\n    LIX_KEY_VALUE_SCHEMA_KEY,\n    LIX_ACCOUNT_SCHEMA_KEY,\n    LIX_ACTIVE_ACCOUNT_SCHEMA_KEY,\n    LIX_LABEL_SCHEMA_KEY,\n    LIX_LABEL_ASSIGNMENT_SCHEMA_KEY,\n    LIX_CHANGE_SCHEMA_KEY,\n    LIX_CHANGE_AUTHOR_SCHEMA_KEY,\n    LIX_COMMIT_SCHEMA_KEY,\n    LIX_VERSION_DESCRIPTOR_SCHEMA_KEY,\n    LIX_VERSION_REF_SCHEMA_KEY,\n    LIX_COMMIT_EDGE_SCHEMA_KEY,\n    LIX_FILE_DESCRIPTOR_SCHEMA_KEY,\n    LIX_DIRECTORY_DESCRIPTOR_SCHEMA_KEY,\n   
 LIX_BINARY_BLOB_REF_SCHEMA_KEY,\n];\n\npub(super) fn is_seed_schema_key(schema_key: &str) -> bool {\n    BUILTIN_SCHEMA_KEYS.contains(&schema_key)\n}\n\npub(super) fn seed_schema_definitions() -> Vec<&'static JsonValue> {\n    BUILTIN_SCHEMA_KEYS\n        .iter()\n        .map(|schema_key| {\n            seed_schema_definition(schema_key)\n                .unwrap_or_else(|| panic!(\"missing seed schema definition for '{schema_key}'\"))\n        })\n        .collect()\n}\n\npub(super) fn seed_schema_definition(schema_key: &str) -> Option<&'static JsonValue> {\n    match schema_key {\n        LIX_REGISTERED_SCHEMA_KEY => Some(\n            LIX_REGISTERED_SCHEMA.get_or_init(|| parse_registered_schema_with_inlined_definition()),\n        ),\n        LIX_KEY_VALUE_SCHEMA_KEY => {\n            Some(LIX_KEY_VALUE_SCHEMA.get_or_init(|| {\n                parse_builtin_schema(\"lix_key_value.json\", LIX_KEY_VALUE_SCHEMA_JSON)\n            }))\n        }\n        LIX_ACCOUNT_SCHEMA_KEY => Some(\n            LIX_ACCOUNT_SCHEMA\n                .get_or_init(|| parse_builtin_schema(\"lix_account.json\", LIX_ACCOUNT_SCHEMA_JSON)),\n        ),\n        LIX_ACTIVE_ACCOUNT_SCHEMA_KEY => Some(LIX_ACTIVE_ACCOUNT_SCHEMA.get_or_init(|| {\n            parse_builtin_schema(\"lix_active_account.json\", LIX_ACTIVE_ACCOUNT_SCHEMA_JSON)\n        })),\n        LIX_LABEL_SCHEMA_KEY => Some(\n            LIX_LABEL_SCHEMA\n                .get_or_init(|| parse_builtin_schema(\"lix_label.json\", LIX_LABEL_SCHEMA_JSON)),\n        ),\n        LIX_LABEL_ASSIGNMENT_SCHEMA_KEY => Some(LIX_LABEL_ASSIGNMENT_SCHEMA.get_or_init(|| {\n            parse_builtin_schema(\n                \"lix_label_assignment.json\",\n                LIX_LABEL_ASSIGNMENT_SCHEMA_JSON,\n            )\n        })),\n        LIX_CHANGE_SCHEMA_KEY => Some(\n            LIX_CHANGE_SCHEMA\n                .get_or_init(|| parse_builtin_schema(\"lix_change.json\", LIX_CHANGE_SCHEMA_JSON)),\n        ),\n        
LIX_CHANGE_AUTHOR_SCHEMA_KEY => Some(LIX_CHANGE_AUTHOR_SCHEMA.get_or_init(|| {\n            parse_builtin_schema(\"lix_change_author.json\", LIX_CHANGE_AUTHOR_SCHEMA_JSON)\n        })),\n        LIX_COMMIT_SCHEMA_KEY => Some(\n            LIX_COMMIT_SCHEMA\n                .get_or_init(|| parse_builtin_schema(\"lix_commit.json\", LIX_COMMIT_SCHEMA_JSON)),\n        ),\n        LIX_VERSION_DESCRIPTOR_SCHEMA_KEY => {\n            Some(LIX_VERSION_DESCRIPTOR_SCHEMA.get_or_init(|| {\n                parse_builtin_schema(\n                    \"lix_version_descriptor.json\",\n                    LIX_VERSION_DESCRIPTOR_SCHEMA_JSON,\n                )\n            }))\n        }\n        LIX_VERSION_REF_SCHEMA_KEY => Some(LIX_VERSION_REF_SCHEMA.get_or_init(|| {\n            parse_builtin_schema(\"lix_version_ref.json\", LIX_VERSION_REF_SCHEMA_JSON)\n        })),\n        LIX_COMMIT_EDGE_SCHEMA_KEY => Some(LIX_COMMIT_EDGE_SCHEMA.get_or_init(|| {\n            parse_builtin_schema(\"lix_commit_edge.json\", LIX_COMMIT_EDGE_SCHEMA_JSON)\n        })),\n        LIX_FILE_DESCRIPTOR_SCHEMA_KEY => Some(LIX_FILE_DESCRIPTOR_SCHEMA.get_or_init(|| {\n            parse_builtin_schema(\"lix_file_descriptor.json\", LIX_FILE_DESCRIPTOR_SCHEMA_JSON)\n        })),\n        LIX_DIRECTORY_DESCRIPTOR_SCHEMA_KEY => {\n            Some(LIX_DIRECTORY_DESCRIPTOR_SCHEMA.get_or_init(|| {\n                parse_builtin_schema(\n                    \"lix_directory_descriptor.json\",\n                    LIX_DIRECTORY_DESCRIPTOR_SCHEMA_JSON,\n                )\n            }))\n        }\n        LIX_BINARY_BLOB_REF_SCHEMA_KEY => Some(LIX_BINARY_BLOB_REF_SCHEMA.get_or_init(|| {\n            parse_builtin_schema(\"lix_binary_blob_ref.json\", LIX_BINARY_BLOB_REF_SCHEMA_JSON)\n        })),\n        _ => None,\n    }\n}\n\n#[allow(dead_code)]\npub(crate) fn builtin_schema_json(schema_key: &str) -> Option<&'static str> {\n    match schema_key {\n        LIX_REGISTERED_SCHEMA_KEY => 
Some(LIX_REGISTERED_SCHEMA_JSON),\n        LIX_KEY_VALUE_SCHEMA_KEY => Some(LIX_KEY_VALUE_SCHEMA_JSON),\n        LIX_ACCOUNT_SCHEMA_KEY => Some(LIX_ACCOUNT_SCHEMA_JSON),\n        LIX_ACTIVE_ACCOUNT_SCHEMA_KEY => Some(LIX_ACTIVE_ACCOUNT_SCHEMA_JSON),\n        LIX_LABEL_SCHEMA_KEY => Some(LIX_LABEL_SCHEMA_JSON),\n        LIX_LABEL_ASSIGNMENT_SCHEMA_KEY => Some(LIX_LABEL_ASSIGNMENT_SCHEMA_JSON),\n        LIX_CHANGE_SCHEMA_KEY => Some(LIX_CHANGE_SCHEMA_JSON),\n        LIX_CHANGE_AUTHOR_SCHEMA_KEY => Some(LIX_CHANGE_AUTHOR_SCHEMA_JSON),\n        LIX_COMMIT_SCHEMA_KEY => Some(LIX_COMMIT_SCHEMA_JSON),\n        LIX_VERSION_DESCRIPTOR_SCHEMA_KEY => Some(LIX_VERSION_DESCRIPTOR_SCHEMA_JSON),\n        LIX_VERSION_REF_SCHEMA_KEY => Some(LIX_VERSION_REF_SCHEMA_JSON),\n        LIX_COMMIT_EDGE_SCHEMA_KEY => Some(LIX_COMMIT_EDGE_SCHEMA_JSON),\n        LIX_FILE_DESCRIPTOR_SCHEMA_KEY => Some(LIX_FILE_DESCRIPTOR_SCHEMA_JSON),\n        LIX_DIRECTORY_DESCRIPTOR_SCHEMA_KEY => Some(LIX_DIRECTORY_DESCRIPTOR_SCHEMA_JSON),\n        LIX_BINARY_BLOB_REF_SCHEMA_KEY => Some(LIX_BINARY_BLOB_REF_SCHEMA_JSON),\n        _ => None,\n    }\n}\n\nfn parse_builtin_schema(file_name: &str, raw_json: &str) -> JsonValue {\n    serde_json::from_str(raw_json).unwrap_or_else(|error| {\n        panic!(\"builtin schema file '{file_name}' must contain valid JSON: {error}\")\n    })\n}\n\nfn parse_registered_schema_with_inlined_definition() -> JsonValue {\n    let mut schema = parse_builtin_schema(\"lix_registered_schema.json\", LIX_REGISTERED_SCHEMA_JSON);\n    let value_schema = schema\n        .pointer_mut(\"/properties/value\")\n        .expect(\"lix_registered_schema.json must define /properties/value\");\n    let value_schema_object = value_schema\n        .as_object_mut()\n        .expect(\"lix_registered_schema.json /properties/value must be an object\");\n\n    value_schema_object.insert(\n        \"allOf\".to_string(),\n        JsonValue::Array(vec![lix_schema_definition().clone()]),\n    );\n\n    
schema\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{seed_schema_definition, BUILTIN_SCHEMA_KEYS};\n\n    #[test]\n    fn builtin_schemas_load_without_extra_override_metadata() {\n        for schema_key in BUILTIN_SCHEMA_KEYS {\n            seed_schema_definition(schema_key).expect(\"schema should exist\");\n        }\n    }\n\n    #[test]\n    fn registered_schema_value_inlines_lix_schema_definition() {\n        let schema = seed_schema_definition(\"lix_registered_schema\").expect(\"schema should exist\");\n        let all_of = schema\n            .pointer(\"/properties/value/allOf\")\n            .and_then(|value| value.as_array())\n            .expect(\"registered schema value must define allOf array\");\n        assert_eq!(all_of.len(), 1);\n        assert_eq!(all_of[0], *crate::schema::lix_schema_definition());\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/schema/compatibility.rs",
    "content": "use std::collections::{BTreeMap, BTreeSet};\n\nuse serde_json::Value as JsonValue;\n\nuse crate::common::top_level_property_name;\nuse crate::entity_identity::canonical_json_text;\nuse crate::LixError;\n\nconst DOC_ONLY_SCHEMA_FIELDS: &[&str] = &[\"$comment\", \"deprecated\", \"description\", \"title\"];\nconst CONSTRAINT_FIELDS: &[&str] = &[\n    \"x-lix-primary-key\",\n    \"x-lix-unique\",\n    \"x-lix-foreign-keys\",\n    \"x-lix-state-foreign-keys\",\n];\n\n/// Validates that `next` is a compatible amendment of `previous`.\n///\n/// The 0.6 schema model treats `x-lix-key` as the durable relation identity.\n/// Same-key amendments may widen accepted data by adding optional top-level\n/// properties, but they must not alter identity, constraints, requiredness, or\n/// existing field semantics. Nested object schemas are deliberately frozen for\n/// 0.6; recursive schema evolution is a later, explicit feature.\n///\n/// Primary-key column order is semantic because it defines composite\n/// `entity_id` tuple order, so primary keys are never normalized. Relational\n/// constraints are frozen even when a particular addition could be\n/// retroactively safe, such as a new FK on a new optional property. 
That is a\n/// deliberate MVP rule we may relax later.\npub(crate) fn validate_schema_amendment(\n    previous: &JsonValue,\n    next: &JsonValue,\n) -> Result<(), LixError> {\n    let previous_key = schema_key(previous, \"previous\")?;\n    let next_key = schema_key(next, \"next\")?;\n    if previous_key != next_key {\n        return schema_amendment_error(format!(\n            \"schema amendment must keep x-lix-key stable; previous '{previous_key}', next '{next_key}'\"\n        ));\n    }\n\n    require_additional_properties_false(previous, \"previous\", previous_key)?;\n    require_additional_properties_false(next, \"next\", next_key)?;\n\n    validate_constraints_unchanged(previous, next, previous_key)?;\n\n    let changed_top_level_semantic_keys = changed_top_level_semantic_keys(previous, next);\n    if !changed_top_level_semantic_keys.is_empty() {\n        return schema_amendment_error(format!(\n            \"schema '{previous_key}' cannot change top-level schema semantics: {}\",\n            changed_top_level_semantic_keys.join(\", \")\n        ));\n    }\n\n    let previous_required = string_set_field(previous, \"required\", \"previous\", previous_key)?;\n    let next_required = string_set_field(next, \"required\", \"next\", next_key)?;\n    if previous_required != next_required {\n        return schema_amendment_error(format!(\n            \"schema '{previous_key}' cannot amend required properties\"\n        ));\n    }\n\n    let previous_properties = properties_field(previous, \"previous\", previous_key)?;\n    let next_properties = properties_field(next, \"next\", next_key)?;\n\n    for (property_name, previous_property_schema) in &previous_properties {\n        let Some(next_property_schema) = next_properties.get(property_name) else {\n            return schema_amendment_error(format!(\n                \"schema '{previous_key}' cannot remove property '/{property_name}'\"\n            ));\n        };\n        if 
strip_doc_only_fields(previous_property_schema)\n            != strip_doc_only_fields(next_property_schema)\n        {\n            return schema_amendment_error(format!(\n                \"schema '{previous_key}' cannot change existing property '/{property_name}' except for doc-only fields\"\n            ));\n        }\n    }\n\n    let constrained_property_names = constrained_top_level_property_names(next)?;\n    for property_name in next_properties.keys() {\n        if previous_properties.contains_key(property_name) {\n            continue;\n        }\n        if next_required.contains(property_name) {\n            return schema_amendment_error(format!(\n                \"schema '{previous_key}' cannot add required property '/{property_name}'\"\n            ));\n        }\n        if constrained_property_names.contains(property_name) {\n            return schema_amendment_error(format!(\n                \"schema '{previous_key}' cannot add property '/{property_name}' as part of primary, unique, or foreign-key constraints\"\n            ));\n        }\n    }\n\n    Ok(())\n}\n\nfn schema_key<'a>(schema: &'a JsonValue, side: &str) -> Result<&'a str, LixError> {\n    schema\n        .get(\"x-lix-key\")\n        .and_then(JsonValue::as_str)\n        .ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\"{side} schema must include string x-lix-key\"),\n            )\n        })\n}\n\nfn require_additional_properties_false(\n    schema: &JsonValue,\n    side: &str,\n    schema_key: &str,\n) -> Result<(), LixError> {\n    if schema.get(\"additionalProperties\") == Some(&JsonValue::Bool(false)) {\n        return Ok(());\n    }\n    schema_amendment_error(format!(\n        \"{side} schema '{schema_key}' must set additionalProperties to false\"\n    ))\n}\n\nfn validate_constraints_unchanged(\n    previous: &JsonValue,\n    next: &JsonValue,\n    schema_key: &str,\n) -> Result<(), LixError> {\n    // 
Primary-key column order is semantic because it defines composite\n    // entity_id tuple order, so it is compared directly and never normalized.\n    if previous.get(\"x-lix-primary-key\") != next.get(\"x-lix-primary-key\") {\n        return schema_amendment_error(format!(\n            \"schema '{schema_key}' cannot amend constraint field 'x-lix-primary-key'\"\n        ));\n    }\n\n    for field in [\n        \"x-lix-unique\",\n        \"x-lix-foreign-keys\",\n        \"x-lix-state-foreign-keys\",\n    ] {\n        if normalized_constraint_list(previous.get(field), field)?\n            != normalized_constraint_list(next.get(field), field)?\n        {\n            return schema_amendment_error(format!(\n                \"schema '{schema_key}' cannot amend constraint field '{field}'\"\n            ));\n        }\n    }\n\n    Ok(())\n}\n\nfn normalized_constraint_list(\n    value: Option<&JsonValue>,\n    field: &str,\n) -> Result<Vec<JsonValue>, LixError> {\n    let Some(value) = value else {\n        return Ok(Vec::new());\n    };\n    let Some(values) = value.as_array() else {\n        return schema_amendment_error(format!(\n            \"schema constraint field '{field}' must be an array\"\n        ));\n    };\n\n    let mut values = values.clone();\n    values.sort_by(|left, right| {\n        let left = canonical_json_text(left)\n            .expect(\"canonical json from in-memory serde_json::Value cannot fail\");\n        let right = canonical_json_text(right)\n            .expect(\"canonical json from in-memory serde_json::Value cannot fail\");\n        left.cmp(&right)\n    });\n    Ok(values)\n}\n\nfn properties_field(\n    schema: &JsonValue,\n    side: &str,\n    schema_key: &str,\n) -> Result<BTreeMap<String, JsonValue>, LixError> {\n    match schema.get(\"properties\") {\n        Some(JsonValue::Object(object)) => Ok(object\n            .iter()\n            .map(|(key, value)| (key.clone(), value.clone()))\n            .collect()),\n        Some(_) => 
schema_amendment_error(format!(\n            \"{side} schema '{schema_key}' field 'properties' must be an object\"\n        )),\n        None => Ok(BTreeMap::new()),\n    }\n}\n\nfn string_set_field(\n    schema: &JsonValue,\n    field: &str,\n    side: &str,\n    schema_key: &str,\n) -> Result<BTreeSet<String>, LixError> {\n    let Some(value) = schema.get(field) else {\n        return Ok(BTreeSet::new());\n    };\n    let Some(values) = value.as_array() else {\n        return schema_amendment_error(format!(\n            \"{side} schema '{schema_key}' field '{field}' must be an array of strings\"\n        ));\n    };\n    values\n        .iter()\n        .map(|value| {\n            value.as_str().map(str::to_string).ok_or_else(|| {\n                LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\n                        \"{side} schema '{schema_key}' field '{field}' must be an array of strings\"\n                    ),\n                )\n            })\n        })\n        .collect()\n}\n\nfn strip_doc_only_fields(value: &JsonValue) -> JsonValue {\n    match value {\n        JsonValue::Object(object) => JsonValue::Object(\n            object\n                .iter()\n                .filter(|(key, _)| !DOC_ONLY_SCHEMA_FIELDS.contains(&key.as_str()))\n                .map(|(key, value)| (key.clone(), strip_doc_only_fields(value)))\n                .collect(),\n        ),\n        JsonValue::Array(values) => {\n            JsonValue::Array(values.iter().map(strip_doc_only_fields).collect())\n        }\n        _ => value.clone(),\n    }\n}\n\nfn top_level_semantic_fields(schema: &JsonValue) -> BTreeMap<String, JsonValue> {\n    let JsonValue::Object(object) = strip_doc_only_fields(schema) else {\n        return BTreeMap::new();\n    };\n    object\n        .into_iter()\n        .filter(|(key, _)| {\n            key != \"properties\" && key != \"required\" && !CONSTRAINT_FIELDS.contains(&key.as_str())\n        })\n 
       .collect()\n}\n\nfn changed_top_level_semantic_keys(previous: &JsonValue, next: &JsonValue) -> Vec<String> {\n    let previous = top_level_semantic_fields(previous);\n    let next = top_level_semantic_fields(next);\n    previous\n        .keys()\n        .chain(next.keys())\n        .collect::<BTreeSet<_>>()\n        .into_iter()\n        .filter(|key| previous.get(*key) != next.get(*key))\n        .cloned()\n        .collect()\n}\n\nfn constrained_top_level_property_names(schema: &JsonValue) -> Result<BTreeSet<String>, LixError> {\n    let mut names = BTreeSet::new();\n\n    collect_top_level_pointer_names(schema.get(\"x-lix-primary-key\"), &mut names)?;\n    if let Some(unique_groups) = schema.get(\"x-lix-unique\").and_then(JsonValue::as_array) {\n        for group in unique_groups {\n            collect_top_level_pointer_names(Some(group), &mut names)?;\n        }\n    }\n    if let Some(foreign_keys) = schema\n        .get(\"x-lix-foreign-keys\")\n        .and_then(JsonValue::as_array)\n    {\n        for foreign_key in foreign_keys {\n            collect_top_level_pointer_names(foreign_key.get(\"properties\"), &mut names)?;\n        }\n    }\n    if let Some(foreign_keys) = schema\n        .get(\"x-lix-state-foreign-keys\")\n        .and_then(JsonValue::as_array)\n    {\n        for foreign_key in foreign_keys {\n            collect_top_level_pointer_names(Some(foreign_key), &mut names)?;\n        }\n    }\n\n    Ok(names)\n}\n\nfn collect_top_level_pointer_names(\n    value: Option<&JsonValue>,\n    names: &mut BTreeSet<String>,\n) -> Result<(), LixError> {\n    let Some(value) = value else {\n        return Ok(());\n    };\n    let Some(pointers) = value.as_array() else {\n        return schema_amendment_error(\n            \"schema constraint fields must contain arrays of JSON Pointers\".to_string(),\n        );\n    };\n    for pointer in pointers {\n        let Some(pointer) = pointer.as_str() else {\n            return schema_amendment_error(\n    
            \"schema constraint fields must contain JSON Pointer strings\".to_string(),\n            );\n        };\n        if let Some(name) = top_level_property_name(pointer)? {\n            names.insert(name);\n        }\n    }\n    Ok(())\n}\n\nfn schema_amendment_error<T>(message: String) -> Result<T, LixError> {\n    Err(LixError::new(LixError::CODE_SCHEMA_DEFINITION, message))\n}\n\n#[cfg(test)]\nmod tests {\n    use serde_json::{json, Value as JsonValue};\n\n    use super::validate_schema_amendment;\n\n    fn base_schema() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"library_book\",\n            \"type\": \"object\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"x-lix-unique\": [[\"/isbn\"]],\n            \"x-lix-foreign-keys\": [\n                {\n                    \"properties\": [\"/author_id\"],\n                    \"references\": {\n                        \"schemaKey\": \"library_author\",\n                        \"properties\": [\"/id\"]\n                    }\n                }\n            ],\n            \"x-lix-state-foreign-keys\": [\n                [\"/target_entity_id\", \"/target_schema_key\", \"/target_file_id\"]\n            ],\n            \"properties\": {\n                \"id\": { \"type\": \"string\", \"description\": \"Stable id\" },\n                \"isbn\": { \"type\": \"string\" },\n                \"title\": { \"type\": \"string\", \"title\": \"Title\" },\n                \"author_id\": { \"type\": \"string\" },\n                \"target_entity_id\": {\n                    \"type\": \"array\",\n                    \"items\": { \"type\": \"string\" }\n                },\n                \"target_schema_key\": { \"type\": \"string\" },\n                \"target_file_id\": { \"type\": [\"string\", \"null\"] }\n            },\n            \"required\": [\n                \"id\",\n                \"isbn\",\n                \"title\",\n                \"author_id\",\n                
\"target_entity_id\",\n                \"target_schema_key\",\n                \"target_file_id\"\n            ],\n            \"additionalProperties\": false\n        })\n    }\n\n    #[test]\n    fn allows_doc_only_changes_on_existing_properties() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"description\"] = json!(\"A library book relation\");\n        next[\"title\"] = json!(\"Library Book\");\n        next[\"$comment\"] = json!(\"Top-level schema docs\");\n        next[\"deprecated\"] = json!(false);\n        next[\"properties\"][\"title\"][\"description\"] = json!(\"Human readable title\");\n        next[\"properties\"][\"title\"][\"title\"] = json!(\"Book title\");\n        next[\"properties\"][\"title\"][\"$comment\"] = json!(\"Shown in schema docs\");\n        next[\"properties\"][\"title\"][\"deprecated\"] = json!(true);\n\n        validate_schema_amendment(&previous, &next).expect(\"doc-only changes are compatible\");\n    }\n\n    #[test]\n    fn allows_adding_optional_property() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"properties\"][\"subtitle\"] = json!({\n            \"type\": \"string\",\n            \"description\": \"Optional subtitle\"\n        });\n\n        validate_schema_amendment(&previous, &next)\n            .expect(\"optional property addition is compatible\");\n    }\n\n    #[test]\n    fn allows_empty_properties_to_grow_with_optional_properties() {\n        let previous = json!({\n            \"x-lix-key\": \"library_empty\",\n            \"type\": \"object\",\n            \"properties\": {},\n            \"additionalProperties\": false\n        });\n        let next = json!({\n            \"x-lix-key\": \"library_empty\",\n            \"type\": \"object\",\n            \"properties\": {\n                \"title\": { \"type\": \"string\" }\n            },\n            \"additionalProperties\": false\n        });\n\n        
validate_schema_amendment(&previous, &next)\n            .expect(\"optional property addition from an empty schema is compatible\");\n    }\n\n    #[test]\n    fn accepts_cosmetic_constraint_list_reordering() {\n        let mut previous = base_schema();\n        previous[\"x-lix-unique\"] = json!([[\"/isbn\"], [\"/title\"]]);\n        previous[\"x-lix-foreign-keys\"] = json!([\n            {\n                \"properties\": [\"/author_id\"],\n                \"references\": {\n                    \"schemaKey\": \"library_author\",\n                    \"properties\": [\"/id\"]\n                }\n            },\n            {\n                \"properties\": [\"/isbn\"],\n                \"references\": {\n                    \"schemaKey\": \"library_isbn\",\n                    \"properties\": [\"/id\"]\n                }\n            }\n        ]);\n        previous[\"x-lix-state-foreign-keys\"] = json!([\n            [\"/target_entity_id\", \"/target_schema_key\", \"/target_file_id\"],\n            [\"/other_entity_id\", \"/other_schema_key\", \"/other_file_id\"]\n        ]);\n        let mut next = previous.clone();\n        next[\"x-lix-unique\"] = json!([[\"/title\"], [\"/isbn\"]]);\n        next[\"x-lix-foreign-keys\"] = json!([\n            {\n                \"properties\": [\"/isbn\"],\n                \"references\": {\n                    \"schemaKey\": \"library_isbn\",\n                    \"properties\": [\"/id\"]\n                }\n            },\n            {\n                \"properties\": [\"/author_id\"],\n                \"references\": {\n                    \"schemaKey\": \"library_author\",\n                    \"properties\": [\"/id\"]\n                }\n            }\n        ]);\n        next[\"x-lix-state-foreign-keys\"] = json!([\n            [\"/other_entity_id\", \"/other_schema_key\", \"/other_file_id\"],\n            [\"/target_entity_id\", \"/target_schema_key\", \"/target_file_id\"]\n        ]);\n\n        
validate_schema_amendment(&previous, &next)\n            .expect(\"cosmetic constraint list ordering should not matter\");\n    }\n\n    #[test]\n    fn rejects_required_set_shrink() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"required\"] = json!([\n            \"id\",\n            \"isbn\",\n            \"author_id\",\n            \"target_entity_id\",\n            \"target_schema_key\",\n            \"target_file_id\"\n        ]);\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"required properties must be frozen\");\n\n        assert!(\n            error.message.contains(\"required properties\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_schema_key_change() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"x-lix-key\"] = json!(\"library_periodical\");\n\n        let error =\n            validate_schema_amendment(&previous, &next).expect_err(\"schema key must be stable\");\n\n        assert!(\n            error.message.contains(\"x-lix-key\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_additional_properties_change() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"additionalProperties\"] = json!(true);\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"additionalProperties must remain false\");\n\n        assert!(\n            error.message.contains(\"additionalProperties\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_primary_key_change() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"x-lix-primary-key\"] = json!([\"/isbn\"]);\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"primary-key 
changes are incompatible\");\n\n        assert!(\n            error.message.contains(\"x-lix-primary-key\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_primary_key_reordering() {\n        let mut previous = base_schema();\n        previous[\"x-lix-primary-key\"] = json!([\"/id\", \"/isbn\"]);\n        let mut next = previous.clone();\n        next[\"x-lix-primary-key\"] = json!([\"/isbn\", \"/id\"]);\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"primary-key column order is semantic\");\n\n        assert!(\n            error.message.contains(\"x-lix-primary-key\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_unique_constraint_change() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"x-lix-unique\"] = json!([[\"/title\"]]);\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"unique changes are incompatible\");\n\n        assert!(\n            error.message.contains(\"x-lix-unique\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_foreign_key_change() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"x-lix-foreign-keys\"][0][\"references\"][\"schemaKey\"] = json!(\"library_person\");\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"foreign-key changes are incompatible\");\n\n        assert!(\n            error.message.contains(\"x-lix-foreign-keys\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_inner_foreign_key_pointer_reordering() {\n        let mut previous = base_schema();\n        previous[\"x-lix-foreign-keys\"] = json!([\n            {\n                \"properties\": [\"/author_id\", \"/isbn\"],\n                \"references\": {\n        
            \"schemaKey\": \"library_author\",\n                    \"properties\": [\"/id\", \"/isbn\"]\n                }\n            }\n        ]);\n        let mut next = previous.clone();\n        next[\"x-lix-foreign-keys\"] = json!([\n            {\n                \"properties\": [\"/isbn\", \"/author_id\"],\n                \"references\": {\n                    \"schemaKey\": \"library_author\",\n                    \"properties\": [\"/isbn\", \"/id\"]\n                }\n            }\n        ]);\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"FK tuple order is semantic and must remain frozen\");\n\n        assert!(\n            error.message.contains(\"x-lix-foreign-keys\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_state_foreign_key_change() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"x-lix-state-foreign-keys\"] = json!([]);\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"state foreign-key changes are incompatible\");\n\n        assert!(\n            error.message.contains(\"x-lix-state-foreign-keys\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_existing_property_type_change() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"properties\"][\"title\"][\"type\"] = json!(\"number\");\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"existing property semantics must not change\");\n\n        assert!(\n            error.message.contains(\"/title\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_nested_object_property_addition() {\n        let mut previous = base_schema();\n        previous[\"properties\"][\"metadata\"] = json!({\n            \"type\": \"object\",\n           
 \"properties\": {\n                \"source\": { \"type\": \"string\" }\n            },\n            \"additionalProperties\": false\n        });\n        let mut next = previous.clone();\n        next[\"properties\"][\"metadata\"][\"properties\"][\"page\"] = json!({ \"type\": \"number\" });\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"nested schema amendments are frozen for MVP\");\n\n        assert!(\n            error.message.contains(\"/metadata\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_top_level_type_change() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"type\"] = json!(\"array\");\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"top-level schema semantics must not change\");\n\n        assert!(\n            error.message.contains(\"top-level schema semantics\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_top_level_examples_change_and_names_field() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"examples\"] = json!([{ \"title\": \"Example\" }]);\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"examples are not an amendment annotation in the MVP\");\n\n        assert!(\n            error.message.contains(\"examples\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_existing_property_default_change() {\n        let mut previous = base_schema();\n        let mut next = base_schema();\n        previous[\"properties\"][\"title\"][\"default\"] = json!(\"Untitled\");\n        next[\"properties\"][\"title\"][\"default\"] = json!(\"Draft\");\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"existing defaults must not change\");\n\n        
assert!(\n            error.message.contains(\"/title\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_removed_property() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"properties\"].as_object_mut().unwrap().remove(\"title\");\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"properties must not be removed\");\n\n        assert!(\n            error.message.contains(\"remove property '/title'\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_added_required_property() {\n        let previous = base_schema();\n        let mut next = base_schema();\n        next[\"properties\"][\"subtitle\"] = json!({ \"type\": \"string\" });\n        next[\"required\"]\n            .as_array_mut()\n            .unwrap()\n            .push(json!(\"subtitle\"));\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"new properties must be optional\");\n\n        assert!(\n            error.message.contains(\"required\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_added_property_that_is_part_of_existing_constraints() {\n        let mut previous = base_schema();\n        previous[\"x-lix-unique\"] = json!([[\"/subtitle\"]]);\n        let mut next = previous.clone();\n        next[\"properties\"][\"subtitle\"] = json!({ \"type\": \"string\" });\n\n        let error = validate_schema_amendment(&previous, &next)\n            .expect_err(\"new properties must not be constraint participants\");\n\n        assert!(\n            error\n                .message\n                .contains(\"primary, unique, or foreign-key constraints\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn rejects_required_growth_for_existing_property() {\n        let mut previous = base_schema();\n     
   previous[\"required\"]\n            .as_array_mut()\n            .unwrap()\n            .retain(|value| value != \"title\");\n        let next = base_schema();\n\n        let error =\n            validate_schema_amendment(&previous, &next).expect_err(\"required set must not grow\");\n\n        assert!(\n            error.message.contains(\"cannot amend required properties\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/schema/definition.json",
    "content": "{\n  \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n  \"title\": \"Lix Schema Definition\",\n  \"description\": \"A Lix schema is a JSON Schema draft 2020-12 document augmented with `x-lix-*` extensions that identify and constrain an entity type. Every schema must declare `x-lix-key` (snake_case identifier, preferably prefixed with a plugin or domain namespace such as `library_book` to avoid collisions) and `additionalProperties: false`; add `x-lix-primary-key` (array of JSON Pointers to required string properties) to make the schema writable. Lix will auto-materialize a public virtual table named after `x-lix-key` with an `INSERT/UPDATE/DELETE` surface. See the `examples` field for a minimal working schema.\",\n  \"examples\": [\n    {\n      \"x-lix-key\": \"library_book\",\n      \"type\": \"object\",\n      \"x-lix-primary-key\": [\n        \"/id\"\n      ],\n      \"properties\": {\n        \"id\": {\n          \"type\": \"string\",\n          \"x-lix-default\": \"lix_uuid_v7()\"\n        },\n        \"title\": {\n          \"type\": \"string\"\n        },\n        \"author\": {\n          \"type\": \"string\"\n        }\n      },\n      \"required\": [\n        \"id\",\n        \"title\"\n      ],\n      \"additionalProperties\": false\n    }\n  ],\n  \"allOf\": [\n    {\n      \"$ref\": \"https://json-schema.org/draft/2020-12/schema\"\n    },\n    {\n      \"type\": \"object\",\n      \"properties\": {\n        \"x-lix-unique\": {\n          \"type\": \"array\",\n          \"description\": \"Array of composite unique constraints. Each inner array is a JSON Pointer (RFC 6901) per participating property, e.g. 
`[[\\\"/email\\\"], [\\\"/tenant_id\\\", \\\"/handle\\\"]]`.\",\n          \"items\": {\n            \"type\": \"array\",\n            \"minItems\": 1,\n            \"uniqueItems\": true,\n            \"items\": {\n              \"type\": \"string\",\n              \"format\": \"json-pointer\",\n              \"description\": \"JSON Pointer (RFC 6901) to the property, e.g. `/id` or `/nested/field`. Note the leading slash.\"\n            }\n          }\n        },\n        \"additionalProperties\": {\n          \"type\": \"boolean\",\n          \"const\": false,\n          \"description\": \"Objects describing Lix schemas must not allow arbitrary additional properties; set this explicitly to false.\"\n        },\n        \"x-lix-primary-key\": {\n          \"type\": \"array\",\n          \"minItems\": 1,\n          \"uniqueItems\": true,\n          \"description\": \"Primary-key fields as JSON Pointers (RFC 6901) into required string-valued entity properties, e.g. `[\\\"/id\\\"]` for a single-column key or `[\\\"/tenant_id\\\", \\\"/handle\\\"]` for a composite key. Note the leading slash; `\\\"id\\\"` without a slash is not a valid pointer.\",\n          \"items\": {\n            \"type\": \"string\",\n            \"format\": \"json-pointer\",\n            \"description\": \"JSON Pointer (RFC 6901) to a property that participates in the primary key, e.g. 
`/id` or `/nested/field`.\"\n          }\n        },\n        \"x-lix-foreign-keys\": {\n          \"type\": \"array\",\n          \"items\": {\n            \"type\": \"object\",\n            \"required\": [\n              \"properties\",\n              \"references\"\n            ],\n            \"additionalProperties\": false,\n            \"properties\": {\n              \"properties\": {\n                \"type\": \"array\",\n                \"minItems\": 1,\n                \"items\": {\n                  \"type\": \"string\",\n                  \"format\": \"json-pointer\",\n                  \"description\": \"JSON Pointer (RFC 6901) to the local field, e.g. `/author_id` or `/nested/field`.\"\n                },\n                \"uniqueItems\": true,\n                \"description\": \"Local-side participants in the FK, as JSON Pointers (RFC 6901), e.g. `[\\\"/author_id\\\"]` or `[\\\"/tenant_id\\\", \\\"/account_id\\\"]`.\"\n              },\n              \"references\": {\n                \"type\": \"object\",\n                \"required\": [\n                  \"schemaKey\",\n                  \"properties\"\n                ],\n                \"additionalProperties\": false,\n                \"properties\": {\n                  \"schemaKey\": {\n                    \"type\": \"string\",\n                    \"description\": \"The x-lix-key of the referenced schema\"\n                  },\n                  \"properties\": {\n                    \"type\": \"array\",\n                    \"minItems\": 1,\n                    \"items\": {\n                      \"type\": \"string\",\n                      \"format\": \"json-pointer\",\n                      \"description\": \"JSON Pointer (RFC 6901) to the remote field on the referenced schema, e.g. `/id`.\"\n                    },\n                    \"uniqueItems\": true,\n                    \"description\": \"Remote-side participants on the referenced schema, as JSON Pointers (RFC 6901). 
Must be the same length as the local `properties`.\"\n                  }\n                }\n              }\n            }\n          }\n        },\n        \"x-lix-state-foreign-keys\": {\n          \"type\": \"array\",\n          \"description\": \"Foreign keys from local fields to arbitrary live state rows. Each entry is exactly three required local JSON Pointers ordered as `[entity_id, schema_key, file_id]`: index 0 points to the local entity_id JSON array, index 1 points to the local schema_key string, and index 2 points to the local file_id string-or-null. Use explicit null for global file_id targets; omitted fields are invalid. The referenced state row is resolved in the same version.\",\n          \"items\": {\n            \"type\": \"array\",\n            \"minItems\": 3,\n            \"maxItems\": 3,\n            \"uniqueItems\": true,\n            \"prefixItems\": [\n              {\n                \"type\": \"string\",\n                \"format\": \"json-pointer\",\n                \"description\": \"[0] Local JSON Pointer for the target entity_id. The value must be a non-empty JSON array of strings.\"\n              },\n              {\n                \"type\": \"string\",\n                \"format\": \"json-pointer\",\n                \"description\": \"[1] Local JSON Pointer for the target schema_key. The value must be a string.\"\n              },\n              {\n                \"type\": \"string\",\n                \"format\": \"json-pointer\",\n                \"description\": \"[2] Local JSON Pointer for the target file_id. The value must be a string or null.\"\n              }\n            ],\n            \"items\": false\n          }\n        },\n        \"x-lix-key\": {\n          \"type\": \"string\",\n          \"pattern\": \"^[a-z][a-z0-9_]*$\",\n          \"description\": \"The schema identifier. Must be snake_case (lowercase, underscores) to safely embed in SQL identifiers. 
Prefix keys with a plugin, app, or domain namespace such as `library_book` or `csv_plugin_cell` to avoid collisions with other schemas.\",\n          \"examples\": [\n            \"library_book\",\n            \"csv_plugin_cell\"\n          ]\n        },\n        \"properties\": {\n          \"type\": \"object\",\n          \"additionalProperties\": {\n            \"allOf\": [\n              {\n                \"$ref\": \"https://json-schema.org/draft/2020-12/schema\"\n              },\n              {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"x-lix-default\": {\n                    \"type\": \"string\",\n                    \"format\": \"cel\",\n                    \"description\": \"CEL expression evaluated to produce the default value when the property is omitted. Available Lix-registered functions: `lix_uuid_v7()` (RFC 9562 UUIDv7), `lix_timestamp()` (ISO-8601 string). CEL literals are also valid: strings (`'open'`), numbers (`0`, `3.14`), booleans (`true` / `false`), and `null`.\",\n                    \"examples\": [\n                      \"lix_uuid_v7()\",\n                      \"lix_timestamp()\",\n                      \"false\",\n                      \"'open'\"\n                    ]\n                  }\n                }\n              }\n            ]\n          }\n        }\n      },\n      \"required\": [\n        \"x-lix-key\",\n        \"additionalProperties\"\n      ]\n    }\n  ]\n}\n"
  },
  {
    "path": "packages/engine/src/schema/definition.rs",
    "content": "use cel::Program;\nuse jsonschema::{Draft, JSONSchema};\nuse serde_json::Value as JsonValue;\nuse std::collections::BTreeSet;\nuse std::sync::OnceLock;\n\nuse crate::common::parse_json_pointer;\nuse crate::LixError;\n\nstatic LIX_SCHEMA_DEFINITION: OnceLock<JsonValue> = OnceLock::new();\nstatic LIX_SCHEMA_VALIDATOR: OnceLock<Result<JSONSchema, LixError>> = OnceLock::new();\n\npub fn lix_schema_definition() -> &'static JsonValue {\n    LIX_SCHEMA_DEFINITION.get_or_init(|| {\n        let raw = include_str!(\"definition.json\");\n        serde_json::from_str(raw).expect(\"definition.json must be valid JSON\")\n    })\n}\n\npub fn lix_schema_definition_json() -> &'static str {\n    include_str!(\"definition.json\")\n}\n\npub fn validate_lix_schema_definition(schema: &JsonValue) -> Result<(), LixError> {\n    if let Some(err) = detect_missing_pointer_slash(schema) {\n        return Err(err);\n    }\n    if let Some(err) = detect_state_foreign_key_tuple_shape(schema) {\n        return Err(err);\n    }\n\n    let validator = lix_schema_validator()?;\n    if let Err(errors) = validator.validate(schema) {\n        let details = format_lix_schema_validation_errors(errors);\n        return Err(LixError {\n            code: LixError::CODE_SCHEMA_DEFINITION.to_string(),\n            message: format!(\"Invalid Lix schema definition: {details}\"),\n            hint: None,\n            details: None,\n        });\n    }\n\n    assert_primary_key_pointers(schema)?;\n    assert_unique_pointers(schema)?;\n    assert_state_foreign_key_pointers(schema)?;\n    assert_known_x_lix_top_level_fields(schema)?;\n    assert_entity_properties_do_not_use_reserved_lix_prefix(schema)?;\n    assert_entity_properties_have_projectable_types(schema)?;\n\n    Ok(())\n}\n\nfn assert_entity_properties_do_not_use_reserved_lix_prefix(\n    schema: &JsonValue,\n) -> Result<(), LixError> {\n    let Some(schema_key) = schema.get(\"x-lix-key\").and_then(JsonValue::as_str) else {\n        return 
Ok(());\n    };\n    let Some(properties) = schema.get(\"properties\").and_then(JsonValue::as_object) else {\n        return Ok(());\n    };\n\n    for property_name in properties.keys() {\n        if property_name.starts_with(\"lix\") {\n            return Err(LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\n                    \"Invalid Lix schema definition: schema '{schema_key}' property '/{property_name}' uses reserved prefix 'lix'.\"\n                ),\n            )\n            .with_hint(\"Property names starting with 'lix' are reserved for Lix system fields.\"));\n        }\n    }\n\n    Ok(())\n}\n\nfn assert_entity_properties_have_projectable_types(schema: &JsonValue) -> Result<(), LixError> {\n    let Some(schema_key) = schema.get(\"x-lix-key\").and_then(JsonValue::as_str) else {\n        return Ok(());\n    };\n    let Some(properties) = schema.get(\"properties\").and_then(JsonValue::as_object) else {\n        return Ok(());\n    };\n\n    for (property_name, property_schema) in properties {\n        if !schema_property_has_sql_projection_type(property_schema) {\n            return Err(LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\n                    \"Invalid Lix schema definition: schema '{schema_key}' property '/{property_name}' must declare a SQL-projectable JSON Schema type\"\n                ),\n            )\n            .with_hint(\"Use an explicit type such as string, number, integer, boolean, object, array, or a supported union of those types.\"));\n        }\n    }\n\n    Ok(())\n}\n\nfn schema_property_has_sql_projection_type(schema: &JsonValue) -> bool {\n    let mut kinds = BTreeSet::new();\n    collect_schema_type_kinds(schema, &mut kinds);\n    kinds.remove(\"null\");\n    kinds.iter().any(|kind| {\n        matches!(\n            *kind,\n            \"boolean\" | \"integer\" | \"number\" | \"string\" | \"object\" | \"array\"\n        )\n    
})\n}\n\nfn collect_schema_type_kinds<'a>(schema: &'a JsonValue, out: &mut BTreeSet<&'a str>) {\n    match schema.get(\"type\") {\n        Some(JsonValue::String(kind)) => {\n            out.insert(kind.as_str());\n        }\n        Some(JsonValue::Array(kinds)) => {\n            for kind in kinds.iter().filter_map(JsonValue::as_str) {\n                out.insert(kind);\n            }\n        }\n        _ => {}\n    }\n\n    for keyword in [\"anyOf\", \"oneOf\", \"allOf\"] {\n        if let Some(JsonValue::Array(branches)) = schema.get(keyword) {\n            for branch in branches {\n                collect_schema_type_kinds(branch, out);\n            }\n        }\n    }\n}\n\n/// Detect the common no-leading-slash mistake in JSON-Pointer-valued fields\n/// (`x-lix-primary-key`, `x-lix-unique`, `x-lix-foreign-keys[].properties`,\n/// `x-lix-foreign-keys[].references.properties`,\n/// `x-lix-state-foreign-keys[]`) and return a targeted\n/// error + hint suggesting the fix.\n///\n/// Surfacing this before the meta-schema validator runs replaces the\n/// generic `format \"json-pointer\"` failure with a message that tells the\n/// user exactly what to change (e.g. 
`\"id\"` → `\"/id\"`).\nfn detect_missing_pointer_slash(schema: &JsonValue) -> Option<LixError> {\n    let mut offenders: Vec<(String, String)> = Vec::new();\n\n    fn collect(items: Option<&Vec<JsonValue>>, label: &str, out: &mut Vec<(String, String)>) {\n        let Some(items) = items else {\n            return;\n        };\n        for item in items {\n            if let Some(s) = item.as_str() {\n                if !s.is_empty() && !s.starts_with('/') {\n                    out.push((label.to_string(), s.to_string()));\n                }\n            }\n        }\n    }\n\n    collect(\n        schema\n            .get(\"x-lix-primary-key\")\n            .and_then(JsonValue::as_array),\n        \"x-lix-primary-key\",\n        &mut offenders,\n    );\n\n    if let Some(groups) = schema.get(\"x-lix-unique\").and_then(JsonValue::as_array) {\n        for group in groups {\n            collect(group.as_array(), \"x-lix-unique\", &mut offenders);\n        }\n    }\n\n    if let Some(fks) = schema\n        .get(\"x-lix-foreign-keys\")\n        .and_then(JsonValue::as_array)\n    {\n        for fk in fks {\n            collect(\n                fk.get(\"properties\").and_then(JsonValue::as_array),\n                \"x-lix-foreign-keys[].properties\",\n                &mut offenders,\n            );\n            collect(\n                fk.get(\"references\")\n                    .and_then(|r| r.get(\"properties\"))\n                    .and_then(JsonValue::as_array),\n                \"x-lix-foreign-keys[].references.properties\",\n                &mut offenders,\n            );\n        }\n    }\n\n    if let Some(fks) = schema\n        .get(\"x-lix-state-foreign-keys\")\n        .and_then(JsonValue::as_array)\n    {\n        for fk in fks {\n            collect(fk.as_array(), \"x-lix-state-foreign-keys\", &mut offenders);\n        }\n    }\n\n    if offenders.is_empty() {\n        return None;\n    }\n\n    let examples = offenders\n        .iter()\n        
.take(3)\n        .map(|(field, value)| format!(\"{field}: \\\"{value}\\\" → \\\"/{value}\\\"\"))\n        .collect::<Vec<_>>()\n        .join(\"; \");\n    let message = format!(\n        \"Invalid Lix schema definition: JSON Pointer values must begin with '/'. Offending entries: {examples}\"\n    );\n    let hint = format!(\n        \"Did you mean [\\\"/{}\\\"]? JSON Pointer values must prefix property names with '/' (RFC 6901).\",\n        offenders[0].1\n    );\n    Some(\n        LixError {\n            code: LixError::CODE_SCHEMA_DEFINITION.to_string(),\n            message,\n            hint: None,\n            details: None,\n        }\n        .with_hint(hint),\n    )\n}\n\nfn detect_state_foreign_key_tuple_shape(schema: &JsonValue) -> Option<LixError> {\n    let foreign_keys = schema\n        .get(\"x-lix-state-foreign-keys\")\n        .and_then(JsonValue::as_array)?;\n    for (index, foreign_key) in foreign_keys.iter().enumerate() {\n        let Some(local_pointers) = foreign_key.as_array() else {\n            continue;\n        };\n        if local_pointers.len() != 3 {\n            return Some(LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\n                    \"Invalid Lix schema definition: x-lix-state-foreign-keys[{index}] must contain exactly three JSON Pointers ordered as [entity_id, schema_key, file_id]; [0] entity_id, [1] schema_key, [2] file_id.\"\n                ),\n            ));\n        }\n    }\n    None\n}\n\npub fn validate_lix_schema(schema: &JsonValue, data: &JsonValue) -> Result<(), LixError> {\n    validate_lix_schema_definition(schema)?;\n\n    let validator = compile_lix_schema(schema)?;\n    if let Err(errors) = validator.validate(data) {\n        let details = format_lix_schema_validation_errors(errors);\n        return Err(LixError {\n            code: LixError::CODE_SCHEMA_VALIDATION.to_string(),\n            message: format!(\"Data validation failed: {details}\"),\n            
hint: None,\n            details: None,\n        });\n    }\n\n    Ok(())\n}\n\nfn lix_schema_validator() -> Result<&'static JSONSchema, LixError> {\n    let result = LIX_SCHEMA_VALIDATOR.get_or_init(|| compile_lix_schema(lix_schema_definition()));\n    match result {\n        Ok(schema) => Ok(schema),\n        Err(err) => Err(LixError {\n            code: LixError::CODE_SCHEMA_DEFINITION.to_string(),\n            message: err.message.clone(),\n            hint: None,\n            details: None,\n        }),\n    }\n}\n\npub(crate) fn compile_lix_schema(schema: &JsonValue) -> Result<JSONSchema, LixError> {\n    let mut options = JSONSchema::options();\n    options.with_meta_schemas();\n    if schema_uses_draft_2020_12_without_fragment(schema) {\n        options.with_draft(Draft::Draft202012);\n    }\n    options.should_validate_formats(true);\n    options.with_format(\"json-pointer\", is_json_pointer);\n    options.with_format(\"cel\", is_cel_expression);\n\n    options.compile(schema).map_err(|err| LixError {\n        code: LixError::CODE_SCHEMA_DEFINITION.to_string(),\n        message: format!(\"Failed to compile Lix schema definition: {err}\"),\n        hint: None,\n        details: None,\n    })\n}\n\nfn schema_uses_draft_2020_12_without_fragment(schema: &JsonValue) -> bool {\n    schema\n        .get(\"$schema\")\n        .and_then(JsonValue::as_str)\n        .is_some_and(|url| url == \"https://json-schema.org/draft/2020-12/schema\")\n}\n\nfn is_json_pointer(value: &str) -> bool {\n    parse_json_pointer(value).is_ok()\n}\n\nfn is_cel_expression(value: &str) -> bool {\n    Program::compile(value).is_ok()\n}\n\nfn assert_primary_key_pointers(schema: &JsonValue) -> Result<(), LixError> {\n    let Some(primary_key) = schema\n        .get(\"x-lix-primary-key\")\n        .and_then(|value| value.as_array())\n    else {\n        return Ok(());\n    };\n\n    for pointer in primary_key {\n        let Some(pointer) = pointer.as_str() else {\n            continue;\n     
   };\n        let segments = parse_json_pointer(pointer)?;\n        let Some(property_schema) = (!segments.is_empty())\n            .then(|| schema_property(schema, &segments))\n            .flatten()\n        else {\n            return Err(LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\n                    \"Invalid Lix schema definition: x-lix-primary-key references missing property \\\"{pointer}\\\".\"\n                ),\n            ));\n        };\n        if !schema_property_is_string_only(property_schema) {\n            return Err(LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\n                    \"Invalid Lix schema definition: x-lix-primary-key property \\\"{pointer}\\\" must have type \\\"string\\\".\"\n                ),\n            ));\n        }\n        if !schema_pointer_is_required(schema, &segments) {\n            return Err(LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\n                    \"Invalid Lix schema definition: x-lix-primary-key property \\\"{pointer}\\\" must be required.\"\n                ),\n            ));\n        }\n    }\n\n    Ok(())\n}\n\nfn assert_unique_pointers(schema: &JsonValue) -> Result<(), LixError> {\n    let Some(unique_groups) = schema\n        .get(\"x-lix-unique\")\n        .and_then(|value| value.as_array())\n    else {\n        return Ok(());\n    };\n\n    for group in unique_groups {\n        let Some(group) = group.as_array() else {\n            continue;\n        };\n        for pointer in group {\n            let Some(pointer) = pointer.as_str() else {\n                continue;\n            };\n            let segments = parse_json_pointer(pointer)?;\n            if segments.is_empty() || !schema_has_property(schema, &segments) {\n                return Err(LixError { code: 
LixError::CODE_SCHEMA_DEFINITION.to_string(),\n                    message: format!(\n                        \"Invalid Lix schema definition: x-lix-unique references missing property \\\"{pointer}\\\".\"\n                    ),\n                    hint: None,\n                    details: None,\n                });\n            }\n        }\n    }\n\n    Ok(())\n}\n\nfn assert_state_foreign_key_pointers(schema: &JsonValue) -> Result<(), LixError> {\n    let Some(foreign_keys) = schema\n        .get(\"x-lix-state-foreign-keys\")\n        .and_then(|value| value.as_array())\n    else {\n        return Ok(());\n    };\n\n    for (index, foreign_key) in foreign_keys.iter().enumerate() {\n        let Some(local_pointers) = foreign_key.as_array() else {\n            continue;\n        };\n        if local_pointers.len() != 3 {\n            continue;\n        }\n\n        let roles = [\n            (\"entity_id\", \"a non-empty JSON array of strings\"),\n            (\"schema_key\", \"a string\"),\n            (\"file_id\", \"a string or null\"),\n        ];\n        for (slot, (role, expected)) in roles.iter().enumerate() {\n            let Some(pointer) = local_pointers[slot].as_str() else {\n                continue;\n            };\n            let segments = parse_json_pointer(pointer)?;\n            let Some(property_schema) = (!segments.is_empty())\n                .then(|| schema_property(schema, &segments))\n                .flatten()\n            else {\n                return Err(LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\n                        \"Invalid Lix schema definition: x-lix-state-foreign-keys[{index}][{slot}] ({role}) references missing property \\\"{pointer}\\\".\"\n                    ),\n                ));\n            };\n            if !schema_pointer_is_required(schema, &segments) {\n                return Err(LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n        
            format!(\n                        \"Invalid Lix schema definition: x-lix-state-foreign-keys[{index}][{slot}] ({role}) property \\\"{pointer}\\\" must be required. Tuple order is [entity_id, schema_key, file_id].\"\n                    ),\n                ));\n            }\n\n            let valid = match *role {\n                \"entity_id\" => schema_property_is_string_array(property_schema),\n                \"schema_key\" => schema_property_is_string_only(property_schema),\n                \"file_id\" => schema_property_is_string_or_null(property_schema),\n                _ => unreachable!(\"state foreign key roles are exhaustive\"),\n            };\n            if !valid {\n                return Err(LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\n                        \"Invalid Lix schema definition: x-lix-state-foreign-keys[{index}][{slot}] ({role}) property \\\"{pointer}\\\" must be {expected}. Tuple order is [entity_id, schema_key, file_id].\"\n                    ),\n                ));\n            }\n        }\n    }\n\n    Ok(())\n}\n\nfn assert_known_x_lix_top_level_fields(schema: &JsonValue) -> Result<(), LixError> {\n    let Some(object) = schema.as_object() else {\n        return Ok(());\n    };\n\n    for key in object.keys() {\n        if !key.starts_with(\"x-lix-\") {\n            continue;\n        }\n\n        let known = matches!(\n            key.as_str(),\n            \"x-lix-key\"\n                | \"x-lix-primary-key\"\n                | \"x-lix-unique\"\n                | \"x-lix-foreign-keys\"\n                | \"x-lix-state-foreign-keys\"\n        );\n\n        if !known {\n            return Err(LixError {\n                code: LixError::CODE_SCHEMA_DEFINITION.to_string(),\n                message: format!(\n                    \"Invalid Lix schema definition: unknown x-lix field '{}'.\",\n                    key\n                ),\n                hint: None,\n   
             details: None,\n            });\n        }\n    }\n\n    Ok(())\n}\n\nfn schema_has_property(schema: &JsonValue, segments: &[String]) -> bool {\n    schema_property(schema, segments).is_some()\n}\n\nfn schema_pointer_is_required(schema: &JsonValue, segments: &[String]) -> bool {\n    if segments.is_empty() {\n        return false;\n    }\n\n    let mut node = schema;\n    for segment in segments {\n        let required = node\n            .get(\"required\")\n            .and_then(JsonValue::as_array)\n            .map(|required| {\n                required\n                    .iter()\n                    .any(|required_property| required_property.as_str() == Some(segment))\n            })\n            .unwrap_or(false);\n        if !required {\n            return false;\n        }\n\n        let Some(next) = node\n            .get(\"properties\")\n            .and_then(JsonValue::as_object)\n            .and_then(|properties| properties.get(segment))\n        else {\n            return false;\n        };\n        node = next;\n    }\n\n    true\n}\n\nfn schema_property<'a>(schema: &'a JsonValue, segments: &[String]) -> Option<&'a JsonValue> {\n    let mut node = schema;\n    for segment in segments {\n        let properties = node.get(\"properties\")?.as_object()?;\n        let next = properties.get(segment)?;\n        node = next;\n    }\n    Some(node)\n}\n\nfn schema_property_is_string_only(schema: &JsonValue) -> bool {\n    let mut kinds = BTreeSet::new();\n    collect_schema_type_kinds(schema, &mut kinds);\n    kinds.len() == 1 && kinds.contains(\"string\")\n}\n\nfn schema_property_is_string_or_null(schema: &JsonValue) -> bool {\n    let mut kinds = BTreeSet::new();\n    collect_schema_type_kinds(schema, &mut kinds);\n    kinds.remove(\"null\");\n    kinds.len() == 1 && kinds.contains(\"string\")\n}\n\nfn schema_property_is_string_array(schema: &JsonValue) -> bool {\n    let mut kinds = BTreeSet::new();\n    collect_schema_type_kinds(schema, &mut 
kinds);\n    if kinds.len() != 1 || !kinds.contains(\"array\") {\n        return false;\n    }\n    let Some(items) = schema.get(\"items\") else {\n        return false;\n    };\n    if !schema_property_is_string_only(items) {\n        return false;\n    }\n    schema\n        .get(\"minItems\")\n        .and_then(JsonValue::as_u64)\n        .is_some_and(|min_items| min_items >= 1)\n}\n\npub(crate) fn format_lix_schema_validation_errors<'a>(\n    errors: impl Iterator<Item = jsonschema::ValidationError<'a>>,\n) -> String {\n    let mut parts = Vec::new();\n    for error in errors {\n        let path = error.instance_path.to_string();\n        let message = error.to_string();\n        if path.is_empty() {\n            parts.push(message);\n        } else {\n            parts.push(format!(\"{path} {message}\"));\n        }\n    }\n    if parts.is_empty() {\n        \"Unknown validation error\".to_string()\n    } else {\n        parts.join(\"; \")\n    }\n}\n\n#[cfg(test)]\nmod pointer_slash_detection_tests {\n    use super::*;\n    use serde_json::json;\n\n    fn minimal_schema_with(extras: serde_json::Value) -> JsonValue {\n        let mut obj = json!({\n            \"type\": \"object\",\n            \"x-lix-key\": \"book\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"author_id\": { \"type\": \"string\" },\n                \"tenant_id\": { \"type\": \"string\" },\n                \"handle\": { \"type\": \"string\" },\n            },\n            \"required\": [\"id\"],\n            \"additionalProperties\": false,\n        });\n        let extras_obj = extras.as_object().expect(\"extras must be object\").clone();\n        for (k, v) in extras_obj {\n            obj.as_object_mut().unwrap().insert(k, v);\n        }\n        obj\n    }\n\n    fn err_for(schema: &JsonValue) -> LixError {\n        validate_lix_schema_definition(schema).expect_err(\"should reject\")\n    }\n\n    #[test]\n    fn 
primary_key_without_slash_emits_targeted_hint() {\n        let schema = minimal_schema_with(json!({ \"x-lix-primary-key\": [\"id\"] }));\n        let err = err_for(&schema);\n        assert_eq!(\n            err.code,\n            LixError::CODE_SCHEMA_DEFINITION,\n            \"schema-definition errors should carry the categorized code\"\n        );\n        assert!(\n            err.message.contains(\"must begin with '/'\"),\n            \"unexpected message: {}\",\n            err.message\n        );\n        assert!(\n            err.message.contains(\"x-lix-primary-key: \\\"id\\\" → \\\"/id\\\"\"),\n            \"message should show the fix: {}\",\n            err.message\n        );\n        let hint = err.hint.as_deref().expect(\"should carry a hint\");\n        assert!(\n            hint.contains(\"/id\"),\n            \"hint should show fixed pointer: {hint}\"\n        );\n        assert!(\n            hint.contains(\"RFC 6901\"),\n            \"hint should cite the RFC: {hint}\"\n        );\n    }\n\n    #[test]\n    fn unique_without_slash_emits_targeted_hint() {\n        let schema = minimal_schema_with(json!({\n            \"x-lix-primary-key\": [\"/id\"],\n            \"x-lix-unique\": [[\"handle\"]],\n        }));\n        let err = err_for(&schema);\n        assert!(\n            err.message\n                .contains(\"x-lix-unique: \\\"handle\\\" → \\\"/handle\\\"\"),\n            \"should flag x-lix-unique entry: {}\",\n            err.message\n        );\n        assert!(err.hint.is_some());\n    }\n\n    #[test]\n    fn foreign_key_local_without_slash_emits_targeted_hint() {\n        let schema = minimal_schema_with(json!({\n            \"x-lix-primary-key\": [\"/id\"],\n            \"x-lix-foreign-keys\": [{\n                \"properties\": [\"author_id\"],\n                \"references\": {\n                    \"schemaKey\": \"author\",\n                    \"properties\": [\"/id\"],\n                }\n            }]\n        }));\n        
let err = err_for(&schema);\n        assert!(\n            err.message\n                .contains(\"x-lix-foreign-keys[].properties: \\\"author_id\\\" → \\\"/author_id\\\"\"),\n            \"should flag FK local entry: {}\",\n            err.message\n        );\n    }\n\n    #[test]\n    fn foreign_key_remote_without_slash_emits_targeted_hint() {\n        let schema = minimal_schema_with(json!({\n            \"x-lix-primary-key\": [\"/id\"],\n            \"x-lix-foreign-keys\": [{\n                \"properties\": [\"/author_id\"],\n                \"references\": {\n                    \"schemaKey\": \"author\",\n                    \"properties\": [\"id\"],\n                }\n            }]\n        }));\n        let err = err_for(&schema);\n        assert!(\n            err.message\n                .contains(\"x-lix-foreign-keys[].references.properties: \\\"id\\\" → \\\"/id\\\"\"),\n            \"should flag FK remote entry: {}\",\n            err.message\n        );\n    }\n\n    #[test]\n    fn valid_pointers_pass_pre_check() {\n        let schema = minimal_schema_with(json!({\n            \"x-lix-primary-key\": [\"/id\"],\n            \"x-lix-unique\": [[\"/handle\"], [\"/tenant_id\", \"/handle\"]],\n            \"x-lix-foreign-keys\": [{\n                \"properties\": [\"/author_id\"],\n                \"references\": {\n                    \"schemaKey\": \"author\",\n                    \"properties\": [\"/id\"],\n                }\n            }]\n        }));\n        assert!(detect_missing_pointer_slash(&schema).is_none());\n    }\n\n    #[test]\n    fn draft_2020_12_json_pointer_format_still_asserts() {\n        let schema = json!({\n            \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n            \"type\": \"object\",\n            \"properties\": {\n                \"pointer\": {\n                    \"type\": \"string\",\n                    \"format\": \"json-pointer\"\n                }\n            }\n        });\n\n        
let validator = compile_lix_schema(&schema).expect(\"2020-12 schema should compile\");\n\n        assert!(validator.is_valid(&json!({ \"pointer\": \"/id\" })));\n        assert!(!validator.is_valid(&json!({ \"pointer\": \"id\" })));\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/schema/key.rs",
    "content": "use serde_json::Value as JsonValue;\n\nuse crate::entity_identity::EntityIdentity;\nuse crate::LixError;\n\n#[derive(Debug, Clone, PartialEq, Eq, Hash)]\npub struct SchemaKey {\n    pub schema_key: String,\n}\n\nimpl SchemaKey {\n    pub fn new(schema_key: impl Into<String>) -> Self {\n        Self {\n            schema_key: schema_key.into(),\n        }\n    }\n}\n\npub fn schema_key_from_definition(schema: &JsonValue) -> Result<SchemaKey, LixError> {\n    let object = schema.as_object().ok_or_else(|| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: \"schema definition must be a JSON object\".to_string(),\n        hint: None,\n        details: None,\n    })?;\n    let schema_key = object\n        .get(\"x-lix-key\")\n        .and_then(JsonValue::as_str)\n        .ok_or_else(|| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"schema definition must include string x-lix-key\".to_string(),\n            hint: None,\n            details: None,\n        })?;\n\n    Ok(SchemaKey::new(schema_key.to_string()))\n}\n\npub fn schema_from_registered_snapshot(\n    snapshot: &JsonValue,\n) -> Result<(SchemaKey, JsonValue), LixError> {\n    let value = snapshot.get(\"value\").ok_or_else(|| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: \"registered schema snapshot_content missing value\".to_string(),\n        hint: None,\n        details: None,\n    })?;\n    let value = value.as_object().ok_or_else(|| LixError {\n        code: \"LIX_ERROR_UNKNOWN\".to_string(),\n        message: \"registered schema snapshot_content value must be an object\".to_string(),\n        hint: None,\n        details: None,\n    })?;\n\n    let schema_key = value\n        .get(\"x-lix-key\")\n        .and_then(|value| value.as_str())\n        .ok_or_else(|| LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"registered schema value.x-lix-key must be 
string\".to_string(),\n            hint: None,\n            details: None,\n        })?;\n\n    Ok((\n        SchemaKey::new(schema_key.to_string()),\n        JsonValue::Object(value.clone()),\n    ))\n}\n\npub(crate) fn registered_schema_entity_id(schema_key: &str) -> Result<EntityIdentity, LixError> {\n    EntityIdentity::from_primary_key_paths(\n        &serde_json::json!({\n            \"value\": {\n                \"x-lix-key\": schema_key,\n            }\n        }),\n        &[vec![\"value\".to_string(), \"x-lix-key\".to_string()]],\n    )\n    .map_err(|error| {\n        LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\"registered schema identity could not be derived for schema '{schema_key}': {error}\"),\n        )\n    })\n}\n\n#[cfg(test)]\nmod tests {\n    use serde_json::json;\n\n    use super::{schema_from_registered_snapshot, schema_key_from_definition, SchemaKey};\n\n    #[test]\n    fn schema_from_registered_snapshot_extracts_key_and_schema() {\n        let snapshot = json!({\n            \"value\": {\n                \"x-lix-key\": \"profile\",\n                \"type\": \"object\"\n            }\n        });\n\n        let (key, schema) = schema_from_registered_snapshot(&snapshot).expect(\"schema is valid\");\n        assert_eq!(key, SchemaKey::new(\"profile\"));\n        assert_eq!(schema[\"type\"], json!(\"object\"));\n    }\n\n    #[test]\n    fn schema_from_registered_snapshot_requires_value_object() {\n        let snapshot = json!({});\n\n        let err = schema_from_registered_snapshot(&snapshot).expect_err(\"should fail\");\n        assert!(err.message.contains(\"missing value\"), \"{err:?}\");\n    }\n\n    #[test]\n    fn schema_from_registered_snapshot_requires_string_key() {\n        let snapshot = json!({\n            \"value\": {\n                \"x-lix-key\": 1,\n            }\n        });\n\n        let err = schema_from_registered_snapshot(&snapshot).expect_err(\"should fail\");\n        
assert!(err.message.contains(\"x-lix-key\"), \"{err:?}\");\n    }\n\n    #[test]\n    fn schema_key_from_definition_extracts_key() {\n        let schema = json!({\n            \"x-lix-key\": \"users\",\n            \"type\": \"object\"\n        });\n\n        let key = schema_key_from_definition(&schema).expect(\"schema key\");\n        assert_eq!(key, SchemaKey::new(\"users\"));\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/schema/mod.rs",
    "content": "mod builtin;\n#[allow(dead_code)]\npub(crate) mod compatibility;\nmod definition;\nmod key;\npub(crate) mod seed;\n#[cfg(test)]\nmod tests;\n\npub(crate) use compatibility::validate_schema_amendment;\npub(crate) use definition::{compile_lix_schema, format_lix_schema_validation_errors};\npub use definition::{\n    lix_schema_definition, lix_schema_definition_json, validate_lix_schema,\n    validate_lix_schema_definition,\n};\npub(crate) use key::registered_schema_entity_id;\npub use key::{schema_from_registered_snapshot, schema_key_from_definition, SchemaKey};\n#[cfg(test)]\npub(crate) use seed::seed_schema_definition;\npub(crate) use seed::{is_seed_schema_key, seed_schema_definitions};\n"
  },
  {
    "path": "packages/engine/src/schema/seed.rs",
    "content": "use serde_json::Value as JsonValue;\n\npub(crate) fn is_seed_schema_key(schema_key: &str) -> bool {\n    super::builtin::is_seed_schema_key(schema_key)\n}\n\n#[cfg(test)]\npub(crate) fn seed_schema_definition(schema_key: &str) -> Option<&'static JsonValue> {\n    super::builtin::seed_schema_definition(schema_key)\n}\n\npub(crate) fn seed_schema_definitions() -> Vec<&'static JsonValue> {\n    super::builtin::seed_schema_definitions()\n}\n"
  },
  {
    "path": "packages/engine/src/schema/tests.rs",
    "content": "use crate::{validate_lix_schema, validate_lix_schema_definition};\nuse serde_json::json;\n\n#[test]\nfn validate_lix_schema_definition_passes_for_valid_schema() {\n    let valid_schema = json!({\n        \"x-lix-key\": \"test_entity\",\n        \"type\": \"object\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" }\n        },\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&valid_schema).is_ok());\n}\n\n#[test]\nfn validate_lix_schema_definition_rejects_unprojectable_entity_properties() {\n    let schema = json!({\n        \"x-lix-key\": \"test_entity\",\n        \"type\": \"object\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"kind\": {}\n        },\n        \"required\": [\"id\", \"kind\"],\n        \"additionalProperties\": false\n    });\n\n    let err = validate_lix_schema_definition(&schema).unwrap_err();\n    assert!(\n        err.to_string().contains(\"property '/kind'\"),\n        \"error should identify the unprojectable property: {err:?}\"\n    );\n    assert!(\n        err.to_string().contains(\"SQL-projectable JSON Schema type\"),\n        \"error should explain the projection requirement: {err:?}\"\n    );\n}\n\n#[test]\nfn validate_lix_schema_definition_rejects_reserved_lix_property_prefixes() {\n    for property_name in [\"lixcol_entity_id\", \"lix_internal\", \"lixfoo\"] {\n        let schema = json!({\n            \"x-lix-key\": \"test_entity\",\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                property_name: { \"type\": \"string\" }\n            },\n            \"required\": [\"id\", property_name],\n            \"additionalProperties\": false\n        });\n\n        let err = validate_lix_schema_definition(&schema)\n            .expect_err(\"reserved property names should be rejected\");\n        assert!(\n            
err.to_string().contains(&format!(\n                \"property '/{property_name}' uses reserved prefix 'lix'\"\n            )),\n            \"error should identify the reserved property name: {err:?}\"\n        );\n    }\n}\n\n#[test]\nfn validate_lix_schema_definition_throws_for_invalid_schema() {\n    let invalid_schema = json!({\n        \"type\": \"object\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" }\n        },\n        \"additionalProperties\": false\n    });\n\n    let err = validate_lix_schema_definition(&invalid_schema).unwrap_err();\n    assert!(err.to_string().contains(\"Invalid Lix schema definition\"));\n}\n\n#[test]\nfn validate_lix_schema_validates_both_schema_and_data_successfully() {\n    let schema = json!({\n        \"x-lix-key\": \"user\",\n        \"type\": \"object\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"name\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", \"name\"],\n        \"additionalProperties\": false\n    });\n\n    let valid_data = json!({\n        \"id\": \"123\",\n        \"name\": \"John Doe\"\n    });\n\n    assert!(validate_lix_schema(&schema, &valid_data).is_ok());\n}\n\n#[test]\nfn validate_lix_schema_throws_when_schema_is_invalid() {\n    let invalid_schema = json!({\n        \"type\": \"object\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" }\n        },\n        \"additionalProperties\": false\n    });\n\n    let data = json!({ \"id\": \"123\" });\n\n    let err = validate_lix_schema(&invalid_schema, &data).unwrap_err();\n    assert!(err.to_string().contains(\"Invalid Lix schema definition\"));\n}\n\n#[test]\nfn validate_lix_schema_throws_when_data_does_not_match_schema() {\n    let schema = json!({\n        \"x-lix-key\": \"user\",\n        \"type\": \"object\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"name\": { \"type\": \"string\" }\n        
},\n        \"required\": [\"id\", \"name\"],\n        \"additionalProperties\": false\n    });\n\n    let invalid_data = json!({ \"id\": \"123\" });\n\n    let err = validate_lix_schema(&schema, &invalid_data).unwrap_err();\n    assert!(err.to_string().contains(\"Data validation failed\"));\n}\n\n#[test]\nfn validate_lix_schema_definition_rejects_when_additional_properties_missing() {\n    let schema = json!({\n        \"x-lix-key\": \"user\",\n        \"type\": \"object\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\"]\n    });\n\n    let err = validate_lix_schema_definition(&schema).unwrap_err();\n    assert!(err.to_string().contains(\"Invalid Lix schema definition\"));\n}\n\n#[test]\nfn additional_properties_must_be_false() {\n    let schema_with_additional_props = json!({\n        \"x-lix-key\": \"user\",\n        \"type\": \"object\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"name\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", \"name\"],\n        \"additionalProperties\": true\n    });\n\n    assert!(validate_lix_schema_definition(&schema_with_additional_props).is_err());\n\n    let valid_schema = json!({\n        \"x-lix-key\": \"user\",\n        \"type\": \"object\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"name\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", \"name\"],\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&valid_schema).is_ok());\n\n    let data = json!({\n        \"id\": \"123\",\n        \"name\": \"John Doe\",\n        \"extraField\": \"not allowed\"\n    });\n\n    let err = validate_lix_schema(&valid_schema, &data).unwrap_err();\n    assert!(err.to_string().contains(\"Data validation failed\"));\n}\n\n#[test]\nfn validate_lix_schema_definition_rejects_missing_primary_key_properties() 
{\n    let schema = json!({\n        \"x-lix-key\": \"missing_pk\",\n        \"type\": \"object\",\n        \"properties\": {\n            \"value\": { \"type\": \"string\" }\n        },\n        \"required\": [\"value\"],\n        \"x-lix-primary-key\": [\"/entity_id\"],\n        \"additionalProperties\": false\n    });\n\n    let err = validate_lix_schema_definition(&schema).unwrap_err();\n    assert!(err\n        .to_string()\n        .contains(\"x-lix-primary-key references missing property\"));\n}\n\n#[test]\nfn validate_lix_schema_definition_rejects_non_string_primary_key_properties() {\n    let schema = json!({\n        \"x-lix-key\": \"numeric_pk\",\n        \"type\": \"object\",\n        \"properties\": {\n            \"id\": { \"type\": \"number\" },\n            \"value\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", \"value\"],\n        \"x-lix-primary-key\": [\"/id\"],\n        \"additionalProperties\": false\n    });\n\n    let err = validate_lix_schema_definition(&schema).unwrap_err();\n    assert!(err\n        .to_string()\n        .contains(\"x-lix-primary-key property \\\"/id\\\" must have type \\\"string\\\"\"));\n}\n\n#[test]\nfn validate_lix_schema_definition_rejects_optional_primary_key_properties() {\n    let schema = json!({\n        \"x-lix-key\": \"optional_pk\",\n        \"type\": \"object\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"value\": { \"type\": \"string\" }\n        },\n        \"required\": [\"value\"],\n        \"x-lix-primary-key\": [\"/id\"],\n        \"additionalProperties\": false\n    });\n\n    let err = validate_lix_schema_definition(&schema)\n        .expect_err(\"primary-key property should be required\");\n    assert!(err\n        .to_string()\n        .contains(\"x-lix-primary-key property \\\"/id\\\" must be required\"));\n}\n\n#[test]\nfn validate_lix_schema_definition_rejects_missing_unique_constraint_properties() {\n    let schema = json!({\n 
       \"x-lix-key\": \"missing_unique\",\n        \"type\": \"object\",\n        \"properties\": {\n            \"value\": { \"type\": \"string\" }\n        },\n        \"x-lix-unique\": [[\"/entity_id\", \"/value\"]],\n        \"additionalProperties\": false\n    });\n\n    let err = validate_lix_schema_definition(&schema).unwrap_err();\n    assert!(err\n        .to_string()\n        .contains(\"x-lix-unique references missing property\"));\n}\n\n#[test]\nfn x_key_is_required() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": null,\n        \"properties\": {\n            \"name\": { \"type\": \"string\" }\n        },\n        \"required\": [\"name\"],\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&schema).is_err());\n}\n\n#[test]\nfn x_lix_key_must_be_snake_case() {\n    let base_schema = json!({\n        \"type\": \"object\",\n        \"properties\": {\n            \"name\": { \"type\": \"string\" }\n        },\n        \"required\": [\"name\"],\n        \"additionalProperties\": false\n    });\n\n    let invalid_keys = [\n        \"Invalid-Key!\",\n        \"also.invalid\",\n        \"123starts_with_number\",\n        \"contains space\",\n        \"camelCaseKey\",\n        \"UPPER_CASE\",\n        \"mixed-Case_Value\",\n    ];\n    for key in invalid_keys {\n        let mut schema = base_schema.clone();\n        schema[\"x-lix-key\"] = json!(key);\n        assert!(validate_lix_schema_definition(&schema).is_err());\n    }\n\n    let valid_keys = [\"abc\", \"abc123\", \"abc_123\", \"a\", \"snake_case_key\"];\n    for key in valid_keys {\n        let mut schema = base_schema.clone();\n        schema[\"x-lix-key\"] = json!(key);\n        assert!(validate_lix_schema_definition(&schema).is_ok());\n    }\n}\n\n#[test]\nfn x_lix_unique_is_optional() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"mock\",\n        \"properties\": {\n            
\"name\": { \"type\": \"string\" }\n        },\n        \"required\": [\"name\"],\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&schema).is_ok());\n}\n\n#[test]\nfn x_lix_unique_must_be_array_of_arrays_when_present() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"mock\",\n        \"x-lix-unique\": [[\"/id\"], [\"/name\", \"/age\"]],\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"name\": { \"type\": \"string\" },\n            \"age\": { \"type\": \"number\" }\n        },\n        \"required\": [\"id\", \"name\", \"age\"],\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&schema).is_ok());\n}\n\n#[test]\nfn x_lix_unique_fails_with_invalid_structure() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"mock\",\n        \"x-lix-unique\": [\"/id\", \"/name\"],\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"name\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", \"name\"],\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&schema).is_err());\n}\n\n#[test]\nfn x_lix_primary_key_must_include_at_least_one_unique_pointer() {\n    let base_schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"mock\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\"],\n        \"additionalProperties\": false\n    });\n\n    let mut empty_pk = base_schema.clone();\n    empty_pk[\"x-lix-primary-key\"] = json!([]);\n    assert!(validate_lix_schema_definition(&empty_pk).is_err());\n\n    let mut duplicate_pk = base_schema.clone();\n    duplicate_pk[\"x-lix-primary-key\"] = json!([\"/id\", \"/id\"]);\n    assert!(validate_lix_schema_definition(&duplicate_pk).is_err());\n\n    let mut valid_pk = 
base_schema.clone();\n    valid_pk[\"x-lix-primary-key\"] = json!([\"/id\"]);\n    assert!(validate_lix_schema_definition(&valid_pk).is_ok());\n}\n\n#[test]\nfn x_lix_unique_groups_must_include_unique_pointers() {\n    let base_schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"mock\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"email\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", \"email\"],\n        \"additionalProperties\": false\n    });\n\n    let mut empty_group = base_schema.clone();\n    empty_group[\"x-lix-unique\"] = json!([[]]);\n    assert!(validate_lix_schema_definition(&empty_group).is_err());\n\n    let mut duplicate_pointers = base_schema.clone();\n    duplicate_pointers[\"x-lix-unique\"] = json!([[\"/email\", \"/email\"]]);\n    assert!(validate_lix_schema_definition(&duplicate_pointers).is_err());\n\n    let mut valid_unique = base_schema.clone();\n    valid_unique[\"x-lix-unique\"] = json!([[\"/email\"]]);\n    assert!(validate_lix_schema_definition(&valid_unique).is_ok());\n}\n\n#[test]\nfn x_lix_entity_views_is_rejected() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"mock\",\n        \"x-lix-entity-views\": [\"lix_state\", \"lix_state_by_version\"],\n        \"properties\": {\n            \"name\": { \"type\": \"string\" }\n        },\n        \"required\": [\"name\"],\n        \"additionalProperties\": false\n    });\n\n    let err =\n        validate_lix_schema_definition(&schema).expect_err(\"x-lix-entity-views should be rejected\");\n    assert!(err.to_string().contains(\"x-lix-entity-views\"));\n}\n\n#[test]\nfn x_lix_primary_key_is_optional() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"mock\",\n        \"properties\": {\n            \"name\": { \"type\": \"string\" }\n        },\n        \"required\": [\"name\"],\n        \"additionalProperties\": false\n    });\n\n    
assert!(validate_lix_schema_definition(&schema).is_ok());\n}\n\n#[test]\nfn x_lix_primary_key_must_be_array_of_strings_when_present() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"mock\",\n        \"x-lix-primary-key\": [\"/id\", \"/version\"],\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"version\": { \"type\": \"string\" },\n            \"name\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", \"version\", \"name\"],\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&schema).is_ok());\n}\n\n#[test]\nfn x_lix_foreign_keys_is_optional() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"blog_post\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"author_id\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", \"author_id\"],\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&schema).is_ok());\n}\n\n#[test]\nfn x_lix_foreign_keys_with_valid_structure() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"blog_post\",\n        \"x-lix-foreign-keys\": [\n            {\n                \"properties\": [\"/author_id\"],\n                \"references\": {\n                    \"schemaKey\": \"user_profile\",\n                    \"properties\": [\"/id\"]\n                }\n            },\n            {\n                \"properties\": [\"/category_id\"],\n                \"references\": {\n                    \"schemaKey\": \"post_category\",\n                    \"properties\": [\"/id\"]\n                }\n            }\n        ],\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"author_id\": { \"type\": \"string\" },\n            \"category_id\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", 
\"author_id\", \"category_id\"],\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&schema).is_ok());\n}\n\n#[test]\nfn x_lix_foreign_keys_reject_duplicate_pointers() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"invalid_fk_duplicates\",\n        \"x-lix-foreign-keys\": [\n            {\n                \"properties\": [\"/local\", \"/local\"],\n                \"references\": {\n                    \"schemaKey\": \"remote_schema\",\n                    \"properties\": [\"/id\", \"/version\"]\n                }\n            }\n        ],\n        \"properties\": {\n            \"local\": { \"type\": \"string\" }\n        },\n        \"required\": [\"local\"],\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&schema).is_err());\n}\n\n#[test]\nfn x_lix_foreign_keys_fails_without_required_fields() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"blog_post\",\n        \"x-lix-foreign-keys\": [\n            {\n                \"properties\": [\"/author_id\"]\n            }\n        ],\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"author_id\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", \"author_id\"],\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&schema).is_err());\n}\n\n#[test]\nfn x_lix_foreign_keys_use_schema_key_identity_only() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"comment\",\n        \"x-lix-foreign-keys\": [\n            {\n                \"properties\": [\"/post_id\"],\n                \"references\": {\n                    \"schemaKey\": \"blog_post\",\n                    \"properties\": [\"/id\"]\n                }\n            }\n        ],\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            
\"post_id\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", \"post_id\"],\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&schema).is_ok());\n}\n\n#[test]\nfn x_lix_foreign_keys_rejects_mode_field() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"child_entity\",\n        \"x-lix-primary-key\": [\"/id\"],\n        \"x-lix-foreign-keys\": [\n            {\n                \"properties\": [\"/parent_id\"],\n                \"references\": { \"schemaKey\": \"parent_entity\", \"properties\": [\"/id\"] },\n                \"mode\": \"materialized\"\n            }\n        ],\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"parent_id\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", \"parent_id\"],\n        \"additionalProperties\": false\n    });\n\n    let err = validate_lix_schema_definition(&schema).expect_err(\"mode should be rejected\");\n    assert!(err.to_string().contains(\"mode\"));\n}\n\n#[test]\nfn x_lix_foreign_keys_rejects_scope_field() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"child_entity\",\n        \"x-lix-primary-key\": [\"/id\"],\n        \"x-lix-foreign-keys\": [\n            {\n                \"properties\": [\"/parent_id\"],\n                \"references\": { \"schemaKey\": \"parent_entity\", \"properties\": [\"/id\"] },\n                \"scope\": [\"file_id\"]\n            }\n        ],\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"parent_id\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", \"parent_id\"],\n        \"additionalProperties\": false\n    });\n\n    let err = validate_lix_schema_definition(&schema).expect_err(\"scope should be rejected\");\n    assert!(err.to_string().contains(\"scope\"));\n}\n\n#[test]\nfn 
x_lix_state_foreign_keys_with_ordered_state_address_tuple() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"label_assignment\",\n        \"x-lix-state-foreign-keys\": [\n            [\"/target_entity_id\", \"/target_schema_key\", \"/target_file_id\"]\n        ],\n        \"x-lix-foreign-keys\": [\n            {\n                \"properties\": [\"/label_id\"],\n                \"references\": {\n                    \"schemaKey\": \"lix_label\",\n                    \"properties\": [\"/id\"]\n                }\n            }\n        ],\n        \"properties\": {\n            \"target_entity_id\": {\n                \"type\": \"array\",\n                \"items\": { \"type\": \"string\" },\n                \"minItems\": 1\n            },\n            \"target_schema_key\": { \"type\": \"string\" },\n            \"target_file_id\": { \"type\": [\"string\", \"null\"] },\n            \"label_id\": { \"type\": \"string\" }\n        },\n        \"required\": [\"target_entity_id\", \"target_schema_key\", \"target_file_id\", \"label_id\"],\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&schema).is_ok());\n}\n\n#[test]\nfn x_lix_state_foreign_keys_rejects_wrong_tuple_order_by_type() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"bad_label_assignment\",\n        \"x-lix-state-foreign-keys\": [\n            [\"/target_schema_key\", \"/target_entity_id\", \"/target_file_id\"]\n        ],\n        \"properties\": {\n            \"target_entity_id\": {\n                \"type\": \"array\",\n                \"items\": { \"type\": \"string\" },\n                \"minItems\": 1\n            },\n            \"target_schema_key\": { \"type\": \"string\" },\n            \"target_file_id\": { \"type\": [\"string\", \"null\"] }\n        },\n        \"required\": [\"target_entity_id\", \"target_schema_key\", \"target_file_id\"],\n        \"additionalProperties\": 
false\n    });\n\n    let err =\n        validate_lix_schema_definition(&schema).expect_err(\"wrong tuple order should be rejected\");\n    assert!(\n        err.message.contains(\"[entity_id, schema_key, file_id]\"),\n        \"unexpected error: {err:?}\"\n    );\n}\n\n#[test]\nfn x_lix_state_foreign_keys_requires_address_tuple_properties() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"optional_label_assignment\",\n        \"x-lix-state-foreign-keys\": [\n            [\"/target_entity_id\", \"/target_schema_key\", \"/target_file_id\"]\n        ],\n        \"properties\": {\n            \"target_entity_id\": {\n                \"type\": \"array\",\n                \"items\": { \"type\": \"string\" },\n                \"minItems\": 1\n            },\n            \"target_schema_key\": { \"type\": \"string\" },\n            \"target_file_id\": { \"type\": [\"string\", \"null\"] }\n        },\n        \"required\": [\"target_entity_id\", \"target_schema_key\"],\n        \"additionalProperties\": false\n    });\n\n    let err = validate_lix_schema_definition(&schema)\n        .expect_err(\"state foreign key tuple fields should be required\");\n    assert!(\n        err.message.contains(\"file_id\") && err.message.contains(\"must be required\"),\n        \"unexpected error: {err:?}\"\n    );\n}\n\n#[test]\nfn x_lix_foreign_keys_treat_schema_keys_literally() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"custom_label_assignment\",\n        \"x-lix-foreign-keys\": [\n            {\n                \"properties\": [\"/label_id\"],\n                \"references\": {\n                    \"schemaKey\": \"label\",\n                    \"properties\": [\"/id\"]\n                }\n            }\n        ],\n        \"properties\": {\n            \"label_id\": { \"type\": \"string\" }\n        },\n        \"required\": [\"label_id\"],\n        \"additionalProperties\": false\n    });\n\n    
assert!(validate_lix_schema_definition(&schema).is_ok());\n}\n\n#[test]\nfn x_lix_default_accepts_valid_cel_expression() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"mock\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\", \"x-lix-default\": \"lix_uuid_v7()\" }\n        },\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&schema).is_ok());\n}\n\n#[test]\nfn x_lix_default_rejects_invalid_cel_expression() {\n    let schema = json!({\n        \"type\": \"object\",\n        \"x-lix-key\": \"mock\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\", \"x-lix-default\": \"lix_uuid_v7(\" }\n        },\n        \"additionalProperties\": false\n    });\n\n    assert!(validate_lix_schema_definition(&schema).is_err());\n}\n"
  },
  {
    "path": "packages/engine/src/session/context.rs",
    "content": "use std::future::Future;\nuse std::pin::Pin;\nuse std::sync::atomic::{AtomicBool, Ordering};\nuse std::sync::Arc;\n\nuse serde_json::Value as JsonValue;\n\nuse crate::binary_cas::{BinaryCasContext, BlobDataReader};\nuse crate::catalog::CatalogContext;\nuse crate::commit_graph::{CommitGraphContext, CommitGraphReader};\nuse crate::commit_store::CommitStoreContext;\nuse crate::entity_identity::EntityIdentity;\nuse crate::functions::FunctionProviderHandle;\nuse crate::json_store::JsonStoreContext;\nuse crate::live_state::{LiveStateContext, LiveStateReader, LiveStateRowRequest};\nuse crate::sql2::{CommitStoreQuerySource, SqlCommitStoreQuerySource, SqlExecutionContext};\nuse crate::storage::{\n    ScopedStorageReader, StorageContext, StorageReadScope, StorageReadTransaction, StorageReader,\n};\nuse crate::tracked_state::TrackedStateContext;\nuse crate::transaction::{open_transaction, Transaction};\nuse crate::version::{\n    VersionContext, VersionLifecycle, VersionOperation, VersionRefReader, VersionReferenceRole,\n};\nuse crate::GLOBAL_VERSION_ID;\nuse crate::{LixError, NullableKeyFilter};\n\npub(crate) const WORKSPACE_VERSION_KEY: &str = \"lix_workspace_version_id\";\n\n#[derive(Clone)]\npub(crate) enum SessionMode {\n    Pinned { version_id: String },\n    Workspace,\n}\n\n/// Session-context state for engine execution.\n///\n/// A session context pins the active version selector and shared execution\n/// services. Each call to `execute(...)` projects this state into a read-only\n/// SQL context or a transaction-owned write context.\n///\n/// Write transaction invariant: any engine operation that may write must enter\n/// through `SessionContext::with_write_transaction`. 
Reads that influence writes\n/// are only available from that transaction capability, not from session-level\n/// helpers.\n#[derive(Clone)]\npub struct SessionContext {\n    pub(super) mode: SessionMode,\n    pub(super) storage: StorageContext,\n    pub(super) live_state: Arc<LiveStateContext>,\n    pub(super) tracked_state: Arc<TrackedStateContext>,\n    pub(super) binary_cas: Arc<BinaryCasContext>,\n    pub(super) commit_store: Arc<CommitStoreContext>,\n    pub(super) version_ctx: Arc<VersionContext>,\n    pub(super) catalog_context: Arc<CatalogContext>,\n    closed: Arc<AtomicBool>,\n}\n\nimpl SessionContext {\n    pub(crate) async fn open_workspace(\n        storage: StorageContext,\n        live_state: Arc<LiveStateContext>,\n        tracked_state: Arc<TrackedStateContext>,\n        binary_cas: Arc<BinaryCasContext>,\n        commit_store: Arc<CommitStoreContext>,\n        version_ctx: Arc<VersionContext>,\n        catalog_context: Arc<CatalogContext>,\n    ) -> Result<Self, LixError> {\n        let session = Self::new(\n            SessionMode::Workspace,\n            storage,\n            live_state,\n            tracked_state,\n            binary_cas,\n            commit_store,\n            version_ctx,\n            catalog_context,\n        );\n        session.active_version_id().await?;\n        Ok(session)\n    }\n\n    pub(crate) async fn open(\n        active_version_id: String,\n        storage: StorageContext,\n        live_state: Arc<LiveStateContext>,\n        tracked_state: Arc<TrackedStateContext>,\n        binary_cas: Arc<BinaryCasContext>,\n        commit_store: Arc<CommitStoreContext>,\n        version_ctx: Arc<VersionContext>,\n        catalog_context: Arc<CatalogContext>,\n    ) -> Result<Self, LixError> {\n        Ok(Self::new(\n            SessionMode::Pinned {\n                version_id: active_version_id,\n            },\n            storage,\n            live_state,\n            tracked_state,\n            binary_cas,\n            
commit_store,\n            version_ctx,\n            catalog_context,\n        ))\n    }\n\n    pub(super) fn new(\n        mode: SessionMode,\n        storage: StorageContext,\n        live_state: Arc<LiveStateContext>,\n        tracked_state: Arc<TrackedStateContext>,\n        binary_cas: Arc<BinaryCasContext>,\n        commit_store: Arc<CommitStoreContext>,\n        version_ctx: Arc<VersionContext>,\n        catalog_context: Arc<CatalogContext>,\n    ) -> Self {\n        Self::new_with_closed(\n            mode,\n            storage,\n            live_state,\n            tracked_state,\n            binary_cas,\n            commit_store,\n            version_ctx,\n            catalog_context,\n            Arc::new(AtomicBool::new(false)),\n        )\n    }\n\n    pub(super) fn new_with_closed(\n        mode: SessionMode,\n        storage: StorageContext,\n        live_state: Arc<LiveStateContext>,\n        tracked_state: Arc<TrackedStateContext>,\n        binary_cas: Arc<BinaryCasContext>,\n        commit_store: Arc<CommitStoreContext>,\n        version_ctx: Arc<VersionContext>,\n        catalog_context: Arc<CatalogContext>,\n        closed: Arc<AtomicBool>,\n    ) -> Self {\n        Self {\n            mode,\n            storage,\n            live_state,\n            tracked_state,\n            binary_cas,\n            commit_store,\n            version_ctx,\n            catalog_context,\n            closed,\n        }\n    }\n\n    /// Releases this logical session handle. 
This is a lifecycle boundary only:\n    /// successful writes are committed before their operation returns.\n    pub async fn close(&self) -> Result<(), LixError> {\n        self.closed.store(true, Ordering::SeqCst);\n        Ok(())\n    }\n\n    pub fn is_closed(&self) -> bool {\n        self.closed.load(Ordering::SeqCst)\n    }\n\n    pub(crate) fn closed_flag(&self) -> Arc<AtomicBool> {\n        Arc::clone(&self.closed)\n    }\n\n    pub(crate) fn ensure_open(&self) -> Result<(), LixError> {\n        if self.is_closed() {\n            return Err(closed_error());\n        }\n        Ok(())\n    }\n\n    /// Resolves the version this session should operate on right now.\n    ///\n    /// This is a read-path helper. Write flows must resolve the active version\n    /// through the transaction capability so the read is scoped to the\n    /// same backend transaction as the writes it influences.\n    ///\n    /// Pinned sessions are pure in-memory views over one version. Workspace\n    /// sessions read the shared workspace selector from untracked global\n    /// `lix_key_value` state so multiple open app sessions can observe the same\n    /// active workspace version.\n    pub async fn active_version_id(&self) -> Result<String, LixError> {\n        let mut transaction = self.storage.begin_read_transaction().await?;\n        let result = self\n            .active_version_id_from_reader(transaction.as_mut())\n            .await;\n        match result {\n            Ok(version_id) => {\n                transaction.rollback().await?;\n                Ok(version_id)\n            }\n            Err(error) => {\n                let _ = transaction.rollback().await;\n                Err(error)\n            }\n        }\n    }\n\n    pub(super) async fn active_version_id_from_reader<S>(\n        &self,\n        reader: &mut S,\n    ) -> Result<String, LixError>\n    where\n        S: StorageReader + ?Sized,\n    {\n        self.ensure_open()?;\n        match &self.mode {\n    
        SessionMode::Pinned { version_id } => Ok(version_id.clone()),\n            SessionMode::Workspace => self.load_workspace_version_id(reader).await,\n        }\n    }\n\n    async fn load_workspace_version_id<S>(&self, reader: &mut S) -> Result<String, LixError>\n    where\n        S: StorageReader + ?Sized,\n    {\n        let row = self\n            .live_state\n            .reader(&mut *reader)\n            .load_row(&LiveStateRowRequest {\n                schema_key: \"lix_key_value\".to_string(),\n                version_id: GLOBAL_VERSION_ID.to_string(),\n                entity_id: EntityIdentity::single(WORKSPACE_VERSION_KEY),\n                file_id: NullableKeyFilter::Null,\n            })\n            .await?\n            .ok_or_else(|| {\n                LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    \"workspace version selector is missing lix_key_value:lix_workspace_version_id\",\n                )\n            })?;\n        let snapshot_content = row.snapshot_content.as_deref().ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"workspace version selector is missing snapshot_content\",\n            )\n        })?;\n        let snapshot = serde_json::from_str::<JsonValue>(snapshot_content).map_err(|error| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"workspace version selector snapshot is invalid JSON: {error}\"),\n            )\n        })?;\n        let version_id = snapshot\n            .get(\"value\")\n            .and_then(JsonValue::as_str)\n            .filter(|value| !value.is_empty())\n            .ok_or_else(|| {\n                LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    \"workspace version selector value must be a non-empty string\",\n                )\n            })?\n            .to_string();\n\n        let version_ref = self.version_ctx.ref_reader(&mut 
*reader);\n        VersionLifecycle::new(&version_ref)\n            .require_existing_ref(\n                &version_id,\n                VersionOperation::LoadWorkspaceSelector,\n                VersionReferenceRole::WorkspaceSelector,\n            )\n            .await?;\n\n        Ok(version_id)\n    }\n\n    pub(crate) async fn with_write_transaction<T, F>(&self, f: F) -> Result<T, LixError>\n    where\n        F: for<'tx> FnOnce(\n            &'tx mut Transaction,\n        ) -> Pin<Box<dyn Future<Output = Result<T, LixError>> + 'tx>>,\n    {\n        self.ensure_open()?;\n        let opened = open_transaction(\n            &self.mode,\n            self.storage.clone(),\n            Arc::clone(&self.live_state),\n            Arc::clone(&self.tracked_state),\n            Arc::clone(&self.binary_cas),\n            Arc::clone(&self.commit_store),\n            Arc::clone(&self.version_ctx),\n            Arc::clone(&self.catalog_context),\n        )\n        .await?;\n        let mut transaction = opened.transaction;\n        let runtime_functions = opened.runtime_functions;\n\n        match f(&mut transaction).await {\n            Ok(value) => {\n                transaction.commit(&runtime_functions).await?;\n                Ok(value)\n            }\n            Err(error) => {\n                let _ = transaction.rollback().await;\n                Err(error)\n            }\n        }\n    }\n}\n\nfn closed_error() -> LixError {\n    LixError::new(LixError::CODE_CLOSED, \"Lix handle is closed\")\n        .with_hint(\"Open a new Lix handle before calling this method.\")\n}\n\n/// Read-only SQL execution context derived from a session.\n///\n/// Write statements re-plan against `Transaction`; this context intentionally\n/// has no write stager.\npub(super) struct SessionSqlExecutionContext<'a> {\n    pub(super) active_version_id: &'a str,\n    pub(super) read_store:\n        ScopedStorageReader<Box<dyn StorageReadTransaction + Send + Sync + 'static>>,\n    pub(super) 
live_state: Arc<LiveStateContext>,\n    pub(super) binary_cas: Arc<BinaryCasContext>,\n    pub(super) commit_store: Arc<CommitStoreContext>,\n    pub(super) version_ctx: Arc<VersionContext>,\n    pub(super) visible_schemas: Vec<JsonValue>,\n    pub(super) functions: FunctionProviderHandle,\n}\n\nimpl SqlExecutionContext for SessionSqlExecutionContext<'_> {\n    fn active_version_id(&self) -> &str {\n        self.active_version_id\n    }\n\n    fn live_state(&self) -> Arc<dyn LiveStateReader> {\n        Arc::new(self.live_state.reader(self.read_store.clone())) as Arc<dyn LiveStateReader>\n    }\n\n    fn commit_store_query_source(&self) -> SqlCommitStoreQuerySource {\n        let read_scope = StorageReadScope::new(self.read_store.clone());\n        CommitStoreQuerySource {\n            commit_store_reader: Arc::new(self.commit_store.reader(read_scope.store())),\n            json_reader: JsonStoreContext::new().reader(read_scope.store()),\n        }\n    }\n\n    fn commit_graph(&self) -> Box<dyn CommitGraphReader> {\n        Box::new(CommitGraphContext::new().reader(self.read_store.clone()))\n    }\n\n    fn version_ref(&self) -> Arc<dyn VersionRefReader> {\n        Arc::new(self.version_ctx.ref_reader(self.read_store.clone()))\n    }\n\n    fn functions(&self) -> FunctionProviderHandle {\n        self.functions.clone()\n    }\n\n    fn blob_reader(&self) -> Arc<dyn BlobDataReader> {\n        Arc::new(self.binary_cas.reader(self.read_store.clone())) as Arc<dyn BlobDataReader>\n    }\n\n    fn list_visible_schemas(&self) -> Result<Vec<JsonValue>, LixError> {\n        Ok(self.visible_schemas.clone())\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/session/create_version.rs",
    "content": "use crate::transaction::types::{TransactionWrite, TransactionWriteMode};\nuse crate::version::{\n    version_descriptor_stage_row, version_ref_stage_row, VersionLifecycle, VersionOperation,\n    VersionReferenceRole,\n};\nuse crate::LixError;\n\nuse super::context::SessionContext;\n\n/// Options for creating a new version from the session's active version.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct CreateVersionOptions {\n    /// Optional caller-provided version id. If omitted, the engine generates one.\n    pub id: Option<String>,\n    /// User-facing version name.\n    pub name: String,\n    /// Optional commit id for the new version head. If omitted, the current\n    /// active version head is used.\n    pub from_commit_id: Option<String>,\n}\n\n/// Receipt returned after creating a version.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct CreateVersionReceipt {\n    pub id: String,\n    pub name: String,\n    pub hidden: bool,\n    pub commit_id: String,\n}\n\nimpl SessionContext {\n    /// Creates a new version from this session's current version head.\n    ///\n    /// Version descriptors are tracked global facts so every version agrees on\n    /// which versions exist. 
Version refs are untracked global moving pointers,\n    /// so creating a ref does not add another changelog fact.\n    pub async fn create_version(\n        &self,\n        options: CreateVersionOptions,\n    ) -> Result<CreateVersionReceipt, LixError> {\n        self.with_write_transaction(|transaction| {\n            Box::pin(async move {\n                let version_id = options\n                    .id\n                    .unwrap_or_else(|| transaction.functions().call_uuid_v7());\n                let source_head = if let Some(from_commit_id) = options.from_commit_id {\n                    let mut commit_graph = transaction.commit_graph_reader();\n                    VersionLifecycle::require_existing_commit(\n                        &mut commit_graph,\n                        &from_commit_id,\n                        VersionOperation::CreateVersion,\n                        VersionReferenceRole::CommitSource,\n                    )\n                    .await?;\n                    from_commit_id\n                } else {\n                    let active_version_id = transaction.active_version_id().to_string();\n                    let reader = transaction.version_ref_reader();\n                    VersionLifecycle::new(&reader)\n                        .require_existing_commit_id(\n                            &active_version_id,\n                            VersionOperation::CreateVersion,\n                            VersionReferenceRole::Source,\n                        )\n                        .await?\n                };\n\n                transaction\n                    .stage_write(TransactionWrite::Rows {\n                        mode: TransactionWriteMode::Insert,\n                        rows: vec![\n                            version_descriptor_stage_row(&version_id, &options.name, false),\n                            version_ref_stage_row(&version_id, &source_head),\n                        ],\n                    })\n                    
.await?;\n\n                Ok(CreateVersionReceipt {\n                    id: version_id,\n                    name: options.name,\n                    hidden: false,\n                    commit_id: source_head,\n                })\n            })\n        })\n        .await\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/session/execute.rs",
    "content": "use std::sync::Arc;\n\nuse crate::functions::FunctionContext;\nuse crate::sql2;\nuse crate::storage::{StorageReadScope, StorageWriteSet};\nuse crate::{LixError, LixNotice, SqlQueryResult, Value};\n\nuse super::context::{SessionContext, SessionSqlExecutionContext};\n\n/// Result of executing one SQL statement through the engine.\n///\n/// Column names live once at the result-set level. Individual rows only own\n/// values, which keeps the public API row-oriented without copying schema\n/// metadata into every row.\n#[derive(Debug, Clone, PartialEq)]\npub struct ExecuteResult {\n    columns: Vec<String>,\n    rows: Vec<Row>,\n    rows_affected: u64,\n    notices: Vec<LixNotice>,\n}\n\nimpl ExecuteResult {\n    fn from_sql_query_result(result: SqlQueryResult) -> Self {\n        Self {\n            columns: result.columns,\n            rows: Vec::new(),\n            rows_affected: 0,\n            notices: result.notices,\n        }\n        .with_rows(result.rows)\n    }\n\n    pub fn from_rows_affected(rows_affected: u64) -> Self {\n        Self {\n            columns: Vec::new(),\n            rows: Vec::new(),\n            rows_affected,\n            notices: Vec::new(),\n        }\n    }\n\n    pub fn from_rows(columns: Vec<String>, rows: Vec<Vec<Value>>) -> Self {\n        Self {\n            columns,\n            rows: Vec::new(),\n            rows_affected: 0,\n            notices: Vec::new(),\n        }\n        .with_rows(rows)\n    }\n\n    fn with_rows(mut self, rows: Vec<Vec<Value>>) -> Self {\n        let columns = Arc::<[String]>::from(self.columns.clone().into_boxed_slice());\n        self.rows = rows\n            .into_iter()\n            .map(|values| Row {\n                columns: Arc::clone(&columns),\n                values,\n            })\n            .collect();\n        self\n    }\n\n    /// Returns the result-set column names in row value order.\n    pub fn columns(&self) -> &[String] {\n        &self.columns\n    }\n\n    /// 
Returns the owned rows. Use `iter()` for name-based access.\n    pub fn rows(&self) -> &[Row] {\n        &self.rows\n    }\n\n    /// Iterates rows with borrowed access to the shared column metadata.\n    pub fn iter(&self) -> impl Iterator<Item = RowRef<'_>> {\n        self.rows.iter().map(|row| RowRef {\n            columns: self.columns.as_slice(),\n            values: row.values.as_slice(),\n        })\n    }\n\n    /// Returns the number of rows in this result set.\n    pub fn len(&self) -> usize {\n        self.rows.len()\n    }\n\n    /// Returns true when this result set has no rows.\n    pub fn is_empty(&self) -> bool {\n        self.rows.is_empty()\n    }\n\n    /// Returns the number of rows affected by a mutation statement.\n    pub fn rows_affected(&self) -> u64 {\n        self.rows_affected\n    }\n\n    /// Returns non-fatal diagnostics produced while executing the statement.\n    pub fn notices(&self) -> &[LixNotice] {\n        &self.notices\n    }\n\n    /// Looks up the value for `column_name` on an owned row from this set.\n    pub fn get<'a>(&self, row: &'a Row, column_name: &str) -> Option<&'a Value> {\n        let index = self.column_index(column_name)?;\n        row.get_index(index)\n    }\n\n    /// Returns the index for a column name.\n    pub fn column_index(&self, column_name: &str) -> Option<usize> {\n        self.columns.iter().position(|column| column == column_name)\n    }\n}\n\n/// One owned row returned by a query.\n#[derive(Debug, Clone, PartialEq)]\npub struct Row {\n    columns: Arc<[String]>,\n    values: Vec<Value>,\n}\n\nimpl Row {\n    /// Returns the values in result-set column order.\n    pub fn values(&self) -> &[Value] {\n        &self.values\n    }\n\n    /// Returns the value at `index`.\n    pub fn get_index(&self, index: usize) -> Option<&Value> {\n        self.values.get(index)\n    }\n\n    /// Returns the raw value for `column_name`, or an error when the column is absent.\n    pub fn value(&self, column_name: &str) 
-> Result<&Value, LixError> {\n        let index = self.column_index(column_name)?;\n        self.values.get(index).ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_COLUMN_NOT_FOUND,\n                format!(\n                    \"column '{}' points past row width {}; available columns: {}\",\n                    column_name,\n                    self.values.len(),\n                    self.available_columns()\n                ),\n            )\n        })\n    }\n\n    /// Converts the named column to a native Rust value.\n    pub fn get<T>(&self, column_name: &str) -> Result<T, LixError>\n    where\n        T: TryFromValue,\n    {\n        T::try_from_value(self.value(column_name)?)\n    }\n\n    fn column_index(&self, column_name: &str) -> Result<usize, LixError> {\n        self.columns\n            .iter()\n            .position(|column| column == column_name)\n            .ok_or_else(|| {\n                LixError::new(\n                    LixError::CODE_COLUMN_NOT_FOUND,\n                    format!(\n                        \"column '{}' does not exist; available columns: {}\",\n                        column_name,\n                        self.available_columns()\n                    ),\n                )\n            })\n    }\n\n    fn available_columns(&self) -> String {\n        if self.columns.is_empty() {\n            \"<none>\".to_string()\n        } else {\n            self.columns.join(\", \")\n        }\n    }\n}\n\npub trait TryFromValue: Sized {\n    fn try_from_value(value: &Value) -> Result<Self, LixError>;\n}\n\nimpl TryFromValue for Value {\n    fn try_from_value(value: &Value) -> Result<Self, LixError> {\n        Ok(value.clone())\n    }\n}\n\nimpl TryFromValue for String {\n    fn try_from_value(value: &Value) -> Result<Self, LixError> {\n        match value {\n            Value::Text(value) => Ok(value.clone()),\n            other => Err(value_type_error(\"text\", other)),\n        }\n    }\n}\n\nimpl 
TryFromValue for bool {\n    fn try_from_value(value: &Value) -> Result<Self, LixError> {\n        match value {\n            Value::Boolean(value) => Ok(*value),\n            other => Err(value_type_error(\"boolean\", other)),\n        }\n    }\n}\n\nimpl TryFromValue for i64 {\n    fn try_from_value(value: &Value) -> Result<Self, LixError> {\n        match value {\n            Value::Integer(value) => Ok(*value),\n            other => Err(value_type_error(\"integer\", other)),\n        }\n    }\n}\n\nimpl TryFromValue for f64 {\n    fn try_from_value(value: &Value) -> Result<Self, LixError> {\n        match value {\n            Value::Real(value) => Ok(*value),\n            other => Err(value_type_error(\"real\", other)),\n        }\n    }\n}\n\nimpl TryFromValue for serde_json::Value {\n    fn try_from_value(value: &Value) -> Result<Self, LixError> {\n        match value {\n            Value::Json(value) => Ok(value.clone()),\n            other => Err(value_type_error(\"json\", other)),\n        }\n    }\n}\n\nimpl TryFromValue for Vec<u8> {\n    fn try_from_value(value: &Value) -> Result<Self, LixError> {\n        match value {\n            Value::Blob(value) => Ok(value.clone()),\n            other => Err(value_type_error(\"blob\", other)),\n        }\n    }\n}\n\nfn value_type_error(expected: &str, actual: &Value) -> LixError {\n    LixError::new(\n        \"LIX_ERROR_VALUE_TYPE\",\n        format!(\"expected {expected} value, got {actual:?}\"),\n    )\n}\n\n/// Zero-copy row view with access to the result-set column names.\n///\n/// This is the ergonomic path for callers that want `row.get(\"column\")`\n/// without storing column metadata on every owned row.\n#[derive(Debug, Clone, Copy)]\npub struct RowRef<'a> {\n    columns: &'a [String],\n    values: &'a [Value],\n}\n\nimpl RowRef<'_> {\n    /// Returns the result-set column names in row value order.\n    pub fn columns(&self) -> &[String] {\n        self.columns\n    }\n\n    /// Returns the row values 
in result-set column order.\n    pub fn values(&self) -> &[Value] {\n        self.values\n    }\n\n    /// Returns the value for `column_name`.\n    pub fn get(&self, column_name: &str) -> Option<&Value> {\n        let index = self\n            .columns\n            .iter()\n            .position(|column| column == column_name)?;\n        self.values.get(index)\n    }\n\n    /// Returns the value at `index`.\n    pub fn get_index(&self, index: usize) -> Option<&Value> {\n        self.values.get(index)\n    }\n}\n\nimpl SessionContext {\n    /// Executes one DataFusion SQL statement against this Lix session.\n    ///\n    /// The SQL dialect is DataFusion SQL, not SQLite SQL. Positional\n    /// placeholders use `$1`, `$2`, and so on. SQLite-specific catalog tables\n    /// and transaction statements such as `sqlite_master`, `BEGIN`, and\n    /// `COMMIT` are not part of this contract; use `information_schema` for\n    /// catalog inspection. Lix owns transaction boundaries for each statement.\n    pub async fn execute(&self, sql: &str, params: &[Value]) -> Result<ExecuteResult, LixError> {\n        self.ensure_open()?;\n        let kind = sql2::classify_statement(sql)?;\n        if kind == sql2::SqlStatementKind::Write {\n            let sql = sql.to_string();\n            let sql_for_error = sql.clone();\n            let params = params.to_vec();\n            return self\n                .with_write_transaction(|transaction| {\n                    Box::pin(async move {\n                        // Re-plan against the transaction-backed write\n                        // session so provider hooks read and stage through the\n                        // transaction-owned SQL write context.\n                        let tx_plan = sql2::create_write_logical_plan(transaction, &sql).await?;\n                        let result = sql2::execute_logical_plan(tx_plan, &params).await?;\n                        let affected_rows = affected_rows_from_query_result(result)?;\n         
               Ok(ExecuteResult::from_rows_affected(affected_rows))\n                    })\n                })\n                .await\n                .map_err(|error| normalize_sql_surface_error(error, &sql_for_error));\n        }\n\n        let read_scope = StorageReadScope::new(self.storage.begin_read_transaction().await?);\n        let read_result = async {\n            let mut read_store = read_scope.store();\n            let live_state: Arc<dyn crate::live_state::LiveStateReader> =\n                Arc::new(self.live_state.reader(read_store.clone()));\n            let runtime_functions = FunctionContext::prepare(live_state.as_ref()).await?;\n            let functions = runtime_functions.provider();\n            let active_version_id = self.active_version_id_from_reader(&mut read_store).await?;\n            let visible_schemas = self\n                .catalog_context\n                .schema_jsons_for_sql_read_planning(live_state.as_ref(), &active_version_id)\n                .await?;\n            let ctx = SessionSqlExecutionContext {\n                active_version_id: &active_version_id,\n                read_store,\n                live_state: Arc::clone(&self.live_state),\n                binary_cas: Arc::clone(&self.binary_cas),\n                commit_store: Arc::clone(&self.commit_store),\n                version_ctx: Arc::clone(&self.version_ctx),\n                visible_schemas,\n                functions: functions.clone(),\n            };\n\n            let plan = sql2::create_logical_plan(&ctx, sql).await?;\n            let result = sql2::execute_logical_plan(plan, params).await?;\n            drop(ctx);\n            drop(live_state);\n            Ok::<_, LixError>((runtime_functions, result))\n        };\n        let (runtime_functions, result) = match read_result.await {\n            Ok(result) => {\n                read_scope.rollback().await?;\n                result\n            }\n            Err(error) => {\n                let _ = 
read_scope.rollback().await;\n                return Err(normalize_sql_surface_error(error, sql));\n            }\n        };\n        self.persist_runtime_functions_if_needed(&runtime_functions)\n            .await?;\n        Ok(ExecuteResult::from_sql_query_result(result))\n    }\n\n    /// Persists execution-scoped runtime function state after a successful read.\n    ///\n    /// Reads do not otherwise own a write transaction, but SQL functions such as\n    /// `lix_uuid_v7()` can still advance runtime state. Persisting happens only\n    /// after successful execution so failed reads do not consume durable\n    /// sequence state.\n    async fn persist_runtime_functions_if_needed(\n        &self,\n        runtime_functions: &FunctionContext,\n    ) -> Result<(), LixError> {\n        let mut transaction = self.storage.begin_write_transaction().await?;\n        let mut writes = StorageWriteSet::new();\n        runtime_functions\n            .stage_persist_if_needed(&mut writes)\n            .await?;\n        if !writes.is_empty() {\n            writes.apply(&mut transaction.as_mut()).await?;\n        }\n        transaction.commit().await\n    }\n}\n\nfn normalize_sql_surface_error(error: LixError, sql: &str) -> LixError {\n    if error.code.starts_with(\"LIX_ERROR_PATH_\") && sql_uses_public_filesystem_path_surface(sql) {\n        return LixError {\n            code: LixError::CODE_INVALID_PARAM.to_string(),\n            ..error\n        };\n    }\n    if error.code == LixError::CODE_INVALID_JSON_PATH\n        && error\n            .message\n            .to_ascii_lowercase()\n            .contains(\"uses variadic path segments\")\n    {\n        return LixError {\n            code: LixError::CODE_INVALID_PARAM.to_string(),\n            ..error\n        };\n    }\n    if error.code == LixError::CODE_FOREIGN_KEY {\n        let lower = error.message.to_ascii_lowercase();\n        if lower.contains(\"schema 'lix_version_ref'\") && lower.contains(\"target 
'lix_commit.\") {\n            return LixError {\n                code: LixError::CODE_VERSION_NOT_FOUND.to_string(),\n                ..error\n            };\n        }\n    }\n    error\n}\n\nfn sql_uses_public_filesystem_path_surface(sql: &str) -> bool {\n    let lower = sql.to_ascii_lowercase();\n    (lower.contains(\"lix_file\") || lower.contains(\"lix_directory\")) && lower.contains(\"path\")\n}\n\nfn affected_rows_from_query_result(result: SqlQueryResult) -> Result<u64, LixError> {\n    let Some(first_row) = result.rows.first() else {\n        return Ok(0);\n    };\n    let Some(first_value) = first_row.first() else {\n        return Ok(0);\n    };\n    match first_value {\n        Value::Integer(value) if *value >= 0 => Ok(*value as u64),\n        Value::Text(value) => value.parse::<u64>().map_err(|error| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"failed to parse affected row count from SQL result: {error}\"),\n            )\n        }),\n        other => Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"expected affected row count, got {other:?}\"),\n        )),\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn row_get_converts_native_values_and_value_keeps_wrapper() {\n        let result = ExecuteResult::from_rows(\n            vec![\"title\".to_string(), \"done\".to_string()],\n            vec![vec![Value::Text(\"Hello\".to_string()), Value::Boolean(true)]],\n        );\n        let row = &result.rows()[0];\n\n        assert_eq!(row.get::<String>(\"title\").unwrap(), \"Hello\");\n        assert!(row.get::<bool>(\"done\").unwrap());\n        assert_eq!(\n            row.value(\"title\").unwrap(),\n            &Value::Text(\"Hello\".to_string())\n        );\n    }\n\n    #[test]\n    fn row_get_errors_on_missing_column_and_wrong_type() {\n        let result = ExecuteResult::from_rows(\n            vec![\"title\".to_string()],\n            
vec![vec![Value::Text(\"Hello\".to_string())]],\n        );\n        let row = &result.rows()[0];\n\n        let missing = row.get::<String>(\"missing\").unwrap_err();\n        assert_eq!(missing.code, LixError::CODE_COLUMN_NOT_FOUND);\n        assert!(missing.message.contains(\"available columns: title\"));\n\n        let wrong_type = row.get::<bool>(\"title\").unwrap_err();\n        assert_eq!(wrong_type.code, \"LIX_ERROR_VALUE_TYPE\");\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/session/merge/analysis.rs",
    "content": "use crate::storage::StorageReader;\nuse crate::tracked_state::{\n    plan_merge, TrackedStateDiff, TrackedStateDiffRequest, TrackedStateMergePlan,\n    TrackedStateStoreReader,\n};\nuse crate::LixError;\n\nuse super::conflicts::{conflicts_from_plan, MergeConflict};\nuse super::stats::{stats_from_diff, stats_from_plan, MergeStats};\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) enum MergeOutcome {\n    AlreadyUpToDate,\n    FastForward,\n    MergeCommitted,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct MergeCommits {\n    pub(crate) base_commit_id: String,\n    pub(crate) target_commit_id: String,\n    pub(crate) source_commit_id: String,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct MergeAnalysis {\n    pub(crate) outcome: MergeOutcome,\n    pub(crate) commits: MergeCommits,\n    pub(crate) source_diff: TrackedStateDiff,\n    pub(crate) target_diff: TrackedStateDiff,\n    pub(crate) stats: MergeStats,\n    pub(crate) conflicts: Vec<MergeConflict>,\n    pub(crate) merge_plan: Option<TrackedStateMergePlan>,\n}\n\nimpl MergeAnalysis {\n    pub(crate) fn merge_plan(&self) -> Option<&TrackedStateMergePlan> {\n        self.merge_plan.as_ref()\n    }\n}\n\npub(crate) async fn analyze<S>(\n    reader: &mut TrackedStateStoreReader<S>,\n    commits: MergeCommits,\n) -> Result<MergeAnalysis, LixError>\nwhere\n    S: StorageReader,\n{\n    let request = TrackedStateDiffRequest::default();\n    let source_diff = reader\n        .diff_commits(&commits.base_commit_id, &commits.source_commit_id, &request)\n        .await?;\n    let target_diff = if commits.base_commit_id == commits.source_commit_id\n        || commits.base_commit_id == commits.target_commit_id\n    {\n        TrackedStateDiff::default()\n    } else {\n        reader\n            .diff_commits(&commits.base_commit_id, &commits.target_commit_id, &request)\n            .await?\n    };\n\n    let outcome = if commits.base_commit_id == 
commits.source_commit_id {\n        MergeOutcome::AlreadyUpToDate\n    } else if commits.base_commit_id == commits.target_commit_id {\n        MergeOutcome::FastForward\n    } else {\n        MergeOutcome::MergeCommitted\n    };\n\n    let merge_plan = if outcome == MergeOutcome::MergeCommitted {\n        Some(plan_merge(&target_diff, &source_diff)?)\n    } else {\n        None\n    };\n\n    let stats = match outcome {\n        MergeOutcome::AlreadyUpToDate => MergeStats::default(),\n        MergeOutcome::FastForward => stats_from_diff(&source_diff),\n        MergeOutcome::MergeCommitted => merge_plan\n            .as_ref()\n            .map(|plan| stats_from_plan(plan, &source_diff))\n            .transpose()?\n            .unwrap_or_default(),\n    };\n\n    let conflicts = merge_plan\n        .as_ref()\n        .map(conflicts_from_plan)\n        .transpose()?\n        .unwrap_or_default();\n\n    Ok(MergeAnalysis {\n        outcome,\n        commits,\n        source_diff,\n        target_diff,\n        stats,\n        conflicts,\n        merge_plan,\n    })\n}\n"
  },
  {
    "path": "packages/engine/src/session/merge/apply.rs",
    "content": "use crate::tracked_state::TrackedStateMergePlan;\nuse crate::transaction::types::TransactionAdoptedChange;\n\npub(crate) fn adopted_changes_from_merge_plan(\n    plan: &TrackedStateMergePlan,\n    target_version_id: &str,\n) -> Vec<TransactionAdoptedChange> {\n    plan.patches\n        .iter()\n        .map(|patch| stage_adopted_change_from_patch(patch, target_version_id))\n        .collect()\n}\n\nfn stage_adopted_change_from_patch(\n    patch: &crate::tracked_state::TrackedStateMergePatch,\n    target_version_id: &str,\n) -> TransactionAdoptedChange {\n    TransactionAdoptedChange {\n        version_id: target_version_id.to_string(),\n        change_id: patch.change_id().to_string(),\n        projected_row: patch.projected_row().clone(),\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/session/merge/conflicts.rs",
    "content": "use crate::tracked_state::{\n    TrackedStateDiffEntry, TrackedStateDiffKind, TrackedStateMergeConflict, TrackedStateMergePlan,\n};\nuse crate::LixError;\nuse serde_json::Value as JsonValue;\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct MergeConflict {\n    pub(crate) kind: MergeConflictKind,\n    pub(crate) schema_key: String,\n    pub(crate) entity_id: JsonValue,\n    pub(crate) file_id: Option<String>,\n    pub(crate) target: MergeConflictSide,\n    pub(crate) source: MergeConflictSide,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) enum MergeConflictKind {\n    SameEntityChanged,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct MergeConflictSide {\n    pub(crate) kind: MergeConflictChangeKind,\n    pub(crate) before_change_id: Option<String>,\n    pub(crate) after_change_id: Option<String>,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) enum MergeConflictChangeKind {\n    Added,\n    Modified,\n    Removed,\n}\n\npub(crate) fn conflicts_from_plan(\n    plan: &TrackedStateMergePlan,\n) -> Result<Vec<MergeConflict>, LixError> {\n    plan.conflicts.iter().map(conflict_from_tracked).collect()\n}\n\nfn conflict_from_tracked(conflict: &TrackedStateMergeConflict) -> Result<MergeConflict, LixError> {\n    Ok(MergeConflict {\n        kind: MergeConflictKind::SameEntityChanged,\n        schema_key: conflict.identity.schema_key.clone(),\n        entity_id: conflict.identity.entity_id.as_json_array_value()?,\n        file_id: conflict.identity.file_id.clone(),\n        target: conflict_side_from_diff_entry(&conflict.target),\n        source: conflict_side_from_diff_entry(&conflict.source),\n    })\n}\n\nfn conflict_side_from_diff_entry(entry: &TrackedStateDiffEntry) -> MergeConflictSide {\n    MergeConflictSide {\n        kind: match entry.kind {\n            TrackedStateDiffKind::Added => MergeConflictChangeKind::Added,\n            TrackedStateDiffKind::Modified => 
MergeConflictChangeKind::Modified,\n            TrackedStateDiffKind::Removed => MergeConflictChangeKind::Removed,\n        },\n        before_change_id: entry.before.as_ref().map(|row| row.change_id.clone()),\n        after_change_id: entry.after.as_ref().map(|row| row.change_id.clone()),\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/session/merge/mod.rs",
    "content": "mod analysis;\nmod apply;\nmod conflicts;\nmod stats;\nmod version;\n\npub use version::{\n    MergeChangeStats, MergeConflict, MergeConflictChangeKind, MergeConflictKind, MergeConflictSide,\n    MergeVersionOptions, MergeVersionOutcome, MergeVersionPreview, MergeVersionPreviewOptions,\n    MergeVersionReceipt,\n};\n"
  },
  {
    "path": "packages/engine/src/session/merge/stats.rs",
    "content": "use crate::tracked_state::{\n    TrackedStateDiff, TrackedStateDiffKind, TrackedStateMergePatch, TrackedStateMergePlan,\n};\nuse crate::LixError;\n\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\npub(crate) struct MergeStats {\n    pub(crate) total: usize,\n    pub(crate) added: usize,\n    pub(crate) modified: usize,\n    pub(crate) removed: usize,\n}\n\npub(crate) fn stats_from_diff(diff: &TrackedStateDiff) -> MergeStats {\n    let mut stats = MergeStats::default();\n    for entry in &diff.entries {\n        stats.add(entry.kind);\n    }\n    stats\n}\n\npub(crate) fn stats_from_plan(\n    plan: &TrackedStateMergePlan,\n    source_diff: &TrackedStateDiff,\n) -> Result<MergeStats, LixError> {\n    let mut stats = MergeStats::default();\n    for patch in &plan.patches {\n        let identity = patch_identity(patch);\n        let Some(entry) = source_diff\n            .entries\n            .iter()\n            .find(|entry| &entry.identity == identity)\n        else {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\n                    \"merge analysis could not find source diff entry for adopted schema '{}' entity '{}'\",\n                    identity.schema_key,\n                    identity.entity_id.as_json_array_text()?\n                ),\n            ));\n        };\n        stats.add(entry.kind);\n    }\n    Ok(stats)\n}\n\nimpl MergeStats {\n    fn add(&mut self, kind: TrackedStateDiffKind) {\n        self.total += 1;\n        match kind {\n            TrackedStateDiffKind::Added => self.added += 1,\n            TrackedStateDiffKind::Modified => self.modified += 1,\n            TrackedStateDiffKind::Removed => self.removed += 1,\n        }\n    }\n}\n\nfn patch_identity(\n    patch: &TrackedStateMergePatch,\n) -> &crate::tracked_state::TrackedStateDiffIdentity {\n    match patch {\n        TrackedStateMergePatch::Adopt { identity, .. } => identity,\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/session/merge/version.rs",
    "content": "use serde_json::{json, Value as JsonValue};\n\nuse crate::transaction::types::TransactionWrite;\nuse crate::version::{VersionLifecycle, VersionOperation, VersionReferenceRole};\nuse crate::LixError;\n\nuse super::analysis::{analyze, MergeCommits, MergeOutcome};\nuse super::apply::adopted_changes_from_merge_plan;\nuse super::conflicts::{\n    MergeConflict as AnalysisMergeConflict,\n    MergeConflictChangeKind as AnalysisMergeConflictChangeKind,\n    MergeConflictKind as AnalysisMergeConflictKind, MergeConflictSide as AnalysisMergeConflictSide,\n};\nuse super::stats::MergeStats;\nuse crate::session::context::SessionContext;\n\n/// Options for merging another version into this session's active version.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct MergeVersionOptions {\n    /// Version whose changes should be merged into the active session version.\n    pub source_version_id: String,\n}\n\n/// Options for previewing a merge from another version into this session's\n/// active version.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct MergeVersionPreviewOptions {\n    /// Version whose changes would be merged into the active session version.\n    pub source_version_id: String,\n}\n\n/// Receipt returned after merging a version.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct MergeVersionReceipt {\n    pub outcome: MergeVersionOutcome,\n    pub target_version_id: String,\n    pub source_version_id: String,\n    pub base_commit_id: String,\n    pub target_head_before_commit_id: String,\n    pub source_head_before_commit_id: String,\n    pub target_head_after_commit_id: String,\n    pub created_merge_commit_id: Option<String>,\n    pub change_stats: MergeChangeStats,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\npub struct MergeChangeStats {\n    pub total: usize,\n    pub added: usize,\n    pub modified: usize,\n    pub removed: usize,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct MergeVersionPreview {\n    pub outcome: 
MergeVersionOutcome,\n    pub target_version_id: String,\n    pub source_version_id: String,\n    pub base_commit_id: String,\n    pub target_head_commit_id: String,\n    pub source_head_commit_id: String,\n    pub change_stats: MergeChangeStats,\n    pub conflicts: Vec<MergeConflict>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct MergeConflict {\n    pub kind: MergeConflictKind,\n    pub schema_key: String,\n    pub entity_id: JsonValue,\n    pub file_id: Option<String>,\n    pub target: MergeConflictSide,\n    pub source: MergeConflictSide,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum MergeConflictKind {\n    SameEntityChanged,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct MergeConflictSide {\n    pub kind: MergeConflictChangeKind,\n    pub before_change_id: Option<String>,\n    pub after_change_id: Option<String>,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum MergeConflictChangeKind {\n    Added,\n    Modified,\n    Removed,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum MergeVersionOutcome {\n    AlreadyUpToDate,\n    FastForward,\n    MergeCommitted,\n}\n\nimpl SessionContext {\n    /// Previews merging `source_version_id` into this session's active version\n    /// without advancing refs, staging changes, or creating commits.\n    pub async fn merge_version_preview(\n        &self,\n        options: MergeVersionPreviewOptions,\n    ) -> Result<MergeVersionPreview, LixError> {\n        let source_version_id = options.source_version_id;\n\n        self.with_write_transaction(|transaction| {\n            Box::pin(async move {\n                let active_version_id = transaction.active_version_id().to_string();\n                if source_version_id == active_version_id {\n                    return Err(LixError::invalid_self_merge(active_version_id));\n                }\n\n                let (target_head, source_head) = {\n                    let reader = transaction.version_ref_reader();\n       
             let lifecycle = VersionLifecycle::new(&reader);\n                    let target_head = lifecycle\n                        .require_existing_commit_id(\n                            &active_version_id,\n                            VersionOperation::MergeVersionPreview,\n                            VersionReferenceRole::Target,\n                        )\n                        .await?;\n                    let source_head = lifecycle\n                        .require_existing_commit_id(\n                            &source_version_id,\n                            VersionOperation::MergeVersionPreview,\n                            VersionReferenceRole::Source,\n                        )\n                        .await?;\n                    (target_head, source_head)\n                };\n\n                let merge_base = {\n                    let mut reader = transaction.commit_graph_reader();\n                    reader.merge_base(&target_head, &source_head).await?\n                };\n\n                let analysis = {\n                    let mut reader = transaction.tracked_state_reader();\n                    analyze(\n                        &mut reader,\n                        MergeCommits {\n                            base_commit_id: merge_base.commit_id,\n                            target_commit_id: target_head,\n                            source_commit_id: source_head,\n                        },\n                    )\n                    .await?\n                };\n\n                Ok(preview_from_analysis(\n                    &active_version_id,\n                    &source_version_id,\n                    &analysis,\n                ))\n            })\n        })\n        .await\n    }\n\n    /// Merges `source_version_id` into this session's active version.\n    ///\n    /// The generated target commit keeps the previous target head as its first\n    /// parent and records the source head as an additional parent, so the\n    /// 
commit graph preserves branch ancestry while tracked-state storage can\n    /// build the new root by applying source effects onto the target root.\n    pub async fn merge_version(\n        &self,\n        options: MergeVersionOptions,\n    ) -> Result<MergeVersionReceipt, LixError> {\n        let source_version_id = options.source_version_id;\n\n        self.with_write_transaction(|transaction| {\n            Box::pin(async move {\n                let active_version_id = transaction.active_version_id().to_string();\n                if source_version_id == active_version_id {\n                    return Err(LixError::invalid_self_merge(active_version_id));\n                }\n\n                let (target_head, source_head) = {\n                    let reader = transaction.version_ref_reader();\n                    let lifecycle = VersionLifecycle::new(&reader);\n                    let target_head = lifecycle\n                        .require_existing_commit_id(\n                            &active_version_id,\n                            VersionOperation::MergeVersion,\n                            VersionReferenceRole::Target,\n                        )\n                        .await?;\n                    let source_head = lifecycle\n                        .require_existing_commit_id(\n                            &source_version_id,\n                            VersionOperation::MergeVersion,\n                            VersionReferenceRole::Source,\n                        )\n                        .await?;\n                    (target_head, source_head)\n                };\n\n                let merge_base = {\n                    let mut reader = transaction.commit_graph_reader();\n                    reader.merge_base(&target_head, &source_head).await?\n                };\n                let base_commit_id = merge_base.commit_id;\n\n                let analysis = {\n                    let mut reader = transaction.tracked_state_reader();\n               
     analyze(\n                        &mut reader,\n                        MergeCommits {\n                            base_commit_id,\n                            target_commit_id: target_head,\n                            source_commit_id: source_head,\n                        },\n                    )\n                    .await?\n                };\n\n                if analysis.outcome == MergeOutcome::AlreadyUpToDate {\n                    return Ok(MergeVersionReceipt {\n                        outcome: MergeVersionOutcome::AlreadyUpToDate,\n                        target_version_id: active_version_id,\n                        source_version_id,\n                        base_commit_id: analysis.commits.base_commit_id,\n                        target_head_after_commit_id: analysis.commits.target_commit_id.clone(),\n                        target_head_before_commit_id: analysis.commits.target_commit_id,\n                        source_head_before_commit_id: analysis.commits.source_commit_id,\n                        created_merge_commit_id: None,\n                        change_stats: merge_change_stats_from_analysis(&analysis.stats),\n                    });\n                }\n\n                if analysis.outcome == MergeOutcome::FastForward {\n                    transaction\n                        .advance_version_ref(&active_version_id, &analysis.commits.source_commit_id)\n                        .await?;\n\n                    return Ok(MergeVersionReceipt {\n                        outcome: MergeVersionOutcome::FastForward,\n                        target_version_id: active_version_id,\n                        source_version_id,\n                        base_commit_id: analysis.commits.base_commit_id,\n                        target_head_before_commit_id: analysis.commits.target_commit_id,\n                        source_head_before_commit_id: analysis.commits.source_commit_id.clone(),\n                        target_head_after_commit_id: 
analysis.commits.source_commit_id,\n                        created_merge_commit_id: None,\n                        change_stats: merge_change_stats_from_analysis(&analysis.stats),\n                    });\n                }\n\n                let merge_plan = analysis\n                    .merge_plan()\n                    .expect(\"merge analysis should include a plan for MergeCommitted\");\n\n                if !analysis.conflicts.is_empty() {\n                    return Err(merge_conflict_error(\n                        &analysis\n                            .conflicts\n                            .iter()\n                            .map(merge_conflict_from_analysis)\n                            .collect::<Vec<_>>(),\n                    )?);\n                }\n\n                let adopted_changes =\n                    adopted_changes_from_merge_plan(merge_plan, &active_version_id);\n                if adopted_changes.is_empty() {\n                    let created_merge_commit_id =\n                        transaction.stage_empty_commit(active_version_id.clone())?;\n                    transaction.add_commit_parent(\n                        active_version_id.clone(),\n                        analysis.commits.source_commit_id.clone(),\n                    )?;\n                    return Ok(MergeVersionReceipt {\n                        outcome: MergeVersionOutcome::MergeCommitted,\n                        target_version_id: active_version_id,\n                        source_version_id,\n                        base_commit_id: analysis.commits.base_commit_id,\n                        target_head_after_commit_id: created_merge_commit_id.clone(),\n                        target_head_before_commit_id: analysis.commits.target_commit_id,\n                        source_head_before_commit_id: analysis.commits.source_commit_id,\n                        created_merge_commit_id: Some(created_merge_commit_id),\n                        change_stats: 
merge_change_stats_from_analysis(&analysis.stats),\n                    });\n                }\n\n                transaction\n                    .stage_write(TransactionWrite::AdoptedChanges {\n                        changes: adopted_changes,\n                    })\n                    .await?;\n                let created_merge_commit_id = transaction\n                    .staged_commit_id(&active_version_id)?\n                    .ok_or_else(|| {\n                        LixError::new(\n                            \"LIX_ERROR_UNKNOWN\",\n                            \"merge_version staged tracked rows without a commit id\",\n                        )\n                    })?;\n                transaction.add_commit_parent(\n                    active_version_id.clone(),\n                    analysis.commits.source_commit_id.clone(),\n                )?;\n\n                Ok(MergeVersionReceipt {\n                    outcome: MergeVersionOutcome::MergeCommitted,\n                    target_version_id: active_version_id,\n                    source_version_id,\n                    base_commit_id: analysis.commits.base_commit_id,\n                    target_head_before_commit_id: analysis.commits.target_commit_id,\n                    source_head_before_commit_id: analysis.commits.source_commit_id,\n                    created_merge_commit_id: Some(created_merge_commit_id.clone()),\n                    target_head_after_commit_id: created_merge_commit_id,\n                    change_stats: merge_change_stats_from_analysis(&analysis.stats),\n                })\n            })\n        })\n        .await\n    }\n}\n\nfn preview_from_analysis(\n    target_version_id: &str,\n    source_version_id: &str,\n    analysis: &super::analysis::MergeAnalysis,\n) -> MergeVersionPreview {\n    MergeVersionPreview {\n        outcome: merge_version_outcome_from_analysis(analysis.outcome),\n        target_version_id: target_version_id.to_string(),\n        source_version_id: 
source_version_id.to_string(),\n        base_commit_id: analysis.commits.base_commit_id.clone(),\n        target_head_commit_id: analysis.commits.target_commit_id.clone(),\n        source_head_commit_id: analysis.commits.source_commit_id.clone(),\n        change_stats: merge_change_stats_from_analysis(&analysis.stats),\n        conflicts: analysis\n            .conflicts\n            .iter()\n            .map(merge_conflict_from_analysis)\n            .collect(),\n    }\n}\n\nfn merge_version_outcome_from_analysis(outcome: MergeOutcome) -> MergeVersionOutcome {\n    match outcome {\n        MergeOutcome::AlreadyUpToDate => MergeVersionOutcome::AlreadyUpToDate,\n        MergeOutcome::FastForward => MergeVersionOutcome::FastForward,\n        MergeOutcome::MergeCommitted => MergeVersionOutcome::MergeCommitted,\n    }\n}\n\nfn merge_change_stats_from_analysis(stats: &MergeStats) -> MergeChangeStats {\n    MergeChangeStats {\n        total: stats.total,\n        added: stats.added,\n        modified: stats.modified,\n        removed: stats.removed,\n    }\n}\n\nfn merge_conflict_from_analysis(conflict: &AnalysisMergeConflict) -> MergeConflict {\n    MergeConflict {\n        kind: match conflict.kind {\n            AnalysisMergeConflictKind::SameEntityChanged => MergeConflictKind::SameEntityChanged,\n        },\n        schema_key: conflict.schema_key.clone(),\n        entity_id: conflict.entity_id.clone(),\n        file_id: conflict.file_id.clone(),\n        target: merge_conflict_side_from_analysis(&conflict.target),\n        source: merge_conflict_side_from_analysis(&conflict.source),\n    }\n}\n\nfn merge_conflict_side_from_analysis(side: &AnalysisMergeConflictSide) -> MergeConflictSide {\n    MergeConflictSide {\n        kind: match side.kind {\n            AnalysisMergeConflictChangeKind::Added => MergeConflictChangeKind::Added,\n            AnalysisMergeConflictChangeKind::Modified => MergeConflictChangeKind::Modified,\n            
AnalysisMergeConflictChangeKind::Removed => MergeConflictChangeKind::Removed,\n        },\n        before_change_id: side.before_change_id.clone(),\n        after_change_id: side.after_change_id.clone(),\n    }\n}\n\nfn merge_conflict_error(conflicts: &[MergeConflict]) -> Result<LixError, LixError> {\n    let conflict_count = conflicts.len();\n    Ok(LixError::new(\n        LixError::CODE_MERGE_CONFLICT,\n        format!(\"merge_version found {conflict_count} tracked-state conflict(s)\"),\n    )\n    .with_hint(\"Resolve the conflicting entities in the target version, then retry the merge.\")\n    .with_details(json!({\n        \"conflicts\": conflicts.iter()\n            .map(merge_conflict_details)\n            .collect::<Vec<_>>(),\n    })))\n}\n\nfn merge_conflict_details(conflict: &MergeConflict) -> serde_json::Value {\n    json!({\n        \"kind\": match conflict.kind {\n            MergeConflictKind::SameEntityChanged => \"sameEntityChanged\",\n        },\n        \"schemaKey\": conflict.schema_key,\n        \"entityId\": conflict.entity_id,\n        \"fileId\": conflict.file_id,\n        \"target\": merge_conflict_side_details(&conflict.target),\n        \"source\": merge_conflict_side_details(&conflict.source),\n    })\n}\n\nfn merge_conflict_side_details(side: &MergeConflictSide) -> serde_json::Value {\n    json!({\n        \"kind\": match side.kind {\n            MergeConflictChangeKind::Added => \"added\",\n            MergeConflictChangeKind::Modified => \"modified\",\n            MergeConflictChangeKind::Removed => \"removed\",\n        },\n        \"beforeChangeId\": side.before_change_id,\n        \"afterChangeId\": side.after_change_id,\n    })\n}\n"
  },
  {
    "path": "packages/engine/src/session/mod.rs",
    "content": "//! Engine session boundary.\n//!\n//! Transaction invariant:\n//! any engine operation that may write must enter through\n//! `SessionContext::with_write_transaction`. Reads that influence writes are\n//! only available from the transaction capability. Session APIs must not\n//! open `Transaction` directly or use session-level read helpers inside write\n//! flows.\n\nmod context;\nmod create_version;\nmod execute;\nmod merge;\n#[cfg(feature = \"storage-benches\")]\npub mod optimization9_sql2_bench;\nmod switch_version;\n\npub use context::SessionContext;\npub(crate) use context::{SessionMode, WORKSPACE_VERSION_KEY};\npub use create_version::{CreateVersionOptions, CreateVersionReceipt};\npub use execute::{ExecuteResult, Row, RowRef, TryFromValue};\npub use merge::{\n    MergeChangeStats, MergeConflict, MergeConflictChangeKind, MergeConflictKind, MergeConflictSide,\n    MergeVersionOptions, MergeVersionOutcome, MergeVersionPreview, MergeVersionPreviewOptions,\n    MergeVersionReceipt,\n};\npub use switch_version::{SwitchVersionOptions, SwitchVersionReceipt};\n"
  },
  {
    "path": "packages/engine/src/session/optimization9_sql2_bench.rs",
    "content": "use crate::functions::FunctionContext;\nuse crate::session::context::{SessionContext, SessionSqlExecutionContext};\nuse crate::sql2::{self, SqlLogicalPlan};\nuse crate::storage::StorageReadScope;\nuse crate::transaction::open_transaction;\nuse crate::{LixError, SqlQueryResult, Value};\n\n/// Opaque read plan used by the Optimization 9 SQL2 diagnostic benchmark.\n///\n/// This module is gated behind `storage-benches` and exists only to split SQL2\n/// planning cost from SQL2 execution cost without widening the normal session\n/// API.\npub struct PreparedReadPlan {\n    plan: SqlLogicalPlan,\n    read_scope:\n        StorageReadScope<Box<dyn crate::storage::StorageReadTransaction + Send + Sync + 'static>>,\n    runtime_functions: FunctionContext,\n}\n\npub async fn plan_read_only(session: &SessionContext, sql: &str) -> Result<(), LixError> {\n    let prepared = prepare_read_plan(session, sql).await?;\n    drop(prepared.plan);\n    drop(prepared.runtime_functions);\n    prepared.read_scope.rollback().await\n}\n\npub async fn plan_write_only(session: &SessionContext, sql: &str) -> Result<(), LixError> {\n    session.ensure_open()?;\n    let opened = open_transaction(\n        &session.mode,\n        session.storage.clone(),\n        std::sync::Arc::clone(&session.live_state),\n        std::sync::Arc::clone(&session.tracked_state),\n        std::sync::Arc::clone(&session.binary_cas),\n        std::sync::Arc::clone(&session.commit_store),\n        std::sync::Arc::clone(&session.version_ctx),\n        std::sync::Arc::clone(&session.catalog_context),\n    )\n    .await?;\n    let mut transaction = opened.transaction;\n    let runtime_functions = opened.runtime_functions;\n    let plan = sql2::create_write_logical_plan(&mut transaction, sql).await?;\n    drop(plan);\n    drop(runtime_functions);\n    transaction.rollback().await\n}\n\npub async fn prepare_read_plan(\n    session: &SessionContext,\n    sql: &str,\n) -> Result<PreparedReadPlan, LixError> {\n  
  session.ensure_open()?;\n    let read_scope = StorageReadScope::new(session.storage.begin_read_transaction().await?);\n    let mut read_store = read_scope.store();\n    let live_state: std::sync::Arc<dyn crate::live_state::LiveStateReader> =\n        std::sync::Arc::new(session.live_state.reader(read_store.clone()));\n    let runtime_functions = FunctionContext::prepare(live_state.as_ref()).await?;\n    let functions = runtime_functions.provider();\n    let active_version_id = session\n        .active_version_id_from_reader(&mut read_store)\n        .await?;\n    let visible_schemas = session\n        .catalog_context\n        .schema_jsons_for_sql_read_planning(live_state.as_ref(), &active_version_id)\n        .await?;\n    let ctx = SessionSqlExecutionContext {\n        active_version_id: &active_version_id,\n        read_store,\n        live_state: std::sync::Arc::clone(&session.live_state),\n        binary_cas: std::sync::Arc::clone(&session.binary_cas),\n        commit_store: std::sync::Arc::clone(&session.commit_store),\n        version_ctx: std::sync::Arc::clone(&session.version_ctx),\n        visible_schemas,\n        functions: functions.clone(),\n    };\n    let plan = sql2::create_logical_plan(&ctx, sql).await?;\n    drop(ctx);\n    drop(live_state);\n\n    Ok(PreparedReadPlan {\n        plan,\n        read_scope,\n        runtime_functions,\n    })\n}\n\npub async fn execute_read_plan(\n    prepared: PreparedReadPlan,\n    params: &[Value],\n) -> Result<SqlQueryResult, LixError> {\n    let PreparedReadPlan {\n        plan,\n        read_scope,\n        runtime_functions,\n    } = prepared;\n    let result = sql2::execute_logical_plan(plan, params).await;\n    read_scope.rollback().await?;\n    drop(runtime_functions);\n    result\n}\n"
  },
  {
    "path": "packages/engine/src/session/switch_version.rs",
    "content": "use std::sync::Arc;\n\nuse serde_json::json;\n\nuse crate::transaction::types::{TransactionJson, TransactionWriteRow};\nuse crate::version::{VersionLifecycle, VersionOperation, VersionReferenceRole};\nuse crate::LixError;\nuse crate::GLOBAL_VERSION_ID;\n\nuse super::context::{SessionContext, SessionMode, WORKSPACE_VERSION_KEY};\n\nconst KEY_VALUE_SCHEMA_KEY: &str = \"lix_key_value\";\n\n/// Options for switching a session to another version.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct SwitchVersionOptions {\n    pub version_id: String,\n}\n\n/// Receipt returned after switching to another version.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct SwitchVersionReceipt {\n    pub version_id: String,\n}\n\nimpl SessionContext {\n    /// Switches the session's active version selector.\n    ///\n    /// Pinned sessions switch in memory and return a new pinned session.\n    /// Workspace sessions update the shared workspace selector so other\n    /// workspace sessions observe the new active version on their next use.\n    pub async fn switch_version(\n        &self,\n        options: SwitchVersionOptions,\n    ) -> Result<(SessionContext, SwitchVersionReceipt), LixError> {\n        let version_id = options.version_id;\n        let receipt_version_id = version_id.clone();\n        let current_mode = self.mode.clone();\n        let next_mode = self\n            .with_write_transaction(|transaction| {\n                Box::pin(async move {\n                    {\n                        let reader = transaction.version_ref_reader();\n                        VersionLifecycle::new(&reader)\n                            .require_existing_commit_id(\n                                &version_id,\n                                VersionOperation::SwitchVersion,\n                                VersionReferenceRole::Target,\n                            )\n                            .await?\n                    };\n\n                    match current_mode 
{\n                        SessionMode::Pinned { .. } => Ok(SessionMode::Pinned {\n                            version_id: version_id.clone(),\n                        }),\n                        SessionMode::Workspace => {\n                            transaction\n                                .stage_rows(vec![workspace_version_stage_row(&version_id)?])\n                                .await?;\n                            Ok(SessionMode::Workspace)\n                        }\n                    }\n                })\n            })\n            .await?;\n\n        let session = SessionContext::new_with_closed(\n            next_mode,\n            self.storage.clone(),\n            Arc::clone(&self.live_state),\n            Arc::clone(&self.tracked_state),\n            Arc::clone(&self.binary_cas),\n            Arc::clone(&self.commit_store),\n            Arc::clone(&self.version_ctx),\n            Arc::clone(&self.catalog_context),\n            self.closed_flag(),\n        );\n        Ok((\n            session,\n            SwitchVersionReceipt {\n                version_id: receipt_version_id,\n            },\n        ))\n    }\n}\n\nfn workspace_version_stage_row(version_id: &str) -> Result<TransactionWriteRow, LixError> {\n    Ok(TransactionWriteRow {\n        entity_id: Some(crate::entity_identity::EntityIdentity::single(\n            WORKSPACE_VERSION_KEY,\n        )),\n        schema_key: KEY_VALUE_SCHEMA_KEY.to_string(),\n        file_id: None,\n        snapshot: Some(TransactionJson::from_value_unchecked(json!({\n            \"key\": WORKSPACE_VERSION_KEY,\n            \"value\": version_id,\n        }))),\n        metadata: None,\n        origin: None,\n        created_at: None,\n        updated_at: None,\n        global: true,\n        change_id: None,\n        commit_id: None,\n        untracked: true,\n        version_id: GLOBAL_VERSION_ID.to_string(),\n    })\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/change_provider.rs",
    "content": "use std::any::Any;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse datafusion::arrow::array::{ArrayRef, StringArray};\nuse datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef};\nuse datafusion::arrow::record_batch::RecordBatch;\nuse datafusion::catalog::{Session, TableProvider};\nuse datafusion::common::{DataFusionError, Result};\nuse datafusion::datasource::TableType;\nuse datafusion::execution::TaskContext;\nuse datafusion::logical_expr::{Expr, TableProviderFilterPushDown};\nuse datafusion::physical_expr::EquivalenceProperties;\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};\nuse datafusion::physical_plan::stream::RecordBatchStreamAdapter;\nuse datafusion::physical_plan::{\n    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,\n};\nuse futures_util::stream;\n\nuse crate::commit_store::ChangeScanRequest;\nuse crate::serialize_row_metadata;\nuse crate::LixError;\n\nuse super::record_batch::record_batch_with_row_count;\nuse super::result_metadata::json_field;\nuse super::SqlCommitStoreQuerySource;\nuse crate::commit_store::{materialize_change, MaterializedChange};\n\npub(crate) async fn register_lix_change_provider(\n    session: &datafusion::prelude::SessionContext,\n    query_source: SqlCommitStoreQuerySource,\n) -> Result<(), LixError> {\n    session\n        .register_table(\"lix_change\", Arc::new(LixChangeProvider::new(query_source)))\n        .map_err(datafusion_error_to_lix_error)?;\n    Ok(())\n}\n\nstruct LixChangeProvider {\n    schema: SchemaRef,\n    query_source: SqlCommitStoreQuerySource,\n}\n\nimpl std::fmt::Debug for LixChangeProvider {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixChangeProvider\").finish()\n    }\n}\n\nimpl LixChangeProvider {\n    fn new(query_source: SqlCommitStoreQuerySource) -> Self {\n        Self {\n            schema: lix_change_schema(),\n         
   query_source,\n        }\n    }\n}\n\n#[async_trait]\nimpl TableProvider for LixChangeProvider {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n\n    fn table_type(&self) -> TableType {\n        TableType::Base\n    }\n\n    fn supports_filters_pushdown(\n        &self,\n        filters: &[&Expr],\n    ) -> Result<Vec<TableProviderFilterPushDown>> {\n        Ok(filters\n            .iter()\n            .map(|_| TableProviderFilterPushDown::Unsupported)\n            .collect())\n    }\n\n    async fn scan(\n        &self,\n        _state: &dyn Session,\n        projection: Option<&Vec<usize>>,\n        _filters: &[Expr],\n        limit: Option<usize>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        Ok(Arc::new(LixChangeScanExec::new(\n            self.query_source.clone(),\n            projected_schema(&self.schema, projection),\n            projection.cloned(),\n            limit,\n        )))\n    }\n}\n\nstruct LixChangeScanExec {\n    query_source: SqlCommitStoreQuerySource,\n    schema: SchemaRef,\n    projection: Option<Vec<usize>>,\n    limit: Option<usize>,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixChangeScanExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixChangeScanExec\").finish()\n    }\n}\n\nimpl LixChangeScanExec {\n    fn new(\n        query_source: SqlCommitStoreQuerySource,\n        schema: SchemaRef,\n        projection: Option<Vec<usize>>,\n        limit: Option<usize>,\n    ) -> Self {\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(schema.clone()),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Incremental,\n            Boundedness::Bounded,\n        );\n        Self {\n            query_source,\n            schema,\n            projection,\n            limit,\n            properties: 
Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixChangeScanExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"LixChangeScanExec\")\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixChangeScanExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixChangeScanExec {\n    fn name(&self) -> &str {\n        \"LixChangeScanExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixChangeScanExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixChangeScanExec only exposes one partition, got {partition}\"\n            )));\n        }\n\n        let query_source = self.query_source.clone();\n        let projection = change_projection_for_scan(self.projection.as_ref());\n        let limit = self.limit;\n        let schema = Arc::clone(&self.schema);\n        let stream = stream::once(async move {\n            let mut json_reader = query_source.json_reader;\n            let canonical_changes = query_source\n                .commit_store_reader\n                .scan_changes(&ChangeScanRequest { limit })\n                
.await\n                .map_err(lix_error_to_datafusion_error)?;\n            let mut changes = Vec::with_capacity(canonical_changes.len());\n            for change in canonical_changes {\n                changes.push(\n                    materialize_change(&mut json_reader, change)\n                        .await\n                        .map_err(lix_error_to_datafusion_error)?,\n                );\n            }\n            change_record_batch(&projection, &changes)\n        });\n        Ok(Box::pin(RecordBatchStreamAdapter::new(schema, stream)))\n    }\n}\n\n#[derive(Debug, Clone, Copy)]\nenum ChangeColumn {\n    Id,\n    EntityId,\n    SchemaKey,\n    FileId,\n    Metadata,\n    CreatedAt,\n    SnapshotContent,\n}\n\nfn lix_change_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![\n        Field::new(\"id\", DataType::Utf8, false),\n        json_field(\"entity_id\", false),\n        Field::new(\"schema_key\", DataType::Utf8, false),\n        Field::new(\"file_id\", DataType::Utf8, true),\n        json_field(\"metadata\", true),\n        Field::new(\"created_at\", DataType::Utf8, false),\n        json_field(\"snapshot_content\", true),\n    ]))\n}\n\nfn change_projection_for_scan(projection: Option<&Vec<usize>>) -> Vec<ChangeColumn> {\n    let all_columns = vec![\n        ChangeColumn::Id,\n        ChangeColumn::EntityId,\n        ChangeColumn::SchemaKey,\n        ChangeColumn::FileId,\n        ChangeColumn::Metadata,\n        ChangeColumn::CreatedAt,\n        ChangeColumn::SnapshotContent,\n    ];\n    projection.map_or(all_columns.clone(), |indices| {\n        indices\n            .iter()\n            .filter_map(|index| all_columns.get(*index).copied())\n            .collect()\n    })\n}\n\nfn projected_schema(schema: &SchemaRef, projection: Option<&Vec<usize>>) -> SchemaRef {\n    match projection {\n        Some(projection) => Arc::new(schema.project(projection).expect(\"projection is valid\")),\n        None => Arc::clone(schema),\n    }\n}\n\nfn 
change_record_batch(\n    projection: &[ChangeColumn],\n    changes: &[MaterializedChange],\n) -> Result<RecordBatch> {\n    let arrays = projection\n        .iter()\n        .map(|column| match column {\n            ChangeColumn::Id => string_array(changes.iter().map(|row| Some(row.id.as_str()))),\n            ChangeColumn::EntityId => Arc::new(StringArray::from(\n                changes\n                    .iter()\n                    .map(|row| {\n                        Some(\n                            row.entity_id\n                                .as_json_array_text()\n                                .expect(\"canonical change entity identity should project\"),\n                        )\n                    })\n                    .collect::<Vec<_>>(),\n            )) as ArrayRef,\n            ChangeColumn::SchemaKey => {\n                string_array(changes.iter().map(|row| Some(row.schema_key.as_str())))\n            }\n            ChangeColumn::FileId => string_array(changes.iter().map(|row| row.file_id.as_deref())),\n            ChangeColumn::Metadata => Arc::new(StringArray::from(\n                changes\n                    .iter()\n                    .map(|row| row.metadata.as_ref().map(serialize_row_metadata))\n                    .collect::<Vec<_>>(),\n            )),\n            ChangeColumn::CreatedAt => {\n                string_array(changes.iter().map(|row| Some(row.created_at.as_str())))\n            }\n            ChangeColumn::SnapshotContent => {\n                string_array(changes.iter().map(|row| row.snapshot_content.as_deref()))\n            }\n        })\n        .collect::<Vec<_>>();\n    record_batch_with_row_count(change_schema(projection), arrays, changes.len()).map_err(|error| {\n        DataFusionError::Execution(format!(\"failed to build lix_change batch: {error}\"))\n    })\n}\n\nfn change_schema(projection: &[ChangeColumn]) -> SchemaRef {\n    Arc::new(Schema::new(\n        projection\n            .iter()\n            
.map(|column| match column {\n                ChangeColumn::Id => Field::new(\"id\", DataType::Utf8, false),\n                ChangeColumn::EntityId => json_field(\"entity_id\", false),\n                ChangeColumn::SchemaKey => Field::new(\"schema_key\", DataType::Utf8, false),\n                ChangeColumn::FileId => Field::new(\"file_id\", DataType::Utf8, true),\n                ChangeColumn::Metadata => json_field(\"metadata\", true),\n                ChangeColumn::CreatedAt => Field::new(\"created_at\", DataType::Utf8, false),\n                ChangeColumn::SnapshotContent => json_field(\"snapshot_content\", true),\n            })\n            .collect::<Vec<_>>(),\n    ))\n}\n\nfn string_array<'a>(values: impl Iterator<Item = Option<&'a str>>) -> ArrayRef {\n    Arc::new(StringArray::from(values.collect::<Vec<_>>())) as ArrayRef\n}\n\nfn datafusion_error_to_lix_error(error: DataFusionError) -> LixError {\n    super::error::datafusion_error_to_lix_error(error)\n}\n\nfn lix_error_to_datafusion_error(error: LixError) -> DataFusionError {\n    super::error::lix_error_to_datafusion_error(error)\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/classify.rs",
    "content": "use datafusion::sql::parser::Statement as DataFusionStatement;\nuse datafusion::sql::sqlparser::ast::{\n    FromTable, ObjectName, Query, SetExpr, Statement as SqlStatement, TableFactor, TableObject,\n    TableWithJoins,\n};\nuse datafusion::sql::sqlparser::dialect::GenericDialect;\nuse datafusion::sql::sqlparser::parser::Parser;\n\nuse crate::LixError;\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) enum SqlStatementKind {\n    Read,\n    Write,\n    Other,\n}\n\npub(crate) fn classify_statement(sql: &str) -> Result<SqlStatementKind, LixError> {\n    let statements = parse_sql_statements(sql)?;\n    let [statement] = statements.as_slice() else {\n        return Ok(SqlStatementKind::Other);\n    };\n    Ok(classify_ast_statement(statement))\n}\n\npub(crate) fn validate_supported_statement_ast(sql: &str) -> Result<(), LixError> {\n    let statements = parse_sql_statements(sql)?;\n    let [statement] = statements.as_slice() else {\n        return Err(unsupported_sql_error(\n            \"Lix SQL only supports one statement per execute() call\",\n        ));\n    };\n    validate_supported_ast_statement(statement)\n}\n\npub(crate) fn validate_supported_datafusion_statement_ast(\n    statement: &DataFusionStatement,\n) -> Result<(), LixError> {\n    match statement {\n        DataFusionStatement::Statement(statement) => validate_supported_ast_statement(statement),\n        DataFusionStatement::Explain(explain) => {\n            validate_supported_datafusion_statement_ast(explain.statement.as_ref())\n        }\n        _ => Err(unsupported_sql_error(format!(\n            \"SQL statement is not supported by Lix SQL: {statement}\"\n        ))),\n    }\n}\n\npub(crate) fn datafusion_statement_dml_target_table_names(\n    statement: &DataFusionStatement,\n) -> Vec<String> {\n    let mut targets = Vec::new();\n    collect_datafusion_statement_dml_target_table_names(statement, &mut targets);\n    targets\n}\n\nfn parse_sql_statements(sql: &str) -> 
Result<Vec<SqlStatement>, LixError> {\n    Parser::parse_sql(&GenericDialect {}, sql).map_err(|error| {\n        LixError::new(\n            LixError::CODE_PARSE_ERROR,\n            format!(\"sql2 SQL parse error: {error}\"),\n        )\n    })\n}\n\nfn collect_datafusion_statement_dml_target_table_names(\n    statement: &DataFusionStatement,\n    targets: &mut Vec<String>,\n) {\n    match statement {\n        DataFusionStatement::Statement(statement) => {\n            collect_dml_target_table_names(statement, targets);\n        }\n        DataFusionStatement::Explain(explain) => {\n            collect_datafusion_statement_dml_target_table_names(\n                explain.statement.as_ref(),\n                targets,\n            );\n        }\n        _ => {}\n    }\n}\n\nfn collect_dml_target_table_names(statement: &SqlStatement, targets: &mut Vec<String>) {\n    match statement {\n        SqlStatement::Insert(insert) => {\n            if let TableObject::TableName(name) = &insert.table {\n                if let Some(table_name) = object_name_table_part(name) {\n                    targets.push(table_name);\n                }\n            }\n        }\n        SqlStatement::Update(update) => {\n            collect_table_with_joins_target(&update.table, targets);\n        }\n        SqlStatement::Delete(delete) => {\n            let tables = match &delete.from {\n                FromTable::WithFromKeyword(tables) | FromTable::WithoutKeyword(tables) => tables,\n            };\n            for table in tables {\n                collect_table_with_joins_target(table, targets);\n            }\n        }\n        SqlStatement::Explain { statement, .. } => {\n            collect_dml_target_table_names(statement.as_ref(), targets);\n        }\n        _ => {}\n    }\n}\n\nfn collect_table_with_joins_target(table: &TableWithJoins, targets: &mut Vec<String>) {\n    if let TableFactor::Table { name, .. 
} = &table.relation {\n        if let Some(table_name) = object_name_table_part(name) {\n            targets.push(table_name);\n        }\n    }\n}\n\nfn object_name_table_part(name: &ObjectName) -> Option<String> {\n    name.0.last().and_then(|part| part.as_ident()).map(|ident| {\n        if ident.quote_style.is_some() {\n            ident.value.clone()\n        } else {\n            ident.value.to_ascii_lowercase()\n        }\n    })\n}\n\nfn classify_ast_statement(statement: &SqlStatement) -> SqlStatementKind {\n    match statement {\n        SqlStatement::Insert(_) | SqlStatement::Update(_) | SqlStatement::Delete(_) => {\n            SqlStatementKind::Write\n        }\n        SqlStatement::Query(_) => SqlStatementKind::Read,\n        SqlStatement::Explain { statement, .. } => classify_ast_statement(statement.as_ref()),\n        _ => SqlStatementKind::Other,\n    }\n}\n\nfn validate_supported_ast_statement(statement: &SqlStatement) -> Result<(), LixError> {\n    match statement {\n        SqlStatement::Query(query) => validate_supported_query(query),\n        SqlStatement::Insert(_) | SqlStatement::Update(_) | SqlStatement::Delete(_) => Ok(()),\n        SqlStatement::Explain { statement, .. 
} => validate_supported_ast_statement(statement),\n        _ => Err(unsupported_sql_error(format!(\n            \"SQL statement is not supported by Lix SQL: {statement}\"\n        ))),\n    }\n}\n\nfn validate_supported_query(query: &Query) -> Result<(), LixError> {\n    if query.with.as_ref().is_some_and(|with| with.recursive) {\n        return Err(\n            unsupported_sql_error(\"recursive CTEs are not supported by Lix SQL\").with_hint(\n                \"Use explicit commit graph surfaces such as lix_commit, lix_commit_edge, and lix_state_history instead of WITH RECURSIVE.\",\n            ),\n        );\n    }\n\n    if let Some(with) = &query.with {\n        for cte in &with.cte_tables {\n            validate_supported_query(&cte.query)?;\n        }\n    }\n    validate_supported_set_expr(&query.body)\n}\n\nfn validate_supported_set_expr(expr: &SetExpr) -> Result<(), LixError> {\n    match expr {\n        SetExpr::Query(query) => validate_supported_query(query),\n        SetExpr::SetOperation { left, right, .. } => {\n            validate_supported_set_expr(left)?;\n            validate_supported_set_expr(right)\n        }\n        _ => Ok(()),\n    }\n}\n\nfn unsupported_sql_error(message: impl Into<String>) -> LixError {\n    LixError::new(LixError::CODE_UNSUPPORTED_SQL, message)\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/context.rs",
    "content": "use std::ptr::NonNull;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse serde_json::Value as JsonValue;\nuse tokio::sync::Mutex;\n\nuse crate::binary_cas::{BlobBytesBatch, BlobDataReader, BlobHash};\nuse crate::commit_graph::CommitGraphReader;\nuse crate::commit_store::CommitStoreReader;\nuse crate::functions::FunctionProviderHandle;\nuse crate::json_store::JsonStoreReader;\nuse crate::live_state::{\n    LiveStateFilter, LiveStateReader, LiveStateRowRequest, LiveStateScanRequest,\n    MaterializedLiveStateRow,\n};\nuse crate::storage::{ScopedStorageReader, StorageReadTransaction};\nuse crate::transaction::types::{TransactionWrite, TransactionWriteOutcome};\nuse crate::version::{VersionHead, VersionRefReader};\nuse crate::LixError;\n\npub(crate) type SqlReadStore =\n    ScopedStorageReader<Box<dyn StorageReadTransaction + Send + Sync + 'static>>;\npub(crate) type SqlCommitStoreQuerySource = CommitStoreQuerySource<SqlReadStore>;\npub(crate) type SqlJsonReader = JsonStoreReader<ScopedStorageReader<SqlReadStore>>;\n\n#[derive(Clone)]\npub(crate) struct CommitStoreQuerySource<S> {\n    pub(crate) commit_store_reader: Arc<CommitStoreReader<ScopedStorageReader<S>>>,\n    pub(crate) json_reader: JsonStoreReader<ScopedStorageReader<S>>,\n}\n\n/// Read-only execution boundary for `sql2::execute_sql(...)`.\n///\n/// Session and transaction orchestration stay above `sql2`. They provide the\n/// execution-scoped committed read context for each call.\n///\n/// This trait is for read SQL session construction. 
Write SQL should use\n/// `SqlWriteExecutionContext` so transaction-scoped reads and staging stay in\n/// the transaction capability instead of flowing through committed read\n/// sources.\n#[allow(dead_code)]\npub(crate) trait SqlExecutionContext {\n    fn active_version_id(&self) -> &str;\n    fn live_state(&self) -> Arc<dyn LiveStateReader>;\n    fn functions(&self) -> FunctionProviderHandle;\n    fn commit_store_query_source(&self) -> SqlCommitStoreQuerySource;\n    fn commit_graph(&self) -> Box<dyn CommitGraphReader>;\n    fn version_ref(&self) -> Arc<dyn VersionRefReader>;\n    fn blob_reader(&self) -> Arc<dyn BlobDataReader>;\n    fn list_visible_schemas(&self) -> Result<Vec<JsonValue>, LixError>;\n}\n\n/// Write-capable SQL runtime boundary.\n///\n/// Providers that mutate engine state should target this shape instead of\n/// reaching through session/backend escape hatches. The request and write\n/// payloads stay in the existing engine forms so this boundary centralizes\n/// authority without adding another translation layer.\n#[async_trait]\n#[allow(dead_code)]\npub(crate) trait SqlWriteExecutionContext {\n    fn active_version_id(&self) -> &str;\n    fn functions(&self) -> FunctionProviderHandle;\n    fn list_visible_schemas(&self) -> Result<Vec<JsonValue>, LixError>;\n\n    async fn load_bytes_many(&mut self, hashes: &[BlobHash]) -> Result<BlobBytesBatch, LixError>;\n\n    async fn scan_live_state(\n        &mut self,\n        request: &LiveStateScanRequest,\n    ) -> Result<Vec<MaterializedLiveStateRow>, LixError>;\n\n    async fn load_version_head(&mut self, version_id: &str) -> Result<Option<String>, LixError>;\n\n    async fn stage_write(\n        &mut self,\n        write: TransactionWrite,\n    ) -> Result<TransactionWriteOutcome, LixError>;\n}\n\n#[derive(Clone)]\npub(crate) struct SqlWriteContext {\n    ptr: Arc<SqlWriteContextPtr>,\n    gate: Arc<Mutex<()>>,\n}\n\nstruct SqlWriteContextPtr(NonNull<dyn SqlWriteExecutionContext>);\n\n// 
DataFusion stores providers as owned Send + Sync trait objects. This context\n// is only constructed for one write execution and never outlives the borrowed\n// transaction context that owns it.\nunsafe impl Send for SqlWriteContextPtr {}\nunsafe impl Sync for SqlWriteContextPtr {}\n\nimpl SqlWriteContext {\n    pub(crate) fn new(ctx: &mut dyn SqlWriteExecutionContext) -> Self {\n        let ptr = NonNull::from(ctx);\n        let ptr = unsafe {\n            std::mem::transmute::<\n                NonNull<dyn SqlWriteExecutionContext + '_>,\n                NonNull<dyn SqlWriteExecutionContext + 'static>,\n            >(ptr)\n        };\n        Self {\n            ptr: Arc::new(SqlWriteContextPtr(ptr)),\n            gate: Arc::new(Mutex::new(())),\n        }\n    }\n\n    pub(crate) fn functions(&self) -> FunctionProviderHandle {\n        unsafe { self.ptr.0.as_ref().functions() }\n    }\n\n    pub(crate) fn blob_reader(&self) -> Arc<dyn BlobDataReader> {\n        Arc::new(WriteContextBlobDataReader::new(self.clone()))\n    }\n\n    pub(crate) fn list_visible_schemas(&self) -> Result<Vec<JsonValue>, LixError> {\n        unsafe { self.ptr.0.as_ref().list_visible_schemas() }\n    }\n\n    pub(crate) fn active_version_id(&self) -> String {\n        unsafe { self.ptr.0.as_ref().active_version_id().to_string() }\n    }\n\n    pub(crate) async fn scan_live_state(\n        &self,\n        request: &LiveStateScanRequest,\n    ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n        let _guard = self.gate.lock().await;\n        unsafe {\n            self.ptr\n                .0\n                .as_ptr()\n                .as_mut()\n                .unwrap()\n                .scan_live_state(request)\n                .await\n        }\n    }\n\n    pub(crate) async fn load_bytes_many(\n        &self,\n        hashes: &[BlobHash],\n    ) -> Result<BlobBytesBatch, LixError> {\n        let _guard = self.gate.lock().await;\n        unsafe {\n            self.ptr\n         
       .0\n                .as_ptr()\n                .as_mut()\n                .unwrap()\n                .load_bytes_many(hashes)\n                .await\n        }\n    }\n\n    pub(crate) async fn load_version_head(\n        &self,\n        version_id: &str,\n    ) -> Result<Option<String>, LixError> {\n        let _guard = self.gate.lock().await;\n        unsafe {\n            self.ptr\n                .0\n                .as_ptr()\n                .as_mut()\n                .unwrap()\n                .load_version_head(version_id)\n                .await\n        }\n    }\n\n    pub(crate) async fn stage_write(\n        &self,\n        write: TransactionWrite,\n    ) -> Result<TransactionWriteOutcome, LixError> {\n        let _guard = self.gate.lock().await;\n        unsafe {\n            self.ptr\n                .0\n                .as_ptr()\n                .as_mut()\n                .unwrap()\n                .stage_write(write)\n                .await\n        }\n    }\n}\n\npub(crate) struct WriteContextBlobDataReader {\n    ctx: SqlWriteContext,\n}\n\nimpl WriteContextBlobDataReader {\n    pub(crate) fn new(ctx: SqlWriteContext) -> Self {\n        Self { ctx }\n    }\n}\n\n#[async_trait]\nimpl BlobDataReader for WriteContextBlobDataReader {\n    async fn load_bytes_many(&self, hashes: &[BlobHash]) -> Result<BlobBytesBatch, LixError> {\n        self.ctx.load_bytes_many(hashes).await\n    }\n}\n\n#[derive(Clone)]\npub(crate) enum WriteAccess {\n    ReadOnly,\n    Write { ctx: SqlWriteContext },\n}\n\nimpl WriteAccess {\n    pub(crate) fn read_only() -> Self {\n        Self::ReadOnly\n    }\n\n    pub(crate) fn write(ctx: SqlWriteContext) -> Self {\n        Self::Write { ctx }\n    }\n\n    pub(crate) fn require_write(\n        &self,\n        action: &str,\n    ) -> Result<SqlWriteContext, datafusion::error::DataFusionError> {\n        match self {\n            Self::Write { ctx } => Ok(ctx.clone()),\n            Self::ReadOnly => 
Err(datafusion::error::DataFusionError::Execution(format!(\n                \"{action} requires a write transaction\"\n            ))),\n        }\n    }\n\n    pub(crate) fn is_write(&self) -> bool {\n        matches!(self, Self::Write { .. })\n    }\n}\n\npub(crate) struct WriteContextLiveStateReader {\n    ctx: SqlWriteContext,\n}\n\nimpl WriteContextLiveStateReader {\n    pub(crate) fn new(ctx: SqlWriteContext) -> Self {\n        Self { ctx }\n    }\n}\n\n#[async_trait]\nimpl LiveStateReader for WriteContextLiveStateReader {\n    async fn scan_rows(\n        &self,\n        request: &LiveStateScanRequest,\n    ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n        self.ctx.scan_live_state(request).await\n    }\n\n    async fn load_row(\n        &self,\n        request: &LiveStateRowRequest,\n    ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n        let mut rows = self\n            .ctx\n            .scan_live_state(&LiveStateScanRequest {\n                filter: LiveStateFilter {\n                    schema_keys: vec![request.schema_key.clone()],\n                    entity_ids: vec![request.entity_id.clone()],\n                    version_ids: vec![request.version_id.clone()],\n                    file_ids: vec![request.file_id.clone()],\n                    ..LiveStateFilter::default()\n                },\n                projection: Default::default(),\n                limit: Some(1),\n            })\n            .await?;\n        Ok(rows.pop())\n    }\n}\n\npub(crate) struct WriteContextVersionRefReader {\n    ctx: SqlWriteContext,\n}\n\nimpl WriteContextVersionRefReader {\n    pub(crate) fn new(ctx: SqlWriteContext) -> Self {\n        Self { ctx }\n    }\n}\n\n#[async_trait]\nimpl VersionRefReader for WriteContextVersionRefReader {\n    async fn load_head(&self, version_id: &str) -> Result<Option<VersionHead>, LixError> {\n        Ok(self\n            .ctx\n            .load_version_head(version_id)\n            .await?\n            
.map(|commit_id| VersionHead {\n                version_id: version_id.to_string(),\n                commit_id,\n            }))\n    }\n\n    async fn scan_heads(&self) -> Result<Vec<VersionHead>, LixError> {\n        Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"scan_heads is not available through sql2 write context\",\n        ))\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/directory_history_provider.rs",
    "content": "use std::any::Any;\nuse std::collections::{BTreeMap, BTreeSet};\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse datafusion::arrow::array::{ArrayRef, BooleanArray, Int64Array, StringArray};\nuse datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef};\nuse datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions};\nuse datafusion::catalog::{Session, TableProvider};\nuse datafusion::common::{DataFusionError, Result};\nuse datafusion::datasource::TableType;\nuse datafusion::execution::TaskContext;\nuse datafusion::logical_expr::{Expr, TableProviderFilterPushDown};\nuse datafusion::physical_expr::EquivalenceProperties;\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};\nuse datafusion::physical_plan::stream::RecordBatchStreamAdapter;\nuse datafusion::physical_plan::{\n    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,\n};\nuse futures_util::stream;\nuse serde::Deserialize;\nuse tokio::sync::Mutex;\n\nuse crate::commit_graph::CommitGraphReader;\nuse crate::serialize_row_metadata;\nuse crate::LixError;\n\nuse super::history_projection::{tombstone_identity_column_value, HistoryIdentityProjection};\nuse super::history_route::{\n    history_descriptor_event_matches, load_history_entries, parse_history_filter,\n    HistoryColumnStyle, HistoryEntry, HistoryRoute, HistoryViewDescriptor, HISTORY_COL_CHANGE_ID,\n    HISTORY_COL_COMMIT_CREATED_AT, HISTORY_COL_DEPTH, HISTORY_COL_ENTITY_ID, HISTORY_COL_FILE_ID,\n    HISTORY_COL_METADATA, HISTORY_COL_OBSERVED_COMMIT_ID, HISTORY_COL_SCHEMA_KEY,\n    HISTORY_COL_SNAPSHOT_CONTENT, HISTORY_COL_START_COMMIT_ID,\n};\nuse super::result_metadata::json_field;\nuse super::SqlCommitStoreQuerySource;\nuse crate::commit_store::MaterializedChange;\n\nconst DIRECTORY_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_directory_descriptor\";\n\npub(crate) async fn register_lix_directory_history_provider(\n    session: 
&datafusion::prelude::SessionContext,\n    commit_graph: Box<dyn CommitGraphReader>,\n    query_source: SqlCommitStoreQuerySource,\n) -> Result<(), LixError> {\n    session\n        .register_table(\n            \"lix_directory_history\",\n            Arc::new(LixDirectoryHistoryProvider::new(\n                Arc::new(Mutex::new(commit_graph)),\n                query_source,\n            )),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    Ok(())\n}\n\nstruct LixDirectoryHistoryProvider {\n    schema: SchemaRef,\n    commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n    query_source: SqlCommitStoreQuerySource,\n}\n\nimpl std::fmt::Debug for LixDirectoryHistoryProvider {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixDirectoryHistoryProvider\").finish()\n    }\n}\n\nimpl LixDirectoryHistoryProvider {\n    fn new(\n        commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n        query_source: SqlCommitStoreQuerySource,\n    ) -> Self {\n        Self {\n            schema: lix_directory_history_schema(),\n            commit_graph,\n            query_source,\n        }\n    }\n}\n\n#[async_trait]\nimpl TableProvider for LixDirectoryHistoryProvider {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n\n    fn table_type(&self) -> TableType {\n        TableType::View\n    }\n\n    fn supports_filters_pushdown(\n        &self,\n        filters: &[&Expr],\n    ) -> Result<Vec<TableProviderFilterPushDown>> {\n        Ok(filters\n            .iter()\n            .map(|filter| {\n                if parse_history_filter(filter, HistoryColumnStyle::Prefixed).is_some() {\n                    TableProviderFilterPushDown::Exact\n                } else {\n                    TableProviderFilterPushDown::Unsupported\n                }\n            })\n            .collect())\n    }\n\n    async fn scan(\n        
&self,\n        _state: &dyn Session,\n        projection: Option<&Vec<usize>>,\n        filters: &[Expr],\n        limit: Option<usize>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        Ok(Arc::new(LixDirectoryHistoryScanExec::new(\n            Arc::clone(&self.commit_graph),\n            self.query_source.clone(),\n            projected_schema(&self.schema, projection)?,\n            HistoryRoute::from_filters(filters, HistoryColumnStyle::Prefixed),\n            limit,\n        )))\n    }\n}\n\nstruct LixDirectoryHistoryScanExec {\n    commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n    query_source: SqlCommitStoreQuerySource,\n    schema: SchemaRef,\n    route: HistoryRoute,\n    limit: Option<usize>,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixDirectoryHistoryScanExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixDirectoryHistoryScanExec\")\n            .field(\"route\", &self.route)\n            .field(\"limit\", &self.limit)\n            .finish()\n    }\n}\n\nimpl LixDirectoryHistoryScanExec {\n    fn new(\n        commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n        query_source: SqlCommitStoreQuerySource,\n        schema: SchemaRef,\n        route: HistoryRoute,\n        limit: Option<usize>,\n    ) -> Self {\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Incremental,\n            Boundedness::Bounded,\n        );\n        Self {\n            commit_graph,\n            query_source,\n            schema,\n            route,\n            limit,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixDirectoryHistoryScanExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            
DisplayFormatType::Default | DisplayFormatType::Verbose => write!(\n                f,\n                \"LixDirectoryHistoryScanExec(route={:?}, limit={:?})\",\n                self.route, self.limit\n            ),\n            DisplayFormatType::TreeRender => write!(f, \"LixDirectoryHistoryScanExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixDirectoryHistoryScanExec {\n    fn name(&self) -> &str {\n        \"LixDirectoryHistoryScanExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixDirectoryHistoryScanExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixDirectoryHistoryScanExec only exposes one partition, got {partition}\"\n            )));\n        }\n\n        let commit_graph = Arc::clone(&self.commit_graph);\n        let query_source = self.query_source.clone();\n        let schema = Arc::clone(&self.schema);\n        let stream_schema = Arc::clone(&schema);\n        let route = self.route.clone();\n        let limit = self.limit;\n        let fut = async move {\n            let mut rows = load_directory_history_rows(commit_graph, query_source, &route)\n                .await\n                .map_err(lix_error_to_datafusion_error)?;\n            if let Some(limit) = limit {\n                
rows.truncate(limit);\n            }\n            directory_history_record_batch(&stream_schema, &rows)\n                .map_err(lix_error_to_datafusion_error)\n        };\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            schema,\n            stream::once(fut),\n        )))\n    }\n}\n\n#[derive(Debug, Clone)]\nstruct DirectoryHistoryRecord {\n    id: String,\n    parent_id: Option<String>,\n    name: Option<String>,\n    hidden: Option<bool>,\n    entry: HistoryEntry,\n}\n\n#[derive(Debug, Clone)]\nstruct DirectoryHistoryOutputRow {\n    entity_id: String,\n    id: String,\n    path: Option<String>,\n    parent_id: Option<String>,\n    name: Option<String>,\n    hidden: Option<bool>,\n    descriptor_change: MaterializedChange,\n    event: DirectoryHistoryEvent,\n}\n\n#[derive(Debug, Clone)]\nstruct DirectoryHistoryEvent {\n    directory_id: String,\n    start_commit_id: String,\n    depth: u32,\n    change: MaterializedChange,\n    observed_commit_id: String,\n    commit_created_at: String,\n}\n\n#[derive(Debug, Deserialize)]\nstruct DirectoryDescriptorSnapshot {\n    id: String,\n    parent_id: Option<String>,\n    name: String,\n    hidden: Option<bool>,\n}\n\nasync fn load_directory_history_rows(\n    commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n    query_source: SqlCommitStoreQuerySource,\n    route: &HistoryRoute,\n) -> Result<Vec<DirectoryHistoryOutputRow>, LixError> {\n    let event_route = route.traversal_only();\n    let event_entries = load_history_entries(\n        HistoryViewDescriptor {\n            view_name: \"lix_directory_history\",\n            start_commit_column: HISTORY_COL_START_COMMIT_ID,\n        },\n        Arc::clone(&commit_graph),\n        query_source.json_reader.clone(),\n        &event_route,\n        vec![DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string()],\n    )\n    .await?;\n    let context_route = route.starts_only();\n    let context_entries = load_history_entries(\n        HistoryViewDescriptor {\n        
    view_name: \"lix_directory_history\",\n            start_commit_column: HISTORY_COL_START_COMMIT_ID,\n        },\n        commit_graph,\n        query_source.json_reader,\n        &context_route,\n        vec![DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string()],\n    )\n    .await?;\n    let event_descriptors = parse_directory_history_records(&event_entries)?;\n    let descriptors = parse_directory_history_records(&context_entries)?;\n    let mut output = Vec::new();\n\n    for descriptor in &event_descriptors {\n        let event = directory_history_event_from_entry(&descriptor.id, &descriptor.entry);\n        let Some(visible_descriptor) = nearest_directory_descriptor(&descriptors, &event) else {\n            continue;\n        };\n        let path = if visible_descriptor.name.is_some() {\n            resolve_directory_history_path(\n                &visible_descriptor.id,\n                &event.start_commit_id,\n                event.depth,\n                &descriptors,\n                &mut BTreeMap::new(),\n                &mut BTreeSet::new(),\n            )\n        } else {\n            None\n        };\n        let id = tombstone_identity_column_value(\n            \"id\",\n            &visible_descriptor.id,\n            HistoryIdentityProjection::SingleColumn { column: \"id\" },\n        )?\n        .and_then(|value| value.as_str().map(ToOwned::to_owned))\n        .unwrap_or_else(|| visible_descriptor.id.clone());\n        output.push(DirectoryHistoryOutputRow {\n            entity_id: visible_descriptor.id.clone(),\n            id,\n            path,\n            parent_id: visible_descriptor.parent_id.clone(),\n            name: visible_descriptor.name.clone(),\n            hidden: visible_descriptor.hidden,\n            descriptor_change: visible_descriptor.entry.change.clone(),\n            event,\n        });\n    }\n    output.retain(|row| {\n        let entity_id = entity_id_json_array(&row.entity_id).ok();\n        route.matches_surface_row(\n     
       DIRECTORY_DESCRIPTOR_SCHEMA_KEY,\n            entity_id.as_deref().unwrap_or(&row.entity_id),\n            None,\n            row.event.depth,\n        )\n    });\n\n    output.sort_by(|left, right| {\n        left.entity_id\n            .cmp(&right.entity_id)\n            .then(left.event.start_commit_id.cmp(&right.event.start_commit_id))\n            .then(left.event.depth.cmp(&right.event.depth))\n            .then(\n                left.event\n                    .observed_commit_id\n                    .cmp(&right.event.observed_commit_id),\n            )\n            .then(left.event.change.id.cmp(&right.event.change.id))\n    });\n    Ok(output)\n}\n\nfn parse_directory_history_records(\n    entries: &[HistoryEntry],\n) -> Result<Vec<DirectoryHistoryRecord>, LixError> {\n    entries\n        .iter()\n        .filter(|entry| entry.change.schema_key == DIRECTORY_DESCRIPTOR_SCHEMA_KEY)\n        .map(|entry| {\n            let Some(snapshot_content) = entry.change.snapshot_content.as_deref() else {\n                return Ok(DirectoryHistoryRecord {\n                    id: entry.change.entity_id.as_single_string_owned()?,\n                    parent_id: None,\n                    name: None,\n                    hidden: None,\n                    entry: entry.clone(),\n                });\n            };\n            let snapshot: DirectoryDescriptorSnapshot = serde_json::from_str(snapshot_content)\n                .map_err(|error| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        format!(\"invalid lix_directory_descriptor history snapshot JSON: {error}\"),\n                    )\n                })?;\n            Ok(DirectoryHistoryRecord {\n                id: snapshot.id,\n                parent_id: snapshot.parent_id,\n                name: Some(snapshot.name),\n                hidden: Some(snapshot.hidden.unwrap_or(false)),\n                entry: entry.clone(),\n            })\n        
})\n        .collect()\n}\n\nfn directory_history_event_from_entry(\n    directory_id: &str,\n    entry: &HistoryEntry,\n) -> DirectoryHistoryEvent {\n    DirectoryHistoryEvent {\n        directory_id: directory_id.to_string(),\n        start_commit_id: entry.start_commit_id.clone(),\n        depth: entry.depth,\n        change: entry.change.clone(),\n        observed_commit_id: entry.observed_commit_id.clone(),\n        commit_created_at: entry.commit_created_at.clone(),\n    }\n}\n\nfn nearest_directory_descriptor<'a>(\n    descriptors: &'a [DirectoryHistoryRecord],\n    event: &DirectoryHistoryEvent,\n) -> Option<&'a DirectoryHistoryRecord> {\n    descriptors\n        .iter()\n        .filter(|descriptor| {\n            let exact_descriptor_event =\n                history_descriptor_event_matches(&descriptor.entry, event.depth, &event.change.id);\n            (exact_descriptor_event || descriptor.name.is_some())\n                && descriptor.id == event.directory_id\n                && descriptor.entry.start_commit_id == event.start_commit_id\n                && descriptor.entry.depth >= event.depth\n        })\n        .min_by(|left, right| {\n            left.entry\n                .depth\n                .cmp(&right.entry.depth)\n                .then(left.entry.change.id.cmp(&right.entry.change.id))\n        })\n}\n\nfn resolve_directory_history_path(\n    directory_id: &str,\n    start_commit_id: &str,\n    target_depth: u32,\n    directories: &[DirectoryHistoryRecord],\n    cache: &mut BTreeMap<String, Option<String>>,\n    visiting: &mut BTreeSet<String>,\n) -> Option<String> {\n    if let Some(path) = cache.get(directory_id) {\n        return path.clone();\n    }\n    if !visiting.insert(directory_id.to_string()) {\n        cache.insert(directory_id.to_string(), None);\n        return None;\n    }\n    let directory = directories\n        .iter()\n        .filter(|directory| {\n            directory.name.is_some()\n                && directory.id == 
directory_id\n                && directory.entry.start_commit_id == start_commit_id\n                && directory.entry.depth >= target_depth\n        })\n        .min_by(|left, right| {\n            left.entry\n                .depth\n                .cmp(&right.entry.depth)\n                .then(left.entry.change.id.cmp(&right.entry.change.id))\n        })?;\n    let name = directory.name.as_ref()?;\n    let path = match directory.parent_id.as_deref() {\n        Some(parent_id) => {\n            let parent_path = resolve_directory_history_path(\n                parent_id,\n                start_commit_id,\n                target_depth,\n                directories,\n                cache,\n                visiting,\n            )?;\n            format!(\"{parent_path}{name}/\")\n        }\n        None => format!(\"/{name}/\"),\n    };\n    visiting.remove(directory_id);\n    cache.insert(directory_id.to_string(), Some(path.clone()));\n    Some(path)\n}\n\nfn directory_history_record_batch(\n    schema: &SchemaRef,\n    rows: &[DirectoryHistoryOutputRow],\n) -> Result<RecordBatch, LixError> {\n    let columns = schema\n        .fields()\n        .iter()\n        .map(|field| directory_history_column_array(field.name(), rows))\n        .collect::<Result<Vec<_>, _>>()?;\n    let options = RecordBatchOptions::new().with_row_count(Some(rows.len()));\n    RecordBatch::try_new_with_options(Arc::clone(schema), columns, &options).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"sql2 failed to build lix_directory_history record batch: {error}\"),\n        )\n    })\n}\n\nfn directory_history_column_array(\n    column_name: &str,\n    rows: &[DirectoryHistoryOutputRow],\n) -> Result<ArrayRef, LixError> {\n    Ok(match column_name {\n        \"id\" => string_array(rows.iter().map(|row| Some(row.id.as_str()))),\n        \"path\" => string_array(rows.iter().map(|row| row.path.as_deref())),\n        \"parent_id\" => 
string_array(rows.iter().map(|row| row.parent_id.as_deref())),\n        \"name\" => string_array(rows.iter().map(|row| row.name.as_deref())),\n        \"hidden\" => Arc::new(BooleanArray::from(\n            rows.iter().map(|row| row.hidden).collect::<Vec<_>>(),\n        )) as ArrayRef,\n        HISTORY_COL_ENTITY_ID => Arc::new(StringArray::from(\n            rows.iter()\n                .map(|row| entity_id_json_array(&row.entity_id).map(Some))\n                .collect::<std::result::Result<Vec<_>, _>>()?,\n        )) as ArrayRef,\n        HISTORY_COL_SCHEMA_KEY => {\n            string_array(rows.iter().map(|_| Some(DIRECTORY_DESCRIPTOR_SCHEMA_KEY)))\n        }\n        HISTORY_COL_FILE_ID => string_array(rows.iter().map(|_| None)),\n        HISTORY_COL_CHANGE_ID => {\n            string_array(rows.iter().map(|row| Some(row.event.change.id.as_str())))\n        }\n        HISTORY_COL_SNAPSHOT_CONTENT => string_array(\n            rows.iter()\n                .map(|row| row.descriptor_change.snapshot_content.as_deref()),\n        ),\n        HISTORY_COL_METADATA => Arc::new(StringArray::from(\n            rows.iter()\n                .map(|row| {\n                    row.descriptor_change\n                        .metadata\n                        .as_ref()\n                        .map(serialize_row_metadata)\n                })\n                .collect::<Vec<_>>(),\n        )),\n        HISTORY_COL_OBSERVED_COMMIT_ID => string_array(\n            rows.iter()\n                .map(|row| Some(row.event.observed_commit_id.as_str())),\n        ),\n        HISTORY_COL_COMMIT_CREATED_AT => string_array(\n            rows.iter()\n                .map(|row| Some(row.event.commit_created_at.as_str())),\n        ),\n        HISTORY_COL_START_COMMIT_ID => string_array(\n            rows.iter()\n                .map(|row| Some(row.event.start_commit_id.as_str())),\n        ),\n        HISTORY_COL_DEPTH => Arc::new(Int64Array::from(\n            rows.iter()\n                
.map(|row| i64::from(row.event.depth))\n                .collect::<Vec<_>>(),\n        )) as ArrayRef,\n        other => {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\n                \"sql2 lix_directory_history provider does not support projected column '{other}'\"\n            ),\n            ))\n        }\n    })\n}\n\nfn lix_directory_history_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![\n        Field::new(\"id\", DataType::Utf8, false),\n        Field::new(\"path\", DataType::Utf8, true),\n        Field::new(\"parent_id\", DataType::Utf8, true),\n        Field::new(\"name\", DataType::Utf8, true),\n        Field::new(\"hidden\", DataType::Boolean, true),\n        json_field(HISTORY_COL_ENTITY_ID, false),\n        Field::new(HISTORY_COL_SCHEMA_KEY, DataType::Utf8, false),\n        Field::new(HISTORY_COL_FILE_ID, DataType::Utf8, true),\n        json_field(HISTORY_COL_SNAPSHOT_CONTENT, true),\n        Field::new(HISTORY_COL_CHANGE_ID, DataType::Utf8, false),\n        json_field(HISTORY_COL_METADATA, true),\n        Field::new(HISTORY_COL_OBSERVED_COMMIT_ID, DataType::Utf8, false),\n        Field::new(HISTORY_COL_COMMIT_CREATED_AT, DataType::Utf8, false),\n        Field::new(HISTORY_COL_START_COMMIT_ID, DataType::Utf8, false),\n        Field::new(HISTORY_COL_DEPTH, DataType::Int64, false),\n    ]))\n}\n\nfn projected_schema(base_schema: &SchemaRef, projection: Option<&Vec<usize>>) -> Result<SchemaRef> {\n    let Some(projection) = projection else {\n        return Ok(Arc::clone(base_schema));\n    };\n    Ok(Arc::new(base_schema.project(projection)?))\n}\n\nfn string_array<'a>(values: impl Iterator<Item = Option<&'a str>>) -> ArrayRef {\n    Arc::new(StringArray::from(values.collect::<Vec<_>>())) as ArrayRef\n}\n\nfn datafusion_error_to_lix_error(error: DataFusionError) -> LixError {\n    super::error::datafusion_error_to_lix_error(error)\n}\n\nfn entity_id_json_array(entity_id: &str) -> 
Result<String, LixError> {\n    serde_json::to_string(&[entity_id]).map_err(|error| {\n        LixError::unknown(format!(\n            \"failed to encode history entity id as JSON: {error}\"\n        ))\n    })\n}\n\nfn lix_error_to_datafusion_error(error: LixError) -> DataFusionError {\n    super::error::lix_error_to_datafusion_error(error)\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/directory_provider.rs",
    "content": "use std::any::Any;\nuse std::collections::{BTreeMap, BTreeSet};\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse datafusion::arrow::array::{\n    ArrayRef, BooleanArray, RecordBatchOptions, StringArray, UInt64Array,\n};\nuse datafusion::arrow::compute::{and, filter_record_batch};\nuse datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef};\nuse datafusion::arrow::record_batch::RecordBatch;\nuse datafusion::catalog::{Session, TableProvider};\nuse datafusion::common::{not_impl_err, DFSchema, DataFusionError, Result, ScalarValue};\nuse datafusion::datasource::TableType;\nuse datafusion::execution::TaskContext;\nuse datafusion::logical_expr::dml::InsertOp;\nuse datafusion::logical_expr::{Expr, TableProviderFilterPushDown};\nuse datafusion::physical_expr::{create_physical_expr, EquivalenceProperties, PhysicalExpr};\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};\nuse datafusion::physical_plan::stream::RecordBatchStreamAdapter;\nuse datafusion::physical_plan::{\n    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,\n};\nuse datafusion::prelude::SessionContext;\nuse futures_util::{stream, TryStreamExt};\nuse serde::Deserialize;\n\nuse crate::functions::FunctionProviderHandle;\nuse crate::live_state::MaterializedLiveStateRow;\nuse crate::live_state::{\n    LiveStateFilter, LiveStateProjection, LiveStateReader, LiveStateScanRequest,\n};\nuse crate::sql2::dml::{InsertExec, InsertSink};\nuse crate::sql2::filesystem_predicates::{\n    canonicalize_filesystem_path_filters, FilesystemPathKind,\n};\nuse crate::sql2::predicate_typecheck::validate_json_predicate_filters;\nuse crate::sql2::version_scope::{\n    explicit_version_ids_from_dml_filters, resolve_provider_version_ids,\n    resolve_write_version_scope, VersionBinding,\n};\nuse crate::sql2::write_normalization::{InsertCell, SqlCell, UpdateAssignmentValues};\nuse crate::transaction::types::{\n    
LogicalPrimaryKey, TransactionJson, TransactionWriteOperation, TransactionWriteOrigin,\n    TransactionWriteRow,\n};\nuse crate::version::VersionRefReader;\nuse crate::{parse_row_metadata_value, serialize_row_metadata, LixError};\n\nuse super::filesystem_planner::{\n    directory_descriptor_write_row, directory_path_resolvers_from_state_rows,\n    filesystem_storage_scope_key, plan_recursive_directory_delete, DirectoryDescriptorWriteIntent,\n    DirectoryPathResolver, FilesystemDeletePlan, FilesystemRowContext,\n};\nuse super::filesystem_visibility::VisibleFilesystem;\nuse super::result_metadata::json_field;\nuse crate::sql2::{\n    SqlWriteContext, WriteAccess, WriteContextLiveStateReader, WriteContextVersionRefReader,\n};\nuse crate::transaction::types::{TransactionWrite, TransactionWriteMode};\n\nconst DIRECTORY_SCHEMA_KEY: &str = \"lix_directory_descriptor\";\nconst FILE_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_file_descriptor\";\n\npub(crate) async fn register_lix_directory_providers(\n    session: &SessionContext,\n    active_version_id: &str,\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n    functions: FunctionProviderHandle,\n) -> Result<(), LixError> {\n    session\n        .register_table(\n            \"lix_directory_by_version\",\n            Arc::new(LixDirectoryProvider::by_version(\n                Arc::clone(&live_state),\n                Arc::clone(&version_ref),\n                functions.clone(),\n            )),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    session\n        .register_table(\n            \"lix_directory\",\n            Arc::new(LixDirectoryProvider::active_version(\n                active_version_id,\n                live_state,\n                version_ref,\n                functions,\n            )),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    Ok(())\n}\n\npub(crate) async fn register_lix_directory_write_providers(\n    session: &SessionContext,\n 
   write_ctx: SqlWriteContext,\n) -> Result<(), LixError> {\n    session\n        .register_table(\n            \"lix_directory_by_version\",\n            Arc::new(LixDirectoryProvider::by_version_with_write(\n                write_ctx.clone(),\n            )),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    session\n        .register_table(\n            \"lix_directory\",\n            Arc::new(LixDirectoryProvider::active_version_with_write(write_ctx)),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    Ok(())\n}\n\npub(crate) struct LixDirectoryProvider {\n    schema: SchemaRef,\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n    write_access: WriteAccess,\n    functions: FunctionProviderHandle,\n    version_binding: VersionBinding,\n}\n\nimpl std::fmt::Debug for LixDirectoryProvider {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixDirectoryProvider\").finish()\n    }\n}\n\nimpl LixDirectoryProvider {\n    fn active_version(\n        active_version_id: impl Into<String>,\n        live_state: Arc<dyn LiveStateReader>,\n        version_ref: Arc<dyn VersionRefReader>,\n        functions: FunctionProviderHandle,\n    ) -> Self {\n        Self {\n            schema: lix_directory_schema(),\n            live_state,\n            version_ref,\n            write_access: WriteAccess::read_only(),\n            functions,\n            version_binding: VersionBinding::active(active_version_id),\n        }\n    }\n\n    fn active_version_with_write(write_ctx: SqlWriteContext) -> Self {\n        let active_version_id = write_ctx.active_version_id();\n        let functions = write_ctx.functions();\n        let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone()));\n        let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone()));\n        Self {\n            schema: lix_directory_schema(),\n            
live_state,\n            version_ref,\n            write_access: WriteAccess::write(write_ctx),\n            functions,\n            version_binding: VersionBinding::active(active_version_id),\n        }\n    }\n\n    fn by_version(\n        live_state: Arc<dyn LiveStateReader>,\n        version_ref: Arc<dyn VersionRefReader>,\n        functions: FunctionProviderHandle,\n    ) -> Self {\n        Self {\n            schema: lix_directory_by_version_schema(),\n            live_state,\n            version_ref,\n            write_access: WriteAccess::read_only(),\n            functions,\n            version_binding: VersionBinding::explicit(),\n        }\n    }\n\n    fn by_version_with_write(write_ctx: SqlWriteContext) -> Self {\n        let functions = write_ctx.functions();\n        let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone()));\n        let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone()));\n        Self {\n            schema: lix_directory_by_version_schema(),\n            live_state,\n            version_ref,\n            write_access: WriteAccess::write(write_ctx),\n            functions,\n            version_binding: VersionBinding::explicit(),\n        }\n    }\n}\n\n#[async_trait]\nimpl TableProvider for LixDirectoryProvider {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n\n    fn table_type(&self) -> TableType {\n        TableType::Base\n    }\n\n    fn supports_filters_pushdown(\n        &self,\n        filters: &[&Expr],\n    ) -> Result<Vec<TableProviderFilterPushDown>> {\n        Ok(filters\n            .iter()\n            .map(|_| TableProviderFilterPushDown::Exact)\n            .collect())\n    }\n\n    async fn scan(\n        &self,\n        _state: &dyn Session,\n        projection: Option<&Vec<usize>>,\n        filters: &[Expr],\n        limit: Option<usize>,\n    ) -> Result<Arc<dyn ExecutionPlan>> 
{\n        let projected_schema = projected_schema(&self.schema, projection)?;\n        let scan_limit = if filters.is_empty() { limit } else { None };\n        let mut request = lix_directory_scan_request(\n            self.version_binding.active_version_id(),\n            Some(projected_schema.as_ref()),\n            scan_limit,\n        );\n        if self.write_access.is_write() && matches!(self.version_binding, VersionBinding::Explicit)\n        {\n            request.filter.version_ids = explicit_version_ids_from_dml_filters(filters);\n            if request.filter.version_ids.is_empty() {\n                return Err(DataFusionError::Plan(\n                    \"DELETE FROM lix_directory_by_version requires an explicit lixcol_version_id predicate\"\n                        .to_string(),\n                ));\n            }\n        }\n        request.filter.version_ids = resolve_provider_version_ids(\n            self.version_ref.as_ref(),\n            &self.version_binding,\n            request.filter.version_ids,\n        )\n        .await\n        .map_err(lix_error_to_datafusion_error)?;\n        let filters = canonicalize_filesystem_path_filters(filters, FilesystemPathKind::Directory)?;\n        let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?;\n        validate_json_predicate_filters(self.schema.as_ref(), &filters)?;\n        let physical_filters = filters\n            .iter()\n            .map(|expr| create_physical_expr(expr, &df_schema, _state.execution_props()))\n            .collect::<Result<Vec<_>>>()?;\n        Ok(Arc::new(LixDirectoryScanExec::new(\n            Arc::clone(&self.live_state),\n            Arc::clone(&self.schema),\n            projected_schema,\n            projection.cloned(),\n            request,\n            physical_filters,\n            limit,\n        )))\n    }\n\n    async fn insert_into(\n        &self,\n        _state: &dyn Session,\n        input: Arc<dyn ExecutionPlan>,\n        insert_op: InsertOp,\n    ) 
-> Result<Arc<dyn ExecutionPlan>> {\n        if insert_op != InsertOp::Append {\n            return not_impl_err!(\"{insert_op} not implemented for lix_directory yet\");\n        }\n\n        let write_ctx = self\n            .write_access\n            .require_write(\"INSERT into lix_directory\")?;\n\n        let sink = LixDirectoryInsertSink::new(\n            input.schema(),\n            write_ctx.clone(),\n            self.functions.clone(),\n            self.version_binding.clone(),\n        );\n        Ok(Arc::new(InsertExec::new(input, Arc::new(sink))))\n    }\n\n    async fn delete_from(\n        &self,\n        state: &dyn Session,\n        filters: Vec<Expr>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        let write_ctx = self\n            .write_access\n            .require_write(\"DELETE FROM lix_directory\")?;\n\n        let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?;\n        let filters =\n            canonicalize_filesystem_path_filters(&filters, FilesystemPathKind::Directory)?;\n        validate_json_predicate_filters(self.schema.as_ref(), &filters)?;\n        let physical_filters = filters\n            .iter()\n            .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props()))\n            .collect::<Result<Vec<_>>>()?;\n        let mut request =\n            lix_directory_scan_request(self.version_binding.active_version_id(), None, None);\n        if matches!(self.version_binding, VersionBinding::Explicit) {\n            request.filter.version_ids = explicit_version_ids_from_dml_filters(&filters);\n            if request.filter.version_ids.is_empty() {\n                return Err(DataFusionError::Plan(\n                    \"DELETE FROM lix_directory_by_version requires an explicit lixcol_version_id predicate\"\n                        .to_string(),\n                ));\n            }\n        }\n\n        Ok(Arc::new(LixDirectoryDeleteExec::new(\n            write_ctx.clone(),\n            
Arc::clone(&self.schema),\n            self.version_binding.clone(),\n            request,\n            physical_filters,\n        )))\n    }\n\n    async fn update(\n        &self,\n        state: &dyn Session,\n        assignments: Vec<(String, Expr)>,\n        filters: Vec<Expr>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        let write_ctx = self.write_access.require_write(\"UPDATE lix_directory\")?;\n\n        validate_lix_directory_update_assignments(&self.schema, &assignments)?;\n\n        let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?;\n        let physical_assignments = assignments\n            .iter()\n            .map(|(column_name, expr)| {\n                Ok((\n                    column_name.clone(),\n                    create_physical_expr(expr, &df_schema, state.execution_props())?,\n                ))\n            })\n            .collect::<Result<Vec<_>>>()?;\n        let filters =\n            canonicalize_filesystem_path_filters(&filters, FilesystemPathKind::Directory)?;\n        validate_json_predicate_filters(self.schema.as_ref(), &filters)?;\n        let physical_filters = filters\n            .iter()\n            .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props()))\n            .collect::<Result<Vec<_>>>()?;\n        let request =\n            lix_directory_scan_request(self.version_binding.active_version_id(), None, None);\n\n        Ok(Arc::new(LixDirectoryUpdateExec::new(\n            write_ctx.clone(),\n            Arc::clone(&self.schema),\n            self.version_binding.clone(),\n            request,\n            physical_assignments,\n            physical_filters,\n        )))\n    }\n}\n\nstruct LixDirectoryInsertSink {\n    write_ctx: SqlWriteContext,\n    functions: FunctionProviderHandle,\n    version_binding: VersionBinding,\n    surface_name: &'static str,\n}\n\nimpl std::fmt::Debug for LixDirectoryInsertSink {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result 
{\n        f.debug_struct(\"LixDirectoryInsertSink\").finish()\n    }\n}\n\nimpl LixDirectoryInsertSink {\n    fn new(\n        _schema: SchemaRef,\n        write_ctx: SqlWriteContext,\n        functions: FunctionProviderHandle,\n        version_binding: VersionBinding,\n    ) -> Self {\n        let surface_name = lix_directory_surface_name(&version_binding);\n        Self {\n            write_ctx,\n            functions,\n            version_binding,\n            surface_name,\n        }\n    }\n}\n\nimpl DisplayAs for LixDirectoryInsertSink {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"LixDirectoryInsertSink\")\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixDirectoryInsertSink\"),\n        }\n    }\n}\n\n#[async_trait]\nimpl InsertSink for LixDirectoryInsertSink {\n    async fn write_batches(\n        &self,\n        batches: Vec<RecordBatch>,\n        _context: &Arc<TaskContext>,\n    ) -> Result<u64> {\n        let mut path_resolvers = None;\n        let mut rows = Vec::new();\n        let mut count = 0_u64;\n        for batch in batches {\n            if path_resolvers.is_none() {\n                path_resolvers = Some(\n                    directory_path_resolvers_from_live_state(\n                        Arc::new(WriteContextLiveStateReader::new(self.write_ctx.clone())),\n                        self.version_binding.active_version_id(),\n                    )\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?,\n                );\n            }\n            count = count\n                .checked_add(u64::try_from(batch.num_rows()).map_err(|_| {\n                    DataFusionError::Execution(\"lix_directory INSERT row count overflow\".into())\n                })?)\n                .ok_or_else(|| {\n                   
 DataFusionError::Execution(\"lix_directory INSERT row count overflow\".into())\n                })?;\n            if record_batch_has_non_null_column(&batch, \"path\")? {\n                rows.extend(lix_directory_write_rows_from_batch_with_path_resolvers(\n                    &batch,\n                    self.version_binding.active_version_id(),\n                    self.surface_name,\n                    path_resolvers\n                        .as_mut()\n                        .expect(\"path resolver should be initialized\"),\n                    &mut || self.functions.call_uuid_v7(),\n                )?);\n            } else {\n                rows.extend(\n                    lix_directory_write_rows_from_batch_with_options_and_path_resolvers(\n                        &batch,\n                        self.version_binding.active_version_id(),\n                        self.surface_name,\n                        true,\n                        path_resolvers.as_mut(),\n                        None,\n                    )?,\n                );\n            }\n        }\n\n        self.write_ctx\n            .stage_write(TransactionWrite::Rows {\n                mode: TransactionWriteMode::Insert,\n                rows,\n            })\n            .await\n            .map_err(lix_error_to_datafusion_error)?;\n\n        Ok(count)\n    }\n}\n\nfn lix_directory_surface_name(version_binding: &VersionBinding) -> &'static str {\n    match version_binding {\n        VersionBinding::Active { .. 
} => \"lix_directory\",\n        VersionBinding::Explicit => \"lix_directory_by_version\",\n    }\n}\n\n#[allow(dead_code)]\nstruct LixDirectoryDeleteExec {\n    write_ctx: SqlWriteContext,\n    table_schema: SchemaRef,\n    version_binding: VersionBinding,\n    request: LiveStateScanRequest,\n    filters: Vec<Arc<dyn PhysicalExpr>>,\n    result_schema: SchemaRef,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixDirectoryDeleteExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixDirectoryDeleteExec\").finish()\n    }\n}\n\nimpl LixDirectoryDeleteExec {\n    fn new(\n        write_ctx: SqlWriteContext,\n        table_schema: SchemaRef,\n        version_binding: VersionBinding,\n        request: LiveStateScanRequest,\n        filters: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Self {\n        let result_schema = dml_count_schema();\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&result_schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Final,\n            Boundedness::Bounded,\n        );\n        Self {\n            write_ctx,\n            table_schema,\n            version_binding,\n            request,\n            filters,\n            result_schema,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixDirectoryDeleteExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"LixDirectoryDeleteExec(filters={})\", self.filters.len())\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixDirectoryDeleteExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixDirectoryDeleteExec {\n    fn name(&self) -> &str {\n        \"LixDirectoryDeleteExec\"\n    }\n\n    fn as_any(&self) -> 
&dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixDirectoryDeleteExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixDirectoryDeleteExec only exposes one partition, got {partition}\"\n            )));\n        }\n        let write_ctx = self.write_ctx.clone();\n        let table_schema = Arc::clone(&self.table_schema);\n        let version_binding = self.version_binding.clone();\n        let request = self.request.clone();\n        let filters = self.filters.clone();\n        let result_schema = Arc::clone(&self.result_schema);\n        let stream_schema = Arc::clone(&result_schema);\n\n        let stream = stream::once(async move {\n            let rows = write_ctx\n                .scan_live_state(&request)\n                .await\n                .map_err(lix_error_to_datafusion_error)?;\n            let source_batch = lix_directory_record_batch(&table_schema, rows)\n                .map_err(lix_error_to_datafusion_error)?;\n            let matched_batch = filter_lix_directory_batch(source_batch, &filters)?;\n            let version_ids = directory_version_ids_from_batch(\n                &matched_batch,\n                version_binding.active_version_id(),\n            )?;\n            let mut visible_filesystems = BTreeMap::new();\n            for version_id in 
version_ids {\n                visible_filesystems.insert(\n                    version_id.clone(),\n                    VisibleFilesystem::load(\n                        Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())),\n                        &version_id,\n                    )\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?,\n                );\n            }\n            let (write_rows, count) = lix_directory_recursive_delete_rows_from_batch(\n                &matched_batch,\n                version_binding.active_version_id(),\n                &visible_filesystems,\n            )?;\n\n            if count > 0 {\n                write_ctx\n                    .stage_write(TransactionWrite::Rows {\n                        mode: TransactionWriteMode::Replace,\n                        rows: write_rows,\n                    })\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?;\n            }\n\n            Ok::<_, DataFusionError>(stream::iter(vec![Ok::<RecordBatch, DataFusionError>(\n                dml_count_batch(Arc::clone(&stream_schema), count)?,\n            )]))\n        })\n        .try_flatten();\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            result_schema,\n            stream,\n        )))\n    }\n}\n\n#[allow(dead_code)]\nstruct LixDirectoryUpdateExec {\n    write_ctx: SqlWriteContext,\n    table_schema: SchemaRef,\n    version_binding: VersionBinding,\n    request: LiveStateScanRequest,\n    assignments: Vec<(String, Arc<dyn PhysicalExpr>)>,\n    filters: Vec<Arc<dyn PhysicalExpr>>,\n    result_schema: SchemaRef,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixDirectoryUpdateExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixDirectoryUpdateExec\").finish()\n    }\n}\n\nimpl LixDirectoryUpdateExec {\n    fn new(\n        write_ctx: SqlWriteContext,\n        
table_schema: SchemaRef,\n        version_binding: VersionBinding,\n        request: LiveStateScanRequest,\n        assignments: Vec<(String, Arc<dyn PhysicalExpr>)>,\n        filters: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Self {\n        let result_schema = dml_count_schema();\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&result_schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Final,\n            Boundedness::Bounded,\n        );\n        Self {\n            write_ctx,\n            table_schema,\n            version_binding,\n            request,\n            assignments,\n            filters,\n            result_schema,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixDirectoryUpdateExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(\n                    f,\n                    \"LixDirectoryUpdateExec(assignments={}, filters={})\",\n                    self.assignments.len(),\n                    self.filters.len()\n                )\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixDirectoryUpdateExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixDirectoryUpdateExec {\n    fn name(&self) -> &str {\n        \"LixDirectoryUpdateExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                
\"LixDirectoryUpdateExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixDirectoryUpdateExec only exposes one partition, got {partition}\"\n            )));\n        }\n        let write_ctx = self.write_ctx.clone();\n        let table_schema = Arc::clone(&self.table_schema);\n        let version_binding = self.version_binding.clone();\n        let request = self.request.clone();\n        let assignments = self.assignments.clone();\n        let filters = self.filters.clone();\n        let result_schema = Arc::clone(&self.result_schema);\n        let stream_schema = Arc::clone(&result_schema);\n\n        let stream = stream::once(async move {\n            let rows = write_ctx\n                .scan_live_state(&request)\n                .await\n                .map_err(lix_error_to_datafusion_error)?;\n            let source_batch = lix_directory_record_batch(&table_schema, rows)\n                .map_err(lix_error_to_datafusion_error)?;\n            let matched_batch = filter_lix_directory_batch(source_batch, &filters)?;\n            let mut path_resolvers = directory_path_resolvers_from_live_state(\n                Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())),\n                version_binding.active_version_id(),\n            )\n            .await\n            .map_err(lix_error_to_datafusion_error)?;\n            let write_rows = lix_directory_update_write_rows_from_batch(\n                &matched_batch,\n                &assignments,\n                version_binding.active_version_id(),\n                &mut path_resolvers,\n            )?;\n            let count = u64::try_from(write_rows.len()).map_err(|_| {\n                
DataFusionError::Execution(\"lix_directory UPDATE row count overflow\".into())\n            })?;\n\n            if count > 0 {\n                write_ctx\n                    .stage_write(TransactionWrite::Rows {\n                        mode: TransactionWriteMode::Replace,\n                        rows: write_rows,\n                    })\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?;\n            }\n\n            Ok::<_, DataFusionError>(stream::iter(vec![Ok::<RecordBatch, DataFusionError>(\n                dml_count_batch(Arc::clone(&stream_schema), count)?,\n            )]))\n        })\n        .try_flatten();\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            result_schema,\n            stream,\n        )))\n    }\n}\n\nstruct LixDirectoryScanExec {\n    live_state: Arc<dyn LiveStateReader>,\n    batch_schema: SchemaRef,\n    output_schema: SchemaRef,\n    projection: Option<Vec<usize>>,\n    request: LiveStateScanRequest,\n    filters: Vec<Arc<dyn PhysicalExpr>>,\n    limit: Option<usize>,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixDirectoryScanExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixDirectoryScanExec\").finish()\n    }\n}\n\nimpl LixDirectoryScanExec {\n    fn new(\n        live_state: Arc<dyn LiveStateReader>,\n        batch_schema: SchemaRef,\n        output_schema: SchemaRef,\n        projection: Option<Vec<usize>>,\n        request: LiveStateScanRequest,\n        filters: Vec<Arc<dyn PhysicalExpr>>,\n        limit: Option<usize>,\n    ) -> Self {\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&output_schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Incremental,\n            Boundedness::Bounded,\n        );\n        Self {\n            live_state,\n            batch_schema,\n            output_schema,\n       
     projection,\n            request,\n            filters,\n            limit,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixDirectoryScanExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"LixDirectoryScanExec(limit={:?})\", self.limit)\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixDirectoryScanExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixDirectoryScanExec {\n    fn name(&self) -> &str {\n        \"LixDirectoryScanExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixDirectoryScanExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixDirectoryScanExec only supports partition 0, got {partition}\"\n            )));\n        }\n\n        let live_state = Arc::clone(&self.live_state);\n        let request = self.request.clone();\n        let filters = self.filters.clone();\n        let limit = self.limit;\n        let output_schema = Arc::clone(&self.output_schema);\n        let batch_schema = Arc::clone(&self.batch_schema);\n        let projection = 
self.projection.clone();\n        let fut = async move {\n            let rows = live_state.scan_rows(&request).await.map_err(|error| {\n                DataFusionError::Execution(format!(\"sql2 lix_directory scan failed: {error}\"))\n            })?;\n            let batch = lix_directory_record_batch(&batch_schema, rows).map_err(|error| {\n                DataFusionError::Execution(format!(\n                    \"sql2 lix_directory batch build failed: {error}\"\n                ))\n            })?;\n            let filtered = filter_lix_directory_batch(batch, &filters)?;\n            let projected = match projection {\n                Some(indices) => filtered.project(&indices).map_err(DataFusionError::from),\n                None => Ok(filtered),\n            }?;\n            match limit {\n                Some(limit) => Ok(projected.slice(0, limit.min(projected.num_rows()))),\n                None => Ok(projected),\n            }\n        };\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            output_schema,\n            stream::once(fut).map_ok(|batch| batch),\n        )))\n    }\n}\n\n#[derive(Debug, Clone)]\nstruct DirectoryDescriptorRecord {\n    id: String,\n    parent_id: Option<String>,\n    name: String,\n    hidden: bool,\n    live: MaterializedLiveStateRow,\n}\n\n#[derive(Debug, Deserialize)]\nstruct DirectoryDescriptorSnapshot {\n    id: String,\n    parent_id: Option<String>,\n    name: String,\n    hidden: Option<bool>,\n}\n\n#[cfg(test)]\nfn lix_directory_write_rows_from_batch(\n    batch: &RecordBatch,\n    version_binding: Option<&str>,\n) -> Result<Vec<TransactionWriteRow>> {\n    lix_directory_write_rows_from_batch_with_options(batch, version_binding, \"lix_directory\", true)\n}\n\nfn lix_directory_write_rows_from_batch_with_path_resolvers(\n    batch: &RecordBatch,\n    version_binding: Option<&str>,\n    surface_name: &str,\n    path_resolvers: &mut BTreeMap<String, DirectoryPathResolver>,\n    generate_directory_id: &mut dyn 
FnMut() -> String,\n) -> Result<Vec<TransactionWriteRow>> {\n    lix_directory_write_rows_from_batch_with_options_and_path_resolvers(\n        batch,\n        version_binding,\n        surface_name,\n        true,\n        Some(path_resolvers),\n        Some(generate_directory_id),\n    )\n}\n\nfn lix_directory_update_write_rows_from_batch(\n    batch: &RecordBatch,\n    assignments: &[(String, Arc<dyn PhysicalExpr>)],\n    version_binding: Option<&str>,\n    path_resolvers: &mut BTreeMap<String, DirectoryPathResolver>,\n) -> Result<Vec<TransactionWriteRow>> {\n    let assignment_values = UpdateAssignmentValues::evaluate(batch, assignments)?;\n    let mut rows = Vec::new();\n    for row_index in 0..batch.num_rows() {\n        let id = optional_string_value(batch, row_index, \"id\")?;\n        let context = directory_row_context_from_update(\n            batch,\n            &assignment_values,\n            row_index,\n            version_binding,\n        )?;\n        let parent_id =\n            update_optional_string_value(batch, &assignment_values, row_index, \"parent_id\")?;\n        let name = update_required_string_value(batch, &assignment_values, row_index, \"name\")?;\n        if let Some(directory_id) = id.as_ref() {\n            let resolver = path_resolvers\n                .entry(directory_path_resolver_key(&context))\n                .or_insert_with(DirectoryPathResolver::default);\n            resolver\n                .reserve_directory(parent_id.clone(), name.clone(), directory_id.clone())\n                .map_err(lix_error_to_datafusion_error)?;\n        }\n        rows.push(directory_descriptor_write_row(\n            DirectoryDescriptorWriteIntent {\n                id,\n                parent_id,\n                name,\n                hidden: update_optional_bool_value(batch, &assignment_values, row_index, \"hidden\")?,\n                context,\n            },\n        ));\n    }\n    Ok(rows)\n}\n\nfn directory_version_ids_from_batch(\n    
batch: &RecordBatch,\n    version_binding: Option<&str>,\n) -> Result<BTreeSet<String>> {\n    let mut version_ids = BTreeSet::new();\n    for row_index in 0..batch.num_rows() {\n        version_ids.insert(\n            directory_row_context_from_batch(batch, row_index, version_binding)?.version_id,\n        );\n    }\n    Ok(version_ids)\n}\n\nfn lix_directory_recursive_delete_rows_from_batch(\n    batch: &RecordBatch,\n    version_binding: Option<&str>,\n    visible_filesystems: &BTreeMap<String, VisibleFilesystem>,\n) -> Result<(Vec<TransactionWriteRow>, u64)> {\n    let mut rows = Vec::new();\n    let mut seen = BTreeSet::new();\n    let mut count = 0u64;\n    for row_index in 0..batch.num_rows() {\n        let directory_id = required_string_value(batch, row_index, \"id\")?;\n        let context = directory_row_context_from_batch(batch, row_index, version_binding)?;\n        let visible_filesystem = visible_filesystems\n            .get(&context.version_id)\n            .ok_or_else(|| {\n                DataFusionError::Execution(format!(\n                    \"DELETE FROM lix_directory missing visible filesystem for version '{}'\",\n                    context.version_id\n                ))\n            })?;\n        append_deduped_delete_plan(\n            &mut rows,\n            &mut seen,\n            plan_recursive_directory_delete(&directory_id, visible_filesystem, context),\n            &mut count,\n        );\n    }\n    Ok((rows, count))\n}\n\nfn append_deduped_delete_plan(\n    rows: &mut Vec<TransactionWriteRow>,\n    seen: &mut BTreeSet<StateRowDedupeKey>,\n    plan: FilesystemDeletePlan,\n    count: &mut u64,\n) {\n    for row in plan.rows {\n        if seen.insert(StateRowDedupeKey::from(&row)) {\n            if is_user_visible_filesystem_delete_row(&row) {\n                *count += 1;\n            }\n            rows.push(row);\n        }\n    }\n}\n\nfn is_user_visible_filesystem_delete_row(row: &TransactionWriteRow) -> bool {\n    matches!(\n  
      row.schema_key.as_str(),\n        \"lix_directory_descriptor\" | \"lix_file_descriptor\"\n    )\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nstruct StateRowDedupeKey {\n    entity_id: String,\n    schema_key: String,\n    file_id: Option<String>,\n    version_id: String,\n    global: bool,\n    untracked: bool,\n}\n\nimpl From<&TransactionWriteRow> for StateRowDedupeKey {\n    fn from(row: &TransactionWriteRow) -> Self {\n        Self {\n            entity_id: row\n                .entity_id\n                .as_ref()\n                .expect(\"directory provider staged row should carry entity_id\")\n                .as_single_string_owned()\n                .expect(\"directory provider staged row entity identity should project\"),\n            schema_key: row.schema_key.clone(),\n            file_id: row.file_id.clone(),\n            version_id: row.version_id.clone(),\n            global: row.global,\n            untracked: row.untracked,\n        }\n    }\n}\n\n#[cfg(test)]\nfn lix_directory_write_rows_from_batch_with_options(\n    batch: &RecordBatch,\n    version_binding: Option<&str>,\n    surface_name: &str,\n    reject_read_only_fields: bool,\n) -> Result<Vec<TransactionWriteRow>> {\n    lix_directory_write_rows_from_batch_with_options_and_path_resolvers(\n        batch,\n        version_binding,\n        surface_name,\n        reject_read_only_fields,\n        None,\n        None,\n    )\n}\n\nfn lix_directory_write_rows_from_batch_with_options_and_path_resolvers(\n    batch: &RecordBatch,\n    version_binding: Option<&str>,\n    surface_name: &str,\n    reject_read_only_fields: bool,\n    mut path_resolvers: Option<&mut BTreeMap<String, DirectoryPathResolver>>,\n    mut generate_directory_id: Option<&mut dyn FnMut() -> String>,\n) -> Result<Vec<TransactionWriteRow>> {\n    let mut rows = Vec::new();\n    for row_index in 0..batch.num_rows() {\n        if reject_read_only_fields {\n            
reject_read_only_lix_directory_insert_field(batch, row_index, \"lixcol_entity_id\")?;\n            reject_read_only_lix_directory_insert_field(batch, row_index, \"lixcol_schema_key\")?;\n            reject_read_only_lix_directory_insert_field(batch, row_index, \"lixcol_change_id\")?;\n            reject_read_only_lix_directory_insert_field(batch, row_index, \"lixcol_created_at\")?;\n            reject_read_only_lix_directory_insert_field(batch, row_index, \"lixcol_updated_at\")?;\n            reject_read_only_lix_directory_insert_field(batch, row_index, \"lixcol_commit_id\")?;\n        }\n\n        let path = optional_string_value(batch, row_index, \"path\")?;\n        let id = optional_string_value(batch, row_index, \"id\")?;\n        let hidden = optional_bool_value(batch, row_index, \"hidden\")?;\n        let context = directory_row_context_from_batch(batch, row_index, version_binding)?;\n\n        if let Some(path) = path.filter(|_| reject_read_only_fields) {\n            reject_read_only_lix_directory_insert_field(batch, row_index, \"parent_id\")?;\n            reject_read_only_lix_directory_insert_field(batch, row_index, \"name\")?;\n\n            let Some(path_resolvers) = path_resolvers.as_deref_mut() else {\n                return Err(DataFusionError::Execution(\n                    \"INSERT into lix_directory with path requires directory path resolver\"\n                        .to_string(),\n                ));\n            };\n            let resolver = path_resolvers\n                .entry(directory_path_resolver_key(&context))\n                .or_insert_with(DirectoryPathResolver::default);\n            let Some(generate_directory_id) = generate_directory_id.as_deref_mut() else {\n                return Err(DataFusionError::Execution(\n                    \"INSERT into lix_directory with path requires directory id generator\"\n                        .to_string(),\n                ));\n            };\n            let directory_id = 
id.unwrap_or_else(|| generate_directory_id());\n            let mut planned_rows = resolver\n                .create_directory_path_with_leaf_id(\n                    &path,\n                    Some(directory_id.clone()),\n                    context,\n                    hidden.unwrap_or(false),\n                    generate_directory_id,\n                )\n                .map_err(lix_error_to_datafusion_error)?;\n            attach_lix_directory_insert_origin(&mut planned_rows, surface_name, &directory_id);\n            rows.extend(planned_rows);\n            continue;\n        }\n\n        let parent_id = optional_string_value(batch, row_index, \"parent_id\")?;\n        let name = required_string_value(batch, row_index, \"name\")?;\n        if let Some(path_resolvers) = path_resolvers.as_deref_mut() {\n            if let Some(directory_id) = id.as_ref() {\n                let resolver = path_resolvers\n                    .entry(directory_path_resolver_key(&context))\n                    .or_insert_with(DirectoryPathResolver::default);\n                resolver\n                    .reserve_directory(parent_id.clone(), name.clone(), directory_id.clone())\n                    .map_err(lix_error_to_datafusion_error)?;\n            }\n        }\n        let mut row = directory_descriptor_write_row(DirectoryDescriptorWriteIntent {\n            id: id.clone(),\n            parent_id,\n            name,\n            hidden,\n            context,\n        });\n        if let Some(directory_id) = id.as_ref() {\n            row.origin = Some(lix_directory_insert_origin(surface_name, directory_id));\n        }\n        rows.push(row);\n    }\n    Ok(rows)\n}\n\nfn attach_lix_directory_insert_origin(\n    rows: &mut [TransactionWriteRow],\n    surface_name: &str,\n    directory_id: &str,\n) {\n    let origin = lix_directory_insert_origin(surface_name, directory_id);\n    for row in rows {\n        if row.schema_key != DIRECTORY_SCHEMA_KEY {\n            continue;\n      
  }\n        let Some(entity_id) = row\n            .entity_id\n            .as_ref()\n            .and_then(|entity_id| entity_id.as_single_string_owned().ok())\n        else {\n            continue;\n        };\n        if entity_id == directory_id {\n            row.origin = Some(origin.clone());\n        }\n    }\n}\n\nfn lix_directory_insert_origin(surface_name: &str, directory_id: &str) -> TransactionWriteOrigin {\n    TransactionWriteOrigin {\n        surface: surface_name.to_string(),\n        operation: TransactionWriteOperation::Insert,\n        primary_key: Some(LogicalPrimaryKey {\n            columns: vec![\"id\".to_string()],\n            values: vec![directory_id.to_string()],\n        }),\n    }\n}\n\nfn directory_row_context_from_batch(\n    batch: &RecordBatch,\n    row_index: usize,\n    version_binding: Option<&str>,\n) -> Result<FilesystemRowContext> {\n    let scope = resolve_write_version_scope(\n        optional_bool_value(batch, row_index, \"lixcol_global\")?,\n        optional_string_value(batch, row_index, \"lixcol_version_id\")?,\n        version_binding,\n        \"INSERT into lix_directory_by_version\",\n        \"lix_directory\",\n    )?;\n\n    Ok(FilesystemRowContext {\n        version_id: scope.version_id,\n        global: scope.global,\n        untracked: optional_bool_value(batch, row_index, \"lixcol_untracked\")?.unwrap_or(false),\n        file_id: optional_string_value(batch, row_index, \"lixcol_file_id\")?,\n        metadata: optional_metadata_value(batch, row_index, \"lixcol_metadata\", \"lix_directory\")?,\n    })\n}\n\nfn directory_row_context_from_update(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    version_binding: Option<&str>,\n) -> Result<FilesystemRowContext> {\n    let scope = resolve_write_version_scope(\n        optional_bool_value(batch, row_index, \"lixcol_global\")?,\n        optional_string_value(batch, row_index, \"lixcol_version_id\")?,\n        
version_binding,\n        \"UPDATE lix_directory_by_version\",\n        \"lix_directory\",\n    )?;\n\n    Ok(FilesystemRowContext {\n        version_id: scope.version_id,\n        global: scope.global,\n        untracked: optional_bool_value(batch, row_index, \"lixcol_untracked\")?.unwrap_or(false),\n        file_id: optional_string_value(batch, row_index, \"lixcol_file_id\")?,\n        metadata: update_optional_metadata_value(\n            batch,\n            assignment_values,\n            row_index,\n            \"lixcol_metadata\",\n            \"lix_directory\",\n        )?,\n    })\n}\n\nfn directory_path_resolver_key(context: &FilesystemRowContext) -> String {\n    filesystem_storage_scope_key(\n        &context.version_id,\n        context.global,\n        context.untracked,\n        context.file_id.as_deref(),\n    )\n}\n\nasync fn directory_path_resolvers_from_live_state(\n    live_state: Arc<dyn LiveStateReader>,\n    version_binding: Option<&str>,\n) -> std::result::Result<BTreeMap<String, DirectoryPathResolver>, LixError> {\n    let rows = live_state\n        .scan_rows(&LiveStateScanRequest {\n            filter: LiveStateFilter {\n                schema_keys: vec![\n                    DIRECTORY_SCHEMA_KEY.to_string(),\n                    FILE_DESCRIPTOR_SCHEMA_KEY.to_string(),\n                ],\n                version_ids: version_binding\n                    .map(|version_id| vec![version_id.to_string()])\n                    .unwrap_or_default(),\n                ..Default::default()\n            },\n            ..Default::default()\n        })\n        .await?;\n    let mut resolvers = directory_path_resolvers_from_state_rows(rows)?;\n    // Ensure the bound version has a resolver even when live state returned no rows for it.\n    if let Some(version_id) = version_binding {\n        let key = filesystem_storage_scope_key(version_id, false, false, None);\n        resolvers\n            .entry(key)\n            .or_insert_with(DirectoryPathResolver::default);\n    }\n    Ok(resolvers)\n}\n\nfn lix_directory_record_batch(\n    
schema: &SchemaRef,\n    rows: Vec<MaterializedLiveStateRow>,\n) -> Result<RecordBatch, LixError> {\n    let mut directory_rows = Vec::<DirectoryDescriptorRecord>::new();\n\n    for row in rows {\n        if row.schema_key != DIRECTORY_SCHEMA_KEY {\n            continue;\n        }\n        let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n            continue;\n        };\n        let snapshot: DirectoryDescriptorSnapshot = serde_json::from_str(snapshot_content)\n            .map_err(|error| {\n                LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\"invalid lix_directory_descriptor snapshot JSON: {error}\"),\n                )\n            })?;\n        directory_rows.push(DirectoryDescriptorRecord {\n            id: snapshot.id,\n            parent_id: snapshot.parent_id,\n            name: snapshot.name,\n            hidden: snapshot.hidden.unwrap_or(false),\n            live: row,\n        });\n    }\n\n    let directory_paths = derive_directory_paths(&directory_rows)?;\n    let mut ids = Vec::new();\n    let mut paths = Vec::new();\n    let mut parent_ids = Vec::new();\n    let mut names = Vec::new();\n    let mut hiddens = Vec::new();\n    let mut entity_ids = Vec::new();\n    let mut schema_keys = Vec::new();\n    let mut file_ids = Vec::new();\n    let mut globals = Vec::new();\n    let mut change_ids = Vec::new();\n    let mut created_ats = Vec::new();\n    let mut updated_ats = Vec::new();\n    let mut commit_ids = Vec::new();\n    let mut untracked_values = Vec::new();\n    let mut metadata_values = Vec::new();\n    let mut version_ids = Vec::new();\n\n    for directory in directory_rows {\n        ids.push(Some(directory.id.clone()));\n        paths.push(\n            directory_paths\n                .get(&(directory.live.version_id.clone(), directory.id.clone()))\n                .cloned(),\n        );\n        parent_ids.push(directory.parent_id);\n        
names.push(Some(directory.name));\n        hiddens.push(Some(directory.hidden));\n        entity_ids.push(Some(directory.live.entity_id.as_json_array_text()?));\n        schema_keys.push(Some(directory.live.schema_key));\n        file_ids.push(directory.live.file_id);\n        globals.push(Some(directory.live.global));\n        change_ids.push(directory.live.change_id);\n        created_ats.push(directory.live.created_at);\n        updated_ats.push(directory.live.updated_at);\n        commit_ids.push(directory.live.commit_id);\n        untracked_values.push(Some(directory.live.untracked));\n        metadata_values.push(directory.live.metadata.as_ref().map(serialize_row_metadata));\n        version_ids.push(Some(directory.live.version_id));\n    }\n\n    let mut columns = Vec::<ArrayRef>::with_capacity(schema.fields().len());\n    for field in schema.fields() {\n        let array: ArrayRef = match field.name().as_str() {\n            \"id\" => Arc::new(StringArray::from(ids.clone())),\n            \"path\" => Arc::new(StringArray::from(paths.clone())),\n            \"parent_id\" => Arc::new(StringArray::from(parent_ids.clone())),\n            \"name\" => Arc::new(StringArray::from(names.clone())),\n            \"hidden\" => Arc::new(BooleanArray::from(hiddens.clone())),\n            \"lixcol_entity_id\" => Arc::new(StringArray::from(entity_ids.clone())),\n            \"lixcol_schema_key\" => Arc::new(StringArray::from(schema_keys.clone())),\n            \"lixcol_file_id\" => Arc::new(StringArray::from(file_ids.clone())),\n            \"lixcol_global\" => Arc::new(BooleanArray::from(globals.clone())),\n            \"lixcol_change_id\" => Arc::new(StringArray::from(change_ids.clone())),\n            \"lixcol_created_at\" => Arc::new(StringArray::from(created_ats.clone())),\n            \"lixcol_updated_at\" => Arc::new(StringArray::from(updated_ats.clone())),\n            \"lixcol_commit_id\" => Arc::new(StringArray::from(commit_ids.clone())),\n            
\"lixcol_untracked\" => Arc::new(BooleanArray::from(untracked_values.clone())),\n            \"lixcol_metadata\" => Arc::new(StringArray::from(metadata_values.clone())),\n            \"lixcol_version_id\" => Arc::new(StringArray::from(version_ids.clone())),\n            other => {\n                return Err(LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\n                        \"sql2 lix_directory provider does not support projected column '{other}'\"\n                    ),\n                ))\n            }\n        };\n        columns.push(array);\n    }\n\n    let options = RecordBatchOptions::new().with_row_count(Some(ids.len()));\n    RecordBatch::try_new_with_options(Arc::clone(schema), columns, &options).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"sql2 failed to build lix_directory record batch: {error}\"),\n        )\n    })\n}\n\nfn derive_directory_paths(\n    rows: &[DirectoryDescriptorRecord],\n) -> std::result::Result<BTreeMap<(String, String), String>, LixError> {\n    let mut by_version = BTreeMap::<String, BTreeMap<String, &DirectoryDescriptorRecord>>::new();\n    for row in rows {\n        by_version\n            .entry(row.live.version_id.clone())\n            .or_default()\n            .insert(row.id.clone(), row);\n    }\n\n    let mut paths = BTreeMap::<(String, String), String>::new();\n    for (version_id, records) in by_version {\n        for directory_id in records.keys() {\n            derive_directory_path_for(\n                &version_id,\n                directory_id,\n                &records,\n                &mut paths,\n                &mut BTreeSet::new(),\n            )?;\n        }\n    }\n    Ok(paths)\n}\n\nfn derive_directory_path_for(\n    version_id: &str,\n    directory_id: &str,\n    records: &BTreeMap<String, &DirectoryDescriptorRecord>,\n    paths: &mut BTreeMap<(String, String), String>,\n    visiting: &mut 
BTreeSet<String>,\n) -> std::result::Result<Option<String>, LixError> {\n    if let Some(path) = paths.get(&(version_id.to_string(), directory_id.to_string())) {\n        return Ok(Some(path.clone()));\n    }\n    if !visiting.insert(directory_id.to_string()) {\n        return Err(directory_parent_cycle_error(version_id, directory_id));\n    }\n    let Some(row) = records.get(directory_id) else {\n        visiting.remove(directory_id);\n        return Ok(None);\n    };\n    let path = match row.parent_id.as_deref() {\n        Some(parent_id) => {\n            let Some(parent_path) =\n                derive_directory_path_for(version_id, parent_id, records, paths, visiting)?\n            else {\n                visiting.remove(directory_id);\n                return Ok(None);\n            };\n            format!(\"{parent_path}{}/\", row.name)\n        }\n        None => format!(\"/{}/\", row.name),\n    };\n    visiting.remove(directory_id);\n    paths.insert(\n        (version_id.to_string(), directory_id.to_string()),\n        path.clone(),\n    );\n    Ok(Some(path))\n}\n\nfn directory_parent_cycle_error(version_id: &str, directory_id: &str) -> LixError {\n    LixError::new(\n        LixError::CODE_CONSTRAINT_VIOLATION,\n        format!(\n            \"lix_directory_descriptor parent_id cycle in version '{version_id}' while resolving directory '{directory_id}'\"\n        ),\n    )\n}\n\nfn projected_schema(base_schema: &SchemaRef, projection: Option<&Vec<usize>>) -> Result<SchemaRef> {\n    let fields = match projection {\n        Some(indices) => indices\n            .iter()\n            .map(|index| base_schema.field(*index).as_ref().clone())\n            .collect::<Vec<_>>(),\n        None => base_schema\n            .fields()\n            .iter()\n            .map(|field| field.as_ref().clone())\n            .collect::<Vec<_>>(),\n    };\n    Ok(Arc::new(Schema::new(fields)))\n}\n\nfn lix_directory_scan_request(\n    version_binding: Option<&str>,\n    
projected_schema: Option<&Schema>,\n    limit: Option<usize>,\n) -> LiveStateScanRequest {\n    LiveStateScanRequest {\n        filter: LiveStateFilter {\n            schema_keys: vec![DIRECTORY_SCHEMA_KEY.to_string()],\n            version_ids: version_binding\n                .map(|version_id| vec![version_id.to_string()])\n                .unwrap_or_default(),\n            ..LiveStateFilter::default()\n        },\n        projection: lix_directory_live_state_projection(projected_schema),\n        limit,\n    }\n}\n\nfn lix_directory_live_state_projection(projected_schema: Option<&Schema>) -> LiveStateProjection {\n    let Some(schema) = projected_schema else {\n        return LiveStateProjection::default();\n    };\n    let mut columns = Vec::new();\n    let needs_snapshot = schema\n        .fields()\n        .iter()\n        .any(|field| matches!(field.name().as_str(), \"parent_id\" | \"name\" | \"hidden\"));\n    if needs_snapshot {\n        columns.push(\"snapshot_content\".to_string());\n    }\n    if schema\n        .fields()\n        .iter()\n        .any(|field| field.name() == \"lixcol_metadata\")\n    {\n        columns.push(\"metadata\".to_string());\n    }\n    LiveStateProjection { columns }\n}\n\nfn validate_lix_directory_update_assignments(\n    schema: &SchemaRef,\n    assignments: &[(String, Expr)],\n) -> Result<()> {\n    for (column_name, _) in assignments {\n        schema.field_with_name(column_name).map_err(|_| {\n            DataFusionError::Plan(format!(\n                \"UPDATE lix_directory failed: column '{column_name}' does not exist\"\n            ))\n        })?;\n        if !matches!(\n            column_name.as_str(),\n            \"parent_id\" | \"name\" | \"hidden\" | \"lixcol_metadata\"\n        ) {\n            return Err(DataFusionError::Execution(format!(\n                \"UPDATE lix_directory cannot stage read-only column '{column_name}'\"\n            )));\n        }\n    }\n    Ok(())\n}\n\nfn 
filter_lix_directory_batch(\n    batch: RecordBatch,\n    filters: &[Arc<dyn PhysicalExpr>],\n) -> Result<RecordBatch> {\n    let Some(mask) = evaluate_lix_directory_filters(&batch, filters)? else {\n        return Ok(batch);\n    };\n    Ok(filter_record_batch(&batch, &mask)?)\n}\n\nfn evaluate_lix_directory_filters(\n    batch: &RecordBatch,\n    filters: &[Arc<dyn PhysicalExpr>],\n) -> Result<Option<BooleanArray>> {\n    if filters.is_empty() {\n        return Ok(None);\n    }\n\n    let mut combined_mask: Option<BooleanArray> = None;\n    for filter in filters {\n        let result = filter.evaluate(batch)?;\n        let array = result.into_array(batch.num_rows())?;\n        let bool_array = array\n            .as_any()\n            .downcast_ref::<BooleanArray>()\n            .ok_or_else(|| {\n                DataFusionError::Execution(\"lix_directory filter was not boolean\".to_string())\n            })?;\n        let normalized = bool_array\n            .iter()\n            .map(|value| Some(value == Some(true)))\n            .collect::<BooleanArray>();\n        combined_mask = Some(match combined_mask {\n            Some(existing) => and(&existing, &normalized)?,\n            None => normalized,\n        });\n    }\n    Ok(combined_mask)\n}\n\nfn dml_count_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![Field::new(\n        \"count\",\n        DataType::UInt64,\n        false,\n    )]))\n}\n\nfn dml_count_batch(schema: SchemaRef, count: u64) -> Result<RecordBatch> {\n    RecordBatch::try_new(\n        schema,\n        vec![Arc::new(UInt64Array::from(vec![count])) as ArrayRef],\n    )\n    .map_err(DataFusionError::from)\n}\n\nfn record_batch_has_non_null_column(batch: &RecordBatch, column_name: &str) -> Result<bool> {\n    for row_index in 0..batch.num_rows() {\n        if optional_scalar_value(batch, row_index, column_name)?\n            .is_some_and(|value| !value.is_null())\n        {\n            return Ok(true);\n        }\n    }\n    
Ok(false)\n}\n\nfn reject_read_only_lix_directory_insert_field(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<()> {\n    if optional_scalar_value(batch, row_index, column_name)?.is_some_and(|value| !value.is_null()) {\n        return Err(DataFusionError::Execution(format!(\n            \"INSERT into lix_directory cannot stage read-only column '{column_name}'\"\n        )));\n    }\n    Ok(())\n}\n\nfn required_string_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<String> {\n    optional_string_value(batch, row_index, column_name)?.ok_or_else(|| {\n        DataFusionError::Execution(format!(\n            \"INSERT into lix_directory requires non-null text column '{column_name}'\"\n        ))\n    })\n}\n\nfn update_required_string_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n) -> Result<String> {\n    update_optional_string_value(batch, assignment_values, row_index, column_name)?.ok_or_else(\n        || {\n            DataFusionError::Execution(format!(\n                \"UPDATE lix_directory requires non-null text column '{column_name}'\"\n            ))\n        },\n    )\n}\n\nfn update_optional_string_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<String>> {\n    match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? 
{\n        InsertCell::Omitted | InsertCell::Provided(SqlCell::Null) => Ok(None),\n        InsertCell::Provided(SqlCell::Value(\n            ScalarValue::Utf8(Some(value))\n            | ScalarValue::Utf8View(Some(value))\n            | ScalarValue::LargeUtf8(Some(value)),\n        )) => Ok(Some(value)),\n        InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!(\n            \"UPDATE lix_directory expected text-compatible column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn update_optional_metadata_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n    context: &str,\n) -> Result<Option<TransactionJson>> {\n    update_optional_string_value(batch, assignment_values, row_index, column_name)?\n        .map(|value| {\n            let metadata = parse_row_metadata_value(&value, context)\n                .map_err(super::error::lix_error_to_datafusion_error)?;\n            TransactionJson::from_value(metadata, &format!(\"{context} metadata\"))\n                .map_err(super::error::lix_error_to_datafusion_error)\n        })\n        .transpose()\n}\n\nfn update_optional_bool_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<bool>> {\n    match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? 
{\n        InsertCell::Omitted | InsertCell::Provided(SqlCell::Null) => Ok(None),\n        InsertCell::Provided(SqlCell::Value(ScalarValue::Boolean(Some(value)))) => Ok(Some(value)),\n        InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!(\n            \"UPDATE lix_directory expected boolean column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn optional_string_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<String>> {\n    match optional_scalar_value(batch, row_index, column_name)? {\n        None\n        | Some(ScalarValue::Null)\n        | Some(ScalarValue::Utf8(None))\n        | Some(ScalarValue::Utf8View(None))\n        | Some(ScalarValue::LargeUtf8(None)) => Ok(None),\n        Some(ScalarValue::Utf8(Some(value)))\n        | Some(ScalarValue::Utf8View(Some(value)))\n        | Some(ScalarValue::LargeUtf8(Some(value))) => Ok(Some(value)),\n        Some(other) => Err(DataFusionError::Execution(format!(\n            \"INSERT into lix_directory expected text-compatible column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn optional_metadata_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n    context: &str,\n) -> Result<Option<TransactionJson>> {\n    optional_string_value(batch, row_index, column_name)?\n        .map(|value| {\n            let metadata = parse_row_metadata_value(&value, context)\n                .map_err(super::error::lix_error_to_datafusion_error)?;\n            TransactionJson::from_value(metadata, &format!(\"{context} metadata\"))\n                .map_err(super::error::lix_error_to_datafusion_error)\n        })\n        .transpose()\n}\n\nfn optional_bool_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<bool>> {\n    match optional_scalar_value(batch, row_index, column_name)? 
{\n        None | Some(ScalarValue::Null) | Some(ScalarValue::Boolean(None)) => Ok(None),\n        Some(ScalarValue::Boolean(Some(value))) => Ok(Some(value)),\n        Some(other) => Err(DataFusionError::Execution(format!(\n            \"INSERT into lix_directory expected boolean column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn optional_scalar_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<ScalarValue>> {\n    let schema = batch.schema();\n    let column_index = match schema.index_of(column_name) {\n        Ok(column_index) => column_index,\n        Err(_) => return Ok(None),\n    };\n    if row_index >= batch.num_rows() {\n        return Err(DataFusionError::Execution(format!(\n            \"row index {row_index} out of bounds for lix_directory batch with {} rows\",\n            batch.num_rows()\n        )));\n    }\n    ScalarValue::try_from_array(batch.column(column_index).as_ref(), row_index)\n        .map(Some)\n        .map_err(|error| {\n            DataFusionError::Execution(format!(\n                \"failed to decode lix_directory column '{column_name}' at row {row_index}: {error}\"\n            ))\n        })\n}\n\nfn lix_directory_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![\n        Field::new(\"id\", DataType::Utf8, true),\n        Field::new(\"path\", DataType::Utf8, true),\n        Field::new(\"parent_id\", DataType::Utf8, true),\n        Field::new(\"name\", DataType::Utf8, false),\n        Field::new(\"hidden\", DataType::Boolean, true),\n        json_field(\"lixcol_entity_id\", false),\n        Field::new(\"lixcol_schema_key\", DataType::Utf8, false),\n        Field::new(\"lixcol_file_id\", DataType::Utf8, true),\n        Field::new(\"lixcol_global\", DataType::Boolean, true),\n        Field::new(\"lixcol_change_id\", DataType::Utf8, true),\n        Field::new(\"lixcol_created_at\", DataType::Utf8, true),\n        Field::new(\"lixcol_updated_at\", DataType::Utf8, 
true),\n        Field::new(\"lixcol_commit_id\", DataType::Utf8, true),\n        Field::new(\"lixcol_untracked\", DataType::Boolean, true),\n        json_field(\"lixcol_metadata\", true),\n    ]))\n}\n\nfn lix_directory_by_version_schema() -> SchemaRef {\n    let mut fields = lix_directory_schema()\n        .fields()\n        .iter()\n        .map(|field| field.as_ref().clone())\n        .collect::<Vec<_>>();\n    fields.push(Field::new(\"lixcol_version_id\", DataType::Utf8, false));\n    Arc::new(Schema::new(fields))\n}\n\nfn datafusion_error_to_lix_error(error: DataFusionError) -> LixError {\n    super::error::datafusion_error_to_lix_error(error)\n}\n\nfn lix_error_to_datafusion_error(error: LixError) -> DataFusionError {\n    super::error::lix_error_to_datafusion_error(error)\n}\n\n#[cfg(test)]\nmod tests {\n    use std::collections::{BTreeMap, BTreeSet};\n    use std::sync::Arc;\n\n    use async_trait::async_trait;\n    use datafusion::arrow::array::{ArrayRef, BooleanArray, StringArray};\n    use datafusion::arrow::datatypes::{DataType, Field, Schema};\n    use datafusion::arrow::record_batch::RecordBatch;\n    use datafusion::execution::TaskContext;\n    use serde_json::json;\n\n    use crate::binary_cas::BlobDataReader;\n    use crate::functions::{\n        FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider,\n    };\n    use crate::live_state::{\n        LiveStateReader, LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow,\n    };\n    use crate::sql2::dml::InsertSink;\n    use crate::sql2::{SqlWriteContext, SqlWriteExecutionContext};\n    use crate::transaction::types::{\n        TransactionJson, TransactionWrite, TransactionWriteMode, TransactionWriteOutcome,\n        TransactionWriteRow,\n    };\n    use crate::LixError;\n\n    use super::{\n        derive_directory_path_for, directory_path_resolvers_from_state_rows,\n        lix_directory_by_version_schema, lix_directory_insert_origin, 
lix_directory_record_batch,\n        lix_directory_recursive_delete_rows_from_batch, lix_directory_write_rows_from_batch,\n        lix_directory_write_rows_from_batch_with_path_resolvers, DirectoryDescriptorRecord,\n        LixDirectoryInsertSink, VersionBinding,\n    };\n    use crate::sql2::filesystem_visibility::VisibleFilesystem;\n\n    fn test_id_generator(ids: &'static [&'static str]) -> impl FnMut() -> String {\n        let mut ids = ids.iter();\n        move || ids.next().expect(\"test id should exist\").to_string()\n    }\n\n    fn test_functions() -> FunctionProviderHandle {\n        SharedFunctionProvider::new(\n            Box::new(SystemFunctionProvider) as Box<dyn FunctionProvider + Send>\n        )\n    }\n\n    #[derive(Default)]\n    struct CapturingWriteContext {\n        rows: Vec<MaterializedLiveStateRow>,\n        writes: Vec<TransactionWrite>,\n    }\n\n    #[async_trait]\n    impl BlobDataReader for CapturingWriteContext {\n        async fn load_bytes_many(\n            &self,\n            hashes: &[crate::binary_cas::BlobHash],\n        ) -> Result<crate::binary_cas::BlobBytesBatch, LixError> {\n            Ok(crate::binary_cas::BlobBytesBatch::new(vec![\n                None;\n                hashes.len()\n            ]))\n        }\n    }\n\n    #[async_trait]\n    impl SqlWriteExecutionContext for CapturingWriteContext {\n        fn active_version_id(&self) -> &str {\n            \"version-a\"\n        }\n\n        fn functions(&self) -> FunctionProviderHandle {\n            test_functions()\n        }\n\n        fn list_visible_schemas(&self) -> Result<Vec<serde_json::Value>, LixError> {\n            Ok(Vec::new())\n        }\n\n        async fn load_bytes_many(\n            &mut self,\n            hashes: &[crate::binary_cas::BlobHash],\n        ) -> Result<crate::binary_cas::BlobBytesBatch, LixError> {\n            BlobDataReader::load_bytes_many(self, hashes).await\n        }\n\n        async fn scan_live_state(\n            &mut 
self,\n            _request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(self.rows.clone())\n        }\n\n        async fn load_version_head(\n            &mut self,\n            version_id: &str,\n        ) -> Result<Option<String>, LixError> {\n            if version_id == \"ghost-version\" {\n                return Ok(None);\n            }\n            Ok(Some(format!(\"commit-{version_id}\")))\n        }\n\n        async fn stage_write(\n            &mut self,\n            write: TransactionWrite,\n        ) -> Result<TransactionWriteOutcome, LixError> {\n            self.writes.push(write);\n            Ok(TransactionWriteOutcome { count: 0 })\n        }\n    }\n\n    #[derive(Default)]\n    #[allow(dead_code)]\n    struct RowsLiveStateReader {\n        rows: Vec<MaterializedLiveStateRow>,\n    }\n\n    #[async_trait]\n    impl LiveStateReader for RowsLiveStateReader {\n        async fn scan_rows(\n            &self,\n            _request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(self.rows.clone())\n        }\n\n        async fn load_row(\n            &self,\n            _request: &LiveStateRowRequest,\n        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n            Ok(None)\n        }\n    }\n\n    fn live_row(\n        entity_id: &str,\n        version_id: &str,\n        snapshot_content: &str,\n    ) -> MaterializedLiveStateRow {\n        live_filesystem_row(\n            entity_id,\n            super::DIRECTORY_SCHEMA_KEY,\n            None,\n            version_id,\n            snapshot_content,\n        )\n    }\n\n    fn live_filesystem_row(\n        entity_id: &str,\n        schema_key: &str,\n        file_id: Option<&str>,\n        version_id: &str,\n        snapshot_content: &str,\n    ) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: 
crate::entity_identity::EntityIdentity::single(entity_id),\n            schema_key: schema_key.to_string(),\n            file_id: file_id.map(ToOwned::to_owned),\n            snapshot_content: Some(snapshot_content.to_string()),\n            metadata: Some(json!({\"source\": \"test\"}).to_string()),\n            deleted: false,\n            version_id: version_id.to_string(),\n            change_id: Some(format!(\"change-{entity_id}\")),\n            commit_id: Some(format!(\"commit-{entity_id}\")),\n            global: false,\n            untracked: false,\n            created_at: \"2026-04-23T00:00:00Z\".to_string(),\n            updated_at: \"2026-04-23T01:00:00Z\".to_string(),\n        }\n    }\n\n    fn filesystem_rows() -> Vec<MaterializedLiveStateRow> {\n        vec![\n            live_filesystem_row(\n                \"dir-docs\",\n                \"lix_directory_descriptor\",\n                None,\n                \"version-a\",\n                r#\"{\"id\":\"dir-docs\",\"parent_id\":null,\"name\":\"docs\",\"hidden\":false}\"#,\n            ),\n            live_filesystem_row(\n                \"dir-guides\",\n                \"lix_directory_descriptor\",\n                None,\n                \"version-a\",\n                r#\"{\"id\":\"dir-guides\",\"parent_id\":\"dir-docs\",\"name\":\"guides\",\"hidden\":false}\"#,\n            ),\n            live_filesystem_row(\n                \"file-index\",\n                \"lix_file_descriptor\",\n                None,\n                \"version-a\",\n                r#\"{\"id\":\"file-index\",\"directory_id\":\"dir-docs\",\"name\":\"index.md\",\"hidden\":false}\"#,\n            ),\n            live_filesystem_row(\n                \"file-readme\",\n                \"lix_file_descriptor\",\n                None,\n                \"version-a\",\n                r#\"{\"id\":\"file-readme\",\"directory_id\":\"dir-guides\",\"name\":\"readme.md\",\"hidden\":false}\"#,\n            ),\n            
live_filesystem_row(\n                \"file-readme\",\n                \"lix_binary_blob_ref\",\n                Some(\"file-readme\"),\n                \"version-a\",\n                r#\"{\"id\":\"file-readme\",\"blob_hash\":\"abc123\",\"size_bytes\":5}\"#,\n            ),\n        ]\n    }\n\n    fn string_column(values: Vec<Option<&str>>) -> ArrayRef {\n        Arc::new(StringArray::from(values)) as ArrayRef\n    }\n\n    fn directory_insert_batch(include_version: bool, global: bool) -> RecordBatch {\n        let mut fields = vec![\n            Field::new(\"id\", DataType::Utf8, false),\n            Field::new(\"parent_id\", DataType::Utf8, true),\n            Field::new(\"name\", DataType::Utf8, false),\n            Field::new(\"hidden\", DataType::Boolean, false),\n            Field::new(\"lixcol_global\", DataType::Boolean, false),\n            Field::new(\"lixcol_metadata\", DataType::Utf8, true),\n        ];\n        let mut columns = vec![\n            string_column(vec![Some(\"dir-docs\")]),\n            string_column(vec![None]),\n            string_column(vec![Some(\"docs\")]),\n            Arc::new(BooleanArray::from(vec![false])) as ArrayRef,\n            Arc::new(BooleanArray::from(vec![global])) as ArrayRef,\n            string_column(vec![Some(\"{\\\"source\\\":\\\"directory\\\"}\")]),\n        ];\n        if include_version {\n            fields.push(Field::new(\"lixcol_version_id\", DataType::Utf8, false));\n            columns.push(string_column(vec![Some(\"version-a\")]));\n        }\n        RecordBatch::try_new(Arc::new(Schema::new(fields)), columns)\n            .expect(\"directory insert batch should build\")\n    }\n\n    fn directory_path_insert_batch(path: &str) -> RecordBatch {\n        RecordBatch::try_new(\n            Arc::new(Schema::new(vec![\n                Field::new(\"id\", DataType::Utf8, false),\n                Field::new(\"path\", DataType::Utf8, true),\n                Field::new(\"hidden\", DataType::Boolean, false),\n  
              Field::new(\"lixcol_version_id\", DataType::Utf8, false),\n            ])),\n            vec![\n                string_column(vec![Some(\"dir-nested\")]),\n                string_column(vec![Some(path)]),\n                Arc::new(BooleanArray::from(vec![false])) as ArrayRef,\n                string_column(vec![Some(\"version-a\")]),\n            ],\n        )\n        .expect(\"directory path insert batch should build\")\n    }\n\n    fn directory_delete_batch(ids: &[&str]) -> RecordBatch {\n        RecordBatch::try_new(\n            Arc::new(Schema::new(vec![\n                Field::new(\"id\", DataType::Utf8, false),\n                Field::new(\"lixcol_version_id\", DataType::Utf8, false),\n            ])),\n            vec![\n                string_column(ids.iter().copied().map(Some).collect::<Vec<_>>()),\n                string_column(vec![Some(\"version-a\"); ids.len()]),\n            ],\n        )\n        .expect(\"directory delete batch should build\")\n    }\n\n    #[test]\n    fn derives_nested_directory_paths() {\n        let root = DirectoryDescriptorRecord {\n            id: \"dir-docs\".to_string(),\n            parent_id: None,\n            name: \"docs\".to_string(),\n            hidden: false,\n            live: live_row(\n                \"dir-docs\",\n                \"version-a\",\n                \"{\\\"id\\\":\\\"dir-docs\\\",\\\"parent_id\\\":null,\\\"name\\\":\\\"docs\\\",\\\"hidden\\\":false}\",\n            ),\n        };\n        let child = DirectoryDescriptorRecord {\n            id: \"dir-guides\".to_string(),\n            parent_id: Some(\"dir-docs\".to_string()),\n            name: \"guides\".to_string(),\n            hidden: false,\n            live: live_row(\n                \"dir-guides\",\n                \"version-a\",\n                \"{\\\"id\\\":\\\"dir-guides\\\",\\\"parent_id\\\":\\\"dir-docs\\\",\\\"name\\\":\\\"guides\\\",\\\"hidden\\\":false}\",\n            ),\n        };\n        let mut records = 
BTreeMap::new();\n        records.insert(root.id.clone(), &root);\n        records.insert(child.id.clone(), &child);\n        let mut paths = BTreeMap::new();\n\n        assert_eq!(\n            derive_directory_path_for(\n                \"version-a\",\n                \"dir-guides\",\n                &records,\n                &mut paths,\n                &mut BTreeSet::new()\n            )\n            .expect(\"path derivation should succeed\"),\n            Some(\"/docs/guides/\".to_string())\n        );\n    }\n\n    #[test]\n    fn record_batch_projects_directory_columns() {\n        let rows = vec![\n            live_row(\n                \"dir-docs\",\n                \"version-a\",\n                \"{\\\"id\\\":\\\"dir-docs\\\",\\\"parent_id\\\":null,\\\"name\\\":\\\"docs\\\",\\\"hidden\\\":false}\",\n            ),\n            live_row(\n                \"dir-guides\",\n                \"version-a\",\n                \"{\\\"id\\\":\\\"dir-guides\\\",\\\"parent_id\\\":\\\"dir-docs\\\",\\\"name\\\":\\\"guides\\\",\\\"hidden\\\":true}\",\n            ),\n        ];\n\n        let batch = lix_directory_record_batch(&lix_directory_by_version_schema(), rows)\n            .expect(\"directory batch should build\");\n\n        assert_eq!(batch.num_rows(), 2);\n        assert_eq!(\n            batch\n                .column_by_name(\"path\")\n                .expect(\"path column\")\n                .as_any()\n                .downcast_ref::<StringArray>()\n                .expect(\"path is string\")\n                .value(1),\n            \"/docs/guides/\"\n        );\n        assert_eq!(\n            batch\n                .column_by_name(\"lixcol_version_id\")\n                .expect(\"version column\")\n                .as_any()\n                .downcast_ref::<StringArray>()\n                .expect(\"version is string\")\n                .value(1),\n            \"version-a\"\n        );\n    }\n\n    #[test]\n    fn 
decodes_directory_insert_into_lix_state_write_row() {\n        let rows = lix_directory_write_rows_from_batch(&directory_insert_batch(true, false), None)\n            .expect(\"directory batch should decode\");\n\n        assert_eq!(\n            rows,\n            vec![TransactionWriteRow {\n                entity_id: Some(crate::entity_identity::EntityIdentity::single(\"dir-docs\")),\n                schema_key: super::DIRECTORY_SCHEMA_KEY.to_string(),\n                file_id: None,\n                snapshot: Some(TransactionJson::from_value_for_test(\n                    json!({\"hidden\":false,\"id\":\"dir-docs\",\"name\":\"docs\",\"parent_id\":null})\n                )),\n                metadata: Some(TransactionJson::from_value_for_test(\n                    json!({\"source\": \"directory\"})\n                )),\n                origin: Some(lix_directory_insert_origin(\"lix_directory\", \"dir-docs\")),\n                created_at: None,\n                updated_at: None,\n                global: false,\n                change_id: None,\n                commit_id: None,\n                untracked: false,\n                version_id: \"version-a\".to_string(),\n            }]\n        );\n    }\n\n    #[test]\n    fn active_directory_insert_defaults_version_id() {\n        let rows = lix_directory_write_rows_from_batch(\n            &directory_insert_batch(false, false),\n            Some(\"version-active\"),\n        )\n        .expect(\"active directory batch should decode\");\n\n        assert_eq!(rows[0].version_id, \"version-active\");\n    }\n\n    #[test]\n    fn by_version_directory_insert_requires_version_id_for_non_global_rows() {\n        let error =\n            lix_directory_write_rows_from_batch(&directory_insert_batch(false, false), None)\n                .expect_err(\"by-version insert should require version id\");\n\n        assert!(\n            error.to_string().contains(\"requires lixcol_version_id\"),\n            \"unexpected error: 
{error}\"\n        );\n    }\n\n    #[test]\n    fn directory_insert_rejects_global_with_non_global_version_id() {\n        let error = lix_directory_write_rows_from_batch(&directory_insert_batch(true, true), None)\n            .expect_err(\"global directory write should reject conflicting version id\");\n\n        assert!(\n            error\n                .to_string()\n                .contains(\"cannot set lixcol_global=true with non-global lixcol_version_id\"),\n            \"unexpected error: {error}\"\n        );\n    }\n\n    #[test]\n    fn directory_path_insert_reuses_existing_parent_descriptor() {\n        let existing_rows = vec![live_row(\n            \"dir-docs\",\n            \"version-a\",\n            \"{\\\"id\\\":\\\"dir-docs\\\",\\\"parent_id\\\":null,\\\"name\\\":\\\"docs\\\",\\\"hidden\\\":false}\",\n        )];\n        let mut resolvers = directory_path_resolvers_from_state_rows(existing_rows)\n            .expect(\"existing directory rows should seed paths\");\n\n        let rows = lix_directory_write_rows_from_batch_with_path_resolvers(\n            &directory_path_insert_batch(\"/docs/nested/\"),\n            None,\n            \"lix_directory\",\n            &mut resolvers,\n            &mut test_id_generator(&[\"should-not-be-used\"]),\n        )\n        .expect(\"directory path batch should decode\");\n\n        assert_eq!(rows.len(), 1);\n        let snapshot = rows[0].snapshot.as_ref().unwrap();\n        assert_eq!(snapshot[\"id\"], \"dir-nested\");\n        assert_eq!(snapshot[\"parent_id\"], \"dir-docs\");\n        assert_eq!(snapshot[\"name\"], \"nested\");\n    }\n\n    #[test]\n    fn recursive_directory_delete_deletes_nested_dirs_files_and_blob_refs() {\n        let visible_filesystem = VisibleFilesystem::from_live_rows(filesystem_rows())\n            .expect(\"visible filesystem should build\");\n        let mut visible_filesystems = BTreeMap::new();\n        visible_filesystems.insert(\"version-a\".to_string(), 
visible_filesystem);\n\n        let (rows, count) = lix_directory_recursive_delete_rows_from_batch(\n            &directory_delete_batch(&[\"dir-docs\"]),\n            None,\n            &visible_filesystems,\n        )\n        .expect(\"recursive directory delete should plan\");\n\n        assert_eq!(count, 4);\n        assert_eq!(\n            rows.iter()\n                .map(|row| {\n                    (\n                        row.schema_key.as_str(),\n                        row.entity_id\n                            .as_ref()\n                            .expect(\"planned delete row should carry entity_id\")\n                            .as_single_string_owned()\n                            .expect(\"planned delete row should project entity_id\"),\n                    )\n                })\n                .collect::<Vec<_>>(),\n            vec![\n                (\"lix_file_descriptor\", \"file-readme\".to_string()),\n                (\"lix_binary_blob_ref\", \"file-readme\".to_string()),\n                (\"lix_directory_descriptor\", \"dir-guides\".to_string()),\n                (\"lix_file_descriptor\", \"file-index\".to_string()),\n                (\"lix_directory_descriptor\", \"dir-docs\".to_string()),\n            ]\n        );\n        assert!(rows.iter().all(|row| row.snapshot.is_none()));\n    }\n\n    #[test]\n    fn recursive_directory_delete_dedupes_overlapping_parent_and_child() {\n        let visible_filesystem = VisibleFilesystem::from_live_rows(filesystem_rows())\n            .expect(\"visible filesystem should build\");\n        let mut visible_filesystems = BTreeMap::new();\n        visible_filesystems.insert(\"version-a\".to_string(), visible_filesystem);\n\n        let (rows, count) = lix_directory_recursive_delete_rows_from_batch(\n            &directory_delete_batch(&[\"dir-docs\", \"dir-guides\"]),\n            None,\n            &visible_filesystems,\n        )\n        .expect(\"recursive directory delete should plan\");\n\n     
   assert_eq!(count, 4);\n        let identities = rows\n            .iter()\n            .map(|row| {\n                (\n                    row.schema_key.clone(),\n                    row.entity_id.clone(),\n                    row.file_id.clone(),\n                    row.version_id.clone(),\n                )\n            })\n            .collect::<std::collections::BTreeSet<_>>();\n        assert_eq!(identities.len(), rows.len());\n        assert_eq!(rows.len(), 5);\n    }\n\n    #[tokio::test]\n    async fn directory_insert_sink_stages_decoded_lix_state_rows() {\n        let mut write_context = CapturingWriteContext::default();\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let batch = directory_insert_batch(true, false);\n        let sink = LixDirectoryInsertSink::new(\n            batch.schema(),\n            write_ctx,\n            test_functions(),\n            VersionBinding::explicit(),\n        );\n        let count = sink\n            .write_batches(vec![batch], &Arc::new(TaskContext::default()))\n            .await\n            .expect(\"directory sink should stage write\");\n\n        assert_eq!(count, 1);\n        assert_eq!(\n            write_context.writes.as_slice(),\n            &[TransactionWrite::Rows {\n                mode: TransactionWriteMode::Insert,\n                rows: vec![TransactionWriteRow {\n                    entity_id: Some(crate::entity_identity::EntityIdentity::single(\"dir-docs\")),\n                    schema_key: super::DIRECTORY_SCHEMA_KEY.to_string(),\n                    file_id: None,\n                    snapshot: Some(TransactionJson::from_value_for_test(\n                        json!({\"hidden\":false,\"id\":\"dir-docs\",\"name\":\"docs\",\"parent_id\":null})\n                    )),\n                    metadata: Some(TransactionJson::from_value_for_test(\n                        json!({\"source\": \"directory\"})\n                    )),\n                    origin: 
Some(lix_directory_insert_origin(\n                        \"lix_directory_by_version\",\n                        \"dir-docs\"\n                    )),\n                    created_at: None,\n                    updated_at: None,\n                    global: false,\n                    change_id: None,\n                    commit_id: None,\n                    untracked: false,\n                    version_id: \"version-a\".to_string(),\n                }]\n            }]\n        );\n    }\n\n    #[tokio::test]\n    async fn directory_insert_sink_seeds_path_resolver_from_live_state() {\n        let mut write_context = CapturingWriteContext {\n            rows: vec![live_row(\n                \"dir-docs\",\n                \"version-a\",\n                \"{\\\"id\\\":\\\"dir-docs\\\",\\\"parent_id\\\":null,\\\"name\\\":\\\"docs\\\",\\\"hidden\\\":false}\",\n            )],\n            writes: Vec::new(),\n        };\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let batch = directory_path_insert_batch(\"/docs/nested/\");\n        let sink = LixDirectoryInsertSink::new(\n            batch.schema(),\n            write_ctx,\n            test_functions(),\n            VersionBinding::explicit(),\n        );\n        let count = sink\n            .write_batches(vec![batch], &Arc::new(TaskContext::default()))\n            .await\n            .expect(\"directory sink should stage path write\");\n\n        assert_eq!(count, 1);\n        let [TransactionWrite::Rows { rows, .. }] = write_context.writes.as_slice() else {\n            panic!(\"expected one directory staged write\");\n        };\n        assert_eq!(rows.len(), 1);\n        let snapshot = rows[0].snapshot.as_ref().unwrap();\n        assert_eq!(snapshot[\"id\"], \"dir-nested\");\n        assert_eq!(snapshot[\"parent_id\"], \"dir-docs\");\n        assert_eq!(snapshot[\"name\"], \"nested\");\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/dml.rs",
    "content": "use std::any::Any;\nuse std::fmt::Debug;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse datafusion::arrow::array::{ArrayRef, UInt64Array};\nuse datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef};\nuse datafusion::arrow::record_batch::RecordBatch;\nuse datafusion::common::{DataFusionError, Result};\nuse datafusion::execution::TaskContext;\nuse datafusion::physical_expr::EquivalenceProperties;\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};\nuse datafusion::physical_plan::stream::RecordBatchStreamAdapter;\nuse datafusion::physical_plan::{\n    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,\n};\nuse futures_util::stream;\n\nuse super::runtime;\n\n#[async_trait]\npub(crate) trait InsertSink: Debug + DisplayAs + Send + Sync {\n    async fn write_batches(\n        &self,\n        batches: Vec<RecordBatch>,\n        context: &Arc<TaskContext>,\n    ) -> Result<u64>;\n}\n\npub(crate) struct InsertExec {\n    input: Arc<dyn ExecutionPlan>,\n    sink: Arc<dyn InsertSink>,\n    result_schema: SchemaRef,\n    properties: Arc<PlanProperties>,\n}\n\nimpl InsertExec {\n    pub(crate) fn new(input: Arc<dyn ExecutionPlan>, sink: Arc<dyn InsertSink>) -> Self {\n        let result_schema = dml_count_schema();\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&result_schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Final,\n            Boundedness::Bounded,\n        );\n        Self {\n            input,\n            sink,\n            result_schema,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl Debug for InsertExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"InsertExec\").finish()\n    }\n}\n\nimpl DisplayAs for InsertExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut 
std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"InsertExec: sink=\")?;\n                self.sink.fmt_as(t, f)\n            }\n            DisplayFormatType::TreeRender => write!(f, \"InsertExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for InsertExec {\n    fn name(&self) -> &str {\n        \"InsertExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        vec![&self.input]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        mut children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if children.len() != 1 {\n            return Err(DataFusionError::Execution(format!(\n                \"InsertExec expects one input child, got {}\",\n                children.len()\n            )));\n        }\n        Ok(Arc::new(Self::new(\n            children.swap_remove(0),\n            Arc::clone(&self.sink),\n        )))\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"InsertExec only exposes one partition, got {partition}\"\n            )));\n        }\n\n        let input = Arc::clone(&self.input);\n        let sink = Arc::clone(&self.sink);\n        let stream_schema = Arc::clone(&self.result_schema);\n        let result_schema = Arc::clone(&self.result_schema);\n        let stream = stream::once(async move {\n            let batches = runtime::collect_input_plan(input, Arc::clone(&context)).await?;\n            let count = sink.write_batches(batches, &context).await?;\n            dml_count_batch(stream_schema, count)\n     
   });\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            result_schema,\n            stream,\n        )))\n    }\n}\n\nfn dml_count_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![Field::new(\n        \"count\",\n        DataType::UInt64,\n        false,\n    )]))\n}\n\nfn dml_count_batch(schema: SchemaRef, count: u64) -> Result<RecordBatch> {\n    RecordBatch::try_new(\n        schema,\n        vec![Arc::new(UInt64Array::from(vec![count])) as ArrayRef],\n    )\n    .map_err(DataFusionError::from)\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/entity_history_provider.rs",
    "content": "use std::any::Any;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse datafusion::arrow::array::{ArrayRef, BooleanArray, Float64Array, Int64Array, StringArray};\nuse datafusion::arrow::datatypes::SchemaRef;\nuse datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions};\nuse datafusion::catalog::{Session, TableProvider};\nuse datafusion::common::{DataFusionError, Result};\nuse datafusion::datasource::TableType;\nuse datafusion::execution::TaskContext;\nuse datafusion::logical_expr::{Expr, TableProviderFilterPushDown};\nuse datafusion::physical_expr::EquivalenceProperties;\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};\nuse datafusion::physical_plan::stream::RecordBatchStreamAdapter;\nuse datafusion::physical_plan::{\n    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,\n};\nuse futures_util::stream;\nuse serde_json::Value as JsonValue;\nuse tokio::sync::Mutex;\n\nuse crate::commit_graph::CommitGraphReader;\nuse crate::serialize_row_metadata;\nuse crate::LixError;\n\nuse super::entity_provider::{\n    entity_f64_value, entity_i64_value, entity_json_text_value, entity_surface_schema,\n    parse_snapshot, string_array, EntityColumnType, EntityProviderVariant, EntitySurfaceSpec,\n};\nuse super::history_projection::{tombstone_identity_column_value, HistoryIdentityProjection};\nuse super::history_route::{\n    load_history_entries, parse_history_filter, HistoryColumnStyle, HistoryRoute,\n    HistoryViewDescriptor, HISTORY_COL_START_COMMIT_ID,\n};\nuse super::SqlCommitStoreQuerySource;\nuse crate::commit_store::MaterializedChange;\n\n/// Schema-specific history surface backed directly by the commit graph.\n///\n/// The provider does not query `lix_state_history` through SQL. 
It uses the same\n/// commit graph primitive as the generic history surface, then shapes canonical\n/// changes into the typed entity columns for one registered schema.\npub(crate) struct EntityHistoryProvider {\n    spec: Arc<EntitySurfaceSpec>,\n    schema: SchemaRef,\n    commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n    query_source: SqlCommitStoreQuerySource,\n}\n\nimpl std::fmt::Debug for EntityHistoryProvider {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"EntityHistoryProvider\")\n            .field(\"schema_key\", &self.spec.schema_key)\n            .finish()\n    }\n}\n\nimpl EntityHistoryProvider {\n    pub(crate) fn new(\n        spec: Arc<EntitySurfaceSpec>,\n        commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n        query_source: SqlCommitStoreQuerySource,\n    ) -> Self {\n        Self {\n            schema: entity_surface_schema(&spec, EntityProviderVariant::History),\n            spec,\n            commit_graph,\n            query_source,\n        }\n    }\n}\n\n#[async_trait]\nimpl TableProvider for EntityHistoryProvider {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n\n    fn table_type(&self) -> TableType {\n        TableType::View\n    }\n\n    fn supports_filters_pushdown(\n        &self,\n        filters: &[&Expr],\n    ) -> Result<Vec<TableProviderFilterPushDown>> {\n        Ok(filters\n            .iter()\n            .map(|filter| {\n                if parse_history_filter(filter, HistoryColumnStyle::Prefixed).is_some() {\n                    TableProviderFilterPushDown::Exact\n                } else {\n                    TableProviderFilterPushDown::Unsupported\n                }\n            })\n            .collect())\n    }\n\n    async fn scan(\n        &self,\n        _state: &dyn Session,\n        projection: Option<&Vec<usize>>,\n        filters: &[Expr],\n        
limit: Option<usize>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        let route = HistoryRoute::from_filters(filters, HistoryColumnStyle::Prefixed);\n        let schema = projected_schema(&self.schema, projection)?;\n        Ok(Arc::new(EntityHistoryScanExec::new(\n            Arc::clone(&self.spec),\n            Arc::clone(&self.commit_graph),\n            self.query_source.clone(),\n            schema,\n            route,\n            limit,\n        )))\n    }\n}\n\nstruct EntityHistoryScanExec {\n    spec: Arc<EntitySurfaceSpec>,\n    commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n    query_source: SqlCommitStoreQuerySource,\n    schema: SchemaRef,\n    route: HistoryRoute,\n    limit: Option<usize>,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for EntityHistoryScanExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"EntityHistoryScanExec\")\n            .field(\"schema_key\", &self.spec.schema_key)\n            .field(\"route\", &self.route)\n            .field(\"limit\", &self.limit)\n            .finish()\n    }\n}\n\nimpl EntityHistoryScanExec {\n    fn new(\n        spec: Arc<EntitySurfaceSpec>,\n        commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n        query_source: SqlCommitStoreQuerySource,\n        schema: SchemaRef,\n        route: HistoryRoute,\n        limit: Option<usize>,\n    ) -> Self {\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Incremental,\n            Boundedness::Bounded,\n        );\n        Self {\n            spec,\n            commit_graph,\n            query_source,\n            schema,\n            route,\n            limit,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for EntityHistoryScanExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut 
std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => write!(\n                f,\n                \"EntityHistoryScanExec(schema_key={}, route={:?}, limit={:?})\",\n                self.spec.schema_key, self.route, self.limit\n            ),\n            DisplayFormatType::TreeRender => write!(f, \"EntityHistoryScanExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for EntityHistoryScanExec {\n    fn name(&self) -> &str {\n        \"EntityHistoryScanExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Internal(\n                \"EntityHistoryScanExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"EntityHistoryScanExec only exposes one partition, got {partition}\"\n            )));\n        }\n\n        let spec = Arc::clone(&self.spec);\n        let commit_graph = Arc::clone(&self.commit_graph);\n        let query_source = self.query_source.clone();\n        let schema = Arc::clone(&self.schema);\n        let route = self.route.clone();\n        let limit = self.limit;\n        let stream_schema = Arc::clone(&schema);\n        let fut = async move {\n            let rows = load_entity_history_rows(&spec, commit_graph, query_source, &route, limit)\n                .await\n        
        .map_err(lix_error_to_datafusion_error)?;\n            entity_history_record_batch(&stream_schema, &spec, &rows)\n        };\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            schema,\n            stream::once(fut),\n        )))\n    }\n}\n\n#[derive(Debug, Clone)]\nstruct EntityHistoryRow {\n    change: MaterializedChange,\n    observed_commit_id: String,\n    commit_created_at: String,\n    start_commit_id: String,\n    depth: u32,\n}\n\nasync fn load_entity_history_rows(\n    spec: &EntitySurfaceSpec,\n    commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n    query_source: SqlCommitStoreQuerySource,\n    route: &HistoryRoute,\n    limit: Option<usize>,\n) -> Result<Vec<EntityHistoryRow>, LixError> {\n    let history_view_name = format!(\"{}_history\", spec.schema_key);\n    let entries = load_history_entries(\n        HistoryViewDescriptor {\n            view_name: history_view_name.as_str(),\n            start_commit_column: HISTORY_COL_START_COMMIT_ID,\n        },\n        commit_graph,\n        query_source.json_reader,\n        route,\n        vec![spec.schema_key.clone()],\n    )\n    .await?;\n    let mut rows = entries\n        .into_iter()\n        .map(|entry| EntityHistoryRow {\n            change: entry.change,\n            observed_commit_id: entry.observed_commit_id,\n            commit_created_at: entry.commit_created_at,\n            start_commit_id: entry.start_commit_id,\n            depth: entry.depth,\n        })\n        .collect::<Vec<_>>();\n    if let Some(limit) = limit {\n        rows.truncate(limit);\n    }\n    Ok(rows)\n}\n\nfn entity_history_record_batch(\n    schema: &SchemaRef,\n    spec: &EntitySurfaceSpec,\n    rows: &[EntityHistoryRow],\n) -> Result<RecordBatch> {\n    let columns = schema\n        .fields()\n        .iter()\n        .map(|field| entity_history_column_array(field.name(), spec, rows))\n        .collect::<Result<Vec<_>>>()?;\n    Ok(RecordBatch::try_new_with_options(\n        
Arc::clone(schema),\n        columns,\n        &RecordBatchOptions::new().with_row_count(Some(rows.len())),\n    )?)\n}\n\nfn entity_history_column_array(\n    column_name: &str,\n    spec: &EntitySurfaceSpec,\n    rows: &[EntityHistoryRow],\n) -> Result<ArrayRef> {\n    if let Some(system_column) = column_name.strip_prefix(\"lixcol_\") {\n        return entity_history_system_column_array(system_column, rows);\n    }\n\n    let column_type = spec\n        .visible_column(column_name)\n        .ok_or_else(|| {\n            DataFusionError::Execution(format!(\n                \"sql2 entity history provider '{}' does not expose column '{}'\",\n                spec.schema_key, column_name\n            ))\n        })?\n        .column_type;\n    let projected_values = rows\n        .iter()\n        .map(|row| entity_history_column_value(row, spec, column_name))\n        .collect::<Result<Vec<_>>>()?;\n\n    Ok(match column_type {\n        EntityColumnType::String | EntityColumnType::Json => Arc::new(StringArray::from(\n            projected_values\n                .iter()\n                .map(|snapshot| entity_json_text_value(snapshot.as_ref(), column_type))\n                .collect::<Result<Vec<_>>>()?,\n        )) as ArrayRef,\n        EntityColumnType::Integer => Arc::new(Int64Array::from(\n            projected_values\n                .iter()\n                .map(|snapshot| entity_i64_value(snapshot.as_ref()))\n                .collect::<Vec<_>>(),\n        )) as ArrayRef,\n        EntityColumnType::Number => Arc::new(Float64Array::from(\n            projected_values\n                .iter()\n                .map(|snapshot| entity_f64_value(snapshot.as_ref()))\n                .collect::<Vec<_>>(),\n        )) as ArrayRef,\n        EntityColumnType::Boolean => Arc::new(BooleanArray::from(\n            projected_values\n                .iter()\n                .map(|snapshot| snapshot.as_ref().and_then(JsonValue::as_bool))\n                .collect::<Vec<_>>(),\n  
      )) as ArrayRef,\n    })\n}\n\nfn entity_history_column_value(\n    row: &EntityHistoryRow,\n    spec: &EntitySurfaceSpec,\n    column_name: &str,\n) -> Result<Option<JsonValue>> {\n    let snapshot = parse_snapshot(row.change.snapshot_content.as_deref())?;\n    if let Some(snapshot) = snapshot {\n        return Ok(snapshot.get(column_name).cloned());\n    }\n\n    let entity_id = row.change.entity_id.as_json_array_text().map_err(|error| {\n        DataFusionError::Execution(format!(\n            \"sql2 entity history provider failed to project entity id: {error}\"\n        ))\n    })?;\n    tombstone_identity_column_value(\n        column_name,\n        &entity_id,\n        HistoryIdentityProjection::PrimaryKeyPaths(&spec.primary_key_paths),\n    )\n    .map_err(|error| DataFusionError::Execution(error.to_string()))\n}\n\nfn entity_history_system_column_array(\n    column_name: &str,\n    rows: &[EntityHistoryRow],\n) -> Result<ArrayRef> {\n    Ok(match column_name {\n        \"entity_id\" => Arc::new(StringArray::from(\n            rows.iter()\n                .map(|row| {\n                    Some(\n                        row.change\n                            .entity_id\n                            .as_json_array_text()\n                            .expect(\"canonical change entity identity should project\"),\n                    )\n                })\n                .collect::<Vec<_>>(),\n        )) as ArrayRef,\n        \"schema_key\" => string_array(rows.iter().map(|row| Some(row.change.schema_key.as_str()))),\n        \"file_id\" => string_array(rows.iter().map(|row| row.change.file_id.as_deref())),\n        \"snapshot_content\" => string_array(\n            rows.iter()\n                .map(|row| row.change.snapshot_content.as_deref()),\n        ),\n        \"metadata\" => Arc::new(StringArray::from(\n            rows.iter()\n                .map(|row| row.change.metadata.as_ref().map(serialize_row_metadata))\n                
.collect::<Vec<_>>(),\n        )) as ArrayRef,\n        \"change_id\" => string_array(rows.iter().map(|row| Some(row.change.id.as_str()))),\n        \"observed_commit_id\" => {\n            string_array(rows.iter().map(|row| Some(row.observed_commit_id.as_str())))\n        }\n        \"commit_created_at\" => {\n            string_array(rows.iter().map(|row| Some(row.commit_created_at.as_str())))\n        }\n        \"start_commit_id\" => {\n            string_array(rows.iter().map(|row| Some(row.start_commit_id.as_str())))\n        }\n        \"depth\" => Arc::new(Int64Array::from(\n            rows.iter()\n                .map(|row| i64::from(row.depth))\n                .collect::<Vec<_>>(),\n        )) as ArrayRef,\n        other => {\n            return Err(DataFusionError::Execution(format!(\n                \"sql2 entity history provider does not support system column 'lixcol_{other}'\"\n            )))\n        }\n    })\n}\n\nfn projected_schema(schema: &SchemaRef, projection: Option<&Vec<usize>>) -> Result<SchemaRef> {\n    let Some(projection) = projection else {\n        return Ok(Arc::clone(schema));\n    };\n    Ok(Arc::new(schema.project(projection)?))\n}\n\nfn lix_error_to_datafusion_error(error: LixError) -> DataFusionError {\n    super::error::lix_error_to_datafusion_error(error)\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/entity_provider.rs",
    "content": "use std::any::Any;\nuse std::collections::{BTreeMap, BTreeSet};\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse datafusion::arrow::array::{\n    ArrayRef, BooleanArray, Float64Array, Int64Array, StringArray, UInt64Array,\n};\nuse datafusion::arrow::compute::{and, filter_record_batch};\nuse datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef};\nuse datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions};\nuse datafusion::catalog::{Session, TableProvider};\nuse datafusion::common::{not_impl_err, DFSchema, DataFusionError, Result, ScalarValue};\nuse datafusion::datasource::TableType;\nuse datafusion::execution::TaskContext;\nuse datafusion::logical_expr::dml::InsertOp;\nuse datafusion::logical_expr::expr::InList;\nuse datafusion::logical_expr::{BinaryExpr, Expr, Operator, TableProviderFilterPushDown};\nuse datafusion::physical_expr::{create_physical_expr, EquivalenceProperties, PhysicalExpr};\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};\nuse datafusion::physical_plan::stream::RecordBatchStreamAdapter;\nuse datafusion::physical_plan::{\n    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,\n};\nuse datafusion::prelude::SessionContext;\nuse futures_util::{stream, TryStreamExt};\nuse serde_json::Value as JsonValue;\n\nuse crate::commit_graph::CommitGraphReader;\nuse crate::entity_identity::EntityIdentity;\nuse crate::live_state::MaterializedLiveStateRow;\nuse crate::live_state::{\n    LiveStateFilter, LiveStateProjection, LiveStateReader, LiveStateScanRequest,\n};\nuse crate::sql2::dml::{InsertExec, InsertSink};\nuse crate::sql2::predicate_typecheck::validate_json_predicate_filters;\nuse crate::sql2::read_only::reject_read_only_entity_surface;\nuse crate::sql2::version_scope::{\n    explicit_version_ids_from_dml_filters, resolve_provider_version_ids,\n    resolve_write_version_scope, VersionBinding,\n};\nuse 
crate::sql2::write_normalization::{\n    InsertCell, InsertColumnIntents, SqlCell, UpdateAssignmentValues, UpdateCell,\n};\nuse crate::transaction::types::{TransactionJson, TransactionWriteRow};\nuse crate::version::VersionRefReader;\nuse crate::{parse_row_metadata_value, serialize_row_metadata, LixError};\n\nuse super::entity_history_provider::EntityHistoryProvider;\nuse super::history_route::{\n    HISTORY_COL_CHANGE_ID, HISTORY_COL_COMMIT_CREATED_AT, HISTORY_COL_DEPTH, HISTORY_COL_ENTITY_ID,\n    HISTORY_COL_FILE_ID, HISTORY_COL_METADATA, HISTORY_COL_OBSERVED_COMMIT_ID,\n    HISTORY_COL_SCHEMA_KEY, HISTORY_COL_SNAPSHOT_CONTENT, HISTORY_COL_START_COMMIT_ID,\n};\nuse super::result_metadata::{json_field, mark_json_field};\nuse crate::sql2::{\n    SqlCommitStoreQuerySource, SqlWriteContext, WriteAccess, WriteContextLiveStateReader,\n    WriteContextVersionRefReader,\n};\nuse crate::transaction::types::{TransactionWrite, TransactionWriteMode};\n\npub(crate) async fn register_entity_providers(\n    ctx: &SessionContext,\n    active_version_id: &str,\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n    commit_graph: Arc<tokio::sync::Mutex<Box<dyn CommitGraphReader>>>,\n    query_source: SqlCommitStoreQuerySource,\n    schema_definitions: &[JsonValue],\n) -> Result<(), LixError> {\n    for schema in schema_definitions {\n        let spec = match derive_entity_surface_spec_from_schema(schema) {\n            Ok(spec) => Arc::new(spec),\n            Err(_) => continue,\n        };\n\n        if !schema_exposed_as_entity_surface(&spec.schema_key) {\n            continue;\n        }\n\n        let by_version_name = format!(\"{}_by_version\", spec.schema_key);\n        ctx.register_table(\n            &by_version_name,\n            Arc::new(EntityProvider::by_version(\n                Arc::clone(&spec),\n                Arc::clone(&live_state),\n                Arc::clone(&version_ref),\n            )),\n        )\n        
.map_err(datafusion_error_to_lix_error)?;\n\n        ctx.register_table(\n            &spec.schema_key,\n            Arc::new(EntityProvider::active(\n                Arc::clone(&spec),\n                Arc::clone(&live_state),\n                Arc::clone(&version_ref),\n                active_version_id.to_string(),\n            )),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n\n        if schema_exposed_as_entity_history_surface(&spec.schema_key) {\n            let history_name = format!(\"{}_history\", spec.schema_key);\n            ctx.register_table(\n                &history_name,\n                Arc::new(EntityHistoryProvider::new(\n                    Arc::clone(&spec),\n                    Arc::clone(&commit_graph),\n                    query_source.clone(),\n                )),\n            )\n            .map_err(datafusion_error_to_lix_error)?;\n        }\n    }\n\n    Ok(())\n}\n\npub(crate) async fn register_entity_write_providers(\n    ctx: &SessionContext,\n    write_ctx: SqlWriteContext,\n    schema_definitions: &[JsonValue],\n) -> Result<(), LixError> {\n    for schema in schema_definitions {\n        let spec = match derive_entity_surface_spec_from_schema(schema) {\n            Ok(spec) => Arc::new(spec),\n            Err(_) => continue,\n        };\n\n        if !schema_exposed_as_entity_surface(&spec.schema_key) {\n            continue;\n        }\n\n        let by_version_name = format!(\"{}_by_version\", spec.schema_key);\n        ctx.register_table(\n            &by_version_name,\n            Arc::new(EntityProvider::by_version_with_write(\n                Arc::clone(&spec),\n                write_ctx.clone(),\n            )),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n\n        ctx.register_table(\n            &spec.schema_key,\n            Arc::new(EntityProvider::active_with_write(\n                Arc::clone(&spec),\n                write_ctx.clone(),\n            )),\n        )\n        
.map_err(datafusion_error_to_lix_error)?;\n    }\n\n    Ok(())\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(super) enum EntityProviderVariant {\n    Active,\n    ByVersion,\n    History,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(super) enum EntityColumnType {\n    String,\n    Json,\n    Integer,\n    Number,\n    Boolean,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(super) struct EntitySurfaceColumn {\n    pub(super) name: String,\n    pub(super) column_type: EntityColumnType,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(super) struct EntitySurfaceSpec {\n    pub(super) schema_key: String,\n    pub(super) primary_key_paths: Vec<Vec<String>>,\n    pub(super) columns: Vec<EntitySurfaceColumn>,\n}\n\nimpl EntitySurfaceSpec {\n    #[cfg(test)]\n    fn visible_column_names(&self) -> impl Iterator<Item = &str> {\n        self.columns.iter().map(|column| column.name.as_str())\n    }\n\n    pub(super) fn visible_column(&self, column_name: &str) -> Option<&EntitySurfaceColumn> {\n        self.columns\n            .iter()\n            .find(|column| column.name == column_name)\n    }\n\n    fn is_visible_column(&self, column_name: &str) -> bool {\n        self.visible_column(column_name).is_some()\n    }\n}\n\npub(crate) struct EntityProvider {\n    spec: Arc<EntitySurfaceSpec>,\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n    write_access: WriteAccess,\n    schema: SchemaRef,\n    variant: EntityProviderVariant,\n    version_binding: VersionBinding,\n}\n\nimpl std::fmt::Debug for EntityProvider {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"EntityProvider\")\n            .field(\"schema_key\", &self.spec.schema_key)\n            .field(\"variant\", &self.variant)\n            .finish()\n    }\n}\n\nimpl EntityProvider {\n    fn active(\n        spec: Arc<EntitySurfaceSpec>,\n        live_state: Arc<dyn LiveStateReader>,\n        version_ref: 
Arc<dyn VersionRefReader>,\n        active_version_id: String,\n    ) -> Self {\n        Self {\n            schema: entity_surface_schema(&spec, EntityProviderVariant::Active),\n            spec,\n            live_state,\n            version_ref,\n            write_access: WriteAccess::read_only(),\n            variant: EntityProviderVariant::Active,\n            version_binding: VersionBinding::active(active_version_id),\n        }\n    }\n\n    fn active_with_write(spec: Arc<EntitySurfaceSpec>, write_ctx: SqlWriteContext) -> Self {\n        let active_version_id = write_ctx.active_version_id();\n        let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone()));\n        let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone()));\n        Self {\n            schema: entity_surface_schema(&spec, EntityProviderVariant::Active),\n            spec,\n            live_state,\n            version_ref,\n            write_access: WriteAccess::write(write_ctx),\n            variant: EntityProviderVariant::Active,\n            version_binding: VersionBinding::active(active_version_id),\n        }\n    }\n\n    fn by_version(\n        spec: Arc<EntitySurfaceSpec>,\n        live_state: Arc<dyn LiveStateReader>,\n        version_ref: Arc<dyn VersionRefReader>,\n    ) -> Self {\n        Self {\n            schema: entity_surface_schema(&spec, EntityProviderVariant::ByVersion),\n            spec,\n            live_state,\n            version_ref,\n            write_access: WriteAccess::read_only(),\n            variant: EntityProviderVariant::ByVersion,\n            version_binding: VersionBinding::explicit(),\n        }\n    }\n\n    fn by_version_with_write(spec: Arc<EntitySurfaceSpec>, write_ctx: SqlWriteContext) -> Self {\n        let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone()));\n        let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone()));\n        Self {\n            
schema: entity_surface_schema(&spec, EntityProviderVariant::ByVersion),\n            spec,\n            live_state,\n            version_ref,\n            write_access: WriteAccess::write(write_ctx),\n            variant: EntityProviderVariant::ByVersion,\n            version_binding: VersionBinding::explicit(),\n        }\n    }\n}\n\n#[async_trait]\nimpl TableProvider for EntityProvider {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n\n    fn table_type(&self) -> TableType {\n        TableType::Base\n    }\n\n    fn supports_filters_pushdown(\n        &self,\n        filters: &[&Expr],\n    ) -> Result<Vec<TableProviderFilterPushDown>> {\n        let analyzer = EntityPrimaryKeyFilterAnalyzer::new(&self.spec);\n        Ok(filters\n            .iter()\n            .map(|filter| {\n                if ExactVersionIdFilterAnalyzer.supports(filter) || analyzer.supports(filter) {\n                    TableProviderFilterPushDown::Exact\n                } else {\n                    TableProviderFilterPushDown::Unsupported\n                }\n            })\n            .collect())\n    }\n\n    async fn scan(\n        &self,\n        _state: &dyn Session,\n        projection: Option<&Vec<usize>>,\n        filters: &[Expr],\n        limit: Option<usize>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        let projected_schema = projected_schema(&self.schema, projection)?;\n        let mut request = entity_live_state_scan_request(\n            &self.spec.schema_key,\n            self.version_binding.active_version_id(),\n            Some(projected_schema.as_ref()),\n            limit,\n        );\n        if self.write_access.is_write() && matches!(self.version_binding, VersionBinding::Explicit)\n        {\n            request.filter.version_ids = explicit_version_ids_from_dml_filters(filters);\n            if request.filter.version_ids.is_empty() {\n                return 
Err(DataFusionError::Plan(format!(\n                    \"DELETE FROM {}_by_version requires an explicit lixcol_version_id predicate\",\n                    self.spec.schema_key\n                )));\n            }\n        }\n        request.filter.version_ids = resolve_provider_version_ids(\n            self.version_ref.as_ref(),\n            &self.version_binding,\n            request.filter.version_ids,\n        )\n        .await\n        .map_err(lix_error_to_datafusion_error)?;\n        apply_exact_version_id_filter(&mut request, exact_version_ids_from_filters(filters)?);\n        apply_exact_entity_id_filters(&mut request, &self.spec, filters)?;\n\n        Ok(Arc::new(EntityScanExec::new(\n            Arc::clone(&self.spec),\n            Arc::clone(&self.live_state),\n            projected_schema,\n            request,\n        )))\n    }\n\n    async fn insert_into(\n        &self,\n        _state: &dyn Session,\n        input: Arc<dyn ExecutionPlan>,\n        insert_op: InsertOp,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if insert_op != InsertOp::Append {\n            return not_impl_err!(\"{insert_op} not implemented for entity surfaces yet\");\n        }\n        reject_read_only_entity_surface(&self.spec.schema_key, \"INSERT\")?;\n\n        let write_ctx = self.write_access.require_write(&format!(\n            \"INSERT into {} entity surface\",\n            self.spec.schema_key\n        ))?;\n\n        let insert_version_binding = match self.variant {\n            EntityProviderVariant::Active => self.version_binding.clone(),\n            EntityProviderVariant::ByVersion => VersionBinding::explicit(),\n            EntityProviderVariant::History => {\n                return not_impl_err!(\"INSERT is not implemented for entity history surfaces\");\n            }\n        };\n\n        let sink = EntityInsertSink::new(\n            Arc::clone(&self.spec),\n            input.schema(),\n            InsertColumnIntents::from_input(&input),\n         
   write_ctx.clone(),\n            insert_version_binding,\n        );\n        Ok(Arc::new(InsertExec::new(input, Arc::new(sink))))\n    }\n\n    async fn delete_from(\n        &self,\n        state: &dyn Session,\n        filters: Vec<Expr>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        reject_read_only_entity_surface(&self.spec.schema_key, \"DELETE\")?;\n\n        let write_ctx = self.write_access.require_write(&format!(\n            \"DELETE FROM {} entity surface\",\n            self.spec.schema_key\n        ))?;\n\n        let version_binding = match self.variant {\n            EntityProviderVariant::Active => self.version_binding.clone(),\n            EntityProviderVariant::ByVersion => VersionBinding::explicit(),\n            EntityProviderVariant::History => {\n                return not_impl_err!(\"DELETE is not implemented for entity history surfaces\");\n            }\n        };\n\n        let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?;\n        validate_json_predicate_filters(self.schema.as_ref(), &filters)?;\n        let physical_filters = filters\n            .iter()\n            .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props()))\n            .collect::<Result<Vec<_>>>()?;\n        let mut request = entity_live_state_scan_request(\n            &self.spec.schema_key,\n            version_binding.active_version_id(),\n            None,\n            None,\n        );\n        if matches!(version_binding, VersionBinding::Explicit) {\n            let exact_version_ids = exact_version_ids_from_filters(&filters)?;\n            if exact_version_ids.is_none() {\n                return Err(DataFusionError::Plan(format!(\n                    \"DELETE FROM {}_by_version requires an explicit lixcol_version_id predicate\",\n                    self.spec.schema_key\n                )));\n            }\n            apply_exact_version_id_filter(&mut request, exact_version_ids);\n        }\n        
apply_exact_entity_id_filters(&mut request, &self.spec, &filters)?;\n\n        Ok(Arc::new(EntityDeleteExec::new(\n            Arc::clone(&self.spec),\n            write_ctx.clone(),\n            Arc::clone(&self.schema),\n            version_binding,\n            request,\n            physical_filters,\n        )))\n    }\n\n    async fn update(\n        &self,\n        state: &dyn Session,\n        assignments: Vec<(String, Expr)>,\n        filters: Vec<Expr>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        reject_read_only_entity_surface(&self.spec.schema_key, \"UPDATE\")?;\n\n        let write_ctx = self\n            .write_access\n            .require_write(&format!(\"UPDATE {} entity surface\", self.spec.schema_key))?;\n\n        validate_entity_update_assignments(&self.spec, &self.schema, &assignments)?;\n\n        let version_binding = match self.variant {\n            EntityProviderVariant::Active => self.version_binding.clone(),\n            EntityProviderVariant::ByVersion => VersionBinding::explicit(),\n            EntityProviderVariant::History => {\n                return not_impl_err!(\"UPDATE is not implemented for entity history surfaces\");\n            }\n        };\n\n        let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?;\n        validate_json_predicate_filters(self.schema.as_ref(), &filters)?;\n        let physical_assignments = assignments\n            .iter()\n            .map(|(column_name, expr)| {\n                Ok((\n                    column_name.clone(),\n                    create_physical_expr(expr, &df_schema, state.execution_props())?,\n                ))\n            })\n            .collect::<Result<Vec<_>>>()?;\n        let physical_filters = filters\n            .iter()\n            .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props()))\n            .collect::<Result<Vec<_>>>()?;\n        let mut request = entity_live_state_scan_request(\n            &self.spec.schema_key,\n          
  version_binding.active_version_id(),\n            None,\n            None,\n        );\n        apply_exact_entity_id_filters(&mut request, &self.spec, &filters)?;\n\n        Ok(Arc::new(EntityUpdateExec::new(\n            Arc::clone(&self.spec),\n            write_ctx.clone(),\n            Arc::clone(&self.schema),\n            version_binding,\n            request,\n            physical_assignments,\n            physical_filters,\n        )))\n    }\n}\n\nfn entity_ids_from_primary_key_filters(\n    spec: &EntitySurfaceSpec,\n    filters: &[Expr],\n) -> Result<Option<Vec<EntityIdentity>>> {\n    let analyzer = EntityPrimaryKeyFilterAnalyzer::new(spec);\n    let mut entity_ids: Option<BTreeSet<EntityIdentity>> = None;\n    for filter in filters {\n        let Some(filter_ids) = analyzer.analyze(filter)? else {\n            continue;\n        };\n        entity_ids = Some(match entity_ids {\n            Some(existing_ids) => existing_ids.intersection(&filter_ids).cloned().collect(),\n            None => filter_ids,\n        });\n    }\n\n    Ok(entity_ids.map(|ids| ids.into_iter().collect()))\n}\n\nfn apply_exact_entity_id_filters(\n    request: &mut LiveStateScanRequest,\n    spec: &EntitySurfaceSpec,\n    filters: &[Expr],\n) -> Result<()> {\n    if let Some(entity_ids) = entity_ids_from_primary_key_filters(spec, filters)? {\n        if entity_ids.is_empty() {\n            request.limit = Some(0);\n        }\n        request.filter.entity_ids = entity_ids;\n    }\n    Ok(())\n}\n\nfn exact_version_ids_from_filters(filters: &[Expr]) -> Result<Option<Vec<String>>> {\n    let analyzer = ExactVersionIdFilterAnalyzer;\n    let mut version_ids: Option<BTreeSet<String>> = None;\n    for filter in filters {\n        let Some(filter_ids) = analyzer.analyze(filter)? 
else {\n            continue;\n        };\n        version_ids = Some(match version_ids {\n            Some(existing_ids) => existing_ids.intersection(&filter_ids).cloned().collect(),\n            None => filter_ids,\n        });\n    }\n    Ok(version_ids.map(|ids| ids.into_iter().collect()))\n}\n\nfn apply_exact_version_id_filter(\n    request: &mut LiveStateScanRequest,\n    version_ids: Option<Vec<String>>,\n) {\n    if let Some(version_ids) = version_ids {\n        if version_ids.is_empty() {\n            request.limit = Some(0);\n        }\n        request.filter.version_ids = version_ids;\n    }\n}\n\nstruct EntityPrimaryKeyFilterAnalyzer<'a> {\n    primary_key_columns: Vec<&'a str>,\n}\n\nstruct ExactVersionIdFilterAnalyzer;\n\nimpl ExactVersionIdFilterAnalyzer {\n    fn supports(&self, expr: &Expr) -> bool {\n        self.analyze(expr)\n            .is_ok_and(|constraint| constraint.is_some())\n    }\n\n    fn analyze(&self, expr: &Expr) -> Result<Option<BTreeSet<String>>> {\n        match expr {\n            Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => {\n                let Some(left) = self.analyze(&binary_expr.left)? else {\n                    return Ok(None);\n                };\n                let Some(right) = self.analyze(&binary_expr.right)? else {\n                    return Ok(None);\n                };\n                Ok(Some(left.intersection(&right).cloned().collect()))\n            }\n            Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::Or => {\n                let Some(mut left) = self.analyze(&binary_expr.left)? else {\n                    return Ok(None);\n                };\n                let Some(right) = self.analyze(&binary_expr.right)? 
else {\n                    return Ok(None);\n                };\n                left.extend(right);\n                Ok(Some(left))\n            }\n            Expr::BinaryExpr(binary_expr) => {\n                Ok(version_id_from_binary_filter(binary_expr).map(|value| BTreeSet::from([value])))\n            }\n            Expr::InList(in_list) => {\n                Ok(version_ids_from_in_list_filter(in_list)\n                    .map(|values| values.into_iter().collect()))\n            }\n            _ => Ok(None),\n        }\n    }\n}\n\nfn version_id_from_binary_filter(binary_expr: &BinaryExpr) -> Option<String> {\n    if binary_expr.op != Operator::Eq {\n        return None;\n    }\n\n    version_id_from_column_literal_filter(&binary_expr.left, &binary_expr.right)\n        .or_else(|| version_id_from_column_literal_filter(&binary_expr.right, &binary_expr.left))\n}\n\nfn version_ids_from_in_list_filter(in_list: &InList) -> Option<Vec<String>> {\n    if in_list.negated {\n        return None;\n    }\n    let Expr::Column(column) = in_list.expr.as_ref() else {\n        return None;\n    };\n    if column.name != \"lixcol_version_id\" {\n        return None;\n    }\n\n    let values = in_list\n        .list\n        .iter()\n        .map(string_expr_literal)\n        .collect::<Option<Vec<_>>>()?;\n    if values.is_empty() {\n        return None;\n    }\n    Some(values)\n}\n\nfn version_id_from_column_literal_filter(\n    column_expr: &Expr,\n    literal_expr: &Expr,\n) -> Option<String> {\n    let Expr::Column(column) = column_expr else {\n        return None;\n    };\n    if column.name != \"lixcol_version_id\" {\n        return None;\n    }\n    string_expr_literal(literal_expr)\n}\n\nimpl<'a> EntityPrimaryKeyFilterAnalyzer<'a> {\n    fn new(spec: &'a EntitySurfaceSpec) -> Self {\n        Self {\n            primary_key_columns: string_primary_key_columns(spec),\n        }\n    }\n\n    fn supports(&self, expr: &Expr) -> bool {\n        self.analyze(expr)\n    
        .is_ok_and(|constraint| constraint.is_some())\n    }\n\n    /// Resolves the filter to the exact entity identities it pins, or `None`\n    /// when the expression cannot be reduced to a full primary-key match.\n    fn analyze(&self, expr: &Expr) -> Result<Option<BTreeSet<EntityIdentity>>> {\n        if self.primary_key_columns.is_empty() {\n            return Ok(None);\n        }\n        let Some(constraint) = self.analyze_constraint(expr)? else {\n            return Ok(None);\n        };\n        Ok(constraint.into_entity_ids(&self.primary_key_columns))\n    }\n\n    fn analyze_constraint(&self, expr: &Expr) -> Result<Option<EntityIdentityConstraint>> {\n        match expr {\n            Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => {\n                let Some(left) = self.analyze_constraint(&binary_expr.left)? else {\n                    return Ok(None);\n                };\n                let Some(right) = self.analyze_constraint(&binary_expr.right)? else {\n                    return Ok(None);\n                };\n                Ok(Some(left.intersect(right, &self.primary_key_columns)))\n            }\n            Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::Or => {\n                let Some(left) = self.analyze_constraint(&binary_expr.left)? else {\n                    return Ok(None);\n                };\n                let Some(right) = self.analyze_constraint(&binary_expr.right)? 
else {\n                    return Ok(None);\n                };\n                let Some(left_ids) = left.into_entity_ids(&self.primary_key_columns) else {\n                    return Ok(None);\n                };\n                let Some(mut right_ids) = right.into_entity_ids(&self.primary_key_columns) else {\n                    return Ok(None);\n                };\n                right_ids.extend(left_ids);\n                Ok(Some(EntityIdentityConstraint::Full(right_ids)))\n            }\n            Expr::BinaryExpr(binary_expr) => Ok(entity_identity_constraint_from_binary_filter(\n                binary_expr,\n                &self.primary_key_columns,\n            )),\n            Expr::InList(in_list) => Ok(entity_identity_constraint_from_in_list_filter(\n                in_list,\n                &self.primary_key_columns,\n            )),\n            _ => Ok(None),\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nenum EntityIdentityConstraint {\n    Full(BTreeSet<EntityIdentity>),\n    Parts(BTreeMap<String, BTreeSet<String>>),\n}\n\nimpl EntityIdentityConstraint {\n    fn intersect(self, other: Self, primary_key_columns: &[&str]) -> Self {\n        match (self, other) {\n            (Self::Full(left), Self::Full(right)) => {\n                Self::Full(left.intersection(&right).cloned().collect())\n            }\n            (Self::Full(ids), Self::Parts(parts)) | (Self::Parts(parts), Self::Full(ids)) => {\n                Self::Full(\n                    ids.into_iter()\n                        .filter(|identity| {\n                            identity_matches_parts(identity, primary_key_columns, &parts)\n                        })\n                        .collect(),\n                )\n            }\n            (Self::Parts(mut left), Self::Parts(right)) => {\n                for (column, right_values) in right {\n                    left.entry(column)\n                        .and_modify(|left_values| {\n                          
  *left_values =\n                                left_values.intersection(&right_values).cloned().collect();\n                        })\n                        .or_insert(right_values);\n                }\n                Self::Parts(left)\n            }\n        }\n    }\n\n    fn into_entity_ids(self, primary_key_columns: &[&str]) -> Option<BTreeSet<EntityIdentity>> {\n        match self {\n            Self::Full(ids) => Some(ids),\n            Self::Parts(parts) => entity_ids_from_primary_key_parts(primary_key_columns, parts),\n        }\n    }\n}\n\nfn string_primary_key_columns(spec: &EntitySurfaceSpec) -> Vec<&str> {\n    spec.primary_key_paths\n        .iter()\n        .map(|path| {\n            let [column_name] = path.as_slice() else {\n                return None;\n            };\n            let column = spec.visible_column(column_name)?;\n            (column.column_type == EntityColumnType::String).then_some(column.name.as_str())\n        })\n        .collect::<Option<Vec<_>>>()\n        .unwrap_or_default()\n}\n\nfn entity_identity_constraint_from_binary_filter(\n    binary_expr: &BinaryExpr,\n    primary_key_columns: &[&str],\n) -> Option<EntityIdentityConstraint> {\n    if binary_expr.op != Operator::Eq {\n        return None;\n    }\n    entity_identity_constraint_from_column_literal_filter(\n        &binary_expr.left,\n        &binary_expr.right,\n        primary_key_columns,\n    )\n    .or_else(|| {\n        entity_identity_constraint_from_column_literal_filter(\n            &binary_expr.right,\n            &binary_expr.left,\n            primary_key_columns,\n        )\n    })\n}\n\nfn entity_identity_constraint_from_in_list_filter(\n    in_list: &InList,\n    primary_key_columns: &[&str],\n) -> Option<EntityIdentityConstraint> {\n    if in_list.negated {\n        return None;\n    }\n    let Expr::Column(column) = in_list.expr.as_ref() else {\n        return None;\n    };\n    let values = in_list\n        .list\n        .iter()\n        
.map(string_expr_literal)\n        .collect::<Option<Vec<_>>>()?;\n    if values.is_empty() {\n        return None;\n    }\n    match column.name.as_str() {\n        \"lixcol_entity_id\" => values\n            .into_iter()\n            .map(|value| EntityIdentity::from_json_array_text(&value).ok())\n            .collect::<Option<BTreeSet<_>>>()\n            .map(EntityIdentityConstraint::Full),\n        column_name if primary_key_columns.contains(&column_name) => {\n            Some(EntityIdentityConstraint::Parts(BTreeMap::from([(\n                column_name.to_string(),\n                values.into_iter().collect(),\n            )])))\n        }\n        _ => None,\n    }\n}\n\nfn entity_identity_constraint_from_column_literal_filter(\n    column_expr: &Expr,\n    literal_expr: &Expr,\n    primary_key_columns: &[&str],\n) -> Option<EntityIdentityConstraint> {\n    let Expr::Column(column) = column_expr else {\n        return None;\n    };\n    let value = string_expr_literal(literal_expr)?;\n    match column.name.as_str() {\n        \"lixcol_entity_id\" => EntityIdentity::from_json_array_text(&value)\n            .ok()\n            .map(|identity| EntityIdentityConstraint::Full(BTreeSet::from([identity]))),\n        column_name if primary_key_columns.contains(&column_name) => {\n            Some(EntityIdentityConstraint::Parts(BTreeMap::from([(\n                column_name.to_string(),\n                BTreeSet::from([value]),\n            )])))\n        }\n        _ => None,\n    }\n}\n\nfn entity_ids_from_primary_key_parts(\n    primary_key_columns: &[&str],\n    parts: BTreeMap<String, BTreeSet<String>>,\n) -> Option<BTreeSet<EntityIdentity>> {\n    if primary_key_columns\n        .iter()\n        .any(|column| !parts.contains_key(*column))\n    {\n        return None;\n    }\n\n    let mut identities = BTreeSet::from([Vec::<String>::new()]);\n    for column in primary_key_columns {\n        let values = parts.get(*column)?;\n        identities = identities\n 
           .into_iter()\n            .flat_map(|prefix| {\n                values.iter().map(move |value| {\n                    let mut parts = prefix.clone();\n                    parts.push(value.clone());\n                    parts\n                })\n            })\n            .collect();\n    }\n    Some(\n        identities\n            .into_iter()\n            .map(|parts| EntityIdentity { parts })\n            .collect(),\n    )\n}\n\nfn identity_matches_parts(\n    identity: &EntityIdentity,\n    primary_key_columns: &[&str],\n    parts: &BTreeMap<String, BTreeSet<String>>,\n) -> bool {\n    let identity_parts = identity.parts.as_slice();\n    primary_key_columns\n        .iter()\n        .zip(identity_parts.iter())\n        .all(|(column, value)| {\n            parts\n                .get(*column)\n                .is_none_or(|values| values.contains(value))\n        })\n}\n\nfn string_expr_literal(expr: &Expr) -> Option<String> {\n    let Expr::Literal(literal, _) = expr else {\n        return None;\n    };\n    match literal {\n        ScalarValue::Utf8(Some(value))\n        | ScalarValue::Utf8View(Some(value))\n        | ScalarValue::LargeUtf8(Some(value)) => Some(value.clone()),\n        _ => None,\n    }\n}\n\nstruct EntityInsertSink {\n    spec: Arc<EntitySurfaceSpec>,\n    insert_column_intents: InsertColumnIntents,\n    write_ctx: SqlWriteContext,\n    version_binding: VersionBinding,\n}\n\nimpl std::fmt::Debug for EntityInsertSink {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"EntityInsertSink\")\n            .field(\"schema_key\", &self.spec.schema_key)\n            .finish()\n    }\n}\n\nimpl EntityInsertSink {\n    fn new(\n        spec: Arc<EntitySurfaceSpec>,\n        _schema: SchemaRef,\n        insert_column_intents: InsertColumnIntents,\n        write_ctx: SqlWriteContext,\n        version_binding: VersionBinding,\n    ) -> Self {\n        Self {\n            spec,\n            
insert_column_intents,\n            write_ctx,\n            version_binding,\n        }\n    }\n}\n\nimpl DisplayAs for EntityInsertSink {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"EntityInsertSink(schema_key={})\", self.spec.schema_key)\n            }\n            DisplayFormatType::TreeRender => write!(f, \"EntityInsertSink\"),\n        }\n    }\n}\n\n#[async_trait]\nimpl InsertSink for EntityInsertSink {\n    async fn write_batches(\n        &self,\n        batches: Vec<RecordBatch>,\n        _context: &Arc<TaskContext>,\n    ) -> Result<u64> {\n        let mut rows = Vec::new();\n        for batch in batches {\n            rows.extend(entity_lix_state_write_rows_from_batch(\n                &self.spec,\n                &batch,\n                &self.insert_column_intents,\n                self.version_binding.active_version_id(),\n            )?);\n        }\n        let count = u64::try_from(rows.len())\n            .map_err(|_| DataFusionError::Execution(\"entity INSERT row count overflow\".into()))?;\n\n        self.write_ctx\n            .stage_write(TransactionWrite::Rows {\n                mode: TransactionWriteMode::Insert,\n                rows,\n            })\n            .await\n            .map_err(lix_error_to_datafusion_error)?;\n\n        Ok(count)\n    }\n}\n\n#[allow(dead_code)]\nstruct EntityDeleteExec {\n    spec: Arc<EntitySurfaceSpec>,\n    write_ctx: SqlWriteContext,\n    table_schema: SchemaRef,\n    version_binding: VersionBinding,\n    request: LiveStateScanRequest,\n    filters: Vec<Arc<dyn PhysicalExpr>>,\n    result_schema: SchemaRef,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for EntityDeleteExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"EntityDeleteExec\")\n            
.field(\"schema_key\", &self.spec.schema_key)\n            .finish()\n    }\n}\n\nimpl EntityDeleteExec {\n    fn new(\n        spec: Arc<EntitySurfaceSpec>,\n        write_ctx: SqlWriteContext,\n        table_schema: SchemaRef,\n        version_binding: VersionBinding,\n        request: LiveStateScanRequest,\n        filters: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Self {\n        let result_schema = dml_count_schema();\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&result_schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Final,\n            Boundedness::Bounded,\n        );\n        Self {\n            spec,\n            write_ctx,\n            table_schema,\n            version_binding,\n            request,\n            filters,\n            result_schema,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for EntityDeleteExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"EntityDeleteExec(schema_key={})\", self.spec.schema_key)\n            }\n            DisplayFormatType::TreeRender => write!(f, \"EntityDeleteExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for EntityDeleteExec {\n    fn name(&self) -> &str {\n        \"EntityDeleteExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"EntityDeleteExec does not accept 
children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"EntityDeleteExec only exposes one partition, got {partition}\"\n            )));\n        }\n\n        let spec = Arc::clone(&self.spec);\n        let write_ctx = self.write_ctx.clone();\n        let table_schema = Arc::clone(&self.table_schema);\n        let version_binding = self.version_binding.clone();\n        let request = self.request.clone();\n        let filters = self.filters.clone();\n        let result_schema = Arc::clone(&self.result_schema);\n        let stream_schema = Arc::clone(&result_schema);\n\n        let stream = stream::once(async move {\n            let rows = if request.limit == Some(0) {\n                Vec::new()\n            } else {\n                write_ctx\n                    .scan_live_state(&request)\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?\n            };\n            let source_batch = entity_record_batch(&spec, Arc::clone(&table_schema), &rows)?;\n            let matched_batch = filter_entity_batch(source_batch, &filters)?;\n            let mut write_rows = entity_existing_lix_state_write_rows_from_batch(\n                &spec,\n                &matched_batch,\n                version_binding.active_version_id(),\n            )?;\n            for row in &mut write_rows {\n                row.snapshot = None;\n            }\n            let count = u64::try_from(write_rows.len()).map_err(|_| {\n                DataFusionError::Execution(\"entity DELETE row count overflow\".to_string())\n            })?;\n\n            if count > 0 {\n                write_ctx\n                    .stage_write(TransactionWrite::Rows {\n                        mode: 
TransactionWriteMode::Replace,\n                        rows: write_rows,\n                    })\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?;\n            }\n\n            Ok::<_, DataFusionError>(stream::iter(vec![Ok::<RecordBatch, DataFusionError>(\n                dml_count_batch(Arc::clone(&stream_schema), count)?,\n            )]))\n        })\n        .try_flatten();\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            result_schema,\n            stream,\n        )))\n    }\n}\n\n#[allow(dead_code)]\nstruct EntityUpdateExec {\n    spec: Arc<EntitySurfaceSpec>,\n    write_ctx: SqlWriteContext,\n    table_schema: SchemaRef,\n    version_binding: VersionBinding,\n    request: LiveStateScanRequest,\n    assignments: Vec<(String, Arc<dyn PhysicalExpr>)>,\n    filters: Vec<Arc<dyn PhysicalExpr>>,\n    result_schema: SchemaRef,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for EntityUpdateExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"EntityUpdateExec\")\n            .field(\"schema_key\", &self.spec.schema_key)\n            .finish()\n    }\n}\n\nimpl EntityUpdateExec {\n    fn new(\n        spec: Arc<EntitySurfaceSpec>,\n        write_ctx: SqlWriteContext,\n        table_schema: SchemaRef,\n        version_binding: VersionBinding,\n        request: LiveStateScanRequest,\n        assignments: Vec<(String, Arc<dyn PhysicalExpr>)>,\n        filters: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Self {\n        let result_schema = dml_count_schema();\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&result_schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Final,\n            Boundedness::Bounded,\n        );\n        Self {\n            spec,\n            write_ctx,\n            table_schema,\n            version_binding,\n            request,\n         
   assignments,\n            filters,\n            result_schema,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for EntityUpdateExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(\n                    f,\n                    \"EntityUpdateExec(schema_key={}, assignments={})\",\n                    self.spec.schema_key,\n                    self.assignments.len()\n                )\n            }\n            DisplayFormatType::TreeRender => write!(f, \"EntityUpdateExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for EntityUpdateExec {\n    fn name(&self) -> &str {\n        \"EntityUpdateExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"EntityUpdateExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"EntityUpdateExec only exposes one partition, got {partition}\"\n            )));\n        }\n\n        let spec = Arc::clone(&self.spec);\n        let write_ctx = self.write_ctx.clone();\n        let table_schema = Arc::clone(&self.table_schema);\n        let version_binding = self.version_binding.clone();\n        let request 
= self.request.clone();\n        let assignments = self.assignments.clone();\n        let filters = self.filters.clone();\n        let result_schema = Arc::clone(&self.result_schema);\n        let stream_schema = Arc::clone(&result_schema);\n\n        let stream = stream::once(async move {\n            let rows = if request.limit == Some(0) {\n                Vec::new()\n            } else {\n                write_ctx\n                    .scan_live_state(&request)\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?\n            };\n            let source_batch = entity_record_batch(&spec, Arc::clone(&table_schema), &rows)?;\n            let matched_batch = filter_entity_batch(source_batch, &filters)?;\n            let write_rows = entity_update_write_rows_from_batch(\n                &spec,\n                &matched_batch,\n                &assignments,\n                version_binding.active_version_id(),\n            )?;\n            let count = u64::try_from(write_rows.len()).map_err(|_| {\n                DataFusionError::Execution(\"entity UPDATE row count overflow\".to_string())\n            })?;\n\n            if count > 0 {\n                write_ctx\n                    .stage_write(TransactionWrite::Rows {\n                        mode: TransactionWriteMode::Replace,\n                        rows: write_rows,\n                    })\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?;\n            }\n\n            Ok::<_, DataFusionError>(stream::iter(vec![Ok::<RecordBatch, DataFusionError>(\n                dml_count_batch(Arc::clone(&stream_schema), count)?,\n            )]))\n        })\n        .try_flatten();\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            result_schema,\n            stream,\n        )))\n    }\n}\n\nfn validate_entity_update_assignments(\n    spec: &EntitySurfaceSpec,\n    schema: &SchemaRef,\n    assignments: &[(String, Expr)],\n) -> 
Result<()> {\n    for (column_name, _) in assignments {\n        schema.field_with_name(column_name).map_err(|_| {\n            DataFusionError::Plan(format!(\n                \"UPDATE entity surface '{}' failed: column '{column_name}' does not exist\",\n                spec.schema_key\n            ))\n        })?;\n        if !spec.is_visible_column(column_name) && column_name != \"lixcol_metadata\" {\n            return Err(DataFusionError::Execution(format!(\n                \"UPDATE entity surface '{}' cannot stage read-only column '{column_name}'\",\n                spec.schema_key\n            )));\n        }\n    }\n    Ok(())\n}\n\nfn filter_entity_batch(\n    batch: RecordBatch,\n    filters: &[Arc<dyn PhysicalExpr>],\n) -> Result<RecordBatch> {\n    let Some(mask) = evaluate_entity_filters(&batch, filters)? else {\n        return Ok(batch);\n    };\n    Ok(filter_record_batch(&batch, &mask)?)\n}\n\n/// ANDs the pushed-down physical filters into a single selection mask;\n/// returns `None` when there are no filters so callers can skip filtering.\nfn evaluate_entity_filters(\n    batch: &RecordBatch,\n    filters: &[Arc<dyn PhysicalExpr>],\n) -> Result<Option<BooleanArray>> {\n    if filters.is_empty() {\n        return Ok(None);\n    }\n\n    let mut combined_mask: Option<BooleanArray> = None;\n    for filter in filters {\n        let result = filter.evaluate(batch)?;\n        let array = result.into_array(batch.num_rows())?;\n        let bool_array = array\n            .as_any()\n            .downcast_ref::<BooleanArray>()\n            .ok_or_else(|| {\n                DataFusionError::Execution(\"entity surface filter was not boolean\".to_string())\n            })?;\n        // SQL WHERE semantics: a NULL predicate result must not select a row,\n        // so normalize NULLs to false before AND-combining the masks.\n        let normalized = bool_array\n            .iter()\n            .map(|value| Some(value == Some(true)))\n            .collect::<BooleanArray>();\n        combined_mask = Some(match combined_mask {\n            Some(existing) => and(&existing, &normalized)?,\n            None => normalized,\n        });\n    }\n    Ok(combined_mask)\n}\n\nfn entity_update_write_rows_from_batch(\n    spec: &EntitySurfaceSpec,\n    batch: 
&RecordBatch,\n    assignments: &[(String, Arc<dyn PhysicalExpr>)],\n    version_binding: Option<&str>,\n) -> Result<Vec<TransactionWriteRow>> {\n    let assignment_values = UpdateAssignmentValues::evaluate(batch, assignments)?;\n    (0..batch.num_rows())\n        .map(|row_index| {\n            let scope = resolve_write_version_scope(\n                optional_bool_value(batch, row_index, \"lixcol_global\")?,\n                optional_string_value(batch, row_index, \"lixcol_version_id\")?,\n                version_binding,\n                &format!(\"UPDATE into {}_by_version\", spec.schema_key),\n                &spec.schema_key,\n            )?;\n\n            Ok(TransactionWriteRow {\n                entity_id: optional_string_value(batch, row_index, \"lixcol_entity_id\")?\n                    .map(|entity_id| {\n                        EntityIdentity::from_json_array_text(&entity_id).map_err(|error| {\n                            DataFusionError::Execution(format!(\n                                \"UPDATE entity surface '{}' has invalid lixcol_entity_id: {error}\",\n                                spec.schema_key\n                            ))\n                        })\n                    })\n                    .transpose()?,\n                schema_key: spec.schema_key.clone(),\n                file_id: optional_string_value(batch, row_index, \"lixcol_file_id\")?,\n                snapshot: Some(\n                    TransactionJson::from_value(\n                        entity_update_snapshot_content_from_batch(\n                            spec,\n                            batch,\n                            &assignment_values,\n                            row_index,\n                        )?,\n                        &format!(\"{} update snapshot_content\", spec.schema_key),\n                    )\n                    .map_err(super::error::lix_error_to_datafusion_error)?,\n                ),\n                metadata: 
entity_update_optional_metadata_value(\n                    batch,\n                    &assignment_values,\n                    row_index,\n                    \"lixcol_metadata\",\n                    &spec.schema_key,\n                )?,\n                origin: None,\n                created_at: None,\n                updated_at: None,\n                global: scope.global,\n                change_id: None,\n                commit_id: None,\n                untracked: optional_bool_value(batch, row_index, \"lixcol_untracked\")?\n                    .unwrap_or(false),\n                version_id: scope.version_id,\n            })\n        })\n        .collect()\n}\n\nfn entity_update_snapshot_content_from_batch(\n    spec: &EntitySurfaceSpec,\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n) -> Result<JsonValue> {\n    let snapshot_content = optional_string_value(batch, row_index, \"lixcol_snapshot_content\")?\n        .ok_or_else(|| {\n            DataFusionError::Execution(format!(\n                \"UPDATE entity surface '{}' requires existing lixcol_snapshot_content\",\n                spec.schema_key\n            ))\n        })?;\n    let mut object = match serde_json::from_str::<JsonValue>(&snapshot_content).map_err(|error| {\n        DataFusionError::Execution(format!(\n            \"UPDATE entity surface '{}' expected existing snapshot_content to be valid JSON: {error}\",\n            spec.schema_key\n        ))\n    })? 
{\n        JsonValue::Object(object) => object,\n        other => {\n            return Err(DataFusionError::Execution(format!(\n                \"UPDATE entity surface '{}' expected existing snapshot_content to be a JSON object, got {other}\",\n                spec.schema_key\n            )))\n        }\n    };\n\n    for column in &spec.columns {\n        let value = match entity_update_json_value(\n            assignment_values,\n            row_index,\n            &column.name,\n            column.column_type,\n        )? {\n            Some(value) => value,\n            None => continue,\n        };\n        object.insert(column.name.clone(), value);\n    }\n    Ok(JsonValue::Object(object))\n}\n\nfn entity_update_optional_string_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<String>> {\n    match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? {\n        InsertCell::Omitted | InsertCell::Provided(SqlCell::Null) => Ok(None),\n        InsertCell::Provided(SqlCell::Value(\n            ScalarValue::Utf8(Some(value))\n            | ScalarValue::Utf8View(Some(value))\n            | ScalarValue::LargeUtf8(Some(value)),\n        )) => Ok(Some(value)),\n        InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!(\n            \"UPDATE entity surface expected text-compatible column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn entity_update_optional_metadata_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n    context: &str,\n) -> Result<Option<TransactionJson>> {\n    entity_update_optional_string_value(batch, assignment_values, row_index, column_name)?\n        .map(|value| {\n            let metadata = parse_row_metadata_value(&value, context)\n                
.map_err(super::error::lix_error_to_datafusion_error)?;\n            TransactionJson::from_value(metadata, &format!(\"{context} metadata\"))\n                .map_err(super::error::lix_error_to_datafusion_error)\n        })\n        .transpose()\n}\n\nfn entity_update_json_value(\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n    column_type: EntityColumnType,\n) -> Result<Option<JsonValue>> {\n    match assignment_values.assigned_cell(row_index, column_name)? {\n        UpdateCell::Unassigned => Ok(None),\n        UpdateCell::Assigned(SqlCell::Null) => Ok(Some(JsonValue::Null)),\n        UpdateCell::Assigned(SqlCell::Value(value)) => {\n            entity_json_value_from_scalar(Some(value), column_type).map(Some)\n        }\n    }\n}\n\nfn dml_count_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![Field::new(\n        \"count\",\n        DataType::UInt64,\n        false,\n    )]))\n}\n\nfn dml_count_batch(schema: SchemaRef, count: u64) -> Result<RecordBatch> {\n    RecordBatch::try_new(\n        schema,\n        vec![Arc::new(UInt64Array::from(vec![count])) as ArrayRef],\n    )\n    .map_err(DataFusionError::from)\n}\n\nfn entity_lix_state_write_rows_from_batch(\n    spec: &EntitySurfaceSpec,\n    batch: &RecordBatch,\n    insert_column_intents: &InsertColumnIntents,\n    version_binding: Option<&str>,\n) -> Result<Vec<TransactionWriteRow>> {\n    entity_lix_state_write_rows_from_batch_with_options(\n        spec,\n        batch,\n        insert_column_intents,\n        version_binding,\n        true,\n    )\n}\n\nfn entity_existing_lix_state_write_rows_from_batch(\n    spec: &EntitySurfaceSpec,\n    batch: &RecordBatch,\n    version_binding: Option<&str>,\n) -> Result<Vec<TransactionWriteRow>> {\n    entity_lix_state_write_rows_from_batch_with_options(\n        spec,\n        batch,\n        &InsertColumnIntents::all_explicit(),\n        version_binding,\n        false,\n    )\n}\n\nfn 
entity_lix_state_write_rows_from_batch_with_options(\n    spec: &EntitySurfaceSpec,\n    batch: &RecordBatch,\n    insert_column_intents: &InsertColumnIntents,\n    version_binding: Option<&str>,\n    reject_read_only_fields: bool,\n) -> Result<Vec<TransactionWriteRow>> {\n    (0..batch.num_rows())\n        .map(|row_index| {\n            let scope = resolve_write_version_scope(\n                optional_bool_value(batch, row_index, \"lixcol_global\")?,\n                optional_string_value(batch, row_index, \"lixcol_version_id\")?,\n                version_binding,\n                &format!(\n                    \"INSERT into {}_by_version\",\n                    spec.schema_key\n                ),\n                &spec.schema_key,\n            )?;\n\n            if let Some(schema_key) = optional_string_value(batch, row_index, \"lixcol_schema_key\")?\n            {\n                if schema_key != spec.schema_key {\n                    return Err(DataFusionError::Execution(format!(\n                        \"INSERT into entity surface '{}' cannot set lixcol_schema_key to '{}'\",\n                        spec.schema_key, schema_key\n                    )));\n                }\n            }\n\n            if reject_read_only_fields {\n                reject_present_entity_insert_field(batch, row_index, \"lixcol_snapshot_content\")?;\n                reject_present_entity_insert_field(batch, row_index, \"lixcol_created_at\")?;\n                reject_present_entity_insert_field(batch, row_index, \"lixcol_updated_at\")?;\n                reject_present_entity_insert_field(batch, row_index, \"lixcol_change_id\")?;\n                reject_present_entity_insert_field(batch, row_index, \"lixcol_commit_id\")?;\n            }\n\n            let snapshot_content =\n                entity_snapshot_content_from_batch(spec, batch, insert_column_intents, row_index)?;\n            let explicit_entity_id = optional_string_value(batch, row_index, \"lixcol_entity_id\")?;\n      
      let entity_id = if spec.primary_key_paths.is_empty() {\n                let entity_id = explicit_entity_id.ok_or_else(|| {\n                    DataFusionError::Execution(format!(\n                        \"INSERT into entity surface '{}' requires lixcol_entity_id because the schema has no x-lix-primary-key\",\n                        spec.schema_key\n                    ))\n                })?;\n                Some(EntityIdentity::from_json_array_text(&entity_id).map_err(|error| {\n                    DataFusionError::Execution(format!(\n                        \"INSERT into entity surface '{}' has invalid lixcol_entity_id: {error}\",\n                        spec.schema_key\n                    ))\n                })?)\n            } else {\n                explicit_entity_id\n                    .map(|entity_id| {\n                        EntityIdentity::from_json_array_text(&entity_id).map_err(|error| {\n                            DataFusionError::Execution(format!(\n                                \"INSERT into entity surface '{}' has invalid lixcol_entity_id: {error}\",\n                                spec.schema_key\n                            ))\n                        })\n                    })\n                    .transpose()?\n            };\n\n            Ok(TransactionWriteRow {\n                entity_id,\n                schema_key: spec.schema_key.clone(),\n                file_id: optional_string_value(batch, row_index, \"lixcol_file_id\")?,\n                snapshot: Some(TransactionJson::from_value(\n                    snapshot_content,\n                    &format!(\"{} insert snapshot_content\", spec.schema_key),\n                )\n                .map_err(super::error::lix_error_to_datafusion_error)?),\n                metadata: optional_metadata_value(\n                    batch,\n                    row_index,\n                    \"lixcol_metadata\",\n                    &spec.schema_key,\n                )?,\n                
origin: None,\n                created_at: None,\n                updated_at: None,\n                global: scope.global,\n                change_id: None,\n                commit_id: None,\n                untracked: optional_bool_value(batch, row_index, \"lixcol_untracked\")?\n                    .unwrap_or(false),\n                version_id: scope.version_id,\n            })\n        })\n        .collect()\n}\n\nfn entity_snapshot_content_from_batch(\n    spec: &EntitySurfaceSpec,\n    batch: &RecordBatch,\n    insert_column_intents: &InsertColumnIntents,\n    row_index: usize,\n) -> Result<JsonValue> {\n    let mut object = serde_json::Map::new();\n    for column in &spec.columns {\n        let value = match insert_column_intents.cell(batch, row_index, &column.name)? {\n            InsertCell::Omitted => {\n                continue;\n            }\n            InsertCell::Provided(SqlCell::Null) => JsonValue::Null,\n            InsertCell::Provided(SqlCell::Value(value)) => {\n                entity_json_value_from_scalar(Some(value), column.column_type)?\n            }\n        };\n        object.insert(column.name.clone(), value);\n    }\n    Ok(JsonValue::Object(object))\n}\n\nfn entity_json_value_from_scalar(\n    value: Option<ScalarValue>,\n    column_type: EntityColumnType,\n) -> Result<JsonValue> {\n    let Some(value) = value else {\n        return Ok(JsonValue::Null);\n    };\n    match value {\n        ScalarValue::Null\n        | ScalarValue::Utf8(None)\n        | ScalarValue::Utf8View(None)\n        | ScalarValue::LargeUtf8(None)\n        | ScalarValue::Boolean(None)\n        | ScalarValue::Int64(None)\n        | ScalarValue::Int32(None)\n        | ScalarValue::UInt64(None)\n        | ScalarValue::UInt32(None)\n        | ScalarValue::Float64(None)\n        | ScalarValue::Float32(None) => Ok(JsonValue::Null),\n        ScalarValue::Utf8(Some(value))\n        | ScalarValue::Utf8View(Some(value))\n        | ScalarValue::LargeUtf8(Some(value)) => 
match column_type {\n            EntityColumnType::Json => {\n                // JSON surface columns first try to parse SQL text as serialized\n                // JSON (objects, arrays, numbers, booleans, and null); text that\n                // is not valid JSON falls back to a JSON string value.\n                Ok(serde_json::from_str(&value).unwrap_or(JsonValue::String(value)))\n            }\n            EntityColumnType::Integer => {\n                value.parse::<i64>().map(JsonValue::from).map_err(|error| {\n                    DataFusionError::Execution(format!(\n                        \"entity integer column expected integer text, got error: {error}\"\n                    ))\n                })\n            }\n            EntityColumnType::Number => value\n                .parse::<f64>()\n                .map_err(|error| {\n                    DataFusionError::Execution(format!(\n                        \"entity number column expected number text, got error: {error}\"\n                    ))\n                })\n                .and_then(json_number_from_f64),\n            EntityColumnType::Boolean => {\n                value.parse::<bool>().map(JsonValue::from).map_err(|error| {\n                    DataFusionError::Execution(format!(\n                        \"entity boolean column expected boolean text, got error: {error}\"\n                    ))\n                })\n            }\n            EntityColumnType::String => Ok(JsonValue::String(value)),\n        },\n        ScalarValue::Boolean(Some(value)) => Ok(JsonValue::Bool(value)),\n        ScalarValue::Int64(Some(value)) => Ok(JsonValue::from(value)),\n        ScalarValue::Int32(Some(value)) => Ok(JsonValue::from(value)),\n        ScalarValue::UInt64(Some(value)) => Ok(JsonValue::from(value)),\n        ScalarValue::UInt32(Some(value)) => Ok(JsonValue::from(value)),\n        ScalarValue::Float64(Some(value)) => json_number_from_f64(value),\n        ScalarValue::Float32(Some(value)) => 
json_number_from_f64(value as f64),\n        ScalarValue::Binary(Some(_))\n        | ScalarValue::LargeBinary(Some(_))\n        | ScalarValue::FixedSizeBinary(_, Some(_)) => Err(lix_error_to_datafusion_error(\n            LixError::new(\n                LixError::CODE_TYPE_MISMATCH,\n                \"entity JSON columns cannot store blob values directly\",\n            )\n            .with_hint(\n                \"Encode bytes explicitly as JSON text/object, or store raw bytes in a blob-native surface such as lix_file.data.\",\n            ),\n        )),\n        ScalarValue::Binary(None)\n        | ScalarValue::LargeBinary(None)\n        | ScalarValue::FixedSizeBinary(_, None) => Ok(JsonValue::Null),\n        other => Err(DataFusionError::Execution(format!(\n            \"entity insert does not support scalar value {other:?}\"\n        ))),\n    }\n}\n\nfn json_number_from_f64(value: f64) -> Result<JsonValue> {\n    serde_json::Number::from_f64(value)\n        .map(JsonValue::Number)\n        .ok_or_else(|| {\n            DataFusionError::Execution(format!(\"entity number column cannot store {value}\"))\n        })\n}\n\nfn reject_present_entity_insert_field(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<()> {\n    if optional_scalar_value(batch, row_index, column_name)?.is_some_and(|value| !value.is_null()) {\n        return Err(DataFusionError::Execution(format!(\n            \"INSERT into entity surface cannot stage read-only column '{column_name}'\"\n        )));\n    }\n    Ok(())\n}\n\nfn optional_string_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<String>> {\n    match optional_scalar_value(batch, row_index, column_name)? 
{\n        None\n        | Some(ScalarValue::Null)\n        | Some(ScalarValue::Utf8(None))\n        | Some(ScalarValue::Utf8View(None))\n        | Some(ScalarValue::LargeUtf8(None)) => Ok(None),\n        Some(ScalarValue::Utf8(Some(value)))\n        | Some(ScalarValue::Utf8View(Some(value)))\n        | Some(ScalarValue::LargeUtf8(Some(value))) => Ok(Some(value)),\n        Some(other) => Err(DataFusionError::Execution(format!(\n            \"INSERT into entity surface expected text-compatible column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn optional_metadata_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n    context: &str,\n) -> Result<Option<TransactionJson>> {\n    optional_string_value(batch, row_index, column_name)?\n        .map(|value| {\n            let metadata = parse_row_metadata_value(&value, context)\n                .map_err(super::error::lix_error_to_datafusion_error)?;\n            TransactionJson::from_value(metadata, &format!(\"{context} metadata\"))\n                .map_err(super::error::lix_error_to_datafusion_error)\n        })\n        .transpose()\n}\n\nfn optional_bool_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<bool>> {\n    match optional_scalar_value(batch, row_index, column_name)? 
{\n        None | Some(ScalarValue::Null) | Some(ScalarValue::Boolean(None)) => Ok(None),\n        Some(ScalarValue::Boolean(Some(value))) => Ok(Some(value)),\n        Some(other) => Err(DataFusionError::Execution(format!(\n            \"INSERT into entity surface expected boolean column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn optional_scalar_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<ScalarValue>> {\n    let schema = batch.schema();\n    let column_index = match schema.index_of(column_name) {\n        Ok(column_index) => column_index,\n        Err(_) => return Ok(None),\n    };\n    if row_index >= batch.num_rows() {\n        return Err(DataFusionError::Execution(format!(\n            \"row index {row_index} out of bounds for entity batch with {} rows\",\n            batch.num_rows()\n        )));\n    }\n    ScalarValue::try_from_array(batch.column(column_index).as_ref(), row_index)\n        .map(Some)\n        .map_err(|error| {\n            DataFusionError::Execution(format!(\n                \"failed to decode entity column '{column_name}' at row {row_index}: {error}\"\n            ))\n        })\n}\n\nstruct EntityScanExec {\n    spec: Arc<EntitySurfaceSpec>,\n    live_state: Arc<dyn LiveStateReader>,\n    schema: SchemaRef,\n    request: LiveStateScanRequest,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for EntityScanExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"EntityScanExec\")\n            .field(\"schema_key\", &self.spec.schema_key)\n            .finish()\n    }\n}\n\nimpl EntityScanExec {\n    fn new(\n        spec: Arc<EntitySurfaceSpec>,\n        live_state: Arc<dyn LiveStateReader>,\n        schema: SchemaRef,\n        request: LiveStateScanRequest,\n    ) -> Self {\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&schema)),\n            
Partitioning::UnknownPartitioning(1),\n            EmissionType::Incremental,\n            Boundedness::Bounded,\n        );\n        Self {\n            spec,\n            live_state,\n            schema,\n            request,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for EntityScanExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(\n                    f,\n                    \"EntityScanExec(schema_key={}, limit={:?})\",\n                    self.spec.schema_key, self.request.limit\n                )\n            }\n            DisplayFormatType::TreeRender => write!(f, \"EntityScanExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for EntityScanExec {\n    fn name(&self) -> &str {\n        \"EntityScanExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"EntityScanExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"EntityScanExec only exposes one partition, got {partition}\"\n            )));\n        }\n\n        let spec = Arc::clone(&self.spec);\n        let live_state = Arc::clone(&self.live_state);\n        let 
schema = Arc::clone(&self.schema);\n        let request = self.request.clone();\n        let stream_schema = Arc::clone(&schema);\n        let stream = stream::once(async move {\n            let rows = if request.limit == Some(0) {\n                Vec::new()\n            } else {\n                live_state\n                    .scan_rows(&request)\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?\n            };\n            let batch = entity_record_batch(&spec, Arc::clone(&stream_schema), &rows)?;\n            Ok::<_, DataFusionError>(stream::iter(vec![Ok::<RecordBatch, DataFusionError>(\n                batch,\n            )]))\n        })\n        .try_flatten();\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(schema, stream)))\n    }\n}\n\nfn entity_live_state_scan_request(\n    schema_key: &str,\n    active_version_id: Option<&str>,\n    projected_schema: Option<&Schema>,\n    limit: Option<usize>,\n) -> LiveStateScanRequest {\n    LiveStateScanRequest {\n        filter: LiveStateFilter {\n            schema_keys: vec![schema_key.to_string()],\n            version_ids: active_version_id\n                .map(|version_id| vec![version_id.to_string()])\n                .unwrap_or_default(),\n            ..LiveStateFilter::default()\n        },\n        projection: entity_live_state_projection(projected_schema),\n        limit,\n    }\n}\n\nfn entity_live_state_projection(projected_schema: Option<&Schema>) -> LiveStateProjection {\n    let Some(schema) = projected_schema else {\n        return LiveStateProjection::default();\n    };\n    let mut columns = projection_column_names(schema);\n    if schema\n        .fields()\n        .iter()\n        .any(|field| !field.name().starts_with(\"lixcol_\"))\n        && !columns.iter().any(|column| column == \"snapshot_content\")\n    {\n        columns.push(\"snapshot_content\".to_string());\n    }\n    LiveStateProjection { columns }\n}\n\nfn projection_column_names(schema: 
&Schema) -> Vec<String> {\n    schema\n        .fields()\n        .iter()\n        .filter_map(|field| field.name().strip_prefix(\"lixcol_\"))\n        .map(str::to_string)\n        .collect()\n}\n\nfn entity_record_batch(\n    spec: &EntitySurfaceSpec,\n    schema: SchemaRef,\n    rows: &[MaterializedLiveStateRow],\n) -> Result<RecordBatch> {\n    if schema.fields().is_empty() {\n        let options = RecordBatchOptions::new().with_row_count(Some(rows.len()));\n        return RecordBatch::try_new_with_options(schema, vec![], &options)\n            .map_err(DataFusionError::from);\n    }\n\n    let snapshots = rows\n        .iter()\n        .map(|row| parse_snapshot(row.snapshot_content.as_deref()))\n        .collect::<Result<Vec<_>>>()?;\n\n    let columns = schema\n        .fields()\n        .iter()\n        .map(|field| entity_column_array(spec, field.name(), rows, &snapshots))\n        .collect::<Result<Vec<_>>>()?;\n\n    RecordBatch::try_new(schema, columns).map_err(DataFusionError::from)\n}\n\nfn entity_column_array(\n    spec: &EntitySurfaceSpec,\n    column_name: &str,\n    rows: &[MaterializedLiveStateRow],\n    snapshots: &[Option<JsonValue>],\n) -> Result<ArrayRef> {\n    if let Some(property_name) = column_name.strip_prefix(\"lixcol_\") {\n        return entity_system_column_array(property_name, rows);\n    }\n\n    let column_type = spec\n        .visible_column(column_name)\n        .ok_or_else(|| {\n            DataFusionError::Execution(format!(\n                \"sql2 entity provider '{}' does not expose column '{}'\",\n                spec.schema_key, column_name\n            ))\n        })?\n        .column_type;\n\n    let values = snapshots\n        .iter()\n        .map(|snapshot| snapshot.as_ref().and_then(|value| value.get(column_name)))\n        .collect::<Vec<_>>();\n    Ok(match column_type {\n        EntityColumnType::String | EntityColumnType::Json => Arc::new(StringArray::from(\n            values\n                .iter()\n            
    .map(|value| entity_json_text_value(*value, column_type))\n                .collect::<Result<Vec<_>>>()?,\n        )) as ArrayRef,\n        EntityColumnType::Integer => Arc::new(Int64Array::from(\n            values\n                .iter()\n                .map(|value| entity_i64_value(*value))\n                .collect::<Vec<_>>(),\n        )) as ArrayRef,\n        EntityColumnType::Number => Arc::new(Float64Array::from(\n            values\n                .iter()\n                .map(|value| entity_f64_value(*value))\n                .collect::<Vec<_>>(),\n        )) as ArrayRef,\n        EntityColumnType::Boolean => Arc::new(BooleanArray::from(\n            values\n                .iter()\n                .map(|value| value.and_then(JsonValue::as_bool))\n                .collect::<Vec<_>>(),\n        )) as ArrayRef,\n    })\n}\n\nfn entity_system_column_array(\n    column_name: &str,\n    rows: &[MaterializedLiveStateRow],\n) -> Result<ArrayRef> {\n    Ok(match column_name {\n        \"entity_id\" => Arc::new(StringArray::from(\n            rows.iter()\n                .map(|row| {\n                    row.entity_id\n                        .as_json_array_text()\n                        .map(Some)\n                        .map_err(lix_error_to_datafusion_error)\n                })\n                .collect::<Result<Vec<_>>>()?,\n        )) as ArrayRef,\n        \"schema_key\" => string_array(rows.iter().map(|row| Some(row.schema_key.as_str()))),\n        \"file_id\" => string_array(rows.iter().map(|row| row.file_id.as_deref())),\n        \"snapshot_content\" => string_array(rows.iter().map(|row| row.snapshot_content.as_deref())),\n        \"metadata\" => Arc::new(StringArray::from(\n            rows.iter()\n                .map(|row| row.metadata.as_ref().map(serialize_row_metadata))\n                .collect::<Vec<_>>(),\n        )) as ArrayRef,\n        \"created_at\" => string_array(rows.iter().map(|row| Some(row.created_at.as_str()))),\n        
\"updated_at\" => string_array(rows.iter().map(|row| Some(row.updated_at.as_str()))),\n        \"global\" => Arc::new(BooleanArray::from(\n            rows.iter().map(|row| row.global).collect::<Vec<_>>(),\n        )) as ArrayRef,\n        \"change_id\" => string_array(rows.iter().map(|row| row.change_id.as_deref())),\n        \"commit_id\" => string_array(rows.iter().map(|row| row.commit_id.as_deref())),\n        \"untracked\" => Arc::new(BooleanArray::from(\n            rows.iter().map(|row| row.untracked).collect::<Vec<_>>(),\n        )) as ArrayRef,\n        \"version_id\" => string_array(rows.iter().map(|row| Some(row.version_id.as_str()))),\n        other => {\n            return Err(DataFusionError::Execution(format!(\n                \"sql2 entity provider does not support system column 'lixcol_{other}'\"\n            )))\n        }\n    })\n}\n\npub(super) fn parse_snapshot(snapshot_content: Option<&str>) -> Result<Option<JsonValue>> {\n    snapshot_content\n        .map(|snapshot| {\n            serde_json::from_str::<JsonValue>(snapshot).map_err(|error| {\n                DataFusionError::Execution(format!(\n                    \"sql2 entity provider expected valid snapshot_content JSON: {error}\"\n                ))\n            })\n        })\n        .transpose()\n}\n\npub(super) fn entity_json_text_value(\n    value: Option<&JsonValue>,\n    column_type: EntityColumnType,\n) -> Result<Option<String>> {\n    Ok(match (column_type, value) {\n        (_, None) | (_, Some(JsonValue::Null)) => None,\n        (EntityColumnType::String, Some(JsonValue::Bool(value))) => Some(if *value {\n            \"true\".to_string()\n        } else {\n            \"false\".to_string()\n        }),\n        (EntityColumnType::String, Some(JsonValue::String(value))) => Some(value.clone()),\n        (EntityColumnType::String, Some(other)) => Some(json_to_string(other)?),\n        (EntityColumnType::Json, Some(other)) => Some(json_to_string(other)?),\n        _ => None,\n    
})\n}\n\npub(super) fn entity_i64_value(value: Option<&JsonValue>) -> Option<i64> {\n    match value {\n        Some(JsonValue::Number(number)) => number.as_i64(),\n        Some(JsonValue::String(value)) => value.parse::<i64>().ok(),\n        _ => None,\n    }\n}\n\npub(super) fn entity_f64_value(value: Option<&JsonValue>) -> Option<f64> {\n    match value {\n        Some(JsonValue::Number(number)) => number.as_f64(),\n        Some(JsonValue::String(value)) => value.parse::<f64>().ok(),\n        _ => None,\n    }\n}\n\nfn json_to_string(value: &JsonValue) -> Result<String> {\n    serde_json::to_string(value).map_err(|error| {\n        DataFusionError::Execution(format!(\"failed to render JSON value: {error}\"))\n    })\n}\n\npub(super) fn string_array<'a>(values: impl Iterator<Item = Option<&'a str>>) -> ArrayRef {\n    let values = values\n        .map(|value| value.map(ToOwned::to_owned))\n        .collect::<Vec<_>>();\n    Arc::new(StringArray::from(values)) as ArrayRef\n}\n\npub(super) fn entity_surface_schema(\n    spec: &EntitySurfaceSpec,\n    variant: EntityProviderVariant,\n) -> SchemaRef {\n    let mut fields = spec\n        .columns\n        .iter()\n        .map(|column| {\n            let field = Field::new(\n                &column.name,\n                arrow_data_type_for_entity_column_type(column.column_type),\n                true,\n            );\n            if column.column_type == EntityColumnType::Json {\n                mark_json_field(field)\n            } else {\n                field\n            }\n        })\n        .collect::<Vec<_>>();\n\n    fields.extend(entity_system_fields(variant));\n    Arc::new(Schema::new(fields))\n}\n\nfn arrow_data_type_for_entity_column_type(column_type: EntityColumnType) -> DataType {\n    match column_type {\n        EntityColumnType::String | EntityColumnType::Json => DataType::Utf8,\n        EntityColumnType::Integer => DataType::Int64,\n        EntityColumnType::Number => DataType::Float64,\n        
EntityColumnType::Boolean => DataType::Boolean,\n    }\n}\n\npub(super) fn entity_system_fields(variant: EntityProviderVariant) -> Vec<Field> {\n    if variant == EntityProviderVariant::History {\n        return vec![\n            json_field(HISTORY_COL_ENTITY_ID, false),\n            Field::new(HISTORY_COL_SCHEMA_KEY, DataType::Utf8, false),\n            Field::new(HISTORY_COL_FILE_ID, DataType::Utf8, true),\n            json_field(HISTORY_COL_SNAPSHOT_CONTENT, true),\n            json_field(HISTORY_COL_METADATA, true),\n            Field::new(HISTORY_COL_CHANGE_ID, DataType::Utf8, false),\n            Field::new(HISTORY_COL_OBSERVED_COMMIT_ID, DataType::Utf8, false),\n            Field::new(HISTORY_COL_COMMIT_CREATED_AT, DataType::Utf8, false),\n            Field::new(HISTORY_COL_START_COMMIT_ID, DataType::Utf8, false),\n            Field::new(HISTORY_COL_DEPTH, DataType::Int64, false),\n        ];\n    }\n\n    let mut fields = vec![\n        json_field(\"lixcol_entity_id\", true),\n        Field::new(\"lixcol_schema_key\", DataType::Utf8, false),\n        Field::new(\"lixcol_file_id\", DataType::Utf8, true),\n        json_field(\"lixcol_snapshot_content\", true),\n        json_field(\"lixcol_metadata\", true),\n        Field::new(\"lixcol_created_at\", DataType::Utf8, true),\n        Field::new(\"lixcol_updated_at\", DataType::Utf8, true),\n        Field::new(\"lixcol_global\", DataType::Boolean, true),\n        Field::new(\"lixcol_change_id\", DataType::Utf8, true),\n        Field::new(\"lixcol_commit_id\", DataType::Utf8, true),\n        Field::new(\"lixcol_untracked\", DataType::Boolean, true),\n    ];\n    if variant == EntityProviderVariant::ByVersion {\n        fields.push(Field::new(\"lixcol_version_id\", DataType::Utf8, false));\n    }\n    fields\n}\n\nfn projected_schema(schema: &SchemaRef, projection: Option<&Vec<usize>>) -> Result<SchemaRef> {\n    let Some(projection) = projection else {\n        return Ok(Arc::clone(schema));\n    };\n    
Ok(Arc::new(schema.project(projection)?))\n}\n\nfn derive_entity_surface_spec_from_schema(\n    schema: &JsonValue,\n) -> std::result::Result<EntitySurfaceSpec, LixError> {\n    let schema_key = schema\n        .get(\"x-lix-key\")\n        .and_then(JsonValue::as_str)\n        .ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                \"schema is missing string x-lix-key\".to_string(),\n            )\n        })?;\n\n    let properties = schema\n        .get(\"properties\")\n        .and_then(JsonValue::as_object)\n        .ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\"schema '{schema_key}' must define object properties\"),\n            )\n        })?;\n\n    let mut columns = properties\n        .iter()\n        .filter(|(key, _)| !key.starts_with(\"lixcol_\"))\n        .map(|(key, property_schema)| {\n            let column_type = entity_column_type_from_schema(property_schema).ok_or_else(|| {\n                LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\n                        \"schema '{schema_key}' property '/{key}' must declare a SQL-projectable JSON Schema type\"\n                    ),\n                )\n                .with_hint(\"Use an explicit type such as string, number, integer, boolean, object, array, or a supported union of those types.\")\n            })?;\n            Ok(EntitySurfaceColumn {\n                name: key.clone(),\n                column_type,\n            })\n        })\n        .collect::<std::result::Result<Vec<_>, LixError>>()?;\n    columns.sort_by(|left, right| left.name.cmp(&right.name));\n\n    let primary_key_paths = parse_primary_key_paths(schema)?;\n\n    Ok(EntitySurfaceSpec {\n        schema_key: schema_key.to_string(),\n        primary_key_paths,\n        columns,\n    })\n}\n\nfn parse_primary_key_paths(schema: &JsonValue) -> 
std::result::Result<Vec<Vec<String>>, LixError> {\n    let Some(primary_key) = schema.get(\"x-lix-primary-key\") else {\n        return Ok(Vec::new());\n    };\n    let primary_key = primary_key.as_array().ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            \"schema x-lix-primary-key must be an array of JSON Pointers\".to_string(),\n        )\n    })?;\n\n    primary_key\n        .iter()\n        .enumerate()\n        .map(|(index, pointer)| {\n            let pointer = pointer.as_str().ok_or_else(|| {\n                LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\"schema x-lix-primary-key entry at index {index} must be a string\"),\n                )\n            })?;\n            parse_json_pointer(pointer)\n        })\n        .collect()\n}\n\n// TODO(engine): share JSON Pointer parsing with schema/canonical validation once\n// those helpers have a clean module boundary for SQL providers.\nfn parse_json_pointer(pointer: &str) -> std::result::Result<Vec<String>, LixError> {\n    if pointer.is_empty() {\n        return Ok(Vec::new());\n    }\n    if !pointer.starts_with('/') {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"invalid JSON pointer '{pointer}'\"),\n        ));\n    }\n    pointer[1..]\n        .split('/')\n        .map(decode_json_pointer_segment)\n        .collect()\n}\n\nfn decode_json_pointer_segment(segment: &str) -> std::result::Result<String, LixError> {\n    let mut out = String::new();\n    let mut chars = segment.chars();\n    while let Some(ch) = chars.next() {\n        if ch == '~' {\n            match chars.next() {\n                Some('0') => out.push('~'),\n                Some('1') => out.push('/'),\n                _ => {\n                    return Err(LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        format!(\"invalid JSON pointer segment '{segment}'\"),\n                
    ))\n                }\n            }\n        } else {\n            out.push(ch);\n        }\n    }\n    Ok(out)\n}\n\nfn schema_exposed_as_entity_surface(schema_key: &str) -> bool {\n    !matches!(schema_key, \"lix_active_account\" | \"lix_change\")\n}\n\nfn schema_exposed_as_entity_history_surface(schema_key: &str) -> bool {\n    !matches!(schema_key, \"lix_commit\" | \"lix_commit_edge\")\n}\n\nfn entity_column_type_from_schema(schema: &JsonValue) -> Option<EntityColumnType> {\n    let mut kinds = BTreeSet::new();\n    collect_entity_type_kinds(schema, &mut kinds);\n    kinds.remove(\"null\");\n\n    if kinds.is_empty() {\n        return None;\n    }\n\n    if kinds.len() == 1 {\n        return match kinds.into_iter().next() {\n            Some(\"boolean\") => Some(EntityColumnType::Boolean),\n            Some(\"integer\") => Some(EntityColumnType::Integer),\n            Some(\"number\") => Some(EntityColumnType::Number),\n            Some(\"string\") => Some(EntityColumnType::String),\n            Some(\"object\" | \"array\") => Some(EntityColumnType::Json),\n            _ => None,\n        };\n    }\n\n    Some(EntityColumnType::Json)\n}\n\nfn collect_entity_type_kinds<'a>(schema: &'a JsonValue, out: &mut BTreeSet<&'a str>) {\n    match schema.get(\"type\") {\n        Some(JsonValue::String(kind)) => {\n            out.insert(kind.as_str());\n        }\n        Some(JsonValue::Array(kinds)) => {\n            for kind in kinds.iter().filter_map(JsonValue::as_str) {\n                out.insert(kind);\n            }\n        }\n        _ => {}\n    }\n\n    for keyword in [\"anyOf\", \"oneOf\", \"allOf\"] {\n        if let Some(JsonValue::Array(branches)) = schema.get(keyword) {\n            for branch in branches {\n                collect_entity_type_kinds(branch, out);\n            }\n        }\n    }\n}\n\nfn datafusion_error_to_lix_error(error: DataFusionError) -> LixError {\n    super::error::datafusion_error_to_lix_error(error)\n}\n\nfn 
lix_error_to_datafusion_error(error: LixError) -> DataFusionError {\n    DataFusionError::External(Box::new(error))\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use async_trait::async_trait;\n    use datafusion::arrow::array::{ArrayRef, BooleanArray, Float64Array, Int64Array, StringArray};\n    use datafusion::arrow::datatypes::{DataType, Field, Schema};\n    use datafusion::arrow::record_batch::RecordBatch;\n    use datafusion::common::{Column, ScalarValue};\n    use datafusion::execution::TaskContext;\n    use datafusion::logical_expr::expr::InList;\n    use datafusion::logical_expr::{BinaryExpr, Expr, Operator};\n    use serde_json::json;\n\n    use super::{\n        derive_entity_surface_spec_from_schema, entity_lix_state_write_rows_from_batch,\n        entity_record_batch, entity_surface_schema, schema_exposed_as_entity_surface,\n        EntityColumnType, EntityInsertSink, EntityProviderVariant,\n    };\n    use crate::binary_cas::BlobDataReader;\n    use crate::functions::{\n        FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider,\n    };\n    use crate::live_state::{\n        LiveStateReader, LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow,\n    };\n    use crate::sql2::dml::InsertSink;\n    use crate::sql2::write_normalization::InsertColumnIntents;\n    use crate::sql2::{SqlWriteContext, SqlWriteExecutionContext};\n    use crate::transaction::types::{\n        TransactionJson, TransactionWrite, TransactionWriteMode, TransactionWriteOutcome,\n        TransactionWriteRow,\n    };\n    use crate::version::{VersionHead, VersionRefReader};\n    use crate::LixError;\n\n    struct EmptyLiveStateReader;\n    struct EmptyVersionRefReader;\n    #[derive(Default)]\n    struct CapturingWriteContext {\n        rows: Vec<MaterializedLiveStateRow>,\n        writes: Vec<TransactionWrite>,\n    }\n\n    #[async_trait]\n    impl LiveStateReader for EmptyLiveStateReader {\n        async fn 
scan_rows(\n            &self,\n            _request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(vec![])\n        }\n\n        async fn load_row(\n            &self,\n            _request: &LiveStateRowRequest,\n        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n            Ok(None)\n        }\n    }\n\n    #[async_trait]\n    impl VersionRefReader for EmptyVersionRefReader {\n        async fn load_head(&self, _version_id: &str) -> Result<Option<VersionHead>, LixError> {\n            Ok(None)\n        }\n\n        async fn scan_heads(&self) -> Result<Vec<VersionHead>, LixError> {\n            Ok(Vec::new())\n        }\n    }\n\n    fn empty_version_ref() -> Arc<dyn VersionRefReader> {\n        Arc::new(EmptyVersionRefReader)\n    }\n\n    fn test_functions() -> FunctionProviderHandle {\n        SharedFunctionProvider::new(\n            Box::new(SystemFunctionProvider) as Box<dyn FunctionProvider + Send>\n        )\n    }\n\n    #[async_trait]\n    impl BlobDataReader for CapturingWriteContext {\n        async fn load_bytes_many(\n            &self,\n            hashes: &[crate::binary_cas::BlobHash],\n        ) -> Result<crate::binary_cas::BlobBytesBatch, LixError> {\n            Ok(crate::binary_cas::BlobBytesBatch::new(vec![\n                None;\n                hashes.len()\n            ]))\n        }\n    }\n\n    #[async_trait]\n    impl SqlWriteExecutionContext for CapturingWriteContext {\n        fn active_version_id(&self) -> &str {\n            \"version-a\"\n        }\n\n        fn functions(&self) -> FunctionProviderHandle {\n            test_functions()\n        }\n\n        fn list_visible_schemas(&self) -> Result<Vec<serde_json::Value>, LixError> {\n            Ok(Vec::new())\n        }\n\n        async fn load_bytes_many(\n            &mut self,\n            hashes: &[crate::binary_cas::BlobHash],\n        ) -> Result<crate::binary_cas::BlobBytesBatch, LixError> {\n         
   BlobDataReader::load_bytes_many(self, hashes).await\n        }\n\n        async fn scan_live_state(\n            &mut self,\n            _request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(self.rows.clone())\n        }\n\n        async fn load_version_head(\n            &mut self,\n            version_id: &str,\n        ) -> Result<Option<String>, LixError> {\n            if version_id == \"ghost-version\" {\n                return Ok(None);\n            }\n            Ok(Some(format!(\"commit-{version_id}\")))\n        }\n\n        async fn stage_write(\n            &mut self,\n            write: TransactionWrite,\n        ) -> Result<TransactionWriteOutcome, LixError> {\n            self.writes.push(write);\n            Ok(TransactionWriteOutcome { count: 0 })\n        }\n    }\n\n    fn live_row() -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: crate::entity_identity::EntityIdentity::single(\"entity-1\"),\n            schema_key: \"project_message\".to_string(),\n            file_id: None,\n            snapshot_content: Some(\n                \"{\\\"body\\\":\\\"hello\\\",\\\"rating\\\":4.5,\\\"count\\\":7,\\\"enabled\\\":true,\\\"meta\\\":{\\\"x\\\":1}}\"\n                    .to_string(),\n            ),\n            metadata: Some(json!({\"source\": \"test\"}).to_string()),\n            deleted: false,\n            version_id: \"version-a\".to_string(),\n            change_id: Some(\"change-a\".to_string()),\n            commit_id: Some(\"commit-a\".to_string()),\n            global: false,\n            untracked: false,\n            created_at: \"2026-04-23T00:00:00Z\".to_string(),\n            updated_at: \"2026-04-23T01:00:00Z\".to_string(),\n        }\n    }\n\n    fn entity_insert_spec() -> Arc<super::EntitySurfaceSpec> {\n        Arc::new(\n            derive_entity_surface_spec_from_schema(&json!({\n                \"x-lix-key\": 
\"project_message\",\n                \"type\": \"object\",\n                \"properties\": {\n                    \"body\": { \"type\": \"string\" },\n                    \"count\": { \"type\": \"integer\" },\n                    \"enabled\": { \"type\": \"boolean\" },\n                    \"meta\": { \"type\": \"object\" },\n                    \"rating\": { \"type\": \"number\" }\n                }\n            }))\n            .expect(\"schema should derive entity surface spec\"),\n        )\n    }\n\n    fn entity_insert_spec_with_primary_key() -> Arc<super::EntitySurfaceSpec> {\n        Arc::new(\n            derive_entity_surface_spec_from_schema(&json!({\n                \"x-lix-key\": \"project_message\",\n                \"x-lix-primary-key\": [\"/id\"],\n                \"type\": \"object\",\n                \"properties\": {\n                    \"id\": { \"type\": \"string\" },\n                    \"body\": { \"type\": \"string\" }\n                },\n                \"required\": [\"id\", \"body\"]\n            }))\n            .expect(\"schema should derive entity surface spec\"),\n        )\n    }\n\n    fn string_column(values: Vec<Option<&str>>) -> ArrayRef {\n        Arc::new(StringArray::from(values)) as ArrayRef\n    }\n\n    fn string_literal(value: &str) -> Expr {\n        Expr::Literal(ScalarValue::Utf8(Some(value.to_string())), None)\n    }\n\n    fn column(name: &str) -> Expr {\n        Expr::Column(Column::from_name(name))\n    }\n\n    fn eq_filter(column_name: &str, value: &str) -> Expr {\n        Expr::BinaryExpr(BinaryExpr::new(\n            Box::new(column(column_name)),\n            Operator::Eq,\n            Box::new(string_literal(value)),\n        ))\n    }\n\n    fn entity_insert_batch(include_version: bool, global: bool) -> RecordBatch {\n        let mut fields = vec![\n            Field::new(\"body\", DataType::Utf8, true),\n            Field::new(\"count\", DataType::Int64, true),\n            Field::new(\"enabled\", 
DataType::Boolean, true),\n            Field::new(\"meta\", DataType::Utf8, true),\n            Field::new(\"rating\", DataType::Float64, true),\n            Field::new(\"lixcol_entity_id\", DataType::Utf8, false),\n            Field::new(\"lixcol_metadata\", DataType::Utf8, true),\n            Field::new(\"lixcol_global\", DataType::Boolean, false),\n            Field::new(\"lixcol_untracked\", DataType::Boolean, false),\n        ];\n        let mut columns = vec![\n            string_column(vec![Some(\"hello\")]),\n            Arc::new(Int64Array::from(vec![7])) as ArrayRef,\n            Arc::new(BooleanArray::from(vec![true])) as ArrayRef,\n            string_column(vec![Some(\"{\\\"x\\\":1}\")]),\n            Arc::new(Float64Array::from(vec![4.5])) as ArrayRef,\n            string_column(vec![Some(\"[\\\"entity-1\\\"]\")]),\n            string_column(vec![Some(\"{\\\"source\\\":\\\"entity\\\"}\")]),\n            Arc::new(BooleanArray::from(vec![global])) as ArrayRef,\n            Arc::new(BooleanArray::from(vec![false])) as ArrayRef,\n        ];\n        if include_version {\n            fields.push(Field::new(\"lixcol_version_id\", DataType::Utf8, false));\n            columns.push(string_column(vec![Some(\"version-a\")]));\n        }\n\n        RecordBatch::try_new(Arc::new(Schema::new(fields)), columns)\n            .expect(\"entity insert batch should build\")\n    }\n\n    fn primary_key_entity_insert_batch(include_entity_id: bool) -> RecordBatch {\n        let mut fields = vec![\n            Field::new(\"id\", DataType::Utf8, false),\n            Field::new(\"body\", DataType::Utf8, true),\n            Field::new(\"lixcol_version_id\", DataType::Utf8, false),\n        ];\n        let mut columns = vec![\n            string_column(vec![Some(\"message-1\")]),\n            string_column(vec![Some(\"hello\")]),\n            string_column(vec![Some(\"version-a\")]),\n        ];\n        if include_entity_id {\n            
fields.push(Field::new(\"lixcol_entity_id\", DataType::Utf8, false));\n            columns.push(string_column(vec![Some(\"[\\\"message-1\\\"]\")]));\n        }\n\n        RecordBatch::try_new(Arc::new(Schema::new(fields)), columns)\n            .expect(\"primary-key entity insert batch should build\")\n    }\n\n    #[test]\n    fn excludes_non_entity_builtin_session_surfaces() {\n        assert!(!schema_exposed_as_entity_surface(\"lix_active_account\"));\n        assert!(schema_exposed_as_entity_surface(\"project_message\"));\n    }\n\n    #[test]\n    fn derives_entity_surface_spec_from_schema_definition() {\n        let spec = derive_entity_surface_spec_from_schema(&json!({\n            \"x-lix-key\": \"project_message\",\n            \"type\": \"object\",\n            \"properties\": {\n                \"body\": { \"type\": \"string\" },\n                \"rating\": { \"type\": \"number\" },\n                \"meta\": { \"type\": \"object\" },\n                \"lixcol_entity_id\": { \"type\": \"string\" }\n            }\n        }))\n        .expect(\"schema should derive entity surface spec\");\n\n        assert_eq!(spec.schema_key, \"project_message\");\n        assert_eq!(\n            spec.visible_column_names().collect::<Vec<_>>(),\n            vec![\"body\", \"meta\", \"rating\"]\n        );\n        assert_eq!(\n            spec.visible_column(\"body\").map(|column| column.column_type),\n            Some(EntityColumnType::String)\n        );\n        assert_eq!(\n            spec.visible_column(\"rating\")\n                .map(|column| column.column_type),\n            Some(EntityColumnType::Number)\n        );\n        assert_eq!(\n            spec.visible_column(\"meta\").map(|column| column.column_type),\n            Some(EntityColumnType::Json)\n        );\n        assert!(!spec.is_visible_column(\"lixcol_entity_id\"));\n    }\n\n    #[test]\n    fn entity_surface_spec_rejects_properties_without_projection_type() {\n        let error = 
derive_entity_surface_spec_from_schema(&json!({\n            \"x-lix-key\": \"project_message\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"kind\": {}\n            },\n            \"required\": [\"id\", \"kind\"],\n            \"additionalProperties\": false\n        }))\n        .expect_err(\"unprojectable property should be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);\n        assert!(\n            error.message.contains(\"property '/kind'\"),\n            \"error should identify the property: {error:?}\"\n        );\n    }\n\n    #[test]\n    fn by_version_schema_includes_version_system_column() {\n        let spec = derive_entity_surface_spec_from_schema(&json!({\n            \"x-lix-key\": \"project_message\",\n            \"type\": \"object\",\n            \"properties\": {\n                \"body\": { \"type\": \"string\" }\n            }\n        }))\n        .expect(\"schema should derive entity surface spec\");\n\n        let schema = entity_surface_schema(&spec, EntityProviderVariant::ByVersion);\n        assert!(schema.field_with_name(\"body\").is_ok());\n        assert!(schema.field_with_name(\"lixcol_entity_id\").is_ok());\n        assert!(schema.field_with_name(\"lixcol_version_id\").is_ok());\n    }\n\n    #[test]\n    fn active_schema_excludes_version_system_column() {\n        let spec = derive_entity_surface_spec_from_schema(&json!({\n            \"x-lix-key\": \"project_message\",\n            \"type\": \"object\",\n            \"properties\": {\n                \"body\": { \"type\": \"string\" }\n            }\n        }))\n        .expect(\"schema should derive entity surface spec\");\n\n        let schema = entity_surface_schema(&spec, EntityProviderVariant::Active);\n        assert!(schema.field_with_name(\"body\").is_ok());\n        
assert!(schema.field_with_name(\"lixcol_entity_id\").is_ok());\n        assert!(schema.field_with_name(\"lixcol_version_id\").is_err());\n    }\n\n    #[test]\n    fn insert_schema_allows_defaulted_identity_columns_to_be_omitted() {\n        let spec = derive_entity_surface_spec_from_schema(&json!({\n            \"x-lix-key\": \"project_message\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\", \"x-lix-default\": \"lix_uuid_v7()\" },\n                \"body\": { \"type\": \"string\" }\n            }\n        }))\n        .expect(\"schema should derive entity surface spec\");\n\n        let schema = entity_surface_schema(&spec, EntityProviderVariant::Active);\n        assert!(\n            schema\n                .field_with_name(\"id\")\n                .expect(\"id field\")\n                .is_nullable(),\n            \"defaulted primary-key property should be nullable at SQL input\"\n        );\n        assert!(\n            schema\n                .field_with_name(\"lixcol_entity_id\")\n                .expect(\"entity id field\")\n                .is_nullable(),\n            \"opaque identity projection should be nullable for normal primary-key inserts\"\n        );\n    }\n\n    #[test]\n    fn record_batch_projects_payload_and_system_columns() {\n        let spec = Arc::new(\n            derive_entity_surface_spec_from_schema(&json!({\n                \"x-lix-key\": \"project_message\",\n                \"type\": \"object\",\n                \"properties\": {\n                    \"body\": { \"type\": \"string\" },\n                    \"rating\": { \"type\": \"number\" },\n                    \"count\": { \"type\": \"integer\" },\n                    \"enabled\": { \"type\": \"boolean\" },\n                    \"meta\": { \"type\": \"object\" }\n                }\n            }))\n            .expect(\"schema should derive entity surface 
spec\"),\n        );\n        let schema = entity_surface_schema(&spec, EntityProviderVariant::ByVersion);\n\n        let batch =\n            entity_record_batch(&spec, schema, &[live_row()]).expect(\"entity batch should build\");\n\n        assert_eq!(batch.num_rows(), 1);\n        assert_eq!(\n            batch\n                .column_by_name(\"body\")\n                .expect(\"body column\")\n                .as_any()\n                .downcast_ref::<datafusion::arrow::array::StringArray>()\n                .expect(\"body is string\")\n                .value(0),\n            \"hello\"\n        );\n        assert_eq!(\n            batch\n                .column_by_name(\"rating\")\n                .expect(\"rating column\")\n                .as_any()\n                .downcast_ref::<Float64Array>()\n                .expect(\"rating is f64\")\n                .value(0),\n            4.5\n        );\n        assert_eq!(\n            batch\n                .column_by_name(\"count\")\n                .expect(\"count column\")\n                .as_any()\n                .downcast_ref::<Int64Array>()\n                .expect(\"count is i64\")\n                .value(0),\n            7\n        );\n        assert_eq!(\n            batch\n                .column_by_name(\"lixcol_entity_id\")\n                .expect(\"entity id column\")\n                .as_any()\n                .downcast_ref::<datafusion::arrow::array::StringArray>()\n                .expect(\"entity id is string\")\n                .value(0),\n            \"[\\\"entity-1\\\"]\"\n        );\n        assert_eq!(\n            batch\n                .column_by_name(\"lixcol_version_id\")\n                .expect(\"version id column\")\n                .as_any()\n                .downcast_ref::<datafusion::arrow::array::StringArray>()\n                .expect(\"version id is string\")\n                .value(0),\n            \"version-a\"\n        );\n    }\n\n    #[tokio::test]\n    async fn 
provider_registers_as_table_provider() {\n        let spec = Arc::new(\n            derive_entity_surface_spec_from_schema(&json!({\n                \"x-lix-key\": \"project_message\",\n                \"type\": \"object\",\n                \"properties\": {\n                    \"body\": { \"type\": \"string\" }\n                }\n            }))\n            .expect(\"schema should derive entity surface spec\"),\n        );\n        let provider = super::EntityProvider::by_version(\n            spec,\n            Arc::new(EmptyLiveStateReader) as Arc<dyn LiveStateReader>,\n            empty_version_ref(),\n        );\n\n        assert!(provider.schema.field_with_name(\"lixcol_version_id\").is_ok());\n    }\n\n    #[test]\n    fn primary_key_filters_route_entity_ids_for_string_primary_key() {\n        let spec = entity_insert_spec_with_primary_key();\n        let filters = vec![\n            eq_filter(\"id\", \"entity-a\"),\n            Expr::InList(InList::new(\n                Box::new(column(\"id\")),\n                vec![string_literal(\"entity-b\"), string_literal(\"entity-a\")],\n                false,\n            )),\n        ];\n\n        let entity_ids = super::entity_ids_from_primary_key_filters(&spec, &filters)\n            .expect(\"primary-key filters should analyze\")\n            .expect(\"primary-key filters should produce a constraint\");\n\n        assert_eq!(\n            entity_ids,\n            vec![crate::entity_identity::EntityIdentity::single(\"entity-a\")]\n        );\n    }\n\n    #[test]\n    fn primary_key_filter_analyzer_models_boolean_predicates() {\n        let spec = entity_insert_spec_with_primary_key();\n        let analyzer = super::EntityPrimaryKeyFilterAnalyzer::new(&spec);\n        let disjunction = Expr::BinaryExpr(BinaryExpr::new(\n            Box::new(eq_filter(\"id\", \"entity-a\")),\n            Operator::Or,\n            Box::new(eq_filter(\"id\", \"entity-b\")),\n        ));\n        let contradiction = 
Expr::BinaryExpr(BinaryExpr::new(\n            Box::new(eq_filter(\"id\", \"entity-a\")),\n            Operator::And,\n            Box::new(eq_filter(\"id\", \"entity-b\")),\n        ));\n\n        let disjunction_ids = analyzer\n            .analyze(&disjunction)\n            .expect(\"OR should analyze\")\n            .expect(\"OR should produce an entity-id set\");\n        let contradiction_ids = analyzer\n            .analyze(&contradiction)\n            .expect(\"AND should analyze\")\n            .expect(\"AND should produce an entity-id set\");\n\n        assert_eq!(\n            disjunction_ids.into_iter().collect::<Vec<_>>(),\n            vec![\n                crate::entity_identity::EntityIdentity::single(\"entity-a\"),\n                crate::entity_identity::EntityIdentity::single(\"entity-b\"),\n            ]\n        );\n        assert!(contradiction_ids.is_empty());\n    }\n\n    #[test]\n    fn primary_key_filters_ignore_non_key_and_negated_predicates() {\n        let spec = entity_insert_spec_with_primary_key();\n        let filters = vec![\n            eq_filter(\"body\", \"hello\"),\n            Expr::InList(InList::new(\n                Box::new(column(\"id\")),\n                vec![string_literal(\"entity-a\")],\n                true,\n            )),\n        ];\n\n        assert!(super::entity_ids_from_primary_key_filters(&spec, &filters)\n            .expect(\"ignored filters should analyze\")\n            .unwrap_or_default()\n            .is_empty());\n    }\n\n    #[test]\n    fn decodes_by_version_entity_insert_into_lix_state_write_row() {\n        let spec = entity_insert_spec();\n        let rows = entity_lix_state_write_rows_from_batch(\n            &spec,\n            &entity_insert_batch(true, false),\n            &InsertColumnIntents::all_explicit(),\n            None,\n        )\n        .expect(\"entity batch should decode\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(\n            rows[0].entity_id.as_ref(),\n 
           Some(&crate::entity_identity::EntityIdentity::single(\"entity-1\"))\n        );\n        assert_eq!(rows[0].schema_key, \"project_message\");\n        assert_eq!(rows[0].version_id, \"version-a\");\n        assert_eq!(\n            rows[0].metadata.as_ref(),\n            Some(&TransactionJson::from_value_for_test(\n                json!({\"source\": \"entity\"})\n            ))\n        );\n        assert!(!rows[0].global);\n        assert_eq!(\n            rows[0].snapshot.as_ref().expect(\"snapshot_content\"),\n            &json!({\n                \"body\": \"hello\",\n                \"count\": 7,\n                \"enabled\": true,\n                \"meta\": {\"x\": 1},\n                \"rating\": 4.5\n            })\n        );\n    }\n\n    #[test]\n    fn primary_key_entity_insert_stages_partial_row_for_normalization() {\n        let spec = entity_insert_spec_with_primary_key();\n        let rows = entity_lix_state_write_rows_from_batch(\n            &spec,\n            &primary_key_entity_insert_batch(false),\n            &InsertColumnIntents::all_explicit(),\n            None,\n        )\n        .expect(\"entity batch should decode\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, None);\n        assert_eq!(\n            rows[0].snapshot.as_ref().expect(\"snapshot_content\"),\n            &json!({\n                \"body\": \"hello\",\n                \"id\": \"message-1\"\n            })\n        );\n    }\n\n    #[test]\n    fn primary_key_entity_insert_preserves_explicit_opaque_projection_for_normalization() {\n        let spec = entity_insert_spec_with_primary_key();\n        let rows = entity_lix_state_write_rows_from_batch(\n            &spec,\n            &primary_key_entity_insert_batch(true),\n            &InsertColumnIntents::all_explicit(),\n            None,\n        )\n        .expect(\"primary-key entity insert should stage explicit lixcol_entity_id\");\n\n        assert_eq!(rows.len(), 1);\n       
 assert_eq!(\n            rows[0].entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\"message-1\"))\n        );\n    }\n\n    #[test]\n    fn active_entity_insert_defaults_version_id() {\n        let spec = entity_insert_spec();\n        let rows = entity_lix_state_write_rows_from_batch(\n            &spec,\n            &entity_insert_batch(false, false),\n            &InsertColumnIntents::all_explicit(),\n            Some(\"version-active\"),\n        )\n        .expect(\"active entity batch should decode\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].version_id, \"version-active\");\n        assert!(!rows[0].global);\n    }\n\n    #[test]\n    fn by_version_entity_insert_requires_version_id_for_non_global_rows() {\n        let spec = entity_insert_spec();\n        let error = entity_lix_state_write_rows_from_batch(\n            &spec,\n            &entity_insert_batch(false, false),\n            &InsertColumnIntents::all_explicit(),\n            None,\n        )\n        .expect_err(\"by-version entity insert should require version id\");\n\n        assert!(\n            error.to_string().contains(\"requires lixcol_version_id\"),\n            \"unexpected error: {error}\"\n        );\n    }\n\n    #[test]\n    fn by_version_entity_insert_global_row_uses_global_version() {\n        let spec = entity_insert_spec();\n        let rows = entity_lix_state_write_rows_from_batch(\n            &spec,\n            &entity_insert_batch(false, true),\n            &InsertColumnIntents::all_explicit(),\n            None,\n        )\n        .expect(\"global entity batch should decode\");\n\n        assert_eq!(rows.len(), 1);\n        assert!(rows[0].global);\n        assert_eq!(rows[0].version_id, crate::GLOBAL_VERSION_ID);\n    }\n\n    #[test]\n    fn entity_insert_rejects_global_with_non_global_version_id() {\n        let spec = entity_insert_spec();\n        let error = entity_lix_state_write_rows_from_batch(\n     
       &spec,\n            &entity_insert_batch(true, true),\n            &InsertColumnIntents::all_explicit(),\n            None,\n        )\n        .expect_err(\"global entity write should reject conflicting version id\");\n\n        assert!(\n            error\n                .to_string()\n                .contains(\"cannot set lixcol_global=true with non-global lixcol_version_id\"),\n            \"unexpected error: {error}\"\n        );\n    }\n\n    #[tokio::test]\n    async fn entity_insert_sink_stages_decoded_lix_state_rows() {\n        let spec = entity_insert_spec();\n        let mut write_context = CapturingWriteContext::default();\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let batch = entity_insert_batch(true, false);\n        let sink = EntityInsertSink::new(\n            Arc::clone(&spec),\n            batch.schema(),\n            InsertColumnIntents::all_explicit(),\n            write_ctx,\n            super::VersionBinding::explicit(),\n        );\n        let count = sink\n            .write_batches(vec![batch], &Arc::new(TaskContext::default()))\n            .await\n            .expect(\"entity sink should stage write\");\n\n        assert_eq!(count, 1);\n        assert_eq!(\n            write_context.writes.as_slice(),\n            &[TransactionWrite::Rows {\n                mode: TransactionWriteMode::Insert,\n                rows: vec![TransactionWriteRow {\n                    entity_id: Some(crate::entity_identity::EntityIdentity::single(\"entity-1\")),\n                    schema_key: \"project_message\".to_string(),\n                    file_id: None,\n                    snapshot: Some(TransactionJson::from_value_for_test(\n                        json!({\"body\":\"hello\",\"count\":7,\"enabled\":true,\"meta\":{\"x\":1},\"rating\":4.5})\n                    )),\n                    metadata: Some(TransactionJson::from_value_for_test(\n                        json!({\"source\": \"entity\"})\n                
    )),\n                    origin: None,\n                    created_at: None,\n                    updated_at: None,\n                    global: false,\n                    change_id: None,\n                    commit_id: None,\n                    untracked: false,\n                    version_id: \"version-a\".to_string(),\n                }]\n            }]\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/error.rs",
    "content": "use datafusion::error::DataFusionError;\n\nuse crate::LixError;\n\npub(crate) fn datafusion_error_to_lix_error(error: DataFusionError) -> LixError {\n    if let Some(error) = lix_error_from_datafusion_error(&error) {\n        return error;\n    }\n\n    classify_datafusion_error(&error)\n}\n\npub(crate) fn lix_error_to_datafusion_error(error: LixError) -> DataFusionError {\n    DataFusionError::External(Box::new(error))\n}\n\nfn lix_error_from_datafusion_error(error: &DataFusionError) -> Option<LixError> {\n    match error {\n        DataFusionError::External(error) => error\n            .downcast_ref::<LixError>()\n            .cloned()\n            .map(normalize_external_sql_error),\n        DataFusionError::Context(_, error) | DataFusionError::Diagnostic(_, error) => {\n            lix_error_from_datafusion_error(error)\n        }\n        DataFusionError::Shared(error) => lix_error_from_datafusion_error(error),\n        DataFusionError::Collection(errors) => {\n            errors.iter().find_map(lix_error_from_datafusion_error)\n        }\n        _ => None,\n    }\n}\n\nfn normalize_external_sql_error(error: LixError) -> LixError {\n    let lower = error.message.to_ascii_lowercase();\n    if (error.code.starts_with(\"LIX_ERROR_PATH_\")\n        && error.code != \"LIX_ERROR_PATH_INVALID_SEGMENT_CODE_POINT\")\n        || error.code == LixError::CODE_INVALID_JSON_PATH\n        || (error.code == LixError::CODE_TYPE_MISMATCH\n            && lower.contains(\"cannot store blob values directly\"))\n        || (error.code == LixError::CODE_SCHEMA_DEFINITION && lower.contains(\"system schema\"))\n    {\n        return LixError {\n            code: LixError::CODE_INVALID_PARAM.to_string(),\n            ..error\n        };\n    }\n    error\n}\n\nfn classify_datafusion_error(error: &DataFusionError) -> LixError {\n    let message = format!(\"sql2 DataFusion error: {error}\");\n    let lower = message.to_ascii_lowercase();\n\n    if 
looks_like_json_udf_miss(&lower) {\n        return LixError::new(LixError::CODE_UDF_NOT_FOUND, message)\n            .with_hint(\"Use lix_json_get(json, key_or_index, ...) for JSON values or lix_json_get_text(json, key_or_index, ...) for text.\");\n    }\n\n    if looks_like_unsupported_dialect(&lower) {\n        return LixError::new(LixError::CODE_DIALECT_UNSUPPORTED, message)\n            .with_hint(\"Lix SQL uses DataFusion syntax. Use lix_json_get(...) or lix_json_get_text(...) for JSON access, and numbered placeholders like $1, $2, ...\");\n    }\n\n    if looks_like_unsupported_runtime_plan(&lower) {\n        return LixError::new(LixError::CODE_UNSUPPORTED_SQL_RUNTIME_PLAN, message)\n            .with_hint(\"This SQL feature currently plans to a physical operator that is not supported by this engine runtime. Rewrite the query to avoid the unsupported operator, or run it on a runtime that supports the full physical plan.\");\n    }\n\n    if lower.contains(\"uses variadic path segments\") {\n        return LixError::new(LixError::CODE_INVALID_JSON_PATH, message)\n            .with_hint(\"Pass path segments as separate arguments, for example lix_json_get_text(document, 'user', 'name'), not '$.user.name' or '/user/name'.\");\n    }\n\n    if lower.contains(\"failed to parse placeholder id\")\n        || lower.contains(\"placeholder\")\n        || lower.contains(\"bind\")\n    {\n        return LixError::new(LixError::CODE_PARSE_ERROR, message).with_hint(\n            \"Use numbered placeholders like $1, $2, ...; '?' 
placeholders are not supported.\",\n        );\n    }\n\n    if lower.contains(\"requires start_commit_id\")\n        || lower.contains(\"history filter\")\n        || lower.contains(\"history table\")\n    {\n        return LixError::new(LixError::CODE_HISTORY_FILTER_REQUIRED, message)\n            .with_hint(\"Add a commit/version range predicate before querying history tables.\");\n    }\n\n    if lower.contains(\"table not found\")\n        || (lower.contains(\"table\") && lower.contains(\"not found\"))\n        || lower.contains(\"no table named\")\n        || lower.contains(\"failed to resolve table\")\n        || lower.contains(\"could not find table\")\n        || (lower.contains(\"relation\") && lower.contains(\"not found\"))\n    {\n        return LixError::new(LixError::CODE_TABLE_NOT_FOUND, message)\n            .with_hint(\"Use information_schema.tables to inspect available Lix SQL tables.\");\n    }\n\n    if (lower.contains(\"column\") || lower.contains(\"field\"))\n        && (lower.contains(\"not found\")\n            || lower.contains(\"does not exist\")\n            || lower.contains(\"no field named\"))\n    {\n        return LixError::new(LixError::CODE_COLUMN_NOT_FOUND, message);\n    }\n\n    if lower.contains(\"schema validation\") {\n        return LixError::new(LixError::CODE_SCHEMA_VALIDATION, message);\n    }\n\n    if lower.contains(\"schema definition\") {\n        return LixError::new(LixError::CODE_SCHEMA_DEFINITION, message);\n    }\n\n    if lower.contains(\"unsupported sql type json\") {\n        return LixError::new(LixError::CODE_DIALECT_UNSUPPORTED, message)\n            .with_hint(\"Declare JSON/object columns through lix.registerSchema(...) 
or lix_registered_schema; SQL type JSON is not supported.\");\n    }\n\n    if looks_like_type_mismatch(&lower) {\n        if lower.contains(\"encountered non utf-8 data\") {\n            return LixError::new(\n                LixError::CODE_TYPE_MISMATCH,\n                \"Lix SQL string functions require valid UTF-8 text; blob data could not be decoded as UTF-8\",\n            )\n            .with_hint(\n                \"Pass text to string functions. Raw blob parameters stay binary and are not implicitly decoded as UTF-8.\",\n            );\n        }\n        return LixError::new(LixError::CODE_TYPE_MISMATCH, message)\n            .with_hint(\"Check the SQL function argument types. JSON text can be converted with lix_json(...); JSON fields can be read with lix_json_get(...) or lix_json_get_text(...).\");\n    }\n\n    if matches!(\n        error,\n        DataFusionError::Plan(_) | DataFusionError::SchemaError(_, _)\n    ) {\n        return LixError::new(LixError::CODE_PARSE_ERROR, message);\n    }\n\n    if lower.contains(\"constraint\")\n        || lower.contains(\"not null\")\n        || lower.contains(\"non-nullable\")\n        || lower.contains(\"unique\")\n        || lower.contains(\"duplicate\")\n        || lower.contains(\"primary key\")\n        || lower.contains(\"foreign key\")\n    {\n        return LixError::new(LixError::CODE_CONSTRAINT_VIOLATION, message);\n    }\n\n    match error {\n        DataFusionError::SQL(_, _) => LixError::new(LixError::CODE_PARSE_ERROR, message),\n        DataFusionError::NotImplemented(_) => {\n            LixError::new(LixError::CODE_DIALECT_UNSUPPORTED, message)\n        }\n        DataFusionError::Plan(_) | DataFusionError::SchemaError(_, _) => {\n            LixError::new(LixError::CODE_PARSE_ERROR, message)\n        }\n        DataFusionError::IoError(_) | DataFusionError::ObjectStore(_) => {\n            LixError::new(LixError::CODE_STORAGE_ERROR, message)\n        }\n        DataFusionError::Internal(_) => 
LixError::new(LixError::CODE_INTERNAL_ERROR, message),\n        _ => LixError::new(LixError::CODE_UNKNOWN, message),\n    }\n}\n\nfn looks_like_json_udf_miss(lower: &str) -> bool {\n    let json_function_guess = [\n        \"json_extract\",\n        \"json_get\",\n        \"json_get_string\",\n        \"json_get_text\",\n        \"json_extract_string\",\n        \"json_extract_text\",\n    ]\n    .iter()\n    .any(|name| lower.contains(name));\n\n    json_function_guess\n        && (lower.contains(\"function\")\n            || lower.contains(\"udf\")\n            || lower.contains(\"not found\")\n            || lower.contains(\"does not exist\")\n            || lower.contains(\"did you mean\"))\n}\n\nfn looks_like_unsupported_dialect(lower: &str) -> bool {\n    lower.contains(\"->>\")\n        || lower.contains(\"operator does not exist\")\n        || lower.contains(\"unsupported sql type json\")\n        || lower.contains(\"sqlite_master\")\n        || lower.contains(\"returning\")\n}\n\nfn looks_like_unsupported_runtime_plan(lower: &str) -> bool {\n    lower.contains(\"sql physical operator\")\n        && lower.contains(\"is not supported by the webassembly runtime yet\")\n}\n\nfn looks_like_type_mismatch(lower: &str) -> bool {\n    (lower.contains(\"type\")\n        || lower.contains(\"signature\")\n        || lower.contains(\"coerc\")\n        || lower.contains(\"argument\")\n        || lower.contains(\"convert\"))\n        && (lower.contains(\"mismatch\")\n            || lower.contains(\"incompatible\")\n            || lower.contains(\"expected\")\n            || lower.contains(\"cannot\")\n            || lower.contains(\"invalid\"))\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/execute.rs",
    "content": "use datafusion::arrow::datatypes::Field;\nuse datafusion::arrow::record_batch::RecordBatch;\nuse datafusion::common::metadata::{FieldMetadata, ScalarAndMetadata};\nuse datafusion::common::{ParamValues, ScalarValue};\nuse datafusion::logical_expr::{Expr, LogicalPlan, WriteOp};\nuse datafusion::prelude::SessionContext;\nuse datafusion::sql::parser::Statement as DataFusionStatement;\nuse serde_json::{json, Value as JsonValue};\nuse std::collections::{BTreeMap, BTreeSet, HashSet};\n\nuse crate::schema::schema_key_from_definition;\nuse crate::{LixError, LixNotice, SqlQueryResult, Value};\n\nuse super::predicate_typecheck::validate_json_predicate_expr_with_dfschema;\nuse super::result_metadata::{field_is_json, LIX_VALUE_TYPE_JSON, LIX_VALUE_TYPE_METADATA_KEY};\nuse super::session::{build_read_session, build_write_session, new_sql_session_context};\nuse super::write_normalization::{\n    is_binary_type, lix_file_data_type_lix_error, logical_expr_is_binary_or_null,\n};\nuse super::{SqlExecutionContext, SqlStatementKind, SqlWriteExecutionContext};\n\n#[allow(dead_code)]\npub(crate) struct SqlLogicalPlan {\n    session: SessionContext,\n    plan: LogicalPlan,\n    kind: SqlStatementKind,\n    notices: Vec<LixNotice>,\n    strict_binary_params: BTreeSet<usize>,\n}\n\nimpl SqlLogicalPlan {\n    #[allow(dead_code)]\n    pub(crate) fn kind(&self) -> SqlStatementKind {\n        self.kind\n    }\n\n    #[allow(dead_code)]\n    pub(crate) fn is_write(&self) -> bool {\n        self.kind == SqlStatementKind::Write\n    }\n}\n\n/// Minimal top-level sql2 entrypoint.\n///\n/// The final implementation will build the DataFusion session from the\n/// execution context and source rows from `live_state()`.\n///\n/// `catalog()` is intentionally omitted from the MVP boundary for now.\n#[allow(dead_code)]\npub(crate) async fn execute_sql(\n    ctx: &dyn SqlExecutionContext,\n    sql: &str,\n    params: &[Value],\n) -> Result<SqlQueryResult, LixError> {\n    let plan = 
create_logical_plan(ctx, sql).await?;\n    execute_logical_plan(plan, params).await\n}\n\npub(crate) async fn create_logical_plan(\n    ctx: &dyn SqlExecutionContext,\n    sql: &str,\n) -> Result<SqlLogicalPlan, LixError> {\n    super::validate_supported_statement_ast(sql)?;\n    super::udfs::validate_public_udf_calls(sql)?;\n    validate_public_read_sql_surface(sql)?;\n    let session = build_read_session(ctx).await?;\n    let plan = session\n        .state()\n        .create_logical_plan(sql)\n        .await\n        .map_err(datafusion_error_to_lix_error)?;\n    validate_supported_logical_plan(&plan)?;\n    validate_json_predicates_in_logical_plan(&plan)?;\n    let kind = classify_logical_plan(&plan);\n    let notices = history_filter_notices(&plan);\n\n    Ok(SqlLogicalPlan {\n        session,\n        plan,\n        kind,\n        notices,\n        strict_binary_params: BTreeSet::new(),\n    })\n}\n\n#[allow(dead_code)]\npub(crate) async fn create_write_logical_plan(\n    ctx: &mut dyn SqlWriteExecutionContext,\n    sql: &str,\n) -> Result<SqlLogicalPlan, LixError> {\n    super::udfs::validate_public_udf_calls(sql)?;\n    let visible_schemas = ctx.list_visible_schemas()?;\n    super::public_bind::validate_public_dml_sql(sql, &visible_schemas)?;\n    let statement = parse_datafusion_statement(sql)?;\n    super::validate_supported_datafusion_statement_ast(&statement)?;\n    reject_read_only_history_view_dml_from_statement(&statement, &visible_schemas)?;\n    let session = build_write_session(ctx).await?;\n    let plan = create_logical_plan_from_statement(&session, statement).await?;\n    validate_supported_logical_plan(&plan)?;\n    super::public_bind::validate_public_dml_plan(&plan, &visible_schemas)?;\n    validate_json_predicates_in_logical_plan(&plan)?;\n    let strict_binary_params = validate_strict_lix_file_data_writes(&plan)?;\n    let kind = classify_logical_plan(&plan);\n\n    Ok(SqlLogicalPlan {\n        session,\n        plan,\n        kind,\n        
notices: Vec::new(),\n        strict_binary_params,\n    })\n}\n\nfn validate_public_read_sql_surface(sql: &str) -> Result<(), LixError> {\n    let normalized = sql.to_ascii_lowercase();\n    if normalized.contains(\"lower(path)\") {\n        return Err(LixError::new(\n            LixError::CODE_UNSUPPORTED_SQL,\n            \"public column 'path' must be compared directly to a literal or parameter\",\n        ));\n    }\n    if normalized.contains(\"lixcol_version_id\")\n        && (normalized.contains(\"= lower(\") || normalized.contains(\" in (lower(\"))\n    {\n        return Err(LixError::new(\n            LixError::CODE_UNSUPPORTED_SQL,\n            \"public column 'lixcol_version_id' must be compared directly to a literal or parameter\",\n        ));\n    }\n    Ok(())\n}\n\nfn parse_datafusion_statement(sql: &str) -> Result<DataFusionStatement, LixError> {\n    let session = new_sql_session_context();\n    let dialect = session.state().config_options().sql_parser.dialect;\n    session\n        .state()\n        .sql_to_statement(sql, &dialect)\n        .map_err(datafusion_error_to_lix_error)\n}\n\nasync fn create_logical_plan_from_statement(\n    session: &SessionContext,\n    statement: DataFusionStatement,\n) -> Result<LogicalPlan, LixError> {\n    session\n        .state()\n        .statement_to_plan(statement)\n        .await\n        .map_err(datafusion_error_to_lix_error)\n}\n\nfn validate_json_predicates_in_logical_plan(plan: &LogicalPlan) -> Result<(), LixError> {\n    match plan {\n        LogicalPlan::Filter(filter) => {\n            validate_json_predicate_expr_with_dfschema(filter.input.schema(), &filter.predicate)?;\n        }\n        LogicalPlan::TableScan(scan) => {\n            for filter in &scan.filters {\n                validate_json_predicate_expr_with_dfschema(scan.projected_schema.as_ref(), filter)?;\n            }\n        }\n        _ => {}\n    }\n\n    for input in plan.inputs() {\n        
validate_json_predicates_in_logical_plan(input)?;\n    }\n\n    Ok(())\n}\n\nfn validate_strict_lix_file_data_writes(plan: &LogicalPlan) -> Result<BTreeSet<usize>, LixError> {\n    let mut strict_binary_params = BTreeSet::new();\n    let LogicalPlan::Dml(dml) = plan else {\n        return Ok(strict_binary_params);\n    };\n    if dml.table_name.table() != \"lix_file\"\n        || !matches!(dml.op, WriteOp::Insert(_) | WriteOp::Update)\n    {\n        return Ok(strict_binary_params);\n    }\n\n    reject_non_binary_lix_file_data_write(&dml.input, &mut strict_binary_params)?;\n    Ok(strict_binary_params)\n}\n\nfn reject_non_binary_lix_file_data_write(\n    input: &LogicalPlan,\n    strict_binary_params: &mut BTreeSet<usize>,\n) -> Result<(), LixError> {\n    let LogicalPlan::Projection(projection) = input else {\n        return Ok(());\n    };\n\n    let Some(data_expr) = projection.expr.iter().find_map(|expr| match expr {\n        Expr::Alias(alias) if alias.name == \"data\" => Some(alias.expr.as_ref()),\n        _ => None,\n    }) else {\n        return Ok(());\n    };\n\n    validate_lix_file_data_expr(data_expr, strict_binary_params)?;\n\n    let Expr::Column(column) = data_expr else {\n        return Ok(());\n    };\n    let LogicalPlan::Values(values) = projection.input.as_ref() else {\n        return Ok(());\n    };\n    let Ok(column_index) = values.schema.index_of_column(column) else {\n        return Ok(());\n    };\n\n    for row in &values.values {\n        if let Some(value_expr) = row.get(column_index) {\n            validate_lix_file_data_expr(value_expr, strict_binary_params)?;\n        }\n    }\n\n    Ok(())\n}\n\nfn validate_lix_file_data_expr(\n    expr: &Expr,\n    strict_binary_params: &mut BTreeSet<usize>,\n) -> Result<(), LixError> {\n    match expr {\n        Expr::Cast(cast) if is_binary_type(&cast.data_type) => {\n            if collect_placeholder_param(&cast.expr, strict_binary_params)? 
{\n                return Ok(());\n            }\n            if !logical_expr_is_binary_or_null(&cast.expr) {\n                return Err(lix_file_data_type_lix_error());\n            }\n        }\n        Expr::Placeholder(_) => {\n            collect_placeholder_param(expr, strict_binary_params)?;\n        }\n        Expr::Alias(alias) => validate_lix_file_data_expr(&alias.expr, strict_binary_params)?,\n        _ => {}\n    }\n    Ok(())\n}\n\nfn collect_placeholder_param(\n    expr: &Expr,\n    strict_binary_params: &mut BTreeSet<usize>,\n) -> Result<bool, LixError> {\n    match expr {\n        Expr::Placeholder(placeholder) => {\n            let index = placeholder_index(&placeholder.id)?;\n            strict_binary_params.insert(index);\n            Ok(true)\n        }\n        Expr::Alias(alias) => collect_placeholder_param(&alias.expr, strict_binary_params),\n        _ => Ok(false),\n    }\n}\n\nfn placeholder_index(id: &str) -> Result<usize, LixError> {\n    id.strip_prefix('$')\n        .and_then(|raw| raw.parse::<usize>().ok())\n        .filter(|index| *index > 0)\n        .ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_PARSE_ERROR,\n                format!(\"unsupported SQL parameter placeholder '{id}'\"),\n            )\n            .with_hint(\"Use numbered placeholders like $1, $2, ...\")\n        })\n}\n\npub(crate) async fn execute_logical_plan(\n    plan: SqlLogicalPlan,\n    params: &[Value],\n) -> Result<SqlQueryResult, LixError> {\n    let SqlLogicalPlan {\n        session,\n        plan,\n        kind: _,\n        notices,\n        strict_binary_params,\n    } = plan;\n    validate_parameter_count(&plan, params.len())?;\n    validate_strict_binary_params(&strict_binary_params, params)?;\n\n    let mut dataframe = session\n        .execute_logical_plan(plan)\n        .await\n        .map_err(datafusion_error_to_lix_error)?;\n    if !params.is_empty() {\n        dataframe = dataframe\n            
.with_param_values(ParamValues::List(\n                params.iter().map(scalar_value_from_lix_value).collect(),\n            ))\n            .map_err(datafusion_error_to_lix_error)?;\n    }\n\n    let result_fields = dataframe\n        .schema()\n        .fields()\n        .iter()\n        .map(|field| field.as_ref().clone())\n        .collect::<Vec<_>>();\n    let batches = super::runtime::collect_dataframe(dataframe)\n        .await\n        .map_err(datafusion_error_to_lix_error)?;\n    let mut result = query_result_from_batches(&result_fields, &batches)?;\n    result.notices = notices;\n    Ok(result)\n}\n\nfn validate_strict_binary_params(\n    strict_binary_params: &BTreeSet<usize>,\n    params: &[Value],\n) -> Result<(), LixError> {\n    for index in strict_binary_params {\n        let Some(value) = params.get(index - 1) else {\n            continue;\n        };\n        if !matches!(value, Value::Blob(_)) {\n            return Err(lix_file_data_type_lix_error());\n        }\n    }\n    Ok(())\n}\n\nfn validate_parameter_count(plan: &LogicalPlan, param_count: usize) -> Result<(), LixError> {\n    let parameter_names = plan\n        .get_parameter_names()\n        .map_err(datafusion_error_to_lix_error)?;\n    let expected_count = expected_positional_parameter_count(&parameter_names)?;\n    if param_count == expected_count {\n        return Ok(());\n    }\n\n    Err(LixError::new(\n        LixError::CODE_INVALID_PARAM,\n        format!(\n            \"SQL expected {expected_count} parameter(s), but {param_count} parameter(s) were provided\"\n        ),\n    )\n    .with_details(json!({\n        \"operation\": \"execute\",\n        \"expected_param_count\": expected_count,\n        \"provided_param_count\": param_count,\n        \"placeholders\": sorted_parameter_names(&parameter_names),\n    })))\n}\n\nfn expected_positional_parameter_count(\n    parameter_names: &HashSet<String>,\n) -> Result<usize, LixError> {\n    let mut max_index = 0usize;\n    for name 
in parameter_names {\n        let Some(index) = name\n            .strip_prefix('$')\n            .and_then(|raw| raw.parse::<usize>().ok())\n        else {\n            return Err(LixError::new(\n                LixError::CODE_PARSE_ERROR,\n                format!(\"unsupported SQL parameter placeholder '{name}'\"),\n            )\n            .with_hint(\"Use numbered placeholders like $1, $2, ...\")\n            .with_details(json!({\n                \"operation\": \"execute\",\n                \"placeholder\": name,\n            })));\n        };\n        if index == 0 {\n            return Err(LixError::new(\n                LixError::CODE_PARSE_ERROR,\n                \"SQL parameter placeholders are 1-indexed\",\n            )\n            .with_hint(\"Use numbered placeholders like $1, $2, ...\")\n            .with_details(json!({\n                \"operation\": \"execute\",\n                \"placeholder\": name,\n            })));\n        }\n        max_index = max_index.max(index);\n    }\n    Ok(max_index)\n}\n\nfn sorted_parameter_names(parameter_names: &HashSet<String>) -> Vec<String> {\n    let mut names = parameter_names.iter().cloned().collect::<Vec<_>>();\n    names.sort();\n    names\n}\n\nfn reject_read_only_history_view_dml_from_statement(\n    statement: &DataFusionStatement,\n    visible_schemas: &[JsonValue],\n) -> Result<(), LixError> {\n    let target_names = super::datafusion_statement_dml_target_table_names(statement);\n    for target_name in target_names {\n        if is_history_view_name(&target_name, visible_schemas)? 
{\n            return Err(read_only_history_view_error(&target_name));\n        }\n    }\n    Ok(())\n}\n\nfn is_history_view_name(table_name: &str, visible_schemas: &[JsonValue]) -> Result<bool, LixError> {\n    if matches!(\n        table_name,\n        \"lix_state_history\" | \"lix_file_history\" | \"lix_directory_history\"\n    ) {\n        return Ok(true);\n    }\n\n    for schema in visible_schemas {\n        let schema_key = schema_key_from_definition(schema)?;\n        if table_name == format!(\"{}_history\", schema_key.schema_key) {\n            return Ok(true);\n        }\n    }\n\n    Ok(false)\n}\n\nfn read_only_history_view_error(view_name: &str) -> LixError {\n    LixError::new(\n        LixError::CODE_READ_ONLY,\n        format!(\"DML cannot write to read-only history view '{view_name}'\"),\n    )\n    .with_hint(\n        \"History views are query-only; write to the live surface such as lix_state, lix_file, lix_directory, or the typed entity table.\",\n    )\n}\n\nfn classify_logical_plan(plan: &LogicalPlan) -> SqlStatementKind {\n    match plan {\n        LogicalPlan::Dml(_) => SqlStatementKind::Write,\n        LogicalPlan::Ddl(_) | LogicalPlan::Statement(_) | LogicalPlan::Copy(_) => {\n            SqlStatementKind::Other\n        }\n        _ => SqlStatementKind::Read,\n    }\n}\n\nfn validate_supported_logical_plan(plan: &LogicalPlan) -> Result<(), LixError> {\n    match plan {\n        LogicalPlan::Ddl(_) => {\n            return Err(LixError::new(\n                LixError::CODE_UNSUPPORTED_SQL,\n                \"DDL statements are not supported by Lix SQL\",\n            )\n            .with_hint(\n                \"Use Lix entity surfaces such as lix_registered_schema, lix_version, lix_file, and lix_key_value instead of CREATE/DROP statements.\",\n            ));\n        }\n        LogicalPlan::Statement(_) => {\n            return Err(LixError::new(\n                LixError::CODE_UNSUPPORTED_SQL,\n                \"SQL utility statements are 
not supported by Lix SQL\",\n            ));\n        }\n        LogicalPlan::Copy(_) => {\n            return Err(LixError::new(\n                LixError::CODE_UNSUPPORTED_SQL,\n                \"COPY statements are not supported by Lix SQL\",\n            ));\n        }\n        LogicalPlan::RecursiveQuery(_) => {\n            return Err(LixError::new(\n                LixError::CODE_UNSUPPORTED_SQL,\n                \"recursive CTEs are not supported by Lix SQL\",\n            )\n            .with_hint(\n                \"Use explicit commit graph surfaces such as lix_commit, lix_commit_edge, and lix_state_history instead of WITH RECURSIVE.\",\n            ));\n        }\n        _ => {}\n    }\n\n    for input in plan.inputs() {\n        validate_supported_logical_plan(input)?;\n    }\n\n    Ok(())\n}\n\nfn scalar_value_from_lix_value(value: &Value) -> ScalarAndMetadata {\n    match value {\n        Value::Null => ScalarValue::Null.into(),\n        Value::Boolean(value) => ScalarValue::Boolean(Some(*value)).into(),\n        Value::Integer(value) => ScalarValue::Int64(Some(*value)).into(),\n        Value::Real(value) => ScalarValue::Float64(Some(*value)).into(),\n        Value::Text(value) => ScalarValue::Utf8(Some(value.clone())).into(),\n        Value::Json(value) => ScalarAndMetadata::new(\n            ScalarValue::Utf8(Some(value.to_string())),\n            Some(json_field_metadata()),\n        ),\n        Value::Blob(value) => ScalarValue::Binary(Some(value.clone())).into(),\n    }\n}\n\nfn json_field_metadata() -> FieldMetadata {\n    FieldMetadata::new(BTreeMap::from([(\n        LIX_VALUE_TYPE_METADATA_KEY.to_string(),\n        LIX_VALUE_TYPE_JSON.to_string(),\n    )]))\n}\n\nfn datafusion_error_to_lix_error(error: datafusion::error::DataFusionError) -> LixError {\n    super::error::datafusion_error_to_lix_error(error)\n}\n\nfn query_result_from_batches(\n    result_fields: &[Field],\n    batches: &[RecordBatch],\n) -> Result<SqlQueryResult, LixError> 
{\n    let result_columns = result_fields\n        .iter()\n        .map(|field| field.name().to_string())\n        .collect::<Vec<_>>();\n    let mut rows = Vec::<Vec<Value>>::new();\n    for batch in batches {\n        for row_index in 0..batch.num_rows() {\n            let mut row = Vec::<Value>::with_capacity(batch.num_columns());\n            for (column_index, array) in batch.columns().iter().enumerate() {\n                let scalar = ScalarValue::try_from_array(array.as_ref(), row_index)\n                    .map_err(datafusion_error_to_lix_error)?;\n                let field = result_fields.get(column_index);\n                row.push(scalar_value_to_lix_value(&scalar, field)?);\n            }\n            rows.push(row);\n        }\n    }\n\n    Ok(SqlQueryResult {\n        rows,\n        columns: result_columns,\n        notices: Vec::new(),\n    })\n}\n\nfn history_filter_notices(plan: &LogicalPlan) -> Vec<LixNotice> {\n    let mut observations = Vec::new();\n    collect_notice_observations(plan, &Vec::new(), &mut observations);\n\n    let mut notices = Vec::new();\n    let mut emitted_codes = HashSet::<String>::new();\n    for observation in observations {\n        for rule in HISTORY_NOTICE_RULES {\n            if observation.table_name != rule.table_name {\n                continue;\n            }\n            if !observation.references_any(rule.payload_columns)\n                || observation.references_any(rule.identity_columns)\n            {\n                continue;\n            }\n\n            let code = format!(\"LIX_HISTORY_NON_IDENTITY_FILTER:{}\", rule.table_name);\n            if emitted_codes.insert(code) {\n                notices.push(history_non_identity_filter_notice(rule.table_name));\n            }\n        }\n    }\n    notices\n}\n\n#[derive(Debug)]\nstruct NoticeObservation {\n    table_name: String,\n    filter_columns: HashSet<String>,\n}\n\nimpl NoticeObservation {\n    fn references_any(&self, columns: &[&str]) -> 
bool {\n        columns\n            .iter()\n            .any(|column| self.filter_columns.contains(*column))\n    }\n}\n\nstruct HistoryNoticeRule {\n    table_name: &'static str,\n    payload_columns: &'static [&'static str],\n    identity_columns: &'static [&'static str],\n}\n\nconst HISTORY_NOTICE_RULES: &[HistoryNoticeRule] = &[\n    HistoryNoticeRule {\n        table_name: \"lix_file_history\",\n        payload_columns: &[\"path\", \"directory_id\", \"name\", \"hidden\", \"data\"],\n        identity_columns: &[\"id\", \"lixcol_entity_id\"],\n    },\n    HistoryNoticeRule {\n        table_name: \"lix_directory_history\",\n        payload_columns: &[\"path\", \"parent_id\", \"name\", \"hidden\"],\n        identity_columns: &[\"id\", \"lixcol_entity_id\"],\n    },\n];\n\nfn collect_notice_observations(\n    plan: &LogicalPlan,\n    active_filter_columns: &Vec<HashSet<String>>,\n    observations: &mut Vec<NoticeObservation>,\n) {\n    match plan {\n        LogicalPlan::Filter(filter) => {\n            let mut next_filters = active_filter_columns.clone();\n            next_filters.push(expr_column_names(&filter.predicate));\n            collect_notice_observations(&filter.input, &next_filters, observations);\n        }\n        LogicalPlan::TableScan(scan) => {\n            let mut filter_columns = HashSet::new();\n            for columns in active_filter_columns {\n                filter_columns.extend(columns.iter().cloned());\n            }\n            for filter in &scan.filters {\n                filter_columns.extend(expr_column_names(filter));\n            }\n            if !filter_columns.is_empty() {\n                observations.push(NoticeObservation {\n                    table_name: table_reference_name(&scan.table_name),\n                    filter_columns,\n                });\n            }\n        }\n        other => {\n            for input in other.inputs() {\n                collect_notice_observations(input, active_filter_columns, 
observations);\n            }\n        }\n    }\n}\n\nfn expr_column_names(expr: &Expr) -> HashSet<String> {\n    expr.column_refs()\n        .iter()\n        .map(|column| column.name.clone())\n        .collect()\n}\n\nfn table_reference_name(table: &datafusion::common::TableReference) -> String {\n    match table {\n        datafusion::common::TableReference::Bare { table } => table.to_string(),\n        datafusion::common::TableReference::Partial { table, .. } => table.to_string(),\n        datafusion::common::TableReference::Full { table, .. } => table.to_string(),\n    }\n}\n\nfn history_non_identity_filter_notice(view_name: &str) -> LixNotice {\n    LixNotice {\n        code: \"LIX_HISTORY_NON_IDENTITY_FILTER\".to_string(),\n        message: format!(\"{view_name} was filtered without an identity predicate.\"),\n        hint: Some(\n            \"Filter by id or lixcol_entity_id to include tombstones and renamed history.\"\n                .to_string(),\n        ),\n    }\n}\n\nfn scalar_value_to_lix_value(\n    value: &ScalarValue,\n    field: Option<&Field>,\n) -> Result<Value, LixError> {\n    match value {\n        ScalarValue::Null => Ok(Value::Null),\n        ScalarValue::Boolean(Some(value)) => Ok(Value::Boolean(*value)),\n        ScalarValue::Boolean(None) => Ok(Value::Null),\n        ScalarValue::Int8(Some(value)) => Ok(Value::Integer(i64::from(*value))),\n        ScalarValue::Int8(None) => Ok(Value::Null),\n        ScalarValue::Int16(Some(value)) => Ok(Value::Integer(i64::from(*value))),\n        ScalarValue::Int16(None) => Ok(Value::Null),\n        ScalarValue::Int32(Some(value)) => Ok(Value::Integer(i64::from(*value))),\n        ScalarValue::Int32(None) => Ok(Value::Null),\n        ScalarValue::Int64(Some(value)) => Ok(Value::Integer(*value)),\n        ScalarValue::Int64(None) => Ok(Value::Null),\n        ScalarValue::UInt8(Some(value)) => Ok(Value::Integer(i64::from(*value))),\n        ScalarValue::UInt8(None) => Ok(Value::Null),\n        
ScalarValue::UInt16(Some(value)) => Ok(Value::Integer(i64::from(*value))),\n        ScalarValue::UInt16(None) => Ok(Value::Null),\n        ScalarValue::UInt32(Some(value)) => Ok(Value::Integer(i64::from(*value))),\n        ScalarValue::UInt32(None) => Ok(Value::Null),\n        ScalarValue::UInt64(Some(value)) => match i64::try_from(*value) {\n            Ok(value) => Ok(Value::Integer(value)),\n            Err(_) => Ok(Value::Text(value.to_string())),\n        },\n        ScalarValue::UInt64(None) => Ok(Value::Null),\n        ScalarValue::Float32(Some(value)) => Ok(Value::Real(f64::from(*value))),\n        ScalarValue::Float32(None) => Ok(Value::Null),\n        ScalarValue::Float64(Some(value)) => Ok(Value::Real(*value)),\n        ScalarValue::Float64(None) => Ok(Value::Null),\n        ScalarValue::Utf8(Some(value))\n        | ScalarValue::Utf8View(Some(value))\n        | ScalarValue::LargeUtf8(Some(value)) => string_scalar_to_lix_value(value, field),\n        ScalarValue::Utf8(None) | ScalarValue::Utf8View(None) | ScalarValue::LargeUtf8(None) => {\n            Ok(Value::Null)\n        }\n        ScalarValue::Binary(Some(value)) | ScalarValue::LargeBinary(Some(value)) => {\n            Ok(Value::Blob(value.clone()))\n        }\n        ScalarValue::Binary(None) | ScalarValue::LargeBinary(None) => Ok(Value::Null),\n        other => Ok(Value::Text(other.to_string())),\n    }\n}\n\nfn string_scalar_to_lix_value(value: &str, field: Option<&Field>) -> Result<Value, LixError> {\n    if field.is_some_and(field_is_json) {\n        return serde_json::from_str::<serde_json::Value>(value)\n            .map(Value::Json)\n            .map_err(|error| {\n                LixError::new(\n                    \"LIX_ERROR_INVALID_JSON\",\n                    format!(\n                        \"column '{}' is marked as JSON but contains invalid JSON: {error}\",\n                        field\n                            .map(|field| field.name().as_str())\n                            
.unwrap_or(\"<unknown>\")\n                    ),\n                )\n            });\n    }\n    Ok(Value::Text(value.to_string()))\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::{Arc, Mutex};\n\n    use async_trait::async_trait;\n    use serde_json::json;\n    use serde_json::Value as JsonValue;\n\n    use super::{\n        create_write_logical_plan, execute_logical_plan, execute_sql, SqlExecutionContext,\n        SqlWriteExecutionContext,\n    };\n    use crate::binary_cas::BlobDataReader;\n    use crate::commit_graph::{\n        CommitGraphChangeHistoryEntry, CommitGraphChangeHistoryRequest, CommitGraphCommit,\n        CommitGraphEdge, CommitGraphReader, ReachableCommitGraphCommit,\n    };\n    use crate::commit_store::CommitStoreContext;\n    use crate::functions::{\n        FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider,\n    };\n    use crate::json_store::JsonStoreContext;\n    use crate::live_state::{\n        LiveStateContext, LiveStateReader, LiveStateRowRequest, LiveStateScanRequest,\n        MaterializedLiveStateRow,\n    };\n    use crate::sql2::{CommitStoreQuerySource, SqlCommitStoreQuerySource};\n    use crate::storage::{\n        KvEntryPage, KvExistsBatch, KvGetRequest, KvKeyPage, KvScanRequest, KvValueBatch,\n        KvValuePage, StorageContext, StorageReadScope, StorageReadTransaction, StorageReader,\n        StorageWriteSet,\n    };\n    use crate::tracked_state::TrackedStateContext;\n    use crate::transaction::prepare_version_ref_row;\n    use crate::transaction::types::{\n        TransactionWrite, TransactionWriteOutcome, TransactionWriteRow,\n    };\n    use crate::untracked_state::UntrackedStateContext;\n    use crate::version::VersionRefReader;\n    use crate::{Engine, ExecuteResult, SessionContext};\n    use crate::{LixError, Value};\n\n    struct DummyBlobReader;\n    struct DummyLiveStateReader;\n    struct RowsLiveStateReader {\n        rows: Vec<MaterializedLiveStateRow>,\n    }\n    
struct BackendBlobReader(StorageContext);\n    struct DummyCommitGraphReader;\n    struct DummyVersionRefReader;\n    struct TestReadTransaction(StorageContext);\n\n    fn test_read_scope(\n        storage: StorageContext,\n    ) -> StorageReadScope<Box<dyn StorageReadTransaction + Send + Sync + 'static>> {\n        StorageReadScope::new(Box::new(TestReadTransaction(storage)))\n    }\n\n    #[async_trait]\n    impl StorageReader for TestReadTransaction {\n        async fn get_values(&mut self, request: KvGetRequest) -> Result<KvValueBatch, LixError> {\n            self.0.get_values(request).await\n        }\n\n        async fn exists_many(&mut self, request: KvGetRequest) -> Result<KvExistsBatch, LixError> {\n            self.0.exists_many(request).await\n        }\n\n        async fn scan_keys(&mut self, request: KvScanRequest) -> Result<KvKeyPage, LixError> {\n            self.0.scan_keys(request).await\n        }\n\n        async fn scan_values(&mut self, request: KvScanRequest) -> Result<KvValuePage, LixError> {\n            self.0.scan_values(request).await\n        }\n\n        async fn scan_entries(&mut self, request: KvScanRequest) -> Result<KvEntryPage, LixError> {\n            self.0.scan_entries(request).await\n        }\n    }\n\n    #[async_trait]\n    impl StorageReadTransaction for TestReadTransaction {\n        async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n            Ok(())\n        }\n    }\n\n    #[allow(dead_code)]\n    fn test_functions() -> FunctionProviderHandle {\n        SharedFunctionProvider::new(\n            Box::new(SystemFunctionProvider) as Box<dyn FunctionProvider + Send>\n        )\n    }\n\n    #[derive(Default)]\n    struct CapturingStagedWrites {\n        deltas: Vec<CapturedStageWrite>,\n    }\n\n    #[derive(Clone)]\n    struct CapturedStageWrite {\n        rows: Vec<TransactionWriteRow>,\n    }\n\n    impl CapturedStageWrite {\n        fn pending_write_overlay(&self) -> Result<CapturedStageOverlay, LixError> 
{\n            Ok(CapturedStageOverlay {\n                rows: self.rows.clone(),\n            })\n        }\n    }\n\n    struct CapturedStageOverlay {\n        rows: Vec<TransactionWriteRow>,\n    }\n\n    impl CapturedStageOverlay {\n        fn visible_semantic_rows(\n            &self,\n            include_tombstones: bool,\n            schema_key: &str,\n        ) -> Vec<CapturedStageRow> {\n            self.visible_all_semantic_rows()\n                .into_iter()\n                .filter(|row| row.schema_key == schema_key)\n                .filter(|row| include_tombstones || !row.tombstone)\n                .collect()\n        }\n\n        fn visible_all_semantic_rows(&self) -> Vec<CapturedStageRow> {\n            self.rows\n                .iter()\n                .cloned()\n                .map(CapturedStageRow::from)\n                .collect()\n        }\n    }\n\n    struct CapturedStageRow {\n        entity_id: String,\n        schema_key: String,\n        version_id: String,\n        file_id: Option<String>,\n        snapshot_content: Option<String>,\n        metadata: Option<String>,\n        global: bool,\n        untracked: bool,\n        tombstone: bool,\n    }\n\n    impl From<TransactionWriteRow> for CapturedStageRow {\n        fn from(row: TransactionWriteRow) -> Self {\n            Self {\n                entity_id: row\n                    .entity_id\n                    .expect(\"captured staged row should carry entity_id\")\n                    .as_json_array_text()\n                    .expect(\"captured staged row should project entity_id\"),\n                schema_key: row.schema_key,\n                version_id: row.version_id,\n                file_id: row.file_id,\n                global: row.global,\n                untracked: row.untracked,\n                tombstone: row.snapshot.is_none(),\n                snapshot_content: row.snapshot.map(|snapshot| snapshot.to_string()),\n                metadata: row.metadata.map(|metadata| 
metadata.to_string()),\n            }\n        }\n    }\n\n    struct DummySqlExecutionContext<'a> {\n        active_version_id: &'a str,\n        blob_reader: Arc<dyn BlobDataReader>,\n        live_state: Arc<dyn LiveStateReader>,\n        schema_definitions: Vec<JsonValue>,\n    }\n\n    impl<'a> SqlExecutionContext for DummySqlExecutionContext<'a> {\n        fn active_version_id(&self) -> &str {\n            self.active_version_id\n        }\n\n        fn live_state(&self) -> Arc<dyn LiveStateReader> {\n            Arc::clone(&self.live_state)\n        }\n\n        fn functions(&self) -> FunctionProviderHandle {\n            test_functions()\n        }\n\n        fn blob_reader(&self) -> Arc<dyn BlobDataReader> {\n            Arc::clone(&self.blob_reader)\n        }\n\n        fn commit_store_query_source(&self) -> SqlCommitStoreQuerySource {\n            let base_scope = test_read_scope(StorageContext::new(Arc::new(\n                crate::backend::testing::UnitTestBackend::new(),\n            )));\n            let read_scope = StorageReadScope::new(base_scope.store());\n            CommitStoreQuerySource {\n                commit_store_reader: Arc::new(CommitStoreContext::new().reader(read_scope.store())),\n                json_reader: JsonStoreContext::new().reader(read_scope.store()),\n            }\n        }\n\n        fn commit_graph(&self) -> Box<dyn CommitGraphReader> {\n            Box::new(DummyCommitGraphReader)\n        }\n\n        fn version_ref(&self) -> Arc<dyn VersionRefReader> {\n            Arc::new(DummyVersionRefReader)\n        }\n\n        fn list_visible_schemas(&self) -> Result<Vec<JsonValue>, LixError> {\n            Ok(self.schema_definitions.clone())\n        }\n    }\n\n    struct DummySqlWriteExecutionContext<'a> {\n        active_version_id: &'a str,\n        blob_reader: Arc<dyn BlobDataReader>,\n        live_state: Arc<dyn LiveStateReader>,\n        staged_writes: Arc<Mutex<CapturingStagedWrites>>,\n        schema_definitions: 
Vec<JsonValue>,\n    }\n\n    #[async_trait]\n    impl SqlWriteExecutionContext for DummySqlWriteExecutionContext<'_> {\n        fn active_version_id(&self) -> &str {\n            self.active_version_id\n        }\n\n        fn functions(&self) -> FunctionProviderHandle {\n            test_functions()\n        }\n\n        fn list_visible_schemas(&self) -> Result<Vec<JsonValue>, LixError> {\n            Ok(self.schema_definitions.clone())\n        }\n\n        async fn load_bytes_many(\n            &mut self,\n            hashes: &[crate::binary_cas::BlobHash],\n        ) -> Result<crate::binary_cas::BlobBytesBatch, LixError> {\n            self.blob_reader.load_bytes_many(hashes).await\n        }\n\n        async fn scan_live_state(\n            &mut self,\n            request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            self.live_state.scan_rows(request).await\n        }\n\n        async fn load_version_head(\n            &mut self,\n            version_id: &str,\n        ) -> Result<Option<String>, LixError> {\n            Ok(Some(format!(\"commit-{version_id}\")))\n        }\n\n        async fn stage_write(\n            &mut self,\n            write: TransactionWrite,\n        ) -> Result<TransactionWriteOutcome, LixError> {\n            let count = match &write {\n                TransactionWrite::Rows { rows, .. } => rows.len() as u64,\n                TransactionWrite::RowsWithFileData { count, .. } => *count,\n                TransactionWrite::AdoptedChanges { changes } => changes.len() as u64,\n            };\n            let rows = match write {\n                TransactionWrite::Rows { rows, .. } => rows,\n                TransactionWrite::RowsWithFileData { rows, .. } => rows,\n                TransactionWrite::AdoptedChanges { .. 
} => Vec::new(),\n            };\n            self.staged_writes\n                .lock()\n                .expect(\"staged writes lock\")\n                .deltas\n                .push(CapturedStageWrite { rows });\n            Ok(TransactionWriteOutcome { count })\n        }\n    }\n\n    async fn execute_write_sql(\n        ctx: &mut dyn SqlWriteExecutionContext,\n        sql: &str,\n        params: &[Value],\n    ) -> Result<crate::SqlQueryResult, LixError> {\n        let plan = create_write_logical_plan(ctx, sql).await?;\n        execute_logical_plan(plan, params).await\n    }\n\n    #[async_trait]\n    impl VersionRefReader for DummyVersionRefReader {\n        async fn load_head(\n            &self,\n            _version_id: &str,\n        ) -> Result<Option<crate::version::VersionHead>, LixError> {\n            Ok(None)\n        }\n\n        async fn scan_heads(&self) -> Result<Vec<crate::version::VersionHead>, LixError> {\n            Ok(Vec::new())\n        }\n    }\n\n    #[async_trait]\n    impl CommitGraphReader for DummyCommitGraphReader {\n        async fn load_commit(\n            &mut self,\n            _commit_id: &str,\n        ) -> Result<Option<CommitGraphCommit>, LixError> {\n            Ok(None)\n        }\n\n        async fn all_commits(&mut self) -> Result<Vec<CommitGraphCommit>, LixError> {\n            Ok(Vec::new())\n        }\n\n        async fn reachable_commits(\n            &mut self,\n            _head_commit_id: &str,\n        ) -> Result<Vec<ReachableCommitGraphCommit>, LixError> {\n            Ok(Vec::new())\n        }\n\n        async fn best_common_ancestors(\n            &mut self,\n            _left_commit_id: &str,\n            _right_commit_id: &str,\n        ) -> Result<Vec<CommitGraphCommit>, LixError> {\n            Ok(Vec::new())\n        }\n\n        async fn merge_base(\n            &mut self,\n            _left_commit_id: &str,\n            _right_commit_id: &str,\n        ) -> Result<CommitGraphCommit, LixError> {\n 
           Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"dummy commit graph reader cannot resolve merge base\",\n            ))\n        }\n\n        fn commit_edges(&self, _commits: &[CommitGraphCommit]) -> Vec<CommitGraphEdge> {\n            Vec::new()\n        }\n\n        async fn change_history_from_commit(\n            &mut self,\n            _start_commit_id: &str,\n            _request: &CommitGraphChangeHistoryRequest,\n        ) -> Result<Vec<CommitGraphChangeHistoryEntry>, LixError> {\n            Ok(Vec::new())\n        }\n    }\n\n    #[async_trait]\n    impl LiveStateReader for DummyLiveStateReader {\n        async fn scan_rows(\n            &self,\n            _request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(vec![])\n        }\n\n        async fn load_row(\n            &self,\n            _request: &LiveStateRowRequest,\n        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n            Ok(None)\n        }\n    }\n\n    #[async_trait]\n    impl LiveStateReader for RowsLiveStateReader {\n        async fn scan_rows(\n            &self,\n            _request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(self.rows.clone())\n        }\n\n        async fn load_row(\n            &self,\n            _request: &LiveStateRowRequest,\n        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n            Ok(None)\n        }\n    }\n\n    #[async_trait]\n    impl BlobDataReader for DummyBlobReader {\n        async fn load_bytes_many(\n            &self,\n            hashes: &[crate::binary_cas::BlobHash],\n        ) -> Result<crate::binary_cas::BlobBytesBatch, LixError> {\n            Ok(crate::binary_cas::BlobBytesBatch::new(vec![\n                None;\n                hashes.len()\n            ]))\n        }\n    }\n\n    #[async_trait]\n    impl BlobDataReader for BackendBlobReader {\n   
     async fn load_bytes_many(\n            &self,\n            hashes: &[crate::binary_cas::BlobHash],\n        ) -> Result<crate::binary_cas::BlobBytesBatch, LixError> {\n            let binary_cas = crate::binary_cas::BinaryCasContext::new();\n            let reader = binary_cas.reader(self.0.clone());\n            reader.load_bytes_many(hashes).await\n        }\n    }\n\n    fn live_lix_state_row(entity_id: &str, metadata: Option<&str>) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: crate::entity_identity::EntityIdentity::single(entity_id),\n            schema_key: \"lix_key_value\".to_string(),\n            file_id: None,\n            snapshot_content: Some(\"{\\\"key\\\":\\\"hello\\\",\\\"value\\\":\\\"world\\\"}\".to_string()),\n            metadata: metadata.map(str::to_string),\n            deleted: false,\n            version_id: \"version-a\".to_string(),\n            change_id: Some(format!(\"change-{entity_id}\")),\n            commit_id: Some(format!(\"commit-{entity_id}\")),\n            global: false,\n            untracked: false,\n            created_at: \"2026-04-23T00:00:00Z\".to_string(),\n            updated_at: \"2026-04-23T01:00:00Z\".to_string(),\n        }\n    }\n\n    fn live_entity_row(entity_id: &str, version_id: &str, value: &str) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: crate::entity_identity::EntityIdentity::single(entity_id),\n            schema_key: \"test_state_schema\".to_string(),\n            file_id: None,\n            snapshot_content: Some(format!(\"{{\\\"value\\\":\\\"{value}\\\"}}\")),\n            metadata: Some(json!({ \"source\": entity_id }).to_string()),\n            deleted: false,\n            version_id: version_id.to_string(),\n            change_id: Some(format!(\"change-{entity_id}\")),\n            commit_id: Some(format!(\"commit-{entity_id}\")),\n            global: false,\n            untracked: false,\n            
created_at: \"2026-04-23T00:00:00Z\".to_string(),\n            updated_at: \"2026-04-23T01:00:00Z\".to_string(),\n        }\n    }\n\n    fn live_directory_row(\n        entity_id: &str,\n        version_id: &str,\n        parent_id: Option<&str>,\n        name: &str,\n        hidden: bool,\n    ) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: crate::entity_identity::EntityIdentity::single(entity_id),\n            schema_key: \"lix_directory_descriptor\".to_string(),\n            file_id: None,\n            snapshot_content: Some(\n                json!({\n                    \"id\": entity_id,\n                    \"parent_id\": parent_id,\n                    \"name\": name,\n                    \"hidden\": hidden\n                })\n                .to_string(),\n            ),\n            metadata: Some(json!({ \"source\": entity_id }).to_string()),\n            deleted: false,\n            version_id: version_id.to_string(),\n            change_id: Some(format!(\"change-{entity_id}\")),\n            commit_id: Some(format!(\"commit-{entity_id}\")),\n            global: false,\n            untracked: false,\n            created_at: \"2026-04-23T00:00:00Z\".to_string(),\n            updated_at: \"2026-04-23T01:00:00Z\".to_string(),\n        }\n    }\n\n    fn live_file_row(\n        entity_id: &str,\n        version_id: &str,\n        directory_id: Option<&str>,\n        name: &str,\n        hidden: bool,\n    ) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: crate::entity_identity::EntityIdentity::single(entity_id),\n            schema_key: \"lix_file_descriptor\".to_string(),\n            file_id: None,\n            snapshot_content: Some(\n                json!({\n                    \"id\": entity_id,\n                    \"directory_id\": directory_id,\n                    \"name\": name,\n                    \"hidden\": hidden\n                })\n                
.to_string(),\n            ),\n            metadata: Some(json!({ \"source\": entity_id }).to_string()),\n            deleted: false,\n            version_id: version_id.to_string(),\n            change_id: Some(format!(\"change-{entity_id}\")),\n            commit_id: Some(format!(\"commit-{entity_id}\")),\n            global: false,\n            untracked: false,\n            created_at: \"2026-04-23T00:00:00Z\".to_string(),\n            updated_at: \"2026-04-23T01:00:00Z\".to_string(),\n        }\n    }\n\n    #[tokio::test]\n    async fn sql_execution_context_exposes_live_state_and_blob_reader() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let ctx = DummySqlExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader: Arc::clone(&blob_reader),\n            live_state: Arc::clone(&live_state) as Arc<dyn LiveStateReader>,\n            schema_definitions: vec![],\n        };\n\n        let actual = ctx.live_state();\n        let expected = live_state as Arc<dyn LiveStateReader>;\n        assert_eq!(ctx.active_version_id(), \"version-a\");\n        assert!(Arc::ptr_eq(&actual, &expected));\n        assert!(Arc::ptr_eq(&ctx.blob_reader(), &blob_reader));\n    }\n\n    #[tokio::test]\n    async fn execute_sql_uses_execution_context_boundary() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let ctx = DummySqlExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            schema_definitions: vec![],\n        };\n\n        let result = execute_sql(&ctx, \"SELECT 1\", &[])\n            .await\n            .expect(\"sql2 execute should support literal-only queries\");\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n    }\n\n    #[tokio::test]\n    async fn 
execute_sql_collects_union_all_partitions() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let ctx = DummySqlExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            schema_definitions: vec![],\n        };\n\n        let result = execute_sql(&ctx, \"SELECT 1 UNION ALL SELECT 2\", &[])\n            .await\n            .expect(\"sql2 execute should collect UNION ALL partitions\");\n        assert_eq!(\n            result.rows,\n            vec![vec![Value::Integer(1)], vec![Value::Integer(2)]]\n        );\n    }\n\n    #[tokio::test]\n    async fn execute_sql_rejects_extra_parameters() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let ctx = DummySqlExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            schema_definitions: vec![],\n        };\n\n        let error = execute_sql(\n            &ctx,\n            \"SELECT $1 AS value\",\n            &[Value::Integer(1), Value::Integer(2)],\n        )\n        .await\n        .expect_err(\"extra params should fail instead of being ignored\");\n\n        assert_eq!(error.code, LixError::CODE_INVALID_PARAM);\n        assert_eq!(\n            error.message,\n            \"SQL expected 1 parameter(s), but 2 parameter(s) were provided\"\n        );\n        assert_eq!(\n            error.details,\n            Some(json!({\n                \"operation\": \"execute\",\n                \"expected_param_count\": 1,\n                \"provided_param_count\": 2,\n                \"placeholders\": [\"$1\"],\n            }))\n        );\n    }\n\n    #[tokio::test]\n    async fn execute_sql_exposes_datafusion_information_schema() {\n        let blob_reader: Arc<dyn BlobDataReader> = 
Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let ctx = DummySqlExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            schema_definitions: vec![],\n        };\n\n        let information_schema_result = execute_sql(\n            &ctx,\n            \"SELECT table_name FROM information_schema.tables WHERE table_name = 'lix_state'\",\n            &[],\n        )\n        .await\n        .expect(\"information_schema.tables should be enabled\");\n        assert_eq!(\n            information_schema_result.rows,\n            vec![vec![Value::Text(\"lix_state\".to_string())]]\n        );\n\n        let tables_result = execute_sql(\n            &ctx,\n            \"SELECT table_name FROM information_schema.tables\",\n            &[],\n        )\n        .await\n        .expect(\"information_schema.tables should list registered tables\");\n        assert!(tables_result.rows.iter().any(|row| {\n            row.iter()\n                .any(|value| matches!(value, Value::Text(value) if value == \"lix_state\"))\n        }));\n    }\n\n    async fn setup_engine_history_fixture() -> Result<(SessionContext, String), LixError> {\n        let backend = crate::backend::testing::UnitTestBackend::new();\n        let init_receipt = Engine::initialize(Box::new(backend.clone())).await?;\n        let engine = Engine::new(Box::new(backend)).await?;\n        let session = engine.open_session(init_receipt.main_version_id).await?;\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 
lix_json('{\\\"x-lix-key\\\":\\\"test_state_schema\\\",\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"value\\\":{\\\"type\\\":\\\"string\\\"},\\\"count\\\":{\\\"type\\\":\\\"integer\\\"}},\\\"required\\\":[\\\"value\\\",\\\"count\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await?;\n        session\n            .execute(\n                \"INSERT INTO test_state_schema \\\n\t             (lixcol_entity_id, value, count, lixcol_metadata, lixcol_untracked) \\\n\t             VALUES (lix_json('[\\\"entity-history\\\"]'), 'A', 7, '{\\\"source\\\":\\\"history\\\"}', false)\",\n                &[],\n            )\n            .await?;\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path, hidden) \\\n                 VALUES ('dir-docs', '/docs/', false)\",\n                &[],\n            )\n            .await?;\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path, data, hidden) \\\n                 VALUES ('file-a', '/docs/readme.md', X'68656C6C6F', false)\",\n                &[],\n            )\n            .await?;\n\n        let active_version_id = session.active_version_id().await?;\n        let head_commit_id = engine\n            .load_version_head_commit_id(&active_version_id)\n            .await?\n            .ok_or_else(|| {\n                LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    \"history fixture expected the session version to have a head commit\",\n                )\n            })?;\n        Ok((session, head_commit_id))\n    }\n\n    #[tokio::test]\n    async fn lix_file_path_predicates_canonicalize_bound_values_like_writes() {\n        let backend = crate::backend::testing::UnitTestBackend::new();\n        let init_receipt = Engine::initialize(Box::new(backend.clone()))\n            .await\n            
.expect(\"engine should initialize\");\n        let engine = Engine::new(Box::new(backend))\n            .await\n            .expect(\"engine should open\");\n        let session = engine\n            .open_session(init_receipt.main_version_id)\n            .await\n            .expect(\"session should open\");\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) VALUES ('file-nfc', $1, X'41')\",\n                &[Value::Text(\"/Cafe\\u{301}.txt\".to_string())],\n            )\n            .await\n            .expect(\"NFD path insert should canonicalize\");\n\n        let nfd_result = session\n            .execute(\n                \"SELECT id FROM lix_file WHERE path = $1\",\n                &[Value::Text(\"/Cafe\\u{301}.txt\".to_string())],\n            )\n            .await\n            .expect(\"NFD path predicate should canonicalize\");\n        assert_eq!(\n            rows_from_execute_result(nfd_result).1,\n            vec![vec![Value::Text(\"file-nfc\".to_string())]]\n        );\n\n        let percent_result = session\n            .execute(\n                \"SELECT id FROM lix_file WHERE path = '/%43afe%CC%81.txt'\",\n                &[],\n            )\n            .await\n            .expect(\"percent-encoded path predicate should canonicalize\");\n        assert_eq!(\n            rows_from_execute_result(percent_result).1,\n            vec![vec![Value::Text(\"file-nfc\".to_string())]]\n        );\n\n        let reversed_result = session\n            .execute(\n                \"SELECT id FROM lix_file WHERE $1 = path\",\n                &[Value::Text(\"/Cafe\\u{301}.txt\".to_string())],\n            )\n            .await\n            .expect(\"reversed path predicate should canonicalize\");\n        assert_eq!(\n            rows_from_execute_result(reversed_result).1,\n            vec![vec![Value::Text(\"file-nfc\".to_string())]]\n        );\n\n        let or_result = session\n            .execute(\n       
         \"SELECT id FROM lix_file WHERE path = $1 OR id = 'missing'\",\n                &[Value::Text(\"/Cafe\\u{301}.txt\".to_string())],\n            )\n            .await\n            .expect(\"OR path predicate should canonicalize\");\n        assert_eq!(\n            rows_from_execute_result(or_result).1,\n            vec![vec![Value::Text(\"file-nfc\".to_string())]]\n        );\n\n        let not_result = session\n            .execute(\n                \"SELECT id FROM lix_file WHERE NOT (path = $1)\",\n                &[Value::Text(\"/Cafe\\u{301}.txt\".to_string())],\n            )\n            .await\n            .expect(\"NOT path predicate should canonicalize\");\n        assert!(rows_from_execute_result(not_result).1.is_empty());\n\n        let not_in_result = session\n            .execute(\n                \"SELECT id FROM lix_file WHERE path NOT IN ($1)\",\n                &[Value::Text(\"/%43afe%CC%81.txt\".to_string())],\n            )\n            .await\n            .expect(\"NOT IN path predicate should canonicalize\");\n        assert!(rows_from_execute_result(not_in_result).1.is_empty());\n\n        let update_result = session\n            .execute(\n                \"UPDATE lix_file SET hidden = true WHERE path = $1 OR id = 'missing'\",\n                &[Value::Text(\"/Cafe\\u{301}.txt\".to_string())],\n            )\n            .await\n            .expect(\"update predicate should canonicalize through OR\");\n        assert_eq!(update_result.rows_affected(), 1);\n\n        let delete_result = session\n            .execute(\n                \"DELETE FROM lix_file WHERE path = $1\",\n                &[Value::Text(\"/%43afe%CC%81.txt\".to_string())],\n            )\n            .await\n            .expect(\"delete predicate should canonicalize\");\n        assert_eq!(delete_result.rows_affected(), 1);\n    }\n\n    #[tokio::test]\n    async fn lix_file_path_predicates_reject_non_literal_path_values() {\n        let backend = 
crate::backend::testing::UnitTestBackend::new();\n        let init_receipt = Engine::initialize(Box::new(backend.clone()))\n            .await\n            .expect(\"engine should initialize\");\n        let engine = Engine::new(Box::new(backend))\n            .await\n            .expect(\"engine should open\");\n        let session = engine\n            .open_session(init_receipt.main_version_id)\n            .await\n            .expect(\"session should open\");\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) VALUES ('file-nfc', $1, X'41')\",\n                &[Value::Text(\"/Cafe\\u{301}.txt\".to_string())],\n            )\n            .await\n            .expect(\"NFD path insert should canonicalize\");\n\n        let error = session\n            .execute(\"SELECT id FROM lix_file WHERE path = id\", &[])\n            .await\n            .expect_err(\"computed path predicate values should be rejected\");\n        assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL);\n        assert!(\n            error\n                .message\n                .contains(\"filesystem path predicates only support literal path values\"),\n            \"{error:?}\"\n        );\n    }\n\n    #[tokio::test]\n    async fn lix_directory_path_predicates_canonicalize_bound_values_like_writes() {\n        let backend = crate::backend::testing::UnitTestBackend::new();\n        let init_receipt = Engine::initialize(Box::new(backend.clone()))\n            .await\n            .expect(\"engine should initialize\");\n        let engine = Engine::new(Box::new(backend))\n            .await\n            .expect(\"engine should open\");\n        let session = engine\n            .open_session(init_receipt.main_version_id)\n            .await\n            .expect(\"session should open\");\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) VALUES ('dir-nfc', $1)\",\n                
&[Value::Text(\"/Cafe\\u{301}/\".to_string())],\n            )\n            .await\n            .expect(\"NFD directory path insert should canonicalize\");\n\n        let result = session\n            .execute(\n                \"SELECT id FROM lix_directory WHERE path IN ($1)\",\n                &[Value::Text(\"/%43afe%CC%81/\".to_string())],\n            )\n            .await\n            .expect(\"directory path predicate should canonicalize\");\n        assert_eq!(\n            rows_from_execute_result(result).1,\n            vec![vec![Value::Text(\"dir-nfc\".to_string())]]\n        );\n\n        let or_result = session\n            .execute(\n                \"SELECT id FROM lix_directory WHERE id = 'missing' OR path = $1\",\n                &[Value::Text(\"/Cafe\\u{301}/\".to_string())],\n            )\n            .await\n            .expect(\"directory OR path predicate should canonicalize\");\n        assert_eq!(\n            rows_from_execute_result(or_result).1,\n            vec![vec![Value::Text(\"dir-nfc\".to_string())]]\n        );\n\n        let not_in_result = session\n            .execute(\n                \"SELECT id FROM lix_directory WHERE path NOT IN ($1)\",\n                &[Value::Text(\"/%43afe%CC%81/\".to_string())],\n            )\n            .await\n            .expect(\"directory NOT IN path predicate should canonicalize\");\n        assert!(rows_from_execute_result(not_in_result).1.is_empty());\n    }\n\n    #[tokio::test]\n    async fn lix_directory_path_predicates_reject_non_literal_path_values() {\n        let backend = crate::backend::testing::UnitTestBackend::new();\n        let init_receipt = Engine::initialize(Box::new(backend.clone()))\n            .await\n            .expect(\"engine should initialize\");\n        let engine = Engine::new(Box::new(backend))\n            .await\n            .expect(\"engine should open\");\n        let session = engine\n            .open_session(init_receipt.main_version_id)\n            
.await\n            .expect(\"session should open\");\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) VALUES ('dir-nfc', $1)\",\n                &[Value::Text(\"/Cafe\\u{301}/\".to_string())],\n            )\n            .await\n            .expect(\"NFD directory path insert should canonicalize\");\n\n        let error = session\n            .execute(\"SELECT id FROM lix_directory WHERE path IN (id)\", &[])\n            .await\n            .expect_err(\"computed directory path predicate values should be rejected\");\n        assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL);\n        assert!(\n            error\n                .message\n                .contains(\"filesystem path predicates only support literal path values\"),\n            \"{error:?}\"\n        );\n    }\n\n    fn rows_from_execute_result(result: ExecuteResult) -> (Vec<String>, Vec<Vec<Value>>) {\n        let rows = result;\n        (\n            rows.columns().to_vec(),\n            rows.rows()\n                .iter()\n                .map(|row| row.values().to_vec())\n                .collect(),\n        )\n    }\n\n    #[tokio::test]\n    async fn execute_sql_reads_lix_state_history_from_history_context() {\n        let (session, head_commit_id) = setup_engine_history_fixture()\n            .await\n            .expect(\"history fixture should initialize\");\n        let result = session\n            .execute(\n                &format!(\n                    \"SELECT entity_id, snapshot_content, metadata, depth, start_commit_id \\\n\t             FROM lix_state_history \\\n\t             WHERE schema_key = 'test_state_schema' \\\n\t               AND entity_id = lix_json('[\\\"entity-history\\\"]') \\\n\t               AND start_commit_id = '{head_commit_id}' \\\n\t               AND depth >= 0\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"sql2 execute should read lix_state_history 
through real engine context\");\n        let (columns, rows) = rows_from_execute_result(result);\n\n        assert_eq!(\n            columns,\n            vec![\n                \"entity_id\",\n                \"snapshot_content\",\n                \"metadata\",\n                \"depth\",\n                \"start_commit_id\"\n            ]\n        );\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0][0], Value::Json(json!([\"entity-history\"])));\n        assert_eq!(rows[0][1], Value::Json(json!({\"count\": 7, \"value\": \"A\"})));\n        assert_eq!(rows[0][2], Value::Json(json!({\"source\": \"history\"})));\n        assert!(matches!(rows[0][3], Value::Integer(_)));\n        assert_eq!(rows[0][4], Value::Text(head_commit_id.clone()));\n    }\n\n    #[tokio::test]\n    async fn execute_sql_reads_entity_history_view_from_history_context() {\n        let (session, head_commit_id) = setup_engine_history_fixture()\n            .await\n            .expect(\"history fixture should initialize\");\n        let result = session\n            .execute(\n                &format!(\n                    \"SELECT value, count, lixcol_entity_id, lixcol_start_commit_id, lixcol_depth \\\n\t             FROM test_state_schema_history \\\n\t             WHERE lixcol_start_commit_id = '{head_commit_id}' \\\n\t               AND lixcol_entity_id = lix_json('[\\\"entity-history\\\"]')\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"sql2 execute should read entity history through real engine context\");\n        let (columns, rows) = rows_from_execute_result(result);\n\n        assert_eq!(\n            columns,\n            vec![\n                \"value\",\n                \"count\",\n                \"lixcol_entity_id\",\n                \"lixcol_start_commit_id\",\n                \"lixcol_depth\",\n            ]\n        );\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0][0], 
Value::Text(\"A\".to_string()));\n        assert_eq!(rows[0][1], Value::Integer(7));\n        assert_eq!(rows[0][2], Value::Json(json!([\"entity-history\"])));\n        assert_eq!(rows[0][3], Value::Text(head_commit_id));\n        assert!(matches!(rows[0][4], Value::Integer(_)));\n    }\n\n    #[tokio::test]\n    async fn execute_sql_reads_directory_history_view_from_history_context() {\n        let (session, head_commit_id) = setup_engine_history_fixture()\n            .await\n            .expect(\"history fixture should initialize\");\n        let result = session\n            .execute(\n                &format!(\n                    \"SELECT id, parent_id, name, path, hidden, lixcol_start_commit_id, lixcol_depth \\\n             FROM lix_directory_history \\\n             WHERE id = 'dir-docs' AND lixcol_start_commit_id = '{head_commit_id}'\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"sql2 execute should read directory history through real engine context\");\n        assert!(\n            result.notices().is_empty(),\n            \"identity-filtered directory history should not emit soft notices\"\n        );\n        let (columns, rows) = rows_from_execute_result(result);\n\n        assert_eq!(\n            columns,\n            vec![\n                \"id\",\n                \"parent_id\",\n                \"name\",\n                \"path\",\n                \"hidden\",\n                \"lixcol_start_commit_id\",\n                \"lixcol_depth\",\n            ]\n        );\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0][0], Value::Text(\"dir-docs\".to_string()));\n        assert_eq!(rows[0][1], Value::Null);\n        assert_eq!(rows[0][2], Value::Text(\"docs\".to_string()));\n        assert_eq!(rows[0][3], Value::Text(\"/docs/\".to_string()));\n        assert_eq!(rows[0][4], Value::Boolean(false));\n        assert_eq!(rows[0][5], Value::Text(head_commit_id.clone()));\n        
assert!(matches!(rows[0][6], Value::Integer(_)));\n\n        let name_filtered_result = session\n            .execute(\n                &format!(\n                    \"SELECT id \\\n             FROM lix_directory_history \\\n             WHERE name = 'docs' \\\n               AND lixcol_start_commit_id = '{head_commit_id}'\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"sql2 execute should attach notices to name-filtered directory history reads\");\n        assert_eq!(name_filtered_result.notices().len(), 1);\n        assert_eq!(\n            name_filtered_result.notices()[0].code,\n            \"LIX_HISTORY_NON_IDENTITY_FILTER\"\n        );\n    }\n\n    #[tokio::test]\n    async fn execute_sql_reads_file_history_view_from_history_context() {\n        let (session, head_commit_id) = setup_engine_history_fixture()\n            .await\n            .expect(\"history fixture should initialize\");\n        let result = session\n            .execute(\n                &format!(\n                    \"SELECT id, path, data, hidden, lixcol_start_commit_id, lixcol_depth \\\n             FROM lix_file_history \\\n             WHERE id = 'file-a' \\\n               AND lixcol_start_commit_id = '{head_commit_id}' \\\n               AND data IS NOT NULL \\\n             ORDER BY lixcol_depth\",\n                ),\n                &[],\n            )\n            .await\n            .expect(\"sql2 execute should read file history through real engine context\");\n        assert!(\n            result.notices().is_empty(),\n            \"identity-filtered file history should not emit soft notices\"\n        );\n        let (columns, rows) = rows_from_execute_result(result);\n\n        assert_eq!(\n            columns,\n            vec![\n                \"id\",\n                \"path\",\n                \"data\",\n                \"hidden\",\n                \"lixcol_start_commit_id\",\n                \"lixcol_depth\",\n    
        ]\n        );\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0][0], Value::Text(\"file-a\".to_string()));\n        assert_eq!(rows[0][1], Value::Text(\"/docs/readme.md\".to_string()));\n        assert_eq!(rows[0][2], Value::Blob(b\"hello\".to_vec()));\n        assert_eq!(rows[0][3], Value::Boolean(false));\n        assert_eq!(rows[0][4], Value::Text(head_commit_id.clone()));\n        assert!(matches!(rows[0][5], Value::Integer(_)));\n\n        let path_filtered_result = session\n            .execute(\n                &format!(\n                    \"SELECT id \\\n             FROM lix_file_history \\\n             WHERE path = '/docs/readme.md' \\\n               AND lixcol_start_commit_id = '{head_commit_id}'\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"sql2 execute should attach notices to path-filtered file history reads\");\n        assert_eq!(path_filtered_result.notices().len(), 1);\n        assert_eq!(\n            path_filtered_result.notices()[0].code,\n            \"LIX_HISTORY_NON_IDENTITY_FILTER\"\n        );\n    }\n\n    #[tokio::test]\n    async fn execute_sql_rejects_writes_to_history_views_before_planning() {\n        for sql in [\n            \"DELETE FROM lix_state_history\",\n            \"DELETE FROM LIX_STATE_HISTORY\",\n            \"DELETE FROM main.LIX_STATE_HISTORY\",\n            \"EXPLAIN DELETE FROM lix_state_history\",\n        ] {\n            let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n            let live_state = Arc::new(DummyLiveStateReader);\n            let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n            let mut ctx = DummySqlWriteExecutionContext {\n                active_version_id: \"version-a\",\n                blob_reader,\n                live_state,\n                staged_writes,\n                schema_definitions: vec![],\n            };\n\n            let error = 
execute_write_sql(&mut ctx, sql, &[])\n                .await\n                .expect_err(\"history views are read-only\");\n\n            assert_eq!(error.code, LixError::CODE_READ_ONLY, \"{sql}\");\n            assert_eq!(\n                error.message, \"DML cannot write read-only history view 'lix_state_history'\",\n                \"{sql}\"\n            );\n        }\n    }\n\n    #[tokio::test]\n    async fn execute_sql_insert_into_lix_state_values_stages_write() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n\t\t\t&mut ctx,\n\t\t\t\"INSERT INTO lix_state (\\\n\t         entity_id, schema_key, file_id, snapshot_content, metadata, global, untracked\\\n\t         ) VALUES (\\\n\t         lix_json('[\\\"entity-1\\\"]'), 'lix_key_value', NULL, '{\\\"key\\\":\\\"hello\\\",\\\"value\\\":\\\"world\\\"}', '{\\\"source\\\":\\\"sql\\\"}', false, false\\\n\t         )\",\n\t\t\t&[],\n\t\t)\n        .await\n        .expect(\"INSERT INTO lix_state VALUES should stage write\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_semantic_rows(false, \"lix_key_value\");\n        assert_eq!(rows.len(), 1);\n    
    assert_eq!(rows[0].entity_id, \"[\\\"entity-1\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-a\");\n        assert!(!rows[0].global);\n        assert!(!rows[0].untracked);\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"key\\\":\\\"hello\\\",\\\"value\\\":\\\"world\\\"}\")\n        );\n        assert_eq!(rows[0].metadata.as_deref(), Some(\"{\\\"source\\\":\\\"sql\\\"}\"));\n    }\n\n    #[tokio::test]\n    async fn execute_sql_insert_into_lix_state_defaults_global_and_untracked_to_false() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n\t\t\t&mut ctx,\n\t\t\t\"INSERT INTO lix_state (\\\n\t         entity_id, schema_key, file_id, snapshot_content, metadata\\\n\t         ) VALUES (\\\n\t         lix_json('[\\\"entity-defaults\\\"]'), 'lix_key_value', NULL, '{\\\"key\\\":\\\"hello\\\",\\\"value\\\":\\\"defaults\\\"}', NULL\\\n\t         )\",\n\t\t\t&[],\n\t\t)\n        .await\n        .expect(\"INSERT INTO lix_state should default bookkeeping flags\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_semantic_rows(false, 
\"lix_key_value\");\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"entity-defaults\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-a\");\n        assert!(!rows[0].global);\n        assert!(!rows[0].untracked);\n    }\n\n    #[tokio::test]\n    async fn execute_sql_insert_into_lix_state_select_stages_write() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"INSERT INTO lix_state (\\\n\t         entity_id, schema_key, file_id, snapshot_content, metadata, global, untracked\\\n\t         ) \\\n\t         SELECT \\\n\t         lix_json('[\\\"entity-from-select\\\"]') AS entity_id, \\\n\t         'lix_key_value' AS schema_key, \\\n\t         NULL AS file_id, \\\n             '{\\\"key\\\":\\\"hello\\\",\\\"value\\\":\\\"from-select\\\"}' AS snapshot_content, \\\n             '{\\\"source\\\":\\\"select\\\"}' AS metadata, \\\n             false AS global, \\\n             false AS untracked\",\n            &[],\n        )\n        .await\n        .expect(\"INSERT INTO lix_state SELECT should stage write\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        
let rows = overlay.visible_semantic_rows(false, \"lix_key_value\");\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"entity-from-select\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-a\");\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"key\\\":\\\"hello\\\",\\\"value\\\":\\\"from-select\\\"}\")\n        );\n        assert_eq!(rows[0].metadata.as_deref(), Some(\"{\\\"source\\\":\\\"select\\\"}\"));\n    }\n\n    #[tokio::test]\n    async fn execute_sql_insert_into_entity_by_version_stages_write() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![json!({\n                \"x-lix-key\": \"test_state_schema\",\n                \"type\": \"object\",\n                \"properties\": {\n                    \"value\": { \"type\": \"string\" }\n                }\n            })],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"INSERT INTO test_state_schema_by_version (\\\n\t     lixcol_entity_id, lixcol_version_id, value\\\n\t     ) VALUES (lix_json('[\\\"entity-c\\\"]'), 'version-b', 'C')\",\n            &[],\n        )\n        .await\n        .expect(\"INSERT INTO entity by-version surface should stage write\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n           
 .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_semantic_rows(false, \"test_state_schema\");\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"entity-c\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-b\");\n        assert!(!rows[0].global);\n        assert!(!rows[0].untracked);\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"C\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn execute_sql_insert_into_active_entity_defaults_active_version() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![json!({\n                \"x-lix-key\": \"test_state_schema\",\n                \"type\": \"object\",\n                \"properties\": {\n                    \"value\": { \"type\": \"string\" }\n                }\n            })],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"INSERT INTO test_state_schema (lixcol_entity_id, value) \\\n\t     VALUES (lix_json('[\\\"entity-c\\\"]'), 'C')\",\n            &[],\n        )\n        .await\n        .expect(\"INSERT INTO active entity surface should stage write\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            
.pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_semantic_rows(false, \"test_state_schema\");\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"entity-c\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-a\");\n        assert!(!rows[0].global);\n        assert!(!rows[0].untracked);\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"C\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn execute_sql_insert_into_directory_by_version_stages_write() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"INSERT INTO lix_directory_by_version (\\\n             id, parent_id, name, hidden, lixcol_version_id\\\n             ) VALUES ('dir-docs', NULL, 'docs', false, 'version-b')\",\n            &[],\n        )\n        .await\n        .expect(\"INSERT INTO lix_directory_by_version should stage write\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_semantic_rows(false, 
\"lix_directory_descriptor\");\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"dir-docs\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-b\");\n        assert!(!rows[0].global);\n        assert!(!rows[0].untracked);\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"hidden\\\":false,\\\"id\\\":\\\"dir-docs\\\",\\\"name\\\":\\\"docs\\\",\\\"parent_id\\\":null}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn execute_sql_insert_into_active_directory_defaults_active_version() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"INSERT INTO lix_directory (id, parent_id, name, hidden) \\\n             VALUES ('dir-docs', NULL, 'docs', false)\",\n            &[],\n        )\n        .await\n        .expect(\"INSERT INTO lix_directory should stage write\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_semantic_rows(false, \"lix_directory_descriptor\");\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"dir-docs\\\"]\");\n        
assert_eq!(rows[0].version_id, \"version-a\");\n        assert!(!rows[0].global);\n        assert!(!rows[0].untracked);\n    }\n\n    #[tokio::test]\n    async fn execute_sql_update_directory_stages_rewritten_descriptor() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(RowsLiveStateReader {\n            rows: vec![\n                live_directory_row(\"dir-docs\", \"version-a\", None, \"docs\", false),\n                live_directory_row(\"dir-guides\", \"version-a\", Some(\"dir-docs\"), \"guides\", false),\n            ],\n        });\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"UPDATE lix_directory \\\n             SET hidden = true, lixcol_metadata = '{\\\"source\\\":\\\"directory-update\\\"}' \\\n             WHERE id = 'dir-docs'\",\n            &[],\n        )\n        .await\n        .expect(\"UPDATE lix_directory should stage rewritten descriptor\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_semantic_rows(false, \"lix_directory_descriptor\");\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"dir-docs\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-a\");\n        
assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"hidden\\\":true,\\\"id\\\":\\\"dir-docs\\\",\\\"name\\\":\\\"docs\\\",\\\"parent_id\\\":null}\")\n        );\n        assert_eq!(\n            rows[0].metadata.as_deref(),\n            Some(\"{\\\"source\\\":\\\"directory-update\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn execute_sql_update_directory_rejects_path_assignment() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(RowsLiveStateReader {\n            rows: vec![live_directory_row(\n                \"dir-docs\",\n                \"version-a\",\n                None,\n                \"docs\",\n                false,\n            )],\n        });\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let error = execute_write_sql(\n            &mut ctx,\n            \"UPDATE lix_directory SET path = '/renamed/' WHERE id = 'dir-docs'\",\n            &[],\n        )\n        .await\n        .expect_err(\"path should remain read-only\");\n\n        assert!(\n            error.message.contains(\"read-only column 'path'\"),\n            \"unexpected error: {error:?}\"\n        );\n        assert!(staged_writes\n            .lock()\n            .expect(\"staged writes lock\")\n            .deltas\n            .is_empty());\n    }\n\n    #[tokio::test]\n    async fn execute_sql_delete_directory_by_version_stages_tombstone() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(RowsLiveStateReader {\n            rows: vec![\n                live_directory_row(\"dir-docs\", \"version-a\", 
None, \"docs\", false),\n                live_directory_row(\"dir-guides\", \"version-b\", Some(\"dir-docs\"), \"guides\", false),\n            ],\n        });\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"DELETE FROM lix_directory_by_version \\\n             WHERE id = 'dir-guides' AND lixcol_version_id = 'version-b'\",\n            &[],\n        )\n        .await\n        .expect(\"DELETE lix_directory_by_version should stage tombstone\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_all_semantic_rows();\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"dir-guides\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-b\");\n        assert!(rows[0].tombstone);\n        assert_eq!(rows[0].snapshot_content, None);\n    }\n\n    #[tokio::test]\n    async fn execute_sql_insert_into_file_by_version_stages_descriptor_write() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            
blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"INSERT INTO lix_file_by_version (\\\n             id, directory_id, name, hidden, lixcol_version_id\\\n             ) VALUES ('file-readme', 'dir-docs', 'readme.md', false, 'version-b')\",\n            &[],\n        )\n        .await\n        .expect(\"INSERT INTO lix_file_by_version should stage descriptor write\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_semantic_rows(false, \"lix_file_descriptor\");\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"file-readme\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-b\");\n        assert!(!rows[0].global);\n        assert!(!rows[0].untracked);\n        let snapshot: JsonValue =\n            serde_json::from_str(rows[0].snapshot_content.as_deref().unwrap())\n                .expect(\"descriptor snapshot JSON\");\n        assert_eq!(snapshot[\"id\"], \"file-readme\");\n        assert_eq!(snapshot[\"directory_id\"], \"dir-docs\");\n        assert_eq!(snapshot[\"name\"], \"readme.md\");\n        assert_eq!(snapshot[\"hidden\"], false);\n    }\n\n    #[tokio::test]\n    async fn execute_sql_insert_into_active_file_defaults_active_version() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let staged_writes = 
Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"INSERT INTO lix_file (id, directory_id, name, hidden) \\\n             VALUES ('file-readme', 'dir-docs', 'readme.md', false)\",\n            &[],\n        )\n        .await\n        .expect(\"INSERT INTO lix_file should stage descriptor write\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_semantic_rows(false, \"lix_file_descriptor\");\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"file-readme\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-a\");\n        assert!(!rows[0].global);\n        assert!(!rows[0].untracked);\n    }\n\n    #[tokio::test]\n    async fn execute_sql_insert_into_file_with_data_stages_blob_ref() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(DummyLiveStateReader);\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = 
execute_write_sql(\n            &mut ctx,\n            \"INSERT INTO lix_file_by_version (\\\n             id, directory_id, name, hidden, data, lixcol_version_id\\\n             ) VALUES ('file-readme', 'dir-docs', 'readme.md', false, X'4142', 'version-b')\",\n            &[],\n        )\n        .await\n        .expect(\"INSERT INTO lix_file_by_version should stage descriptor and data writes\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let descriptor_rows = overlay.visible_semantic_rows(false, \"lix_file_descriptor\");\n        assert_eq!(descriptor_rows.len(), 1);\n        assert_eq!(descriptor_rows[0].entity_id, \"[\\\"file-readme\\\"]\");\n        let blob_ref_rows = overlay.visible_semantic_rows(false, \"lix_binary_blob_ref\");\n        assert_eq!(blob_ref_rows.len(), 1);\n        assert_eq!(blob_ref_rows[0].entity_id, \"[\\\"file-readme\\\"]\");\n        assert_eq!(blob_ref_rows[0].file_id.as_deref(), Some(\"file-readme\"));\n        assert_eq!(blob_ref_rows[0].version_id, \"version-b\");\n        let snapshot: JsonValue =\n            serde_json::from_str(blob_ref_rows[0].snapshot_content.as_deref().unwrap())\n                .expect(\"blob ref snapshot JSON\");\n        assert_eq!(snapshot[\"id\"], \"file-readme\");\n        assert_eq!(snapshot[\"size_bytes\"], 2);\n        assert!(snapshot[\"blob_hash\"]\n            .as_str()\n            .is_some_and(|value| !value.is_empty()));\n    }\n\n    #[tokio::test]\n    async fn execute_sql_update_file_stages_rewritten_descriptor() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = 
Arc::new(RowsLiveStateReader {\n            rows: vec![\n                live_directory_row(\"dir-docs\", \"version-a\", None, \"docs\", false),\n                live_file_row(\n                    \"file-readme\",\n                    \"version-a\",\n                    Some(\"dir-docs\"),\n                    \"readme.md\",\n                    false,\n                ),\n                live_file_row(\n                    \"file-guide\",\n                    \"version-a\",\n                    Some(\"dir-docs\"),\n                    \"guide.md\",\n                    false,\n                ),\n            ],\n        });\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"UPDATE lix_file \\\n             SET name = 'readme-updated.txt', hidden = true, lixcol_metadata = '{\\\"source\\\":\\\"file-update\\\"}' \\\n             WHERE id = 'file-readme'\",\n            &[],\n        )\n        .await\n        .expect(\"UPDATE lix_file should stage rewritten descriptor\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_semantic_rows(false, \"lix_file_descriptor\");\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"file-readme\\\"]\");\n        
assert_eq!(rows[0].version_id, \"version-a\");\n        let snapshot: JsonValue =\n            serde_json::from_str(rows[0].snapshot_content.as_deref().unwrap())\n                .expect(\"descriptor snapshot JSON\");\n        assert_eq!(snapshot[\"id\"], \"file-readme\");\n        assert_eq!(snapshot[\"directory_id\"], \"dir-docs\");\n        assert_eq!(snapshot[\"name\"], \"readme-updated.txt\");\n        assert_eq!(snapshot[\"hidden\"], true);\n        assert_eq!(\n            rows[0].metadata.as_deref(),\n            Some(\"{\\\"source\\\":\\\"file-update\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn execute_sql_update_file_stages_data_blob_ref() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(RowsLiveStateReader {\n            rows: vec![\n                live_directory_row(\"dir-docs\", \"version-a\", None, \"docs\", false),\n                live_file_row(\n                    \"file-readme\",\n                    \"version-a\",\n                    Some(\"dir-docs\"),\n                    \"readme.md\",\n                    false,\n                ),\n            ],\n        });\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"UPDATE lix_file SET data = X'4142' WHERE id = 'file-readme'\",\n            &[],\n        )\n        .await\n        .expect(\"UPDATE lix_file should stage data write\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n       
 assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        assert!(overlay\n            .visible_semantic_rows(false, \"lix_file_descriptor\")\n            .is_empty());\n        let blob_ref_rows = overlay.visible_semantic_rows(false, \"lix_binary_blob_ref\");\n        assert_eq!(blob_ref_rows.len(), 1);\n        assert_eq!(blob_ref_rows[0].entity_id, \"[\\\"file-readme\\\"]\");\n        let snapshot: JsonValue =\n            serde_json::from_str(blob_ref_rows[0].snapshot_content.as_deref().unwrap())\n                .expect(\"blob ref snapshot JSON\");\n        assert_eq!(snapshot[\"id\"], \"file-readme\");\n        assert_eq!(snapshot[\"size_bytes\"], 2);\n    }\n\n    #[tokio::test]\n    async fn execute_sql_update_file_stages_path_assignment() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(RowsLiveStateReader {\n            rows: vec![\n                live_directory_row(\"dir-docs\", \"version-a\", None, \"docs\", false),\n                live_file_row(\n                    \"file-readme\",\n                    \"version-a\",\n                    Some(\"dir-docs\"),\n                    \"readme.md\",\n                    false,\n                ),\n            ],\n        });\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"UPDATE lix_file SET path = '/docs/renamed.md' WHERE id = 'file-readme'\",\n            &[],\n        )\n        .await\n        .expect(\"path update 
should stage descriptor rewrite\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_semantic_rows(false, \"lix_file_descriptor\");\n        assert_eq!(rows.len(), 1);\n        let snapshot: JsonValue =\n            serde_json::from_str(rows[0].snapshot_content.as_deref().unwrap())\n                .expect(\"descriptor snapshot JSON\");\n        assert_eq!(snapshot[\"directory_id\"], \"dir-docs\");\n        assert_eq!(snapshot[\"name\"], \"renamed.md\");\n    }\n\n    #[tokio::test]\n    async fn execute_sql_delete_file_by_version_stages_descriptor_tombstone() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(RowsLiveStateReader {\n            rows: vec![\n                live_directory_row(\"dir-docs\", \"version-a\", None, \"docs\", false),\n                live_directory_row(\"dir-docs\", \"version-b\", None, \"docs\", false),\n                live_file_row(\n                    \"file-readme\",\n                    \"version-a\",\n                    Some(\"dir-docs\"),\n                    \"readme.md\",\n                    false,\n                ),\n                live_file_row(\n                    \"file-guide\",\n                    \"version-b\",\n                    Some(\"dir-docs\"),\n                    \"guide.md\",\n                    false,\n                ),\n            ],\n        });\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            
blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"DELETE FROM lix_file_by_version \\\n             WHERE id = 'file-guide' AND lixcol_version_id = 'version-b'\",\n            &[],\n        )\n        .await\n        .expect(\"DELETE lix_file_by_version should stage descriptor tombstone\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_all_semantic_rows();\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"file-guide\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-b\");\n        assert!(rows[0].tombstone);\n        assert_eq!(rows[0].snapshot_content, None);\n    }\n\n    #[tokio::test]\n    async fn execute_sql_update_entity_surface_stages_rewritten_snapshot() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(RowsLiveStateReader {\n            rows: vec![\n                live_entity_row(\"entity-a\", \"version-a\", \"A\"),\n                live_entity_row(\"entity-b\", \"version-a\", \"B\"),\n            ],\n        });\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![json!({\n                \"x-lix-key\": 
\"test_state_schema\",\n                \"type\": \"object\",\n                \"properties\": {\n                    \"value\": { \"type\": \"string\" }\n                }\n            })],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"UPDATE test_state_schema \\\n             SET value = 'updated', lixcol_metadata = '{\\\"source\\\":\\\"entity-update\\\"}' \\\n             WHERE value = 'A'\",\n            &[],\n        )\n        .await\n        .expect(\"UPDATE entity surface should stage rewritten row\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_semantic_rows(false, \"test_state_schema\");\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"entity-a\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-a\");\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"updated\\\"}\")\n        );\n        assert_eq!(\n            rows[0].metadata.as_deref(),\n            Some(\"{\\\"source\\\":\\\"entity-update\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn execute_sql_delete_entity_by_version_stages_tombstone() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(RowsLiveStateReader {\n            rows: vec![\n                live_entity_row(\"entity-a\", \"version-a\", \"A\"),\n                live_entity_row(\"entity-b\", \"version-b\", \"B\"),\n            ],\n        });\n        let staged_writes = 
Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![json!({\n                \"x-lix-key\": \"test_state_schema\",\n                \"type\": \"object\",\n                \"properties\": {\n                    \"value\": { \"type\": \"string\" }\n                }\n            })],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"DELETE FROM test_state_schema_by_version \\\n             WHERE lixcol_version_id = 'version-b'\",\n            &[],\n        )\n        .await\n        .expect(\"DELETE entity by-version surface should stage tombstone\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_all_semantic_rows();\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"entity-b\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-b\");\n        assert!(rows[0].tombstone);\n        assert_eq!(rows[0].snapshot_content, None);\n    }\n\n    #[tokio::test]\n    async fn execute_sql_update_lix_state_stages_rewritten_rows() {\n        let blob_reader: Arc<dyn BlobDataReader> = Arc::new(DummyBlobReader);\n        let live_state = Arc::new(RowsLiveStateReader {\n            rows: vec![\n                live_lix_state_row(\"entity-1\", Some(\"{\\\"source\\\":\\\"match\\\"}\")),\n                live_lix_state_row(\"entity-2\", 
Some(\"{\\\"source\\\":\\\"skip\\\"}\")),\n            ],\n        });\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(\n            &mut ctx,\n            \"UPDATE lix_state \\\n             SET snapshot_content = '{\\\"key\\\":\\\"hello\\\",\\\"value\\\":\\\"updated\\\"}', \\\n                 metadata = '{\\\"schema_key\\\":\\\"lix_key_value\\\"}' \\\n             WHERE metadata = lix_json('{\\\"source\\\":\\\"match\\\"}')\",\n            &[],\n        )\n        .await\n        .expect(\"UPDATE lix_state should stage rewritten rows\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(1)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_semantic_rows(false, \"lix_key_value\");\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].entity_id, \"[\\\"entity-1\\\"]\");\n        assert_eq!(rows[0].version_id, \"version-a\");\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"key\\\":\\\"hello\\\",\\\"value\\\":\\\"updated\\\"}\")\n        );\n        assert_eq!(\n            rows[0].metadata.as_deref(),\n            Some(\"{\\\"schema_key\\\":\\\"lix_key_value\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn execute_sql_delete_lix_state_without_where_stages_all_rows() {\n        let blob_reader: Arc<dyn BlobDataReader> = 
Arc::new(DummyBlobReader);\n        let live_state = Arc::new(RowsLiveStateReader {\n            rows: vec![\n                live_lix_state_row(\"entity-1\", Some(\"{\\\"source\\\":\\\"one\\\"}\")),\n                live_lix_state_row(\"entity-2\", Some(\"{\\\"source\\\":\\\"two\\\"}\")),\n            ],\n        });\n        let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default()));\n        let mut ctx = DummySqlWriteExecutionContext {\n            active_version_id: \"version-a\",\n            blob_reader,\n            live_state,\n            staged_writes: Arc::clone(&staged_writes),\n            schema_definitions: vec![],\n        };\n\n        let result = execute_write_sql(&mut ctx, \"DELETE FROM lix_state\", &[])\n            .await\n            .expect(\"DELETE FROM lix_state should follow DataFusion delete-all semantics\");\n\n        assert_eq!(result.columns, vec![\"count\"]);\n        assert_eq!(result.rows, vec![vec![Value::Integer(2)]]);\n\n        let staged_writes = staged_writes.lock().expect(\"staged writes lock\");\n        assert_eq!(staged_writes.deltas.len(), 1);\n        let overlay = staged_writes.deltas[0]\n            .pending_write_overlay()\n            .expect(\"staged delta should expose pending overlay\");\n        let rows = overlay.visible_all_semantic_rows();\n        assert_eq!(rows.len(), 2);\n        assert!(rows.iter().all(|row| row.tombstone));\n        assert!(rows.iter().all(|row| row.snapshot_content.is_none()));\n        assert!(rows.iter().any(|row| row.entity_id == \"[\\\"entity-1\\\"]\"));\n        assert!(rows.iter().any(|row| row.entity_id == \"[\\\"entity-2\\\"]\"));\n    }\n\n    struct BackendSqlExecutionContext<'a> {\n        active_version_id: &'a str,\n        storage: StorageContext,\n        blob_reader: Arc<dyn BlobDataReader>,\n        live_state: Arc<dyn LiveStateReader>,\n        schema_definitions: Vec<JsonValue>,\n    }\n\n    impl SqlExecutionContext for 
BackendSqlExecutionContext<'_> {\n        fn active_version_id(&self) -> &str {\n            self.active_version_id\n        }\n\n        fn live_state(&self) -> Arc<dyn LiveStateReader> {\n            Arc::clone(&self.live_state)\n        }\n\n        fn functions(&self) -> FunctionProviderHandle {\n            test_functions()\n        }\n\n        fn blob_reader(&self) -> Arc<dyn BlobDataReader> {\n            Arc::clone(&self.blob_reader)\n        }\n\n        fn commit_store_query_source(&self) -> SqlCommitStoreQuerySource {\n            let base_scope = test_read_scope(self.storage.clone());\n            let read_scope = StorageReadScope::new(base_scope.store());\n            CommitStoreQuerySource {\n                commit_store_reader: Arc::new(CommitStoreContext::new().reader(read_scope.store())),\n                json_reader: JsonStoreContext::new().reader(read_scope.store()),\n            }\n        }\n\n        fn commit_graph(&self) -> Box<dyn CommitGraphReader> {\n            Box::new(DummyCommitGraphReader)\n        }\n\n        fn version_ref(&self) -> Arc<dyn VersionRefReader> {\n            Arc::new(\n                crate::version::VersionContext::new(Arc::new(UntrackedStateContext::new()))\n                    .ref_reader(self.storage.clone()),\n            )\n        }\n\n        fn list_visible_schemas(&self) -> Result<Vec<JsonValue>, LixError> {\n            Ok(self.schema_definitions.clone())\n        }\n    }\n\n    async fn setup_sql2_state_fixture(\n    ) -> Result<(crate::backend::testing::UnitTestBackend, JsonValue), crate::LixError> {\n        let backend = crate::backend::testing::UnitTestBackend::new();\n        let init_receipt = Engine::initialize(Box::new(backend.clone())).await?;\n        let storage = crate::storage::StorageContext::new(std::sync::Arc::new(backend.clone()));\n        {\n            let mut transaction = storage.begin_write_transaction().await?;\n            let version_ctx = 
crate::version::VersionContext::new(Arc::new(\n                crate::untracked_state::UntrackedStateContext::new(),\n            ));\n            let mut writes = StorageWriteSet::new();\n            let canonical_rows = vec![\n                prepare_version_ref_row(\n                    \"version-a\",\n                    &init_receipt.initial_commit_id,\n                    \"1970-01-01T00:00:00.000Z\",\n                )?,\n                prepare_version_ref_row(\n                    \"version-b\",\n                    &init_receipt.initial_commit_id,\n                    \"1970-01-01T00:00:00.000Z\",\n                )?,\n            ];\n            let rows = canonical_rows\n                .into_iter()\n                .map(|prepared| prepared.row)\n                .collect::<Vec<_>>();\n            version_ctx.stage_canonical_ref_rows(&mut writes, &rows)?;\n            writes.apply(&mut transaction.as_mut()).await?;\n            transaction.commit().await?;\n        }\n        let engine = Engine::new(Box::new(backend.clone())).await?;\n        let session_a = engine.open_session(\"version-a\").await?;\n        let session_b = engine.open_session(\"version-b\").await?;\n        let schema_definition = json!({\n            \"x-lix-key\": \"test_state_schema\",\n            \"type\": \"object\",\n            \"properties\": {\n                \"value\": { \"type\": \"string\" }\n            },\n            \"required\": [\"value\"],\n            \"additionalProperties\": false\n        });\n        session_a\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"test_state_schema\\\",\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"value\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"value\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n               
  )\",\n                &[],\n            )\n            .await?;\n        session_b\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"test_state_schema\\\",\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"value\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"value\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await?;\n        session_a\n            .execute(\n                \"INSERT INTO lix_state (\\\n\t         entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n\t         ) VALUES (\\\n\t         lix_json('[\\\"entity-a\\\"]'), 'test_state_schema', NULL, '{\\\"value\\\":\\\"A\\\"}', false, false\\\n\t         )\",\n                &[],\n            )\n            .await?;\n        session_b\n            .execute(\n                \"INSERT INTO lix_state (\\\n\t         entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n\t         ) VALUES (\\\n\t         lix_json('[\\\"entity-b\\\"]'), 'test_state_schema', NULL, '{\\\"value\\\":\\\"B\\\"}', false, false\\\n\t         )\",\n                &[],\n            )\n            .await?;\n        session_a\n\t\t.execute(\n\t\t\t\"INSERT INTO lix_state (\\\n\t         entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n\t         ) VALUES (\\\n\t         lix_json('[\\\"dir-docs\\\"]'), 'lix_directory_descriptor', NULL, '{\\\"id\\\":\\\"dir-docs\\\",\\\"parent_id\\\":null,\\\"name\\\":\\\"docs\\\",\\\"hidden\\\":false}', false, false\\\n\t         )\",\n\t\t\t&[],\n\t\t)\n            .await?;\n        session_a\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) \\\n                 VALUES ('file-a', '/docs/readme.md', X'4142')\",\n                
&[],\n            )\n            .await?;\n        Ok((backend, schema_definition))\n    }\n\n    fn test_live_state_context() -> LiveStateContext {\n        LiveStateContext::new(\n            TrackedStateContext::new(),\n            UntrackedStateContext::new(),\n            crate::commit_graph::CommitGraphContext::new(),\n        )\n    }\n\n    fn run_async_test_with_large_stack(\n        test: impl FnOnce() -> futures_util::future::LocalBoxFuture<'static, ()> + Send + 'static,\n    ) {\n        std::thread::Builder::new()\n            .name(\"sql2-execute-test\".to_string())\n            .stack_size(32 * 1024 * 1024)\n            .spawn(move || {\n                tokio::runtime::Builder::new_current_thread()\n                    .enable_all()\n                    .build()\n                    .expect(\"test runtime should build\")\n                    .block_on(test());\n            })\n            .expect(\"test thread should spawn\")\n            .join()\n            .expect(\"test thread should join\");\n    }\n\n    #[test]\n    fn execute_sql_reads_lix_state_by_version() {\n        run_async_test_with_large_stack(|| {\n            Box::pin(async move {\n                let (backend, schema_definition) = setup_sql2_state_fixture()\n                    .await\n                    .expect(\"fixture should initialize\");\n                let backend = Arc::new(backend);\n                let backend_ref: Arc<dyn crate::Backend + Send + Sync> = backend;\n                let storage = StorageContext::new(Arc::clone(&backend_ref));\n                let blob_reader: Arc<dyn BlobDataReader> =\n                    Arc::new(BackendBlobReader(storage.clone()));\n                let ctx = BackendSqlExecutionContext {\n                    active_version_id: \"version-a\",\n                    storage: storage.clone(),\n                    blob_reader: Arc::clone(&blob_reader),\n                    live_state: 
Arc::new(test_live_state_context().reader(storage.clone())),\n                    schema_definitions: vec![schema_definition],\n                };\n\n                let result = execute_sql(\n                    &ctx,\n                    \"SELECT entity_id, version_id, snapshot_content, commit_id \\\n                     FROM lix_state_by_version \\\n                     WHERE version_id = 'version-b' AND schema_key = 'test_state_schema'\",\n                    &[],\n                )\n                .await\n                .expect(\"sql2 execute should read lix_state_by_version\");\n\n                assert_eq!(\n                    result.columns,\n                    vec![\"entity_id\", \"version_id\", \"snapshot_content\", \"commit_id\"]\n                );\n                assert_eq!(result.rows.len(), 1);\n                assert_eq!(result.rows[0][0], Value::Json(json!([\"entity-b\"])));\n                assert_eq!(result.rows[0][1], Value::Text(\"version-b\".to_string()));\n                assert_eq!(result.rows[0][2], Value::Json(json!({\"value\": \"B\"})));\n                match &result.rows[0][3] {\n                    Value::Text(commit_id) => assert!(!commit_id.is_empty()),\n                    other => panic!(\"expected non-null commit_id text, got {other:?}\"),\n                }\n            })\n        });\n    }\n\n    #[test]\n    fn execute_sql_supports_broad_lix_state_by_version_reads() {\n        run_async_test_with_large_stack(|| {\n            Box::pin(async move {\n                let (backend, schema_definition) = setup_sql2_state_fixture()\n                    .await\n                    .expect(\"fixture should initialize\");\n                let backend = Arc::new(backend);\n                let backend_ref: Arc<dyn crate::Backend + Send + Sync> = backend;\n                let storage = StorageContext::new(Arc::clone(&backend_ref));\n                let blob_reader: Arc<dyn BlobDataReader> =\n                    
Arc::new(BackendBlobReader(storage.clone()));\n                let ctx = BackendSqlExecutionContext {\n                    active_version_id: \"version-a\",\n                    storage: storage.clone(),\n                    blob_reader: Arc::clone(&blob_reader),\n                    live_state: Arc::new(test_live_state_context().reader(storage.clone())),\n                    schema_definitions: vec![schema_definition],\n                };\n\n                let result = execute_sql(\n                    &ctx,\n                    \"SELECT entity_id FROM lix_state_by_version WHERE schema_key = 'test_state_schema'\",\n                    &[],\n                )\n                .await\n                .expect(\"broad by-version read should succeed\");\n\n                assert!(\n                    result.rows.iter().any(|row| row[0] == Value::Json(json!([\"entity-a\"])))\n                        && result.rows.iter().any(|row| row[0] == Value::Json(json!([\"entity-b\"]))),\n                    \"expected broad by-version read to include rows from multiple visible versions: {:?}\",\n                    result.rows\n                );\n            })\n        });\n    }\n\n    #[test]\n    fn execute_sql_reads_lix_state_from_active_version() {\n        run_async_test_with_large_stack(|| {\n            Box::pin(async move {\n                let (backend, schema_definition) = setup_sql2_state_fixture()\n                    .await\n                    .expect(\"fixture should initialize\");\n                let backend = Arc::new(backend);\n                let backend_ref: Arc<dyn crate::Backend + Send + Sync> = backend;\n                let storage = StorageContext::new(Arc::clone(&backend_ref));\n                let blob_reader: Arc<dyn BlobDataReader> =\n                    Arc::new(BackendBlobReader(storage.clone()));\n                let ctx = BackendSqlExecutionContext {\n                    active_version_id: \"version-a\",\n                    storage: storage.clone(),\n                    blob_reader: 
Arc::clone(&blob_reader),\n                    live_state: Arc::new(test_live_state_context().reader(storage.clone())),\n                    schema_definitions: vec![schema_definition],\n                };\n\n                let result = execute_sql(\n                    &ctx,\n                    \"SELECT entity_id, snapshot_content \\\n                     FROM lix_state \\\n                     WHERE schema_key = 'test_state_schema'\",\n                    &[],\n                )\n                .await\n                .expect(\"sql2 execute should read lix_state\");\n\n                assert_eq!(result.columns, vec![\"entity_id\", \"snapshot_content\"]);\n                assert_eq!(result.rows.len(), 1);\n                assert_eq!(result.rows[0][0], Value::Json(json!([\"entity-a\"])));\n                assert_eq!(result.rows[0][1], Value::Json(json!({\"value\": \"A\"})));\n            })\n        });\n    }\n\n    #[test]\n    fn execute_sql_reads_entity_view_from_active_version() {\n        run_async_test_with_large_stack(|| {\n            Box::pin(async move {\n                let (backend, schema_definition) = setup_sql2_state_fixture()\n                    .await\n                    .expect(\"fixture should initialize\");\n                let backend = Arc::new(backend);\n                let backend_ref: Arc<dyn crate::Backend + Send + Sync> = backend;\n                let storage = StorageContext::new(Arc::clone(&backend_ref));\n                let blob_reader: Arc<dyn BlobDataReader> =\n                    Arc::new(BackendBlobReader(storage.clone()));\n                let ctx = BackendSqlExecutionContext {\n                    active_version_id: \"version-a\",\n                    storage: storage.clone(),\n                    blob_reader: Arc::clone(&blob_reader),\n                    live_state: Arc::new(test_live_state_context().reader(storage.clone())),\n                    schema_definitions: vec![schema_definition],\n                };\n\n        
        let result = execute_sql(\n                    &ctx,\n                    \"SELECT value, lixcol_entity_id \\\n                     FROM test_state_schema\",\n                    &[],\n                )\n                .await\n                .expect(\"sql2 execute should read entity view\");\n\n                assert_eq!(result.columns, vec![\"value\", \"lixcol_entity_id\"]);\n                assert_eq!(result.rows.len(), 1);\n                assert_eq!(result.rows[0][0], Value::Text(\"A\".to_string()));\n                assert_eq!(result.rows[0][1], Value::Json(json!([\"entity-a\"])));\n            })\n        });\n    }\n\n    #[test]\n    fn execute_sql_reads_entity_by_version_view() {\n        run_async_test_with_large_stack(|| {\n            Box::pin(async move {\n                let (backend, schema_definition) = setup_sql2_state_fixture()\n                    .await\n                    .expect(\"fixture should initialize\");\n                let backend = Arc::new(backend);\n                let backend_ref: Arc<dyn crate::Backend + Send + Sync> = backend;\n                let storage = StorageContext::new(Arc::clone(&backend_ref));\n                let blob_reader: Arc<dyn BlobDataReader> =\n                    Arc::new(BackendBlobReader(storage.clone()));\n                let ctx = BackendSqlExecutionContext {\n                    active_version_id: \"version-a\",\n                    storage: storage.clone(),\n                    blob_reader: Arc::clone(&blob_reader),\n                    live_state: Arc::new(test_live_state_context().reader(storage.clone())),\n                    schema_definitions: vec![schema_definition],\n                };\n\n                let result = execute_sql(\n                    &ctx,\n                    \"SELECT value, lixcol_version_id \\\n                     FROM test_state_schema_by_version \\\n                     WHERE lixcol_version_id = 'version-b'\",\n                    &[],\n                )\n         
       .await\n                .expect(\"sql2 execute should read entity by-version view\");\n\n                assert_eq!(result.columns, vec![\"value\", \"lixcol_version_id\"]);\n                assert_eq!(result.rows.len(), 1);\n                assert_eq!(result.rows[0][0], Value::Text(\"B\".to_string()));\n                assert_eq!(result.rows[0][1], Value::Text(\"version-b\".to_string()));\n            })\n        });\n    }\n\n    #[test]\n    fn execute_sql_reads_lix_directory_by_version_view() {\n        run_async_test_with_large_stack(|| {\n            Box::pin(async move {\n                let (backend, schema_definition) = setup_sql2_state_fixture()\n                    .await\n                    .expect(\"fixture should initialize\");\n                let backend = Arc::new(backend);\n                let backend_ref: Arc<dyn crate::Backend + Send + Sync> = backend;\n                let storage = StorageContext::new(Arc::clone(&backend_ref));\n                let blob_reader: Arc<dyn BlobDataReader> =\n                    Arc::new(BackendBlobReader(storage.clone()));\n                let ctx = BackendSqlExecutionContext {\n                    active_version_id: \"version-a\",\n                    storage: storage.clone(),\n                    blob_reader: Arc::clone(&blob_reader),\n                    live_state: Arc::new(test_live_state_context().reader(storage.clone())),\n                    schema_definitions: vec![schema_definition],\n                };\n\n                let result = execute_sql(\n                    &ctx,\n                    \"SELECT path, name, lixcol_version_id \\\n                     FROM lix_directory_by_version \\\n                     WHERE id = 'dir-docs' AND lixcol_version_id = 'version-a'\",\n                    &[],\n                )\n                .await\n                .expect(\"sql2 execute should read lix_directory_by_version\");\n\n                assert_eq!(result.columns, vec![\"path\", \"name\", 
\"lixcol_version_id\"]);\n                assert_eq!(result.rows.len(), 1);\n                assert_eq!(result.rows[0][0], Value::Text(\"/docs/\".to_string()));\n                assert_eq!(result.rows[0][1], Value::Text(\"docs\".to_string()));\n                assert_eq!(result.rows[0][2], Value::Text(\"version-a\".to_string()));\n            })\n        });\n    }\n\n    #[test]\n    fn execute_sql_reads_lix_directory_from_active_version() {\n        run_async_test_with_large_stack(|| {\n            Box::pin(async move {\n                let (backend, schema_definition) = setup_sql2_state_fixture()\n                    .await\n                    .expect(\"fixture should initialize\");\n                let backend = Arc::new(backend);\n                let backend_ref: Arc<dyn crate::Backend + Send + Sync> = backend;\n                let storage = StorageContext::new(Arc::clone(&backend_ref));\n                let blob_reader: Arc<dyn BlobDataReader> =\n                    Arc::new(BackendBlobReader(storage.clone()));\n                let ctx = BackendSqlExecutionContext {\n                    active_version_id: \"version-a\",\n                    storage: storage.clone(),\n                    blob_reader: Arc::clone(&blob_reader),\n                    live_state: Arc::new(test_live_state_context().reader(storage.clone())),\n                    schema_definitions: vec![schema_definition],\n                };\n\n                let result = execute_sql(\n                    &ctx,\n                    \"SELECT path, name \\\n                     FROM lix_directory \\\n                     WHERE id = 'dir-docs'\",\n                    &[],\n                )\n                .await\n                .expect(\"sql2 execute should read lix_directory\");\n\n                assert_eq!(result.columns, vec![\"path\", \"name\"]);\n                assert_eq!(result.rows.len(), 1);\n                assert_eq!(result.rows[0][0], Value::Text(\"/docs/\".to_string()));\n            
    assert_eq!(result.rows[0][1], Value::Text(\"docs\".to_string()));\n            })\n        });\n    }\n\n    #[test]\n    fn execute_sql_reads_lix_file_by_version_view() {\n        run_async_test_with_large_stack(|| {\n            Box::pin(async move {\n                let (backend, schema_definition) = setup_sql2_state_fixture()\n                    .await\n                    .expect(\"fixture should initialize\");\n                let backend = Arc::new(backend);\n                let backend_ref: Arc<dyn crate::Backend + Send + Sync> = backend;\n                let storage = StorageContext::new(Arc::clone(&backend_ref));\n                let blob_reader: Arc<dyn BlobDataReader> =\n                    Arc::new(BackendBlobReader(storage.clone()));\n                let ctx = BackendSqlExecutionContext {\n                    active_version_id: \"version-a\",\n                    storage: storage.clone(),\n                    blob_reader: Arc::clone(&blob_reader),\n                    live_state: Arc::new(test_live_state_context().reader(storage.clone())),\n                    schema_definitions: vec![schema_definition],\n                };\n\n                let result = execute_sql(\n                    &ctx,\n                    \"SELECT path, name, data, lixcol_version_id \\\n                     FROM lix_file_by_version \\\n                     WHERE id = 'file-a' AND lixcol_version_id = 'version-a'\",\n                    &[],\n                )\n                .await\n                .expect(\"sql2 execute should read lix_file_by_version\");\n\n                assert_eq!(\n                    result.columns,\n                    vec![\"path\", \"name\", \"data\", \"lixcol_version_id\"]\n                );\n                assert_eq!(result.rows.len(), 1);\n                assert_eq!(\n                    result.rows[0][0],\n                    Value::Text(\"/docs/readme.md\".to_string())\n                );\n                assert_eq!(result.rows[0][1], 
Value::Text(\"readme.md\".to_string()));\n                assert_eq!(result.rows[0][2], Value::Blob(vec![0x41, 0x42]));\n                assert_eq!(result.rows[0][3], Value::Text(\"version-a\".to_string()));\n            })\n        });\n    }\n\n    #[test]\n    fn execute_sql_reads_lix_file_from_active_version() {\n        run_async_test_with_large_stack(|| {\n            Box::pin(async move {\n                let (backend, schema_definition) = setup_sql2_state_fixture()\n                    .await\n                    .expect(\"fixture should initialize\");\n                let backend = Arc::new(backend);\n                let backend_ref: Arc<dyn crate::Backend + Send + Sync> = backend;\n                let storage = StorageContext::new(Arc::clone(&backend_ref));\n                let blob_reader: Arc<dyn BlobDataReader> =\n                    Arc::new(BackendBlobReader(storage.clone()));\n                let ctx = BackendSqlExecutionContext {\n                    active_version_id: \"version-a\",\n                    storage: storage.clone(),\n                    blob_reader: Arc::clone(&blob_reader),\n                    live_state: Arc::new(test_live_state_context().reader(storage.clone())),\n                    schema_definitions: vec![schema_definition],\n                };\n\n                let result = execute_sql(\n                    &ctx,\n                    \"SELECT path, name, data \\\n                     FROM lix_file \\\n                     WHERE id = 'file-a'\",\n                    &[],\n                )\n                .await\n                .expect(\"sql2 execute should read lix_file\");\n\n                assert_eq!(result.columns, vec![\"path\", \"name\", \"data\"]);\n                assert_eq!(result.rows.len(), 1);\n                assert_eq!(\n                    result.rows[0][0],\n                    Value::Text(\"/docs/readme.md\".to_string())\n                );\n                assert_eq!(result.rows[0][1], 
Value::Text(\"readme.md\".to_string()));\n                assert_eq!(result.rows[0][2], Value::Blob(vec![0x41, 0x42]));\n            })\n        });\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/file_history_provider.rs",
    "content": "use std::any::Any;\nuse std::collections::{BTreeMap, BTreeSet};\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse datafusion::arrow::array::{ArrayRef, BinaryArray, BooleanArray, Int64Array, StringArray};\nuse datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef};\nuse datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions};\nuse datafusion::catalog::{Session, TableProvider};\nuse datafusion::common::{DataFusionError, Result};\nuse datafusion::datasource::TableType;\nuse datafusion::execution::TaskContext;\nuse datafusion::logical_expr::{Expr, TableProviderFilterPushDown};\nuse datafusion::physical_expr::EquivalenceProperties;\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};\nuse datafusion::physical_plan::stream::RecordBatchStreamAdapter;\nuse datafusion::physical_plan::{\n    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,\n};\nuse futures_util::stream;\nuse serde::Deserialize;\nuse tokio::sync::Mutex;\n\nuse crate::binary_cas::{BlobDataReader, BlobHash};\nuse crate::commit_graph::CommitGraphReader;\nuse crate::serialize_row_metadata;\nuse crate::LixError;\n\nuse super::history_projection::{tombstone_identity_column_value, HistoryIdentityProjection};\nuse super::history_route::{\n    history_descriptor_event_matches, load_history_entries, parse_history_filter,\n    HistoryColumnStyle, HistoryEntry, HistoryRoute, HistoryViewDescriptor, HISTORY_COL_CHANGE_ID,\n    HISTORY_COL_COMMIT_CREATED_AT, HISTORY_COL_DEPTH, HISTORY_COL_ENTITY_ID, HISTORY_COL_FILE_ID,\n    HISTORY_COL_METADATA, HISTORY_COL_OBSERVED_COMMIT_ID, HISTORY_COL_SCHEMA_KEY,\n    HISTORY_COL_SNAPSHOT_CONTENT, HISTORY_COL_START_COMMIT_ID,\n};\nuse super::result_metadata::json_field;\nuse super::SqlCommitStoreQuerySource;\nuse crate::commit_store::MaterializedChange;\n\nconst FILE_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_file_descriptor\";\nconst DIRECTORY_DESCRIPTOR_SCHEMA_KEY: 
&str = \"lix_directory_descriptor\";\nconst BLOB_REF_SCHEMA_KEY: &str = \"lix_binary_blob_ref\";\n\npub(crate) async fn register_lix_file_history_provider(\n    session: &datafusion::prelude::SessionContext,\n    commit_graph: Box<dyn CommitGraphReader>,\n    query_source: SqlCommitStoreQuerySource,\n    blob_reader: Arc<dyn BlobDataReader>,\n) -> Result<(), LixError> {\n    session\n        .register_table(\n            \"lix_file_history\",\n            Arc::new(LixFileHistoryProvider::new(\n                Arc::new(Mutex::new(commit_graph)),\n                query_source,\n                blob_reader,\n            )),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    Ok(())\n}\n\nstruct LixFileHistoryProvider {\n    schema: SchemaRef,\n    commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n    query_source: SqlCommitStoreQuerySource,\n    blob_reader: Arc<dyn BlobDataReader>,\n}\n\nimpl std::fmt::Debug for LixFileHistoryProvider {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixFileHistoryProvider\").finish()\n    }\n}\n\nimpl LixFileHistoryProvider {\n    fn new(\n        commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n        query_source: SqlCommitStoreQuerySource,\n        blob_reader: Arc<dyn BlobDataReader>,\n    ) -> Self {\n        Self {\n            schema: lix_file_history_schema(),\n            commit_graph,\n            query_source,\n            blob_reader,\n        }\n    }\n}\n\n#[async_trait]\nimpl TableProvider for LixFileHistoryProvider {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n\n    fn table_type(&self) -> TableType {\n        TableType::View\n    }\n\n    fn supports_filters_pushdown(\n        &self,\n        filters: &[&Expr],\n    ) -> Result<Vec<TableProviderFilterPushDown>> {\n        Ok(filters\n            .iter()\n            .map(|filter| {\n              
  if parse_history_filter(filter, HistoryColumnStyle::Prefixed).is_some() {\n                    TableProviderFilterPushDown::Exact\n                } else {\n                    TableProviderFilterPushDown::Unsupported\n                }\n            })\n            .collect())\n    }\n\n    async fn scan(\n        &self,\n        _state: &dyn Session,\n        projection: Option<&Vec<usize>>,\n        filters: &[Expr],\n        limit: Option<usize>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        let schema = projected_schema(&self.schema, projection)?;\n        let needs_data = projection.is_none_or(|projection| {\n            projection.iter().any(|index| {\n                self.schema\n                    .field(*index)\n                    .name()\n                    .as_str()\n                    .eq_ignore_ascii_case(\"data\")\n            })\n        });\n        Ok(Arc::new(LixFileHistoryScanExec::new(\n            Arc::clone(&self.commit_graph),\n            self.query_source.clone(),\n            Arc::clone(&self.blob_reader),\n            schema,\n            needs_data,\n            HistoryRoute::from_filters(filters, HistoryColumnStyle::Prefixed),\n            limit,\n        )))\n    }\n}\n\nstruct LixFileHistoryScanExec {\n    commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n    query_source: SqlCommitStoreQuerySource,\n    blob_reader: Arc<dyn BlobDataReader>,\n    schema: SchemaRef,\n    needs_data: bool,\n    route: HistoryRoute,\n    limit: Option<usize>,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixFileHistoryScanExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixFileHistoryScanExec\")\n            .field(\"route\", &self.route)\n            .field(\"limit\", &self.limit)\n            .finish()\n    }\n}\n\nimpl LixFileHistoryScanExec {\n    fn new(\n        commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n        query_source: 
SqlCommitStoreQuerySource,\n        blob_reader: Arc<dyn BlobDataReader>,\n        schema: SchemaRef,\n        needs_data: bool,\n        route: HistoryRoute,\n        limit: Option<usize>,\n    ) -> Self {\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Incremental,\n            Boundedness::Bounded,\n        );\n        Self {\n            commit_graph,\n            query_source,\n            blob_reader,\n            schema,\n            needs_data,\n            route,\n            limit,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixFileHistoryScanExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => write!(\n                f,\n                \"LixFileHistoryScanExec(route={:?}, limit={:?})\",\n                self.route, self.limit\n            ),\n            DisplayFormatType::TreeRender => write!(f, \"LixFileHistoryScanExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixFileHistoryScanExec {\n    fn name(&self) -> &str {\n        \"LixFileHistoryScanExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixFileHistoryScanExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        
_context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixFileHistoryScanExec only exposes one partition, got {partition}\"\n            )));\n        }\n\n        let commit_graph = Arc::clone(&self.commit_graph);\n        let query_source = self.query_source.clone();\n        let blob_reader = Arc::clone(&self.blob_reader);\n        let schema = Arc::clone(&self.schema);\n        let stream_schema = Arc::clone(&schema);\n        let route = self.route.clone();\n        let limit = self.limit;\n        let needs_data = self.needs_data;\n\n        let fut = async move {\n            let mut rows = load_file_history_rows(\n                commit_graph,\n                query_source,\n                &blob_reader,\n                &route,\n                needs_data,\n            )\n            .await\n            .map_err(lix_error_to_datafusion_error)?;\n            if let Some(limit) = limit {\n                rows.truncate(limit);\n            }\n            file_history_record_batch(&stream_schema, &rows).map_err(lix_error_to_datafusion_error)\n        };\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            schema,\n            stream::once(fut),\n        )))\n    }\n}\n\n#[derive(Debug, Clone)]\nstruct FileHistoryDescriptorRecord {\n    id: String,\n    directory_id: Option<String>,\n    name: Option<String>,\n    hidden: Option<bool>,\n    entry: HistoryEntry,\n}\n\n#[derive(Debug, Clone)]\nstruct FileHistoryDirectoryRecord {\n    id: String,\n    parent_id: Option<String>,\n    name: String,\n    entry: HistoryEntry,\n}\n\n#[derive(Debug, Clone)]\nstruct FileHistoryBlobRecord {\n    file_id: String,\n    blob_hash: Option<String>,\n    entry: HistoryEntry,\n}\n\n#[derive(Debug, Clone)]\nstruct FileHistoryEvent {\n    file_id: String,\n    start_commit_id: String,\n    depth: u32,\n    priority: u8,\n    change: 
MaterializedChange,\n    observed_commit_id: String,\n    commit_created_at: String,\n}\n\n#[derive(Debug, Clone)]\nstruct FileHistoryOutputRow {\n    entity_id: String,\n    id: String,\n    path: Option<String>,\n    directory_id: Option<String>,\n    name: Option<String>,\n    hidden: Option<bool>,\n    data: Option<Vec<u8>>,\n    descriptor_change: MaterializedChange,\n    event: FileHistoryEvent,\n}\n\n#[derive(Debug, Deserialize)]\nstruct FileDescriptorSnapshot {\n    id: String,\n    directory_id: Option<String>,\n    name: String,\n    hidden: bool,\n}\n\n#[derive(Debug, Deserialize)]\nstruct DirectoryDescriptorSnapshot {\n    id: String,\n    parent_id: Option<String>,\n    name: String,\n}\n\n#[derive(Debug, Deserialize)]\nstruct BlobRefSnapshot {\n    id: String,\n    blob_hash: String,\n}\n\nasync fn load_file_history_rows(\n    commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n    query_source: SqlCommitStoreQuerySource,\n    blob_reader: &Arc<dyn BlobDataReader>,\n    route: &HistoryRoute,\n    needs_data: bool,\n) -> Result<Vec<FileHistoryOutputRow>, LixError> {\n    let event_route = route.traversal_only();\n    let event_entries = load_history_entries(\n        HistoryViewDescriptor {\n            view_name: \"lix_file_history\",\n            start_commit_column: HISTORY_COL_START_COMMIT_ID,\n        },\n        Arc::clone(&commit_graph),\n        query_source.json_reader.clone(),\n        &event_route,\n        vec![\n            FILE_DESCRIPTOR_SCHEMA_KEY.to_string(),\n            DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(),\n            BLOB_REF_SCHEMA_KEY.to_string(),\n        ],\n    )\n    .await?;\n    let context_route = route.starts_only();\n    let context_entries = load_history_entries(\n        HistoryViewDescriptor {\n            view_name: \"lix_file_history\",\n            start_commit_column: HISTORY_COL_START_COMMIT_ID,\n        },\n        commit_graph,\n        query_source.json_reader,\n        &context_route,\n        
vec![\n            FILE_DESCRIPTOR_SCHEMA_KEY.to_string(),\n            DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(),\n            BLOB_REF_SCHEMA_KEY.to_string(),\n        ],\n    )\n    .await?;\n\n    let event_descriptors = parse_file_history_descriptors(&event_entries)?;\n    let event_directories = parse_file_history_directories(&event_entries)?;\n    let event_blobs = parse_file_history_blobs(&event_entries)?;\n    let descriptors = parse_file_history_descriptors(&context_entries)?;\n    let directories = parse_file_history_directories(&context_entries)?;\n    let blobs = parse_file_history_blobs(&context_entries)?;\n    let events = file_history_events(\n        &event_descriptors,\n        &event_directories,\n        &event_blobs,\n        &descriptors,\n    );\n\n    let mut output = Vec::new();\n    for event in events {\n        let Some(descriptor) = nearest_file_descriptor(&descriptors, &event) else {\n            continue;\n        };\n        let blob = nearest_blob_ref(&blobs, &event);\n        let data = if needs_data {\n            match blob.and_then(|blob| blob.blob_hash.as_deref()) {\n                Some(blob_hash) => load_single_blob_bytes(blob_reader, blob_hash).await?,\n                None => None,\n            }\n        } else {\n            None\n        };\n        let path = resolve_file_history_path(descriptor, &directories, event.depth);\n        let id = tombstone_identity_column_value(\n            \"id\",\n            &descriptor.id,\n            HistoryIdentityProjection::SingleColumn { column: \"id\" },\n        )?\n        .and_then(|value| value.as_str().map(ToOwned::to_owned))\n        .unwrap_or_else(|| descriptor.id.clone());\n\n        output.push(FileHistoryOutputRow {\n            entity_id: descriptor.id.clone(),\n            id,\n            path,\n            directory_id: descriptor.directory_id.clone(),\n            name: descriptor.name.clone(),\n            hidden: descriptor.hidden,\n            data,\n         
   descriptor_change: descriptor.entry.change.clone(),\n            event,\n        });\n    }\n    output.retain(|row| {\n        let entity_id = entity_id_json_array(&row.entity_id).ok();\n        route.matches_surface_row(\n            FILE_DESCRIPTOR_SCHEMA_KEY,\n            entity_id.as_deref().unwrap_or(&row.entity_id),\n            Some(&row.entity_id),\n            row.event.depth,\n        )\n    });\n\n    output.sort_by(|left, right| {\n        left.entity_id\n            .cmp(&right.entity_id)\n            .then(left.event.start_commit_id.cmp(&right.event.start_commit_id))\n            .then(left.event.depth.cmp(&right.event.depth))\n            .then(\n                left.event\n                    .observed_commit_id\n                    .cmp(&right.event.observed_commit_id),\n            )\n            .then(left.event.change.id.cmp(&right.event.change.id))\n    });\n    Ok(output)\n}\n\nasync fn load_single_blob_bytes(\n    blob_reader: &Arc<dyn BlobDataReader>,\n    blob_hash: &str,\n) -> Result<Option<Vec<u8>>, LixError> {\n    let hash = BlobHash::from_hex(blob_hash)?;\n    Ok(blob_reader\n        .load_bytes_many(&[hash])\n        .await?\n        .into_vec()\n        .into_iter()\n        .next()\n        .flatten())\n}\n\nfn file_history_events(\n    event_descriptors: &[FileHistoryDescriptorRecord],\n    event_directories: &[FileHistoryDirectoryRecord],\n    event_blobs: &[FileHistoryBlobRecord],\n    context_descriptors: &[FileHistoryDescriptorRecord],\n) -> Vec<FileHistoryEvent> {\n    let mut descriptor_ids_by_start = BTreeSet::<(String, String)>::new();\n    let mut directory_ids_by_file_start = BTreeMap::<(String, String), BTreeSet<String>>::new();\n\n    for descriptor in context_descriptors {\n        let key = (\n            descriptor.id.clone(),\n            descriptor.entry.start_commit_id.clone(),\n        );\n        descriptor_ids_by_start.insert(key.clone());\n        if let Some(directory_id) = &descriptor.directory_id {\n    
        directory_ids_by_file_start\n                .entry(key)\n                .or_default()\n                .insert(directory_id.clone());\n        }\n    }\n\n    let mut candidates = Vec::new();\n    for descriptor in event_descriptors {\n        candidates.push(file_history_event_from_entry(\n            descriptor.id.clone(),\n            &descriptor.entry,\n            1,\n        ));\n    }\n    for directory in event_directories {\n        for ((file_id, start_commit_id), directory_ids) in &directory_ids_by_file_start {\n            if start_commit_id == &directory.entry.start_commit_id\n                && directory_ids.contains(&directory.id)\n            {\n                candidates.push(file_history_event_from_entry(\n                    file_id.clone(),\n                    &directory.entry,\n                    2,\n                ));\n            }\n        }\n    }\n    for blob in event_blobs {\n        if descriptor_ids_by_start\n            .contains(&(blob.file_id.clone(), blob.entry.start_commit_id.clone()))\n        {\n            candidates.push(file_history_event_from_entry(\n                blob.file_id.clone(),\n                &blob.entry,\n                3,\n            ));\n        }\n    }\n\n    candidates.sort_by(|left, right| {\n        left.file_id\n            .cmp(&right.file_id)\n            .then(left.start_commit_id.cmp(&right.start_commit_id))\n            .then(left.depth.cmp(&right.depth))\n            .then(left.priority.cmp(&right.priority))\n            .then(left.change.id.cmp(&right.change.id))\n    });\n    candidates.dedup_by(|left, right| {\n        left.file_id == right.file_id\n            && left.start_commit_id == right.start_commit_id\n            && left.depth == right.depth\n    });\n    candidates\n}\n\nfn file_history_event_from_entry(\n    file_id: String,\n    entry: &HistoryEntry,\n    priority: u8,\n) -> FileHistoryEvent {\n    FileHistoryEvent {\n        file_id,\n        start_commit_id: 
entry.start_commit_id.clone(),\n        depth: entry.depth,\n        priority,\n        change: entry.change.clone(),\n        observed_commit_id: entry.observed_commit_id.clone(),\n        commit_created_at: entry.commit_created_at.clone(),\n    }\n}\n\nfn parse_file_history_descriptors(\n    entries: &[HistoryEntry],\n) -> Result<Vec<FileHistoryDescriptorRecord>, LixError> {\n    entries\n        .iter()\n        .filter(|entry| entry.change.schema_key == FILE_DESCRIPTOR_SCHEMA_KEY)\n        .map(|entry| {\n            let Some(snapshot_content) = entry.change.snapshot_content.as_deref() else {\n                return Ok(FileHistoryDescriptorRecord {\n                    id: entry.change.entity_id.as_single_string_owned()?,\n                    directory_id: None,\n                    name: None,\n                    hidden: None,\n                    entry: entry.clone(),\n                });\n            };\n            let snapshot: FileDescriptorSnapshot =\n                serde_json::from_str(snapshot_content).map_err(|error| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        format!(\"invalid lix_file_descriptor history snapshot JSON: {error}\"),\n                    )\n                })?;\n            Ok(FileHistoryDescriptorRecord {\n                id: snapshot.id,\n                directory_id: snapshot.directory_id,\n                name: Some(snapshot.name),\n                hidden: Some(snapshot.hidden),\n                entry: entry.clone(),\n            })\n        })\n        .collect()\n}\n\nfn parse_file_history_directories(\n    entries: &[HistoryEntry],\n) -> Result<Vec<FileHistoryDirectoryRecord>, LixError> {\n    entries\n        .iter()\n        .filter(|entry| entry.change.schema_key == DIRECTORY_DESCRIPTOR_SCHEMA_KEY)\n        .filter_map(|entry| {\n            let snapshot_content = entry.change.snapshot_content.clone()?;\n            Some((entry, snapshot_content))\n        
})\n        .map(|(entry, snapshot_content)| {\n            let snapshot: DirectoryDescriptorSnapshot = serde_json::from_str(&snapshot_content)\n                .map_err(|error| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        format!(\"invalid lix_directory_descriptor history snapshot JSON: {error}\"),\n                    )\n                })?;\n            Ok(FileHistoryDirectoryRecord {\n                id: snapshot.id,\n                parent_id: snapshot.parent_id,\n                name: snapshot.name,\n                entry: entry.clone(),\n            })\n        })\n        .collect()\n}\n\nfn parse_file_history_blobs(\n    entries: &[HistoryEntry],\n) -> Result<Vec<FileHistoryBlobRecord>, LixError> {\n    entries\n        .iter()\n        .filter(|entry| entry.change.schema_key == BLOB_REF_SCHEMA_KEY)\n        .map(|entry| {\n            let Some(snapshot_content) = entry.change.snapshot_content.as_deref() else {\n                return Ok(FileHistoryBlobRecord {\n                    file_id: entry.change.file_id.clone().unwrap_or_else(|| {\n                        entry\n                            .change\n                            .entity_id\n                            .as_single_string_owned()\n                            .expect(\"canonical change entity identity should project\")\n                    }),\n                    blob_hash: None,\n                    entry: entry.clone(),\n                });\n            };\n            let snapshot: BlobRefSnapshot =\n                serde_json::from_str(snapshot_content).map_err(|error| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        format!(\"invalid lix_binary_blob_ref history snapshot JSON: {error}\"),\n                    )\n                })?;\n            Ok(FileHistoryBlobRecord {\n                file_id: entry.change.file_id.clone().unwrap_or(snapshot.id),\n        
        blob_hash: Some(snapshot.blob_hash),\n                entry: entry.clone(),\n            })\n        })\n        .collect()\n}\n\nfn nearest_file_descriptor<'a>(\n    descriptors: &'a [FileHistoryDescriptorRecord],\n    event: &FileHistoryEvent,\n) -> Option<&'a FileHistoryDescriptorRecord> {\n    descriptors\n        .iter()\n        .filter(|descriptor| {\n            let exact_descriptor_event =\n                history_descriptor_event_matches(&descriptor.entry, event.depth, &event.change.id);\n            (exact_descriptor_event || descriptor.name.is_some())\n                && descriptor.id == event.file_id\n                && descriptor.entry.start_commit_id == event.start_commit_id\n                && descriptor.entry.depth >= event.depth\n        })\n        .min_by(|left, right| {\n            left.entry\n                .depth\n                .cmp(&right.entry.depth)\n                .then(left.entry.change.id.cmp(&right.entry.change.id))\n        })\n}\n\nfn nearest_blob_ref<'a>(\n    blobs: &'a [FileHistoryBlobRecord],\n    event: &FileHistoryEvent,\n) -> Option<&'a FileHistoryBlobRecord> {\n    blobs\n        .iter()\n        .filter(|blob| {\n            blob.file_id == event.file_id\n                && blob.entry.start_commit_id == event.start_commit_id\n                && blob.entry.depth >= event.depth\n        })\n        .min_by(|left, right| {\n            left.entry\n                .depth\n                .cmp(&right.entry.depth)\n                .then(left.entry.change.id.cmp(&right.entry.change.id))\n        })\n}\n\nfn resolve_file_history_path(\n    descriptor: &FileHistoryDescriptorRecord,\n    directories: &[FileHistoryDirectoryRecord],\n    target_depth: u32,\n) -> Option<String> {\n    let name = descriptor.name.as_ref()?;\n    let Some(directory_id) = descriptor.directory_id.as_deref() else {\n        return Some(format!(\"/{name}\"));\n    };\n    let directory_path = resolve_directory_history_path(\n        directory_id,\n 
       &descriptor.entry.start_commit_id,\n        target_depth,\n        directories,\n        &mut BTreeMap::new(),\n        &mut BTreeSet::new(),\n    )?;\n    Some(format!(\"{directory_path}{name}\"))\n}\n\nfn resolve_directory_history_path(\n    directory_id: &str,\n    start_commit_id: &str,\n    target_depth: u32,\n    directories: &[FileHistoryDirectoryRecord],\n    cache: &mut BTreeMap<String, Option<String>>,\n    visiting: &mut BTreeSet<String>,\n) -> Option<String> {\n    if let Some(path) = cache.get(directory_id) {\n        return path.clone();\n    }\n    if !visiting.insert(directory_id.to_string()) {\n        cache.insert(directory_id.to_string(), None);\n        return None;\n    }\n    let directory = directories\n        .iter()\n        .filter(|directory| {\n            directory.id == directory_id\n                && directory.entry.start_commit_id == start_commit_id\n                && directory.entry.depth >= target_depth\n        })\n        .min_by(|left, right| {\n            left.entry\n                .depth\n                .cmp(&right.entry.depth)\n                .then(left.entry.change.id.cmp(&right.entry.change.id))\n        })?;\n    let path = match directory.parent_id.as_deref() {\n        Some(parent_id) => {\n            let parent_path = resolve_directory_history_path(\n                parent_id,\n                start_commit_id,\n                target_depth,\n                directories,\n                cache,\n                visiting,\n            )?;\n            format!(\"{parent_path}{}/\", directory.name)\n        }\n        None => format!(\"/{}/\", directory.name),\n    };\n    visiting.remove(directory_id);\n    cache.insert(directory_id.to_string(), Some(path.clone()));\n    Some(path)\n}\n\nfn file_history_record_batch(\n    schema: &SchemaRef,\n    rows: &[FileHistoryOutputRow],\n) -> Result<RecordBatch, LixError> {\n    let columns = schema\n        .fields()\n        .iter()\n        .map(|field| 
file_history_column_array(field.name(), rows))\n        .collect::<Result<Vec<_>, _>>()?;\n    let options = RecordBatchOptions::new().with_row_count(Some(rows.len()));\n    RecordBatch::try_new_with_options(Arc::clone(schema), columns, &options).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"sql2 failed to build lix_file_history record batch: {error}\"),\n        )\n    })\n}\n\nfn file_history_column_array(\n    column_name: &str,\n    rows: &[FileHistoryOutputRow],\n) -> Result<ArrayRef, LixError> {\n    Ok(match column_name {\n        \"id\" => string_array(rows.iter().map(|row| Some(row.id.as_str()))),\n        \"path\" => string_array(rows.iter().map(|row| row.path.as_deref())),\n        \"directory_id\" => string_array(rows.iter().map(|row| row.directory_id.as_deref())),\n        \"name\" => string_array(rows.iter().map(|row| row.name.as_deref())),\n        \"hidden\" => Arc::new(BooleanArray::from(\n            rows.iter().map(|row| row.hidden).collect::<Vec<_>>(),\n        )) as ArrayRef,\n        \"data\" => Arc::new(BinaryArray::from(\n            rows.iter()\n                .map(|row| row.data.as_deref())\n                .collect::<Vec<_>>(),\n        )) as ArrayRef,\n        HISTORY_COL_ENTITY_ID => Arc::new(StringArray::from(\n            rows.iter()\n                .map(|row| entity_id_json_array(&row.entity_id).map(Some))\n                .collect::<std::result::Result<Vec<_>, _>>()?,\n        )) as ArrayRef,\n        HISTORY_COL_SCHEMA_KEY => {\n            string_array(rows.iter().map(|_| Some(FILE_DESCRIPTOR_SCHEMA_KEY)))\n        }\n        HISTORY_COL_FILE_ID => string_array(rows.iter().map(|row| Some(row.entity_id.as_str()))),\n        HISTORY_COL_CHANGE_ID => {\n            string_array(rows.iter().map(|row| Some(row.event.change.id.as_str())))\n        }\n        HISTORY_COL_SNAPSHOT_CONTENT => string_array(\n            rows.iter()\n                .map(|row| 
row.descriptor_change.snapshot_content.as_deref()),\n        ),\n        HISTORY_COL_METADATA => Arc::new(StringArray::from(\n            rows.iter()\n                .map(|row| {\n                    row.descriptor_change\n                        .metadata\n                        .as_ref()\n                        .map(serialize_row_metadata)\n                })\n                .collect::<Vec<_>>(),\n        )),\n        HISTORY_COL_OBSERVED_COMMIT_ID => string_array(\n            rows.iter()\n                .map(|row| Some(row.event.observed_commit_id.as_str())),\n        ),\n        HISTORY_COL_COMMIT_CREATED_AT => string_array(\n            rows.iter()\n                .map(|row| Some(row.event.commit_created_at.as_str())),\n        ),\n        HISTORY_COL_START_COMMIT_ID => string_array(\n            rows.iter()\n                .map(|row| Some(row.event.start_commit_id.as_str())),\n        ),\n        HISTORY_COL_DEPTH => Arc::new(Int64Array::from(\n            rows.iter()\n                .map(|row| i64::from(row.event.depth))\n                .collect::<Vec<_>>(),\n        )) as ArrayRef,\n        other => {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\n                    \"sql2 lix_file_history provider does not support projected column '{other}'\"\n                ),\n            ))\n        }\n    })\n}\n\nfn lix_file_history_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![\n        Field::new(\"id\", DataType::Utf8, false),\n        Field::new(\"path\", DataType::Utf8, true),\n        Field::new(\"directory_id\", DataType::Utf8, true),\n        Field::new(\"name\", DataType::Utf8, true),\n        Field::new(\"hidden\", DataType::Boolean, true),\n        Field::new(\"data\", DataType::Binary, true),\n        json_field(HISTORY_COL_ENTITY_ID, false),\n        Field::new(HISTORY_COL_SCHEMA_KEY, DataType::Utf8, false),\n        Field::new(HISTORY_COL_FILE_ID, DataType::Utf8, true),\n     
   json_field(HISTORY_COL_SNAPSHOT_CONTENT, true),\n        Field::new(HISTORY_COL_CHANGE_ID, DataType::Utf8, false),\n        json_field(HISTORY_COL_METADATA, true),\n        Field::new(HISTORY_COL_OBSERVED_COMMIT_ID, DataType::Utf8, false),\n        Field::new(HISTORY_COL_COMMIT_CREATED_AT, DataType::Utf8, false),\n        Field::new(HISTORY_COL_START_COMMIT_ID, DataType::Utf8, false),\n        Field::new(HISTORY_COL_DEPTH, DataType::Int64, false),\n    ]))\n}\n\nfn projected_schema(base_schema: &SchemaRef, projection: Option<&Vec<usize>>) -> Result<SchemaRef> {\n    let Some(projection) = projection else {\n        return Ok(Arc::clone(base_schema));\n    };\n    Ok(Arc::new(base_schema.project(projection)?))\n}\n\nfn string_array<'a>(values: impl Iterator<Item = Option<&'a str>>) -> ArrayRef {\n    Arc::new(StringArray::from(values.collect::<Vec<_>>())) as ArrayRef\n}\n\nfn datafusion_error_to_lix_error(error: DataFusionError) -> LixError {\n    super::error::datafusion_error_to_lix_error(error)\n}\n\nfn entity_id_json_array(entity_id: &str) -> Result<String, LixError> {\n    serde_json::to_string(&[entity_id]).map_err(|error| {\n        LixError::unknown(format!(\n            \"failed to encode history entity id as JSON: {error}\"\n        ))\n    })\n}\n\nfn lix_error_to_datafusion_error(error: LixError) -> DataFusionError {\n    super::error::lix_error_to_datafusion_error(error)\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/file_provider.rs",
    "content": "use std::any::Any;\nuse std::collections::{BTreeMap, BTreeSet};\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse datafusion::arrow::array::{\n    ArrayRef, BinaryArray, BooleanArray, RecordBatchOptions, StringArray, UInt64Array,\n};\nuse datafusion::arrow::compute::{and, filter_record_batch};\nuse datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef};\nuse datafusion::arrow::record_batch::RecordBatch;\nuse datafusion::catalog::{Session, TableProvider};\nuse datafusion::common::{not_impl_err, DFSchema, DataFusionError, Result, ScalarValue};\nuse datafusion::datasource::TableType;\nuse datafusion::execution::TaskContext;\nuse datafusion::logical_expr::dml::InsertOp;\nuse datafusion::logical_expr::expr::InList;\nuse datafusion::logical_expr::{BinaryExpr, Expr, Operator, TableProviderFilterPushDown};\nuse datafusion::physical_expr::{create_physical_expr, EquivalenceProperties, PhysicalExpr};\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};\nuse datafusion::physical_plan::stream::RecordBatchStreamAdapter;\nuse datafusion::physical_plan::{\n    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,\n};\nuse datafusion::prelude::SessionContext;\nuse futures_util::{stream, TryStreamExt};\nuse serde::Deserialize;\n\nuse crate::binary_cas::{BlobDataReader, BlobHash};\nuse crate::entity_identity::EntityIdentity;\nuse crate::functions::FunctionProviderHandle;\nuse crate::live_state::MaterializedLiveStateRow;\nuse crate::live_state::{\n    LiveStateFilter, LiveStateProjection, LiveStateReader, LiveStateScanRequest,\n};\nuse crate::sql2::dml::{InsertExec, InsertSink};\nuse crate::sql2::filesystem_predicates::{\n    canonicalize_filesystem_path_filters, FilesystemPathKind,\n};\nuse crate::sql2::predicate_typecheck::validate_json_predicate_filters;\nuse crate::sql2::version_scope::{\n    explicit_version_ids_from_dml_filters, resolve_provider_version_ids,\n    
resolve_write_version_scope, VersionBinding,\n};\nuse crate::sql2::write_normalization::{\n    is_binary_type, lix_file_data_type_error, lix_file_data_type_error_with_value,\n    logical_expr_is_binary_or_null, reject_non_binary_casts_for_insert_column,\n    scalar_is_binary_or_null, InsertCell, InsertColumnIntents, SqlCell, UpdateAssignmentValues,\n    UpdateCell,\n};\nuse crate::transaction::types::{TransactionJson, TransactionWriteRow};\nuse crate::version::VersionRefReader;\nuse crate::{parse_row_metadata_value, serialize_row_metadata, LixError};\n\nconst FILE_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_file_descriptor\";\nconst BLOB_REF_SCHEMA_KEY: &str = \"lix_binary_blob_ref\";\nconst DIRECTORY_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_directory_descriptor\";\n\nuse super::filesystem_planner::{\n    blob_ref_row, directory_path_resolvers_from_state_rows, file_descriptor_row,\n    file_descriptor_write_row, filesystem_storage_scope_key, plan_file_delete,\n    plan_file_path_update, BlobRefRowInput, DirectoryPathResolver, FileDeleteInput,\n    FileDescriptorRowInput, FileDescriptorWriteIntent, FilePathWriteInput, FilesystemDeletePlan,\n    FilesystemRowContext,\n};\nuse super::result_metadata::json_field;\nuse crate::sql2::{\n    SqlWriteContext, WriteAccess, WriteContextLiveStateReader, WriteContextVersionRefReader,\n};\nuse crate::transaction::types::{\n    LogicalPrimaryKey, TransactionFileData, TransactionWrite, TransactionWriteMode,\n    TransactionWriteOperation, TransactionWriteOrigin,\n};\n\npub(crate) async fn register_lix_file_providers(\n    session: &SessionContext,\n    active_version_id: &str,\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n    blob_reader: Arc<dyn BlobDataReader>,\n    functions: FunctionProviderHandle,\n) -> Result<(), LixError> {\n    session\n        .register_table(\n            \"lix_file_by_version\",\n            Arc::new(LixFileProvider::by_version(\n                Arc::clone(&live_state),\n   
             Arc::clone(&version_ref),\n                Arc::clone(&blob_reader),\n                functions.clone(),\n            )),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    session\n        .register_table(\n            \"lix_file\",\n            Arc::new(LixFileProvider::active_version(\n                active_version_id,\n                live_state,\n                version_ref,\n                Arc::clone(&blob_reader),\n                functions,\n            )),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    Ok(())\n}\n\npub(crate) async fn register_lix_file_write_providers(\n    session: &SessionContext,\n    write_ctx: SqlWriteContext,\n) -> Result<(), LixError> {\n    session\n        .register_table(\n            \"lix_file_by_version\",\n            Arc::new(LixFileProvider::by_version_with_write(write_ctx.clone())),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    session\n        .register_table(\n            \"lix_file\",\n            Arc::new(LixFileProvider::active_version_with_write(write_ctx)),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    Ok(())\n}\n\npub(crate) struct LixFileProvider {\n    schema: SchemaRef,\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n    blob_reader: Arc<dyn BlobDataReader>,\n    write_access: WriteAccess,\n    functions: FunctionProviderHandle,\n    version_binding: VersionBinding,\n}\n\nimpl std::fmt::Debug for LixFileProvider {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixFileProvider\").finish()\n    }\n}\n\nimpl LixFileProvider {\n    pub(crate) fn active_version(\n        active_version_id: impl Into<String>,\n        live_state: Arc<dyn LiveStateReader>,\n        version_ref: Arc<dyn VersionRefReader>,\n        blob_reader: Arc<dyn BlobDataReader>,\n        functions: FunctionProviderHandle,\n    ) -> Self {\n        Self {\n         
   schema: lix_file_schema(),\n            live_state,\n            version_ref,\n            blob_reader,\n            write_access: WriteAccess::read_only(),\n            functions,\n            version_binding: VersionBinding::active(active_version_id),\n        }\n    }\n\n    pub(crate) fn active_version_with_write(write_ctx: SqlWriteContext) -> Self {\n        let active_version_id = write_ctx.active_version_id();\n        let functions = write_ctx.functions();\n        let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone()));\n        let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone()));\n        let blob_reader = write_ctx.blob_reader();\n        Self {\n            schema: lix_file_schema(),\n            live_state,\n            version_ref,\n            blob_reader,\n            write_access: WriteAccess::write(write_ctx),\n            functions,\n            version_binding: VersionBinding::active(active_version_id),\n        }\n    }\n\n    pub(crate) fn by_version(\n        live_state: Arc<dyn LiveStateReader>,\n        version_ref: Arc<dyn VersionRefReader>,\n        blob_reader: Arc<dyn BlobDataReader>,\n        functions: FunctionProviderHandle,\n    ) -> Self {\n        Self {\n            schema: lix_file_by_version_schema(),\n            live_state,\n            version_ref,\n            blob_reader,\n            write_access: WriteAccess::read_only(),\n            functions,\n            version_binding: VersionBinding::explicit(),\n        }\n    }\n\n    pub(crate) fn by_version_with_write(write_ctx: SqlWriteContext) -> Self {\n        let functions = write_ctx.functions();\n        let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone()));\n        let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone()));\n        let blob_reader = write_ctx.blob_reader();\n        Self {\n            schema: lix_file_by_version_schema(),\n            live_state,\n  
          version_ref,\n            blob_reader,\n            write_access: WriteAccess::write(write_ctx),\n            functions,\n            version_binding: VersionBinding::explicit(),\n        }\n    }\n}\n\n#[async_trait]\nimpl TableProvider for LixFileProvider {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n\n    fn table_type(&self) -> TableType {\n        TableType::Base\n    }\n\n    fn supports_filters_pushdown(\n        &self,\n        filters: &[&Expr],\n    ) -> Result<Vec<TableProviderFilterPushDown>> {\n        let analyzer = LixFileIdFilterAnalyzer;\n        Ok(filters\n            .iter()\n            .map(|filter| {\n                if ExactStringColumnFilterAnalyzer::new(\"lixcol_version_id\").supports(filter)\n                    || analyzer.supports(filter)\n                    || contains_column(filter, \"path\")\n                {\n                    TableProviderFilterPushDown::Exact\n                } else {\n                    TableProviderFilterPushDown::Unsupported\n                }\n            })\n            .collect())\n    }\n\n    async fn scan(\n        &self,\n        _state: &dyn Session,\n        projection: Option<&Vec<usize>>,\n        filters: &[Expr],\n        limit: Option<usize>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        let projected_schema = projected_schema(&self.schema, projection)?;\n        let scan_limit = if filters.is_empty() { limit } else { None };\n        let mut request = lix_file_scan_request(\n            self.version_binding.active_version_id(),\n            Some(projected_schema.as_ref()),\n            scan_limit,\n        );\n        if self.write_access.is_write() && matches!(self.version_binding, VersionBinding::Explicit)\n        {\n            request.filter.version_ids = explicit_version_ids_from_dml_filters(filters);\n            if request.filter.version_ids.is_empty() {\n                
return Err(DataFusionError::Plan(\n                    \"DELETE FROM lix_file_by_version requires an explicit lixcol_version_id predicate\"\n                        .to_string(),\n                ));\n            }\n        }\n        request.filter.version_ids = resolve_provider_version_ids(\n            self.version_ref.as_ref(),\n            &self.version_binding,\n            request.filter.version_ids,\n        )\n        .await\n        .map_err(lix_error_to_datafusion_error)?;\n        let filters = canonicalize_filesystem_path_filters(filters, FilesystemPathKind::File)?;\n        let target_file_ids = file_id_constraint_from_filters(&filters)?;\n        let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?;\n        validate_json_predicate_filters(self.schema.as_ref(), &filters)?;\n        let physical_filters = filters\n            .iter()\n            .map(|expr| create_physical_expr(expr, &df_schema, _state.execution_props()))\n            .collect::<Result<Vec<_>>>()?;\n        Ok(Arc::new(LixFileScanExec::new(\n            Arc::clone(&self.live_state),\n            Arc::clone(&self.blob_reader),\n            Arc::clone(&self.schema),\n            projected_schema,\n            projection.cloned(),\n            request,\n            target_file_ids,\n            physical_filters,\n            limit,\n        )))\n    }\n\n    async fn insert_into(\n        &self,\n        _state: &dyn Session,\n        input: Arc<dyn ExecutionPlan>,\n        insert_op: InsertOp,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if insert_op != InsertOp::Append {\n            return not_impl_err!(\"{insert_op} not implemented for lix_file yet\");\n        }\n\n        let write_ctx = self.write_access.require_write(\"INSERT into lix_file\")?;\n        let insert_column_intents = InsertColumnIntents::from_input(&input);\n        let include_data_writes = insert_column_intents.includes_column(\"data\");\n        if include_data_writes {\n            
reject_non_binary_casts_for_insert_column(&input, \"data\", \"INSERT into lix_file\")?;\n        }\n\n        let sink = LixFileInsertSink::new(\n            input.schema(),\n            write_ctx.clone(),\n            self.functions.clone(),\n            self.version_binding.clone(),\n            include_data_writes,\n        );\n        Ok(Arc::new(InsertExec::new(input, Arc::new(sink))))\n    }\n\n    async fn delete_from(\n        &self,\n        state: &dyn Session,\n        filters: Vec<Expr>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        let write_ctx = self.write_access.require_write(\"DELETE FROM lix_file\")?;\n\n        let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?;\n        let filters = canonicalize_filesystem_path_filters(&filters, FilesystemPathKind::File)?;\n        validate_json_predicate_filters(self.schema.as_ref(), &filters)?;\n        let physical_filters = filters\n            .iter()\n            .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props()))\n            .collect::<Result<Vec<_>>>()?;\n        let target_file_ids = file_id_constraint_from_filters(&filters)?;\n        let mut request =\n            lix_file_scan_request(self.version_binding.active_version_id(), None, None);\n        if matches!(self.version_binding, VersionBinding::Explicit) {\n            request.filter.version_ids = explicit_version_ids_from_dml_filters(&filters);\n            if request.filter.version_ids.is_empty() {\n                return Err(DataFusionError::Plan(\n                    \"DELETE FROM lix_file_by_version requires an explicit lixcol_version_id predicate\"\n                        .to_string(),\n                ));\n            }\n        }\n\n        Ok(Arc::new(LixFileDeleteExec::new(\n            Arc::clone(&self.blob_reader),\n            write_ctx.clone(),\n            Arc::clone(&self.schema),\n            self.version_binding.clone(),\n            request,\n            target_file_ids,\n            
physical_filters,\n        )))\n    }\n\n    async fn update(\n        &self,\n        state: &dyn Session,\n        assignments: Vec<(String, Expr)>,\n        filters: Vec<Expr>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        let write_ctx = self.write_access.require_write(\"UPDATE lix_file\")?;\n\n        validate_lix_file_update_assignments(&self.schema, &assignments)?;\n\n        let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?;\n        let physical_assignments = assignments\n            .iter()\n            .map(|(column_name, expr)| {\n                Ok((\n                    column_name.clone(),\n                    create_physical_expr(expr, &df_schema, state.execution_props())?,\n                ))\n            })\n            .collect::<Result<Vec<_>>>()?;\n        let filters = canonicalize_filesystem_path_filters(&filters, FilesystemPathKind::File)?;\n        let target_file_ids = file_id_constraint_from_filters(&filters)?;\n        validate_json_predicate_filters(self.schema.as_ref(), &filters)?;\n        let physical_filters = filters\n            .iter()\n            .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props()))\n            .collect::<Result<Vec<_>>>()?;\n        let request = lix_file_scan_request(self.version_binding.active_version_id(), None, None);\n\n        Ok(Arc::new(LixFileUpdateExec::new(\n            Arc::clone(&self.blob_reader),\n            write_ctx.clone(),\n            Arc::clone(&self.schema),\n            self.version_binding.clone(),\n            self.functions.clone(),\n            request,\n            target_file_ids,\n            physical_assignments,\n            physical_filters,\n        )))\n    }\n}\n\n#[allow(dead_code)]\nstruct LixFileInsertSink {\n    write_ctx: SqlWriteContext,\n    functions: FunctionProviderHandle,\n    version_binding: VersionBinding,\n    surface_name: &'static str,\n    include_data_writes: bool,\n}\n\nimpl std::fmt::Debug for LixFileInsertSink 
{\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixFileInsertSink\").finish()\n    }\n}\n\nimpl LixFileInsertSink {\n    fn new(\n        _schema: SchemaRef,\n        write_ctx: SqlWriteContext,\n        functions: FunctionProviderHandle,\n        version_binding: VersionBinding,\n        include_data_writes: bool,\n    ) -> Self {\n        let surface_name = lix_file_surface_name(&version_binding);\n        Self {\n            write_ctx,\n            functions,\n            version_binding,\n            surface_name,\n            include_data_writes,\n        }\n    }\n}\n\nimpl DisplayAs for LixFileInsertSink {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"LixFileInsertSink\")\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixFileInsertSink\"),\n        }\n    }\n}\n\n#[async_trait]\nimpl InsertSink for LixFileInsertSink {\n    async fn write_batches(\n        &self,\n        batches: Vec<RecordBatch>,\n        _context: &Arc<TaskContext>,\n    ) -> Result<u64> {\n        let mut staged = LixFileStagedBatch::default();\n        let mut path_resolvers = None;\n        for batch in batches {\n            if path_resolvers.is_none() {\n                path_resolvers = Some(\n                    file_path_resolvers_from_live_state(\n                        Arc::new(WriteContextLiveStateReader::new(self.write_ctx.clone())),\n                        self.version_binding.active_version_id(),\n                    )\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?,\n                );\n            }\n            if record_batch_has_non_null_column(&batch, \"path\")? 
{\n                staged.extend(lix_file_insert_stage_from_batch_with_path_resolvers(\n                    &batch,\n                    self.version_binding.active_version_id(),\n                    self.surface_name,\n                    path_resolvers\n                        .as_mut()\n                        .expect(\"path resolver should be initialized\"),\n                    &mut || self.functions.call_uuid_v7(),\n                    self.include_data_writes,\n                )?);\n            } else {\n                staged.extend(\n                    lix_file_insert_stage_from_batch_with_id_generator_and_path_resolvers(\n                        &batch,\n                        self.version_binding.active_version_id(),\n                        self.surface_name,\n                        path_resolvers\n                            .as_mut()\n                            .expect(\"path resolver should be initialized\"),\n                        &mut || self.functions.call_uuid_v7(),\n                        self.include_data_writes,\n                    )?,\n                );\n            }\n        }\n\n        if !staged.state_rows.is_empty() || !staged.file_data_writes.is_empty() {\n            let intent = if staged.file_data_writes.is_empty() {\n                TransactionWrite::Rows {\n                    mode: TransactionWriteMode::Insert,\n                    rows: staged.state_rows,\n                }\n            } else {\n                TransactionWrite::RowsWithFileData {\n                    mode: TransactionWriteMode::Insert,\n                    rows: staged.state_rows,\n                    file_data: staged.file_data_writes,\n                    count: staged.count,\n                }\n            };\n            self.write_ctx\n                .stage_write(intent)\n                .await\n                .map_err(lix_error_to_datafusion_error)?;\n        }\n\n        Ok(staged.count)\n    }\n}\n\nfn lix_file_surface_name(version_binding: 
&VersionBinding) -> &'static str {\n    match version_binding {\n        VersionBinding::Active { .. } => \"lix_file\",\n        VersionBinding::Explicit => \"lix_file_by_version\",\n    }\n}\n\n#[allow(dead_code)]\nstruct LixFileDeleteExec {\n    blob_reader: Arc<dyn BlobDataReader>,\n    write_ctx: SqlWriteContext,\n    table_schema: SchemaRef,\n    version_binding: VersionBinding,\n    request: LiveStateScanRequest,\n    target_file_ids: FileIdConstraint,\n    filters: Vec<Arc<dyn PhysicalExpr>>,\n    result_schema: SchemaRef,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixFileDeleteExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixFileDeleteExec\").finish()\n    }\n}\n\nimpl LixFileDeleteExec {\n    fn new(\n        blob_reader: Arc<dyn BlobDataReader>,\n        write_ctx: SqlWriteContext,\n        table_schema: SchemaRef,\n        version_binding: VersionBinding,\n        request: LiveStateScanRequest,\n        target_file_ids: FileIdConstraint,\n        filters: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Self {\n        let result_schema = dml_count_schema();\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&result_schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Final,\n            Boundedness::Bounded,\n        );\n        Self {\n            blob_reader,\n            write_ctx,\n            table_schema,\n            version_binding,\n            request,\n            target_file_ids,\n            filters,\n            result_schema,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixFileDeleteExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"LixFileDeleteExec(filters={})\", 
self.filters.len())\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixFileDeleteExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixFileDeleteExec {\n    fn name(&self) -> &str {\n        \"LixFileDeleteExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixFileDeleteExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixFileDeleteExec only exposes one partition, got {partition}\"\n            )));\n        }\n\n        let blob_reader = Arc::clone(&self.blob_reader);\n        let write_ctx = self.write_ctx.clone();\n        let table_schema = Arc::clone(&self.table_schema);\n        let version_binding = self.version_binding.clone();\n        let request = self.request.clone();\n        let target_file_ids = self.target_file_ids.clone();\n        let filters = self.filters.clone();\n        let result_schema = Arc::clone(&self.result_schema);\n        let stream_schema = Arc::clone(&result_schema);\n\n        let stream = stream::once(async move {\n            let rows = scan_lix_file_live_rows(\n                Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())),\n                &request,\n                &target_file_ids,\n            )\n            .await\n            
.map_err(lix_error_to_datafusion_error)?;\n            let blob_ref_file_ids =\n                blob_ref_file_ids_from_live_rows(&rows).map_err(lix_error_to_datafusion_error)?;\n            let source_batch = lix_file_record_batch(&table_schema, &blob_reader, rows)\n                .await\n                .map_err(lix_error_to_datafusion_error)?;\n            let matched_batch = filter_lix_file_batch(source_batch, &filters)?;\n            let staged = lix_file_delete_stage_from_batch(\n                &matched_batch,\n                version_binding.active_version_id(),\n                &blob_ref_file_ids,\n            )?;\n            let count = staged.count;\n\n            if count > 0 {\n                write_ctx\n                    .stage_write(TransactionWrite::Rows {\n                        mode: TransactionWriteMode::Replace,\n                        rows: staged.state_rows,\n                    })\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?;\n            }\n\n            Ok::<_, DataFusionError>(stream::iter(vec![Ok::<RecordBatch, DataFusionError>(\n                dml_count_batch(Arc::clone(&stream_schema), count)?,\n            )]))\n        })\n        .try_flatten();\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            result_schema,\n            stream,\n        )))\n    }\n}\n\n#[allow(dead_code)]\nstruct LixFileUpdateExec {\n    blob_reader: Arc<dyn BlobDataReader>,\n    write_ctx: SqlWriteContext,\n    table_schema: SchemaRef,\n    version_binding: VersionBinding,\n    functions: FunctionProviderHandle,\n    request: LiveStateScanRequest,\n    target_file_ids: FileIdConstraint,\n    assignments: Vec<(String, Arc<dyn PhysicalExpr>)>,\n    filters: Vec<Arc<dyn PhysicalExpr>>,\n    result_schema: SchemaRef,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixFileUpdateExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        
f.debug_struct(\"LixFileUpdateExec\").finish()\n    }\n}\n\nimpl LixFileUpdateExec {\n    fn new(\n        blob_reader: Arc<dyn BlobDataReader>,\n        write_ctx: SqlWriteContext,\n        table_schema: SchemaRef,\n        version_binding: VersionBinding,\n        functions: FunctionProviderHandle,\n        request: LiveStateScanRequest,\n        target_file_ids: FileIdConstraint,\n        assignments: Vec<(String, Arc<dyn PhysicalExpr>)>,\n        filters: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Self {\n        let result_schema = dml_count_schema();\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&result_schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Final,\n            Boundedness::Bounded,\n        );\n        Self {\n            blob_reader,\n            write_ctx,\n            table_schema,\n            version_binding,\n            functions,\n            request,\n            target_file_ids,\n            assignments,\n            filters,\n            result_schema,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixFileUpdateExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(\n                    f,\n                    \"LixFileUpdateExec(assignments={}, filters={})\",\n                    self.assignments.len(),\n                    self.filters.len()\n                )\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixFileUpdateExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixFileUpdateExec {\n    fn name(&self) -> &str {\n        \"LixFileUpdateExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn 
children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixFileUpdateExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixFileUpdateExec only exposes one partition, got {partition}\"\n            )));\n        }\n\n        let blob_reader = Arc::clone(&self.blob_reader);\n        let write_ctx = self.write_ctx.clone();\n        let table_schema = Arc::clone(&self.table_schema);\n        let version_binding = self.version_binding.clone();\n        let functions = self.functions.clone();\n        let request = self.request.clone();\n        let target_file_ids = self.target_file_ids.clone();\n        let assignments = self.assignments.clone();\n        let filters = self.filters.clone();\n        let result_schema = Arc::clone(&self.result_schema);\n        let stream_schema = Arc::clone(&result_schema);\n\n        let stream = stream::once(async move {\n            let rows = scan_lix_file_live_rows(\n                Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())),\n                &request,\n                &target_file_ids,\n            )\n            .await\n            .map_err(lix_error_to_datafusion_error)?;\n            let source_batch = lix_file_record_batch(&table_schema, &blob_reader, rows)\n                .await\n                .map_err(lix_error_to_datafusion_error)?;\n            let matched_batch = filter_lix_file_batch(source_batch, &filters)?;\n            let 
assignment_values = UpdateAssignmentValues::evaluate(&matched_batch, &assignments)?;\n            let update_columns = LixFileUpdateColumns::from_assignments(&assignments);\n            let mut path_resolvers = None;\n            if update_columns.path || update_columns.descriptor {\n                path_resolvers = Some(\n                    file_path_resolvers_from_live_state(\n                        Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())),\n                        version_binding.active_version_id(),\n                    )\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?,\n                );\n            }\n            let staged = lix_file_update_stage_from_batch(\n                &matched_batch,\n                &assignment_values,\n                version_binding.active_version_id(),\n                update_columns,\n                path_resolvers.as_mut(),\n                &mut || functions.call_uuid_v7(),\n            )?;\n            let count = staged.count;\n\n            if count > 0 {\n                let intent = if staged.file_data_writes.is_empty() {\n                    TransactionWrite::Rows {\n                        mode: TransactionWriteMode::Replace,\n                        rows: staged.state_rows,\n                    }\n                } else {\n                    TransactionWrite::RowsWithFileData {\n                        mode: TransactionWriteMode::Replace,\n                        rows: staged.state_rows,\n                        file_data: staged.file_data_writes,\n                        count,\n                    }\n                };\n                write_ctx\n                    .stage_write(intent)\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?;\n            }\n\n            Ok::<_, DataFusionError>(stream::iter(vec![Ok::<RecordBatch, DataFusionError>(\n                dml_count_batch(Arc::clone(&stream_schema), 
count)?,\n            )]))\n        })\n        .try_flatten();\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            result_schema,\n            stream,\n        )))\n    }\n}\n\nstruct LixFileScanExec {\n    live_state: Arc<dyn LiveStateReader>,\n    blob_reader: Arc<dyn BlobDataReader>,\n    batch_schema: SchemaRef,\n    output_schema: SchemaRef,\n    projection: Option<Vec<usize>>,\n    request: LiveStateScanRequest,\n    target_file_ids: FileIdConstraint,\n    filters: Vec<Arc<dyn PhysicalExpr>>,\n    limit: Option<usize>,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixFileScanExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixFileScanExec\").finish()\n    }\n}\n\nimpl LixFileScanExec {\n    fn new(\n        live_state: Arc<dyn LiveStateReader>,\n        blob_reader: Arc<dyn BlobDataReader>,\n        batch_schema: SchemaRef,\n        output_schema: SchemaRef,\n        projection: Option<Vec<usize>>,\n        request: LiveStateScanRequest,\n        target_file_ids: FileIdConstraint,\n        filters: Vec<Arc<dyn PhysicalExpr>>,\n        limit: Option<usize>,\n    ) -> Self {\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(output_schema.clone()),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Incremental,\n            Boundedness::Bounded,\n        );\n        Self {\n            live_state,\n            blob_reader,\n            batch_schema,\n            output_schema,\n            projection,\n            request,\n            target_file_ids,\n            filters,\n            limit,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixFileScanExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                
write!(f, \"LixFileScanExec(limit={:?})\", self.limit)\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixFileScanExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixFileScanExec {\n    fn name(&self) -> &str {\n        \"LixFileScanExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixFileScanExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixFileScanExec only supports partition 0, got {partition}\"\n            )));\n        }\n\n        let live_state = Arc::clone(&self.live_state);\n        let blob_reader = Arc::clone(&self.blob_reader);\n        let request = self.request.clone();\n        let target_file_ids = self.target_file_ids.clone();\n        let filters = self.filters.clone();\n        let limit = self.limit;\n        let output_schema = Arc::clone(&self.output_schema);\n        let batch_schema = Arc::clone(&self.batch_schema);\n        let projection = self.projection.clone();\n        let fut = async move {\n            let rows = scan_lix_file_live_rows(live_state, &request, &target_file_ids)\n                .await\n                .map_err(|error| {\n                    DataFusionError::Execution(format!(\"sql2 lix_file scan failed: {error}\"))\n                })?;\n           
 let batch = lix_file_record_batch(&batch_schema, &blob_reader, rows)\n                .await\n                .map_err(|error| {\n                    DataFusionError::Execution(format!(\"sql2 lix_file batch build failed: {error}\"))\n                })?;\n            let filtered = filter_lix_file_batch(batch, &filters)?;\n            let projected = match projection {\n                Some(indices) => filtered.project(&indices).map_err(DataFusionError::from),\n                None => Ok(filtered),\n            }?;\n            match limit {\n                Some(limit) => Ok(projected.slice(0, limit.min(projected.num_rows()))),\n                None => Ok(projected),\n            }\n        };\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            output_schema,\n            stream::once(fut),\n        )))\n    }\n}\n\n#[derive(Debug, Clone)]\nstruct FileDescriptorRecord {\n    id: String,\n    directory_id: Option<String>,\n    name: String,\n    hidden: bool,\n    live: MaterializedLiveStateRow,\n}\n\n#[derive(Debug, Clone)]\nstruct BlobRefRecord {\n    blob_hash: String,\n}\n\n#[derive(Debug, Clone)]\nstruct DirectoryDescriptorRecord {\n    id: String,\n    parent_id: Option<String>,\n    name: String,\n    version_id: String,\n}\n\n#[derive(Debug, Deserialize)]\nstruct FileDescriptorSnapshot {\n    id: String,\n    directory_id: Option<String>,\n    name: String,\n    hidden: bool,\n}\n\n#[derive(Debug, Deserialize)]\nstruct BlobRefSnapshot {\n    id: String,\n    blob_hash: String,\n}\n\n#[derive(Debug, Deserialize)]\nstruct DirectoryDescriptorSnapshot {\n    id: String,\n    parent_id: Option<String>,\n    name: String,\n}\n\n#[derive(Debug, Default)]\nstruct LixFileStagedBatch {\n    state_rows: Vec<TransactionWriteRow>,\n    file_data_writes: Vec<TransactionFileData>,\n    count: u64,\n}\n\nimpl LixFileStagedBatch {\n    fn extend(&mut self, other: LixFileStagedBatch) {\n        self.state_rows.extend(other.state_rows);\n 
       self.file_data_writes.extend(other.file_data_writes);\n        self.count += other.count;\n    }\n\n    fn extend_filesystem_plan(&mut self, plan: super::filesystem_planner::FilesystemWritePlan) {\n        self.state_rows.extend(plan.rows);\n        self.file_data_writes.extend(plan.file_data);\n        self.count += plan.count;\n    }\n\n    fn extend_filesystem_delete_plan(&mut self, plan: FilesystemDeletePlan) {\n        self.state_rows.extend(plan.rows);\n        self.count += plan.count;\n    }\n}\n\n#[cfg(test)]\nfn lix_file_write_rows_from_batch(\n    batch: &RecordBatch,\n    version_binding: Option<&str>,\n) -> Result<Vec<TransactionWriteRow>> {\n    Ok(lix_file_insert_stage_from_batch(batch, version_binding)?.state_rows)\n}\n\nfn lix_file_delete_stage_from_batch(\n    batch: &RecordBatch,\n    version_binding: Option<&str>,\n    blob_ref_file_ids: &BTreeSet<String>,\n) -> Result<LixFileStagedBatch> {\n    let mut staged = LixFileStagedBatch::default();\n    for row_index in 0..batch.num_rows() {\n        let file_id = required_string_value(batch, row_index, \"id\")?;\n        let context = file_row_context_from_batch(batch, row_index, version_binding)?;\n        staged.extend_filesystem_delete_plan(plan_file_delete(FileDeleteInput {\n            file_id: file_id.clone(),\n            has_blob_ref: blob_ref_file_ids.contains(&file_id),\n            context,\n        }));\n    }\n    Ok(staged)\n}\n\nfn blob_ref_file_ids_from_live_rows(\n    rows: &[MaterializedLiveStateRow],\n) -> std::result::Result<BTreeSet<String>, LixError> {\n    let mut file_ids = BTreeSet::new();\n    for row in rows {\n        if row.schema_key != BLOB_REF_SCHEMA_KEY {\n            continue;\n        }\n        let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n            continue;\n        };\n        let snapshot: BlobRefSnapshot =\n            serde_json::from_str(snapshot_content).map_err(|error| {\n                LixError::new(\n                    
\"LIX_ERROR_UNKNOWN\",\n                    format!(\"invalid lix_binary_blob_ref snapshot JSON: {error}\"),\n                )\n            })?;\n        file_ids.insert(snapshot.id);\n    }\n    Ok(file_ids)\n}\n\n#[cfg(test)]\nfn lix_file_insert_stage_from_batch(\n    batch: &RecordBatch,\n    version_binding: Option<&str>,\n) -> Result<LixFileStagedBatch> {\n    lix_file_stage_from_batch_with_options(batch, version_binding, \"lix_file\", true, true, true)\n}\n\nfn lix_file_insert_stage_from_batch_with_id_generator_and_path_resolvers(\n    batch: &RecordBatch,\n    version_binding: Option<&str>,\n    surface_name: &str,\n    path_resolvers: &mut BTreeMap<String, DirectoryPathResolver>,\n    generate_id: &mut dyn FnMut() -> String,\n    include_data_writes: bool,\n) -> Result<LixFileStagedBatch> {\n    lix_file_stage_from_batch_with_options_and_path_resolvers(\n        batch,\n        version_binding,\n        surface_name,\n        true,\n        true,\n        include_data_writes,\n        Some(path_resolvers),\n        Some(generate_id),\n    )\n}\n\nfn lix_file_insert_stage_from_batch_with_path_resolvers(\n    batch: &RecordBatch,\n    version_binding: Option<&str>,\n    surface_name: &str,\n    path_resolvers: &mut BTreeMap<String, DirectoryPathResolver>,\n    generate_directory_id: &mut dyn FnMut() -> String,\n    include_data_writes: bool,\n) -> Result<LixFileStagedBatch> {\n    lix_file_stage_from_batch_with_options_and_path_resolvers(\n        batch,\n        version_binding,\n        surface_name,\n        true,\n        true,\n        include_data_writes,\n        Some(path_resolvers),\n        Some(generate_directory_id),\n    )\n}\n\nfn lix_file_existing_update_stage_from_batch(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    version_binding: Option<&str>,\n    include_descriptor_writes: bool,\n    include_data_writes: bool,\n    path_resolvers: Option<&mut BTreeMap<String, DirectoryPathResolver>>,\n) -> 
Result<LixFileStagedBatch> {\n    let mut staged = LixFileStagedBatch::default();\n    let mut path_resolvers = path_resolvers;\n\n    for row_index in 0..batch.num_rows() {\n        let id = required_string_value(batch, row_index, \"id\")?;\n        let hidden = update_optional_bool_value(batch, assignment_values, row_index, \"hidden\")?\n            .unwrap_or(false);\n        let context =\n            file_row_context_from_update(batch, assignment_values, row_index, version_binding)?;\n\n        if include_descriptor_writes {\n            let directory_id =\n                update_optional_string_value(batch, assignment_values, row_index, \"directory_id\")?;\n            let name = update_required_string_value(batch, assignment_values, row_index, \"name\")?;\n            if let Some(path_resolvers) = path_resolvers.as_deref_mut() {\n                let resolver = path_resolvers\n                    .entry(file_path_resolver_key(&context))\n                    .or_insert_with(DirectoryPathResolver::default);\n                resolver\n                    .reserve_file(directory_id.clone(), name.clone(), id.clone())\n                    .map_err(lix_error_to_datafusion_error)?;\n            }\n            staged\n                .state_rows\n                .push(file_descriptor_row(FileDescriptorRowInput {\n                    id: id.clone(),\n                    directory_id,\n                    name,\n                    hidden,\n                    context: context.clone(),\n                }));\n        }\n\n        if include_data_writes {\n            let data = update_required_binary_value(batch, assignment_values, row_index, \"data\")?;\n            stage_lix_file_data_write(&mut staged, id, data, context, None)?;\n        }\n\n        staged.count = staged\n            .count\n            .checked_add(1)\n            .ok_or_else(|| DataFusionError::Execution(\"lix_file row count overflow\".into()))?;\n    }\n\n    Ok(staged)\n}\n\n#[derive(Debug, 
Clone, Copy)]\nstruct LixFileUpdateColumns {\n    path: bool,\n    data: bool,\n    descriptor: bool,\n}\n\nimpl LixFileUpdateColumns {\n    fn from_assignments(assignments: &[(String, Arc<dyn PhysicalExpr>)]) -> Self {\n        let path = assignments\n            .iter()\n            .any(|(column_name, _)| column_name == \"path\");\n        let data = assignments\n            .iter()\n            .any(|(column_name, _)| column_name == \"data\");\n        let descriptor = assignments\n            .iter()\n            .any(|(column_name, _)| column_name != \"path\" && column_name != \"data\");\n        Self {\n            path,\n            data,\n            descriptor,\n        }\n    }\n}\n\nfn lix_file_update_stage_from_batch(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    version_binding: Option<&str>,\n    update_columns: LixFileUpdateColumns,\n    path_resolvers: Option<&mut BTreeMap<String, DirectoryPathResolver>>,\n    generate_directory_id: &mut dyn FnMut() -> String,\n) -> Result<LixFileStagedBatch> {\n    if update_columns.path || update_columns.descriptor {\n        let Some(path_resolvers) = path_resolvers else {\n            return Err(DataFusionError::Execution(\n                \"UPDATE lix_file requires filesystem path resolver\".to_string(),\n            ));\n        };\n        return if update_columns.path {\n            lix_file_path_update_stage_from_batch(\n                batch,\n                assignment_values,\n                version_binding,\n                update_columns,\n                path_resolvers,\n                generate_directory_id,\n            )\n        } else {\n            lix_file_existing_update_stage_from_batch(\n                batch,\n                assignment_values,\n                version_binding,\n                update_columns.descriptor,\n                update_columns.data,\n                Some(path_resolvers),\n            )\n        };\n    }\n\n    
lix_file_existing_update_stage_from_batch(\n        batch,\n        assignment_values,\n        version_binding,\n        update_columns.descriptor,\n        update_columns.data,\n        None,\n    )\n}\n\nfn lix_file_path_update_stage_from_batch(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    version_binding: Option<&str>,\n    update_columns: LixFileUpdateColumns,\n    path_resolvers: &mut BTreeMap<String, DirectoryPathResolver>,\n    generate_directory_id: &mut dyn FnMut() -> String,\n) -> Result<LixFileStagedBatch> {\n    let mut staged = LixFileStagedBatch::default();\n\n    for row_index in 0..batch.num_rows() {\n        let id = required_string_value(batch, row_index, \"id\")?;\n        let path = update_required_string_value(batch, assignment_values, row_index, \"path\")?;\n        let hidden = update_optional_bool_value(batch, assignment_values, row_index, \"hidden\")?\n            .unwrap_or(false);\n        let context =\n            file_row_context_from_update(batch, assignment_values, row_index, version_binding)?;\n        let assigned_data = if update_columns.data {\n            Some(update_required_binary_value(\n                batch,\n                assignment_values,\n                row_index,\n                \"data\",\n            )?)\n        } else {\n            None\n        };\n\n        let resolver = path_resolvers\n            .entry(file_path_resolver_key(&context))\n            .or_insert_with(DirectoryPathResolver::default);\n        let plan = plan_file_path_update(\n            resolver,\n            id.clone(),\n            path,\n            hidden,\n            None,\n            context.clone(),\n            generate_directory_id,\n        )\n        .map_err(lix_error_to_datafusion_error)?;\n        staged.extend_filesystem_plan(plan);\n\n        if let Some(data) = assigned_data {\n            stage_lix_file_data_write(&mut staged, id, data, context, None)?;\n        }\n    }\n\n    
Ok(staged)\n}\n\n#[cfg(test)]\nfn lix_file_stage_from_batch_with_options(\n    batch: &RecordBatch,\n    version_binding: Option<&str>,\n    surface_name: &str,\n    reject_read_only_fields: bool,\n    include_descriptor_writes: bool,\n    include_data_writes: bool,\n) -> Result<LixFileStagedBatch> {\n    lix_file_stage_from_batch_with_options_and_path_resolvers(\n        batch,\n        version_binding,\n        surface_name,\n        reject_read_only_fields,\n        include_descriptor_writes,\n        include_data_writes,\n        None,\n        None,\n    )\n}\n\nfn lix_file_stage_from_batch_with_options_and_path_resolvers(\n    batch: &RecordBatch,\n    version_binding: Option<&str>,\n    surface_name: &str,\n    reject_read_only_fields: bool,\n    include_descriptor_writes: bool,\n    include_data_writes: bool,\n    mut path_resolvers: Option<&mut BTreeMap<String, DirectoryPathResolver>>,\n    mut generate_directory_id: Option<&mut dyn FnMut() -> String>,\n) -> Result<LixFileStagedBatch> {\n    let mut staged = LixFileStagedBatch::default();\n\n    for row_index in 0..batch.num_rows() {\n        if reject_read_only_fields {\n            reject_read_only_lix_file_insert_field(batch, row_index, \"lixcol_entity_id\")?;\n            reject_read_only_lix_file_insert_field(batch, row_index, \"lixcol_schema_key\")?;\n            reject_read_only_lix_file_insert_field(batch, row_index, \"lixcol_change_id\")?;\n            reject_read_only_lix_file_insert_field(batch, row_index, \"lixcol_created_at\")?;\n            reject_read_only_lix_file_insert_field(batch, row_index, \"lixcol_updated_at\")?;\n            reject_read_only_lix_file_insert_field(batch, row_index, \"lixcol_commit_id\")?;\n        }\n\n        let path = optional_string_value(batch, row_index, \"path\")?;\n        let id = optional_string_value(batch, row_index, \"id\")?;\n        let hidden = optional_bool_value(batch, row_index, \"hidden\")?;\n        let context = file_row_context_from_batch(batch, 
row_index, version_binding)?;\n        let data = if include_data_writes {\n            insert_optional_binary_value(batch, row_index, \"data\")?\n        } else {\n            None\n        };\n\n        if let Some(path) = path {\n            reject_read_only_lix_file_insert_field(batch, row_index, \"directory_id\")?;\n            reject_read_only_lix_file_insert_field(batch, row_index, \"name\")?;\n\n            let Some(path_resolvers) = path_resolvers.as_deref_mut() else {\n                return Err(DataFusionError::Execution(\n                    \"INSERT into lix_file with path requires directory path resolver\".to_string(),\n                ));\n            };\n            let resolver = path_resolvers\n                .entry(file_path_resolver_key(&context))\n                .or_insert_with(DirectoryPathResolver::default);\n            let Some(generate_directory_id) = generate_directory_id.as_deref_mut() else {\n                return Err(DataFusionError::Execution(\n                    \"INSERT into lix_file with path requires directory id generator\".to_string(),\n                ));\n            };\n            let file_id = id.unwrap_or_else(|| generate_directory_id());\n            let mut plan = super::filesystem_planner::plan_file_path_write(\n                resolver,\n                FilePathWriteInput {\n                    id: Some(file_id.clone()),\n                    path,\n                    data,\n                    hidden,\n                    context,\n                },\n                generate_directory_id,\n            )\n            .map_err(lix_error_to_datafusion_error)?;\n            attach_lix_file_insert_origin(&mut plan.rows, surface_name, &file_id);\n            staged.extend_filesystem_plan(plan);\n            continue;\n        }\n\n        let directory_id = optional_string_value(batch, row_index, \"directory_id\")?;\n        let name = required_string_value(batch, row_index, \"name\")?;\n\n        let id = if 
data.is_some() {\n            match id {\n                Some(id) => Some(id),\n                None => {\n                    let Some(generate_id) = generate_directory_id.as_deref_mut() else {\n                        return Err(DataFusionError::Execution(\n                            \"INSERT into lix_file with data requires id generator\".to_string(),\n                        ));\n                    };\n                    Some(generate_id())\n                }\n            }\n        } else {\n            id\n        };\n\n        if include_descriptor_writes {\n            if let Some(path_resolvers) = path_resolvers.as_deref_mut() {\n                if let Some(file_id) = id.as_ref() {\n                    let resolver = path_resolvers\n                        .entry(file_path_resolver_key(&context))\n                        .or_insert_with(DirectoryPathResolver::default);\n                    resolver\n                        .reserve_file(directory_id.clone(), name.clone(), file_id.clone())\n                        .map_err(lix_error_to_datafusion_error)?;\n                }\n            }\n            let mut row = file_descriptor_write_row(FileDescriptorWriteIntent {\n                id: id.clone(),\n                directory_id: directory_id.clone(),\n                name: name.clone(),\n                hidden,\n                context: context.clone(),\n            });\n            if let Some(file_id) = id.as_ref() {\n                row.origin = Some(lix_file_insert_origin(surface_name, file_id));\n            }\n            staged.state_rows.push(row);\n        }\n\n        if let (Some(id), Some(data)) = (id, data) {\n            let origin = Some(lix_file_insert_origin(surface_name, &id));\n            stage_lix_file_data_write(&mut staged, id, data, context, origin)?;\n        }\n        staged.count = staged\n            .count\n            .checked_add(1)\n            .ok_or_else(|| DataFusionError::Execution(\"lix_file row count 
overflow\".into()))?;\n    }\n\n    Ok(staged)\n}\n\nfn stage_lix_file_data_write(\n    staged: &mut LixFileStagedBatch,\n    file_id: String,\n    data: Vec<u8>,\n    context: FilesystemRowContext,\n    origin: Option<TransactionWriteOrigin>,\n) -> Result<()> {\n    let mut row = blob_ref_row(BlobRefRowInput {\n        file_id: file_id.clone(),\n        data: data.clone(),\n        context: FilesystemRowContext {\n            file_id: None,\n            metadata: None,\n            ..context.clone()\n        },\n    })\n    .map_err(lix_error_to_datafusion_error)?;\n    row.origin = origin;\n    staged.state_rows.push(row);\n    staged.file_data_writes.push(TransactionFileData {\n        file_id,\n        version_id: context.version_id,\n        untracked: context.untracked,\n        data,\n    });\n    Ok(())\n}\n\nfn attach_lix_file_insert_origin(\n    rows: &mut [TransactionWriteRow],\n    surface_name: &str,\n    file_id: &str,\n) {\n    let origin = lix_file_insert_origin(surface_name, file_id);\n    for row in rows {\n        if row.schema_key == FILE_DESCRIPTOR_SCHEMA_KEY || row.schema_key == BLOB_REF_SCHEMA_KEY {\n            row.origin = Some(origin.clone());\n        }\n    }\n}\n\nfn lix_file_insert_origin(surface_name: &str, file_id: &str) -> TransactionWriteOrigin {\n    TransactionWriteOrigin {\n        surface: surface_name.to_string(),\n        operation: TransactionWriteOperation::Insert,\n        primary_key: Some(LogicalPrimaryKey {\n            columns: vec![\"id\".to_string()],\n            values: vec![file_id.to_string()],\n        }),\n    }\n}\n\nfn file_row_context_from_batch(\n    batch: &RecordBatch,\n    row_index: usize,\n    version_binding: Option<&str>,\n) -> Result<FilesystemRowContext> {\n    let explicit_version_id = optional_string_value(batch, row_index, \"lixcol_version_id\")?;\n    let scope = resolve_write_version_scope(\n        optional_bool_value(batch, row_index, \"lixcol_global\")?,\n        explicit_version_id,\n      
  version_binding,\n        \"INSERT into lix_file_by_version\",\n        \"lix_file\",\n    )?;\n\n    Ok(FilesystemRowContext {\n        version_id: scope.version_id,\n        global: scope.global,\n        untracked: optional_bool_value(batch, row_index, \"lixcol_untracked\")?.unwrap_or(false),\n        file_id: optional_string_value(batch, row_index, \"lixcol_file_id\")?,\n        metadata: optional_metadata_value(batch, row_index, \"lixcol_metadata\", \"lix_file\")?,\n    })\n}\n\nfn file_row_context_from_update(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    version_binding: Option<&str>,\n) -> Result<FilesystemRowContext> {\n    let explicit_version_id = optional_string_value(batch, row_index, \"lixcol_version_id\")?;\n    let scope = resolve_write_version_scope(\n        optional_bool_value(batch, row_index, \"lixcol_global\")?,\n        explicit_version_id,\n        version_binding,\n        \"UPDATE lix_file_by_version\",\n        \"lix_file\",\n    )?;\n\n    Ok(FilesystemRowContext {\n        version_id: scope.version_id,\n        global: scope.global,\n        untracked: optional_bool_value(batch, row_index, \"lixcol_untracked\")?.unwrap_or(false),\n        file_id: optional_string_value(batch, row_index, \"lixcol_file_id\")?,\n        metadata: update_optional_metadata_value(\n            batch,\n            assignment_values,\n            row_index,\n            \"lixcol_metadata\",\n            \"lix_file\",\n        )?,\n    })\n}\n\nfn file_path_resolver_key(context: &FilesystemRowContext) -> String {\n    filesystem_storage_scope_key(\n        &context.version_id,\n        context.global,\n        context.untracked,\n        context.file_id.as_deref(),\n    )\n}\n\nasync fn file_path_resolvers_from_live_state(\n    live_state: Arc<dyn LiveStateReader>,\n    version_binding: Option<&str>,\n) -> std::result::Result<BTreeMap<String, DirectoryPathResolver>, LixError> {\n    let rows = 
live_state\n        .scan_rows(&LiveStateScanRequest {\n            filter: LiveStateFilter {\n                schema_keys: vec![\n                    DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(),\n                    FILE_DESCRIPTOR_SCHEMA_KEY.to_string(),\n                ],\n                version_ids: version_binding\n                    .map(|version_id| vec![version_id.to_string()])\n                    .unwrap_or_default(),\n                ..Default::default()\n            },\n            ..Default::default()\n        })\n        .await?;\n    let mut resolvers = directory_path_resolvers_from_state_rows(rows)?;\n    if let Some(version_id) = version_binding {\n        let key = filesystem_storage_scope_key(version_id, false, false, None);\n        resolvers\n            .entry(key)\n            .or_insert_with(DirectoryPathResolver::default);\n    }\n    Ok(resolvers)\n}\n\nasync fn lix_file_record_batch(\n    schema: &SchemaRef,\n    blob_reader: &Arc<dyn BlobDataReader>,\n    rows: Vec<MaterializedLiveStateRow>,\n) -> Result<RecordBatch, LixError> {\n    let projected_columns = schema\n        .fields()\n        .iter()\n        .map(|field| field.name().as_str())\n        .collect::<Vec<_>>();\n    let needs_data = projected_columns\n        .iter()\n        .any(|column_name| *column_name == \"data\");\n\n    let mut file_rows = BTreeMap::<(String, String), FileDescriptorRecord>::new();\n    let mut blob_rows = BTreeMap::<(String, String), BlobRefRecord>::new();\n    let mut directory_rows = Vec::<DirectoryDescriptorRecord>::new();\n\n    for row in rows {\n        match row.schema_key.as_str() {\n            FILE_DESCRIPTOR_SCHEMA_KEY => {\n                let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n                    continue;\n                };\n                let snapshot: FileDescriptorSnapshot = serde_json::from_str(snapshot_content)\n                    .map_err(|error| {\n                        LixError::new(\n       
                     \"LIX_ERROR_UNKNOWN\",\n                            format!(\"invalid lix_file_descriptor snapshot JSON: {error}\"),\n                        )\n                    })?;\n                file_rows.insert(\n                    (row.version_id.clone(), snapshot.id.clone()),\n                    FileDescriptorRecord {\n                        id: snapshot.id,\n                        directory_id: snapshot.directory_id,\n                        name: snapshot.name,\n                        hidden: snapshot.hidden,\n                        live: row,\n                    },\n                );\n            }\n            BLOB_REF_SCHEMA_KEY => {\n                let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n                    continue;\n                };\n                let snapshot: BlobRefSnapshot =\n                    serde_json::from_str(snapshot_content).map_err(|error| {\n                        LixError::new(\n                            \"LIX_ERROR_UNKNOWN\",\n                            format!(\"invalid lix_binary_blob_ref snapshot JSON: {error}\"),\n                        )\n                    })?;\n                blob_rows.insert(\n                    (row.version_id.clone(), snapshot.id.clone()),\n                    BlobRefRecord {\n                        blob_hash: snapshot.blob_hash,\n                    },\n                );\n            }\n            DIRECTORY_DESCRIPTOR_SCHEMA_KEY => {\n                let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n                    continue;\n                };\n                let snapshot: DirectoryDescriptorSnapshot = serde_json::from_str(snapshot_content)\n                    .map_err(|error| {\n                        LixError::new(\n                            \"LIX_ERROR_UNKNOWN\",\n                            format!(\"invalid lix_directory_descriptor snapshot JSON: {error}\"),\n                        )\n                    })?;\n 
               directory_rows.push(DirectoryDescriptorRecord {\n                    id: snapshot.id,\n                    parent_id: snapshot.parent_id,\n                    name: snapshot.name,\n                    version_id: row.version_id,\n                });\n            }\n            _ => {}\n        }\n    }\n\n    let directory_paths = derive_directory_paths(&directory_rows)?;\n    let mut ids = Vec::new();\n    let mut paths = Vec::new();\n    let mut directory_ids = Vec::new();\n    let mut names = Vec::new();\n    let mut hiddens = Vec::new();\n    let mut data_values = Vec::new();\n    let mut entity_ids = Vec::new();\n    let mut schema_keys = Vec::new();\n    let mut file_ids = Vec::new();\n    let mut globals = Vec::new();\n    let mut change_ids = Vec::new();\n    let mut created_ats = Vec::new();\n    let mut updated_ats = Vec::new();\n    let mut commit_ids = Vec::new();\n    let mut untracked_values = Vec::new();\n    let mut metadata_values = Vec::new();\n    let mut version_ids = Vec::new();\n\n    for ((version_id, _), file) in file_rows {\n        let directory_path = match file.directory_id.as_ref() {\n            Some(directory_id) => {\n                let key = (version_id.clone(), directory_id.clone());\n                let Some(path) = directory_paths.get(&key).cloned() else {\n                    return Err(LixError::new(\n                        LixError::CODE_FOREIGN_KEY,\n                        format!(\n                            \"lix_file_descriptor '{}' references missing directory_id '{}' in version '{}'\",\n                            file.id, directory_id, version_id\n                        ),\n                    ));\n                };\n                Some(path)\n            }\n            None => None,\n        };\n        let path = match directory_path {\n            Some(directory_path) => format!(\"{directory_path}{}\", file.name),\n            None => format!(\"/{}\", file.name),\n        };\n        let data = 
if needs_data {\n            match blob_rows.get(&(version_id.clone(), file.id.clone())) {\n                Some(blob_ref) => load_single_blob_bytes(blob_reader, &blob_ref.blob_hash).await?,\n                None => None,\n            }\n        } else {\n            None\n        };\n\n        ids.push(Some(file.id));\n        paths.push(Some(path));\n        directory_ids.push(file.directory_id);\n        names.push(Some(file.name));\n        hiddens.push(Some(file.hidden));\n        data_values.push(data);\n        entity_ids.push(Some(file.live.entity_id.as_json_array_text()?));\n        schema_keys.push(Some(file.live.schema_key));\n        file_ids.push(file.live.file_id);\n        globals.push(Some(file.live.global));\n        change_ids.push(file.live.change_id);\n        created_ats.push(file.live.created_at);\n        updated_ats.push(file.live.updated_at);\n        commit_ids.push(file.live.commit_id);\n        untracked_values.push(Some(file.live.untracked));\n        metadata_values.push(file.live.metadata.as_ref().map(serialize_row_metadata));\n        version_ids.push(Some(version_id));\n    }\n\n    let mut columns = Vec::<ArrayRef>::with_capacity(schema.fields().len());\n    for field in schema.fields() {\n        let array: ArrayRef = match field.name().as_str() {\n            \"id\" => Arc::new(StringArray::from(ids.clone())),\n            \"path\" => Arc::new(StringArray::from(paths.clone())),\n            \"directory_id\" => Arc::new(StringArray::from(directory_ids.clone())),\n            \"name\" => Arc::new(StringArray::from(names.clone())),\n            \"hidden\" => Arc::new(BooleanArray::from(hiddens.clone())),\n            \"data\" => Arc::new(BinaryArray::from(\n                data_values\n                    .iter()\n                    .map(|value| value.as_deref())\n                    .collect::<Vec<_>>(),\n            )),\n            \"lixcol_entity_id\" => Arc::new(StringArray::from(entity_ids.clone())),\n            
\"lixcol_schema_key\" => Arc::new(StringArray::from(schema_keys.clone())),\n            \"lixcol_file_id\" => Arc::new(StringArray::from(file_ids.clone())),\n            \"lixcol_global\" => Arc::new(BooleanArray::from(globals.clone())),\n            \"lixcol_change_id\" => Arc::new(StringArray::from(change_ids.clone())),\n            \"lixcol_created_at\" => Arc::new(StringArray::from(created_ats.clone())),\n            \"lixcol_updated_at\" => Arc::new(StringArray::from(updated_ats.clone())),\n            \"lixcol_commit_id\" => Arc::new(StringArray::from(commit_ids.clone())),\n            \"lixcol_untracked\" => Arc::new(BooleanArray::from(untracked_values.clone())),\n            \"lixcol_metadata\" => Arc::new(StringArray::from(metadata_values.clone())),\n            \"lixcol_version_id\" => Arc::new(StringArray::from(version_ids.clone())),\n            other => {\n                return Err(LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\"sql2 lix_file provider does not support projected column '{other}'\"),\n                ))\n            }\n        };\n        columns.push(array);\n    }\n\n    let options = RecordBatchOptions::new().with_row_count(Some(ids.len()));\n    RecordBatch::try_new_with_options(Arc::clone(schema), columns, &options).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"sql2 failed to build lix_file record batch: {error}\"),\n        )\n    })\n}\n\nasync fn load_single_blob_bytes(\n    blob_reader: &Arc<dyn BlobDataReader>,\n    blob_hash: &str,\n) -> Result<Option<Vec<u8>>, LixError> {\n    let hash = BlobHash::from_hex(blob_hash)?;\n    Ok(blob_reader\n        .load_bytes_many(&[hash])\n        .await?\n        .into_vec()\n        .into_iter()\n        .next()\n        .flatten())\n}\n\nfn derive_directory_paths(\n    rows: &[DirectoryDescriptorRecord],\n) -> Result<BTreeMap<(String, String), String>, LixError> {\n    let mut by_version = 
BTreeMap::<String, BTreeMap<String, &DirectoryDescriptorRecord>>::new();\n    for row in rows {\n        by_version\n            .entry(row.version_id.clone())\n            .or_default()\n            .insert(row.id.clone(), row);\n    }\n\n    let mut paths = BTreeMap::<(String, String), String>::new();\n    for (version_id, records) in by_version {\n        for directory_id in records.keys() {\n            derive_directory_path_for(\n                &version_id,\n                directory_id,\n                &records,\n                &mut paths,\n                &mut BTreeSet::new(),\n            )?;\n        }\n    }\n    Ok(paths)\n}\n\n/// Resolves the absolute path for one directory, memoizing results in `paths`\n/// and using the `visiting` set to detect `parent_id` cycles.\nfn derive_directory_path_for(\n    version_id: &str,\n    directory_id: &str,\n    records: &BTreeMap<String, &DirectoryDescriptorRecord>,\n    paths: &mut BTreeMap<(String, String), String>,\n    visiting: &mut BTreeSet<String>,\n) -> Result<Option<String>, LixError> {\n    if let Some(path) = paths.get(&(version_id.to_string(), directory_id.to_string())) {\n        return Ok(Some(path.clone()));\n    }\n    if !visiting.insert(directory_id.to_string()) {\n        return Err(directory_parent_cycle_error(version_id, directory_id));\n    }\n    let Some(row) = records.get(directory_id) else {\n        visiting.remove(directory_id);\n        return Ok(None);\n    };\n    let path = match row.parent_id.as_deref() {\n        Some(parent_id) => {\n            let Some(parent_path) =\n                derive_directory_path_for(version_id, parent_id, records, paths, visiting)?\n            else {\n                visiting.remove(directory_id);\n                return Ok(None);\n            };\n            format!(\"{parent_path}{}/\", row.name)\n        }\n        None => format!(\"/{}/\", row.name),\n    };\n    visiting.remove(directory_id);\n    paths.insert(\n        (version_id.to_string(), directory_id.to_string()),\n        path.clone(),\n    );\n    Ok(Some(path))\n}\n\nfn directory_parent_cycle_error(version_id: &str, 
directory_id: &str) -> LixError {\n    LixError::new(\n        LixError::CODE_CONSTRAINT_VIOLATION,\n        format!(\n            \"lix_directory_descriptor parent_id cycle in version '{version_id}' while resolving directory '{directory_id}'\"\n        ),\n    )\n}\n\n/// Builds the output schema for a scan, honoring an optional column projection.\nfn projected_schema(base_schema: &SchemaRef, projection: Option<&Vec<usize>>) -> Result<SchemaRef> {\n    let fields = match projection {\n        Some(indices) => indices\n            .iter()\n            .map(|index| base_schema.field(*index).as_ref().clone())\n            .collect::<Vec<_>>(),\n        None => base_schema\n            .fields()\n            .iter()\n            .map(|field| field.as_ref().clone())\n            .collect::<Vec<_>>(),\n    };\n    Ok(Arc::new(Schema::new(fields)))\n}\n\n/// Builds the live-state scan request covering file descriptors, blob refs, and\n/// directory descriptors, optionally pinned to a single version.\nfn lix_file_scan_request(\n    version_binding: Option<&str>,\n    projected_schema: Option<&Schema>,\n    limit: Option<usize>,\n) -> LiveStateScanRequest {\n    LiveStateScanRequest {\n        filter: LiveStateFilter {\n            schema_keys: vec![\n                FILE_DESCRIPTOR_SCHEMA_KEY.to_string(),\n                BLOB_REF_SCHEMA_KEY.to_string(),\n                DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(),\n            ],\n            version_ids: version_binding\n                .map(|version_id| vec![version_id.to_string()])\n                .unwrap_or_default(),\n            ..LiveStateFilter::default()\n        },\n        projection: lix_file_live_state_projection(projected_schema),\n        limit,\n    }\n}\n\nfn lix_file_live_state_projection(projected_schema: Option<&Schema>) -> LiveStateProjection {\n    let Some(schema) = projected_schema else {\n        return LiveStateProjection::default();\n    };\n    let mut columns = Vec::new();\n    let needs_snapshot = schema.fields().iter().any(|field| {\n        matches!(\n            field.name().as_str(),\n            \"path\" | \"directory_id\" | \"name\" | \"hidden\" | \"data\"\n        )\n    });\n    if needs_snapshot {\n        
columns.push(\"snapshot_content\".to_string());\n    }\n    if schema\n        .fields()\n        .iter()\n        .any(|field| field.name() == \"lixcol_metadata\")\n    {\n        columns.push(\"metadata\".to_string());\n    }\n    LiveStateProjection { columns }\n}\n\nasync fn scan_lix_file_live_rows(\n    live_state: Arc<dyn LiveStateReader>,\n    request: &LiveStateScanRequest,\n    target_file_ids: &FileIdConstraint,\n) -> std::result::Result<Vec<MaterializedLiveStateRow>, LixError> {\n    let target_file_ids = match target_file_ids {\n        FileIdConstraint::All => return live_state.scan_rows(request).await,\n        FileIdConstraint::None => return Ok(Vec::new()),\n        FileIdConstraint::Ids(target_file_ids) => target_file_ids,\n    };\n\n    let mut file_request = request.clone();\n    file_request.filter.schema_keys = vec![\n        FILE_DESCRIPTOR_SCHEMA_KEY.to_string(),\n        BLOB_REF_SCHEMA_KEY.to_string(),\n    ];\n    file_request.filter.entity_ids = target_file_ids\n        .iter()\n        .map(|file_id| EntityIdentity::single(file_id.clone()))\n        .collect();\n\n    let mut rows = live_state.scan_rows(&file_request).await?;\n\n    let mut directory_request = request.clone();\n    directory_request.filter.schema_keys = vec![DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string()];\n    directory_request.filter.entity_ids.clear();\n    directory_request.limit = None;\n    rows.extend(live_state.scan_rows(&directory_request).await?);\n\n    Ok(rows)\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nenum FileIdConstraint {\n    All,\n    None,\n    Ids(BTreeSet<String>),\n}\n\nimpl FileIdConstraint {\n    fn from_ids(ids: Vec<String>) -> Self {\n        let ids = ids.into_iter().collect::<BTreeSet<_>>();\n        if ids.is_empty() {\n            Self::None\n        } else {\n            Self::Ids(ids)\n        }\n    }\n\n    fn intersect(self, other: Self) -> Self {\n        match (self, other) {\n            (Self::None, _) | (_, Self::None) => 
Self::None,\n            (Self::All, constraint) | (constraint, Self::All) => constraint,\n            (Self::Ids(left), Self::Ids(right)) => {\n                let ids = left.intersection(&right).cloned().collect::<BTreeSet<_>>();\n                if ids.is_empty() {\n                    Self::None\n                } else {\n                    Self::Ids(ids)\n                }\n            }\n        }\n    }\n\n    fn union(self, other: Self) -> Self {\n        match (self, other) {\n            (Self::All, _) | (_, Self::All) => Self::All,\n            (Self::None, constraint) | (constraint, Self::None) => constraint,\n            (Self::Ids(mut left), Self::Ids(right)) => {\n                left.extend(right);\n                Self::Ids(left)\n            }\n        }\n    }\n}\n\n/// Intersects the file-id constraints derived from every pushed-down filter;\n/// filters the analyzer cannot interpret contribute no constraint.\nfn file_id_constraint_from_filters(filters: &[Expr]) -> Result<FileIdConstraint> {\n    let analyzer = LixFileIdFilterAnalyzer;\n    let mut constraint = FileIdConstraint::All;\n    for filter in filters {\n        if let Some(filter_constraint) = analyzer.analyze(filter)? 
{\n            constraint = constraint.intersect(filter_constraint);\n        }\n    }\n    Ok(constraint)\n}\n\nstruct LixFileIdFilterAnalyzer;\n\nimpl LixFileIdFilterAnalyzer {\n    fn supports(&self, expr: &Expr) -> bool {\n        self.analyze(expr)\n            .is_ok_and(|constraint| constraint.is_some())\n    }\n\n    fn analyze(&self, expr: &Expr) -> Result<Option<FileIdConstraint>> {\n        ExactStringColumnFilterAnalyzer::new(\"id\").analyze(expr)\n    }\n}\n\nstruct ExactStringColumnFilterAnalyzer {\n    column_name: &'static str,\n}\n\nimpl ExactStringColumnFilterAnalyzer {\n    fn new(column_name: &'static str) -> Self {\n        Self { column_name }\n    }\n\n    fn supports(&self, expr: &Expr) -> bool {\n        self.analyze(expr)\n            .is_ok_and(|constraint| constraint.is_some())\n    }\n\n    fn analyze(&self, expr: &Expr) -> Result<Option<FileIdConstraint>> {\n        match expr {\n            Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => {\n                let Some(left) = self.analyze(&binary_expr.left)? else {\n                    return Ok(None);\n                };\n                let Some(right) = self.analyze(&binary_expr.right)? else {\n                    return Ok(None);\n                };\n                Ok(Some(left.intersect(right)))\n            }\n            Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::Or => {\n                let Some(left) = self.analyze(&binary_expr.left)? else {\n                    return Ok(None);\n                };\n                let Some(right) = self.analyze(&binary_expr.right)? 
else {\n                    return Ok(None);\n                };\n                Ok(Some(left.union(right)))\n            }\n            Expr::BinaryExpr(binary_expr) => Ok(self\n                .value_from_binary_filter(binary_expr)\n                .map(|value| FileIdConstraint::Ids(BTreeSet::from([value])))),\n            Expr::InList(in_list) => Ok(self\n                .values_from_in_list_filter(in_list)\n                .map(FileIdConstraint::from_ids)),\n            _ => Ok(None),\n        }\n    }\n\n    fn value_from_binary_filter(&self, binary_expr: &BinaryExpr) -> Option<String> {\n        if binary_expr.op != Operator::Eq {\n            return None;\n        }\n        self.value_from_column_literal_filter(&binary_expr.left, &binary_expr.right)\n            .or_else(|| {\n                self.value_from_column_literal_filter(&binary_expr.right, &binary_expr.left)\n            })\n    }\n\n    fn values_from_in_list_filter(&self, in_list: &InList) -> Option<Vec<String>> {\n        if in_list.negated {\n            return None;\n        }\n        let Expr::Column(column) = in_list.expr.as_ref() else {\n            return None;\n        };\n        if column.name != self.column_name {\n            return None;\n        }\n        let values = in_list\n            .list\n            .iter()\n            .map(string_expr_literal)\n            .collect::<Option<Vec<_>>>()?;\n        Some(values)\n    }\n\n    fn value_from_column_literal_filter(\n        &self,\n        column_expr: &Expr,\n        literal_expr: &Expr,\n    ) -> Option<String> {\n        let Expr::Column(column) = column_expr else {\n            return None;\n        };\n        if column.name != self.column_name {\n            return None;\n        }\n        string_expr_literal(literal_expr)\n    }\n}\n\nfn string_expr_literal(expr: &Expr) -> Option<String> {\n    let Expr::Literal(literal, _) = expr else {\n        return None;\n    };\n    match literal {\n        
ScalarValue::Utf8(Some(value))\n        | ScalarValue::Utf8View(Some(value))\n        | ScalarValue::LargeUtf8(Some(value)) => Some(value.clone()),\n        _ => None,\n    }\n}\n\nfn contains_column(expr: &Expr, column_name: &str) -> bool {\n    match expr {\n        Expr::Column(column) => column.name == column_name,\n        Expr::BinaryExpr(binary_expr) => {\n            contains_column(&binary_expr.left, column_name)\n                || contains_column(&binary_expr.right, column_name)\n        }\n        Expr::InList(in_list) => {\n            contains_column(&in_list.expr, column_name)\n                || in_list\n                    .list\n                    .iter()\n                    .any(|expr| contains_column(expr, column_name))\n        }\n        Expr::Between(between) => {\n            contains_column(&between.expr, column_name)\n                || contains_column(&between.low, column_name)\n                || contains_column(&between.high, column_name)\n        }\n        Expr::Not(expr) | Expr::IsNull(expr) | Expr::IsNotNull(expr) => {\n            contains_column(expr, column_name)\n        }\n        Expr::Negative(expr) => contains_column(expr, column_name),\n        _ => false,\n    }\n}\n\nfn validate_lix_file_update_assignments(\n    schema: &SchemaRef,\n    assignments: &[(String, Expr)],\n) -> Result<()> {\n    for (column_name, expr) in assignments {\n        schema.field_with_name(column_name).map_err(|_| {\n            DataFusionError::Plan(format!(\n                \"UPDATE lix_file failed: column '{column_name}' does not exist\"\n            ))\n        })?;\n        if !matches!(\n            column_name.as_str(),\n            \"path\" | \"directory_id\" | \"name\" | \"hidden\" | \"data\" | \"lixcol_metadata\"\n        ) {\n            return Err(DataFusionError::Execution(format!(\n                \"UPDATE lix_file cannot stage read-only column '{column_name}'\"\n            )));\n        }\n        if column_name == \"data\" {\n    
        reject_non_binary_lix_file_data_assignment(expr)?;\n        }\n    }\n    Ok(())\n}\n\n/// Rejects a literal (or binary-cast) `data` assignment whose value is neither\n/// binary nor NULL; other expressions are validated when the update executes.\nfn reject_non_binary_lix_file_data_assignment(expr: &Expr) -> Result<()> {\n    match expr {\n        Expr::Literal(value, _) => {\n            if !scalar_is_binary_or_null(value) {\n                return Err(non_binary_lix_file_data_assignment_error());\n            }\n        }\n        Expr::Cast(cast) if is_binary_type(&cast.data_type) => {\n            if !logical_expr_is_binary_or_null(&cast.expr) {\n                return Err(non_binary_lix_file_data_assignment_error());\n            }\n        }\n        _ => {}\n    }\n\n    Ok(())\n}\n\nfn non_binary_lix_file_data_assignment_error() -> DataFusionError {\n    lix_file_data_type_error(\n        \"UPDATE lix_file\",\n        \"data\",\n        \"use X'...' or a binary parameter for file contents\",\n    )\n}\n\n/// Applies the pushed-down physical filters to a materialized lix_file batch.\nfn filter_lix_file_batch(\n    batch: RecordBatch,\n    filters: &[Arc<dyn PhysicalExpr>],\n) -> Result<RecordBatch> {\n    let Some(mask) = evaluate_lix_file_filters(&batch, filters)? 
else {\n        return Ok(batch);\n    };\n    Ok(filter_record_batch(&batch, &mask)?)\n}\n\nfn evaluate_lix_file_filters(\n    batch: &RecordBatch,\n    filters: &[Arc<dyn PhysicalExpr>],\n) -> Result<Option<BooleanArray>> {\n    if filters.is_empty() {\n        return Ok(None);\n    }\n\n    let mut combined_mask: Option<BooleanArray> = None;\n    for filter in filters {\n        let result = filter.evaluate(batch)?;\n        let array = result.into_array(batch.num_rows())?;\n        let bool_array = array\n            .as_any()\n            .downcast_ref::<BooleanArray>()\n            .ok_or_else(|| {\n                DataFusionError::Execution(\"lix_file filter was not boolean\".to_string())\n            })?;\n        let normalized = bool_array\n            .iter()\n            .map(|value| Some(value == Some(true)))\n            .collect::<BooleanArray>();\n        combined_mask = Some(match combined_mask {\n            Some(existing) => and(&existing, &normalized)?,\n            None => normalized,\n        });\n    }\n    Ok(combined_mask)\n}\n\nfn dml_count_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![Field::new(\n        \"count\",\n        DataType::UInt64,\n        false,\n    )]))\n}\n\nfn dml_count_batch(schema: SchemaRef, count: u64) -> Result<RecordBatch> {\n    RecordBatch::try_new(\n        schema,\n        vec![Arc::new(UInt64Array::from(vec![count])) as ArrayRef],\n    )\n    .map_err(DataFusionError::from)\n}\n\nfn record_batch_has_non_null_column(batch: &RecordBatch, column_name: &str) -> Result<bool> {\n    for row_index in 0..batch.num_rows() {\n        if optional_scalar_value(batch, row_index, column_name)?\n            .is_some_and(|value| !value.is_null())\n        {\n            return Ok(true);\n        }\n    }\n    Ok(false)\n}\n\nfn reject_read_only_lix_file_insert_field(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<()> {\n    if optional_scalar_value(batch, row_index, 
column_name)?.is_some_and(|value| !value.is_null()) {\n        return Err(DataFusionError::Execution(format!(\n            \"INSERT into lix_file cannot stage read-only column '{column_name}'\"\n        )));\n    }\n    Ok(())\n}\n\nfn required_string_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<String> {\n    optional_string_value(batch, row_index, column_name)?.ok_or_else(|| {\n        DataFusionError::Execution(format!(\n            \"INSERT into lix_file requires non-null text column '{column_name}'\"\n        ))\n    })\n}\n\nfn update_required_string_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n) -> Result<String> {\n    update_optional_string_value(batch, assignment_values, row_index, column_name)?.ok_or_else(\n        || {\n            DataFusionError::Execution(format!(\n                \"UPDATE lix_file requires non-null text column '{column_name}'\"\n            ))\n        },\n    )\n}\n\nfn update_optional_string_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<String>> {\n    match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? 
{\n        InsertCell::Omitted | InsertCell::Provided(SqlCell::Null) => Ok(None),\n        InsertCell::Provided(SqlCell::Value(\n            ScalarValue::Utf8(Some(value))\n            | ScalarValue::Utf8View(Some(value))\n            | ScalarValue::LargeUtf8(Some(value)),\n        )) => Ok(Some(value)),\n        InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!(\n            \"UPDATE lix_file expected text-compatible column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn update_optional_metadata_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n    context: &str,\n) -> Result<Option<TransactionJson>> {\n    update_optional_string_value(batch, assignment_values, row_index, column_name)?\n        .map(|value| {\n            let metadata = parse_row_metadata_value(&value, context)\n                .map_err(super::error::lix_error_to_datafusion_error)?;\n            TransactionJson::from_value(metadata, &format!(\"{context} metadata\"))\n                .map_err(super::error::lix_error_to_datafusion_error)\n        })\n        .transpose()\n}\n\nfn update_optional_bool_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<bool>> {\n    match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? 
{\n        InsertCell::Omitted | InsertCell::Provided(SqlCell::Null) => Ok(None),\n        InsertCell::Provided(SqlCell::Value(ScalarValue::Boolean(Some(value)))) => Ok(Some(value)),\n        InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!(\n            \"UPDATE lix_file expected boolean column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn update_required_binary_value(\n    _batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Vec<u8>> {\n    match assignment_values.assigned_cell(row_index, column_name)? {\n        UpdateCell::Unassigned | UpdateCell::Assigned(SqlCell::Null) => {\n            Err(lix_file_data_type_error(\n                \"UPDATE lix_file\",\n                column_name,\n                \"use X'' for an empty file or omit data to leave contents unchanged\",\n            ))\n        }\n        UpdateCell::Assigned(SqlCell::Value(ScalarValue::Binary(Some(value))))\n        | UpdateCell::Assigned(SqlCell::Value(ScalarValue::LargeBinary(Some(value)))) => Ok(value),\n        UpdateCell::Assigned(SqlCell::Value(ScalarValue::FixedSizeBinary(_, Some(value)))) => {\n            Ok(value)\n        }\n        UpdateCell::Assigned(SqlCell::Value(other)) => Err(lix_file_data_type_error_with_value(\n            \"UPDATE lix_file\",\n            column_name,\n            &other,\n            \"use X'...' or a binary parameter for file contents\",\n        )),\n    }\n}\n\nfn optional_string_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<String>> {\n    match optional_scalar_value(batch, row_index, column_name)? 
{\n        None\n        | Some(ScalarValue::Null)\n        | Some(ScalarValue::Utf8(None))\n        | Some(ScalarValue::Utf8View(None))\n        | Some(ScalarValue::LargeUtf8(None)) => Ok(None),\n        Some(ScalarValue::Utf8(Some(value)))\n        | Some(ScalarValue::Utf8View(Some(value)))\n        | Some(ScalarValue::LargeUtf8(Some(value))) => Ok(Some(value)),\n        Some(other) => Err(DataFusionError::Execution(format!(\n            \"INSERT into lix_file expected text-compatible column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn optional_metadata_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n    context: &str,\n) -> Result<Option<TransactionJson>> {\n    optional_string_value(batch, row_index, column_name)?\n        .map(|value| {\n            let metadata = parse_row_metadata_value(&value, context)\n                .map_err(super::error::lix_error_to_datafusion_error)?;\n            TransactionJson::from_value(metadata, &format!(\"{context} metadata\"))\n                .map_err(super::error::lix_error_to_datafusion_error)\n        })\n        .transpose()\n}\n\nfn optional_bool_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<bool>> {\n    match optional_scalar_value(batch, row_index, column_name)? {\n        None | Some(ScalarValue::Null) | Some(ScalarValue::Boolean(None)) => Ok(None),\n        Some(ScalarValue::Boolean(Some(value))) => Ok(Some(value)),\n        Some(other) => Err(DataFusionError::Execution(format!(\n            \"INSERT into lix_file expected boolean column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn insert_optional_binary_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<Vec<u8>>> {\n    match optional_scalar_value(batch, row_index, column_name)? 
{\n        None => Ok(None),\n        Some(ScalarValue::Null)\n        | Some(ScalarValue::Binary(None))\n        | Some(ScalarValue::LargeBinary(None))\n        | Some(ScalarValue::FixedSizeBinary(_, None)) => Err(lix_file_data_type_error(\n            \"INSERT into lix_file\",\n            column_name,\n            \"use X'' for an empty file or omit data to create a descriptor without contents\",\n        )),\n        Some(ScalarValue::Binary(Some(value))) | Some(ScalarValue::LargeBinary(Some(value))) => {\n            Ok(Some(value))\n        }\n        Some(ScalarValue::FixedSizeBinary(_, Some(value))) => Ok(Some(value)),\n        Some(other) => Err(lix_file_data_type_error_with_value(\n            \"INSERT into lix_file\",\n            column_name,\n            &other,\n            \"use X'...' or a binary parameter for file contents\",\n        )),\n    }\n}\n\nfn optional_scalar_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<ScalarValue>> {\n    let schema = batch.schema();\n    let column_index = match schema.index_of(column_name) {\n        Ok(column_index) => column_index,\n        Err(_) => return Ok(None),\n    };\n    if row_index >= batch.num_rows() {\n        return Err(DataFusionError::Execution(format!(\n            \"row index {row_index} out of bounds for lix_file batch with {} rows\",\n            batch.num_rows()\n        )));\n    }\n    ScalarValue::try_from_array(batch.column(column_index).as_ref(), row_index)\n        .map(Some)\n        .map_err(|error| {\n            DataFusionError::Execution(format!(\n                \"failed to decode lix_file column '{column_name}' at row {row_index}: {error}\"\n            ))\n        })\n}\n\nfn lix_file_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![\n        Field::new(\"id\", DataType::Utf8, true),\n        Field::new(\"path\", DataType::Utf8, false),\n        Field::new(\"directory_id\", DataType::Utf8, true),\n        
Field::new(\"name\", DataType::Utf8, false),\n        Field::new(\"hidden\", DataType::Boolean, true),\n        Field::new(\"data\", DataType::Binary, true),\n        json_field(\"lixcol_entity_id\", false),\n        Field::new(\"lixcol_schema_key\", DataType::Utf8, false),\n        Field::new(\"lixcol_file_id\", DataType::Utf8, true),\n        Field::new(\"lixcol_global\", DataType::Boolean, true),\n        Field::new(\"lixcol_change_id\", DataType::Utf8, true),\n        Field::new(\"lixcol_created_at\", DataType::Utf8, true),\n        Field::new(\"lixcol_updated_at\", DataType::Utf8, true),\n        Field::new(\"lixcol_commit_id\", DataType::Utf8, true),\n        Field::new(\"lixcol_untracked\", DataType::Boolean, true),\n        json_field(\"lixcol_metadata\", true),\n    ]))\n}\n\nfn lix_file_by_version_schema() -> SchemaRef {\n    let mut fields = lix_file_schema()\n        .fields()\n        .iter()\n        .map(|field| field.as_ref().clone())\n        .collect::<Vec<_>>();\n    fields.push(Field::new(\"lixcol_version_id\", DataType::Utf8, false));\n    Arc::new(Schema::new(fields))\n}\n\nfn datafusion_error_to_lix_error(error: DataFusionError) -> LixError {\n    super::error::datafusion_error_to_lix_error(error)\n}\n\nfn lix_error_to_datafusion_error(error: LixError) -> DataFusionError {\n    super::error::lix_error_to_datafusion_error(error)\n}\n\n#[cfg(test)]\nmod tests {\n    use std::collections::{BTreeMap, BTreeSet};\n    use std::sync::Arc;\n\n    use async_trait::async_trait;\n    use datafusion::arrow::array::{ArrayRef, BinaryArray, BooleanArray, StringArray};\n    use datafusion::arrow::datatypes::{DataType, Field, Schema};\n    use datafusion::arrow::record_batch::RecordBatch;\n    use datafusion::common::{Column, ScalarValue};\n    use datafusion::execution::TaskContext;\n    use datafusion::logical_expr::expr::InList;\n    use datafusion::logical_expr::lit;\n    use datafusion::logical_expr::{BinaryExpr, Expr, Operator};\n    use 
serde_json::Value as JsonValue;\n\n    use crate::binary_cas::BlobDataReader;\n    use crate::functions::{\n        FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider,\n    };\n    use crate::live_state::MaterializedLiveStateRow;\n    use crate::live_state::{LiveStateReader, LiveStateRowRequest, LiveStateScanRequest};\n    use crate::sql2::dml::InsertSink;\n    use crate::sql2::{SqlWriteContext, SqlWriteExecutionContext};\n    use crate::transaction::types::{\n        TransactionJson, TransactionWrite, TransactionWriteMode, TransactionWriteOutcome,\n    };\n    use crate::LixError;\n\n    use super::{\n        derive_directory_path_for, lix_file_delete_stage_from_batch,\n        lix_file_insert_stage_from_batch, lix_file_insert_stage_from_batch_with_path_resolvers,\n        lix_file_write_rows_from_batch, DirectoryDescriptorRecord, LixFileInsertSink,\n        VersionBinding,\n    };\n\n    fn test_id_generator(ids: &'static [&'static str]) -> impl FnMut() -> String {\n        let mut ids = ids.iter();\n        move || ids.next().expect(\"test id should exist\").to_string()\n    }\n\n    fn test_functions() -> FunctionProviderHandle {\n        SharedFunctionProvider::new(\n            Box::new(SystemFunctionProvider) as Box<dyn FunctionProvider + Send>\n        )\n    }\n\n    fn string_literal(value: &str) -> Expr {\n        Expr::Literal(ScalarValue::Utf8(Some(value.to_string())), None)\n    }\n\n    fn column(name: &str) -> Expr {\n        Expr::Column(Column::from_name(name))\n    }\n\n    fn eq_filter(column_name: &str, value: &str) -> Expr {\n        Expr::BinaryExpr(BinaryExpr::new(\n            Box::new(column(column_name)),\n            Operator::Eq,\n            Box::new(string_literal(value)),\n        ))\n    }\n\n    #[test]\n    fn file_id_filters_support_string_id_predicates() {\n        let analyzer = super::LixFileIdFilterAnalyzer;\n        let constraint = analyzer\n            
.analyze(&Expr::InList(InList::new(\n                Box::new(column(\"id\")),\n                vec![string_literal(\"file-b\"), string_literal(\"file-a\")],\n                false,\n            )))\n            .unwrap()\n            .unwrap();\n\n        assert_eq!(\n            constraint,\n            super::FileIdConstraint::Ids(BTreeSet::from([\n                \"file-a\".to_string(),\n                \"file-b\".to_string()\n            ]))\n        );\n        assert!(analyzer.supports(&eq_filter(\"id\", \"file-a\")));\n        assert!(analyzer.supports(&Expr::BinaryExpr(BinaryExpr::new(\n            Box::new(string_literal(\"file-a\")),\n            Operator::Eq,\n            Box::new(column(\"id\")),\n        ))));\n    }\n\n    #[test]\n    fn file_id_filters_intersect_and_union_boolean_predicates() {\n        let analyzer = super::LixFileIdFilterAnalyzer;\n        let left = Expr::InList(InList::new(\n            Box::new(column(\"id\")),\n            vec![string_literal(\"file-a\"), string_literal(\"file-b\")],\n            false,\n        ));\n        let right = Expr::InList(InList::new(\n            Box::new(column(\"id\")),\n            vec![string_literal(\"file-b\"), string_literal(\"file-c\")],\n            false,\n        ));\n\n        let and_constraint = analyzer\n            .analyze(&Expr::BinaryExpr(BinaryExpr::new(\n                Box::new(left.clone()),\n                Operator::And,\n                Box::new(right.clone()),\n            )))\n            .unwrap()\n            .unwrap();\n        assert_eq!(\n            and_constraint,\n            super::FileIdConstraint::Ids(BTreeSet::from([\"file-b\".to_string()]))\n        );\n\n        let or_constraint = analyzer\n            .analyze(&Expr::BinaryExpr(BinaryExpr::new(\n                Box::new(left),\n                Operator::Or,\n                Box::new(right),\n            )))\n            .unwrap()\n            .unwrap();\n        assert_eq!(\n            or_constraint,\n  
          super::FileIdConstraint::Ids(BTreeSet::from([\n                \"file-a\".to_string(),\n                \"file-b\".to_string(),\n                \"file-c\".to_string()\n            ]))\n        );\n    }\n\n    #[test]\n    fn file_id_filters_detect_contradictions() {\n        let filters = vec![Expr::BinaryExpr(BinaryExpr::new(\n            Box::new(eq_filter(\"id\", \"file-a\")),\n            Operator::And,\n            Box::new(eq_filter(\"id\", \"file-b\")),\n        ))];\n\n        assert_eq!(\n            super::file_id_constraint_from_filters(&filters).unwrap(),\n            super::FileIdConstraint::None\n        );\n    }\n\n    #[test]\n    fn file_id_filters_ignore_non_id_and_negated_predicates() {\n        let analyzer = super::LixFileIdFilterAnalyzer;\n\n        assert!(!analyzer.supports(&eq_filter(\"name\", \"readme.md\")));\n        assert!(!analyzer.supports(&Expr::InList(InList::new(\n            Box::new(column(\"id\")),\n            vec![string_literal(\"file-a\")],\n            true,\n        ))));\n    }\n\n    fn lix_file_update_stage_from_batch_for_test(\n        batch: &RecordBatch,\n        version_binding: Option<&str>,\n        update_columns: super::LixFileUpdateColumns,\n        path_resolvers: Option<&mut BTreeMap<String, super::DirectoryPathResolver>>,\n        generate_directory_id: &mut dyn FnMut() -> String,\n    ) -> datafusion::common::Result<super::LixFileStagedBatch> {\n        let mut columns = Vec::new();\n        if update_columns.path {\n            columns.extend([\"path\", \"hidden\"]);\n        }\n        if update_columns.data {\n            columns.push(\"data\");\n        }\n        if update_columns.descriptor {\n            columns.extend([\"directory_id\", \"name\", \"hidden\"]);\n        }\n        let assignment_values = super::UpdateAssignmentValues::from_batch_columns(batch, &columns);\n        super::lix_file_update_stage_from_batch(\n            batch,\n            &assignment_values,\n            
version_binding,\n            update_columns,\n            path_resolvers,\n            generate_directory_id,\n        )\n    }\n\n    #[derive(Default)]\n    struct CapturingWriteContext {\n        rows: Vec<MaterializedLiveStateRow>,\n        writes: Vec<TransactionWrite>,\n    }\n\n    #[async_trait]\n    impl BlobDataReader for CapturingWriteContext {\n        async fn load_bytes_many(\n            &self,\n            hashes: &[crate::binary_cas::BlobHash],\n        ) -> Result<crate::binary_cas::BlobBytesBatch, LixError> {\n            Ok(crate::binary_cas::BlobBytesBatch::new(vec![\n                None;\n                hashes.len()\n            ]))\n        }\n    }\n\n    #[async_trait]\n    impl SqlWriteExecutionContext for CapturingWriteContext {\n        fn active_version_id(&self) -> &str {\n            \"version-b\"\n        }\n\n        fn functions(&self) -> FunctionProviderHandle {\n            test_functions()\n        }\n\n        fn list_visible_schemas(&self) -> Result<Vec<JsonValue>, LixError> {\n            Ok(Vec::new())\n        }\n\n        async fn load_bytes_many(\n            &mut self,\n            hashes: &[crate::binary_cas::BlobHash],\n        ) -> Result<crate::binary_cas::BlobBytesBatch, LixError> {\n            BlobDataReader::load_bytes_many(self, hashes).await\n        }\n\n        async fn scan_live_state(\n            &mut self,\n            _request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(self.rows.clone())\n        }\n\n        async fn load_version_head(\n            &mut self,\n            version_id: &str,\n        ) -> Result<Option<String>, LixError> {\n            if version_id == \"ghost-version\" {\n                return Ok(None);\n            }\n            Ok(Some(format!(\"commit-{version_id}\")))\n        }\n\n        async fn stage_write(\n            &mut self,\n            write: TransactionWrite,\n        ) -> Result<TransactionWriteOutcome, 
LixError> {\n            self.writes.push(write);\n            Ok(TransactionWriteOutcome { count: 0 })\n        }\n    }\n\n    #[derive(Default)]\n    struct RowsLiveStateReader {\n        rows: Vec<MaterializedLiveStateRow>,\n    }\n\n    #[async_trait]\n    impl LiveStateReader for RowsLiveStateReader {\n        async fn scan_rows(\n            &self,\n            _request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(self.rows.clone())\n        }\n\n        async fn load_row(\n            &self,\n            _request: &LiveStateRowRequest,\n        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n            Ok(None)\n        }\n    }\n\n    fn live_directory_row(\n        entity_id: &str,\n        version_id: &str,\n        snapshot_content: &str,\n    ) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: crate::entity_identity::EntityIdentity::single(entity_id),\n            schema_key: super::DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(),\n            file_id: None,\n            snapshot_content: Some(snapshot_content.to_string()),\n            metadata: None,\n            deleted: false,\n            version_id: version_id.to_string(),\n            change_id: Some(format!(\"change-{entity_id}\")),\n            commit_id: Some(format!(\"commit-{entity_id}\")),\n            global: false,\n            untracked: false,\n            created_at: \"2026-04-23T00:00:00Z\".to_string(),\n            updated_at: \"2026-04-23T01:00:00Z\".to_string(),\n        }\n    }\n\n    fn live_file_row(\n        entity_id: &str,\n        version_id: &str,\n        snapshot_content: &str,\n    ) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: crate::entity_identity::EntityIdentity::single(entity_id),\n            schema_key: super::FILE_DESCRIPTOR_SCHEMA_KEY.to_string(),\n            file_id: None,\n            snapshot_content: 
Some(snapshot_content.to_string()),\n            metadata: None,\n            deleted: false,\n            version_id: version_id.to_string(),\n            change_id: Some(format!(\"change-{entity_id}\")),\n            commit_id: Some(format!(\"commit-{entity_id}\")),\n            global: false,\n            untracked: false,\n            created_at: \"2026-04-23T00:00:00Z\".to_string(),\n            updated_at: \"2026-04-23T01:00:00Z\".to_string(),\n        }\n    }\n\n    fn string_column(values: Vec<Option<&str>>) -> ArrayRef {\n        Arc::new(StringArray::from(values)) as ArrayRef\n    }\n\n    fn file_insert_batch(include_version: bool, global: bool) -> RecordBatch {\n        let mut fields = vec![\n            Field::new(\"id\", DataType::Utf8, false),\n            Field::new(\"directory_id\", DataType::Utf8, true),\n            Field::new(\"name\", DataType::Utf8, false),\n            Field::new(\"hidden\", DataType::Boolean, false),\n            Field::new(\"lixcol_global\", DataType::Boolean, false),\n            Field::new(\"lixcol_metadata\", DataType::Utf8, true),\n        ];\n        let mut columns = vec![\n            string_column(vec![Some(\"file-readme\")]),\n            string_column(vec![Some(\"dir-docs\")]),\n            string_column(vec![Some(\"readme.md\")]),\n            Arc::new(BooleanArray::from(vec![false])) as ArrayRef,\n            Arc::new(BooleanArray::from(vec![global])) as ArrayRef,\n            string_column(vec![Some(\"{\\\"source\\\":\\\"file\\\"}\")]),\n        ];\n        if include_version {\n            fields.push(Field::new(\"lixcol_version_id\", DataType::Utf8, false));\n            columns.push(string_column(vec![Some(\"version-b\")]));\n        }\n        RecordBatch::try_new(Arc::new(Schema::new(fields)), columns).expect(\"file insert batch\")\n    }\n\n    fn data_insert_batch() -> RecordBatch {\n        RecordBatch::try_new(\n            Arc::new(Schema::new(vec![\n                Field::new(\"id\", 
DataType::Utf8, false),\n                Field::new(\"directory_id\", DataType::Utf8, true),\n                Field::new(\"name\", DataType::Utf8, false),\n                Field::new(\"hidden\", DataType::Boolean, false),\n                Field::new(\"data\", DataType::Binary, true),\n                Field::new(\"lixcol_version_id\", DataType::Utf8, false),\n            ])),\n            vec![\n                string_column(vec![Some(\"file-readme\")]),\n                string_column(vec![Some(\"dir-docs\")]),\n                string_column(vec![Some(\"readme.md\")]),\n                Arc::new(BooleanArray::from(vec![false])) as ArrayRef,\n                Arc::new(BinaryArray::from_vec(vec![b\"hello\"])) as ArrayRef,\n                string_column(vec![Some(\"version-b\")]),\n            ],\n        )\n        .expect(\"file data batch\")\n    }\n\n    fn path_data_insert_batch() -> RecordBatch {\n        RecordBatch::try_new(\n            Arc::new(Schema::new(vec![\n                Field::new(\"id\", DataType::Utf8, false),\n                Field::new(\"path\", DataType::Utf8, false),\n                Field::new(\"hidden\", DataType::Boolean, false),\n                Field::new(\"data\", DataType::Binary, true),\n                Field::new(\"lixcol_version_id\", DataType::Utf8, false),\n            ])),\n            vec![\n                string_column(vec![Some(\"file-readme\")]),\n                string_column(vec![Some(\"/docs/guides/readme.md\")]),\n                Arc::new(BooleanArray::from(vec![false])) as ArrayRef,\n                Arc::new(BinaryArray::from_vec(vec![b\"hello\"])) as ArrayRef,\n                string_column(vec![Some(\"version-b\")]),\n            ],\n        )\n        .expect(\"file path data batch\")\n    }\n\n    fn path_update_batch() -> RecordBatch {\n        RecordBatch::try_new(\n            Arc::new(Schema::new(vec![\n                Field::new(\"id\", DataType::Utf8, false),\n                Field::new(\"path\", DataType::Utf8, 
false),\n                Field::new(\"hidden\", DataType::Boolean, false),\n                Field::new(\"data\", DataType::Binary, true),\n                Field::new(\"lixcol_version_id\", DataType::Utf8, false),\n            ])),\n            vec![\n                string_column(vec![Some(\"file-readme\")]),\n                string_column(vec![Some(\"/docs/renamed.md\")]),\n                Arc::new(BooleanArray::from(vec![false])) as ArrayRef,\n                Arc::new(BinaryArray::from_vec(vec![b\"hello\"])) as ArrayRef,\n                string_column(vec![Some(\"version-b\")]),\n            ],\n        )\n        .expect(\"file path update batch\")\n    }\n\n    fn file_delete_batch() -> RecordBatch {\n        RecordBatch::try_new(\n            Arc::new(Schema::new(vec![\n                Field::new(\"id\", DataType::Utf8, false),\n                Field::new(\"lixcol_version_id\", DataType::Utf8, false),\n            ])),\n            vec![\n                string_column(vec![Some(\"file-readme\")]),\n                string_column(vec![Some(\"version-b\")]),\n            ],\n        )\n        .expect(\"file delete batch\")\n    }\n\n    #[test]\n    fn derives_nested_directory_paths() {\n        let root = DirectoryDescriptorRecord {\n            id: \"dir-docs\".to_string(),\n            parent_id: None,\n            name: \"docs\".to_string(),\n            version_id: \"version-a\".to_string(),\n        };\n        let child = DirectoryDescriptorRecord {\n            id: \"dir-guides\".to_string(),\n            parent_id: Some(\"dir-docs\".to_string()),\n            name: \"guides\".to_string(),\n            version_id: \"version-a\".to_string(),\n        };\n        let mut records = BTreeMap::new();\n        records.insert(root.id.clone(), &root);\n        records.insert(child.id.clone(), &child);\n        let mut paths = BTreeMap::new();\n\n        assert_eq!(\n            derive_directory_path_for(\n                \"version-a\",\n                
\"dir-guides\",\n                &records,\n                &mut paths,\n                &mut BTreeSet::new()\n            )\n            .expect(\"path derivation should succeed\"),\n            Some(\"/docs/guides/\".to_string())\n        );\n    }\n\n    #[tokio::test]\n    async fn file_projection_rejects_unresolved_non_root_directory_id() {\n        let blob_reader = Arc::new(CapturingWriteContext::default()) as Arc<dyn BlobDataReader>;\n        let error = super::lix_file_record_batch(\n            &super::lix_file_schema(),\n            &blob_reader,\n            vec![live_file_row(\n                \"file-readme\",\n                \"version-b\",\n                \"{\\\"id\\\":\\\"file-readme\\\",\\\"directory_id\\\":\\\"missing-dir\\\",\\\"name\\\":\\\"readme.md\\\",\\\"hidden\\\":false}\",\n            )],\n        )\n        .await\n        .expect_err(\"unresolved non-root directory_id should not project as root path\");\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n        assert!(error.message.contains(\"missing-dir\"));\n    }\n\n    #[test]\n    fn decodes_file_insert_into_lix_state_write_row() {\n        let batch = file_insert_batch(true, false);\n\n        let rows = lix_file_write_rows_from_batch(&batch, None).expect(\"decode file insert\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(\n            rows[0].entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\n                \"file-readme\"\n            ))\n        );\n        assert_eq!(rows[0].schema_key, \"lix_file_descriptor\");\n        assert_eq!(rows[0].version_id, \"version-b\");\n        assert_eq!(\n            rows[0].metadata.as_ref(),\n            Some(&TransactionJson::from_value_for_test(\n                serde_json::json!({\"source\": \"file\"})\n            ))\n        );\n        let snapshot = rows[0].snapshot.as_ref().expect(\"descriptor snapshot JSON\");\n        assert_eq!(snapshot[\"id\"], 
\"file-readme\");\n        assert_eq!(snapshot[\"directory_id\"], \"dir-docs\");\n        assert_eq!(snapshot[\"name\"], \"readme.md\");\n        assert_eq!(snapshot[\"hidden\"], false);\n    }\n\n    #[test]\n    fn active_file_insert_defaults_version_id() {\n        let batch = file_insert_batch(false, false);\n\n        let rows =\n            lix_file_write_rows_from_batch(&batch, Some(\"version-a\")).expect(\"decode file insert\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].version_id, \"version-a\");\n    }\n\n    #[test]\n    fn by_version_file_insert_requires_version_id_for_non_global_rows() {\n        let batch = file_insert_batch(false, false);\n\n        let error =\n            lix_file_write_rows_from_batch(&batch, None).expect_err(\"version id is required\");\n\n        assert!(\n            error.to_string().contains(\"requires lixcol_version_id\"),\n            \"unexpected error: {error}\"\n        );\n    }\n\n    #[test]\n    fn file_insert_rejects_global_with_non_global_version_id() {\n        let error = lix_file_write_rows_from_batch(&file_insert_batch(true, true), None)\n            .expect_err(\"global file write should reject conflicting version id\");\n\n        assert!(\n            error\n                .to_string()\n                .contains(\"cannot set lixcol_global=true with non-global lixcol_version_id\"),\n            \"unexpected error: {error}\"\n        );\n    }\n\n    #[test]\n    fn file_update_accepts_path_assignment() {\n        super::validate_lix_file_update_assignments(\n            &super::lix_file_schema(),\n            &[(\"path\".to_string(), lit(\"/docs/renamed.md\"))],\n        )\n        .expect(\"path should be writable for update\");\n    }\n\n    #[test]\n    fn file_path_update_stages_descriptor_from_new_path() {\n        let mut resolvers = BTreeMap::new();\n        resolvers.insert(\n            super::filesystem_storage_scope_key(\"version-b\", false, false, None),\n            
super::DirectoryPathResolver::from_existing([(\n                \"/docs/\".to_string(),\n                \"dir-docs\".to_string(),\n            )])\n            .expect(\"directory resolver should seed\"),\n        );\n\n        let staged = lix_file_update_stage_from_batch_for_test(\n            &path_update_batch(),\n            None,\n            super::LixFileUpdateColumns {\n                path: true,\n                data: false,\n                descriptor: false,\n            },\n            Some(&mut resolvers),\n            &mut test_id_generator(&[\"should-not-be-used\"]),\n        )\n        .expect(\"decode file path update\");\n\n        assert_eq!(staged.count, 1);\n        assert_eq!(staged.file_data_writes.len(), 0);\n        assert_eq!(staged.state_rows.len(), 1);\n        let descriptor = staged\n            .state_rows\n            .iter()\n            .find(|row| row.schema_key == \"lix_file_descriptor\")\n            .expect(\"file descriptor row should be staged\");\n        let snapshot: JsonValue = descriptor.snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"id\"], \"file-readme\");\n        assert_eq!(snapshot[\"directory_id\"], \"dir-docs\");\n        assert_eq!(snapshot[\"name\"], \"renamed.md\");\n        assert_eq!(snapshot[\"hidden\"], false);\n    }\n\n    #[test]\n    fn file_path_update_preserves_existing_data_unless_data_is_assigned() {\n        let mut resolvers = BTreeMap::new();\n        resolvers.insert(\n            super::filesystem_storage_scope_key(\"version-b\", false, false, None),\n            super::DirectoryPathResolver::from_existing([(\n                \"/docs/\".to_string(),\n                \"dir-docs\".to_string(),\n            )])\n            .expect(\"directory resolver should seed\"),\n        );\n\n        let staged = lix_file_update_stage_from_batch_for_test(\n            &path_update_batch(),\n            None,\n            super::LixFileUpdateColumns {\n                path: 
true,\n                data: false,\n                descriptor: false,\n            },\n            Some(&mut resolvers),\n            &mut test_id_generator(&[\"should-not-be-used\"]),\n        )\n        .expect(\"decode file path update\");\n\n        assert!(\n            staged.file_data_writes.is_empty(),\n            \"path-only update should not rewrite file data\"\n        );\n        assert!(\n            staged\n                .state_rows\n                .iter()\n                .all(|row| row.schema_key != \"lix_binary_blob_ref\"),\n            \"path-only update should not rewrite the blob ref\"\n        );\n    }\n\n    #[tokio::test]\n    async fn file_path_update_seeds_resolver_from_visible_directory_state() {\n        let mut resolvers = super::file_path_resolvers_from_live_state(\n            Arc::new(RowsLiveStateReader {\n                rows: vec![live_directory_row(\n                    \"dir-docs\",\n                    \"version-b\",\n                    \"{\\\"id\\\":\\\"dir-docs\\\",\\\"parent_id\\\":null,\\\"name\\\":\\\"docs\\\"}\",\n                )],\n            }) as Arc<dyn LiveStateReader>,\n            Some(\"version-b\"),\n        )\n        .await\n        .expect(\"directory state should seed path resolver\");\n\n        let staged = lix_file_update_stage_from_batch_for_test(\n            &path_update_batch(),\n            None,\n            super::LixFileUpdateColumns {\n                path: true,\n                data: false,\n                descriptor: false,\n            },\n            Some(&mut resolvers),\n            &mut test_id_generator(&[\"should-not-be-used\"]),\n        )\n        .expect(\"decode file path update\");\n\n        assert_eq!(staged.count, 1);\n        assert_eq!(staged.state_rows.len(), 1);\n        assert!(staged\n            .state_rows\n            .iter()\n            .all(|row| row.schema_key != \"lix_directory_descriptor\"));\n\n        let snapshot: JsonValue = staged.state_rows[0]\n    
        .snapshot\n            .as_ref()\n            .unwrap()\n            .value()\n            .clone();\n        assert_eq!(snapshot[\"directory_id\"], \"dir-docs\");\n        assert_eq!(snapshot[\"name\"], \"renamed.md\");\n    }\n\n    #[tokio::test]\n    async fn file_path_update_stages_only_missing_parent_directories() {\n        let mut resolvers = super::file_path_resolvers_from_live_state(\n            Arc::new(RowsLiveStateReader::default()) as Arc<dyn LiveStateReader>,\n            Some(\"version-b\"),\n        )\n        .await\n        .expect(\"empty directory state should seed path resolver\");\n\n        let staged = lix_file_update_stage_from_batch_for_test(\n            &path_update_batch(),\n            None,\n            super::LixFileUpdateColumns {\n                path: true,\n                data: false,\n                descriptor: false,\n            },\n            Some(&mut resolvers),\n            &mut test_id_generator(&[\"dir-generated-docs\"]),\n        )\n        .expect(\"decode file path update\");\n\n        assert_eq!(staged.count, 1);\n        assert_eq!(staged.state_rows.len(), 2);\n        assert_eq!(\n            staged\n                .state_rows\n                .iter()\n                .filter(|row| row.schema_key == \"lix_directory_descriptor\")\n                .count(),\n            1\n        );\n\n        let directory = staged\n            .state_rows\n            .iter()\n            .find(|row| row.schema_key == \"lix_directory_descriptor\")\n            .expect(\"missing /docs/ directory should be staged\");\n        assert_eq!(\n            directory.entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\n                \"dir-generated-docs\"\n            ))\n        );\n\n        let descriptor = staged\n            .state_rows\n            .iter()\n            .find(|row| row.schema_key == \"lix_file_descriptor\")\n            .expect(\"file descriptor should be 
staged\");\n        let snapshot: JsonValue = descriptor.snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"directory_id\"], \"dir-generated-docs\");\n    }\n\n    #[test]\n    fn file_path_update_with_data_assignment_stages_blob_ref_and_payload() {\n        let mut resolvers = BTreeMap::new();\n        resolvers.insert(\n            super::filesystem_storage_scope_key(\"version-b\", false, false, None),\n            super::DirectoryPathResolver::from_existing([(\n                \"/docs/\".to_string(),\n                \"dir-docs\".to_string(),\n            )])\n            .expect(\"directory resolver should seed\"),\n        );\n\n        let staged = lix_file_update_stage_from_batch_for_test(\n            &path_update_batch(),\n            None,\n            super::LixFileUpdateColumns {\n                path: true,\n                data: true,\n                descriptor: false,\n            },\n            Some(&mut resolvers),\n            &mut test_id_generator(&[\"should-not-be-used\"]),\n        )\n        .expect(\"decode file path and data update\");\n\n        assert_eq!(staged.count, 1);\n        assert_eq!(staged.file_data_writes.len(), 1);\n        assert_eq!(staged.file_data_writes[0].file_id, \"file-readme\");\n        assert_eq!(staged.file_data_writes[0].data, b\"hello\");\n        assert!(staged\n            .state_rows\n            .iter()\n            .any(|row| row.schema_key == \"lix_file_descriptor\"));\n        assert!(staged\n            .state_rows\n            .iter()\n            .any(|row| row.schema_key == \"lix_binary_blob_ref\"));\n    }\n\n    #[test]\n    fn file_data_update_without_path_ignores_materialized_path_column() {\n        let staged = lix_file_update_stage_from_batch_for_test(\n            &path_update_batch(),\n            None,\n            super::LixFileUpdateColumns {\n                path: false,\n                data: true,\n                descriptor: false,\n            },\n            
None,\n            &mut test_id_generator(&[\"should-not-be-used\"]),\n        )\n        .expect(\"decode file data update\");\n\n        assert_eq!(staged.count, 1);\n        assert_eq!(staged.file_data_writes.len(), 1);\n        assert_eq!(staged.file_data_writes[0].file_id, \"file-readme\");\n        assert_eq!(staged.state_rows.len(), 1);\n        assert_eq!(staged.state_rows[0].schema_key, \"lix_binary_blob_ref\");\n    }\n\n    #[test]\n    fn file_insert_stages_non_null_data() {\n        let batch = data_insert_batch();\n\n        let staged = lix_file_insert_stage_from_batch(&batch, None).expect(\"decode file data\");\n\n        assert_eq!(staged.count, 1);\n        assert_eq!(staged.state_rows.len(), 2);\n        assert!(staged\n            .state_rows\n            .iter()\n            .any(|row| row.schema_key == \"lix_file_descriptor\"));\n        let blob_ref_row = staged\n            .state_rows\n            .iter()\n            .find(|row| row.schema_key == \"lix_binary_blob_ref\")\n            .expect(\"data insert should stage blob ref row\");\n        assert_eq!(\n            blob_ref_row.entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\n                \"file-readme\"\n            ))\n        );\n        assert_eq!(blob_ref_row.file_id.as_deref(), Some(\"file-readme\"));\n        assert_eq!(staged.file_data_writes.len(), 1);\n        assert_eq!(staged.file_data_writes[0].file_id, \"file-readme\");\n        assert_eq!(staged.file_data_writes[0].version_id, \"version-b\");\n        assert_eq!(staged.file_data_writes[0].data, b\"hello\");\n    }\n\n    #[test]\n    fn file_delete_with_blob_ref_stages_descriptor_and_blob_ref_tombstones() {\n        let batch = file_delete_batch();\n        let staged = lix_file_delete_stage_from_batch(\n            &batch,\n            None,\n            &BTreeSet::from([\"file-readme\".to_string()]),\n        )\n        .expect(\"decode file delete\");\n\n        
assert_eq!(staged.count, 1);\n        assert_eq!(staged.state_rows.len(), 2);\n        let descriptor = staged\n            .state_rows\n            .iter()\n            .find(|row| row.schema_key == \"lix_file_descriptor\")\n            .expect(\"file descriptor tombstone should be staged\");\n        assert_eq!(\n            descriptor.entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\n                \"file-readme\"\n            ))\n        );\n        assert_eq!(descriptor.file_id, None);\n        assert_eq!(descriptor.snapshot, None);\n\n        let blob_ref = staged\n            .state_rows\n            .iter()\n            .find(|row| row.schema_key == \"lix_binary_blob_ref\")\n            .expect(\"blob ref tombstone should be staged\");\n        assert_eq!(\n            blob_ref.entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\n                \"file-readme\"\n            ))\n        );\n        assert_eq!(blob_ref.file_id.as_deref(), Some(\"file-readme\"));\n        assert_eq!(blob_ref.snapshot, None);\n    }\n\n    #[test]\n    fn file_delete_without_blob_ref_stages_only_descriptor_tombstone() {\n        let batch = file_delete_batch();\n        let staged = lix_file_delete_stage_from_batch(&batch, None, &BTreeSet::new())\n            .expect(\"decode file delete\");\n\n        assert_eq!(staged.count, 1);\n        assert_eq!(staged.state_rows.len(), 1);\n        assert_eq!(staged.state_rows[0].schema_key, \"lix_file_descriptor\");\n        assert_eq!(\n            staged.state_rows[0].entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\n                \"file-readme\"\n            ))\n        );\n        assert_eq!(staged.state_rows[0].snapshot, None);\n    }\n\n    #[test]\n    fn file_path_insert_reuses_existing_parent_directory() {\n        let mut resolvers = BTreeMap::new();\n        resolvers.insert(\n            
super::filesystem_storage_scope_key(\"version-b\", false, false, None),\n            super::DirectoryPathResolver::from_existing([\n                (\"/docs/\".to_string(), \"dir-docs\".to_string()),\n                (\"/docs/guides/\".to_string(), \"dir-guides\".to_string()),\n            ])\n            .expect(\"directory resolver should seed\"),\n        );\n\n        let staged = lix_file_insert_stage_from_batch_with_path_resolvers(\n            &path_data_insert_batch(),\n            None,\n            \"lix_file\",\n            &mut resolvers,\n            &mut test_id_generator(&[\"should-not-be-used\"]),\n            true,\n        )\n        .expect(\"decode file path data\");\n\n        assert_eq!(staged.count, 1);\n        assert_eq!(staged.file_data_writes.len(), 1);\n        assert_eq!(staged.file_data_writes[0].file_id, \"file-readme\");\n        assert_eq!(staged.state_rows.len(), 2);\n        let descriptor = staged\n            .state_rows\n            .iter()\n            .find(|row| row.schema_key == \"lix_file_descriptor\")\n            .expect(\"file descriptor row should be staged\");\n        let snapshot: JsonValue = descriptor.snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"id\"], \"file-readme\");\n        assert_eq!(snapshot[\"directory_id\"], \"dir-guides\");\n        assert_eq!(snapshot[\"name\"], \"readme.md\");\n    }\n\n    #[test]\n    fn file_path_insert_stages_missing_parent_directories_once() {\n        let mut resolvers = BTreeMap::new();\n\n        let staged = lix_file_insert_stage_from_batch_with_path_resolvers(\n            &path_data_insert_batch(),\n            None,\n            \"lix_file\",\n            &mut resolvers,\n            &mut test_id_generator(&[\"dir-generated-docs\", \"dir-generated-guides\"]),\n            true,\n        )\n        .expect(\"decode file path data\");\n\n        assert_eq!(staged.count, 1);\n        assert_eq!(staged.state_rows.len(), 4);\n        let 
directory_rows = staged\n            .state_rows\n            .iter()\n            .filter(|row| row.schema_key == \"lix_directory_descriptor\")\n            .collect::<Vec<_>>();\n        assert_eq!(directory_rows.len(), 2);\n\n        let descriptor = staged\n            .state_rows\n            .iter()\n            .find(|row| row.schema_key == \"lix_file_descriptor\")\n            .expect(\"file descriptor row should be staged\");\n        let snapshot: JsonValue = descriptor.snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"directory_id\"], \"dir-generated-guides\");\n    }\n\n    #[tokio::test]\n    async fn file_insert_sink_stages_decoded_lix_state_rows() {\n        let batch = file_insert_batch(true, false);\n        let mut write_context = CapturingWriteContext::default();\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let sink = LixFileInsertSink::new(\n            batch.schema(),\n            write_ctx,\n            test_functions(),\n            VersionBinding::explicit(),\n            false,\n        );\n\n        let count = sink\n            .write_batches(vec![batch], &Arc::new(TaskContext::default()))\n            .await\n            .expect(\"file insert sink should stage\");\n\n        assert_eq!(count, 1);\n        let writes = &write_context.writes;\n        assert_eq!(writes.len(), 1);\n        match &writes[0] {\n            TransactionWrite::Rows { mode, rows } => {\n                assert_eq!(*mode, TransactionWriteMode::Insert);\n                assert_eq!(rows.len(), 1);\n                assert_eq!(\n                    rows[0].entity_id.as_ref(),\n                    Some(&crate::entity_identity::EntityIdentity::single(\n                        \"file-readme\"\n                    ))\n                );\n                assert_eq!(rows[0].schema_key, \"lix_file_descriptor\");\n            }\n            other => panic!(\"expected insert staged write, got {other:?}\"),\n        }\n    
}\n\n    #[tokio::test]\n    async fn file_insert_sink_stages_file_data_writes() {\n        let batch = data_insert_batch();\n        let mut write_context = CapturingWriteContext::default();\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let sink = LixFileInsertSink::new(\n            batch.schema(),\n            write_ctx,\n            test_functions(),\n            VersionBinding::explicit(),\n            true,\n        );\n\n        let count = sink\n            .write_batches(vec![batch], &Arc::new(TaskContext::default()))\n            .await\n            .expect(\"file insert sink should stage data\");\n\n        assert_eq!(count, 1);\n        let writes = &write_context.writes;\n        assert_eq!(writes.len(), 1);\n        match &writes[0] {\n            TransactionWrite::RowsWithFileData {\n                mode,\n                rows,\n                file_data,\n                count,\n                ..\n            } => {\n                assert_eq!(*mode, TransactionWriteMode::Insert);\n                assert_eq!(*count, 1);\n                assert_eq!(rows.len(), 2);\n                assert!(rows\n                    .iter()\n                    .any(|row| row.schema_key == \"lix_file_descriptor\"));\n                assert!(rows\n                    .iter()\n                    .any(|row| row.schema_key == \"lix_binary_blob_ref\"));\n                assert_eq!(file_data.len(), 1);\n                assert_eq!(file_data[0].file_id, \"file-readme\");\n                assert_eq!(file_data[0].data, b\"hello\");\n            }\n            other => panic!(\"expected insert with file data staged write, got {other:?}\"),\n        }\n    }\n\n    #[tokio::test]\n    async fn file_insert_sink_seeds_path_resolver_from_live_state() {\n        let batch = path_data_insert_batch();\n        let mut write_context = CapturingWriteContext {\n            rows: vec![\n                live_directory_row(\n                    \"dir-docs\",\n  
                  \"version-b\",\n                    \"{\\\"id\\\":\\\"dir-docs\\\",\\\"parent_id\\\":null,\\\"name\\\":\\\"docs\\\"}\",\n                ),\n                live_directory_row(\n                    \"dir-guides\",\n                    \"version-b\",\n                    \"{\\\"id\\\":\\\"dir-guides\\\",\\\"parent_id\\\":\\\"dir-docs\\\",\\\"name\\\":\\\"guides\\\"}\",\n                ),\n            ],\n            writes: Vec::new(),\n        };\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let sink = LixFileInsertSink::new(\n            batch.schema(),\n            write_ctx,\n            test_functions(),\n            VersionBinding::explicit(),\n            true,\n        );\n\n        let count = sink\n            .write_batches(vec![batch], &Arc::new(TaskContext::default()))\n            .await\n            .expect(\"file insert sink should stage path data\");\n\n        assert_eq!(count, 1);\n        let writes = &write_context.writes;\n        assert_eq!(writes.len(), 1);\n        match &writes[0] {\n            TransactionWrite::RowsWithFileData {\n                rows,\n                file_data,\n                count,\n                ..\n            } => {\n                assert_eq!(*count, 1);\n                assert_eq!(file_data.len(), 1);\n                assert_eq!(file_data[0].file_id, \"file-readme\");\n                let descriptor = rows\n                    .iter()\n                    .find(|row| row.schema_key == \"lix_file_descriptor\")\n                    .expect(\"file descriptor row should be staged\");\n                let snapshot: JsonValue = descriptor.snapshot.as_ref().unwrap().value().clone();\n                assert_eq!(snapshot[\"directory_id\"], \"dir-guides\");\n            }\n            other => panic!(\"expected insert with file data staged write, got {other:?}\"),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/filesystem_planner.rs",
    "content": "#![allow(dead_code)]\n\nuse std::collections::{BTreeMap, BTreeSet};\n\nuse serde::Deserialize;\nuse serde_json::{json, Map as JsonMap, Value as JsonValue};\n\nuse crate::common::{\n    directory_ancestor_paths, directory_name_from_path, normalize_directory_path,\n    parent_directory_path, stable_content_fingerprint_hex, ParsedFilePath,\n};\nuse crate::entity_identity::EntityIdentity;\nuse crate::live_state::MaterializedLiveStateRow;\nuse crate::LixError;\n\nuse super::filesystem_visibility::VisibleFilesystem;\nuse crate::transaction::types::{TransactionFileData, TransactionJson, TransactionWriteRow};\n\npub(crate) const FILE_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_file_descriptor\";\npub(crate) const DIRECTORY_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_directory_descriptor\";\npub(crate) const BLOB_REF_SCHEMA_KEY: &str = \"lix_binary_blob_ref\";\n\n/// Planned filesystem write output after SQL surface columns have been lowered\n/// into state rows and optional file payload writes.\n///\n/// Providers should emit this shape; transaction/commit code should not need\n/// to know whether a row came from `lix_file`, `lix_directory`, or a future\n/// filesystem write surface.\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\npub(crate) struct FilesystemWritePlan {\n    pub(crate) rows: Vec<TransactionWriteRow>,\n    pub(crate) file_data: Vec<TransactionFileData>,\n    pub(crate) count: u64,\n}\n\n/// Planned filesystem delete output after SQL predicates have selected rows\n/// and the surface delete has been lowered into tombstone state rows.\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\npub(crate) struct FilesystemDeletePlan {\n    pub(crate) rows: Vec<TransactionWriteRow>,\n    pub(crate) count: u64,\n}\n\n/// Common state-row lane fields shared by filesystem descriptor/blob rows.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct FilesystemRowContext {\n    pub(crate) version_id: String,\n    pub(crate) global: bool,\n    pub(crate) untracked: 
bool,\n    pub(crate) file_id: Option<String>,\n    pub(crate) metadata: Option<TransactionJson>,\n}\n\nimpl FilesystemRowContext {\n    pub(crate) fn active_version(version_id: impl Into<String>) -> Self {\n        Self {\n            version_id: version_id.into(),\n            global: false,\n            untracked: false,\n            file_id: None,\n            metadata: None,\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct DirectoryDescriptorRowInput {\n    pub(crate) id: String,\n    pub(crate) parent_id: Option<String>,\n    pub(crate) name: String,\n    pub(crate) hidden: bool,\n    pub(crate) context: FilesystemRowContext,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct FileDescriptorRowInput {\n    pub(crate) id: String,\n    pub(crate) directory_id: Option<String>,\n    pub(crate) name: String,\n    pub(crate) hidden: bool,\n    pub(crate) context: FilesystemRowContext,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct DirectoryDescriptorWriteIntent {\n    pub(crate) id: Option<String>,\n    pub(crate) parent_id: Option<String>,\n    pub(crate) name: String,\n    pub(crate) hidden: Option<bool>,\n    pub(crate) context: FilesystemRowContext,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct FileDescriptorWriteIntent {\n    pub(crate) id: Option<String>,\n    pub(crate) directory_id: Option<String>,\n    pub(crate) name: String,\n    pub(crate) hidden: Option<bool>,\n    pub(crate) context: FilesystemRowContext,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct BlobRefRowInput {\n    pub(crate) file_id: String,\n    pub(crate) data: Vec<u8>,\n    pub(crate) context: FilesystemRowContext,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct FilePathWriteInput {\n    pub(crate) id: Option<String>,\n    pub(crate) path: String,\n    pub(crate) data: Option<Vec<u8>>,\n    pub(crate) hidden: Option<bool>,\n    pub(crate) context: 
FilesystemRowContext,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct FileDeleteInput {\n    pub(crate) file_id: String,\n    pub(crate) has_blob_ref: bool,\n    pub(crate) context: FilesystemRowContext,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct DirectoryDeleteInput {\n    pub(crate) directory_id: String,\n    pub(crate) context: FilesystemRowContext,\n}\n\n#[derive(Debug, Deserialize)]\nstruct DirectoryDescriptorSnapshot {\n    id: String,\n    parent_id: Option<String>,\n    name: String,\n}\n\n#[derive(Debug, Deserialize)]\nstruct FileDescriptorSnapshot {\n    id: String,\n    directory_id: Option<String>,\n    name: String,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nenum FilesystemNamespaceEntry {\n    Directory(String),\n    File(String),\n}\n\n/// Resolves directory paths while planning filesystem writes.\n///\n/// The resolver is seeded from the transaction-visible filesystem state and is\n/// then updated as the current statement stages implicit directories. 
That is\n/// what prevents path inserts from restaging committed ancestors or duplicating\n/// an ancestor created earlier in the same SQL batch.\n#[derive(Debug, Clone, Default)]\npub(crate) struct DirectoryPathResolver {\n    directory_ids_by_path: BTreeMap<String, String>,\n    entries_by_parent_and_name: BTreeMap<(Option<String>, String), FilesystemNamespaceEntry>,\n}\n\nimpl DirectoryPathResolver {\n    pub(crate) fn from_existing(\n        existing_directories: impl IntoIterator<Item = (String, String)>,\n    ) -> Result<Self, LixError> {\n        Self::from_existing_filesystem(existing_directories, std::iter::empty())\n    }\n\n    pub(crate) fn from_existing_filesystem(\n        existing_directories: impl IntoIterator<Item = (String, String)>,\n        existing_files: impl IntoIterator<Item = (Option<String>, String, String)>,\n    ) -> Result<Self, LixError> {\n        let mut directory_ids_by_path = BTreeMap::new();\n        for (path, id) in existing_directories {\n            directory_ids_by_path.insert(normalize_directory_path(&path)?, id);\n        }\n\n        let mut resolver = Self {\n            directory_ids_by_path,\n            entries_by_parent_and_name: BTreeMap::new(),\n        };\n        let mut paths = resolver\n            .directory_ids_by_path\n            .iter()\n            .map(|(path, id)| (path.clone(), id.clone()))\n            .collect::<Vec<_>>();\n        paths.sort_by_key(|(path, _)| path.len());\n        for (path, id) in paths {\n            let parent_id = parent_directory_path(&path)\n                .and_then(|parent_path| resolver.directory_ids_by_path.get(&parent_path).cloned());\n            let name = directory_name_from_path(&path).ok_or_else(|| {\n                LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\"directory path '{path}' does not contain a directory name\"),\n                )\n            })?;\n            resolver.reserve_directory(parent_id, name, id)?;\n  
      }\n        for (directory_id, entry_name, file_id) in existing_files {\n            resolver.reserve_file(directory_id, entry_name, file_id)?;\n        }\n        Ok(resolver)\n    }\n\n    pub(crate) fn directory_id(&self, path: &str) -> Result<Option<&str>, LixError> {\n        Ok(self\n            .directory_ids_by_path\n            .get(&normalize_directory_path(path)?)\n            .map(String::as_str))\n    }\n\n    /// Stages only the missing descriptors needed for `directory_path`.\n    ///\n    /// Existing directories keep their original ids. Missing directories receive\n    /// deterministic ids so repeated planning of the same transaction-visible\n    /// path resolves to the same descriptor identity.\n    pub(crate) fn ensure_directory_path(\n        &mut self,\n        directory_path: &str,\n        context: FilesystemRowContext,\n        hidden: bool,\n        generate_directory_id: &mut dyn FnMut() -> String,\n    ) -> Result<Vec<TransactionWriteRow>, LixError> {\n        self.ensure_directory_path_with_leaf_id(\n            directory_path,\n            None,\n            context,\n            hidden,\n            generate_directory_id,\n        )\n    }\n\n    pub(crate) fn ensure_directory_path_with_leaf_id(\n        &mut self,\n        directory_path: &str,\n        leaf_id: Option<String>,\n        context: FilesystemRowContext,\n        hidden: bool,\n        generate_directory_id: &mut dyn FnMut() -> String,\n    ) -> Result<Vec<TransactionWriteRow>, LixError> {\n        self.plan_directory_path(\n            directory_path,\n            leaf_id,\n            context,\n            hidden,\n            generate_directory_id,\n            false,\n        )\n    }\n\n    pub(crate) fn create_directory_path_with_leaf_id(\n        &mut self,\n        directory_path: &str,\n        leaf_id: Option<String>,\n        context: FilesystemRowContext,\n        hidden: bool,\n        generate_directory_id: &mut dyn FnMut() -> String,\n    ) -> 
Result<Vec<TransactionWriteRow>, LixError> {\n        self.plan_directory_path(\n            directory_path,\n            leaf_id,\n            context,\n            hidden,\n            generate_directory_id,\n            true,\n        )\n    }\n\n    fn plan_directory_path(\n        &mut self,\n        directory_path: &str,\n        leaf_id: Option<String>,\n        context: FilesystemRowContext,\n        hidden: bool,\n        generate_directory_id: &mut dyn FnMut() -> String,\n        reject_existing_leaf: bool,\n    ) -> Result<Vec<TransactionWriteRow>, LixError> {\n        let directory_path = normalize_directory_path(directory_path)?;\n        if directory_path == \"/\" {\n            if reject_existing_leaf {\n                return Err(duplicate_directory_path_error(&directory_path));\n            }\n            return Ok(Vec::new());\n        }\n\n        let mut paths = directory_ancestor_paths(&directory_path);\n        paths.push(directory_path.clone());\n\n        let mut rows = Vec::new();\n        for path in paths {\n            if self.directory_ids_by_path.contains_key(&path) {\n                if reject_existing_leaf && path == directory_path {\n                    return Err(duplicate_directory_path_error(&directory_path));\n                }\n                continue;\n            }\n\n            let id = if path == directory_path {\n                leaf_id.clone().unwrap_or_else(&mut *generate_directory_id)\n            } else {\n                generate_directory_id()\n            };\n            let parent_id = parent_directory_path(&path)\n                .and_then(|parent_path| self.directory_ids_by_path.get(&parent_path).cloned());\n            let name = directory_name_from_path(&path).ok_or_else(|| {\n                LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\"directory path '{path}' does not contain a directory name\"),\n                )\n            })?;\n            
self.reserve_directory(parent_id.clone(), name.clone(), id.clone())?;\n\n            rows.push(directory_descriptor_row(DirectoryDescriptorRowInput {\n                id: id.clone(),\n                parent_id,\n                name,\n                hidden,\n                context: FilesystemRowContext {\n                    // Directory descriptors are their own filesystem state row,\n                    // even when they are implicitly planned from a file insert.\n                    file_id: None,\n                    ..context.clone()\n                },\n            }));\n            self.directory_ids_by_path.insert(path, id);\n        }\n\n        Ok(rows)\n    }\n\n    pub(crate) fn reserve_directory(\n        &mut self,\n        parent_id: Option<String>,\n        name: String,\n        directory_id: String,\n    ) -> Result<(), LixError> {\n        let key = (parent_id, name);\n        match self.entries_by_parent_and_name.get(&key) {\n            Some(FilesystemNamespaceEntry::Directory(existing_id))\n                if existing_id == &directory_id =>\n            {\n                Ok(())\n            }\n            Some(existing) => Err(filesystem_namespace_conflict_error(\n                &key.0, &key.1, existing,\n            )),\n            None => {\n                self.entries_by_parent_and_name\n                    .insert(key, FilesystemNamespaceEntry::Directory(directory_id));\n                Ok(())\n            }\n        }\n    }\n\n    pub(crate) fn reserve_file(\n        &mut self,\n        directory_id: Option<String>,\n        entry_name: String,\n        file_id: String,\n    ) -> Result<(), LixError> {\n        let key = (directory_id, entry_name);\n        match self.entries_by_parent_and_name.get(&key) {\n            Some(FilesystemNamespaceEntry::File(existing_id)) if existing_id == &file_id => Ok(()),\n            Some(existing) => Err(filesystem_namespace_conflict_error(\n                &key.0, &key.1, existing,\n            
)),\n            None => {\n                self.entries_by_parent_and_name\n                    .insert(key, FilesystemNamespaceEntry::File(file_id));\n                Ok(())\n            }\n        }\n    }\n}\n\nfn duplicate_directory_path_error(path: &str) -> LixError {\n    LixError::new(\n        LixError::CODE_UNIQUE,\n        format!(\"unique constraint violation on lix_directory.path for value {path:?}\"),\n    )\n}\n\nfn filesystem_namespace_conflict_error(\n    parent_id: &Option<String>,\n    entry_name: &str,\n    existing: &FilesystemNamespaceEntry,\n) -> LixError {\n    let parent = parent_id.as_deref().unwrap_or(\"<root>\");\n    let existing_kind = match existing {\n        FilesystemNamespaceEntry::Directory(_) => \"directory\",\n        FilesystemNamespaceEntry::File(_) => \"file\",\n    };\n    LixError::new(\n        LixError::CODE_UNIQUE,\n        format!(\n            \"filesystem namespace conflict: parent {parent:?} already contains {existing_kind} entry {entry_name:?}\"\n        ),\n    )\n}\n\npub(crate) fn directory_descriptor_row(input: DirectoryDescriptorRowInput) -> TransactionWriteRow {\n    directory_descriptor_write_row(DirectoryDescriptorWriteIntent {\n        id: Some(input.id),\n        parent_id: input.parent_id,\n        name: input.name,\n        hidden: Some(input.hidden),\n        context: input.context,\n    })\n}\n\npub(crate) fn file_descriptor_row(input: FileDescriptorRowInput) -> TransactionWriteRow {\n    file_descriptor_write_row(FileDescriptorWriteIntent {\n        id: Some(input.id),\n        directory_id: input.directory_id,\n        name: input.name,\n        hidden: Some(input.hidden),\n        context: input.context,\n    })\n}\n\npub(crate) fn directory_descriptor_write_row(\n    input: DirectoryDescriptorWriteIntent,\n) -> TransactionWriteRow {\n    let mut snapshot = JsonMap::new();\n    if let Some(id) = input.id.as_ref() {\n        snapshot.insert(\"id\".to_string(), JsonValue::String(id.clone()));\n    
}\n    snapshot.insert(\n        \"parent_id\".to_string(),\n        input\n            .parent_id\n            .clone()\n            .map(JsonValue::String)\n            .unwrap_or(JsonValue::Null),\n    );\n    snapshot.insert(\"name\".to_string(), JsonValue::String(input.name));\n    if let Some(hidden) = input.hidden {\n        snapshot.insert(\"hidden\".to_string(), JsonValue::Bool(hidden));\n    }\n\n    partial_state_row(\n        input.id,\n        DIRECTORY_DESCRIPTOR_SCHEMA_KEY,\n        Some(JsonValue::Object(snapshot)),\n        input.context,\n    )\n}\n\npub(crate) fn file_descriptor_write_row(input: FileDescriptorWriteIntent) -> TransactionWriteRow {\n    let mut snapshot = JsonMap::new();\n    if let Some(id) = input.id.as_ref() {\n        snapshot.insert(\"id\".to_string(), JsonValue::String(id.clone()));\n    }\n    snapshot.insert(\n        \"directory_id\".to_string(),\n        input\n            .directory_id\n            .clone()\n            .map(JsonValue::String)\n            .unwrap_or(JsonValue::Null),\n    );\n    snapshot.insert(\"name\".to_string(), JsonValue::String(input.name));\n    if let Some(hidden) = input.hidden {\n        snapshot.insert(\"hidden\".to_string(), JsonValue::Bool(hidden));\n    }\n\n    partial_state_row(\n        input.id,\n        FILE_DESCRIPTOR_SCHEMA_KEY,\n        Some(JsonValue::Object(snapshot)),\n        input.context,\n    )\n}\n\npub(crate) fn blob_ref_row(input: BlobRefRowInput) -> Result<TransactionWriteRow, LixError> {\n    let size_bytes = u64::try_from(input.data.len()).map_err(|_| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\n                \"binary blob size exceeds supported range for file '{}' version '{}'\",\n                input.file_id, input.context.version_id\n            ),\n        )\n    })?;\n    let snapshot = json!({\n        \"id\": input.file_id.clone(),\n        \"blob_hash\": stable_content_fingerprint_hex(&input.data),\n        
\"size_bytes\": size_bytes,\n    });\n\n    Ok(state_row(\n        input.file_id.clone(),\n        BLOB_REF_SCHEMA_KEY,\n        Some(snapshot),\n        FilesystemRowContext {\n            file_id: Some(input.file_id),\n            ..input.context\n        },\n    ))\n}\n\npub(crate) fn plan_file_path_write(\n    resolver: &mut DirectoryPathResolver,\n    input: FilePathWriteInput,\n    generate_directory_id: &mut dyn FnMut() -> String,\n) -> Result<FilesystemWritePlan, LixError> {\n    let parsed = ParsedFilePath::try_from_path(&input.path)?;\n    let mut rows = Vec::new();\n    let file_id = input.id.unwrap_or_else(&mut *generate_directory_id);\n\n    let directory_id = match parsed.directory_path.as_ref() {\n        Some(directory_path) => {\n            rows.extend(resolver.ensure_directory_path(\n                directory_path.as_str(),\n                input.context.clone(),\n                false,\n                generate_directory_id,\n            )?);\n            resolver\n                .directory_id(directory_path.as_str())?\n                .map(ToOwned::to_owned)\n        }\n        None => None,\n    };\n\n    resolver.reserve_file(directory_id.clone(), parsed.name.clone(), file_id.clone())?;\n    rows.push(file_descriptor_row(FileDescriptorRowInput {\n        id: file_id.clone(),\n        directory_id,\n        name: parsed.name.clone(),\n        hidden: input.hidden.unwrap_or(false),\n        context: input.context.clone(),\n    }));\n\n    let mut file_data = Vec::new();\n    if let Some(data) = input.data {\n        rows.push(blob_ref_row(BlobRefRowInput {\n            file_id: file_id.clone(),\n            data: data.clone(),\n            context: FilesystemRowContext {\n                file_id: None,\n                metadata: None,\n                ..input.context.clone()\n            },\n        })?);\n        file_data.push(TransactionFileData {\n            file_id,\n            version_id: input.context.version_id,\n            
untracked: input.context.untracked,\n            data,\n        });\n    }\n\n    Ok(FilesystemWritePlan {\n        rows,\n        file_data,\n        count: 1,\n    })\n}\n\npub(crate) fn plan_file_path_update(\n    resolver: &mut DirectoryPathResolver,\n    existing_file_id: String,\n    new_path: String,\n    existing_hidden: bool,\n    _existing_data: Option<Vec<u8>>,\n    context: FilesystemRowContext,\n    generate_directory_id: &mut dyn FnMut() -> String,\n) -> Result<FilesystemWritePlan, LixError> {\n    let parsed = ParsedFilePath::try_from_path(&new_path)?;\n    let mut rows = Vec::new();\n\n    let directory_id = match parsed.directory_path.as_ref() {\n        Some(directory_path) => {\n            rows.extend(resolver.ensure_directory_path(\n                directory_path.as_str(),\n                context.clone(),\n                false,\n                generate_directory_id,\n            )?);\n            resolver\n                .directory_id(directory_path.as_str())?\n                .map(ToOwned::to_owned)\n        }\n        None => None,\n    };\n\n    resolver.reserve_file(\n        directory_id.clone(),\n        parsed.name.clone(),\n        existing_file_id.clone(),\n    )?;\n    rows.push(file_descriptor_row(FileDescriptorRowInput {\n        id: existing_file_id,\n        directory_id,\n        name: parsed.name.clone(),\n        hidden: existing_hidden,\n        context,\n    }));\n\n    // Data/blob-ref state is intentionally left untouched for path-only\n    // updates. 
A provider should plan blob rows only when `data` is assigned.\n    Ok(FilesystemWritePlan {\n        rows,\n        file_data: Vec::new(),\n        count: 1,\n    })\n}\n\npub(crate) fn plan_file_delete(input: FileDeleteInput) -> FilesystemDeletePlan {\n    let mut rows = vec![tombstone_row(\n        input.file_id.clone(),\n        FILE_DESCRIPTOR_SCHEMA_KEY,\n        FilesystemRowContext {\n            file_id: None,\n            ..input.context.clone()\n        },\n    )];\n\n    if input.has_blob_ref {\n        rows.push(tombstone_row(\n            input.file_id.clone(),\n            BLOB_REF_SCHEMA_KEY,\n            FilesystemRowContext {\n                file_id: Some(input.file_id),\n                metadata: None,\n                ..input.context\n            },\n        ));\n    }\n\n    FilesystemDeletePlan { rows, count: 1 }\n}\n\npub(crate) fn plan_directory_delete(input: DirectoryDeleteInput) -> FilesystemDeletePlan {\n    FilesystemDeletePlan {\n        rows: vec![tombstone_row(\n            input.directory_id,\n            DIRECTORY_DESCRIPTOR_SCHEMA_KEY,\n            FilesystemRowContext {\n                file_id: None,\n                ..input.context\n            },\n        )],\n        count: 1,\n    }\n}\n\npub(crate) fn plan_recursive_directory_delete(\n    root_directory_id: &str,\n    visible_filesystem: &VisibleFilesystem,\n    context: FilesystemRowContext,\n) -> FilesystemDeletePlan {\n    let mut rows = Vec::new();\n    let mut count = 0;\n\n    collect_recursive_directory_delete(\n        root_directory_id,\n        visible_filesystem,\n        &context,\n        &mut rows,\n        &mut count,\n    );\n\n    FilesystemDeletePlan { rows, count }\n}\n\npub(crate) fn directory_path_resolvers_from_state_rows(\n    rows: Vec<MaterializedLiveStateRow>,\n) -> Result<BTreeMap<String, DirectoryPathResolver>, LixError> {\n    let mut directory_rows = BTreeMap::<String, BTreeMap<String, DirectoryDescriptorSeed>>::new();\n    let mut file_rows = 
BTreeMap::<String, Vec<(Option<String>, String, String)>>::new();\n    for row in rows {\n        let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n            continue;\n        };\n        let resolver_key = filesystem_storage_scope_key(\n            &row.version_id,\n            row.global,\n            row.untracked,\n            row.file_id.as_deref(),\n        );\n        match row.schema_key.as_str() {\n            DIRECTORY_DESCRIPTOR_SCHEMA_KEY => {\n                let snapshot: DirectoryDescriptorSnapshot = serde_json::from_str(snapshot_content)\n                    .map_err(|error| {\n                        LixError::new(\n                            \"LIX_ERROR_UNKNOWN\",\n                            format!(\"invalid lix_directory_descriptor snapshot JSON: {error}\"),\n                        )\n                    })?;\n                directory_rows.entry(resolver_key).or_default().insert(\n                    snapshot.id.clone(),\n                    DirectoryDescriptorSeed {\n                        id: snapshot.id,\n                        parent_id: snapshot.parent_id,\n                        name: snapshot.name,\n                    },\n                );\n            }\n            FILE_DESCRIPTOR_SCHEMA_KEY => {\n                let snapshot: FileDescriptorSnapshot = serde_json::from_str(snapshot_content)\n                    .map_err(|error| {\n                        LixError::new(\n                            \"LIX_ERROR_UNKNOWN\",\n                            format!(\"invalid lix_file_descriptor snapshot JSON: {error}\"),\n                        )\n                    })?;\n                file_rows.entry(resolver_key).or_default().push((\n                    snapshot.directory_id,\n                    snapshot.name,\n                    snapshot.id,\n                ));\n            }\n            _ => {}\n        }\n    }\n\n    let mut resolvers = BTreeMap::new();\n    for (version_id, records) in directory_rows 
{\n        let mut paths = BTreeMap::<String, String>::new();\n        for directory_id in records.keys() {\n            resolve_directory_seed_path(directory_id, &records, &mut paths, &mut BTreeSet::new())?;\n        }\n        let seeds = paths\n            .into_iter()\n            .map(|(directory_id, path)| (path, directory_id))\n            .collect::<Vec<_>>();\n        let files = file_rows.remove(&version_id).unwrap_or_default();\n        resolvers.insert(\n            version_id,\n            DirectoryPathResolver::from_existing_filesystem(seeds, files)?,\n        );\n    }\n    for (version_id, files) in file_rows {\n        resolvers.insert(\n            version_id,\n            DirectoryPathResolver::from_existing_filesystem(std::iter::empty(), files)?,\n        );\n    }\n    Ok(resolvers)\n}\n\npub(crate) fn filesystem_storage_scope_key(\n    version_id: &str,\n    global: bool,\n    untracked: bool,\n    file_id: Option<&str>,\n) -> String {\n    format!(\n        \"version={version_id}\\0global={global}\\0untracked={untracked}\\0file_id={}\",\n        file_id.unwrap_or(\"<null>\")\n    )\n}\n\n#[derive(Debug, Clone)]\nstruct DirectoryDescriptorSeed {\n    id: String,\n    parent_id: Option<String>,\n    name: String,\n}\n\nfn resolve_directory_seed_path(\n    directory_id: &str,\n    records: &BTreeMap<String, DirectoryDescriptorSeed>,\n    paths: &mut BTreeMap<String, String>,\n    visiting: &mut BTreeSet<String>,\n) -> Result<Option<String>, LixError> {\n    if let Some(path) = paths.get(directory_id) {\n        return Ok(Some(path.clone()));\n    }\n    if !visiting.insert(directory_id.to_string()) {\n        return Err(directory_parent_cycle_error(directory_id));\n    }\n    let Some(row) = records.get(directory_id) else {\n        visiting.remove(directory_id);\n        return Ok(None);\n    };\n    let path = match row.parent_id.as_deref() {\n        Some(parent_id) => {\n            let Some(parent_path) =\n                
resolve_directory_seed_path(parent_id, records, paths, visiting)?\n            else {\n                visiting.remove(directory_id);\n                return Ok(None);\n            };\n            format!(\"{parent_path}{}/\", row.name)\n        }\n        None => format!(\"/{}/\", row.name),\n    };\n    visiting.remove(directory_id);\n    paths.insert(row.id.clone(), path.clone());\n    Ok(Some(path))\n}\n\nfn directory_parent_cycle_error(directory_id: &str) -> LixError {\n    LixError::new(\n        LixError::CODE_CONSTRAINT_VIOLATION,\n        format!(\n            \"lix_directory_descriptor parent_id cycle detected while resolving directory '{directory_id}'\"\n        ),\n    )\n}\n\nfn state_row(\n    entity_id: String,\n    schema_key: &str,\n    snapshot: Option<JsonValue>,\n    context: FilesystemRowContext,\n) -> TransactionWriteRow {\n    partial_state_row(Some(entity_id), schema_key, snapshot, context)\n}\n\nfn partial_state_row(\n    entity_id: Option<String>,\n    schema_key: &str,\n    snapshot: Option<JsonValue>,\n    context: FilesystemRowContext,\n) -> TransactionWriteRow {\n    let snapshot = snapshot.map(TransactionJson::from_value_unchecked);\n    TransactionWriteRow {\n        entity_id: entity_id.map(EntityIdentity::single),\n        schema_key: schema_key.to_string(),\n        file_id: context.file_id,\n        snapshot,\n        metadata: context.metadata,\n        origin: None,\n        created_at: None,\n        updated_at: None,\n        global: context.global,\n        change_id: None,\n        commit_id: None,\n        untracked: context.untracked,\n        version_id: context.version_id,\n    }\n}\n\nfn tombstone_row(\n    entity_id: String,\n    schema_key: &str,\n    context: FilesystemRowContext,\n) -> TransactionWriteRow {\n    state_row(entity_id, schema_key, None, context)\n}\n\nfn collect_recursive_directory_delete(\n    directory_id: &str,\n    visible_filesystem: &VisibleFilesystem,\n    context: &FilesystemRowContext,\n    
rows: &mut Vec<TransactionWriteRow>,\n    count: &mut u64,\n) {\n    if let Some(child_ids) = visible_filesystem\n        .directory_children_by_parent_id\n        .get(&Some(directory_id.to_string()))\n    {\n        for child_id in child_ids {\n            collect_recursive_directory_delete(child_id, visible_filesystem, context, rows, count);\n        }\n    }\n\n    if let Some(files) = visible_filesystem\n        .files_by_directory_id\n        .get(&Some(directory_id.to_string()))\n    {\n        for file_id in files.keys() {\n            let plan = plan_file_delete(FileDeleteInput {\n                file_id: file_id.clone(),\n                has_blob_ref: visible_filesystem\n                    .blob_refs_by_file_id\n                    .contains_key(file_id),\n                context: context.clone(),\n            });\n            rows.extend(plan.rows);\n            *count += plan.count;\n        }\n    }\n\n    let plan = plan_directory_delete(DirectoryDeleteInput {\n        directory_id: directory_id.to_string(),\n        context: context.clone(),\n    });\n    rows.extend(plan.rows);\n    *count += plan.count;\n}\n\n#[cfg(test)]\nmod tests {\n    use std::collections::{BTreeMap, BTreeSet};\n\n    use serde_json::Value as JsonValue;\n\n    use super::{\n        blob_ref_row, directory_descriptor_row, file_descriptor_row, plan_file_path_update,\n        plan_file_path_write, BlobRefRowInput, DirectoryDeleteInput, DirectoryDescriptorRowInput,\n        DirectoryPathResolver, FileDeleteInput, FileDescriptorRowInput, FilePathWriteInput,\n        FilesystemRowContext,\n    };\n    use crate::sql2::filesystem_visibility::{\n        VisibleBlobRef, VisibleDirectory, VisibleFile, VisibleFilesystem,\n    };\n    use crate::{entity_identity::EntityIdentity, live_state::MaterializedLiveStateRow};\n\n    fn test_id_generator(ids: &'static [&'static str]) -> impl FnMut() -> String {\n        let mut ids = ids.iter();\n        move || ids.next().expect(\"test id should 
exist\").to_string()\n    }\n\n    #[test]\n    fn directory_descriptor_row_builds_state_row() {\n        let row = directory_descriptor_row(DirectoryDescriptorRowInput {\n            id: \"dir-docs\".to_string(),\n            parent_id: None,\n            name: \"docs\".to_string(),\n            hidden: false,\n            context: FilesystemRowContext::active_version(\"version-a\"),\n        });\n\n        assert_eq!(\n            row.entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\"dir-docs\"))\n        );\n        assert_eq!(row.schema_key, \"lix_directory_descriptor\");\n        assert_eq!(row.version_id, \"version-a\");\n        let snapshot: JsonValue = row.snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"id\"], \"dir-docs\");\n        assert_eq!(snapshot[\"parent_id\"], JsonValue::Null);\n        assert_eq!(snapshot[\"name\"], \"docs\");\n        assert_eq!(snapshot[\"hidden\"], false);\n    }\n\n    #[test]\n    fn file_descriptor_row_builds_state_row() {\n        let row = file_descriptor_row(FileDescriptorRowInput {\n            id: \"file-readme\".to_string(),\n            directory_id: Some(\"dir-docs\".to_string()),\n            name: \"readme.md\".to_string(),\n            hidden: false,\n            context: FilesystemRowContext::active_version(\"version-a\"),\n        });\n\n        assert_eq!(\n            row.entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\n                \"file-readme\"\n            ))\n        );\n        assert_eq!(row.schema_key, \"lix_file_descriptor\");\n        let snapshot: JsonValue = row.snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"directory_id\"], \"dir-docs\");\n        assert_eq!(snapshot[\"name\"], \"readme.md\");\n    }\n\n    #[test]\n    fn blob_ref_row_builds_state_row() {\n        let row = blob_ref_row(BlobRefRowInput {\n            file_id: \"file-readme\".to_string(),\n       
     data: b\"Hello\".to_vec(),\n            context: FilesystemRowContext::active_version(\"version-a\"),\n        })\n        .expect(\"blob ref row should build\");\n\n        assert_eq!(\n            row.entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\n                \"file-readme\"\n            ))\n        );\n        assert_eq!(row.file_id.as_deref(), Some(\"file-readme\"));\n        assert_eq!(row.schema_key, \"lix_binary_blob_ref\");\n        let snapshot: JsonValue = row.snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"id\"], \"file-readme\");\n        assert_eq!(snapshot[\"size_bytes\"], 5);\n        assert!(snapshot[\"blob_hash\"]\n            .as_str()\n            .is_some_and(|hash| !hash.is_empty()));\n    }\n\n    #[test]\n    fn directory_path_resolver_reuses_existing_ancestor() {\n        let mut resolver =\n            DirectoryPathResolver::from_existing([(\"/docs/\".to_string(), \"dir-docs\".to_string())])\n                .expect(\"existing directories should normalize\");\n\n        let rows = resolver\n            .ensure_directory_path(\n                \"/docs/nested/\",\n                FilesystemRowContext::active_version(\"version-a\"),\n                false,\n                &mut test_id_generator(&[\"dir-generated-nested\"]),\n            )\n            .expect(\"directory path should plan\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(resolver.directory_id(\"/docs/\").unwrap(), Some(\"dir-docs\"));\n        assert_eq!(\n            resolver.directory_id(\"/docs/nested/\").unwrap(),\n            Some(\"dir-generated-nested\")\n        );\n\n        let snapshot: JsonValue = rows[0].snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"id\"], \"dir-generated-nested\");\n        assert_eq!(snapshot[\"parent_id\"], \"dir-docs\");\n        assert_eq!(snapshot[\"name\"], \"nested\");\n    }\n\n    #[test]\n    fn 
directory_path_resolver_reuses_ancestor_staged_in_same_batch() {\n        let mut resolver =\n            DirectoryPathResolver::from_existing([]).expect(\"empty resolver should build\");\n\n        let docs_rows = resolver\n            .ensure_directory_path(\n                \"/docs/\",\n                FilesystemRowContext::active_version(\"version-a\"),\n                false,\n                &mut test_id_generator(&[\"dir-generated-docs\"]),\n            )\n            .expect(\"top-level directory should plan\");\n        assert_eq!(docs_rows.len(), 1);\n\n        let nested_rows = resolver\n            .ensure_directory_path(\n                \"/docs/nested/\",\n                FilesystemRowContext::active_version(\"version-a\"),\n                false,\n                &mut test_id_generator(&[\"dir-generated-nested\"]),\n            )\n            .expect(\"nested directory should plan\");\n\n        assert_eq!(nested_rows.len(), 1);\n        let snapshot: JsonValue = nested_rows[0].snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"id\"], \"dir-generated-nested\");\n        assert_eq!(snapshot[\"parent_id\"], \"dir-generated-docs\");\n        assert_eq!(snapshot[\"name\"], \"nested\");\n    }\n\n    #[test]\n    fn directory_path_resolver_uses_explicit_leaf_id() {\n        let mut resolver =\n            DirectoryPathResolver::from_existing([]).expect(\"empty resolver should build\");\n\n        let rows = resolver\n            .ensure_directory_path_with_leaf_id(\n                \"/docs/nested/\",\n                Some(\"dir-nested\".to_string()),\n                FilesystemRowContext::active_version(\"version-a\"),\n                false,\n                &mut test_id_generator(&[\"dir-generated-docs\"]),\n            )\n            .expect(\"directory path should plan\");\n\n        assert_eq!(rows.len(), 2);\n        assert_eq!(\n            resolver.directory_id(\"/docs/\").unwrap(),\n            
Some(\"dir-generated-docs\")\n        );\n        assert_eq!(\n            resolver.directory_id(\"/docs/nested/\").unwrap(),\n            Some(\"dir-nested\")\n        );\n\n        let snapshot: JsonValue = rows[1].snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"id\"], \"dir-nested\");\n        assert_eq!(snapshot[\"parent_id\"], \"dir-generated-docs\");\n        assert_eq!(snapshot[\"name\"], \"nested\");\n    }\n\n    #[test]\n    fn directory_path_resolver_does_not_restage_same_path() {\n        let mut resolver =\n            DirectoryPathResolver::from_existing([]).expect(\"empty resolver should build\");\n\n        let rows = resolver\n            .ensure_directory_path(\n                \"/docs/nested/\",\n                FilesystemRowContext::active_version(\"version-a\"),\n                false,\n                &mut test_id_generator(&[\"dir-generated-docs\", \"dir-generated-nested\"]),\n            )\n            .expect(\"directory path should plan\");\n        assert_eq!(rows.len(), 2);\n\n        let rows = resolver\n            .ensure_directory_path(\n                \"/docs/nested/\",\n                FilesystemRowContext::active_version(\"version-a\"),\n                false,\n                &mut test_id_generator(&[\"should-not-be-used\"]),\n            )\n            .expect(\"directory path should plan\");\n        assert!(rows.is_empty());\n    }\n\n    #[test]\n    fn file_path_write_stages_missing_directories_file_blob_and_payload() {\n        let mut resolver =\n            DirectoryPathResolver::from_existing([]).expect(\"empty resolver should build\");\n\n        let plan = plan_file_path_write(\n            &mut resolver,\n            FilePathWriteInput {\n                id: Some(\"file-readme\".to_string()),\n                path: \"/docs/guides/readme.md\".to_string(),\n                data: Some(b\"hello\".to_vec()),\n                hidden: Some(false),\n                context: 
FilesystemRowContext::active_version(\"version-a\"),\n            },\n            &mut test_id_generator(&[\"dir-generated-docs\", \"dir-generated-guides\"]),\n        )\n        .expect(\"file path write should plan\");\n\n        assert_eq!(plan.count, 1);\n        assert_eq!(plan.file_data.len(), 1);\n        assert_eq!(plan.file_data[0].file_id, \"file-readme\");\n        assert_eq!(plan.file_data[0].version_id, \"version-a\");\n        assert_eq!(plan.file_data[0].data, b\"hello\");\n        assert_eq!(plan.rows.len(), 4);\n        assert_eq!(\n            plan.rows\n                .iter()\n                .filter(|row| row.schema_key == \"lix_directory_descriptor\")\n                .count(),\n            2\n        );\n        assert!(plan\n            .rows\n            .iter()\n            .any(|row| row.schema_key == \"lix_binary_blob_ref\"));\n\n        let file_row = plan\n            .rows\n            .iter()\n            .find(|row| row.schema_key == \"lix_file_descriptor\")\n            .expect(\"file descriptor row should be planned\");\n        let snapshot: JsonValue = file_row.snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"id\"], \"file-readme\");\n        assert_eq!(snapshot[\"directory_id\"], \"dir-generated-guides\");\n        assert_eq!(snapshot[\"name\"], \"readme.md\");\n    }\n\n    #[test]\n    fn file_path_write_reuses_existing_parent_directory() {\n        let mut resolver = DirectoryPathResolver::from_existing([\n            (\"/docs/\".to_string(), \"dir-docs\".to_string()),\n            (\"/docs/guides/\".to_string(), \"dir-guides\".to_string()),\n        ])\n        .expect(\"existing directories should seed\");\n\n        let plan = plan_file_path_write(\n            &mut resolver,\n            FilePathWriteInput {\n                id: Some(\"file-readme\".to_string()),\n                path: \"/docs/guides/readme.md\".to_string(),\n                data: Some(b\"hello\".to_vec()),\n                
hidden: Some(false),\n                context: FilesystemRowContext::active_version(\"version-a\"),\n            },\n            &mut test_id_generator(&[\"should-not-be-used\"]),\n        )\n        .expect(\"file path write should plan\");\n\n        assert_eq!(plan.rows.len(), 2);\n        assert_eq!(\n            plan.rows\n                .iter()\n                .filter(|row| row.schema_key == \"lix_directory_descriptor\")\n                .count(),\n            0\n        );\n        let file_row = plan\n            .rows\n            .iter()\n            .find(|row| row.schema_key == \"lix_file_descriptor\")\n            .expect(\"file descriptor row should be planned\");\n        let snapshot: JsonValue = file_row.snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"directory_id\"], \"dir-guides\");\n    }\n\n    #[test]\n    fn file_path_update_reuses_existing_parent_and_preserves_data() {\n        let mut resolver =\n            DirectoryPathResolver::from_existing([(\"/docs/\".to_string(), \"dir-docs\".to_string())])\n                .expect(\"existing directories should seed\");\n\n        let plan = plan_file_path_update(\n            &mut resolver,\n            \"file-readme\".to_string(),\n            \"/docs/renamed.md\".to_string(),\n            false,\n            Some(b\"hello\".to_vec()),\n            FilesystemRowContext::active_version(\"version-a\"),\n            &mut test_id_generator(&[\"should-not-be-used\"]),\n        )\n        .expect(\"file path update should plan\");\n\n        assert_eq!(plan.count, 1);\n        assert!(plan.file_data.is_empty());\n        assert_eq!(plan.rows.len(), 1);\n        assert!(plan\n            .rows\n            .iter()\n            .all(|row| row.schema_key != \"lix_binary_blob_ref\"));\n\n        let snapshot: JsonValue = plan.rows[0].snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"id\"], \"file-readme\");\n        assert_eq!(snapshot[\"directory_id\"], 
\"dir-docs\");\n        assert_eq!(snapshot[\"name\"], \"renamed.md\");\n        assert_eq!(snapshot[\"hidden\"], false);\n    }\n\n    #[test]\n    fn file_path_update_stages_missing_parent_directories() {\n        let mut resolver =\n            DirectoryPathResolver::from_existing([]).expect(\"empty resolver should build\");\n\n        let plan = plan_file_path_update(\n            &mut resolver,\n            \"file-readme\".to_string(),\n            \"/docs/guides/readme.md\".to_string(),\n            true,\n            Some(b\"hello\".to_vec()),\n            FilesystemRowContext::active_version(\"version-a\"),\n            &mut test_id_generator(&[\"dir-generated-docs\", \"dir-generated-guides\"]),\n        )\n        .expect(\"file path update should plan\");\n\n        assert_eq!(plan.count, 1);\n        assert!(plan.file_data.is_empty());\n        assert_eq!(plan.rows.len(), 3);\n        assert_eq!(\n            plan.rows\n                .iter()\n                .filter(|row| row.schema_key == \"lix_directory_descriptor\")\n                .count(),\n            2\n        );\n        assert!(plan\n            .rows\n            .iter()\n            .all(|row| row.schema_key != \"lix_binary_blob_ref\"));\n\n        let file_row = plan\n            .rows\n            .iter()\n            .find(|row| row.schema_key == \"lix_file_descriptor\")\n            .expect(\"file descriptor row should be planned\");\n        let snapshot: JsonValue = file_row.snapshot.as_ref().unwrap().value().clone();\n        assert_eq!(snapshot[\"directory_id\"], \"dir-generated-guides\");\n        assert_eq!(snapshot[\"name\"], \"readme.md\");\n        assert_eq!(snapshot[\"hidden\"], true);\n    }\n\n    #[test]\n    fn directory_path_resolvers_from_state_rows_derives_nested_paths() {\n        let resolvers = super::directory_path_resolvers_from_state_rows(vec![\n            live_directory_row(\n                \"dir-docs\",\n                \"version-a\",\n                
\"{\\\"id\\\":\\\"dir-docs\\\",\\\"parent_id\\\":null,\\\"name\\\":\\\"docs\\\"}\",\n            ),\n            live_directory_row(\n                \"dir-guides\",\n                \"version-a\",\n                \"{\\\"id\\\":\\\"dir-guides\\\",\\\"parent_id\\\":\\\"dir-docs\\\",\\\"name\\\":\\\"guides\\\"}\",\n            ),\n        ])\n        .expect(\"state rows should seed directory resolvers\");\n\n        let resolver = resolvers\n            .get(&super::filesystem_storage_scope_key(\n                \"version-a\",\n                false,\n                false,\n                None,\n            ))\n            .expect(\"storage-scope resolver should exist\");\n        assert_eq!(resolver.directory_id(\"/docs/\").unwrap(), Some(\"dir-docs\"));\n        assert_eq!(\n            resolver.directory_id(\"/docs/guides/\").unwrap(),\n            Some(\"dir-guides\")\n        );\n    }\n\n    #[test]\n    fn file_delete_plans_descriptor_and_blob_ref_tombstones() {\n        let plan = super::plan_file_delete(FileDeleteInput {\n            file_id: \"file-readme\".to_string(),\n            has_blob_ref: true,\n            context: FilesystemRowContext::active_version(\"version-a\"),\n        });\n\n        assert_eq!(plan.count, 1);\n        assert_eq!(plan.rows.len(), 2);\n        let descriptor = plan\n            .rows\n            .iter()\n            .find(|row| row.schema_key == \"lix_file_descriptor\")\n            .expect(\"file descriptor tombstone should be planned\");\n        assert_eq!(\n            descriptor.entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\n                \"file-readme\"\n            ))\n        );\n        assert_eq!(descriptor.file_id, None);\n        assert_eq!(descriptor.snapshot, None);\n\n        let blob_ref = plan\n            .rows\n            .iter()\n            .find(|row| row.schema_key == \"lix_binary_blob_ref\")\n            .expect(\"blob ref tombstone should be 
planned\");\n        assert_eq!(\n            blob_ref.entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\n                \"file-readme\"\n            ))\n        );\n        assert_eq!(blob_ref.file_id.as_deref(), Some(\"file-readme\"));\n        assert_eq!(blob_ref.snapshot, None);\n    }\n\n    #[test]\n    fn file_delete_without_blob_ref_plans_only_descriptor_tombstone() {\n        let plan = super::plan_file_delete(FileDeleteInput {\n            file_id: \"file-readme\".to_string(),\n            has_blob_ref: false,\n            context: FilesystemRowContext::active_version(\"version-a\"),\n        });\n\n        assert_eq!(plan.count, 1);\n        assert_eq!(plan.rows.len(), 1);\n        assert_eq!(plan.rows[0].schema_key, \"lix_file_descriptor\");\n        assert_eq!(plan.rows[0].snapshot, None);\n    }\n\n    #[test]\n    fn directory_delete_plans_descriptor_tombstone() {\n        let plan = super::plan_directory_delete(DirectoryDeleteInput {\n            directory_id: \"dir-docs\".to_string(),\n            context: FilesystemRowContext::active_version(\"version-a\"),\n        });\n\n        assert_eq!(plan.count, 1);\n        assert_eq!(plan.rows.len(), 1);\n        assert_eq!(\n            plan.rows[0].entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\"dir-docs\"))\n        );\n        assert_eq!(plan.rows[0].schema_key, \"lix_directory_descriptor\");\n        assert_eq!(plan.rows[0].file_id, None);\n        assert_eq!(plan.rows[0].snapshot, None);\n    }\n\n    #[test]\n    fn recursive_directory_delete_plans_files_blobs_and_deepest_directories_first() {\n        let context = FilesystemRowContext::active_version(\"version-a\");\n        let mut directories_by_id = BTreeMap::new();\n        directories_by_id.insert(\n            \"dir-docs\".to_string(),\n            visible_directory(\"dir-docs\", None, \"docs\", context.clone()),\n        );\n        directories_by_id.insert(\n   
         \"dir-guides\".to_string(),\n            visible_directory(\"dir-guides\", Some(\"dir-docs\"), \"guides\", context.clone()),\n        );\n\n        let mut directory_children_by_parent_id = BTreeMap::new();\n        directory_children_by_parent_id.insert(\n            Some(\"dir-docs\".to_string()),\n            BTreeSet::from([\"dir-guides\".to_string()]),\n        );\n\n        let mut files_by_directory_id = BTreeMap::new();\n        files_by_directory_id.insert(\n            Some(\"dir-guides\".to_string()),\n            BTreeMap::from([(\n                \"file-readme\".to_string(),\n                visible_file(\"file-readme\", Some(\"dir-guides\"), \"readme\", context.clone()),\n            )]),\n        );\n        files_by_directory_id.insert(\n            Some(\"dir-docs\".to_string()),\n            BTreeMap::from([(\n                \"file-index\".to_string(),\n                visible_file(\"file-index\", Some(\"dir-docs\"), \"index\", context.clone()),\n            )]),\n        );\n\n        let visible_filesystem = VisibleFilesystem {\n            directories_by_id,\n            directory_children_by_parent_id,\n            files_by_directory_id,\n            blob_refs_by_file_id: BTreeMap::from([(\n                \"file-readme\".to_string(),\n                visible_blob_ref(\"file-readme\", context.clone()),\n            )]),\n        };\n\n        let plan = super::plan_recursive_directory_delete(\"dir-docs\", &visible_filesystem, context);\n\n        assert_eq!(plan.count, 4);\n        assert_eq!(\n            plan.rows\n                .iter()\n                .map(|row| {\n                    (\n                        row.schema_key.as_str(),\n                        row.entity_id\n                            .as_ref()\n                            .expect(\"planned recursive delete row should carry entity_id\")\n                            .as_single_string_owned()\n                            .expect(\"planned recursive delete row 
should project entity_id\"),\n                    )\n                })\n                .collect::<Vec<_>>(),\n            vec![\n                (\"lix_file_descriptor\", \"file-readme\".to_string()),\n                (\"lix_binary_blob_ref\", \"file-readme\".to_string()),\n                (\"lix_directory_descriptor\", \"dir-guides\".to_string()),\n                (\"lix_file_descriptor\", \"file-index\".to_string()),\n                (\"lix_directory_descriptor\", \"dir-docs\".to_string()),\n            ]\n        );\n        assert!(plan.rows.iter().all(|row| row.snapshot.is_none()));\n    }\n\n    fn visible_directory(\n        id: &str,\n        parent_id: Option<&str>,\n        name: &str,\n        context: FilesystemRowContext,\n    ) -> VisibleDirectory {\n        VisibleDirectory {\n            id: id.to_string(),\n            parent_id: parent_id.map(ToOwned::to_owned),\n            name: name.to_string(),\n            hidden: false,\n            context,\n        }\n    }\n\n    fn visible_file(\n        id: &str,\n        directory_id: Option<&str>,\n        name: &str,\n        context: FilesystemRowContext,\n    ) -> VisibleFile {\n        VisibleFile {\n            id: id.to_string(),\n            directory_id: directory_id.map(ToOwned::to_owned),\n            name: name.to_string(),\n            hidden: false,\n            context,\n        }\n    }\n\n    fn visible_blob_ref(file_id: &str, context: FilesystemRowContext) -> VisibleBlobRef {\n        VisibleBlobRef {\n            file_id: file_id.to_string(),\n            blob_hash: format!(\"hash-{file_id}\"),\n            size_bytes: Some(1),\n            context,\n        }\n    }\n\n    fn live_directory_row(\n        entity_id: &str,\n        version_id: &str,\n        snapshot_content: &str,\n    ) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: EntityIdentity::single(entity_id),\n            schema_key: \"lix_directory_descriptor\".to_string(),\n     
       file_id: None,\n            snapshot_content: Some(snapshot_content.to_string()),\n            metadata: None,\n            deleted: false,\n            version_id: version_id.to_string(),\n            change_id: Some(format!(\"change-{entity_id}\")),\n            commit_id: Some(format!(\"commit-{entity_id}\")),\n            global: false,\n            untracked: false,\n            created_at: \"2026-04-23T00:00:00Z\".to_string(),\n            updated_at: \"2026-04-23T01:00:00Z\".to_string(),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/filesystem_predicates.rs",
    "content": "use datafusion::common::tree_node::{Transformed, TreeNode};\nuse datafusion::common::{DataFusionError, Result, ScalarValue};\nuse datafusion::logical_expr::expr::{Between, InList};\nuse datafusion::logical_expr::{BinaryExpr, Expr, Operator};\n\nuse crate::common::{normalize_directory_path, ParsedFilePath};\nuse crate::LixError;\n\nuse super::error::lix_error_to_datafusion_error;\n\n#[derive(Debug, Clone, Copy)]\npub(crate) enum FilesystemPathKind {\n    File,\n    Directory,\n}\n\npub(crate) fn canonicalize_filesystem_path_filters(\n    filters: &[Expr],\n    kind: FilesystemPathKind,\n) -> Result<Vec<Expr>> {\n    filters\n        .iter()\n        .cloned()\n        .map(|filter| canonicalize_filesystem_path_filter(filter, kind))\n        .collect()\n}\n\nfn canonicalize_filesystem_path_filter(expr: Expr, kind: FilesystemPathKind) -> Result<Expr> {\n    expr.transform(|expr| canonicalize_filesystem_path_expr(expr, kind))\n        .map(|transformed| transformed.data)\n}\n\nfn canonicalize_filesystem_path_expr(\n    expr: Expr,\n    kind: FilesystemPathKind,\n) -> Result<Transformed<Expr>> {\n    match expr {\n        Expr::BinaryExpr(binary_expr) if is_path_comparison_operator(binary_expr.op) => {\n            canonicalize_path_binary_expr(binary_expr, kind)\n        }\n        Expr::InList(in_list) if is_path_column(&in_list.expr) => {\n            canonicalize_path_in_list(in_list, kind)\n        }\n        Expr::Between(between) if is_path_column(&between.expr) => {\n            canonicalize_path_between(between, kind)\n        }\n        _ => Ok(Transformed::no(expr)),\n    }\n}\n\nfn canonicalize_path_binary_expr(\n    binary_expr: BinaryExpr,\n    kind: FilesystemPathKind,\n) -> Result<Transformed<Expr>> {\n    let BinaryExpr { left, op, right } = binary_expr;\n    let left_is_path = is_path_column(&left);\n    let right_is_path = is_path_column(&right);\n\n    let left = if right_is_path {\n        
Box::new(canonicalize_path_literal_expr(*left, kind)?)\n    } else {\n        left\n    };\n    let right = if left_is_path {\n        Box::new(canonicalize_path_literal_expr(*right, kind)?)\n    } else {\n        right\n    };\n\n    Ok(Transformed::yes(Expr::BinaryExpr(BinaryExpr::new(\n        left, op, right,\n    ))))\n}\n\nfn canonicalize_path_in_list(\n    in_list: InList,\n    kind: FilesystemPathKind,\n) -> Result<Transformed<Expr>> {\n    let list = in_list\n        .list\n        .into_iter()\n        .map(|expr| canonicalize_path_literal_expr(expr, kind))\n        .collect::<Result<Vec<_>>>()?;\n    Ok(Transformed::yes(Expr::InList(InList::new(\n        in_list.expr,\n        list,\n        in_list.negated,\n    ))))\n}\n\nfn canonicalize_path_between(\n    between: Between,\n    kind: FilesystemPathKind,\n) -> Result<Transformed<Expr>> {\n    Ok(Transformed::yes(Expr::Between(Between {\n        expr: between.expr,\n        negated: between.negated,\n        low: Box::new(canonicalize_path_literal_expr(*between.low, kind)?),\n        high: Box::new(canonicalize_path_literal_expr(*between.high, kind)?),\n    })))\n}\n\nfn canonicalize_path_literal_expr(expr: Expr, kind: FilesystemPathKind) -> Result<Expr> {\n    let Expr::Literal(literal, metadata) = expr else {\n        return Err(unsupported_dynamic_path_predicate_error(expr));\n    };\n\n    match literal {\n        ScalarValue::Utf8(Some(value))\n        | ScalarValue::Utf8View(Some(value))\n        | ScalarValue::LargeUtf8(Some(value)) => {\n            let normalized = canonicalize_path_value(&value, kind)?;\n            Ok(Expr::Literal(ScalarValue::Utf8(Some(normalized)), metadata))\n        }\n        _ => Ok(Expr::Literal(literal, metadata)),\n    }\n}\n\nfn canonicalize_path_value(value: &str, kind: FilesystemPathKind) -> Result<String> {\n    match kind {\n        FilesystemPathKind::File => ParsedFilePath::try_from_path(value)\n            .map(|parsed| parsed.normalized_path.to_string())\n  
          .map_err(lix_error_to_datafusion_error),\n        FilesystemPathKind::Directory => {\n            normalize_directory_path(value).map_err(lix_error_to_datafusion_error)\n        }\n    }\n}\n\nfn is_path_column(expr: &Expr) -> bool {\n    matches!(expr, Expr::Column(column) if column.name == \"path\")\n}\n\nfn is_path_comparison_operator(op: Operator) -> bool {\n    matches!(\n        op,\n        Operator::Eq\n            | Operator::NotEq\n            | Operator::Lt\n            | Operator::LtEq\n            | Operator::Gt\n            | Operator::GtEq\n    )\n}\n\nfn unsupported_dynamic_path_predicate_error(expr: Expr) -> DataFusionError {\n    lix_error_to_datafusion_error(\n        LixError::new(\n            LixError::CODE_UNSUPPORTED_SQL,\n            format!(\n                \"filesystem path predicates only support literal path values; found expression {expr:?}\"\n            ),\n        )\n        .with_hint(\n            \"Compare lix_file.path or lix_directory.path to a string literal or bound parameter. \\\n             Computed path expressions are not supported until path canonicalization can run at evaluation time.\",\n        ),\n    )\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/filesystem_visibility.rs",
    "content": "#![allow(dead_code)]\n\nuse std::collections::{BTreeMap, BTreeSet};\nuse std::sync::Arc;\n\nuse serde::Deserialize;\n\nuse crate::live_state::MaterializedLiveStateRow;\nuse crate::live_state::{LiveStateFilter, LiveStateReader, LiveStateScanRequest};\nuse crate::LixError;\n\nuse super::filesystem_planner::{\n    FilesystemRowContext, BLOB_REF_SCHEMA_KEY, DIRECTORY_DESCRIPTOR_SCHEMA_KEY,\n    FILE_DESCRIPTOR_SCHEMA_KEY,\n};\n\n/// Execution-visible filesystem metadata decoded from live-state rows.\n///\n/// The helper intentionally depends only on `LiveStateReader`. In engine\n/// write execution that context may include staged rows, so filesystem planning\n/// sees pending writes without reaching into write-execution internals.\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\npub(crate) struct VisibleFilesystem {\n    pub(crate) directories_by_id: BTreeMap<String, VisibleDirectory>,\n    pub(crate) directory_children_by_parent_id: BTreeMap<Option<String>, BTreeSet<String>>,\n    pub(crate) files_by_directory_id: BTreeMap<Option<String>, BTreeMap<String, VisibleFile>>,\n    pub(crate) blob_refs_by_file_id: BTreeMap<String, VisibleBlobRef>,\n}\n\nimpl VisibleFilesystem {\n    /// Loads filesystem rows for a single version from execution-visible live\n    /// state and builds lookup indexes used by filesystem write planning.\n    pub(crate) async fn load(\n        live_state: Arc<dyn LiveStateReader>,\n        version_id: &str,\n    ) -> Result<Self, LixError> {\n        let rows = live_state\n            .scan_rows(&LiveStateScanRequest {\n                filter: LiveStateFilter {\n                    schema_keys: vec![\n                        DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(),\n                        FILE_DESCRIPTOR_SCHEMA_KEY.to_string(),\n                        BLOB_REF_SCHEMA_KEY.to_string(),\n                    ],\n                    version_ids: vec![version_id.to_string()],\n                    ..LiveStateFilter::default()\n      
          },\n                ..LiveStateScanRequest::default()\n            })\n            .await?;\n        Self::from_live_rows(rows)\n    }\n\n    /// Builds filesystem lookup indexes from rows that are already known to be\n    /// transaction-visible.\n    pub(crate) fn from_live_rows(rows: Vec<MaterializedLiveStateRow>) -> Result<Self, LixError> {\n        let mut visible = Self::default();\n\n        for row in rows {\n            let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n                continue;\n            };\n            match row.schema_key.as_str() {\n                DIRECTORY_DESCRIPTOR_SCHEMA_KEY => {\n                    let snapshot: DirectoryDescriptorSnapshot =\n                        serde_json::from_str(snapshot_content).map_err(|error| {\n                            LixError::new(\n                                \"LIX_ERROR_UNKNOWN\",\n                                format!(\"invalid lix_directory_descriptor snapshot JSON: {error}\"),\n                            )\n                        })?;\n                    let directory = VisibleDirectory {\n                        id: snapshot.id,\n                        parent_id: snapshot.parent_id,\n                        name: snapshot.name,\n                        hidden: snapshot.hidden.unwrap_or(false),\n                        context: filesystem_row_context(&row)?,\n                    };\n                    visible\n                        .directory_children_by_parent_id\n                        .entry(directory.parent_id.clone())\n                        .or_default()\n                        .insert(directory.id.clone());\n                    visible\n                        .directories_by_id\n                        .insert(directory.id.clone(), directory);\n                }\n                FILE_DESCRIPTOR_SCHEMA_KEY => {\n                    let snapshot: FileDescriptorSnapshot = serde_json::from_str(snapshot_content)\n                        
.map_err(|error| {\n                        LixError::new(\n                            \"LIX_ERROR_UNKNOWN\",\n                            format!(\"invalid lix_file_descriptor snapshot JSON: {error}\"),\n                        )\n                    })?;\n                    let file = VisibleFile {\n                        id: snapshot.id,\n                        directory_id: snapshot.directory_id,\n                        name: snapshot.name,\n                        hidden: snapshot.hidden,\n                        context: filesystem_row_context(&row)?,\n                    };\n                    visible\n                        .files_by_directory_id\n                        .entry(file.directory_id.clone())\n                        .or_default()\n                        .insert(file.id.clone(), file);\n                }\n                BLOB_REF_SCHEMA_KEY => {\n                    let snapshot: BlobRefSnapshot = serde_json::from_str(snapshot_content)\n                        .map_err(|error| {\n                            LixError::new(\n                                \"LIX_ERROR_UNKNOWN\",\n                                format!(\"invalid lix_binary_blob_ref snapshot JSON: {error}\"),\n                            )\n                        })?;\n                    visible.blob_refs_by_file_id.insert(\n                        snapshot.id.clone(),\n                        VisibleBlobRef {\n                            file_id: snapshot.id,\n                            blob_hash: snapshot.blob_hash,\n                            size_bytes: snapshot.size_bytes,\n                            context: filesystem_row_context(&row)?,\n                        },\n                    );\n                }\n                _ => {}\n            }\n        }\n\n        Ok(visible)\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct VisibleDirectory {\n    pub(crate) id: String,\n    pub(crate) parent_id: Option<String>,\n    pub(crate) name: 
String,\n    pub(crate) hidden: bool,\n    pub(crate) context: FilesystemRowContext,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct VisibleFile {\n    pub(crate) id: String,\n    pub(crate) directory_id: Option<String>,\n    pub(crate) name: String,\n    pub(crate) hidden: bool,\n    pub(crate) context: FilesystemRowContext,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct VisibleBlobRef {\n    pub(crate) file_id: String,\n    pub(crate) blob_hash: String,\n    pub(crate) size_bytes: Option<u64>,\n    pub(crate) context: FilesystemRowContext,\n}\n\n#[derive(Debug, Deserialize)]\nstruct DirectoryDescriptorSnapshot {\n    id: String,\n    parent_id: Option<String>,\n    name: String,\n    hidden: Option<bool>,\n}\n\n#[derive(Debug, Deserialize)]\nstruct FileDescriptorSnapshot {\n    id: String,\n    directory_id: Option<String>,\n    name: String,\n    hidden: bool,\n}\n\n#[derive(Debug, Deserialize)]\nstruct BlobRefSnapshot {\n    id: String,\n    blob_hash: String,\n    size_bytes: Option<u64>,\n}\n\nfn filesystem_row_context(\n    row: &MaterializedLiveStateRow,\n) -> Result<FilesystemRowContext, LixError> {\n    Ok(FilesystemRowContext {\n        version_id: row.version_id.clone(),\n        global: row.global,\n        untracked: row.untracked,\n        file_id: row.file_id.clone(),\n        metadata: row\n            .metadata\n            .as_deref()\n            .map(|metadata| {\n                crate::parse_row_metadata_value(metadata, \"filesystem row metadata\").and_then(\n                    |metadata| {\n                        crate::transaction::types::TransactionJson::from_value(\n                            metadata,\n                            \"filesystem row metadata\",\n                        )\n                    },\n                )\n            })\n            .transpose()?,\n    })\n}\n\n#[cfg(test)]\nmod tests {\n    use async_trait::async_trait;\n\n    use crate::live_state::MaterializedLiveStateRow;\n    
use crate::live_state::{LiveStateReader, LiveStateRowRequest, LiveStateScanRequest};\n    use crate::LixError;\n\n    use super::{\n        VisibleFilesystem, BLOB_REF_SCHEMA_KEY, DIRECTORY_DESCRIPTOR_SCHEMA_KEY,\n        FILE_DESCRIPTOR_SCHEMA_KEY,\n    };\n\n    #[tokio::test]\n    async fn nested_directories_resolve_correctly() {\n        let filesystem = VisibleFilesystem::load(\n            live_state(vec![\n                directory_row(\n                    \"dir-docs\",\n                    r#\"{\"id\":\"dir-docs\",\"parent_id\":null,\"name\":\"docs\",\"hidden\":false}\"#,\n                ),\n                directory_row(\n                    \"dir-guides\",\n                    r#\"{\"id\":\"dir-guides\",\"parent_id\":\"dir-docs\",\"name\":\"guides\",\"hidden\":false}\"#,\n                ),\n            ]),\n            \"version-a\",\n        )\n        .await\n        .expect(\"visible filesystem should load\");\n\n        assert_eq!(\n            filesystem\n                .directories_by_id\n                .get(\"dir-guides\")\n                .and_then(|directory| directory.parent_id.as_deref()),\n            Some(\"dir-docs\")\n        );\n        assert!(filesystem\n            .directory_children_by_parent_id\n            .get(&None)\n            .is_some_and(|children| children.contains(\"dir-docs\")));\n        assert!(filesystem\n            .directory_children_by_parent_id\n            .get(&Some(\"dir-docs\".to_string()))\n            .is_some_and(|children| children.contains(\"dir-guides\")));\n    }\n\n    #[tokio::test]\n    async fn files_attach_to_directory_ids() {\n        let filesystem = VisibleFilesystem::load(\n            live_state(vec![file_row(\n                \"file-readme\",\n                r#\"{\"id\":\"file-readme\",\"directory_id\":\"dir-guides\",\"name\":\"readme.md\",\"hidden\":false}\"#,\n            )]),\n            \"version-a\",\n        )\n        .await\n        .expect(\"visible filesystem should 
load\");\n\n        let files = filesystem\n            .files_by_directory_id\n            .get(&Some(\"dir-guides\".to_string()))\n            .expect(\"directory should have attached files\");\n        let file = files\n            .get(\"file-readme\")\n            .expect(\"file should be indexed by id inside directory\");\n        assert_eq!(file.name, \"readme.md\");\n    }\n\n    #[tokio::test]\n    async fn blob_refs_attach_to_file_ids() {\n        let filesystem = VisibleFilesystem::load(\n            live_state(vec![blob_ref_row(\n                \"file-readme\",\n                r#\"{\"id\":\"file-readme\",\"blob_hash\":\"abc123\",\"size_bytes\":5}\"#,\n            )]),\n            \"version-a\",\n        )\n        .await\n        .expect(\"visible filesystem should load\");\n\n        let blob_ref = filesystem\n            .blob_refs_by_file_id\n            .get(\"file-readme\")\n            .expect(\"blob ref should be indexed by file id\");\n        assert_eq!(blob_ref.blob_hash, \"abc123\");\n        assert_eq!(blob_ref.size_bytes, Some(5));\n    }\n\n    fn live_state(rows: Vec<MaterializedLiveStateRow>) -> std::sync::Arc<dyn LiveStateReader> {\n        std::sync::Arc::new(RowsLiveStateReader { rows })\n    }\n\n    struct RowsLiveStateReader {\n        rows: Vec<MaterializedLiveStateRow>,\n    }\n\n    #[async_trait]\n    impl LiveStateReader for RowsLiveStateReader {\n        async fn scan_rows(\n            &self,\n            request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(self\n                .rows\n                .iter()\n                .filter(|row| {\n                    (request.filter.schema_keys.is_empty()\n                        || request.filter.schema_keys.contains(&row.schema_key))\n                        && (request.filter.version_ids.is_empty()\n                            || request.filter.version_ids.contains(&row.version_id))\n                })\n             
   .cloned()\n                .collect())\n        }\n\n        async fn load_row(\n            &self,\n            _request: &LiveStateRowRequest,\n        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n            Ok(None)\n        }\n    }\n\n    fn directory_row(entity_id: &str, snapshot_content: &str) -> MaterializedLiveStateRow {\n        live_row(\n            entity_id,\n            DIRECTORY_DESCRIPTOR_SCHEMA_KEY,\n            None,\n            snapshot_content,\n        )\n    }\n\n    fn file_row(entity_id: &str, snapshot_content: &str) -> MaterializedLiveStateRow {\n        live_row(\n            entity_id,\n            FILE_DESCRIPTOR_SCHEMA_KEY,\n            None,\n            snapshot_content,\n        )\n    }\n\n    fn blob_ref_row(entity_id: &str, snapshot_content: &str) -> MaterializedLiveStateRow {\n        live_row(\n            entity_id,\n            BLOB_REF_SCHEMA_KEY,\n            Some(entity_id.to_string()),\n            snapshot_content,\n        )\n    }\n\n    fn live_row(\n        entity_id: &str,\n        schema_key: &str,\n        file_id: Option<String>,\n        snapshot_content: &str,\n    ) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: crate::entity_identity::EntityIdentity::single(entity_id),\n            schema_key: schema_key.to_string(),\n            file_id,\n            snapshot_content: Some(snapshot_content.to_string()),\n            metadata: None,\n            deleted: false,\n            version_id: \"version-a\".to_string(),\n            change_id: Some(format!(\"change-{entity_id}\")),\n            commit_id: Some(format!(\"commit-{entity_id}\")),\n            global: false,\n            untracked: false,\n            created_at: \"2026-04-23T00:00:00Z\".to_string(),\n            updated_at: \"2026-04-23T01:00:00Z\".to_string(),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/history_projection.rs",
    "content": "use serde_json::Value as JsonValue;\n\nuse crate::entity_identity::EntityIdentity;\nuse crate::LixError;\n\n/// Shared projection contract for typed history views.\n///\n/// On tombstone rows (`snapshot_content IS NULL`), identity columns survive by\n/// projecting from canonical entity identity. Non-identity columns must remain\n/// NULL because there is no snapshot to project payload from.\npub(crate) enum HistoryIdentityProjection<'a> {\n    PrimaryKeyPaths(&'a [Vec<String>]),\n    SingleColumn { column: &'a str },\n}\n\npub(crate) fn tombstone_identity_column_value(\n    column_name: &str,\n    entity_id: &str,\n    projection: HistoryIdentityProjection<'_>,\n) -> Result<Option<JsonValue>, LixError> {\n    match projection {\n        HistoryIdentityProjection::SingleColumn { column } => {\n            if column_name == column {\n                Ok(Some(JsonValue::String(entity_id.to_string())))\n            } else {\n                Ok(None)\n            }\n        }\n        HistoryIdentityProjection::PrimaryKeyPaths(primary_key_paths) => {\n            primary_key_tombstone_value(column_name, entity_id, primary_key_paths)\n        }\n    }\n}\n\nfn primary_key_tombstone_value(\n    column_name: &str,\n    entity_id: &str,\n    primary_key_paths: &[Vec<String>],\n) -> Result<Option<JsonValue>, LixError> {\n    let Some(part_index) = primary_key_paths\n        .iter()\n        .position(|path| path.as_slice() == [column_name])\n    else {\n        return Ok(None);\n    };\n\n    let identity = EntityIdentity::from_json_array_text(entity_id).map_err(|error| {\n        LixError::unknown(format!(\n            \"failed to decode history tombstone entity identity: {error}\"\n        ))\n    })?;\n    Ok(identity\n        .parts\n        .get(part_index)\n        .map(|part| JsonValue::String(part.clone())))\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/history_provider.rs",
    "content": "use std::any::Any;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse datafusion::arrow::array::{ArrayRef, Int64Array, StringArray};\nuse datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef};\nuse datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions};\nuse datafusion::catalog::{Session, TableProvider};\nuse datafusion::common::{DataFusionError, Result};\nuse datafusion::datasource::TableType;\nuse datafusion::execution::TaskContext;\nuse datafusion::logical_expr::{Expr, TableProviderFilterPushDown};\nuse datafusion::physical_expr::EquivalenceProperties;\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};\nuse datafusion::physical_plan::stream::RecordBatchStreamAdapter;\nuse datafusion::physical_plan::{\n    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,\n};\nuse datafusion::prelude::SessionContext;\nuse futures_util::{stream, TryStreamExt};\nuse tokio::sync::Mutex;\n\nuse crate::commit_graph::CommitGraphReader;\nuse crate::{serialize_row_metadata, LixError};\n\nuse super::history_route::{\n    load_history_entries, parse_history_filter, HistoryColumnStyle, HistoryRoute,\n    HistoryViewDescriptor,\n};\nuse super::result_metadata::json_field;\nuse super::SqlCommitStoreQuerySource;\n\npub(crate) async fn register_history_providers(\n    session: &SessionContext,\n    commit_graph: Box<dyn CommitGraphReader>,\n    query_source: SqlCommitStoreQuerySource,\n) -> Result<Arc<dyn TableProvider>, LixError> {\n    let provider: Arc<dyn TableProvider> = Arc::new(LixStateHistoryProvider::new(\n        Arc::new(Mutex::new(commit_graph)),\n        query_source,\n    ));\n    session\n        .register_table(\"lix_state_history\", Arc::clone(&provider))\n        .map_err(datafusion_error_to_lix_error)?;\n    Ok(provider)\n}\n\npub(crate) struct LixStateHistoryProvider {\n    schema: SchemaRef,\n    commit_graph: Arc<Mutex<Box<dyn 
CommitGraphReader>>>,\n    query_source: SqlCommitStoreQuerySource,\n}\n\nimpl std::fmt::Debug for LixStateHistoryProvider {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixStateHistoryProvider\").finish()\n    }\n}\n\nimpl LixStateHistoryProvider {\n    pub(crate) fn new(\n        commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n        query_source: SqlCommitStoreQuerySource,\n    ) -> Self {\n        Self {\n            schema: lix_state_history_schema(),\n            commit_graph,\n            query_source,\n        }\n    }\n}\n\n#[async_trait]\nimpl TableProvider for LixStateHistoryProvider {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n\n    fn table_type(&self) -> TableType {\n        TableType::View\n    }\n\n    fn supports_filters_pushdown(\n        &self,\n        filters: &[&Expr],\n    ) -> Result<Vec<TableProviderFilterPushDown>> {\n        Ok(filters\n            .iter()\n            .map(|filter| {\n                if parse_history_filter(filter, HistoryColumnStyle::Bare).is_some() {\n                    TableProviderFilterPushDown::Exact\n                } else {\n                    TableProviderFilterPushDown::Unsupported\n                }\n            })\n            .collect())\n    }\n\n    async fn scan(\n        &self,\n        _state: &dyn Session,\n        projection: Option<&Vec<usize>>,\n        filters: &[Expr],\n        limit: Option<usize>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        let projected_schema = projected_schema(&self.schema, projection)?;\n        Ok(Arc::new(LixStateHistoryScanExec::new(\n            Arc::clone(&self.commit_graph),\n            self.query_source.clone(),\n            projected_schema,\n            projection.cloned(),\n            HistoryRoute::from_filters(filters, HistoryColumnStyle::Bare),\n            limit,\n        )))\n    }\n}\n\nstruct 
LixStateHistoryScanExec {\n    commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n    query_source: SqlCommitStoreQuerySource,\n    schema: SchemaRef,\n    projection: Option<Vec<usize>>,\n    route: HistoryRoute,\n    limit: Option<usize>,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixStateHistoryScanExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixStateHistoryScanExec\")\n            .field(\"limit\", &self.limit)\n            .field(\"route\", &self.route)\n            .finish()\n    }\n}\n\nimpl LixStateHistoryScanExec {\n    fn new(\n        commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n        query_source: SqlCommitStoreQuerySource,\n        schema: SchemaRef,\n        projection: Option<Vec<usize>>,\n        route: HistoryRoute,\n        limit: Option<usize>,\n    ) -> Self {\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Incremental,\n            Boundedness::Bounded,\n        );\n        Self {\n            commit_graph,\n            query_source,\n            schema,\n            projection,\n            route,\n            limit,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixStateHistoryScanExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(\n                    f,\n                    \"LixStateHistoryScanExec(limit={:?}, route={:?})\",\n                    self.limit, self.route\n                )\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixStateHistoryScanExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixStateHistoryScanExec {\n    fn name(&self) -> &str {\n   
     \"LixStateHistoryScanExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixStateHistoryScanExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixStateHistoryScanExec only exposes one partition, got {partition}\"\n            )));\n        }\n\n        let commit_graph = Arc::clone(&self.commit_graph);\n        let query_source = self.query_source.clone();\n        let route = self.route.clone();\n        let schema = Arc::clone(&self.schema);\n        let stream_schema = Arc::clone(&schema);\n        let limit = self.limit;\n        let zero_column_projection = self\n            .projection\n            .as_ref()\n            .is_some_and(|projection| projection.is_empty());\n\n        let stream = stream::once(async move {\n            let rows = if route.is_contradictory() {\n                Vec::new()\n            } else {\n                load_state_history_rows(commit_graph, query_source, &route)\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?\n            };\n            let rows = if let Some(limit) = limit {\n                rows.into_iter().take(limit).collect::<Vec<_>>()\n            } else {\n                rows\n            };\n\n            let batch = if 
zero_column_projection {\n                let options = RecordBatchOptions::new().with_row_count(Some(rows.len()));\n                RecordBatch::try_new_with_options(Arc::clone(&stream_schema), vec![], &options)\n                    .map_err(|error| {\n                        DataFusionError::Execution(format!(\n                            \"failed to build zero-column lix_state_history batch: {error}\"\n                        ))\n                    })?\n            } else {\n                state_history_record_batch(Arc::clone(&stream_schema), &rows)?\n            };\n            Ok::<_, DataFusionError>(stream::iter(vec![Ok::<RecordBatch, DataFusionError>(\n                batch,\n            )]))\n        })\n        .try_flatten();\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(schema, stream)))\n    }\n}\n\nfn lix_state_history_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![\n        json_field(\"entity_id\", false),\n        Field::new(\"schema_key\", DataType::Utf8, false),\n        Field::new(\"file_id\", DataType::Utf8, true),\n        json_field(\"snapshot_content\", true),\n        json_field(\"metadata\", true),\n        Field::new(\"change_id\", DataType::Utf8, false),\n        Field::new(\"observed_commit_id\", DataType::Utf8, false),\n        Field::new(\"commit_created_at\", DataType::Utf8, false),\n        Field::new(\"start_commit_id\", DataType::Utf8, false),\n        Field::new(\"depth\", DataType::Int64, false),\n    ]))\n}\n\nfn projected_schema(base_schema: &SchemaRef, projection: Option<&Vec<usize>>) -> Result<SchemaRef> {\n    let fields = match projection {\n        Some(indices) => indices\n            .iter()\n            .map(|index| base_schema.field(*index).as_ref().clone())\n            .collect::<Vec<_>>(),\n        None => base_schema\n            .fields()\n            .iter()\n            .map(|field| field.as_ref().clone())\n            .collect::<Vec<_>>(),\n    };\n    
Ok(Arc::new(Schema::new(fields)))\n}\n\n#[derive(Debug, Clone)]\nstruct StateHistorySqlRow {\n    entity_id: String,\n    schema_key: String,\n    file_id: Option<String>,\n    snapshot_content: Option<String>,\n    metadata: Option<String>,\n    change_id: String,\n    observed_commit_id: String,\n    commit_created_at: String,\n    start_commit_id: String,\n    depth: i64,\n}\n\nfn state_history_record_batch(\n    schema: SchemaRef,\n    rows: &[StateHistorySqlRow],\n) -> Result<RecordBatch> {\n    let arrays = schema\n        .fields()\n        .iter()\n        .map(|field| {\n            Ok(match field.name().as_str() {\n                \"entity_id\" => string_array(rows.iter().map(|row| Some(row.entity_id.as_str()))),\n                \"schema_key\" => string_array(rows.iter().map(|row| Some(row.schema_key.as_str()))),\n                \"file_id\" => string_array(rows.iter().map(|row| row.file_id.as_deref())),\n                \"snapshot_content\" => {\n                    string_array(rows.iter().map(|row| row.snapshot_content.as_deref()))\n                }\n                \"metadata\" => Arc::new(StringArray::from(\n                    rows.iter()\n                        .map(|row| row.metadata.as_ref().map(serialize_row_metadata))\n                        .collect::<Vec<_>>(),\n                )),\n                \"change_id\" => string_array(rows.iter().map(|row| Some(row.change_id.as_str()))),\n                \"observed_commit_id\" => {\n                    string_array(rows.iter().map(|row| Some(row.observed_commit_id.as_str())))\n                }\n                \"commit_created_at\" => {\n                    string_array(rows.iter().map(|row| Some(row.commit_created_at.as_str())))\n                }\n                \"start_commit_id\" => {\n                    string_array(rows.iter().map(|row| Some(row.start_commit_id.as_str())))\n                }\n                \"depth\" => Arc::new(Int64Array::from(\n                    
rows.iter().map(|row| row.depth).collect::<Vec<_>>(),\n                )) as ArrayRef,\n                other => {\n                    return Err(DataFusionError::Execution(format!(\n                        \"lix_state_history provider does not support projected column '{other}'\"\n                    )))\n                }\n            })\n        })\n        .collect::<Result<Vec<_>>>()?;\n    RecordBatch::try_new(schema, arrays).map_err(DataFusionError::from)\n}\n\nfn string_array<'a>(values: impl Iterator<Item = Option<&'a str>>) -> ArrayRef {\n    Arc::new(StringArray::from(values.collect::<Vec<_>>())) as ArrayRef\n}\n\nasync fn load_state_history_rows(\n    commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n    query_source: SqlCommitStoreQuerySource,\n    route: &HistoryRoute,\n) -> Result<Vec<StateHistorySqlRow>, LixError> {\n    let entries = load_history_entries(\n        HistoryViewDescriptor {\n            view_name: \"lix_state_history\",\n            start_commit_column: \"start_commit_id\",\n        },\n        commit_graph,\n        query_source.json_reader,\n        route,\n        Vec::new(),\n    )\n    .await?;\n    let mut rows = entries\n        .into_iter()\n        .map(|entry| -> Result<StateHistorySqlRow, LixError> {\n            Ok(StateHistorySqlRow {\n                entity_id: entry.change.entity_id.as_json_array_text()?,\n                schema_key: entry.change.schema_key,\n                file_id: entry.change.file_id,\n                snapshot_content: entry.change.snapshot_content,\n                metadata: entry.change.metadata,\n                change_id: entry.change.id,\n                observed_commit_id: entry.observed_commit_id,\n                commit_created_at: entry.commit_created_at,\n                start_commit_id: entry.start_commit_id,\n                depth: i64::from(entry.depth),\n            })\n        })\n        .collect::<Result<Vec<_>, _>>()?;\n\n    rows.sort_by(|left, right| {\n        
left.entity_id\n            .cmp(&right.entity_id)\n            .then(left.file_id.cmp(&right.file_id))\n            .then(left.schema_key.cmp(&right.schema_key))\n            .then(left.depth.cmp(&right.depth))\n            .then(left.change_id.cmp(&right.change_id))\n    });\n    Ok(rows)\n}\n\nfn datafusion_error_to_lix_error(error: DataFusionError) -> LixError {\n    super::error::datafusion_error_to_lix_error(error)\n}\n\nfn lix_error_to_datafusion_error(error: LixError) -> DataFusionError {\n    super::error::lix_error_to_datafusion_error(error)\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/history_route.rs",
    "content": "use std::collections::BTreeMap;\nuse std::sync::Arc;\n\nuse datafusion::common::ScalarValue;\nuse datafusion::logical_expr::expr::InList;\nuse datafusion::logical_expr::{Expr, Operator};\nuse tokio::sync::Mutex;\n\nuse crate::commit_graph::{CommitGraphChangeHistoryRequest, CommitGraphReader};\nuse crate::entity_identity::EntityIdentity;\nuse crate::LixError;\n\nuse super::SqlJsonReader;\nuse crate::commit_store::{materialize_change, MaterializedChange};\n\n/// Shared routing state for commit-shaped history SQL surfaces.\n///\n/// History providers differ in how they shape rows, but they should not drift\n/// in how they interpret filters such as `start_commit_id IN (...)`, entity\n/// filters, or depth ranges.\n#[derive(Debug, Clone, Default, PartialEq, Eq)]\npub(crate) struct HistoryRoute {\n    pub(crate) start_commit_ids: Vec<String>,\n    pub(crate) entity_ids: Vec<String>,\n    pub(crate) schema_keys: Vec<String>,\n    pub(crate) file_ids: Vec<String>,\n    pub(crate) min_depth: Option<i64>,\n    pub(crate) max_depth: Option<i64>,\n    pub(crate) contradictory: bool,\n}\n\nimpl HistoryRoute {\n    pub(crate) fn from_filters(filters: &[Expr], column_style: HistoryColumnStyle) -> Self {\n        let mut route = Self::default();\n        for filter in filters {\n            apply_history_filter(filter, &mut route, column_style);\n        }\n        route\n    }\n\n    /// Returns the part of the route that is safe to apply before a shaped\n    /// history provider has built its output rows.\n    ///\n    /// Surface providers such as `lix_file_history` may be backed by different\n    /// canonical event schemas than the schema they expose. 
For those providers,\n    /// identity/schema filters must be evaluated against the shaped output row,\n    /// not against the canonical event row.\n    pub(crate) fn traversal_only(&self) -> Self {\n        Self {\n            start_commit_ids: self.start_commit_ids.clone(),\n            min_depth: self.min_depth,\n            max_depth: self.max_depth,\n            contradictory: self.contradictory,\n            ..Self::default()\n        }\n    }\n\n    /// Returns only the explicit history starts.\n    ///\n    /// Shaped history providers use this for context loading: path/data shaping\n    /// often needs ancestor descriptor rows even when the event route is\n    /// restricted to a specific depth.\n    pub(crate) fn starts_only(&self) -> Self {\n        Self {\n            start_commit_ids: self.start_commit_ids.clone(),\n            contradictory: self.contradictory,\n            ..Self::default()\n        }\n    }\n\n    pub(crate) fn is_contradictory(&self) -> bool {\n        self.contradictory\n            || self\n                .min_depth\n                .zip(self.max_depth)\n                .is_some_and(|(min, max)| min > max)\n            || self.min_depth.is_some_and(|depth| depth < 0)\n            || self.max_depth.is_some_and(|depth| depth < 0)\n    }\n\n    /// Checks filters that refer to the row exposed by a shaped history surface.\n    pub(crate) fn matches_surface_row(\n        &self,\n        schema_key: &str,\n        entity_id: &str,\n        file_id: Option<&str>,\n        depth: u32,\n    ) -> bool {\n        if self.is_contradictory() {\n            return false;\n        }\n        if !self.schema_keys.is_empty()\n            && !self\n                .schema_keys\n                .iter()\n                .any(|candidate| candidate == schema_key)\n        {\n            return false;\n        }\n        if !self.entity_ids.is_empty()\n            && !self\n                .entity_ids\n                .iter()\n                
.any(|candidate| candidate == entity_id)\n        {\n            return false;\n        }\n        if !self.file_ids.is_empty() {\n            let Some(file_id) = file_id else {\n                return false;\n            };\n            if !self.file_ids.iter().any(|candidate| candidate == file_id) {\n                return false;\n            }\n        }\n        if self\n            .min_depth\n            .is_some_and(|min_depth| i64::from(depth) < min_depth)\n        {\n            return false;\n        }\n        if self\n            .max_depth\n            .is_some_and(|max_depth| i64::from(depth) > max_depth)\n        {\n            return false;\n        }\n        true\n    }\n}\n\n/// Commit-graph history entry enriched with commit metadata needed by SQL\n/// history surfaces.\n#[derive(Debug, Clone)]\npub(crate) struct HistoryEntry {\n    pub(crate) change: MaterializedChange,\n    pub(crate) observed_commit_id: String,\n    pub(crate) commit_created_at: String,\n    pub(crate) start_commit_id: String,\n    pub(crate) depth: u32,\n}\n\npub(crate) const HISTORY_COL_ENTITY_ID: &str = \"lixcol_entity_id\";\npub(crate) const HISTORY_COL_SCHEMA_KEY: &str = \"lixcol_schema_key\";\npub(crate) const HISTORY_COL_FILE_ID: &str = \"lixcol_file_id\";\npub(crate) const HISTORY_COL_SNAPSHOT_CONTENT: &str = \"lixcol_snapshot_content\";\npub(crate) const HISTORY_COL_METADATA: &str = \"lixcol_metadata\";\npub(crate) const HISTORY_COL_CHANGE_ID: &str = \"lixcol_change_id\";\npub(crate) const HISTORY_COL_OBSERVED_COMMIT_ID: &str = \"lixcol_observed_commit_id\";\npub(crate) const HISTORY_COL_COMMIT_CREATED_AT: &str = \"lixcol_commit_created_at\";\npub(crate) const HISTORY_COL_START_COMMIT_ID: &str = \"lixcol_start_commit_id\";\npub(crate) const HISTORY_COL_DEPTH: &str = \"lixcol_depth\";\n\npub(crate) struct HistoryViewDescriptor<'a> {\n    pub(crate) view_name: &'a str,\n    pub(crate) start_commit_column: &'a str,\n}\n\n#[derive(Debug, Clone, Copy)]\npub(crate) enum 
HistoryColumnStyle {\n    Bare,\n    Prefixed,\n}\n\n/// Shaped history views expose delete events as tombstone rows.\n///\n/// If the current event is the descriptor tombstone itself, the provider must\n/// use that tombstone row instead of looking through to an earlier live\n/// descriptor. This keeps one contract across typed entity, file, directory,\n/// and state history: `snapshot_content IS NULL` means projected user/domain\n/// columns are NULL while metadata columns still identify the event.\npub(crate) fn history_descriptor_event_matches(\n    descriptor_entry: &HistoryEntry,\n    event_depth: u32,\n    event_change_id: &str,\n) -> bool {\n    descriptor_entry.depth == event_depth && descriptor_entry.change.id == event_change_id\n}\n\npub(crate) fn parse_history_filter(expr: &Expr, column_style: HistoryColumnStyle) -> Option<()> {\n    parse_history_filter_terms(expr, column_style).map(|_| ())\n}\n\npub(crate) fn commit_graph_history_request(\n    route: &HistoryRoute,\n    schema_keys: Vec<String>,\n) -> Option<CommitGraphChangeHistoryRequest> {\n    let schema_keys = effective_schema_keys(route, schema_keys)?;\n    Some(CommitGraphChangeHistoryRequest {\n        entity_ids: route\n            .entity_ids\n            .iter()\n            .filter_map(|entity_id| EntityIdentity::from_json_array_text(entity_id).ok())\n            .collect(),\n        schema_keys,\n        file_ids: route.file_ids.clone(),\n        min_depth: route.min_depth.and_then(nonnegative_u32),\n        max_depth: route.max_depth.and_then(nonnegative_u32),\n        include_tombstones: true,\n    })\n}\n\n/// Loads commit-graph history once for all SQL history providers.\n///\n/// Providers pass the schema keys they know how to shape. 
An empty list means\n/// \"do not constrain by provider schema\"; this is what `lix_state_history` uses.\npub(crate) async fn load_history_entries(\n    descriptor: HistoryViewDescriptor<'_>,\n    commit_graph: Arc<Mutex<Box<dyn CommitGraphReader>>>,\n    mut json_reader: SqlJsonReader,\n    route: &HistoryRoute,\n    schema_keys: Vec<String>,\n) -> Result<Vec<HistoryEntry>, LixError> {\n    if route.is_contradictory() {\n        return Ok(Vec::new());\n    }\n    if route.start_commit_ids.is_empty() {\n        return Err(LixError::new(\n            LixError::CODE_HISTORY_FILTER_REQUIRED,\n            format!(\n                \"{} requires a {} filter\",\n                descriptor.view_name, descriptor.start_commit_column\n            ),\n        )\n        .with_hint(format!(\n            \"Use WHERE {} = lix_active_version_commit_id() to inspect {} from the active version head.\",\n            descriptor.start_commit_column, descriptor.view_name\n        )));\n    }\n    let Some(request) = commit_graph_history_request(route, schema_keys) else {\n        return Ok(Vec::new());\n    };\n\n    let mut rows = Vec::new();\n    for start_commit_id in &route.start_commit_ids {\n        let (entries, reachable_commits) = {\n            let mut guard = commit_graph.lock().await;\n            let entries = guard\n                .change_history_from_commit(start_commit_id, &request)\n                .await?;\n            let reachable_commits = guard.reachable_commits(start_commit_id).await?;\n            (entries, reachable_commits)\n        };\n        let commit_created_at_by_id = reachable_commits\n            .into_iter()\n            .map(|reachable| {\n                (\n                    reachable.commit.commit_id.clone(),\n                    reachable.commit.change.created_at.clone(),\n                )\n            })\n            .collect::<BTreeMap<_, _>>();\n\n        for entry in entries {\n            let change = materialize_change(&mut json_reader, 
entry.located_change).await?;\n            rows.push(HistoryEntry {\n                commit_created_at: commit_created_at_by_id\n                    .get(&entry.observed_commit_id)\n                    .cloned()\n                    .unwrap_or_else(|| change.created_at.clone()),\n                change,\n                observed_commit_id: entry.observed_commit_id,\n                start_commit_id: entry.start_commit_id,\n                depth: entry.depth,\n            });\n        }\n    }\n\n    Ok(rows)\n}\n\nfn effective_schema_keys(\n    route: &HistoryRoute,\n    surface_schema_keys: Vec<String>,\n) -> Option<Vec<String>> {\n    if surface_schema_keys.is_empty() {\n        return Some(route.schema_keys.clone());\n    }\n    if route.schema_keys.is_empty() {\n        return Some(surface_schema_keys);\n    }\n\n    let mut effective = Vec::new();\n    for schema_key in surface_schema_keys {\n        if route.schema_keys.contains(&schema_key) && !effective.contains(&schema_key) {\n            effective.push(schema_key);\n        }\n    }\n    if effective.is_empty() {\n        None\n    } else {\n        Some(effective)\n    }\n}\n\nfn parse_history_filter_terms(\n    expr: &Expr,\n    column_style: HistoryColumnStyle,\n) -> Option<Vec<HistoryFilterTerm>> {\n    match expr {\n        Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => {\n            let mut terms = parse_history_filter_terms(&binary_expr.left, column_style)?;\n            terms.extend(parse_history_filter_terms(\n                &binary_expr.right,\n                column_style,\n            )?);\n            Some(terms)\n        }\n        Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::Or => {\n            parse_history_disjunction(binary_expr, column_style)\n        }\n        Expr::BinaryExpr(binary_expr) => {\n            parse_history_binary_filter(binary_expr, column_style).map(|term| vec![term])\n        }\n        Expr::InList(in_list) => {\n            
parse_history_in_list_filter(in_list, column_style).map(|term| vec![term])\n        }\n        _ => None,\n    }\n}\n\nfn collect_history_route_terms(\n    expr: &Expr,\n    column_style: HistoryColumnStyle,\n) -> Vec<HistoryFilterTerm> {\n    match expr {\n        Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => {\n            let mut terms = collect_history_route_terms(&binary_expr.left, column_style);\n            terms.extend(collect_history_route_terms(\n                &binary_expr.right,\n                column_style,\n            ));\n            terms\n        }\n        // OR filters are only safe to route when the entire disjunction is a\n        // supported history predicate. Partially routing one side would change\n        // SQL semantics before DataFusion can apply the residual filter.\n        Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::Or => {\n            parse_history_disjunction(binary_expr, column_style).unwrap_or_default()\n        }\n        Expr::BinaryExpr(binary_expr) => parse_history_binary_filter(binary_expr, column_style)\n            .map(|term| vec![term])\n            .unwrap_or_default(),\n        Expr::InList(in_list) => parse_history_in_list_filter(in_list, column_style)\n            .map(|term| vec![term])\n            .unwrap_or_default(),\n        _ => Vec::new(),\n    }\n}\n\nfn parse_history_disjunction(\n    binary_expr: &datafusion::logical_expr::BinaryExpr,\n    column_style: HistoryColumnStyle,\n) -> Option<Vec<HistoryFilterTerm>> {\n    let left = parse_history_filter_terms(&binary_expr.left, column_style)?;\n    let right = parse_history_filter_terms(&binary_expr.right, column_style)?;\n    let [left] = left.as_slice() else {\n        return None;\n    };\n    let [right] = right.as_slice() else {\n        return None;\n    };\n    merge_history_disjunction_terms(left.clone(), right.clone()).map(|term| vec![term])\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nenum HistoryFilterTerm 
{\n    StartCommitIds(Vec<String>),\n    EntityIds(Vec<String>),\n    SchemaKeys(Vec<String>),\n    FileIds(Vec<String>),\n    MinDepth(i64),\n    MaxDepth(i64),\n    ExactDepth(i64),\n}\n\nfn merge_history_disjunction_terms(\n    left: HistoryFilterTerm,\n    right: HistoryFilterTerm,\n) -> Option<HistoryFilterTerm> {\n    match (left, right) {\n        (HistoryFilterTerm::StartCommitIds(mut left), HistoryFilterTerm::StartCommitIds(right)) => {\n            extend_unique(&mut left, right);\n            Some(HistoryFilterTerm::StartCommitIds(left))\n        }\n        (HistoryFilterTerm::EntityIds(mut left), HistoryFilterTerm::EntityIds(right)) => {\n            extend_unique(&mut left, right);\n            Some(HistoryFilterTerm::EntityIds(left))\n        }\n        (HistoryFilterTerm::FileIds(mut left), HistoryFilterTerm::FileIds(right)) => {\n            extend_unique(&mut left, right);\n            Some(HistoryFilterTerm::FileIds(left))\n        }\n        (HistoryFilterTerm::SchemaKeys(mut left), HistoryFilterTerm::SchemaKeys(right)) => {\n            extend_unique(&mut left, right);\n            Some(HistoryFilterTerm::SchemaKeys(left))\n        }\n        _ => None,\n    }\n}\n\nfn parse_history_binary_filter(\n    binary_expr: &datafusion::logical_expr::BinaryExpr,\n    column_style: HistoryColumnStyle,\n) -> Option<HistoryFilterTerm> {\n    let Expr::Column(column) = &*binary_expr.left else {\n        return None;\n    };\n    let column_name = canonical_history_column_name(column.name.as_str(), column_style)?;\n    let right = &*binary_expr.right;\n    match (column_name, &binary_expr.op, right) {\n        (\"start_commit_id\", Operator::Eq, Expr::Literal(ScalarValue::Utf8(Some(value)), _))\n        | (\"schema_key\", Operator::Eq, Expr::Literal(ScalarValue::Utf8(Some(value)), _))\n        | (\"file_id\", Operator::Eq, Expr::Literal(ScalarValue::Utf8(Some(value)), _)) => {\n            Some(match column_name {\n                \"start_commit_id\" => 
HistoryFilterTerm::StartCommitIds(vec![value.clone()]),\n                \"schema_key\" => HistoryFilterTerm::SchemaKeys(vec![value.clone()]),\n                \"file_id\" => HistoryFilterTerm::FileIds(vec![value.clone()]),\n                _ => unreachable!(),\n            })\n        }\n        (\"entity_id\", Operator::Eq, Expr::Literal(ScalarValue::Utf8(Some(value)), _)) => {\n            canonical_entity_id_value(value).map(|value| HistoryFilterTerm::EntityIds(vec![value]))\n        }\n        (\"depth\", Operator::Eq, depth_expr) => {\n            scalar_i64_literal(depth_expr).map(HistoryFilterTerm::ExactDepth)\n        }\n        (\"depth\", Operator::Gt, depth_expr) => {\n            scalar_i64_literal(depth_expr).map(|value| HistoryFilterTerm::MinDepth(value + 1))\n        }\n        (\"depth\", Operator::GtEq, depth_expr) => {\n            scalar_i64_literal(depth_expr).map(HistoryFilterTerm::MinDepth)\n        }\n        (\"depth\", Operator::Lt, depth_expr) => {\n            scalar_i64_literal(depth_expr).map(|value| HistoryFilterTerm::MaxDepth(value - 1))\n        }\n        (\"depth\", Operator::LtEq, depth_expr) => {\n            scalar_i64_literal(depth_expr).map(HistoryFilterTerm::MaxDepth)\n        }\n        _ => None,\n    }\n}\n\nfn parse_history_in_list_filter(\n    in_list: &InList,\n    column_style: HistoryColumnStyle,\n) -> Option<HistoryFilterTerm> {\n    if in_list.negated {\n        return None;\n    }\n\n    let Expr::Column(column) = in_list.expr.as_ref() else {\n        return None;\n    };\n    let column_name = canonical_history_column_name(column.name.as_str(), column_style)?;\n    let values = in_list\n        .list\n        .iter()\n        .map(string_literal)\n        .collect::<Option<Vec<_>>>()?;\n    if values.is_empty() {\n        return None;\n    }\n\n    match column_name {\n        \"start_commit_id\" => Some(HistoryFilterTerm::StartCommitIds(values)),\n        \"entity_id\" => 
canonical_entity_id_values(values).map(HistoryFilterTerm::EntityIds),\n        \"schema_key\" => Some(HistoryFilterTerm::SchemaKeys(values)),\n        \"file_id\" => Some(HistoryFilterTerm::FileIds(values)),\n        _ => None,\n    }\n}\n\nfn apply_history_filter(expr: &Expr, route: &mut HistoryRoute, column_style: HistoryColumnStyle) {\n    for term in collect_history_route_terms(expr, column_style) {\n        match term {\n            HistoryFilterTerm::StartCommitIds(values) => {\n                route.contradictory |=\n                    apply_conjunctive_values_filter(&mut route.start_commit_ids, values)\n            }\n            HistoryFilterTerm::EntityIds(values) => {\n                route.contradictory |=\n                    apply_conjunctive_values_filter(&mut route.entity_ids, values)\n            }\n            HistoryFilterTerm::SchemaKeys(values) => {\n                route.contradictory |=\n                    apply_conjunctive_values_filter(&mut route.schema_keys, values)\n            }\n            HistoryFilterTerm::FileIds(values) => {\n                route.contradictory |= apply_conjunctive_values_filter(&mut route.file_ids, values)\n            }\n            HistoryFilterTerm::ExactDepth(value) => {\n                route.min_depth = Some(value);\n                route.max_depth = Some(value);\n            }\n            HistoryFilterTerm::MinDepth(value) => {\n                route.min_depth = Some(route.min_depth.map_or(value, |current| current.max(value)));\n            }\n            HistoryFilterTerm::MaxDepth(value) => {\n                route.max_depth = Some(route.max_depth.map_or(value, |current| current.min(value)));\n            }\n        }\n    }\n}\n\nfn apply_conjunctive_values_filter(bucket: &mut Vec<String>, incoming_values: Vec<String>) -> bool {\n    let mut values = Vec::new();\n    extend_unique(&mut values, incoming_values);\n    if values.is_empty() {\n        return true;\n    }\n    if bucket.is_empty() {\n      
  extend_unique(bucket, values);\n        return false;\n    }\n\n    bucket.retain(|existing| values.contains(existing));\n    bucket.is_empty()\n}\n\nfn canonical_entity_id_values(values: Vec<String>) -> Option<Vec<String>> {\n    values\n        .into_iter()\n        .map(|value| canonical_entity_id_value(&value))\n        .collect()\n}\n\nfn canonical_entity_id_value(value: &str) -> Option<String> {\n    EntityIdentity::from_json_array_text(value)\n        .ok()?\n        .as_json_array_text()\n        .ok()\n}\n\nfn canonical_history_column_name(name: &str, column_style: HistoryColumnStyle) -> Option<&str> {\n    match (column_style, name) {\n        (HistoryColumnStyle::Bare, \"start_commit_id\")\n        | (HistoryColumnStyle::Prefixed, \"lixcol_start_commit_id\") => Some(\"start_commit_id\"),\n        (HistoryColumnStyle::Bare, \"entity_id\")\n        | (HistoryColumnStyle::Prefixed, \"lixcol_entity_id\") => Some(\"entity_id\"),\n        (HistoryColumnStyle::Bare, \"schema_key\")\n        | (HistoryColumnStyle::Prefixed, \"lixcol_schema_key\") => Some(\"schema_key\"),\n        (HistoryColumnStyle::Bare, \"file_id\")\n        | (HistoryColumnStyle::Prefixed, \"lixcol_file_id\") => Some(\"file_id\"),\n        (HistoryColumnStyle::Bare, \"depth\") | (HistoryColumnStyle::Prefixed, \"lixcol_depth\") => {\n            Some(\"depth\")\n        }\n        _ => None,\n    }\n}\n\nfn nonnegative_u32(value: i64) -> Option<u32> {\n    u32::try_from(value).ok()\n}\n\nfn extend_unique(bucket: &mut Vec<String>, values: Vec<String>) {\n    for value in values {\n        if !bucket.contains(&value) {\n            bucket.push(value);\n        }\n    }\n}\n\nfn string_literal(expr: &Expr) -> Option<String> {\n    match expr {\n        Expr::Literal(ScalarValue::Utf8(Some(value)), _) => Some(value.clone()),\n        _ => None,\n    }\n}\n\nfn scalar_i64_literal(expr: &Expr) -> Option<i64> {\n    match expr {\n        Expr::Literal(ScalarValue::Int8(Some(value)), _) => 
Some(i64::from(*value)),\n        Expr::Literal(ScalarValue::Int16(Some(value)), _) => Some(i64::from(*value)),\n        Expr::Literal(ScalarValue::Int32(Some(value)), _) => Some(i64::from(*value)),\n        Expr::Literal(ScalarValue::Int64(Some(value)), _) => Some(*value),\n        Expr::Literal(ScalarValue::UInt8(Some(value)), _) => Some(i64::from(*value)),\n        Expr::Literal(ScalarValue::UInt16(Some(value)), _) => Some(i64::from(*value)),\n        Expr::Literal(ScalarValue::UInt32(Some(value)), _) => Some(i64::from(*value)),\n        Expr::Literal(ScalarValue::UInt64(Some(value)), _) => i64::try_from(*value).ok(),\n        _ => None,\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use datafusion::common::{Column, ScalarValue};\n    use datafusion::logical_expr::{BinaryExpr, Expr, Like, Operator};\n\n    use super::{parse_history_filter, HistoryColumnStyle, HistoryRoute};\n\n    #[test]\n    fn route_extraction_keeps_supported_terms_from_mixed_and_filter() {\n        let filter = and(\n            eq(col(\"start_commit_id\"), str_lit(\"commit-1\")),\n            Expr::Like(Like::new(\n                false,\n                Box::new(col(\"path\")),\n                Box::new(str_lit(\"/docs/%\")),\n                None,\n                false,\n            )),\n        );\n\n        assert!(\n            parse_history_filter(&filter, HistoryColumnStyle::Bare).is_none(),\n            \"mixed filters must not be advertised as exact pushdown\"\n        );\n\n        let route = HistoryRoute::from_filters(&[filter], HistoryColumnStyle::Bare);\n        assert_eq!(route.start_commit_ids, vec![\"commit-1\".to_string()]);\n    }\n\n    #[test]\n    fn route_extraction_does_not_partially_route_mixed_or_filter() {\n        let filter = or(\n            eq(col(\"start_commit_id\"), str_lit(\"commit-1\")),\n            Expr::Like(Like::new(\n                false,\n                Box::new(col(\"path\")),\n                Box::new(str_lit(\"/docs/%\")),\n                
None,\n                false,\n            )),\n        );\n\n        let route = HistoryRoute::from_filters(&[filter], HistoryColumnStyle::Bare);\n        assert!(\n            route.start_commit_ids.is_empty(),\n            \"partial OR pushdown would change SQL semantics\"\n        );\n    }\n\n    fn and(left: Expr, right: Expr) -> Expr {\n        binary(left, Operator::And, right)\n    }\n\n    fn or(left: Expr, right: Expr) -> Expr {\n        binary(left, Operator::Or, right)\n    }\n\n    fn eq(left: Expr, right: Expr) -> Expr {\n        binary(left, Operator::Eq, right)\n    }\n\n    fn binary(left: Expr, op: Operator, right: Expr) -> Expr {\n        Expr::BinaryExpr(BinaryExpr::new(Box::new(left), op, Box::new(right)))\n    }\n\n    fn col(name: &str) -> Expr {\n        Expr::Column(Column::from_name(name))\n    }\n\n    fn str_lit(value: &str) -> Expr {\n        Expr::Literal(ScalarValue::Utf8(Some(value.to_string())), None)\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/lix_state_provider.rs",
    "content": "use std::any::Any;\nuse std::collections::BTreeSet;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse datafusion::arrow::array::{ArrayRef, BooleanArray, StringArray, UInt64Array};\nuse datafusion::arrow::compute::{and, filter_record_batch};\nuse datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef};\nuse datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions};\nuse datafusion::catalog::{Session, TableProvider};\nuse datafusion::common::{not_impl_err, DFSchema, DataFusionError, Result, SchemaExt};\nuse datafusion::datasource::TableType;\nuse datafusion::execution::TaskContext;\nuse datafusion::logical_expr::dml::InsertOp;\nuse datafusion::logical_expr::expr::InList;\nuse datafusion::logical_expr::{BinaryExpr, Expr, Operator, TableProviderFilterPushDown};\nuse datafusion::physical_expr::{create_physical_expr, EquivalenceProperties, PhysicalExpr};\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};\nuse datafusion::physical_plan::stream::RecordBatchStreamAdapter;\nuse datafusion::physical_plan::{\n    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,\n};\nuse datafusion::prelude::SessionContext;\nuse datafusion::scalar::ScalarValue;\nuse futures_util::{stream, TryStreamExt};\nuse serde_json::Value as JsonValue;\n\nuse crate::entity_identity::EntityIdentity;\nuse crate::live_state::MaterializedLiveStateRow;\nuse crate::live_state::{\n    LiveStateFilter, LiveStateProjection, LiveStateReader, LiveStateScanRequest,\n};\nuse crate::sql2::dml::{InsertExec, InsertSink};\nuse crate::sql2::read_only::reject_read_only_stage_rows;\nuse crate::sql2::version_scope::{resolve_provider_version_ids, VersionBinding};\nuse crate::sql2::write_normalization::{InsertCell, SqlCell, UpdateAssignmentValues};\nuse crate::transaction::types::{TransactionJson, TransactionWriteRow};\nuse crate::version::VersionRefReader;\nuse crate::GLOBAL_VERSION_ID;\nuse 
crate::{parse_row_metadata_value, serialize_row_metadata, LixError, NullableKeyFilter};\n\nuse crate::sql2::{\n    SqlWriteContext, WriteAccess, WriteContextLiveStateReader, WriteContextVersionRefReader,\n};\nuse crate::transaction::types::{TransactionWrite, TransactionWriteMode};\n\nuse super::predicate_typecheck::validate_json_predicate_filters;\nuse super::result_metadata::json_field;\n\npub(crate) async fn register_lix_state_providers(\n    session: &SessionContext,\n    active_version_id: &str,\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n) -> Result<(), LixError> {\n    session\n        .register_table(\n            \"lix_state_by_version\",\n            Arc::new(LixStateProvider::by_version(\n                Arc::clone(&live_state),\n                Arc::clone(&version_ref),\n            )),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    session\n        .register_table(\n            \"lix_state\",\n            Arc::new(LixStateProvider::active_version(\n                active_version_id,\n                live_state,\n                version_ref,\n            )),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    Ok(())\n}\n\npub(crate) async fn register_lix_state_write_providers(\n    session: &SessionContext,\n    write_ctx: SqlWriteContext,\n) -> Result<(), LixError> {\n    session\n        .register_table(\n            \"lix_state_by_version\",\n            Arc::new(LixStateProvider::by_version_with_write(write_ctx.clone())),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    session\n        .register_table(\n            \"lix_state\",\n            Arc::new(LixStateProvider::active_version_with_write(write_ctx)),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    Ok(())\n}\n\npub(crate) struct LixStateProvider {\n    schema: SchemaRef,\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n    write_access: 
WriteAccess,\n    version_binding: VersionBinding,\n}\n\nimpl std::fmt::Debug for LixStateProvider {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixStateProvider\")\n            .field(\"write_access\", &self.write_access.is_write())\n            .finish()\n    }\n}\n\nimpl LixStateProvider {\n    pub(crate) fn active_version(\n        active_version_id: impl Into<String>,\n        live_state: Arc<dyn LiveStateReader>,\n        version_ref: Arc<dyn VersionRefReader>,\n    ) -> Self {\n        Self {\n            schema: lix_state_schema(),\n            live_state,\n            version_ref,\n            write_access: WriteAccess::read_only(),\n            version_binding: VersionBinding::active(active_version_id),\n        }\n    }\n\n    pub(crate) fn active_version_with_write(write_ctx: SqlWriteContext) -> Self {\n        let active_version_id = write_ctx.active_version_id();\n        let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone()));\n        let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone()));\n        Self {\n            schema: lix_state_schema(),\n            live_state,\n            version_ref,\n            write_access: WriteAccess::write(write_ctx),\n            version_binding: VersionBinding::active(active_version_id),\n        }\n    }\n\n    pub(crate) fn by_version(\n        live_state: Arc<dyn LiveStateReader>,\n        version_ref: Arc<dyn VersionRefReader>,\n    ) -> Self {\n        Self {\n            schema: lix_state_by_version_schema(),\n            live_state,\n            version_ref,\n            write_access: WriteAccess::read_only(),\n            version_binding: VersionBinding::explicit(),\n        }\n    }\n\n    pub(crate) fn by_version_with_write(write_ctx: SqlWriteContext) -> Self {\n        let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone()));\n        let version_ref = 
Arc::new(WriteContextVersionRefReader::new(write_ctx.clone()));\n        Self {\n            schema: lix_state_by_version_schema(),\n            live_state,\n            version_ref,\n            write_access: WriteAccess::write(write_ctx),\n            version_binding: VersionBinding::explicit(),\n        }\n    }\n}\n\n#[async_trait]\nimpl TableProvider for LixStateProvider {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n\n    fn table_type(&self) -> TableType {\n        TableType::Base\n    }\n\n    fn supports_filters_pushdown(\n        &self,\n        filters: &[&Expr],\n    ) -> Result<Vec<TableProviderFilterPushDown>> {\n        Ok(filters\n            .iter()\n            .map(|filter| {\n                if parse_lix_state_filter(filter).is_some() {\n                    TableProviderFilterPushDown::Exact\n                } else {\n                    TableProviderFilterPushDown::Unsupported\n                }\n            })\n            .collect())\n    }\n\n    async fn scan(\n        &self,\n        _state: &dyn Session,\n        projection: Option<&Vec<usize>>,\n        filters: &[Expr],\n        limit: Option<usize>,\n    ) -> Result<Arc<dyn datafusion::physical_plan::ExecutionPlan>> {\n        let route = LixStateByVersionRoute::from_filters(filters);\n        let projected_schema = projected_schema(&self.schema, projection)?;\n        let mut request = lix_state_scan_request(\n            &self.schema,\n            self.version_binding.active_version_id(),\n            projection,\n            &route,\n            limit,\n        );\n        if !route.contradictory {\n            request.filter.version_ids = resolve_provider_version_ids(\n                self.version_ref.as_ref(),\n                &self.version_binding,\n                request.filter.version_ids,\n            )\n            .await\n            .map_err(lix_error_to_datafusion_error)?;\n       
 }\n        Ok(Arc::new(LixStateScanExec::new(\n            Arc::clone(&self.live_state),\n            projected_schema,\n            request,\n        )))\n    }\n\n    async fn insert_into(\n        &self,\n        _state: &dyn Session,\n        input: Arc<dyn ExecutionPlan>,\n        insert_op: InsertOp,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if insert_op != InsertOp::Append {\n            return not_impl_err!(\"{insert_op} not implemented for lix_state yet\");\n        }\n\n        let active_version_id = self\n            .version_binding\n            .require_active_version_id(\"INSERT\")\n            .map_err(lix_error_to_datafusion_error)?;\n\n        let write_ctx = self.write_access.require_write(\"INSERT into lix_state\")?;\n\n        self.schema\n            .logically_equivalent_names_and_types(&input.schema())?;\n\n        let sink = LixStateInsertSink::new(\n            Arc::clone(&self.schema),\n            write_ctx.clone(),\n            active_version_id,\n        );\n        Ok(Arc::new(InsertExec::new(input, Arc::new(sink))))\n    }\n\n    async fn delete_from(\n        &self,\n        state: &dyn Session,\n        filters: Vec<Expr>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        let active_version_id = self\n            .version_binding\n            .require_active_version_id(\"DELETE\")\n            .map_err(lix_error_to_datafusion_error)?;\n\n        let write_ctx = self.write_access.require_write(\"DELETE FROM lix_state\")?;\n\n        let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?;\n        validate_json_predicate_filters(self.schema.as_ref(), &filters)?;\n        let physical_filters = filters\n            .iter()\n            .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props()))\n            .collect::<Result<Vec<_>>>()?;\n\n        let route = LixStateByVersionRoute::from_filters(&filters);\n        let request =\n            lix_state_scan_request(&self.schema, 
Some(&active_version_id), None, &route, None);\n\n        Ok(Arc::new(LixStateDeleteExec::new(\n            write_ctx.clone(),\n            Arc::clone(&self.schema),\n            active_version_id,\n            request,\n            physical_filters,\n        )))\n    }\n\n    async fn update(\n        &self,\n        state: &dyn Session,\n        assignments: Vec<(String, Expr)>,\n        filters: Vec<Expr>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        let active_version_id = self\n            .version_binding\n            .require_active_version_id(\"UPDATE\")\n            .map_err(lix_error_to_datafusion_error)?;\n\n        let write_ctx = self.write_access.require_write(\"UPDATE lix_state\")?;\n\n        validate_lix_state_update_assignments(&self.schema, &assignments)?;\n\n        let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?;\n        validate_json_predicate_filters(self.schema.as_ref(), &filters)?;\n        let physical_assignments = assignments\n            .iter()\n            .map(|(column_name, expr)| {\n                Ok((\n                    column_name.clone(),\n                    create_physical_expr(expr, &df_schema, state.execution_props())?,\n                ))\n            })\n            .collect::<Result<Vec<_>>>()?;\n        let physical_filters = filters\n            .iter()\n            .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props()))\n            .collect::<Result<Vec<_>>>()?;\n\n        let route = LixStateByVersionRoute::from_filters(&filters);\n        let request =\n            lix_state_scan_request(&self.schema, Some(&active_version_id), None, &route, None);\n\n        Ok(Arc::new(LixStateUpdateExec::new(\n            write_ctx.clone(),\n            Arc::clone(&self.schema),\n            active_version_id,\n            request,\n            physical_assignments,\n            physical_filters,\n        )))\n    }\n}\n\nstruct LixStateInsertSink {\n    write_ctx: SqlWriteContext,\n   
 version_binding: String,\n}\n\nimpl std::fmt::Debug for LixStateInsertSink {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixStateInsertSink\").finish()\n    }\n}\n\nimpl LixStateInsertSink {\n    fn new(_schema: SchemaRef, write_ctx: SqlWriteContext, version_binding: String) -> Self {\n        Self {\n            write_ctx,\n            version_binding,\n        }\n    }\n}\n\nimpl DisplayAs for LixStateInsertSink {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"LixStateInsertSink\")\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixStateInsertSink\"),\n        }\n    }\n}\n\n#[async_trait]\nimpl InsertSink for LixStateInsertSink {\n    async fn write_batches(\n        &self,\n        batches: Vec<RecordBatch>,\n        _context: &Arc<TaskContext>,\n    ) -> Result<u64> {\n        let mut rows = Vec::new();\n        for batch in batches {\n            rows.extend(lix_state_write_rows_from_batch(\n                &batch,\n                &self.version_binding,\n            )?);\n        }\n        reject_read_only_stage_rows(&rows, \"INSERT into lix_state\")?;\n        let count = u64::try_from(rows.len())\n            .map_err(|_| DataFusionError::Execution(\"INSERT row count overflow\".into()))?;\n\n        self.write_ctx\n            .stage_write(TransactionWrite::Rows {\n                mode: TransactionWriteMode::Insert,\n                rows,\n            })\n            .await\n            .map_err(lix_error_to_datafusion_error)?;\n\n        Ok(count)\n    }\n}\n\n#[allow(dead_code)]\nstruct LixStateDeleteExec {\n    write_ctx: SqlWriteContext,\n    table_schema: SchemaRef,\n    version_binding: String,\n    request: LiveStateScanRequest,\n    filters: Vec<Arc<dyn PhysicalExpr>>,\n    result_schema: 
SchemaRef,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixStateDeleteExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixStateDeleteExec\").finish()\n    }\n}\n\nimpl LixStateDeleteExec {\n    fn new(\n        write_ctx: SqlWriteContext,\n        table_schema: SchemaRef,\n        version_binding: String,\n        request: LiveStateScanRequest,\n        filters: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Self {\n        let result_schema = dml_count_schema();\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&result_schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Final,\n            Boundedness::Bounded,\n        );\n        Self {\n            write_ctx,\n            table_schema,\n            version_binding,\n            request,\n            filters,\n            result_schema,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixStateDeleteExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"LixStateDeleteExec(filters={})\", self.filters.len())\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixStateDeleteExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixStateDeleteExec {\n    fn name(&self) -> &str {\n        \"LixStateDeleteExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n        
    return Err(DataFusionError::Execution(\n                \"LixStateDeleteExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixStateDeleteExec only exposes one partition, got {partition}\"\n            )));\n        }\n        let write_ctx = self.write_ctx.clone();\n        let table_schema = Arc::clone(&self.table_schema);\n        let version_binding = self.version_binding.clone();\n        let request = self.request.clone();\n        let filters = self.filters.clone();\n        let result_schema = Arc::clone(&self.result_schema);\n        let stream_schema = Arc::clone(&result_schema);\n\n        let stream = stream::once(async move {\n            let rows = if request.limit == Some(0) {\n                Vec::new()\n            } else {\n                write_ctx\n                    .scan_live_state(&request)\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?\n            };\n            let source_batch = lix_state_record_batch(Arc::clone(&table_schema), &rows)\n                .map_err(lix_error_to_datafusion_error)?;\n            let matched_batch = filter_lix_state_batch(source_batch, &filters)?;\n            let write_rows =\n                lix_state_deletable_write_rows_from_batch(&matched_batch, &version_binding)?;\n            reject_read_only_stage_rows(&write_rows, \"DELETE FROM lix_state\")?;\n            let count = u64::try_from(write_rows.len())\n                .map_err(|_| DataFusionError::Execution(\"DELETE row count overflow\".to_string()))?;\n\n            if count > 0 {\n                write_ctx\n                    .stage_write(TransactionWrite::Rows {\n                        mode: 
TransactionWriteMode::Replace,\n                        rows: write_rows,\n                    })\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?;\n            }\n\n            Ok::<_, DataFusionError>(stream::iter(vec![Ok::<RecordBatch, DataFusionError>(\n                dml_count_batch(Arc::clone(&stream_schema), count)?,\n            )]))\n        })\n        .try_flatten();\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            result_schema,\n            stream,\n        )))\n    }\n}\n\n#[allow(dead_code)]\nstruct LixStateUpdateExec {\n    write_ctx: SqlWriteContext,\n    table_schema: SchemaRef,\n    version_binding: String,\n    request: LiveStateScanRequest,\n    assignments: Vec<(String, Arc<dyn PhysicalExpr>)>,\n    filters: Vec<Arc<dyn PhysicalExpr>>,\n    result_schema: SchemaRef,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixStateUpdateExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixStateUpdateExec\").finish()\n    }\n}\n\nimpl LixStateUpdateExec {\n    fn new(\n        write_ctx: SqlWriteContext,\n        table_schema: SchemaRef,\n        version_binding: String,\n        request: LiveStateScanRequest,\n        assignments: Vec<(String, Arc<dyn PhysicalExpr>)>,\n        filters: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Self {\n        let result_schema = dml_count_schema();\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&result_schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Final,\n            Boundedness::Bounded,\n        );\n        Self {\n            write_ctx,\n            table_schema,\n            version_binding,\n            request,\n            assignments,\n            filters,\n            result_schema,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixStateUpdateExec 
{\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(\n                    f,\n                    \"LixStateUpdateExec(assignments={}, filters={})\",\n                    self.assignments.len(),\n                    self.filters.len()\n                )\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixStateUpdateExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixStateUpdateExec {\n    fn name(&self) -> &str {\n        \"LixStateUpdateExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixStateUpdateExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixStateUpdateExec only exposes one partition, got {partition}\"\n            )));\n        }\n        let write_ctx = self.write_ctx.clone();\n        let table_schema = Arc::clone(&self.table_schema);\n        let version_binding = self.version_binding.clone();\n        let request = self.request.clone();\n        let assignments = self.assignments.clone();\n        let filters = self.filters.clone();\n        let result_schema = Arc::clone(&self.result_schema);\n        let stream_schema 
= Arc::clone(&result_schema);\n\n        let stream = stream::once(async move {\n            let rows = if request.limit == Some(0) {\n                Vec::new()\n            } else {\n                write_ctx\n                    .scan_live_state(&request)\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?\n            };\n            let source_batch = lix_state_record_batch(Arc::clone(&table_schema), &rows)\n                .map_err(lix_error_to_datafusion_error)?;\n            let matched_batch = filter_lix_state_batch(source_batch, &filters)?;\n            let write_rows = lix_state_update_write_rows_from_batch(\n                &matched_batch,\n                &assignments,\n                &version_binding,\n            )?;\n            reject_read_only_stage_rows(&write_rows, \"UPDATE lix_state\")?;\n            let count = u64::try_from(write_rows.len())\n                .map_err(|_| DataFusionError::Execution(\"UPDATE row count overflow\".to_string()))?;\n\n            if count > 0 {\n                write_ctx\n                    .stage_write(TransactionWrite::Rows {\n                        mode: TransactionWriteMode::Replace,\n                        rows: write_rows,\n                    })\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?;\n            }\n\n            Ok::<_, DataFusionError>(stream::iter(vec![Ok::<RecordBatch, DataFusionError>(\n                dml_count_batch(Arc::clone(&stream_schema), count)?,\n            )]))\n        })\n        .try_flatten();\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            result_schema,\n            stream,\n        )))\n    }\n}\n\nfn validate_lix_state_update_assignments(\n    schema: &SchemaRef,\n    assignments: &[(String, Expr)],\n) -> Result<()> {\n    for (column_name, _) in assignments {\n        schema.field_with_name(column_name).map_err(|_| {\n            DataFusionError::Plan(format!(\n              
  \"UPDATE lix_state failed: column '{column_name}' does not exist\"\n            ))\n        })?;\n        if !matches!(column_name.as_str(), \"snapshot_content\" | \"metadata\") {\n            return Err(DataFusionError::Execution(format!(\n                \"UPDATE lix_state cannot stage read-only column '{column_name}'\"\n            )));\n        }\n    }\n    Ok(())\n}\n\nfn filter_lix_state_batch(\n    batch: RecordBatch,\n    filters: &[Arc<dyn PhysicalExpr>],\n) -> Result<RecordBatch> {\n    let Some(mask) = evaluate_lix_state_filters(&batch, filters)? else {\n        return Ok(batch);\n    };\n    Ok(filter_record_batch(&batch, &mask)?)\n}\n\nfn evaluate_lix_state_filters(\n    batch: &RecordBatch,\n    filters: &[Arc<dyn PhysicalExpr>],\n) -> Result<Option<BooleanArray>> {\n    if filters.is_empty() {\n        return Ok(None);\n    }\n\n    let mut combined_mask: Option<BooleanArray> = None;\n    for filter in filters {\n        let result = filter.evaluate(batch)?;\n        let array = result.into_array(batch.num_rows())?;\n        let bool_array = array\n            .as_any()\n            .downcast_ref::<BooleanArray>()\n            .ok_or_else(|| {\n                // Shared by both DELETE and UPDATE paths, so the message stays DML-neutral.\n                DataFusionError::Execution(\"lix_state DML filter was not boolean\".to_string())\n            })?;\n        let normalized = bool_array\n            .iter()\n            .map(|value| Some(value == Some(true)))\n            .collect::<BooleanArray>();\n        combined_mask = Some(match combined_mask {\n            Some(existing) => and(&existing, &normalized)?,\n            None => normalized,\n        });\n    }\n    Ok(combined_mask)\n}\n\nfn lix_state_stageable_write_rows_from_batch(\n    batch: &RecordBatch,\n    version_binding: &str,\n) -> Result<Vec<TransactionWriteRow>> {\n    let mut rows = lix_state_write_rows_from_batch(batch, version_binding)?;\n    for row in &mut rows {\n        row.created_at = None;\n        row.updated_at = None;\n        row.change_id = None;\n        
row.commit_id = None;\n    }\n    Ok(rows)\n}\n\nfn lix_state_update_write_rows_from_batch(\n    batch: &RecordBatch,\n    assignments: &[(String, Arc<dyn PhysicalExpr>)],\n    version_binding: &str,\n) -> Result<Vec<TransactionWriteRow>> {\n    let assignment_values = UpdateAssignmentValues::evaluate(batch, assignments)?;\n    (0..batch.num_rows())\n        .map(|row_index| {\n            let global = optional_bool_value(batch, row_index, \"global\")?.unwrap_or(false);\n            let version_id =\n                optional_string_value(batch, row_index, \"version_id\")?.unwrap_or_else(|| {\n                    if global {\n                        GLOBAL_VERSION_ID.to_string()\n                    } else {\n                        version_binding.to_string()\n                    }\n                });\n\n            Ok(TransactionWriteRow {\n                entity_id: Some(\n                    EntityIdentity::from_json_array_text(&required_string_value(\n                        batch,\n                        row_index,\n                        \"entity_id\",\n                    )?)\n                    .map_err(|error| {\n                        DataFusionError::Execution(format!(\n                            \"lix_state UPDATE has invalid entity_id: {error}\"\n                        ))\n                    })?,\n                ),\n                schema_key: required_string_value(batch, row_index, \"schema_key\")?,\n                file_id: optional_string_value(batch, row_index, \"file_id\")?,\n                snapshot: update_optional_json_value(\n                    batch,\n                    &assignment_values,\n                    row_index,\n                    \"snapshot_content\",\n                )?,\n                metadata: update_optional_metadata_value(\n                    batch,\n                    &assignment_values,\n                    row_index,\n                    \"metadata\",\n                    \"lix_state\",\n                
)?,\n                origin: None,\n                created_at: None,\n                updated_at: None,\n                global,\n                change_id: None,\n                commit_id: None,\n                untracked: optional_bool_value(batch, row_index, \"untracked\")?.unwrap_or(false),\n                version_id,\n            })\n        })\n        .collect()\n}\n\nfn lix_state_deletable_write_rows_from_batch(\n    batch: &RecordBatch,\n    version_binding: &str,\n) -> Result<Vec<TransactionWriteRow>> {\n    let mut rows = lix_state_stageable_write_rows_from_batch(batch, version_binding)?;\n    for row in &mut rows {\n        row.snapshot = None;\n    }\n    Ok(rows)\n}\n\nfn update_optional_string_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<String>> {\n    match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? {\n        InsertCell::Omitted | InsertCell::Provided(SqlCell::Null) => Ok(None),\n        InsertCell::Provided(SqlCell::Value(\n            ScalarValue::Utf8(Some(value))\n            | ScalarValue::Utf8View(Some(value))\n            | ScalarValue::LargeUtf8(Some(value)),\n        )) => Ok(Some(value)),\n        InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!(\n            \"UPDATE lix_state expected text-compatible column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn update_optional_metadata_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n    context: &str,\n) -> Result<Option<TransactionJson>> {\n    update_optional_string_value(batch, assignment_values, row_index, column_name)?\n        .map(|value| {\n            let metadata = parse_row_metadata_value(&value, context)\n                .map_err(super::error::lix_error_to_datafusion_error)?;\n            
TransactionJson::from_value(metadata, &format!(\"{context} metadata\"))\n                .map_err(super::error::lix_error_to_datafusion_error)\n        })\n        .transpose()\n}\n\nfn update_optional_json_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<TransactionJson>> {\n    update_optional_string_value(batch, assignment_values, row_index, column_name)?\n        .map(|value| parse_snapshot_json(&value, column_name))\n        .transpose()\n}\n\nfn dml_count_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![Field::new(\n        \"count\",\n        DataType::UInt64,\n        false,\n    )]))\n}\n\nfn dml_count_batch(schema: SchemaRef, count: u64) -> Result<RecordBatch> {\n    RecordBatch::try_new(\n        schema,\n        vec![Arc::new(UInt64Array::from(vec![count])) as ArrayRef],\n    )\n    .map_err(DataFusionError::from)\n}\n\nfn lix_state_write_rows_from_batch(\n    batch: &RecordBatch,\n    version_binding: &str,\n) -> Result<Vec<TransactionWriteRow>> {\n    (0..batch.num_rows())\n        .map(|row_index| {\n            let global = optional_bool_value(batch, row_index, \"global\")?.unwrap_or(false);\n            let version_id =\n                optional_string_value(batch, row_index, \"version_id\")?.unwrap_or_else(|| {\n                    if global {\n                        GLOBAL_VERSION_ID.to_string()\n                    } else {\n                        version_binding.to_string()\n                    }\n                });\n\n            Ok(TransactionWriteRow {\n                entity_id: Some(\n                    EntityIdentity::from_json_array_text(&required_string_value(\n                        batch,\n                        row_index,\n                        \"entity_id\",\n                    )?)\n                    .map_err(|error| {\n                        DataFusionError::Execution(format!(\n                            
\"lix_state INSERT has invalid entity_id: {error}\"\n                        ))\n                    })?,\n                ),\n                schema_key: required_string_value(batch, row_index, \"schema_key\")?,\n                file_id: optional_string_value(batch, row_index, \"file_id\")?,\n                snapshot: optional_json_value(batch, row_index, \"snapshot_content\")?,\n                metadata: optional_metadata_value(batch, row_index, \"metadata\", \"lix_state\")?,\n                origin: None,\n                created_at: optional_string_value(batch, row_index, \"created_at\")?,\n                updated_at: optional_string_value(batch, row_index, \"updated_at\")?,\n                global,\n                change_id: optional_string_value(batch, row_index, \"change_id\")?,\n                commit_id: optional_string_value(batch, row_index, \"commit_id\")?,\n                untracked: optional_bool_value(batch, row_index, \"untracked\")?.unwrap_or(false),\n                version_id,\n            })\n        })\n        .collect()\n}\n\nfn required_string_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<String> {\n    optional_string_value(batch, row_index, column_name)?.ok_or_else(|| {\n        // Called from the INSERT, UPDATE, and DELETE row builders, so keep the message generic.\n        DataFusionError::Execution(format!(\n            \"lix_state requires non-null text column '{column_name}'\"\n        ))\n    })\n}\n\nfn optional_string_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<String>> {\n    match optional_scalar_value(batch, row_index, column_name)? 
{\n        None\n        | Some(ScalarValue::Null)\n        | Some(ScalarValue::Utf8(None))\n        | Some(ScalarValue::Utf8View(None))\n        | Some(ScalarValue::LargeUtf8(None)) => Ok(None),\n        Some(ScalarValue::Utf8(Some(value)))\n        | Some(ScalarValue::Utf8View(Some(value)))\n        | Some(ScalarValue::LargeUtf8(Some(value))) => Ok(Some(value)),\n        Some(other) => Err(DataFusionError::Execution(format!(\n            \"lix_state expected text-compatible column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn optional_metadata_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n    context: &str,\n) -> Result<Option<TransactionJson>> {\n    optional_string_value(batch, row_index, column_name)?\n        .map(|value| {\n            let metadata = parse_row_metadata_value(&value, context)\n                .map_err(super::error::lix_error_to_datafusion_error)?;\n            TransactionJson::from_value(metadata, &format!(\"{context} metadata\"))\n                .map_err(super::error::lix_error_to_datafusion_error)\n        })\n        .transpose()\n}\n\nfn optional_json_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<TransactionJson>> {\n    optional_string_value(batch, row_index, column_name)?\n        .map(|value| parse_snapshot_json(&value, column_name))\n        .transpose()\n}\n\nfn parse_snapshot_json(value: &str, column_name: &str) -> Result<TransactionJson> {\n    let parsed = serde_json::from_str::<JsonValue>(value).map_err(|error| {\n        DataFusionError::Execution(format!(\n            \"lix_state expected valid JSON in column '{column_name}': {error}\"\n        ))\n    })?;\n    TransactionJson::from_value(parsed, &format!(\"lix_state {column_name}\"))\n        .map_err(super::error::lix_error_to_datafusion_error)\n}\n\nfn optional_bool_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> 
Result<Option<bool>> {\n    match optional_scalar_value(batch, row_index, column_name)? {\n        Some(ScalarValue::Boolean(Some(value))) => Ok(Some(value)),\n        None | Some(ScalarValue::Null) | Some(ScalarValue::Boolean(None)) => Ok(None),\n        Some(other) => Err(DataFusionError::Execution(format!(\n            \"lix_state expected boolean column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn optional_scalar_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<ScalarValue>> {\n    let schema = batch.schema();\n    let column_index = match schema.index_of(column_name) {\n        Ok(column_index) => column_index,\n        Err(_) => return Ok(None),\n    };\n\n    if row_index >= batch.num_rows() {\n        return Err(DataFusionError::Execution(format!(\n            \"row index {row_index} out of bounds for lix_state batch with {} rows\",\n            batch.num_rows()\n        )));\n    }\n\n    ScalarValue::try_from_array(batch.column(column_index).as_ref(), row_index)\n        .map(Some)\n        .map_err(|error| {\n            DataFusionError::Execution(format!(\n                \"failed to decode lix_state column '{column_name}' at row {row_index}: {error}\"\n            ))\n        })\n}\n\nstruct LixStateScanExec {\n    live_state: Arc<dyn LiveStateReader>,\n    schema: SchemaRef,\n    request: LiveStateScanRequest,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixStateScanExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixStateScanExec\").finish()\n    }\n}\n\nimpl LixStateScanExec {\n    fn new(\n        live_state: Arc<dyn LiveStateReader>,\n        schema: SchemaRef,\n        request: LiveStateScanRequest,\n    ) -> Self {\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(schema.clone()),\n            Partitioning::UnknownPartitioning(1),\n            
EmissionType::Incremental,\n            Boundedness::Bounded,\n        );\n        Self {\n            live_state,\n            schema,\n            request,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixStateScanExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"LixStateScanExec(limit={:?})\", self.request.limit)\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixStateScanExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixStateScanExec {\n    fn name(&self) -> &str {\n        \"LixStateScanExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixStateScanExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixStateScanExec only exposes one partition, got {partition}\"\n            )));\n        }\n\n        let live_state = Arc::clone(&self.live_state);\n        let schema = Arc::clone(&self.schema);\n        let request = self.request.clone();\n        let stream_schema = Arc::clone(&schema);\n        let stream = stream::once(async move {\n            let rows = if request.limit == 
Some(0) {\n                Vec::new()\n            } else {\n                live_state\n                    .scan_rows(&request)\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?\n            };\n            let batch = lix_state_record_batch(Arc::clone(&stream_schema), &rows)\n                .map_err(lix_error_to_datafusion_error)?;\n            Ok::<_, DataFusionError>(stream::iter(vec![Ok::<RecordBatch, DataFusionError>(\n                batch,\n            )]))\n        })\n        .try_flatten();\n        Ok(Box::pin(RecordBatchStreamAdapter::new(schema, stream)))\n    }\n}\n\nfn lix_state_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![\n        json_field(\"entity_id\", false),\n        Field::new(\"schema_key\", DataType::Utf8, false),\n        Field::new(\"file_id\", DataType::Utf8, true),\n        json_field(\"snapshot_content\", true),\n        json_field(\"metadata\", true),\n        Field::new(\"created_at\", DataType::Utf8, true),\n        Field::new(\"updated_at\", DataType::Utf8, true),\n        Field::new(\"global\", DataType::Boolean, true),\n        Field::new(\"change_id\", DataType::Utf8, true),\n        Field::new(\"commit_id\", DataType::Utf8, true),\n        Field::new(\"untracked\", DataType::Boolean, true),\n    ]))\n}\n\nfn lix_state_by_version_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![\n        json_field(\"entity_id\", false),\n        Field::new(\"schema_key\", DataType::Utf8, false),\n        Field::new(\"file_id\", DataType::Utf8, true),\n        json_field(\"snapshot_content\", true),\n        json_field(\"metadata\", true),\n        Field::new(\"created_at\", DataType::Utf8, true),\n        Field::new(\"updated_at\", DataType::Utf8, true),\n        Field::new(\"global\", DataType::Boolean, true),\n        Field::new(\"change_id\", DataType::Utf8, true),\n        Field::new(\"commit_id\", DataType::Utf8, true),\n        Field::new(\"untracked\", DataType::Boolean, 
true),\n        Field::new(\"version_id\", DataType::Utf8, false),\n    ]))\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\nstruct LixStateByVersionRoute {\n    schema_keys: Option<BTreeSet<String>>,\n    version_ids: Option<BTreeSet<String>>,\n    entity_ids: Option<BTreeSet<String>>,\n    file_id: Option<NullableKeyFilter<String>>,\n    contradictory: bool,\n}\n\nimpl LixStateByVersionRoute {\n    fn from_filters(filters: &[Expr]) -> Self {\n        let mut route = Self::default();\n        for filter in filters {\n            let Some(predicates) = parse_lix_state_filters(filter) else {\n                continue;\n            };\n            for predicate in predicates {\n                match predicate {\n                    LixStateFilterPredicate::SchemaKeys(values) => {\n                        merge_string_route_slot(\n                            &mut route.schema_keys,\n                            values,\n                            &mut route.contradictory,\n                        );\n                    }\n                    LixStateFilterPredicate::VersionIds(values) => {\n                        merge_string_route_slot(\n                            &mut route.version_ids,\n                            values,\n                            &mut route.contradictory,\n                        );\n                    }\n                    LixStateFilterPredicate::EntityIds(values) => {\n                        merge_string_route_slot(\n                            &mut route.entity_ids,\n                            values,\n                            &mut route.contradictory,\n                        );\n                    }\n                    LixStateFilterPredicate::FileId(filter) => {\n                        merge_nullable_key_route_slot(\n                            &mut route.file_id,\n                            filter,\n                            &mut route.contradictory,\n                        );\n                    }\n              
  }\n            }\n        }\n        route\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nenum LixStateFilterPredicate {\n    SchemaKeys(BTreeSet<String>),\n    VersionIds(BTreeSet<String>),\n    EntityIds(BTreeSet<String>),\n    FileId(NullableKeyFilter<String>),\n}\n\nfn lix_state_scan_request(\n    schema: &SchemaRef,\n    version_binding: Option<&str>,\n    projection: Option<&Vec<usize>>,\n    route: &LixStateByVersionRoute,\n    limit: Option<usize>,\n) -> LiveStateScanRequest {\n    let projection = LiveStateProjection {\n        columns: projection_column_names(schema, projection),\n    };\n    let mut filter = LiveStateFilter {\n        schema_keys: route\n            .schema_keys\n            .as_ref()\n            .map(|values| values.iter().cloned().collect())\n            .unwrap_or_default(),\n        entity_ids: route\n            .entity_ids\n            .as_ref()\n            .map(|values| {\n                values\n                    .iter()\n                    .filter_map(|value| EntityIdentity::from_json_array_text(value).ok())\n                    .collect()\n            })\n            .unwrap_or_default(),\n        version_ids: version_binding\n            .map(|value| vec![value.to_string()])\n            .or_else(|| {\n                route\n                    .version_ids\n                    .as_ref()\n                    .map(|values| values.iter().cloned().collect())\n            })\n            .unwrap_or_default(),\n        ..LiveStateFilter::default()\n    };\n    if let Some(file_id) = route.file_id.clone() {\n        filter.file_ids.push(file_id);\n    }\n\n    LiveStateScanRequest {\n        filter,\n        projection,\n        limit: route.contradictory.then_some(0).or(limit),\n    }\n}\n\nfn projection_column_names(schema: &SchemaRef, projection: Option<&Vec<usize>>) -> Vec<String> {\n    projection\n        .map(|indices| {\n            indices\n                .iter()\n                .filter_map(|index| 
schema.fields().get(*index))\n                .map(|field| field.name().to_string())\n                .collect::<Vec<_>>()\n        })\n        .unwrap_or_default()\n}\n\nfn merge_string_route_slot(\n    slot: &mut Option<BTreeSet<String>>,\n    values: BTreeSet<String>,\n    contradictory: &mut bool,\n) {\n    if values.is_empty() {\n        return;\n    }\n\n    match slot {\n        Some(existing) => {\n            existing.retain(|value| values.contains(value));\n            if existing.is_empty() {\n                *contradictory = true;\n            }\n        }\n        None => *slot = Some(values),\n    }\n}\n\nfn merge_nullable_key_route_slot(\n    slot: &mut Option<NullableKeyFilter<String>>,\n    value: NullableKeyFilter<String>,\n    contradictory: &mut bool,\n) {\n    match slot {\n        Some(existing) if *existing != value => *contradictory = true,\n        Some(_) => {}\n        None => *slot = Some(value),\n    }\n}\n\nfn parse_lix_state_filter(expr: &Expr) -> Option<LixStateFilterPredicate> {\n    parse_lix_state_filters(expr)?.into_iter().next()\n}\n\nfn parse_lix_state_filters(expr: &Expr) -> Option<Vec<LixStateFilterPredicate>> {\n    match expr {\n        Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => {\n            let mut predicates = parse_lix_state_filters(&binary_expr.left)?;\n            predicates.extend(parse_lix_state_filters(&binary_expr.right)?);\n            Some(predicates)\n        }\n        Expr::BinaryExpr(binary_expr) => {\n            parse_lix_state_binary_filter(binary_expr).map(|predicate| vec![predicate])\n        }\n        Expr::InList(in_list) => {\n            parse_lix_state_in_list_filter(in_list).map(|predicate| vec![predicate])\n        }\n        Expr::IsNull(expr) => parse_lix_state_null_filter(expr).map(|predicate| vec![predicate]),\n        _ => None,\n    }\n}\n\nfn parse_lix_state_binary_filter(binary_expr: &BinaryExpr) -> Option<LixStateFilterPredicate> {\n    if binary_expr.op != 
Operator::Eq {\n        return None;\n    }\n\n    parse_lix_state_column_literal_filter(&binary_expr.left, &binary_expr.right)\n        .or_else(|| parse_lix_state_column_literal_filter(&binary_expr.right, &binary_expr.left))\n}\n\nfn parse_lix_state_in_list_filter(in_list: &InList) -> Option<LixStateFilterPredicate> {\n    if in_list.negated {\n        return None;\n    }\n    let Expr::Column(column) = in_list.expr.as_ref() else {\n        return None;\n    };\n\n    let values = in_list\n        .list\n        .iter()\n        .map(string_expr_literal)\n        .collect::<Option<Vec<_>>>()?;\n    if values.is_empty() {\n        return None;\n    }\n\n    let values = values.into_iter().collect::<BTreeSet<_>>();\n    match column.name.as_str() {\n        \"schema_key\" => Some(LixStateFilterPredicate::SchemaKeys(values)),\n        \"version_id\" => Some(LixStateFilterPredicate::VersionIds(values)),\n        \"entity_id\" => canonical_entity_id_values(values).map(LixStateFilterPredicate::EntityIds),\n        _ => None,\n    }\n}\n\nfn parse_lix_state_null_filter(expr: &Expr) -> Option<LixStateFilterPredicate> {\n    let Expr::Column(column) = expr else {\n        return None;\n    };\n\n    match column.name.as_str() {\n        \"file_id\" => Some(LixStateFilterPredicate::FileId(NullableKeyFilter::Null)),\n        _ => None,\n    }\n}\n\nfn parse_lix_state_column_literal_filter(\n    column_expr: &Expr,\n    literal_expr: &Expr,\n) -> Option<LixStateFilterPredicate> {\n    let Expr::Column(column) = column_expr else {\n        return None;\n    };\n\n    match column.name.as_str() {\n        \"schema_key\" => string_expr_literal(literal_expr)\n            .map(|value| LixStateFilterPredicate::SchemaKeys(BTreeSet::from([value]))),\n        \"version_id\" => string_expr_literal(literal_expr)\n            .map(|value| LixStateFilterPredicate::VersionIds(BTreeSet::from([value]))),\n        \"entity_id\" => string_expr_literal(literal_expr)\n            
.and_then(|value| canonical_entity_id_value(&value))\n            .map(|value| LixStateFilterPredicate::EntityIds(BTreeSet::from([value]))),\n        \"file_id\" => nullable_key_literal(literal_expr).map(LixStateFilterPredicate::FileId),\n        _ => None,\n    }\n}\n\nfn canonical_entity_id_values(values: BTreeSet<String>) -> Option<BTreeSet<String>> {\n    values\n        .into_iter()\n        .map(|value| canonical_entity_id_value(&value))\n        .collect()\n}\n\nfn canonical_entity_id_value(value: &str) -> Option<String> {\n    EntityIdentity::from_json_array_text(value)\n        .ok()?\n        .as_json_array_text()\n        .ok()\n}\n\nfn nullable_key_literal(expr: &Expr) -> Option<NullableKeyFilter<String>> {\n    if is_null_literal(expr) {\n        return Some(NullableKeyFilter::Null);\n    }\n    string_expr_literal(expr).map(NullableKeyFilter::Value)\n}\n\nfn string_expr_literal(expr: &Expr) -> Option<String> {\n    let Expr::Literal(literal, _) = expr else {\n        return None;\n    };\n    match literal {\n        ScalarValue::Utf8(Some(value))\n        | ScalarValue::Utf8View(Some(value))\n        | ScalarValue::LargeUtf8(Some(value)) => Some(value.clone()),\n        _ => None,\n    }\n}\n\nfn is_null_literal(expr: &Expr) -> bool {\n    matches!(expr, Expr::Literal(ScalarValue::Null, _))\n}\n\nfn lix_state_record_batch(\n    schema: SchemaRef,\n    rows: &[MaterializedLiveStateRow],\n) -> Result<RecordBatch, LixError> {\n    if schema.fields().is_empty() {\n        let options = RecordBatchOptions::new().with_row_count(Some(rows.len()));\n        return RecordBatch::try_new_with_options(schema, vec![], &options).map_err(|error| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"sql2 failed to build zero-column lix_state batch: {error}\"),\n            )\n        });\n    }\n\n    let columns = schema\n        .fields()\n        .iter()\n        .map(|field| {\n            Ok(match field.name().as_str() 
{\n                \"entity_id\" => Arc::new(StringArray::from(\n                    rows.iter()\n                        .map(|row| row.entity_id.as_json_array_text().map(Some))\n                        .collect::<std::result::Result<Vec<_>, LixError>>()?,\n                )) as ArrayRef,\n                \"schema_key\" => string_array(rows.iter().map(|row| Some(row.schema_key.as_str()))),\n                \"file_id\" => string_array(rows.iter().map(|row| row.file_id.as_deref())),\n                \"snapshot_content\" => {\n                    string_array(rows.iter().map(|row| row.snapshot_content.as_deref()))\n                }\n                \"metadata\" => Arc::new(StringArray::from(\n                    rows.iter()\n                        .map(|row| row.metadata.as_ref().map(serialize_row_metadata))\n                        .collect::<Vec<_>>(),\n                )),\n                \"created_at\" => string_array(rows.iter().map(|row| Some(row.created_at.as_str()))),\n                \"updated_at\" => string_array(rows.iter().map(|row| Some(row.updated_at.as_str()))),\n                \"global\" => Arc::new(BooleanArray::from(\n                    rows.iter().map(|row| row.global).collect::<Vec<_>>(),\n                )) as ArrayRef,\n                \"change_id\" => string_array(rows.iter().map(|row| row.change_id.as_deref())),\n                \"commit_id\" => string_array(rows.iter().map(|row| row.commit_id.as_deref())),\n                \"untracked\" => Arc::new(BooleanArray::from(\n                    rows.iter().map(|row| row.untracked).collect::<Vec<_>>(),\n                )) as ArrayRef,\n                \"version_id\" => string_array(rows.iter().map(|row| Some(row.version_id.as_str()))),\n                other => {\n                    return Err(LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        format!(\"sql2 does not support lix_state column '{other}'\"),\n                    ))\n                }\n          
  })\n        })\n        .collect::<Result<Vec<_>, _>>()?;\n\n    RecordBatch::try_new(schema, columns).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"sql2 failed to build lix_state_by_version batch: {error}\"),\n        )\n    })\n}\n\nfn string_array<'a>(values: impl Iterator<Item = Option<&'a str>>) -> ArrayRef {\n    let values = values\n        .map(|value| value.map(ToOwned::to_owned))\n        .collect::<Vec<_>>();\n    Arc::new(StringArray::from(values)) as ArrayRef\n}\n\nfn projected_schema(schema: &SchemaRef, projection: Option<&Vec<usize>>) -> Result<SchemaRef> {\n    let Some(projection) = projection else {\n        return Ok(Arc::clone(schema));\n    };\n\n    let projected = schema.project(projection).map_err(|error| {\n        DataFusionError::Execution(format!(\"sql2 failed to project lix_state schema: {error}\"))\n    })?;\n    Ok(Arc::new(projected))\n}\n\nfn datafusion_error_to_lix_error(error: DataFusionError) -> LixError {\n    super::error::datafusion_error_to_lix_error(error)\n}\n\nfn lix_error_to_datafusion_error(error: LixError) -> DataFusionError {\n    super::error::lix_error_to_datafusion_error(error)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{\n        lix_state_scan_request, lix_state_schema, lix_state_write_rows_from_batch,\n        parse_lix_state_filter, register_lix_state_write_providers, LixStateByVersionRoute,\n        LixStateDeleteExec, LixStateFilterPredicate, LixStateInsertSink, LixStateProvider,\n        LixStateUpdateExec,\n    };\n    use crate::binary_cas::BlobDataReader;\n    use crate::functions::{\n        FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider,\n    };\n    use crate::sql2::dml::{InsertExec, InsertSink};\n    use crate::sql2::{SqlWriteContext, SqlWriteExecutionContext};\n    use crate::transaction::types::{\n        TransactionJson, TransactionWrite, TransactionWriteMode, TransactionWriteOutcome,\n        
TransactionWriteRow,\n    };\n    use crate::version::{VersionHead, VersionRefReader};\n    use crate::{\n        entity_identity::EntityIdentity,\n        live_state::{\n            LiveStateReader, LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow,\n        },\n    };\n    use crate::{LixError, NullableKeyFilter};\n    use async_trait::async_trait;\n    use datafusion::arrow::array::{ArrayRef, BooleanArray, StringArray, UInt64Array};\n    use datafusion::arrow::datatypes::DataType;\n    use datafusion::arrow::record_batch::RecordBatch;\n    use datafusion::catalog::TableProvider;\n    use datafusion::common::{Column, DataFusionError};\n    use datafusion::execution::TaskContext;\n    use datafusion::logical_expr::dml::InsertOp;\n    use datafusion::logical_expr::expr::InList;\n    use datafusion::logical_expr::{BinaryExpr, Expr, Operator};\n    use datafusion::physical_expr::EquivalenceProperties;\n    use datafusion::physical_plan::empty::EmptyExec;\n    use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};\n    use datafusion::physical_plan::stream::RecordBatchStreamAdapter;\n    use datafusion::physical_plan::{\n        DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,\n    };\n    use datafusion::prelude::SessionContext;\n    use datafusion::scalar::ScalarValue;\n    use futures_util::stream;\n    use serde_json::json;\n    use std::collections::BTreeSet;\n    use std::sync::Arc;\n\n    struct EmptyLiveStateReader;\n    struct EmptyVersionRefReader;\n    #[allow(dead_code)]\n    struct RowsLiveStateReader {\n        rows: Vec<MaterializedLiveStateRow>,\n    }\n    struct DummyBlobReader;\n\n    #[derive(Default)]\n    struct DummyWriteContext {\n        rows: Vec<MaterializedLiveStateRow>,\n    }\n\n    #[derive(Default)]\n    struct CapturingWriteContext {\n        rows: Vec<MaterializedLiveStateRow>,\n        writes: Vec<TransactionWrite>,\n    }\n\n    struct 
SingleBatchExec {\n        batch: RecordBatch,\n        properties: Arc<PlanProperties>,\n    }\n\n    impl std::fmt::Debug for SingleBatchExec {\n        fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n            f.debug_struct(\"SingleBatchExec\").finish()\n        }\n    }\n\n    impl SingleBatchExec {\n        fn new(batch: RecordBatch) -> Self {\n            let properties = PlanProperties::new(\n                EquivalenceProperties::new(batch.schema()),\n                Partitioning::UnknownPartitioning(1),\n                EmissionType::Incremental,\n                Boundedness::Bounded,\n            );\n            Self {\n                batch,\n                properties: Arc::new(properties),\n            }\n        }\n    }\n\n    impl DisplayAs for SingleBatchExec {\n        fn fmt_as(\n            &self,\n            _t: DisplayFormatType,\n            f: &mut std::fmt::Formatter<'_>,\n        ) -> std::fmt::Result {\n            write!(f, \"SingleBatchExec\")\n        }\n    }\n\n    impl ExecutionPlan for SingleBatchExec {\n        fn name(&self) -> &str {\n            \"SingleBatchExec\"\n        }\n\n        fn as_any(&self) -> &dyn std::any::Any {\n            self\n        }\n\n        fn properties(&self) -> &Arc<PlanProperties> {\n            &self.properties\n        }\n\n        fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n            Vec::new()\n        }\n\n        fn with_new_children(\n            self: Arc<Self>,\n            children: Vec<Arc<dyn ExecutionPlan>>,\n        ) -> datafusion::common::Result<Arc<dyn ExecutionPlan>> {\n            if !children.is_empty() {\n                return Err(DataFusionError::Execution(\n                    \"SingleBatchExec does not accept children\".to_string(),\n                ));\n            }\n            Ok(self)\n        }\n\n        fn execute(\n            &self,\n            partition: usize,\n            _context: Arc<TaskContext>,\n        ) -> 
datafusion::common::Result<SendableRecordBatchStream> {\n            if partition != 0 {\n                return Err(DataFusionError::Execution(format!(\n                    \"SingleBatchExec only exposes one partition, got {partition}\"\n                )));\n            }\n\n            let batch = self.batch.clone();\n            let schema = batch.schema();\n            let stream = stream::iter(vec![Ok(batch)]);\n            Ok(Box::pin(RecordBatchStreamAdapter::new(schema, stream)))\n        }\n    }\n\n    #[async_trait]\n    impl LiveStateReader for EmptyLiveStateReader {\n        async fn scan_rows(\n            &self,\n            _request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(vec![])\n        }\n\n        async fn load_row(\n            &self,\n            _request: &LiveStateRowRequest,\n        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n            Ok(None)\n        }\n    }\n\n    #[async_trait]\n    impl VersionRefReader for EmptyVersionRefReader {\n        async fn load_head(&self, _version_id: &str) -> Result<Option<VersionHead>, LixError> {\n            Ok(None)\n        }\n\n        async fn scan_heads(&self) -> Result<Vec<VersionHead>, LixError> {\n            Ok(Vec::new())\n        }\n    }\n\n    fn empty_version_ref() -> Arc<dyn VersionRefReader> {\n        Arc::new(EmptyVersionRefReader)\n    }\n\n    #[async_trait]\n    impl LiveStateReader for RowsLiveStateReader {\n        async fn scan_rows(\n            &self,\n            _request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(self.rows.clone())\n        }\n\n        async fn load_row(\n            &self,\n            _request: &LiveStateRowRequest,\n        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n            Ok(None)\n        }\n    }\n\n    fn test_functions() -> FunctionProviderHandle {\n        SharedFunctionProvider::new(\n  
          Box::new(SystemFunctionProvider) as Box<dyn FunctionProvider + Send>\n        )\n    }\n\n    #[async_trait]\n    impl BlobDataReader for DummyBlobReader {\n        async fn load_bytes_many(\n            &self,\n            hashes: &[crate::binary_cas::BlobHash],\n        ) -> Result<crate::binary_cas::BlobBytesBatch, LixError> {\n            Ok(crate::binary_cas::BlobBytesBatch::new(vec![\n                None;\n                hashes.len()\n            ]))\n        }\n    }\n\n    #[async_trait]\n    impl SqlWriteExecutionContext for DummyWriteContext {\n        fn active_version_id(&self) -> &str {\n            \"version-a\"\n        }\n\n        fn functions(&self) -> FunctionProviderHandle {\n            test_functions()\n        }\n\n        fn list_visible_schemas(&self) -> Result<Vec<serde_json::Value>, LixError> {\n            Ok(Vec::new())\n        }\n\n        async fn load_bytes_many(\n            &mut self,\n            hashes: &[crate::binary_cas::BlobHash],\n        ) -> Result<crate::binary_cas::BlobBytesBatch, LixError> {\n            DummyBlobReader.load_bytes_many(hashes).await\n        }\n\n        async fn scan_live_state(\n            &mut self,\n            _request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(self.rows.clone())\n        }\n\n        async fn load_version_head(\n            &mut self,\n            version_id: &str,\n        ) -> Result<Option<String>, LixError> {\n            if version_id == \"ghost-version\" {\n                return Ok(None);\n            }\n            Ok(Some(format!(\"commit-{version_id}\")))\n        }\n\n        async fn stage_write(\n            &mut self,\n            _write: TransactionWrite,\n        ) -> Result<TransactionWriteOutcome, LixError> {\n            Ok(TransactionWriteOutcome { count: 0 })\n        }\n    }\n\n    #[async_trait]\n    impl SqlWriteExecutionContext for CapturingWriteContext {\n        fn 
active_version_id(&self) -> &str {\n            \"version-a\"\n        }\n\n        fn functions(&self) -> FunctionProviderHandle {\n            test_functions()\n        }\n\n        fn list_visible_schemas(&self) -> Result<Vec<serde_json::Value>, LixError> {\n            Ok(Vec::new())\n        }\n\n        async fn load_bytes_many(\n            &mut self,\n            hashes: &[crate::binary_cas::BlobHash],\n        ) -> Result<crate::binary_cas::BlobBytesBatch, LixError> {\n            DummyBlobReader.load_bytes_many(hashes).await\n        }\n\n        async fn scan_live_state(\n            &mut self,\n            _request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(self.rows.clone())\n        }\n\n        async fn load_version_head(\n            &mut self,\n            version_id: &str,\n        ) -> Result<Option<String>, LixError> {\n            if version_id == \"ghost-version\" {\n                return Ok(None);\n            }\n            Ok(Some(format!(\"commit-{version_id}\")))\n        }\n\n        async fn stage_write(\n            &mut self,\n            write: TransactionWrite,\n        ) -> Result<TransactionWriteOutcome, LixError> {\n            self.writes.push(write);\n            Ok(TransactionWriteOutcome { count: 0 })\n        }\n    }\n\n    fn col(name: &str) -> Expr {\n        Expr::Column(Column::from_name(name))\n    }\n\n    fn str_lit(value: &str) -> Expr {\n        Expr::Literal(ScalarValue::Utf8(Some(value.to_string())), None)\n    }\n\n    fn json_lit(value: &str) -> Expr {\n        Expr::Literal(\n            ScalarValue::Utf8(Some(value.to_string())),\n            Some(datafusion::common::metadata::FieldMetadata::new(\n                std::collections::BTreeMap::from([(\n                    crate::sql2::result_metadata::LIX_VALUE_TYPE_METADATA_KEY.to_string(),\n                    crate::sql2::result_metadata::LIX_VALUE_TYPE_JSON.to_string(),\n                )]),\n     
       )),\n        )\n    }\n\n    fn string_column(values: Vec<Option<&str>>) -> ArrayRef {\n        Arc::new(StringArray::from(values)) as ArrayRef\n    }\n\n    fn one_row_lix_state_batch(global: bool) -> RecordBatch {\n        RecordBatch::try_new(\n            lix_state_schema(),\n            vec![\n                string_column(vec![Some(\"[\\\"entity-1\\\"]\")]),\n                string_column(vec![Some(\"lix_key_value\")]),\n                string_column(vec![None]),\n                string_column(vec![Some(\"{\\\"key\\\":\\\"hello\\\",\\\"value\\\":\\\"world\\\"}\")]),\n                string_column(vec![Some(\"{\\\"source\\\":\\\"test\\\"}\")]),\n                string_column(vec![Some(\"2026-04-23T00:00:00Z\")]),\n                string_column(vec![Some(\"2026-04-23T01:00:00Z\")]),\n                Arc::new(BooleanArray::from(vec![global])) as ArrayRef,\n                string_column(vec![Some(\"change-a\")]),\n                string_column(vec![None]),\n                Arc::new(BooleanArray::from(vec![false])) as ArrayRef,\n            ],\n        )\n        .expect(\"valid lix_state batch\")\n    }\n\n    fn one_row_stageable_lix_state_batch() -> RecordBatch {\n        RecordBatch::try_new(\n            lix_state_schema(),\n            vec![\n                string_column(vec![Some(\"[\\\"entity-1\\\"]\")]),\n                string_column(vec![Some(\"lix_key_value\")]),\n                string_column(vec![None]),\n                string_column(vec![Some(\"{\\\"key\\\":\\\"hello\\\",\\\"value\\\":\\\"world\\\"}\")]),\n                string_column(vec![None]),\n                string_column(vec![None]),\n                string_column(vec![None]),\n                Arc::new(BooleanArray::from(vec![false])) as ArrayRef,\n                string_column(vec![None]),\n                string_column(vec![None]),\n                Arc::new(BooleanArray::from(vec![false])) as ArrayRef,\n            ],\n        )\n        .expect(\"valid stageable lix_state 
batch\")\n    }\n\n    fn live_row(entity_id: &str, metadata: Option<&str>) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow {\n            entity_id: EntityIdentity::single(entity_id),\n            schema_key: \"lix_key_value\".to_string(),\n            file_id: None,\n            snapshot_content: Some(\"{\\\"key\\\":\\\"hello\\\",\\\"value\\\":\\\"world\\\"}\".to_string()),\n            metadata: metadata.map(str::to_string),\n            deleted: false,\n            version_id: \"version-a\".to_string(),\n            change_id: Some(format!(\"change-{entity_id}\")),\n            commit_id: Some(format!(\"commit-{entity_id}\")),\n            global: false,\n            untracked: false,\n            created_at: \"2026-04-23T00:00:00Z\".to_string(),\n            updated_at: \"2026-04-23T01:00:00Z\".to_string(),\n        }\n    }\n\n    #[test]\n    fn parses_eq_filter_for_schema_key() {\n        let expr = Expr::BinaryExpr(BinaryExpr::new(\n            Box::new(col(\"schema_key\")),\n            Operator::Eq,\n            Box::new(str_lit(\"profile\")),\n        ));\n\n        assert_eq!(\n            parse_lix_state_filter(&expr),\n            Some(LixStateFilterPredicate::SchemaKeys(BTreeSet::from([\n                \"profile\".to_string(),\n            ])))\n        );\n    }\n\n    #[test]\n    fn parses_in_list_filter_for_version_id() {\n        let expr = Expr::InList(InList::new(\n            Box::new(col(\"version_id\")),\n            vec![str_lit(\"a\"), str_lit(\"b\")],\n            false,\n        ));\n\n        assert_eq!(\n            parse_lix_state_filter(&expr),\n            Some(LixStateFilterPredicate::VersionIds(BTreeSet::from([\n                \"a\".to_string(),\n                \"b\".to_string(),\n            ])))\n        );\n    }\n\n    #[test]\n    fn builds_scan_request_from_route_and_projection() {\n        let schema = super::lix_state_by_version_schema();\n        let route = LixStateByVersionRoute::from_filters(&[\n   
         Expr::BinaryExpr(BinaryExpr::new(\n                Box::new(col(\"schema_key\")),\n                Operator::Eq,\n                Box::new(str_lit(\"profile\")),\n            )),\n            Expr::BinaryExpr(BinaryExpr::new(\n                Box::new(col(\"version_id\")),\n                Operator::Eq,\n                Box::new(str_lit(\"v1\")),\n            )),\n            Expr::IsNull(Box::new(col(\"file_id\"))),\n        ]);\n\n        let request =\n            lix_state_scan_request(&schema, None, Some(&vec![0, 1, 11]), &route, Some(10));\n\n        assert_eq!(request.filter.schema_keys, vec![\"profile\".to_string()]);\n        assert_eq!(request.filter.version_ids, vec![\"v1\".to_string()]);\n        assert_eq!(request.filter.file_ids, vec![NullableKeyFilter::Null]);\n        assert_eq!(\n            request.projection.columns,\n            vec![\n                \"entity_id\".to_string(),\n                \"schema_key\".to_string(),\n                \"version_id\".to_string()\n            ]\n        );\n        assert_eq!(request.limit, Some(10));\n    }\n\n    #[test]\n    fn builds_route_from_and_filter_tree() {\n        let route = LixStateByVersionRoute::from_filters(&[Expr::BinaryExpr(BinaryExpr::new(\n            Box::new(Expr::BinaryExpr(BinaryExpr::new(\n                Box::new(col(\"entity_id\")),\n                Operator::Eq,\n                Box::new(str_lit(\"[\\\"entity-a\\\"]\")),\n            ))),\n            Operator::And,\n            Box::new(Expr::InList(InList::new(\n                Box::new(col(\"version_id\")),\n                vec![str_lit(\"version-a\"), str_lit(\"global\")],\n                false,\n            ))),\n        ))]);\n\n        assert_eq!(\n            route.entity_ids,\n            Some(BTreeSet::from([\"[\\\"entity-a\\\"]\".to_string()]))\n        );\n        assert_eq!(\n            route.version_ids,\n            Some(BTreeSet::from([\n                \"global\".to_string(),\n                
\"version-a\".to_string()\n            ]))\n        );\n    }\n\n    #[test]\n    fn contradictory_filters_turn_into_zero_limit_request() {\n        let schema = super::lix_state_by_version_schema();\n        let route = LixStateByVersionRoute::from_filters(&[\n            Expr::BinaryExpr(BinaryExpr::new(\n                Box::new(col(\"schema_key\")),\n                Operator::Eq,\n                Box::new(str_lit(\"a\")),\n            )),\n            Expr::BinaryExpr(BinaryExpr::new(\n                Box::new(col(\"schema_key\")),\n                Operator::Eq,\n                Box::new(str_lit(\"b\")),\n            )),\n        ]);\n\n        let request = lix_state_scan_request(&schema, None, None, &route, None);\n\n        assert_eq!(request.limit, Some(0));\n        assert!(request.filter.schema_keys.is_empty());\n    }\n\n    #[test]\n    fn active_version_view_pins_version_filter() {\n        let schema = super::lix_state_schema();\n        let route = LixStateByVersionRoute::from_filters(&[Expr::BinaryExpr(BinaryExpr::new(\n            Box::new(col(\"schema_key\")),\n            Operator::Eq,\n            Box::new(str_lit(\"profile\")),\n        ))]);\n\n        let request = lix_state_scan_request(&schema, Some(\"version-a\"), None, &route, None);\n\n        assert_eq!(request.filter.schema_keys, vec![\"profile\".to_string()]);\n        assert_eq!(request.filter.version_ids, vec![\"version-a\".to_string()]);\n    }\n\n    #[tokio::test]\n    async fn registers_active_lix_state_with_write_context_only() {\n        let session = SessionContext::new();\n        let mut write_context = DummyWriteContext::default();\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n\n        register_lix_state_write_providers(&session, write_ctx)\n            .await\n            .expect(\"lix_state providers should register\");\n\n        let lix_state = session\n            .table_provider(\"lix_state\")\n            .await\n            
.expect(\"lix_state provider should exist\");\n        let lix_state = lix_state\n            .as_any()\n            .downcast_ref::<LixStateProvider>()\n            .expect(\"lix_state should be a LixStateProvider\");\n        assert!(lix_state.write_access.is_write());\n\n        let by_version = session\n            .table_provider(\"lix_state_by_version\")\n            .await\n            .expect(\"lix_state_by_version provider should exist\");\n        let by_version = by_version\n            .as_any()\n            .downcast_ref::<LixStateProvider>()\n            .expect(\"lix_state_by_version should be a LixStateProvider\");\n        assert!(by_version.write_access.is_write());\n    }\n\n    #[tokio::test]\n    async fn insert_into_requires_write_transaction() {\n        let session = SessionContext::new();\n        let live_state = Arc::new(EmptyLiveStateReader) as Arc<dyn LiveStateReader>;\n        let provider =\n            LixStateProvider::active_version(\"version-a\", live_state, empty_version_ref());\n        let input = Arc::new(EmptyExec::new(provider.schema())) as Arc<dyn ExecutionPlan>;\n\n        let error = provider\n            .insert_into(&session.state(), input, InsertOp::Append)\n            .await\n            .expect_err(\"insert without a write context should fail\");\n\n        assert!(\n            error.to_string().contains(\"requires a write transaction\"),\n            \"unexpected error: {error}\"\n        );\n    }\n\n    #[tokio::test]\n    async fn update_requires_write_transaction() {\n        let session = SessionContext::new();\n        let live_state = Arc::new(EmptyLiveStateReader) as Arc<dyn LiveStateReader>;\n        let provider =\n            LixStateProvider::active_version(\"version-a\", live_state, empty_version_ref());\n\n        let error = provider\n            .update(\n                &session.state(),\n                vec![(\"metadata\".to_string(), str_lit(\"{\\\"source\\\":\\\"update\\\"}\"))],\n              
  vec![],\n            )\n            .await\n            .expect_err(\"update without a write context should fail\");\n\n        assert!(\n            error.to_string().contains(\"requires a write transaction\"),\n            \"unexpected error: {error}\"\n        );\n    }\n\n    #[tokio::test]\n    async fn delete_requires_write_transaction() {\n        let session = SessionContext::new();\n        let live_state = Arc::new(EmptyLiveStateReader) as Arc<dyn LiveStateReader>;\n        let provider =\n            LixStateProvider::active_version(\"version-a\", live_state, empty_version_ref());\n\n        let error = provider\n            .delete_from(&session.state(), vec![])\n            .await\n            .expect_err(\"delete without a write context should fail\");\n\n        assert!(\n            error.to_string().contains(\"requires a write transaction\"),\n            \"unexpected error: {error}\"\n        );\n    }\n\n    #[tokio::test]\n    async fn delete_returns_lix_state_delete_exec_with_write_ctx() {\n        let session = SessionContext::new();\n        let mut write_context = DummyWriteContext::default();\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let provider = LixStateProvider::active_version_with_write(write_ctx);\n\n        let plan = provider\n            .delete_from(&session.state(), vec![])\n            .await\n            .expect(\"delete should produce a write plan\");\n\n        assert!(plan.as_any().is::<LixStateDeleteExec>());\n    }\n\n    #[tokio::test]\n    async fn update_rejects_read_only_lix_state_columns() {\n        let session = SessionContext::new();\n        let mut write_context = DummyWriteContext::default();\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let provider = LixStateProvider::active_version_with_write(write_ctx);\n\n        let error = provider\n            .update(\n                &session.state(),\n                vec![(\"entity_id\".to_string(), 
str_lit(\"entity-2\"))],\n                vec![],\n            )\n            .await\n            .expect_err(\"updating a read-only field should fail\");\n\n        assert!(\n            error.to_string().contains(\"read-only column 'entity_id'\"),\n            \"unexpected error: {error}\"\n        );\n    }\n\n    #[tokio::test]\n    async fn update_returns_lix_state_update_exec_with_write_ctx() {\n        let session = SessionContext::new();\n        let mut write_context = DummyWriteContext::default();\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let provider = LixStateProvider::active_version_with_write(write_ctx);\n\n        let plan = provider\n            .update(\n                &session.state(),\n                vec![(\"metadata\".to_string(), str_lit(\"{\\\"source\\\":\\\"update\\\"}\"))],\n                vec![],\n            )\n            .await\n            .expect(\"update should produce a write plan\");\n\n        assert!(plan.as_any().is::<LixStateUpdateExec>());\n    }\n\n    #[tokio::test]\n    async fn insert_into_returns_data_sink_exec_with_write_ctx() {\n        let session = SessionContext::new();\n        let mut write_context = DummyWriteContext::default();\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let provider = LixStateProvider::active_version_with_write(write_ctx);\n        let input = Arc::new(EmptyExec::new(provider.schema())) as Arc<dyn ExecutionPlan>;\n\n        let plan = provider\n            .insert_into(&session.state(), input, InsertOp::Append)\n            .await\n            .expect(\"insert should produce a write plan\");\n\n        assert!(plan.as_any().is::<InsertExec>());\n    }\n\n    #[test]\n    fn decodes_lix_state_batch_into_write_rows() {\n        let rows = lix_state_write_rows_from_batch(&one_row_lix_state_batch(false), \"version-a\")\n            .expect(\"batch should decode\");\n\n        assert_eq!(\n            rows,\n            
vec![TransactionWriteRow {\n                entity_id: Some(crate::entity_identity::EntityIdentity::single(\"entity-1\")),\n                schema_key: \"lix_key_value\".to_string(),\n                file_id: None,\n                snapshot: Some(TransactionJson::from_value_for_test(\n                    json!({\"key\":\"hello\",\"value\":\"world\"})\n                )),\n                metadata: Some(TransactionJson::from_value_for_test(\n                    json!({\"source\": \"test\"})\n                )),\n                origin: None,\n                created_at: Some(\"2026-04-23T00:00:00Z\".to_string()),\n                updated_at: Some(\"2026-04-23T01:00:00Z\".to_string()),\n                global: false,\n                change_id: Some(\"change-a\".to_string()),\n                commit_id: None,\n                untracked: false,\n                version_id: \"version-a\".to_string(),\n            }]\n        );\n    }\n\n    #[test]\n    fn decodes_global_lix_state_batch_into_global_version() {\n        let rows = lix_state_write_rows_from_batch(&one_row_lix_state_batch(true), \"version-a\")\n            .expect(\"batch should decode\");\n\n        assert_eq!(rows[0].version_id, \"global\");\n        assert!(rows[0].global);\n    }\n\n    #[tokio::test]\n    async fn insert_sink_stages_decoded_lix_state_rows() {\n        let mut write_context = CapturingWriteContext::default();\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let sink = LixStateInsertSink::new(lix_state_schema(), write_ctx, \"version-a\".to_string());\n        let batch = one_row_lix_state_batch(false);\n        let count = sink\n            .write_batches(vec![batch], &Arc::new(TaskContext::default()))\n            .await\n            .expect(\"sink should stage write\");\n\n        assert_eq!(count, 1);\n        assert_eq!(\n            write_context.writes.as_slice(),\n            &[TransactionWrite::Rows {\n                mode: 
TransactionWriteMode::Insert,\n                rows: vec![TransactionWriteRow {\n                    entity_id: Some(crate::entity_identity::EntityIdentity::single(\"entity-1\")),\n                    schema_key: \"lix_key_value\".to_string(),\n                    file_id: None,\n                    snapshot: Some(TransactionJson::from_value_for_test(\n                        json!({\"key\":\"hello\",\"value\":\"world\"})\n                    )),\n                    metadata: Some(TransactionJson::from_value_for_test(\n                        json!({\"source\": \"test\"})\n                    )),\n                    origin: None,\n                    created_at: Some(\"2026-04-23T00:00:00Z\".to_string()),\n                    updated_at: Some(\"2026-04-23T01:00:00Z\".to_string()),\n                    global: false,\n                    change_id: Some(\"change-a\".to_string()),\n                    commit_id: None,\n                    untracked: false,\n                    version_id: \"version-a\".to_string(),\n                }]\n            }]\n        );\n    }\n\n    #[tokio::test]\n    async fn insert_plan_returns_datafusion_count_uint64() {\n        let session = SessionContext::new();\n        let mut write_context = CapturingWriteContext::default();\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let provider = LixStateProvider::active_version_with_write(write_ctx);\n        let input = Arc::new(SingleBatchExec::new(one_row_stageable_lix_state_batch()))\n            as Arc<dyn ExecutionPlan>;\n\n        let plan = provider\n            .insert_into(&session.state(), input, InsertOp::Append)\n            .await\n            .expect(\"insert should produce a write plan\");\n        let batches = datafusion::physical_plan::collect(plan, Arc::new(TaskContext::default()))\n            .await\n            .expect(\"insert write plan should execute\");\n\n        assert_eq!(batches.len(), 1);\n        
assert_eq!(batches[0].num_rows(), 1);\n        assert_eq!(batches[0].num_columns(), 1);\n        assert_eq!(batches[0].schema().field(0).name(), \"count\");\n        assert_eq!(batches[0].schema().field(0).data_type(), &DataType::UInt64);\n        assert!(!batches[0].schema().field(0).is_nullable());\n\n        let count = batches[0]\n            .column(0)\n            .as_any()\n            .downcast_ref::<UInt64Array>()\n            .expect(\"count should be UInt64\");\n        assert_eq!(count.value(0), 1);\n        assert_eq!(write_context.writes.len(), 1);\n    }\n\n    #[tokio::test]\n    async fn update_plan_evaluates_filters_assignments_and_stages_rows() {\n        let session = SessionContext::new();\n        let mut write_context = CapturingWriteContext {\n            rows: vec![\n                live_row(\"entity-1\", Some(\"{\\\"source\\\":\\\"match\\\"}\")),\n                live_row(\"entity-2\", Some(\"{\\\"source\\\":\\\"skip\\\"}\")),\n            ],\n            writes: Vec::new(),\n        };\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let provider = LixStateProvider::active_version_with_write(write_ctx);\n\n        let plan = provider\n            .update(\n                &session.state(),\n                vec![\n                    (\n                        \"snapshot_content\".to_string(),\n                        str_lit(\"{\\\"key\\\":\\\"hello\\\",\\\"value\\\":\\\"updated\\\"}\"),\n                    ),\n                    (\n                        \"metadata\".to_string(),\n                        str_lit(\"{\\\"schema_key\\\":\\\"lix_key_value\\\"}\"),\n                    ),\n                ],\n                vec![Expr::BinaryExpr(BinaryExpr::new(\n                    Box::new(col(\"metadata\")),\n                    Operator::Eq,\n                    Box::new(json_lit(\"{\\\"source\\\":\\\"match\\\"}\")),\n                ))],\n            )\n            .await\n            .expect(\"update 
should produce a write plan\");\n        let batches = datafusion::physical_plan::collect(plan, Arc::new(TaskContext::default()))\n            .await\n            .expect(\"update write plan should execute\");\n\n        assert_eq!(batches.len(), 1);\n        assert_eq!(batches[0].schema().field(0).name(), \"count\");\n        assert_eq!(batches[0].schema().field(0).data_type(), &DataType::UInt64);\n        let count = batches[0]\n            .column(0)\n            .as_any()\n            .downcast_ref::<UInt64Array>()\n            .expect(\"count should be UInt64\");\n        assert_eq!(count.value(0), 1);\n\n        assert_eq!(\n            write_context.writes.as_slice(),\n            &[TransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![TransactionWriteRow {\n                    entity_id: Some(crate::entity_identity::EntityIdentity::single(\"entity-1\")),\n                    schema_key: \"lix_key_value\".to_string(),\n                    file_id: None,\n                    snapshot: Some(TransactionJson::from_value_for_test(\n                        json!({\"key\":\"hello\",\"value\":\"updated\"})\n                    )),\n                    metadata: Some(TransactionJson::from_value_for_test(\n                        json!({\"schema_key\": \"lix_key_value\"})\n                    )),\n                    origin: None,\n                    created_at: None,\n                    updated_at: None,\n                    global: false,\n                    change_id: None,\n                    commit_id: None,\n                    untracked: false,\n                    version_id: \"version-a\".to_string(),\n                }]\n            }]\n        );\n    }\n\n    #[tokio::test]\n    async fn delete_plan_with_empty_filters_stages_all_visible_rows() {\n        let session = SessionContext::new();\n        let mut write_context = CapturingWriteContext {\n            rows: vec![\n                
live_row(\"entity-1\", Some(\"{\\\"source\\\":\\\"one\\\"}\")),\n                live_row(\"entity-2\", Some(\"{\\\"source\\\":\\\"two\\\"}\")),\n            ],\n            writes: Vec::new(),\n        };\n        let write_ctx = SqlWriteContext::new(&mut write_context);\n        let provider = LixStateProvider::active_version_with_write(write_ctx);\n\n        let plan = provider\n            .delete_from(&session.state(), vec![])\n            .await\n            .expect(\"delete should produce a write plan\");\n        let batches = datafusion::physical_plan::collect(plan, Arc::new(TaskContext::default()))\n            .await\n            .expect(\"delete write plan should execute\");\n\n        assert_eq!(batches.len(), 1);\n        assert_eq!(batches[0].schema().field(0).name(), \"count\");\n        assert_eq!(batches[0].schema().field(0).data_type(), &DataType::UInt64);\n        let count = batches[0]\n            .column(0)\n            .as_any()\n            .downcast_ref::<UInt64Array>()\n            .expect(\"count should be UInt64\");\n        assert_eq!(count.value(0), 2);\n\n        assert_eq!(\n            write_context.writes.as_slice(),\n            &[TransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![\n                    TransactionWriteRow {\n                        entity_id: Some(crate::entity_identity::EntityIdentity::single(\"entity-1\")),\n                        schema_key: \"lix_key_value\".to_string(),\n                        file_id: None,\n                        snapshot: None,\n                        metadata: Some(TransactionJson::from_value_for_test(\n                            json!({\"source\": \"one\"})\n                        )),\n                        origin: None,\n                        created_at: None,\n                        updated_at: None,\n                        global: false,\n                        change_id: None,\n                        commit_id: 
None,\n                        untracked: false,\n                        version_id: \"version-a\".to_string(),\n                    },\n                    TransactionWriteRow {\n                        entity_id: Some(crate::entity_identity::EntityIdentity::single(\"entity-2\")),\n                        schema_key: \"lix_key_value\".to_string(),\n                        file_id: None,\n                        snapshot: None,\n                        metadata: Some(TransactionJson::from_value_for_test(\n                            json!({\"source\": \"two\"})\n                        )),\n                        origin: None,\n                        created_at: None,\n                        updated_at: None,\n                        global: false,\n                        change_id: None,\n                        commit_id: None,\n                        untracked: false,\n                        version_id: \"version-a\".to_string(),\n                    },\n                ]\n            }]\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/mod.rs",
    "content": "mod change_provider;\nmod classify;\nmod context;\nmod directory_history_provider;\nmod directory_provider;\nmod dml;\nmod entity_history_provider;\nmod entity_provider;\nmod error;\nmod execute;\nmod file_history_provider;\nmod file_provider;\nmod filesystem_planner;\nmod filesystem_predicates;\nmod filesystem_visibility;\nmod history_projection;\nmod history_provider;\nmod history_route;\nmod lix_state_provider;\nmod predicate_typecheck;\nmod public_bind;\nmod read_only;\nmod record_batch;\nmod result_metadata;\nmod runtime;\nmod session;\nmod udfs;\nmod version_provider;\nmod version_scope;\nmod write_normalization;\n\npub(crate) use classify::{\n    classify_statement, datafusion_statement_dml_target_table_names,\n    validate_supported_datafusion_statement_ast, validate_supported_statement_ast,\n    SqlStatementKind,\n};\npub(crate) use context::{\n    CommitStoreQuerySource, SqlCommitStoreQuerySource, SqlExecutionContext, SqlJsonReader,\n    SqlWriteContext, SqlWriteExecutionContext, WriteAccess, WriteContextLiveStateReader,\n    WriteContextVersionRefReader,\n};\n#[allow(unused_imports)]\npub(crate) use execute::{\n    create_logical_plan, create_write_logical_plan, execute_logical_plan, execute_sql,\n    SqlLogicalPlan,\n};\n"
  },
  {
    "path": "packages/engine/src/sql2/predicate_typecheck.rs",
    "content": "use datafusion::arrow::datatypes::{Field, Schema};\nuse datafusion::common::{DFSchema, DataFusionError, ScalarValue};\nuse datafusion::logical_expr::expr::{Between, InList};\nuse datafusion::logical_expr::{BinaryExpr, Expr, Like, Operator};\n\nuse crate::LixError;\n\nuse super::error::lix_error_to_datafusion_error;\nuse super::result_metadata::{field_is_json, LIX_VALUE_TYPE_JSON, LIX_VALUE_TYPE_METADATA_KEY};\n\npub(crate) fn validate_json_predicate_filters(\n    schema: &Schema,\n    filters: &[Expr],\n) -> Result<(), DataFusionError> {\n    for filter in filters {\n        validate_json_predicate_expr_with_arrow_schema(schema, filter)\n            .map_err(lix_error_to_datafusion_error)?;\n    }\n    Ok(())\n}\n\npub(crate) fn validate_json_predicate_expr_with_dfschema(\n    schema: &DFSchema,\n    expr: &Expr,\n) -> Result<(), LixError> {\n    validate_expr(expr, &|column| {\n        schema\n            .field_with_name(column.relation.as_ref(), &column.name)\n            .ok()\n            .map(|field| field.as_ref())\n    })\n}\n\nfn validate_json_predicate_expr_with_arrow_schema(\n    schema: &Schema,\n    expr: &Expr,\n) -> Result<(), LixError> {\n    validate_expr(expr, &|column| {\n        schema\n            .fields()\n            .iter()\n            .find(|field| field.name() == &column.name)\n            .map(|field| field.as_ref())\n    })\n}\n\nfn validate_expr<'a>(\n    expr: &'a Expr,\n    lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>,\n) -> Result<(), LixError> {\n    match expr {\n        Expr::BinaryExpr(binary) => validate_binary_expr(binary, lookup_field),\n        Expr::InList(in_list) => validate_in_list(in_list, lookup_field),\n        Expr::Between(between) => validate_between(between, lookup_field),\n        Expr::Like(like) | Expr::SimilarTo(like) => validate_like(like, lookup_field),\n        Expr::Alias(alias) => validate_expr(&alias.expr, lookup_field),\n        Expr::Not(inner)\n        | 
Expr::IsNotNull(inner)\n        | Expr::IsNull(inner)\n        | Expr::IsTrue(inner)\n        | Expr::IsFalse(inner)\n        | Expr::IsUnknown(inner)\n        | Expr::IsNotTrue(inner)\n        | Expr::IsNotFalse(inner)\n        | Expr::IsNotUnknown(inner)\n        | Expr::Negative(inner) => validate_expr(inner, lookup_field),\n        Expr::Cast(cast) => validate_expr(&cast.expr, lookup_field),\n        Expr::TryCast(cast) => validate_expr(&cast.expr, lookup_field),\n        Expr::ScalarFunction(function) => {\n            for arg in &function.args {\n                validate_expr(arg, lookup_field)?;\n            }\n            Ok(())\n        }\n        Expr::Case(case) => {\n            if let Some(expr) = &case.expr {\n                validate_expr(expr, lookup_field)?;\n            }\n            for (when, then) in &case.when_then_expr {\n                validate_expr(when, lookup_field)?;\n                validate_expr(then, lookup_field)?;\n            }\n            if let Some(expr) = &case.else_expr {\n                validate_expr(expr, lookup_field)?;\n            }\n            Ok(())\n        }\n        Expr::AggregateFunction(function) => {\n            for arg in &function.params.args {\n                validate_expr(arg, lookup_field)?;\n            }\n            Ok(())\n        }\n        Expr::WindowFunction(function) => {\n            for arg in &function.params.args {\n                validate_expr(arg, lookup_field)?;\n            }\n            Ok(())\n        }\n        _ => Ok(()),\n    }\n}\n\nfn validate_binary_expr<'a>(\n    binary: &'a BinaryExpr,\n    lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>,\n) -> Result<(), LixError> {\n    validate_expr(&binary.left, lookup_field)?;\n    validate_expr(&binary.right, lookup_field)?;\n\n    if !is_comparison_operator(binary.op) {\n        return Ok(());\n    }\n\n    validate_comparison_operands(&binary.left, &binary.right, lookup_field)\n}\n\nfn 
validate_in_list<'a>(\n    in_list: &'a InList,\n    lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>,\n) -> Result<(), LixError> {\n    validate_expr(&in_list.expr, lookup_field)?;\n    for item in &in_list.list {\n        validate_expr(item, lookup_field)?;\n    }\n\n    if is_json_expr(&in_list.expr, lookup_field) {\n        for item in &in_list.list {\n            require_json_comparison_operand(item, lookup_field)?;\n        }\n    }\n\n    for item in &in_list.list {\n        if is_json_expr(item, lookup_field) {\n            require_json_comparison_operand(&in_list.expr, lookup_field)?;\n        }\n    }\n\n    Ok(())\n}\n\nfn validate_between<'a>(\n    between: &'a Between,\n    lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>,\n) -> Result<(), LixError> {\n    validate_expr(&between.expr, lookup_field)?;\n    validate_expr(&between.low, lookup_field)?;\n    validate_expr(&between.high, lookup_field)?;\n\n    if is_json_expr(&between.expr, lookup_field) {\n        require_json_comparison_operand(&between.low, lookup_field)?;\n        require_json_comparison_operand(&between.high, lookup_field)?;\n    }\n\n    Ok(())\n}\n\nfn validate_like<'a>(\n    like: &'a Like,\n    lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>,\n) -> Result<(), LixError> {\n    validate_expr(&like.expr, lookup_field)?;\n    validate_expr(&like.pattern, lookup_field)?;\n\n    if is_json_expr(&like.expr, lookup_field) {\n        return Err(json_predicate_type_error(&like.expr));\n    }\n\n    Ok(())\n}\n\nfn validate_comparison_operands<'a>(\n    left: &'a Expr,\n    right: &'a Expr,\n    lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>,\n) -> Result<(), LixError> {\n    let left_is_json = is_json_expr(left, lookup_field);\n    let right_is_json = is_json_expr(right, lookup_field);\n\n    if left_is_json {\n        require_json_comparison_operand(right, lookup_field)?;\n    }\n    if 
right_is_json {\n        require_json_comparison_operand(left, lookup_field)?;\n    }\n\n    Ok(())\n}\n\nfn require_json_comparison_operand<'a>(\n    expr: &'a Expr,\n    lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>,\n) -> Result<(), LixError> {\n    if is_json_expr(expr, lookup_field)\n        || is_null_literal(expr)\n        || matches!(expr, Expr::Placeholder(_))\n    {\n        return Ok(());\n    }\n\n    Err(json_predicate_type_error(expr))\n}\n\nfn is_json_expr<'a>(\n    expr: &'a Expr,\n    lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>,\n) -> bool {\n    match expr {\n        Expr::Column(column) => lookup_field(column).is_some_and(field_is_json),\n        Expr::Literal(_, Some(metadata)) => metadata\n            .inner()\n            .get(LIX_VALUE_TYPE_METADATA_KEY)\n            .is_some_and(|value| value == LIX_VALUE_TYPE_JSON),\n        Expr::ScalarFunction(function) => matches!(function.name(), \"lix_json\" | \"lix_json_get\"),\n        Expr::Alias(alias) => is_json_expr(&alias.expr, lookup_field),\n        Expr::Cast(cast) => is_json_expr(&cast.expr, lookup_field),\n        Expr::TryCast(cast) => is_json_expr(&cast.expr, lookup_field),\n        _ => false,\n    }\n}\n\nfn is_null_literal(expr: &Expr) -> bool {\n    matches!(expr, Expr::Literal(value, _) if matches!(value, ScalarValue::Null))\n}\n\nfn is_comparison_operator(op: Operator) -> bool {\n    matches!(\n        op,\n        Operator::Eq\n            | Operator::NotEq\n            | Operator::Lt\n            | Operator::LtEq\n            | Operator::Gt\n            | Operator::GtEq\n            | Operator::IsDistinctFrom\n            | Operator::IsNotDistinctFrom\n    )\n}\n\nfn json_predicate_type_error(expr: &Expr) -> LixError {\n    LixError::new(\n        LixError::CODE_TYPE_MISMATCH,\n        format!(\"JSON columns can only be compared with JSON expressions, got {expr}\"),\n    )\n    .with_hint(\"Wrap JSON text with lix_json(...), 
use lix_json_get(...) for JSON values, or use IS NULL for null checks.\")\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/public_bind/assignment.rs",
    "content": "use std::collections::BTreeSet;\n\nuse crate::LixError;\n\nuse super::table::{PublicSurface, PublicTableContracts};\n\npub(crate) fn validate_update_assignments(\n    surface: &PublicSurface,\n    columns: Vec<String>,\n    contracts: &PublicTableContracts,\n) -> Result<(), LixError> {\n    let Some(contract) = contracts.get(surface) else {\n        return Ok(());\n    };\n    let mut seen = BTreeSet::new();\n    for column in columns {\n        if !seen.insert(column.clone()) {\n            return Err(LixError::new(\n                LixError::CODE_INVALID_PARAM,\n                format!(\n                    \"update {} assigns column '{column}' more than once\",\n                    surface.name()\n                ),\n            ));\n        }\n        let Some(column_contract) = contract.column(&column) else {\n            return Err(LixError::new(\n                LixError::CODE_INVALID_PARAM,\n                format!(\n                    \"update {} references unknown column '{column}'\",\n                    surface.name()\n                ),\n            ));\n        };\n        if !column_contract.writable {\n            return Err(LixError::new(\n                LixError::CODE_UNSUPPORTED_SQL,\n                format!(\n                    \"update {} cannot assign read-only column '{column}'\",\n                    surface.name()\n                ),\n            ));\n        }\n    }\n    Ok(())\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/public_bind/capability.rs",
    "content": "use crate::LixError;\n\nuse super::table::{Capability, PublicSurface, PublicTableContracts};\nuse super::DmlOperation;\n\npub(crate) fn validate_table_operation(\n    surface: &PublicSurface,\n    operation: DmlOperation,\n    contracts: &PublicTableContracts,\n) -> Result<(), LixError> {\n    let Some(contract) = contracts.get(surface) else {\n        return Ok(());\n    };\n    match contract.operation(operation) {\n        Capability::Allowed => Ok(()),\n        Capability::ReadOnly(hint) => {\n            let message = if surface.name().ends_with(\"_history\") {\n                format!(\n                    \"DML cannot write read-only history view '{}'\",\n                    surface.name()\n                )\n            } else {\n                format!(\n                    \"{} {} is not allowed because the SQL surface is read-only\",\n                    operation.as_str(),\n                    surface.name()\n                )\n            };\n            Err(LixError::new(LixError::CODE_READ_ONLY, message).with_hint(hint))\n        }\n        Capability::Unsupported(hint) => Err(LixError::new(\n            LixError::CODE_UNSUPPORTED_SQL,\n            format!(\n                \"{} {} is not supported by Lix SQL\",\n                operation.as_str(),\n                surface.name()\n            ),\n        )\n        .with_hint(hint)),\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/public_bind/dml.rs",
    "content": "use datafusion::logical_expr::{LogicalPlan, WriteOp};\nuse datafusion::sql::sqlparser::ast::{\n    Assignment, AssignmentTarget, Delete, FromTable, ObjectName, Statement, TableFactor,\n    TableObject, TableWithJoins, Update,\n};\nuse datafusion::sql::sqlparser::dialect::GenericDialect;\nuse datafusion::sql::sqlparser::parser::Parser;\nuse serde_json::Value as JsonValue;\n\nuse crate::LixError;\n\nuse super::assignment::validate_update_assignments;\nuse super::capability::validate_table_operation;\nuse super::table::{PublicSurface, PublicTableContracts};\n\n#[derive(Clone, Copy, Debug, Eq, PartialEq)]\npub(crate) enum DmlOperation {\n    Insert,\n    Update,\n    Delete,\n}\n\nimpl DmlOperation {\n    pub(crate) fn as_str(self) -> &'static str {\n        match self {\n            Self::Insert => \"insert\",\n            Self::Update => \"update\",\n            Self::Delete => \"delete\",\n        }\n    }\n}\n\npub(crate) fn validate_sql(sql: &str, visible_schemas: &[JsonValue]) -> Result<(), LixError> {\n    let statements = Parser::parse_sql(&GenericDialect {}, sql).map_err(|error| {\n        LixError::new(\n            LixError::CODE_PARSE_ERROR,\n            format!(\"sql2 SQL parse error: {error}\"),\n        )\n    })?;\n    let [statement] = statements.as_slice() else {\n        return Ok(());\n    };\n    let contracts = PublicTableContracts::new(visible_schemas)?;\n    validate_statement(statement, &contracts)\n}\n\npub(crate) fn validate_plan(\n    plan: &LogicalPlan,\n    visible_schemas: &[JsonValue],\n) -> Result<(), LixError> {\n    let contracts = PublicTableContracts::new(visible_schemas)?;\n    validate_plan_with_contracts(plan, &contracts)\n}\n\nfn validate_plan_with_contracts(\n    plan: &LogicalPlan,\n    contracts: &PublicTableContracts,\n) -> Result<(), LixError> {\n    if let LogicalPlan::Dml(dml) = plan {\n        let surface = PublicSurface::named(dml.table_name.table());\n        validate_table_operation(&surface, 
operation_from_write_op(&dml.op), contracts)?;\n    }\n    for input in plan.inputs() {\n        validate_plan_with_contracts(input, contracts)?;\n    }\n    Ok(())\n}\n\nfn operation_from_write_op(op: &WriteOp) -> DmlOperation {\n    match op {\n        WriteOp::Insert(_) | WriteOp::Ctas => DmlOperation::Insert,\n        WriteOp::Update => DmlOperation::Update,\n        WriteOp::Delete | WriteOp::Truncate => DmlOperation::Delete,\n    }\n}\n\nfn validate_statement(\n    statement: &Statement,\n    contracts: &PublicTableContracts,\n) -> Result<(), LixError> {\n    match statement {\n        Statement::Insert(insert) => {\n            let Some(table_name) = insert_target_name(&insert.table) else {\n                return Ok(());\n            };\n            let surface = PublicSurface::named(table_name);\n            validate_table_operation(&surface, DmlOperation::Insert, contracts)\n        }\n        Statement::Update(update) => validate_update(update, contracts),\n        Statement::Delete(delete) => validate_delete(delete, contracts),\n        Statement::Explain { statement, .. 
} => validate_statement(statement, contracts),\n        _ => Ok(()),\n    }\n}\n\nfn validate_update(update: &Update, contracts: &PublicTableContracts) -> Result<(), LixError> {\n    let Some(table_name) = table_with_joins_target_name(&update.table) else {\n        return Ok(());\n    };\n    let surface = PublicSurface::named(table_name);\n    validate_table_operation(&surface, DmlOperation::Update, contracts)?;\n    validate_update_assignments(\n        &surface,\n        assignment_column_names(&update.assignments)?,\n        contracts,\n    )\n}\n\nfn validate_delete(delete: &Delete, contracts: &PublicTableContracts) -> Result<(), LixError> {\n    for table in delete_from_tables(delete) {\n        let Some(table_name) = table_with_joins_target_name(table) else {\n            continue;\n        };\n        let surface = PublicSurface::named(table_name);\n        validate_table_operation(&surface, DmlOperation::Delete, contracts)?;\n    }\n    Ok(())\n}\n\nfn delete_from_tables(delete: &Delete) -> &[TableWithJoins] {\n    match &delete.from {\n        FromTable::WithFromKeyword(tables) | FromTable::WithoutKeyword(tables) => tables,\n    }\n}\n\nfn assignment_column_names(assignments: &[Assignment]) -> Result<Vec<String>, LixError> {\n    let mut columns = Vec::new();\n    for assignment in assignments {\n        match &assignment.target {\n            AssignmentTarget::ColumnName(name) => {\n                if let Some(column) = object_name_leaf(name) {\n                    columns.push(column);\n                }\n            }\n            AssignmentTarget::Tuple(names) => {\n                for name in names {\n                    if let Some(column) = object_name_leaf(name) {\n                        columns.push(column);\n                    }\n                }\n            }\n        }\n    }\n    Ok(columns)\n}\n\nfn insert_target_name(table: &TableObject) -> Option<String> {\n    match table {\n        TableObject::TableName(name) => 
object_name_leaf(name),\n        _ => None,\n    }\n}\n\nfn table_with_joins_target_name(table: &TableWithJoins) -> Option<String> {\n    match &table.relation {\n        TableFactor::Table { name, .. } => object_name_leaf(name),\n        _ => None,\n    }\n}\n\nfn object_name_leaf(name: &ObjectName) -> Option<String> {\n    name.0\n        .last()\n        .and_then(|part| part.as_ident())\n        .map(|ident| ident.value.to_ascii_lowercase())\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/public_bind/mod.rs",
    "content": "mod assignment;\nmod capability;\nmod dml;\nmod table;\n\nuse datafusion::logical_expr::LogicalPlan;\nuse serde_json::Value as JsonValue;\n\nuse crate::LixError;\n\npub(crate) use dml::DmlOperation;\n\npub(crate) fn validate_public_dml_sql(\n    sql: &str,\n    visible_schemas: &[JsonValue],\n) -> Result<(), LixError> {\n    dml::validate_sql(sql, visible_schemas)\n}\n\npub(crate) fn validate_public_dml_plan(\n    plan: &LogicalPlan,\n    visible_schemas: &[JsonValue],\n) -> Result<(), LixError> {\n    dml::validate_plan(plan, visible_schemas)\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/public_bind/table.rs",
    "content": "use std::collections::{BTreeMap, BTreeSet};\n\nuse serde_json::Value as JsonValue;\n\nuse crate::schema::schema_key_from_definition;\nuse crate::LixError;\n\n#[derive(Clone, Copy, Debug, Eq, PartialEq)]\npub(crate) enum Capability {\n    Allowed,\n    ReadOnly(&'static str),\n    Unsupported(&'static str),\n}\n\n#[derive(Clone, Debug)]\npub(crate) struct ColumnContract {\n    pub(crate) writable: bool,\n}\n\n#[derive(Clone, Debug)]\npub(crate) struct TableContract {\n    pub(crate) insert: Capability,\n    pub(crate) update: Capability,\n    pub(crate) delete: Capability,\n    pub(crate) columns: BTreeMap<String, ColumnContract>,\n}\n\nimpl TableContract {\n    pub(crate) fn operation(&self, operation: super::DmlOperation) -> Capability {\n        match operation {\n            super::DmlOperation::Insert => self.insert,\n            super::DmlOperation::Update => self.update,\n            super::DmlOperation::Delete => self.delete,\n        }\n    }\n\n    pub(crate) fn column(&self, column: &str) -> Option<&ColumnContract> {\n        self.columns.get(column)\n    }\n}\n\n#[derive(Clone, Debug, Eq, PartialEq, Ord, PartialOrd)]\npub(crate) struct PublicSurface {\n    name: String,\n}\n\nimpl PublicSurface {\n    pub(crate) fn named(name: impl Into<String>) -> Self {\n        Self {\n            name: name.into().to_ascii_lowercase(),\n        }\n    }\n\n    pub(crate) fn name(&self) -> &str {\n        &self.name\n    }\n}\n\n#[derive(Clone, Debug)]\npub(crate) struct PublicTableContracts {\n    contracts: BTreeMap<String, TableContract>,\n}\n\nimpl PublicTableContracts {\n    pub(crate) fn new(visible_schemas: &[JsonValue]) -> Result<Self, LixError> {\n        let mut contracts = builtin_contracts();\n        for schema in visible_schemas {\n            let schema_key = schema_key_from_definition(schema)?.schema_key;\n            contracts.insert(\n                format!(\"{}_history\", schema_key.to_ascii_lowercase()),\n                
history_contract(),\n            );\n        }\n        Ok(Self { contracts })\n    }\n\n    pub(crate) fn get(&self, surface: &PublicSurface) -> Option<&TableContract> {\n        self.contracts.get(surface.name())\n    }\n}\n\nfn builtin_contracts() -> BTreeMap<String, TableContract> {\n    let mut contracts = BTreeMap::new();\n\n    for table in [\n        \"lix_change\",\n        \"lix_commit\",\n        \"lix_commit_by_version\",\n        \"lix_commit_edge\",\n        \"lix_commit_edge_by_version\",\n        \"lix_change_set\",\n        \"lix_change_set_by_version\",\n        \"lix_change_set_element\",\n        \"lix_change_set_element_by_version\",\n    ] {\n        contracts.insert(table.to_string(), commit_graph_contract());\n    }\n\n    for table in [\n        \"lix_state_history\",\n        \"lix_file_history\",\n        \"lix_directory_history\",\n    ] {\n        contracts.insert(table.to_string(), history_contract());\n    }\n\n    contracts.insert(\n        \"lix_registered_schema\".to_string(),\n        TableContract {\n            insert: Capability::Allowed,\n            update: Capability::Allowed,\n            delete: Capability::Unsupported(\n                \"lix_registered_schema deletion is not supported; register an amended schema instead\",\n            ),\n            columns: columns(&[\"value\", \"lixcol_metadata\", \"lixcol_global\", \"lixcol_untracked\"]),\n        },\n    );\n\n    contracts.insert(\n        \"lix_key_value\".to_string(),\n        TableContract {\n            insert: Capability::Allowed,\n            update: Capability::Allowed,\n            delete: Capability::Allowed,\n            columns: columns(&[\"key\", \"value\", \"lixcol_metadata\"]),\n        },\n    );\n\n    contracts\n}\n\nfn commit_graph_contract() -> TableContract {\n    TableContract {\n        insert: Capability::ReadOnly(\n            \"Commit graph and changelog surfaces are read-only; Lix creates them when transactions commit.\",\n        ),\n     
   update: Capability::ReadOnly(\n            \"Commit graph and changelog surfaces are read-only; Lix creates them when transactions commit.\",\n        ),\n        delete: Capability::ReadOnly(\n            \"Commit graph and changelog surfaces are read-only; Lix creates them when transactions commit.\",\n        ),\n        columns: BTreeMap::new(),\n    }\n}\n\nfn history_contract() -> TableContract {\n    TableContract {\n        insert: Capability::ReadOnly(\n            \"History views are query-only; write to the live surface such as lix_state, lix_file, lix_directory, or the typed entity table.\",\n        ),\n        update: Capability::ReadOnly(\n            \"History views are query-only; write to the live surface such as lix_state, lix_file, lix_directory, or the typed entity table.\",\n        ),\n        delete: Capability::ReadOnly(\n            \"History views are query-only; write to the live surface such as lix_state, lix_file, lix_directory, or the typed entity table.\",\n        ),\n        columns: BTreeMap::new(),\n    }\n}\n\nfn columns(writable: &[&str]) -> BTreeMap<String, ColumnContract> {\n    let writable = writable.iter().copied().collect::<BTreeSet<_>>();\n    writable\n        .into_iter()\n        .map(|column| (column.to_string(), ColumnContract { writable: true }))\n        .collect()\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/read_only.rs",
    "content": "use datafusion::error::DataFusionError;\n\nuse crate::transaction::types::TransactionWriteRow;\nuse crate::LixError;\n\npub(crate) fn reject_read_only_entity_surface(\n    schema_key: &str,\n    action: &str,\n) -> Result<(), DataFusionError> {\n    if schema_key == \"lix_directory_descriptor\" {\n        return Err(read_only_error(\n            action,\n            schema_key,\n            \"Use the writable lix_directory surface to create, update, or delete directories.\",\n        ));\n    }\n    if let Some(message) = read_only_schema_message(schema_key) {\n        return Err(read_only_error(action, schema_key, message));\n    }\n    Ok(())\n}\n\npub(crate) fn reject_read_only_stage_rows(\n    rows: &[TransactionWriteRow],\n    action: &str,\n) -> Result<(), DataFusionError> {\n    for row in rows {\n        if let Some(message) = read_only_schema_message(&row.schema_key) {\n            return Err(read_only_error(action, &row.schema_key, message));\n        }\n    }\n    Ok(())\n}\n\nfn read_only_error(action: &str, schema_key: &str, message: &'static str) -> DataFusionError {\n    super::error::lix_error_to_datafusion_error(\n        LixError::new(\n            LixError::CODE_READ_ONLY,\n            format!(\"{action} cannot write read-only surface '{schema_key}'\"),\n        )\n        .with_hint(message),\n    )\n}\n\nfn read_only_schema_message(schema_key: &str) -> Option<&'static str> {\n    match schema_key {\n        \"lix_version_descriptor\" | \"lix_version_ref\" => {\n            Some(\"Use the writable lix_version surface to create, update, or delete versions.\")\n        }\n        \"lix_file_descriptor\" => {\n            Some(\"Use the writable lix_file surface to create, update, or delete files.\")\n        }\n        \"lix_binary_blob_ref\" => {\n            Some(\"Use the writable lix_file data column to create, update, or delete file contents.\")\n        }\n        \"lix_commit\"\n        | \"lix_commit_edge\"\n        | 
\"lix_change\" => Some(\n            \"Commit graph and changelog surfaces are read-only; Lix creates them when transactions commit.\",\n        ),\n        _ => None,\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/record_batch.rs",
    "content": "use datafusion::arrow::array::ArrayRef;\nuse datafusion::arrow::datatypes::SchemaRef;\nuse datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions};\nuse datafusion::common::{DataFusionError, Result};\n\npub(crate) fn record_batch_with_row_count(\n    schema: SchemaRef,\n    columns: Vec<ArrayRef>,\n    row_count: usize,\n) -> Result<RecordBatch> {\n    if schema.fields().is_empty() {\n        let options = RecordBatchOptions::new().with_row_count(Some(row_count));\n        return RecordBatch::try_new_with_options(schema, columns, &options)\n            .map_err(DataFusionError::from);\n    }\n    RecordBatch::try_new(schema, columns).map_err(DataFusionError::from)\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/result_metadata.rs",
    "content": "use std::collections::HashMap;\n\nuse datafusion::arrow::datatypes::Field;\n\npub(crate) const LIX_VALUE_TYPE_METADATA_KEY: &str = \"lix.value_type\";\npub(crate) const LIX_VALUE_TYPE_JSON: &str = \"json\";\n\npub(crate) fn json_field(name: impl Into<String>, nullable: bool) -> Field {\n    Field::new(name, datafusion::arrow::datatypes::DataType::Utf8, nullable)\n        .with_metadata(json_field_metadata_map())\n}\n\npub(crate) fn mark_json_field(field: Field) -> Field {\n    field.with_metadata(json_field_metadata_map())\n}\n\npub(crate) fn field_is_json(field: &Field) -> bool {\n    field\n        .metadata()\n        .get(LIX_VALUE_TYPE_METADATA_KEY)\n        .is_some_and(|value| value == LIX_VALUE_TYPE_JSON)\n}\n\nfn json_field_metadata_map() -> HashMap<String, String> {\n    HashMap::from([(\n        LIX_VALUE_TYPE_METADATA_KEY.to_string(),\n        LIX_VALUE_TYPE_JSON.to_string(),\n    )])\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/runtime.rs",
    "content": "use std::sync::Arc;\n\nuse datafusion::arrow::record_batch::RecordBatch;\nuse datafusion::dataframe::DataFrame;\nuse datafusion::error::Result;\nuse datafusion::execution::TaskContext;\nuse datafusion::physical_plan::{ExecutionPlan, ExecutionPlanProperties};\nuse futures_util::TryStreamExt;\n\npub(crate) async fn collect_dataframe(dataframe: DataFrame) -> Result<Vec<RecordBatch>> {\n    let task_ctx = Arc::new(dataframe.task_ctx());\n    let plan = dataframe.create_physical_plan().await?;\n    collect_input_plan(plan, task_ctx).await\n}\n\npub(crate) async fn collect_input_plan(\n    plan: Arc<dyn ExecutionPlan>,\n    task_ctx: Arc<TaskContext>,\n) -> Result<Vec<RecordBatch>> {\n    validate_physical_plan(&plan)?;\n    let partition_count = plan.output_partitioning().partition_count();\n    let mut batches = Vec::new();\n    for partition in 0..partition_count {\n        let partition_batches = plan\n            .execute(partition, Arc::clone(&task_ctx))?\n            .try_collect::<Vec<_>>()\n            .await?;\n        batches.extend(partition_batches);\n    }\n    Ok(batches)\n}\n\n#[cfg(not(target_arch = \"wasm32\"))]\nfn validate_physical_plan(_plan: &Arc<dyn ExecutionPlan>) -> Result<()> {\n    Ok(())\n}\n\n#[cfg(target_arch = \"wasm32\")]\nfn validate_physical_plan(plan: &Arc<dyn ExecutionPlan>) -> Result<()> {\n    let operator_name = plan.name();\n    if is_wasm_unsafe_operator(operator_name) {\n        return Err(datafusion::error::DataFusionError::Plan(format!(\n            \"SQL physical operator '{operator_name}' is not supported by the WebAssembly runtime yet\"\n        )));\n    }\n\n    for child in plan.children() {\n        validate_physical_plan(child)?;\n    }\n\n    Ok(())\n}\n\n#[cfg(target_arch = \"wasm32\")]\nfn is_wasm_unsafe_operator(operator_name: &str) -> bool {\n    matches!(\n        operator_name,\n        \"CoalescePartitionsExec\" | \"RepartitionExec\" | \"SortPreservingMergeExec\"\n    )\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/session.rs",
    "content": "use std::sync::Arc;\n\nuse datafusion::prelude::{SessionConfig, SessionContext};\n\nuse crate::LixError;\n\nuse super::change_provider::register_lix_change_provider;\nuse super::directory_history_provider::register_lix_directory_history_provider;\nuse super::directory_provider::{\n    register_lix_directory_providers, register_lix_directory_write_providers,\n};\nuse super::entity_provider::{register_entity_providers, register_entity_write_providers};\nuse super::file_history_provider::register_lix_file_history_provider;\nuse super::file_provider::{register_lix_file_providers, register_lix_file_write_providers};\nuse super::history_provider::register_history_providers;\nuse super::lix_state_provider::{register_lix_state_providers, register_lix_state_write_providers};\nuse super::udfs::register_sql2_functions;\nuse super::version_provider::{register_lix_version_provider, register_lix_version_write_provider};\nuse super::{SqlExecutionContext, SqlWriteContext, SqlWriteExecutionContext};\n\npub(crate) async fn build_read_session(\n    ctx: &dyn SqlExecutionContext,\n) -> Result<SessionContext, LixError> {\n    let session = new_sql_session_context();\n    let version_ref = ctx.version_ref();\n    let active_version_commit_id = version_ref\n        .load_head(ctx.active_version_id())\n        .await?\n        .map(|head| head.commit_id);\n    register_sql2_functions(&session, ctx.functions(), active_version_commit_id);\n    register_lix_state_providers(\n        &session,\n        ctx.active_version_id(),\n        ctx.live_state(),\n        Arc::clone(&version_ref),\n    )\n    .await?;\n    register_lix_version_provider(&session, ctx.live_state(), Arc::clone(&version_ref)).await?;\n    let commit_store_query_source = ctx.commit_store_query_source();\n    register_lix_change_provider(&session, commit_store_query_source.clone()).await?;\n    let state_history_commit_graph = ctx.commit_graph();\n    register_history_providers(\n        &session,\n        
state_history_commit_graph,\n        commit_store_query_source.clone(),\n    )\n    .await?;\n    let file_history_commit_graph = ctx.commit_graph();\n    register_lix_file_history_provider(\n        &session,\n        file_history_commit_graph,\n        commit_store_query_source.clone(),\n        ctx.blob_reader(),\n    )\n    .await?;\n    let directory_history_commit_graph = ctx.commit_graph();\n    register_lix_directory_history_provider(\n        &session,\n        directory_history_commit_graph,\n        commit_store_query_source.clone(),\n    )\n    .await?;\n    let entity_commit_graph = Arc::new(tokio::sync::Mutex::new(ctx.commit_graph()));\n    register_lix_directory_providers(\n        &session,\n        ctx.active_version_id(),\n        ctx.live_state(),\n        Arc::clone(&version_ref),\n        ctx.functions(),\n    )\n    .await?;\n    register_lix_file_providers(\n        &session,\n        ctx.active_version_id(),\n        ctx.live_state(),\n        Arc::clone(&version_ref),\n        ctx.blob_reader(),\n        ctx.functions(),\n    )\n    .await?;\n    register_entity_providers(\n        &session,\n        ctx.active_version_id(),\n        ctx.live_state(),\n        Arc::clone(&version_ref),\n        entity_commit_graph,\n        commit_store_query_source,\n        &ctx.list_visible_schemas()?,\n    )\n    .await?;\n\n    Ok(session)\n}\n\npub(crate) async fn build_write_session(\n    ctx: &mut dyn SqlWriteExecutionContext,\n) -> Result<SessionContext, LixError> {\n    let session = new_sql_session_context();\n    let write_ctx = SqlWriteContext::new(ctx);\n    let active_version_commit_id = write_ctx\n        .load_version_head(&write_ctx.active_version_id())\n        .await?;\n    register_sql2_functions(&session, write_ctx.functions(), active_version_commit_id);\n\n    register_lix_state_write_providers(&session, write_ctx.clone()).await?;\n    register_lix_version_write_provider(&session, write_ctx.clone()).await?;\n\n    
register_lix_directory_write_providers(&session, write_ctx.clone()).await?;\n    register_lix_file_write_providers(&session, write_ctx.clone()).await?;\n    register_entity_write_providers(\n        &session,\n        write_ctx.clone(),\n        &write_ctx.list_visible_schemas()?,\n    )\n    .await?;\n\n    Ok(session)\n}\n\npub(crate) fn new_sql_session_context() -> SessionContext {\n    SessionContext::new_with_config(\n        SessionConfig::new()\n            .with_information_schema(true)\n            .with_target_partitions(1)\n            .set_bool(\"datafusion.optimizer.repartition_aggregations\", false)\n            .set_bool(\"datafusion.optimizer.repartition_joins\", false)\n            .set_bool(\"datafusion.optimizer.repartition_sorts\", false)\n            .set_bool(\"datafusion.optimizer.repartition_windows\", false)\n            .set_bool(\"datafusion.optimizer.repartition_file_scans\", false)\n            .set_bool(\"datafusion.optimizer.enable_round_robin_repartition\", false),\n    )\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/udfs/common.rs",
    "content": "use std::sync::Arc;\n\nuse datafusion::arrow::array::{\n    Array, ArrayRef, BinaryArray, BooleanArray, Float32Array, Float64Array, Int16Array, Int32Array,\n    Int64Array, Int8Array, LargeBinaryArray, LargeStringArray, StringArray, UInt16Array,\n    UInt32Array, UInt64Array, UInt8Array,\n};\nuse datafusion::common::{plan_err, DataFusionError, Result};\nuse datafusion::logical_expr::ColumnarValue;\nuse serde_json::Value as JsonValue;\n\npub(super) fn scalar_inputs(args: &[ColumnarValue]) -> bool {\n    args.iter()\n        .all(|value| matches!(value, ColumnarValue::Scalar(_)))\n}\n\npub(super) fn json_value_to_serde(array: &dyn Array, row: usize) -> Result<Option<JsonValue>> {\n    let Some(raw) = text_like_value(array, row)? else {\n        return Ok(None);\n    };\n    serde_json::from_str::<JsonValue>(&raw)\n        .map(Some)\n        .map_err(|error| {\n            DataFusionError::Execution(format!(\n                \"JSON function expected valid JSON text in its first argument, got error: {error}\"\n            ))\n        })\n}\n\npub(super) fn text_like_value(array: &dyn Array, row: usize) -> Result<Option<String>> {\n    if let Some(array) = array.as_any().downcast_ref::<StringArray>() {\n        return Ok((!array.is_null(row)).then(|| array.value(row).to_string()));\n    }\n    if let Some(array) = array.as_any().downcast_ref::<LargeStringArray>() {\n        return Ok((!array.is_null(row)).then(|| array.value(row).to_string()));\n    }\n    if let Some(value) = numeric_value(array, row)? 
{\n        return Ok(Some(value));\n    }\n    if let Some(array) = array.as_any().downcast_ref::<BooleanArray>() {\n        return Ok((!array.is_null(row)).then(|| {\n            if array.value(row) {\n                \"true\".to_string()\n            } else {\n                \"false\".to_string()\n            }\n        }));\n    }\n    if let Some(array) = array.as_any().downcast_ref::<BinaryArray>() {\n        return Ok(\n            (!array.is_null(row)).then(|| String::from_utf8_lossy(array.value(row)).to_string())\n        );\n    }\n    if let Some(array) = array.as_any().downcast_ref::<LargeBinaryArray>() {\n        return Ok(\n            (!array.is_null(row)).then(|| String::from_utf8_lossy(array.value(row)).to_string())\n        );\n    }\n    Err(DataFusionError::Execution(format!(\n        \"unsupported argument type for JSON/text function: {:?}\",\n        array.data_type()\n    )))\n}\n\npub(super) fn numeric_value(array: &dyn Array, row: usize) -> Result<Option<String>> {\n    macro_rules! 
numeric_array {\n        ($ty:ty) => {\n            if let Some(array) = array.as_any().downcast_ref::<$ty>() {\n                return Ok((!array.is_null(row)).then(|| array.value(row).to_string()));\n            }\n        };\n    }\n\n    numeric_array!(Int8Array);\n    numeric_array!(Int16Array);\n    numeric_array!(Int32Array);\n    numeric_array!(Int64Array);\n    numeric_array!(UInt8Array);\n    numeric_array!(UInt16Array);\n    numeric_array!(UInt32Array);\n    numeric_array!(UInt64Array);\n    numeric_array!(Float32Array);\n    numeric_array!(Float64Array);\n    Ok(None)\n}\n\npub(super) fn decode_utf8_value(array: &dyn Array, row: usize) -> Result<Option<String>> {\n    if let Some(array) = array.as_any().downcast_ref::<BinaryArray>() {\n        return (!array.is_null(row))\n            .then(|| String::from_utf8(array.value(row).to_vec()))\n            .transpose()\n            .map_err(|error| {\n                DataFusionError::Execution(format!(\n                    \"lix_text_decode() expected valid UTF8 bytes: {error}\"\n                ))\n            });\n    }\n    if let Some(array) = array.as_any().downcast_ref::<LargeBinaryArray>() {\n        return (!array.is_null(row))\n            .then(|| String::from_utf8(array.value(row).to_vec()))\n            .transpose()\n            .map_err(|error| {\n                DataFusionError::Execution(format!(\n                    \"lix_text_decode() expected valid UTF8 bytes: {error}\"\n                ))\n            });\n    }\n    if let Some(array) = array.as_any().downcast_ref::<StringArray>() {\n        return Ok((!array.is_null(row)).then(|| array.value(row).to_string()));\n    }\n    if let Some(array) = array.as_any().downcast_ref::<LargeStringArray>() {\n        return Ok((!array.is_null(row)).then(|| array.value(row).to_string()));\n    }\n    Err(DataFusionError::Execution(format!(\n        \"lix_text_decode() expected Binary or Utf8, got {:?}\",\n        array.data_type()\n    
)))\n}\n\npub(super) fn encode_utf8_value(array: &dyn Array, row: usize) -> Result<Option<Vec<u8>>> {\n    if let Some(array) = array.as_any().downcast_ref::<StringArray>() {\n        return Ok((!array.is_null(row)).then(|| array.value(row).as_bytes().to_vec()));\n    }\n    if let Some(array) = array.as_any().downcast_ref::<LargeStringArray>() {\n        return Ok((!array.is_null(row)).then(|| array.value(row).as_bytes().to_vec()));\n    }\n    if let Some(array) = array.as_any().downcast_ref::<BinaryArray>() {\n        return Ok((!array.is_null(row)).then(|| array.value(row).to_vec()));\n    }\n    if let Some(array) = array.as_any().downcast_ref::<LargeBinaryArray>() {\n        return Ok((!array.is_null(row)).then(|| array.value(row).to_vec()));\n    }\n    Err(DataFusionError::Execution(format!(\n        \"lix_text_encode() expected Utf8 or Binary, got {:?}\",\n        array.data_type()\n    )))\n}\n\npub(super) fn validate_utf8_encoding_arg(\n    fn_name: &str,\n    encoding: Option<&ColumnarValue>,\n) -> Result<()> {\n    let Some(encoding) = encoding else {\n        return Ok(());\n    };\n    let arrays = ColumnarValue::values_to_arrays(std::slice::from_ref(encoding))?;\n    let array = &arrays[0];\n    if array.len() == 0 {\n        return Ok(());\n    }\n    let Some(value) = text_like_value(array.as_ref(), 0)? else {\n        return Ok(());\n    };\n    let normalized = value.trim().to_ascii_uppercase().replace('-', \"\");\n    if normalized == \"UTF8\" {\n        Ok(())\n    } else {\n        plan_err!(\"{fn_name}() only supports UTF8 encoding, got '{value}'\")\n    }\n}\n\npub(super) fn extract_json_path(\n    fn_name: &str,\n    arrays: &[ArrayRef],\n    row: usize,\n) -> Result<Option<JsonValue>> {\n    let Some(mut current) = json_value_to_serde(arrays[0].as_ref(), row)? else {\n        return Ok(None);\n    };\n\n    for path in &arrays[1..] {\n        let Some(segment) = json_path_segment(fn_name, path.as_ref(), row)? 
else {\n            return Ok(None);\n        };\n        let next = match segment {\n            JsonPathSegment::Key(key) => current.get(&key).cloned(),\n            JsonPathSegment::Index(index) => current\n                .as_array()\n                .and_then(|values| values.get(index))\n                .cloned(),\n        };\n        let Some(value) = next else {\n            return Ok(None);\n        };\n        current = value;\n    }\n\n    Ok(Some(current))\n}\n\npub(super) fn json_text_value(value: &JsonValue) -> Result<String> {\n    match value {\n        JsonValue::String(text) => Ok(text.clone()),\n        JsonValue::Number(number) => Ok(number.to_string()),\n        JsonValue::Bool(boolean) => Ok(if *boolean {\n            \"true\".to_string()\n        } else {\n            \"false\".to_string()\n        }),\n        JsonValue::Array(_) | JsonValue::Object(_) => {\n            serde_json::to_string(value).map_err(|error| {\n                DataFusionError::Execution(format!(\n                    \"lix_json_get_text() could not render JSON value: {error}\"\n                ))\n            })\n        }\n        JsonValue::Null => Ok(\"null\".to_string()),\n    }\n}\n\npub(super) fn json_json_value(value: &JsonValue) -> Result<String> {\n    serde_json::to_string(value).map_err(|error| {\n        DataFusionError::Execution(format!(\n            \"lix_json_get() could not render JSON value: {error}\"\n        ))\n    })\n}\n\nenum JsonPathSegment {\n    Key(String),\n    Index(usize),\n}\n\nfn json_path_segment(\n    fn_name: &str,\n    array: &dyn Array,\n    row: usize,\n) -> Result<Option<JsonPathSegment>> {\n    if let Some(array) = array.as_any().downcast_ref::<StringArray>() {\n        if array.is_null(row) {\n            return Ok(None);\n        }\n        let value = array.value(row).to_string();\n        validate_json_path_key_segment(fn_name, &value)?;\n        return Ok(Some(JsonPathSegment::Key(value)));\n    }\n    if let Some(array) = 
array.as_any().downcast_ref::<LargeStringArray>() {\n        if array.is_null(row) {\n            return Ok(None);\n        }\n        let value = array.value(row).to_string();\n        validate_json_path_key_segment(fn_name, &value)?;\n        return Ok(Some(JsonPathSegment::Key(value)));\n    }\n    macro_rules! index_array {\n        ($ty:ty) => {\n            if let Some(array) = array.as_any().downcast_ref::<$ty>() {\n                if array.is_null(row) {\n                    return Ok(None);\n                }\n                let value = array.value(row);\n                let index = usize::try_from(value).map_err(|_| {\n                    DataFusionError::Execution(format!(\n                        \"{fn_name}() path indexes must be non-negative integers\"\n                    ))\n                })?;\n                return Ok(Some(JsonPathSegment::Index(index)));\n            }\n        };\n    }\n    index_array!(UInt8Array);\n    index_array!(UInt16Array);\n    index_array!(UInt32Array);\n    index_array!(UInt64Array);\n    index_array!(Int8Array);\n    index_array!(Int16Array);\n    index_array!(Int32Array);\n    index_array!(Int64Array);\n    Err(DataFusionError::Execution(format!(\n        \"{fn_name}() path arguments must be strings or non-negative integers, got {:?}\",\n        array.data_type()\n    )))\n}\n\nfn validate_json_path_key_segment(fn_name: &str, value: &str) -> Result<()> {\n    if value == \"$\" || value.starts_with(\"$.\") || value.starts_with(\"$[\") || value.starts_with('/')\n    {\n        return Err(DataFusionError::Execution(format!(\n            \"{fn_name}() uses variadic path segments, not JSONPath or JSON Pointer; got '{value}'\"\n        )));\n    }\n    Ok(())\n}\n\npub(super) fn binary_array_from_owned(values: &[Option<Vec<u8>>]) -> BinaryArray {\n    let refs = values\n        .iter()\n        .map(|value| value.as_deref())\n        .collect::<Vec<_>>();\n    BinaryArray::from(refs)\n}\n\npub(super) fn array_ref<T: 
Array + 'static>(array: T) -> ArrayRef {\n    Arc::new(array)\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/udfs/lix_active_version_commit_id.rs",
    "content": "use std::any::Any;\n\nuse datafusion::arrow::datatypes::DataType;\nuse datafusion::common::{plan_err, Result, ScalarValue};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\n\n#[derive(Clone, PartialEq, Eq, Hash)]\npub(super) struct LixActiveVersionCommitId {\n    commit_id: Option<String>,\n}\n\nimpl LixActiveVersionCommitId {\n    pub(super) fn new(commit_id: Option<String>) -> Self {\n        Self { commit_id }\n    }\n}\n\nimpl std::fmt::Debug for LixActiveVersionCommitId {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixActiveVersionCommitId\").finish()\n    }\n}\n\nimpl ScalarUDFImpl for LixActiveVersionCommitId {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"lix_active_version_commit_id\"\n    }\n\n    fn signature(&self) -> &Signature {\n        static SIGNATURE: std::sync::LazyLock<Signature> =\n            std::sync::LazyLock::new(|| Signature::nullary(Volatility::Stable));\n        &SIGNATURE\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Utf8)\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        if !args.args.is_empty() {\n            return plan_err!(\"lix_active_version_commit_id requires no arguments\");\n        }\n        Ok(ColumnarValue::Scalar(ScalarValue::Utf8(\n            self.commit_id.clone(),\n        )))\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/udfs/lix_empty_blob.rs",
    "content": "use std::any::Any;\n\nuse datafusion::arrow::datatypes::DataType;\nuse datafusion::common::{Result, ScalarValue};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\n\n#[derive(Debug, Clone, PartialEq, Eq, Hash)]\npub(super) struct LixEmptyBlob;\n\nimpl ScalarUDFImpl for LixEmptyBlob {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"lix_empty_blob\"\n    }\n\n    fn signature(&self) -> &Signature {\n        static SIGNATURE: std::sync::LazyLock<Signature> =\n            std::sync::LazyLock::new(|| Signature::nullary(Volatility::Immutable));\n        &SIGNATURE\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Binary)\n    }\n\n    fn invoke_with_args(&self, _args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        Ok(ColumnarValue::Scalar(ScalarValue::Binary(Some(Vec::new()))))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::super::test_support::single_binary;\n\n    #[tokio::test]\n    async fn returns_empty_binary_value() {\n        assert_eq!(\n            single_binary(\"SELECT lix_empty_blob()\").await,\n            Some(Vec::new())\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/udfs/lix_json.rs",
    "content": "use std::any::Any;\nuse std::sync::Arc;\n\nuse datafusion::arrow::array::{Array, StringArray};\nuse datafusion::arrow::datatypes::{DataType, FieldRef};\nuse datafusion::common::{plan_err, DataFusionError, Result, ScalarValue};\nuse datafusion::logical_expr::{\n    ColumnarValue, ReturnFieldArgs, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\nuse serde_json::Value as JsonValue;\n\nuse crate::sql2::result_metadata::json_field;\n\nuse super::common::{scalar_inputs, text_like_value};\n\n#[derive(Debug, Clone, PartialEq, Eq, Hash)]\npub(super) struct LixJson;\n\nimpl ScalarUDFImpl for LixJson {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"lix_json\"\n    }\n\n    fn signature(&self) -> &Signature {\n        static SIGNATURE: std::sync::LazyLock<Signature> =\n            std::sync::LazyLock::new(|| Signature::any(1, Volatility::Immutable));\n        &SIGNATURE\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Utf8)\n    }\n\n    fn return_field_from_args(&self, _args: ReturnFieldArgs) -> Result<FieldRef> {\n        Ok(Arc::new(json_field(self.name(), true)))\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        if args.args.len() != 1 {\n            return plan_err!(\"lix_json requires exactly 1 argument\");\n        }\n        let scalar_inputs = scalar_inputs(&args.args);\n        let arrays = ColumnarValue::values_to_arrays(&args.args)?;\n        let input = &arrays[0];\n        let len = input.len();\n        let mut values = Vec::with_capacity(len);\n        for row in 0..len {\n            values.push(json_value(input.as_ref(), row)?);\n        }\n        if scalar_inputs {\n            Ok(ColumnarValue::Scalar(ScalarValue::Utf8(\n                values.into_iter().next().flatten(),\n            )))\n        } else {\n            
Ok(ColumnarValue::Array(Arc::new(StringArray::from(values))))\n        }\n    }\n}\n\nfn json_value(array: &dyn Array, row: usize) -> Result<Option<String>> {\n    if matches!(array.data_type(), DataType::Null) {\n        return Ok(Some(\"null\".to_string()));\n    }\n    let Some(raw) = text_like_value(array, row)? else {\n        return Ok(Some(\"null\".to_string()));\n    };\n    let parsed = serde_json::from_str::<JsonValue>(&raw).map_err(|error| {\n        DataFusionError::Execution(format!(\n            \"lix_json() expected valid JSON text, got error: {error}\"\n        ))\n    })?;\n    Ok(Some(serde_json::to_string(&parsed).map_err(|error| {\n        DataFusionError::Execution(format!(\"lix_json() could not render JSON: {error}\"))\n    })?))\n}\n\n#[cfg(test)]\nmod tests {\n    use super::super::test_support::single_text;\n\n    #[tokio::test]\n    async fn canonicalizes_json_text() {\n        assert_eq!(\n            single_text(\"SELECT lix_json('{ \\\"name\\\" : \\\"Ada\\\" }')\").await,\n            Some(\"{\\\"name\\\":\\\"Ada\\\"}\".to_string())\n        );\n    }\n\n    #[tokio::test]\n    async fn null_input_returns_json_null() {\n        assert_eq!(\n            single_text(\"SELECT lix_json(NULL)\").await,\n            Some(\"null\".to_string())\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/udfs/lix_json_get.rs",
    "content": "use std::any::Any;\nuse std::sync::Arc;\n\nuse datafusion::arrow::array::StringArray;\nuse datafusion::arrow::datatypes::{DataType, FieldRef};\nuse datafusion::common::{plan_err, Result, ScalarValue};\nuse datafusion::logical_expr::{\n    ColumnarValue, ReturnFieldArgs, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\nuse serde_json::Value as JsonValue;\n\nuse crate::sql2::result_metadata::json_field;\n\nuse super::common::{extract_json_path, json_json_value, scalar_inputs};\n\n#[derive(Debug, Clone, PartialEq, Eq, Hash)]\npub(super) struct LixJsonGet {\n    signature: Signature,\n}\n\nimpl LixJsonGet {\n    pub(super) fn new() -> Self {\n        Self {\n            signature: Signature::variadic_any(Volatility::Immutable),\n        }\n    }\n}\n\nimpl ScalarUDFImpl for LixJsonGet {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"lix_json_get\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Utf8)\n    }\n\n    fn return_field_from_args(&self, _args: ReturnFieldArgs) -> Result<FieldRef> {\n        Ok(Arc::new(json_field(self.name(), true)))\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        if args.args.len() < 2 {\n            return plan_err!(\"lix_json_get requires at least 2 arguments\");\n        }\n\n        let scalar_inputs = scalar_inputs(&args.args);\n        let arrays = ColumnarValue::values_to_arrays(&args.args)?;\n        let len = arrays.first().map(|array| array.len()).unwrap_or(1);\n\n        let mut values = Vec::with_capacity(len);\n        for row in 0..len {\n            values.push(match extract_json_path(self.name(), &arrays, row)? 
{\n                None | Some(JsonValue::Null) => None,\n                Some(other) => Some(json_json_value(&other)?),\n            });\n        }\n        if scalar_inputs {\n            Ok(ColumnarValue::Scalar(ScalarValue::Utf8(\n                values.into_iter().next().flatten(),\n            )))\n        } else {\n            Ok(ColumnarValue::Array(Arc::new(StringArray::from(values))))\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::super::test_support::single_text;\n\n    #[tokio::test]\n    async fn returns_json_representation() {\n        assert_eq!(\n            single_text(\"SELECT lix_json_get('{\\\"name\\\":\\\"Ada\\\"}', 'name')\").await,\n            Some(\"\\\"Ada\\\"\".to_string())\n        );\n        assert_eq!(\n            single_text(\"SELECT lix_json_get('{\\\"tags\\\":[\\\"db\\\"]}', 'tags')\").await,\n            Some(\"[\\\"db\\\"]\".to_string())\n        );\n    }\n\n    #[tokio::test]\n    async fn missing_path_returns_null() {\n        assert_eq!(\n            single_text(\"SELECT lix_json_get('{\\\"name\\\":\\\"Ada\\\"}', 'missing')\").await,\n            None\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/udfs/lix_json_get_text.rs",
    "content": "use std::any::Any;\nuse std::sync::Arc;\n\nuse datafusion::arrow::array::StringArray;\nuse datafusion::arrow::datatypes::DataType;\nuse datafusion::common::{plan_err, Result, ScalarValue};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\nuse serde_json::Value as JsonValue;\n\nuse super::common::{extract_json_path, json_text_value, scalar_inputs};\n\n#[derive(Debug, Clone, PartialEq, Eq, Hash)]\npub(super) struct LixJsonGetText {\n    signature: Signature,\n}\n\nimpl LixJsonGetText {\n    pub(super) fn new() -> Self {\n        Self {\n            signature: Signature::variadic_any(Volatility::Immutable),\n        }\n    }\n}\n\nimpl ScalarUDFImpl for LixJsonGetText {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"lix_json_get_text\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Utf8)\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        if args.args.len() < 2 {\n            return plan_err!(\"lix_json_get_text requires at least 2 arguments\");\n        }\n\n        let scalar_inputs = scalar_inputs(&args.args);\n        let arrays = ColumnarValue::values_to_arrays(&args.args)?;\n        let len = arrays.first().map(|array| array.len()).unwrap_or(1);\n\n        let mut values = Vec::with_capacity(len);\n        for row in 0..len {\n            values.push(match extract_json_path(self.name(), &arrays, row)? 
{\n                None | Some(JsonValue::Null) => None,\n                Some(JsonValue::Bool(value)) => Some(if value {\n                    \"true\".to_string()\n                } else {\n                    \"false\".to_string()\n                }),\n                Some(JsonValue::String(value)) => Some(value),\n                Some(other) => Some(json_text_value(&other)?),\n            });\n        }\n        if scalar_inputs {\n            Ok(ColumnarValue::Scalar(ScalarValue::Utf8(\n                values.into_iter().next().flatten(),\n            )))\n        } else {\n            Ok(ColumnarValue::Array(Arc::new(StringArray::from(values))))\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::super::test_support::single_text;\n\n    #[tokio::test]\n    async fn returns_unwrapped_text() {\n        assert_eq!(\n            single_text(\"SELECT lix_json_get_text('{\\\"name\\\":\\\"Ada\\\"}', 'name')\").await,\n            Some(\"Ada\".to_string())\n        );\n        assert_eq!(\n            single_text(\"SELECT lix_json_get_text('{\\\"active\\\":true}', 'active')\").await,\n            Some(\"true\".to_string())\n        );\n    }\n\n    #[tokio::test]\n    async fn missing_path_returns_null() {\n        assert_eq!(\n            single_text(\"SELECT lix_json_get_text('{\\\"name\\\":\\\"Ada\\\"}', 'missing')\").await,\n            None\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/udfs/lix_text_decode.rs",
    "content": "use std::any::Any;\nuse std::sync::Arc;\n\nuse datafusion::arrow::array::StringArray;\nuse datafusion::arrow::datatypes::DataType;\nuse datafusion::common::{plan_err, Result, ScalarValue};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\n\nuse super::common::{decode_utf8_value, scalar_inputs, validate_utf8_encoding_arg};\n\n#[derive(Debug, Clone, PartialEq, Eq, Hash)]\npub(super) struct LixTextDecode {\n    signature: Signature,\n}\n\nimpl LixTextDecode {\n    pub(super) fn new() -> Self {\n        Self {\n            signature: Signature::one_of(\n                vec![Signature::any(1, Volatility::Immutable).type_signature],\n                Volatility::Immutable,\n            ),\n        }\n    }\n}\n\nimpl ScalarUDFImpl for LixTextDecode {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"lix_text_decode\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Utf8)\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        if !(1..=2).contains(&args.args.len()) {\n            return plan_err!(\"lix_text_decode requires 1 or 2 arguments\");\n        }\n        validate_utf8_encoding_arg(self.name(), args.args.get(1))?;\n\n        let scalar_inputs = scalar_inputs(&args.args);\n        let arrays = ColumnarValue::values_to_arrays(&args.args)?;\n        let input = &arrays[0];\n        let len = input.len();\n\n        let mut values = Vec::with_capacity(len);\n        for row in 0..len {\n            values.push(decode_utf8_value(input.as_ref(), row)?);\n        }\n        if scalar_inputs {\n            Ok(ColumnarValue::Scalar(ScalarValue::Utf8(\n                values.into_iter().next().flatten(),\n            )))\n        } else {\n            
Ok(ColumnarValue::Array(Arc::new(StringArray::from(values))))\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::super::test_support::single_text;\n\n    #[tokio::test]\n    async fn decodes_utf8_binary_to_text() {\n        assert_eq!(\n            single_text(\"SELECT lix_text_decode(X'416461')\").await,\n            Some(\"Ada\".to_string())\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/udfs/lix_text_encode.rs",
    "content": "use std::any::Any;\n\nuse datafusion::arrow::datatypes::DataType;\nuse datafusion::common::{plan_err, Result, ScalarValue};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\n\nuse super::common::{\n    array_ref, binary_array_from_owned, encode_utf8_value, scalar_inputs,\n    validate_utf8_encoding_arg,\n};\n\n#[derive(Debug, Clone, PartialEq, Eq, Hash)]\npub(super) struct LixTextEncode {\n    signature: Signature,\n}\n\nimpl LixTextEncode {\n    pub(super) fn new() -> Self {\n        Self {\n            signature: Signature::one_of(\n                vec![Signature::any(1, Volatility::Immutable).type_signature],\n                Volatility::Immutable,\n            ),\n        }\n    }\n}\n\nimpl ScalarUDFImpl for LixTextEncode {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"lix_text_encode\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Binary)\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        if !(1..=2).contains(&args.args.len()) {\n            return plan_err!(\"lix_text_encode requires 1 or 2 arguments\");\n        }\n        validate_utf8_encoding_arg(self.name(), args.args.get(1))?;\n\n        let scalar_inputs = scalar_inputs(&args.args);\n        let arrays = ColumnarValue::values_to_arrays(&args.args)?;\n        let input = &arrays[0];\n        let len = input.len();\n\n        let mut values = Vec::with_capacity(len);\n        for row in 0..len {\n            values.push(encode_utf8_value(input.as_ref(), row)?);\n        }\n        if scalar_inputs {\n            Ok(ColumnarValue::Scalar(ScalarValue::Binary(\n                values.into_iter().next().flatten(),\n            )))\n        } else {\n            
Ok(ColumnarValue::Array(array_ref(binary_array_from_owned(\n                &values,\n            ))))\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::super::test_support::single_binary;\n\n    #[tokio::test]\n    async fn encodes_utf8_text_to_binary() {\n        assert_eq!(\n            single_binary(\"SELECT lix_text_encode('Ada')\").await,\n            Some(b\"Ada\".to_vec())\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/udfs/lix_timestamp.rs",
    "content": "use std::any::Any;\n\nuse datafusion::arrow::datatypes::DataType;\nuse datafusion::common::{plan_err, Result, ScalarValue};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\n\nuse crate::functions::FunctionProviderHandle;\n\n#[derive(Clone)]\npub(super) struct LixTimestamp {\n    pub(super) functions: FunctionProviderHandle,\n}\n\nimpl PartialEq for LixTimestamp {\n    fn eq(&self, _other: &Self) -> bool {\n        true\n    }\n}\n\nimpl Eq for LixTimestamp {}\n\nimpl std::hash::Hash for LixTimestamp {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.name().hash(state);\n    }\n}\n\nimpl std::fmt::Debug for LixTimestamp {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixTimestamp\").finish()\n    }\n}\n\nimpl ScalarUDFImpl for LixTimestamp {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"lix_timestamp\"\n    }\n\n    fn signature(&self) -> &Signature {\n        static SIGNATURE: std::sync::LazyLock<Signature> =\n            std::sync::LazyLock::new(|| Signature::nullary(Volatility::Volatile));\n        &SIGNATURE\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Utf8)\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        if !args.args.is_empty() {\n            return plan_err!(\"lix_timestamp requires no arguments\");\n        }\n        Ok(ColumnarValue::Scalar(ScalarValue::Utf8(Some(\n            self.functions.call_timestamp(),\n        ))))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::super::test_support::single_text;\n\n    #[tokio::test]\n    async fn returns_timestamp_text() {\n        let value = single_text(\"SELECT lix_timestamp()\")\n            .await\n            .expect(\"timestamp should not be null\");\n        
assert!(!value.is_empty());\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/udfs/lix_uuid_v7.rs",
    "content": "use std::any::Any;\n\nuse datafusion::arrow::datatypes::DataType;\nuse datafusion::common::{plan_err, Result, ScalarValue};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\n\nuse crate::functions::FunctionProviderHandle;\n\n#[derive(Clone)]\npub(super) struct LixUuidV7 {\n    pub(super) functions: FunctionProviderHandle,\n}\n\nimpl PartialEq for LixUuidV7 {\n    fn eq(&self, _other: &Self) -> bool {\n        true\n    }\n}\n\nimpl Eq for LixUuidV7 {}\n\nimpl std::hash::Hash for LixUuidV7 {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.name().hash(state);\n    }\n}\n\nimpl std::fmt::Debug for LixUuidV7 {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixUuidV7\").finish()\n    }\n}\n\nimpl ScalarUDFImpl for LixUuidV7 {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"lix_uuid_v7\"\n    }\n\n    fn signature(&self) -> &Signature {\n        static SIGNATURE: std::sync::LazyLock<Signature> =\n            std::sync::LazyLock::new(|| Signature::nullary(Volatility::Volatile));\n        &SIGNATURE\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Utf8)\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        if !args.args.is_empty() {\n            return plan_err!(\"lix_uuid_v7 requires no arguments\");\n        }\n        Ok(ColumnarValue::Scalar(ScalarValue::Utf8(Some(\n            self.functions.call_uuid_v7(),\n        ))))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::super::test_support::single_text;\n\n    #[tokio::test]\n    async fn returns_uuid_text() {\n        let value = single_text(\"SELECT lix_uuid_v7()\")\n            .await\n            .expect(\"uuid should not be null\");\n        assert!(!value.is_empty());\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/udfs/mod.rs",
    "content": "mod common;\nmod lix_active_version_commit_id;\nmod lix_empty_blob;\nmod lix_json;\nmod lix_json_get;\nmod lix_json_get_text;\nmod lix_text_decode;\nmod lix_text_encode;\nmod lix_timestamp;\nmod lix_uuid_v7;\nmod public_call;\n\nuse datafusion::execution::context::SessionContext;\nuse datafusion::logical_expr::ScalarUDF;\n\nuse crate::functions::FunctionProviderHandle;\n\npub(crate) use public_call::validate_public_udf_calls;\n\n#[cfg(test)]\npub(crate) fn system_sql2_function_provider() -> FunctionProviderHandle {\n    use crate::functions::{FunctionProvider, SharedFunctionProvider, SystemFunctionProvider};\n\n    SharedFunctionProvider::new(Box::new(SystemFunctionProvider) as Box<dyn FunctionProvider + Send>)\n}\n\npub(crate) fn register_sql2_functions(\n    ctx: &SessionContext,\n    functions: FunctionProviderHandle,\n    active_version_commit_id: Option<String>,\n) {\n    ctx.register_udf(ScalarUDF::from(\n        lix_active_version_commit_id::LixActiveVersionCommitId::new(active_version_commit_id),\n    ));\n    ctx.register_udf(ScalarUDF::from(lix_json_get::LixJsonGet::new()));\n    ctx.register_udf(ScalarUDF::from(lix_json_get_text::LixJsonGetText::new()));\n    ctx.register_udf(ScalarUDF::from(lix_text_decode::LixTextDecode::new()));\n    ctx.register_udf(ScalarUDF::from(lix_text_encode::LixTextEncode::new()));\n    ctx.register_udf(ScalarUDF::from(lix_json::LixJson));\n    ctx.register_udf(ScalarUDF::from(lix_empty_blob::LixEmptyBlob));\n    ctx.register_udf(ScalarUDF::from(lix_uuid_v7::LixUuidV7 {\n        functions: functions.clone(),\n    }));\n    ctx.register_udf(ScalarUDF::from(lix_timestamp::LixTimestamp { functions }));\n}\n\n#[cfg(test)]\npub(super) mod test_support {\n    use datafusion::arrow::array::{Array, BinaryArray, StringArray};\n    use datafusion::prelude::SessionContext;\n\n    use super::{register_sql2_functions, system_sql2_function_provider};\n\n    pub(super) async fn single_text(sql: &str) -> Option<String> {\n     
   let ctx = SessionContext::new();\n        register_sql2_functions(&ctx, system_sql2_function_provider(), None);\n        let batches = ctx\n            .sql(sql)\n            .await\n            .expect(\"query should plan\")\n            .collect()\n            .await\n            .expect(\"query should execute\");\n        let array = batches[0]\n            .column(0)\n            .as_any()\n            .downcast_ref::<StringArray>()\n            .expect(\"first column should be utf8\");\n        (!array.is_null(0)).then(|| array.value(0).to_string())\n    }\n\n    pub(super) async fn single_binary(sql: &str) -> Option<Vec<u8>> {\n        let ctx = SessionContext::new();\n        register_sql2_functions(&ctx, system_sql2_function_provider(), None);\n        let batches = ctx\n            .sql(sql)\n            .await\n            .expect(\"query should plan\")\n            .collect()\n            .await\n            .expect(\"query should execute\");\n        let array = batches[0]\n            .column(0)\n            .as_any()\n            .downcast_ref::<BinaryArray>()\n            .expect(\"first column should be binary\");\n        (!array.is_null(0)).then(|| array.value(0).to_vec())\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/udfs/public_call.rs",
    "content": "use std::ops::ControlFlow;\n\nuse datafusion::sql::sqlparser::ast::{\n    Expr, Function, FunctionArg, FunctionArgExpr, FunctionArguments, ObjectNamePart, Statement,\n    Value, Visit, Visitor,\n};\nuse datafusion::sql::sqlparser::dialect::GenericDialect;\nuse datafusion::sql::sqlparser::parser::Parser;\n\nuse crate::LixError;\n\npub(crate) fn validate_public_udf_calls(sql: &str) -> Result<(), LixError> {\n    let statements = Parser::parse_sql(&GenericDialect {}, sql).map_err(|error| {\n        LixError::new(\n            LixError::CODE_PARSE_ERROR,\n            format!(\"sql2 SQL parse error: {error}\"),\n        )\n    })?;\n\n    let mut visitor = PublicUdfCallVisitor;\n    match statements.visit(&mut visitor) {\n        ControlFlow::Continue(()) => Ok(()),\n        ControlFlow::Break(error) => Err(*error),\n    }\n}\n\nstruct PublicUdfCallVisitor;\n\nimpl Visitor for PublicUdfCallVisitor {\n    type Break = Box<LixError>;\n\n    fn pre_visit_expr(&mut self, expr: &Expr) -> ControlFlow<Self::Break> {\n        let Expr::Function(function) = expr else {\n            return ControlFlow::Continue(());\n        };\n\n        match validate_public_function_call(function) {\n            Ok(()) => ControlFlow::Continue(()),\n            Err(error) => ControlFlow::Break(Box::new(error)),\n        }\n    }\n\n    fn pre_visit_statement(&mut self, statement: &Statement) -> ControlFlow<Self::Break> {\n        match statement {\n            Statement::CreateFunction(_) | Statement::DropFunction(_) => ControlFlow::Continue(()),\n            _ => ControlFlow::Continue(()),\n        }\n    }\n}\n\nfn validate_public_function_call(function: &Function) -> Result<(), LixError> {\n    let Some(name) = public_lix_function_name(function) else {\n        return Ok(());\n    };\n    let arity = function_arity(&function.args);\n\n    match name {\n        \"lix_json\" => expect_exact_arity(name, arity, 1),\n        \"lix_empty_blob\" => expect_exact_arity(name, arity, 
0),\n        \"lix_timestamp\" => expect_exact_arity(name, arity, 0),\n        \"lix_uuid_v7\" => expect_exact_arity(name, arity, 0),\n        \"lix_active_version_commit_id\" => expect_exact_arity(name, arity, 0),\n        \"lix_text_encode\" | \"lix_text_decode\" => {\n            expect_arity_range(name, arity, 1, 2)?;\n            validate_literal_utf8_encoding(name, &function.args)\n        }\n        _ => Ok(()),\n    }\n}\n\nfn public_lix_function_name(function: &Function) -> Option<&'static str> {\n    let part = function.name.0.last()?;\n    let ident = match part {\n        ObjectNamePart::Identifier(ident) => ident.value.as_str(),\n        ObjectNamePart::Function(_) => return None,\n    };\n    match ident.to_ascii_lowercase().as_str() {\n        \"lix_json\" => Some(\"lix_json\"),\n        \"lix_empty_blob\" => Some(\"lix_empty_blob\"),\n        \"lix_timestamp\" => Some(\"lix_timestamp\"),\n        \"lix_uuid_v7\" => Some(\"lix_uuid_v7\"),\n        \"lix_active_version_commit_id\" => Some(\"lix_active_version_commit_id\"),\n        \"lix_text_encode\" => Some(\"lix_text_encode\"),\n        \"lix_text_decode\" => Some(\"lix_text_decode\"),\n        _ => None,\n    }\n}\n\nfn function_arity(args: &FunctionArguments) -> usize {\n    match args {\n        FunctionArguments::None => 0,\n        FunctionArguments::Subquery(_) => 1,\n        FunctionArguments::List(list) => list.args.len(),\n    }\n}\n\nfn expect_exact_arity(name: &str, actual: usize, expected: usize) -> Result<(), LixError> {\n    if actual == expected {\n        return Ok(());\n    }\n\n    let expectation = if expected == 0 {\n        \"no arguments\".to_string()\n    } else if expected == 1 {\n        \"exactly 1 argument\".to_string()\n    } else {\n        format!(\"exactly {expected} arguments\")\n    };\n    Err(invalid_param(format!(\"{name} requires {expectation}\")))\n}\n\nfn expect_arity_range(name: &str, actual: usize, min: usize, max: usize) -> Result<(), LixError> {\n    if 
(min..=max).contains(&actual) {\n        return Ok(());\n    }\n    Err(invalid_param(format!(\n        \"{name} requires {min} or {max} arguments\"\n    )))\n}\n\nfn validate_literal_utf8_encoding(name: &str, args: &FunctionArguments) -> Result<(), LixError> {\n    let Some(encoding) = function_arg(args, 1) else {\n        return Ok(());\n    };\n    let Some(value) = string_literal_arg(encoding) else {\n        return Ok(());\n    };\n    let normalized = value.trim().to_ascii_uppercase().replace('-', \"\");\n    if normalized == \"UTF8\" {\n        Ok(())\n    } else {\n        Err(invalid_param(format!(\n            \"{name}() only supports UTF8 encoding, got '{value}'\"\n        )))\n    }\n}\n\nfn function_arg(args: &FunctionArguments, index: usize) -> Option<&FunctionArg> {\n    match args {\n        FunctionArguments::List(list) => list.args.get(index),\n        _ => None,\n    }\n}\n\nfn string_literal_arg(arg: &FunctionArg) -> Option<&str> {\n    let expr = match arg {\n        FunctionArg::Unnamed(FunctionArgExpr::Expr(expr))\n        | FunctionArg::Named {\n            arg: FunctionArgExpr::Expr(expr),\n            ..\n        }\n        | FunctionArg::ExprNamed {\n            arg: FunctionArgExpr::Expr(expr),\n            ..\n        } => expr,\n        _ => return None,\n    };\n    let Expr::Value(value) = expr else {\n        return None;\n    };\n    match &value.value {\n        Value::SingleQuotedString(value)\n        | Value::DoubleQuotedString(value)\n        | Value::TripleSingleQuotedString(value)\n        | Value::TripleDoubleQuotedString(value)\n        | Value::EscapedStringLiteral(value)\n        | Value::UnicodeStringLiteral(value)\n        | Value::NationalStringLiteral(value)\n        | Value::SingleQuotedRawStringLiteral(value)\n        | Value::DoubleQuotedRawStringLiteral(value)\n        | Value::TripleSingleQuotedRawStringLiteral(value)\n        | Value::TripleDoubleQuotedRawStringLiteral(value) => Some(value.as_str()),\n        
Value::DollarQuotedString(value) => Some(value.value.as_str()),\n        _ => None,\n    }\n}\n\nfn invalid_param(message: impl Into<String>) -> LixError {\n    LixError::new(LixError::CODE_INVALID_PARAM, message)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::validate_public_udf_calls;\n\n    #[test]\n    fn rejects_lix_udf_wrong_arity_as_public_invalid_param() {\n        let error = validate_public_udf_calls(\"SELECT lix_uuid_v7('extra')\")\n            .expect_err(\"wrong arity should be rejected\");\n        assert_eq!(error.code, \"LIX_INVALID_PARAM\");\n        assert!(error.message.contains(\"lix_uuid_v7 requires no arguments\"));\n    }\n\n    #[test]\n    fn rejects_unsupported_literal_encoding_as_public_invalid_param() {\n        let error = validate_public_udf_calls(\"SELECT lix_text_encode('Ada', 'base64')\")\n            .expect_err(\"unsupported encoding should be rejected\");\n        assert_eq!(error.code, \"LIX_INVALID_PARAM\");\n        assert!(error\n            .message\n            .contains(\"lix_text_encode() only supports UTF8 encoding\"));\n    }\n\n    #[test]\n    fn accepts_valid_public_lix_udf_calls() {\n        validate_public_udf_calls(\n            \"SELECT lix_json('{\\\"x\\\":1}'), lix_text_decode(X'416461', 'utf-8')\",\n        )\n        .expect(\"valid calls should pass public validation\");\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/version_provider.rs",
    "content": "use std::any::Any;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse datafusion::arrow::array::{ArrayRef, BooleanArray, StringArray, UInt64Array};\nuse datafusion::arrow::compute::{and, filter_record_batch};\nuse datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef};\nuse datafusion::arrow::record_batch::RecordBatch;\nuse datafusion::catalog::{Session, TableProvider};\nuse datafusion::common::{not_impl_err, DFSchema, DataFusionError, Result, ScalarValue};\nuse datafusion::datasource::TableType;\nuse datafusion::execution::TaskContext;\nuse datafusion::logical_expr::dml::InsertOp;\nuse datafusion::logical_expr::{Expr, TableProviderFilterPushDown};\nuse datafusion::physical_expr::{create_physical_expr, EquivalenceProperties, PhysicalExpr};\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};\nuse datafusion::physical_plan::stream::RecordBatchStreamAdapter;\nuse datafusion::physical_plan::{\n    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,\n};\nuse futures_util::{stream, TryStreamExt};\nuse serde_json::Value as JsonValue;\n\nuse crate::live_state::{\n    LiveStateFilter, LiveStateReader, LiveStateScanRequest, MaterializedLiveStateRow,\n};\nuse crate::sql2::dml::{InsertExec, InsertSink};\nuse crate::sql2::record_batch::record_batch_with_row_count;\nuse crate::sql2::write_normalization::{InsertCell, SqlCell, UpdateAssignmentValues};\nuse crate::sql2::{\n    SqlWriteContext, WriteAccess, WriteContextLiveStateReader, WriteContextVersionRefReader,\n};\nuse crate::transaction::types::{\n    LogicalPrimaryKey, TransactionWrite, TransactionWriteMode, TransactionWriteOperation,\n    TransactionWriteOrigin, TransactionWriteRow,\n};\nuse crate::version::{\n    version_descriptor_stage_row, version_descriptor_tombstone_row, version_ref_stage_row,\n    version_ref_tombstone_row, VersionRefReader,\n};\nuse crate::LixError;\nuse 
crate::GLOBAL_VERSION_ID;\n\npub(crate) async fn register_lix_version_provider(\n    session: &datafusion::prelude::SessionContext,\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n) -> Result<(), LixError> {\n    session\n        .register_table(\n            \"lix_version\",\n            Arc::new(LixVersionProvider::new(live_state, version_ref)),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    Ok(())\n}\n\npub(crate) async fn register_lix_version_write_provider(\n    session: &datafusion::prelude::SessionContext,\n    write_ctx: SqlWriteContext,\n) -> Result<(), LixError> {\n    session\n        .register_table(\n            \"lix_version\",\n            Arc::new(LixVersionProvider::with_write(write_ctx)),\n        )\n        .map_err(datafusion_error_to_lix_error)?;\n    Ok(())\n}\n\nstruct LixVersionProvider {\n    schema: SchemaRef,\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n    write_access: WriteAccess,\n}\n\nimpl std::fmt::Debug for LixVersionProvider {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixVersionProvider\").finish()\n    }\n}\n\nimpl LixVersionProvider {\n    fn new(live_state: Arc<dyn LiveStateReader>, version_ref: Arc<dyn VersionRefReader>) -> Self {\n        Self {\n            schema: lix_version_schema(),\n            live_state,\n            version_ref,\n            write_access: WriteAccess::read_only(),\n        }\n    }\n\n    fn with_write(write_ctx: SqlWriteContext) -> Self {\n        let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone()));\n        let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone()));\n        Self {\n            schema: lix_version_schema(),\n            live_state,\n            version_ref,\n            write_access: WriteAccess::write(write_ctx),\n        }\n    }\n}\n\n#[async_trait]\nimpl TableProvider 
for LixVersionProvider {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n\n    fn table_type(&self) -> TableType {\n        TableType::Base\n    }\n\n    fn supports_filters_pushdown(\n        &self,\n        filters: &[&Expr],\n    ) -> Result<Vec<TableProviderFilterPushDown>> {\n        Ok(filters\n            .iter()\n            .map(|_| TableProviderFilterPushDown::Unsupported)\n            .collect())\n    }\n\n    async fn scan(\n        &self,\n        _state: &dyn Session,\n        projection: Option<&Vec<usize>>,\n        _filters: &[Expr],\n        _limit: Option<usize>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        Ok(Arc::new(LixVersionScanExec::new(\n            Arc::clone(&self.live_state),\n            Arc::clone(&self.version_ref),\n            projected_schema(&self.schema, projection),\n            projection.cloned(),\n        )))\n    }\n\n    async fn insert_into(\n        &self,\n        _state: &dyn Session,\n        input: Arc<dyn ExecutionPlan>,\n        insert_op: InsertOp,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if insert_op != InsertOp::Append {\n            return not_impl_err!(\"{insert_op} not implemented for lix_version yet\");\n        }\n\n        let write_ctx = self.write_access.require_write(\"INSERT into lix_version\")?;\n        let sink = LixVersionInsertSink::new(input.schema(), write_ctx);\n        Ok(Arc::new(InsertExec::new(input, Arc::new(sink))))\n    }\n\n    async fn delete_from(\n        &self,\n        state: &dyn Session,\n        filters: Vec<Expr>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        let write_ctx = self.write_access.require_write(\"DELETE FROM lix_version\")?;\n        let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?;\n        let physical_filters = filters\n            .iter()\n            .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props()))\n          
  .collect::<Result<Vec<_>>>()?;\n\n        Ok(Arc::new(LixVersionDeleteExec::new(\n            write_ctx,\n            Arc::clone(&self.live_state),\n            Arc::clone(&self.version_ref),\n            Arc::clone(&self.schema),\n            physical_filters,\n        )))\n    }\n\n    async fn update(\n        &self,\n        state: &dyn Session,\n        assignments: Vec<(String, Expr)>,\n        filters: Vec<Expr>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        let write_ctx = self.write_access.require_write(\"UPDATE lix_version\")?;\n        validate_lix_version_update_assignments(&assignments)?;\n\n        let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?;\n        let physical_assignments = assignments\n            .iter()\n            .map(|(column_name, expr)| {\n                Ok((\n                    column_name.clone(),\n                    create_physical_expr(expr, &df_schema, state.execution_props())?,\n                ))\n            })\n            .collect::<Result<Vec<_>>>()?;\n        let physical_filters = filters\n            .iter()\n            .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props()))\n            .collect::<Result<Vec<_>>>()?;\n\n        Ok(Arc::new(LixVersionUpdateExec::new(\n            write_ctx,\n            Arc::clone(&self.live_state),\n            Arc::clone(&self.version_ref),\n            Arc::clone(&self.schema),\n            physical_assignments,\n            physical_filters,\n        )))\n    }\n}\n\nstruct LixVersionInsertSink {\n    write_ctx: SqlWriteContext,\n}\n\nimpl std::fmt::Debug for LixVersionInsertSink {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixVersionInsertSink\").finish()\n    }\n}\n\nimpl LixVersionInsertSink {\n    fn new(_schema: SchemaRef, write_ctx: SqlWriteContext) -> Self {\n        Self { write_ctx }\n    }\n}\n\nimpl DisplayAs for LixVersionInsertSink {\n    fn fmt_as(&self, t: 
DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"LixVersionInsertSink\")\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixVersionInsertSink\"),\n        }\n    }\n}\n\n#[async_trait]\nimpl InsertSink for LixVersionInsertSink {\n    async fn write_batches(\n        &self,\n        batches: Vec<RecordBatch>,\n        _context: &Arc<TaskContext>,\n    ) -> Result<u64> {\n        let default_commit_id = self\n            .write_ctx\n            .load_version_head(&self.write_ctx.active_version_id())\n            .await\n            .map_err(lix_error_to_datafusion_error)?\n            .ok_or_else(|| {\n                DataFusionError::Execution(\n                    \"INSERT into lix_version could not resolve active version head\".to_string(),\n                )\n            })?;\n        let mut rows = Vec::new();\n        let mut count = 0u64;\n        for batch in batches {\n            let version_rows = version_insert_rows_from_batch(&batch, &default_commit_id)?;\n            count = count\n                .checked_add(u64::try_from(version_rows.len()).map_err(|_| {\n                    DataFusionError::Execution(\"INSERT row count overflow\".to_string())\n                })?)\n                .ok_or_else(|| DataFusionError::Execution(\"INSERT row count overflow\".into()))?;\n            rows.extend(version_rows.into_iter().flat_map(version_insert_stage_rows));\n        }\n\n        if !rows.is_empty() {\n            self.write_ctx\n                .stage_write(TransactionWrite::Rows {\n                    mode: TransactionWriteMode::Insert,\n                    rows,\n                })\n                .await\n                .map_err(lix_error_to_datafusion_error)?;\n        }\n\n        Ok(count)\n    }\n}\n\nstruct LixVersionDeleteExec {\n    write_ctx: SqlWriteContext,\n    
active_version_id: String,\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n    table_schema: SchemaRef,\n    filters: Vec<Arc<dyn PhysicalExpr>>,\n    result_schema: SchemaRef,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixVersionDeleteExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixVersionDeleteExec\").finish()\n    }\n}\n\nimpl LixVersionDeleteExec {\n    fn new(\n        write_ctx: SqlWriteContext,\n        live_state: Arc<dyn LiveStateReader>,\n        version_ref: Arc<dyn VersionRefReader>,\n        table_schema: SchemaRef,\n        filters: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Self {\n        let result_schema = dml_count_schema();\n        let properties = dml_plan_properties(Arc::clone(&result_schema));\n        let active_version_id = write_ctx.active_version_id();\n        Self {\n            write_ctx,\n            active_version_id,\n            live_state,\n            version_ref,\n            table_schema,\n            filters,\n            result_schema,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixVersionDeleteExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"LixVersionDeleteExec(filters={})\", self.filters.len())\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixVersionDeleteExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixVersionDeleteExec {\n    fn name(&self) -> &str {\n        \"LixVersionDeleteExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn 
with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixVersionDeleteExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixVersionDeleteExec only exposes one partition, got {partition}\"\n            )));\n        }\n        let write_ctx = self.write_ctx.clone();\n        let active_version_id = self.active_version_id.clone();\n        let live_state = Arc::clone(&self.live_state);\n        let version_ref = Arc::clone(&self.version_ref);\n        let filters = self.filters.clone();\n        let table_schema = Arc::clone(&self.table_schema);\n        let result_schema = Arc::clone(&self.result_schema);\n        let stream_schema = Arc::clone(&result_schema);\n\n        let stream = stream::once(async move {\n            let rows = load_version_rows(live_state, version_ref)\n                .await\n                .map_err(lix_error_to_datafusion_error)?;\n            let source_batch = version_record_batch(&version_projection_for_scan(None), &rows)?;\n            let matched_batch = filter_version_batch(source_batch, &filters)?;\n            let version_rows = version_rows_from_batch(&matched_batch)?;\n            reject_protected_version_deletes(&version_rows, &active_version_id)?;\n            let count = u64::try_from(version_rows.len())\n                .map_err(|_| DataFusionError::Execution(\"DELETE row count overflow\".to_string()))?;\n            let rows = version_rows\n                .into_iter()\n                .flat_map(version_tombstone_rows)\n                
.collect::<Vec<_>>();\n\n            if !rows.is_empty() {\n                write_ctx\n                    .stage_write(TransactionWrite::Rows {\n                        mode: TransactionWriteMode::Replace,\n                        rows,\n                    })\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?;\n            }\n\n            let _ = table_schema;\n            Ok::<_, DataFusionError>(stream::iter(vec![Ok::<RecordBatch, DataFusionError>(\n                dml_count_batch(Arc::clone(&stream_schema), count)?,\n            )]))\n        })\n        .try_flatten();\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            result_schema,\n            stream,\n        )))\n    }\n}\n\nstruct LixVersionUpdateExec {\n    write_ctx: SqlWriteContext,\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n    table_schema: SchemaRef,\n    assignments: Vec<(String, Arc<dyn PhysicalExpr>)>,\n    filters: Vec<Arc<dyn PhysicalExpr>>,\n    result_schema: SchemaRef,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixVersionUpdateExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"LixVersionUpdateExec\").finish()\n    }\n}\n\nimpl LixVersionUpdateExec {\n    fn new(\n        write_ctx: SqlWriteContext,\n        live_state: Arc<dyn LiveStateReader>,\n        version_ref: Arc<dyn VersionRefReader>,\n        table_schema: SchemaRef,\n        assignments: Vec<(String, Arc<dyn PhysicalExpr>)>,\n        filters: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Self {\n        let result_schema = dml_count_schema();\n        let properties = dml_plan_properties(Arc::clone(&result_schema));\n        Self {\n            write_ctx,\n            live_state,\n            version_ref,\n            table_schema,\n            assignments,\n            filters,\n            result_schema,\n            properties: Arc::new(properties),\n      
  }\n    }\n}\n\nimpl DisplayAs for LixVersionUpdateExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(\n                    f,\n                    \"LixVersionUpdateExec(assignments={}, filters={})\",\n                    self.assignments.len(),\n                    self.filters.len()\n                )\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixVersionUpdateExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixVersionUpdateExec {\n    fn name(&self) -> &str {\n        \"LixVersionUpdateExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixVersionUpdateExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n            return Err(DataFusionError::Execution(format!(\n                \"LixVersionUpdateExec only exposes one partition, got {partition}\"\n            )));\n        }\n        let write_ctx = self.write_ctx.clone();\n        let live_state = Arc::clone(&self.live_state);\n        let version_ref = Arc::clone(&self.version_ref);\n        let table_schema = Arc::clone(&self.table_schema);\n        let assignments = self.assignments.clone();\n        let filters = self.filters.clone();\n        let 
result_schema = Arc::clone(&self.result_schema);\n        let stream_schema = Arc::clone(&result_schema);\n\n        let stream = stream::once(async move {\n            let rows = load_version_rows(live_state, version_ref)\n                .await\n                .map_err(lix_error_to_datafusion_error)?;\n            let source_batch = version_record_batch(&version_projection_for_scan(None), &rows)?;\n            let matched_batch = filter_version_batch(source_batch, &filters)?;\n            let version_rows =\n                version_update_rows_from_batch(&matched_batch, &assignments, &table_schema)?;\n            let count = u64::try_from(version_rows.len())\n                .map_err(|_| DataFusionError::Execution(\"UPDATE row count overflow\".to_string()))?;\n            let rows = version_rows\n                .into_iter()\n                .flat_map(version_update_stage_rows)\n                .collect::<Vec<_>>();\n\n            if !rows.is_empty() {\n                write_ctx\n                    .stage_write(TransactionWrite::Rows {\n                        mode: TransactionWriteMode::Replace,\n                        rows,\n                    })\n                    .await\n                    .map_err(lix_error_to_datafusion_error)?;\n            }\n\n            Ok::<_, DataFusionError>(stream::iter(vec![Ok::<RecordBatch, DataFusionError>(\n                dml_count_batch(Arc::clone(&stream_schema), count)?,\n            )]))\n        })\n        .try_flatten();\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            result_schema,\n            stream,\n        )))\n    }\n}\n\nstruct LixVersionScanExec {\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n    schema: SchemaRef,\n    projection: Option<Vec<usize>>,\n    properties: Arc<PlanProperties>,\n}\n\nimpl std::fmt::Debug for LixVersionScanExec {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        
f.debug_struct(\"LixVersionScanExec\").finish()\n    }\n}\n\nimpl LixVersionScanExec {\n    fn new(\n        live_state: Arc<dyn LiveStateReader>,\n        version_ref: Arc<dyn VersionRefReader>,\n        schema: SchemaRef,\n        projection: Option<Vec<usize>>,\n    ) -> Self {\n        let properties = PlanProperties::new(\n            EquivalenceProperties::new(schema.clone()),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Incremental,\n            Boundedness::Bounded,\n        );\n        Self {\n            live_state,\n            version_ref,\n            schema,\n            projection,\n            properties: Arc::new(properties),\n        }\n    }\n}\n\nimpl DisplayAs for LixVersionScanExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"LixVersionScanExec\")\n            }\n            DisplayFormatType::TreeRender => write!(f, \"LixVersionScanExec\"),\n        }\n    }\n}\n\nimpl ExecutionPlan for LixVersionScanExec {\n    fn name(&self) -> &str {\n        \"LixVersionScanExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        Vec::new()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        if !children.is_empty() {\n            return Err(DataFusionError::Execution(\n                \"LixVersionScanExec does not accept children\".to_string(),\n            ));\n        }\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        if partition != 0 {\n    
        return Err(DataFusionError::Execution(format!(\n                \"LixVersionScanExec only exposes one partition, got {partition}\"\n            )));\n        }\n\n        let live_state = Arc::clone(&self.live_state);\n        let version_ref = Arc::clone(&self.version_ref);\n        let projection = version_projection_for_scan(self.projection.as_ref());\n        let schema = Arc::clone(&self.schema);\n        let stream = stream::once(async move {\n            let rows = load_version_rows(live_state, version_ref)\n                .await\n                .map_err(lix_error_to_datafusion_error)?;\n            version_record_batch(&projection, &rows)\n        });\n        Ok(Box::pin(RecordBatchStreamAdapter::new(schema, stream)))\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct VersionRow {\n    id: String,\n    name: String,\n    hidden: bool,\n    commit_id: String,\n}\n\n#[derive(Debug, Clone, Copy)]\nenum VersionColumn {\n    Id,\n    Name,\n    Hidden,\n    CommitId,\n}\n\nasync fn load_version_rows(\n    live_state: Arc<dyn LiveStateReader>,\n    version_ref: Arc<dyn VersionRefReader>,\n) -> Result<Vec<VersionRow>, LixError> {\n    let descriptor_rows = live_state\n        .scan_rows(&LiveStateScanRequest {\n            filter: LiveStateFilter {\n                schema_keys: vec![\"lix_version_descriptor\".to_string()],\n                version_ids: vec![GLOBAL_VERSION_ID.to_string()],\n                ..LiveStateFilter::default()\n            },\n            projection: Default::default(),\n            limit: None,\n        })\n        .await?;\n\n    let mut out = Vec::new();\n    for descriptor_row in descriptor_rows {\n        let descriptor = parse_descriptor(&descriptor_row)?;\n        let Some(commit_id) = version_ref.load_head_commit_id(&descriptor.id).await? 
else {\n            continue;\n        };\n        out.push(VersionRow {\n            commit_id,\n            id: descriptor.id,\n            name: descriptor.name,\n            hidden: descriptor.hidden,\n        });\n    }\n    Ok(out)\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct VersionDescriptor {\n    id: String,\n    name: String,\n    hidden: bool,\n}\n\nfn parse_descriptor(row: &MaterializedLiveStateRow) -> Result<VersionDescriptor, LixError> {\n    let snapshot = parse_snapshot(row, \"lix_version_descriptor\")?;\n    let id = snapshot\n        .get(\"id\")\n        .and_then(JsonValue::as_str)\n        .ok_or_else(|| LixError::new(\"LIX_ERROR_UNKNOWN\", \"lix_version_descriptor is missing id\"))?\n        .to_string();\n    let name = snapshot\n        .get(\"name\")\n        .and_then(JsonValue::as_str)\n        .ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"lix_version_descriptor is missing name\",\n            )\n        })?\n        .to_string();\n    let hidden = snapshot\n        .get(\"hidden\")\n        .and_then(JsonValue::as_bool)\n        .unwrap_or(false);\n    Ok(VersionDescriptor { id, name, hidden })\n}\n\nfn parse_snapshot(row: &MaterializedLiveStateRow, schema_key: &str) -> Result<JsonValue, LixError> {\n    let snapshot_content = row.snapshot_content.as_deref().ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"{schema_key} row is missing snapshot_content\"),\n        )\n    })?;\n    serde_json::from_str(snapshot_content).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"{schema_key} snapshot_content is invalid JSON: {error}\"),\n        )\n    })\n}\n\nfn validate_lix_version_update_assignments(assignments: &[(String, Expr)]) -> Result<()> {\n    for (column_name, _) in assignments {\n        match column_name.as_str() {\n            \"name\" | \"hidden\" | \"commit_id\" => 
{}\n            \"id\" => {\n                return Err(DataFusionError::Execution(\n                    \"UPDATE lix_version cannot change immutable column 'id'\".to_string(),\n                ));\n            }\n            other => {\n                return Err(DataFusionError::Plan(format!(\n                    \"UPDATE lix_version failed: column '{other}' does not exist\"\n                )));\n            }\n        }\n    }\n    Ok(())\n}\n\nfn filter_version_batch(\n    batch: RecordBatch,\n    filters: &[Arc<dyn PhysicalExpr>],\n) -> Result<RecordBatch> {\n    let Some(mask) = evaluate_version_filters(&batch, filters)? else {\n        return Ok(batch);\n    };\n    Ok(filter_record_batch(&batch, &mask)?)\n}\n\nfn evaluate_version_filters(\n    batch: &RecordBatch,\n    filters: &[Arc<dyn PhysicalExpr>],\n) -> Result<Option<BooleanArray>> {\n    if filters.is_empty() {\n        return Ok(None);\n    }\n\n    let mut combined_mask: Option<BooleanArray> = None;\n    for filter in filters {\n        let result = filter.evaluate(batch)?;\n        let array = result.into_array(batch.num_rows())?;\n        let bool_array = array\n            .as_any()\n            .downcast_ref::<BooleanArray>()\n            .ok_or_else(|| {\n                DataFusionError::Execution(\"lix_version filter was not boolean\".to_string())\n            })?;\n        let normalized = bool_array\n            .iter()\n            .map(|value| Some(value == Some(true)))\n            .collect::<BooleanArray>();\n        combined_mask = Some(match combined_mask {\n            Some(existing) => and(&existing, &normalized)?,\n            None => normalized,\n        });\n    }\n    Ok(combined_mask)\n}\n\nfn version_insert_rows_from_batch(\n    batch: &RecordBatch,\n    default_commit_id: &str,\n) -> Result<Vec<VersionRow>> {\n    (0..batch.num_rows())\n        .map(|row_index| {\n            let id = required_string_value(batch, row_index, \"id\", \"INSERT\")?;\n            let name = 
required_string_value(batch, row_index, \"name\", \"INSERT\")?;\n            let hidden =\n                optional_bool_value(batch, row_index, \"hidden\", \"INSERT\")?.unwrap_or(false);\n            let commit_id = optional_string_value(batch, row_index, \"commit_id\", \"INSERT\")?\n                .unwrap_or_else(|| default_commit_id.to_string());\n            Ok(VersionRow {\n                id,\n                name,\n                hidden,\n                commit_id,\n            })\n        })\n        .collect()\n}\n\nfn version_rows_from_batch(batch: &RecordBatch) -> Result<Vec<VersionRow>> {\n    (0..batch.num_rows())\n        .map(|row_index| {\n            Ok(VersionRow {\n                id: required_string_value(batch, row_index, \"id\", \"DELETE\")?,\n                name: required_string_value(batch, row_index, \"name\", \"DELETE\")?,\n                hidden: required_bool_value(batch, row_index, \"hidden\", \"DELETE\")?,\n                commit_id: required_string_value(batch, row_index, \"commit_id\", \"DELETE\")?,\n            })\n        })\n        .collect()\n}\n\nfn reject_protected_version_deletes(rows: &[VersionRow], active_version_id: &str) -> Result<()> {\n    for row in rows {\n        if row.id == GLOBAL_VERSION_ID {\n            return Err(DataFusionError::Execution(\n                \"DELETE FROM lix_version cannot delete the global version\".to_string(),\n            ));\n        }\n        if row.id == active_version_id {\n            return Err(DataFusionError::Execution(format!(\n                \"DELETE FROM lix_version cannot delete active version '{}'\",\n                row.id\n            )));\n        }\n    }\n    Ok(())\n}\n\nfn version_update_rows_from_batch(\n    batch: &RecordBatch,\n    assignments: &[(String, Arc<dyn PhysicalExpr>)],\n    table_schema: &SchemaRef,\n) -> Result<Vec<VersionRow>> {\n    let assignment_values = UpdateAssignmentValues::evaluate(batch, assignments)?;\n    (0..batch.num_rows())\n        
.map(|row_index| {\n            Ok(VersionRow {\n                id: required_string_value(batch, row_index, \"id\", \"UPDATE\")?,\n                name: update_string_value(\n                    batch,\n                    &assignment_values,\n                    table_schema,\n                    row_index,\n                    \"name\",\n                )?,\n                hidden: update_bool_value(\n                    batch,\n                    &assignment_values,\n                    table_schema,\n                    row_index,\n                    \"hidden\",\n                )?,\n                commit_id: update_string_value(\n                    batch,\n                    &assignment_values,\n                    table_schema,\n                    row_index,\n                    \"commit_id\",\n                )?,\n            })\n        })\n        .collect()\n}\n\nfn version_stage_rows(\n    row: VersionRow,\n    origin: Option<TransactionWriteOrigin>,\n) -> Vec<TransactionWriteRow> {\n    vec![\n        with_origin(\n            version_descriptor_stage_row(&row.id, &row.name, row.hidden),\n            origin.clone(),\n        ),\n        with_origin(version_ref_stage_row(&row.id, &row.commit_id), origin),\n    ]\n}\n\nfn version_tombstone_rows(row: VersionRow) -> Vec<TransactionWriteRow> {\n    let origin = Some(lix_version_origin(\n        TransactionWriteOperation::Delete,\n        &row.id,\n    ));\n    vec![\n        with_origin(version_descriptor_tombstone_row(&row.id), origin.clone()),\n        with_origin(version_ref_tombstone_row(&row.id), origin),\n    ]\n}\n\nfn version_insert_stage_rows(row: VersionRow) -> Vec<TransactionWriteRow> {\n    let origin = lix_version_origin(TransactionWriteOperation::Insert, &row.id);\n    version_stage_rows(row, Some(origin))\n}\n\nfn version_update_stage_rows(row: VersionRow) -> Vec<TransactionWriteRow> {\n    let origin = lix_version_origin(TransactionWriteOperation::Update, &row.id);\n    
version_stage_rows(row, Some(origin))\n}\n\nfn with_origin(\n    mut row: TransactionWriteRow,\n    origin: Option<TransactionWriteOrigin>,\n) -> TransactionWriteRow {\n    row.origin = origin;\n    row\n}\n\nfn lix_version_origin(\n    action: TransactionWriteOperation,\n    version_id: &str,\n) -> TransactionWriteOrigin {\n    TransactionWriteOrigin {\n        surface: \"lix_version\".to_string(),\n        operation: action,\n        primary_key: Some(LogicalPrimaryKey {\n            columns: vec![\"id\".to_string()],\n            values: vec![version_id.to_string()],\n        }),\n    }\n}\n\nfn update_string_value(\n    batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    table_schema: &SchemaRef,\n    row_index: usize,\n    column_name: &str,\n) -> Result<String> {\n    let column_index = table_schema.index_of(column_name)?;\n    match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? {\n        InsertCell::Omitted => required_string_value(batch, row_index, column_name, \"UPDATE\"),\n        InsertCell::Provided(SqlCell::Value(\n            ScalarValue::Utf8(Some(value))\n            | ScalarValue::Utf8View(Some(value))\n            | ScalarValue::LargeUtf8(Some(value)),\n        )) => Ok(value),\n        InsertCell::Provided(SqlCell::Null) => Err(DataFusionError::Execution(format!(\n            \"UPDATE lix_version requires non-null text column '{column_name}'\"\n        ))),\n        InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!(\n            \"UPDATE lix_version expected text-compatible column '{column_name}', got {other:?}\"\n        ))),\n    }\n    .or_else(|error| {\n        if batch.column(column_index).is_null(row_index) {\n            Err(DataFusionError::Execution(format!(\n                \"UPDATE lix_version requires non-null text column '{column_name}'\"\n            )))\n        } else {\n            Err(error)\n        }\n    })\n}\n\nfn update_bool_value(\n 
   batch: &RecordBatch,\n    assignment_values: &UpdateAssignmentValues,\n    table_schema: &SchemaRef,\n    row_index: usize,\n    column_name: &str,\n) -> Result<bool> {\n    let column_index = table_schema.index_of(column_name)?;\n    match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? {\n        InsertCell::Omitted => required_bool_value(batch, row_index, column_name, \"UPDATE\"),\n        InsertCell::Provided(SqlCell::Value(ScalarValue::Boolean(Some(value)))) => Ok(value),\n        InsertCell::Provided(SqlCell::Null) => Err(DataFusionError::Execution(format!(\n            \"UPDATE lix_version requires non-null boolean column '{column_name}'\"\n        ))),\n        InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!(\n            \"UPDATE lix_version expected boolean column '{column_name}', got {other:?}\"\n        ))),\n    }\n    .or_else(|error| {\n        if batch.column(column_index).is_null(row_index) {\n            Err(DataFusionError::Execution(format!(\n                \"UPDATE lix_version requires non-null boolean column '{column_name}'\"\n            )))\n        } else {\n            Err(error)\n        }\n    })\n}\n\nfn required_string_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n    action: &str,\n) -> Result<String> {\n    optional_string_value(batch, row_index, column_name, action)?.ok_or_else(|| {\n        DataFusionError::Execution(format!(\n            \"{action} lix_version requires non-null text column '{column_name}'\"\n        ))\n    })\n}\n\nfn optional_string_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n    action: &str,\n) -> Result<Option<String>> {\n    match optional_scalar_value(batch, row_index, column_name)? 
{\n        None\n        | Some(ScalarValue::Null)\n        | Some(ScalarValue::Utf8(None))\n        | Some(ScalarValue::Utf8View(None))\n        | Some(ScalarValue::LargeUtf8(None)) => Ok(None),\n        Some(ScalarValue::Utf8(Some(value)))\n        | Some(ScalarValue::Utf8View(Some(value)))\n        | Some(ScalarValue::LargeUtf8(Some(value))) => Ok(Some(value)),\n        Some(other) => Err(DataFusionError::Execution(format!(\n            \"{action} lix_version expected text-compatible column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn required_bool_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n    action: &str,\n) -> Result<bool> {\n    optional_bool_value(batch, row_index, column_name, action)?.ok_or_else(|| {\n        DataFusionError::Execution(format!(\n            \"{action} lix_version requires non-null boolean column '{column_name}'\"\n        ))\n    })\n}\n\nfn optional_bool_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n    action: &str,\n) -> Result<Option<bool>> {\n    match optional_scalar_value(batch, row_index, column_name)? 
{\n        None | Some(ScalarValue::Null) | Some(ScalarValue::Boolean(None)) => Ok(None),\n        Some(ScalarValue::Boolean(Some(value))) => Ok(Some(value)),\n        Some(other) => Err(DataFusionError::Execution(format!(\n            \"{action} lix_version expected boolean column '{column_name}', got {other:?}\"\n        ))),\n    }\n}\n\nfn optional_scalar_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<ScalarValue>> {\n    let Ok(column_index) = batch.schema().index_of(column_name) else {\n        return Ok(None);\n    };\n    Ok(Some(ScalarValue::try_from_array(\n        batch.column(column_index).as_ref(),\n        row_index,\n    )?))\n}\n\nfn dml_count_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![Field::new(\n        \"count\",\n        DataType::UInt64,\n        false,\n    )]))\n}\n\nfn dml_plan_properties(schema: SchemaRef) -> PlanProperties {\n    PlanProperties::new(\n        EquivalenceProperties::new(schema),\n        Partitioning::UnknownPartitioning(1),\n        EmissionType::Final,\n        Boundedness::Bounded,\n    )\n}\n\nfn dml_count_batch(schema: SchemaRef, count: u64) -> Result<RecordBatch> {\n    RecordBatch::try_new(\n        schema,\n        vec![Arc::new(UInt64Array::from(vec![count])) as ArrayRef],\n    )\n    .map_err(DataFusionError::from)\n}\n\nfn lix_version_schema() -> SchemaRef {\n    Arc::new(Schema::new(vec![\n        Field::new(\"id\", DataType::Utf8, false),\n        Field::new(\"name\", DataType::Utf8, false),\n        Field::new(\"hidden\", DataType::Boolean, false),\n        Field::new(\"commit_id\", DataType::Utf8, false),\n    ]))\n}\n\nfn version_projection_for_scan(projection: Option<&Vec<usize>>) -> Vec<VersionColumn> {\n    let all_columns = vec![\n        VersionColumn::Id,\n        VersionColumn::Name,\n        VersionColumn::Hidden,\n        VersionColumn::CommitId,\n    ];\n    projection.map_or(all_columns.clone(), |indices| {\n        indices\n        
    .iter()\n            .filter_map(|index| all_columns.get(*index).copied())\n            .collect()\n    })\n}\n\nfn projected_schema(schema: &SchemaRef, projection: Option<&Vec<usize>>) -> SchemaRef {\n    match projection {\n        Some(projection) => Arc::new(schema.project(projection).expect(\"projection is valid\")),\n        None => Arc::clone(schema),\n    }\n}\n\nfn version_record_batch(projection: &[VersionColumn], rows: &[VersionRow]) -> Result<RecordBatch> {\n    let arrays = projection\n        .iter()\n        .map(|column| match column {\n            VersionColumn::Id => string_array(rows.iter().map(|row| Some(row.id.as_str()))),\n            VersionColumn::Name => string_array(rows.iter().map(|row| Some(row.name.as_str()))),\n            VersionColumn::Hidden => Arc::new(BooleanArray::from(\n                rows.iter().map(|row| row.hidden).collect::<Vec<_>>(),\n            )) as ArrayRef,\n            VersionColumn::CommitId => {\n                string_array(rows.iter().map(|row| Some(row.commit_id.as_str())))\n            }\n        })\n        .collect::<Vec<_>>();\n    record_batch_with_row_count(version_schema(projection), arrays, rows.len()).map_err(|error| {\n        DataFusionError::Execution(format!(\"failed to build lix_version batch: {error}\"))\n    })\n}\n\nfn version_schema(projection: &[VersionColumn]) -> SchemaRef {\n    Arc::new(Schema::new(\n        projection\n            .iter()\n            .map(|column| match column {\n                VersionColumn::Id => Field::new(\"id\", DataType::Utf8, false),\n                VersionColumn::Name => Field::new(\"name\", DataType::Utf8, false),\n                VersionColumn::Hidden => Field::new(\"hidden\", DataType::Boolean, false),\n                VersionColumn::CommitId => Field::new(\"commit_id\", DataType::Utf8, false),\n            })\n            .collect::<Vec<_>>(),\n    ))\n}\n\nfn string_array<'a>(values: impl Iterator<Item = Option<&'a str>>) -> ArrayRef {\n    
Arc::new(StringArray::from(values.collect::<Vec<_>>())) as ArrayRef\n}\n\nfn datafusion_error_to_lix_error(error: DataFusionError) -> LixError {\n    super::error::datafusion_error_to_lix_error(error)\n}\n\nfn lix_error_to_datafusion_error(error: LixError) -> DataFusionError {\n    super::error::lix_error_to_datafusion_error(error)\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/version_scope.rs",
    "content": "use std::collections::BTreeSet;\n\nuse datafusion::error::DataFusionError;\nuse datafusion::logical_expr::expr::InList;\nuse datafusion::logical_expr::{BinaryExpr, Expr, Operator};\nuse datafusion::scalar::ScalarValue;\n\nuse crate::version::VersionRefReader;\nuse crate::LixError;\nuse crate::GLOBAL_VERSION_ID;\n\n/// Version scope requested by a SQL surface.\n///\n/// Active surfaces read through one session version. By-version surfaces either\n/// read explicitly filtered versions or, without a version predicate, enumerate\n/// every visible version scope before handing the request to live_state.\npub(crate) enum SqlVersionScope {\n    Active(String),\n    Explicit(Vec<String>),\n    AllVisible,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) enum VersionBinding {\n    Active { version_id: String },\n    Explicit,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct WriteVersionScope {\n    pub(crate) version_id: String,\n    pub(crate) global: bool,\n}\n\nimpl VersionBinding {\n    pub(crate) fn active(version_id: impl Into<String>) -> Self {\n        Self::Active {\n            version_id: version_id.into(),\n        }\n    }\n\n    pub(crate) fn explicit() -> Self {\n        Self::Explicit\n    }\n\n    pub(crate) fn active_version_id(&self) -> Option<&str> {\n        match self {\n            Self::Active { version_id } => Some(version_id),\n            Self::Explicit => None,\n        }\n    }\n\n    pub(crate) fn require_active_version_id(&self, action: &str) -> Result<String, LixError> {\n        match self {\n            Self::Active { version_id } => Ok(version_id.clone()),\n            Self::Explicit => Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"{action} is only supported for active-version SQL surfaces\"),\n            )),\n        }\n    }\n}\n\npub(crate) fn resolve_write_version_scope(\n    explicit_global: Option<bool>,\n    explicit_version_id: Option<String>,\n    
fallback_version_id: Option<&str>,\n    action: &str,\n    surface: &str,\n) -> Result<WriteVersionScope, DataFusionError> {\n    if explicit_global == Some(true) {\n        if explicit_version_id\n            .as_deref()\n            .is_some_and(|version_id| version_id != GLOBAL_VERSION_ID)\n        {\n            return Err(DataFusionError::Execution(format!(\n                \"{surface} cannot set lixcol_global=true with non-global lixcol_version_id\"\n            )));\n        }\n        return Ok(WriteVersionScope {\n            version_id: GLOBAL_VERSION_ID.to_string(),\n            global: true,\n        });\n    }\n\n    let version_id = explicit_version_id\n        .or_else(|| fallback_version_id.map(ToOwned::to_owned))\n        .ok_or_else(|| {\n            DataFusionError::Execution(format!(\"{action} requires lixcol_version_id\"))\n        })?;\n    if explicit_global == Some(false) && version_id == GLOBAL_VERSION_ID {\n        return Err(DataFusionError::Execution(format!(\n            \"{surface} cannot set lixcol_global=false with global lixcol_version_id\"\n        )));\n    }\n    Ok(WriteVersionScope {\n        global: explicit_global.unwrap_or(version_id == GLOBAL_VERSION_ID),\n        version_id,\n    })\n}\n\nimpl SqlVersionScope {\n    pub(crate) fn from_provider(\n        binding: &VersionBinding,\n        requested_version_ids: Vec<String>,\n    ) -> Self {\n        match binding {\n            VersionBinding::Active { version_id } => Self::Active(version_id.clone()),\n            VersionBinding::Explicit if requested_version_ids.is_empty() => Self::AllVisible,\n            VersionBinding::Explicit => Self::Explicit(requested_version_ids),\n        }\n    }\n}\n\npub(crate) async fn resolve_sql_version_scope(\n    version_ref: &dyn VersionRefReader,\n    scope: SqlVersionScope,\n) -> Result<Vec<String>, LixError> {\n    match scope {\n        SqlVersionScope::Active(version_id) => Ok(vec![version_id]),\n        
SqlVersionScope::Explicit(version_ids) => Ok(version_ids),\n        SqlVersionScope::AllVisible => visible_version_ids(version_ref).await,\n    }\n}\n\npub(crate) async fn resolve_provider_version_ids(\n    version_ref: &dyn VersionRefReader,\n    binding: &VersionBinding,\n    requested_version_ids: Vec<String>,\n) -> Result<Vec<String>, LixError> {\n    resolve_sql_version_scope(\n        version_ref,\n        SqlVersionScope::from_provider(binding, requested_version_ids),\n    )\n    .await\n}\n\npub(crate) fn explicit_version_ids_from_dml_filters(filters: &[Expr]) -> Vec<String> {\n    filters\n        .iter()\n        .flat_map(version_ids_from_filter)\n        .collect::<BTreeSet<_>>()\n        .into_iter()\n        .collect()\n}\n\nfn version_ids_from_filter(expr: &Expr) -> Vec<String> {\n    match expr {\n        Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => {\n            let mut values = version_ids_from_filter(&binary_expr.left);\n            values.extend(version_ids_from_filter(&binary_expr.right));\n            values\n        }\n        Expr::BinaryExpr(binary_expr) => version_id_from_binary_filter(binary_expr)\n            .map(|value| vec![value])\n            .unwrap_or_default(),\n        Expr::InList(in_list) => version_ids_from_in_list_filter(in_list).unwrap_or_default(),\n        _ => Vec::new(),\n    }\n}\n\nfn version_id_from_binary_filter(binary_expr: &BinaryExpr) -> Option<String> {\n    if binary_expr.op != Operator::Eq {\n        return None;\n    }\n\n    version_id_from_column_literal_filter(&binary_expr.left, &binary_expr.right)\n        .or_else(|| version_id_from_column_literal_filter(&binary_expr.right, &binary_expr.left))\n}\n\nfn version_ids_from_in_list_filter(in_list: &InList) -> Option<Vec<String>> {\n    if in_list.negated {\n        return None;\n    }\n    let Expr::Column(column) = in_list.expr.as_ref() else {\n        return None;\n    };\n    if column.name != \"lixcol_version_id\" {\n        return 
None;\n    }\n\n    let values = in_list\n        .list\n        .iter()\n        .map(string_expr_literal)\n        .collect::<Option<Vec<_>>>()?;\n    if values.is_empty() {\n        return None;\n    }\n    Some(values)\n}\n\nfn version_id_from_column_literal_filter(\n    column_expr: &Expr,\n    literal_expr: &Expr,\n) -> Option<String> {\n    let Expr::Column(column) = column_expr else {\n        return None;\n    };\n    if column.name != \"lixcol_version_id\" {\n        return None;\n    }\n    string_expr_literal(literal_expr)\n}\n\nfn string_expr_literal(expr: &Expr) -> Option<String> {\n    let Expr::Literal(literal, _) = expr else {\n        return None;\n    };\n    match literal {\n        ScalarValue::Utf8(Some(value))\n        | ScalarValue::Utf8View(Some(value))\n        | ScalarValue::LargeUtf8(Some(value)) => Some(value.clone()),\n        _ => None,\n    }\n}\n\nasync fn visible_version_ids(version_ref: &dyn VersionRefReader) -> Result<Vec<String>, LixError> {\n    let mut version_ids = version_ref\n        .scan_heads()\n        .await?\n        .into_iter()\n        .map(|head| head.version_id)\n        .collect::<BTreeSet<_>>();\n    version_ids.insert(GLOBAL_VERSION_ID.to_string());\n    Ok(version_ids.into_iter().collect())\n}\n\n#[cfg(test)]\nmod tests {\n    use async_trait::async_trait;\n\n    use super::*;\n    use crate::version::VersionHead;\n\n    #[tokio::test]\n    async fn active_scope_uses_session_version() {\n        let version_ref = RowsVersionRefReader::new(Vec::new());\n        let ids =\n            resolve_provider_version_ids(&version_ref, &VersionBinding::active(\"main\"), Vec::new())\n                .await\n                .expect(\"scope should resolve\");\n\n        assert_eq!(ids, vec![\"main\".to_string()]);\n    }\n\n    #[tokio::test]\n    async fn explicit_scope_keeps_requested_versions() {\n        let version_ref = RowsVersionRefReader::new(Vec::new());\n        let ids = resolve_provider_version_ids(\n          
  &version_ref,\n            &VersionBinding::explicit(),\n            vec![\"version-a\".to_string(), \"global\".to_string()],\n        )\n        .await\n        .expect(\"scope should resolve\");\n\n        assert_eq!(ids, vec![\"version-a\".to_string(), \"global\".to_string()]);\n    }\n\n    #[tokio::test]\n    async fn all_visible_scope_loads_version_refs_and_global() {\n        let version_ref = RowsVersionRefReader::new(vec![\n            VersionHead {\n                version_id: \"version-b\".to_string(),\n                commit_id: \"commit-version-b\".to_string(),\n            },\n            VersionHead {\n                version_id: \"version-a\".to_string(),\n                commit_id: \"commit-version-a\".to_string(),\n            },\n        ]);\n        let ids =\n            resolve_provider_version_ids(&version_ref, &VersionBinding::explicit(), Vec::new())\n                .await\n                .expect(\"scope should resolve\");\n\n        assert_eq!(\n            ids,\n            vec![\n                \"global\".to_string(),\n                \"version-a\".to_string(),\n                \"version-b\".to_string(),\n            ]\n        );\n    }\n\n    #[test]\n    fn write_scope_uses_fallback_version_when_version_is_implicit() {\n        let scope = resolve_write_version_scope(\n            None,\n            None,\n            Some(\"active-version\"),\n            \"INSERT into surface\",\n            \"surface\",\n        )\n        .expect(\"scope should resolve\");\n\n        assert_eq!(\n            scope,\n            WriteVersionScope {\n                version_id: \"active-version\".to_string(),\n                global: false,\n            }\n        );\n    }\n\n    #[test]\n    fn write_scope_requires_version_without_fallback() {\n        let error = resolve_write_version_scope(None, None, None, \"INSERT into surface\", \"surface\")\n            .expect_err(\"missing version should be rejected\");\n\n        assert!(error\n       
     .to_string()\n            .contains(\"INSERT into surface requires lixcol_version_id\"));\n    }\n\n    #[test]\n    fn write_scope_derives_global_from_global_version_id() {\n        let scope = resolve_write_version_scope(\n            None,\n            Some(GLOBAL_VERSION_ID.to_string()),\n            None,\n            \"INSERT into surface\",\n            \"surface\",\n        )\n        .expect(\"scope should resolve\");\n\n        assert_eq!(\n            scope,\n            WriteVersionScope {\n                version_id: GLOBAL_VERSION_ID.to_string(),\n                global: true,\n            }\n        );\n    }\n\n    #[test]\n    fn write_scope_rejects_non_global_with_global_version_id() {\n        let error = resolve_write_version_scope(\n            Some(false),\n            Some(GLOBAL_VERSION_ID.to_string()),\n            None,\n            \"INSERT into surface\",\n            \"surface\",\n        )\n        .expect_err(\"conflicting global/version scope should be rejected\");\n\n        assert!(error\n            .to_string()\n            .contains(\"surface cannot set lixcol_global=false with global lixcol_version_id\"));\n    }\n\n    #[test]\n    fn write_scope_rejects_global_with_non_global_version_id() {\n        let error = resolve_write_version_scope(\n            Some(true),\n            Some(\"version-a\".to_string()),\n            None,\n            \"INSERT into surface\",\n            \"surface\",\n        )\n        .expect_err(\"conflicting global/version scope should be rejected\");\n\n        assert!(error\n            .to_string()\n            .contains(\"surface cannot set lixcol_global=true with non-global lixcol_version_id\"));\n    }\n\n    struct RowsVersionRefReader {\n        heads: Vec<VersionHead>,\n    }\n\n    impl RowsVersionRefReader {\n        fn new(heads: Vec<VersionHead>) -> Self {\n            Self { heads }\n        }\n    }\n\n    #[async_trait]\n    impl VersionRefReader for RowsVersionRefReader {\n    
    async fn load_head(&self, version_id: &str) -> Result<Option<VersionHead>, LixError> {\n            Ok(self\n                .heads\n                .iter()\n                .find(|head| head.version_id == version_id)\n                .cloned())\n        }\n\n        async fn scan_heads(&self) -> Result<Vec<VersionHead>, LixError> {\n            Ok(self.heads.clone())\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/sql2/write_normalization.rs",
    "content": "use std::collections::{BTreeMap, BTreeSet};\nuse std::sync::Arc;\n\nuse datafusion::arrow::array::ArrayRef;\nuse datafusion::arrow::datatypes::DataType;\nuse datafusion::arrow::record_batch::RecordBatch;\nuse datafusion::common::{DataFusionError, Result, ScalarValue};\nuse datafusion::logical_expr::Expr;\nuse datafusion::physical_expr::expressions::{CastExpr, Literal};\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion::physical_plan::projection::ProjectionExec;\nuse datafusion::physical_plan::ExecutionPlan;\n\nuse crate::LixError;\n\n#[derive(Debug, Clone)]\npub(crate) enum SqlCell {\n    Null,\n    Value(ScalarValue),\n}\n\nimpl SqlCell {\n    pub(crate) fn from_scalar(value: ScalarValue) -> Self {\n        if value.is_null() {\n            Self::Null\n        } else {\n            Self::Value(value)\n        }\n    }\n}\n\n#[derive(Debug, Clone)]\npub(crate) enum InsertCell {\n    Omitted,\n    Provided(SqlCell),\n}\n\n#[derive(Debug, Clone)]\npub(crate) enum UpdateCell {\n    Unassigned,\n    Assigned(SqlCell),\n}\n\n#[derive(Debug, Clone)]\npub(crate) struct InsertColumnIntents {\n    explicit_columns: Option<BTreeSet<String>>,\n}\n\nimpl InsertColumnIntents {\n    pub(crate) fn all_explicit() -> Self {\n        Self {\n            explicit_columns: None,\n        }\n    }\n\n    pub(crate) fn from_input(input: &Arc<dyn ExecutionPlan>) -> Self {\n        let Some(projection) = input.as_any().downcast_ref::<ProjectionExec>() else {\n            return Self {\n                explicit_columns: None,\n            };\n        };\n\n        let explicit_columns = projection\n            .expr()\n            .iter()\n            .filter(|expr| !is_generated_null_default(expr.expr.as_ref()))\n            .map(|expr| expr.alias.clone())\n            .collect();\n\n        Self {\n            explicit_columns: Some(explicit_columns),\n        }\n    }\n\n    pub(crate) fn includes_column(&self, column_name: &str) -> bool {\n        
self.explicit_columns\n            .as_ref()\n            .is_none_or(|columns| columns.contains(column_name))\n    }\n\n    pub(crate) fn cell(\n        &self,\n        batch: &RecordBatch,\n        row_index: usize,\n        column_name: &str,\n    ) -> Result<InsertCell> {\n        if !self.includes_column(column_name) {\n            return Ok(InsertCell::Omitted);\n        }\n\n        optional_scalar_value(batch, row_index, column_name).map(|value| match value {\n            None => InsertCell::Omitted,\n            Some(value) => InsertCell::Provided(SqlCell::from_scalar(value)),\n        })\n    }\n}\n\npub(crate) fn reject_non_binary_casts_for_insert_column(\n    input: &Arc<dyn ExecutionPlan>,\n    column_name: &str,\n    context: &str,\n) -> Result<()> {\n    reject_non_binary_casts_for_insert_column_in_plan(input.as_ref(), column_name, context)\n}\n\nfn reject_non_binary_casts_for_insert_column_in_plan(\n    input: &dyn ExecutionPlan,\n    column_name: &str,\n    context: &str,\n) -> Result<()> {\n    let Some(projection) = input.as_any().downcast_ref::<ProjectionExec>() else {\n        for child in input.children() {\n            reject_non_binary_casts_for_insert_column_in_plan(\n                child.as_ref(),\n                column_name,\n                context,\n            )?;\n        }\n        return Ok(());\n    };\n\n    let Some(expr) = projection\n        .expr()\n        .iter()\n        .find(|expr| expr.alias == column_name)\n    else {\n        return Ok(());\n    };\n\n    if contains_non_binary_cast_to_binary(expr.expr.as_ref()) {\n        return Err(super::error::lix_error_to_datafusion_error(\n            LixError::new(\n                LixError::CODE_TYPE_MISMATCH,\n                format!(\"{context} expected binary column '{column_name}'\"),\n            )\n            .with_hint(\"Use X'...' 
or a binary parameter for file contents.\"),\n        ));\n    }\n\n    Ok(())\n}\n\nfn contains_non_binary_cast_to_binary(expr: &dyn PhysicalExpr) -> bool {\n    let Some(cast) = expr.as_any().downcast_ref::<CastExpr>() else {\n        return false;\n    };\n\n    if is_binary_type(cast.cast_type()) && !physical_expr_is_binary_or_null(cast.expr().as_ref()) {\n        return true;\n    }\n\n    contains_non_binary_cast_to_binary(cast.expr().as_ref())\n}\n\nfn physical_expr_is_binary_or_null(expr: &dyn PhysicalExpr) -> bool {\n    if let Some(literal) = expr.as_any().downcast_ref::<Literal>() {\n        return scalar_is_binary_or_null(literal.value());\n    }\n\n    if let Some(cast) = expr.as_any().downcast_ref::<CastExpr>() {\n        return is_binary_type(cast.cast_type())\n            && physical_expr_is_binary_or_null(cast.expr().as_ref());\n    }\n\n    false\n}\n\npub(crate) fn scalar_is_binary_or_null(value: &ScalarValue) -> bool {\n    value.is_null()\n        || matches!(\n            value,\n            ScalarValue::Binary(_)\n                | ScalarValue::LargeBinary(_)\n                | ScalarValue::FixedSizeBinary(_, _)\n        )\n}\n\npub(crate) fn logical_expr_is_binary_or_null(expr: &Expr) -> bool {\n    match expr {\n        Expr::Literal(value, _) => scalar_is_binary_or_null(value),\n        Expr::Cast(cast) => {\n            is_binary_type(&cast.data_type) && logical_expr_is_binary_or_null(&cast.expr)\n        }\n        Expr::Alias(alias) => logical_expr_is_binary_or_null(&alias.expr),\n        _ => false,\n    }\n}\n\npub(crate) fn is_binary_type(data_type: &DataType) -> bool {\n    matches!(\n        data_type,\n        DataType::Binary | DataType::LargeBinary | DataType::FixedSizeBinary(_)\n    )\n}\n\npub(crate) fn lix_file_data_type_lix_error() -> LixError {\n    LixError::new(\n        LixError::CODE_TYPE_MISMATCH,\n        \"lix_file.data expects binary data\",\n    )\n    .with_hint(\"Use X'...' 
or a binary parameter for file contents.\")\n}\n\npub(crate) fn lix_file_data_type_error(\n    context: &str,\n    column_name: &str,\n    instruction: &str,\n) -> DataFusionError {\n    super::error::lix_error_to_datafusion_error(\n        LixError::new(\n            LixError::CODE_TYPE_MISMATCH,\n            format!(\"{context} expected binary column '{column_name}'\"),\n        )\n        .with_hint(instruction),\n    )\n}\n\npub(crate) fn lix_file_data_type_error_with_value(\n    context: &str,\n    column_name: &str,\n    value: &ScalarValue,\n    instruction: &str,\n) -> DataFusionError {\n    super::error::lix_error_to_datafusion_error(\n        LixError::new(\n            LixError::CODE_TYPE_MISMATCH,\n            format!(\"{context} expected binary column '{column_name}', got {value:?}\"),\n        )\n        .with_hint(instruction),\n    )\n}\n\npub(crate) struct UpdateAssignmentValues {\n    values: BTreeMap<String, ArrayRef>,\n}\n\nimpl UpdateAssignmentValues {\n    pub(crate) fn evaluate(\n        batch: &RecordBatch,\n        assignments: &[(String, Arc<dyn PhysicalExpr>)],\n    ) -> Result<Self> {\n        let mut values = BTreeMap::new();\n        for (column_name, assignment) in assignments {\n            values.insert(\n                column_name.clone(),\n                assignment.evaluate(batch)?.into_array(batch.num_rows())?,\n            );\n        }\n        Ok(Self { values })\n    }\n\n    #[cfg(test)]\n    pub(crate) fn from_batch_columns(batch: &RecordBatch, columns: &[&str]) -> Self {\n        let values = columns\n            .iter()\n            .filter_map(|column_name| {\n                let column_index = batch.schema().index_of(column_name).ok()?;\n                Some((\n                    (*column_name).to_string(),\n                    Arc::clone(batch.column(column_index)),\n                ))\n            })\n            .collect();\n        Self { values }\n    }\n\n    /// Returns only the value explicitly assigned by 
SQL UPDATE.\n    ///\n    /// Use this for document-patch semantics where `Unassigned` must remain\n    /// distinct from `Assigned(NULL)`.\n    pub(crate) fn assigned_cell(&self, row_index: usize, column_name: &str) -> Result<UpdateCell> {\n        let Some(array) = self.values.get(column_name) else {\n            return Ok(UpdateCell::Unassigned);\n        };\n\n        ScalarValue::try_from_array(array.as_ref(), row_index)\n            .map(SqlCell::from_scalar)\n            .map(UpdateCell::Assigned)\n            .map_err(|error| {\n                DataFusionError::Execution(format!(\n                    \"failed to decode SQL UPDATE assignment for column '{column_name}' at row {row_index}: {error}\"\n                ))\n            })\n    }\n\n    /// Returns the assigned SQL UPDATE value, or falls back to the existing row\n    /// column value when the column was not assigned.\n    ///\n    /// Use this for scalar row-column semantics. Do not use it to reconstruct\n    /// JSON documents from projected property columns, because projection can\n    /// erase the difference between an absent property and an explicit null.\n    pub(crate) fn assigned_or_existing_cell(\n        &self,\n        batch: &RecordBatch,\n        row_index: usize,\n        column_name: &str,\n    ) -> Result<InsertCell> {\n        match self.assigned_cell(row_index, column_name)? 
{\n            UpdateCell::Assigned(value) => Ok(InsertCell::Provided(value)),\n            UpdateCell::Unassigned => {\n                optional_scalar_value(batch, row_index, column_name).map(|value| match value {\n                    None => InsertCell::Omitted,\n                    Some(value) => InsertCell::Provided(SqlCell::from_scalar(value)),\n                })\n            }\n        }\n    }\n}\n\npub(crate) fn optional_scalar_value(\n    batch: &RecordBatch,\n    row_index: usize,\n    column_name: &str,\n) -> Result<Option<ScalarValue>> {\n    let schema = batch.schema();\n    let column_index = match schema.index_of(column_name) {\n        Ok(column_index) => column_index,\n        Err(_) => return Ok(None),\n    };\n    if row_index >= batch.num_rows() {\n        return Err(DataFusionError::Execution(format!(\n            \"row index {row_index} out of bounds for SQL write batch with {} rows\",\n            batch.num_rows()\n        )));\n    }\n    ScalarValue::try_from_array(batch.column(column_index).as_ref(), row_index)\n        .map(Some)\n        .map_err(|error| {\n            DataFusionError::Execution(format!(\n                \"failed to decode SQL write column '{column_name}' at row {row_index}: {error}\"\n            ))\n        })\n}\n\nfn is_generated_null_default(expr: &dyn PhysicalExpr) -> bool {\n    if let Some(literal) = expr.as_any().downcast_ref::<Literal>() {\n        return literal.value().is_null();\n    }\n\n    if let Some(cast) = expr.as_any().downcast_ref::<CastExpr>() {\n        return is_generated_null_default(cast.expr().as_ref());\n    }\n\n    false\n}\n"
  },
  {
    "path": "packages/engine/src/storage/context.rs",
    "content": "use std::sync::Arc;\n\nuse async_trait::async_trait;\n\nuse crate::backend::{Backend, BackendReadTransaction, BackendWriteTransaction};\nuse crate::storage::types::{KvWriteBatch, StorageWriter};\nuse crate::storage::{\n    KvEntryPage, KvExistsBatch, KvGetRequest, KvKeyPage, KvScanRequest, KvValueBatch, KvValuePage,\n    KvWriteStats, StorageReadTransaction, StorageReader, StorageWriteTransaction,\n};\nuse crate::LixError;\n\n#[derive(Clone)]\npub(crate) struct StorageContext {\n    backend: Arc<dyn Backend + Send + Sync>,\n}\n\nimpl StorageContext {\n    pub(crate) fn new(backend: Arc<dyn Backend + Send + Sync>) -> Self {\n        Self { backend }\n    }\n\n    pub(crate) async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn StorageReadTransaction + Send + Sync + 'static>, LixError> {\n        let transaction = self.backend.begin_read_transaction().await?;\n        Ok(Box::new(StorageContextReadTransaction { transaction }))\n    }\n\n    pub(crate) async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn StorageWriteTransaction + Send + Sync + 'static>, LixError> {\n        let transaction = self.backend.begin_write_transaction().await?;\n        Ok(Box::new(StorageContextWriteTransaction { transaction }))\n    }\n\n    pub(crate) async fn close(&self) -> Result<(), LixError> {\n        self.backend.close().await\n    }\n\n    pub(crate) async fn destroy(&self) -> Result<(), LixError> {\n        self.backend.destroy().await\n    }\n}\n\n#[cfg(any(test, feature = \"storage-benches\"))]\n#[async_trait]\nimpl StorageReader for StorageContext {\n    async fn get_values(&mut self, request: KvGetRequest) -> Result<KvValueBatch, LixError> {\n        let mut transaction = self.begin_read_transaction().await?;\n        let result = transaction.get_values(request).await;\n        match result {\n            Ok(result) => {\n                transaction.rollback().await?;\n                Ok(result)\n            }\n        
    Err(error) => {\n                let _ = transaction.rollback().await;\n                Err(error)\n            }\n        }\n    }\n\n    async fn exists_many(&mut self, request: KvGetRequest) -> Result<KvExistsBatch, LixError> {\n        let mut transaction = self.begin_read_transaction().await?;\n        let result = transaction.exists_many(request).await;\n        match result {\n            Ok(result) => {\n                transaction.rollback().await?;\n                Ok(result)\n            }\n            Err(error) => {\n                let _ = transaction.rollback().await;\n                Err(error)\n            }\n        }\n    }\n\n    async fn scan_keys(&mut self, request: KvScanRequest) -> Result<KvKeyPage, LixError> {\n        let mut transaction = self.begin_read_transaction().await?;\n        let result = transaction.scan_keys(request).await;\n        match result {\n            Ok(result) => {\n                transaction.rollback().await?;\n                Ok(result)\n            }\n            Err(error) => {\n                let _ = transaction.rollback().await;\n                Err(error)\n            }\n        }\n    }\n\n    async fn scan_values(&mut self, request: KvScanRequest) -> Result<KvValuePage, LixError> {\n        let mut transaction = self.begin_read_transaction().await?;\n        let result = transaction.scan_values(request).await;\n        match result {\n            Ok(result) => {\n                transaction.rollback().await?;\n                Ok(result)\n            }\n            Err(error) => {\n                let _ = transaction.rollback().await;\n                Err(error)\n            }\n        }\n    }\n\n    async fn scan_entries(&mut self, request: KvScanRequest) -> Result<KvEntryPage, LixError> {\n        let mut transaction = self.begin_read_transaction().await?;\n        let result = transaction.scan_entries(request).await;\n        match result {\n            Ok(result) => {\n                
transaction.rollback().await?;\n                Ok(result)\n            }\n            Err(error) => {\n                let _ = transaction.rollback().await;\n                Err(error)\n            }\n        }\n    }\n}\n\nstruct StorageContextReadTransaction {\n    transaction: Box<dyn BackendReadTransaction + Send + Sync + 'static>,\n}\n\nstruct StorageContextWriteTransaction {\n    transaction: Box<dyn BackendWriteTransaction + Send + Sync + 'static>,\n}\n\n#[async_trait]\nimpl StorageReader for StorageContextReadTransaction {\n    async fn get_values(&mut self, request: KvGetRequest) -> Result<KvValueBatch, LixError> {\n        self.transaction\n            .get_values(request.into())\n            .await\n            .map(Into::into)\n    }\n\n    async fn exists_many(&mut self, request: KvGetRequest) -> Result<KvExistsBatch, LixError> {\n        self.transaction\n            .exists_many(request.into())\n            .await\n            .map(Into::into)\n    }\n\n    async fn scan_keys(&mut self, request: KvScanRequest) -> Result<KvKeyPage, LixError> {\n        self.transaction\n            .scan_keys(request.into())\n            .await\n            .map(Into::into)\n    }\n\n    async fn scan_values(&mut self, request: KvScanRequest) -> Result<KvValuePage, LixError> {\n        self.transaction\n            .scan_values(request.into())\n            .await\n            .map(Into::into)\n    }\n\n    async fn scan_entries(&mut self, request: KvScanRequest) -> Result<KvEntryPage, LixError> {\n        self.transaction\n            .scan_entries(request.into())\n            .await\n            .map(Into::into)\n    }\n}\n\n#[async_trait]\nimpl StorageReadTransaction for StorageContextReadTransaction {\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        self.transaction.rollback().await\n    }\n}\n\n#[async_trait]\nimpl StorageReader for StorageContextWriteTransaction {\n    async fn get_values(&mut self, request: KvGetRequest) -> 
Result<KvValueBatch, LixError> {\n        self.transaction\n            .get_values(request.into())\n            .await\n            .map(Into::into)\n    }\n\n    async fn exists_many(&mut self, request: KvGetRequest) -> Result<KvExistsBatch, LixError> {\n        self.transaction\n            .exists_many(request.into())\n            .await\n            .map(Into::into)\n    }\n\n    async fn scan_keys(&mut self, request: KvScanRequest) -> Result<KvKeyPage, LixError> {\n        self.transaction\n            .scan_keys(request.into())\n            .await\n            .map(Into::into)\n    }\n\n    async fn scan_values(&mut self, request: KvScanRequest) -> Result<KvValuePage, LixError> {\n        self.transaction\n            .scan_values(request.into())\n            .await\n            .map(Into::into)\n    }\n\n    async fn scan_entries(&mut self, request: KvScanRequest) -> Result<KvEntryPage, LixError> {\n        self.transaction\n            .scan_entries(request.into())\n            .await\n            .map(Into::into)\n    }\n}\n\n#[async_trait]\nimpl StorageWriter for StorageContextWriteTransaction {\n    async fn write_kv_batch(&mut self, batch: KvWriteBatch) -> Result<KvWriteStats, LixError> {\n        self.transaction\n            .write_kv_batch(batch.into())\n            .await\n            .map(Into::into)\n    }\n}\n\n#[async_trait]\nimpl StorageReadTransaction for StorageContextWriteTransaction {\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        self.transaction.rollback().await\n    }\n}\n\n#[async_trait]\nimpl StorageWriteTransaction for StorageContextWriteTransaction {\n    async fn commit(self: Box<Self>) -> Result<(), LixError> {\n        self.transaction.commit().await\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use crate::backend::testing::UnitTestBackend;\n    use crate::storage::types::KvWriteBatch;\n    use crate::storage::{KvGetGroup, KvScanRange, StorageWriteSet};\n\n    use super::*;\n\n 
   #[tokio::test]\n    async fn storage_context_roundtrips_batched_writes_and_reads() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend);\n        let mut tx = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction opens\");\n\n        let mut batch = KvWriteBatch::new();\n        batch.put(\"ns\", b\"a\".to_vec(), b\"1\".to_vec());\n        batch.put(\"ns\", b\"b\".to_vec(), b\"2\".to_vec());\n        let stats = tx.write_kv_batch(batch).await.expect(\"batch writes\");\n        assert_eq!(stats.puts, 2);\n        tx.commit().await.expect(\"commit succeeds\");\n\n        let mut tx = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction opens\");\n        let result = tx\n            .get_values(KvGetRequest {\n                groups: vec![KvGetGroup {\n                    namespace: \"ns\".to_string(),\n                    keys: vec![b\"a\".to_vec(), b\"b\".to_vec()],\n                }],\n            })\n            .await\n            .expect(\"batch reads\");\n        assert_eq!(result.groups[0].value(0), Some(Some(b\"1\".as_slice())));\n        assert_eq!(result.groups[0].value(1), Some(Some(b\"2\".as_slice())));\n\n        let exists = tx\n            .exists_many(KvGetRequest {\n                groups: vec![KvGetGroup {\n                    namespace: \"ns\".to_string(),\n                    keys: vec![b\"a\".to_vec(), b\"missing\".to_vec()],\n                }],\n            })\n            .await\n            .expect(\"existence reads\");\n        assert_eq!(exists.groups[0].exists, vec![true, false]);\n\n        let result = tx\n            .scan_entries(KvScanRequest {\n                namespace: \"ns\".to_string(),\n                range: KvScanRange::prefix(Vec::new()),\n                after: Some(b\"a\".to_vec()),\n                limit: 1,\n            })\n    
        .await\n            .expect(\"scan reads\");\n        assert_eq!(result.key(0).expect(\"key exists\"), b\"b\");\n        assert_eq!(result.value(0).expect(\"value exists\"), b\"2\");\n\n        let key_only = tx\n            .scan_keys(KvScanRequest {\n                namespace: \"ns\".to_string(),\n                range: KvScanRange::prefix(Vec::new()),\n                after: None,\n                limit: 2,\n            })\n            .await\n            .expect(\"key-only scan reads\");\n        assert_eq!(key_only.keys.iter().collect::<Vec<_>>(), vec![b\"a\", b\"b\"]);\n        tx.rollback().await.expect(\"rollback succeeds\");\n    }\n\n    #[tokio::test]\n    async fn storage_write_set_applies_as_one_batch() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend);\n        let mut tx = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction opens\");\n\n        let mut writes = StorageWriteSet::new();\n        assert!(writes.is_empty());\n        writes.put(\"ns\", b\"a\".to_vec(), b\"1\".to_vec());\n        writes.put(\"ns\", b\"b\".to_vec(), b\"2\".to_vec());\n        writes.delete(\"ns\", b\"missing\".to_vec());\n        assert!(!writes.is_empty());\n\n        let stats = writes.apply(tx.as_mut()).await.expect(\"write set applies\");\n        assert_eq!(stats.puts, 2);\n        assert_eq!(stats.deletes, 1);\n        tx.commit().await.expect(\"commit succeeds\");\n\n        let mut tx = storage\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction opens\");\n        let result = tx\n            .get_values(KvGetRequest {\n                groups: vec![KvGetGroup {\n                    namespace: \"ns\".to_string(),\n                    keys: vec![b\"a\".to_vec(), b\"b\".to_vec()],\n                }],\n            })\n            .await\n            .expect(\"batch 
reads\");\n        assert_eq!(result.groups[0].value(0), Some(Some(b\"1\".as_slice())));\n        assert_eq!(result.groups[0].value(1), Some(Some(b\"2\".as_slice())));\n        tx.rollback().await.expect(\"rollback succeeds\");\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/storage/mod.rs",
    "content": "mod context;\nmod read_scope;\nmod types;\n\npub(crate) use context::StorageContext;\npub(crate) use read_scope::{ScopedStorageReader, StorageReadScope};\npub(crate) use types::{\n    KvEntryPage, KvExistsBatch, KvExistsGroup, KvGetGroup, KvGetRequest, KvKeyPage, KvScanRange,\n    KvScanRequest, KvValueBatch, KvValueGroup, KvValuePage, KvWriteStats, StorageReadTransaction,\n    StorageReader, StorageWriteSet, StorageWriteTransaction,\n};\n\n#[cfg(feature = \"storage-benches\")]\npub(crate) use types::{KvWriteBatch, KvWriteGroup};\n"
  },
  {
    "path": "packages/engine/src/storage/read_scope.rs",
    "content": "use std::sync::Arc;\n\nuse crate::storage::{\n    KvEntryPage, KvExistsBatch, KvGetRequest, KvKeyPage, KvScanRequest, KvValueBatch, KvValuePage,\n    StorageReadTransaction, StorageReader,\n};\nuse crate::LixError;\nuse tokio::sync::Mutex;\n\n/// Shared read visibility over one KV store handle.\n///\n/// This lets multiple subsystem readers share the same transaction/backend view\n/// even when the underlying handle itself is not cloneable.\npub(crate) struct StorageReadScope<S> {\n    store: Arc<Mutex<S>>,\n}\n\nimpl<S> StorageReadScope<S>\nwhere\n    S: StorageReader,\n{\n    pub(crate) fn new(store: S) -> Self {\n        Self {\n            store: Arc::new(Mutex::new(store)),\n        }\n    }\n\n    pub(crate) fn store(&self) -> ScopedStorageReader<S> {\n        ScopedStorageReader {\n            store: Arc::clone(&self.store),\n        }\n    }\n}\n\nimpl StorageReadScope<Box<dyn StorageReadTransaction + Send + Sync + 'static>> {\n    pub(crate) async fn rollback(self) -> Result<(), LixError> {\n        let store = Arc::try_unwrap(self.store).map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"cannot close storage read scope while scoped readers are still alive\",\n            )\n        })?;\n        store.into_inner().rollback().await\n    }\n}\n\npub(crate) struct ScopedStorageReader<S> {\n    store: Arc<Mutex<S>>,\n}\n\nimpl<S> Clone for ScopedStorageReader<S> {\n    fn clone(&self) -> Self {\n        Self {\n            store: Arc::clone(&self.store),\n        }\n    }\n}\n\n#[async_trait::async_trait]\nimpl<S> StorageReader for ScopedStorageReader<S>\nwhere\n    S: StorageReader,\n{\n    async fn get_values(&mut self, request: KvGetRequest) -> Result<KvValueBatch, LixError> {\n        let mut store = self.store.lock().await;\n        store.get_values(request).await\n    }\n\n    async fn exists_many(&mut self, request: KvGetRequest) -> Result<KvExistsBatch, LixError> {\n        let mut 
store = self.store.lock().await;\n        store.exists_many(request).await\n    }\n\n    async fn scan_keys(&mut self, request: KvScanRequest) -> Result<KvKeyPage, LixError> {\n        let mut store = self.store.lock().await;\n        store.scan_keys(request).await\n    }\n\n    async fn scan_values(&mut self, request: KvScanRequest) -> Result<KvValuePage, LixError> {\n        let mut store = self.store.lock().await;\n        store.scan_values(request).await\n    }\n\n    async fn scan_entries(&mut self, request: KvScanRequest) -> Result<KvEntryPage, LixError> {\n        let mut store = self.store.lock().await;\n        store.scan_entries(request).await\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/storage/types.rs",
    "content": "use async_trait::async_trait;\n\nuse crate::backend;\nuse crate::backend::BytePage;\nuse crate::LixError;\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) enum KvScanRange {\n    Prefix(Vec<u8>),\n    Range { start: Vec<u8>, end: Vec<u8> },\n}\n\nimpl KvScanRange {\n    pub(crate) fn prefix(prefix: impl Into<Vec<u8>>) -> Self {\n        Self::Prefix(prefix.into())\n    }\n\n    pub(crate) fn range(start: impl Into<Vec<u8>>, end: impl Into<Vec<u8>>) -> Self {\n        Self::Range {\n            start: start.into(),\n            end: end.into(),\n        }\n    }\n}\n\nimpl From<KvScanRange> for backend::BackendKvScanRange {\n    fn from(range: KvScanRange) -> Self {\n        match range {\n            KvScanRange::Prefix(prefix) => Self::Prefix(prefix),\n            KvScanRange::Range { start, end } => Self::Range { start, end },\n        }\n    }\n}\n\n#[async_trait]\npub(crate) trait StorageReader: Send {\n    async fn get_values(&mut self, request: KvGetRequest) -> Result<KvValueBatch, LixError>;\n\n    async fn exists_many(&mut self, request: KvGetRequest) -> Result<KvExistsBatch, LixError>;\n\n    async fn scan_keys(&mut self, request: KvScanRequest) -> Result<KvKeyPage, LixError>;\n\n    async fn scan_values(&mut self, request: KvScanRequest) -> Result<KvValuePage, LixError>;\n\n    async fn scan_entries(&mut self, request: KvScanRequest) -> Result<KvEntryPage, LixError>;\n}\n\n#[async_trait]\npub(crate) trait StorageWriter: StorageReader {\n    async fn write_kv_batch(&mut self, batch: KvWriteBatch) -> Result<KvWriteStats, LixError>;\n}\n\n#[async_trait]\npub(crate) trait StorageReadTransaction: StorageReader + Send + Sync {\n    async fn rollback(self: Box<Self>) -> Result<(), LixError>;\n}\n\n#[async_trait]\npub(crate) trait StorageWriteTransaction:\n    StorageReadTransaction + StorageWriter + Send + Sync\n{\n    async fn commit(self: Box<Self>) -> Result<(), LixError>;\n}\n\n#[async_trait]\nimpl<T> StorageReader for &mut T\nwhere\n    
T: StorageReader + ?Sized,\n{\n    async fn get_values(&mut self, request: KvGetRequest) -> Result<KvValueBatch, LixError> {\n        (**self).get_values(request).await\n    }\n\n    async fn exists_many(&mut self, request: KvGetRequest) -> Result<KvExistsBatch, LixError> {\n        (**self).exists_many(request).await\n    }\n\n    async fn scan_keys(&mut self, request: KvScanRequest) -> Result<KvKeyPage, LixError> {\n        (**self).scan_keys(request).await\n    }\n\n    async fn scan_values(&mut self, request: KvScanRequest) -> Result<KvValuePage, LixError> {\n        (**self).scan_values(request).await\n    }\n\n    async fn scan_entries(&mut self, request: KvScanRequest) -> Result<KvEntryPage, LixError> {\n        (**self).scan_entries(request).await\n    }\n}\n\n#[async_trait]\nimpl<T> StorageReader for Box<T>\nwhere\n    T: StorageReader + ?Sized,\n{\n    async fn get_values(&mut self, request: KvGetRequest) -> Result<KvValueBatch, LixError> {\n        (**self).get_values(request).await\n    }\n\n    async fn exists_many(&mut self, request: KvGetRequest) -> Result<KvExistsBatch, LixError> {\n        (**self).exists_many(request).await\n    }\n\n    async fn scan_keys(&mut self, request: KvScanRequest) -> Result<KvKeyPage, LixError> {\n        (**self).scan_keys(request).await\n    }\n\n    async fn scan_values(&mut self, request: KvScanRequest) -> Result<KvValuePage, LixError> {\n        (**self).scan_values(request).await\n    }\n\n    async fn scan_entries(&mut self, request: KvScanRequest) -> Result<KvEntryPage, LixError> {\n        (**self).scan_entries(request).await\n    }\n}\n\n#[async_trait]\nimpl<T> StorageWriter for &mut T\nwhere\n    T: StorageWriter + ?Sized,\n{\n    async fn write_kv_batch(&mut self, batch: KvWriteBatch) -> Result<KvWriteStats, LixError> {\n        (**self).write_kv_batch(batch).await\n    }\n}\n\n#[async_trait]\nimpl<T> StorageWriter for Box<T>\nwhere\n    T: StorageWriter + ?Sized,\n{\n    async fn write_kv_batch(&mut self, 
batch: KvWriteBatch) -> Result<KvWriteStats, LixError> {\n        (**self).write_kv_batch(batch).await\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct KvGetRequest {\n    pub(crate) groups: Vec<KvGetGroup>,\n}\n\nimpl From<KvGetRequest> for backend::BackendKvGetRequest {\n    fn from(request: KvGetRequest) -> Self {\n        Self {\n            groups: request.groups.into_iter().map(Into::into).collect(),\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct KvGetGroup {\n    pub(crate) namespace: String,\n    pub(crate) keys: Vec<Vec<u8>>,\n}\n\nimpl From<KvGetGroup> for backend::BackendKvGetGroup {\n    fn from(group: KvGetGroup) -> Self {\n        Self {\n            namespace: group.namespace,\n            keys: group.keys,\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct KvValueBatch {\n    pub(crate) groups: Vec<KvValueGroup>,\n}\n\nimpl From<backend::BackendKvValueBatch> for KvValueBatch {\n    fn from(result: backend::BackendKvValueBatch) -> Self {\n        Self {\n            groups: result.groups.into_iter().map(Into::into).collect(),\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct KvValueGroup {\n    namespace: String,\n    values: BytePage,\n    present: Vec<bool>,\n}\n\nimpl From<backend::BackendKvValueGroup> for KvValueGroup {\n    fn from(group: backend::BackendKvValueGroup) -> Self {\n        let (namespace, values, present) = group.into_parts();\n        Self {\n            namespace,\n            values,\n            present,\n        }\n    }\n}\n\nimpl KvValueGroup {\n    pub(crate) fn len(&self) -> usize {\n        self.present.len()\n    }\n\n    pub(crate) fn value(&self, index: usize) -> Option<Option<&[u8]>> {\n        let present = *self.present.get(index)?;\n        if present {\n            Some(Some(\n                self.values\n                    .get(index)\n                    .expect(\"storage value batch invariant 
violated\"),\n            ))\n        } else {\n            Some(None)\n        }\n    }\n\n    pub(crate) fn values_iter(&self) -> impl Iterator<Item = Option<&[u8]>> {\n        (0..self.len()).filter_map(|index| self.value(index))\n    }\n\n    pub(crate) fn single_value_owned(&self) -> Option<Vec<u8>> {\n        if self.len() != 1 {\n            return None;\n        }\n        self.value(0).flatten().map(<[u8]>::to_vec)\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct KvExistsBatch {\n    pub(crate) groups: Vec<KvExistsGroup>,\n}\n\nimpl From<backend::BackendKvExistsBatch> for KvExistsBatch {\n    fn from(result: backend::BackendKvExistsBatch) -> Self {\n        Self {\n            groups: result.groups.into_iter().map(Into::into).collect(),\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct KvExistsGroup {\n    pub(crate) namespace: String,\n    pub(crate) exists: Vec<bool>,\n}\n\nimpl From<backend::BackendKvExistsGroup> for KvExistsGroup {\n    fn from(group: backend::BackendKvExistsGroup) -> Self {\n        Self {\n            namespace: group.namespace,\n            exists: group.exists,\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct KvScanRequest {\n    pub(crate) namespace: String,\n    pub(crate) range: KvScanRange,\n    pub(crate) after: Option<Vec<u8>>,\n    pub(crate) limit: usize,\n}\n\nimpl From<KvScanRequest> for backend::BackendKvScanRequest {\n    fn from(request: KvScanRequest) -> Self {\n        Self {\n            namespace: request.namespace,\n            range: request.range.into(),\n            after: request.after,\n            limit: request.limit,\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct KvKeyPage {\n    pub(crate) keys: BytePage,\n    pub(crate) resume_after: Option<Vec<u8>>,\n}\n\nimpl From<backend::BackendKvKeyPage> for KvKeyPage {\n    fn from(result: backend::BackendKvKeyPage) -> Self {\n        Self {\n   
         keys: result.keys,\n            resume_after: result.resume_after,\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct KvValuePage {\n    pub(crate) values: BytePage,\n    pub(crate) resume_after: Option<Vec<u8>>,\n}\n\nimpl From<backend::BackendKvValuePage> for KvValuePage {\n    fn from(result: backend::BackendKvValuePage) -> Self {\n        Self {\n            values: result.values,\n            resume_after: result.resume_after,\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct KvEntryPage {\n    pub(crate) keys: BytePage,\n    pub(crate) values: BytePage,\n    pub(crate) resume_after: Option<Vec<u8>>,\n}\n\nimpl From<backend::BackendKvEntryPage> for KvEntryPage {\n    fn from(result: backend::BackendKvEntryPage) -> Self {\n        Self {\n            keys: result.keys,\n            values: result.values,\n            resume_after: result.resume_after,\n        }\n    }\n}\n\nimpl KvEntryPage {\n    pub(crate) fn len(&self) -> usize {\n        self.keys.len()\n    }\n\n    pub(crate) fn is_empty(&self) -> bool {\n        self.keys.is_empty()\n    }\n\n    pub(crate) fn key(&self, index: usize) -> Option<&[u8]> {\n        self.keys.get(index)\n    }\n\n    pub(crate) fn value(&self, index: usize) -> Option<&[u8]> {\n        self.values.get(index)\n    }\n}\n\n#[derive(Debug, Default)]\npub(crate) struct StorageWriteSet {\n    batch: KvWriteBatch,\n}\n\nimpl StorageWriteSet {\n    pub(crate) fn new() -> Self {\n        Self::default()\n    }\n\n    pub(crate) fn put(&mut self, namespace: &'static str, key: Vec<u8>, value: Vec<u8>) {\n        self.batch.put(namespace, key, value);\n    }\n\n    pub(crate) fn delete(&mut self, namespace: &'static str, key: Vec<u8>) {\n        self.batch.delete(namespace, key);\n    }\n\n    pub(crate) fn is_empty(&self) -> bool {\n        self.batch.is_empty()\n    }\n\n    pub(crate) async fn apply(\n        self,\n        writer: &mut (impl StorageWriter + 
?Sized),\n    ) -> Result<KvWriteStats, LixError> {\n        writer.write_kv_batch(self.batch).await\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\npub(crate) struct KvWriteBatch {\n    pub(crate) groups: Vec<KvWriteGroup>,\n}\n\nimpl KvWriteBatch {\n    pub(crate) fn new() -> Self {\n        Self::default()\n    }\n\n    pub(crate) fn put(\n        &mut self,\n        namespace: impl Into<String>,\n        key: impl Into<Vec<u8>>,\n        value: impl Into<Vec<u8>>,\n    ) {\n        let namespace = namespace.into();\n        let group = self.group_mut(namespace);\n        group.put(key.into(), value.into());\n    }\n\n    pub(crate) fn delete(&mut self, namespace: impl Into<String>, key: impl Into<Vec<u8>>) {\n        let namespace = namespace.into();\n        let group = self.group_mut(namespace);\n        group.delete(key.into());\n    }\n\n    pub(crate) fn is_empty(&self) -> bool {\n        self.groups\n            .iter()\n            .all(|group| group.put_count() == 0 && group.delete_count() == 0)\n    }\n\n    fn group_mut(&mut self, namespace: String) -> &mut KvWriteGroup {\n        if let Some(index) = self\n            .groups\n            .iter()\n            .position(|group| group.namespace == namespace)\n        {\n            return &mut self.groups[index];\n        }\n        self.groups.push(KvWriteGroup {\n            namespace,\n            put_keys: backend::BytePageBuilder::new(),\n            put_values: backend::BytePageBuilder::new(),\n            deletes: backend::BytePageBuilder::new(),\n        });\n        self.groups.last_mut().expect(\"group just pushed\")\n    }\n}\n\nimpl From<KvWriteBatch> for backend::BackendKvWriteBatch {\n    fn from(batch: KvWriteBatch) -> Self {\n        Self {\n            groups: batch.groups.into_iter().map(Into::into).collect(),\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct KvWriteGroup {\n    namespace: String,\n    put_keys: backend::BytePageBuilder,\n 
   put_values: backend::BytePageBuilder,\n    deletes: backend::BytePageBuilder,\n}\n\nimpl From<KvWriteGroup> for backend::BackendKvWriteGroup {\n    fn from(group: KvWriteGroup) -> Self {\n        Self::from_pages(\n            group.namespace,\n            group.put_keys.finish(),\n            group.put_values.finish(),\n            group.deletes.finish(),\n        )\n    }\n}\n\nimpl KvWriteGroup {\n    pub(crate) fn new(namespace: impl Into<String>) -> Self {\n        Self {\n            namespace: namespace.into(),\n            put_keys: backend::BytePageBuilder::new(),\n            put_values: backend::BytePageBuilder::new(),\n            deletes: backend::BytePageBuilder::new(),\n        }\n    }\n\n    pub(crate) fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {\n        self.put_keys.push(key);\n        self.put_values.push(value);\n    }\n\n    pub(crate) fn delete(&mut self, key: impl AsRef<[u8]>) {\n        self.deletes.push(key);\n    }\n\n    pub(crate) fn put_count(&self) -> usize {\n        self.put_keys.len()\n    }\n\n    pub(crate) fn delete_count(&self) -> usize {\n        self.deletes.len()\n    }\n\n    pub(crate) fn put_key(&self, index: usize) -> Option<&[u8]> {\n        self.put_keys.get(index)\n    }\n\n    pub(crate) fn put_value(&self, index: usize) -> Option<&[u8]> {\n        self.put_values.get(index)\n    }\n\n    pub(crate) fn delete_key(&self, index: usize) -> Option<&[u8]> {\n        self.deletes.get(index)\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]\npub(crate) struct KvWriteStats {\n    pub(crate) puts: usize,\n    pub(crate) deletes: usize,\n    pub(crate) bytes_written: usize,\n}\n\nimpl From<backend::BackendKvWriteStats> for KvWriteStats {\n    fn from(stats: backend::BackendKvWriteStats) -> Self {\n        Self {\n            puts: stats.puts,\n            deletes: stats.deletes,\n            bytes_written: stats.bytes_written,\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/storage_bench.rs",
    "content": "use crate::binary_cas::{BinaryCasContext, BlobHash, BlobWrite};\nuse crate::catalog::CatalogContext;\nuse crate::commit_graph::CommitGraphChangeHistoryRequest;\nuse crate::commit_store::{\n    Change, ChangeScanRequest, CommitDraftRef, CommitStoreContext, MaterializedChange,\n};\nuse crate::entity_identity::EntityIdentity;\nuse crate::json_store::context::JsonStoreContext;\nuse crate::json_store::types::{\n    JsonLoadRequestRef, JsonProjectionLoadRequestRef, JsonProjectionPath, JsonReadScopeRef,\n    JsonRef, JsonWritePlacementRef, NormalizedJsonRef,\n};\nuse crate::live_state::LiveStateContext;\nuse crate::session::SessionMode;\nuse crate::storage::{\n    KvGetGroup, KvGetRequest, KvScanRange, KvScanRequest, KvWriteBatch, StorageContext,\n    StorageWriteSet,\n};\nuse crate::tracked_state::{\n    MaterializedTrackedStateRow, TrackedStateContext, TrackedStateDeltaRef,\n    TrackedStateDiffRequest, TrackedStateFilter, TrackedStateProjection, TrackedStateRowRequest,\n    TrackedStateScanRequest,\n};\nuse crate::transaction::open_transaction;\nuse crate::transaction::types::{\n    TransactionJson, TransactionWrite, TransactionWriteMode, TransactionWriteRow,\n};\nuse crate::untracked_state::{\n    MaterializedUntrackedStateRow, UntrackedStateContext, UntrackedStateFilter,\n    UntrackedStateProjection, UntrackedStateRowRequest, UntrackedStateScanRequest,\n};\nuse crate::version::VersionContext;\nuse crate::{Backend, LixError, NullableKeyFilter};\nuse std::collections::{BTreeMap, HashSet};\nuse std::sync::atomic::{AtomicUsize, Ordering};\nuse std::sync::Arc;\nuse std::sync::Mutex;\nuse std::sync::OnceLock;\nuse std::time::{Duration, Instant};\n\nfn prepare_json_ref(document: &[u8]) -> Result<JsonRef, LixError> {\n    let text = std::str::from_utf8(document).map_err(|error| {\n        LixError::new(\n            LixError::CODE_UNKNOWN,\n            format!(\"benchmark JSON document is invalid UTF-8: {error}\"),\n        )\n    })?;\n    
Ok(JsonRef::for_content(text.as_bytes()))\n}\n\n#[derive(Debug, Clone, Copy)]\npub struct StorageBenchConfig {\n    pub rows: usize,\n    pub blob_bytes: usize,\n    pub state_payload_bytes: usize,\n    pub key_pattern: StorageBenchKeyPattern,\n    pub selectivity: StorageBenchSelectivity,\n    pub update_fraction: StorageBenchUpdateFraction,\n}\n\nimpl StorageBenchConfig {\n    pub fn with_rows(mut self, rows: usize) -> Self {\n        self.rows = rows;\n        self\n    }\n\n    pub fn with_blob_bytes(mut self, blob_bytes: usize) -> Self {\n        self.blob_bytes = blob_bytes;\n        self\n    }\n\n    pub fn with_state_payload_bytes(mut self, state_payload_bytes: usize) -> Self {\n        self.state_payload_bytes = state_payload_bytes;\n        self\n    }\n\n    pub fn with_key_pattern(mut self, key_pattern: StorageBenchKeyPattern) -> Self {\n        self.key_pattern = key_pattern;\n        self\n    }\n\n    pub fn with_selectivity(mut self, selectivity: StorageBenchSelectivity) -> Self {\n        self.selectivity = selectivity;\n        self\n    }\n\n    pub fn with_update_fraction(mut self, update_fraction: StorageBenchUpdateFraction) -> Self {\n        self.update_fraction = update_fraction;\n        self\n    }\n}\n\n#[derive(Debug, Clone, Copy)]\npub enum StorageBenchKeyPattern {\n    Sequential,\n    Random,\n}\n\n#[derive(Debug, Clone, Copy)]\npub enum StorageBenchSelectivity {\n    Percent1,\n    Percent10,\n    Percent100,\n}\n\nimpl StorageBenchSelectivity {\n    fn matches(self, index: usize) -> bool {\n        match self {\n            Self::Percent1 => index % 100 == 0,\n            Self::Percent10 => index % 10 == 0,\n            Self::Percent100 => true,\n        }\n    }\n\n    fn expected_rows(self, rows: usize) -> usize {\n        (0..rows).filter(|index| self.matches(*index)).count()\n    }\n}\n\n#[derive(Debug, Clone, Copy)]\npub enum StorageBenchUpdateFraction {\n    Percent10,\n    Percent100,\n}\n\nimpl StorageBenchUpdateFraction 
{\n    fn rows(self, total_rows: usize) -> usize {\n        match self {\n            Self::Percent10 => total_rows.div_ceil(10),\n            Self::Percent100 => total_rows,\n        }\n    }\n}\n\n#[derive(Debug, Clone, Copy)]\npub struct StorageBenchReport {\n    pub measured_rows: usize,\n    pub verified_rows: usize,\n    pub elapsed: Duration,\n}\n\n#[derive(Debug, Clone, Default, PartialEq, Eq)]\npub struct TransactionBenchCounters {\n    pub rows_staged: usize,\n    pub untracked_rows: usize,\n    pub validation_version_count: usize,\n    pub schema_catalog_loads: usize,\n    pub json_store_stage_bytes_calls: usize,\n    pub unique_json_refs: usize,\n}\n\n#[derive(Debug, Clone, Default, PartialEq, Eq)]\npub struct TransactionAccountingReport {\n    pub counters: TransactionBenchCounters,\n    pub storage_write_batches: usize,\n    pub kv_puts_by_namespace: BTreeMap<String, usize>,\n    pub bytes_by_namespace: BTreeMap<String, usize>,\n}\n\npub struct StorageApiFixture {\n    storage: StorageContext,\n    rows: usize,\n}\n\npub struct TransactionBenchFixture {\n    storage: StorageContext,\n    live_state: Arc<LiveStateContext>,\n    tracked_state: Arc<TrackedStateContext>,\n    binary_cas: Arc<BinaryCasContext>,\n    commit_store: Arc<CommitStoreContext>,\n    version_ctx: Arc<VersionContext>,\n    catalog_context: Arc<CatalogContext>,\n    rows: Vec<TransactionWriteRow>,\n}\n\npub struct TransactionCommitOnlyFixture {\n    runtime_functions: crate::functions::FunctionContext,\n    transaction: crate::transaction::Transaction,\n    rows: usize,\n}\n\nstatic TRANSACTION_ROWS_STAGED: AtomicUsize = AtomicUsize::new(0);\nstatic TRANSACTION_UNTRACKED_ROWS: AtomicUsize = AtomicUsize::new(0);\nstatic TRANSACTION_VALIDATION_VERSION_COUNT: AtomicUsize = AtomicUsize::new(0);\nstatic TRANSACTION_SCHEMA_CATALOG_LOADS: AtomicUsize = AtomicUsize::new(0);\nstatic JSON_STORE_STAGE_BYTES_CALLS: AtomicUsize = AtomicUsize::new(0);\nstatic JSON_STORE_UNIQUE_REFS: 
OnceLock<Mutex<HashSet<[u8; 32]>>> = OnceLock::new();\n\nconst STORAGE_API_NAMESPACE: &str = \"bench.storage_api\";\nconst STORAGE_API_ALT_NAMESPACE: &str = \"bench.storage_api.alt\";\nconst TRANSACTION_BENCH_SCHEMA_KEY: &str = \"bench_transaction_entity\";\n\npub fn reset_transaction_bench_counters() {\n    TRANSACTION_ROWS_STAGED.store(0, Ordering::Relaxed);\n    TRANSACTION_UNTRACKED_ROWS.store(0, Ordering::Relaxed);\n    TRANSACTION_VALIDATION_VERSION_COUNT.store(0, Ordering::Relaxed);\n    TRANSACTION_SCHEMA_CATALOG_LOADS.store(0, Ordering::Relaxed);\n    JSON_STORE_STAGE_BYTES_CALLS.store(0, Ordering::Relaxed);\n    json_store_unique_refs()\n        .lock()\n        .expect(\"json store unique ref counter mutex should lock\")\n        .clear();\n}\n\npub fn transaction_bench_counters() -> TransactionBenchCounters {\n    TransactionBenchCounters {\n        rows_staged: TRANSACTION_ROWS_STAGED.load(Ordering::Relaxed),\n        untracked_rows: TRANSACTION_UNTRACKED_ROWS.load(Ordering::Relaxed),\n        validation_version_count: TRANSACTION_VALIDATION_VERSION_COUNT.load(Ordering::Relaxed),\n        schema_catalog_loads: TRANSACTION_SCHEMA_CATALOG_LOADS.load(Ordering::Relaxed),\n        json_store_stage_bytes_calls: JSON_STORE_STAGE_BYTES_CALLS.load(Ordering::Relaxed),\n        unique_json_refs: json_store_unique_refs()\n            .lock()\n            .expect(\"json store unique ref counter mutex should lock\")\n            .len(),\n    }\n}\n\npub(crate) fn record_transaction_rows_staged(rows: usize) {\n    TRANSACTION_ROWS_STAGED.fetch_add(rows, Ordering::Relaxed);\n}\n\npub(crate) fn record_transaction_untracked_rows(rows: usize) {\n    TRANSACTION_UNTRACKED_ROWS.fetch_add(rows, Ordering::Relaxed);\n}\n\npub(crate) fn record_transaction_validation_version() {\n    TRANSACTION_VALIDATION_VERSION_COUNT.fetch_add(1, Ordering::Relaxed);\n}\n\npub(crate) fn record_transaction_schema_catalog_load() {\n    TRANSACTION_SCHEMA_CATALOG_LOADS.fetch_add(1, 
Ordering::Relaxed);\n}\n\npub(crate) fn record_json_store_stage_bytes(hash: [u8; 32]) {\n    JSON_STORE_STAGE_BYTES_CALLS.fetch_add(1, Ordering::Relaxed);\n    json_store_unique_refs()\n        .lock()\n        .expect(\"json store unique ref counter mutex should lock\")\n        .insert(hash);\n}\n\nfn json_store_unique_refs() -> &'static Mutex<HashSet<[u8; 32]>> {\n    JSON_STORE_UNIQUE_REFS.get_or_init(|| Mutex::new(HashSet::new()))\n}\n\npub async fn prepare_transaction_commit_empty(\n    backend: Arc<dyn Backend + Send + Sync>,\n) -> Result<TransactionBenchFixture, LixError> {\n    prepare_transaction_fixture(backend, Vec::new()).await\n}\n\npub async fn prepare_transaction_commit_schema_only(\n    backend: Arc<dyn Backend + Send + Sync>,\n) -> Result<TransactionBenchFixture, LixError> {\n    prepare_transaction_fixture(backend, vec![transaction_registered_schema_row()]).await\n}\n\npub async fn prepare_transaction_commit_entities_no_payload(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<TransactionBenchFixture, LixError> {\n    prepare_transaction_fixture(\n        backend,\n        transaction_entity_rows(TransactionEntityRows {\n            rows,\n            payload_bytes: 0,\n            payload_pattern: TransactionPayloadPattern::Unique,\n            metadata_pattern: TransactionPayloadPattern::None,\n            untracked: false,\n            key_prefix: \"entity-no-payload\",\n        }),\n    )\n    .await\n}\n\npub async fn prepare_transaction_commit_entities_payload_1k_unique(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<TransactionBenchFixture, LixError> {\n    prepare_transaction_payload_fixture(\n        backend,\n        rows,\n        1024,\n        TransactionPayloadPattern::Unique,\n        false,\n        \"entity-payload-1k-unique\",\n    )\n    .await\n}\n\npub async fn prepare_transaction_commit_entities_payload_1k_same(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: 
usize,\n) -> Result<TransactionBenchFixture, LixError> {\n    prepare_transaction_payload_fixture(\n        backend,\n        rows,\n        1024,\n        TransactionPayloadPattern::Same,\n        false,\n        \"entity-payload-1k-same\",\n    )\n    .await\n}\n\npub async fn prepare_transaction_commit_entities_payload_1k_half_duplicate(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<TransactionBenchFixture, LixError> {\n    prepare_transaction_payload_fixture(\n        backend,\n        rows,\n        1024,\n        TransactionPayloadPattern::HalfDuplicate,\n        false,\n        \"entity-payload-1k-half-duplicate\",\n    )\n    .await\n}\n\npub async fn prepare_transaction_commit_entities_metadata_1k_same(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<TransactionBenchFixture, LixError> {\n    prepare_transaction_fixture(\n        backend,\n        transaction_entity_rows(TransactionEntityRows {\n            rows,\n            payload_bytes: 0,\n            payload_pattern: TransactionPayloadPattern::Unique,\n            metadata_pattern: TransactionPayloadPattern::Same,\n            untracked: false,\n            key_prefix: \"entity-metadata-1k-same\",\n        }),\n    )\n    .await\n}\n\npub async fn prepare_transaction_commit_entities_payload_16k_unique(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<TransactionBenchFixture, LixError> {\n    prepare_transaction_payload_fixture(\n        backend,\n        rows,\n        16 * 1024,\n        TransactionPayloadPattern::Unique,\n        false,\n        \"entity-payload-16k-unique\",\n    )\n    .await\n}\n\npub async fn prepare_transaction_commit_untracked_payload_1k_same(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<TransactionBenchFixture, LixError> {\n    prepare_transaction_payload_fixture(\n        backend,\n        rows,\n        1024,\n        TransactionPayloadPattern::Same,\n     
   true,\n        \"untracked-payload-1k-same\",\n    )\n    .await\n}\n\npub async fn prepare_transaction_update_existing_payload_1k(\n    backend: Arc<dyn Backend + Send + Sync>,\n    root_rows: usize,\n    update_rows: usize,\n) -> Result<TransactionBenchFixture, LixError> {\n    let fixture = prepare_transaction_payload_fixture(\n        backend,\n        root_rows,\n        1024,\n        TransactionPayloadPattern::Unique,\n        false,\n        \"update-existing-root\",\n    )\n    .await?;\n    transaction_commit_prepared(&fixture).await?;\n    let rows = transaction_entity_rows(TransactionEntityRows {\n        rows: update_rows,\n        payload_bytes: 1024,\n        payload_pattern: TransactionPayloadPattern::Unique,\n        metadata_pattern: TransactionPayloadPattern::None,\n        untracked: false,\n        key_prefix: \"update-existing-root\",\n    });\n    Ok(TransactionBenchFixture { rows, ..fixture })\n}\n\npub async fn transaction_commit_prepared(\n    fixture: &TransactionBenchFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let opened = open_transaction(\n        &SessionMode::Pinned {\n            version_id: crate::GLOBAL_VERSION_ID.to_string(),\n        },\n        fixture.storage.clone(),\n        Arc::clone(&fixture.live_state),\n        Arc::clone(&fixture.tracked_state),\n        Arc::clone(&fixture.binary_cas),\n        Arc::clone(&fixture.commit_store),\n        Arc::clone(&fixture.version_ctx),\n        Arc::clone(&fixture.catalog_context),\n    )\n    .await?;\n    let mut transaction = opened.transaction;\n    let runtime_functions = opened.runtime_functions;\n    let started_at = Instant::now();\n    if !fixture.rows.is_empty() {\n        transaction\n            .stage_write(TransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: fixture.rows.clone(),\n            })\n            .await?;\n    }\n    transaction.commit(&runtime_functions).await?;\n    Ok(StorageBenchReport 
{\n        measured_rows: fixture.rows.len(),\n        verified_rows: fixture.rows.len(),\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn transaction_open_empty_prepared(\n    fixture: &TransactionBenchFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let started_at = Instant::now();\n    let opened = open_transaction(\n        &SessionMode::Pinned {\n            version_id: crate::GLOBAL_VERSION_ID.to_string(),\n        },\n        fixture.storage.clone(),\n        Arc::clone(&fixture.live_state),\n        Arc::clone(&fixture.tracked_state),\n        Arc::clone(&fixture.binary_cas),\n        Arc::clone(&fixture.commit_store),\n        Arc::clone(&fixture.version_ctx),\n        Arc::clone(&fixture.catalog_context),\n    )\n    .await?;\n    let elapsed = started_at.elapsed();\n    opened.transaction.rollback().await?;\n    Ok(StorageBenchReport {\n        measured_rows: 0,\n        verified_rows: 0,\n        elapsed,\n    })\n}\n\npub async fn transaction_stage_only_prepared(\n    fixture: &TransactionBenchFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let opened = open_transaction(\n        &SessionMode::Pinned {\n            version_id: crate::GLOBAL_VERSION_ID.to_string(),\n        },\n        fixture.storage.clone(),\n        Arc::clone(&fixture.live_state),\n        Arc::clone(&fixture.tracked_state),\n        Arc::clone(&fixture.binary_cas),\n        Arc::clone(&fixture.commit_store),\n        Arc::clone(&fixture.version_ctx),\n        Arc::clone(&fixture.catalog_context),\n    )\n    .await?;\n    let mut transaction = opened.transaction;\n    let started_at = Instant::now();\n    if !fixture.rows.is_empty() {\n        transaction\n            .stage_write(TransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: fixture.rows.clone(),\n            })\n            .await?;\n    }\n    let elapsed = started_at.elapsed();\n    transaction.rollback().await?;\n    
Ok(StorageBenchReport {\n        measured_rows: fixture.rows.len(),\n        verified_rows: fixture.rows.len(),\n        elapsed,\n    })\n}\n\npub async fn prepare_transaction_commit_only(\n    fixture: TransactionBenchFixture,\n) -> Result<TransactionCommitOnlyFixture, LixError> {\n    let opened = open_transaction(\n        &SessionMode::Pinned {\n            version_id: crate::GLOBAL_VERSION_ID.to_string(),\n        },\n        fixture.storage.clone(),\n        Arc::clone(&fixture.live_state),\n        Arc::clone(&fixture.tracked_state),\n        Arc::clone(&fixture.binary_cas),\n        Arc::clone(&fixture.commit_store),\n        Arc::clone(&fixture.version_ctx),\n        Arc::clone(&fixture.catalog_context),\n    )\n    .await?;\n    let mut transaction = opened.transaction;\n    let runtime_functions = opened.runtime_functions;\n    let rows = fixture.rows.len();\n    if !fixture.rows.is_empty() {\n        transaction\n            .stage_write(TransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: fixture.rows,\n            })\n            .await?;\n    }\n    Ok(TransactionCommitOnlyFixture {\n        runtime_functions,\n        transaction,\n        rows,\n    })\n}\n\npub async fn transaction_commit_only_prepared(\n    fixture: TransactionCommitOnlyFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let rows = fixture.rows;\n    let started_at = Instant::now();\n    fixture\n        .transaction\n        .commit(&fixture.runtime_functions)\n        .await?;\n    Ok(StorageBenchReport {\n        measured_rows: rows,\n        verified_rows: rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\nasync fn prepare_transaction_payload_fixture(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n    payload_bytes: usize,\n    payload_pattern: TransactionPayloadPattern,\n    untracked: bool,\n    key_prefix: &'static str,\n) -> Result<TransactionBenchFixture, LixError> {\n    
prepare_transaction_fixture(\n        backend,\n        transaction_entity_rows(TransactionEntityRows {\n            rows,\n            payload_bytes,\n            payload_pattern,\n            metadata_pattern: TransactionPayloadPattern::None,\n            untracked,\n            key_prefix,\n        }),\n    )\n    .await\n}\n\nasync fn prepare_transaction_fixture(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: Vec<TransactionWriteRow>,\n) -> Result<TransactionBenchFixture, LixError> {\n    let storage = StorageContext::new(backend);\n    let tracked_state = Arc::new(TrackedStateContext::new());\n    let untracked_state = Arc::new(UntrackedStateContext::new());\n    let commit_store = Arc::new(CommitStoreContext::new());\n    let live_state = Arc::new(LiveStateContext::new(\n        tracked_state.as_ref().clone(),\n        untracked_state.as_ref().clone(),\n        crate::commit_graph::CommitGraphContext::new(),\n    ));\n    let binary_cas = Arc::new(BinaryCasContext::new());\n    let version_ctx = Arc::new(VersionContext::new(untracked_state));\n    let catalog_context = Arc::new(CatalogContext::new());\n    seed_transaction_visible_schema_rows(storage.clone()).await?;\n    Ok(TransactionBenchFixture {\n        storage,\n        live_state,\n        tracked_state,\n        binary_cas,\n        commit_store,\n        version_ctx,\n        catalog_context,\n        rows,\n    })\n}\n\nasync fn seed_transaction_visible_schema_rows(storage: StorageContext) -> Result<(), LixError> {\n    let mut writes = StorageWriteSet::new();\n    let rows = crate::schema::seed_schema_definitions()\n        .into_iter()\n        .cloned()\n        .chain(std::iter::once(transaction_entity_schema_definition()))\n        .map(|schema| {\n            let key = crate::schema::schema_key_from_definition(&schema)\n                .expect(\"seed schema key should derive\");\n            let snapshot_content = serde_json::json!({ \"value\": schema }).to_string();\n            
Ok(crate::untracked_state::UntrackedStateRow {\n                entity_id: crate::schema::registered_schema_entity_id(&key.schema_key)\n                    .expect(\"registered schema identity should derive\"),\n                schema_key: \"lix_registered_schema\".to_string(),\n                file_id: None,\n                version_id: crate::GLOBAL_VERSION_ID.to_string(),\n                snapshot_content: Some(snapshot_content),\n                metadata: None,\n                created_at: \"1970-01-01T00:00:00.000Z\".to_string(),\n                updated_at: \"1970-01-01T00:00:00.000Z\".to_string(),\n                global: true,\n            })\n        })\n        .collect::<Result<Vec<_>, LixError>>()?;\n    let mut transaction = storage.begin_write_transaction().await?;\n    UntrackedStateContext::new()\n        .writer(&mut writes)\n        .stage_rows(rows.iter().map(|row| row.as_ref()))?;\n    writes.apply(&mut transaction.as_mut()).await?;\n    transaction.commit().await\n}\n\nfn transaction_entity_schema_definition() -> serde_json::Value {\n    serde_json::json!({\n        \"x-lix-key\": TRANSACTION_BENCH_SCHEMA_KEY,\n        \"type\": \"object\",\n        \"properties\": {\n            \"value\": {\n                \"anyOf\": [\n                    { \"type\": \"string\" },\n                    { \"type\": \"object\" },\n                    { \"type\": \"array\" },\n                    { \"type\": \"number\" },\n                    { \"type\": \"boolean\" },\n                    { \"type\": \"null\" }\n                ]\n            }\n        },\n        \"required\": [\"value\"],\n        \"additionalProperties\": false\n    })\n}\n\n#[derive(Debug, Clone, Copy)]\nenum TransactionPayloadPattern {\n    None,\n    Unique,\n    Same,\n    HalfDuplicate,\n}\n\nstruct TransactionEntityRows {\n    rows: usize,\n    payload_bytes: usize,\n    payload_pattern: TransactionPayloadPattern,\n    metadata_pattern: TransactionPayloadPattern,\n    untracked: 
bool,\n    key_prefix: &'static str,\n}\n\nfn transaction_entity_rows(config: TransactionEntityRows) -> Vec<TransactionWriteRow> {\n    (0..config.rows)\n        .map(|index| {\n            let key = format!(\"{}-{index:06}\", config.key_prefix);\n            let value_index = payload_pattern_index(config.payload_pattern, index);\n            let metadata_index = payload_pattern_index(config.metadata_pattern, index);\n            TransactionWriteRow {\n                entity_id: Some(EntityIdentity::single(key.clone())),\n                schema_key: TRANSACTION_BENCH_SCHEMA_KEY.to_string(),\n                file_id: None,\n                snapshot: Some(transaction_snapshot_json(\n                    &key,\n                    value_index,\n                    config.payload_bytes,\n                )),\n                metadata: transaction_metadata(config.metadata_pattern, metadata_index),\n                origin: None,\n                created_at: None,\n                updated_at: None,\n                global: true,\n                change_id: None,\n                commit_id: None,\n                untracked: config.untracked,\n                version_id: crate::GLOBAL_VERSION_ID.to_string(),\n            }\n        })\n        .collect()\n}\n\nfn transaction_registered_schema_row() -> TransactionWriteRow {\n    let schema = serde_json::json!({\n        \"x-lix-key\": \"bench_transaction_schema\",\n        \"x-lix-primary-key\": [\"/id\"],\n        \"type\": \"object\",\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"value\": { \"type\": \"string\" }\n        },\n        \"required\": [\"id\", \"value\"],\n        \"additionalProperties\": false\n    });\n    let key =\n        crate::schema::schema_key_from_definition(&schema).expect(\"seed schema key should derive\");\n    TransactionWriteRow {\n        entity_id: Some(\n            crate::schema::registered_schema_entity_id(&key.schema_key)\n                
.expect(\"registered schema identity should derive\"),\n        ),\n        schema_key: \"lix_registered_schema\".to_string(),\n        file_id: None,\n        snapshot: Some(TransactionJson::from_value_unchecked(\n            serde_json::json!({ \"value\": schema }),\n        )),\n        metadata: None,\n        origin: None,\n        created_at: None,\n        updated_at: None,\n        global: true,\n        change_id: None,\n        commit_id: None,\n        untracked: false,\n        version_id: crate::GLOBAL_VERSION_ID.to_string(),\n    }\n}\n\nfn transaction_snapshot_json(\n    _key: &str,\n    payload_index: usize,\n    target_bytes: usize,\n) -> TransactionJson {\n    let base_value = format!(\"/entities/{payload_index}/value\");\n    let value = if target_bytes == 0 {\n        base_value\n    } else {\n        let current = serde_json::json!({\n            \"value\": base_value,\n        })\n        .to_string()\n        .len();\n        let padding = target_bytes.saturating_sub(current);\n        format!(\"{base_value}:{}\", \"x\".repeat(padding))\n    };\n    let mut object = serde_json::Map::new();\n    object.insert(\"value\".to_string(), serde_json::Value::String(value));\n    TransactionJson::from_value_unchecked(serde_json::Value::Object(object))\n}\n\nfn transaction_metadata(\n    pattern: TransactionPayloadPattern,\n    metadata_index: usize,\n) -> Option<TransactionJson> {\n    match pattern {\n        TransactionPayloadPattern::None => None,\n        TransactionPayloadPattern::Unique\n        | TransactionPayloadPattern::Same\n        | TransactionPayloadPattern::HalfDuplicate => {\n            let mut object = serde_json::Map::new();\n            object.insert(\n                \"source\".to_string(),\n                serde_json::Value::String(\"transaction-bench\".to_string()),\n            );\n            object.insert(\n                \"metadata_index\".to_string(),\n                serde_json::Value::String(metadata_index.to_string()),\n 
           );\n            pad_json_object(&mut object, 1024);\n            Some(TransactionJson::from_value_unchecked(\n                serde_json::Value::Object(object),\n            ))\n        }\n    }\n}\n\nfn payload_pattern_index(pattern: TransactionPayloadPattern, index: usize) -> usize {\n    match pattern {\n        TransactionPayloadPattern::None | TransactionPayloadPattern::Unique => index,\n        TransactionPayloadPattern::Same => 0,\n        TransactionPayloadPattern::HalfDuplicate => index % 2,\n    }\n}\n\npub async fn storage_api_write_kv_batch_puts(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let storage = StorageContext::new(backend);\n    let mut transaction = storage.begin_write_transaction().await?;\n    let mut batch = KvWriteBatch::new();\n    for index in 0..rows {\n        batch.put(\n            STORAGE_API_NAMESPACE,\n            storage_api_key(index),\n            storage_api_value(index),\n        );\n    }\n    let started_at = Instant::now();\n    let stats = transaction.write_kv_batch(batch).await?;\n    transaction.commit().await?;\n    Ok(StorageBenchReport {\n        measured_rows: stats.puts,\n        verified_rows: rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_write_kv_batch_mixed_put_delete(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let fixture = prepare_storage_api_read(backend, rows).await?;\n    let mut transaction = fixture.storage.begin_write_transaction().await?;\n    let mut batch = KvWriteBatch::new();\n    for index in 0..rows {\n        if index % 2 == 0 {\n            batch.put(\n                STORAGE_API_NAMESPACE,\n                storage_api_key(index),\n                storage_api_updated_value(index),\n            );\n        } else {\n            batch.delete(STORAGE_API_NAMESPACE, storage_api_key(index));\n        }\n    }\n   
 let started_at = Instant::now();\n    let stats = transaction.write_kv_batch(batch).await?;\n    transaction.commit().await?;\n    Ok(StorageBenchReport {\n        measured_rows: stats.puts + stats.deletes,\n        verified_rows: rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_write_kv_batch_multi_namespace(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let storage = StorageContext::new(backend);\n    let mut transaction = storage.begin_write_transaction().await?;\n    let mut batch = KvWriteBatch::new();\n    for index in 0..rows {\n        let namespace = if index % 2 == 0 {\n            STORAGE_API_NAMESPACE\n        } else {\n            STORAGE_API_ALT_NAMESPACE\n        };\n        batch.put(namespace, storage_api_key(index), storage_api_value(index));\n    }\n    let started_at = Instant::now();\n    let stats = transaction.write_kv_batch(batch).await?;\n    transaction.commit().await?;\n    Ok(StorageBenchReport {\n        measured_rows: stats.puts,\n        verified_rows: rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_write_kv_batch_duplicate_keys(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let storage = StorageContext::new(backend);\n    let mut transaction = storage.begin_write_transaction().await?;\n    let mut batch = KvWriteBatch::new();\n    for index in 0..rows {\n        batch.put(\n            STORAGE_API_NAMESPACE,\n            storage_api_key(index % 100),\n            storage_api_value(index),\n        );\n    }\n    let started_at = Instant::now();\n    let stats = transaction.write_kv_batch(batch).await?;\n    transaction.commit().await?;\n    Ok(StorageBenchReport {\n        measured_rows: stats.puts,\n        verified_rows: rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn 
storage_api_write_kv_batch_value_size(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n    value_bytes: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let storage = StorageContext::new(backend);\n    let mut transaction = storage.begin_write_transaction().await?;\n    let mut batch = KvWriteBatch::new();\n    for index in 0..rows {\n        batch.put(\n            STORAGE_API_NAMESPACE,\n            storage_api_key(index),\n            storage_api_value_with_bytes(index, value_bytes),\n        );\n    }\n    let started_at = Instant::now();\n    let stats = transaction.write_kv_batch(batch).await?;\n    transaction.commit().await?;\n    Ok(StorageBenchReport {\n        measured_rows: stats.puts,\n        verified_rows: rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_write_and_commit(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let storage = StorageContext::new(backend);\n    let started_at = Instant::now();\n    let mut transaction = storage.begin_write_transaction().await?;\n    let mut batch = KvWriteBatch::new();\n    for index in 0..rows {\n        batch.put(\n            STORAGE_API_NAMESPACE,\n            storage_api_key(index),\n            storage_api_value(index),\n        );\n    }\n    let stats = transaction.write_kv_batch(batch).await?;\n    transaction.commit().await?;\n    Ok(StorageBenchReport {\n        measured_rows: stats.puts,\n        verified_rows: rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_rollback_after_write(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let storage = StorageContext::new(backend);\n    let started_at = Instant::now();\n    let mut transaction = storage.begin_write_transaction().await?;\n    let mut batch = KvWriteBatch::new();\n    for index in 0..rows {\n        batch.put(\n          
  STORAGE_API_NAMESPACE,\n            storage_api_key(index),\n            storage_api_value(index),\n        );\n    }\n    let stats = transaction.write_kv_batch(batch).await?;\n    transaction.rollback().await?;\n    Ok(StorageBenchReport {\n        measured_rows: stats.puts,\n        verified_rows: rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn prepare_storage_api_read(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<StorageApiFixture, LixError> {\n    let storage = StorageContext::new(backend);\n    let mut transaction = storage.begin_write_transaction().await?;\n    let mut batch = KvWriteBatch::new();\n    for index in 0..rows {\n        batch.put(\n            STORAGE_API_NAMESPACE,\n            storage_api_key(index),\n            storage_api_value(index),\n        );\n    }\n    transaction.write_kv_batch(batch).await?;\n    transaction.commit().await?;\n    Ok(StorageApiFixture { storage, rows })\n}\n\npub async fn storage_api_get_values_hits_prepared(\n    fixture: &StorageApiFixture,\n    reads: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let mut transaction = fixture.storage.begin_read_transaction().await?;\n    let keys = (0..reads)\n        .map(|index| storage_api_key(index % fixture.rows))\n        .collect::<Vec<_>>();\n    let started_at = Instant::now();\n    let result = transaction\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: STORAGE_API_NAMESPACE.to_string(),\n                keys,\n            }],\n        })\n        .await?;\n    transaction.rollback().await?;\n    let verified_rows = result.groups[0]\n        .values_iter()\n        .filter(|value| value.is_some())\n        .count();\n    Ok(StorageBenchReport {\n        measured_rows: reads,\n        verified_rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_exists_many_prepared(\n    fixture: &StorageApiFixture,\n    reads: 
usize,\n) -> Result<StorageBenchReport, LixError> {\n    let mut transaction = fixture.storage.begin_read_transaction().await?;\n    let keys = (0..reads)\n        .map(|index| storage_api_key(index % fixture.rows))\n        .collect::<Vec<_>>();\n    let started_at = Instant::now();\n    let result = transaction\n        .exists_many(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: STORAGE_API_NAMESPACE.to_string(),\n                keys,\n            }],\n        })\n        .await?;\n    transaction.rollback().await?;\n    let verified_rows = result.groups[0]\n        .exists\n        .iter()\n        .filter(|exists| **exists)\n        .count();\n    Ok(StorageBenchReport {\n        measured_rows: reads,\n        verified_rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_get_values_misses_prepared(\n    fixture: &StorageApiFixture,\n    reads: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let mut transaction = fixture.storage.begin_read_transaction().await?;\n    let keys = (0..reads)\n        .map(|index| storage_api_missing_key(index))\n        .collect::<Vec<_>>();\n    let started_at = Instant::now();\n    let result = transaction\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: STORAGE_API_NAMESPACE.to_string(),\n                keys,\n            }],\n        })\n        .await?;\n    transaction.rollback().await?;\n    let verified_rows = result.groups[0]\n        .values_iter()\n        .filter(|value| value.is_none())\n        .count();\n    Ok(StorageBenchReport {\n        measured_rows: reads,\n        verified_rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_get_values_mixed_hit_miss_prepared(\n    fixture: &StorageApiFixture,\n    reads: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let mut transaction = fixture.storage.begin_read_transaction().await?;\n    let keys = 
(0..reads)\n        .map(|index| {\n            if index % 2 == 0 {\n                storage_api_key(index % fixture.rows)\n            } else {\n                storage_api_missing_key(index)\n            }\n        })\n        .collect::<Vec<_>>();\n    let started_at = Instant::now();\n    let result = transaction\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: STORAGE_API_NAMESPACE.to_string(),\n                keys,\n            }],\n        })\n        .await?;\n    transaction.rollback().await?;\n    let verified_rows = result.groups[0]\n        .values_iter()\n        .filter(|value| value.is_some())\n        .count();\n    Ok(StorageBenchReport {\n        measured_rows: reads,\n        verified_rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_get_values_multi_namespace(\n    backend: Arc<dyn Backend + Send + Sync>,\n    reads: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let storage = StorageContext::new(backend);\n    let mut transaction = storage.begin_write_transaction().await?;\n    let mut batch = KvWriteBatch::new();\n    for index in 0..reads {\n        let namespace = if index % 2 == 0 {\n            STORAGE_API_NAMESPACE\n        } else {\n            STORAGE_API_ALT_NAMESPACE\n        };\n        batch.put(namespace, storage_api_key(index), storage_api_value(index));\n    }\n    transaction.write_kv_batch(batch).await?;\n    transaction.commit().await?;\n\n    let mut transaction = storage.begin_read_transaction().await?;\n    let even_keys = (0..reads)\n        .step_by(2)\n        .map(storage_api_key)\n        .collect::<Vec<_>>();\n    let odd_keys = (1..reads)\n        .step_by(2)\n        .map(storage_api_key)\n        .collect::<Vec<_>>();\n    let started_at = Instant::now();\n    let result = transaction\n        .get_values(KvGetRequest {\n            groups: vec![\n                KvGetGroup {\n                    namespace: 
STORAGE_API_NAMESPACE.to_string(),\n                    keys: even_keys,\n                },\n                KvGetGroup {\n                    namespace: STORAGE_API_ALT_NAMESPACE.to_string(),\n                    keys: odd_keys,\n                },\n            ],\n        })\n        .await?;\n    transaction.rollback().await?;\n    let verified_rows = result\n        .groups\n        .iter()\n        .map(|group| group.values_iter().filter(|value| value.is_some()).count())\n        .sum();\n    Ok(StorageBenchReport {\n        measured_rows: reads,\n        verified_rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_get_values_duplicate_keys_prepared(\n    fixture: &StorageApiFixture,\n    reads: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let mut transaction = fixture.storage.begin_read_transaction().await?;\n    let keys = (0..reads)\n        .map(|index| storage_api_key(index % 100))\n        .collect::<Vec<_>>();\n    let started_at = Instant::now();\n    let result = transaction\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: STORAGE_API_NAMESPACE.to_string(),\n                keys,\n            }],\n        })\n        .await?;\n    transaction.rollback().await?;\n    let verified_rows = result.groups[0]\n        .values_iter()\n        .filter(|value| value.is_some())\n        .count();\n    Ok(StorageBenchReport {\n        measured_rows: reads,\n        verified_rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_scan_keys_prefix_prepared(\n    fixture: &StorageApiFixture,\n    limit: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let mut transaction = fixture.storage.begin_read_transaction().await?;\n    let started_at = Instant::now();\n    let result = transaction\n        .scan_keys(KvScanRequest {\n            namespace: STORAGE_API_NAMESPACE.to_string(),\n            range: 
KvScanRange::prefix(b\"key/\".to_vec()),\n            after: None,\n            limit,\n        })\n        .await?;\n    transaction.rollback().await?;\n    Ok(StorageBenchReport {\n        measured_rows: result.keys.len(),\n        verified_rows: limit.min(fixture.rows),\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_scan_keys_after_pages_prepared(\n    fixture: &StorageApiFixture,\n    page_size: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let mut transaction = fixture.storage.begin_read_transaction().await?;\n    let started_at = Instant::now();\n    let mut after = None;\n    let mut measured_rows = 0usize;\n    loop {\n        let result = transaction\n            .scan_keys(KvScanRequest {\n                namespace: STORAGE_API_NAMESPACE.to_string(),\n                range: KvScanRange::prefix(b\"key/\".to_vec()),\n                after,\n                limit: page_size,\n            })\n            .await?;\n        if result.keys.is_empty() {\n            break;\n        }\n        measured_rows += result.keys.len();\n        let Some(resume_after) = result.resume_after else {\n            break;\n        };\n        after = Some(resume_after);\n    }\n    transaction.rollback().await?;\n    Ok(StorageBenchReport {\n        measured_rows,\n        verified_rows: fixture.rows,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_scan_keys_empty_range_prepared(\n    fixture: &StorageApiFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut transaction = fixture.storage.begin_read_transaction().await?;\n    let started_at = Instant::now();\n    let result = transaction\n        .scan_keys(KvScanRequest {\n            namespace: STORAGE_API_NAMESPACE.to_string(),\n            range: KvScanRange::prefix(b\"absent/\".to_vec()),\n            after: None,\n            limit: fixture.rows,\n        })\n        .await?;\n    transaction.rollback().await?;\n    Ok(StorageBenchReport 
{\n        measured_rows: result.keys.len(),\n        verified_rows: 0,\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn prepare_storage_api_selective_scan(\n    backend: Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n    selectivity: StorageBenchSelectivity,\n) -> Result<StorageApiFixture, LixError> {\n    let storage = StorageContext::new(backend);\n    let mut transaction = storage.begin_write_transaction().await?;\n    let mut batch = KvWriteBatch::new();\n    for index in 0..rows {\n        let key = if selectivity.matches(index) {\n            storage_api_selective_key(index)\n        } else {\n            storage_api_key(index)\n        };\n        batch.put(STORAGE_API_NAMESPACE, key, storage_api_value(index));\n    }\n    transaction.write_kv_batch(batch).await?;\n    transaction.commit().await?;\n    Ok(StorageApiFixture { storage, rows })\n}\n\npub async fn storage_api_scan_keys_selective_prefix_prepared(\n    fixture: &StorageApiFixture,\n    selectivity: StorageBenchSelectivity,\n) -> Result<StorageBenchReport, LixError> {\n    let mut transaction = fixture.storage.begin_read_transaction().await?;\n    let started_at = Instant::now();\n    let result = transaction\n        .scan_keys(KvScanRequest {\n            namespace: STORAGE_API_NAMESPACE.to_string(),\n            range: KvScanRange::prefix(b\"selective/\".to_vec()),\n            after: None,\n            limit: fixture.rows,\n        })\n        .await?;\n    transaction.rollback().await?;\n    Ok(StorageBenchReport {\n        measured_rows: result.keys.len(),\n        verified_rows: selectivity.expected_rows(fixture.rows),\n        elapsed: started_at.elapsed(),\n    })\n}\n\npub async fn storage_api_transaction_commit_empty(\n    backend: Arc<dyn Backend + Send + Sync>,\n) -> Result<StorageBenchReport, LixError> {\n    let storage = StorageContext::new(backend);\n    let started_at = Instant::now();\n    let transaction = storage.begin_write_transaction().await?;\n    
transaction.commit().await?;\n    Ok(StorageBenchReport {\n        measured_rows: 0,\n        verified_rows: 0,\n        elapsed: started_at.elapsed(),\n    })\n}\n\nfn storage_api_key(index: usize) -> Vec<u8> {\n    format!(\"key/{index:08}\").into_bytes()\n}\n\nfn storage_api_selective_key(index: usize) -> Vec<u8> {\n    format!(\"selective/{index:08}\").into_bytes()\n}\n\nfn storage_api_missing_key(index: usize) -> Vec<u8> {\n    format!(\"missing/{index:08}\").into_bytes()\n}\n\nfn storage_api_value(index: usize) -> Vec<u8> {\n    format!(\"value/{index:08}/{}\", \"x\".repeat(64)).into_bytes()\n}\n\nfn storage_api_value_with_bytes(index: usize, value_bytes: usize) -> Vec<u8> {\n    let prefix = format!(\"value/{index:08}/\");\n    if value_bytes <= prefix.len() {\n        return prefix.into_bytes();\n    }\n    let mut value = prefix.into_bytes();\n    value.extend(std::iter::repeat_n(b'x', value_bytes - value.len()));\n    value\n}\n\nfn storage_api_updated_value(index: usize) -> Vec<u8> {\n    format!(\"updated/{index:08}/{}\", \"y\".repeat(64)).into_bytes()\n}\n\npub struct TrackedStateWriteRootFixture {\n    context: TrackedStateContext,\n    rows: Vec<MaterializedTrackedStateRow>,\n}\n\npub struct TrackedStateReadFixture {\n    context: TrackedStateContext,\n    rows: usize,\n    commit_id: String,\n    key_pattern: StorageBenchKeyPattern,\n    selectivity: StorageBenchSelectivity,\n}\n\npub struct TrackedStateUpdateFixture {\n    context: TrackedStateContext,\n    rows: Vec<MaterializedTrackedStateRow>,\n}\n\npub struct TrackedStateDiffFixture {\n    context: TrackedStateContext,\n    left_commit_id: String,\n    right_commit_id: String,\n    expected_entries: usize,\n}\n\npub struct TrackedStateMaterializeFixture {\n    context: TrackedStateContext,\n    commit_id: String,\n    expected_rows: usize,\n}\n\n#[derive(Clone)]\npub struct JsonPointerStorageRow {\n    pub path: String,\n    pub value_json: String,\n    pub updated_value_json: String,\n}\n\npub 
struct JsonPointerTrackedStateReadFixture {\n    context: TrackedStateContext,\n    rows: Vec<JsonPointerStorageRow>,\n    commit_id: String,\n}\n\npub struct JsonPointerTrackedStateDiffFixture {\n    context: TrackedStateContext,\n    left_commit_id: String,\n    right_commit_id: String,\n    expected_entries: usize,\n}\n\npub struct UntrackedStateWriteFixture {\n    context: UntrackedStateContext,\n    rows: Vec<MaterializedUntrackedStateRow>,\n}\n\npub struct UntrackedStateReadFixture {\n    context: UntrackedStateContext,\n    rows: usize,\n    key_pattern: StorageBenchKeyPattern,\n    selectivity: StorageBenchSelectivity,\n}\n\npub struct ChangelogAppendFixture {\n    context: CommitStoreContext,\n    changes: Vec<MaterializedChange>,\n}\n\npub struct ChangelogReadFixture {\n    context: CommitStoreContext,\n    rows: usize,\n}\n\npub struct ChangelogCodecFixture {\n    changes: Vec<Change>,\n    encoded_changes: Vec<Vec<u8>>,\n}\n\npub struct CommitGraphReadFixture {\n    head_commit_id: String,\n    rows: usize,\n}\n\npub struct BinaryCasWriteFixture {\n    context: BinaryCasContext,\n    file_ids: Vec<String>,\n    payloads: Vec<Vec<u8>>,\n}\n\npub struct BinaryCasReadFixture {\n    context: BinaryCasContext,\n    rows: usize,\n    hashes: Vec<BlobHash>,\n}\n\n#[derive(Debug, Clone, Copy)]\npub enum JsonStorePayloadShape {\n    SmallRaw1k,\n    MediumStructured16k,\n    LargeStructured128k,\n    LargeArray128k,\n}\n\n#[derive(Debug, Clone, Copy)]\npub enum JsonStoreProjectionShape {\n    TopLevelTarget,\n    TopLevelTenProps,\n    NestedTarget,\n    ArrayItem999,\n    Status,\n}\n\npub struct JsonStoreWriteFixture {\n    context: JsonStoreContext,\n    documents: Vec<Vec<u8>>,\n}\n\npub struct JsonStoreReadFixture {\n    context: JsonStoreContext,\n    refs: Vec<JsonRef>,\n    paths: Vec<JsonProjectionPath>,\n}\n\npub async fn prepare_tracked_state_write_root(\n    config: StorageBenchConfig,\n) -> Result<TrackedStateWriteRootFixture, LixError> {\n    
Ok(TrackedStateWriteRootFixture {\n        context: TrackedStateContext::new(),\n        rows: tracked_rows(config, \"bench-tracked-commit\"),\n    })\n}\n\npub async fn tracked_state_write_root_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateWriteRootFixture,\n) -> Result<StorageBenchReport, LixError> {\n    write_tracked_root(\n        backend,\n        &fixture.context,\n        \"bench-tracked-commit\",\n        None,\n        &fixture.rows,\n    )\n    .await?;\n    Ok(report(\n        fixture.rows.len(),\n        fixture.rows.len(),\n        Duration::ZERO,\n    ))\n}\n\npub async fn prepare_tracked_state_read(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<TrackedStateReadFixture, LixError> {\n    let context = TrackedStateContext::new();\n    let rows = tracked_rows(config, \"bench-tracked-commit\");\n    write_tracked_root(backend, &context, \"bench-tracked-commit\", None, &rows).await?;\n    Ok(TrackedStateReadFixture {\n        context,\n        rows: config.rows,\n        commit_id: \"bench-tracked-commit\".to_string(),\n        key_pattern: config.key_pattern,\n        selectivity: config.selectivity,\n    })\n}\n\npub async fn prepare_tracked_state_read_file_selective(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<TrackedStateReadFixture, LixError> {\n    let context = TrackedStateContext::new();\n    let rows = tracked_rows_file_selective(config, \"bench-tracked-commit\");\n    write_tracked_root(backend, &context, \"bench-tracked-commit\", None, &rows).await?;\n    Ok(TrackedStateReadFixture {\n        context,\n        rows: config.rows,\n        commit_id: \"bench-tracked-commit\".to_string(),\n        key_pattern: config.key_pattern,\n        selectivity: config.selectivity,\n    })\n}\n\npub async fn prepare_tracked_state_read_after_update_rows(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: 
StorageBenchConfig,\n    updated_rows: usize,\n) -> Result<TrackedStateReadFixture, LixError> {\n    let fixture = prepare_tracked_state_update_rows(backend, config, updated_rows).await?;\n    tracked_state_update_existing_prepared(backend, &fixture).await?;\n    Ok(TrackedStateReadFixture {\n        context: fixture.context,\n        rows: config.rows,\n        commit_id: \"bench-tracked-child\".to_string(),\n        key_pattern: config.key_pattern,\n        selectivity: config.selectivity,\n    })\n}\n\npub async fn prepare_tracked_state_read_delta_chain(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n    delta_commits: usize,\n    updated_rows_per_commit: usize,\n) -> Result<TrackedStateReadFixture, LixError> {\n    let (context, final_commit_id) =\n        write_tracked_delta_chain(backend, config, delta_commits, updated_rows_per_commit).await?;\n    Ok(TrackedStateReadFixture {\n        context,\n        rows: config.rows,\n        commit_id: final_commit_id,\n        key_pattern: config.key_pattern,\n        selectivity: config.selectivity,\n    })\n}\n\npub async fn prepare_tracked_state_read_materialized_delta_chain(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n    delta_commits: usize,\n    updated_rows_per_commit: usize,\n) -> Result<TrackedStateReadFixture, LixError> {\n    let (context, final_commit_id) =\n        write_tracked_delta_chain(backend, config, delta_commits, updated_rows_per_commit).await?;\n    materialize_tracked_root(backend, &context, &final_commit_id).await?;\n    Ok(TrackedStateReadFixture {\n        context,\n        rows: config.rows,\n        commit_id: final_commit_id,\n        key_pattern: config.key_pattern,\n        selectivity: config.selectivity,\n    })\n}\n\npub async fn prepare_tracked_state_materialize_delta_chain(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n    delta_commits: usize,\n    updated_rows_per_commit: 
usize,\n) -> Result<TrackedStateMaterializeFixture, LixError> {\n    let (context, final_commit_id) =\n        write_tracked_delta_chain(backend, config, delta_commits, updated_rows_per_commit).await?;\n    Ok(TrackedStateMaterializeFixture {\n        context,\n        commit_id: final_commit_id,\n        expected_rows: config.rows,\n    })\n}\n\nfn tracked_point_hit_requests(\n    rows: usize,\n    key_pattern: StorageBenchKeyPattern,\n) -> Vec<TrackedStateRowRequest> {\n    (0..rows)\n        .map(|index| TrackedStateRowRequest {\n            schema_key: tracked_schema_key(index, StorageBenchSelectivity::Percent100),\n            entity_id: EntityIdentity::single(entity_id(\"tracked\", index, key_pattern)),\n            file_id: NullableKeyFilter::Value(\"bench.json\".to_string()),\n        })\n        .collect()\n}\n\nfn tracked_point_miss_requests(\n    rows: usize,\n    selectivity: StorageBenchSelectivity,\n) -> Vec<TrackedStateRowRequest> {\n    (0..rows)\n        .map(|index| TrackedStateRowRequest {\n            schema_key: tracked_schema_key(index, selectivity),\n            entity_id: EntityIdentity::single(format!(\"missing-{index}\")),\n            file_id: NullableKeyFilter::Value(\"bench.json\".to_string()),\n        })\n        .collect()\n}\n\nfn tracked_point_miss_requests_for_schema(\n    rows: usize,\n    schema_key: &str,\n) -> Vec<TrackedStateRowRequest> {\n    (0..rows)\n        .map(|index| TrackedStateRowRequest {\n            schema_key: schema_key.to_string(),\n            entity_id: EntityIdentity::single(format!(\"missing-{index}\")),\n            file_id: NullableKeyFilter::Value(\"bench.json\".to_string()),\n        })\n        .collect()\n}\n\npub async fn tracked_state_read_point_hit_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n   
 let requests = tracked_point_hit_requests(fixture.rows, fixture.key_pattern);\n    let verified_rows = reader\n        .load_rows_at_commit(&fixture.commit_id, &requests)\n        .await?\n        .into_iter()\n        .filter(Option::is_some)\n        .count();\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn tracked_state_read_point_hit_constant_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateReadFixture,\n    measured_reads: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let measured_rows = measured_reads.min(fixture.rows);\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let requests = tracked_point_hit_requests(measured_rows, fixture.key_pattern);\n    let verified_rows = reader\n        .load_rows_at_commit(&fixture.commit_id, &requests)\n        .await?\n        .into_iter()\n        .filter(Option::is_some)\n        .count();\n    Ok(report(measured_rows, verified_rows, Duration::ZERO))\n}\n\npub async fn tracked_state_read_point_miss_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let requests = tracked_point_miss_requests_for_schema(fixture.rows, TRACKED_MATCH_SCHEMA_KEY);\n    let misses = reader\n        .load_rows_at_commit(&fixture.commit_id, &requests)\n        .await?\n        .into_iter()\n        .filter(Option::is_none)\n        .count();\n    Ok(report(fixture.rows, misses, Duration::ZERO))\n}\n\npub async fn tracked_state_scan_all_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let verified_rows = scan_tracked(backend, &fixture.context, &fixture.commit_id)\n        .await?\n        .len();\n    
Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn tracked_state_scan_keys_only_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .scan_rows_at_commit(\n            &fixture.commit_id,\n            &TrackedStateScanRequest {\n                projection: TrackedStateProjection {\n                    columns: vec![\"entity_id\".to_string()],\n                },\n                ..Default::default()\n            },\n        )\n        .await?\n        .len();\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn tracked_state_scan_headers_only_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .scan_rows_at_commit(\n            &fixture.commit_id,\n            &TrackedStateScanRequest {\n                projection: TrackedStateProjection {\n                    columns: tracked_state_header_columns(),\n                },\n                ..Default::default()\n            },\n        )\n        .await?\n        .len();\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn tracked_state_scan_full_rows_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    tracked_state_scan_all_prepared(backend, fixture).await\n}\n\npub async fn tracked_state_scan_schema_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        
.context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .scan_rows_at_commit(\n            &fixture.commit_id,\n            &TrackedStateScanRequest {\n                filter: TrackedStateFilter {\n                    schema_keys: vec![tracked_schema_key(0, StorageBenchSelectivity::Percent100)],\n                    ..Default::default()\n                },\n                ..Default::default()\n            },\n        )\n        .await?\n        .len();\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn tracked_state_scan_schema_selective_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .scan_rows_at_commit(\n            &fixture.commit_id,\n            &TrackedStateScanRequest {\n                filter: TrackedStateFilter {\n                    schema_keys: vec![TRACKED_MATCH_SCHEMA_KEY.to_string()],\n                    ..Default::default()\n                },\n                ..Default::default()\n            },\n        )\n        .await?\n        .len();\n    Ok(report(\n        fixture.selectivity.expected_rows(fixture.rows),\n        verified_rows,\n        Duration::ZERO,\n    ))\n}\n\npub async fn tracked_state_scan_file_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .scan_rows_at_commit(\n            &fixture.commit_id,\n            &TrackedStateScanRequest {\n                filter: TrackedStateFilter {\n                    file_ids: vec![NullableKeyFilter::Value(\"bench.json\".to_string())],\n   
                 ..Default::default()\n                },\n                ..Default::default()\n            },\n        )\n        .await?\n        .len();\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn tracked_state_scan_file_selective_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .scan_rows_at_commit(\n            &fixture.commit_id,\n            &TrackedStateScanRequest {\n                filter: TrackedStateFilter {\n                    file_ids: vec![NullableKeyFilter::Value(\"bench-match.json\".to_string())],\n                    ..Default::default()\n                },\n                ..Default::default()\n            },\n        )\n        .await?\n        .len();\n    Ok(report(\n        fixture.selectivity.expected_rows(fixture.rows),\n        verified_rows,\n        Duration::ZERO,\n    ))\n}\n\npub async fn tracked_state_scan_file_header_selective_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .scan_rows_at_commit(\n            &fixture.commit_id,\n            &TrackedStateScanRequest {\n                filter: TrackedStateFilter {\n                    file_ids: vec![NullableKeyFilter::Value(\"bench-match.json\".to_string())],\n                    ..Default::default()\n                },\n                projection: TrackedStateProjection {\n                    columns: tracked_state_header_columns(),\n                },\n                ..Default::default()\n            },\n        )\n        .await?\n        .len();\n    Ok(report(\n     
   fixture.selectivity.expected_rows(fixture.rows),\n        verified_rows,\n        Duration::ZERO,\n    ))\n}\n\npub async fn prepare_tracked_state_update(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<TrackedStateUpdateFixture, LixError> {\n    prepare_tracked_state_update_rows(backend, config, config.update_fraction.rows(config.rows))\n        .await\n}\n\npub async fn prepare_tracked_state_update_rows(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n    updated_rows: usize,\n) -> Result<TrackedStateUpdateFixture, LixError> {\n    let context = TrackedStateContext::new();\n    let rows = tracked_rows(config, \"bench-tracked-parent\");\n    write_tracked_root(backend, &context, \"bench-tracked-parent\", None, &rows).await?;\n    let mut updated_rows = tracked_rows(\n        config.with_rows(updated_rows.min(config.rows)),\n        \"bench-tracked-child\",\n    );\n    for (index, row) in updated_rows.iter_mut().enumerate() {\n        row.snapshot_content = Some(updated_snapshot_content(index, config.state_payload_bytes));\n    }\n    Ok(TrackedStateUpdateFixture {\n        context,\n        rows: updated_rows,\n    })\n}\n\npub async fn prepare_tracked_state_partial_snapshot_update_rows(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n    updated_rows: usize,\n) -> Result<TrackedStateUpdateFixture, LixError> {\n    let context = TrackedStateContext::new();\n    let rows = tracked_rows(config, \"bench-tracked-parent\");\n    write_tracked_root(backend, &context, \"bench-tracked-parent\", None, &rows).await?;\n    let mut updated_rows = tracked_rows(\n        config.with_rows(updated_rows.min(config.rows)),\n        \"bench-tracked-child\",\n    );\n    for (index, row) in updated_rows.iter_mut().enumerate() {\n        row.snapshot_content = Some(partial_updated_snapshot_content(\n            index,\n            config.state_payload_bytes,\n        
));\n    }\n    Ok(TrackedStateUpdateFixture {\n        context,\n        rows: updated_rows,\n    })\n}\n\npub async fn prepare_tracked_state_append_child(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<TrackedStateUpdateFixture, LixError> {\n    prepare_tracked_state_append_child_rows(backend, config, config.rows).await\n}\n\npub async fn prepare_tracked_state_append_child_rows(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n    appended_rows: usize,\n) -> Result<TrackedStateUpdateFixture, LixError> {\n    let context = TrackedStateContext::new();\n    let rows = tracked_rows(config, \"bench-tracked-parent\");\n    write_tracked_root(backend, &context, \"bench-tracked-parent\", None, &rows).await?;\n    let mut appended_rows = tracked_rows(\n        config.with_rows(appended_rows.min(config.rows)),\n        \"bench-tracked-child\",\n    );\n    for (index, row) in appended_rows.iter_mut().enumerate() {\n        row.entity_id = EntityIdentity::single(entity_id(\"tracked-new\", index, config.key_pattern));\n        row.change_id = format!(\"tracked-new-change-{index}\");\n    }\n    Ok(TrackedStateUpdateFixture {\n        context,\n        rows: appended_rows,\n    })\n}\n\npub async fn prepare_tracked_state_tombstone_rows(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n    tombstone_rows: usize,\n) -> Result<TrackedStateUpdateFixture, LixError> {\n    let context = TrackedStateContext::new();\n    let rows = tracked_rows(config, \"bench-tracked-parent\");\n    write_tracked_root(backend, &context, \"bench-tracked-parent\", None, &rows).await?;\n    let mut tombstones = tracked_rows(\n        config.with_rows(tombstone_rows.min(config.rows)),\n        \"bench-tracked-child\",\n    );\n    for row in &mut tombstones {\n        row.snapshot_content = None;\n    }\n    Ok(TrackedStateUpdateFixture {\n        context,\n        rows: tombstones,\n    
})\n}\n\npub async fn tracked_state_update_existing_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateUpdateFixture,\n) -> Result<StorageBenchReport, LixError> {\n    write_tracked_root(\n        backend,\n        &fixture.context,\n        \"bench-tracked-child\",\n        Some(\"bench-tracked-parent\"),\n        &fixture.rows,\n    )\n    .await?;\n    Ok(report(\n        fixture.rows.len(),\n        fixture.rows.len(),\n        Duration::ZERO,\n    ))\n}\n\npub async fn prepare_tracked_state_diff_update_rows(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n    updated_rows: usize,\n) -> Result<TrackedStateDiffFixture, LixError> {\n    let fixture = prepare_tracked_state_update_rows(backend, config, updated_rows).await?;\n    tracked_state_update_existing_prepared(backend, &fixture).await?;\n    Ok(TrackedStateDiffFixture {\n        context: fixture.context,\n        left_commit_id: \"bench-tracked-parent\".to_string(),\n        right_commit_id: \"bench-tracked-child\".to_string(),\n        expected_entries: fixture.rows.len(),\n    })\n}\n\npub async fn prepare_tracked_state_diff_delta_chain(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n    delta_commits: usize,\n    updated_rows_per_commit: usize,\n) -> Result<TrackedStateDiffFixture, LixError> {\n    let (context, final_commit_id) =\n        write_tracked_delta_chain(backend, config, delta_commits, updated_rows_per_commit).await?;\n    Ok(TrackedStateDiffFixture {\n        context,\n        left_commit_id: \"bench-tracked-base\".to_string(),\n        right_commit_id: final_commit_id,\n        expected_entries: updated_rows_per_commit.min(config.rows),\n    })\n}\n\npub async fn prepare_tracked_state_diff_tombstone_rows(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n    tombstone_rows: usize,\n) -> Result<TrackedStateDiffFixture, LixError> {\n    let fixture = 
prepare_tracked_state_tombstone_rows(backend, config, tombstone_rows).await?;\n    tracked_state_update_existing_prepared(backend, &fixture).await?;\n    Ok(TrackedStateDiffFixture {\n        context: fixture.context,\n        left_commit_id: \"bench-tracked-parent\".to_string(),\n        right_commit_id: \"bench-tracked-child\".to_string(),\n        expected_entries: fixture.rows.len(),\n    })\n}\n\npub async fn prepare_tracked_state_diff_equal(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<TrackedStateDiffFixture, LixError> {\n    let context = TrackedStateContext::new();\n    let rows = tracked_rows(config, \"bench-tracked-parent\");\n    write_tracked_root(backend, &context, \"bench-tracked-parent\", None, &rows).await?;\n    Ok(TrackedStateDiffFixture {\n        context,\n        left_commit_id: \"bench-tracked-parent\".to_string(),\n        right_commit_id: \"bench-tracked-parent\".to_string(),\n        expected_entries: 0,\n    })\n}\n\npub async fn tracked_state_diff_commits_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateDiffFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let diff = reader\n        .diff_commits(\n            &fixture.left_commit_id,\n            &fixture.right_commit_id,\n            &TrackedStateDiffRequest::default(),\n        )\n        .await?;\n    Ok(report(\n        fixture.expected_entries,\n        diff.entries.len(),\n        Duration::ZERO,\n    ))\n}\n\npub async fn tracked_state_materialize_root_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &TrackedStateMaterializeFixture,\n) -> Result<StorageBenchReport, LixError> {\n    materialize_tracked_root(backend, &fixture.context, &fixture.commit_id).await?;\n    Ok(report(\n        fixture.expected_rows,\n        fixture.expected_rows,\n        
Duration::ZERO,\n    ))\n}\n\npub async fn prepare_json_pointer_tracked_state_write_root(\n    rows: &[JsonPointerStorageRow],\n) -> Result<TrackedStateWriteRootFixture, LixError> {\n    Ok(TrackedStateWriteRootFixture {\n        context: TrackedStateContext::new(),\n        rows: json_pointer_tracked_rows(rows, \"json-pointer-base\", false),\n    })\n}\n\npub async fn prepare_json_pointer_tracked_state_read(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    rows: &[JsonPointerStorageRow],\n) -> Result<JsonPointerTrackedStateReadFixture, LixError> {\n    let context = TrackedStateContext::new();\n    let materialized_rows = json_pointer_tracked_rows(rows, \"json-pointer-base\", false);\n    write_tracked_root(\n        backend,\n        &context,\n        \"json-pointer-base\",\n        None,\n        &materialized_rows,\n    )\n    .await?;\n    Ok(JsonPointerTrackedStateReadFixture {\n        context,\n        rows: rows.to_vec(),\n        commit_id: \"json-pointer-base\".to_string(),\n    })\n}\n\npub async fn prepare_json_pointer_tracked_state_diff_update_rows(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    rows: &[JsonPointerStorageRow],\n    updated_rows: usize,\n) -> Result<JsonPointerTrackedStateDiffFixture, LixError> {\n    let context = TrackedStateContext::new();\n    let base_rows = json_pointer_tracked_rows(rows, \"json-pointer-base\", false);\n    write_tracked_root(backend, &context, \"json-pointer-base\", None, &base_rows).await?;\n    let child_rows = json_pointer_tracked_rows(\n        &rows[..updated_rows.min(rows.len())],\n        \"json-pointer-child\",\n        true,\n    );\n    write_tracked_root(\n        backend,\n        &context,\n        \"json-pointer-child\",\n        Some(\"json-pointer-base\"),\n        &child_rows,\n    )\n    .await?;\n    Ok(JsonPointerTrackedStateDiffFixture {\n        context,\n        left_commit_id: \"json-pointer-base\".to_string(),\n        right_commit_id: \"json-pointer-child\".to_string(),\n       
 expected_entries: child_rows.len(),\n    })\n}\n\npub async fn json_pointer_tracked_state_get_many_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonPointerTrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let requests = fixture\n        .rows\n        .iter()\n        .map(|row| TrackedStateRowRequest {\n            schema_key: \"json_pointer\".to_string(),\n            entity_id: EntityIdentity::single(row.path.as_str()),\n            file_id: NullableKeyFilter::Null,\n        })\n        .collect::<Vec<_>>();\n    let verified_rows = reader\n        .load_rows_at_commit(&fixture.commit_id, &requests)\n        .await?\n        .into_iter()\n        .filter(Option::is_some)\n        .count();\n    Ok(report(fixture.rows.len(), verified_rows, Duration::ZERO))\n}\n\npub async fn json_pointer_tracked_state_get_many_missing_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonPointerTrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let requests = fixture\n        .rows\n        .iter()\n        .map(|row| TrackedStateRowRequest {\n            schema_key: \"json_pointer\".to_string(),\n            entity_id: EntityIdentity::single(format!(\"missing{}\", row.path)),\n            file_id: NullableKeyFilter::Null,\n        })\n        .collect::<Vec<_>>();\n    let verified_rows = reader\n        .load_rows_at_commit(&fixture.commit_id, &requests)\n        .await?\n        .into_iter()\n        .filter(Option::is_none)\n        .count();\n    Ok(report(fixture.rows.len(), verified_rows, Duration::ZERO))\n}\n\npub async fn json_pointer_tracked_state_scan_keys_only_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: 
&JsonPointerTrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    json_pointer_scan_with_projection(\n        backend,\n        fixture,\n        TrackedStateProjection {\n            columns: vec![\"entity_id\".to_string()],\n        },\n    )\n    .await\n}\n\npub async fn json_pointer_tracked_state_scan_headers_only_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonPointerTrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    json_pointer_scan_with_projection(\n        backend,\n        fixture,\n        TrackedStateProjection {\n            columns: tracked_state_header_columns(),\n        },\n    )\n    .await\n}\n\npub async fn json_pointer_tracked_state_scan_full_rows_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonPointerTrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    json_pointer_scan_with_projection(backend, fixture, TrackedStateProjection::default()).await\n}\n\npub async fn json_pointer_tracked_state_prefix_scan_schema_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonPointerTrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    json_pointer_scan_with_projection(backend, fixture, TrackedStateProjection::default()).await\n}\n\npub async fn json_pointer_tracked_state_prefix_scan_schema_file_null_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonPointerTrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    json_pointer_scan_with_projection(backend, fixture, TrackedStateProjection::default()).await\n}\n\nasync fn json_pointer_scan_with_projection(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonPointerTrackedStateReadFixture,\n    projection: TrackedStateProjection,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let 
verified_rows = reader\n        .scan_rows_at_commit(\n            &fixture.commit_id,\n            &TrackedStateScanRequest {\n                filter: TrackedStateFilter {\n                    schema_keys: vec![\"json_pointer\".to_string()],\n                    file_ids: vec![NullableKeyFilter::Null],\n                    ..Default::default()\n                },\n                projection,\n                ..Default::default()\n            },\n        )\n        .await?\n        .len();\n    Ok(report(fixture.rows.len(), verified_rows, Duration::ZERO))\n}\n\npub async fn prepare_json_pointer_tracked_state_update_rows(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    rows: &[JsonPointerStorageRow],\n    updated_rows: usize,\n) -> Result<TrackedStateUpdateFixture, LixError> {\n    let context = TrackedStateContext::new();\n    let base_rows = json_pointer_tracked_rows(rows, \"json-pointer-base\", false);\n    write_tracked_root(backend, &context, \"json-pointer-base\", None, &base_rows).await?;\n    let child_rows = json_pointer_tracked_rows(\n        &rows[..updated_rows.min(rows.len())],\n        \"json-pointer-child\",\n        true,\n    );\n    Ok(TrackedStateUpdateFixture {\n        context,\n        rows: child_rows,\n    })\n}\n\npub async fn prepare_json_pointer_tracked_state_tombstone_rows(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    rows: &[JsonPointerStorageRow],\n    tombstone_rows: usize,\n) -> Result<TrackedStateUpdateFixture, LixError> {\n    let context = TrackedStateContext::new();\n    let base_rows = json_pointer_tracked_rows(rows, \"json-pointer-base\", false);\n    write_tracked_root(backend, &context, \"json-pointer-base\", None, &base_rows).await?;\n    let mut child_rows = json_pointer_tracked_rows(\n        &rows[..tombstone_rows.min(rows.len())],\n        \"json-pointer-child\",\n        true,\n    );\n    for row in &mut child_rows {\n        row.snapshot_content = None;\n    }\n    Ok(TrackedStateUpdateFixture {\n        
context,\n        rows: child_rows,\n    })\n}\n\npub async fn prepare_json_pointer_tracked_state_diff_delta_chain(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    rows: &[JsonPointerStorageRow],\n    delta_commits: usize,\n    updated_rows_per_commit: usize,\n) -> Result<JsonPointerTrackedStateDiffFixture, LixError> {\n    let (context, final_commit_id) =\n        write_json_pointer_delta_chain(backend, rows, delta_commits, updated_rows_per_commit)\n            .await?;\n    Ok(JsonPointerTrackedStateDiffFixture {\n        context,\n        left_commit_id: \"json-pointer-base\".to_string(),\n        right_commit_id: final_commit_id,\n        expected_entries: updated_rows_per_commit.min(rows.len()),\n    })\n}\n\npub async fn prepare_json_pointer_tracked_state_materialize_delta_chain(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    rows: &[JsonPointerStorageRow],\n    delta_commits: usize,\n    updated_rows_per_commit: usize,\n) -> Result<TrackedStateMaterializeFixture, LixError> {\n    let (context, final_commit_id) =\n        write_json_pointer_delta_chain(backend, rows, delta_commits, updated_rows_per_commit)\n            .await?;\n    Ok(TrackedStateMaterializeFixture {\n        context,\n        commit_id: final_commit_id,\n        expected_rows: rows.len(),\n    })\n}\n\npub async fn json_pointer_tracked_state_changed_keys_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonPointerTrackedStateDiffFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let diff = reader\n        .diff_commits(\n            &fixture.left_commit_id,\n            &fixture.right_commit_id,\n            &TrackedStateDiffRequest::default(),\n        )\n        .await?;\n    Ok(report(\n        fixture.expected_entries,\n        diff.entries.len(),\n        Duration::ZERO,\n    ))\n}\n\npub async fn prepare_untracked_state_write_rows(\n    
config: StorageBenchConfig,\n) -> Result<UntrackedStateWriteFixture, LixError> {\n    Ok(UntrackedStateWriteFixture {\n        context: UntrackedStateContext::new(),\n        rows: untracked_rows(config),\n    })\n}\n\npub async fn untracked_state_write_rows_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &UntrackedStateWriteFixture,\n) -> Result<StorageBenchReport, LixError> {\n    write_untracked_rows(backend, &fixture.context, &fixture.rows).await?;\n    let verified_rows = scan_untracked(\n        backend,\n        &fixture.context,\n        UntrackedStateScanRequest::default(),\n    )\n    .await?\n    .len();\n    Ok(report(fixture.rows.len(), verified_rows, Duration::ZERO))\n}\n\npub async fn prepare_untracked_state_read(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<UntrackedStateReadFixture, LixError> {\n    let context = UntrackedStateContext::new();\n    let rows = untracked_rows(config);\n    write_untracked_rows(backend, &context, &rows).await?;\n    Ok(UntrackedStateReadFixture {\n        context,\n        rows: config.rows,\n        key_pattern: config.key_pattern,\n        selectivity: config.selectivity,\n    })\n}\n\npub async fn untracked_state_read_point_hit_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &UntrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut verified_rows = 0;\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    for index in 0..fixture.rows {\n        if reader\n            .load_row(&UntrackedStateRowRequest {\n                schema_key: untracked_schema_key(index, StorageBenchSelectivity::Percent100),\n                version_id: \"bench-version\".to_string(),\n                entity_id: EntityIdentity::single(entity_id(\n                    \"untracked\",\n                    index,\n                    fixture.key_pattern,\n                
)),\n                file_id: NullableKeyFilter::Value(\"bench.json\".to_string()),\n            })\n            .await?\n            .is_some()\n        {\n            verified_rows += 1;\n        }\n    }\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn untracked_state_read_point_hit_constant_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &UntrackedStateReadFixture,\n    measured_reads: usize,\n) -> Result<StorageBenchReport, LixError> {\n    let mut verified_rows = 0;\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    for index in 0..measured_reads.min(fixture.rows) {\n        if reader\n            .load_row(&UntrackedStateRowRequest {\n                schema_key: untracked_schema_key(index, StorageBenchSelectivity::Percent100),\n                version_id: \"bench-version\".to_string(),\n                entity_id: EntityIdentity::single(entity_id(\n                    \"untracked\",\n                    index,\n                    fixture.key_pattern,\n                )),\n                file_id: NullableKeyFilter::Value(\"bench.json\".to_string()),\n            })\n            .await?\n            .is_some()\n        {\n            verified_rows += 1;\n        }\n    }\n    Ok(report(\n        measured_reads.min(fixture.rows),\n        verified_rows,\n        Duration::ZERO,\n    ))\n}\n\npub async fn untracked_state_read_point_miss_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &UntrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut misses = 0;\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    for index in 0..fixture.rows {\n        if reader\n            .load_row(&UntrackedStateRowRequest {\n                schema_key: \"bench_untracked_entity\".to_string(),\n                version_id: \"bench-version\".to_string(),\n        
        entity_id: EntityIdentity::single(format!(\"missing-{index}\")),\n                file_id: NullableKeyFilter::Value(\"bench.json\".to_string()),\n            })\n            .await?\n            .is_none()\n        {\n            misses += 1;\n        }\n    }\n    Ok(report(fixture.rows, misses, Duration::ZERO))\n}\n\npub async fn untracked_state_scan_all_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &UntrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let verified_rows = scan_untracked(\n        backend,\n        &fixture.context,\n        UntrackedStateScanRequest::default(),\n    )\n    .await?\n    .len();\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn untracked_state_scan_keys_only_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &UntrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let verified_rows = scan_untracked(\n        backend,\n        &fixture.context,\n        UntrackedStateScanRequest {\n            projection: UntrackedStateProjection {\n                columns: vec![\"entity_id\".to_string()],\n            },\n            ..Default::default()\n        },\n    )\n    .await?\n    .len();\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn untracked_state_scan_headers_only_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &UntrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let verified_rows = scan_untracked(\n        backend,\n        &fixture.context,\n        UntrackedStateScanRequest {\n            projection: UntrackedStateProjection {\n                columns: untracked_state_header_columns(),\n            },\n            ..Default::default()\n        },\n    )\n    .await?\n    .len();\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn untracked_state_scan_full_rows_prepared(\n    backend: &Arc<dyn 
Backend + Send + Sync>,\n    fixture: &UntrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    untracked_state_scan_all_prepared(backend, fixture).await\n}\n\npub async fn untracked_state_scan_version_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &UntrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let verified_rows = scan_untracked(\n        backend,\n        &fixture.context,\n        UntrackedStateScanRequest {\n            filter: UntrackedStateFilter {\n                version_ids: vec![\"bench-version\".to_string()],\n                ..Default::default()\n            },\n            ..Default::default()\n        },\n    )\n    .await?\n    .len();\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn untracked_state_scan_schema_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &UntrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let verified_rows = scan_untracked(\n        backend,\n        &fixture.context,\n        UntrackedStateScanRequest {\n            filter: UntrackedStateFilter {\n                schema_keys: vec![untracked_schema_key(0, StorageBenchSelectivity::Percent100)],\n                ..Default::default()\n            },\n            ..Default::default()\n        },\n    )\n    .await?\n    .len();\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn untracked_state_scan_schema_selective_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &UntrackedStateReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let verified_rows = scan_untracked(\n        backend,\n        &fixture.context,\n        UntrackedStateScanRequest {\n            filter: UntrackedStateFilter {\n                schema_keys: vec![UNTRACKED_MATCH_SCHEMA_KEY.to_string()],\n                ..Default::default()\n            },\n            ..Default::default()\n        },\n    )\n    
.await?\n    .len();\n    Ok(report(\n        fixture.selectivity.expected_rows(fixture.rows),\n        verified_rows,\n        Duration::ZERO,\n    ))\n}\n\npub async fn prepare_untracked_state_overwrite(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<UntrackedStateWriteFixture, LixError> {\n    let context = UntrackedStateContext::new();\n    let rows = untracked_rows(config);\n    write_untracked_rows(backend, &context, &rows).await?;\n    let mut updated_rows =\n        untracked_rows(config.with_rows(config.update_fraction.rows(config.rows)));\n    for (index, row) in updated_rows.iter_mut().enumerate() {\n        row.snapshot_content = Some(updated_snapshot_content(index, config.state_payload_bytes));\n    }\n    Ok(UntrackedStateWriteFixture {\n        context,\n        rows: updated_rows,\n    })\n}\n\npub async fn prepare_untracked_state_insert_new_keys(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<UntrackedStateWriteFixture, LixError> {\n    let context = UntrackedStateContext::new();\n    let rows = untracked_rows(config);\n    write_untracked_rows(backend, &context, &rows).await?;\n    let mut new_rows = untracked_rows(config);\n    for (index, row) in new_rows.iter_mut().enumerate() {\n        row.entity_id =\n            EntityIdentity::single(entity_id(\"untracked-new\", index, config.key_pattern));\n    }\n    Ok(UntrackedStateWriteFixture {\n        context,\n        rows: new_rows,\n    })\n}\n\npub async fn untracked_state_overwrite_existing_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &UntrackedStateWriteFixture,\n) -> Result<StorageBenchReport, LixError> {\n    write_untracked_rows(backend, &fixture.context, &fixture.rows).await?;\n    let verified_rows = scan_untracked(\n        backend,\n        &fixture.context,\n        UntrackedStateScanRequest::default(),\n    )\n    .await?\n    .len();\n    
Ok(report(fixture.rows.len(), verified_rows, Duration::ZERO))\n}\n\npub async fn prepare_changelog_append_changes(\n    config: StorageBenchConfig,\n) -> Result<ChangelogAppendFixture, LixError> {\n    Ok(ChangelogAppendFixture {\n        context: CommitStoreContext::new(),\n        changes: changelog_materialized_changes(config),\n    })\n}\n\npub async fn prepare_changelog_append_tombstones(\n    config: StorageBenchConfig,\n) -> Result<ChangelogAppendFixture, LixError> {\n    Ok(ChangelogAppendFixture {\n        context: CommitStoreContext::new(),\n        changes: changelog_tombstone_changes(config),\n    })\n}\n\npub async fn prepare_changelog_append_metadata(\n    config: StorageBenchConfig,\n) -> Result<ChangelogAppendFixture, LixError> {\n    Ok(ChangelogAppendFixture {\n        context: CommitStoreContext::new(),\n        changes: changelog_metadata_changes(config),\n    })\n}\n\npub async fn prepare_changelog_append_shared_payload(\n    config: StorageBenchConfig,\n) -> Result<ChangelogAppendFixture, LixError> {\n    Ok(ChangelogAppendFixture {\n        context: CommitStoreContext::new(),\n        changes: changelog_shared_payload_changes(config),\n    })\n}\n\npub async fn prepare_changelog_append_shared_metadata(\n    config: StorageBenchConfig,\n) -> Result<ChangelogAppendFixture, LixError> {\n    Ok(ChangelogAppendFixture {\n        context: CommitStoreContext::new(),\n        changes: changelog_shared_metadata_changes(config),\n    })\n}\n\npub async fn prepare_changelog_append_shared_payload_and_metadata(\n    config: StorageBenchConfig,\n) -> Result<ChangelogAppendFixture, LixError> {\n    Ok(ChangelogAppendFixture {\n        context: CommitStoreContext::new(),\n        changes: changelog_shared_payload_and_metadata_changes(config),\n    })\n}\n\npub async fn prepare_changelog_append_composite_entity_ids(\n    config: StorageBenchConfig,\n) -> Result<ChangelogAppendFixture, LixError> {\n    Ok(ChangelogAppendFixture {\n        context: 
CommitStoreContext::new(),\n        changes: changelog_composite_entity_id_changes(config),\n    })\n}\n\npub async fn prepare_changelog_codec(\n    config: StorageBenchConfig,\n) -> Result<ChangelogCodecFixture, LixError> {\n    let changes = changelog_changes(config);\n    let encoded_changes = changes\n        .iter()\n        .map(|change| crate::commit_store::codec::encode_change_ref(change.as_ref()))\n        .collect::<Result<Vec<_>, _>>()?;\n    Ok(ChangelogCodecFixture {\n        changes,\n        encoded_changes,\n    })\n}\n\npub async fn changelog_append_changes_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &ChangelogAppendFixture,\n) -> Result<StorageBenchReport, LixError> {\n    append_changelog_changes(backend, &fixture.context, &fixture.changes).await?;\n    let reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .scan_changes(&ChangeScanRequest::default())\n        .await?\n        .len();\n    Ok(report(fixture.changes.len(), verified_rows, Duration::ZERO))\n}\n\npub async fn prepare_changelog_read(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<ChangelogReadFixture, LixError> {\n    let context = CommitStoreContext::new();\n    let changes = changelog_materialized_changes(config);\n    append_changelog_changes(backend, &context, &changes).await?;\n    Ok(ChangelogReadFixture {\n        context,\n        rows: config.rows,\n    })\n}\n\npub async fn prepare_changelog_read_with_selectivity(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<ChangelogReadFixture, LixError> {\n    let context = CommitStoreContext::new();\n    let changes = changelog_selective_changes(config);\n    append_changelog_changes(backend, &context, &changes).await?;\n    Ok(ChangelogReadFixture {\n        context,\n        rows: config.rows,\n    })\n}\n\npub async fn 
prepare_changelog_read_entity_history(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<ChangelogReadFixture, LixError> {\n    let context = CommitStoreContext::new();\n    let changes = changelog_entity_history_changes(config);\n    append_changelog_changes(backend, &context, &changes).await?;\n    Ok(ChangelogReadFixture {\n        context,\n        rows: config.rows,\n    })\n}\n\npub async fn changelog_encode_only_prepared(\n    fixture: &ChangelogCodecFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut verified_rows = 0;\n    let mut encoded_bytes = 0;\n    for change in &fixture.changes {\n        encoded_bytes += crate::commit_store::codec::encode_change_ref(change.as_ref())?.len();\n        verified_rows += 1;\n    }\n    // Consuming the byte total ties the report to the encode work; a zero-byte\n    // run bumps verified_rows past the expected count so verification fails.\n    Ok(report(\n        fixture.changes.len(),\n        verified_rows + usize::from(encoded_bytes == 0),\n        Duration::ZERO,\n    ))\n}\n\npub async fn changelog_decode_only_prepared(\n    fixture: &ChangelogCodecFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut verified_rows = 0;\n    let mut decoded_bytes = 0;\n    for bytes in &fixture.encoded_changes {\n        let change = crate::commit_store::codec::decode_change(bytes)?;\n        decoded_bytes += change.schema_key.len();\n        verified_rows += 1;\n    }\n    // Same sentinel as the encode benchmark: a zero-byte decode run is\n    // reported as a verification mismatch instead of silently passing.\n    Ok(report(\n        fixture.encoded_changes.len(),\n        verified_rows + usize::from(decoded_bytes == 0),\n        Duration::ZERO,\n    ))\n}\n\npub async fn changelog_load_changes_hit_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &ChangelogReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let change_ids = (0..fixture.rows)\n        .map(|index| format!(\"bench-change-{index}\"))\n        .collect::<Vec<_>>();\n    let verified_rows = reader\n        .load_changes(&change_ids)\n        .await?\n        
.into_iter()\n        .filter(Option::is_some)\n        .count();\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn changelog_load_changes_miss_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &ChangelogReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let change_ids = (0..fixture.rows)\n        .map(|index| format!(\"missing-change-{index}\"))\n        .collect::<Vec<_>>();\n    let misses = reader\n        .load_changes(&change_ids)\n        .await?\n        .into_iter()\n        .filter(Option::is_none)\n        .count();\n    Ok(report(fixture.rows, misses, Duration::ZERO))\n}\n\npub async fn changelog_scan_all_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &ChangelogReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .scan_changes(&ChangeScanRequest::default())\n        .await?\n        .len();\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn changelog_scan_full_changes_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &ChangelogReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    changelog_scan_all_prepared(backend, fixture).await\n}\n\npub async fn changelog_scan_limit_100_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &ChangelogReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let expected = fixture.rows.min(100);\n    let verified_rows = reader\n        .scan_changes(&ChangeScanRequest {\n            limit: Some(expected),\n        })\n        .await?\n        .len();\n    Ok(report(expected, verified_rows, 
Duration::ZERO))\n}\n\npub async fn changelog_scan_change_set_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &ChangelogReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    // Identical workload to the load-changes hit benchmark; delegate rather\n    // than duplicating the body (same pattern as changelog_scan_full_changes_prepared).\n    changelog_load_changes_hit_prepared(backend, fixture).await\n}\n\npub async fn changelog_scan_schema_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &ChangelogReadFixture,\n    selectivity: StorageBenchSelectivity,\n) -> Result<StorageBenchReport, LixError> {\n    let reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let changes = reader.scan_changes(&ChangeScanRequest::default()).await?;\n    let verified_rows = changes\n        .iter()\n        .filter(|change| change.record.schema_key == CHANGELOG_MATCH_SCHEMA_KEY)\n        .count();\n    Ok(report(\n        selectivity.expected_rows(fixture.rows),\n        verified_rows,\n        Duration::ZERO,\n    ))\n}\n\npub async fn changelog_scan_entity_history_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &ChangelogReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let changes = reader.scan_changes(&ChangeScanRequest::default()).await?;\n    let target = EntityIdentity::single(CHANGELOG_HISTORY_ENTITY_ID);\n    let verified_rows = changes\n        .iter()\n        .filter(|change| change.record.entity_id == target)\n        .count();\n    Ok(report(\n        fixture.rows.div_ceil(10),\n        verified_rows,\n        
Duration::ZERO,\n    ))\n}\n\npub async fn prepare_commit_graph_read(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<CommitGraphReadFixture, LixError> {\n    let changelog = CommitStoreContext::new();\n    let mut changes = changelog_materialized_changes(config);\n    let head_commit_id = \"bench-commit-head\".to_string();\n    changes.push(commit_graph_materialized_commit_change(\n        &head_commit_id,\n        config.rows,\n    ));\n    append_changelog_changes(backend, &changelog, &changes).await?;\n\n    Ok(CommitGraphReadFixture {\n        head_commit_id,\n        rows: config.rows,\n    })\n}\n\npub async fn commit_graph_change_history_from_commit_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &CommitGraphReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let graph = crate::commit_graph::CommitGraphContext::new();\n    let mut reader = graph.reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .change_history_from_commit(\n            &fixture.head_commit_id,\n            &CommitGraphChangeHistoryRequest::default(),\n        )\n        .await?\n        .len();\n    Ok(report(fixture.rows, verified_rows, Duration::ZERO))\n}\n\npub async fn prepare_binary_cas_write_blobs(\n    config: StorageBenchConfig,\n) -> Result<BinaryCasWriteFixture, LixError> {\n    Ok(BinaryCasWriteFixture {\n        context: BinaryCasContext::new(),\n        file_ids: binary_file_ids(config.rows),\n        payloads: binary_payloads(config.rows, config.blob_bytes),\n    })\n}\n\npub async fn prepare_binary_cas_write_duplicate_payload(\n    config: StorageBenchConfig,\n) -> Result<BinaryCasWriteFixture, LixError> {\n    let payload = binary_payload(0, config.blob_bytes);\n    Ok(BinaryCasWriteFixture {\n        context: BinaryCasContext::new(),\n        file_ids: binary_file_ids(config.rows),\n        payloads: (0..config.rows).map(|_| payload.clone()).collect(),\n   
 })\n}\n\npub async fn prepare_binary_cas_write_half_duplicate_payload(\n    config: StorageBenchConfig,\n) -> Result<BinaryCasWriteFixture, LixError> {\n    Ok(BinaryCasWriteFixture {\n        context: BinaryCasContext::new(),\n        file_ids: binary_file_ids(config.rows),\n        payloads: binary_half_duplicate_payloads(config.rows, config.blob_bytes),\n    })\n}\n\npub async fn binary_cas_write_blobs_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &BinaryCasWriteFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let writes = binary_blob_writes(&fixture.file_ids, &fixture.payloads);\n    write_binary_blob_writes(backend, &fixture.context, &writes).await?;\n    let verified_rows = count_binary_cas_manifests(backend).await?;\n    Ok(report(writes.len(), verified_rows, Duration::ZERO))\n}\n\npub async fn prepare_binary_cas_read(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<BinaryCasReadFixture, LixError> {\n    let context = BinaryCasContext::new();\n    let payloads = binary_payloads(config.rows, config.blob_bytes);\n    let file_ids = binary_file_ids(config.rows);\n    let writes = binary_blob_writes(&file_ids, &payloads);\n    write_binary_blob_writes(backend, &context, &writes).await?;\n    let hashes = payloads\n        .iter()\n        .map(|payload| BlobHash::from_content(payload))\n        .collect::<Vec<_>>();\n    Ok(BinaryCasReadFixture {\n        context,\n        rows: config.rows,\n        hashes,\n    })\n}\n\npub async fn binary_cas_read_blob_hit_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &BinaryCasReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .load_bytes_many(&fixture.hashes)\n        .await?\n        .into_vec()\n        .into_iter()\n        .filter(|row| row.is_some())\n        
.count();\n    Ok(report(fixture.hashes.len(), verified_rows, Duration::ZERO))\n}\n\npub async fn binary_cas_read_blob_miss_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &BinaryCasReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut misses = 0;\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    for index in 0..fixture.rows {\n        let missing_hash = BlobHash::from_hex(&format!(\"{index:064x}\"))?;\n        // The batch yields one Option per requested hash, so a miss is a\n        // present-but-None entry; `.get(0).is_none()` would only detect an\n        // empty batch and never count a miss.\n        if reader\n            .load_bytes_many(&[missing_hash])\n            .await?\n            .get(0)\n            .map_or(true, |row| row.is_none())\n        {\n            misses += 1;\n        }\n    }\n    Ok(report(fixture.rows, misses, Duration::ZERO))\n}\n\npub async fn prepare_json_store_write(\n    shape: JsonStorePayloadShape,\n    rows: usize,\n) -> Result<JsonStoreWriteFixture, LixError> {\n    Ok(JsonStoreWriteFixture {\n        context: JsonStoreContext::new(),\n        documents: json_documents(shape, rows),\n    })\n}\n\npub async fn prepare_json_store_write_dedupe(\n    shape: JsonStorePayloadShape,\n    rows: usize,\n) -> Result<JsonStoreWriteFixture, LixError> {\n    let document = json_document(shape, 0);\n    Ok(JsonStoreWriteFixture {\n        context: JsonStoreContext::new(),\n        documents: (0..rows).map(|_| document.clone()).collect(),\n    })\n}\n\npub async fn json_store_write_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonStoreWriteFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let storage = StorageContext::new(Arc::clone(backend));\n    let mut transaction = storage.begin_write_transaction().await?;\n    {\n        let mut writes = StorageWriteSet::new();\n        let mut writer = fixture.context.writer();\n        writer.stage_batch(\n            &mut writes,\n            JsonWritePlacementRef::OutOfBand,\n            fixture\n                .documents\n                .iter()\n                .map(|document| 
{\n                    std::str::from_utf8(document)\n                        .map(NormalizedJsonRef::new)\n                        .map_err(|error| {\n                            LixError::new(\n                                LixError::CODE_UNKNOWN,\n                                format!(\"benchmark JSON document is invalid UTF-8: {error}\"),\n                            )\n                        })\n                })\n                .collect::<Result<Vec<_>, _>>()?,\n        )?;\n        writes.apply(&mut transaction.as_mut()).await?;\n    }\n    transaction.commit().await?;\n    Ok(report(\n        fixture.documents.len(),\n        fixture.documents.len(),\n        Duration::ZERO,\n    ))\n}\n\npub async fn prepare_json_store_read(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    shape: JsonStorePayloadShape,\n    rows: usize,\n) -> Result<JsonStoreReadFixture, LixError> {\n    prepare_json_store_projection_read(\n        backend,\n        shape,\n        rows,\n        JsonStoreProjectionShape::TopLevelTarget,\n    )\n    .await\n}\n\npub async fn prepare_json_store_projection_read(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    shape: JsonStorePayloadShape,\n    rows: usize,\n    projection: JsonStoreProjectionShape,\n) -> Result<JsonStoreReadFixture, LixError> {\n    let context = JsonStoreContext::new();\n    let documents = json_documents(shape, rows);\n    let mut refs = Vec::with_capacity(documents.len());\n    let storage = StorageContext::new(Arc::clone(backend));\n    let mut transaction = storage.begin_write_transaction().await?;\n    {\n        let mut writes = StorageWriteSet::new();\n        let mut writer = context.writer();\n        for document in &documents {\n            refs.push(prepare_json_ref(document)?);\n        }\n        writer.stage_batch(\n            &mut writes,\n            JsonWritePlacementRef::OutOfBand,\n            documents\n                .iter()\n                .map(|document| {\n                    
std::str::from_utf8(document)\n                        .map(NormalizedJsonRef::new)\n                        .map_err(|error| {\n                            LixError::new(\n                                LixError::CODE_UNKNOWN,\n                                format!(\"benchmark JSON document is invalid UTF-8: {error}\"),\n                            )\n                        })\n                })\n                .collect::<Result<Vec<_>, _>>()?,\n        )?;\n        writes.apply(&mut transaction.as_mut()).await?;\n    }\n    transaction.commit().await?;\n    Ok(JsonStoreReadFixture {\n        context,\n        refs,\n        paths: json_projection_paths(projection),\n    })\n}\n\npub async fn json_store_read_bytes_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonStoreReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut verified_rows = 0;\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let batch = reader\n        .load_bytes_many(JsonLoadRequestRef {\n            refs: &fixture.refs,\n            scope: JsonReadScopeRef::OutOfBand,\n        })\n        .await?;\n    for value in batch.values() {\n        if value.is_some() {\n            verified_rows += 1;\n        }\n    }\n    Ok(report(fixture.refs.len(), verified_rows, Duration::ZERO))\n}\n\npub async fn json_store_read_value_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonStoreReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut verified_rows = 0;\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let batch = reader\n        .load_values_many(JsonLoadRequestRef {\n            refs: &fixture.refs,\n            scope: JsonReadScopeRef::OutOfBand,\n        })\n        .await?;\n    for value in batch.values() {\n        if value.is_some() {\n            verified_rows += 1;\n        }\n    }\n 
   Ok(report(fixture.refs.len(), verified_rows, Duration::ZERO))\n}\n\npub async fn json_store_read_projection_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonStoreReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    let mut verified_rows = 0;\n    let mut reader = fixture\n        .context\n        .reader(StorageContext::new(Arc::clone(backend)));\n    let batch = reader\n        .load_projections_many(JsonProjectionLoadRequestRef {\n            refs: &fixture.refs,\n            scope: JsonReadScopeRef::OutOfBand,\n            paths: &fixture.paths,\n        })\n        .await?;\n    for value in batch.values() {\n        if value.is_some() {\n            verified_rows += 1;\n        }\n    }\n    Ok(report(fixture.refs.len(), verified_rows, Duration::ZERO))\n}\n\npub async fn prepare_json_store_base_update_object(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<JsonStoreReadFixture, LixError> {\n    prepare_json_store_base_update(backend, JsonStorePayloadShape::LargeStructured128k, rows).await\n}\n\npub async fn prepare_json_store_base_update_array(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    rows: usize,\n) -> Result<JsonStoreReadFixture, LixError> {\n    prepare_json_store_base_update(backend, JsonStorePayloadShape::LargeArray128k, rows).await\n}\n\nasync fn prepare_json_store_base_update(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    shape: JsonStorePayloadShape,\n    rows: usize,\n) -> Result<JsonStoreReadFixture, LixError> {\n    let context = JsonStoreContext::new();\n    let documents = json_documents(shape, rows);\n    let mut refs = Vec::with_capacity(documents.len());\n    let storage = StorageContext::new(Arc::clone(backend));\n    let mut transaction = storage.begin_write_transaction().await?;\n    {\n        let mut writes = StorageWriteSet::new();\n        let mut writer = context.writer();\n        for document in &documents {\n            
refs.push(prepare_json_ref(document)?);\n        }\n        writer.stage_batch(\n            &mut writes,\n            JsonWritePlacementRef::OutOfBand,\n            documents\n                .iter()\n                .map(|document| {\n                    std::str::from_utf8(document)\n                        .map(NormalizedJsonRef::new)\n                        .map_err(|error| {\n                            LixError::new(\n                                LixError::CODE_UNKNOWN,\n                                format!(\"benchmark JSON document is invalid UTF-8: {error}\"),\n                            )\n                        })\n                })\n                .collect::<Result<Vec<_>, _>>()?,\n        )?;\n        writes.apply(&mut transaction.as_mut()).await?;\n    }\n    transaction.commit().await?;\n    Ok(JsonStoreReadFixture {\n        context,\n        refs,\n        paths: json_projection_paths(JsonStoreProjectionShape::TopLevelTarget),\n    })\n}\n\npub async fn json_store_write_against_base_object_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonStoreReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    json_store_write_against_base_prepared(\n        backend,\n        fixture,\n        JsonStorePayloadShape::LargeStructured128k,\n    )\n    .await\n}\n\npub async fn json_store_write_against_base_array_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonStoreReadFixture,\n) -> Result<StorageBenchReport, LixError> {\n    json_store_write_against_base_prepared(backend, fixture, JsonStorePayloadShape::LargeArray128k)\n        .await\n}\n\nasync fn json_store_write_against_base_prepared(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    fixture: &JsonStoreReadFixture,\n    shape: JsonStorePayloadShape,\n) -> Result<StorageBenchReport, LixError> {\n    let storage = StorageContext::new(Arc::clone(backend));\n    let mut transaction = storage.begin_write_transaction().await?;\n    {\n       
 let mut writes = StorageWriteSet::new();\n        let mut writer = fixture.context.writer();\n        let mut updated_documents = Vec::with_capacity(fixture.refs.len());\n        for (index, _json_ref) in fixture.refs.iter().enumerate() {\n            let updated = updated_json_document(shape, index);\n            prepare_json_ref(&updated)?;\n            updated_documents.push(updated);\n        }\n        writer.stage_batch(\n            &mut writes,\n            JsonWritePlacementRef::OutOfBand,\n            updated_documents\n                .iter()\n                .map(|document| {\n                    std::str::from_utf8(document)\n                        .map(NormalizedJsonRef::new)\n                        .map_err(|error| {\n                            LixError::new(\n                                LixError::CODE_UNKNOWN,\n                                format!(\"benchmark JSON document is invalid UTF-8: {error}\"),\n                            )\n                        })\n                })\n                .collect::<Result<Vec<_>, _>>()?,\n        )?;\n        writes.apply(&mut transaction.as_mut()).await?;\n    }\n    transaction.commit().await?;\n    Ok(report(\n        fixture.refs.len(),\n        fixture.refs.len(),\n        Duration::ZERO,\n    ))\n}\n\npub async fn tracked_state_write_root(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let rows = tracked_rows(config, \"bench-tracked-commit\");\n    let context = TrackedStateContext::new();\n    let started = Instant::now();\n    write_tracked_root(backend, &context, \"bench-tracked-commit\", None, &rows).await?;\n    let elapsed = started.elapsed();\n    let verified_rows = scan_tracked(backend, &context, \"bench-tracked-commit\")\n        .await?\n        .len();\n    Ok(report(rows.len(), verified_rows, elapsed))\n}\n\npub async fn tracked_state_read_point_hit(\n    backend: &Arc<dyn Backend + Send + 
Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = TrackedStateContext::new();\n    let rows = tracked_rows(config, \"bench-tracked-commit\");\n    write_tracked_root(backend, &context, \"bench-tracked-commit\", None, &rows).await?;\n\n    let started = Instant::now();\n    let mut reader = context.reader(StorageContext::new(Arc::clone(backend)));\n    let requests = tracked_point_hit_requests(config.rows, config.key_pattern);\n    let verified_rows = reader\n        .load_rows_at_commit(\"bench-tracked-commit\", &requests)\n        .await?\n        .into_iter()\n        .filter(Option::is_some)\n        .count();\n    Ok(report(config.rows, verified_rows, started.elapsed()))\n}\n\npub async fn tracked_state_read_point_miss(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = TrackedStateContext::new();\n    let rows = tracked_rows(config, \"bench-tracked-commit\");\n    write_tracked_root(backend, &context, \"bench-tracked-commit\", None, &rows).await?;\n\n    let started = Instant::now();\n    let mut reader = context.reader(StorageContext::new(Arc::clone(backend)));\n    let requests = tracked_point_miss_requests(config.rows, StorageBenchSelectivity::Percent100);\n    let misses = reader\n        .load_rows_at_commit(\"bench-tracked-commit\", &requests)\n        .await?\n        .into_iter()\n        .filter(Option::is_none)\n        .count();\n    Ok(report(config.rows, misses, started.elapsed()))\n}\n\npub async fn tracked_state_scan_all(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = TrackedStateContext::new();\n    let rows = tracked_rows(config, \"bench-tracked-commit\");\n    write_tracked_root(backend, &context, \"bench-tracked-commit\", None, &rows).await?;\n\n    let started = Instant::now();\n    let verified_rows 
= scan_tracked(backend, &context, \"bench-tracked-commit\")\n        .await?\n        .len();\n    Ok(report(config.rows, verified_rows, started.elapsed()))\n}\n\npub async fn tracked_state_scan_schema(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = TrackedStateContext::new();\n    let rows = tracked_rows(config, \"bench-tracked-commit\");\n    write_tracked_root(backend, &context, \"bench-tracked-commit\", None, &rows).await?;\n\n    let started = Instant::now();\n    let mut reader = context.reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .scan_rows_at_commit(\n            \"bench-tracked-commit\",\n            &TrackedStateScanRequest {\n                filter: TrackedStateFilter {\n                    schema_keys: vec![tracked_schema_key(0, StorageBenchSelectivity::Percent100)],\n                    ..Default::default()\n                },\n                ..Default::default()\n            },\n        )\n        .await?\n        .len();\n    Ok(report(config.rows, verified_rows, started.elapsed()))\n}\n\npub async fn tracked_state_scan_file(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = TrackedStateContext::new();\n    let rows = tracked_rows(config, \"bench-tracked-commit\");\n    write_tracked_root(backend, &context, \"bench-tracked-commit\", None, &rows).await?;\n\n    let started = Instant::now();\n    let mut reader = context.reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .scan_rows_at_commit(\n            \"bench-tracked-commit\",\n            &TrackedStateScanRequest {\n                filter: TrackedStateFilter {\n                    file_ids: vec![NullableKeyFilter::Value(\"bench.json\".to_string())],\n                    ..Default::default()\n                },\n             
   ..Default::default()\n            },\n        )\n        .await?\n        .len();\n    Ok(report(config.rows, verified_rows, started.elapsed()))\n}\n\npub async fn tracked_state_update_existing(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = TrackedStateContext::new();\n    let rows = tracked_rows(config, \"bench-tracked-parent\");\n    write_tracked_root(backend, &context, \"bench-tracked-parent\", None, &rows).await?;\n    let mut updated_rows = tracked_rows(config, \"bench-tracked-child\");\n    for (index, row) in updated_rows.iter_mut().enumerate() {\n        row.snapshot_content = Some(updated_snapshot_content(index, config.state_payload_bytes));\n    }\n\n    let started = Instant::now();\n    write_tracked_root(\n        backend,\n        &context,\n        \"bench-tracked-child\",\n        Some(\"bench-tracked-parent\"),\n        &updated_rows,\n    )\n    .await?;\n    let elapsed = started.elapsed();\n    let verified_rows = scan_tracked(backend, &context, \"bench-tracked-child\")\n        .await?\n        .len();\n    Ok(report(updated_rows.len(), verified_rows, elapsed))\n}\n\npub async fn untracked_state_write_rows(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let rows = untracked_rows(config);\n    let context = UntrackedStateContext::new();\n    let started = Instant::now();\n    write_untracked_rows(backend, &context, &rows).await?;\n    let elapsed = started.elapsed();\n    let verified_rows = scan_untracked(backend, &context, UntrackedStateScanRequest::default())\n        .await?\n        .len();\n    Ok(report(rows.len(), verified_rows, elapsed))\n}\n\npub async fn untracked_state_read_point_hit(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = 
UntrackedStateContext::new();\n    let rows = untracked_rows(config);\n    write_untracked_rows(backend, &context, &rows).await?;\n\n    let started = Instant::now();\n    let mut verified_rows = 0;\n    let mut reader = context.reader(StorageContext::new(Arc::clone(backend)));\n    for index in 0..config.rows {\n        if reader\n            .load_row(&UntrackedStateRowRequest {\n                schema_key: untracked_schema_key(index, StorageBenchSelectivity::Percent100),\n                version_id: \"bench-version\".to_string(),\n                entity_id: EntityIdentity::single(entity_id(\n                    \"untracked\",\n                    index,\n                    config.key_pattern,\n                )),\n                file_id: NullableKeyFilter::Value(\"bench.json\".to_string()),\n            })\n            .await?\n            .is_some()\n        {\n            verified_rows += 1;\n        }\n    }\n    Ok(report(config.rows, verified_rows, started.elapsed()))\n}\n\npub async fn untracked_state_read_point_miss(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = UntrackedStateContext::new();\n    let rows = untracked_rows(config);\n    write_untracked_rows(backend, &context, &rows).await?;\n\n    let started = Instant::now();\n    let mut misses = 0;\n    let mut reader = context.reader(StorageContext::new(Arc::clone(backend)));\n    for index in 0..config.rows {\n        if reader\n            .load_row(&UntrackedStateRowRequest {\n                schema_key: \"bench_untracked_entity\".to_string(),\n                version_id: \"bench-version\".to_string(),\n                entity_id: EntityIdentity::single(format!(\"missing-{index}\")),\n                file_id: NullableKeyFilter::Value(\"bench.json\".to_string()),\n            })\n            .await?\n            .is_none()\n        {\n            misses += 1;\n        }\n    }\n    
Ok(report(config.rows, misses, started.elapsed()))\n}\n\npub async fn untracked_state_scan_all(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = UntrackedStateContext::new();\n    let rows = untracked_rows(config);\n    write_untracked_rows(backend, &context, &rows).await?;\n\n    let started = Instant::now();\n    let verified_rows = scan_untracked(backend, &context, UntrackedStateScanRequest::default())\n        .await?\n        .len();\n    Ok(report(config.rows, verified_rows, started.elapsed()))\n}\n\npub async fn untracked_state_scan_version(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = UntrackedStateContext::new();\n    let rows = untracked_rows(config);\n    write_untracked_rows(backend, &context, &rows).await?;\n\n    let started = Instant::now();\n    let verified_rows = scan_untracked(\n        backend,\n        &context,\n        UntrackedStateScanRequest {\n            filter: UntrackedStateFilter {\n                version_ids: vec![\"bench-version\".to_string()],\n                ..Default::default()\n            },\n            ..Default::default()\n        },\n    )\n    .await?\n    .len();\n    Ok(report(config.rows, verified_rows, started.elapsed()))\n}\n\npub async fn untracked_state_scan_schema(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = UntrackedStateContext::new();\n    let rows = untracked_rows(config);\n    write_untracked_rows(backend, &context, &rows).await?;\n\n    let started = Instant::now();\n    let verified_rows = scan_untracked(\n        backend,\n        &context,\n        UntrackedStateScanRequest {\n            filter: UntrackedStateFilter {\n                schema_keys: vec![untracked_schema_key(0, 
StorageBenchSelectivity::Percent100)],\n                ..Default::default()\n            },\n            ..Default::default()\n        },\n    )\n    .await?\n    .len();\n    Ok(report(config.rows, verified_rows, started.elapsed()))\n}\n\npub async fn untracked_state_overwrite_existing(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = UntrackedStateContext::new();\n    let rows = untracked_rows(config);\n    write_untracked_rows(backend, &context, &rows).await?;\n    let mut updated_rows = untracked_rows(config);\n    for (index, row) in updated_rows.iter_mut().enumerate() {\n        row.snapshot_content = Some(updated_snapshot_content(index, config.state_payload_bytes));\n    }\n\n    let started = Instant::now();\n    write_untracked_rows(backend, &context, &updated_rows).await?;\n    let elapsed = started.elapsed();\n    let verified_rows = scan_untracked(backend, &context, UntrackedStateScanRequest::default())\n        .await?\n        .len();\n    Ok(report(updated_rows.len(), verified_rows, elapsed))\n}\n\npub async fn changelog_append_changes(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let changes = changelog_materialized_changes(config);\n    let context = CommitStoreContext::new();\n    let started = Instant::now();\n    append_changelog_changes(backend, &context, &changes).await?;\n    let elapsed = started.elapsed();\n    let reader = context.reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .scan_changes(&ChangeScanRequest::default())\n        .await?\n        .len();\n    Ok(report(changes.len(), verified_rows, elapsed))\n}\n\npub async fn changelog_load_changes_hit(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = 
CommitStoreContext::new();\n    let changes = changelog_materialized_changes(config);\n    append_changelog_changes(backend, &context, &changes).await?;\n    let reader = context.reader(StorageContext::new(Arc::clone(backend)));\n\n    let started = Instant::now();\n    let change_ids = (0..config.rows)\n        .map(|index| format!(\"bench-change-{index}\"))\n        .collect::<Vec<_>>();\n    let verified_rows = reader\n        .load_changes(&change_ids)\n        .await?\n        .into_iter()\n        .filter(Option::is_some)\n        .count();\n    Ok(report(config.rows, verified_rows, started.elapsed()))\n}\n\npub async fn changelog_load_changes_miss(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = CommitStoreContext::new();\n    let changes = changelog_materialized_changes(config);\n    append_changelog_changes(backend, &context, &changes).await?;\n    let reader = context.reader(StorageContext::new(Arc::clone(backend)));\n\n    let started = Instant::now();\n    let change_ids = (0..config.rows)\n        .map(|index| format!(\"missing-change-{index}\"))\n        .collect::<Vec<_>>();\n    let misses = reader\n        .load_changes(&change_ids)\n        .await?\n        .into_iter()\n        .filter(Option::is_none)\n        .count();\n    Ok(report(config.rows, misses, started.elapsed()))\n}\n\npub async fn changelog_scan_all(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = CommitStoreContext::new();\n    let changes = changelog_materialized_changes(config);\n    append_changelog_changes(backend, &context, &changes).await?;\n    let reader = context.reader(StorageContext::new(Arc::clone(backend)));\n\n    let started = Instant::now();\n    let verified_rows = reader\n        .scan_changes(&ChangeScanRequest::default())\n        .await?\n        .len();\n    
Ok(report(config.rows, verified_rows, started.elapsed()))\n}\n\npub async fn changelog_scan_limit_100(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = CommitStoreContext::new();\n    let changes = changelog_materialized_changes(config);\n    append_changelog_changes(backend, &context, &changes).await?;\n    let reader = context.reader(StorageContext::new(Arc::clone(backend)));\n    let expected = config.rows.min(100);\n\n    let started = Instant::now();\n    let verified_rows = reader\n        .scan_changes(&ChangeScanRequest {\n            limit: Some(expected),\n        })\n        .await?\n        .len();\n    Ok(report(expected, verified_rows, started.elapsed()))\n}\n\npub async fn binary_cas_write_blobs(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let payloads = binary_payloads(config.rows, config.blob_bytes);\n    let file_ids = binary_file_ids(config.rows);\n    let writes = binary_blob_writes(&file_ids, &payloads);\n    let context = BinaryCasContext::new();\n\n    let started = Instant::now();\n    write_binary_blob_writes(backend, &context, &writes).await?;\n    let elapsed = started.elapsed();\n    let verified_rows = count_binary_cas_manifests(backend).await?;\n    Ok(report(writes.len(), verified_rows, elapsed))\n}\n\npub async fn binary_cas_read_blob_hit(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = BinaryCasContext::new();\n    let payloads = binary_payloads(config.rows, config.blob_bytes);\n    let file_ids = binary_file_ids(config.rows);\n    let writes = binary_blob_writes(&file_ids, &payloads);\n    write_binary_blob_writes(backend, &context, &writes).await?;\n    let hashes = payloads\n        .iter()\n        .map(|payload| BlobHash::from_content(payload))\n        
.collect::<Vec<_>>();\n\n    let started = Instant::now();\n    let mut reader = context.reader(StorageContext::new(Arc::clone(backend)));\n    let verified_rows = reader\n        .load_bytes_many(&hashes)\n        .await?\n        .into_vec()\n        .into_iter()\n        .filter(|row| row.is_some())\n        .count();\n    Ok(report(hashes.len(), verified_rows, started.elapsed()))\n}\n\npub async fn binary_cas_read_blob_miss(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let context = BinaryCasContext::new();\n    let payloads = binary_payloads(config.rows, config.blob_bytes);\n    let file_ids = binary_file_ids(config.rows);\n    let writes = binary_blob_writes(&file_ids, &payloads);\n    write_binary_blob_writes(backend, &context, &writes).await?;\n\n    let started = Instant::now();\n    let mut misses = 0;\n    let mut reader = context.reader(StorageContext::new(Arc::clone(backend)));\n    for index in 0..config.rows {\n        let missing_hash = BlobHash::from_hex(&format!(\"{index:064x}\"))?;\n        // `load_bytes_many` returns one slot per requested hash, so a miss is\n        // a slot that resolved to `None`, not an empty result vector.\n        if reader\n            .load_bytes_many(&[missing_hash])\n            .await?\n            .into_vec()\n            .into_iter()\n            .all(|row| row.is_none())\n        {\n            misses += 1;\n        }\n    }\n    Ok(report(config.rows, misses, started.elapsed()))\n}\n\npub async fn binary_cas_write_duplicate_payload(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n) -> Result<StorageBenchReport, LixError> {\n    let payload = binary_payload(0, config.blob_bytes);\n    let payloads = (0..config.rows)\n        .map(|_| payload.clone())\n        .collect::<Vec<_>>();\n    let file_ids = binary_file_ids(config.rows);\n    let writes = binary_blob_writes(&file_ids, &payloads);\n    let context = BinaryCasContext::new();\n\n    let started = Instant::now();\n    write_binary_blob_writes(backend, &context, &writes).await?;\n    let elapsed = started.elapsed();\n    let 
verified_rows = count_binary_cas_manifests(backend).await?;\n    Ok(report(writes.len(), verified_rows, elapsed))\n}\n\nasync fn write_tracked_root(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    context: &TrackedStateContext,\n    commit_id: &str,\n    parent_commit_id: Option<&str>,\n    rows: &[MaterializedTrackedStateRow],\n) -> Result<(), LixError> {\n    let storage = StorageContext::new(Arc::clone(backend));\n    let mut transaction = storage.begin_write_transaction().await?;\n    let mut writes = StorageWriteSet::new();\n    let changes = rows\n        .iter()\n        .map(tracked_bench_change_from_materialized)\n        .collect::<Result<Vec<_>, _>>()?;\n    let payloads = tracked_bench_json_payloads(rows, &changes);\n    let json_report = JsonStoreContext::new().writer().stage_batch_report(\n        &mut writes,\n        JsonWritePlacementRef::CommitPack {\n            commit_id,\n            pack_id: 0,\n        },\n        payloads.iter().map(|(payload, json_ref)| match json_ref {\n            Some(json_ref) => NormalizedJsonRef::trusted_prehashed(payload.as_str(), *json_ref),\n            None => NormalizedJsonRef::new(payload.as_str()),\n        }),\n    )?;\n\n    let parent_ids = parent_commit_id\n        .map(|parent| vec![parent.to_string()])\n        .unwrap_or_default();\n    let commit_change_id = format!(\"{commit_id}:commit\");\n    let commit = CommitDraftRef {\n        id: commit_id,\n        change_id: &commit_change_id,\n        parent_ids: &parent_ids,\n        author_account_ids: &[],\n        created_at: rows\n            .first()\n            .map(|row| row.updated_at.as_str())\n            .unwrap_or(\"1970-01-01T00:00:00.000Z\"),\n    };\n    let commit_store = CommitStoreContext::new();\n    let authored_changes = changes.iter().map(Change::as_ref).collect::<Vec<_>>();\n    let staged = commit_store\n        .writer(&mut transaction.as_mut(), &mut writes)\n        .stage_tracked_commit_draft(commit, authored_changes.clone(), 
Vec::new())\n        .await?;\n    let mut deltas = Vec::with_capacity(changes.len());\n    deltas.extend(\n        authored_changes\n            .iter()\n            .zip(&staged.authored_locators)\n            .zip(rows)\n            .map(|((change, locator), row)| TrackedStateDeltaRef {\n                change: *change,\n                locator: locator.as_ref(),\n                created_at: row.created_at.as_str(),\n                updated_at: row.updated_at.as_str(),\n            }),\n    );\n    context\n        .writer(&mut transaction.as_mut(), &mut writes)\n        .stage_delta_with_json_pack_indexes(\n            commit_id,\n            parent_commit_id,\n            &deltas,\n            crate::tracked_state::DeltaJsonPackIndexesRef {\n                commit_id,\n                pack_id: 0,\n                indexes: &json_report.pack_indexes,\n            },\n        )\n        .await?;\n    writes.apply(&mut transaction.as_mut()).await?;\n    transaction.commit().await\n}\n\nasync fn materialize_tracked_root(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    context: &TrackedStateContext,\n    commit_id: &str,\n) -> Result<(), LixError> {\n    let storage = StorageContext::new(Arc::clone(backend));\n    let mut transaction = storage.begin_write_transaction().await?;\n    let mut writes = StorageWriteSet::new();\n    let commit_store = CommitStoreContext::new();\n    context\n        .materializer(&mut transaction.as_mut(), &mut writes, &commit_store)\n        .materialize_root_at(commit_id)\n        .await?;\n    writes.apply(&mut transaction.as_mut()).await?;\n    transaction.commit().await\n}\n\nasync fn write_tracked_delta_chain(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    config: StorageBenchConfig,\n    delta_commits: usize,\n    updated_rows_per_commit: usize,\n) -> Result<(TrackedStateContext, String), LixError> {\n    let context = TrackedStateContext::new();\n    let base_commit_id = \"bench-tracked-base\";\n    let rows = 
tracked_rows(config, base_commit_id);\n    write_tracked_root(backend, &context, base_commit_id, None, &rows).await?;\n\n    let mut parent_commit_id = base_commit_id.to_string();\n    for delta_index in 0..delta_commits {\n        let commit_id = format!(\"bench-tracked-delta-{delta_index}\");\n        let mut updated_rows = tracked_rows(\n            config.with_rows(updated_rows_per_commit.min(config.rows)),\n            &commit_id,\n        );\n        for (row_index, row) in updated_rows.iter_mut().enumerate() {\n            row.snapshot_content = Some(delta_chain_snapshot_content(\n                delta_index,\n                row_index,\n                config.state_payload_bytes,\n            ));\n            row.updated_at = timestamp(config.rows + delta_index * config.rows + row_index);\n        }\n        write_tracked_root(\n            backend,\n            &context,\n            &commit_id,\n            Some(parent_commit_id.as_str()),\n            &updated_rows,\n        )\n        .await?;\n        parent_commit_id = commit_id;\n    }\n\n    Ok((context, parent_commit_id))\n}\n\nfn tracked_bench_change_from_materialized(\n    row: &MaterializedTrackedStateRow,\n) -> Result<Change, LixError> {\n    Ok(Change {\n        id: row.change_id.clone(),\n        entity_id: row.entity_id.clone(),\n        schema_key: row.schema_key.clone(),\n        file_id: row.file_id.clone(),\n        snapshot_ref: row\n            .snapshot_content\n            .as_deref()\n            .map(|value| prepare_json_ref(value.as_bytes()))\n            .transpose()?,\n        metadata_ref: row\n            .metadata\n            .as_ref()\n            .map(|value| {\n                let serialized = crate::serialize_row_metadata(value);\n                prepare_json_ref(serialized.as_bytes())\n            })\n            .transpose()?,\n        created_at: row.created_at.clone(),\n    })\n}\n\nfn tracked_bench_json_payloads(\n    rows: &[MaterializedTrackedStateRow],\n    
changes: &[Change],\n) -> Vec<(String, Option<JsonRef>)> {\n    let mut payloads = Vec::new();\n    for (row, change) in rows.iter().zip(changes) {\n        if let Some(snapshot) = row.snapshot_content.as_deref() {\n            payloads.push((snapshot.to_string(), change.snapshot_ref));\n        }\n        if let Some(metadata) = row.metadata.as_ref() {\n            payloads.push((crate::serialize_row_metadata(metadata), change.metadata_ref));\n        }\n    }\n    payloads\n}\n\nasync fn scan_tracked(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    context: &TrackedStateContext,\n    commit_id: &str,\n) -> Result<Vec<MaterializedTrackedStateRow>, LixError> {\n    let mut reader = context.reader(StorageContext::new(Arc::clone(backend)));\n    reader\n        .scan_rows_at_commit(commit_id, &TrackedStateScanRequest::default())\n        .await\n}\n\nasync fn write_untracked_rows(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    context: &UntrackedStateContext,\n    rows: &[MaterializedUntrackedStateRow],\n) -> Result<(), LixError> {\n    let storage = StorageContext::new(Arc::clone(backend));\n    let mut transaction = storage.begin_write_transaction().await?;\n    {\n        let mut writes = StorageWriteSet::new();\n        let canonical_rows = rows\n            .iter()\n            .map(|row| crate::test_support::untracked_state_row_from_materialized(&mut writes, row))\n            .collect::<Result<Vec<_>, _>>()?;\n        let mut writer = context.writer(&mut writes);\n        writer.stage_rows(canonical_rows.iter().map(|row| row.as_ref()))?;\n        writes.apply(&mut transaction.as_mut()).await?;\n    }\n    transaction.commit().await\n}\n\nasync fn scan_untracked(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    context: &UntrackedStateContext,\n    request: UntrackedStateScanRequest,\n) -> Result<Vec<MaterializedUntrackedStateRow>, LixError> {\n    let mut reader = context.reader(StorageContext::new(Arc::clone(backend)));\n    
reader.scan_rows(&request).await\n}\n\nasync fn append_changelog_changes(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    context: &CommitStoreContext,\n    changes: &[MaterializedChange],\n) -> Result<(), LixError> {\n    let storage = StorageContext::new(Arc::clone(backend));\n    let mut transaction = storage.begin_write_transaction().await?;\n    {\n        let mut writes = StorageWriteSet::new();\n        let canonical_changes = changes\n            .iter()\n            .map(canonical_changelog_bench_change)\n            .collect::<Result<Vec<_>, _>>()?;\n        let payloads = changelog_bench_json_payloads(changes);\n        JsonStoreContext::new().writer().stage_batch(\n            &mut writes,\n            JsonWritePlacementRef::OutOfBand,\n            payloads\n                .iter()\n                .map(|payload| NormalizedJsonRef::new(payload.as_str())),\n        )?;\n        let parent_ids = Vec::new();\n        let author_account_ids = vec![\"bench-author\".to_string()];\n        {\n            let mut transaction_ref = transaction.as_mut();\n            let mut writer = context.writer(&mut transaction_ref, &mut writes);\n            writer\n                .stage_commit_draft(\n                    CommitDraftRef {\n                        id: \"bench-changelog-commit-0\",\n                        change_id: \"bench-changelog-header-change-0\",\n                        parent_ids: &parent_ids,\n                        author_account_ids: &author_account_ids,\n                        created_at: \"2024-01-01T00:00:00.000Z\",\n                    },\n                    canonical_changes\n                        .iter()\n                        .map(|change| change.as_ref())\n                        .collect(),\n                    Vec::new(),\n                )\n                .await?;\n        }\n        writes.apply(&mut transaction.as_mut()).await?;\n    }\n    transaction.commit().await\n}\n\nasync fn write_binary_blob_writes(\n    backend: 
&Arc<dyn Backend + Send + Sync>,\n    context: &BinaryCasContext,\n    writes: &[BlobWrite<'_>],\n) -> Result<(), LixError> {\n    let storage = StorageContext::new(Arc::clone(backend));\n    let mut transaction = storage.begin_write_transaction().await?;\n    {\n        let mut writeset = StorageWriteSet::new();\n        let mut writer = context.writer(&mut writeset);\n        writer.stage_many(writes)?;\n        writeset.apply(&mut transaction.as_mut()).await?;\n    }\n    transaction.commit().await\n}\n\nasync fn count_binary_cas_manifests(\n    backend: &Arc<dyn Backend + Send + Sync>,\n) -> Result<usize, LixError> {\n    let context = BinaryCasContext::new();\n    let mut reader = context.reader(StorageContext::new(Arc::clone(backend)));\n    reader.count_blob_manifests().await\n}\n\nfn report(measured_rows: usize, verified_rows: usize, elapsed: Duration) -> StorageBenchReport {\n    StorageBenchReport {\n        measured_rows,\n        verified_rows,\n        elapsed,\n    }\n}\n\nconst TRACKED_MATCH_SCHEMA_KEY: &str = \"bench_tracked_entity\";\nconst TRACKED_OTHER_SCHEMA_KEY: &str = \"bench_tracked_other_entity\";\nconst UNTRACKED_MATCH_SCHEMA_KEY: &str = \"bench_untracked_entity\";\nconst UNTRACKED_OTHER_SCHEMA_KEY: &str = \"bench_untracked_other_entity\";\nconst CHANGELOG_MATCH_SCHEMA_KEY: &str = \"bench_changelog_entity\";\nconst CHANGELOG_OTHER_SCHEMA_KEY: &str = \"bench_changelog_other_entity\";\nconst CHANGELOG_HISTORY_ENTITY_ID: &str = \"change-entity-history-target\";\n\nfn tracked_rows(config: StorageBenchConfig, commit_id: &str) -> Vec<MaterializedTrackedStateRow> {\n    (0..config.rows)\n        .map(|index| MaterializedTrackedStateRow {\n            entity_id: EntityIdentity::single(entity_id(\"tracked\", index, config.key_pattern)),\n            schema_key: tracked_schema_key(index, config.selectivity),\n            file_id: Some(\"bench.json\".to_string()),\n            snapshot_content: Some(snapshot_content(index, 
config.state_payload_bytes)),\n            metadata: None,\n            deleted: false,\n            created_at: timestamp(index),\n            updated_at: timestamp(index),\n            change_id: tracked_change_id(commit_id, index),\n            commit_id: commit_id.to_string(),\n        })\n        .collect()\n}\n\nfn json_pointer_tracked_rows(\n    rows: &[JsonPointerStorageRow],\n    commit_id: &str,\n    updated: bool,\n) -> Vec<MaterializedTrackedStateRow> {\n    rows.iter()\n        .enumerate()\n        .map(|(index, row)| {\n            let value_json = if updated {\n                row.updated_value_json.as_str()\n            } else {\n                row.value_json.as_str()\n            };\n            let value = serde_json::from_str::<serde_json::Value>(value_json)\n                .unwrap_or_else(|_| serde_json::Value::String(value_json.to_string()));\n            let snapshot = serde_json::json!({\n                \"path\": row.path,\n                \"value\": value,\n            })\n            .to_string();\n            MaterializedTrackedStateRow {\n                entity_id: EntityIdentity::single(row.path.as_str()),\n                schema_key: \"json_pointer\".to_string(),\n                file_id: None,\n                snapshot_content: Some(snapshot),\n                metadata: None,\n                deleted: false,\n                created_at: timestamp(index),\n                updated_at: timestamp(index),\n                change_id: tracked_change_id(commit_id, index),\n                commit_id: commit_id.to_string(),\n            }\n        })\n        .collect()\n}\n\nasync fn write_json_pointer_delta_chain(\n    backend: &Arc<dyn Backend + Send + Sync>,\n    rows: &[JsonPointerStorageRow],\n    delta_commits: usize,\n    updated_rows_per_commit: usize,\n) -> Result<(TrackedStateContext, String), LixError> {\n    let context = TrackedStateContext::new();\n    let base_commit_id = \"json-pointer-base\";\n    let base_rows = 
json_pointer_tracked_rows(rows, base_commit_id, false);\n    write_tracked_root(backend, &context, base_commit_id, None, &base_rows).await?;\n\n    let mut parent_commit_id = base_commit_id.to_string();\n    for delta_index in 0..delta_commits {\n        let commit_id = format!(\"json-pointer-delta-{delta_index}\");\n        let mut child_rows = json_pointer_tracked_rows(\n            &rows[..updated_rows_per_commit.min(rows.len())],\n            &commit_id,\n            true,\n        );\n        for row in &mut child_rows {\n            row.updated_at = timestamp(rows.len() + delta_index);\n        }\n        write_tracked_root(\n            backend,\n            &context,\n            &commit_id,\n            Some(parent_commit_id.as_str()),\n            &child_rows,\n        )\n        .await?;\n        parent_commit_id = commit_id;\n    }\n\n    Ok((context, parent_commit_id))\n}\n\nfn tracked_rows_file_selective(\n    config: StorageBenchConfig,\n    commit_id: &str,\n) -> Vec<MaterializedTrackedStateRow> {\n    (0..config.rows)\n        .map(|index| MaterializedTrackedStateRow {\n            entity_id: EntityIdentity::single(entity_id(\"tracked\", index, config.key_pattern)),\n            schema_key: TRACKED_MATCH_SCHEMA_KEY.to_string(),\n            file_id: Some(\n                if config.selectivity.matches(index) {\n                    \"bench-match.json\"\n                } else {\n                    \"bench-other.json\"\n                }\n                .to_string(),\n            ),\n            snapshot_content: Some(snapshot_content(index, config.state_payload_bytes)),\n            metadata: None,\n            deleted: false,\n            created_at: timestamp(index),\n            updated_at: timestamp(index),\n            change_id: tracked_change_id(commit_id, index),\n            commit_id: commit_id.to_string(),\n        })\n        .collect()\n}\n\nfn tracked_change_id(commit_id: &str, index: usize) -> String {\n    
format!(\"{commit_id}:tracked-change-{index}\")\n}\n\nfn untracked_rows(config: StorageBenchConfig) -> Vec<MaterializedUntrackedStateRow> {\n    (0..config.rows)\n        .map(|index| MaterializedUntrackedStateRow {\n            entity_id: EntityIdentity::single(entity_id(\"untracked\", index, config.key_pattern)),\n            schema_key: untracked_schema_key(index, config.selectivity),\n            file_id: Some(\"bench.json\".to_string()),\n            snapshot_content: Some(snapshot_content(index, config.state_payload_bytes)),\n            metadata: None,\n            deleted: false,\n            created_at: timestamp(index),\n            updated_at: timestamp(index),\n            global: false,\n            version_id: \"bench-version\".to_string(),\n        })\n        .collect()\n}\n\nfn changelog_changes(config: StorageBenchConfig) -> Vec<Change> {\n    changelog_materialized_changes(config)\n        .into_iter()\n        .map(changelog_bench_change_ref_only)\n        .collect()\n}\n\nfn changelog_materialized_changes(config: StorageBenchConfig) -> Vec<MaterializedChange> {\n    (0..config.rows)\n        .map(|index| MaterializedChange {\n            id: format!(\"bench-change-{index}\"),\n            entity_id: EntityIdentity::single(entity_id(\n                \"change-entity\",\n                index,\n                config.key_pattern,\n            )),\n            schema_key: \"bench_changelog_entity\".to_string(),\n            file_id: Some(\"bench.json\".to_string()),\n            snapshot_content: Some(snapshot_content(index, config.state_payload_bytes)),\n            metadata: None,\n            created_at: timestamp(index),\n        })\n        .collect()\n}\n\nfn commit_graph_materialized_commit_change(commit_id: &str, rows: usize) -> MaterializedChange {\n    let snapshot_content = serde_json::json!({\n        \"id\": commit_id,\n    })\n    .to_string();\n\n    MaterializedChange {\n        id: format!(\"bench-commit-change-{commit_id}\"),\n   
     entity_id: EntityIdentity::single(commit_id.to_string()),\n        schema_key: \"lix_commit\".to_string(),\n        file_id: None,\n        snapshot_content: Some(snapshot_content),\n        metadata: None,\n        created_at: timestamp(rows),\n    }\n}\n\nfn canonical_changelog_bench_change(change: &MaterializedChange) -> Result<Change, LixError> {\n    let snapshot_ref = change\n        .snapshot_content\n        .as_ref()\n        .map(|value| prepare_json_ref(value.as_bytes()))\n        .transpose()?;\n    let metadata_ref = change\n        .metadata\n        .as_ref()\n        .map(|value| prepare_json_ref(value.as_bytes()))\n        .transpose()?;\n    Ok(Change {\n        id: change.id.clone(),\n        entity_id: change.entity_id.clone(),\n        schema_key: change.schema_key.clone(),\n        file_id: change.file_id.clone(),\n        snapshot_ref,\n        metadata_ref,\n        created_at: change.created_at.clone(),\n    })\n}\n\nfn changelog_bench_json_payloads(changes: &[MaterializedChange]) -> Vec<String> {\n    changes\n        .iter()\n        .flat_map(|change| {\n            change\n                .snapshot_content\n                .iter()\n                .chain(change.metadata.iter())\n                .cloned()\n                .collect::<Vec<_>>()\n        })\n        .collect()\n}\n\nfn changelog_bench_change_ref_only(change: MaterializedChange) -> Change {\n    let snapshot_ref = change\n        .snapshot_content\n        .as_ref()\n        .map(|value| JsonRef::from_hash(blake3::hash(value.as_bytes())));\n    let metadata_ref = change\n        .metadata\n        .as_ref()\n        .map(|value| JsonRef::from_hash(blake3::hash(value.as_bytes())));\n    Change {\n        id: change.id,\n        entity_id: change.entity_id,\n        schema_key: change.schema_key,\n        file_id: change.file_id,\n        snapshot_ref,\n        metadata_ref,\n        created_at: change.created_at,\n    }\n}\n\nfn changelog_tombstone_changes(config: 
StorageBenchConfig) -> Vec<MaterializedChange> {\n    changelog_materialized_changes(config)\n        .into_iter()\n        .map(|mut change| {\n            change.snapshot_content = None;\n            change.metadata = None;\n            change\n        })\n        .collect()\n}\n\nfn changelog_metadata_changes(config: StorageBenchConfig) -> Vec<MaterializedChange> {\n    changelog_materialized_changes(config)\n        .into_iter()\n        .enumerate()\n        .map(|(index, mut change)| {\n            change.metadata = Some(snapshot_metadata(index, config.state_payload_bytes));\n            change\n        })\n        .collect()\n}\n\nfn changelog_shared_payload_changes(config: StorageBenchConfig) -> Vec<MaterializedChange> {\n    let shared_snapshot_content = snapshot_content(0, config.state_payload_bytes);\n    changelog_materialized_changes(config)\n        .into_iter()\n        .map(|mut change| {\n            change.snapshot_content = Some(shared_snapshot_content.clone());\n            change\n        })\n        .collect()\n}\n\nfn changelog_shared_metadata_changes(config: StorageBenchConfig) -> Vec<MaterializedChange> {\n    let shared_metadata = snapshot_metadata(0, config.state_payload_bytes);\n    changelog_materialized_changes(config)\n        .into_iter()\n        .map(|mut change| {\n            change.snapshot_content = None;\n            change.metadata = Some(shared_metadata.clone());\n            change\n        })\n        .collect()\n}\n\nfn changelog_shared_payload_and_metadata_changes(\n    config: StorageBenchConfig,\n) -> Vec<MaterializedChange> {\n    let shared_snapshot_content = snapshot_content(0, config.state_payload_bytes);\n    let shared_metadata = snapshot_metadata(1, config.state_payload_bytes);\n    changelog_materialized_changes(config)\n        .into_iter()\n        .map(|mut change| {\n            change.snapshot_content = Some(shared_snapshot_content.clone());\n            change.metadata = Some(shared_metadata.clone());\n   
         change\n        })\n        .collect()\n}\n\nfn changelog_composite_entity_id_changes(config: StorageBenchConfig) -> Vec<MaterializedChange> {\n    changelog_materialized_changes(config)\n        .into_iter()\n        .enumerate()\n        .map(|(index, mut change)| {\n            change.entity_id = EntityIdentity {\n                parts: vec![\n                    entity_id(\"change-composite\", index, config.key_pattern),\n                    index.to_string(),\n                    (index % 2 == 0).to_string(),\n                ],\n            };\n            change\n        })\n        .collect()\n}\n\nfn changelog_selective_changes(config: StorageBenchConfig) -> Vec<MaterializedChange> {\n    changelog_materialized_changes(config)\n        .into_iter()\n        .enumerate()\n        .map(|(index, mut change)| {\n            change.schema_key = changelog_schema_key(index, config.selectivity);\n            change\n        })\n        .collect()\n}\n\nfn changelog_entity_history_changes(config: StorageBenchConfig) -> Vec<MaterializedChange> {\n    changelog_materialized_changes(config)\n        .into_iter()\n        .enumerate()\n        .map(|(index, mut change)| {\n            if index % 10 == 0 {\n                change.entity_id = EntityIdentity::single(CHANGELOG_HISTORY_ENTITY_ID);\n            }\n            change\n        })\n        .collect()\n}\n\nfn tracked_schema_key(index: usize, selectivity: StorageBenchSelectivity) -> String {\n    if selectivity.matches(index) {\n        TRACKED_MATCH_SCHEMA_KEY\n    } else {\n        TRACKED_OTHER_SCHEMA_KEY\n    }\n    .to_string()\n}\n\nfn untracked_schema_key(index: usize, selectivity: StorageBenchSelectivity) -> String {\n    if selectivity.matches(index) {\n        UNTRACKED_MATCH_SCHEMA_KEY\n    } else {\n        UNTRACKED_OTHER_SCHEMA_KEY\n    }\n    .to_string()\n}\n\nfn changelog_schema_key(index: usize, selectivity: StorageBenchSelectivity) -> String {\n    if selectivity.matches(index) {\n    
    CHANGELOG_MATCH_SCHEMA_KEY\n    } else {\n        CHANGELOG_OTHER_SCHEMA_KEY\n    }\n    .to_string()\n}\n\nfn entity_id(prefix: &str, index: usize, key_pattern: StorageBenchKeyPattern) -> String {\n    match key_pattern {\n        StorageBenchKeyPattern::Sequential => format!(\"{prefix}-{index}\"),\n        StorageBenchKeyPattern::Random => format!(\"{prefix}-{:016x}\", randomish_index(index)),\n    }\n}\n\nfn randomish_index(index: usize) -> u64 {\n    let mut value = index as u64;\n    value ^= value >> 30;\n    value = value.wrapping_mul(0xbf58_476d_1ce4_e5b9);\n    value ^= value >> 27;\n    value = value.wrapping_mul(0x94d0_49bb_1331_11eb);\n    value ^ (value >> 31)\n}\n\nfn binary_file_ids(rows: usize) -> Vec<String> {\n    (0..rows)\n        .map(|index| format!(\"bench-file-{index}\"))\n        .collect()\n}\n\nfn binary_payloads(rows: usize, blob_bytes: usize) -> Vec<Vec<u8>> {\n    (0..rows)\n        .map(|index| binary_payload(index, blob_bytes))\n        .collect()\n}\n\nfn binary_half_duplicate_payloads(rows: usize, blob_bytes: usize) -> Vec<Vec<u8>> {\n    (0..rows)\n        .map(|index| {\n            if index % 2 == 0 {\n                binary_payload(0, blob_bytes)\n            } else {\n                binary_payload(index, blob_bytes)\n            }\n        })\n        .collect()\n}\n\nfn binary_blob_writes<'a>(_file_ids: &'a [String], payloads: &'a [Vec<u8>]) -> Vec<BlobWrite<'a>> {\n    payloads\n        .iter()\n        .map(|payload| BlobWrite {\n            bytes: payload.as_slice(),\n        })\n        .collect()\n}\n\nfn snapshot_content(index: usize, target_bytes: usize) -> String {\n    let mut value = serde_json::json!({\n        \"id\": format!(\"entity-{index}\"),\n        \"value\": format!(\"value-{index}\"),\n        \"index\": index\n    });\n    pad_snapshot_content(&mut value, target_bytes);\n    value.to_string()\n}\n\nfn snapshot_metadata(index: usize, target_bytes: usize) -> String {\n    snapshot_content(index, 
target_bytes)\n}\n\nfn tracked_state_header_columns() -> Vec<String> {\n    [\n        \"entity_id\",\n        \"schema_key\",\n        \"file_id\",\n        \"metadata\",\n        \"created_at\",\n        \"updated_at\",\n        \"change_id\",\n        \"commit_id\",\n    ]\n    .into_iter()\n    .map(str::to_string)\n    .collect()\n}\n\nfn untracked_state_header_columns() -> Vec<String> {\n    [\n        \"entity_id\",\n        \"schema_key\",\n        \"file_id\",\n        \"metadata\",\n        \"created_at\",\n        \"updated_at\",\n        \"global\",\n        \"version_id\",\n    ]\n    .into_iter()\n    .map(str::to_string)\n    .collect()\n}\n\nfn updated_snapshot_content(index: usize, target_bytes: usize) -> String {\n    let mut value = serde_json::json!({\n        \"id\": format!(\"entity-{index}\"),\n        \"value\": format!(\"updated-{index}\"),\n        \"index\": index\n    });\n    pad_snapshot_content(&mut value, target_bytes);\n    value.to_string()\n}\n\nfn partial_updated_snapshot_content(index: usize, target_bytes: usize) -> String {\n    let mut value = serde_json::json!({\n        \"id\": format!(\"entity-{index}\"),\n        \"value\": format!(\"value-{index}\"),\n        \"index\": index,\n        \"done\": true\n    });\n    pad_snapshot_content(&mut value, target_bytes);\n    value.to_string()\n}\n\nfn delta_chain_snapshot_content(\n    delta_index: usize,\n    row_index: usize,\n    target_bytes: usize,\n) -> String {\n    let mut value = serde_json::json!({\n        \"id\": format!(\"entity-{row_index}\"),\n        \"value\": format!(\"delta-{delta_index}-{row_index}\"),\n        \"index\": row_index,\n        \"delta\": delta_index\n    });\n    pad_snapshot_content(&mut value, target_bytes);\n    value.to_string()\n}\n\nfn pad_snapshot_content(value: &mut serde_json::Value, target_bytes: usize) {\n    let current = value.to_string().len();\n    if target_bytes <= current {\n        return;\n    }\n    value[\"padding\"] = 
serde_json::Value::String(\"x\".repeat(target_bytes - current));\n}\n\nfn timestamp(index: usize) -> String {\n    format!(\n        \"2026-05-01T00:{:02}:{:02}.000Z\",\n        (index / 60) % 60,\n        index % 60\n    )\n}\n\nfn binary_payload(index: usize, len: usize) -> Vec<u8> {\n    let mut payload = (0..len)\n        .map(|offset| {\n            ((index as u64)\n                .wrapping_mul(31)\n                .wrapping_add((offset as u64).wrapping_mul(17))\n                & 0xff) as u8\n        })\n        .collect::<Vec<_>>();\n    for (offset, byte) in (index as u64).to_le_bytes().into_iter().enumerate() {\n        if offset < payload.len() {\n            payload[offset] = byte;\n        }\n    }\n    payload\n}\n\nfn json_documents(shape: JsonStorePayloadShape, rows: usize) -> Vec<Vec<u8>> {\n    (0..rows).map(|index| json_document(shape, index)).collect()\n}\n\nfn json_document(shape: JsonStorePayloadShape, index: usize) -> Vec<u8> {\n    match shape {\n        JsonStorePayloadShape::SmallRaw1k => json_object_document(index, 1_024, 8),\n        JsonStorePayloadShape::MediumStructured16k => json_object_document(index, 16 * 1024, 128),\n        JsonStorePayloadShape::LargeStructured128k => {\n            json_object_document(index, 128 * 1024, 1_000)\n        }\n        JsonStorePayloadShape::LargeArray128k => json_array_document(index, 128 * 1024, 1_000),\n    }\n}\n\nfn updated_json_document(shape: JsonStorePayloadShape, index: usize) -> Vec<u8> {\n    let bytes = json_document(shape, index);\n    let mut value: serde_json::Value =\n        serde_json::from_slice(&bytes).expect(\"storage bench JSON document should parse\");\n    match shape {\n        JsonStorePayloadShape::LargeArray128k => {\n            value[\"items\"][999][\"value\"] =\n                serde_json::Value::String(format!(\"updated-array-value-{index}\"));\n        }\n        JsonStorePayloadShape::SmallRaw1k\n        | JsonStorePayloadShape::MediumStructured16k\n        | 
JsonStorePayloadShape::LargeStructured128k => {\n            value[\"field_999\"] = serde_json::Value::String(format!(\"updated-object-value-{index}\"));\n        }\n    }\n    serde_json::to_vec(&value).expect(\"storage bench updated JSON should serialize\")\n}\n\nfn json_object_document(index: usize, target_bytes: usize, fields: usize) -> Vec<u8> {\n    let mut object = serde_json::Map::new();\n    object.insert(\n        \"id\".to_string(),\n        serde_json::Value::String(format!(\"json-{index}\")),\n    );\n    object.insert(\n        \"target\".to_string(),\n        serde_json::Value::String(format!(\"target-{index}\")),\n    );\n    object.insert(\n        \"status\".to_string(),\n        serde_json::Value::String(if index % 2 == 0 { \"open\" } else { \"closed\" }.to_string()),\n    );\n    object.insert(\n        \"nested\".to_string(),\n        serde_json::json!({\n            \"target\": format!(\"nested-target-{index}\"),\n            \"revision\": index,\n        }),\n    );\n    for field_index in 0..fields {\n        object.insert(\n            format!(\"field_{field_index}\"),\n            serde_json::Value::String(format!(\"value-{index}-{field_index}\")),\n        );\n    }\n    pad_json_object(&mut object, target_bytes);\n    serde_json::to_vec(&serde_json::Value::Object(object))\n        .expect(\"storage bench object JSON should serialize\")\n}\n\nfn json_array_document(index: usize, target_bytes: usize, items: usize) -> Vec<u8> {\n    let mut object = serde_json::Map::new();\n    object.insert(\n        \"id\".to_string(),\n        serde_json::Value::String(format!(\"json-array-{index}\")),\n    );\n    object.insert(\n        \"target\".to_string(),\n        serde_json::Value::String(format!(\"target-{index}\")),\n    );\n    object.insert(\n        \"status\".to_string(),\n        serde_json::Value::String(if index % 2 == 0 { \"open\" } else { \"closed\" }.to_string()),\n    );\n    object.insert(\n        \"items\".to_string(),\n        
serde_json::Value::Array(\n            (0..items)\n                .map(|item_index| {\n                    serde_json::json!({\n                        \"index\": item_index,\n                        \"status\": if item_index % 2 == 0 { \"ready\" } else { \"blocked\" },\n                        \"value\": format!(\"item-{index}-{item_index}\"),\n                    })\n                })\n                .collect(),\n        ),\n    );\n    pad_json_object(&mut object, target_bytes);\n    serde_json::to_vec(&serde_json::Value::Object(object))\n        .expect(\"storage bench array JSON should serialize\")\n}\n\nfn pad_json_object(object: &mut serde_json::Map<String, serde_json::Value>, target_bytes: usize) {\n    let current = serde_json::to_vec(&serde_json::Value::Object(object.clone()))\n        .expect(\"storage bench JSON should serialize\")\n        .len();\n    if target_bytes <= current {\n        return;\n    }\n    object.insert(\n        \"padding\".to_string(),\n        serde_json::Value::String(\"x\".repeat(target_bytes - current)),\n    );\n}\n\nfn json_projection_paths(projection: JsonStoreProjectionShape) -> Vec<JsonProjectionPath> {\n    match projection {\n        JsonStoreProjectionShape::TopLevelTarget => vec![JsonProjectionPath::new(\"/target\")],\n        JsonStoreProjectionShape::TopLevelTenProps => (0..10)\n            .map(|index| JsonProjectionPath::new(format!(\"/field_{index}\")))\n            .collect(),\n        JsonStoreProjectionShape::NestedTarget => vec![JsonProjectionPath::new(\"/nested/target\")],\n        JsonStoreProjectionShape::ArrayItem999 => {\n            vec![JsonProjectionPath::new(\"/items/999/value\")]\n        }\n        JsonStoreProjectionShape::Status => vec![JsonProjectionPath::new(\"/status\")],\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/test_support.rs",
    "content": "use std::sync::Arc;\n\nuse crate::commit_store::{Change, CommitDraftRef, CommitStoreContext};\nuse crate::json_store::{\n    JsonStoreContext, JsonWritePlacementRef, NormalizedJson, NormalizedJsonRef,\n};\nuse crate::storage::StorageContext;\nuse crate::storage::StorageWriteSet;\nuse crate::storage::StorageWriteTransaction;\nuse crate::tracked_state::{\n    MaterializedTrackedStateRow, TrackedStateContext, TrackedStateDeltaRef,\n};\nuse crate::transaction::prepare_version_ref_row;\nuse crate::untracked_state::{\n    MaterializedUntrackedStateRow, UntrackedStateContext, UntrackedStateRow,\n};\nuse crate::version::VersionContext;\n\nfn prepare_json_ref(value: &str) -> crate::json_store::JsonRef {\n    crate::json_store::JsonRef::for_content(value.as_bytes())\n}\nuse crate::GLOBAL_VERSION_ID;\n\npub(crate) const TEST_EMPTY_ROOT_COMMIT_ID: &str = \"test-empty-root\";\nconst TEST_TIMESTAMP: &str = \"1970-01-01T00:00:00.000Z\";\n\n/// Seeds a version head and matching tracked root for unit tests.\n///\n/// A version ref that points at a commit without a tracked root is invalid for\n/// the serving projection. 
This helper keeps that invariant in one place while\n/// still letting low-level tests use synthetic commit ids.\npub(crate) async fn seed_version_head(storage: StorageContext, version_id: &str, commit_id: &str) {\n    seed_version_head_with_rows(storage, version_id, commit_id, &[]).await;\n}\n\n/// Seeds the global version head to an empty tracked root for unit tests.\npub(crate) async fn seed_global_version_head(storage: StorageContext) {\n    seed_version_head(storage, GLOBAL_VERSION_ID, TEST_EMPTY_ROOT_COMMIT_ID).await;\n}\n\n/// Seeds a version head and writes the tracked root contents for its commit.\npub(crate) async fn seed_version_head_with_rows(\n    storage: StorageContext,\n    version_id: &str,\n    commit_id: &str,\n    rows: &[MaterializedTrackedStateRow],\n) {\n    let mut transaction = storage\n        .begin_write_transaction()\n        .await\n        .expect(\"seed transaction should open\");\n    let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new()));\n    let mut writes = StorageWriteSet::new();\n    let canonical_row = prepare_version_ref_row(version_id, commit_id, TEST_TIMESTAMP)\n        .expect(\"version ref should canonicalize\");\n    version_ctx\n        .stage_canonical_ref_rows(&mut writes, &[canonical_row.row])\n        .expect(\"version ref should stage\");\n    writes\n        .apply(&mut transaction.as_mut())\n        .await\n        .expect(\"version ref should write\");\n    stage_tracked_root_from_materialized(\n        transaction.as_mut(),\n        &TrackedStateContext::new(),\n        commit_id,\n        None,\n        rows,\n    )\n    .await\n    .expect(\"tracked root should write\");\n    transaction.commit().await.expect(\"seed should commit\");\n}\n\npub(crate) async fn stage_tracked_root_from_materialized(\n    transaction: &mut dyn StorageWriteTransaction,\n    tracked_state: &TrackedStateContext,\n    commit_id: &str,\n    parent_commit_id: Option<&str>,\n    rows: 
&[MaterializedTrackedStateRow],\n) -> Result<(), crate::LixError> {\n    let mut writes = StorageWriteSet::new();\n    let changes = rows\n        .iter()\n        .map(tracked_change_from_materialized)\n        .collect::<Result<Vec<_>, _>>()?;\n    let json_payloads = materialized_tracked_json_payloads(rows);\n    JsonStoreContext::new().writer().stage_batch(\n        &mut writes,\n        JsonWritePlacementRef::CommitPack {\n            commit_id,\n            pack_id: 0,\n        },\n        json_payloads\n            .iter()\n            .map(|json| NormalizedJsonRef::from(json)),\n    )?;\n\n    let parent_ids = parent_commit_id\n        .map(|parent| vec![parent.to_string()])\n        .unwrap_or_default();\n    let commit_change_id = format!(\"{commit_id}:commit\");\n    let commit = CommitDraftRef {\n        id: commit_id,\n        change_id: &commit_change_id,\n        parent_ids: &parent_ids,\n        author_account_ids: &[],\n        created_at: rows\n            .first()\n            .map(|row| row.updated_at.as_str())\n            .unwrap_or(TEST_TIMESTAMP),\n    };\n    let commit_store = CommitStoreContext::new();\n    let change_ids = changes\n        .iter()\n        .map(|change| change.id.clone())\n        .collect::<Vec<_>>();\n    let existing_changes = commit_store\n        .reader(&mut *transaction)\n        .load_change_index_entries(&change_ids)\n        .await?;\n    let mut authored_changes = Vec::new();\n    let mut authored_created_at = Vec::new();\n    let mut authored_updated_at = Vec::new();\n    let mut adopted_changes = Vec::new();\n    let mut adopted_created_at = Vec::new();\n    let mut adopted_updated_at = Vec::new();\n    for ((change, row), existing) in changes.iter().zip(rows).zip(existing_changes) {\n        if existing.is_some() {\n            adopted_changes.push(change.as_ref());\n            adopted_created_at.push(row.created_at.as_str());\n            adopted_updated_at.push(row.updated_at.as_str());\n        } else 
{\n            authored_changes.push(change.as_ref());\n            authored_created_at.push(row.created_at.as_str());\n            authored_updated_at.push(row.updated_at.as_str());\n        }\n    }\n    let staged = commit_store\n        .writer(&mut *transaction, &mut writes)\n        .stage_tracked_commit_draft(commit, authored_changes.clone(), adopted_changes.clone())\n        .await?;\n    let mut deltas = Vec::with_capacity(changes.len());\n    deltas.extend(\n        authored_changes\n            .iter()\n            .zip(&staged.authored_locators)\n            .zip(authored_created_at)\n            .zip(authored_updated_at)\n            .map(\n                |(((change, locator), created_at), updated_at)| TrackedStateDeltaRef {\n                    change: *change,\n                    locator: locator.as_ref(),\n                    created_at,\n                    updated_at,\n                },\n            ),\n    );\n    deltas.extend(\n        adopted_changes\n            .iter()\n            .zip(&staged.adopted_locators)\n            .zip(adopted_created_at)\n            .zip(adopted_updated_at)\n            .map(\n                |(((change, locator), created_at), updated_at)| TrackedStateDeltaRef {\n                    change: *change,\n                    locator: locator.as_ref(),\n                    created_at,\n                    updated_at,\n                },\n            ),\n    );\n    tracked_state\n        .writer(&mut *transaction, &mut writes)\n        .stage_delta(commit_id, parent_commit_id, &deltas)\n        .await?;\n    writes.apply(&mut *transaction).await.map(|_| ())\n}\n\npub(crate) fn tracked_change_from_materialized(\n    row: &MaterializedTrackedStateRow,\n) -> Result<Change, crate::LixError> {\n    Ok(Change {\n        id: row.change_id.clone(),\n        entity_id: row.entity_id.clone(),\n        schema_key: row.schema_key.clone(),\n        file_id: row.file_id.clone(),\n        snapshot_ref: 
row.snapshot_content.as_deref().map(prepare_json_ref),\n        metadata_ref: row.metadata.as_ref().map(|value| {\n            let serialized = crate::serialize_row_metadata(value);\n            prepare_json_ref(&serialized)\n        }),\n        created_at: row.created_at.clone(),\n    })\n}\n\nfn materialized_tracked_json_payloads(rows: &[MaterializedTrackedStateRow]) -> Vec<NormalizedJson> {\n    let mut payloads = Vec::new();\n    for row in rows {\n        if let Some(snapshot) = row.snapshot_content.as_deref() {\n            payloads.push(NormalizedJson::from_arc_unchecked(Arc::from(snapshot)));\n        }\n        if let Some(metadata) = row.metadata.as_ref() {\n            payloads.push(NormalizedJson::from_arc_unchecked(Arc::from(\n                crate::serialize_row_metadata(metadata),\n            )));\n        }\n    }\n    payloads\n}\n\npub(crate) fn untracked_state_row_from_materialized(\n    _writes: &mut StorageWriteSet,\n    row: &MaterializedUntrackedStateRow,\n) -> Result<UntrackedStateRow, crate::LixError> {\n    Ok(UntrackedStateRow {\n        entity_id: row.entity_id.clone(),\n        schema_key: row.schema_key.clone(),\n        file_id: row.file_id.clone(),\n        snapshot_content: row.snapshot_content.clone(),\n        metadata: row.metadata.as_ref().map(crate::serialize_row_metadata),\n        created_at: row.created_at.clone(),\n        updated_at: row.updated_at.clone(),\n        global: row.global,\n        version_id: row.version_id.clone(),\n    })\n}\n"
  },
  {
    "path": "packages/engine/src/tracked_state/by_file_index.rs",
    "content": "use crate::tracked_state::codec::{\n    encode_key_ref as encode_tracked_key_ref, encode_value_ref as encode_tracked_value_ref,\n};\nuse crate::tracked_state::types::{\n    TrackedStateIndexValueRef, TrackedStateKey, TrackedStateKeyRef, TrackedStateTreeScanRequest,\n};\nuse crate::tracked_state::TrackedStateScanRequest;\nuse crate::NullableKeyFilter;\n\nconst NULL_COMPONENT: &str = \"\\0\";\nconst VALUE_PREFIX: &str = \"\\u{1}\";\n\npub(crate) struct ByFileIndex;\n\nimpl ByFileIndex {\n    pub(crate) fn should_use(request: &TrackedStateScanRequest) -> bool {\n        !request.filter.file_ids.is_empty()\n            && request\n                .filter\n                .file_ids\n                .iter()\n                .all(|filter| matches!(filter, NullableKeyFilter::Value(_)))\n    }\n\n    pub(crate) fn scan_request_from_tracked(\n        request: &TrackedStateScanRequest,\n    ) -> TrackedStateTreeScanRequest {\n        debug_assert!(Self::should_use(request));\n        let schema_keys = request\n            .filter\n            .file_ids\n            .iter()\n            .filter_map(|filter| match filter {\n                NullableKeyFilter::Any | NullableKeyFilter::Null => None,\n                NullableKeyFilter::Value(file_id) => Some(value_component(file_id)),\n            })\n            .collect();\n        let file_ids = request\n            .filter\n            .schema_keys\n            .iter()\n            .cloned()\n            .map(NullableKeyFilter::Value)\n            .collect();\n        TrackedStateTreeScanRequest {\n            schema_keys,\n            entity_ids: request.filter.entity_ids.clone(),\n            file_ids,\n            include_tombstones: request.filter.include_tombstones,\n            limit: None,\n        }\n    }\n\n    pub(crate) fn encode_key_ref(row: TrackedStateKeyRef<'_>) -> Vec<u8> {\n        debug_assert!(row.file_id.is_some());\n        let schema_key = component(row.file_id);\n        
encode_tracked_key_ref(TrackedStateKeyRef {\n            schema_key: &schema_key,\n            file_id: Some(row.schema_key),\n            entity_id: row.entity_id,\n        })\n    }\n\n    pub(crate) fn primary_key_from_index_key(\n        index_key: TrackedStateKey,\n    ) -> Option<TrackedStateKey> {\n        let schema_key = index_key.file_id?;\n        Some(TrackedStateKey {\n            schema_key,\n            file_id: file_id_from_component(&index_key.schema_key)?,\n            entity_id: index_key.entity_id,\n        })\n    }\n\n    pub(crate) fn encode_header_value_ref(value: TrackedStateIndexValueRef<'_>) -> Vec<u8> {\n        encode_tracked_value_ref(value)\n    }\n}\n\nfn component(file_id: Option<&str>) -> String {\n    match file_id {\n        Some(file_id) => value_component(file_id),\n        None => NULL_COMPONENT.to_string(),\n    }\n}\n\nfn value_component(file_id: &str) -> String {\n    format!(\"{VALUE_PREFIX}{file_id}\")\n}\n\nfn file_id_from_component(component: &str) -> Option<Option<String>> {\n    if component == NULL_COMPONENT {\n        return Some(None);\n    }\n    component\n        .strip_prefix(VALUE_PREFIX)\n        .map(|file_id| Some(file_id.to_string()))\n}\n"
  },
  {
    "path": "packages/engine/src/tracked_state/codec.rs",
    "content": "use std::collections::HashMap;\n\nuse xxhash_rust::xxh3::xxh3_64_with_seed;\n\nuse crate::commit_store::ChangeLocator;\nuse crate::entity_identity::EntityIdentity;\nuse crate::json_store::JsonRef;\nuse crate::tracked_state::types::{\n    TrackedStateDeltaEntry, TrackedStateDeltaRef, TrackedStateIndexValue,\n    TrackedStateIndexValueRef, TrackedStateKey, TrackedStateKeyRef, TRACKED_STATE_HASH_BYTES,\n};\nuse crate::LixError;\n\nconst NODE_VERSION: u8 = 2;\nconst VALUE_VERSION: u8 = 7;\nconst VALUE_DELETED_FLAG: u8 = 0b1000_0000;\nconst VALUE_VERSION_MASK: u8 = 0b0111_1111;\nconst DELTA_PACK_VERSION: u8 = 7;\nconst DELTA_LOCATOR_SAME_COMMIT: u8 = 0;\nconst DELTA_LOCATOR_FULL: u8 = 1;\nconst DELTA_JSON_REFS_INLINE: u8 = 0;\nconst DELTA_JSON_REFS_MIXED_PACK_INDEX: u8 = 1;\nconst DELTA_JSON_REF_NONE: u8 = 0;\nconst DELTA_JSON_REF_PACK_INDEX: u8 = 1;\nconst DELTA_JSON_REF_INLINE: u8 = 2;\nconst DELTA_CHANGE_ID_FULL: u8 = 0;\nconst DELTA_CHANGE_ID_COMMIT_SUFFIX: u8 = 1;\nconst TIMESTAMP_UPDATED_SAME: u8 = 0;\nconst TIMESTAMP_UPDATED_DISTINCT: u8 = 1;\nconst NODE_KIND_LEAF: u8 = 1;\nconst NODE_KIND_INTERNAL: u8 = 2;\nconst WEIBULL_K: i32 = 4;\nconst ENTITY_IDENTITY_END: u8 = 0;\nconst ENTITY_IDENTITY_STRING: u8 = 1;\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\nstruct DeltaKeyPrefixRef<'a> {\n    schema_key: &'a str,\n    file_id: Option<&'a str>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct DeltaKeyPrefix {\n    schema_key: String,\n    file_id: Option<String>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct EncodedLeafEntry {\n    pub(crate) key: Vec<u8>,\n    pub(crate) value: Vec<u8>,\n}\n\n#[derive(Debug, Clone, Copy)]\npub(crate) struct EncodedLeafEntryRef<'a> {\n    pub(crate) key: &'a [u8],\n    pub(crate) value: &'a [u8],\n}\n\nimpl EncodedLeafEntry {\n    pub(crate) fn as_ref(&self) -> EncodedLeafEntryRef<'_> {\n        EncodedLeafEntryRef {\n            key: &self.key,\n            value: &self.value,\n        }\n    
}\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct PendingChunkWrite {\n    pub(crate) hash: [u8; TRACKED_STATE_HASH_BYTES],\n    pub(crate) data: Vec<u8>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct ChildSummary {\n    pub(crate) first_key: Vec<u8>,\n    pub(crate) last_key: Vec<u8>,\n    pub(crate) child_hash: [u8; TRACKED_STATE_HASH_BYTES],\n    pub(crate) subtree_count: u64,\n}\n\n#[derive(Debug, Clone, Copy)]\npub(crate) struct ChildSummaryRef<'a> {\n    pub(crate) first_key: &'a [u8],\n    pub(crate) last_key: &'a [u8],\n    pub(crate) child_hash: [u8; TRACKED_STATE_HASH_BYTES],\n    pub(crate) subtree_count: u64,\n}\n\nimpl ChildSummary {\n    pub(crate) fn as_ref(&self) -> ChildSummaryRef<'_> {\n        ChildSummaryRef {\n            first_key: &self.first_key,\n            last_key: &self.last_key,\n            child_hash: self.child_hash,\n            subtree_count: self.subtree_count,\n        }\n    }\n}\n\n#[derive(Debug, Clone)]\npub(crate) enum DecodedNode {\n    Leaf(DecodedLeafNode),\n    Internal(DecodedInternalNode),\n}\n\n#[derive(Debug, Clone)]\npub(crate) enum DecodedNodeRef<'a> {\n    Leaf(DecodedLeafNodeRef<'a>),\n    Internal(DecodedInternalNode),\n}\n\n#[derive(Debug, Clone)]\npub(crate) struct DecodedLeafNode {\n    entries: Vec<EncodedLeafEntry>,\n}\n\nimpl DecodedLeafNode {\n    pub(crate) fn entries(&self) -> &[EncodedLeafEntry] {\n        &self.entries\n    }\n}\n\n#[derive(Debug, Clone)]\npub(crate) struct DecodedLeafNodeRef<'a> {\n    bytes: &'a [u8],\n    payload_start: usize,\n    offsets: Vec<usize>,\n}\n\nimpl<'a> DecodedLeafNodeRef<'a> {\n    pub(crate) fn len(&self) -> usize {\n        self.offsets.len().saturating_sub(1)\n    }\n\n    pub(crate) fn entry(&self, index: usize) -> Result<Option<EncodedLeafEntryRef<'a>>, LixError> {\n        if index >= self.len() {\n            return Ok(None);\n        }\n        let start = self.payload_start + self.offsets[index];\n        let end = 
self.payload_start + self.offsets[index + 1];\n        let record = self.bytes.get(start..end).ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state leaf offset points outside node payload\",\n            )\n        })?;\n        let mut cursor = 0usize;\n        let key = read_sized_slice(record, &mut cursor, \"leaf key\")?;\n        let value = read_sized_slice(record, &mut cursor, \"leaf value\")?;\n        if cursor != record.len() {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state leaf entry decode found trailing bytes\",\n            ));\n        }\n        Ok(Some(EncodedLeafEntryRef { key, value }))\n    }\n\n    pub(crate) fn key(&self, index: usize) -> Result<Option<&'a [u8]>, LixError> {\n        let Some(entry) = self.entry(index)? else {\n            return Ok(None);\n        };\n        Ok(Some(entry.key))\n    }\n}\n\n#[derive(Debug, Clone)]\npub(crate) struct DecodedInternalNode {\n    children: Vec<ChildSummary>,\n}\n\nimpl DecodedInternalNode {\n    pub(crate) fn children(&self) -> &[ChildSummary] {\n        &self.children\n    }\n}\n\npub(crate) fn hash_bytes(bytes: &[u8]) -> [u8; TRACKED_STATE_HASH_BYTES] {\n    *blake3::hash(bytes).as_bytes()\n}\n\npub(crate) fn encode_key(key: &TrackedStateKey) -> Vec<u8> {\n    encode_key_ref(TrackedStateKeyRef {\n        schema_key: &key.schema_key,\n        file_id: key.file_id.as_deref(),\n        entity_id: &key.entity_id,\n    })\n}\n\npub(crate) fn encode_key_ref(key: TrackedStateKeyRef<'_>) -> Vec<u8> {\n    let mut out = Vec::new();\n    append_key_ref(&mut out, key);\n    out\n}\n\nfn append_key_ref(out: &mut Vec<u8>, key: TrackedStateKeyRef<'_>) {\n    push_sized_bytes(out, key.schema_key.as_bytes());\n    match key.file_id {\n        Some(file_id) => {\n            out.push(1);\n            push_sized_bytes(out, file_id.as_bytes());\n        }\n        None => out.push(0),\n 
   }\n    push_entity_identity(out, key.entity_id);\n}\n\npub(crate) fn encode_schema_key_prefix(schema_key: &str) -> Vec<u8> {\n    let mut out = Vec::new();\n    push_sized_bytes(&mut out, schema_key.as_bytes());\n    out\n}\n\npub(crate) fn encode_schema_file_prefix(schema_key: &str, file_id: Option<&str>) -> Vec<u8> {\n    let mut out = encode_schema_key_prefix(schema_key);\n    match file_id {\n        Some(file_id) => {\n            out.push(1);\n            push_sized_bytes(&mut out, file_id.as_bytes());\n        }\n        None => out.push(0),\n    }\n    out\n}\n\npub(crate) fn decode_key(bytes: &[u8]) -> Result<TrackedStateKey, LixError> {\n    let mut cursor = 0usize;\n    let schema_key = read_sized_string(bytes, &mut cursor, \"schema_key\")?;\n    let file_id = match read_u8(bytes, &mut cursor, \"file_id presence\")? {\n        0 => None,\n        1 => Some(read_sized_string(bytes, &mut cursor, \"file_id\")?),\n        other => {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"tracked-state tree key has invalid file_id presence byte {other}\"),\n            ))\n        }\n    };\n    let entity_id = read_entity_identity(bytes, &mut cursor)?;\n    if cursor != bytes.len() {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state tree key decode found trailing bytes\",\n        ));\n    }\n    Ok(TrackedStateKey {\n        schema_key,\n        file_id,\n        entity_id,\n    })\n}\n\n/// Decodes a key after the caller has already proven the schema/file prefix.\n///\n/// This is for scan paths that have matched an encoded prefix range and only\n/// need to materialize the entity suffix plus the known projection fields.\npub(crate) fn decode_key_with_trusted_prefix(\n    bytes: &[u8],\n    schema_key: &str,\n    file_id: Option<&str>,\n    prefix_len: usize,\n) -> Result<TrackedStateKey, LixError> {\n    let mut cursor = prefix_len;\n    let entity_id = 
read_entity_identity(bytes, &mut cursor)?;\n    if cursor != bytes.len() {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state tree key decode found trailing bytes\",\n        ));\n    }\n    Ok(TrackedStateKey {\n        schema_key: schema_key.to_string(),\n        file_id: file_id.map(str::to_string),\n        entity_id,\n    })\n}\n\n#[cfg(test)]\npub(crate) fn encode_value(value: &TrackedStateIndexValue) -> Vec<u8> {\n    encode_value_ref(TrackedStateIndexValueRef {\n        change_locator: value.change_locator.as_ref(),\n        deleted: value.deleted,\n        snapshot_ref: value.snapshot_ref.as_ref(),\n        metadata_ref: value.metadata_ref.as_ref(),\n        created_at: &value.created_at,\n        updated_at: &value.updated_at,\n    })\n}\n\npub(crate) fn encode_value_ref(value: TrackedStateIndexValueRef<'_>) -> Vec<u8> {\n    let mut out = Vec::new();\n    append_value_ref(&mut out, value);\n    out\n}\n\nfn append_value_ref(out: &mut Vec<u8>, value: TrackedStateIndexValueRef<'_>) {\n    out.push(VALUE_VERSION | if value.deleted { VALUE_DELETED_FLAG } else { 0 });\n    push_sized_bytes(out, value.change_locator.source_commit_id.as_bytes());\n    out.extend_from_slice(&value.change_locator.source_pack_id.to_be_bytes());\n    out.extend_from_slice(&value.change_locator.source_ordinal.to_be_bytes());\n    push_sized_bytes(out, value.change_locator.change_id.as_bytes());\n    push_timestamp_pair(out, value.created_at, value.updated_at);\n    push_optional_json_ref(out, value.snapshot_ref);\n    push_optional_json_ref(out, value.metadata_ref);\n}\n\n#[cfg(test)]\npub(crate) fn encoded_value_len(value: &TrackedStateIndexValue) -> usize {\n    1 + sized_bytes_len(value.change_locator.source_commit_id.as_bytes())\n        + 4\n        + 4\n        + sized_bytes_len(value.change_locator.change_id.as_bytes())\n        + timestamp_pair_len(&value.created_at, &value.updated_at)\n        + 
optional_json_ref_len(value.snapshot_ref.as_ref())\n        + optional_json_ref_len(value.metadata_ref.as_ref())\n}\n\npub(crate) fn decode_value(bytes: &[u8]) -> Result<TrackedStateIndexValue, LixError> {\n    let mut cursor = 0usize;\n    let value_header = read_u8(bytes, &mut cursor, \"value header\")?;\n    let deleted = decode_value_header(value_header)?;\n    decode_value_after_header(bytes, cursor, deleted)\n}\n\npub(crate) fn decode_visible_value(\n    bytes: &[u8],\n    include_tombstones: bool,\n) -> Result<Option<TrackedStateIndexValue>, LixError> {\n    let mut cursor = 0usize;\n    let value_header = read_u8(bytes, &mut cursor, \"value header\")?;\n    let deleted = decode_value_header(value_header)?;\n    if deleted && !include_tombstones {\n        return Ok(None);\n    }\n    decode_value_after_header(bytes, cursor, deleted).map(Some)\n}\n\nfn decode_value_header(value_header: u8) -> Result<bool, LixError> {\n    let version = value_header & VALUE_VERSION_MASK;\n    let deleted = value_header & VALUE_DELETED_FLAG != 0;\n    if version != VALUE_VERSION {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"unsupported tracked-state tree value version {version}\"),\n        ));\n    }\n    Ok(deleted)\n}\n\nfn decode_value_after_header(\n    bytes: &[u8],\n    mut cursor: usize,\n    deleted: bool,\n) -> Result<TrackedStateIndexValue, LixError> {\n    let source_commit_id = read_sized_string(bytes, &mut cursor, \"source_commit_id\")?;\n    let source_pack_id =\n        u32::try_from(read_u32(bytes, &mut cursor, \"source_pack_id\")?).map_err(|_| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"tracked-state source_pack_id exceeds u32\",\n            )\n        })?;\n    let source_ordinal =\n        u32::try_from(read_u32(bytes, &mut cursor, \"source_ordinal\")?).map_err(|_| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n             
   \"tracked-state source_ordinal exceeds u32\",\n            )\n        })?;\n    let change_id = read_sized_string(bytes, &mut cursor, \"change_id\")?;\n    let (created_at, updated_at) = read_timestamp_pair(bytes, &mut cursor)?;\n    let snapshot_ref = read_optional_json_ref(bytes, &mut cursor, \"snapshot_ref\")?;\n    let metadata_ref = read_optional_json_ref(bytes, &mut cursor, \"metadata_ref\")?;\n    if cursor != bytes.len() {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state tree value decode found trailing bytes\",\n        ));\n    }\n    Ok(TrackedStateIndexValue {\n        change_locator: ChangeLocator {\n            source_commit_id,\n            source_pack_id,\n            source_ordinal,\n            change_id,\n        },\n        deleted,\n        snapshot_ref,\n        metadata_ref,\n        created_at,\n        updated_at,\n    })\n}\n
\n/// Encodes a delta pack with every JSON ref stored inline (no shared pack-index table).\npub(crate) fn encode_delta_pack_refs(\n    commit_id: &str,\n    deltas: &[TrackedStateDeltaRef<'_>],\n) -> Result<Vec<u8>, LixError> {\n    encode_delta_pack_refs_with_json_pack_indexes(commit_id, deltas, None)\n}\n
\n/// Encodes a delta pack: `LXTD` magic, version byte, var-sized commit_id, a\n/// deduplicated (schema_key, file_id) prefix table, a var-u32 entry count, a\n/// JSON-ref mode byte, then one (key, value) record per entry. When\n/// `json_pack_indexes` is non-empty, matching JSON refs are encoded as pack\n/// indexes instead of inline hashes.\npub(crate) fn encode_delta_pack_refs_with_json_pack_indexes(\n    commit_id: &str,\n    deltas: &[TrackedStateDeltaRef<'_>],\n    json_pack_indexes: Option<&HashMap<[u8; TRACKED_STATE_HASH_BYTES], usize>>,\n) -> Result<Vec<u8>, LixError> {\n    let json_pack_indexes = json_pack_indexes.filter(|indexes| !indexes.is_empty());\n    let mut out = Vec::new();\n    out.extend_from_slice(b\"LXTD\");\n    out.push(DELTA_PACK_VERSION);\n    push_var_sized_bytes(&mut out, commit_id.as_bytes(), \"delta pack commit_id\")?;\n    let (key_prefixes, delta_prefix_indexes) = delta_key_prefixes(deltas);\n    push_var_u32(&mut out, key_prefixes.len(), \"delta key prefix count\")?;\n    for prefix in &key_prefixes {\n        append_delta_key_prefix_ref(&mut out, *prefix)?;\n    }\n    push_var_u32(&mut out, deltas.len(), \"delta pack entry count\")?;\n
    out.push(if json_pack_indexes.is_some() {\n        DELTA_JSON_REFS_MIXED_PACK_INDEX\n    } else {\n        DELTA_JSON_REFS_INLINE\n    });\n    for (delta, prefix_index) in deltas.iter().zip(delta_prefix_indexes) {\n        append_delta_key_ref(\n            &mut out,\n            &key_prefixes,\n            prefix_index,\n            TrackedStateKeyRef {\n                schema_key: delta.change.schema_key,\n                file_id: delta.change.file_id,\n                entity_id: delta.change.entity_id,\n            },\n        )?;\n        append_delta_value_ref(\n            &mut out,\n            commit_id,\n            json_pack_indexes,\n            TrackedStateIndexValueRef {\n                change_locator: delta.locator,\n                deleted: delta.change.snapshot_ref.is_none(),\n                snapshot_ref: delta.change.snapshot_ref,\n                metadata_ref: delta.change.metadata_ref,\n                created_at: delta.created_at,\n                updated_at: delta.updated_at,\n            },\n        )?;\n    }\n    Ok(out)\n}\n
\n/// Deduplicates (schema_key, file_id) pairs across deltas, returning the\n/// prefix table plus each delta's index into it.\nfn delta_key_prefixes<'a>(\n    deltas: &'a [TrackedStateDeltaRef<'a>],\n) -> (Vec<DeltaKeyPrefixRef<'a>>, Vec<usize>) {\n    let mut prefixes = Vec::new();\n    let mut delta_prefix_indexes = Vec::with_capacity(deltas.len());\n    for delta in deltas {\n        let prefix = DeltaKeyPrefixRef {\n            schema_key: delta.change.schema_key,\n            file_id: delta.change.file_id,\n        };\n        let prefix_index = match prefixes.iter().position(|candidate| *candidate == prefix) {\n            Some(prefix_index) => prefix_index,\n            None => {\n                let prefix_index = prefixes.len();\n                prefixes.push(prefix);\n                prefix_index\n            }\n        };\n        delta_prefix_indexes.push(prefix_index);\n    }\n    (prefixes, delta_prefix_indexes)\n}\n
\nfn append_delta_key_prefix_ref(\n    out: &mut Vec<u8>,\n    prefix: DeltaKeyPrefixRef<'_>,\n) -> Result<(), LixError> {\n 
   push_var_sized_bytes(\n        out,\n        prefix.schema_key.as_bytes(),\n        \"delta key prefix schema_key\",\n    )?;\n    match prefix.file_id {\n        Some(file_id) => {\n            out.push(1);\n            push_var_sized_bytes(out, file_id.as_bytes(), \"delta key prefix file_id\")?;\n        }\n        None => out.push(0),\n    }\n    Ok(())\n}\n\nfn decode_delta_key_prefix(bytes: &[u8], cursor: &mut usize) -> Result<DeltaKeyPrefix, LixError> {\n    let schema_key = read_var_sized_string(bytes, cursor, \"delta key prefix schema_key\")?;\n    let file_id = match read_u8(bytes, cursor, \"delta key prefix file_id presence\")? {\n        0 => None,\n        1 => Some(read_var_sized_string(\n            bytes,\n            cursor,\n            \"delta key prefix file_id\",\n        )?),\n        other => {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"tracked-state delta key prefix has invalid file_id presence byte {other}\"),\n            ))\n        }\n    };\n    Ok(DeltaKeyPrefix {\n        schema_key,\n        file_id,\n    })\n}\n\nfn append_delta_key_ref(\n    out: &mut Vec<u8>,\n    prefixes: &[DeltaKeyPrefixRef<'_>],\n    prefix_index: usize,\n    key: TrackedStateKeyRef<'_>,\n) -> Result<(), LixError> {\n    let prefix = DeltaKeyPrefixRef {\n        schema_key: key.schema_key,\n        file_id: key.file_id,\n    };\n    debug_assert_eq!(prefixes.get(prefix_index), Some(&prefix));\n    push_var_u32(out, prefix_index, \"delta key prefix index\")?;\n    push_var_entity_identity(out, key.entity_id)?;\n    Ok(())\n}\n\nfn append_delta_value_ref(\n    out: &mut Vec<u8>,\n    pack_commit_id: &str,\n    json_pack_indexes: Option<&HashMap<[u8; TRACKED_STATE_HASH_BYTES], usize>>,\n    value: TrackedStateIndexValueRef<'_>,\n) -> Result<(), LixError> {\n    out.push(VALUE_VERSION | if value.deleted { VALUE_DELETED_FLAG } else { 0 });\n    if value.change_locator.source_commit_id == 
pack_commit_id {\n        out.push(DELTA_LOCATOR_SAME_COMMIT);\n    } else {\n        out.push(DELTA_LOCATOR_FULL);\n        push_var_sized_bytes(\n            out,\n            value.change_locator.source_commit_id.as_bytes(),\n            \"source_commit_id\",\n        )?;\n    }\n    push_var_u32(\n        out,\n        value.change_locator.source_pack_id as usize,\n        \"source_pack_id\",\n    )?;\n    push_var_u32(\n        out,\n        value.change_locator.source_ordinal as usize,\n        \"source_ordinal\",\n    )?;\n    push_var_delta_change_id(\n        out,\n        value.change_locator.source_commit_id,\n        value.change_locator.change_id,\n    )?;\n    push_var_timestamp_pair(out, value.created_at, value.updated_at)?;\n    match json_pack_indexes {\n        Some(indexes) => {\n            push_mixed_optional_json_ref(out, indexes, value.snapshot_ref)?;\n            push_mixed_optional_json_ref(out, indexes, value.metadata_ref)?;\n        }\n        None => {\n            push_optional_json_ref(out, value.snapshot_ref);\n            push_optional_json_ref(out, value.metadata_ref);\n        }\n    }\n    Ok(())\n}\n\npub(crate) fn decode_delta_pack(\n    bytes: &[u8],\n    pack_json_refs: Option<&[JsonRef]>,\n) -> Result<(String, Vec<TrackedStateDeltaEntry>), LixError> {\n    let mut cursor = 0usize;\n    let magic = bytes.get(0..4).ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state delta pack is truncated before magic\",\n        )\n    })?;\n    if magic != b\"LXTD\" {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state delta pack has invalid magic\",\n        ));\n    }\n    cursor += 4;\n    let version = read_u8(bytes, &mut cursor, \"delta pack version\")?;\n    if version != DELTA_PACK_VERSION {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"unsupported tracked-state delta pack version 
{version}\"),\n        ));\n    }\n    let commit_id = read_var_sized_string(bytes, &mut cursor, \"delta pack commit_id\")?;\n    let prefix_count = read_var_u32(bytes, &mut cursor, \"delta key prefix count\")?;\n    let mut key_prefixes = Vec::new();\n    for _ in 0..prefix_count {\n        key_prefixes.push(decode_delta_key_prefix(bytes, &mut cursor)?);\n    }\n    let count = read_var_u32(bytes, &mut cursor, \"delta pack entry count\")?;\n    let json_ref_mode = decode_delta_json_ref_mode(bytes, &mut cursor, pack_json_refs)?;\n    let mut entries = Vec::new();\n    for _ in 0..count {\n        let key = decode_delta_key(bytes, &mut cursor, &key_prefixes)?;\n        let value = decode_delta_value(bytes, &mut cursor, &commit_id, &json_ref_mode)?;\n        entries.push(TrackedStateDeltaEntry { key, value });\n    }\n    if cursor != bytes.len() {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state delta pack decode found trailing bytes\",\n        ));\n    }\n    Ok((commit_id, entries))\n}\n\npub(crate) fn delta_pack_uses_json_pack_indexes(bytes: &[u8]) -> Result<bool, LixError> {\n    let mut cursor = 0usize;\n    let magic = bytes.get(0..4).ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state delta pack is truncated before magic\",\n        )\n    })?;\n    if magic != b\"LXTD\" {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state delta pack has invalid magic\",\n        ));\n    }\n    cursor += 4;\n    let version = read_u8(bytes, &mut cursor, \"delta pack version\")?;\n    if version != DELTA_PACK_VERSION {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"unsupported tracked-state delta pack version {version}\"),\n        ));\n    }\n    let _commit_id = read_var_sized_string(bytes, &mut cursor, \"delta pack commit_id\")?;\n    let prefix_count = read_var_u32(bytes, 
&mut cursor, \"delta key prefix count\")?;\n    for _ in 0..prefix_count {\n        let _ = decode_delta_key_prefix(bytes, &mut cursor)?;\n    }\n    let _count = read_var_u32(bytes, &mut cursor, \"delta pack entry count\")?;\n    match read_u8(bytes, &mut cursor, \"delta JSON ref mode\")? {\n        DELTA_JSON_REFS_INLINE => Ok(false),\n        DELTA_JSON_REFS_MIXED_PACK_INDEX => Ok(true),\n        other => Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state delta pack has invalid JSON ref mode {other}\"),\n        )),\n    }\n}\n\nfn decode_delta_key(\n    bytes: &[u8],\n    cursor: &mut usize,\n    prefixes: &[DeltaKeyPrefix],\n) -> Result<TrackedStateKey, LixError> {\n    let prefix_index = read_var_u32(bytes, cursor, \"delta key prefix index\")?;\n    let prefix = prefixes.get(prefix_index).ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state delta key prefix index {prefix_index} is out of bounds\"),\n        )\n    })?;\n    let entity_id = read_var_entity_identity(bytes, cursor)?;\n    Ok(TrackedStateKey {\n        schema_key: prefix.schema_key.clone(),\n        file_id: prefix.file_id.clone(),\n        entity_id,\n    })\n}\n\nenum DeltaJsonRefDecodeMode<'a> {\n    Inline,\n    MixedPackIndex(&'a [JsonRef]),\n}\n\nfn decode_delta_json_ref_mode<'a>(\n    bytes: &[u8],\n    cursor: &mut usize,\n    pack_json_refs: Option<&'a [JsonRef]>,\n) -> Result<DeltaJsonRefDecodeMode<'a>, LixError> {\n    match read_u8(bytes, cursor, \"delta JSON ref mode\")? 
{\n        DELTA_JSON_REFS_INLINE => Ok(DeltaJsonRefDecodeMode::Inline),\n        DELTA_JSON_REFS_MIXED_PACK_INDEX => {\n            let refs = pack_json_refs.ok_or_else(|| {\n                LixError::new(\n                    LixError::CODE_INTERNAL_ERROR,\n                    \"tracked-state delta pack needs JSON pack refs but none were provided\",\n                )\n            })?;\n            Ok(DeltaJsonRefDecodeMode::MixedPackIndex(refs))\n        }\n        other => Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state delta pack has invalid JSON ref mode {other}\"),\n        )),\n    }\n}\n\nfn decode_delta_value(\n    bytes: &[u8],\n    cursor: &mut usize,\n    pack_commit_id: &str,\n    json_ref_mode: &DeltaJsonRefDecodeMode<'_>,\n) -> Result<TrackedStateIndexValue, LixError> {\n    let value_header = read_u8(bytes, cursor, \"delta value header\")?;\n    let deleted = decode_value_header(value_header)?;\n    let source_commit_id = match read_u8(bytes, cursor, \"delta locator tag\")? 
{\n        DELTA_LOCATOR_SAME_COMMIT => pack_commit_id.to_string(),\n        DELTA_LOCATOR_FULL => read_var_sized_string(bytes, cursor, \"source_commit_id\")?,\n        other => {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"tracked-state delta value has invalid locator tag {other}\"),\n            ))\n        }\n    };\n    let source_pack_id =\n        u32::try_from(read_var_u32(bytes, cursor, \"source_pack_id\")?).map_err(|_| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"tracked-state source_pack_id exceeds u32\",\n            )\n        })?;\n    let source_ordinal =\n        u32::try_from(read_var_u32(bytes, cursor, \"source_ordinal\")?).map_err(|_| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"tracked-state source_ordinal exceeds u32\",\n            )\n        })?;\n    let change_id = read_var_delta_change_id(bytes, cursor, &source_commit_id)?;\n    let (created_at, updated_at) = read_var_timestamp_pair(bytes, cursor)?;\n    let (snapshot_ref, metadata_ref) = match json_ref_mode {\n        DeltaJsonRefDecodeMode::Inline => (\n            read_optional_json_ref(bytes, cursor, \"snapshot_ref\")?,\n            read_optional_json_ref(bytes, cursor, \"metadata_ref\")?,\n        ),\n        DeltaJsonRefDecodeMode::MixedPackIndex(refs) => (\n            read_mixed_optional_json_ref(bytes, cursor, refs, \"snapshot_ref\")?,\n            read_mixed_optional_json_ref(bytes, cursor, refs, \"metadata_ref\")?,\n        ),\n    };\n    Ok(TrackedStateIndexValue {\n        change_locator: ChangeLocator {\n            source_commit_id,\n            source_pack_id,\n            source_ordinal,\n            change_id,\n        },\n        deleted,\n        snapshot_ref,\n        metadata_ref,\n        created_at,\n        updated_at,\n    })\n}\n\n#[cfg(test)]\nfn sized_bytes_len(bytes: &[u8]) -> usize {\n    4 + 
bytes.len()\n}\n\npub(crate) fn encode_leaf_node(entries: &[EncodedLeafEntry]) -> Vec<u8> {\n    let entries = entries\n        .iter()\n        .map(EncodedLeafEntry::as_ref)\n        .collect::<Vec<_>>();\n    encode_leaf_node_refs(&entries)\n}\n\npub(crate) fn encode_leaf_node_refs(entries: &[EncodedLeafEntryRef<'_>]) -> Vec<u8> {\n    let mut out = Vec::new();\n    out.push(NODE_KIND_LEAF);\n    out.push(NODE_VERSION);\n    push_u32(&mut out, entries.len());\n\n    let mut offsets = Vec::with_capacity(entries.len().saturating_add(1));\n    let mut payload = Vec::new();\n    offsets.push(0usize);\n    for entry in entries {\n        push_sized_bytes(&mut payload, entry.key);\n        push_sized_bytes(&mut payload, entry.value);\n        offsets.push(payload.len());\n    }\n    for offset in offsets {\n        push_u32(&mut out, offset);\n    }\n    out.extend_from_slice(&payload);\n    out\n}\n\npub(crate) fn encode_internal_node(children: &[ChildSummary]) -> Vec<u8> {\n    let children = children\n        .iter()\n        .map(ChildSummary::as_ref)\n        .collect::<Vec<_>>();\n    encode_internal_node_refs(&children)\n}\n\npub(crate) fn encode_internal_node_refs(children: &[ChildSummaryRef<'_>]) -> Vec<u8> {\n    let mut out = Vec::new();\n    out.push(NODE_KIND_INTERNAL);\n    out.push(NODE_VERSION);\n    push_u32(&mut out, children.len());\n    for child in children {\n        push_sized_bytes(&mut out, child.first_key);\n        push_sized_bytes(&mut out, child.last_key);\n        out.extend_from_slice(&child.child_hash);\n        out.extend_from_slice(&child.subtree_count.to_be_bytes());\n    }\n    out\n}\n\npub(crate) fn decode_node(bytes: &[u8]) -> Result<DecodedNode, LixError> {\n    match decode_node_ref(bytes)? 
{\n        DecodedNodeRef::Leaf(leaf) => {\n            let mut entries = Vec::with_capacity(leaf.len());\n            for index in 0..leaf.len() {\n                let entry = leaf.entry(index)?.ok_or_else(|| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        \"tracked-state leaf entry disappeared during owned decode\",\n                    )\n                })?;\n                entries.push(EncodedLeafEntry {\n                    key: entry.key.to_vec(),\n                    value: entry.value.to_vec(),\n                });\n            }\n            Ok(DecodedNode::Leaf(DecodedLeafNode { entries }))\n        }\n        DecodedNodeRef::Internal(internal) => Ok(DecodedNode::Internal(internal)),\n    }\n}\n\npub(crate) fn decode_node_ref(bytes: &[u8]) -> Result<DecodedNodeRef<'_>, LixError> {\n    let mut cursor = 0usize;\n    let kind = read_u8(bytes, &mut cursor, \"node kind\")?;\n    let version = read_u8(bytes, &mut cursor, \"node version\")?;\n    if version != NODE_VERSION {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"unsupported tracked-state tree node version {version}\"),\n        ));\n    }\n    let count = read_u32(bytes, &mut cursor, \"entry count\")?;\n    let node = match kind {\n        NODE_KIND_LEAF => {\n            let leaf = decode_leaf_node_ref_after_count(bytes, &mut cursor, count)?;\n            DecodedNodeRef::Leaf(leaf)\n        }\n        NODE_KIND_INTERNAL => {\n            let mut children = Vec::with_capacity(count);\n            for _ in 0..count {\n                let first_key = read_sized_bytes(bytes, &mut cursor, \"internal first_key\")?;\n                let last_key = read_sized_bytes(bytes, &mut cursor, \"internal last_key\")?;\n                let child_hash = read_fixed_hash(bytes, &mut cursor, \"internal child_hash\")?;\n                let subtree_count = read_u64(bytes, &mut cursor, \"internal 
subtree_count\")?;\n                children.push(ChildSummary {\n                    first_key,\n                    last_key,\n                    child_hash,\n                    subtree_count,\n                });\n            }\n            DecodedNodeRef::Internal(DecodedInternalNode { children })\n        }\n        other => {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"unknown tracked-state tree node kind {other}\"),\n            ))\n        }\n    };\n    if cursor != bytes.len() {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state tree node decode found trailing bytes\",\n        ));\n    }\n    Ok(node)\n}\n\nfn decode_leaf_node_ref_after_count<'a>(\n    bytes: &'a [u8],\n    cursor: &mut usize,\n    count: usize,\n) -> Result<DecodedLeafNodeRef<'a>, LixError> {\n    let mut offsets = Vec::with_capacity(count.saturating_add(1));\n    for _ in 0..=count {\n        offsets.push(read_u32(bytes, cursor, \"leaf entry offset\")?);\n    }\n    if offsets.first().copied() != Some(0) {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state leaf offset table must start at zero\",\n        ));\n    }\n    for window in offsets.windows(2) {\n        if window[0] > window[1] {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state leaf offsets must be monotonic\",\n            ));\n        }\n    }\n    let payload_len = bytes.len().checked_sub(*cursor).ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state leaf payload start is past node end\",\n        )\n    })?;\n    if offsets.last().copied().unwrap_or_default() != payload_len {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state leaf offset table does not cover full payload\",\n        ));\n    }\n    let 
payload_start = *cursor;\n    *cursor = bytes.len();\n    Ok(DecodedLeafNodeRef {\n        bytes,\n        payload_start,\n        offsets,\n    })\n}\n
\npub(crate) fn child_summary_from_node(\n    node_bytes: Vec<u8>,\n    first_key: Vec<u8>,\n    last_key: Vec<u8>,\n    subtree_count: u64,\n) -> (PendingChunkWrite, ChildSummary) {\n    let hash = hash_bytes(&node_bytes);\n    (\n        PendingChunkWrite {\n            hash,\n            data: node_bytes,\n        },\n        ChildSummary {\n            first_key,\n            last_key,\n            child_hash: hash,\n            subtree_count,\n        },\n    )\n}\n
\n/// Decides whether a content-defined chunk boundary falls after the item that\n/// just grew the chunk to `chunk_size` bytes. The split probability is the\n/// Weibull CDF mass between the previous and current normalized chunk sizes,\n/// conditioned on not having split yet, so chunk sizes concentrate around\n/// `target_chunk_bytes`. Hashing the encoded key with a per-level salt keeps\n/// boundaries deterministic for a given key and tree level.\npub(crate) fn boundary_trigger(\n    encoded_key: &[u8],\n    level: usize,\n    chunk_size: usize,\n    item_size: usize,\n    target_chunk_bytes: usize,\n) -> bool {\n    if item_size == 0 || target_chunk_bytes == 0 {\n        return false;\n    }\n\n    let start =\n        weibull_cdf(chunk_size.saturating_sub(item_size) as f64 / target_chunk_bytes as f64);\n    let end = weibull_cdf(chunk_size as f64 / target_chunk_bytes as f64);\n    let remaining = 1.0 - start;\n    if remaining <= 0.0 {\n        return true;\n    }\n\n    let split_probability = ((end - start) / remaining).clamp(0.0, 1.0);\n    let hash = xxh3_64_with_seed(encoded_key, level_salt(level));\n    (hash as f64) < split_probability * (u64::MAX as f64)\n}\n
\n/// Weibull CDF `1 - e^(-x^k)` with shape `WEIBULL_K`, computed via `exp_m1`\n/// for precision near zero.\nfn weibull_cdf(normalized_size: f64) -> f64 {\n    if normalized_size <= 0.0 {\n        return 0.0;\n    }\n    -f64::exp_m1(-normalized_size.powi(WEIBULL_K))\n}\n
\n/// Derives a per-level hash seed with a splitmix64-style finalizer so the same\n/// key can split at different points on different tree levels.\nfn level_salt(level: usize) -> u64 {\n    let mut value = (level as u64).wrapping_add(0x9e37_79b9_7f4a_7c15);\n    value = (value ^ (value >> 30)).wrapping_mul(0xbf58_476d_1ce4_e5b9);\n    value = (value ^ (value >> 27)).wrapping_mul(0x94d0_49bb_1331_11eb);\n    value ^ (value >> 31)\n}\n
\nfn push_entity_identity(out: &mut Vec<u8>, identity: &EntityIdentity) {\n    assert!(\n        !identity.parts.is_empty(),\n        \"tracked-state key entity identity must contain at least one part\"\n    );\n    for part in &identity.parts {\n        out.push(ENTITY_IDENTITY_STRING);\n        push_sized_bytes(out, part.as_bytes());\n    }\n    out.push(ENTITY_IDENTITY_END);\n}\n
\nfn read_entity_identity(bytes: &[u8], cursor: &mut usize) -> Result<EntityIdentity, LixError> {\n    let mut parts = Vec::new();\n    loop {\n        let tag = read_u8(bytes, cursor, \"entity identity part tag\")?;\n        match tag {\n            ENTITY_IDENTITY_END => break,\n            ENTITY_IDENTITY_STRING => {\n                parts.push(read_sized_string(\n                    bytes,\n                    cursor,\n                    \"entity identity string part\",\n                )?);\n            }\n            other => {\n                return Err(LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\"tracked-state tree key has invalid entity identity part tag {other}\"),\n                ))\n            }\n        }\n    }\n    if parts.is_empty() {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state tree key entity identity must contain at least one part\",\n        ));\n    }\n    Ok(EntityIdentity { parts })\n}\n
\nfn push_sized_bytes(out: &mut Vec<u8>, bytes: &[u8]) {\n    push_u32(out, bytes.len());\n    out.extend_from_slice(bytes);\n}\n
\nfn push_var_u32(out: &mut Vec<u8>, value: usize, field_name: &str) -> Result<(), LixError> {\n    let (encoded, len) = var_u32_bytes(value, field_name)?;\n    out.extend_from_slice(&encoded[..len]);\n    Ok(())\n}\n
\n/// LEB128-style varint: 7 bits per byte, low bits first, high bit marks\n/// continuation; a u32 needs at most 5 bytes.\nfn var_u32_bytes(value: usize, field_name: &str) -> Result<([u8; 5], usize), LixError> {\n    let mut value = u32::try_from(value).map_err(|_| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\"tracked-state delta pack field '{field_name}' exceeds u32\"),\n        )\n    })?;\n    let mut encoded = [0_u8; 5];\n    let mut len = 0usize;\n    while value >= 0x80 {\n      
  encoded[len] = (value as u8 & 0x7f) | 0x80;\n        len += 1;\n        value >>= 7;\n    }\n    encoded[len] = value as u8;\n    len += 1;\n    Ok((encoded, len))\n}\n\nfn push_var_sized_bytes(out: &mut Vec<u8>, bytes: &[u8], field_name: &str) -> Result<(), LixError> {\n    push_var_u32(out, bytes.len(), field_name)?;\n    out.extend_from_slice(bytes);\n    Ok(())\n}\n\nfn push_var_entity_identity(out: &mut Vec<u8>, identity: &EntityIdentity) -> Result<(), LixError> {\n    assert!(\n        !identity.parts.is_empty(),\n        \"tracked-state delta key entity identity must contain at least one part\"\n    );\n    push_var_u32(out, identity.parts.len(), \"entity identity part count\")?;\n    for part in &identity.parts {\n        push_var_sized_bytes(out, part.as_bytes(), \"entity identity string part\")?;\n    }\n    Ok(())\n}\n\nfn push_optional_json_ref(out: &mut Vec<u8>, json_ref: Option<&JsonRef>) {\n    match json_ref {\n        Some(json_ref) => {\n            out.push(1);\n            out.extend_from_slice(json_ref.as_hash_bytes());\n        }\n        None => out.push(0),\n    }\n}\n\nfn push_mixed_optional_json_ref(\n    out: &mut Vec<u8>,\n    indexes: &HashMap<[u8; TRACKED_STATE_HASH_BYTES], usize>,\n    json_ref: Option<&JsonRef>,\n) -> Result<(), LixError> {\n    let Some(json_ref) = json_ref else {\n        out.push(DELTA_JSON_REF_NONE);\n        return Ok(());\n    };\n    if let Some(index) = indexes.get(json_ref.as_hash_array()).copied() {\n        out.push(DELTA_JSON_REF_PACK_INDEX);\n        push_var_u32(out, index, \"json ref pack index\")\n    } else {\n        out.push(DELTA_JSON_REF_INLINE);\n        out.extend_from_slice(json_ref.as_hash_bytes());\n        Ok(())\n    }\n}\n\nfn push_var_delta_change_id(\n    out: &mut Vec<u8>,\n    source_commit_id: &str,\n    change_id: &str,\n) -> Result<(), LixError> {\n    if let Some(suffix) = change_id.strip_prefix(source_commit_id) {\n        out.push(DELTA_CHANGE_ID_COMMIT_SUFFIX);\n        
push_var_sized_bytes(out, suffix.as_bytes(), \"change_id\")\n    } else {\n        out.push(DELTA_CHANGE_ID_FULL);\n        push_var_sized_bytes(out, change_id.as_bytes(), \"change_id\")\n    }\n}\n\nfn read_var_delta_change_id(\n    bytes: &[u8],\n    cursor: &mut usize,\n    source_commit_id: &str,\n) -> Result<String, LixError> {\n    let tag = read_u8(bytes, cursor, \"delta change_id tag\")?;\n    let value = read_var_sized_string(bytes, cursor, \"change_id\")?;\n    match tag {\n        DELTA_CHANGE_ID_FULL => Ok(value),\n        DELTA_CHANGE_ID_COMMIT_SUFFIX => Ok(format!(\"{source_commit_id}{value}\")),\n        other => Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state delta value has invalid change_id tag {other}\"),\n        )),\n    }\n}\n\n#[cfg(test)]\nfn optional_json_ref_len(json_ref: Option<&JsonRef>) -> usize {\n    1 + json_ref.map_or(0, |_| TRACKED_STATE_HASH_BYTES)\n}\n\nfn push_timestamp_pair(out: &mut Vec<u8>, created_at: &str, updated_at: &str) {\n    push_sized_bytes(out, created_at.as_bytes());\n    if updated_at == created_at {\n        out.push(TIMESTAMP_UPDATED_SAME);\n    } else {\n        out.push(TIMESTAMP_UPDATED_DISTINCT);\n        push_sized_bytes(out, updated_at.as_bytes());\n    }\n}\n\nfn push_var_timestamp_pair(\n    out: &mut Vec<u8>,\n    created_at: &str,\n    updated_at: &str,\n) -> Result<(), LixError> {\n    push_var_sized_bytes(out, created_at.as_bytes(), \"created_at\")?;\n    if updated_at == created_at {\n        out.push(TIMESTAMP_UPDATED_SAME);\n    } else {\n        out.push(TIMESTAMP_UPDATED_DISTINCT);\n        push_var_sized_bytes(out, updated_at.as_bytes(), \"updated_at\")?;\n    }\n    Ok(())\n}\n\n#[cfg(test)]\nfn timestamp_pair_len(created_at: &str, updated_at: &str) -> usize {\n    sized_bytes_len(created_at.as_bytes())\n        + 1\n        + if updated_at == created_at {\n            0\n        } else {\n            sized_bytes_len(updated_at.as_bytes())\n        
}\n}\n\nfn read_timestamp_pair(bytes: &[u8], cursor: &mut usize) -> Result<(String, String), LixError> {\n    let created_at = read_sized_string(bytes, cursor, \"created_at\")?;\n    let updated_at = match read_u8(bytes, cursor, \"updated_at tag\")? {\n        TIMESTAMP_UPDATED_SAME => created_at.clone(),\n        TIMESTAMP_UPDATED_DISTINCT => read_sized_string(bytes, cursor, \"updated_at\")?,\n        other => {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"tracked-state timestamp pair has invalid updated_at tag {other}\"),\n            ))\n        }\n    };\n    Ok((created_at, updated_at))\n}\n\nfn read_var_timestamp_pair(bytes: &[u8], cursor: &mut usize) -> Result<(String, String), LixError> {\n    let created_at = read_var_sized_string(bytes, cursor, \"created_at\")?;\n    let updated_at = match read_u8(bytes, cursor, \"updated_at tag\")? {\n        TIMESTAMP_UPDATED_SAME => created_at.clone(),\n        TIMESTAMP_UPDATED_DISTINCT => read_var_sized_string(bytes, cursor, \"updated_at\")?,\n        other => {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"tracked-state timestamp pair has invalid updated_at tag {other}\"),\n            ))\n        }\n    };\n    Ok((created_at, updated_at))\n}\n\nfn push_u32(out: &mut Vec<u8>, value: usize) {\n    out.extend_from_slice(&(value as u32).to_be_bytes());\n}\n\nfn read_sized_string(\n    bytes: &[u8],\n    cursor: &mut usize,\n    field_name: &str,\n) -> Result<String, LixError> {\n    String::from_utf8(read_sized_bytes(bytes, cursor, field_name)?).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state tree field '{field_name}' is invalid UTF-8: {error}\"),\n        )\n    })\n}\n\nfn read_sized_bytes(\n    bytes: &[u8],\n    cursor: &mut usize,\n    field_name: &str,\n) -> Result<Vec<u8>, LixError> {\n    read_sized_slice(bytes, cursor, 
field_name).map(<[u8]>::to_vec)\n}\n\nfn read_sized_slice<'a>(\n    bytes: &'a [u8],\n    cursor: &mut usize,\n    field_name: &str,\n) -> Result<&'a [u8], LixError> {\n    let len = read_u32(bytes, cursor, field_name)?;\n    let end = cursor.checked_add(len).ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state tree field '{field_name}' length overflow\"),\n        )\n    })?;\n    let slice = bytes.get(*cursor..end).ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state tree field '{field_name}' is truncated\"),\n        )\n    })?;\n    *cursor = end;\n    Ok(slice)\n}\n\nfn read_var_sized_string(\n    bytes: &[u8],\n    cursor: &mut usize,\n    field_name: &str,\n) -> Result<String, LixError> {\n    String::from_utf8(read_var_sized_slice(bytes, cursor, field_name)?.to_vec()).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state delta pack field '{field_name}' is invalid UTF-8: {error}\"),\n        )\n    })\n}\n\nfn read_var_sized_slice<'a>(\n    bytes: &'a [u8],\n    cursor: &mut usize,\n    field_name: &str,\n) -> Result<&'a [u8], LixError> {\n    let len = read_var_u32(bytes, cursor, field_name)?;\n    let end = cursor.checked_add(len).ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state delta pack field '{field_name}' length overflow\"),\n        )\n    })?;\n    let slice = bytes.get(*cursor..end).ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state delta pack field '{field_name}' is truncated\"),\n        )\n    })?;\n    *cursor = end;\n    Ok(slice)\n}\n\nfn read_var_entity_identity(bytes: &[u8], cursor: &mut usize) -> Result<EntityIdentity, LixError> {\n    let count = read_var_u32(bytes, cursor, \"entity identity part count\")?;\n    let mut parts = 
Vec::new();\n    for _ in 0..count {\n        parts.push(read_var_sized_string(\n            bytes,\n            cursor,\n            \"entity identity string part\",\n        )?);\n    }\n    if parts.is_empty() {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked-state delta key entity identity must contain at least one part\",\n        ));\n    }\n    Ok(EntityIdentity { parts })\n}\n\nfn read_fixed_hash(\n    bytes: &[u8],\n    cursor: &mut usize,\n    field_name: &str,\n) -> Result<[u8; TRACKED_STATE_HASH_BYTES], LixError> {\n    let end = *cursor + TRACKED_STATE_HASH_BYTES;\n    let slice = bytes.get(*cursor..end).ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state tree field '{field_name}' is truncated\"),\n        )\n    })?;\n    let mut out = [0_u8; TRACKED_STATE_HASH_BYTES];\n    out.copy_from_slice(slice);\n    *cursor = end;\n    Ok(out)\n}\n\nfn read_optional_json_ref(\n    bytes: &[u8],\n    cursor: &mut usize,\n    field_name: &str,\n) -> Result<Option<JsonRef>, LixError> {\n    match read_u8(bytes, cursor, field_name)? {\n        0 => Ok(None),\n        1 => Ok(Some(JsonRef::from_hash_bytes(read_fixed_hash(\n            bytes, cursor, field_name,\n        )?))),\n        other => Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state tree field '{field_name}' has invalid JSON ref tag {other}\"),\n        )),\n    }\n}\n\nfn read_mixed_optional_json_ref(\n    bytes: &[u8],\n    cursor: &mut usize,\n    refs: &[JsonRef],\n    field_name: &str,\n) -> Result<Option<JsonRef>, LixError> {\n    match read_u8(bytes, cursor, field_name)? 
{\n        DELTA_JSON_REF_NONE => Ok(None),\n        DELTA_JSON_REF_PACK_INDEX => {\n            let index = read_var_u32(bytes, cursor, field_name)?;\n            refs.get(index).copied().map(Some).ok_or_else(|| {\n                LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    format!(\"tracked-state delta JSON ref index {index} is out of bounds\"),\n                )\n            })\n        }\n        DELTA_JSON_REF_INLINE => {\n            let hash = read_fixed_hash(bytes, cursor, field_name)?;\n            Ok(Some(JsonRef::from_hash_bytes(hash)))\n        }\n        other => Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state tree field '{field_name}' has invalid JSON ref tag {other}\"),\n        )),\n    }\n}\n\nfn read_u8(bytes: &[u8], cursor: &mut usize, field_name: &str) -> Result<u8, LixError> {\n    let value = *bytes.get(*cursor).ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state tree field '{field_name}' is truncated\"),\n        )\n    })?;\n    *cursor += 1;\n    Ok(value)\n}\n\nfn read_var_u32(bytes: &[u8], cursor: &mut usize, field_name: &str) -> Result<usize, LixError> {\n    let mut value = 0u32;\n    let mut shift = 0u32;\n    for byte_index in 0..5 {\n        let byte = read_u8(bytes, cursor, field_name)?;\n        if shift == 28 && (byte & 0x80 != 0 || byte & 0x70 != 0) {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"tracked-state delta pack field '{field_name}' varint exceeds u32\"),\n            ));\n        }\n        if byte_index > 0 && byte & 0x80 == 0 && byte == 0 {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"tracked-state delta pack field '{field_name}' has non-canonical varint\"),\n            ));\n        }\n        value |= ((byte & 0x7f) as u32) << shift;\n        if byte & 0x80 
== 0 {\n            return Ok(value as usize);\n        }\n        shift += 7;\n    }\n    Err(LixError::new(\n        \"LIX_ERROR_UNKNOWN\",\n        format!(\"tracked-state delta pack field '{field_name}' varint exceeds u32\"),\n    ))\n}\n\nfn read_u32(bytes: &[u8], cursor: &mut usize, field_name: &str) -> Result<usize, LixError> {\n    let end = *cursor + 4;\n    let slice = bytes.get(*cursor..end).ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state tree field '{field_name}' is truncated\"),\n        )\n    })?;\n    let mut out = [0_u8; 4];\n    out.copy_from_slice(slice);\n    *cursor = end;\n    Ok(u32::from_be_bytes(out) as usize)\n}\n\nfn read_u64(bytes: &[u8], cursor: &mut usize, field_name: &str) -> Result<u64, LixError> {\n    let end = *cursor + 8;\n    let slice = bytes.get(*cursor..end).ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"tracked-state tree field '{field_name}' is truncated\"),\n        )\n    })?;\n    let mut out = [0_u8; 8];\n    out.copy_from_slice(slice);\n    *cursor = end;\n    Ok(u64::from_be_bytes(out))\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn key_codec_distinguishes_null_and_value_file_id() {\n        let null_key = encode_key(&TrackedStateKey {\n            schema_key: \"schema\".to_string(),\n            file_id: None,\n            entity_id: EntityIdentity::single(\"entity\"),\n        });\n        let file_key = encode_key(&TrackedStateKey {\n            schema_key: \"schema\".to_string(),\n            file_id: Some(\"file\".to_string()),\n            entity_id: EntityIdentity::single(\"entity\"),\n        });\n\n        assert_ne!(null_key, file_key);\n        assert_eq!(\n            decode_key(&null_key).expect(\"null key\"),\n            TrackedStateKey {\n                schema_key: \"schema\".to_string(),\n                file_id: None,\n                entity_id: 
EntityIdentity::single(\"entity\"),\n            }\n        );\n        assert_eq!(\n            decode_key(&file_key).expect(\"file key\"),\n            TrackedStateKey {\n                schema_key: \"schema\".to_string(),\n                file_id: Some(\"file\".to_string()),\n                entity_id: EntityIdentity::single(\"entity\"),\n            }\n        );\n    }\n\n    #[test]\n    fn key_codec_encodes_composite_identity_as_string_tuple_parts() {\n        let key = TrackedStateKey {\n            schema_key: \"schema\".to_string(),\n            file_id: None,\n            entity_id: EntityIdentity {\n                parts: vec![\n                    \"namespace\".to_string(),\n                    \"true\".to_string(),\n                    \"42\".to_string(),\n                ],\n            },\n        };\n\n        let encoded = encode_key(&key);\n\n        assert_eq!(decode_key(&encoded).expect(\"key should decode\"), key);\n    }\n\n    #[test]\n    fn key_codec_decodes_entity_suffix_with_trusted_prefix() {\n        let key = TrackedStateKey {\n            schema_key: \"schema\".to_string(),\n            file_id: Some(\"file\".to_string()),\n            entity_id: EntityIdentity {\n                parts: vec![\"namespace\".to_string(), \"id\".to_string()],\n            },\n        };\n        let encoded = encode_key(&key);\n        let prefix = encode_schema_file_prefix(\"schema\", Some(\"file\"));\n\n        assert_eq!(\n            decode_key_with_trusted_prefix(&encoded, \"schema\", Some(\"file\"), prefix.len())\n                .expect(\"key suffix should decode\"),\n            key\n        );\n    }\n\n    #[test]\n    fn key_codec_rejects_non_string_identity_part_tags() {\n        let mut encoded = encode_key(&TrackedStateKey {\n            schema_key: \"schema\".to_string(),\n            file_id: None,\n            entity_id: EntityIdentity {\n                parts: vec![\"true\".to_string()],\n            },\n        });\n        let 
schema_key_len = \"schema\".len();\n        let file_scope_offset = 4 + schema_key_len;\n        let entity_tag_offset = file_scope_offset + 1;\n        encoded[entity_tag_offset] = 2;\n\n        let error = decode_key(&encoded).expect_err(\"non-string identity tag should reject\");\n        assert!(error\n            .to_string()\n            .contains(\"invalid entity identity part tag 2\"));\n    }\n\n    #[test]\n    fn key_codec_preserves_tuple_prefix_ordering() {\n        let prefix = encode_key(&TrackedStateKey {\n            schema_key: \"schema\".to_string(),\n            file_id: None,\n            entity_id: EntityIdentity {\n                parts: vec![\"a\".to_string()],\n            },\n        });\n        let extended = encode_key(&TrackedStateKey {\n            schema_key: \"schema\".to_string(),\n            file_id: None,\n            entity_id: EntityIdentity {\n                parts: vec![\"a\".to_string(), \"b\".to_string()],\n            },\n        });\n\n        assert!(prefix < extended);\n    }\n\n    #[test]\n    fn value_codec_roundtrips_locator_value() {\n        let value = TrackedStateIndexValue {\n            change_locator: ChangeLocator {\n                source_commit_id: \"commit\".to_string(),\n                source_pack_id: 7,\n                source_ordinal: 11,\n                change_id: \"change\".to_string(),\n            },\n            deleted: false,\n            snapshot_ref: Some(JsonRef::from_hash_bytes([1; 32])),\n            metadata_ref: Some(JsonRef::from_hash_bytes([2; 32])),\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-02T00:00:00Z\".to_string(),\n        };\n\n        let encoded = encode_value(&value);\n        assert_eq!(decode_value(&encoded).expect(\"value\"), value);\n    }\n\n    #[test]\n    fn value_codec_roundtrips_second_locator_value() {\n        let value = TrackedStateIndexValue {\n            change_locator: ChangeLocator {\n                
source_commit_id: \"other-commit\".to_string(),\n                source_pack_id: 0,\n                source_ordinal: 1,\n                change_id: \"other-change\".to_string(),\n            },\n            deleted: true,\n            snapshot_ref: None,\n            metadata_ref: None,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-02T00:00:00Z\".to_string(),\n        };\n\n        let encoded = encode_value(&value);\n        assert_eq!(decode_value(&encoded).expect(\"value\"), value);\n    }\n\n    #[test]\n    fn value_codec_compacts_matching_timestamps() {\n        let mut compact = TrackedStateIndexValue {\n            change_locator: ChangeLocator {\n                source_commit_id: \"commit\".to_string(),\n                source_pack_id: 0,\n                source_ordinal: 1,\n                change_id: \"change\".to_string(),\n            },\n            deleted: false,\n            snapshot_ref: None,\n            metadata_ref: None,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n        };\n        let compact_len = encode_value(&compact).len();\n        assert_eq!(\n            decode_value(&encode_value(&compact)).expect(\"value\"),\n            compact\n        );\n\n        compact.updated_at = \"2026-01-02T00:00:00Z\".to_string();\n        let distinct_len = encode_value(&compact).len();\n\n        assert!(compact_len < distinct_len);\n        assert_eq!(\n            distinct_len - compact_len,\n            sized_bytes_len(compact.updated_at.as_bytes())\n        );\n    }\n\n    #[test]\n    fn delta_pack_ref_encoder_roundtrips_entries() {\n        let entity_id = EntityIdentity {\n            parts: vec![\"entity-a\".to_string()],\n        };\n        let snapshot_ref = JsonRef::from_hash_bytes([1; 32]);\n        let metadata_ref = JsonRef::from_hash_bytes([2; 32]);\n        let live_change = 
crate::commit_store::ChangeRef {\n            id: \"commit-a:change-live\",\n            entity_id: &entity_id,\n            schema_key: \"schema\",\n            file_id: Some(\"file-a\"),\n            snapshot_ref: Some(&snapshot_ref),\n            metadata_ref: Some(&metadata_ref),\n            created_at: \"2026-01-01T00:00:00Z\",\n        };\n        let tombstone_change = crate::commit_store::ChangeRef {\n            id: \"change-deleted\",\n            entity_id: &entity_id,\n            schema_key: \"schema\",\n            file_id: None,\n            snapshot_ref: None,\n            metadata_ref: None,\n            created_at: \"2026-01-01T00:00:00Z\",\n        };\n        let live_locator = crate::commit_store::ChangeLocatorRef {\n            source_commit_id: \"commit-a\",\n            source_pack_id: 3,\n            source_ordinal: 5,\n            change_id: \"commit-a:change-live\",\n        };\n        let tombstone_locator = crate::commit_store::ChangeLocatorRef {\n            source_commit_id: \"source-commit\",\n            source_pack_id: 3,\n            source_ordinal: 6,\n            change_id: \"commit-a:borrowed\",\n        };\n        let encoded = encode_delta_pack_refs(\n            \"commit-a\",\n            &[\n                TrackedStateDeltaRef {\n                    change: live_change,\n                    locator: live_locator,\n                    created_at: \"2026-01-01T00:00:00Z\",\n                    updated_at: \"2026-01-02T00:00:00Z\",\n                },\n                TrackedStateDeltaRef {\n                    change: tombstone_change,\n                    locator: tombstone_locator,\n                    created_at: \"2026-01-03T00:00:00Z\",\n                    updated_at: \"2026-01-04T00:00:00Z\",\n                },\n            ],\n        )\n        .expect(\"delta pack should encode\");\n\n        let mut cursor = 5usize;\n        assert_eq!(\n            read_var_sized_string(&encoded, &mut cursor, \"delta pack 
commit_id\")\n                .expect(\"commit id should decode\"),\n            \"commit-a\"\n        );\n        assert_eq!(\n            read_var_u32(&encoded, &mut cursor, \"delta key prefix count\")\n                .expect(\"prefix count should decode\"),\n            2\n        );\n\n        let (decoded_commit_id, decoded) =\n            decode_delta_pack(&encoded, None).expect(\"delta pack should decode\");\n\n        assert_eq!(decoded_commit_id, \"commit-a\");\n        assert_eq!(\n            decoded,\n            vec![\n                TrackedStateDeltaEntry {\n                    key: TrackedStateKey {\n                        schema_key: \"schema\".to_string(),\n                        file_id: Some(\"file-a\".to_string()),\n                        entity_id: entity_id.clone(),\n                    },\n                    value: TrackedStateIndexValue {\n                        change_locator: ChangeLocator {\n                            source_commit_id: \"commit-a\".to_string(),\n                            source_pack_id: 3,\n                            source_ordinal: 5,\n                            change_id: \"commit-a:change-live\".to_string(),\n                        },\n                        deleted: false,\n                        snapshot_ref: Some(snapshot_ref),\n                        metadata_ref: Some(metadata_ref),\n                        created_at: \"2026-01-01T00:00:00Z\".to_string(),\n                        updated_at: \"2026-01-02T00:00:00Z\".to_string(),\n                    },\n                },\n                TrackedStateDeltaEntry {\n                    key: TrackedStateKey {\n                        schema_key: \"schema\".to_string(),\n                        file_id: None,\n                        entity_id,\n                    },\n                    value: TrackedStateIndexValue {\n                        change_locator: ChangeLocator {\n                            source_commit_id: 
\"source-commit\".to_string(),\n                            source_pack_id: 3,\n                            source_ordinal: 6,\n                            change_id: \"commit-a:borrowed\".to_string(),\n                        },\n                        deleted: true,\n                        snapshot_ref: None,\n                        metadata_ref: None,\n                        created_at: \"2026-01-03T00:00:00Z\".to_string(),\n                        updated_at: \"2026-01-04T00:00:00Z\".to_string(),\n                    },\n                },\n            ]\n        );\n    }\n\n    #[test]\n    fn delta_pack_ref_encoder_roundtrips_mixed_json_pack_indexes() {\n        let entity_id = EntityIdentity::single(\"entity-a\");\n        let snapshot_ref = JsonRef::from_hash_bytes([1; 32]);\n        let metadata_ref = JsonRef::from_hash_bytes([2; 32]);\n        let change = crate::commit_store::ChangeRef {\n            id: \"commit-a:change-live\",\n            entity_id: &entity_id,\n            schema_key: \"schema\",\n            file_id: Some(\"file-a\"),\n            snapshot_ref: Some(&snapshot_ref),\n            metadata_ref: Some(&metadata_ref),\n            created_at: \"2026-01-01T00:00:00Z\",\n        };\n        let locator = crate::commit_store::ChangeLocatorRef {\n            source_commit_id: \"commit-a\",\n            source_pack_id: 0,\n            source_ordinal: 0,\n            change_id: \"commit-a:change-live\",\n        };\n        let delta = TrackedStateDeltaRef {\n            change,\n            locator,\n            created_at: \"2026-01-01T00:00:00Z\",\n            updated_at: \"2026-01-01T00:00:00Z\",\n        };\n        let mut pack_indexes = HashMap::new();\n        pack_indexes.insert(*snapshot_ref.as_hash_array(), 1);\n        let pack_refs = vec![JsonRef::from_hash_bytes([9; 32]), snapshot_ref];\n\n        let inline = encode_delta_pack_refs(\"commit-a\", &[delta]).expect(\"inline delta pack\");\n        
assert!(!delta_pack_uses_json_pack_indexes(&inline).expect(\"inline mode should peek\"));\n        let empty_indexes = HashMap::new();\n        let empty_index_pack = encode_delta_pack_refs_with_json_pack_indexes(\n            \"commit-a\",\n            &[delta],\n            Some(&empty_indexes),\n        )\n        .expect(\"empty-index delta pack\");\n        assert_eq!(empty_index_pack, inline);\n        assert!(!delta_pack_uses_json_pack_indexes(&empty_index_pack)\n            .expect(\"empty index mode should peek\"));\n        decode_delta_pack(&empty_index_pack, None).expect(\"empty index pack should decode inline\");\n\n        let mixed = encode_delta_pack_refs_with_json_pack_indexes(\n            \"commit-a\",\n            &[delta],\n            Some(&pack_indexes),\n        )\n        .expect(\"mixed delta pack\");\n        assert!(delta_pack_uses_json_pack_indexes(&mixed).expect(\"mixed mode should peek\"));\n\n        assert!(\n            mixed.len() < inline.len(),\n            \"pack-index refs should be smaller than inline refs\"\n        );\n        assert!(decode_delta_pack(&mixed, None)\n            .expect_err(\"mixed refs require JSON pack refs\")\n            .to_string()\n            .contains(\"needs JSON pack refs\"));\n        let (_, decoded) =\n            decode_delta_pack(&mixed, Some(&pack_refs)).expect(\"mixed delta pack should decode\");\n        assert_eq!(decoded[0].value.snapshot_ref, Some(snapshot_ref));\n        assert_eq!(decoded[0].value.metadata_ref, Some(metadata_ref));\n    }\n\n    #[test]\n    fn delta_pack_stream_decoder_rejects_trailing_entry_bytes() {\n        let entity_id = EntityIdentity::single(\"entity\");\n        let change = crate::commit_store::ChangeRef {\n            id: \"commit-a:change-0\",\n            entity_id: &entity_id,\n            schema_key: \"schema\",\n            file_id: None,\n            snapshot_ref: None,\n            metadata_ref: None,\n            created_at: 
\"2026-01-01T00:00:00Z\",\n        };\n        let locator = crate::commit_store::ChangeLocatorRef {\n            source_commit_id: \"commit-a\",\n            source_pack_id: 0,\n            source_ordinal: 0,\n            change_id: \"commit-a:change-0\",\n        };\n        let mut encoded = encode_delta_pack_refs(\n            \"commit-a\",\n            &[TrackedStateDeltaRef {\n                change,\n                locator,\n                created_at: \"2026-01-01T00:00:00Z\",\n                updated_at: \"2026-01-01T00:00:00Z\",\n            }],\n        )\n        .expect(\"delta pack should encode\");\n\n        let mut cursor = 5usize;\n        let _ = read_var_sized_string(&encoded, &mut cursor, \"delta pack commit_id\")\n            .expect(\"commit id should decode\");\n        assert_eq!(\n            read_var_u32(&encoded, &mut cursor, \"delta key prefix count\")\n                .expect(\"prefix count should decode\"),\n            1\n        );\n        let _ =\n            decode_delta_key_prefix(&encoded, &mut cursor).expect(\"delta key prefix should decode\");\n        encoded[cursor] = 0;\n\n        let error =\n            decode_delta_pack(&encoded, None).expect_err(\"trailing entry bytes should reject\");\n        assert!(\n            error.to_string().contains(\"trailing bytes\"),\n            \"error should mention trailing bytes: {error}\"\n        );\n    }\n\n    #[test]\n    fn delta_pack_rejects_overlong_varint() {\n        let mut encoded = Vec::new();\n        encoded.extend_from_slice(b\"LXTD\");\n        encoded.push(DELTA_PACK_VERSION);\n        encoded.extend_from_slice(&[0x80, 0x80, 0x80, 0x80, 0x80]);\n\n        let error = decode_delta_pack(&encoded, None).expect_err(\"overlong varint should reject\");\n        assert!(\n            error.to_string().contains(\"varint exceeds u32\"),\n            \"error should mention overlong varint: {error}\"\n        );\n    }\n\n    #[test]\n    fn 
delta_pack_rejects_varint_above_u32() {\n        let mut encoded = Vec::new();\n        encoded.extend_from_slice(b\"LXTD\");\n        encoded.push(DELTA_PACK_VERSION);\n        encoded.extend_from_slice(&[0xff, 0xff, 0xff, 0xff, 0x1f]);\n\n        let error = decode_delta_pack(&encoded, None).expect_err(\"too-large varint should reject\");\n        assert!(\n            error.to_string().contains(\"varint exceeds u32\"),\n            \"error should mention oversized varint: {error}\"\n        );\n    }\n\n    #[test]\n    fn delta_pack_rejects_non_canonical_varint() {\n        let mut encoded = Vec::new();\n        encoded.extend_from_slice(b\"LXTD\");\n        encoded.push(DELTA_PACK_VERSION);\n        encoded.extend_from_slice(&[0x80, 0x00]);\n\n        let error =\n            decode_delta_pack(&encoded, None).expect_err(\"non-canonical varint should reject\");\n        assert!(\n            error.to_string().contains(\"non-canonical varint\"),\n            \"error should mention non-canonical varint: {error}\"\n        );\n    }\n\n    #[test]\n    fn delta_key_decoder_rejects_out_of_bounds_prefix_index() {\n        let mut encoded_key = Vec::new();\n        push_var_u32(&mut encoded_key, 1, \"delta key prefix index\").expect(\"prefix index\");\n        push_var_entity_identity(&mut encoded_key, &EntityIdentity::single(\"entity\"))\n            .expect(\"entity identity\");\n\n        let mut cursor = 0usize;\n        let err = decode_delta_key(\n            &encoded_key,\n            &mut cursor,\n            &[DeltaKeyPrefix {\n                schema_key: \"schema\".to_string(),\n                file_id: None,\n            }],\n        )\n        .expect_err(\"out-of-bounds prefix index should reject\");\n\n        assert!(err\n            .to_string()\n            .contains(\"tracked-state delta key prefix index 1 is out of bounds\"));\n    }\n\n    #[test]\n    fn encoded_value_len_matches_encoded_value_bytes() {\n        let values = [\n            
TrackedStateIndexValue {\n                change_locator: ChangeLocator {\n                    source_commit_id: \"commit\".to_string(),\n                    source_pack_id: 0,\n                    source_ordinal: 0,\n                    change_id: \"change\".to_string(),\n                },\n                deleted: false,\n                snapshot_ref: None,\n                metadata_ref: None,\n                created_at: \"2026-01-01T00:00:00Z\".to_string(),\n                updated_at: \"2026-01-02T00:00:00Z\".to_string(),\n            },\n            TrackedStateIndexValue {\n                change_locator: ChangeLocator {\n                    source_commit_id: \"commit\".to_string(),\n                    source_pack_id: 1,\n                    source_ordinal: 2,\n                    change_id: \"change-2\".to_string(),\n                },\n                deleted: true,\n                snapshot_ref: Some(JsonRef::from_hash_bytes([3; 32])),\n                metadata_ref: None,\n                created_at: \"2026-01-01T00:00:00Z\".to_string(),\n                updated_at: \"2026-01-02T00:00:00Z\".to_string(),\n            },\n            TrackedStateIndexValue {\n                change_locator: ChangeLocator {\n                    source_commit_id: \"other\".to_string(),\n                    source_pack_id: 4,\n                    source_ordinal: 8,\n                    change_id: \"change-3\".to_string(),\n                },\n                deleted: false,\n                snapshot_ref: None,\n                metadata_ref: Some(JsonRef::from_hash_bytes([4; 32])),\n                created_at: \"2026-01-01T00:00:00Z\".to_string(),\n                updated_at: \"2026-01-02T00:00:00Z\".to_string(),\n            },\n        ];\n\n        for value in values {\n            assert_eq!(encoded_value_len(&value), encode_value(&value).len());\n        }\n    }\n\n    #[test]\n    fn leaf_node_codec_uses_indexable_offset_table() {\n        let entries = vec![\n        
    EncodedLeafEntry {\n                key: b\"alpha\".to_vec(),\n                value: b\"one\".to_vec(),\n            },\n            EncodedLeafEntry {\n                key: b\"bravo\".to_vec(),\n                value: b\"two-two\".to_vec(),\n            },\n        ];\n\n        let encoded = encode_leaf_node(&entries);\n        assert_eq!(encoded[0], NODE_KIND_LEAF);\n        assert_eq!(encoded[1], NODE_VERSION);\n        assert_eq!(&encoded[2..6], 2u32.to_be_bytes().as_slice());\n        assert_eq!(&encoded[6..10], 0u32.to_be_bytes().as_slice());\n\n        let DecodedNodeRef::Leaf(leaf) = decode_node_ref(&encoded).expect(\"leaf ref\") else {\n            panic!(\"expected leaf node\");\n        };\n        assert_eq!(leaf.len(), 2);\n        assert_eq!(leaf.key(1).expect(\"second key\"), Some(b\"bravo\".as_slice()));\n        let second = leaf\n            .entry(1)\n            .expect(\"second entry\")\n            .expect(\"second entry exists\");\n        assert_eq!(second.key, b\"bravo\");\n        assert_eq!(second.value, b\"two-two\");\n\n        let DecodedNode::Leaf(owned) = decode_node(&encoded).expect(\"owned leaf\") else {\n            panic!(\"expected owned leaf node\");\n        };\n        assert_eq!(owned.entries(), entries.as_slice());\n    }\n\n    #[test]\n    fn leaf_node_codec_roundtrips_empty_leaf() {\n        let encoded = encode_leaf_node(&[]);\n        assert_eq!(encoded.len(), 10);\n\n        let DecodedNodeRef::Leaf(leaf) = decode_node_ref(&encoded).expect(\"leaf ref\") else {\n            panic!(\"expected leaf node\");\n        };\n        assert_eq!(leaf.len(), 0);\n        assert!(leaf.entry(0).expect(\"missing entry\").is_none());\n    }\n\n    #[test]\n    fn leaf_node_codec_rejects_malformed_offsets() {\n        let entries = vec![\n            EncodedLeafEntry {\n                key: b\"alpha\".to_vec(),\n                value: b\"one\".to_vec(),\n            },\n            EncodedLeafEntry {\n                key: 
b\"bravo\".to_vec(),\n                value: b\"two\".to_vec(),\n            },\n        ];\n        let encoded = encode_leaf_node(&entries);\n\n        let mut non_zero_first = encoded.clone();\n        non_zero_first[6..10].copy_from_slice(&1u32.to_be_bytes());\n        assert!(decode_node_ref(&non_zero_first)\n            .expect_err(\"non-zero first offset should reject\")\n            .to_string()\n            .contains(\"offset table must start at zero\"));\n\n        let mut non_monotonic = encoded.clone();\n        non_monotonic[10..14].copy_from_slice(&100u32.to_be_bytes());\n        assert!(decode_node_ref(&non_monotonic)\n            .expect_err(\"non-monotonic offsets should reject\")\n            .to_string()\n            .contains(\"offsets must be monotonic\"));\n\n        let mut short_coverage = encoded;\n        let payload_len = short_coverage.len() - 18;\n        short_coverage[14..18].copy_from_slice(&((payload_len - 1) as u32).to_be_bytes());\n        assert!(decode_node_ref(&short_coverage)\n            .expect_err(\"short offset coverage should reject\")\n            .to_string()\n            .contains(\"offset table does not cover full payload\"));\n    }\n\n    #[test]\n    fn content_hash_is_blake3() {\n        assert_eq!(hash_bytes(b\"abc\"), *blake3::hash(b\"abc\").as_bytes());\n    }\n\n    #[test]\n    fn boundary_decisions_are_xxh3_based_and_deterministic() {\n        let left = boundary_trigger(b\"key\", 0, 4096, 128, 4096);\n        let right = boundary_trigger(b\"key\", 0, 4096, 128, 4096);\n        assert_eq!(left, right);\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/tracked_state/context.rs",
    "content": "use std::collections::{BTreeMap, BTreeSet};\n\nuse crate::commit_store::CommitStoreContext;\nuse crate::storage::{StorageReader, StorageWriteSet};\nuse crate::tracked_state::by_file_index::ByFileIndex;\nuse crate::tracked_state::codec::{encode_key_ref, encode_value_ref};\nuse crate::tracked_state::diff::{diff_commits, TrackedStateDiff, TrackedStateDiffRequest};\nuse crate::tracked_state::materialize_index_entries;\nuse crate::tracked_state::merge::{self, TrackedStateMergePlan};\nuse crate::tracked_state::storage;\nuse crate::tracked_state::storage::DeltaJsonPackIndexesRef;\nuse crate::tracked_state::tree::TrackedStateTree;\nuse crate::tracked_state::types::{\n    TrackedStateIndexValue, TrackedStateKey, TrackedStateKeyRef, TrackedStateMutation,\n    TrackedStateTreeDiffEntry, TrackedStateTreeScanRequest,\n};\nuse crate::tracked_state::{\n    MaterializedTrackedStateRow, TrackedStateDeltaRef, TrackedStateRowRequest,\n    TrackedStateScanRequest,\n};\nuse crate::LixError;\n\n/// Factory for tracked-state readers, delta writers, and projection-root materializers.\n///\n/// Tracked state is stored as content-addressed roots. 
Version refs\n/// choose which commit/root to read; this context only owns root operations.\n#[derive(Clone)]\npub(crate) struct TrackedStateContext {\n    tree: TrackedStateTree,\n    commit_store: CommitStoreContext,\n}\n\nimpl TrackedStateContext {\n    pub(crate) fn new() -> Self {\n        Self {\n            tree: TrackedStateTree::new(),\n            commit_store: CommitStoreContext::new(),\n        }\n    }\n\n    /// Creates a commit-id-addressed tracked-state reader.\n    pub(crate) fn reader<S>(&self, store: S) -> TrackedStateStoreReader<S>\n    where\n        S: StorageReader,\n    {\n        TrackedStateStoreReader {\n            store,\n            tree: self.tree.clone(),\n            commit_store: self.commit_store,\n        }\n    }\n\n    /// Creates a tracked-state writer over a caller-owned transaction and write set.\n    pub(crate) fn writer<'a, S>(\n        &'a self,\n        store: &'a mut S,\n        writes: &'a mut StorageWriteSet,\n    ) -> TrackedStateWriter<'a, S>\n    where\n        S: StorageReader + ?Sized,\n    {\n        TrackedStateWriter {\n            tree: self.tree.clone(),\n            store,\n            writes,\n        }\n    }\n\n    /// Creates an explicit tracked-state projection-root materializer.\n    ///\n    /// Normal commits should use `writer(...).stage_delta(...)`. 
Materializing a\n    /// projection root is a caller-chosen maintenance/read-acceleration step.\n    pub(crate) fn materializer<'a, S>(\n        &'a self,\n        store: &'a mut S,\n        writes: &'a mut StorageWriteSet,\n        commit_store: &'a CommitStoreContext,\n    ) -> TrackedStateMaterializer<'a, S>\n    where\n        S: StorageReader + ?Sized,\n    {\n        TrackedStateMaterializer {\n            tracked_state: self,\n            store,\n            writes,\n            commit_store,\n        }\n    }\n}\n\n/// Store-backed tracked-state reader created by `TrackedStateContext`.\npub(crate) struct TrackedStateStoreReader<S> {\n    store: S,\n    tree: TrackedStateTree,\n    commit_store: CommitStoreContext,\n}\n\nimpl<S> TrackedStateStoreReader<S>\nwhere\n    S: StorageReader,\n{\n    pub(crate) async fn scan_rows_at_commit(\n        &mut self,\n        commit_id: &str,\n        request: &TrackedStateScanRequest,\n    ) -> Result<Vec<MaterializedTrackedStateRow>, LixError> {\n        let root_id = self.tree.load_root(&mut self.store, commit_id).await?;\n        let rows = if let Some(root_id) = root_id {\n            if ByFileIndex::should_use(request) {\n                if let Some(by_file_root_id) =\n                    storage::load_by_file_root(&mut self.store, commit_id).await?\n                {\n                    self.scan_rows_at_commit_by_file_index(&root_id, &by_file_root_id, request)\n                        .await?\n                } else {\n                    self.tree\n                        .scan(\n                            &mut self.store,\n                            &root_id,\n                            &tree_scan_request_from_tracked(request),\n                        )\n                        .await?\n                }\n            } else {\n                self.tree\n                    .scan(\n                        &mut self.store,\n                        &root_id,\n                        
&tree_scan_request_from_tracked(request),\n                    )\n                    .await?\n            }\n        } else {\n            self.projection_entries_at_commit(commit_id, &tree_scan_request_from_tracked(request))\n                .await?\n        };\n        let projection = crate::tracked_state::TrackedMaterializationProjection::from_columns(\n            &request.projection.columns,\n        );\n        let mut rows = materialize_index_entries(&mut self.store, rows, &projection).await?;\n        if !request.filter.include_tombstones {\n            rows.retain(|row| !row.deleted);\n        }\n        if let Some(limit) = request.limit {\n            rows.truncate(limit);\n        }\n        Ok(rows)\n    }\n\n    pub(crate) async fn load_rows_at_commit(\n        &mut self,\n        commit_id: &str,\n        requests: &[TrackedStateRowRequest],\n    ) -> Result<Vec<Option<MaterializedTrackedStateRow>>, LixError> {\n        if requests.is_empty() {\n            return Ok(Vec::new());\n        }\n        let keys = requests\n            .iter()\n            .map(tracked_key_from_request)\n            .collect::<Result<Vec<_>, _>>()?;\n        let values = self\n            .projection_values_at_commit_for_keys(commit_id, &keys)\n            .await?;\n        let mut entry_indices = Vec::new();\n        let mut entries = Vec::new();\n        for (index, (key, value)) in keys.into_iter().zip(values).enumerate() {\n            if let Some(value) = value {\n                entry_indices.push(index);\n                entries.push((key, value));\n            }\n        }\n        let materialized = materialize_index_entries(\n            &mut self.store,\n            entries,\n            &crate::tracked_state::TrackedMaterializationProjection::full(),\n        )\n        .await?;\n        let mut rows = vec![None; requests.len()];\n        for (index, row) in entry_indices.into_iter().zip(materialized) {\n            rows[index] = Some(row);\n        }\n     
   Ok(rows)\n    }\n\n    pub(crate) async fn diff_commits(\n        &mut self,\n        left_commit_id: &str,\n        right_commit_id: &str,\n        request: &TrackedStateDiffRequest,\n    ) -> Result<TrackedStateDiff, LixError> {\n        diff_commits(self, left_commit_id, right_commit_id, request).await\n    }\n\n    pub(crate) async fn diff_tree_entries_at_commits(\n        &mut self,\n        left_commit_id: &str,\n        right_commit_id: &str,\n        request: &TrackedStateTreeScanRequest,\n    ) -> Result<Vec<crate::tracked_state::types::TrackedStateTreeDiffEntry>, LixError> {\n        if !self.projection_has_pending_deltas(left_commit_id).await?\n            && !self.projection_has_pending_deltas(right_commit_id).await?\n            && self.projection_root_exists(left_commit_id).await?\n            && self.projection_root_exists(right_commit_id).await?\n        {\n            let left_root = self.tree.load_root(&mut self.store, left_commit_id).await?;\n            let right_root = self\n                .tree\n                .load_root(&mut self.store, right_commit_id)\n                .await?;\n            let entries = self\n                .tree\n                .diff(\n                    &mut self.store,\n                    left_root.as_ref(),\n                    right_root.as_ref(),\n                    request,\n                )\n                .await?;\n            return Ok(entries);\n        }\n\n        if let Some(entries) = self\n            .diff_pending_delta_suffix(left_commit_id, right_commit_id, request)\n            .await?\n        {\n            return Ok(entries);\n        }\n\n        let left = self\n            .projection_entries_at_commit(left_commit_id, request)\n            .await?\n            .into_iter()\n            .collect::<BTreeMap<_, _>>();\n        let right = self\n            .projection_entries_at_commit(right_commit_id, request)\n            .await?\n            .into_iter()\n            
.collect::<BTreeMap<_, _>>();\n        let keys = left\n            .keys()\n            .chain(right.keys())\n            .cloned()\n            .collect::<BTreeSet<_>>();\n        let entries = keys\n            .into_iter()\n            .filter_map(|key| {\n                let before = left.get(&key).cloned().map(|value| (key.clone(), value));\n                let after = right.get(&key).cloned().map(|value| (key, value));\n                if before == after {\n                    None\n                } else {\n                    Some(TrackedStateTreeDiffEntry { before, after })\n                }\n            })\n            .collect();\n        Ok(entries)\n    }\n\n    async fn diff_pending_delta_suffix(\n        &mut self,\n        left_commit_id: &str,\n        right_commit_id: &str,\n        request: &TrackedStateTreeScanRequest,\n    ) -> Result<Option<Vec<TrackedStateTreeDiffEntry>>, LixError> {\n        let left_delta_ids = self\n            .delta_commit_ids_since_projection_root(left_commit_id)\n            .await?;\n        let right_delta_ids = self\n            .delta_commit_ids_since_projection_root(right_commit_id)\n            .await?;\n        let left_base_commit_id = self\n            .projection_base_commit_id(left_commit_id, &left_delta_ids)\n            .await?;\n        let right_base_commit_id = self\n            .projection_base_commit_id(right_commit_id, &right_delta_ids)\n            .await?;\n        if left_base_commit_id != right_base_commit_id {\n            return Ok(None);\n        }\n\n        if right_delta_ids.starts_with(&left_delta_ids) {\n            let suffix = &right_delta_ids[left_delta_ids.len()..];\n            return self\n                .diff_pending_delta_suffix_from_base(left_commit_id, suffix, request, true)\n                .await\n                .map(Some);\n        }\n\n        if left_delta_ids.starts_with(&right_delta_ids) {\n            let suffix = &left_delta_ids[right_delta_ids.len()..];\n           
 return self\n                .diff_pending_delta_suffix_from_base(right_commit_id, suffix, request, false)\n                .await\n                .map(Some);\n        }\n\n        Ok(None)\n    }\n\n    async fn diff_pending_delta_suffix_from_base(\n        &mut self,\n        base_commit_id: &str,\n        suffix_commit_ids: &[String],\n        request: &TrackedStateTreeScanRequest,\n        suffix_is_after: bool,\n    ) -> Result<Vec<TrackedStateTreeDiffEntry>, LixError> {\n        if suffix_commit_ids.is_empty() {\n            return Ok(Vec::new());\n        }\n\n        let mut changed = BTreeMap::<TrackedStateKey, TrackedStateIndexValue>::new();\n        for commit_id in suffix_commit_ids {\n            let Some(delta_entries) = storage::load_delta_pack(&mut self.store, commit_id).await?\n            else {\n                continue;\n            };\n            for delta in delta_entries {\n                if request.matches_key(&delta.key) {\n                    changed.insert(delta.key, delta.value);\n                }\n            }\n        }\n\n        if changed.is_empty() {\n            return Ok(Vec::new());\n        }\n\n        let keys = changed.keys().cloned().collect::<Vec<_>>();\n        let base_values = self\n            .projection_values_at_commit_for_keys(base_commit_id, &keys)\n            .await?;\n        let entries = keys\n            .into_iter()\n            .zip(base_values)\n            .filter_map(|(key, base_value)| {\n                let changed_value = changed.get(&key).cloned();\n                let (before_value, after_value) = if suffix_is_after {\n                    (base_value, changed_value)\n                } else {\n                    (changed_value, base_value)\n                };\n                if before_value == after_value {\n                    return None;\n                }\n                Some(TrackedStateTreeDiffEntry {\n                    before: before_value.map(|value| (key.clone(), value)),\n       
             after: after_value.map(|value| (key, value)),\n                })\n            })\n            .collect();\n        Ok(entries)\n    }\n\n    pub(crate) async fn materialize_tree_values(\n        &mut self,\n        entries: Vec<(TrackedStateKey, TrackedStateIndexValue)>,\n    ) -> Result<Vec<MaterializedTrackedStateRow>, LixError> {\n        materialize_index_entries(\n            &mut self.store,\n            entries,\n            &crate::tracked_state::TrackedMaterializationProjection::full(),\n        )\n        .await\n    }\n\n    async fn scan_rows_at_commit_by_file_index(\n        &mut self,\n        primary_root_id: &crate::tracked_state::types::TrackedStateRootId,\n        by_file_root_id: &crate::tracked_state::types::TrackedStateRootId,\n        request: &TrackedStateScanRequest,\n    ) -> Result<Vec<(TrackedStateKey, TrackedStateIndexValue)>, LixError> {\n        let by_file_request = ByFileIndex::scan_request_from_tracked(request);\n        let index_match_count = self\n            .tree\n            .count_matching_keys(&mut self.store, by_file_root_id, &by_file_request)\n            .await?;\n        let primary_row_count = self\n            .tree\n            .row_count(&mut self.store, primary_root_id)\n            .await?;\n        if index_match_count * 20 > primary_row_count {\n            let rows = self\n                .tree\n                .scan(\n                    &mut self.store,\n                    primary_root_id,\n                    &tree_scan_request_from_tracked(request),\n                )\n                .await?;\n            return Ok(rows);\n        }\n        let index_rows = self\n            .tree\n            .scan(&mut self.store, by_file_root_id, &by_file_request)\n            .await?;\n        let mut rows = Vec::new();\n        let tree_request = tree_scan_request_from_tracked(request);\n        let needs_payloads = scan_needs_json_payloads(request);\n        if needs_payloads {\n            let mut 
primary_keys = Vec::with_capacity(index_rows.len());\n            for (index_key, _) in index_rows {\n                if let Some(primary_key) = ByFileIndex::primary_key_from_index_key(index_key) {\n                    primary_keys.push(primary_key);\n                }\n            }\n            let primary_values = self\n                .tree\n                .get_many(&mut self.store, primary_root_id, &primary_keys)\n                .await?;\n            for (primary_key, value) in primary_keys.into_iter().zip(primary_values) {\n                let Some(value) = value else {\n                    continue;\n                };\n                if !tree_request.matches(&primary_key, &value) {\n                    continue;\n                }\n                rows.push((primary_key, value));\n            }\n            return Ok(rows);\n        }\n\n        for (index_key, index_value) in index_rows {\n            let Some(primary_key) = ByFileIndex::primary_key_from_index_key(index_key) else {\n                continue;\n            };\n            let value = index_value;\n            if tree_request.matches(&primary_key, &value) {\n                rows.push((primary_key, value));\n            }\n        }\n        Ok(rows)\n    }\n\n    async fn projection_root_exists(&mut self, commit_id: &str) -> Result<bool, LixError> {\n        Ok(self\n            .tree\n            .load_root(&mut self.store, commit_id)\n            .await?\n            .is_some())\n    }\n\n    async fn projection_has_pending_deltas(&mut self, commit_id: &str) -> Result<bool, LixError> {\n        Ok(!self\n            .delta_commit_ids_since_projection_root(commit_id)\n            .await?\n            .is_empty())\n    }\n\n    async fn projection_entries_at_commit(\n        &mut self,\n        commit_id: &str,\n        request: &TrackedStateTreeScanRequest,\n    ) -> Result<Vec<(TrackedStateKey, TrackedStateIndexValue)>, LixError> {\n        let delta_commit_ids = self\n            
.delta_commit_ids_since_projection_root(commit_id)\n            .await?;\n        let base_commit_id = self\n            .projection_base_commit_id(commit_id, &delta_commit_ids)\n            .await?;\n        if base_commit_id.is_none() && delta_commit_ids.len() == 1 {\n            return self\n                .single_delta_pack_entries(&delta_commit_ids[0], request)\n                .await;\n        }\n        let mut entries = if let Some(base_commit_id) = base_commit_id {\n            let root_id = self\n                .tree\n                .load_root(&mut self.store, &base_commit_id)\n                .await?\n                .ok_or_else(|| {\n                    LixError::new(\n                        LixError::CODE_INTERNAL_ERROR,\n                        format!(\n                            \"tracked_state projection base root '{base_commit_id}' disappeared\"\n                        ),\n                    )\n                })?;\n            self.tree\n                .scan(&mut self.store, &root_id, request)\n                .await?\n                .into_iter()\n                .collect::<BTreeMap<_, _>>()\n        } else {\n            BTreeMap::new()\n        };\n        self.apply_delta_packs_to_entries(&delta_commit_ids, Some(request), &mut entries)\n            .await?;\n        Ok(entries.into_iter().collect())\n    }\n\n    async fn single_delta_pack_entries(\n        &mut self,\n        commit_id: &str,\n        request: &TrackedStateTreeScanRequest,\n    ) -> Result<Vec<(TrackedStateKey, TrackedStateIndexValue)>, LixError> {\n        let Some(delta_entries) = storage::load_delta_pack(&mut self.store, commit_id).await?\n        else {\n            return Ok(Vec::new());\n        };\n        let mut rows = delta_entries\n            .into_iter()\n            .enumerate()\n            .filter_map(|(ordinal, delta)| {\n                request\n                    .matches_key(&delta.key)\n                    .then_some((ordinal, delta.key, 
delta.value))\n            })\n            .collect::<Vec<_>>();\n        rows.sort_by(|left, right| left.1.cmp(&right.1).then(left.0.cmp(&right.0)));\n\n        let mut out = Vec::new();\n        let mut rows = rows.into_iter().peekable();\n        while let Some((_, key, mut value)) = rows.next() {\n            while rows.peek().is_some_and(|(_, next_key, _)| next_key == &key) {\n                let (_, _, next_value) = rows\n                    .next()\n                    .expect(\"peek confirmed duplicate delta entry exists\");\n                value = next_value;\n            }\n            if !request.include_tombstones && value.deleted {\n                continue;\n            }\n            out.push((key, value));\n        }\n        Ok(out)\n    }\n\n    async fn projection_values_at_commit_for_keys(\n        &mut self,\n        commit_id: &str,\n        keys: &[TrackedStateKey],\n    ) -> Result<Vec<Option<TrackedStateIndexValue>>, LixError> {\n        let delta_commit_ids = self\n            .delta_commit_ids_since_projection_root(commit_id)\n            .await?;\n        let base_commit_id = self\n            .projection_base_commit_id(commit_id, &delta_commit_ids)\n            .await?;\n        let mut entries = if let Some(base_commit_id) = base_commit_id {\n            let root_id = self\n                .tree\n                .load_root(&mut self.store, &base_commit_id)\n                .await?\n                .ok_or_else(|| {\n                    LixError::new(\n                        LixError::CODE_INTERNAL_ERROR,\n                        format!(\n                            \"tracked_state projection base root '{base_commit_id}' disappeared\"\n                        ),\n                    )\n                })?;\n            let values = self.tree.get_many(&mut self.store, &root_id, keys).await?;\n            keys.iter()\n                .cloned()\n                .zip(values)\n                .filter_map(|(key, value)| value.map(|value| 
(key, value)))\n                .collect::<BTreeMap<_, _>>()\n        } else {\n            BTreeMap::new()\n        };\n        let key_filter = keys.iter().cloned().collect::<BTreeSet<_>>();\n        self.apply_delta_packs_to_entries_for_keys(&delta_commit_ids, &key_filter, &mut entries)\n            .await?;\n        Ok(keys.iter().map(|key| entries.get(key).cloned()).collect())\n    }\n\n    async fn projection_base_commit_id(\n        &mut self,\n        commit_id: &str,\n        delta_commit_ids: &[String],\n    ) -> Result<Option<String>, LixError> {\n        if delta_commit_ids.is_empty() {\n            return Ok(if self.projection_root_exists(commit_id).await? {\n                Some(commit_id.to_string())\n            } else {\n                None\n            });\n        }\n        let Some(first_delta_commit_id) = delta_commit_ids.first() else {\n            return Ok(None);\n        };\n        let commit = self\n            .commit_store\n            .load_commit_from(&mut self.store, first_delta_commit_id)\n            .await?\n            .ok_or_else(|| missing_commit_error(first_delta_commit_id))?;\n        let Some(parent_id) = commit.parent_ids.first() else {\n            return Ok(None);\n        };\n        Ok(if self.projection_root_exists(parent_id).await? 
{\n            Some(parent_id.clone())\n        } else {\n            None\n        })\n    }\n\n    async fn delta_commit_ids_since_projection_root(\n        &mut self,\n        commit_id: &str,\n    ) -> Result<Vec<String>, LixError> {\n        let mut out = Vec::new();\n        let mut seen = BTreeSet::new();\n        let mut current = Some(commit_id.to_string());\n        while let Some(current_id) = current {\n            if !seen.insert(current_id.clone()) {\n                return Err(LixError::new(\n                    LixError::CODE_INTERNAL_ERROR,\n                    format!(\"tracked_state projection found first-parent cycle at '{current_id}'\"),\n                ));\n            }\n            if self\n                .tree\n                .load_root(&mut self.store, &current_id)\n                .await?\n                .is_some()\n            {\n                break;\n            }\n            if storage::delta_pack_exists(&mut self.store, &current_id).await? {\n                out.push(current_id.clone());\n            }\n            let commit = self\n                .commit_store\n                .load_commit_from(&mut self.store, &current_id)\n                .await?\n                .ok_or_else(|| missing_commit_error(&current_id))?;\n            current = commit.parent_ids.first().cloned();\n        }\n        out.reverse();\n        Ok(out)\n    }\n\n    async fn apply_delta_packs_to_entries(\n        &mut self,\n        commit_ids: &[String],\n        request: Option<&TrackedStateTreeScanRequest>,\n        entries: &mut BTreeMap<TrackedStateKey, TrackedStateIndexValue>,\n    ) -> Result<(), LixError> {\n        for commit_id in commit_ids {\n            let Some(delta_entries) = storage::load_delta_pack(&mut self.store, commit_id).await?\n            else {\n                continue;\n            };\n            for delta in delta_entries {\n                if let Some(request) = request {\n                    if 
!request.matches_key(&delta.key) {\n                        continue;\n                    }\n                    if !request.include_tombstones && delta.value.deleted {\n                        entries.remove(&delta.key);\n                        continue;\n                    }\n                    entries.insert(delta.key, delta.value);\n                } else {\n                    entries.insert(delta.key, delta.value);\n                }\n            }\n        }\n        Ok(())\n    }\n\n    async fn apply_delta_packs_to_entries_for_keys(\n        &mut self,\n        commit_ids: &[String],\n        keys: &BTreeSet<TrackedStateKey>,\n        entries: &mut BTreeMap<TrackedStateKey, TrackedStateIndexValue>,\n    ) -> Result<(), LixError> {\n        for commit_id in commit_ids {\n            let Some(delta_entries) = storage::load_delta_pack(&mut self.store, commit_id).await?\n            else {\n                continue;\n            };\n            for delta in delta_entries {\n                if keys.contains(&delta.key) {\n                    entries.insert(delta.key, delta.value);\n                }\n            }\n        }\n        Ok(())\n    }\n\n    /// Plans a three-way merge by diffing both heads against the same base.\n    ///\n    /// `target_commit_id` is the destination root that should keep its own\n    /// changes. 
`source_commit_id` is the incoming root whose non-conflicting\n    /// changes should be applied.\n    #[allow(dead_code)]\n    pub(crate) async fn plan_merge(\n        &mut self,\n        base_commit_id: &str,\n        target_commit_id: &str,\n        source_commit_id: &str,\n        request: &TrackedStateDiffRequest,\n    ) -> Result<TrackedStateMergePlan, LixError> {\n        let target_diff = self\n            .diff_commits(base_commit_id, target_commit_id, request)\n            .await?;\n        let source_diff = self\n            .diff_commits(base_commit_id, source_commit_id, request)\n            .await?;\n        merge::plan_merge(&target_diff, &source_diff)\n    }\n}\n\n/// Writer for commit-store-backed tracked-state projection roots.\npub(crate) struct TrackedStateWriter<'a, S: ?Sized> {\n    tree: TrackedStateTree,\n    store: &'a mut S,\n    writes: &'a mut StorageWriteSet,\n}\n\n/// Explicit projection-root materializer created by `TrackedStateContext`.\npub(crate) struct TrackedStateMaterializer<'a, S: ?Sized> {\n    pub(super) tracked_state: &'a TrackedStateContext,\n    pub(super) store: &'a mut S,\n    pub(super) writes: &'a mut StorageWriteSet,\n    pub(super) commit_store: &'a CommitStoreContext,\n}\n\nimpl<S> TrackedStateMaterializer<'_, S>\nwhere\n    S: StorageReader + ?Sized,\n{\n    pub(crate) async fn materialize_root_at(\n        &mut self,\n        commit_id: &str,\n    ) -> Result<TrackedStateWriteReport, LixError> {\n        crate::tracked_state::materializer::materialize_root_at(self, commit_id).await\n    }\n}\n\nimpl<S> TrackedStateWriter<'_, S>\nwhere\n    S: StorageReader + ?Sized,\n{\n    /// Stages one tracked-state projection delta for `commit_id`.\n    pub(crate) async fn stage_delta(\n        &mut self,\n        commit_id: &str,\n        _parent_commit_id: Option<&str>,\n        deltas: &[TrackedStateDeltaRef<'_>],\n    ) -> Result<TrackedStateWriteReport, LixError> {\n        storage::stage_delta_pack_refs(self.writes, 
commit_id, deltas)?;\n        Ok(TrackedStateWriteReport {\n            commit_id: commit_id.to_string(),\n            changed_rows: deltas.len(),\n            primary_chunk_puts: 0,\n            by_file_chunk_puts: 0,\n        })\n    }\n\n    pub(crate) async fn stage_delta_with_json_pack_indexes(\n        &mut self,\n        commit_id: &str,\n        _parent_commit_id: Option<&str>,\n        deltas: &[TrackedStateDeltaRef<'_>],\n        json_pack_indexes: DeltaJsonPackIndexesRef<'_>,\n    ) -> Result<TrackedStateWriteReport, LixError> {\n        storage::stage_delta_pack_refs_with_json_pack_indexes(\n            self.writes,\n            commit_id,\n            deltas,\n            json_pack_indexes,\n        )?;\n        Ok(TrackedStateWriteReport {\n            commit_id: commit_id.to_string(),\n            changed_rows: deltas.len(),\n            primary_chunk_puts: 0,\n            by_file_chunk_puts: 0,\n        })\n    }\n\n    pub(crate) async fn stage_projection_root<'a, I>(\n        &mut self,\n        commit_id: &str,\n        parent_commit_id: Option<&str>,\n        deltas: I,\n    ) -> Result<TrackedStateWriteReport, LixError>\n    where\n        I: IntoIterator<Item = TrackedStateDeltaRef<'a>>,\n    {\n        let deltas = deltas.into_iter().collect::<Vec<_>>();\n        let base_root = match parent_commit_id {\n            Some(parent_commit_id) => {\n                let Some(root) = self.tree.load_root(self.store, parent_commit_id).await? 
else {\n                    return Err(LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        format!(\n                            \"tracked-state parent root for commit '{parent_commit_id}' is missing\"\n                        ),\n                    ));\n                };\n                Some(root)\n            }\n            None => None,\n        };\n        let mut mutations = Vec::with_capacity(deltas.len());\n        for delta in &deltas {\n            let key = TrackedStateKeyRef {\n                schema_key: delta.change.schema_key,\n                file_id: delta.change.file_id,\n                entity_id: delta.change.entity_id,\n            };\n            let value = crate::tracked_state::types::TrackedStateIndexValueRef {\n                change_locator: delta.locator,\n                deleted: delta.change.snapshot_ref.is_none(),\n                snapshot_ref: delta.change.snapshot_ref,\n                metadata_ref: delta.change.metadata_ref,\n                created_at: delta.created_at,\n                updated_at: delta.updated_at,\n            };\n            mutations.push(TrackedStateMutation::put_encoded(\n                encode_key_ref(key),\n                encode_value_ref(value),\n            ));\n        }\n        let result = self\n            .tree\n            .apply_mutations(\n                self.store,\n                self.writes,\n                base_root.as_ref(),\n                mutations,\n                Some(commit_id),\n            )\n            .await?;\n\n        let by_file_base_root = match parent_commit_id {\n            Some(parent_commit_id) => {\n                storage::load_by_file_root(self.store, parent_commit_id).await?\n            }\n            None => None,\n        };\n        let concrete_file_deltas = deltas\n            .iter()\n            .filter(|delta| delta.change.file_id.is_some())\n            .collect::<Vec<_>>();\n        let by_file_chunk_puts = if 
concrete_file_deltas.is_empty() {\n            if let Some(by_file_base_root) = by_file_base_root.as_ref() {\n                storage::stage_by_file_root(self.writes, commit_id, by_file_base_root);\n            }\n            0\n        } else {\n            let mut by_file_mutations = Vec::with_capacity(concrete_file_deltas.len());\n            for delta in concrete_file_deltas {\n                let key = TrackedStateKeyRef {\n                    schema_key: delta.change.schema_key,\n                    file_id: delta.change.file_id,\n                    entity_id: delta.change.entity_id,\n                };\n                let header_value = crate::tracked_state::types::TrackedStateIndexValueRef {\n                    change_locator: delta.locator,\n                    deleted: delta.change.snapshot_ref.is_none(),\n                    snapshot_ref: None,\n                    metadata_ref: None,\n                    created_at: delta.created_at,\n                    updated_at: delta.updated_at,\n                };\n                by_file_mutations.push(TrackedStateMutation::put_encoded(\n                    ByFileIndex::encode_key_ref(key),\n                    ByFileIndex::encode_header_value_ref(header_value),\n                ));\n            }\n            let by_file_result = self\n                .tree\n                .apply_mutations(\n                    self.store,\n                    self.writes,\n                    by_file_base_root.as_ref(),\n                    by_file_mutations,\n                    None,\n                )\n                .await?;\n            storage::stage_by_file_root(self.writes, commit_id, &by_file_result.root_id);\n            by_file_result.chunk_count\n        };\n        Ok(TrackedStateWriteReport {\n            commit_id: commit_id.to_string(),\n            changed_rows: deltas.len(),\n            primary_chunk_puts: result.chunk_count,\n            by_file_chunk_puts,\n        })\n    }\n}\n\n#[derive(Debug, 
Clone, PartialEq, Eq)]\npub(crate) struct TrackedStateWriteReport {\n    pub(crate) commit_id: String,\n    pub(crate) changed_rows: usize,\n    pub(crate) primary_chunk_puts: usize,\n    pub(crate) by_file_chunk_puts: usize,\n}\n\nfn missing_commit_error(commit_id: &str) -> LixError {\n    LixError::new(\n        LixError::CODE_INTERNAL_ERROR,\n        format!(\"tracked_state projection references missing commit '{commit_id}'\"),\n    )\n}\n\nfn tree_scan_request_from_tracked(\n    request: &TrackedStateScanRequest,\n) -> TrackedStateTreeScanRequest {\n    TrackedStateTreeScanRequest {\n        schema_keys: request.filter.schema_keys.clone(),\n        entity_ids: request.filter.entity_ids.clone(),\n        file_ids: request.filter.file_ids.clone(),\n        include_tombstones: request.filter.include_tombstones,\n        // User limits belong above delta overlay and tombstone visibility.\n        // Pushing them into the physical tree can stop on rows that are later\n        // hidden, returning too few live rows.\n        limit: None,\n    }\n}\n\nfn scan_needs_json_payloads(request: &TrackedStateScanRequest) -> bool {\n    if request.projection.columns.is_empty() {\n        return true;\n    }\n    request\n        .projection\n        .columns\n        .iter()\n        .any(|column| column == \"snapshot_content\" || column == \"metadata\")\n}\n\nfn tracked_key_from_request(request: &TrackedStateRowRequest) -> Result<TrackedStateKey, LixError> {\n    let file_id = match &request.file_id {\n        crate::NullableKeyFilter::Null => None,\n        crate::NullableKeyFilter::Value(value) => Some(value.clone()),\n        crate::NullableKeyFilter::Any => {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state tree exact lookup requires a concrete file_id filter\",\n            ))\n        }\n    };\n    Ok(TrackedStateKey {\n        schema_key: request.schema_key.clone(),\n        file_id,\n        entity_id: 
request.entity_id.clone(),\n    })\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use super::*;\n    use crate::backend::{testing::UnitTestBackend, Backend};\n    use crate::storage::{StorageContext, StorageWriteTransaction};\n    use crate::NullableKeyFilter;\n\n    #[tokio::test]\n    async fn stage_delta_does_not_require_parent_projection_root() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let tracked_state = TrackedStateContext::new();\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"commit-child\",\n            Some(\"missing-parent\"),\n            &[row(\"entity-child\", \"change-child\", \"commit-child\")],\n        )\n        .await\n        .expect(\"delta pack staging should not require a parent projection root\");\n    }\n\n    #[tokio::test]\n    async fn plan_merge_from_roots_applies_source_only_change() {\n        let (storage, tracked_state) = seed_merge_roots(\n            &[row_with_value(\"entity-a\", \"change-base\", \"base\", \"base\")],\n            &[row_with_value(\"entity-a\", \"change-base\", \"base\", \"base\")],\n            &[row_with_value(\n                \"entity-a\",\n                \"change-source\",\n                \"source\",\n                \"source\",\n            )],\n        )\n        .await;\n\n        let plan = tracked_state\n            .reader(storage.clone())\n            .plan_merge(\n                \"base\",\n                \"target\",\n                \"source\",\n                &TrackedStateDiffRequest::default(),\n            )\n            .await\n            .expect(\"merge should plan\");\n\n        assert_eq!(merge_patch_ids(&plan), vec![\"entity-a\"]);\n      
  assert!(plan.conflicts.is_empty());\n    }\n\n    #[tokio::test]\n    async fn plan_merge_from_roots_keeps_target_only_change() {\n        let (storage, tracked_state) = seed_merge_roots(\n            &[row(\"entity-a\", \"change-base\", \"base\")],\n            &[row(\"entity-a\", \"change-target\", \"target\")],\n            &[row(\"entity-a\", \"change-base\", \"base\")],\n        )\n        .await;\n\n        let plan = tracked_state\n            .reader(storage.clone())\n            .plan_merge(\n                \"base\",\n                \"target\",\n                \"source\",\n                &TrackedStateDiffRequest::default(),\n            )\n            .await\n            .expect(\"merge should plan\");\n\n        assert!(plan.patches.is_empty());\n        assert!(plan.conflicts.is_empty());\n    }\n\n    #[tokio::test]\n    async fn plan_merge_from_roots_reports_divergent_modification_conflict() {\n        let (storage, tracked_state) = seed_merge_roots(\n            &[row_with_value(\"entity-a\", \"change-base\", \"base\", \"base\")],\n            &[row_with_value(\n                \"entity-a\",\n                \"change-target\",\n                \"target\",\n                \"target\",\n            )],\n            &[row_with_value(\n                \"entity-a\",\n                \"change-source\",\n                \"source\",\n                \"source\",\n            )],\n        )\n        .await;\n\n        let plan = tracked_state\n            .reader(storage.clone())\n            .plan_merge(\n                \"base\",\n                \"target\",\n                \"source\",\n                &TrackedStateDiffRequest::default(),\n            )\n            .await\n            .expect(\"merge should plan\");\n\n        assert!(plan.patches.is_empty());\n        assert_eq!(merge_conflict_ids(&plan), vec![\"entity-a\"]);\n    }\n\n    #[tokio::test]\n    async fn plan_merge_from_roots_applies_source_tombstone() {\n        let (storage, 
tracked_state) = seed_merge_roots(\n            &[row(\"entity-a\", \"change-base\", \"base\")],\n            &[row(\"entity-a\", \"change-base\", \"base\")],\n            &[tombstone(\"entity-a\", \"change-source-delete\", \"source\")],\n        )\n        .await;\n\n        let plan = tracked_state\n            .reader(storage.clone())\n            .plan_merge(\n                \"base\",\n                \"target\",\n                \"source\",\n                &TrackedStateDiffRequest::default(),\n            )\n            .await\n            .expect(\"merge should plan\");\n\n        assert_eq!(merge_patch_ids(&plan), vec![\"entity-a\"]);\n        assert_eq!(plan.patches[0].projected_row().snapshot_content, None);\n        assert_eq!(plan.patches[0].change_id(), \"change-source-delete\");\n    }\n\n    #[tokio::test]\n    async fn scan_rows_by_file_uses_file_index_shape() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let mut file_a = row(\"entity-a\", \"change-a\", \"commit-1\");\n        file_a.file_id = Some(\"file-a.json\".to_string());\n        let mut file_b = row(\"entity-b\", \"change-b\", \"commit-1\");\n        file_b.file_id = Some(\"file-b.json\".to_string());\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"commit-1\",\n            None,\n            &[file_a, file_b],\n        )\n        .await\n        .expect(\"root should write\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let rows = tracked_state\n            .reader(storage.clone())\n            .scan_rows_at_commit(\n                \"commit-1\",\n               
 &TrackedStateScanRequest {\n                    filter: crate::tracked_state::TrackedStateFilter {\n                        file_ids: vec![NullableKeyFilter::Value(\"file-a.json\".to_string())],\n                        ..Default::default()\n                    },\n                    ..Default::default()\n                },\n            )\n            .await\n            .expect(\"file scan should read through index\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(\n            rows[0]\n                .entity_id\n                .as_single_string_owned()\n                .expect(\"entity id\"),\n            \"entity-a\"\n        );\n        assert_eq!(rows[0].file_id.as_deref(), Some(\"file-a.json\"));\n    }\n\n    #[tokio::test]\n    async fn by_file_header_index_fetches_primary_payload_only_when_requested() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let mut row = row(\"entity-a\", \"change-a\", \"commit-1\");\n        row.file_id = Some(\"file-a.json\".to_string());\n        let expected_snapshot = row.snapshot_content.clone();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"commit-1\",\n            None,\n            std::slice::from_ref(&row),\n        )\n        .await\n        .expect(\"root should write\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let mut reader = tracked_state.reader(storage.clone());\n        let header_rows = reader\n            .scan_rows_at_commit(\n                \"commit-1\",\n                &TrackedStateScanRequest {\n                    filter: 
crate::tracked_state::TrackedStateFilter {\n                        file_ids: vec![NullableKeyFilter::Value(\"file-a.json\".to_string())],\n                        ..Default::default()\n                    },\n                    projection: crate::tracked_state::TrackedStateProjection {\n                        columns: vec![\"entity_id\".to_string()],\n                    },\n                    ..Default::default()\n                },\n            )\n            .await\n            .expect(\"header scan should read through by-file index\");\n        let full_rows = reader\n            .scan_rows_at_commit(\n                \"commit-1\",\n                &TrackedStateScanRequest {\n                    filter: crate::tracked_state::TrackedStateFilter {\n                        file_ids: vec![NullableKeyFilter::Value(\"file-a.json\".to_string())],\n                        ..Default::default()\n                    },\n                    ..Default::default()\n                },\n            )\n            .await\n            .expect(\"full scan should fetch primary payload\");\n\n        assert_eq!(header_rows[0].snapshot_content, None);\n        assert_eq!(full_rows[0].snapshot_content, expected_snapshot);\n    }\n\n    #[tokio::test]\n    async fn null_file_rows_do_not_stage_by_file_index() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let row = row(\"entity-a\", \"change-a\", \"commit-1\");\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"commit-1\",\n            None,\n            std::slice::from_ref(&row),\n        )\n        .await\n        .expect(\"root should write\");\n        transaction\n            
.commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let by_file_root = storage::load_by_file_root(&mut storage.clone(), \"commit-1\")\n            .await\n            .expect(\"by-file root lookup should load\");\n        assert!(by_file_root.is_none());\n\n        let rows = tracked_state\n            .reader(storage.clone())\n            .scan_rows_at_commit(\n                \"commit-1\",\n                &TrackedStateScanRequest {\n                    filter: crate::tracked_state::TrackedStateFilter {\n                        file_ids: vec![NullableKeyFilter::Null],\n                        ..Default::default()\n                    },\n                    ..Default::default()\n                },\n            )\n            .await\n            .expect(\"null file scan should fall back to primary tree\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(\n            rows[0]\n                .entity_id\n                .as_single_string_owned()\n                .expect(\"entity id\"),\n            \"entity-a\"\n        );\n    }\n\n    #[tokio::test]\n    async fn mixed_null_and_concrete_file_scan_uses_primary_tree() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let null_row = row(\"entity-null\", \"change-null\", \"commit-1\");\n        let mut file_row = row(\"entity-file\", \"change-file\", \"commit-2\");\n        file_row.file_id = Some(\"file-a.json\".to_string());\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"commit-1\",\n            None,\n            std::slice::from_ref(&null_row),\n        )\n        .await\n        .expect(\"parent root should 
write\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"commit-2\",\n            Some(\"commit-1\"),\n            std::slice::from_ref(&file_row),\n        )\n        .await\n        .expect(\"child root should write\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let rows = tracked_state\n            .reader(storage.clone())\n            .scan_rows_at_commit(\n                \"commit-2\",\n                &TrackedStateScanRequest {\n                    filter: crate::tracked_state::TrackedStateFilter {\n                        file_ids: vec![\n                            NullableKeyFilter::Null,\n                            NullableKeyFilter::Value(\"file-a.json\".to_string()),\n                        ],\n                        ..Default::default()\n                    },\n                    ..Default::default()\n                },\n            )\n            .await\n            .expect(\"mixed scan should use primary tree\");\n\n        let mut entity_ids = rows\n            .iter()\n            .map(|row| row.entity_id.as_single_string_owned().expect(\"entity id\"))\n            .collect::<Vec<_>>();\n        entity_ids.sort();\n        assert_eq!(entity_ids, vec![\"entity-file\", \"entity-null\"]);\n    }\n\n    #[tokio::test]\n    async fn by_file_header_index_filters_tombstones_without_payload_sentinel() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let mut live = row(\"entity-live\", \"change-live\", \"commit-1\");\n        live.file_id = Some(\"file-a.json\".to_string());\n        let mut deleted = tombstone(\"entity-deleted\", \"change-delete\", \"commit-1\");\n        deleted.file_id = Some(\"file-a.json\".to_string());\n\n        let mut transaction = storage\n         
   .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"commit-1\",\n            None,\n            &[live, deleted],\n        )\n        .await\n        .expect(\"root should write\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let rows = tracked_state\n            .reader(storage.clone())\n            .scan_rows_at_commit(\n                \"commit-1\",\n                &TrackedStateScanRequest {\n                    filter: crate::tracked_state::TrackedStateFilter {\n                        file_ids: vec![NullableKeyFilter::Value(\"file-a.json\".to_string())],\n                        ..Default::default()\n                    },\n                    projection: crate::tracked_state::TrackedStateProjection {\n                        columns: vec![\"entity_id\".to_string()],\n                    },\n                    ..Default::default()\n                },\n            )\n            .await\n            .expect(\"file scan should read through index\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(\n            rows[0]\n                .entity_id\n                .as_single_string_owned()\n                .expect(\"entity id\"),\n            \"entity-live\"\n        );\n    }\n\n    #[tokio::test]\n    async fn pending_tombstone_delta_hides_materialized_base_row() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let base = row(\"entity-a\", \"change-base\", \"base\");\n        let delete = tombstone(\"entity-a\", \"change-delete\", \"child\");\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"base 
transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"base\",\n            None,\n            std::slice::from_ref(&base),\n        )\n        .await\n        .expect(\"base delta should write\");\n        transaction.commit().await.expect(\"base should commit\");\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"materialize transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        tracked_state\n            .materializer(\n                transaction.as_mut(),\n                &mut writes,\n                &CommitStoreContext::new(),\n            )\n            .materialize_root_at(\"base\")\n            .await\n            .expect(\"base projection root should materialize\");\n        writes\n            .apply(transaction.as_mut())\n            .await\n            .expect(\"base root writes should apply\");\n        transaction\n            .commit()\n            .await\n            .expect(\"materialized base should commit\");\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"child transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"child\",\n            Some(\"base\"),\n            std::slice::from_ref(&delete),\n        )\n        .await\n        .expect(\"child tombstone delta should write\");\n        transaction.commit().await.expect(\"child should commit\");\n\n        let rows = tracked_state\n            .reader(storage.clone())\n            .scan_rows_at_commit(\"child\", &TrackedStateScanRequest::default())\n            .await\n            .expect(\"child scan should apply pending tombstone over base root\");\n\n        assert!(rows.is_empty(), \"pending tombstone must hide base row\");\n    }\n\n    
#[tokio::test]\n    async fn single_delta_pack_scan_keeps_last_delta_for_duplicate_key() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"commit-1\",\n            None,\n            &[\n                row_with_value(\"entity-a\", \"change-a1\", \"commit-1\", \"first\"),\n                row_with_value(\"entity-b\", \"change-b\", \"commit-1\", \"middle\"),\n                row_with_value(\"entity-a\", \"change-a2\", \"commit-1\", \"second\"),\n                tombstone(\"entity-c\", \"change-c1\", \"commit-1\"),\n            ],\n        )\n        .await\n        .expect(\"delta pack should write\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let rows = tracked_state\n            .reader(storage.clone())\n            .scan_rows_at_commit(\"commit-1\", &TrackedStateScanRequest::default())\n            .await\n            .expect(\"single delta pack should scan\");\n\n        assert_eq!(rows.len(), 2);\n        assert_eq!(\n            rows.iter()\n                .map(|row| (\n                    row.entity_id.as_single_string_owned().expect(\"entity id\"),\n                    row.snapshot_content.clone()\n                ))\n                .collect::<Vec<_>>(),\n            vec![\n                (\n                    \"entity-a\".to_string(),\n                    Some(\"{\\\"value\\\":\\\"second\\\"}\".to_string())\n                ),\n                (\n                    \"entity-b\".to_string(),\n                    Some(\"{\\\"value\\\":\\\"middle\\\"}\".to_string())\n                ),\n    
        ]\n        );\n    }\n\n    #[tokio::test]\n    async fn scan_limit_applies_after_tombstone_visibility() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"commit-1\",\n            None,\n            &[\n                tombstone(\"entity-a\", \"change-delete\", \"commit-1\"),\n                row(\"entity-b\", \"change-live\", \"commit-1\"),\n            ],\n        )\n        .await\n        .expect(\"root should write\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let rows = tracked_state\n            .reader(storage.clone())\n            .scan_rows_at_commit(\n                \"commit-1\",\n                &TrackedStateScanRequest {\n                    limit: Some(1),\n                    ..Default::default()\n                },\n            )\n            .await\n            .expect(\"limited scan should apply visibility before limit\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(\n            rows[0]\n                .entity_id\n                .as_single_string_owned()\n                .expect(\"entity id\"),\n            \"entity-b\"\n        );\n    }\n\n    #[tokio::test]\n    async fn by_file_scan_limit_applies_after_tombstone_visibility() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let mut deleted = tombstone(\"entity-a\", \"change-delete\", \"commit-1\");\n        deleted.file_id = Some(\"file-a.json\".to_string());\n        
let mut live = row(\"entity-b\", \"change-live\", \"commit-1\");\n        live.file_id = Some(\"file-a.json\".to_string());\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"commit-1\",\n            None,\n            &[deleted, live],\n        )\n        .await\n        .expect(\"root should write\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let rows = tracked_state\n            .reader(storage.clone())\n            .scan_rows_at_commit(\n                \"commit-1\",\n                &TrackedStateScanRequest {\n                    filter: crate::tracked_state::TrackedStateFilter {\n                        file_ids: vec![NullableKeyFilter::Value(\"file-a.json\".to_string())],\n                        ..Default::default()\n                    },\n                    projection: crate::tracked_state::TrackedStateProjection {\n                        columns: vec![\"entity_id\".to_string()],\n                    },\n                    limit: Some(1),\n                },\n            )\n            .await\n            .expect(\"limited by-file scan should apply visibility before limit\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(\n            rows[0]\n                .entity_id\n                .as_single_string_owned()\n                .expect(\"entity id\"),\n            \"entity-b\"\n        );\n    }\n\n    #[tokio::test]\n    async fn reads_resolve_json_snapshot_refs() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let large_value = \"x\".repeat(1536);\n        let row = row_with_value(\"entity-a\", \"change-a\", 
\"commit-1\", &large_value);\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"commit-1\",\n            None,\n            std::slice::from_ref(&row),\n        )\n        .await\n        .expect(\"root should write\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let mut reader = tracked_state.reader(storage.clone());\n        let loaded = reader\n            .load_rows_at_commit(\n                \"commit-1\",\n                &[TrackedStateRowRequest {\n                    schema_key: row.schema_key.clone(),\n                    entity_id: row.entity_id.clone(),\n                    file_id: NullableKeyFilter::Null,\n                }],\n            )\n            .await\n            .expect(\"row should load\")\n            .pop()\n            .flatten()\n            .expect(\"row should exist\");\n        let scanned = reader\n            .scan_rows_at_commit(\"commit-1\", &TrackedStateScanRequest::default())\n            .await\n            .expect(\"rows should scan\");\n\n        assert_eq!(loaded.snapshot_content, row.snapshot_content);\n        assert_eq!(scanned[0].snapshot_content, row.snapshot_content);\n    }\n\n    #[tokio::test]\n    async fn projection_cache_uses_seen_updated_at_not_change_created_at() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let mut row = row(\"entity-a\", \"change-a\", \"commit-1\");\n        row.created_at = \"2026-01-01T00:00:00Z\".to_string();\n        row.updated_at = \"2026-01-02T00:00:00Z\".to_string();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n           
 .await\n            .expect(\"transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"commit-1\",\n            None,\n            std::slice::from_ref(&row),\n        )\n        .await\n        .expect(\"root should write\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let loaded = tracked_state\n            .reader(storage.clone())\n            .load_rows_at_commit(\n                \"commit-1\",\n                &[TrackedStateRowRequest {\n                    schema_key: row.schema_key.clone(),\n                    entity_id: row.entity_id.clone(),\n                    file_id: NullableKeyFilter::Null,\n                }],\n            )\n            .await\n            .expect(\"row should load\")\n            .pop()\n            .flatten()\n            .expect(\"row should exist\");\n\n        assert_eq!(loaded.created_at, \"2026-01-01T00:00:00Z\");\n        assert_eq!(loaded.updated_at, \"2026-01-02T00:00:00Z\");\n    }\n\n    #[tokio::test]\n    async fn projected_scans_do_not_materialize_snapshot_when_snapshot_content_is_omitted() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let large_value = \"x\".repeat(1536);\n        let row = row_with_value(\"entity-a\", \"change-a\", \"commit-1\", &large_value);\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"commit-1\",\n            None,\n            std::slice::from_ref(&row),\n        )\n        .await\n        .expect(\"root should write\");\n        transaction\n            .commit()\n            
.await\n            .expect(\"transaction should commit\");\n\n        let rows = tracked_state\n            .reader(storage.clone())\n            .scan_rows_at_commit(\n                \"commit-1\",\n                &TrackedStateScanRequest {\n                    projection: crate::tracked_state::TrackedStateProjection {\n                        columns: vec![\"entity_id\".to_string()],\n                    },\n                    ..Default::default()\n                },\n            )\n            .await\n            .expect(\"rows should scan\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].snapshot_content, None);\n    }\n\n    async fn seed_merge_roots(\n        base_rows: &[MaterializedTrackedStateRow],\n        target_rows: &[MaterializedTrackedStateRow],\n        source_rows: &[MaterializedTrackedStateRow],\n    ) -> (StorageContext, TrackedStateContext) {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"base\",\n            None,\n            base_rows,\n        )\n        .await\n        .expect(\"base root should write\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"target\",\n            None,\n            target_rows,\n        )\n        .await\n        .expect(\"target root should write\");\n        write_root_for_test(\n            transaction.as_mut(),\n            &tracked_state,\n            \"source\",\n            None,\n            source_rows,\n        )\n        .await\n        .expect(\"source root should write\");\n        transaction\n            .commit()\n            .await\n 
           .expect(\"transaction should commit\");\n        (storage, tracked_state)\n    }\n\n    fn merge_patch_ids(plan: &TrackedStateMergePlan) -> Vec<String> {\n        plan.patches\n            .iter()\n            .map(|entry| {\n                entry\n                    .identity()\n                    .entity_id\n                    .as_single_string_owned()\n                    .expect(\"identity\")\n            })\n            .collect()\n    }\n\n    fn merge_conflict_ids(plan: &TrackedStateMergePlan) -> Vec<String> {\n        plan.conflicts\n            .iter()\n            .map(|entry| {\n                entry\n                    .identity\n                    .entity_id\n                    .as_single_string_owned()\n                    .expect(\"identity\")\n            })\n            .collect()\n    }\n\n    async fn write_root_for_test(\n        transaction: &mut dyn StorageWriteTransaction,\n        tracked_state: &TrackedStateContext,\n        commit_id: &str,\n        parent_commit_id: Option<&str>,\n        rows: &[MaterializedTrackedStateRow],\n    ) -> Result<(), LixError> {\n        crate::test_support::stage_tracked_root_from_materialized(\n            transaction,\n            tracked_state,\n            commit_id,\n            parent_commit_id,\n            rows,\n        )\n        .await\n    }\n\n    fn tombstone(entity_id: &str, change_id: &str, commit_id: &str) -> MaterializedTrackedStateRow {\n        let mut row = row(entity_id, change_id, commit_id);\n        row.snapshot_content = None;\n        row\n    }\n\n    fn row(entity_id: &str, change_id: &str, commit_id: &str) -> MaterializedTrackedStateRow {\n        row_with_value(entity_id, change_id, commit_id, \"value\")\n    }\n\n    fn row_with_value(\n        entity_id: &str,\n        change_id: &str,\n        commit_id: &str,\n        value: &str,\n    ) -> MaterializedTrackedStateRow {\n        MaterializedTrackedStateRow {\n            entity_id: 
crate::entity_identity::EntityIdentity::single(entity_id),\n            schema_key: \"test_schema\".to_string(),\n            file_id: None,\n            snapshot_content: Some(format!(\"{{\\\"value\\\":\\\"{value}\\\"}}\")),\n            metadata: None,\n            deleted: false,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n            change_id: change_id.to_string(),\n            commit_id: commit_id.to_string(),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/tracked_state/diff.rs",
    "content": "use crate::entity_identity::EntityIdentity;\nuse crate::tracked_state::types::TrackedStateTreeScanRequest;\nuse crate::tracked_state::{\n    MaterializedTrackedStateRow, TrackedStateFilter, TrackedStateStoreReader,\n};\nuse crate::LixError;\n\n/// Filter for comparing two tracked-state commit roots.\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\npub(crate) struct TrackedStateDiffRequest {\n    pub(crate) filter: TrackedStateFilter,\n}\n\n/// Changed tracked-state rows between two commit roots.\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\npub(crate) struct TrackedStateDiff {\n    pub(crate) entries: Vec<TrackedStateDiffEntry>,\n}\n\n/// One changed identity between two commit roots.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct TrackedStateDiffEntry {\n    pub(crate) identity: TrackedStateDiffIdentity,\n    pub(crate) kind: TrackedStateDiffKind,\n    /// Raw row in the left root.\n    ///\n    /// This can be a tombstone. Callers that need user-visible semantics\n    /// should use `visible_before()` instead of inspecting this directly.\n    pub(crate) before: Option<MaterializedTrackedStateRow>,\n    /// Raw row in the right root.\n    ///\n    /// This can be a tombstone. 
Keeping the raw tombstone is what lets merge\n    /// apply deletes without reloading the source root.\n    pub(crate) after: Option<MaterializedTrackedStateRow>,\n}\n\n/// Root-local tracked-state identity.\n///\n/// Entity identity used by merge/diff logic.\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\npub(crate) struct TrackedStateDiffIdentity {\n    pub(crate) schema_key: String,\n    pub(crate) entity_id: EntityIdentity,\n    pub(crate) file_id: Option<String>,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) enum TrackedStateDiffKind {\n    Added,\n    Modified,\n    Removed,\n}\n\n/// Diffs two tracked-state commit roots.\n///\npub(crate) async fn diff_commits<S>(\n    reader: &mut TrackedStateStoreReader<S>,\n    left_commit_id: &str,\n    right_commit_id: &str,\n    request: &TrackedStateDiffRequest,\n) -> Result<TrackedStateDiff, LixError>\nwhere\n    S: crate::storage::StorageReader,\n{\n    let scan_request = scan_request_for_diff(request);\n    let tree_entries = reader\n        .diff_tree_entries_at_commits(left_commit_id, right_commit_id, &scan_request)\n        .await?;\n    let mut before_entries = Vec::new();\n    let mut after_entries = Vec::new();\n    let mut pending_entries = Vec::with_capacity(tree_entries.len());\n    for tree_entry in tree_entries {\n        let before_index = tree_entry.before.map(|entry| {\n            let index = before_entries.len();\n            before_entries.push(entry);\n            index\n        });\n        let after_index = tree_entry.after.map(|entry| {\n            let index = after_entries.len();\n            after_entries.push(entry);\n            index\n        });\n        pending_entries.push(PendingDiffEntry {\n            before_index,\n            after_index,\n        });\n    }\n\n    let before_rows = reader.materialize_tree_values(before_entries).await?;\n    let after_rows = reader.materialize_tree_values(after_entries).await?;\n    let mut entries = Vec::new();\n    for 
pending_entry in pending_entries {\n        let before = materialized_row_at(pending_entry.before_index, &before_rows)?;\n        let after = materialized_row_at(pending_entry.after_index, &after_rows)?;\n        let identity = match before.as_ref().or(after.as_ref()) {\n            Some(row) => TrackedStateDiffIdentity::from_row(row)?,\n            None => continue,\n        };\n        let Some(entry) = classify_diff(identity, before, after) else {\n            continue;\n        };\n        entries.push(entry);\n    }\n\n    Ok(TrackedStateDiff { entries })\n}\n\nfn materialized_row_at(\n    index: Option<usize>,\n    rows: &[MaterializedTrackedStateRow],\n) -> Result<Option<MaterializedTrackedStateRow>, LixError> {\n    let Some(index) = index else {\n        return Ok(None);\n    };\n    rows.get(index).cloned().map(Some).ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"tracked_state diff materialization returned fewer rows than planned\",\n        )\n    })\n}\n\nstruct PendingDiffEntry {\n    before_index: Option<usize>,\n    after_index: Option<usize>,\n}\n\nfn scan_request_for_diff(request: &TrackedStateDiffRequest) -> TrackedStateTreeScanRequest {\n    let filter = request.filter.clone();\n    TrackedStateTreeScanRequest {\n        schema_keys: filter.schema_keys,\n        entity_ids: filter.entity_ids,\n        file_ids: filter.file_ids,\n        include_tombstones: true,\n        limit: None,\n    }\n}\n\nfn classify_diff(\n    identity: TrackedStateDiffIdentity,\n    before: Option<MaterializedTrackedStateRow>,\n    after: Option<MaterializedTrackedStateRow>,\n) -> Option<TrackedStateDiffEntry> {\n    match (is_live_row(before.as_ref()), is_live_row(after.as_ref())) {\n        (None, None) => None,\n        (None, Some(_)) => Some(TrackedStateDiffEntry {\n            identity,\n            kind: TrackedStateDiffKind::Added,\n            before,\n            
after,\n        }),\n        (Some(_), None) => Some(TrackedStateDiffEntry {\n            identity,\n            kind: TrackedStateDiffKind::Removed,\n            before,\n            after,\n        }),\n        (Some(before), Some(after)) if tracked_row_payload_eq(before, after) => None,\n        (Some(_), Some(_)) => Some(TrackedStateDiffEntry {\n            identity,\n            kind: TrackedStateDiffKind::Modified,\n            before,\n            after,\n        }),\n    }\n}\n\nfn is_live_row(row: Option<&MaterializedTrackedStateRow>) -> Option<&MaterializedTrackedStateRow> {\n    row.filter(|row| row.snapshot_content.is_some())\n}\n\nfn tracked_row_payload_eq(\n    left: &MaterializedTrackedStateRow,\n    right: &MaterializedTrackedStateRow,\n) -> bool {\n    left.snapshot_content == right.snapshot_content && left.metadata == right.metadata\n}\n\nimpl TrackedStateDiffIdentity {\n    fn from_row(row: &MaterializedTrackedStateRow) -> Result<Self, LixError> {\n        Ok(Self {\n            schema_key: row.schema_key.clone(),\n            entity_id: row.entity_id.clone(),\n            file_id: row.file_id.clone(),\n        })\n    }\n}\n\nimpl TrackedStateDiffEntry {\n    #[cfg(test)]\n    pub(crate) fn before_is_live(&self) -> bool {\n        self.visible_before().is_some()\n    }\n\n    #[cfg(test)]\n    pub(crate) fn after_is_live(&self) -> bool {\n        self.visible_after().is_some()\n    }\n\n    #[cfg(test)]\n    pub(crate) fn visible_before(&self) -> Option<&MaterializedTrackedStateRow> {\n        self.before\n            .as_ref()\n            .filter(|row| row.snapshot_content.is_some())\n    }\n\n    #[cfg(test)]\n    pub(crate) fn visible_after(&self) -> Option<&MaterializedTrackedStateRow> {\n        self.after\n            .as_ref()\n            .filter(|row| row.snapshot_content.is_some())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use super::*;\n    use crate::backend::testing::UnitTestBackend;\n    use 
crate::storage::{StorageContext, StorageWriteTransaction};\n    use crate::tracked_state::TrackedStateContext;\n    use crate::NullableKeyFilter;\n\n    #[tokio::test]\n    async fn diff_commits_reports_added_rows() {\n        let (storage, tracked_state) = seed_roots(&[], &[row(\"entity-a\", None, \"after\")]).await;\n\n        let diff = diff(storage.clone(), &tracked_state).await;\n\n        assert_eq!(\n            kinds(&diff),\n            vec![(\"entity-a\".to_string(), TrackedStateDiffKind::Added)]\n        );\n        assert!(diff.entries[0].before.is_none());\n        assert_eq!(\n            diff.entries[0]\n                .after\n                .as_ref()\n                .map(|row| row.change_id.as_str()),\n            Some(\"after\")\n        );\n        assert!(!diff.entries[0].before_is_live());\n        assert!(diff.entries[0].after_is_live());\n    }\n\n    #[tokio::test]\n    async fn diff_commits_reports_removed_rows_when_right_side_is_absent() {\n        let (storage, tracked_state) = seed_roots(&[row(\"entity-a\", None, \"before\")], &[]).await;\n\n        let diff = diff(storage.clone(), &tracked_state).await;\n\n        assert_eq!(\n            kinds(&diff),\n            vec![(\"entity-a\".to_string(), TrackedStateDiffKind::Removed)]\n        );\n        assert_eq!(\n            diff.entries[0]\n                .before\n                .as_ref()\n                .map(|row| row.change_id.as_str()),\n            Some(\"before\")\n        );\n        assert!(diff.entries[0].after.is_none());\n        assert!(diff.entries[0].before_is_live());\n        assert!(!diff.entries[0].after_is_live());\n    }\n\n    #[tokio::test]\n    async fn diff_commits_reports_removed_rows_when_right_side_is_tombstone() {\n        let (storage, tracked_state) = seed_roots(\n            &[row(\"entity-a\", None, \"before\")],\n            &[tombstone(\"entity-a\", None, \"delete\")],\n        )\n        .await;\n\n        let diff = diff(storage.clone(), 
&tracked_state).await;\n\n        assert_eq!(\n            kinds(&diff),\n            vec![(\"entity-a\".to_string(), TrackedStateDiffKind::Removed)]\n        );\n        let entry = &diff.entries[0];\n        assert_eq!(\n            entry.after.as_ref().map(|row| row.change_id.as_str()),\n            Some(\"delete\")\n        );\n        assert!(\n            entry\n                .after\n                .as_ref()\n                .is_some_and(|row| row.snapshot_content.is_none()),\n            \"removed diff should preserve the right-side tombstone for merge\"\n        );\n        assert!(entry.before_is_live());\n        assert!(!entry.after_is_live());\n    }\n\n    #[tokio::test]\n    async fn diff_commits_reports_added_rows_when_left_side_is_tombstone() {\n        let (storage, tracked_state) = seed_roots(\n            &[tombstone(\"entity-a\", None, \"delete\")],\n            &[row(\"entity-a\", None, \"after\")],\n        )\n        .await;\n\n        let diff = diff(storage.clone(), &tracked_state).await;\n\n        assert_eq!(\n            kinds(&diff),\n            vec![(\"entity-a\".to_string(), TrackedStateDiffKind::Added)]\n        );\n        let entry = &diff.entries[0];\n        assert_eq!(\n            entry.before.as_ref().map(|row| row.change_id.as_str()),\n            Some(\"delete\")\n        );\n        assert!(\n            entry\n                .before\n                .as_ref()\n                .is_some_and(|row| row.snapshot_content.is_none()),\n            \"added diff should preserve the left-side tombstone for merge\"\n        );\n        assert!(!entry.before_is_live());\n        assert!(entry.after_is_live());\n    }\n\n    #[tokio::test]\n    async fn diff_commits_reports_modified_rows_for_changed_payload() {\n        let (storage, tracked_state) = seed_roots(\n            &[row_with_value(\"entity-a\", None, \"before\", \"one\")],\n            &[row_with_value(\"entity-a\", None, \"after\", \"two\")],\n        )\n        
.await;\n\n        let diff = diff(storage.clone(), &tracked_state).await;\n\n        assert_eq!(\n            kinds(&diff),\n            vec![(\"entity-a\".to_string(), TrackedStateDiffKind::Modified)]\n        );\n        assert!(diff.entries[0].before_is_live());\n        assert!(diff.entries[0].after_is_live());\n    }\n\n    #[tokio::test]\n    async fn diff_commits_omits_unchanged_rows_even_when_metadata_differs_only_by_commit() {\n        let (storage, tracked_state) = seed_roots(\n            &[row_with_value(\"entity-a\", None, \"before\", \"same\")],\n            &[row_with_value(\"entity-a\", None, \"after\", \"same\")],\n        )\n        .await;\n\n        let diff = diff(storage.clone(), &tracked_state).await;\n\n        assert!(diff.entries.is_empty());\n    }\n\n    #[tokio::test]\n    async fn diff_commits_distinguishes_same_entity_with_different_file_id() {\n        let (storage, tracked_state) = seed_roots(\n            &[row(\"entity-a\", Some(\"file-a\"), \"before-a\")],\n            &[\n                row(\"entity-a\", Some(\"file-a\"), \"before-a\"),\n                row(\"entity-a\", Some(\"file-b\"), \"after-b\"),\n            ],\n        )\n        .await;\n\n        let diff = diff(storage.clone(), &tracked_state).await;\n\n        assert_eq!(diff.entries.len(), 1);\n        assert_eq!(diff.entries[0].identity.file_id.as_deref(), Some(\"file-b\"));\n        assert_eq!(diff.entries[0].kind, TrackedStateDiffKind::Added);\n    }\n\n    #[tokio::test]\n    async fn diff_commits_filters_by_schema_entity_and_file_id() {\n        let (storage, tracked_state) = seed_roots(\n            &[],\n            &[\n                row_with_schema(\"entity-a\", Some(\"file-a\"), \"schema-a\", \"change-a\"),\n                row_with_schema(\"entity-b\", Some(\"file-b\"), \"schema-b\", \"change-b\"),\n            ],\n        )\n        .await;\n        let mut reader = tracked_state.reader(storage.clone());\n        let diff = reader\n            
.diff_commits(\n                \"left\",\n                \"right\",\n                &TrackedStateDiffRequest {\n                    filter: TrackedStateFilter {\n                        schema_keys: vec![\"schema-b\".to_string()],\n                        entity_ids: vec![crate::entity_identity::EntityIdentity::single(\n                            \"entity-b\",\n                        )],\n                        file_ids: vec![NullableKeyFilter::Value(\"file-b\".to_string())],\n                        ..Default::default()\n                    },\n                },\n            )\n            .await\n            .expect(\"diff should load\");\n\n        assert_eq!(\n            kinds(&diff),\n            vec![(\"entity-b\".to_string(), TrackedStateDiffKind::Added)]\n        );\n    }\n\n    #[tokio::test]\n    async fn diff_commits_between_delta_parent_and_child_reports_suffix_rows() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let mut tx = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(\n            tx.as_mut(),\n            &tracked_state,\n            \"parent\",\n            None,\n            &[\n                row_with_value(\"entity-a\", None, \"parent-a\", \"before\"),\n                row_with_value(\"entity-b\", None, \"parent-b\", \"same\"),\n            ],\n        )\n        .await\n        .expect(\"parent should write\");\n        write_root_for_test(\n            tx.as_mut(),\n            &tracked_state,\n            \"child\",\n            Some(\"parent\"),\n            &[row_with_value(\"entity-a\", None, \"child-a\", \"after\")],\n        )\n        .await\n        .expect(\"child should write\");\n        tx.commit().await.expect(\"transaction should commit\");\n\n        let diff = tracked_state\n     
       .reader(storage)\n            .diff_commits(\"parent\", \"child\", &TrackedStateDiffRequest::default())\n            .await\n            .expect(\"diff should load\");\n\n        assert_eq!(\n            kinds(&diff),\n            vec![(\"entity-a\".to_string(), TrackedStateDiffKind::Modified)]\n        );\n        assert_eq!(\n            diff.entries[0]\n                .before\n                .as_ref()\n                .and_then(|row| row.snapshot_content.as_deref()),\n            Some(\"{\\\"value\\\":\\\"before\\\"}\")\n        );\n        assert_eq!(\n            diff.entries[0]\n                .after\n                .as_ref()\n                .and_then(|row| row.snapshot_content.as_deref()),\n            Some(\"{\\\"value\\\":\\\"after\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn diff_commits_between_delta_child_and_parent_reports_reverse_suffix_rows() {\n        let (storage, tracked_state) = seed_parent_child_delta(\n            &[\n                row_with_value(\"entity-a\", None, \"parent-a\", \"before\"),\n                row_with_value(\"entity-b\", None, \"parent-b\", \"same\"),\n            ],\n            &[row_with_value(\"entity-a\", None, \"child-a\", \"after\")],\n        )\n        .await;\n\n        let diff = tracked_state\n            .reader(storage)\n            .diff_commits(\"child\", \"parent\", &TrackedStateDiffRequest::default())\n            .await\n            .expect(\"diff should load\");\n\n        assert_eq!(\n            kinds(&diff),\n            vec![(\"entity-a\".to_string(), TrackedStateDiffKind::Modified)]\n        );\n        assert_eq!(\n            diff.entries[0]\n                .before\n                .as_ref()\n                .and_then(|row| row.snapshot_content.as_deref()),\n            Some(\"{\\\"value\\\":\\\"after\\\"}\")\n        );\n        assert_eq!(\n            diff.entries[0]\n                .after\n                .as_ref()\n                .and_then(|row| 
row.snapshot_content.as_deref()),\n            Some(\"{\\\"value\\\":\\\"before\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn diff_commits_between_delta_parent_and_child_preserves_suffix_tombstones() {\n        let (storage, tracked_state) = seed_parent_child_delta(\n            &[\n                row_with_value(\"entity-a\", None, \"parent-a\", \"before\"),\n                row_with_value(\"entity-b\", None, \"parent-b\", \"same\"),\n            ],\n            &[tombstone(\"entity-a\", None, \"child-delete\")],\n        )\n        .await;\n\n        let diff = tracked_state\n            .reader(storage)\n            .diff_commits(\"parent\", \"child\", &TrackedStateDiffRequest::default())\n            .await\n            .expect(\"diff should load\");\n\n        assert_eq!(\n            kinds(&diff),\n            vec![(\"entity-a\".to_string(), TrackedStateDiffKind::Removed)]\n        );\n        assert!(diff.entries[0].before_is_live());\n        assert!(!diff.entries[0].after_is_live());\n        assert_eq!(\n            diff.entries[0]\n                .after\n                .as_ref()\n                .map(|row| row.change_id.as_str()),\n            Some(\"child-delete\")\n        );\n    }\n\n    async fn diff(\n        storage: StorageContext,\n        tracked_state: &TrackedStateContext,\n    ) -> TrackedStateDiff {\n        tracked_state\n            .reader(storage)\n            .diff_commits(\"left\", \"right\", &TrackedStateDiffRequest::default())\n            .await\n            .expect(\"diff should load\")\n    }\n\n    async fn seed_roots(\n        left_rows: &[MaterializedTrackedStateRow],\n        right_rows: &[MaterializedTrackedStateRow],\n    ) -> (StorageContext, TrackedStateContext) {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let mut tx = storage\n            
.begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(tx.as_mut(), &tracked_state, \"left\", None, left_rows)\n            .await\n            .expect(\"left root should write\");\n        write_root_for_test(tx.as_mut(), &tracked_state, \"right\", None, right_rows)\n            .await\n            .expect(\"right root should write\");\n        tx.commit().await.expect(\"transaction should commit\");\n        (storage, tracked_state)\n    }\n\n    async fn seed_parent_child_delta(\n        parent_rows: &[MaterializedTrackedStateRow],\n        child_rows: &[MaterializedTrackedStateRow],\n    ) -> (StorageContext, TrackedStateContext) {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let tracked_state = TrackedStateContext::new();\n        let mut tx = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_root_for_test(tx.as_mut(), &tracked_state, \"parent\", None, parent_rows)\n            .await\n            .expect(\"parent should write\");\n        write_root_for_test(\n            tx.as_mut(),\n            &tracked_state,\n            \"child\",\n            Some(\"parent\"),\n            child_rows,\n        )\n        .await\n        .expect(\"child should write\");\n        tx.commit().await.expect(\"transaction should commit\");\n        (storage, tracked_state)\n    }\n\n    async fn write_root_for_test(\n        tx: &mut dyn StorageWriteTransaction,\n        tracked_state: &TrackedStateContext,\n        commit_id: &str,\n        parent_commit_id: Option<&str>,\n        rows: &[MaterializedTrackedStateRow],\n    ) -> Result<(), LixError> {\n        crate::test_support::stage_tracked_root_from_materialized(\n            tx,\n            tracked_state,\n            commit_id,\n            parent_commit_id,\n            rows,\n      
  )\n        .await\n    }\n\n    fn kinds(diff: &TrackedStateDiff) -> Vec<(String, TrackedStateDiffKind)> {\n        diff.entries\n            .iter()\n            .map(|entry| {\n                (\n                    entry\n                        .identity\n                        .entity_id\n                        .as_single_string_owned()\n                        .expect(\"identity\"),\n                    entry.kind,\n                )\n            })\n            .collect()\n    }\n\n    fn tombstone(\n        entity_id: &str,\n        file_id: Option<&str>,\n        change_id: &str,\n    ) -> MaterializedTrackedStateRow {\n        let mut row = row(entity_id, file_id, change_id);\n        row.snapshot_content = None;\n        row.deleted = true;\n        row\n    }\n\n    fn row(entity_id: &str, file_id: Option<&str>, change_id: &str) -> MaterializedTrackedStateRow {\n        row_with_schema(entity_id, file_id, \"test_schema\", change_id)\n    }\n\n    fn row_with_schema(\n        entity_id: &str,\n        file_id: Option<&str>,\n        schema_key: &str,\n        change_id: &str,\n    ) -> MaterializedTrackedStateRow {\n        row_with_schema_and_value(entity_id, file_id, schema_key, change_id, \"value\")\n    }\n\n    fn row_with_value(\n        entity_id: &str,\n        file_id: Option<&str>,\n        change_id: &str,\n        value: &str,\n    ) -> MaterializedTrackedStateRow {\n        row_with_schema_and_value(entity_id, file_id, \"test_schema\", change_id, value)\n    }\n\n    fn row_with_schema_and_value(\n        entity_id: &str,\n        file_id: Option<&str>,\n        schema_key: &str,\n        change_id: &str,\n        value: &str,\n    ) -> MaterializedTrackedStateRow {\n        MaterializedTrackedStateRow {\n            entity_id: EntityIdentity::single(entity_id),\n            schema_key: schema_key.to_string(),\n            file_id: file_id.map(str::to_string),\n            snapshot_content: 
Some(format!(\"{{\\\"value\\\":\\\"{value}\\\"}}\")),\n            metadata: None,\n            deleted: false,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n            change_id: change_id.to_string(),\n            commit_id: change_id.replace(\"change\", \"commit\"),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/tracked_state/materialization.rs",
    "content": "use crate::entity_identity::EntityIdentity;\nuse crate::json_store::JsonRef;\nuse crate::json_store::{JsonLoadRequestRef, JsonReadScopeRef, JsonStoreContext};\nuse crate::storage::StorageReader;\nuse crate::tracked_state::types::{TrackedStateIndexValue, TrackedStateKey};\nuse crate::tracked_state::MaterializedTrackedStateRow;\nuse crate::LixError;\nuse std::collections::BTreeMap;\n\n/// Materializes tracked-state index entries.\n///\n/// The durable tracked_state value is authoritative for scalar projection\n/// fields and stores the JSON refs needed for payload projections. Snapshot and\n/// metadata bytes are hydrated from grouped json_store loads only when the\n/// requested projection needs them.\npub(crate) async fn materialize_index_entries<S>(\n    store: &mut S,\n    entries: Vec<(TrackedStateKey, TrackedStateIndexValue)>,\n    projection: &TrackedMaterializationProjection,\n) -> Result<Vec<MaterializedTrackedStateRow>, LixError>\nwhere\n    S: StorageReader,\n{\n    if !projection.snapshot_content && !projection.metadata {\n        return Ok(entries\n            .into_iter()\n            .map(materialize_entry_without_json)\n            .collect());\n    }\n\n    let json_slots_per_row =\n        usize::from(projection.snapshot_content) + usize::from(projection.metadata);\n    let json_ref_capacity = entries.len().saturating_mul(json_slots_per_row);\n    let mut row_plans = Vec::with_capacity(entries.len());\n    let mut json_refs = Vec::with_capacity(json_ref_capacity);\n    let mut json_ref_localities = Vec::with_capacity(json_ref_capacity);\n    for (key, value) in entries {\n        let row_index = row_plans.len();\n        let snapshot_ref_index = projected_json_ref_index(\n            projection.snapshot_content,\n            value.snapshot_ref,\n            row_index,\n            value.change_locator.source_pack_id,\n            &mut json_refs,\n            &mut json_ref_localities,\n        );\n        let metadata_ref_index = 
projected_json_ref_index(\n            projection.metadata,\n            value.metadata_ref,\n            row_index,\n            value.change_locator.source_pack_id,\n            &mut json_refs,\n            &mut json_ref_localities,\n        );\n        row_plans.push(MaterializedTrackedStateRowPlan {\n            entity_id: key.entity_id,\n            schema_key: key.schema_key,\n            file_id: key.file_id,\n            deleted: value.deleted,\n            created_at: value.created_at,\n            updated_at: value.updated_at,\n            change_id: value.change_locator.change_id,\n            commit_id: value.change_locator.source_commit_id,\n            snapshot_ref_index,\n            metadata_ref_index,\n        });\n    }\n\n    let mut json_values =\n        load_projection_json_values(store, &json_refs, &json_ref_localities, &row_plans).await?;\n    row_plans\n        .into_iter()\n        .map(|plan| materialize_row_plan(plan, &json_refs, &mut json_values))\n        .collect()\n}\n\nfn materialize_entry_without_json(\n    (key, value): (TrackedStateKey, TrackedStateIndexValue),\n) -> MaterializedTrackedStateRow {\n    MaterializedTrackedStateRow {\n        entity_id: key.entity_id,\n        schema_key: key.schema_key,\n        file_id: key.file_id,\n        snapshot_content: None,\n        metadata: None,\n        deleted: value.deleted,\n        created_at: value.created_at,\n        updated_at: value.updated_at,\n        change_id: value.change_locator.change_id,\n        commit_id: value.change_locator.source_commit_id,\n    }\n}\n\nstruct MaterializedTrackedStateRowPlan {\n    entity_id: EntityIdentity,\n    schema_key: String,\n    file_id: Option<String>,\n    deleted: bool,\n    created_at: String,\n    updated_at: String,\n    change_id: String,\n    commit_id: String,\n    snapshot_ref_index: Option<usize>,\n    metadata_ref_index: Option<usize>,\n}\n\nfn projected_json_ref_index(\n    include: bool,\n    json_ref: Option<JsonRef>,\n    
row_index: usize,\n    pack_id: u32,\n    json_refs: &mut Vec<JsonRef>,\n    json_ref_localities: &mut Vec<JsonRefLocality>,\n) -> Option<usize> {\n    if !include {\n        return None;\n    }\n    let index = json_refs.len();\n    json_refs.push(json_ref?);\n    json_ref_localities.push(JsonRefLocality { row_index, pack_id });\n    Some(index)\n}\n\nstruct JsonRefLocality {\n    row_index: usize,\n    pack_id: u32,\n}\n\nasync fn load_projection_json_values<S>(\n    store: &mut S,\n    json_refs: &[JsonRef],\n    json_ref_localities: &[JsonRefLocality],\n    row_plans: &[MaterializedTrackedStateRowPlan],\n) -> Result<Vec<Option<Vec<u8>>>, LixError>\nwhere\n    S: StorageReader,\n{\n    if json_refs.len() != json_ref_localities.len() {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"tracked_state materialization JSON refs and locality indexes diverged\",\n        ));\n    }\n\n    let json_store = JsonStoreContext::new();\n    if let Some((commit_id, pack_id)) = single_projection_pack(json_ref_localities, row_plans)? 
{\n        let pack_ids = [pack_id];\n        return json_store\n            .load_bytes_many(\n                store,\n                JsonLoadRequestRef {\n                    refs: json_refs,\n                    scope: JsonReadScopeRef::CommitPacks {\n                        commit_id,\n                        pack_ids: &pack_ids,\n                    },\n                },\n            )\n            .await\n            .map(|batch| batch.into_values());\n    }\n\n    let mut json_values = vec![None; json_refs.len()];\n    let mut refs_by_pack = BTreeMap::<(&str, u32), Vec<(usize, JsonRef)>>::new();\n    for (index, json_ref) in json_refs.iter().copied().enumerate() {\n        let locality = json_ref_localities.get(index).ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"tracked_state materialization lost JSON locality index\",\n            )\n        })?;\n        let row_plan = row_plans.get(locality.row_index).ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"tracked_state materialization lost JSON row locality index\",\n            )\n        })?;\n        refs_by_pack\n            .entry((row_plan.commit_id.as_str(), locality.pack_id))\n            .or_default()\n            .push((index, json_ref));\n    }\n\n    for ((commit_id, pack_id), refs) in refs_by_pack {\n        let indexes = refs.iter().map(|(index, _)| *index).collect::<Vec<_>>();\n        let refs = refs\n            .into_iter()\n            .map(|(_, json_ref)| json_ref)\n            .collect::<Vec<_>>();\n        let pack_ids = [pack_id];\n        let values = json_store\n            .load_bytes_many(\n                store,\n                JsonLoadRequestRef {\n                    refs: &refs,\n                    scope: JsonReadScopeRef::CommitPacks {\n                        commit_id,\n                        pack_ids: &pack_ids,\n                    
},\n                },\n            )\n            .await?\n            .into_values();\n        for (index, value) in indexes.into_iter().zip(values) {\n            json_values[index] = value;\n        }\n    }\n    Ok(json_values)\n}\n\nfn single_projection_pack<'a>(\n    json_ref_localities: &[JsonRefLocality],\n    row_plans: &'a [MaterializedTrackedStateRowPlan],\n) -> Result<Option<(&'a str, u32)>, LixError> {\n    let Some(first_locality) = json_ref_localities.first() else {\n        return Ok(None);\n    };\n    let first_plan = row_plans.get(first_locality.row_index).ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"tracked_state materialization lost JSON row locality index\",\n        )\n    })?;\n    let commit_id = first_plan.commit_id.as_str();\n    let pack_id = first_locality.pack_id;\n\n    for locality in &json_ref_localities[1..] {\n        let row_plan = row_plans.get(locality.row_index).ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"tracked_state materialization lost JSON row locality index\",\n            )\n        })?;\n        if row_plan.commit_id != commit_id || locality.pack_id != pack_id {\n            return Ok(None);\n        }\n    }\n    Ok(Some((commit_id, pack_id)))\n}\n\nfn materialize_row_plan(\n    plan: MaterializedTrackedStateRowPlan,\n    json_refs: &[JsonRef],\n    json_values: &mut [Option<Vec<u8>>],\n) -> Result<MaterializedTrackedStateRow, LixError> {\n    Ok(MaterializedTrackedStateRow {\n        entity_id: plan.entity_id,\n        schema_key: plan.schema_key,\n        file_id: plan.file_id,\n        snapshot_content: materialized_json_string(\n            plan.snapshot_ref_index,\n            json_refs,\n            json_values,\n        )?,\n        metadata: materialized_json_string(plan.metadata_ref_index, json_refs, json_values)?,\n        deleted: plan.deleted,\n        created_at: plan.created_at,\n     
   updated_at: plan.updated_at,\n        change_id: plan.change_id,\n        commit_id: plan.commit_id,\n    })\n}\n\nfn materialized_json_string(\n    index: Option<usize>,\n    json_refs: &[JsonRef],\n    json_values: &mut [Option<Vec<u8>>],\n) -> Result<Option<String>, LixError> {\n    let Some(index) = index else {\n        return Ok(None);\n    };\n    let json_ref = json_refs.get(index).ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"tracked_state materialization lost JSON ref index\",\n        )\n    })?;\n    // Each row plan owns its projected JSON slots. If this path starts\n    // deduplicating refs, duplicate consumers must clone intentionally.\n    let bytes = json_values\n        .get_mut(index)\n        .ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"tracked_state materialization lost JSON value index\",\n            )\n        })?\n        .take()\n        .ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\n                    \"tracked_state materialization missing JSON payload '{}'\",\n                    json_ref.to_hex()\n                ),\n            )\n        })?;\n    String::from_utf8(bytes).map(Some).map_err(|error| {\n        let utf8_error = error.utf8_error();\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\"tracked_state materialized JSON payload is not UTF-8: {utf8_error}\"),\n        )\n    })\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) struct TrackedMaterializationProjection {\n    pub(crate) snapshot_content: bool,\n    pub(crate) metadata: bool,\n}\n\nimpl TrackedMaterializationProjection {\n    pub(crate) fn full() -> Self {\n        Self {\n            snapshot_content: true,\n            metadata: true,\n        }\n    }\n\n    pub(crate) fn from_columns(columns: &[String]) -> Self {\n  
      if columns.is_empty() {\n            return Self::full();\n        }\n        Self {\n            snapshot_content: columns.iter().any(|column| column == \"snapshot_content\"),\n            metadata: columns.iter().any(|column| column == \"metadata\"),\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    fn row_plan(commit_id: &str) -> MaterializedTrackedStateRowPlan {\n        MaterializedTrackedStateRowPlan {\n            entity_id: EntityIdentity::single(\"entity\"),\n            schema_key: \"schema\".to_string(),\n            file_id: None,\n            deleted: false,\n            created_at: \"2024-01-01T00:00:00.000Z\".to_string(),\n            updated_at: \"2024-01-01T00:00:00.000Z\".to_string(),\n            change_id: \"change\".to_string(),\n            commit_id: commit_id.to_string(),\n            snapshot_ref_index: None,\n            metadata_ref_index: None,\n        }\n    }\n\n    #[test]\n    fn single_projection_pack_accepts_duplicate_slots_from_same_pack() {\n        let row_plans = vec![row_plan(\"commit-a\")];\n        let localities = vec![\n            JsonRefLocality {\n                row_index: 0,\n                pack_id: 7,\n            },\n            JsonRefLocality {\n                row_index: 0,\n                pack_id: 7,\n            },\n        ];\n\n        assert_eq!(\n            single_projection_pack(&localities, &row_plans).expect(\"pack detection should succeed\"),\n            Some((\"commit-a\", 7))\n        );\n    }\n\n    #[test]\n    fn single_projection_pack_rejects_mixed_packs() {\n        let row_plans = vec![row_plan(\"commit-a\")];\n        let localities = vec![\n            JsonRefLocality {\n                row_index: 0,\n                pack_id: 7,\n            },\n            JsonRefLocality {\n                row_index: 0,\n                pack_id: 8,\n            },\n        ];\n\n        assert_eq!(\n            single_projection_pack(&localities, 
&row_plans).expect(\"pack detection should succeed\"),\n            None\n        );\n    }\n\n    #[test]\n    fn materialized_json_string_consumes_owned_payload_bytes() {\n        let json = br#\"{\"value\":1}\"#.to_vec();\n        let json_ref = JsonRef::for_content(&json);\n        let mut json_values = vec![Some(json)];\n\n        let materialized = materialized_json_string(Some(0), &[json_ref], &mut json_values)\n            .expect(\"json should materialize\");\n\n        assert_eq!(materialized, Some(r#\"{\"value\":1}\"#.to_string()));\n        assert!(json_values[0].is_none());\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/tracked_state/materializer.rs",
    "content": "use crate::commit_store::{Change, ChangeLocator, Commit, CommitStoreContext};\nuse crate::storage::StorageReader;\nuse crate::tracked_state::context::{TrackedStateMaterializer, TrackedStateWriteReport};\nuse crate::tracked_state::types::TrackedStateKey;\nuse crate::tracked_state::TrackedStateDeltaRef;\nuse crate::LixError;\nuse std::collections::{BTreeMap, BTreeSet};\n\n/// Owned materialization delta used only by explicit projection-root hydration.\n///\n/// Normal transaction commits already have borrowed `ChangeRef` and\n/// `ChangeLocatorRef` values available while staging commit_store.\n/// Materialization loads those facts back from storage, so it owns the decoded\n/// data internally and immediately passes a borrowed view into the same\n/// tracked-state root writer.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct MaterializationDelta {\n    pub(crate) change: Change,\n    pub(crate) locator: ChangeLocator,\n    pub(crate) created_at: String,\n    pub(crate) updated_at: String,\n}\n\nimpl MaterializationDelta {\n    pub(crate) fn as_ref(&self) -> TrackedStateDeltaRef<'_> {\n        TrackedStateDeltaRef {\n            change: self.change.as_ref(),\n            locator: self.locator.as_ref(),\n            created_at: &self.created_at,\n            updated_at: &self.updated_at,\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct MaterializationInput {\n    pub(crate) commit_id: String,\n    pub(crate) parent_commit_id: Option<String>,\n    pub(crate) deltas: Vec<MaterializationDelta>,\n}\n\nstruct LocatedChange {\n    locator: ChangeLocator,\n    change: Change,\n}\n\n/// Explicit projection-root materialization over commit_store.\n///\n/// Normal transaction commits must use `TrackedStateWriter::stage_delta` with\n/// already prepared commit_store refs. 
This path exists for deliberate\n/// materialization only.\npub(crate) async fn materialize_root_at<S>(\n    materializer: &mut TrackedStateMaterializer<'_, S>,\n    commit_id: &str,\n) -> Result<TrackedStateWriteReport, LixError>\nwhere\n    S: StorageReader + ?Sized,\n{\n    let input =\n        build_materialization_input(materializer.store, materializer.commit_store, commit_id)\n            .await?;\n    let delta_refs = input\n        .deltas\n        .iter()\n        .map(MaterializationDelta::as_ref)\n        .collect::<Vec<_>>();\n    materializer\n        .tracked_state\n        .writer(materializer.store, materializer.writes)\n        .stage_projection_root(\n            &input.commit_id,\n            input.parent_commit_id.as_deref(),\n            delta_refs,\n        )\n        .await\n}\n\nasync fn build_materialization_input<S>(\n    store: &mut S,\n    commit_store: &CommitStoreContext,\n    commit_id: &str,\n) -> Result<MaterializationInput, LixError>\nwhere\n    S: StorageReader + ?Sized,\n{\n    let lineage = load_first_parent_lineage(store, commit_store, commit_id).await?;\n    let mut located_changes = Vec::new();\n    for commit in lineage {\n        located_changes\n            .append(&mut load_commit_located_changes(store, commit_store, &commit).await?);\n    }\n    let deltas = project_materialization_deltas(located_changes);\n\n    Ok(MaterializationInput {\n        commit_id: commit_id.to_string(),\n        parent_commit_id: None,\n        deltas,\n    })\n}\n\nasync fn load_first_parent_lineage<S>(\n    store: &mut S,\n    commit_store: &CommitStoreContext,\n    commit_id: &str,\n) -> Result<Vec<Commit>, LixError>\nwhere\n    S: StorageReader + ?Sized,\n{\n    let mut lineage = Vec::new();\n    let mut seen = BTreeSet::new();\n    let mut current = Some(commit_id.to_string());\n    while let Some(current_id) = current {\n        if !seen.insert(current_id.clone()) {\n            return Err(LixError::new(\n                
LixError::CODE_INTERNAL_ERROR,\n                format!(\n                    \"tracked_state materialization found first-parent cycle at commit '{current_id}'\"\n                ),\n            ));\n        }\n        let commit = commit_store\n            .load_commit_from(store, &current_id)\n            .await?\n            .ok_or_else(|| missing_commit_error(&current_id))?;\n        current = commit.parent_ids.first().cloned();\n        lineage.push(commit);\n    }\n    lineage.reverse();\n    Ok(lineage)\n}\n\nasync fn load_commit_located_changes<S>(\n    store: &mut S,\n    commit_store: &CommitStoreContext,\n    commit: &Commit,\n) -> Result<Vec<LocatedChange>, LixError>\nwhere\n    S: StorageReader + ?Sized,\n{\n    let mut located_changes = Vec::new();\n    for pack_id in 0..commit.change_pack_count {\n        let changes = commit_store\n            .load_change_pack_from(store, &commit.id, pack_id)\n            .await?\n            .ok_or_else(|| missing_pack_error(\"change\", &commit.id, pack_id))?;\n        for (source_ordinal, change) in changes.into_iter().enumerate() {\n            let locator = ChangeLocator {\n                source_commit_id: commit.id.clone(),\n                source_pack_id: pack_id,\n                source_ordinal: u32::try_from(source_ordinal).map_err(|_| {\n                    LixError::new(\n                        LixError::CODE_INTERNAL_ERROR,\n                        \"tracked_state materialization change pack ordinal exceeds u32\",\n                    )\n                })?,\n                change_id: change.id.clone(),\n            };\n            located_changes.push(LocatedChange { locator, change });\n        }\n    }\n\n    let mut adopted_locators = Vec::new();\n    for pack_id in 0..commit.membership_pack_count {\n        let mut locators = commit_store\n            .load_membership_pack_from(store, &commit.id, pack_id)\n            .await?\n            .ok_or_else(|| missing_pack_error(\"membership\", 
&commit.id, pack_id))?;\n        adopted_locators.append(&mut locators);\n    }\n    let adopted_changes = load_changes_by_locators(store, commit_store, &adopted_locators).await?;\n    located_changes.extend(\n        adopted_locators\n            .into_iter()\n            .zip(adopted_changes)\n            .map(|(locator, change)| LocatedChange { locator, change }),\n    );\n    Ok(located_changes)\n}\n\nfn project_materialization_deltas(\n    changes: impl IntoIterator<Item = LocatedChange>,\n) -> Vec<MaterializationDelta> {\n    let mut projected = BTreeMap::<TrackedStateKey, MaterializationDelta>::new();\n    for LocatedChange { locator, change } in changes {\n        let key = TrackedStateKey {\n            schema_key: change.schema_key.clone(),\n            file_id: change.file_id.clone(),\n            entity_id: change.entity_id.clone(),\n        };\n        let created_at = projected\n            .get(&key)\n            .map(|delta| delta.created_at.clone())\n            .unwrap_or_else(|| change.created_at.clone());\n        let updated_at = change.created_at.clone();\n        projected.insert(\n            key,\n            MaterializationDelta {\n                change,\n                locator,\n                created_at,\n                updated_at,\n            },\n        );\n    }\n    projected.into_values().collect()\n}\n\nasync fn load_changes_by_locators(\n    store: &mut (impl StorageReader + ?Sized),\n    commit_store: &CommitStoreContext,\n    locators: &[ChangeLocator],\n) -> Result<Vec<Change>, LixError> {\n    let mut packs = BTreeMap::<(String, u32), Vec<Change>>::new();\n    for locator in locators {\n        let key = (locator.source_commit_id.clone(), locator.source_pack_id);\n        if packs.contains_key(&key) {\n            continue;\n        }\n        let changes = commit_store\n            .load_change_pack_from(store, &locator.source_commit_id, locator.source_pack_id)\n            .await?\n            .ok_or_else(|| {\n         
       missing_pack_error(\"change\", &locator.source_commit_id, locator.source_pack_id)\n            })?;\n        packs.insert(key, changes);\n    }\n\n    locators\n        .iter()\n        .map(|locator| change_from_loaded_packs(&packs, locator))\n        .collect()\n}\n\nfn change_from_loaded_packs(\n    packs: &BTreeMap<(String, u32), Vec<Change>>,\n    locator: &ChangeLocator,\n) -> Result<Change, LixError> {\n    let key = (locator.source_commit_id.clone(), locator.source_pack_id);\n    let changes = packs.get(&key).ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"tracked_state materialization lost loaded change pack ({}, {})\",\n                locator.source_commit_id, locator.source_pack_id\n            ),\n        )\n    })?;\n    let change = changes\n        .get(usize::try_from(locator.source_ordinal).map_err(|_| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"tracked_state materialization locator ordinal does not fit usize\",\n            )\n        })?)\n        .ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\n                    \"tracked_state materialization locator for '{}' points past pack ({}, {})\",\n                    locator.change_id, locator.source_commit_id, locator.source_pack_id\n                ),\n            )\n        })?;\n    if change.id != locator.change_id {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"tracked_state materialization locator expected '{}' but found '{}'\",\n                locator.change_id, change.id\n            ),\n        ));\n    }\n    Ok(change.clone())\n}\n\nfn missing_pack_error(label: &str, commit_id: &str, pack_id: u32) -> LixError {\n    LixError::new(\n        LixError::CODE_INTERNAL_ERROR,\n        format!(\"tracked_state 
materialization missing {label} pack ({commit_id}, {pack_id})\"),\n    )\n}\n\nfn missing_commit_error(commit_id: &str) -> LixError {\n    LixError::new(\n        LixError::CODE_INTERNAL_ERROR,\n        format!(\"tracked_state materialization missing commit '{commit_id}'\"),\n    )\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::commit_store::ChangeLocator;\n    use crate::entity_identity::EntityIdentity;\n\n    #[test]\n    fn materialization_delta_ref_borrows_owned_facts() {\n        let delta = MaterializationDelta {\n            change: Change {\n                id: \"change-1\".to_string(),\n                entity_id: EntityIdentity::single(\"entity-1\"),\n                schema_key: \"schema\".to_string(),\n                file_id: Some(\"file\".to_string()),\n                snapshot_ref: None,\n                metadata_ref: None,\n                created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            },\n            locator: ChangeLocator {\n                source_commit_id: \"commit-1\".to_string(),\n                source_pack_id: 7,\n                source_ordinal: 11,\n                change_id: \"change-1\".to_string(),\n            },\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-02-01T00:00:00Z\".to_string(),\n        };\n\n        let delta_ref = delta.as_ref();\n\n        assert_eq!(delta_ref.change.id, \"change-1\");\n        assert_eq!(delta_ref.change.schema_key, \"schema\");\n        assert_eq!(delta_ref.change.file_id, Some(\"file\"));\n        assert_eq!(delta_ref.locator.source_commit_id, \"commit-1\");\n        assert_eq!(delta_ref.locator.source_pack_id, 7);\n        assert_eq!(delta_ref.locator.source_ordinal, 11);\n        assert_eq!(delta_ref.created_at, \"2026-01-01T00:00:00Z\");\n        assert_eq!(delta_ref.updated_at, \"2026-02-01T00:00:00Z\");\n    }\n\n    #[test]\n    fn change_from_loaded_packs_resolves_locator_by_pack_and_ordinal() {\n        let mut 
packs = BTreeMap::new();\n        packs.insert(\n            (\"source-commit\".to_string(), 3),\n            vec![change(\"change-0\"), change(\"change-1\"), change(\"change-2\")],\n        );\n        let locator = ChangeLocator {\n            source_commit_id: \"source-commit\".to_string(),\n            source_pack_id: 3,\n            source_ordinal: 1,\n            change_id: \"change-1\".to_string(),\n        };\n\n        let resolved = change_from_loaded_packs(&packs, &locator).expect(\"locator should resolve\");\n\n        assert_eq!(resolved.id, \"change-1\");\n    }\n\n    #[test]\n    fn change_from_loaded_packs_rejects_locator_change_id_mismatch() {\n        let mut packs = BTreeMap::new();\n        packs.insert((\"source-commit\".to_string(), 3), vec![change(\"actual\")]);\n        let locator = ChangeLocator {\n            source_commit_id: \"source-commit\".to_string(),\n            source_pack_id: 3,\n            source_ordinal: 0,\n            change_id: \"expected\".to_string(),\n        };\n\n        let error =\n            change_from_loaded_packs(&packs, &locator).expect_err(\"mismatched locator should fail\");\n\n        assert!(error.message.contains(\"expected\"));\n        assert!(error.message.contains(\"actual\"));\n    }\n\n    #[test]\n    fn project_materialization_deltas_keeps_first_seen_created_at_and_latest_updated_at() {\n        let deltas = project_materialization_deltas(vec![\n            located_change(\n                \"commit-1\",\n                0,\n                \"change-create\",\n                \"entity-1\",\n                \"2026-01-01T00:00:00Z\",\n            ),\n            located_change(\n                \"commit-2\",\n                0,\n                \"change-update\",\n                \"entity-1\",\n                \"2026-02-01T00:00:00Z\",\n            ),\n        ]);\n\n        assert_eq!(deltas.len(), 1);\n        let delta = &deltas[0];\n        assert_eq!(delta.change.id, \"change-update\");\n       
 assert_eq!(delta.locator.source_commit_id, \"commit-2\");\n        assert_eq!(delta.created_at, \"2026-01-01T00:00:00Z\");\n        assert_eq!(delta.updated_at, \"2026-02-01T00:00:00Z\");\n    }\n\n    #[test]\n    fn project_materialization_deltas_uses_adopted_change_time_not_target_commit_time() {\n        let deltas = project_materialization_deltas(vec![located_change(\n            \"source-commit\",\n            0,\n            \"adopted-change\",\n            \"entity-1\",\n            \"2026-01-01T00:00:00Z\",\n        )]);\n\n        assert_eq!(deltas.len(), 1);\n        assert_eq!(deltas[0].created_at, \"2026-01-01T00:00:00Z\");\n        assert_eq!(deltas[0].updated_at, \"2026-01-01T00:00:00Z\");\n    }\n\n    #[test]\n    fn project_materialization_deltas_tracks_entities_independently() {\n        let deltas = project_materialization_deltas(vec![\n            located_change(\n                \"commit-1\",\n                0,\n                \"entity-a-create\",\n                \"entity-a\",\n                \"2026-01-01T00:00:00Z\",\n            ),\n            located_change(\n                \"commit-1\",\n                1,\n                \"entity-b-create\",\n                \"entity-b\",\n                \"2026-01-02T00:00:00Z\",\n            ),\n            located_change(\n                \"commit-2\",\n                0,\n                \"entity-a-update\",\n                \"entity-a\",\n                \"2026-02-01T00:00:00Z\",\n            ),\n        ]);\n\n        let entity_a = deltas\n            .iter()\n            .find(|delta| delta.change.entity_id == EntityIdentity::single(\"entity-a\"))\n            .expect(\"entity-a delta\");\n        let entity_b = deltas\n            .iter()\n            .find(|delta| delta.change.entity_id == EntityIdentity::single(\"entity-b\"))\n            .expect(\"entity-b delta\");\n        assert_eq!(entity_a.change.id, \"entity-a-update\");\n        assert_eq!(entity_a.created_at, 
\"2026-01-01T00:00:00Z\");\n        assert_eq!(entity_a.updated_at, \"2026-02-01T00:00:00Z\");\n        assert_eq!(entity_b.change.id, \"entity-b-create\");\n        assert_eq!(entity_b.created_at, \"2026-01-02T00:00:00Z\");\n        assert_eq!(entity_b.updated_at, \"2026-01-02T00:00:00Z\");\n    }\n\n    fn change(id: &str) -> Change {\n        Change {\n            id: id.to_string(),\n            entity_id: EntityIdentity::single(\"entity-1\"),\n            schema_key: \"schema\".to_string(),\n            file_id: Some(\"file\".to_string()),\n            snapshot_ref: None,\n            metadata_ref: None,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n        }\n    }\n\n    fn located_change(\n        commit_id: &str,\n        source_ordinal: u32,\n        change_id: &str,\n        entity_id: &str,\n        created_at: &str,\n    ) -> LocatedChange {\n        LocatedChange {\n            locator: ChangeLocator {\n                source_commit_id: commit_id.to_string(),\n                source_pack_id: 0,\n                source_ordinal,\n                change_id: change_id.to_string(),\n            },\n            change: Change {\n                id: change_id.to_string(),\n                entity_id: EntityIdentity::single(entity_id),\n                schema_key: \"schema\".to_string(),\n                file_id: Some(\"file\".to_string()),\n                snapshot_ref: None,\n                metadata_ref: None,\n                created_at: created_at.to_string(),\n            },\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/tracked_state/merge.rs",
    "content": "use std::collections::{BTreeMap, BTreeSet};\n\nuse crate::tracked_state::{\n    MaterializedTrackedStateRow, TrackedStateDiff, TrackedStateDiffEntry, TrackedStateDiffIdentity,\n};\nuse crate::LixError;\n\n/// Planned tracked-state merge result.\n///\n/// This is intentionally a pure planner. It does not know about versions,\n/// sessions, changelog writes, or live-state overlays. Callers provide two\n/// diffs from the same merge base:\n///\n/// - `base -> target`: what the destination version changed.\n/// - `base -> source`: what the incoming version changed.\n///\n/// The planner returns source-side patches that can be applied to the target\n/// root plus first-class conflicts for identities changed differently on both\n/// sides.\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\npub(crate) struct TrackedStateMergePlan {\n    pub(crate) patches: Vec<TrackedStateMergePatch>,\n    pub(crate) conflicts: Vec<TrackedStateMergeConflict>,\n}\n\n/// One source-side patch to apply to the target root.\n///\n/// Merge patches are expressed as canonical change adoption, not as new row\n/// writes. The projected row carries the target-root materialization shape,\n/// including tombstones, while `change_id` preserves the source canonical\n/// change identity.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) enum TrackedStateMergePatch {\n    Adopt {\n        identity: TrackedStateDiffIdentity,\n        change_id: String,\n        projected_row: MaterializedTrackedStateRow,\n    },\n}\n\nimpl TrackedStateMergePatch {\n    #[cfg(test)]\n    pub(crate) fn identity(&self) -> &TrackedStateDiffIdentity {\n        match self {\n            Self::Adopt { identity, .. } => identity,\n        }\n    }\n\n    pub(crate) fn change_id(&self) -> &str {\n        match self {\n            Self::Adopt { change_id, .. 
} => change_id,\n        }\n    }\n\n    pub(crate) fn projected_row(&self) -> &MaterializedTrackedStateRow {\n        match self {\n            Self::Adopt { projected_row, .. } => projected_row,\n        }\n    }\n}\n\n/// One identity that both sides changed incompatibly.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct TrackedStateMergeConflict {\n    pub(crate) identity: TrackedStateDiffIdentity,\n    pub(crate) target: TrackedStateDiffEntry,\n    pub(crate) source: TrackedStateDiffEntry,\n}\n\n/// Plans a three-way tracked-state merge from two base-relative diffs.\n///\n/// This follows the same shape as prolly-tree merge systems: compare\n/// `base -> target` and `base -> source` by identity, emit source-only patches\n/// for the target root, ignore target-only changes, collapse convergent\n/// changes, and report divergent same-identity changes as conflicts.\npub(crate) fn plan_merge(\n    target_diff: &TrackedStateDiff,\n    source_diff: &TrackedStateDiff,\n) -> Result<TrackedStateMergePlan, LixError> {\n    let target_by_identity = diff_by_identity(target_diff)?;\n    let source_by_identity = diff_by_identity(source_diff)?;\n    let identities = target_by_identity\n        .keys()\n        .chain(source_by_identity.keys())\n        .cloned()\n        .collect::<BTreeSet<_>>();\n\n    let mut plan = TrackedStateMergePlan::default();\n    for identity in identities {\n        match (\n            target_by_identity.get(&identity),\n            source_by_identity.get(&identity),\n        ) {\n            (None, None) => {}\n            (Some(_target), None) => {\n                // Target already changed this identity. 
Source did not, so\n                // there is nothing to apply.\n            }\n            (None, Some(source)) => {\n                plan.patches\n                    .push(adopt_source_change_patch(identity, source)?);\n            }\n            (Some(target), Some(source)) if same_final_state(target, source) => {\n                // Both sides reached the same visible state. Keep target to\n                // avoid writing duplicate source metadata.\n            }\n            (Some(target), Some(source)) => {\n                plan.conflicts.push(TrackedStateMergeConflict {\n                    identity,\n                    target: (*target).clone(),\n                    source: (*source).clone(),\n                });\n            }\n        }\n    }\n\n    Ok(plan)\n}\n\nfn diff_by_identity(\n    diff: &TrackedStateDiff,\n) -> Result<BTreeMap<TrackedStateDiffIdentity, &TrackedStateDiffEntry>, LixError> {\n    let mut entries = BTreeMap::new();\n    for entry in &diff.entries {\n        if entries.insert(entry.identity.clone(), entry).is_some() {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\n                    \"tracked-state merge received duplicate diff entry for schema '{}' entity '{}'\",\n                    entry.identity.schema_key,\n                    entry.identity.entity_id.as_json_array_text()?\n                ),\n            ));\n        }\n    }\n    Ok(entries)\n}\n\nfn adopt_source_change_patch(\n    identity: TrackedStateDiffIdentity,\n    entry: &TrackedStateDiffEntry,\n) -> Result<TrackedStateMergePatch, LixError> {\n    let Some(row) = entry.after.clone() else {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\n                \"tracked-state merge cannot apply source removal for schema '{}' entity '{}' without a tombstone row\",\n                entry.identity.schema_key,\n                
entry.identity.entity_id.as_json_array_text()?\n            ),\n        ));\n    };\n    Ok(TrackedStateMergePatch::Adopt {\n        identity,\n        change_id: row.change_id.clone(),\n        projected_row: row,\n    })\n}\n\nfn same_final_state(target: &TrackedStateDiffEntry, source: &TrackedStateDiffEntry) -> bool {\n    match (target.after.as_ref(), source.after.as_ref()) {\n        (None, None) => true,\n        (Some(target), Some(source)) if !row_is_live(target) && !row_is_live(source) => true,\n        (Some(target), Some(source)) if row_is_live(target) && row_is_live(source) => {\n            tracked_row_payload_eq(target, source)\n        }\n        _ => false,\n    }\n}\n\nfn row_is_live(row: &MaterializedTrackedStateRow) -> bool {\n    row.snapshot_content.is_some()\n}\n\nfn tracked_row_payload_eq(\n    left: &MaterializedTrackedStateRow,\n    right: &MaterializedTrackedStateRow,\n) -> bool {\n    left.snapshot_content == right.snapshot_content && left.metadata == right.metadata\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::entity_identity::EntityIdentity;\n    use crate::tracked_state::TrackedStateDiffKind;\n\n    #[test]\n    fn source_add_applies() {\n        let plan = plan_merge(\n            &TrackedStateDiff::default(),\n            &diff(vec![entry(\n                \"entity-a\",\n                TrackedStateDiffKind::Added,\n                None,\n                Some(row(\"entity-a\", \"source\")),\n            )]),\n        )\n        .expect(\"merge should plan\");\n\n        assert_eq!(patch_ids(&plan), vec![\"entity-a\"]);\n        assert!(plan.conflicts.is_empty());\n    }\n\n    #[test]\n    fn source_modify_applies() {\n        let plan = plan_merge(\n            &TrackedStateDiff::default(),\n            &diff(vec![entry(\n                \"entity-a\",\n                TrackedStateDiffKind::Modified,\n                Some(row_with_value(\"entity-a\", \"base\", \"base\")),\n                
Some(row_with_value(\"entity-a\", \"source\", \"source\")),\n            )]),\n        )\n        .expect(\"merge should plan\");\n\n        assert_eq!(patch_ids(&plan), vec![\"entity-a\"]);\n        assert_eq!(\n            plan.patches[0].projected_row().snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"source\\\"}\")\n        );\n        assert_eq!(plan.patches[0].change_id(), \"source\");\n    }\n\n    #[test]\n    fn source_delete_applies_tombstone() {\n        let plan = plan_merge(\n            &TrackedStateDiff::default(),\n            &diff(vec![entry(\n                \"entity-a\",\n                TrackedStateDiffKind::Removed,\n                Some(row(\"entity-a\", \"base\")),\n                Some(tombstone(\"entity-a\", \"source-delete\")),\n            )]),\n        )\n        .expect(\"merge should plan\");\n\n        assert_eq!(patch_ids(&plan), vec![\"entity-a\"]);\n        assert_eq!(plan.patches[0].projected_row().snapshot_content, None);\n        assert_eq!(plan.patches[0].change_id(), \"source-delete\");\n    }\n\n    #[test]\n    fn target_only_change_is_noop() {\n        let plan = plan_merge(\n            &diff(vec![entry(\n                \"entity-a\",\n                TrackedStateDiffKind::Modified,\n                Some(row(\"entity-a\", \"base\")),\n                Some(row(\"entity-a\", \"target\")),\n            )]),\n            &TrackedStateDiff::default(),\n        )\n        .expect(\"merge should plan\");\n\n        assert!(plan.patches.is_empty());\n        assert!(plan.conflicts.is_empty());\n    }\n\n    #[test]\n    fn both_sides_same_final_value_is_convergent_noop() {\n        let target = entry(\n            \"entity-a\",\n            TrackedStateDiffKind::Modified,\n            Some(row_with_value(\"entity-a\", \"base\", \"base\")),\n            Some(row_with_value(\"entity-a\", \"target\", \"same\")),\n        );\n        let source = entry(\n            \"entity-a\",\n            
TrackedStateDiffKind::Modified,\n            Some(row_with_value(\"entity-a\", \"base\", \"base\")),\n            Some(row_with_value(\"entity-a\", \"source\", \"same\")),\n        );\n\n        let plan = plan_merge(&diff(vec![target]), &diff(vec![source])).expect(\"merge should plan\");\n\n        assert!(plan.patches.is_empty());\n        assert!(plan.conflicts.is_empty());\n    }\n\n    #[test]\n    fn both_sides_delete_is_convergent_noop() {\n        let target = entry(\n            \"entity-a\",\n            TrackedStateDiffKind::Removed,\n            Some(row(\"entity-a\", \"base\")),\n            Some(tombstone(\"entity-a\", \"target-delete\")),\n        );\n        let source = entry(\n            \"entity-a\",\n            TrackedStateDiffKind::Removed,\n            Some(row(\"entity-a\", \"base\")),\n            Some(tombstone(\"entity-a\", \"source-delete\")),\n        );\n\n        let plan = plan_merge(&diff(vec![target]), &diff(vec![source])).expect(\"merge should plan\");\n\n        assert!(plan.patches.is_empty());\n        assert!(plan.conflicts.is_empty());\n    }\n\n    #[test]\n    fn different_modifications_conflict() {\n        let target = entry(\n            \"entity-a\",\n            TrackedStateDiffKind::Modified,\n            Some(row_with_value(\"entity-a\", \"base\", \"base\")),\n            Some(row_with_value(\"entity-a\", \"target\", \"target\")),\n        );\n        let source = entry(\n            \"entity-a\",\n            TrackedStateDiffKind::Modified,\n            Some(row_with_value(\"entity-a\", \"base\", \"base\")),\n            Some(row_with_value(\"entity-a\", \"source\", \"source\")),\n        );\n\n        let plan = plan_merge(&diff(vec![target]), &diff(vec![source])).expect(\"merge should plan\");\n\n        assert!(plan.patches.is_empty());\n        assert_eq!(conflict_ids(&plan), vec![\"entity-a\"]);\n    }\n\n    #[test]\n    fn delete_modify_conflicts() {\n        let target = entry(\n            \"entity-a\",\n  
          TrackedStateDiffKind::Removed,\n            Some(row(\"entity-a\", \"base\")),\n            Some(tombstone(\"entity-a\", \"target-delete\")),\n        );\n        let source = entry(\n            \"entity-a\",\n            TrackedStateDiffKind::Modified,\n            Some(row(\"entity-a\", \"base\")),\n            Some(row_with_value(\"entity-a\", \"source\", \"source\")),\n        );\n\n        let plan = plan_merge(&diff(vec![target]), &diff(vec![source])).expect(\"merge should plan\");\n\n        assert_eq!(conflict_ids(&plan), vec![\"entity-a\"]);\n    }\n\n    #[test]\n    fn modify_delete_conflicts() {\n        let target = entry(\n            \"entity-a\",\n            TrackedStateDiffKind::Modified,\n            Some(row(\"entity-a\", \"base\")),\n            Some(row_with_value(\"entity-a\", \"target\", \"target\")),\n        );\n        let source = entry(\n            \"entity-a\",\n            TrackedStateDiffKind::Removed,\n            Some(row(\"entity-a\", \"base\")),\n            Some(tombstone(\"entity-a\", \"source-delete\")),\n        );\n\n        let plan = plan_merge(&diff(vec![target]), &diff(vec![source])).expect(\"merge should plan\");\n\n        assert_eq!(conflict_ids(&plan), vec![\"entity-a\"]);\n    }\n\n    #[test]\n    fn source_removal_without_tombstone_errors() {\n        let error = plan_merge(\n            &TrackedStateDiff::default(),\n            &diff(vec![entry(\n                \"entity-a\",\n                TrackedStateDiffKind::Removed,\n                Some(row(\"entity-a\", \"base\")),\n                None,\n            )]),\n        )\n        .expect_err(\"merge should reject impossible source removal\");\n\n        assert!(error.message.contains(\"without a tombstone row\"));\n    }\n\n    #[test]\n    fn patch_and_conflict_order_is_deterministic_by_identity() {\n        let target = diff(vec![entry(\n            \"entity-b\",\n            TrackedStateDiffKind::Modified,\n            
Some(row_with_value(\"entity-b\", \"base\", \"base\")),\n            Some(row_with_value(\"entity-b\", \"target\", \"target\")),\n        )]);\n        let source = diff(vec![\n            entry(\n                \"entity-c\",\n                TrackedStateDiffKind::Added,\n                None,\n                Some(row(\"entity-c\", \"source-c\")),\n            ),\n            entry(\n                \"entity-a\",\n                TrackedStateDiffKind::Added,\n                None,\n                Some(row(\"entity-a\", \"source-a\")),\n            ),\n            entry(\n                \"entity-b\",\n                TrackedStateDiffKind::Modified,\n                Some(row_with_value(\"entity-b\", \"base\", \"base\")),\n                Some(row_with_value(\"entity-b\", \"source\", \"source\")),\n            ),\n        ]);\n\n        let plan = plan_merge(&target, &source).expect(\"merge should plan\");\n\n        assert_eq!(patch_ids(&plan), vec![\"entity-a\", \"entity-c\"]);\n        assert_eq!(conflict_ids(&plan), vec![\"entity-b\"]);\n    }\n\n    fn diff(entries: Vec<TrackedStateDiffEntry>) -> TrackedStateDiff {\n        TrackedStateDiff { entries }\n    }\n\n    fn entry(\n        entity_id: &str,\n        kind: TrackedStateDiffKind,\n        before: Option<MaterializedTrackedStateRow>,\n        after: Option<MaterializedTrackedStateRow>,\n    ) -> TrackedStateDiffEntry {\n        TrackedStateDiffEntry {\n            identity: TrackedStateDiffIdentity {\n                schema_key: \"test_schema\".to_string(),\n                entity_id: EntityIdentity::single(entity_id),\n                file_id: None,\n            },\n            kind,\n            before,\n            after,\n        }\n    }\n\n    fn patch_ids(plan: &TrackedStateMergePlan) -> Vec<String> {\n        plan.patches\n            .iter()\n            .map(|entry| {\n                entry\n                    .identity()\n                    .entity_id\n                    
.as_single_string_owned()\n                    .expect(\"identity\")\n            })\n            .collect()\n    }\n\n    fn conflict_ids(plan: &TrackedStateMergePlan) -> Vec<String> {\n        plan.conflicts\n            .iter()\n            .map(|entry| {\n                entry\n                    .identity\n                    .entity_id\n                    .as_single_string_owned()\n                    .expect(\"identity\")\n            })\n            .collect()\n    }\n\n    fn tombstone(entity_id: &str, change_id: &str) -> MaterializedTrackedStateRow {\n        let mut row = row(entity_id, change_id);\n        row.snapshot_content = None;\n        row.deleted = true;\n        row\n    }\n\n    fn row(entity_id: &str, change_id: &str) -> MaterializedTrackedStateRow {\n        row_with_value(entity_id, change_id, \"value\")\n    }\n\n    fn row_with_value(\n        entity_id: &str,\n        change_id: &str,\n        value: &str,\n    ) -> MaterializedTrackedStateRow {\n        MaterializedTrackedStateRow {\n            entity_id: EntityIdentity::single(entity_id),\n            schema_key: \"test_schema\".to_string(),\n            file_id: None,\n            snapshot_content: Some(format!(\"{{\\\"value\\\":\\\"{value}\\\"}}\")),\n            metadata: None,\n            deleted: false,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n            change_id: change_id.to_string(),\n            commit_id: change_id.replace(\"change\", \"commit\"),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/tracked_state/mod.rs",
    "content": "mod by_file_index;\nmod codec;\nmod context;\nmod diff;\nmod materialization;\nmod materializer;\nmod merge;\nmod storage;\nmod tree;\nmod types;\n\n#[allow(unused_imports)]\npub(crate) use context::{\n    TrackedStateContext, TrackedStateMaterializer, TrackedStateStoreReader, TrackedStateWriter,\n};\n#[allow(unused_imports)]\npub(crate) use diff::{\n    TrackedStateDiff, TrackedStateDiffEntry, TrackedStateDiffIdentity, TrackedStateDiffKind,\n    TrackedStateDiffRequest,\n};\npub(crate) use materialization::{materialize_index_entries, TrackedMaterializationProjection};\n#[allow(unused_imports)]\npub(crate) use merge::{\n    plan_merge, TrackedStateMergeConflict, TrackedStateMergePatch, TrackedStateMergePlan,\n};\npub(crate) use storage::{load_delta_pack, DeltaJsonPackIndexesRef};\n#[allow(unused_imports)]\npub(crate) use types::{\n    MaterializedTrackedStateRow, TrackedStateDeltaRef, TrackedStateFilter,\n    TrackedStateIndexValueRef, TrackedStateKeyRef, TrackedStateProjection, TrackedStateRowRequest,\n    TrackedStateScanRequest,\n};\n"
  },
  {
    "path": "packages/engine/src/tracked_state/storage.rs",
    "content": "use std::collections::HashMap;\n\nuse crate::json_store::JsonStoreContext;\nuse crate::storage::{KvGetGroup, KvGetRequest, StorageReader, StorageWriteSet};\nuse crate::tracked_state::codec::PendingChunkWrite;\nuse crate::tracked_state::types::{\n    TrackedStateDeltaEntry, TrackedStateDeltaRef, TrackedStateRootId, TRACKED_STATE_HASH_BYTES,\n};\nuse crate::LixError;\n\npub(crate) const TRACKED_STATE_CHUNK_NAMESPACE: &'static str = \"tracked_state.tree.chunk\";\npub(crate) const TRACKED_STATE_ROOT_NAMESPACE: &'static str = \"tracked_state.tree.root\";\npub(crate) const TRACKED_STATE_BY_FILE_ROOT_NAMESPACE: &'static str =\n    \"tracked_state.tree.root.by_file\";\npub(crate) const TRACKED_STATE_DELTA_PACK_NAMESPACE: &'static str = \"tracked_state.delta_pack\";\n\nasync fn get_one(\n    store: &mut (impl StorageReader + ?Sized),\n    namespace: &str,\n    key: Vec<u8>,\n) -> Result<Option<Vec<u8>>, LixError> {\n    Ok(store\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: namespace.to_string(),\n                keys: vec![key],\n            }],\n        })\n        .await?\n        .groups\n        .into_iter()\n        .next()\n        .and_then(|group| group.single_value_owned()))\n}\n\npub(crate) async fn load_root(\n    store: &mut (impl StorageReader + ?Sized),\n    commit_id: &str,\n) -> Result<Option<TrackedStateRootId>, LixError> {\n    let Some(bytes) = get_one(\n        store,\n        TRACKED_STATE_ROOT_NAMESPACE,\n        commit_id.as_bytes().to_vec(),\n    )\n    .await?\n    else {\n        return Ok(None);\n    };\n    TrackedStateRootId::from_slice(&bytes).map(Some)\n}\n\npub(crate) fn stage_root(\n    writes: &mut StorageWriteSet,\n    commit_id: &str,\n    root_id: &TrackedStateRootId,\n) {\n    writes.put(\n        TRACKED_STATE_ROOT_NAMESPACE,\n        commit_id.as_bytes().to_vec(),\n        root_id.as_bytes().to_vec(),\n    );\n}\n\npub(crate) async fn load_by_file_root(\n    
store: &mut (impl StorageReader + ?Sized),\n    commit_id: &str,\n) -> Result<Option<TrackedStateRootId>, LixError> {\n    let Some(bytes) = get_one(\n        store,\n        TRACKED_STATE_BY_FILE_ROOT_NAMESPACE,\n        commit_id.as_bytes().to_vec(),\n    )\n    .await?\n    else {\n        return Ok(None);\n    };\n    TrackedStateRootId::from_slice(&bytes).map(Some)\n}\n\npub(crate) fn stage_by_file_root(\n    writes: &mut StorageWriteSet,\n    commit_id: &str,\n    root_id: &TrackedStateRootId,\n) {\n    writes.put(\n        TRACKED_STATE_BY_FILE_ROOT_NAMESPACE,\n        commit_id.as_bytes().to_vec(),\n        root_id.as_bytes().to_vec(),\n    );\n}\n\npub(crate) async fn load_delta_pack(\n    store: &mut (impl StorageReader + ?Sized),\n    commit_id: &str,\n) -> Result<Option<Vec<TrackedStateDeltaEntry>>, LixError> {\n    let json_store = JsonStoreContext::new();\n    let result = store\n        .get_values(KvGetRequest {\n            groups: vec![\n                KvGetGroup {\n                    namespace: TRACKED_STATE_DELTA_PACK_NAMESPACE.to_string(),\n                    keys: vec![commit_id.as_bytes().to_vec()],\n                },\n                json_store.commit_pack_get_group(commit_id, 0),\n            ],\n        })\n        .await?;\n    let mut groups = result.groups.into_iter();\n    let delta_group = groups.next().ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"tracked-state delta pack load returned no delta result group\",\n        )\n    })?;\n    let json_pack_group = groups.next().ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"tracked-state delta pack load returned no JSON pack result group\",\n        )\n    })?;\n    let Some(bytes) = delta_group.single_value_owned() else {\n        return Ok(None);\n    };\n    let pack_refs = if crate::tracked_state::codec::delta_pack_uses_json_pack_indexes(&bytes)? 
{\n        json_pack_group\n            .single_value_owned()\n            .map(|bytes| json_store.decode_pack_refs(&bytes))\n            .transpose()?\n    } else {\n        None\n    };\n    let (stored_commit_id, entries) =\n        crate::tracked_state::codec::decode_delta_pack(&bytes, pack_refs.as_deref())?;\n    if stored_commit_id != commit_id {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"tracked-state delta pack identity mismatch: expected '{commit_id}', got '{stored_commit_id}'\"\n            ),\n        ));\n    }\n    Ok(Some(entries))\n}\n\npub(crate) async fn delta_pack_exists(\n    store: &mut (impl StorageReader + ?Sized),\n    commit_id: &str,\n) -> Result<bool, LixError> {\n    let result = store\n        .exists_many(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: TRACKED_STATE_DELTA_PACK_NAMESPACE.to_string(),\n                keys: vec![commit_id.as_bytes().to_vec()],\n            }],\n        })\n        .await?;\n    let group = result.groups.into_iter().next().ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"tracked-state delta pack existence check returned no result group\",\n        )\n    })?;\n    group.exists.into_iter().next().ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"tracked-state delta pack existence check returned no result\",\n        )\n    })\n}\n\npub(crate) fn stage_delta_pack_refs(\n    writes: &mut StorageWriteSet,\n    commit_id: &str,\n    deltas: &[TrackedStateDeltaRef<'_>],\n) -> Result<(), LixError> {\n    writes.put(\n        TRACKED_STATE_DELTA_PACK_NAMESPACE,\n        commit_id.as_bytes().to_vec(),\n        crate::tracked_state::codec::encode_delta_pack_refs(commit_id, deltas)?,\n    );\n    Ok(())\n}\n\npub(crate) struct DeltaJsonPackIndexesRef<'a> {\n    pub(crate) commit_id: &'a str,\n    pub(crate) 
pack_id: u32,\n    pub(crate) indexes: &'a std::collections::HashMap<[u8; TRACKED_STATE_HASH_BYTES], usize>,\n}\n\npub(crate) fn stage_delta_pack_refs_with_json_pack_indexes(\n    writes: &mut StorageWriteSet,\n    commit_id: &str,\n    deltas: &[TrackedStateDeltaRef<'_>],\n    json_pack_indexes: DeltaJsonPackIndexesRef<'_>,\n) -> Result<(), LixError> {\n    if json_pack_indexes.commit_id != commit_id {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"tracked-state delta JSON pack indexes for '{}' cannot encode delta pack '{}'\",\n                json_pack_indexes.commit_id, commit_id\n            ),\n        ));\n    }\n    if json_pack_indexes.pack_id != 0 {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"tracked-state delta JSON pack indexes only support pack 0, got pack {}\",\n                json_pack_indexes.pack_id\n            ),\n        ));\n    }\n    if json_pack_indexes.indexes.is_empty() {\n        return stage_delta_pack_refs(writes, commit_id, deltas);\n    }\n    writes.put(\n        TRACKED_STATE_DELTA_PACK_NAMESPACE,\n        commit_id.as_bytes().to_vec(),\n        crate::tracked_state::codec::encode_delta_pack_refs_with_json_pack_indexes(\n            commit_id,\n            deltas,\n            Some(json_pack_indexes.indexes),\n        )?,\n    );\n    Ok(())\n}\n\npub(crate) async fn read_chunk(\n    store: &mut (impl StorageReader + ?Sized),\n    hash: &[u8; TRACKED_STATE_HASH_BYTES],\n) -> Result<Option<Vec<u8>>, LixError> {\n    get_one(store, TRACKED_STATE_CHUNK_NAMESPACE, hash.to_vec()).await\n}\n\npub(crate) fn verify_chunk_hash(\n    expected: &[u8; TRACKED_STATE_HASH_BYTES],\n    bytes: &[u8],\n) -> Result<(), LixError> {\n    let actual = crate::tracked_state::codec::hash_bytes(bytes);\n    if &actual != expected {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n         
   \"tracked-state chunk hash mismatch\",\n        ));\n    }\n    Ok(())\n}\n\npub(crate) fn stage_chunks(writes: &mut StorageWriteSet, chunks: &[PendingChunkWrite]) {\n    for chunk in chunks {\n        writes.put(\n            TRACKED_STATE_CHUNK_NAMESPACE,\n            chunk.hash.to_vec(),\n            chunk.data.clone(),\n        );\n    }\n}\n\n#[allow(dead_code)]\n#[derive(Debug, Default)]\npub(crate) struct TrackedStateChunkOverlay {\n    chunks: HashMap<[u8; TRACKED_STATE_HASH_BYTES], Vec<u8>>,\n}\n\nimpl TrackedStateChunkOverlay {\n    pub(crate) fn new() -> Self {\n        Self::default()\n    }\n\n    pub(crate) async fn read_chunk(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        hash: &[u8; TRACKED_STATE_HASH_BYTES],\n    ) -> Result<Option<Vec<u8>>, LixError> {\n        if let Some(bytes) = self.chunks.get(hash) {\n            return Ok(Some(bytes.clone()));\n        }\n        read_chunk(store, hash).await\n    }\n\n    pub(crate) fn stage_chunks(\n        &mut self,\n        writes: &mut StorageWriteSet,\n        chunks: &[PendingChunkWrite],\n    ) {\n        for chunk in chunks {\n            self.chunks.insert(chunk.hash, chunk.data.clone());\n        }\n        stage_chunks(writes, chunks);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::fs;\n    use std::path::{Path, PathBuf};\n\n    #[test]\n    fn production_tracked_state_sources_do_not_call_storage_batch_writer() {\n        let tracked_state_dir = Path::new(env!(\"CARGO_MANIFEST_DIR\")).join(\"src/tracked_state\");\n        let forbidden = [\"write\", \"kv\", \"batch\"].join(\"_\");\n\n        for path in rust_sources(&tracked_state_dir) {\n            let source =\n                fs::read_to_string(&path).expect(\"tracked_state source should be readable\");\n            for (line_number, line) in production_lines(&source) {\n                assert!(\n                    !line.contains(&forbidden),\n                    \"production tracked_state source must 
stage into StorageWriteSet instead of calling {forbidden}: {}:{}\",\n                    path.display(),\n                    line_number\n                );\n            }\n        }\n    }\n\n    fn rust_sources(dir: &Path) -> Vec<PathBuf> {\n        let mut sources = Vec::new();\n        for entry in fs::read_dir(dir).expect(\"tracked_state source dir should be readable\") {\n            let path = entry\n                .expect(\"tracked_state source entry should be readable\")\n                .path();\n            if path.is_dir() {\n                sources.extend(rust_sources(&path));\n            } else if path.extension().and_then(|extension| extension.to_str()) == Some(\"rs\") {\n                sources.push(path);\n            }\n        }\n        sources\n    }\n\n    fn production_lines(source: &str) -> Vec<(usize, &str)> {\n        let mut lines = Vec::new();\n        let mut skipping_cfg_test_item = false;\n        let mut pending_cfg_test = false;\n        let mut item_started = false;\n        let mut brace_depth = 0i32;\n\n        for (index, line) in source.lines().enumerate() {\n            let trimmed = line.trim();\n            if trimmed == \"#[cfg(test)]\" {\n                pending_cfg_test = true;\n                continue;\n            }\n\n            if pending_cfg_test || skipping_cfg_test_item {\n                if pending_cfg_test && !item_started && trimmed.ends_with(';') {\n                    pending_cfg_test = false;\n                    continue;\n                }\n                let opens = line.matches('{').count() as i32;\n                let closes = line.matches('}').count() as i32;\n                if opens > 0 {\n                    item_started = true;\n                    skipping_cfg_test_item = true;\n                }\n                if item_started {\n                    brace_depth += opens - closes;\n                    if brace_depth <= 0 {\n                        pending_cfg_test = false;\n                  
      skipping_cfg_test_item = false;\n                        item_started = false;\n                        brace_depth = 0;\n                    }\n                }\n                continue;\n            }\n\n            lines.push((index + 1, line));\n        }\n\n        lines\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/tracked_state/tree.rs",
    "content": "use std::{\n    collections::{BTreeMap, VecDeque},\n    future::Future,\n    ops::Range,\n    pin::Pin,\n};\n\nuse crate::storage::{StorageReader, StorageWriteSet};\nuse crate::tracked_state::codec::{\n    boundary_trigger, child_summary_from_node, decode_key, decode_key_with_trusted_prefix,\n    decode_node, decode_node_ref, decode_value, decode_visible_value, encode_internal_node,\n    encode_internal_node_refs, encode_key, encode_leaf_node, encode_leaf_node_refs,\n    encode_schema_file_prefix, encode_schema_key_prefix, ChildSummary, ChildSummaryRef,\n    DecodedLeafNodeRef, DecodedNode, DecodedNodeRef, EncodedLeafEntry, EncodedLeafEntryRef,\n    PendingChunkWrite,\n};\nuse crate::tracked_state::storage;\nuse crate::tracked_state::types::{\n    TrackedStateApplyResult, TrackedStateIndexValue, TrackedStateKey, TrackedStateMutation,\n    TrackedStateRootId, TrackedStateTreeDiffEntry, TrackedStateTreeScanRequest,\n    TRACKED_STATE_HASH_BYTES,\n};\nuse crate::{LixError, NullableKeyFilter};\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct TrackedStateTreeOptions {\n    pub(crate) target_chunk_bytes: usize,\n    pub(crate) min_chunk_bytes: usize,\n    pub(crate) max_chunk_bytes: usize,\n}\n\nenum MutationApply<T> {\n    Applied(TrackedStateApplyResult),\n    Fallback(T),\n}\n\nimpl Default for TrackedStateTreeOptions {\n    fn default() -> Self {\n        Self {\n            target_chunk_bytes: 4 * 1024,\n            min_chunk_bytes: 512,\n            max_chunk_bytes: 16 * 1024,\n        }\n    }\n}\n\n/// Content-addressed tracked-state tree operations.\n///\n/// This type owns tracked-state tree mechanics only. 
Version refs, untracked overlay,\n/// and SQL visibility remain outside the tree.\n#[derive(Debug, Clone)]\npub(crate) struct TrackedStateTree {\n    options: TrackedStateTreeOptions,\n}\n\nimpl TrackedStateTree {\n    pub(crate) fn new() -> Self {\n        Self {\n            options: TrackedStateTreeOptions::default(),\n        }\n    }\n\n    #[allow(dead_code)]\n    pub(crate) fn with_options(options: TrackedStateTreeOptions) -> Self {\n        Self { options }\n    }\n\n    pub(crate) async fn load_root(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        commit_id: &str,\n    ) -> Result<Option<TrackedStateRootId>, LixError> {\n        storage::load_root(store, commit_id).await\n    }\n\n    #[cfg(test)]\n    pub(crate) async fn get(\n        &self,\n        store: &mut impl StorageReader,\n        root_id: &TrackedStateRootId,\n        key: &TrackedStateKey,\n    ) -> Result<Option<TrackedStateIndexValue>, LixError> {\n        let encoded_key = encode_key(key);\n        let mut current = *root_id.as_bytes();\n        loop {\n            match self.load_node(store, &current).await? 
{\n                DecodedNode::Leaf(leaf) => {\n                    let entry = leaf\n                        .entries()\n                        .binary_search_by(|entry| entry.key.as_slice().cmp(&encoded_key))\n                        .ok()\n                        .map(|index| &leaf.entries()[index]);\n                    return entry.map(|entry| decode_value(&entry.value)).transpose();\n                }\n                DecodedNode::Internal(internal) => {\n                    let child = internal\n                        .children()\n                        .iter()\n                        .find(|child| child.last_key.as_slice() >= encoded_key.as_slice())\n                        .or_else(|| internal.children().last())\n                        .ok_or_else(|| {\n                            LixError::new(\n                                \"LIX_ERROR_UNKNOWN\",\n                                \"tracked-state tree internal node has no children\",\n                            )\n                        })?;\n                    current = child.child_hash;\n                }\n            }\n        }\n    }\n\n    pub(crate) async fn get_many(\n        &self,\n        store: &mut impl StorageReader,\n        root_id: &TrackedStateRootId,\n        keys: &[TrackedStateKey],\n    ) -> Result<Vec<Option<TrackedStateIndexValue>>, LixError> {\n        if keys.is_empty() {\n            return Ok(Vec::new());\n        }\n\n        let mut encoded_keys = keys\n            .iter()\n            .enumerate()\n            .map(|(index, key)| (index, encode_key(key)))\n            .collect::<Vec<_>>();\n        encoded_keys.sort_by(|left, right| left.1.cmp(&right.1));\n\n        let mut values = vec![None; keys.len()];\n        self.get_many_node(store, *root_id.as_bytes(), &encoded_keys, &mut values)\n            .await?;\n        Ok(values)\n    }\n\n    pub(crate) async fn row_count(\n        &self,\n        store: &mut impl StorageReader,\n        root_id: 
&TrackedStateRootId,\n    ) -> Result<usize, LixError> {\n        match self.load_node(store, root_id.as_bytes()).await? {\n            DecodedNode::Leaf(leaf) => Ok(leaf.entries().len()),\n            DecodedNode::Internal(internal) => Ok(internal\n                .children()\n                .iter()\n                .map(|child| child.subtree_count as usize)\n                .sum()),\n        }\n    }\n\n    pub(crate) async fn scan(\n        &self,\n        store: &mut impl StorageReader,\n        root_id: &TrackedStateRootId,\n        request: &TrackedStateTreeScanRequest,\n    ) -> Result<Vec<(TrackedStateKey, TrackedStateIndexValue)>, LixError> {\n        if request.limit == Some(0) {\n            return Ok(Vec::new());\n        }\n\n        let ranges = scan_ranges(request);\n        let key_decode_hint = scan_key_decode_hint(request, &ranges);\n        let mut rows = Vec::new();\n        self.scan_node(\n            store,\n            *root_id.as_bytes(),\n            request,\n            &ranges,\n            key_decode_hint,\n            &mut rows,\n        )\n        .await?;\n        Ok(rows)\n    }\n\n    pub(crate) async fn count_matching_keys(\n        &self,\n        store: &mut impl StorageReader,\n        root_id: &TrackedStateRootId,\n        request: &TrackedStateTreeScanRequest,\n    ) -> Result<usize, LixError> {\n        if request.limit == Some(0) {\n            return Ok(0);\n        }\n\n        let ranges = scan_ranges(request);\n        self.count_matching_keys_node(store, *root_id.as_bytes(), request, &ranges)\n            .await\n    }\n\n    pub(crate) async fn diff(\n        &self,\n        store: &mut impl StorageReader,\n        left_root: Option<&TrackedStateRootId>,\n        right_root: Option<&TrackedStateRootId>,\n        request: &TrackedStateTreeScanRequest,\n    ) -> Result<Vec<TrackedStateTreeDiffEntry>, LixError> {\n        match (left_root, right_root) {\n            (None, None) => Ok(Vec::new()),\n            
(Some(left), Some(right)) if left == right => Ok(Vec::new()),\n            (Some(left), Some(right)) => {\n                let mut out = Vec::new();\n                self.diff_nodes(\n                    store,\n                    *left.as_bytes(),\n                    *right.as_bytes(),\n                    request,\n                    &mut out,\n                )\n                .await?;\n                Ok(out)\n            }\n            (Some(left), None) => Ok(self\n                .collect_filtered_entries(store, left, request)\n                .await?\n                .into_iter()\n                .map(|(key, value)| TrackedStateTreeDiffEntry {\n                    before: Some((key, value)),\n                    after: None,\n                })\n                .collect()),\n            (None, Some(right)) => Ok(self\n                .collect_filtered_entries(store, right, request)\n                .await?\n                .into_iter()\n                .map(|(key, value)| TrackedStateTreeDiffEntry {\n                    before: None,\n                    after: Some((key, value)),\n                })\n                .collect()),\n        }\n    }\n\n    pub(crate) async fn apply_mutations(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        writes: &mut StorageWriteSet,\n        base_root: Option<&TrackedStateRootId>,\n        mut mutations: Vec<TrackedStateMutation>,\n        commit_id: Option<&str>,\n    ) -> Result<TrackedStateApplyResult, LixError> {\n        let mut overlay = storage::TrackedStateChunkOverlay::new();\n        if let Some(root_id) = base_root {\n            if mutations.len() == 1 {\n                let mutation = mutations.pop().expect(\"single mutation should exist\");\n                match self\n                    .apply_single_mutation(\n                        store,\n                        writes,\n                        &mut overlay,\n                        root_id,\n                        
mutation,\n                        commit_id,\n                    )\n                    .await?\n                {\n                    MutationApply::Applied(result) => return Ok(result),\n                    MutationApply::Fallback(mutation) => mutations = vec![mutation],\n                }\n            } else if mutations.len() > 1 {\n                match self\n                    .apply_sorted_mutations_chunker(\n                        store,\n                        writes,\n                        &mut overlay,\n                        root_id,\n                        mutations,\n                        commit_id,\n                    )\n                    .await?\n                {\n                    MutationApply::Applied(result) => return Ok(result),\n                    MutationApply::Fallback(fallback_mutations) => mutations = fallback_mutations,\n                }\n            }\n        }\n\n        let mut entries = match base_root {\n            Some(root_id) => self\n                .collect_leaf_entries(store, root_id)\n                .await?\n                .into_iter()\n                .map(|entry| (entry.key, entry.value))\n                .collect::<BTreeMap<_, _>>(),\n            None => BTreeMap::new(),\n        };\n\n        // Apply in caller order so repeated writes to the same key behave like\n        // normal transaction staging: the latest mutation wins.\n        for mutation in mutations {\n            entries.insert(mutation.encoded_key, mutation.encoded_value);\n        }\n\n        let built = self.build_tree_from_entries(\n            entries\n                .into_iter()\n                .map(|(key, value)| EncodedLeafEntry { key, value })\n                .collect(),\n        )?;\n        overlay.stage_chunks(writes, &built.chunks);\n        let persisted_root = if let Some(commit_id) = commit_id {\n            storage::stage_root(writes, commit_id, &built.root_id);\n            true\n        } else {\n            
false\n        };\n\n        Ok(TrackedStateApplyResult {\n            root_id: built.root_id,\n            row_count: built.row_count,\n            tree_height: built.tree_height,\n            chunk_count: built.chunks.len(),\n            chunk_bytes: built.chunk_bytes,\n            persisted_root,\n        })\n    }\n\n    async fn apply_single_mutation(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        writes: &mut StorageWriteSet,\n        overlay: &mut storage::TrackedStateChunkOverlay,\n        root_id: &TrackedStateRootId,\n        mutation: TrackedStateMutation,\n        commit_id: Option<&str>,\n    ) -> Result<MutationApply<TrackedStateMutation>, LixError> {\n        let mutation = match self\n            .apply_single_mutation_from_seek_path(\n                store, writes, overlay, root_id, mutation, commit_id,\n            )\n            .await?\n        {\n            MutationApply::Applied(result) => return Ok(MutationApply::Applied(result)),\n            MutationApply::Fallback(mutation) => mutation,\n        };\n\n        let TrackedStateMutation {\n            encoded_key,\n            encoded_value,\n        } = mutation;\n\n        let levels = self\n            .collect_summary_levels_with_overlay(store, overlay, root_id)\n            .await?;\n        let Some(leaves) = levels.first() else {\n            return Ok(MutationApply::Fallback(TrackedStateMutation {\n                encoded_key,\n                encoded_value,\n            }));\n        };\n        let target_leaf_index = leaves\n            .iter()\n            .position(|leaf| leaf.last_key.as_slice() >= encoded_key.as_slice())\n            .unwrap_or_else(|| leaves.len().saturating_sub(1));\n        let Some(target_leaf) = leaves.get(target_leaf_index).cloned() else {\n            return Ok(MutationApply::Fallback(TrackedStateMutation {\n                encoded_key,\n                encoded_value,\n            }));\n        };\n\n        let mut entries 
= self\n            .load_leaf_entries_with_overlay(store, overlay, &target_leaf.child_hash)\n            .await?;\n        let mutation_entry_index = match entries\n            .binary_search_by(|entry| entry.key.as_slice().cmp(encoded_key.as_slice()))\n        {\n            Ok(index) => {\n                if entries[index].value.as_slice() == encoded_value.as_slice() {\n                    return Ok(MutationApply::Fallback(TrackedStateMutation {\n                        encoded_key,\n                        encoded_value,\n                    }));\n                }\n                entries[index].value = encoded_value;\n                index\n            }\n            Err(index) => {\n                entries.insert(\n                    index,\n                    EncodedLeafEntry {\n                        key: encoded_key,\n                        value: encoded_value,\n                    },\n                );\n                index\n            }\n        };\n\n        let mut chunks = BTreeMap::new();\n        let mut suffix_entries = entries;\n        let mut next_leaf_index = target_leaf_index + 1;\n        let mut replacement_leaves;\n        let old_leaf_count;\n\n        // Rechunk from the edited leaf until a generated leaf matches an\n        // existing post-mutation leaf, then reuse the rest of the old suffix.\n        loop {\n            let mut candidate_chunks = BTreeMap::new();\n            let candidate_summaries = self.build_leaf_level_from_refs(\n                suffix_entries.iter().map(EncodedLeafEntry::as_ref),\n                &mut candidate_chunks,\n            );\n\n            if let Some((generated_resync_index, existing_resync_index)) = first_resync_index(\n                &candidate_summaries,\n                &leaves[target_leaf_index..],\n                suffix_entries[mutation_entry_index].key.as_slice(),\n            ) {\n                for summary in &candidate_summaries[..generated_resync_index] {\n                    if 
let Some(chunk) = candidate_chunks.remove(&summary.child_hash) {\n                        chunks.entry(chunk.hash).or_insert(chunk);\n                    }\n                }\n                replacement_leaves = candidate_summaries\n                    .into_iter()\n                    .take(generated_resync_index)\n                    .collect();\n                old_leaf_count = existing_resync_index;\n                break;\n            }\n\n            if next_leaf_index >= leaves.len() {\n                chunks.extend(candidate_chunks);\n                replacement_leaves = candidate_summaries;\n                old_leaf_count = leaves.len() - target_leaf_index;\n                break;\n            }\n\n            suffix_entries.extend(\n                self.load_leaf_entries_with_overlay(\n                    store,\n                    overlay,\n                    &leaves[next_leaf_index].child_hash,\n                )\n                .await?,\n            );\n            next_leaf_index += 1;\n        }\n\n        let built = self.build_tree_from_leaf_patch(\n            &levels,\n            target_leaf_index,\n            old_leaf_count,\n            std::mem::take(&mut replacement_leaves),\n            chunks,\n            suffix_entries[mutation_entry_index].key.as_slice(),\n        )?;\n        overlay.stage_chunks(writes, &built.chunks);\n        let persisted_root = if let Some(commit_id) = commit_id {\n            storage::stage_root(writes, commit_id, &built.root_id);\n            true\n        } else {\n            false\n        };\n\n        Ok(MutationApply::Applied(TrackedStateApplyResult {\n            root_id: built.root_id,\n            row_count: built.row_count,\n            tree_height: built.tree_height,\n            chunk_count: built.chunks.len(),\n            chunk_bytes: built.chunk_bytes,\n            persisted_root,\n        }))\n    }\n\n    fn diff_nodes<'a, S>(\n        &'a self,\n        store: &'a mut S,\n        
left_hash: [u8; TRACKED_STATE_HASH_BYTES],\n        right_hash: [u8; TRACKED_STATE_HASH_BYTES],\n        request: &'a TrackedStateTreeScanRequest,\n        out: &'a mut Vec<TrackedStateTreeDiffEntry>,\n    ) -> Pin<Box<dyn Future<Output = Result<(), LixError>> + 'a>>\n    where\n        S: StorageReader + 'a,\n    {\n        Box::pin(async move {\n            if left_hash == right_hash {\n                return Ok(());\n            }\n\n            let left = self.load_node(store, &left_hash).await?;\n            let right = self.load_node(store, &right_hash).await?;\n            match (left, right) {\n                (DecodedNode::Leaf(left), DecodedNode::Leaf(right)) => {\n                    self.diff_leaf_entries(left.entries(), right.entries(), request, out)?;\n                }\n                (DecodedNode::Internal(left), DecodedNode::Internal(right))\n                    if internal_boundaries_match(left.children(), right.children()) =>\n                {\n                    for (left_child, right_child) in left.children().iter().zip(right.children()) {\n                        if left_child == right_child {\n                            continue;\n                        }\n                        self.diff_nodes(\n                            store,\n                            left_child.child_hash,\n                            right_child.child_hash,\n                            request,\n                            out,\n                        )\n                        .await?;\n                    }\n                }\n                _ => {\n                    self.diff_leaf_summary_cursors(store, left_hash, right_hash, request, out)\n                        .await?;\n                }\n            }\n            Ok(())\n        })\n    }\n\n    async fn diff_leaf_summary_cursors(\n        &self,\n        store: &mut impl StorageReader,\n        left_hash: [u8; TRACKED_STATE_HASH_BYTES],\n        right_hash: [u8; TRACKED_STATE_HASH_BYTES],\n       
 request: &TrackedStateTreeScanRequest,\n        out: &mut Vec<TrackedStateTreeDiffEntry>,\n    ) -> Result<(), LixError> {\n        let mut left = LeafSummaryCursor::new(self, store, left_hash).await?;\n        let mut right = LeafSummaryCursor::new(self, store, right_hash).await?;\n        let mut left_window = Vec::new();\n        let mut right_window = Vec::new();\n\n        loop {\n            match (left.current(), right.current()) {\n                (Some(left_leaf), Some(right_leaf)) if left_leaf == right_leaf => {\n                    self.diff_leaf_summary_window(store, &left_window, &right_window, request, out)\n                        .await?;\n                    left_window.clear();\n                    right_window.clear();\n                    left.advance(self, store).await?;\n                    right.advance(self, store).await?;\n                }\n                (Some(left_leaf), Some(right_leaf)) => {\n                    match left_leaf.last_key.cmp(&right_leaf.last_key) {\n                        std::cmp::Ordering::Less => {\n                            left_window.push(left_leaf.clone());\n                            left.advance(self, store).await?;\n                        }\n                        std::cmp::Ordering::Greater => {\n                            right_window.push(right_leaf.clone());\n                            right.advance(self, store).await?;\n                        }\n                        std::cmp::Ordering::Equal => {\n                            left_window.push(left_leaf.clone());\n                            right_window.push(right_leaf.clone());\n                            left.advance(self, store).await?;\n                            right.advance(self, store).await?;\n                        }\n                    }\n                }\n                (Some(left_leaf), None) => {\n                    left_window.push(left_leaf.clone());\n                    left.advance(self, store).await?;\n               
 }\n                (None, Some(right_leaf)) => {\n                    right_window.push(right_leaf.clone());\n                    right.advance(self, store).await?;\n                }\n                (None, None) => {\n                    self.diff_leaf_summary_window(store, &left_window, &right_window, request, out)\n                        .await?;\n                    return Ok(());\n                }\n            }\n        }\n    }\n\n    async fn diff_leaf_summary_window(\n        &self,\n        store: &mut impl StorageReader,\n        left_leaves: &[ChildSummary],\n        right_leaves: &[ChildSummary],\n        request: &TrackedStateTreeScanRequest,\n        out: &mut Vec<TrackedStateTreeDiffEntry>,\n    ) -> Result<(), LixError> {\n        if left_leaves.is_empty() && right_leaves.is_empty() {\n            return Ok(());\n        }\n        let left_entries = self\n            .collect_entries_from_leaf_summaries(store, left_leaves)\n            .await?;\n        let right_entries = self\n            .collect_entries_from_leaf_summaries(store, right_leaves)\n            .await?;\n        self.diff_leaf_entries(&left_entries, &right_entries, request, out)\n    }\n\n    fn diff_leaf_entries(\n        &self,\n        left: &[EncodedLeafEntry],\n        right: &[EncodedLeafEntry],\n        request: &TrackedStateTreeScanRequest,\n        out: &mut Vec<TrackedStateTreeDiffEntry>,\n    ) -> Result<(), LixError> {\n        let mut left_index = 0usize;\n        let mut right_index = 0usize;\n        while left_index < left.len() && right_index < right.len() {\n            match left[left_index].key.cmp(&right[right_index].key) {\n                std::cmp::Ordering::Less => {\n                    self.push_removed_diff(&left[left_index], request, out)?;\n                    left_index += 1;\n                }\n                std::cmp::Ordering::Greater => {\n                    self.push_added_diff(&right[right_index], request, out)?;\n                    
right_index += 1;\n                }\n                std::cmp::Ordering::Equal => {\n                    if left[left_index].value != right[right_index].value {\n                        self.push_modified_diff(\n                            &left[left_index],\n                            &right[right_index],\n                            request,\n                            out,\n                        )?;\n                    }\n                    left_index += 1;\n                    right_index += 1;\n                }\n            }\n        }\n        for entry in &left[left_index..] {\n            self.push_removed_diff(entry, request, out)?;\n        }\n        for entry in &right[right_index..] {\n            self.push_added_diff(entry, request, out)?;\n        }\n        Ok(())\n    }\n\n    fn push_removed_diff(\n        &self,\n        entry: &EncodedLeafEntry,\n        request: &TrackedStateTreeScanRequest,\n        out: &mut Vec<TrackedStateTreeDiffEntry>,\n    ) -> Result<(), LixError> {\n        let (key, value) = decode_entry(entry)?;\n        if request.matches(&key, &value) {\n            out.push(TrackedStateTreeDiffEntry {\n                before: Some((key, value)),\n                after: None,\n            });\n        }\n        Ok(())\n    }\n\n    fn push_added_diff(\n        &self,\n        entry: &EncodedLeafEntry,\n        request: &TrackedStateTreeScanRequest,\n        out: &mut Vec<TrackedStateTreeDiffEntry>,\n    ) -> Result<(), LixError> {\n        let (key, value) = decode_entry(entry)?;\n        if request.matches(&key, &value) {\n            out.push(TrackedStateTreeDiffEntry {\n                before: None,\n                after: Some((key, value)),\n            });\n        }\n        Ok(())\n    }\n\n    fn push_modified_diff(\n        &self,\n        left: &EncodedLeafEntry,\n        right: &EncodedLeafEntry,\n        request: &TrackedStateTreeScanRequest,\n        out: &mut Vec<TrackedStateTreeDiffEntry>,\n    ) -> 
Result<(), LixError> {\n        let (left_key, left_value) = decode_entry(left)?;\n        let (right_key, right_value) = decode_entry(right)?;\n        if request.matches(&left_key, &left_value) || request.matches(&right_key, &right_value) {\n            out.push(TrackedStateTreeDiffEntry {\n                before: Some((left_key, left_value)),\n                after: Some((right_key, right_value)),\n            });\n        }\n        Ok(())\n    }\n\n    async fn apply_sorted_mutations_chunker(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        writes: &mut StorageWriteSet,\n        overlay: &mut storage::TrackedStateChunkOverlay,\n        root_id: &TrackedStateRootId,\n        mutations: Vec<TrackedStateMutation>,\n        commit_id: Option<&str>,\n    ) -> Result<MutationApply<Vec<TrackedStateMutation>>, LixError> {\n        let mut mutation_map = BTreeMap::new();\n        for mutation in mutations {\n            mutation_map.insert(mutation.encoded_key, mutation.encoded_value);\n        }\n        if mutation_map.is_empty() {\n            return Ok(MutationApply::Fallback(Vec::new()));\n        }\n\n        let levels = self\n            .collect_summary_levels_with_overlay(store, overlay, root_id)\n            .await?;\n        let Some(leaves) = levels.first() else {\n            return Ok(MutationApply::Fallback(\n                mutation_map\n                    .into_iter()\n                    .map(|(encoded_key, encoded_value)| TrackedStateMutation {\n                        encoded_key,\n                        encoded_value,\n                    })\n                    .collect(),\n            ));\n        };\n\n        let base_row_count = leaves\n            .iter()\n            .map(|leaf| leaf.subtree_count as usize)\n            .sum::<usize>();\n        let first_mutation_key = mutation_map\n            .keys()\n            .next()\n            .expect(\"non-empty mutation map should have first key\");\n        let 
append_only = leaves\n            .last()\n            .is_some_and(|leaf| first_mutation_key.as_slice() > leaf.last_key.as_slice());\n        if !append_only && mutation_map.len() * 2 > base_row_count {\n            return Ok(MutationApply::Fallback(\n                mutation_map\n                    .into_iter()\n                    .map(|(encoded_key, encoded_value)| TrackedStateMutation {\n                        encoded_key,\n                        encoded_value,\n                    })\n                    .collect(),\n            ));\n        }\n\n        let mut mutations = mutation_map.into_iter().collect::<VecDeque<_>>();\n        let mut output_leaves = Vec::new();\n        let mut chunks = BTreeMap::new();\n        let mut leaf_index = 0usize;\n\n        while leaf_index < leaves.len() {\n            let current_leaf_has_mutation = mutations\n                .front()\n                .is_some_and(|(key, _)| key.as_slice() <= leaves[leaf_index].last_key.as_slice());\n            if !current_leaf_has_mutation {\n                output_leaves.push(leaves[leaf_index].clone());\n                leaf_index += 1;\n                continue;\n            }\n\n            let window_start = leaf_index;\n            let mut window_entries = BTreeMap::new();\n            let mut window_mutation_ceiling = mutations\n                .front()\n                .map(|(key, _)| key.clone())\n                .expect(\"window with mutation should have front mutation\");\n\n            loop {\n                if leaf_index < leaves.len() {\n                    let leaf = &leaves[leaf_index];\n                    for entry in self\n                        .load_leaf_entries_with_overlay(store, overlay, &leaf.child_hash)\n                        .await?\n                    {\n                        window_entries.insert(entry.key, entry.value);\n                    }\n\n                    while mutations\n                        .front()\n                        
.is_some_and(|(key, _)| key.as_slice() <= leaf.last_key.as_slice())\n                    {\n                        let (key, value) = mutations\n                            .pop_front()\n                            .expect(\"front mutation should be present\");\n                        window_mutation_ceiling = key.clone();\n                        window_entries.insert(key, value);\n                    }\n                    leaf_index += 1;\n                }\n\n                while let Some((key, _)) = mutations.front() {\n                    if leaf_index < leaves.len()\n                        && key.as_slice() >= leaves[leaf_index].first_key.as_slice()\n                    {\n                        break;\n                    }\n                    let (key, value) = mutations\n                        .pop_front()\n                        .expect(\"front mutation should be present\");\n                    window_mutation_ceiling = key.clone();\n                    window_entries.insert(key, value);\n                }\n\n                if leaf_index < leaves.len()\n                    && mutations.front().is_some_and(|(key, _)| {\n                        key.as_slice() <= leaves[leaf_index].last_key.as_slice()\n                    })\n                {\n                    continue;\n                }\n\n                let mut candidate_chunks = BTreeMap::new();\n                let candidate_leaves = self.build_leaf_level_from_refs(\n                    window_entries\n                        .iter()\n                        .map(|(key, value)| EncodedLeafEntryRef { key, value }),\n                    &mut candidate_chunks,\n                );\n\n                if let Some((generated_resync_index, existing_resync_index)) = first_resync_index(\n                    &candidate_leaves,\n                    &leaves[window_start..],\n                    &window_mutation_ceiling,\n                ) {\n                    for summary in 
&candidate_leaves[..generated_resync_index] {\n                        if let Some(chunk) = candidate_chunks.remove(&summary.child_hash) {\n                            chunks.entry(chunk.hash).or_insert(chunk);\n                        }\n                    }\n                    output_leaves.extend(candidate_leaves.into_iter().take(generated_resync_index));\n                    leaf_index = window_start + existing_resync_index;\n                    break;\n                }\n\n                if leaf_index >= leaves.len() {\n                    chunks.extend(candidate_chunks);\n                    output_leaves.extend(candidate_leaves);\n                    break;\n                }\n            }\n        }\n\n        if !mutations.is_empty() {\n            let entries = mutations\n                .into_iter()\n                .map(|(key, value)| EncodedLeafEntry { key, value })\n                .collect();\n            output_leaves.extend(self.build_leaf_level(entries, &mut chunks));\n        }\n\n        let built = self.build_tree_from_leaf_summaries(output_leaves, chunks)?;\n        Ok(MutationApply::Applied(\n            self.persist_built_tree(writes, overlay, built, commit_id)\n                .await?,\n        ))\n    }\n\n    async fn apply_single_mutation_from_seek_path(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        writes: &mut StorageWriteSet,\n        overlay: &mut storage::TrackedStateChunkOverlay,\n        root_id: &TrackedStateRootId,\n        mutation: TrackedStateMutation,\n        commit_id: Option<&str>,\n    ) -> Result<MutationApply<TrackedStateMutation>, LixError> {\n        let TrackedStateMutation {\n            encoded_key,\n            encoded_value,\n        } = mutation;\n        let mut current = *root_id.as_bytes();\n        let mut path = Vec::new();\n        let mut entries = loop {\n            match self\n                .load_node_with_overlay(store, overlay, &current)\n                
.await?\n            {\n                DecodedNode::Leaf(leaf) => break leaf.entries().to_vec(),\n                DecodedNode::Internal(internal) => {\n                    let children = internal.children().to_vec();\n                    let child_index = children\n                        .iter()\n                        .position(|child| child.last_key.as_slice() >= encoded_key.as_slice())\n                        .or_else(|| (!children.is_empty()).then_some(children.len() - 1))\n                        .ok_or_else(|| {\n                            LixError::new(\n                                \"LIX_ERROR_UNKNOWN\",\n                                \"tracked-state tree internal node has no children\",\n                            )\n                        })?;\n                    current = children[child_index].child_hash;\n                    path.push(SeekPathFrame {\n                        children,\n                        child_index,\n                    });\n                }\n            }\n        };\n\n        let mutation_entry_index = match entries\n            .binary_search_by(|entry| entry.key.as_slice().cmp(encoded_key.as_slice()))\n        {\n            Ok(index) => {\n                if entries[index].value.as_slice() == encoded_value.as_slice() {\n                    return Ok(MutationApply::Fallback(TrackedStateMutation {\n                        encoded_key,\n                        encoded_value,\n                    }));\n                }\n                entries[index].value = encoded_value;\n                index\n            }\n            Err(index) => {\n                entries.insert(\n                    index,\n                    EncodedLeafEntry {\n                        key: encoded_key,\n                        value: encoded_value,\n                    },\n                );\n                index\n            }\n        };\n\n        let mut chunks = BTreeMap::new();\n        let mut replacement_children;\n        let 
mut old_child_count;\n\n        let Some(leaf_parent) = path.pop() else {\n            let built = self.build_tree_from_entries(entries)?;\n            return Ok(MutationApply::Applied(\n                self.persist_built_tree(writes, overlay, built, commit_id)\n                    .await?,\n            ));\n        };\n        let mutation_is_right_edge = leaf_parent.child_index + 1 == leaf_parent.children.len()\n            && path\n                .iter()\n                .all(|frame| frame.child_index + 1 == frame.children.len());\n\n        let mut leaf_entries = entries;\n        let mut next_leaf_index = leaf_parent.child_index + 1;\n        loop {\n            let mut candidate_chunks = BTreeMap::new();\n            let candidate_leaves = self.build_leaf_level_from_refs(\n                leaf_entries.iter().map(EncodedLeafEntry::as_ref),\n                &mut candidate_chunks,\n            );\n            if let Some((generated_resync_index, existing_resync_index)) = first_resync_index(\n                &candidate_leaves,\n                &leaf_parent.children[leaf_parent.child_index..],\n                leaf_entries[mutation_entry_index].key.as_slice(),\n            ) {\n                for summary in &candidate_leaves[..generated_resync_index] {\n                    if let Some(chunk) = candidate_chunks.remove(&summary.child_hash) {\n                        chunks.entry(chunk.hash).or_insert(chunk);\n                    }\n                }\n                replacement_children = candidate_leaves\n                    .into_iter()\n                    .take(generated_resync_index)\n                    .collect();\n                old_child_count = existing_resync_index;\n                break;\n            }\n\n            if next_leaf_index >= leaf_parent.children.len() {\n                if !mutation_is_right_edge {\n                    let entry = leaf_entries.remove(mutation_entry_index);\n                    return 
Ok(MutationApply::Fallback(TrackedStateMutation {\n                        encoded_key: entry.key,\n                        encoded_value: entry.value,\n                    }));\n                }\n                chunks.extend(candidate_chunks);\n                replacement_children = candidate_leaves;\n                old_child_count = leaf_parent.children.len() - leaf_parent.child_index;\n                break;\n            }\n\n            leaf_entries.extend(\n                self.load_leaf_entries_with_overlay(\n                    store,\n                    overlay,\n                    &leaf_parent.children[next_leaf_index].child_hash,\n                )\n                .await?,\n            );\n            next_leaf_index += 1;\n        }\n\n        let mut child_index = leaf_parent.child_index;\n        let mut children = leaf_parent.children;\n        let mut parent_level = 1usize;\n        loop {\n            children.splice(\n                child_index..child_index + old_child_count,\n                replacement_children,\n            );\n            replacement_children = self.build_internal_level(children, parent_level, &mut chunks);\n            old_child_count = 1;\n\n            let Some(frame) = path.pop() else {\n                let mut summaries = replacement_children;\n                let mut tree_height = parent_level + 1;\n                while summaries.len() > 1 {\n                    summaries = self.build_internal_level(summaries, tree_height, &mut chunks);\n                    tree_height += 1;\n                }\n                let root = summaries.pop().ok_or_else(|| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        \"tracked-state seek-path mutation produced no root\",\n                    )\n                })?;\n                let chunks = chunks.into_values().collect::<Vec<_>>();\n                let chunk_bytes = chunks.iter().map(|chunk| chunk.data.len()).sum();\n 
               let built = BuiltTree {\n                    root_id: TrackedStateRootId::new(root.child_hash),\n                    chunks,\n                    row_count: root.subtree_count as usize,\n                    tree_height,\n                    chunk_bytes,\n                };\n                return Ok(MutationApply::Applied(\n                    self.persist_built_tree(writes, overlay, built, commit_id)\n                        .await?,\n                ));\n            };\n\n            child_index = frame.child_index;\n            children = frame.children;\n            parent_level += 1;\n        }\n    }\n\n    /// Stages the built tree's chunks and, when a commit id is given, its new\n    /// root, returning the apply summary.\n    async fn persist_built_tree(\n        &self,\n        writes: &mut StorageWriteSet,\n        overlay: &mut storage::TrackedStateChunkOverlay,\n        built: BuiltTree,\n        commit_id: Option<&str>,\n    ) -> Result<TrackedStateApplyResult, LixError> {\n        overlay.stage_chunks(writes, &built.chunks);\n        let persisted_root = if let Some(commit_id) = commit_id {\n            storage::stage_root(writes, commit_id, &built.root_id);\n            true\n        } else {\n            false\n        };\n        Ok(TrackedStateApplyResult {\n            root_id: built.root_id,\n            row_count: built.row_count,\n            tree_height: built.tree_height,\n            chunk_count: built.chunks.len(),\n            chunk_bytes: built.chunk_bytes,\n            persisted_root,\n        })\n    }\n\n    /// Builds a complete tree bottom-up from a flat list of leaf entries.\n    fn build_tree_from_entries(\n        &self,\n        entries: Vec<EncodedLeafEntry>,\n    ) -> Result<BuiltTree, LixError> {\n        let row_count = entries.len();\n        let mut chunks = BTreeMap::<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>::new();\n        let mut summaries = self.build_leaf_level(entries, &mut chunks);\n        let mut tree_height = 1usize;\n        while summaries.len() > 1 {\n            summaries = self.build_internal_level(summaries, tree_height, &mut chunks);\n            tree_height += 1;\n        
}\n        let root = summaries.pop().ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state tree build produced no root\",\n            )\n        })?;\n        let chunks = chunks.into_values().collect::<Vec<_>>();\n        let chunk_bytes = chunks.iter().map(|chunk| chunk.data.len()).sum();\n        Ok(BuiltTree {\n            root_id: TrackedStateRootId::new(root.child_hash),\n            chunks,\n            row_count,\n            tree_height,\n            chunk_bytes,\n        })\n    }\n\n    /// Builds the internal levels on top of pre-chunked leaf summaries.\n    fn build_tree_from_leaf_summaries(\n        &self,\n        leaf_summaries: Vec<ChildSummary>,\n        mut chunks: BTreeMap<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>,\n    ) -> Result<BuiltTree, LixError> {\n        let row_count = leaf_summaries\n            .iter()\n            .map(|summary| summary.subtree_count as usize)\n            .sum();\n        let mut summaries = leaf_summaries;\n        let mut tree_height = 1usize;\n        while summaries.len() > 1 {\n            summaries = self.build_internal_level(summaries, tree_height, &mut chunks);\n            tree_height += 1;\n        }\n        let root = summaries.pop().ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state tree build from leaves produced no root\",\n            )\n        })?;\n        let chunks = chunks.into_values().collect::<Vec<_>>();\n        let chunk_bytes = chunks.iter().map(|chunk| chunk.data.len()).sum();\n        Ok(BuiltTree {\n            root_id: TrackedStateRootId::new(root.child_hash),\n            chunks,\n            row_count,\n            tree_height,\n            chunk_bytes,\n        })\n    }\n\n    /// Rebuilds the tree after a contiguous run of leaves was replaced, patching\n    /// each parent level rather than rechunking the whole tree.\n    fn build_tree_from_leaf_patch(\n        &self,\n        levels: &[Vec<ChildSummary>],\n        leaf_start: usize,\n        old_leaf_count: usize,\n        replacement_leaves: Vec<ChildSummary>,\n        mut chunks: BTreeMap<[u8; 
TRACKED_STATE_HASH_BYTES], PendingChunkWrite>,\n        mutation_key: &[u8],\n    ) -> Result<BuiltTree, LixError> {\n        if levels.len() <= 1 {\n            let mut leaves = levels.first().cloned().unwrap_or_default();\n            leaves.splice(leaf_start..leaf_start + old_leaf_count, replacement_leaves);\n            return self.build_tree_from_leaf_summaries(leaves, chunks);\n        }\n\n        let mut child_start = leaf_start;\n        let mut old_child_count = old_leaf_count;\n        let mut replacement_children = replacement_leaves;\n\n        for level in 0..levels.len() - 1 {\n            let patch = self.patch_parent_level(\n                &levels[level],\n                &levels[level + 1],\n                child_start,\n                old_child_count,\n                replacement_children,\n                level + 1,\n                &mut chunks,\n                mutation_key,\n            )?;\n            child_start = patch.parent_start;\n            old_child_count = patch.old_parent_count;\n            replacement_children = patch.replacement_parents;\n        }\n\n        let mut summaries = replacement_children;\n        let mut tree_height = levels.len();\n        while summaries.len() > 1 {\n            summaries = self.build_internal_level(summaries, tree_height, &mut chunks);\n            tree_height += 1;\n        }\n        let root = summaries.pop().ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state patched tree produced no root\",\n            )\n        })?;\n        let chunks = chunks.into_values().collect::<Vec<_>>();\n        let chunk_bytes = chunks.iter().map(|chunk| chunk.data.len()).sum();\n        Ok(BuiltTree {\n            root_id: TrackedStateRootId::new(root.child_hash),\n            chunks,\n            row_count: root.subtree_count as usize,\n            tree_height,\n            chunk_bytes,\n        })\n    }\n\n    /// Replaces the parents covering a patched run of children at one level,\n    /// widening the rebuild window until the generated parents resync with the\n    /// existing ones.\n    fn patch_parent_level(\n        
&self,\n        old_children: &[ChildSummary],\n        old_parents: &[ChildSummary],\n        child_start: usize,\n        old_child_count: usize,\n        replacement_children: Vec<ChildSummary>,\n        parent_level: usize,\n        chunks: &mut BTreeMap<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>,\n        mutation_key: &[u8],\n    ) -> Result<ParentLevelPatch, LixError> {\n        if old_parents.is_empty() {\n            return Ok(ParentLevelPatch {\n                parent_start: 0,\n                old_parent_count: 0,\n                replacement_parents: self.build_internal_level(\n                    replacement_children,\n                    parent_level,\n                    chunks,\n                ),\n            });\n        }\n\n        let parent_start = parent_index_for_child_index(old_children, old_parents, child_start);\n        let parent_child_range = child_range_for_parent(old_children, &old_parents[parent_start])?;\n        let old_child_end = child_start + old_child_count;\n        let parent_end = if old_child_count == 0 {\n            parent_start\n        } else {\n            parent_index_for_child_index(old_children, old_parents, old_child_end - 1)\n        };\n        let parent_end_child_range =\n            child_range_for_parent(old_children, &old_parents[parent_end])?;\n        let mut window_children = Vec::new();\n        window_children.extend(\n            old_children[parent_child_range.start..child_start]\n                .iter()\n                .map(ChildSummary::as_ref),\n        );\n        window_children.extend(replacement_children.iter().map(ChildSummary::as_ref));\n        window_children.extend(\n            old_children[old_child_end..parent_end_child_range.end]\n                .iter()\n                .map(ChildSummary::as_ref),\n        );\n        let mut next_parent_index = parent_end + 1;\n\n        loop {\n            let mut candidate_chunks = BTreeMap::new();\n            let candidate_parents = 
self.build_internal_level_from_refs(\n                window_children.iter().copied(),\n                parent_level,\n                &mut candidate_chunks,\n            );\n\n            if let Some((generated_resync_index, existing_resync_index)) = first_resync_index(\n                &candidate_parents,\n                &old_parents[parent_start..],\n                mutation_key,\n            ) {\n                for summary in &candidate_parents[..generated_resync_index] {\n                    if let Some(chunk) = candidate_chunks.remove(&summary.child_hash) {\n                        chunks.entry(chunk.hash).or_insert(chunk);\n                    }\n                }\n                return Ok(ParentLevelPatch {\n                    parent_start,\n                    old_parent_count: existing_resync_index,\n                    replacement_parents: candidate_parents\n                        .into_iter()\n                        .take(generated_resync_index)\n                        .collect(),\n                });\n            }\n\n            if next_parent_index >= old_parents.len() {\n                chunks.extend(candidate_chunks);\n                return Ok(ParentLevelPatch {\n                    parent_start,\n                    old_parent_count: old_parents.len() - parent_start,\n                    replacement_parents: candidate_parents,\n                });\n            }\n\n            let next_range = child_range_for_parent(old_children, &old_parents[next_parent_index])?;\n            window_children.extend(old_children[next_range].iter().map(ChildSummary::as_ref));\n            next_parent_index += 1;\n        }\n    }\n\n    fn build_leaf_level(\n        &self,\n        entries: Vec<EncodedLeafEntry>,\n        chunks: &mut BTreeMap<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>,\n    ) -> Vec<ChildSummary> {\n        let groups = chunk_leaf_entries(entries, &self.options);\n        groups\n            .into_iter()\n            .map(|group| 
{\n                let subtree_count = group.entries.len() as u64;\n                let first_key = group\n                    .entries\n                    .first()\n                    .map(|entry| entry.key.clone())\n                    .unwrap_or_default();\n                let last_key = group\n                    .entries\n                    .last()\n                    .map(|entry| entry.key.clone())\n                    .unwrap_or_default();\n                let node = encode_leaf_node(&group.entries);\n                let (chunk, summary) =\n                    child_summary_from_node(node, first_key, last_key, subtree_count);\n                chunks.entry(chunk.hash).or_insert(chunk);\n                summary\n            })\n            .collect()\n    }\n\n    fn build_leaf_level_from_refs<'a>(\n        &self,\n        entries: impl IntoIterator<Item = EncodedLeafEntryRef<'a>>,\n        chunks: &mut BTreeMap<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>,\n    ) -> Vec<ChildSummary> {\n        let groups = chunk_leaf_entry_refs(entries, &self.options);\n        groups\n            .into_iter()\n            .map(|group| {\n                let subtree_count = group.entries.len() as u64;\n                let first_key = group\n                    .entries\n                    .first()\n                    .map(|entry| entry.key.to_vec())\n                    .unwrap_or_default();\n                let last_key = group\n                    .entries\n                    .last()\n                    .map(|entry| entry.key.to_vec())\n                    .unwrap_or_default();\n                let node = encode_leaf_node_refs(&group.entries);\n                let (chunk, summary) =\n                    child_summary_from_node(node, first_key, last_key, subtree_count);\n                chunks.entry(chunk.hash).or_insert(chunk);\n                summary\n            })\n            .collect()\n    }\n\n    fn build_internal_level(\n        &self,\n        
children: Vec<ChildSummary>,\n        level: usize,\n        chunks: &mut BTreeMap<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>,\n    ) -> Vec<ChildSummary> {\n        let groups = chunk_internal_entries(children, &self.options, level);\n        groups\n            .into_iter()\n            .map(|group| {\n                let subtree_count = group.children.iter().map(|child| child.subtree_count).sum();\n                let first_key = group\n                    .children\n                    .first()\n                    .map(|child| child.first_key.clone())\n                    .unwrap_or_default();\n                let last_key = group\n                    .children\n                    .last()\n                    .map(|child| child.last_key.clone())\n                    .unwrap_or_default();\n                let node = encode_internal_node(&group.children);\n                let (chunk, summary) =\n                    child_summary_from_node(node, first_key, last_key, subtree_count);\n                chunks.entry(chunk.hash).or_insert(chunk);\n                summary\n            })\n            .collect()\n    }\n\n    fn build_internal_level_from_refs<'a>(\n        &self,\n        children: impl IntoIterator<Item = ChildSummaryRef<'a>>,\n        level: usize,\n        chunks: &mut BTreeMap<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>,\n    ) -> Vec<ChildSummary> {\n        let groups = chunk_internal_entry_refs(children, &self.options, level);\n        groups\n            .into_iter()\n            .map(|group| {\n                let subtree_count = group.children.iter().map(|child| child.subtree_count).sum();\n                let first_key = group\n                    .children\n                    .first()\n                    .map(|child| child.first_key.to_vec())\n                    .unwrap_or_default();\n                let last_key = group\n                    .children\n                    .last()\n                    .map(|child| 
child.last_key.to_vec())\n                    .unwrap_or_default();\n                let node = encode_internal_node_refs(&group.children);\n                let (chunk, summary) =\n                    child_summary_from_node(node, first_key, last_key, subtree_count);\n                chunks.entry(chunk.hash).or_insert(chunk);\n                summary\n            })\n            .collect()\n    }\n\n    async fn collect_leaf_entries(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        root_id: &TrackedStateRootId,\n    ) -> Result<Vec<EncodedLeafEntry>, LixError> {\n        let mut out = Vec::new();\n        let mut current = vec![*root_id.as_bytes()];\n        while !current.is_empty() {\n            let mut next = Vec::new();\n            for hash in current {\n                match self.load_node(store, &hash).await? {\n                    DecodedNode::Leaf(leaf) => out.extend(leaf.entries().iter().cloned()),\n                    DecodedNode::Internal(internal) => {\n                        next.extend(internal.children().iter().map(|child| child.child_hash));\n                    }\n                }\n            }\n            current = next;\n        }\n        Ok(out)\n    }\n\n    async fn collect_filtered_entries(\n        &self,\n        store: &mut impl StorageReader,\n        root_id: &TrackedStateRootId,\n        request: &TrackedStateTreeScanRequest,\n    ) -> Result<Vec<(TrackedStateKey, TrackedStateIndexValue)>, LixError> {\n        self.scan(store, root_id, request).await\n    }\n\n    fn scan_node<'a, S>(\n        &'a self,\n        store: &'a mut S,\n        hash: [u8; TRACKED_STATE_HASH_BYTES],\n        request: &'a TrackedStateTreeScanRequest,\n        ranges: &'a [EncodedScanRange],\n        key_decode_hint: Option<ScanKeyDecodeHint<'a>>,\n        rows: &'a mut Vec<(TrackedStateKey, TrackedStateIndexValue)>,\n    ) -> Pin<Box<dyn Future<Output = Result<(), LixError>> + Send + 'a>>\n    where\n        S: StorageReader + 
Send + 'a,\n    {\n        Box::pin(async move {\n            let bytes = self.load_node_bytes(store, &hash).await?;\n            match decode_node_ref(&bytes)? {\n                DecodedNodeRef::Leaf(leaf) => {\n                    for index in 0..leaf.len() {\n                        if scan_limit_reached(request, rows.len()) {\n                            break;\n                        }\n                        let entry = leaf.entry(index)?.ok_or_else(|| {\n                            LixError::new(\n                                \"LIX_ERROR_UNKNOWN\",\n                                \"tracked-state leaf entry disappeared during scan\",\n                            )\n                        })?;\n                        if !encoded_key_in_scan_ranges(entry.key, ranges) {\n                            continue;\n                        }\n                        let key = match key_decode_hint {\n                            Some(hint) => decode_key_with_trusted_prefix(\n                                entry.key,\n                                hint.schema_key,\n                                hint.file_id,\n                                hint.prefix_len,\n                            )?,\n                            None => decode_key(entry.key)?,\n                        };\n                        if key_decode_hint.is_none() && !key_matches_scan_filters(request, &key) {\n                            continue;\n                        }\n                        let Some(value) =\n                            decode_visible_value(entry.value, request.include_tombstones)?\n                        else {\n                            continue;\n                        };\n                        if key_decode_hint.is_some() || request.matches(&key, &value) {\n                            rows.push((key, value));\n                        }\n                    }\n                }\n                DecodedNodeRef::Internal(internal) => {\n                    for 
child in internal.children() {\n                        if scan_limit_reached(request, rows.len()) {\n                            break;\n                        }\n                        if child_summary_overlaps_scan_ranges(child, ranges) {\n                            self.scan_node(\n                                store,\n                                child.child_hash,\n                                request,\n                                ranges,\n                                key_decode_hint,\n                                rows,\n                            )\n                            .await?;\n                        }\n                    }\n                }\n            }\n            Ok(())\n        })\n    }\n\n    fn get_many_node<'a, S>(\n        &'a self,\n        store: &'a mut S,\n        hash: [u8; TRACKED_STATE_HASH_BYTES],\n        encoded_keys: &'a [(usize, Vec<u8>)],\n        values: &'a mut [Option<TrackedStateIndexValue>],\n    ) -> Pin<Box<dyn Future<Output = Result<(), LixError>> + Send + 'a>>\n    where\n        S: StorageReader + Send + 'a,\n    {\n        Box::pin(async move {\n            if encoded_keys.is_empty() {\n                return Ok(());\n            }\n\n            let bytes = self.load_node_bytes(store, &hash).await?;\n            match decode_node_ref(&bytes)? {\n                DecodedNodeRef::Leaf(leaf) => {\n                    for (original_index, encoded_key) in encoded_keys {\n                        if let Some(entry_index) = binary_search_leaf_key(&leaf, encoded_key)? 
{\n                            let entry = leaf.entry(entry_index)?.ok_or_else(|| {\n                                LixError::new(\n                                    \"LIX_ERROR_UNKNOWN\",\n                                    \"tracked-state leaf entry disappeared during get_many\",\n                                )\n                            })?;\n                            values[*original_index] = Some(decode_value(entry.value)?);\n                        }\n                    }\n                }\n                DecodedNodeRef::Internal(internal) => {\n                    let mut start = 0usize;\n                    let children = internal.children();\n                    for (child_index, child) in children.iter().enumerate() {\n                        if start >= encoded_keys.len() {\n                            break;\n                        }\n\n                        let mut end = start;\n                        if child_index + 1 == children.len() {\n                            end = encoded_keys.len();\n                        } else {\n                            while end < encoded_keys.len()\n                                && encoded_keys[end].1.as_slice() <= child.last_key.as_slice()\n                            {\n                                end += 1;\n                            }\n                        }\n\n                        if start < end {\n                            self.get_many_node(\n                                store,\n                                child.child_hash,\n                                &encoded_keys[start..end],\n                                values,\n                            )\n                            .await?;\n                        }\n                        start = end;\n                    }\n                }\n            }\n            Ok(())\n        })\n    }\n\n    fn count_matching_keys_node<'a, S>(\n        &'a self,\n        store: &'a mut S,\n        hash: [u8; 
TRACKED_STATE_HASH_BYTES],\n        request: &'a TrackedStateTreeScanRequest,\n        ranges: &'a [EncodedScanRange],\n    ) -> Pin<Box<dyn Future<Output = Result<usize, LixError>> + Send + 'a>>\n    where\n        S: StorageReader + Send + 'a,\n    {\n        Box::pin(async move {\n            let mut count = 0usize;\n            match self.load_node(store, &hash).await? {\n                DecodedNode::Leaf(leaf) => {\n                    for entry in leaf.entries() {\n                        if !encoded_key_in_scan_ranges(&entry.key, ranges) {\n                            continue;\n                        }\n                        let key = decode_key(&entry.key)?;\n                        if key_matches_scan_filters(request, &key) {\n                            count += 1;\n                        }\n                    }\n                }\n                DecodedNode::Internal(internal) => {\n                    for child in internal.children() {\n                        if child_summary_contained_by_scan_ranges(child, ranges)\n                            && request.entity_ids.is_empty()\n                        {\n                            count += child.subtree_count as usize;\n                        } else if child_summary_overlaps_scan_ranges(child, ranges) {\n                            count += self\n                                .count_matching_keys_node(store, child.child_hash, request, ranges)\n                                .await?;\n                        }\n                    }\n                }\n            }\n            Ok(count)\n        })\n    }\n\n    async fn collect_entries_from_leaf_summaries(\n        &self,\n        store: &mut impl StorageReader,\n        leaves: &[ChildSummary],\n    ) -> Result<Vec<EncodedLeafEntry>, LixError> {\n        let mut entries = Vec::new();\n        for leaf in leaves {\n            entries.extend(self.load_leaf_entries(store, &leaf.child_hash).await?);\n        }\n        Ok(entries)\n    }\n\n 
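The internal-node branch of `get_many_node` above partitions a sorted key batch across children: each child receives the contiguous run of keys at or below its `last_key`, and the last child absorbs the remainder. A minimal standalone sketch of that routing step (the `route_keys` helper and its fixtures are hypothetical names for illustration, not part of this crate):

```rust
// Sketch of the key-routing loop in get_many_node's internal branch.
// `sorted_keys` must be sorted; each child gets the contiguous run of keys
// that compare <= its last_key, and the final child takes everything left.
fn route_keys<'k>(last_keys: &[&[u8]], sorted_keys: &'k [&'k [u8]]) -> Vec<Vec<&'k [u8]>> {
    let mut buckets = vec![Vec::new(); last_keys.len()];
    let mut start = 0usize;
    for (child_index, last_key) in last_keys.iter().enumerate() {
        if start >= sorted_keys.len() {
            break;
        }
        let mut end = start;
        if child_index + 1 == last_keys.len() {
            // Mirror the original: the last child absorbs the remaining keys.
            end = sorted_keys.len();
        } else {
            while end < sorted_keys.len() && sorted_keys[end] <= *last_key {
                end += 1;
            }
        }
        buckets[child_index].extend(&sorted_keys[start..end]);
        start = end;
    }
    buckets
}

fn main() {
    let last_keys: Vec<&[u8]> = vec![b"f", b"m", b"z"];
    let keys: Vec<&[u8]> = vec![b"a", b"g", b"x"];
    let buckets = route_keys(&last_keys, &keys);
    // "a" <= "f" lands in the first bucket, "g" <= "m" in the second,
    // and the last child takes the overflow key "x".
    assert_eq!(buckets[0], vec![b"a" as &[u8]]);
    assert_eq!(buckets[1], vec![b"g" as &[u8]]);
    assert_eq!(buckets[2], vec![b"x" as &[u8]]);
}
```

Because the batch is sorted, each bucket is a contiguous slice, which is what lets the real code recurse with `&encoded_keys[start..end]` without copying.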
   async fn collect_summary_levels_with_overlay(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        overlay: &storage::TrackedStateChunkOverlay,\n        root_id: &TrackedStateRootId,\n    ) -> Result<Vec<Vec<ChildSummary>>, LixError> {\n        let mut levels = Vec::new();\n        self.collect_summary_levels_for_node_with_overlay(\n            store,\n            overlay,\n            *root_id.as_bytes(),\n            &mut levels,\n        )\n        .await?;\n        Ok(levels)\n    }\n\n    fn collect_summary_levels_for_node_with_overlay<'a, S>(\n        &'a self,\n        store: &'a mut S,\n        overlay: &'a storage::TrackedStateChunkOverlay,\n        hash: [u8; TRACKED_STATE_HASH_BYTES],\n        levels: &'a mut Vec<Vec<ChildSummary>>,\n    ) -> Pin<Box<dyn Future<Output = Result<(ChildSummary, usize), LixError>> + 'a>>\n    where\n        S: StorageReader + ?Sized + 'a,\n    {\n        Box::pin(async move {\n            match self.load_node_with_overlay(store, overlay, &hash).await? 
{\n                DecodedNode::Leaf(leaf) => {\n                    let summary = leaf_summary(hash, leaf.entries());\n                    push_level_summary(levels, 0, summary.clone());\n                    Ok((summary, 0))\n                }\n                DecodedNode::Internal(internal) => {\n                    let children = internal.children().to_vec();\n                    let child_height = match children.first() {\n                        Some(child) => match self\n                            .load_node_with_overlay(store, overlay, &child.child_hash)\n                            .await?\n                        {\n                            DecodedNode::Leaf(_) => {\n                                if levels.is_empty() {\n                                    levels.push(Vec::new());\n                                }\n                                levels[0].extend(children.iter().cloned());\n                                0\n                            }\n                            DecodedNode::Internal(_) => {\n                                let mut child_height = None;\n                                for child in &children {\n                                    let (_, height) = self\n                                        .collect_summary_levels_for_node_with_overlay(\n                                            store,\n                                            overlay,\n                                            child.child_hash,\n                                            levels,\n                                        )\n                                        .await?;\n                                    child_height = Some(height);\n                                }\n                                child_height.unwrap_or(0)\n                            }\n                        },\n                        None => 0,\n                    };\n                    let height = child_height + 1;\n                    let summary = 
internal_summary(hash, &children)?;\n                    push_level_summary(levels, height, summary.clone());\n                    Ok((summary, height))\n                }\n            }\n        })\n    }\n\n    async fn load_leaf_entries(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        hash: &[u8; TRACKED_STATE_HASH_BYTES],\n    ) -> Result<Vec<EncodedLeafEntry>, LixError> {\n        match self.load_node(store, hash).await? {\n            DecodedNode::Leaf(leaf) => Ok(leaf.entries().to_vec()),\n            DecodedNode::Internal(_) => Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state expected leaf chunk but found internal node\",\n            )),\n        }\n    }\n\n    async fn load_leaf_entries_with_overlay(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        overlay: &storage::TrackedStateChunkOverlay,\n        hash: &[u8; TRACKED_STATE_HASH_BYTES],\n    ) -> Result<Vec<EncodedLeafEntry>, LixError> {\n        match self.load_node_with_overlay(store, overlay, hash).await? 
{\n            DecodedNode::Leaf(leaf) => Ok(leaf.entries().to_vec()),\n            DecodedNode::Internal(_) => Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state expected leaf chunk but found internal node\",\n            )),\n        }\n    }\n\n    async fn load_node(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        hash: &[u8; TRACKED_STATE_HASH_BYTES],\n    ) -> Result<DecodedNode, LixError> {\n        let bytes = self.load_node_bytes(store, hash).await?;\n        decode_node(&bytes)\n    }\n\n    async fn load_node_bytes(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        hash: &[u8; TRACKED_STATE_HASH_BYTES],\n    ) -> Result<Vec<u8>, LixError> {\n        let bytes = storage::read_chunk(store, hash).await?.ok_or_else(|| {\n            LixError::new(\"LIX_ERROR_UNKNOWN\", \"tracked-state tree chunk is missing\")\n        })?;\n        storage::verify_chunk_hash(hash, &bytes)?;\n        Ok(bytes)\n    }\n\n    async fn load_node_with_overlay(\n        &self,\n        store: &mut (impl StorageReader + ?Sized),\n        overlay: &storage::TrackedStateChunkOverlay,\n        hash: &[u8; TRACKED_STATE_HASH_BYTES],\n    ) -> Result<DecodedNode, LixError> {\n        let bytes = overlay.read_chunk(store, hash).await?.ok_or_else(|| {\n            LixError::new(\"LIX_ERROR_UNKNOWN\", \"tracked-state tree chunk is missing\")\n        })?;\n        storage::verify_chunk_hash(hash, &bytes)?;\n        decode_node(&bytes)\n    }\n}\n\n#[derive(Debug)]\nstruct BuiltTree {\n    root_id: TrackedStateRootId,\n    chunks: Vec<PendingChunkWrite>,\n    row_count: usize,\n    tree_height: usize,\n    chunk_bytes: usize,\n}\n\nstruct ParentLevelPatch {\n    parent_start: usize,\n    old_parent_count: usize,\n    replacement_parents: Vec<ChildSummary>,\n}\n\nstruct SeekPathFrame {\n    children: Vec<ChildSummary>,\n    child_index: usize,\n}\n\n#[derive(Debug, Clone)]\nstruct 
EncodedScanRange {\n    start: Vec<u8>,\n    end: Option<Vec<u8>>,\n}\n\n#[derive(Debug, Clone, Copy)]\nstruct ScanKeyDecodeHint<'a> {\n    schema_key: &'a str,\n    file_id: Option<&'a str>,\n    prefix_len: usize,\n}\n\nfn binary_search_leaf_key(\n    leaf: &DecodedLeafNodeRef<'_>,\n    encoded_key: &[u8],\n) -> Result<Option<usize>, LixError> {\n    let mut low = 0usize;\n    let mut high = leaf.len();\n    while low < high {\n        let mid = low + (high - low) / 2;\n        let key = leaf.key(mid)?.ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state leaf key disappeared during binary search\",\n            )\n        })?;\n        match key.cmp(encoded_key) {\n            std::cmp::Ordering::Less => low = mid + 1,\n            std::cmp::Ordering::Equal => return Ok(Some(mid)),\n            std::cmp::Ordering::Greater => high = mid,\n        }\n    }\n    Ok(None)\n}\n\nstruct LeafSummaryCursor {\n    stack: Vec<LeafSummaryCursorFrame>,\n    current: Option<ChildSummary>,\n}\n\nstruct LeafSummaryCursorFrame {\n    children: Vec<ChildSummary>,\n    next_index: usize,\n    children_are_leaves: bool,\n}\n\nimpl LeafSummaryCursor {\n    async fn new(\n        tree: &TrackedStateTree,\n        store: &mut impl StorageReader,\n        root_hash: [u8; TRACKED_STATE_HASH_BYTES],\n    ) -> Result<Self, LixError> {\n        let mut cursor = Self {\n            stack: Vec::new(),\n            current: None,\n        };\n        match tree.load_node(store, &root_hash).await? 
{\n            DecodedNode::Leaf(leaf) => {\n                cursor.current = Some(leaf_summary(root_hash, leaf.entries()));\n            }\n            DecodedNode::Internal(internal) => {\n                let children = internal.children().to_vec();\n                let children_are_leaves =\n                    child_summaries_are_leaves(tree, store, &children).await?;\n                cursor.stack.push(LeafSummaryCursorFrame {\n                    children,\n                    next_index: 0,\n                    children_are_leaves,\n                });\n                cursor.advance(tree, store).await?;\n            }\n        }\n        Ok(cursor)\n    }\n\n    fn current(&self) -> Option<&ChildSummary> {\n        self.current.as_ref()\n    }\n\n    async fn advance(\n        &mut self,\n        tree: &TrackedStateTree,\n        store: &mut impl StorageReader,\n    ) -> Result<(), LixError> {\n        self.current = None;\n        while let Some(frame) = self.stack.last_mut() {\n            if frame.next_index >= frame.children.len() {\n                self.stack.pop();\n                continue;\n            }\n\n            let next = frame.children[frame.next_index].clone();\n            let next_is_leaf = frame.children_are_leaves;\n            frame.next_index += 1;\n            if next_is_leaf {\n                self.current = Some(next);\n                return Ok(());\n            }\n            self.descend_to_leaf(tree, store, next).await?;\n            return Ok(());\n        }\n        Ok(())\n    }\n\n    async fn descend_to_leaf(\n        &mut self,\n        tree: &TrackedStateTree,\n        store: &mut impl StorageReader,\n        mut summary: ChildSummary,\n    ) -> Result<(), LixError> {\n        loop {\n            match tree.load_node(store, &summary.child_hash).await? 
{\n                DecodedNode::Leaf(_) => {\n                    self.current = Some(summary);\n                    return Ok(());\n                }\n                DecodedNode::Internal(internal) => {\n                    let children = internal.children().to_vec();\n                    let children_are_leaves =\n                        child_summaries_are_leaves(tree, store, &children).await?;\n                    let Some(first_child) = children.first().cloned() else {\n                        return Err(LixError::new(\n                            \"LIX_ERROR_UNKNOWN\",\n                            \"tracked-state internal node has no children\",\n                        ));\n                    };\n                    self.stack.push(LeafSummaryCursorFrame {\n                        children,\n                        next_index: 1,\n                        children_are_leaves,\n                    });\n                    if children_are_leaves {\n                        self.current = Some(first_child);\n                        return Ok(());\n                    } else {\n                        summary = first_child;\n                    }\n                }\n            }\n        }\n    }\n}\n\n#[derive(Debug, Default)]\nstruct LeafChunkAccumulator {\n    entries: Vec<EncodedLeafEntry>,\n    key_bytes: usize,\n    value_bytes: usize,\n}\n\n#[derive(Debug, Default)]\nstruct LeafChunkRefAccumulator<'a> {\n    entries: Vec<EncodedLeafEntryRef<'a>>,\n    key_bytes: usize,\n    value_bytes: usize,\n}\n\n#[derive(Debug, Default)]\nstruct InternalChunkAccumulator {\n    children: Vec<ChildSummary>,\n    first_key_bytes: usize,\n    last_key_bytes: usize,\n}\n\n#[derive(Debug, Default)]\nstruct InternalChunkRefAccumulator<'a> {\n    children: Vec<ChildSummaryRef<'a>>,\n    first_key_bytes: usize,\n    last_key_bytes: usize,\n}\n\nfn chunk_leaf_entries(\n    entries: Vec<EncodedLeafEntry>,\n    options: &TrackedStateTreeOptions,\n) -> Vec<LeafChunkAccumulator> 
{\n    if entries.is_empty() {\n        return vec![LeafChunkAccumulator::default()];\n    }\n    let mut groups = Vec::new();\n    let mut current = LeafChunkAccumulator::default();\n    for entry in entries {\n        let item_size = estimate_leaf_entry_size(entry.key.len(), entry.value.len());\n        let projected_size = estimate_leaf_chunk_size(\n            current.entries.len() + 1,\n            current.key_bytes + entry.key.len(),\n            current.value_bytes + entry.value.len(),\n        );\n        if !current.entries.is_empty() && projected_size > options.max_chunk_bytes {\n            groups.push(std::mem::take(&mut current));\n        }\n\n        current.key_bytes += entry.key.len();\n        current.value_bytes += entry.value.len();\n        current.entries.push(entry);\n        let current_size = estimate_leaf_chunk_size(\n            current.entries.len(),\n            current.key_bytes,\n            current.value_bytes,\n        );\n        if current_size >= options.min_chunk_bytes\n            && (current_size >= options.max_chunk_bytes\n                || current.entries.last().is_some_and(|entry| {\n                    boundary_trigger(\n                        &entry.key,\n                        0,\n                        current_size,\n                        item_size,\n                        options.target_chunk_bytes,\n                    )\n                }))\n        {\n            groups.push(std::mem::take(&mut current));\n        }\n    }\n    if !current.entries.is_empty() {\n        groups.push(current);\n    }\n    groups\n}\n\nfn chunk_leaf_entry_refs<'a>(\n    entries: impl IntoIterator<Item = EncodedLeafEntryRef<'a>>,\n    options: &TrackedStateTreeOptions,\n) -> Vec<LeafChunkRefAccumulator<'a>> {\n    let mut iter = entries.into_iter().peekable();\n    if iter.peek().is_none() {\n        return vec![LeafChunkRefAccumulator::default()];\n    }\n    let mut groups = Vec::new();\n    let mut current = 
LeafChunkRefAccumulator::default();\n    for entry in iter {\n        let item_size = estimate_leaf_entry_size(entry.key.len(), entry.value.len());\n        let projected_size = estimate_leaf_chunk_size(\n            current.entries.len() + 1,\n            current.key_bytes + entry.key.len(),\n            current.value_bytes + entry.value.len(),\n        );\n        if !current.entries.is_empty() && projected_size > options.max_chunk_bytes {\n            groups.push(std::mem::take(&mut current));\n        }\n\n        current.key_bytes += entry.key.len();\n        current.value_bytes += entry.value.len();\n        current.entries.push(entry);\n        let current_size = estimate_leaf_chunk_size(\n            current.entries.len(),\n            current.key_bytes,\n            current.value_bytes,\n        );\n        if current_size >= options.min_chunk_bytes\n            && (current_size >= options.max_chunk_bytes\n                || current.entries.last().is_some_and(|entry| {\n                    boundary_trigger(\n                        entry.key,\n                        0,\n                        current_size,\n                        item_size,\n                        options.target_chunk_bytes,\n                    )\n                }))\n        {\n            groups.push(std::mem::take(&mut current));\n        }\n    }\n    if !current.entries.is_empty() {\n        groups.push(current);\n    }\n    groups\n}\n\nfn chunk_internal_entries(\n    children: Vec<ChildSummary>,\n    options: &TrackedStateTreeOptions,\n    level: usize,\n) -> Vec<InternalChunkAccumulator> {\n    let mut groups = Vec::new();\n    let mut current = InternalChunkAccumulator::default();\n    for child in children {\n        let item_size = child.first_key.len()\n            + child.last_key.len()\n            + TRACKED_STATE_HASH_BYTES\n            + std::mem::size_of::<u64>();\n        let projected_size = estimate_internal_chunk_size(\n            current.children.len() + 1,\n    
        current.first_key_bytes + child.first_key.len(),\n            current.last_key_bytes + child.last_key.len(),\n        );\n        if !current.children.is_empty() && projected_size > options.max_chunk_bytes {\n            groups.push(std::mem::take(&mut current));\n        }\n\n        current.first_key_bytes += child.first_key.len();\n        current.last_key_bytes += child.last_key.len();\n        current.children.push(child);\n        let current_size = estimate_internal_chunk_size(\n            current.children.len(),\n            current.first_key_bytes,\n            current.last_key_bytes,\n        );\n        if current_size >= options.min_chunk_bytes\n            && (current_size >= options.max_chunk_bytes\n                || current.children.last().is_some_and(|child| {\n                    boundary_trigger(\n                        &child.first_key,\n                        level,\n                        current_size,\n                        item_size,\n                        options.target_chunk_bytes,\n                    )\n                }))\n        {\n            groups.push(std::mem::take(&mut current));\n        }\n    }\n    if !current.children.is_empty() {\n        groups.push(current);\n    }\n    groups\n}\n\nfn chunk_internal_entry_refs<'a>(\n    children: impl IntoIterator<Item = ChildSummaryRef<'a>>,\n    options: &TrackedStateTreeOptions,\n    level: usize,\n) -> Vec<InternalChunkRefAccumulator<'a>> {\n    let mut groups = Vec::new();\n    let mut current = InternalChunkRefAccumulator::default();\n    for child in children {\n        let item_size = child.first_key.len()\n            + child.last_key.len()\n            + TRACKED_STATE_HASH_BYTES\n            + std::mem::size_of::<u64>();\n        let projected_size = estimate_internal_chunk_size(\n            current.children.len() + 1,\n            current.first_key_bytes + child.first_key.len(),\n            current.last_key_bytes + child.last_key.len(),\n        );\n        
if !current.children.is_empty() && projected_size > options.max_chunk_bytes {\n            groups.push(std::mem::take(&mut current));\n        }\n\n        current.first_key_bytes += child.first_key.len();\n        current.last_key_bytes += child.last_key.len();\n        current.children.push(child);\n        let current_size = estimate_internal_chunk_size(\n            current.children.len(),\n            current.first_key_bytes,\n            current.last_key_bytes,\n        );\n        if current_size >= options.min_chunk_bytes\n            && (current_size >= options.max_chunk_bytes\n                || current.children.last().is_some_and(|child| {\n                    boundary_trigger(\n                        child.first_key,\n                        level,\n                        current_size,\n                        item_size,\n                        options.target_chunk_bytes,\n                    )\n                }))\n        {\n            groups.push(std::mem::take(&mut current));\n        }\n    }\n    if !current.children.is_empty() {\n        groups.push(current);\n    }\n    groups\n}\n\nfn estimate_leaf_chunk_size(entry_count: usize, key_bytes: usize, value_bytes: usize) -> usize {\n    10 + entry_count * 12 + key_bytes + value_bytes\n}\n\nfn estimate_leaf_entry_size(key_bytes: usize, value_bytes: usize) -> usize {\n    12 + key_bytes + value_bytes\n}\n\nfn estimate_internal_chunk_size(\n    child_count: usize,\n    first_key_bytes: usize,\n    last_key_bytes: usize,\n) -> usize {\n    16 + child_count * (8 + TRACKED_STATE_HASH_BYTES + std::mem::size_of::<u64>())\n        + first_key_bytes\n        + last_key_bytes\n}\n\nfn first_resync_index(\n    generated: &[ChildSummary],\n    existing: &[ChildSummary],\n    mutation_key: &[u8],\n) -> Option<(usize, usize)> {\n    for (generated_index, generated) in generated.iter().enumerate() {\n        // A matching old chunk before the mutation key is only unchanged\n        // prefix; resync is only 
valid after the mutation has been emitted.\n        if generated.first_key.as_slice() <= mutation_key {\n            continue;\n        }\n        if let Some(existing_index) = existing.iter().position(|existing| generated == existing) {\n            return Some((generated_index, existing_index));\n        }\n    }\n    None\n}\n\nfn internal_boundaries_match(left: &[ChildSummary], right: &[ChildSummary]) -> bool {\n    left.len() == right.len()\n        && left.iter().zip(right).all(|(left, right)| {\n            left.first_key == right.first_key && left.last_key == right.last_key\n        })\n}\n\nasync fn child_summaries_are_leaves(\n    tree: &TrackedStateTree,\n    store: &mut impl StorageReader,\n    children: &[ChildSummary],\n) -> Result<bool, LixError> {\n    let Some(first_child) = children.first() else {\n        return Ok(false);\n    };\n    Ok(matches!(\n        tree.load_node(store, &first_child.child_hash).await?,\n        DecodedNode::Leaf(_)\n    ))\n}\n\nfn decode_entry(\n    entry: &EncodedLeafEntry,\n) -> Result<(TrackedStateKey, TrackedStateIndexValue), LixError> {\n    Ok((decode_key(&entry.key)?, decode_value(&entry.value)?))\n}\n\nfn parent_index_for_child_index(\n    old_children: &[ChildSummary],\n    old_parents: &[ChildSummary],\n    child_index: usize,\n) -> usize {\n    let key = if child_index < old_children.len() {\n        old_children[child_index].first_key.as_slice()\n    } else {\n        old_children\n            .last()\n            .map(|child| child.last_key.as_slice())\n            .unwrap_or_default()\n    };\n    old_parents\n        .iter()\n        .position(|parent| parent.last_key.as_slice() >= key)\n        .unwrap_or_else(|| old_parents.len().saturating_sub(1))\n}\n\nfn child_range_for_parent(\n    old_children: &[ChildSummary],\n    parent: &ChildSummary,\n) -> Result<Range<usize>, LixError> {\n    let start = old_children\n        .iter()\n        .position(|child| child.last_key.as_slice() >= 
parent.first_key.as_slice())\n        .ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state parent summary does not overlap child summaries\",\n            )\n        })?;\n    let end = old_children[start..]\n        .iter()\n        .position(|child| child.last_key == parent.last_key)\n        .map(|offset| start + offset + 1)\n        .ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state parent summary end does not match child summaries\",\n            )\n        })?;\n    Ok(start..end)\n}\n\nfn leaf_summary(\n    hash: [u8; TRACKED_STATE_HASH_BYTES],\n    entries: &[EncodedLeafEntry],\n) -> ChildSummary {\n    ChildSummary {\n        first_key: entries\n            .first()\n            .map(|entry| entry.key.clone())\n            .unwrap_or_default(),\n        last_key: entries\n            .last()\n            .map(|entry| entry.key.clone())\n            .unwrap_or_default(),\n        child_hash: hash,\n        subtree_count: entries.len() as u64,\n    }\n}\n\nfn internal_summary(\n    hash: [u8; TRACKED_STATE_HASH_BYTES],\n    children: &[ChildSummary],\n) -> Result<ChildSummary, LixError> {\n    let first_key = children\n        .first()\n        .map(|child| child.first_key.clone())\n        .ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state internal node has no children\",\n            )\n        })?;\n    let last_key = children\n        .last()\n        .map(|child| child.last_key.clone())\n        .ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"tracked-state internal node has no children\",\n            )\n        })?;\n    Ok(ChildSummary {\n        first_key,\n        last_key,\n        child_hash: hash,\n        subtree_count: children.iter().map(|child| child.subtree_count).sum(),\n    })\n}\n\nfn 
push_level_summary(levels: &mut Vec<Vec<ChildSummary>>, level: usize, summary: ChildSummary) {\n    while levels.len() <= level {\n        levels.push(Vec::new());\n    }\n    levels[level].push(summary);\n}\n\nfn scan_ranges(request: &TrackedStateTreeScanRequest) -> Vec<EncodedScanRange> {\n    if request.schema_keys.is_empty() {\n        return Vec::new();\n    }\n\n    let can_bind_entity = !request.entity_ids.is_empty()\n        && !request.file_ids.is_empty()\n        && request\n            .file_ids\n            .iter()\n            .all(|filter| !matches!(filter, NullableKeyFilter::Any));\n\n    let mut ranges = Vec::new();\n    for schema_key in &request.schema_keys {\n        if can_bind_entity {\n            for file_filter in &request.file_ids {\n                let file_id = match file_filter {\n                    NullableKeyFilter::Null => None,\n                    NullableKeyFilter::Value(file_id) => Some(file_id.clone()),\n                    NullableKeyFilter::Any => unreachable!(\"filtered above\"),\n                };\n                for entity_id in &request.entity_ids {\n                    let key = TrackedStateKey {\n                        schema_key: schema_key.clone(),\n                        file_id: file_id.clone(),\n                        entity_id: entity_id.clone(),\n                    };\n                    ranges.push(exact_scan_range(encode_key(&key)));\n                }\n            }\n            continue;\n        }\n\n        if request.file_ids.is_empty()\n            || request\n                .file_ids\n                .iter()\n                .any(|filter| matches!(filter, NullableKeyFilter::Any))\n        {\n            ranges.push(prefix_scan_range(encode_schema_key_prefix(schema_key)));\n            continue;\n        }\n\n        for file_filter in &request.file_ids {\n            let prefix = match file_filter {\n                NullableKeyFilter::Null => encode_schema_file_prefix(schema_key, None),\n          
      NullableKeyFilter::Value(file_id) => {\n                    encode_schema_file_prefix(schema_key, Some(file_id))\n                }\n                NullableKeyFilter::Any => unreachable!(\"handled above\"),\n            };\n            ranges.push(prefix_scan_range(prefix));\n        }\n    }\n    ranges\n}\n\nfn scan_key_decode_hint<'a>(\n    request: &'a TrackedStateTreeScanRequest,\n    ranges: &[EncodedScanRange],\n) -> Option<ScanKeyDecodeHint<'a>> {\n    if ranges.len() != 1 || request.schema_keys.len() != 1 || request.file_ids.len() != 1 {\n        return None;\n    }\n    if !request.entity_ids.is_empty() {\n        return None;\n    }\n    let file_id = match request.file_ids.first()? {\n        NullableKeyFilter::Null => None,\n        NullableKeyFilter::Value(file_id) => Some(file_id.as_str()),\n        NullableKeyFilter::Any => return None,\n    };\n    Some(ScanKeyDecodeHint {\n        schema_key: request.schema_keys.first()?.as_str(),\n        file_id,\n        prefix_len: ranges.first()?.start.len(),\n    })\n}\n\nfn prefix_scan_range(prefix: Vec<u8>) -> EncodedScanRange {\n    EncodedScanRange {\n        end: lexicographic_successor(&prefix),\n        start: prefix,\n    }\n}\n\nfn exact_scan_range(key: Vec<u8>) -> EncodedScanRange {\n    EncodedScanRange {\n        end: lexicographic_successor(&key),\n        start: key,\n    }\n}\n\nfn lexicographic_successor(bytes: &[u8]) -> Option<Vec<u8>> {\n    let mut out = bytes.to_vec();\n    for index in (0..out.len()).rev() {\n        if out[index] != u8::MAX {\n            out[index] += 1;\n            out.truncate(index + 1);\n            return Some(out);\n        }\n    }\n    None\n}\n\nfn child_summary_overlaps_scan_ranges(child: &ChildSummary, ranges: &[EncodedScanRange]) -> bool {\n    ranges.is_empty()\n        || ranges.iter().any(|range| {\n            child.last_key.as_slice() >= range.start.as_slice()\n                && range\n                    .end\n                    .as_ref()\n  
                  .is_none_or(|end| child.first_key.as_slice() < end.as_slice())\n        })\n}\n\nfn child_summary_contained_by_scan_ranges(\n    child: &ChildSummary,\n    ranges: &[EncodedScanRange],\n) -> bool {\n    ranges.is_empty()\n        || ranges.iter().any(|range| {\n            child.first_key.as_slice() >= range.start.as_slice()\n                && range\n                    .end\n                    .as_ref()\n                    .is_none_or(|end| child.last_key.as_slice() < end.as_slice())\n        })\n}\n\nfn encoded_key_in_scan_ranges(key: &[u8], ranges: &[EncodedScanRange]) -> bool {\n    ranges.is_empty()\n        || ranges.iter().any(|range| {\n            key >= range.start.as_slice()\n                && range.end.as_ref().is_none_or(|end| key < end.as_slice())\n        })\n}\n\nfn key_matches_scan_filters(request: &TrackedStateTreeScanRequest, key: &TrackedStateKey) -> bool {\n    if !request.schema_keys.is_empty() && !request.schema_keys.contains(&key.schema_key) {\n        return false;\n    }\n    if !request.entity_ids.is_empty() && !request.entity_ids.contains(&key.entity_id) {\n        return false;\n    }\n    if !request.file_ids.is_empty()\n        && !request\n            .file_ids\n            .iter()\n            .any(|filter| filter.matches(key.file_id.as_ref()))\n    {\n        return false;\n    }\n    true\n}\n\nfn scan_limit_reached(request: &TrackedStateTreeScanRequest, row_count: usize) -> bool {\n    request.limit.is_some_and(|limit| row_count >= limit)\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use super::*;\n    use crate::backend::testing::UnitTestBackend;\n    use crate::entity_identity::EntityIdentity;\n    use crate::storage::{StorageContext, StorageWriteTransaction};\n    use crate::tracked_state::codec::encode_value;\n\n    #[tokio::test]\n    async fn exact_read_roundtrips_from_stored_root() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let tree = 
TrackedStateTree::new();\n        let key = key(\"schema\", None, \"entity\");\n        let value = value(\"change-1\", Some(\"{}\"));\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let result = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n            None,\n            vec![mutation(&key, &value)],\n            Some(\"commit-1\"),\n        )\n        .await\n        .expect(\"mutations should apply\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let mut store = storage.clone();\n        assert_eq!(\n            tree.load_root(&mut store, \"commit-1\")\n                .await\n                .expect(\"root should load\"),\n            Some(result.root_id.clone())\n        );\n        assert_eq!(\n            tree.get(&mut store, &result.root_id, &key)\n                .await\n                .expect(\"row should load\"),\n            Some(value)\n        );\n    }\n\n    #[tokio::test]\n    async fn latest_mutation_for_key_wins() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let tree = TrackedStateTree::new();\n        let key = key(\"schema\", None, \"entity\");\n        let old_value = value(\"change-old\", Some(\"{\\\"v\\\":1}\"));\n        let new_value = value(\"change-new\", Some(\"{\\\"v\\\":2}\"));\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let result = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n            None,\n            vec![mutation(&key, &old_value), mutation(&key, &new_value)],\n            None,\n        )\n        .await\n        .expect(\"mutations should apply\");\n        transaction\n            .commit()\n         
   .await\n            .expect(\"transaction should commit\");\n\n        let mut store = storage.clone();\n        let loaded = tree\n            .get(&mut store, &result.root_id, &key)\n            .await\n            .expect(\"row should load\")\n            .expect(\"row should exist\");\n        assert_eq!(loaded.change_locator.change_id, \"change-new\");\n        assert_eq!(loaded.change_locator.source_commit_id, \"commit\");\n    }\n\n    #[tokio::test]\n    async fn scan_filters_by_index_key_without_materializing_tombstones() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let tree = TrackedStateTree::new();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let result = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n            None,\n            vec![\n                mutation_owned(key(\"schema-a\", None, \"visible\"), value(\"c1\", Some(\"{}\"))),\n                mutation_owned(key(\"schema-a\", None, \"deleted\"), value(\"c2\", None)),\n                mutation_owned(key(\"schema-b\", None, \"other\"), value(\"c3\", Some(\"{}\"))),\n            ],\n            None,\n        )\n        .await\n        .expect(\"mutations should apply\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let mut store = storage.clone();\n        let rows = tree\n            .scan(\n                &mut store,\n                &result.root_id,\n                &TrackedStateTreeScanRequest {\n                    schema_keys: vec![\"schema-a\".to_string()],\n                    ..Default::default()\n                },\n            )\n            .await\n            .expect(\"scan should succeed\");\n        assert_eq!(rows.len(), 2);\n        let identities = rows\n            .iter()\n            .map(|(key, _)| 
key.entity_id.as_single_string_owned().expect(\"identity\"))\n            .collect::<Vec<_>>();\n        assert_eq!(identities, vec![\"deleted\", \"visible\"]);\n\n        let live_rows = tree\n            .scan(\n                &mut store,\n                &result.root_id,\n                &TrackedStateTreeScanRequest {\n                    schema_keys: vec![\"schema-a\".to_string()],\n                    include_tombstones: false,\n                    ..Default::default()\n                },\n            )\n            .await\n            .expect(\"live scan should succeed\");\n        let live_identities = live_rows\n            .iter()\n            .map(|(key, _)| key.entity_id.as_single_string_owned().expect(\"identity\"))\n            .collect::<Vec<_>>();\n        assert_eq!(live_identities, vec![\"visible\"]);\n    }\n\n    #[tokio::test]\n    async fn scan_filters_by_schema_entity_and_file() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let tree = TrackedStateTree::new();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let result = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n            None,\n            vec![\n                mutation_owned(\n                    key(\"schema-a\", Some(\"file-a\"), \"entity-a\"),\n                    value(\"c1\", Some(\"{}\")),\n                ),\n                mutation_owned(\n                    key(\"schema-a\", Some(\"file-b\"), \"entity-a\"),\n                    value(\"c2\", Some(\"{}\")),\n                ),\n                mutation_owned(\n                    key(\"schema-a\", Some(\"file-a\"), \"entity-b\"),\n                    value(\"c3\", Some(\"{}\")),\n                ),\n                mutation_owned(\n                    key(\"schema-b\", Some(\"file-a\"), \"entity-a\"),\n                    value(\"c4\", 
Some(\"{}\")),\n                ),\n            ],\n            None,\n        )\n        .await\n        .expect(\"mutations should apply\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let mut store = storage.clone();\n        let rows = tree\n            .scan(\n                &mut store,\n                &result.root_id,\n                &TrackedStateTreeScanRequest {\n                    schema_keys: vec![\"schema-a\".to_string()],\n                    entity_ids: vec![crate::entity_identity::EntityIdentity::single(\"entity-a\")],\n                    file_ids: vec![crate::NullableKeyFilter::Value(\"file-a\".to_string())],\n                    ..Default::default()\n                },\n            )\n            .await\n            .expect(\"scan should succeed\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(rows[0].0.schema_key, \"schema-a\");\n        assert_eq!(\n            rows[0]\n                .0\n                .entity_id\n                .as_single_string_owned()\n                .expect(\"identity\"),\n            \"entity-a\"\n        );\n        assert_eq!(rows[0].0.file_id.as_deref(), Some(\"file-a\"));\n    }\n\n    #[tokio::test]\n    async fn scan_schema_file_prefix_honors_tombstones_and_limit() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let tree = TrackedStateTree::new();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let result = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n            None,\n            vec![\n                mutation_owned(\n                    key(\"schema-a\", Some(\"file-a\"), \"entity-a\"),\n                    value(\"c1\", Some(\"{}\")),\n                ),\n                mutation_owned(\n                    
key(\"schema-a\", Some(\"file-a\"), \"entity-b\"),\n                    value(\"c2\", None),\n                ),\n                mutation_owned(\n                    key(\"schema-a\", Some(\"file-a\"), \"entity-c\"),\n                    value(\"c3\", Some(\"{}\")),\n                ),\n                mutation_owned(\n                    key(\"schema-a\", Some(\"file-b\"), \"entity-d\"),\n                    value(\"c4\", Some(\"{}\")),\n                ),\n            ],\n            None,\n        )\n        .await\n        .expect(\"mutations should apply\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let mut store = storage.clone();\n        let rows = tree\n            .scan(\n                &mut store,\n                &result.root_id,\n                &TrackedStateTreeScanRequest {\n                    schema_keys: vec![\"schema-a\".to_string()],\n                    file_ids: vec![crate::NullableKeyFilter::Value(\"file-a\".to_string())],\n                    include_tombstones: false,\n                    limit: Some(2),\n                    ..Default::default()\n                },\n            )\n            .await\n            .expect(\"scan should succeed\");\n\n        assert_eq!(rows.len(), 2);\n        assert!(rows.iter().all(\n            |(key, _)| key.schema_key == \"schema-a\" && key.file_id.as_deref() == Some(\"file-a\")\n        ));\n        assert_eq!(\n            rows.iter()\n                .map(|(key, _)| key.entity_id.as_single_string_owned().expect(\"identity\"))\n                .collect::<Vec<_>>(),\n            vec![\"entity-a\", \"entity-c\"]\n        );\n    }\n\n    #[tokio::test]\n    async fn applying_to_base_root_reuses_existing_rows_and_overwrites_changed_rows() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let tree = TrackedStateTree::new();\n        let unchanged_key = key(\"schema\", None, 
\"unchanged\");\n        let changed_key = key(\"schema\", None, \"changed\");\n        let unchanged_value = value(\"c1\", Some(\"{}\"));\n        let old_changed_value = value(\"c2\", Some(\"{\\\"old\\\":true}\"));\n        let new_changed_value = value(\"c3\", Some(\"{\\\"new\\\":true}\"));\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let base = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n            None,\n            vec![\n                mutation(&unchanged_key, &unchanged_value),\n                mutation(&changed_key, &old_changed_value),\n            ],\n            None,\n        )\n        .await\n        .expect(\"base should build\");\n        let next = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n            Some(&base.root_id),\n            vec![mutation(&changed_key, &new_changed_value)],\n            None,\n        )\n        .await\n        .expect(\"next should build\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let mut store = storage.clone();\n        assert_eq!(\n            tree.get(&mut store, &next.root_id, &unchanged_key)\n                .await\n                .expect(\"unchanged read\")\n                .expect(\"unchanged exists\")\n                .change_locator\n                .change_id,\n            \"c1\"\n        );\n        assert_eq!(\n            tree.get(&mut store, &next.root_id, &changed_key)\n                .await\n                .expect(\"changed read\")\n                .expect(\"changed exists\")\n                .change_locator\n                .change_id,\n            \"c3\"\n        );\n    }\n\n    #[tokio::test]\n    async fn two_commit_roots_can_share_unchanged_rows() {\n        let storage = 
StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let tree = TrackedStateTree::new();\n        let shared_key = key(\"schema\", None, \"shared\");\n        let branch_a_key = key(\"schema\", None, \"branch-a\");\n        let branch_b_key = key(\"schema\", None, \"branch-b\");\n        let shared_value = value(\"shared-change\", Some(\"{\\\"shared\\\":true}\"));\n        let branch_a_value = value(\"branch-a-change\", Some(\"{\\\"branch\\\":\\\"a\\\"}\"));\n        let branch_b_value = value(\"branch-b-change\", Some(\"{\\\"branch\\\":\\\"b\\\"}\"));\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let base = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n            None,\n            vec![mutation(&shared_key, &shared_value)],\n            Some(\"commit-base\"),\n        )\n        .await\n        .expect(\"base root should build\");\n        let branch_a = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n            Some(&base.root_id),\n            vec![mutation(&branch_a_key, &branch_a_value)],\n            Some(\"commit-a\"),\n        )\n        .await\n        .expect(\"branch a root should build\");\n        let branch_b = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n            Some(&base.root_id),\n            vec![mutation(&branch_b_key, &branch_b_value)],\n            Some(\"commit-b\"),\n        )\n        .await\n        .expect(\"branch b root should build\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        assert_ne!(branch_a.root_id, branch_b.root_id);\n        let mut store = storage.clone();\n        assert_eq!(\n            tree.get(&mut store, &branch_a.root_id, &shared_key)\n                .await\n                .expect(\"branch a shared 
row should load\"),\n            Some(value(\"shared-change\", Some(\"{\\\"shared\\\":true}\")))\n        );\n        assert_eq!(\n            tree.get(&mut store, &branch_b.root_id, &shared_key)\n                .await\n                .expect(\"branch b shared row should load\"),\n            Some(value(\"shared-change\", Some(\"{\\\"shared\\\":true}\")))\n        );\n        assert!(tree\n            .get(&mut store, &branch_a.root_id, &branch_b_key)\n            .await\n            .expect(\"branch a should read\")\n            .is_none());\n        assert!(tree\n            .get(&mut store, &branch_b.root_id, &branch_a_key)\n            .await\n            .expect(\"branch b should read\")\n            .is_none());\n    }\n\n    #[tokio::test]\n    async fn single_update_matches_full_canonical_rebuild() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let tree = TrackedStateTree::with_options(TrackedStateTreeOptions {\n            target_chunk_bytes: 128,\n            min_chunk_bytes: 64,\n            max_chunk_bytes: 256,\n        });\n        let rows = (0..100)\n            .map(|index| {\n                mutation_owned(\n                    key(\"schema\", None, &format!(\"entity-{index:03}\")),\n                    value(&format!(\"c-{index}\"), Some(&format!(\"{{\\\"v\\\":{index}}}\"))),\n                )\n            })\n            .collect::<Vec<_>>();\n        let changed_key = key(\"schema\", None, \"entity-000\");\n        let changed_value = value(\"changed\", Some(\"{\\\"v\\\":\\\"changed\\\"}\"));\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let base = apply_mutations_for_test(&tree, transaction.as_mut(), None, rows, None)\n            .await\n            .expect(\"base should build\");\n        let fast = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n     
       Some(&base.root_id),\n            vec![mutation(&changed_key, &changed_value)],\n            None,\n        )\n        .await\n        .expect(\"fast path should apply\");\n        let mut canonical_entries = tree\n            .collect_leaf_entries(&mut transaction.as_mut(), &base.root_id)\n            .await\n            .expect(\"base entries should collect\");\n        assert!(canonical_entries\n            .windows(2)\n            .all(|window| window[0].key < window[1].key));\n        let encoded_changed_key = encode_key(&changed_key);\n        let encoded_changed_value = encode_value(&changed_value);\n        let index = canonical_entries\n            .binary_search_by(|entry| entry.key.as_slice().cmp(&encoded_changed_key))\n            .expect(\"changed key should exist\");\n        canonical_entries[index].value = encoded_changed_value;\n        let canonical = tree\n            .build_tree_from_entries(canonical_entries)\n            .expect(\"canonical root should build\");\n\n        assert_eq!(fast.root_id, canonical.root_id);\n    }\n\n    #[tokio::test]\n    async fn single_insert_matches_full_canonical_rebuild() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let tree = TrackedStateTree::with_options(TrackedStateTreeOptions {\n            target_chunk_bytes: 128,\n            min_chunk_bytes: 64,\n            max_chunk_bytes: 256,\n        });\n        let rows = (0..100)\n            .map(|index| {\n                mutation_owned(\n                    key(\"schema\", None, &format!(\"entity-{index:03}\")),\n                    value(&format!(\"c-{index}\"), Some(&format!(\"{{\\\"v\\\":{index}}}\"))),\n                )\n            })\n            .collect::<Vec<_>>();\n        let inserted_key = key(\"schema\", None, \"entity-050a\");\n        let inserted_value = value(\"inserted\", Some(\"{\\\"v\\\":\\\"inserted\\\"}\"));\n\n        let mut transaction = storage\n            
.begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let base = apply_mutations_for_test(&tree, transaction.as_mut(), None, rows, None)\n            .await\n            .expect(\"base should build\");\n        let fast = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n            Some(&base.root_id),\n            vec![mutation(&inserted_key, &inserted_value)],\n            None,\n        )\n        .await\n        .expect(\"fast path should apply\");\n        let mut canonical_entries = tree\n            .collect_leaf_entries(&mut transaction.as_mut(), &base.root_id)\n            .await\n            .expect(\"base entries should collect\");\n        let encoded_inserted_key = encode_key(&inserted_key);\n        let encoded_inserted_value = encode_value(&inserted_value);\n        let index = canonical_entries\n            .binary_search_by(|entry| entry.key.as_slice().cmp(&encoded_inserted_key))\n            .expect_err(\"inserted key should not exist\");\n        canonical_entries.insert(\n            index,\n            EncodedLeafEntry {\n                key: encoded_inserted_key,\n                value: encoded_inserted_value,\n            },\n        );\n        let canonical = tree\n            .build_tree_from_entries(canonical_entries)\n            .expect(\"canonical root should build\");\n\n        assert_eq!(fast.root_id, canonical.root_id);\n    }\n\n    #[tokio::test]\n    async fn batch_update_matches_full_canonical_rebuild() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let tree = TrackedStateTree::with_options(TrackedStateTreeOptions {\n            target_chunk_bytes: 128,\n            min_chunk_bytes: 64,\n            max_chunk_bytes: 256,\n        });\n        let rows = (0..100)\n            .map(|index| {\n                mutation_owned(\n                    key(\"schema\", None, &format!(\"entity-{index:03}\")),\n    
                value(&format!(\"c-{index}\"), Some(&format!(\"{{\\\"v\\\":{index}}}\"))),\n                )\n            })\n            .collect::<Vec<_>>();\n        let updates = (10..25)\n            .map(|index| {\n                (\n                    key(\"schema\", None, &format!(\"entity-{index:03}\")),\n                    value(\n                        &format!(\"changed-{index}\"),\n                        Some(&format!(\"{{\\\"changed\\\":{index}}}\")),\n                    ),\n                )\n            })\n            .collect::<Vec<_>>();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let base = apply_mutations_for_test(&tree, transaction.as_mut(), None, rows, None)\n            .await\n            .expect(\"base should build\");\n        let fast = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n            Some(&base.root_id),\n            updates\n                .iter()\n                .map(|(key, value)| mutation(key, value))\n                .collect(),\n            None,\n        )\n        .await\n        .expect(\"batch path should apply\");\n        let mut canonical_entries = tree\n            .collect_leaf_entries(&mut transaction.as_mut(), &base.root_id)\n            .await\n            .expect(\"base entries should collect\");\n        for (key, value) in updates {\n            let encoded_key = encode_key(&key);\n            let encoded_value = encode_value(&value);\n            let index = canonical_entries\n                .binary_search_by(|entry| entry.key.as_slice().cmp(&encoded_key))\n                .expect(\"updated key should exist\");\n            canonical_entries[index].value = encoded_value;\n        }\n        let canonical = tree\n            .build_tree_from_entries(canonical_entries)\n            .expect(\"canonical root should build\");\n\n        
assert_eq!(fast.root_id, canonical.root_id);\n    }\n\n    #[tokio::test]\n    async fn batch_insert_matches_full_canonical_rebuild() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let tree = TrackedStateTree::with_options(TrackedStateTreeOptions {\n            target_chunk_bytes: 128,\n            min_chunk_bytes: 64,\n            max_chunk_bytes: 256,\n        });\n        let rows = (0..100)\n            .map(|index| {\n                mutation_owned(\n                    key(\"schema\", None, &format!(\"entity-{index:03}\")),\n                    value(&format!(\"c-{index}\"), Some(&format!(\"{{\\\"v\\\":{index}}}\"))),\n                )\n            })\n            .collect::<Vec<_>>();\n        let inserts = [\"entity-050a\", \"entity-050b\", \"entity-050c\"]\n            .into_iter()\n            .enumerate()\n            .map(|(index, entity_id)| {\n                (\n                    key(\"schema\", None, entity_id),\n                    value(\n                        &format!(\"inserted-{index}\"),\n                        Some(&format!(\"{{\\\"inserted\\\":{index}}}\")),\n                    ),\n                )\n            })\n            .collect::<Vec<_>>();\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let base = apply_mutations_for_test(&tree, transaction.as_mut(), None, rows, None)\n            .await\n            .expect(\"base should build\");\n        let fast = apply_mutations_for_test(\n            &tree,\n            transaction.as_mut(),\n            Some(&base.root_id),\n            inserts\n                .iter()\n                .map(|(key, value)| mutation(key, value))\n                .collect(),\n            None,\n        )\n        .await\n        .expect(\"batch path should apply\");\n        let mut canonical_entries = tree\n            .collect_leaf_entries(&mut 
transaction.as_mut(), &base.root_id)\n            .await\n            .expect(\"base entries should collect\");\n        for (key, value) in inserts {\n            let encoded_key = encode_key(&key);\n            let encoded_value = encode_value(&value);\n            let index = canonical_entries\n                .binary_search_by(|entry| entry.key.as_slice().cmp(&encoded_key))\n                .expect_err(\"inserted key should not exist\");\n            canonical_entries.insert(\n                index,\n                EncodedLeafEntry {\n                    key: encoded_key,\n                    value: encoded_value,\n                },\n            );\n        }\n        let canonical = tree\n            .build_tree_from_entries(canonical_entries)\n            .expect(\"canonical root should build\");\n\n        assert_eq!(fast.root_id, canonical.root_id);\n    }\n\n    async fn apply_mutations_for_test(\n        tree: &TrackedStateTree,\n        transaction: &mut dyn StorageWriteTransaction,\n        base_root: Option<&TrackedStateRootId>,\n        mutations: Vec<TrackedStateMutation>,\n        commit_id: Option<&str>,\n    ) -> Result<TrackedStateApplyResult, LixError> {\n        let mut writes = StorageWriteSet::new();\n        let result = tree\n            .apply_mutations(transaction, &mut writes, base_root, mutations, commit_id)\n            .await?;\n        writes.apply(transaction).await?;\n        Ok(result)\n    }\n\n    fn mutation(key: &TrackedStateKey, value: &TrackedStateIndexValue) -> TrackedStateMutation {\n        TrackedStateMutation::put_encoded(encode_key(key), encode_value(value))\n    }\n\n    fn mutation_owned(key: TrackedStateKey, value: TrackedStateIndexValue) -> TrackedStateMutation {\n        mutation(&key, &value)\n    }\n\n    fn key(schema_key: &str, file_id: Option<&str>, entity_id: &str) -> TrackedStateKey {\n        TrackedStateKey {\n            schema_key: schema_key.to_string(),\n            file_id: 
file_id.map(str::to_string),\n            entity_id: EntityIdentity::single(entity_id),\n        }\n    }\n\n    fn value(change_id: &str, snapshot_content: Option<&str>) -> TrackedStateIndexValue {\n        let source_ordinal = match snapshot_content {\n            Some(\"{\\\"v\\\":1}\") => 1,\n            Some(\"{\\\"v\\\":2}\") => 2,\n            Some(_) => 3,\n            None => 0,\n        };\n        TrackedStateIndexValue {\n            change_locator: crate::commit_store::ChangeLocator {\n                source_commit_id: \"commit\".to_string(),\n                source_pack_id: 0,\n                source_ordinal,\n                change_id: change_id.to_string(),\n            },\n            deleted: snapshot_content.is_none(),\n            snapshot_ref: snapshot_content\n                .map(|content| crate::json_store::JsonRef::for_content(content.as_bytes())),\n            metadata_ref: None,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/tracked_state/types.rs",
    "content": "use crate::commit_store::{ChangeLocator, ChangeLocatorRef, ChangeRef};\nuse crate::entity_identity::EntityIdentity;\nuse crate::json_store::JsonRef;\nuse crate::{LixError, NullableKeyFilter};\n\npub(crate) const TRACKED_STATE_HASH_BYTES: usize = 32;\n\n/// Content-addressed root id for one tracked-state projection tree.\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]\npub(crate) struct TrackedStateRootId([u8; TRACKED_STATE_HASH_BYTES]);\n\nimpl TrackedStateRootId {\n    pub(crate) fn new(bytes: [u8; TRACKED_STATE_HASH_BYTES]) -> Self {\n        Self(bytes)\n    }\n\n    pub(crate) fn from_slice(bytes: &[u8]) -> Result<Self, LixError> {\n        if bytes.len() != TRACKED_STATE_HASH_BYTES {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\n                    \"tracked-state tree root id must be {TRACKED_STATE_HASH_BYTES} bytes, got {}\",\n                    bytes.len()\n                ),\n            ));\n        }\n        let mut out = [0_u8; TRACKED_STATE_HASH_BYTES];\n        out.copy_from_slice(bytes);\n        Ok(Self(out))\n    }\n\n    pub(crate) fn as_bytes(&self) -> &[u8; TRACKED_STATE_HASH_BYTES] {\n        &self.0\n    }\n}\n\n/// Root-independent tracked entity identity.\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]\npub(crate) struct TrackedStateKey {\n    pub(crate) schema_key: String,\n    pub(crate) file_id: Option<String>,\n    pub(crate) entity_id: EntityIdentity,\n}\n\n/// Zero-copy view of primary tracked-state key.\n#[derive(Debug, Clone, Copy)]\npub(crate) struct TrackedStateKeyRef<'a> {\n    pub(crate) schema_key: &'a str,\n    pub(crate) file_id: Option<&'a str>,\n    pub(crate) entity_id: &'a EntityIdentity,\n}\n\n/// Zero-copy tracked-state projection delta prepared from commit_store facts.\n#[derive(Debug, Clone, Copy)]\npub(crate) struct TrackedStateDeltaRef<'a> {\n    pub(crate) change: ChangeRef<'a>,\n    pub(crate) locator: 
ChangeLocatorRef<'a>,\n    pub(crate) created_at: &'a str,\n    pub(crate) updated_at: &'a str,\n}\n\n/// Owned per-commit projection delta entry.\n///\n/// Normal commits persist these entries in `tracked_state.delta_pack`. Full\n/// projection roots are materialized separately from these deltas.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct TrackedStateDeltaEntry {\n    pub(crate) key: TrackedStateKey,\n    pub(crate) value: TrackedStateIndexValue,\n}\n\n/// Projection value stored in tracked-state trees.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct TrackedStateIndexValue {\n    pub(crate) change_locator: ChangeLocator,\n    pub(crate) deleted: bool,\n    pub(crate) snapshot_ref: Option<JsonRef>,\n    pub(crate) metadata_ref: Option<JsonRef>,\n    pub(crate) created_at: String,\n    pub(crate) updated_at: String,\n}\n\n/// Zero-copy view of a tracked-state projection value.\n#[derive(Debug, Clone, Copy)]\npub(crate) struct TrackedStateIndexValueRef<'a> {\n    pub(crate) change_locator: ChangeLocatorRef<'a>,\n    pub(crate) deleted: bool,\n    pub(crate) snapshot_ref: Option<&'a JsonRef>,\n    pub(crate) metadata_ref: Option<&'a JsonRef>,\n    pub(crate) created_at: &'a str,\n    pub(crate) updated_at: &'a str,\n}\n\n/// Materialized tracked-state projection row.\n///\n/// Tracked rows are the projection that can be rebuilt from changelog facts.\n/// They intentionally do not carry an `untracked` flag: untracked local overlay\n/// data belongs to `untracked_state`, and the serving `live_state` facade is\n/// responsible for combining both sources.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) struct MaterializedTrackedStateRow {\n    pub(crate) entity_id: EntityIdentity,\n    pub(crate) schema_key: String,\n    pub(crate) file_id: Option<String>,\n    pub(crate) snapshot_content: Option<String>,\n    pub(crate) metadata: Option<String>,\n    pub(crate) deleted: bool,\n    pub(crate) created_at: 
String,\n    pub(crate) updated_at: String,\n    pub(crate) change_id: String,\n    pub(crate) commit_id: String,\n}\n\n/// Identity-centered filter for tracked-state scans.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)]\npub(crate) struct TrackedStateFilter {\n    #[serde(default)]\n    pub(crate) schema_keys: Vec<String>,\n    #[serde(default)]\n    pub(crate) entity_ids: Vec<EntityIdentity>,\n    #[serde(default)]\n    pub(crate) file_ids: Vec<NullableKeyFilter<String>>,\n    #[serde(default)]\n    pub(crate) include_tombstones: bool,\n}\n\n/// Requested property set for a tracked-state scan.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)]\npub(crate) struct TrackedStateProjection {\n    #[serde(default)]\n    pub(crate) columns: Vec<String>,\n}\n\n/// Scan request for the tracked-state projection.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)]\npub(crate) struct TrackedStateScanRequest {\n    #[serde(default)]\n    pub(crate) filter: TrackedStateFilter,\n    #[serde(default)]\n    pub(crate) projection: TrackedStateProjection,\n    #[serde(default)]\n    pub(crate) limit: Option<usize>,\n}\n\n/// Point lookup request for one tracked-state row.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct TrackedStateRowRequest {\n    pub(crate) schema_key: String,\n    pub(crate) entity_id: EntityIdentity,\n    pub(crate) file_id: NullableKeyFilter<String>,\n}\n\n#[derive(Debug, PartialEq, Eq)]\npub(crate) struct TrackedStateMutation {\n    pub(crate) encoded_key: Vec<u8>,\n    pub(crate) encoded_value: Vec<u8>,\n}\n\nimpl TrackedStateMutation {\n    pub(crate) fn put_encoded(encoded_key: Vec<u8>, encoded_value: Vec<u8>) -> Self {\n        Self {\n            encoded_key,\n            encoded_value,\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct TrackedStateTreeScanRequest {\n    pub(crate) schema_keys: 
Vec<String>,\n    pub(crate) entity_ids: Vec<EntityIdentity>,\n    pub(crate) file_ids: Vec<NullableKeyFilter<String>>,\n    pub(crate) include_tombstones: bool,\n    pub(crate) limit: Option<usize>,\n}\n\nimpl Default for TrackedStateTreeScanRequest {\n    fn default() -> Self {\n        Self {\n            schema_keys: Vec::new(),\n            entity_ids: Vec::new(),\n            file_ids: Vec::new(),\n            include_tombstones: true,\n            limit: None,\n        }\n    }\n}\n\nimpl TrackedStateTreeScanRequest {\n    pub(crate) fn matches(&self, key: &TrackedStateKey, value: &TrackedStateIndexValue) -> bool {\n        if !self.include_tombstones && value.deleted {\n            return false;\n        }\n        self.matches_key(key)\n    }\n\n    pub(crate) fn matches_key(&self, key: &TrackedStateKey) -> bool {\n        if !self.schema_keys.is_empty() && !self.schema_keys.contains(&key.schema_key) {\n            return false;\n        }\n        if !self.entity_ids.is_empty() && !self.entity_ids.contains(&key.entity_id) {\n            return false;\n        }\n        if !self.file_ids.is_empty()\n            && !self.file_ids.iter().any(|filter| match filter {\n                NullableKeyFilter::Any => true,\n                NullableKeyFilter::Null => key.file_id.is_none(),\n                NullableKeyFilter::Value(value) => key.file_id.as_ref() == Some(value),\n            })\n        {\n            return false;\n        }\n        true\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct TrackedStateApplyResult {\n    pub(crate) root_id: TrackedStateRootId,\n    pub(crate) row_count: usize,\n    pub(crate) tree_height: usize,\n    pub(crate) chunk_count: usize,\n    pub(crate) chunk_bytes: usize,\n    pub(crate) persisted_root: bool,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct TrackedStateTreeDiffEntry {\n    pub(crate) before: Option<(TrackedStateKey, TrackedStateIndexValue)>,\n    pub(crate) after: 
Option<(TrackedStateKey, TrackedStateIndexValue)>,\n}\n"
  },
  {
    "path": "packages/engine/src/transaction/commit.rs",
    "content": "use crate::binary_cas::BinaryCasContext;\nuse crate::commit_store::{ChangeRef, CommitDraftRef, CommitStoreContext, StagedCommitStoreCommit};\nuse crate::functions::FunctionContext;\nuse crate::json_store::{JsonStoreContext, JsonWritePlacementRef, NormalizedJsonRef};\nuse crate::storage::{StorageReader, StorageWriteSet, StorageWriteTransaction};\nuse crate::tracked_state::{TrackedStateContext, TrackedStateDeltaRef};\nuse crate::transaction::prepare_version_ref_row;\nuse crate::transaction::staging::PreparedWriteSet;\nuse crate::transaction::types::{PreparedAdoptedStateRow, PreparedStateRow, StagedCommitMembers};\nuse crate::untracked_state::{\n    UntrackedStateContext, UntrackedStateIdentity, UntrackedStateIdentityRef, UntrackedStateRowRef,\n};\nuse crate::version::{VersionContext, VersionRefReader};\nuse crate::LixError;\nuse std::collections::BTreeMap;\n\ntype RowIndex = usize;\ntype AdoptedRowIndex = usize;\n\n/// Commits prepared transaction rows into durable tracked and untracked stores.\n///\n/// Providers decode DataFusion DML into hydrated `PreparedStateRow`s. Untracked\n/// rows are durable local overlay state and bypass commit-store rows. Tracked\n/// rows stage canonical commit-store facts, then update the live-state serving\n/// projection. 
The tracked side of that projection is a prolly root keyed by\n/// the new commit id.\npub(crate) async fn commit_prepared_writes(\n    binary_cas: &BinaryCasContext,\n    commit_store: &CommitStoreContext,\n    version_ctx: &VersionContext,\n    runtime_functions: Option<&FunctionContext>,\n    transaction: &mut (impl StorageWriteTransaction + ?Sized),\n    prepared_writes: PreparedWriteSet,\n) -> Result<(), LixError> {\n    let mut writes = StorageWriteSet::new();\n    let mut json_writer = JsonStoreContext::new().writer();\n\n    if !prepared_writes.file_data_writes.is_empty() {\n        let mut blob_writer = binary_cas.writer(&mut writes);\n        for write in &prepared_writes.file_data_writes {\n            blob_writer.stage_bytes(&write.data)?;\n        }\n    }\n\n    let state_rows = prepared_writes.state_rows;\n    let adopted_rows = prepared_writes.adopted_rows;\n    let finalized = finalize_commit_rows(\n        prepared_writes.commit_members_by_version,\n        prepared_writes.extra_commit_parents_by_version,\n        version_ctx,\n        transaction,\n    )\n    .await?;\n    let commit_rows = finalized.commit_rows;\n    let version_heads = finalized.version_heads;\n    let tracked_roots = finalized.tracked_roots;\n    let row_index = index_prepared_rows(&state_rows)?;\n    let adopted_index = index_adopted_rows(&adopted_rows);\n\n    if let Some(runtime_functions) = runtime_functions {\n        runtime_functions\n            .stage_persist_if_needed(&mut writes)\n            .await?;\n    }\n\n    if state_rows.is_empty()\n        && adopted_rows.is_empty()\n        && commit_rows.is_empty()\n        && version_heads.is_empty()\n        && writes.is_empty()\n    {\n        return Ok(());\n    }\n\n    let staged_commits = stage_commit_store_commits(\n        commit_store,\n        transaction,\n        &mut writes,\n        &state_rows,\n        &row_index.tracked_row_indices_by_commit,\n        &adopted_rows,\n        
&adopted_index.tracked_row_indices_by_commit,\n        &commit_rows,\n    )\n    .await?;\n\n    let json_pack_indexes_by_commit = stage_prepared_json_payloads(\n        &mut json_writer,\n        &mut writes,\n        &state_rows,\n        &row_index.tracked_row_indices_by_commit,\n        &staged_commits,\n        &row_index.untracked_row_indices,\n    )?;\n\n    // The serving projection is updated in the same backend transaction as the\n    // commit-store append. Tracked rows become prolly mutations under their owning\n    // commit root; untracked rows remain in the separate local overlay store.\n    {\n        let untracked_overlay_delete_identities = existing_untracked_overlay_delete_identities(\n            transaction,\n            row_index\n                .canonical_row_indices\n                .iter()\n                .map(|&row_index| untracked_identity_ref_from_state_row(&state_rows[row_index]))\n                .chain(\n                    adopted_rows\n                        .iter()\n                        .map(untracked_identity_ref_from_adopted_row),\n                ),\n        )\n        .await?;\n        UntrackedStateContext::new()\n            .writer(&mut writes)\n            .stage_rows(\n                row_index\n                    .untracked_row_indices\n                    .iter()\n                    .map(|&row_index| untracked_row_ref_from_state_row(&state_rows[row_index])),\n            )?;\n        UntrackedStateContext::new()\n            .writer(&mut writes)\n            .stage_delete_rows(\n                untracked_overlay_delete_identities\n                    .iter()\n                    .map(UntrackedStateIdentity::as_ref),\n            );\n        stage_tracked_roots(\n            transaction,\n            &mut writes,\n            &state_rows,\n            row_index.tracked_row_indices_by_commit,\n            &adopted_rows,\n            adopted_index.tracked_row_indices_by_commit,\n            tracked_roots,\n          
  staged_commits,\n            json_pack_indexes_by_commit,\n        )\n        .await?;\n    }\n\n    for version_head in version_heads {\n        let canonical_row = prepare_version_ref_row(\n            &version_head.version_id,\n            &version_head.commit_id,\n            &version_head.timestamp,\n        )?;\n        version_ctx.stage_canonical_ref_rows(&mut writes, &[canonical_row.row])?;\n    }\n\n    writes.apply(transaction).await?;\n    Ok(())\n}\n\nfn stage_prepared_json_payloads(\n    json_writer: &mut crate::json_store::JsonStoreWriter,\n    writes: &mut StorageWriteSet,\n    state_rows: &[PreparedStateRow],\n    tracked_row_indices_by_commit: &BTreeMap<String, Vec<RowIndex>>,\n    staged_commits: &BTreeMap<String, StagedCommitStoreCommit>,\n    untracked_row_indices: &[RowIndex],\n) -> Result<BTreeMap<String, BTreeMap<u32, std::collections::HashMap<[u8; 32], usize>>>, LixError> {\n    let mut pack_indexes_by_commit = BTreeMap::new();\n    for (commit_id, row_indices) in tracked_row_indices_by_commit {\n        let staged_commit = staged_commits.get(commit_id).ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\"commit '{commit_id}' has tracked JSON rows but no staged commit-store locators\"),\n            )\n        })?;\n        if row_indices.len() != staged_commit.authored_locators.len() {\n            return Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\n                    \"commit '{commit_id}' has {} tracked JSON rows but {} authored locators\",\n                    row_indices.len(),\n                    staged_commit.authored_locators.len()\n                ),\n            ));\n        }\n        let mut row_indices_by_pack = BTreeMap::<u32, Vec<RowIndex>>::new();\n        for (&row_index, locator) in row_indices.iter().zip(&staged_commit.authored_locators) {\n            row_indices_by_pack\n                
.entry(locator.source_pack_id)\n                .or_default()\n                .push(row_index);\n        }\n        for (pack_id, pack_row_indices) in row_indices_by_pack {\n            let report = json_writer.stage_batch_report(\n                writes,\n                JsonWritePlacementRef::CommitPack { commit_id, pack_id },\n                pack_row_indices\n                    .iter()\n                    .flat_map(|&row_index| json_payloads_from_state_row(&state_rows[row_index])),\n            )?;\n            pack_indexes_by_commit\n                .entry(commit_id.clone())\n                .or_insert_with(BTreeMap::new)\n                .insert(pack_id, report.pack_indexes);\n        }\n    }\n    json_writer.stage_batch(\n        writes,\n        JsonWritePlacementRef::OutOfBand,\n        untracked_row_indices\n            .iter()\n            .flat_map(|&row_index| json_payloads_from_state_row(&state_rows[row_index])),\n    )?;\n    Ok(pack_indexes_by_commit)\n}\n\nfn json_payloads_from_state_row(\n    row: &PreparedStateRow,\n) -> impl Iterator<Item = NormalizedJsonRef<'_>> {\n    row.snapshot\n        .iter()\n        .chain(row.metadata.iter())\n        .map(|json| NormalizedJsonRef::trusted_prehashed(json.normalized.as_ref(), json.json_ref))\n}\n\nasync fn existing_untracked_overlay_delete_identities<'a>(\n    transaction: &mut (impl StorageReader + ?Sized),\n    identities: impl IntoIterator<Item = UntrackedStateIdentityRef<'a>>,\n) -> Result<Vec<UntrackedStateIdentity>, LixError> {\n    UntrackedStateContext::new()\n        .reader(transaction)\n        .existing_identities(identities)\n        .await\n}\n\nstruct PreparedRowIndex {\n    canonical_row_indices: Vec<RowIndex>,\n    untracked_row_indices: Vec<RowIndex>,\n    tracked_row_indices_by_commit: BTreeMap<String, Vec<RowIndex>>,\n}\n\nstruct PreparedAdoptedRowIndex {\n    tracked_row_indices_by_commit: BTreeMap<String, Vec<AdoptedRowIndex>>,\n}\n\nfn index_prepared_rows(rows: 
&[PreparedStateRow]) -> Result<PreparedRowIndex, LixError> {\n    let mut canonical_row_indices = Vec::new();\n    let mut untracked_row_indices = Vec::new();\n    let mut tracked_row_indices_by_commit = BTreeMap::<String, Vec<RowIndex>>::new();\n\n    for (row_index, row) in rows.iter().enumerate() {\n        if row.untracked {\n            untracked_row_indices.push(row_index);\n            continue;\n        }\n        let Some(commit_id) = row.commit_id.as_ref() else {\n            return Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                \"tracked prepared row is missing commit_id before commit indexing\",\n            ));\n        };\n        canonical_row_indices.push(row_index);\n        tracked_row_indices_by_commit\n            .entry(commit_id.clone())\n            .or_default()\n            .push(row_index);\n    }\n\n    Ok(PreparedRowIndex {\n        canonical_row_indices,\n        untracked_row_indices,\n        tracked_row_indices_by_commit,\n    })\n}\n\nfn index_adopted_rows(rows: &[PreparedAdoptedStateRow]) -> PreparedAdoptedRowIndex {\n    let mut tracked_row_indices_by_commit = BTreeMap::<String, Vec<AdoptedRowIndex>>::new();\n    for (row_index, row) in rows.iter().enumerate() {\n        tracked_row_indices_by_commit\n            .entry(row.commit_id.clone())\n            .or_default()\n            .push(row_index);\n    }\n    PreparedAdoptedRowIndex {\n        tracked_row_indices_by_commit,\n    }\n}\n\nasync fn stage_commit_store_commits(\n    commit_store: &CommitStoreContext,\n    transaction: &mut (impl StorageReader + ?Sized),\n    writes: &mut StorageWriteSet,\n    state_rows: &[PreparedStateRow],\n    tracked_row_indices_by_commit: &BTreeMap<String, Vec<RowIndex>>,\n    adopted_rows: &[PreparedAdoptedStateRow],\n    adopted_row_indices_by_commit: &BTreeMap<String, Vec<AdoptedRowIndex>>,\n    commit_rows: &[FinalizedCommitRow],\n) -> Result<BTreeMap<String, StagedCommitStoreCommit>, LixError> {\n    let 
mut commits = Vec::with_capacity(commit_rows.len());\n    let mut commit_ids = Vec::with_capacity(commit_rows.len());\n    for commit_row in commit_rows {\n        let state_row_indices = tracked_row_indices_by_commit\n            .get(&commit_row.commit_id)\n            .map(Vec::as_slice)\n            .unwrap_or_default();\n        let adopted_row_indices = adopted_row_indices_by_commit\n            .get(&commit_row.commit_id)\n            .map(Vec::as_slice)\n            .unwrap_or_default();\n        let mut authored_changes = Vec::with_capacity(state_row_indices.len());\n        for &row_index in state_row_indices {\n            authored_changes.push(change_ref_from_state_row(&state_rows[row_index])?);\n        }\n        let mut adopted_changes = Vec::with_capacity(adopted_row_indices.len());\n        for &row_index in adopted_row_indices {\n            adopted_changes.push(change_ref_from_adopted_row(&adopted_rows[row_index]));\n        }\n\n        let commit = CommitDraftRef {\n            id: &commit_row.commit_id,\n            change_id: &commit_row.change_id,\n            parent_ids: &commit_row.parent_commit_ids,\n            author_account_ids: &[],\n            created_at: &commit_row.created_at,\n        };\n        commit_ids.push(commit_row.commit_id.clone());\n        commits.push((commit, authored_changes, adopted_changes));\n    }\n    let staged = commit_store\n        .writer(transaction, writes)\n        .stage_tracked_commit_drafts(commits)\n        .await?;\n    if staged.len() != commit_ids.len() {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"commit-store staged {} commits for {} finalized commit rows\",\n                staged.len(),\n                commit_ids.len()\n            ),\n        ));\n    }\n    Ok(commit_ids.into_iter().zip(staged).collect())\n}\n\nfn change_ref_from_state_row(row: &PreparedStateRow) -> Result<ChangeRef<'_>, LixError> {\n    let 
Some(change_id) = row.change_id.as_deref() else {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"tracked staged row is missing change_id before commit-store append\",\n        ));\n    };\n\n    Ok(ChangeRef {\n        id: change_id,\n        entity_id: &row.entity_id,\n        schema_key: &row.schema_key,\n        file_id: row.file_id.as_deref(),\n        snapshot_ref: row.snapshot.as_ref().map(|snapshot| &snapshot.json_ref),\n        metadata_ref: row.metadata.as_ref().map(|metadata| &metadata.json_ref),\n        created_at: &row.updated_at,\n    })\n}\n\nfn change_ref_from_adopted_row(row: &PreparedAdoptedStateRow) -> ChangeRef<'_> {\n    ChangeRef {\n        id: &row.change_id,\n        entity_id: &row.entity_id,\n        schema_key: &row.schema_key,\n        file_id: row.file_id.as_deref(),\n        snapshot_ref: row.snapshot.as_ref().map(|snapshot| &snapshot.json_ref),\n        metadata_ref: row.metadata.as_ref().map(|metadata| &metadata.json_ref),\n        created_at: &row.updated_at,\n    }\n}\n\nasync fn stage_tracked_roots(\n    transaction: &mut (impl StorageReader + ?Sized),\n    writes: &mut StorageWriteSet,\n    state_rows: &[PreparedStateRow],\n    mut tracked_row_indices_by_commit: BTreeMap<String, Vec<RowIndex>>,\n    adopted_rows: &[PreparedAdoptedStateRow],\n    mut adopted_row_indices_by_commit: BTreeMap<String, Vec<AdoptedRowIndex>>,\n    tracked_roots: Vec<PendingTrackedRoot>,\n    mut staged_commits: BTreeMap<String, StagedCommitStoreCommit>,\n    json_pack_indexes_by_commit: BTreeMap<\n        String,\n        BTreeMap<u32, std::collections::HashMap<[u8; 32], usize>>,\n    >,\n) -> Result<(), LixError> {\n    let tracked_state = TrackedStateContext::new();\n    let mut writer = tracked_state.writer(transaction, writes);\n    for root in tracked_roots {\n        let staged = staged_commits.remove(&root.commit_id).ok_or_else(|| {\n            LixError::new(\n                
LixError::CODE_INTERNAL_ERROR,\n                format!(\n                    \"tracked-state root for commit '{}' has no staged commit-store locators\",\n                    root.commit_id\n                ),\n            )\n        })?;\n        let state_row_indices = tracked_row_indices_by_commit\n            .remove(&root.commit_id)\n            .unwrap_or_default();\n        let adopted_row_indices = adopted_row_indices_by_commit\n            .remove(&root.commit_id)\n            .unwrap_or_default();\n        if state_row_indices.len() != staged.authored_locators.len() {\n            return Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\n                    \"commit '{}' has {} tracked authored rows but {} commit-store authored locators\",\n                    root.commit_id,\n                    state_row_indices.len(),\n                    staged.authored_locators.len()\n                ),\n            ));\n        }\n        if adopted_row_indices.len() != staged.adopted_locators.len() {\n            return Err(LixError::new(\n                LixError::CODE_INTERNAL_ERROR,\n                format!(\n                    \"commit '{}' has {} tracked adopted rows but {} commit-store adopted locators\",\n                    root.commit_id,\n                    adopted_row_indices.len(),\n                    staged.adopted_locators.len()\n                ),\n            ));\n        }\n        let authored_changes = state_row_indices\n            .iter()\n            .map(|&row_index| change_ref_from_state_row(&state_rows[row_index]))\n            .collect::<Result<Vec<_>, _>>()?;\n        let adopted_changes = adopted_row_indices\n            .iter()\n            .map(|&row_index| change_ref_from_adopted_row(&adopted_rows[row_index]))\n            .collect::<Vec<_>>();\n        let authored_updated_at = state_row_indices\n            .iter()\n            .map(|&row_index| 
state_rows[row_index].updated_at.as_str())\n            .collect::<Vec<_>>();\n        let authored_created_at = state_row_indices\n            .iter()\n            .map(|&row_index| state_rows[row_index].created_at.as_str())\n            .collect::<Vec<_>>();\n        let adopted_updated_at = adopted_row_indices\n            .iter()\n            .map(|&row_index| adopted_rows[row_index].updated_at.as_str())\n            .collect::<Vec<_>>();\n        let adopted_created_at = adopted_row_indices\n            .iter()\n            .map(|&row_index| adopted_rows[row_index].created_at.as_str())\n            .collect::<Vec<_>>();\n        let mut deltas = Vec::with_capacity(authored_changes.len() + adopted_changes.len());\n        deltas.extend(\n            authored_changes\n                .iter()\n                .zip(&staged.authored_locators)\n                .zip(authored_created_at)\n                .zip(authored_updated_at)\n                .map(\n                    |(((change, locator), created_at), updated_at)| TrackedStateDeltaRef {\n                        change: *change,\n                        locator: locator.as_ref(),\n                        created_at,\n                        updated_at,\n                    },\n                ),\n        );\n        deltas.extend(\n            adopted_changes\n                .iter()\n                .zip(&staged.adopted_locators)\n                .zip(adopted_created_at)\n                .zip(adopted_updated_at)\n                .map(\n                    |(((change, locator), created_at), updated_at)| TrackedStateDeltaRef {\n                        change: *change,\n                        locator: locator.as_ref(),\n                        created_at,\n                        updated_at,\n                    },\n                ),\n        );\n        if let Some(indexes) = json_pack_indexes_by_commit\n            .get(&root.commit_id)\n            .and_then(|packs| packs.get(&0))\n        {\n            
writer\n                .stage_delta_with_json_pack_indexes(\n                    &root.commit_id,\n                    root.parent_commit_id.as_deref(),\n                    &deltas,\n                    crate::tracked_state::DeltaJsonPackIndexesRef {\n                        commit_id: &root.commit_id,\n                        pack_id: 0,\n                        indexes,\n                    },\n                )\n                .await?;\n        } else {\n            writer\n                .stage_delta(&root.commit_id, root.parent_commit_id.as_deref(), &deltas)\n                .await?;\n        }\n    }\n    if !tracked_row_indices_by_commit.is_empty() || !adopted_row_indices_by_commit.is_empty() {\n        let mut commit_ids = tracked_row_indices_by_commit\n            .keys()\n            .chain(adopted_row_indices_by_commit.keys())\n            .cloned()\n            .collect::<Vec<_>>();\n        commit_ids.sort();\n        commit_ids.dedup();\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"tracked live_state rows have no finalized root metadata for commit ids: {}\",\n                commit_ids.join(\", \")\n            ),\n        ));\n    }\n    if !staged_commits.is_empty() {\n        let commit_ids = staged_commits.keys().cloned().collect::<Vec<_>>();\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"commit-store staged commits without tracked root metadata: {}\",\n                commit_ids.join(\", \")\n            ),\n        ));\n    }\n    Ok(())\n}\n\nfn untracked_row_ref_from_state_row(row: &PreparedStateRow) -> UntrackedStateRowRef<'_> {\n    UntrackedStateRowRef {\n        entity_id: &row.entity_id,\n        schema_key: &row.schema_key,\n        file_id: row.file_id.as_deref(),\n        snapshot_content: row\n            .snapshot\n            .as_ref()\n            .map(|snapshot| 
snapshot.normalized.as_ref()),\n        metadata: row\n            .metadata\n            .as_ref()\n            .map(|metadata| metadata.normalized.as_ref()),\n        created_at: &row.created_at,\n        updated_at: &row.updated_at,\n        global: row.global,\n        version_id: &row.version_id,\n    }\n}\n\nfn untracked_identity_ref_from_state_row(row: &PreparedStateRow) -> UntrackedStateIdentityRef<'_> {\n    UntrackedStateIdentityRef {\n        version_id: &row.version_id,\n        schema_key: &row.schema_key,\n        entity_id: &row.entity_id,\n        file_id: row.file_id.as_deref(),\n    }\n}\n\nfn untracked_identity_ref_from_adopted_row(\n    row: &PreparedAdoptedStateRow,\n) -> UntrackedStateIdentityRef<'_> {\n    UntrackedStateIdentityRef {\n        version_id: &row.version_id,\n        schema_key: &row.schema_key,\n        entity_id: &row.entity_id,\n        file_id: row.file_id.as_deref(),\n    }\n}\n\n/// Commit finalization output, split by durability target.\n///\n/// `commit_rows` are canonical commit-store facts. live_state later projects\n/// commit SQL surfaces from commit_store; tracked_state roots do not store\n/// commit graph facts.\n///\n/// `version_heads` are moving refs. 
They are written through `VersionContext`,\n/// not the canonical commit store.\n///\n/// `tracked_roots` carry the commit id and first-parent commit id that\n/// tracked-state delta staging consumes for each finalized commit.\nstruct FinalizedCommitRows {\n    commit_rows: Vec<FinalizedCommitRow>,\n    version_heads: Vec<PendingVersionHead>,\n    tracked_roots: Vec<PendingTrackedRoot>,\n}\n\nstruct FinalizedCommitRow {\n    commit_id: String,\n    parent_commit_ids: Vec<String>,\n    created_at: String,\n    change_id: String,\n}\n\nstruct PendingVersionHead {\n    version_id: String,\n    commit_id: String,\n    timestamp: String,\n}\n\nstruct PendingTrackedRoot {\n    commit_id: String,\n    parent_commit_id: Option<String>,\n}\n\n/// Materializes tracked staged membership into commit-store commits.\n///\n/// Staging only accumulates `version_id -> change_ids` because commit ids,\n/// parent heads, and commit-row timestamps belong to transaction finalization.\n/// The `change_ids` list is the ordered set of canonical changes whose effects\n/// the commit introduces relative to its first parent; merge commits may later\n/// populate this list with existing source-parent changes instead of copied\n/// change payloads.\n/// This function turns those membership sets into finalized commit facts.\nasync fn finalize_commit_rows(\n    commit_members_by_version: BTreeMap<String, StagedCommitMembers>,\n    extra_commit_parents_by_version: BTreeMap<String, Vec<String>>,\n    version_ctx: &VersionContext,\n    transaction: &mut (impl StorageReader + ?Sized),\n) -> Result<FinalizedCommitRows, LixError> {\n    let mut commit_rows = Vec::new();\n    let mut version_heads = Vec::new();\n    let mut tracked_roots = Vec::new();\n\n    for (version_id, members) in commit_members_by_version {\n        if members.is_empty() && !members.allow_empty {\n            continue;\n        }\n\n        let commit_id = members.commit_id;\n        let commit_change_id = members.commit_change_id;\n        let timestamp = members.created_at;\n        let _change_ids = members.change_ids;\n        let parent_commit_ids = version_ctx\n            .ref_reader(&mut *transaction)\n            .load_head_commit_id(&version_id)\n            .await?\n            .into_iter()\n            .collect::<Vec<_>>();\n        let parent_commit_ids = merge_parent_commit_ids(\n            parent_commit_ids,\n            extra_commit_parents_by_version\n                .get(&version_id)\n                .cloned()\n                .unwrap_or_default(),\n        );\n        let parent_commit_id = parent_commit_ids.first().cloned();\n\n        commit_rows.push(FinalizedCommitRow {\n            commit_id: 
commit_id.clone(),\n            parent_commit_ids: parent_commit_ids.clone(),\n            created_at: timestamp.clone(),\n            change_id: commit_change_id,\n        });\n        version_heads.push(PendingVersionHead {\n            version_id: version_id.clone(),\n            commit_id: commit_id.clone(),\n            timestamp,\n        });\n        tracked_roots.push(PendingTrackedRoot {\n            commit_id,\n            parent_commit_id,\n        });\n    }\n\n    Ok(FinalizedCommitRows {\n        commit_rows,\n        version_heads,\n        tracked_roots,\n    })\n}\n\nfn merge_parent_commit_ids(mut base: Vec<String>, extra: Vec<String>) -> Vec<String> {\n    for parent in extra {\n        if !base.contains(&parent) {\n            base.push(parent);\n        }\n    }\n    base\n}\n\n#[cfg(test)]\nmod tests {\n    use std::collections::BTreeMap;\n    use std::sync::{\n        atomic::{AtomicUsize, Ordering},\n        Arc,\n    };\n\n    use super::*;\n    use crate::backend::{\n        testing::UnitTestBackend, Backend, BackendKvEntryPage, BackendKvExistsBatch,\n        BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRequest, BackendKvValueBatch,\n        BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction,\n        BackendWriteTransaction,\n    };\n    use crate::catalog::SchemaPlanId;\n    use crate::commit_store::{ChangeIndexEntry, ChangeLocator};\n    use crate::live_state::{LiveStateContext, LiveStateRowRequest};\n    use crate::storage::StorageContext;\n    use crate::transaction::types::PreparedRowFacts;\n    use crate::untracked_state::{\n        MaterializedUntrackedStateRow, UntrackedStateContext, UntrackedStateRowRequest,\n    };\n    use crate::version::VersionContext;\n    use crate::NullableKeyFilter;\n    use crate::GLOBAL_VERSION_ID;\n    use async_trait::async_trait;\n\n    const DETERMINISTIC_MODE_KEY: &str = \"lix_deterministic_mode\";\n    const DETERMINISTIC_SEQUENCE_KEY: &str = 
\"lix_deterministic_sequence_number\";\n\n    fn live_state_context() -> LiveStateContext {\n        LiveStateContext::new(\n            crate::tracked_state::TrackedStateContext::new(),\n            crate::untracked_state::UntrackedStateContext::new(),\n            crate::commit_graph::CommitGraphContext::new(),\n        )\n    }\n\n    #[tokio::test]\n    async fn commit_staged_writes_appends_commit_store_and_updates_serving_projection() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let binary_cas = BinaryCasContext::new();\n        let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new()));\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        let state_rows = vec![tracked_global_row(\"change-1\")];\n        commit_prepared_writes(\n            &binary_cas,\n            &crate::commit_store::CommitStoreContext::new(),\n            &version_ctx,\n            None,\n            transaction.as_mut(),\n            PreparedWriteSet {\n                insert_identities: BTreeMap::new(),\n                state_rows,\n                adopted_rows: Vec::new(),\n                commit_members_by_version: BTreeMap::from([(\n                    GLOBAL_VERSION_ID.to_string(),\n                    members([\"change-1\"]),\n                )]),\n                extra_commit_parents_by_version: BTreeMap::new(),\n                file_data_writes: Vec::new(),\n            },\n        )\n        .await\n        .expect(\"commit should flush staged rows\");\n        transaction\n            .commit()\n            .await\n            .expect(\"commit should persist kv\");\n\n        let commit_reader = crate::commit_store::CommitStoreContext::new().reader(storage.clone());\n        let commit = commit_reader\n            
.load_commit(\"test-uuid-1\")\n            .await\n            .expect(\"commit-store commit should load\")\n            .expect(\"commit-store commit should exist\");\n        assert_eq!(commit.change_id, \"test-uuid-2\");\n        assert_eq!(commit.change_pack_count, 1);\n        assert_eq!(commit.membership_pack_count, 0);\n        let index_entries = commit_reader\n            .load_change_index_entries(&[\"change-1\".to_string(), \"test-uuid-2\".to_string()])\n            .await\n            .expect(\"commit-store change index should load\");\n        assert_eq!(\n            index_entries,\n            vec![\n                Some(ChangeIndexEntry::PackedChange {\n                    locator: ChangeLocator {\n                        source_commit_id: \"test-uuid-1\".to_string(),\n                        source_pack_id: 0,\n                        source_ordinal: 0,\n                        change_id: \"change-1\".to_string(),\n                    },\n                }),\n                Some(ChangeIndexEntry::CommitHeader {\n                    commit_id: \"test-uuid-1\".to_string(),\n                    change_id: \"test-uuid-2\".to_string(),\n                }),\n            ]\n        );\n        let change_pack = commit_reader\n            .load_change_pack(\"test-uuid-1\", 0)\n            .await\n            .expect(\"commit-store change pack should load\")\n            .expect(\"commit-store change pack should exist\");\n        assert_eq!(change_pack.len(), 1);\n        assert_eq!(change_pack[0].id, \"change-1\");\n        assert_eq!(change_pack[0].schema_key, \"test_schema\");\n\n        let loaded_head = version_ctx\n            .ref_reader(storage.clone())\n            .load_head_commit_id(GLOBAL_VERSION_ID)\n            .await\n            .expect(\"version ref load should succeed\");\n        assert_eq!(loaded_head.as_deref(), Some(\"test-uuid-1\"));\n    }\n\n    #[tokio::test]\n    async fn 
commit_with_only_untracked_writes_does_not_create_lix_commit() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let binary_cas = BinaryCasContext::new();\n        let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new()));\n        let untracked_state = UntrackedStateContext::new();\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        let state_rows = vec![untracked_global_row(\"change-untracked\")];\n        commit_prepared_writes(\n            &binary_cas,\n            &crate::commit_store::CommitStoreContext::new(),\n            &version_ctx,\n            None,\n            transaction.as_mut(),\n            PreparedWriteSet {\n                insert_identities: BTreeMap::new(),\n                state_rows,\n                adopted_rows: Vec::new(),\n                commit_members_by_version: BTreeMap::new(),\n                extra_commit_parents_by_version: BTreeMap::new(),\n                file_data_writes: Vec::new(),\n            },\n        )\n        .await\n        .expect(\"commit should flush untracked row\");\n        transaction\n            .commit()\n            .await\n            .expect(\"commit should persist kv\");\n\n        let commit_reader = crate::commit_store::CommitStoreContext::new().reader(storage.clone());\n        let index_entries = commit_reader\n            .load_change_index_entries(&[\"change-untracked\".to_string()])\n            .await\n            .expect(\"commit-store change index should load\");\n        assert_eq!(index_entries, vec![None]);\n\n        let loaded = {\n            let mut untracked_reader = untracked_state.reader(storage.clone());\n            untracked_reader\n                .load_row(&UntrackedStateRowRequest {\n                    schema_key: 
\"test_schema\".to_string(),\n                    version_id: GLOBAL_VERSION_ID.to_string(),\n                    entity_id: crate::entity_identity::EntityIdentity::single(\"entity-1\"),\n                    file_id: NullableKeyFilter::Null,\n                })\n                .await\n        }\n        .expect(\"untracked row load should succeed\")\n        .expect(\"untracked row should be persisted\");\n        assert_eq!(\n            loaded.snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"untracked\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn tracked_write_deletes_matching_untracked_overlay() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let binary_cas = BinaryCasContext::new();\n        let untracked_state = UntrackedStateContext::new();\n        let live_state = Arc::new(live_state_context());\n        let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new()));\n\n        let mut seed_transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"seed transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        let staged_row = untracked_global_row(\"change-untracked\");\n        let canonical_row = crate::test_support::untracked_state_row_from_materialized(\n            &mut writes,\n            &MaterializedUntrackedStateRow::from(staged_row),\n        )\n        .expect(\"untracked seed should canonicalize\");\n        untracked_state\n            .writer(&mut writes)\n            .stage_rows(std::iter::once(canonical_row.as_ref()))\n            .expect(\"untracked seed should write\");\n        writes\n            .apply(&mut seed_transaction.as_mut())\n            .await\n            .expect(\"untracked seed should apply\");\n        seed_transaction\n            .commit()\n            .await\n            
.expect(\"seed transaction should persist\");\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let state_rows = vec![tracked_global_row(\"change-tracked\")];\n        commit_prepared_writes(\n            &binary_cas,\n            &crate::commit_store::CommitStoreContext::new(),\n            &version_ctx,\n            None,\n            transaction.as_mut(),\n            PreparedWriteSet {\n                insert_identities: BTreeMap::new(),\n                state_rows,\n                adopted_rows: Vec::new(),\n                commit_members_by_version: BTreeMap::from([(\n                    GLOBAL_VERSION_ID.to_string(),\n                    members([\"change-tracked\"]),\n                )]),\n                extra_commit_parents_by_version: BTreeMap::new(),\n                file_data_writes: Vec::new(),\n            },\n        )\n        .await\n        .expect(\"tracked commit should flush\");\n        transaction\n            .commit()\n            .await\n            .expect(\"commit should persist kv\");\n\n        let untracked = {\n            let mut untracked_reader = untracked_state.reader(storage.clone());\n            untracked_reader.load_row(&untracked_request()).await\n        }\n        .expect(\"untracked load should succeed\");\n        assert_eq!(untracked, None);\n\n        let visible = live_state\n            .reader(storage.clone())\n            .load_row(&live_state_request())\n            .await\n            .expect(\"live-state load should succeed\")\n            .expect(\"tracked row should be visible\");\n        assert!(!visible.untracked);\n        assert_eq!(visible.change_id.as_deref(), Some(\"change-tracked\"));\n        assert_eq!(visible.snapshot_content.as_deref(), Some(\"{\\\"value\\\":1}\"));\n    }\n\n    #[tokio::test]\n    async fn commit_staged_writes_applies_cross_subsystem_rows_as_one_backend_batch() {\n     
   let counting_backend = Arc::new(CountingBackend::new());\n        let write_batches = counting_backend.write_batches();\n        let backend: Arc<dyn Backend + Send + Sync> = counting_backend;\n        let storage = StorageContext::new(backend);\n        let binary_cas = BinaryCasContext::new();\n        let live_state = Arc::new(live_state_context());\n        let untracked_state = UntrackedStateContext::new();\n        let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new()));\n        crate::test_support::seed_global_version_head(storage.clone()).await;\n        {\n            let mut seed_transaction = storage\n                .begin_write_transaction()\n                .await\n                .expect(\"seed transaction should open\");\n            let mut writes = StorageWriteSet::new();\n            let mode_snapshot = serde_json::to_string(&serde_json::json!({\n                \"key\": DETERMINISTIC_MODE_KEY,\n                \"value\": { \"enabled\": true },\n            }))\n            .expect(\"mode snapshot should serialize\");\n            JsonStoreContext::new()\n                .writer()\n                .stage_batch(\n                    &mut writes,\n                    JsonWritePlacementRef::OutOfBand,\n                    [NormalizedJsonRef::new(mode_snapshot.as_str())],\n                )\n                .expect(\"deterministic mode snapshot should stage\");\n            let row = crate::untracked_state::UntrackedStateRow {\n                entity_id: crate::entity_identity::EntityIdentity::single(DETERMINISTIC_MODE_KEY),\n                schema_key: \"lix_key_value\".to_string(),\n                file_id: None,\n                snapshot_content: Some(mode_snapshot.to_string()),\n                metadata: None,\n                created_at: \"2026-01-01T00:00:00Z\".to_string(),\n                updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n                global: true,\n                version_id: 
GLOBAL_VERSION_ID.to_string(),\n            };\n            UntrackedStateContext::new()\n                .writer(&mut writes)\n                .stage_rows(std::iter::once(row.as_ref()))\n                .expect(\"deterministic mode should stage\");\n            writes\n                .apply(&mut seed_transaction.as_mut())\n                .await\n                .expect(\"deterministic mode should apply\");\n            seed_transaction\n                .commit()\n                .await\n                .expect(\"seed transaction should persist\");\n        }\n        write_batches.store(0, Ordering::SeqCst);\n        let runtime_functions = {\n            let reader = live_state.reader(storage.clone());\n            FunctionContext::prepare(&reader)\n                .await\n                .expect(\"runtime context should prepare\")\n        };\n        runtime_functions.provider().call_uuid_v7();\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        let tracked_row = tracked_global_row(\"change-tracked\");\n        let mut untracked_row = untracked_global_row(\"change-untracked\");\n        untracked_row.entity_id = crate::entity_identity::EntityIdentity::single(\"entity-2\");\n\n        commit_prepared_writes(\n            &binary_cas,\n            &crate::commit_store::CommitStoreContext::new(),\n            &version_ctx,\n            Some(&runtime_functions),\n            transaction.as_mut(),\n            PreparedWriteSet {\n                insert_identities: BTreeMap::new(),\n                state_rows: vec![tracked_row, untracked_row],\n                adopted_rows: Vec::new(),\n                commit_members_by_version: BTreeMap::from([(\n                    GLOBAL_VERSION_ID.to_string(),\n                    members([\"change-tracked\"]),\n                )]),\n                extra_commit_parents_by_version: BTreeMap::new(),\n                
file_data_writes: Vec::new(),\n            },\n        )\n        .await\n        .expect(\"cross-subsystem commit should stage and apply\");\n\n        assert_eq!(\n            write_batches.load(Ordering::SeqCst),\n            1,\n            \"tracked, json, untracked, commit-store, and version refs must apply as one backend write batch\"\n        );\n\n        transaction\n            .commit()\n            .await\n            .expect(\"commit should persist kv\");\n        assert_eq!(write_batches.load(Ordering::SeqCst), 1);\n\n        let commit_reader = crate::commit_store::CommitStoreContext::new().reader(storage.clone());\n        let commit = commit_reader\n            .load_commit(\"test-uuid-1\")\n            .await\n            .expect(\"commit-store commit should load\")\n            .expect(\"commit-store commit should exist\");\n        assert_eq!(commit.change_id, \"test-uuid-2\");\n        let index_entries = commit_reader\n            .load_change_index_entries(&[\"change-tracked\".to_string()])\n            .await\n            .expect(\"commit-store change index should load\");\n        assert!(matches!(\n            index_entries.as_slice(),\n            [Some(ChangeIndexEntry::PackedChange { .. 
})]\n        ));\n\n        let loaded_head = version_ctx\n            .ref_reader(storage.clone())\n            .load_head_commit_id(GLOBAL_VERSION_ID)\n            .await\n            .expect(\"version ref load should succeed\");\n        assert_eq!(loaded_head.as_deref(), Some(\"test-uuid-1\"));\n\n        let untracked = {\n            let mut untracked_reader = untracked_state.reader(storage.clone());\n            untracked_reader\n                .load_row(&UntrackedStateRowRequest {\n                    schema_key: \"test_schema\".to_string(),\n                    version_id: GLOBAL_VERSION_ID.to_string(),\n                    entity_id: crate::entity_identity::EntityIdentity::single(\"entity-2\"),\n                    file_id: NullableKeyFilter::Null,\n                })\n                .await\n        }\n        .expect(\"untracked row load should succeed\")\n        .expect(\"untracked row should persist\");\n        assert_eq!(\n            untracked.snapshot_content.as_deref(),\n            Some(\"{\\\"value\\\":\\\"untracked\\\"}\")\n        );\n\n        let sequence_row = live_state\n            .reader(storage.clone())\n            .load_row(&LiveStateRowRequest {\n                schema_key: \"lix_key_value\".to_string(),\n                version_id: GLOBAL_VERSION_ID.to_string(),\n                entity_id: crate::entity_identity::EntityIdentity::single(\n                    DETERMINISTIC_SEQUENCE_KEY,\n                ),\n                file_id: NullableKeyFilter::Null,\n            })\n            .await\n            .expect(\"deterministic sequence should load\")\n            .expect(\"deterministic sequence should persist\");\n        assert_eq!(\n            sequence_row.snapshot_content.as_deref(),\n            Some(\"{\\\"key\\\":\\\"lix_deterministic_sequence_number\\\",\\\"value\\\":0}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn non_global_tracked_write_creates_one_commit_and_advances_only_touched_version() {\n        let 
backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let binary_cas = BinaryCasContext::new();\n        let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new()));\n        crate::test_support::seed_version_head(storage.clone(), GLOBAL_VERSION_ID, \"global-before\")\n            .await;\n        crate::test_support::seed_version_head(storage.clone(), \"version-a\", \"version-a-before\")\n            .await;\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let state_rows = vec![tracked_version_row(\"version-a\", \"change-version-a\")];\n        commit_prepared_writes(\n            &binary_cas,\n            &crate::commit_store::CommitStoreContext::new(),\n            &version_ctx,\n            None,\n            transaction.as_mut(),\n            PreparedWriteSet {\n                insert_identities: BTreeMap::new(),\n                state_rows,\n                adopted_rows: Vec::new(),\n                commit_members_by_version: BTreeMap::from([(\n                    \"version-a\".to_string(),\n                    members([\"change-version-a\"]),\n                )]),\n                extra_commit_parents_by_version: BTreeMap::new(),\n                file_data_writes: Vec::new(),\n            },\n        )\n        .await\n        .expect(\"version commit should flush\");\n        transaction\n            .commit()\n            .await\n            .expect(\"commit should persist kv\");\n\n        let commit_reader = crate::commit_store::CommitStoreContext::new().reader(storage.clone());\n        let commit = commit_reader\n            .load_commit(\"test-uuid-1\")\n            .await\n            .expect(\"commit-store commit should load\")\n            .expect(\"commit-store commit should exist\");\n        assert_eq!(commit.change_id, 
\"test-uuid-2\");\n        assert_eq!(commit.parent_ids, vec![\"version-a-before\"]);\n        let index_entries = commit_reader\n            .load_change_index_entries(&[\"change-version-a\".to_string()])\n            .await\n            .expect(\"commit-store change index should load\");\n        assert!(matches!(\n            index_entries.as_slice(),\n            [Some(ChangeIndexEntry::PackedChange { .. })]\n        ));\n\n        let global_head = version_ctx\n            .ref_reader(storage.clone())\n            .load_head_commit_id(GLOBAL_VERSION_ID)\n            .await\n            .expect(\"global head should load\");\n        let version_head = version_ctx\n            .ref_reader(storage.clone())\n            .load_head_commit_id(\"version-a\")\n            .await\n            .expect(\"version head should load\");\n        assert_eq!(global_head.as_deref(), Some(\"global-before\"));\n        assert_eq!(version_head.as_deref(), Some(\"test-uuid-1\"));\n    }\n\n    #[tokio::test]\n    async fn finalize_commit_rows_parents_global_commit_to_existing_version_ref() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new()));\n        crate::test_support::seed_version_head(\n            storage.clone(),\n            GLOBAL_VERSION_ID,\n            \"initial-commit\",\n        )\n        .await;\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let rows = finalize_commit_rows(\n            BTreeMap::from([(\n                GLOBAL_VERSION_ID.to_string(),\n                members([\"change-a\", \"change-b\"]),\n            )]),\n            BTreeMap::new(),\n            &version_ctx,\n            transaction.as_mut(),\n        )\n        .await\n        .expect(\"global 
commit row should finalize\");\n\n        assert_eq!(rows.commit_rows.len(), 1);\n        assert_eq!(rows.version_heads.len(), 1);\n        let row = &rows.commit_rows[0];\n        assert_eq!(row.commit_id, \"test-uuid-1\");\n        assert_eq!(row.change_id, \"test-uuid-2\");\n        assert_eq!(row.created_at, \"test-timestamp-1\");\n        assert_eq!(row.parent_commit_ids, vec![\"initial-commit\"]);\n\n        let version_head = &rows.version_heads[0];\n        assert_eq!(version_head.version_id, GLOBAL_VERSION_ID);\n        assert_eq!(version_head.commit_id, \"test-uuid-1\");\n    }\n\n    #[tokio::test]\n    async fn finalize_commit_rows_skips_empty_members() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new()));\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let rows = finalize_commit_rows(\n            BTreeMap::from([(\n                GLOBAL_VERSION_ID.to_string(),\n                StagedCommitMembers::default(),\n            )]),\n            BTreeMap::new(),\n            &version_ctx,\n            transaction.as_mut(),\n        )\n        .await\n        .expect(\"empty members should be ignored\");\n\n        assert!(rows.commit_rows.is_empty());\n        assert!(rows.version_heads.is_empty());\n    }\n\n    #[tokio::test]\n    async fn finalize_commit_rows_uses_existing_version_ref_as_parent() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new()));\n        crate::test_support::seed_version_head(storage.clone(), GLOBAL_VERSION_ID, \"global-before\")\n            
.await;\n        crate::test_support::seed_version_head(storage.clone(), \"version-a\", \"previous-commit\")\n            .await;\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let rows = finalize_commit_rows(\n            BTreeMap::from([(\"version-a\".to_string(), members([\"change-a\"]))]),\n            BTreeMap::new(),\n            &version_ctx,\n            transaction.as_mut(),\n        )\n        .await\n        .expect(\"active-version commit finalization should resolve parent\");\n\n        assert_eq!(\n            rows.commit_rows[0].parent_commit_ids,\n            vec![\"previous-commit\"]\n        );\n        assert_eq!(rows.version_heads[0].version_id, \"version-a\");\n    }\n\n    #[tokio::test]\n    async fn finalize_commit_rows_appends_extra_merge_parent_after_target_head() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new()));\n        crate::test_support::seed_version_head(storage.clone(), \"version-a\", \"target-head\").await;\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let rows = finalize_commit_rows(\n            BTreeMap::from([(\"version-a\".to_string(), members([\"change-a\"]))]),\n            BTreeMap::from([(\"version-a\".to_string(), vec![\"source-head\".to_string()])]),\n            &version_ctx,\n            transaction.as_mut(),\n        )\n        .await\n        .expect(\"merge commit finalization should resolve parents\");\n\n        assert_eq!(\n            rows.commit_rows[0].parent_commit_ids,\n            vec![\"target-head\", \"source-head\"]\n        );\n    }\n\n    fn members<const N: usize>(change_ids: [&str; 
N]) -> StagedCommitMembers {\n        let mut members = StagedCommitMembers::new(\n            \"test-uuid-1\".to_string(),\n            \"test-uuid-2\".to_string(),\n            \"test-timestamp-1\".to_string(),\n        );\n        for change_id in change_ids {\n            members.add_change_id(change_id.to_string());\n        }\n        members\n    }\n\n    fn tracked_global_row(change_id: &str) -> PreparedStateRow {\n        tracked_version_row(GLOBAL_VERSION_ID, change_id)\n    }\n\n    fn tracked_version_row(version_id: &str, change_id: &str) -> PreparedStateRow {\n        PreparedStateRow {\n            schema_plan_id: SchemaPlanId::for_test(0),\n            facts: PreparedRowFacts::default(),\n            entity_id: crate::entity_identity::EntityIdentity::single(\"entity-1\"),\n            schema_key: \"test_schema\".to_string(),\n            file_id: None,\n            snapshot: Some(\n                crate::transaction::types::stage_json_from_value(\n                    crate::transaction::types::TransactionJson::from_value_for_test(\n                        serde_json::json!({ \"value\": 1 }),\n                    ),\n                    \"test tracked row snapshot\",\n                )\n                .expect(\"test snapshot should stage\"),\n            ),\n            metadata: None,\n            origin: None,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n            global: version_id == GLOBAL_VERSION_ID,\n            change_id: Some(change_id.to_string()),\n            commit_id: Some(\"test-uuid-1\".to_string()),\n            untracked: false,\n            version_id: version_id.to_string(),\n        }\n    }\n\n    fn untracked_global_row(change_id: &str) -> PreparedStateRow {\n        let mut row = tracked_global_row(change_id);\n        row.snapshot = Some(\n            crate::transaction::types::stage_json_from_value(\n                
crate::transaction::types::TransactionJson::from_value_for_test(\n                    serde_json::json!({ \"value\": \"untracked\" }),\n                ),\n                \"test untracked row snapshot\",\n            )\n            .expect(\"test snapshot should stage\"),\n        );\n        PreparedStateRow {\n            change_id: None,\n            commit_id: None,\n            untracked: true,\n            ..row\n        }\n    }\n\n    fn untracked_request() -> UntrackedStateRowRequest {\n        UntrackedStateRowRequest {\n            schema_key: \"test_schema\".to_string(),\n            version_id: GLOBAL_VERSION_ID.to_string(),\n            entity_id: crate::entity_identity::EntityIdentity::single(\"entity-1\"),\n            file_id: NullableKeyFilter::Null,\n        }\n    }\n\n    fn live_state_request() -> LiveStateRowRequest {\n        LiveStateRowRequest {\n            schema_key: \"test_schema\".to_string(),\n            version_id: GLOBAL_VERSION_ID.to_string(),\n            entity_id: crate::entity_identity::EntityIdentity::single(\"entity-1\"),\n            file_id: NullableKeyFilter::Null,\n        }\n    }\n\n    struct CountingBackend {\n        inner: UnitTestBackend,\n        write_batches: Arc<AtomicUsize>,\n    }\n\n    impl CountingBackend {\n        fn new() -> Self {\n            Self {\n                inner: UnitTestBackend::new(),\n                write_batches: Arc::new(AtomicUsize::new(0)),\n            }\n        }\n\n        fn write_batches(&self) -> Arc<AtomicUsize> {\n            Arc::clone(&self.write_batches)\n        }\n    }\n\n    #[async_trait]\n    impl Backend for CountingBackend {\n        async fn begin_read_transaction(\n            &self,\n        ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n            self.inner.begin_read_transaction().await\n        }\n\n        async fn begin_write_transaction(\n            &self,\n        ) -> Result<Box<dyn BackendWriteTransaction + Send 
+ Sync + 'static>, LixError> {\n            Ok(Box::new(CountingWriteTransaction {\n                inner: self.inner.begin_write_transaction().await?,\n                write_batches: Arc::clone(&self.write_batches),\n            }))\n        }\n    }\n\n    struct CountingWriteTransaction {\n        inner: Box<dyn BackendWriteTransaction + Send + Sync + 'static>,\n        write_batches: Arc<AtomicUsize>,\n    }\n\n    #[async_trait]\n    impl BackendReadTransaction for CountingWriteTransaction {\n        async fn get_values(\n            &mut self,\n            request: BackendKvGetRequest,\n        ) -> Result<BackendKvValueBatch, LixError> {\n            self.inner.get_values(request).await\n        }\n\n        async fn exists_many(\n            &mut self,\n            request: BackendKvGetRequest,\n        ) -> Result<BackendKvExistsBatch, LixError> {\n            self.inner.exists_many(request).await\n        }\n\n        async fn scan_keys(\n            &mut self,\n            request: BackendKvScanRequest,\n        ) -> Result<BackendKvKeyPage, LixError> {\n            self.inner.scan_keys(request).await\n        }\n\n        async fn scan_values(\n            &mut self,\n            request: BackendKvScanRequest,\n        ) -> Result<BackendKvValuePage, LixError> {\n            self.inner.scan_values(request).await\n        }\n\n        async fn scan_entries(\n            &mut self,\n            request: BackendKvScanRequest,\n        ) -> Result<BackendKvEntryPage, LixError> {\n            self.inner.scan_entries(request).await\n        }\n\n        async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n            let Self { inner, .. 
} = *self;\n            inner.rollback().await\n        }\n    }\n\n    #[async_trait]\n    impl BackendWriteTransaction for CountingWriteTransaction {\n        async fn write_kv_batch(\n            &mut self,\n            batch: BackendKvWriteBatch,\n        ) -> Result<BackendKvWriteStats, LixError> {\n            self.write_batches.fetch_add(1, Ordering::SeqCst);\n            self.inner.write_kv_batch(batch).await\n        }\n\n        async fn commit(self: Box<Self>) -> Result<(), LixError> {\n            let Self { inner, .. } = *self;\n            inner.commit().await\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/transaction/context.rs",
    "content": "use std::collections::{BTreeMap, BTreeSet};\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse serde_json::Value as JsonValue;\n\nuse crate::binary_cas::{BinaryCasContext, BlobBytesBatch, BlobHash};\nuse crate::catalog::CatalogContext;\nuse crate::commit_graph::{CommitGraphContext, CommitGraphStoreReader};\nuse crate::commit_store::CommitStoreContext;\nuse crate::domain::Domain;\nuse crate::entity_identity::EntityIdentity;\nuse crate::functions::{FunctionContext, FunctionProviderHandle};\nuse crate::live_state::{\n    LiveStateContext, LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow,\n};\nuse crate::session::{SessionMode, WORKSPACE_VERSION_KEY};\nuse crate::sql2::SqlWriteExecutionContext;\nuse crate::storage::{StorageContext, StorageWriteSet, StorageWriteTransaction};\nuse crate::tracked_state::{TrackedStateContext, TrackedStateStoreReader};\nuse crate::transaction::commit;\nuse crate::transaction::live_state_overlay::overlay_scan_rows;\nuse crate::transaction::normalization::{\n    normalize_transaction_write_row, remember_pending_registered_schema,\n    NormalizedTransactionWriteRow, REGISTERED_SCHEMA_KEY,\n};\nuse crate::transaction::prepare_version_ref_row;\nuse crate::transaction::schema_resolver::TransactionSchemaResolver;\nuse crate::transaction::staging::{PreparedWriteSet, TransactionWriteBuffer};\nuse crate::transaction::types::{\n    stage_json_from_value, PreparedAdoptedStateRow, PreparedRowFacts, PreparedStateRow,\n    PreparedTransactionWrite, TransactionAdoptedChange, TransactionFileData, TransactionJson,\n    TransactionWrite, TransactionWriteMode, TransactionWriteOutcome, TransactionWriteRow,\n};\nuse crate::transaction::validation::{validate_prepared_writes, TransactionValidationInput};\nuse crate::version::{VersionContext, VersionRefReader};\nuse crate::GLOBAL_VERSION_ID;\nuse crate::{LixError, NullableKeyFilter};\n\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\npub(crate) struct 
TransactionCommitOutcome;\n\n/// One execution-scoped transaction capability for engine write paths.\n///\n/// This is intentionally not a session-wide kitchen sink. It owns the backend\n/// write transaction for one `SessionContext::execute(...)` call and projects\n/// accepted SQL/provider writes back into the SQL DAG through an engine-local live-state\n/// overlay.\n///\n/// Transaction invariant: this is the capability for engine operations\n/// that may write. Write-relevant reads must be exposed from this transaction,\n/// after the backend write transaction has begun, rather than from session-level\n/// helpers.\npub(crate) struct Transaction {\n    active_version_id: String,\n    live_state: Arc<LiveStateContext>,\n    tracked_state: Arc<TrackedStateContext>,\n    binary_cas: Arc<BinaryCasContext>,\n    commit_store: Arc<CommitStoreContext>,\n    version_ctx: Arc<VersionContext>,\n    schema_resolver: TransactionSchemaResolver,\n    staged_writes: Arc<TransactionWriteBuffer>,\n    storage_transaction: Box<dyn StorageWriteTransaction + Send + Sync + 'static>,\n    visible_schemas: Vec<JsonValue>,\n    functions: FunctionProviderHandle,\n}\n\nimpl Transaction {\n    /// Opens a backend write transaction and creates an execution-scoped\n    /// staging area for SQL/provider hooks.\n    async fn open(\n        mode: &SessionMode,\n        storage: StorageContext,\n        live_state: Arc<LiveStateContext>,\n        tracked_state: Arc<TrackedStateContext>,\n        binary_cas: Arc<BinaryCasContext>,\n        commit_store: Arc<CommitStoreContext>,\n        version_ctx: Arc<VersionContext>,\n        catalog_context: Arc<CatalogContext>,\n    ) -> Result<OpenTransaction, LixError> {\n        let mut storage_transaction = storage.begin_write_transaction().await?;\n        let setup_result = async {\n            let active_version_id = resolve_active_version_id(\n                mode,\n                live_state.as_ref(),\n                version_ctx.as_ref(),\n      
          storage_transaction.as_mut(),\n            )\n            .await?;\n            let runtime_functions = {\n                let runtime_live_state = live_state.reader(storage_transaction.as_mut());\n                FunctionContext::prepare(&runtime_live_state).await?\n            };\n            let functions = runtime_functions.provider();\n            let visible_schemas = {\n                let visible_live_state = live_state.reader(storage_transaction.as_mut());\n                catalog_context\n                    .schema_jsons_for_sql_read_planning(&visible_live_state, &active_version_id)\n                    .await?\n            };\n            let schema_facts = {\n                let visible_live_state = live_state.reader(storage_transaction.as_mut());\n                catalog_context\n                    .schema_facts_for_domain(\n                        &visible_live_state,\n                        &Domain::schema_catalog(active_version_id.clone(), true),\n                    )\n                    .await?\n            };\n            Ok::<_, LixError>((\n                active_version_id,\n                runtime_functions,\n                functions,\n                visible_schemas,\n                schema_facts,\n            ))\n        }\n        .await;\n        let (active_version_id, runtime_functions, functions, visible_schemas, schema_facts) =\n            match setup_result {\n                Ok(result) => result,\n                Err(error) => {\n                    let _ = storage_transaction.rollback().await;\n                    return Err(error);\n                }\n            };\n        let mut schema_resolver = TransactionSchemaResolver::new(catalog_context);\n        schema_resolver.remember_schema_facts(\n            &Domain::schema_catalog(active_version_id.clone(), true),\n            schema_facts,\n        );\n        let staged_writes = Arc::new(TransactionWriteBuffer::new(functions.clone()));\n        
Ok(OpenTransaction {\n            transaction: Self {\n                active_version_id,\n                live_state,\n                tracked_state,\n                binary_cas,\n                commit_store,\n                version_ctx,\n                schema_resolver,\n                staged_writes,\n                storage_transaction,\n                visible_schemas,\n                functions,\n            },\n            runtime_functions,\n        })\n    }\n\n    /// Commits prepared writes, runtime function state, and the backend transaction.\n    ///\n    /// Commit owns the execution boundary: prepared rows become commit-store\n    /// facts, version-ref updates, and visible live_state rows before the\n    /// backend transaction is committed.\n    pub(crate) async fn commit(\n        mut self,\n        runtime_functions: &FunctionContext,\n    ) -> Result<TransactionCommitOutcome, LixError> {\n        let prepared_writes = match self.staged_writes.drain() {\n            Ok(prepared_writes) => prepared_writes,\n            Err(error) => {\n                let _ = self.storage_transaction.rollback().await;\n                return Err(error);\n            }\n        };\n        if let Err(error) = self\n            .validate_prepared_writes_by_version(&prepared_writes)\n            .await\n        {\n            let _ = self.storage_transaction.rollback().await;\n            return Err(error);\n        }\n        if let Err(error) = commit::commit_prepared_writes(\n            &self.binary_cas,\n            &self.commit_store,\n            self.version_ctx.as_ref(),\n            Some(runtime_functions),\n            self.storage_transaction.as_mut(),\n            prepared_writes,\n        )\n        .await\n        {\n            let _ = self.storage_transaction.rollback().await;\n            return Err(error);\n        }\n        self.storage_transaction.commit().await?;\n        Ok(TransactionCommitOutcome::default())\n    }\n\n    /// Rolls back 
the backend transaction.\n    ///\n    /// This is the explicit failure path for a write execution. Dropping the\n    /// buffered transaction without commit is not the API we want callers to\n    /// rely on.\n    #[allow(dead_code)]\n    pub(crate) async fn rollback(self) -> Result<(), LixError> {\n        self.storage_transaction.rollback().await\n    }\n\n    /// Stages one decoded write batch into this transaction.\n    ///\n    /// This is the programmatic write entrypoint used by non-SQL APIs. The\n    /// transaction still owns preparation from `TransactionWriteRow` into\n    /// `PreparedStateRow`, so generated timestamps, change ids, commit ids, and\n    /// commit membership stay in one place.\n    #[allow(dead_code)]\n    pub(crate) async fn stage_write(\n        &mut self,\n        write: TransactionWrite,\n    ) -> Result<TransactionWriteOutcome, LixError> {\n        require_valid_transaction_write_storage_scopes(&write)?;\n        #[cfg(feature = \"storage-benches\")]\n        {\n            crate::storage_bench::record_transaction_rows_staged(transaction_write_row_count(\n                &write,\n            ));\n            crate::storage_bench::record_transaction_untracked_rows(\n                transaction_write_untracked_row_count(&write),\n            );\n        }\n        self.require_existing_transaction_write_version_ids(&write)\n            .await?;\n        let write = self.prepare_transaction_write(write).await?;\n        self.staged_writes.stage_write(write)\n    }\n\n    async fn prepare_transaction_write(\n        &mut self,\n        write: TransactionWrite,\n    ) -> Result<PreparedTransactionWrite, LixError> {\n        Ok(match write {\n            TransactionWrite::Rows { mode, rows } => PreparedTransactionWrite::Rows {\n                mode,\n                rows: self.prepare_transaction_rows(rows).await?,\n            },\n            TransactionWrite::RowsWithFileData {\n                mode,\n                rows,\n             
   file_data,\n                count,\n            } => PreparedTransactionWrite::RowsWithFileData {\n                mode,\n                rows: self.prepare_transaction_rows(rows).await?,\n                file_data,\n                count,\n            },\n            TransactionWrite::AdoptedChanges { changes } => {\n                PreparedTransactionWrite::AdoptedChanges {\n                    rows: self.prepare_adopted_changes(changes).await?,\n                }\n            }\n        })\n    }\n\n    async fn prepare_transaction_rows(\n        &mut self,\n        rows: Vec<TransactionWriteRow>,\n    ) -> Result<Vec<PreparedStateRow>, LixError> {\n        let row_count = rows.len();\n        let staged = self.staged_writes.staging_overlay()?;\n        let live_state = self.live_state.reader(self.storage_transaction.as_mut());\n        let mut rows_by_scope = BTreeMap::<Domain, Vec<(usize, TransactionWriteRow)>>::new();\n        for (index, row) in rows.into_iter().enumerate() {\n            rows_by_scope\n                .entry(Domain::schema_catalog(\n                    row.schema_scope_version_id().to_string(),\n                    row.untracked,\n                ))\n                .or_default()\n                .push((index, row));\n        }\n\n        let mut prepared_rows = Vec::with_capacity(row_count);\n        prepared_rows.resize_with(row_count, || None);\n        for (domain, rows) in rows_by_scope {\n            let functions = self.functions.clone();\n            let catalog = self\n                .schema_resolver\n                .catalog_for_row_normalization(&live_state, &staged, &domain)\n                .await?;\n            for (_, row) in &rows {\n                if row.schema_key != REGISTERED_SCHEMA_KEY {\n                    continue;\n                }\n                if row.file_id.is_some() {\n                    return Err(LixError::new(\n                        LixError::CODE_SCHEMA_DEFINITION,\n                        
\"lix_registered_schema rows must not be scoped to a file\",\n                    )\n                    .with_hint(\"Schema definitions are scoped by version and durability only; write them with null file_id.\"));\n                }\n                remember_pending_registered_schema(\n                    row.snapshot.as_ref().map(TransactionJson::value),\n                    Domain::schema_catalog(\n                        row.schema_scope_version_id().to_string(),\n                        row.untracked,\n                    ),\n                    catalog,\n                )?;\n            }\n            let normalized_rows = rows\n                .into_iter()\n                .map(|(index, row)| {\n                    normalize_transaction_write_row(row, catalog, functions.clone())\n                        .map(|row| (index, row))\n                })\n                .collect::<Result<Vec<_>, _>>()?;\n            for (index, row) in normalized_rows {\n                prepared_rows[index] = Some(prepare_state_row(row, &functions)?);\n            }\n        }\n        Ok(prepared_rows\n            .into_iter()\n            .map(|row| {\n                row.expect(\"every row should be prepared exactly once by schema scope grouping\")\n            })\n            .collect())\n    }\n\n    async fn prepare_adopted_changes(\n        &mut self,\n        changes: Vec<TransactionAdoptedChange>,\n    ) -> Result<Vec<PreparedAdoptedStateRow>, LixError> {\n        let change_count = changes.len();\n        let staged = self.staged_writes.staging_overlay()?;\n        let live_state = self.live_state.reader(self.storage_transaction.as_mut());\n        let mut changes_by_scope =\n            BTreeMap::<Domain, Vec<(usize, TransactionAdoptedChange)>>::new();\n        for (index, change) in changes.into_iter().enumerate() {\n            let schema_scope_version_id = if change.version_id == GLOBAL_VERSION_ID {\n                GLOBAL_VERSION_ID\n            } else {\n           
     change.version_id.as_str()\n            };\n            changes_by_scope\n                .entry(Domain::schema_catalog(\n                    schema_scope_version_id.to_string(),\n                    false,\n                ))\n                .or_default()\n                .push((index, change));\n        }\n\n        let mut prepared_rows = Vec::with_capacity(change_count);\n        prepared_rows.resize_with(change_count, || None);\n        for (domain, changes) in changes_by_scope {\n            let catalog = self\n                .schema_resolver\n                .catalog_for_row_normalization(&live_state, &staged, &domain)\n                .await?;\n            for (_, change) in &changes {\n                let row = &change.projected_row;\n                if row.schema_key != REGISTERED_SCHEMA_KEY {\n                    continue;\n                }\n                if row.file_id.is_some() {\n                    return Err(LixError::new(\n                        LixError::CODE_SCHEMA_DEFINITION,\n                        \"lix_registered_schema rows must not be scoped to a file\",\n                    )\n                    .with_hint(\"Schema definitions are scoped by version and durability only; write them with null file_id.\"));\n                }\n                remember_adopted_registered_schema(\n                    Domain::schema_catalog(change.version_id.clone(), false),\n                    row.snapshot_content.as_deref(),\n                    catalog,\n                )?;\n            }\n            let mut planned_changes = Vec::with_capacity(changes.len());\n            for (index, change) in changes {\n                let row = &change.projected_row;\n                let Some((schema_plan_id, _)) = catalog.plan_for_key(&row.schema_key) else {\n                    return Err(LixError::new(\n                        LixError::CODE_SCHEMA_DEFINITION,\n                        format!(\n                            \"schema '{}' is not visible to 
this transaction\",\n                            row.schema_key\n                        ),\n                    ));\n                };\n                if row.schema_key == REGISTERED_SCHEMA_KEY {\n                    if row.file_id.is_some() {\n                        return Err(LixError::new(\n                            LixError::CODE_SCHEMA_DEFINITION,\n                            \"lix_registered_schema rows must not be scoped to a file\",\n                        )\n                        .with_hint(\"Schema definitions are scoped by version and durability only; write them with null file_id.\"));\n                    }\n                    remember_adopted_registered_schema(\n                        Domain::schema_catalog(change.version_id.clone(), false),\n                        row.snapshot_content.as_deref(),\n                        catalog,\n                    )?;\n                }\n                planned_changes.push((index, change, schema_plan_id));\n            }\n            for (index, change, schema_plan_id) in planned_changes {\n                prepared_rows[index] = Some(prepare_adopted_state_row(change, schema_plan_id)?);\n            }\n        }\n        Ok(prepared_rows\n            .into_iter()\n            .map(|row| row.expect(\"every adopted row should be prepared exactly once\"))\n            .collect())\n    }\n\n    async fn validate_prepared_writes_by_version(\n        &mut self,\n        prepared_writes: &PreparedWriteSet,\n    ) -> Result<(), LixError> {\n        let validation_index = prepared_writes.validation_index();\n        for scope in validation_index.schema_scopes() {\n            #[cfg(feature = \"storage-benches\")]\n            crate::storage_bench::record_transaction_validation_version();\n            let version_prepared_writes = validation_index.validation_set_for_schema_scope(scope);\n            let live_state = self.live_state.reader(self.storage_transaction.as_mut());\n            let schema_catalog = 
self\n                .schema_resolver\n                .catalog_for_validation(&live_state, scope)\n                .await?;\n            validate_prepared_writes(TransactionValidationInput::new(\n                &version_prepared_writes,\n                &schema_catalog,\n                &live_state,\n            ))\n            .await?;\n        }\n        Ok(())\n    }\n\n    /// Convenience helper for programmatic APIs that only stage state rows.\n    #[allow(dead_code)]\n    pub(crate) async fn stage_rows(\n        &mut self,\n        rows: Vec<TransactionWriteRow>,\n    ) -> Result<TransactionWriteOutcome, LixError> {\n        self.stage_write(TransactionWrite::Rows {\n            mode: TransactionWriteMode::Replace,\n            rows,\n        })\n        .await\n    }\n\n    async fn require_existing_transaction_write_version_ids(\n        &mut self,\n        write: &TransactionWrite,\n    ) -> Result<(), LixError> {\n        let version_ids = transaction_write_version_ids(write);\n        let reader = self\n            .version_ctx\n            .ref_reader(self.storage_transaction.as_mut());\n        for version_id in version_ids {\n            if version_id == GLOBAL_VERSION_ID {\n                continue;\n            }\n            if reader.load_head_commit_id(&version_id).await?.is_none() {\n                return Err(LixError::version_not_found(\n                    version_id,\n                    \"stage_write\",\n                    \"target\",\n                ));\n            }\n        }\n        Ok(())\n    }\n\n    /// Returns the active version resolved inside this write transaction.\n    pub(crate) fn active_version_id(&self) -> &str {\n        &self.active_version_id\n    }\n\n    /// Returns this transaction's prepared runtime functions.\n    pub(crate) fn functions(&self) -> FunctionProviderHandle {\n        self.functions.clone()\n    }\n\n    /// Adds an extra parent to the commit generated for `version_id`.\n    ///\n    /// Merge 
uses this to preserve source-branch ancestry. Ordinary writes do\n    /// not call this because commit finalization already parents to the\n    /// version's previous head.\n    pub(crate) fn add_commit_parent(\n        &self,\n        version_id: String,\n        parent_commit_id: String,\n    ) -> Result<(), LixError> {\n        self.staged_writes\n            .add_commit_parent(version_id, parent_commit_id)\n    }\n\n    /// Advances a version ref without staging tracked rows.\n    ///\n    /// Fast-forward merges use this path because the commit graph already\n    /// contains the source head; the target ref only needs to move to it.\n    pub(crate) async fn advance_version_ref(\n        &mut self,\n        version_id: &str,\n        commit_id: &str,\n    ) -> Result<(), LixError> {\n        let timestamp = self.functions.call_timestamp();\n        let mut writes = StorageWriteSet::new();\n        let canonical_row = prepare_version_ref_row(version_id, commit_id, &timestamp)?;\n        self.version_ctx\n            .stage_canonical_ref_rows(&mut writes, &[canonical_row.row])?;\n        writes\n            .apply(&mut self.storage_transaction.as_mut())\n            .await\n            .map(|_| ())\n    }\n\n    /// Returns the commit id currently staged for `version_id`, if tracked rows\n    /// have been staged for that version.\n    pub(crate) fn staged_commit_id(&self, version_id: &str) -> Result<Option<String>, LixError> {\n        self.staged_writes.staged_commit_id(version_id)\n    }\n\n    /// Stages a commit for `version_id` even if no tracked rows changed.\n    pub(crate) fn stage_empty_commit(&self, version_id: String) -> Result<String, LixError> {\n        self.staged_writes.stage_empty_commit(version_id)\n    }\n\n    /// Creates a version-ref reader scoped to this write transaction.\n    pub(crate) fn version_ref_reader(&mut self) -> impl VersionRefReader + '_ {\n        self.version_ctx\n            .ref_reader(self.storage_transaction.as_mut())\n  
  }\n\n    /// Creates a tracked-state reader scoped to this write transaction.\n    pub(crate) fn tracked_state_reader(\n        &mut self,\n    ) -> TrackedStateStoreReader<&mut dyn StorageWriteTransaction> {\n        self.tracked_state.reader(self.storage_transaction.as_mut())\n    }\n\n    /// Creates a commit-graph reader scoped to this write transaction.\n    pub(crate) fn commit_graph_reader(\n        &mut self,\n    ) -> CommitGraphStoreReader<&mut dyn StorageWriteTransaction> {\n        CommitGraphContext::new().reader(self.storage_transaction.as_mut())\n    }\n}\n\nfn prepare_state_row(\n    normalized: NormalizedTransactionWriteRow,\n    functions: &FunctionProviderHandle,\n) -> Result<PreparedStateRow, LixError> {\n    let NormalizedTransactionWriteRow {\n        row,\n        snapshot,\n        schema_plan_id,\n        facts,\n    } = normalized;\n    let updated_at = row.updated_at.unwrap_or_else(|| functions.call_timestamp());\n    let snapshot = snapshot\n        .map(|value| stage_json_from_value(value, \"prepared row snapshot_content\"))\n        .transpose()?;\n    let metadata = row\n        .metadata\n        .map(|value| stage_json_from_value(value, \"prepared row metadata\"))\n        .transpose()?;\n    Ok(PreparedStateRow {\n        schema_plan_id,\n        facts,\n        entity_id: row.entity_id.ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"normalized transaction write row is missing entity_id\",\n            )\n        })?,\n        schema_key: row.schema_key,\n        file_id: row.file_id,\n        snapshot,\n        metadata,\n        origin: row.origin,\n        created_at: row.created_at.unwrap_or_else(|| updated_at.clone()),\n        updated_at,\n        global: row.global,\n        change_id: if row.untracked {\n            row.change_id\n        } else {\n            Some(row.change_id.unwrap_or_else(|| functions.call_uuid_v7()))\n        },\n        commit_id: 
row.commit_id,\n        untracked: row.untracked,\n        version_id: row.version_id,\n    })\n}\n\nfn remember_adopted_registered_schema(\n    domain: Domain,\n    snapshot_content: Option<&str>,\n    catalog: &mut crate::catalog::CatalogSnapshot,\n) -> Result<(), LixError> {\n    let snapshot = snapshot_content\n        .map(|value| {\n            serde_json::from_str::<JsonValue>(value).map_err(|error| {\n                LixError::new(\n                    LixError::CODE_UNKNOWN,\n                    format!(\"adopted registered schema snapshot_content is invalid JSON: {error}\"),\n                )\n            })\n        })\n        .transpose()?;\n    remember_pending_registered_schema(snapshot.as_ref(), domain, catalog)\n}\n\nfn prepare_adopted_state_row(\n    change: TransactionAdoptedChange,\n    schema_plan_id: crate::catalog::SchemaPlanId,\n) -> Result<PreparedAdoptedStateRow, LixError> {\n    if change.change_id != change.projected_row.change_id {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"adopted change '{}' does not match projected row change_id '{}'\",\n                change.change_id, change.projected_row.change_id\n            ),\n        ));\n    }\n    let row = change.projected_row;\n    let snapshot = row\n        .snapshot_content\n        .as_deref()\n        .map(|value| stage_materialized_json_text(value, \"adopted row snapshot_content\"))\n        .transpose()?;\n    let metadata = row\n        .metadata\n        .as_deref()\n        .map(|value| stage_materialized_json_text(value, \"adopted row metadata\"))\n        .transpose()?;\n    Ok(PreparedAdoptedStateRow {\n        schema_plan_id,\n        facts: PreparedRowFacts::default(),\n        entity_id: row.entity_id,\n        schema_key: row.schema_key,\n        file_id: row.file_id,\n        snapshot,\n        metadata,\n        created_at: row.created_at,\n        updated_at: row.updated_at,\n        global: 
change.version_id == GLOBAL_VERSION_ID,\n        change_id: change.change_id,\n        commit_id: String::new(),\n        version_id: change.version_id,\n    })\n}\n\nfn stage_materialized_json_text(\n    value: &str,\n    context: &str,\n) -> Result<crate::transaction::types::StageJson, LixError> {\n    let parsed = serde_json::from_str::<serde_json::Value>(value).map_err(|error| {\n        LixError::new(\n            LixError::CODE_UNKNOWN,\n            format!(\"{context} is invalid JSON: {error}\"),\n        )\n    })?;\n    let prepared = TransactionJson::from_value(parsed, context)?;\n    stage_json_from_value(prepared, context)\n}\n\npub(crate) struct OpenTransaction {\n    pub(crate) transaction: Transaction,\n    pub(crate) runtime_functions: FunctionContext,\n}\n\npub(crate) async fn open_transaction(\n    mode: &SessionMode,\n    storage: StorageContext,\n    live_state: Arc<LiveStateContext>,\n    tracked_state: Arc<TrackedStateContext>,\n    binary_cas: Arc<BinaryCasContext>,\n    commit_store: Arc<CommitStoreContext>,\n    version_ctx: Arc<VersionContext>,\n    catalog_context: Arc<CatalogContext>,\n) -> Result<OpenTransaction, LixError> {\n    Transaction::open(\n        mode,\n        storage,\n        live_state,\n        tracked_state,\n        binary_cas,\n        commit_store,\n        version_ctx,\n        catalog_context,\n    )\n    .await\n}\n\n#[async_trait]\nimpl SqlWriteExecutionContext for Transaction {\n    fn active_version_id(&self) -> &str {\n        &self.active_version_id\n    }\n\n    fn functions(&self) -> FunctionProviderHandle {\n        self.functions.clone()\n    }\n\n    fn list_visible_schemas(&self) -> Result<Vec<JsonValue>, LixError> {\n        Ok(self.visible_schemas.clone())\n    }\n\n    async fn load_bytes_many(&mut self, hashes: &[BlobHash]) -> Result<BlobBytesBatch, LixError> {\n        self.binary_cas\n            .reader(self.storage_transaction.as_mut())\n            .load_bytes_many(hashes)\n            .await\n 
   }\n\n    async fn scan_live_state(\n        &mut self,\n        request: &LiveStateScanRequest,\n    ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n        let staged = self.staged_writes.staging_overlay()?;\n        let base = self.live_state.reader(self.storage_transaction.as_mut());\n        overlay_scan_rows(&base, &staged, request).await\n    }\n\n    async fn load_version_head(&mut self, version_id: &str) -> Result<Option<String>, LixError> {\n        self.version_ctx\n            .ref_reader(self.storage_transaction.as_mut())\n            .load_head_commit_id(version_id)\n            .await\n    }\n\n    async fn stage_write(\n        &mut self,\n        write: TransactionWrite,\n    ) -> Result<TransactionWriteOutcome, LixError> {\n        Transaction::stage_write(self, write).await\n    }\n}\n\nfn transaction_write_version_ids(write: &TransactionWrite) -> BTreeSet<String> {\n    match write {\n        TransactionWrite::Rows { rows, .. } => transaction_write_row_version_ids(rows),\n        TransactionWrite::RowsWithFileData {\n            rows, file_data, ..\n        } => transaction_write_row_version_ids(rows)\n            .into_iter()\n            .chain(stage_file_data_version_ids(file_data))\n            .collect(),\n        TransactionWrite::AdoptedChanges { changes } => changes\n            .iter()\n            .map(|change| change.version_id.clone())\n            .collect(),\n    }\n}\n\n#[cfg(feature = \"storage-benches\")]\nfn transaction_write_row_count(write: &TransactionWrite) -> usize {\n    match write {\n        TransactionWrite::Rows { rows, .. } => rows.len(),\n        TransactionWrite::RowsWithFileData { rows, .. } => rows.len(),\n        TransactionWrite::AdoptedChanges { changes } => changes.len(),\n    }\n}\n\n#[cfg(feature = \"storage-benches\")]\nfn transaction_write_untracked_row_count(write: &TransactionWrite) -> usize {\n    match write {\n        TransactionWrite::Rows { rows, .. 
} => rows.iter().filter(|row| row.untracked).count(),\n        TransactionWrite::RowsWithFileData { rows, .. } => {\n            rows.iter().filter(|row| row.untracked).count()\n        }\n        TransactionWrite::AdoptedChanges { .. } => 0,\n    }\n}\n\nfn require_valid_transaction_write_storage_scopes(\n    write: &TransactionWrite,\n) -> Result<(), LixError> {\n    match write {\n        TransactionWrite::Rows { rows, .. } => {\n            require_valid_transaction_write_row_storage_scopes(rows)\n        }\n        TransactionWrite::RowsWithFileData { rows, .. } => {\n            require_valid_transaction_write_row_storage_scopes(rows)\n        }\n        TransactionWrite::AdoptedChanges { .. } => Ok(()),\n    }\n}\n\nfn require_valid_transaction_write_row_storage_scopes(\n    rows: &[TransactionWriteRow],\n) -> Result<(), LixError> {\n    for row in rows {\n        require_valid_storage_scope(row.version_id.as_str(), row.global)?;\n    }\n    Ok(())\n}\n\nfn require_valid_storage_scope(version_id: &str, global: bool) -> Result<(), LixError> {\n    if global != (version_id == GLOBAL_VERSION_ID) {\n        return Err(LixError::new(\n            LixError::CODE_INVALID_STORAGE_SCOPE,\n            format!(\"invalid storage scope: version_id='{version_id}', global={global}\"),\n        ));\n    }\n    Ok(())\n}\n\nfn transaction_write_row_version_ids(rows: &[TransactionWriteRow]) -> BTreeSet<String> {\n    rows.iter().map(|row| row.version_id.clone()).collect()\n}\n\nfn stage_file_data_version_ids(file_data: &[TransactionFileData]) -> BTreeSet<String> {\n    file_data\n        .iter()\n        .map(|write| write.version_id.clone())\n        .collect()\n}\n\nasync fn resolve_active_version_id(\n    mode: &SessionMode,\n    live_state: &LiveStateContext,\n    version_ctx: &VersionContext,\n    transaction: &mut dyn StorageWriteTransaction,\n) -> Result<String, LixError> {\n    match mode {\n        SessionMode::Pinned { version_id } => Ok(version_id.clone()),\n       
 SessionMode::Workspace => {\n            load_workspace_version_id(live_state, version_ctx, transaction).await\n        }\n    }\n}\n\nasync fn load_workspace_version_id(\n    live_state: &LiveStateContext,\n    version_ctx: &VersionContext,\n    transaction: &mut dyn StorageWriteTransaction,\n) -> Result<String, LixError> {\n    let row = live_state\n        .reader(&mut *transaction)\n        .load_row(&LiveStateRowRequest {\n            schema_key: \"lix_key_value\".to_string(),\n            version_id: GLOBAL_VERSION_ID.to_string(),\n            entity_id: EntityIdentity::single(WORKSPACE_VERSION_KEY),\n            file_id: NullableKeyFilter::Null,\n        })\n        .await?\n        .ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"workspace version selector is missing lix_key_value:lix_workspace_version_id\",\n            )\n        })?;\n    let snapshot_content = row.snapshot_content.as_deref().ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"workspace version selector is missing snapshot_content\",\n        )\n    })?;\n    let snapshot = serde_json::from_str::<JsonValue>(snapshot_content).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"workspace version selector snapshot is invalid JSON: {error}\"),\n        )\n    })?;\n    let version_id = snapshot\n        .get(\"value\")\n        .and_then(JsonValue::as_str)\n        .filter(|value| !value.is_empty())\n        .ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"workspace version selector value must be a non-empty string\",\n            )\n        })?\n        .to_string();\n\n    let head = version_ctx\n        .ref_reader(&mut *transaction)\n        .load_head_commit_id(&version_id)\n        .await?;\n    if head.is_none() {\n        return Err(LixError::version_not_found(\n            version_id,\n  
          \"load_workspace_version_id\",\n            \"workspace_selector\",\n        ));\n    }\n\n    Ok(version_id)\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use serde_json::json;\n\n    use super::*;\n    use crate::backend::testing::UnitTestBackend;\n    use crate::commit_store::{ChangeScanRequest, CommitStoreContext};\n    use crate::tracked_state::{TrackedStateRowRequest, TrackedStateScanRequest};\n    use crate::transaction::types::TransactionJson;\n    use crate::untracked_state::{UntrackedStateContext, UntrackedStateRowRequest};\n    use crate::version::VersionContext;\n    use crate::Backend;\n    use crate::NullableKeyFilter;\n    use crate::GLOBAL_VERSION_ID;\n\n    fn live_state_context() -> LiveStateContext {\n        LiveStateContext::new(\n            crate::tracked_state::TrackedStateContext::new(),\n            crate::untracked_state::UntrackedStateContext::new(),\n            crate::commit_graph::CommitGraphContext::new(),\n        )\n    }\n\n    const SCHEMA_FIXTURE_COMMIT_ID: &str = \"schema-fixture-commit\";\n\n    #[tokio::test]\n    async fn stage_rows_routes_tracked_and_untracked_rows_without_sql() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = Arc::new(live_state_context());\n        seed_visible_schema_rows(storage.clone()).await;\n        let binary_cas = Arc::new(BinaryCasContext::new());\n        let changelog = Arc::new(CommitStoreContext::new());\n        let commit_store = Arc::new(CommitStoreContext::new());\n        let version_ctx = Arc::new(VersionContext::new(Arc::new(UntrackedStateContext::new())));\n        let catalog_context = Arc::new(CatalogContext::new());\n        let opened = open_transaction(\n            &SessionMode::Pinned {\n                version_id: GLOBAL_VERSION_ID.to_string(),\n            },\n            storage.clone(),\n            
Arc::clone(&live_state),\n            Arc::new(crate::tracked_state::TrackedStateContext::new()),\n            Arc::clone(&binary_cas),\n            Arc::clone(&commit_store),\n            Arc::clone(&version_ctx),\n            Arc::clone(&catalog_context),\n        )\n        .await\n        .expect(\"transaction should open\");\n        let mut transaction = opened.transaction;\n        let runtime_functions = opened.runtime_functions;\n\n        transaction\n            .stage_rows(vec![\n                key_value_stage_row(\"tracked-programmatic\", \"tracked\", false),\n                key_value_stage_row(\"untracked-programmatic\", \"untracked\", true),\n            ])\n            .await\n            .expect(\"programmatic rows should stage\");\n        transaction\n            .commit(&runtime_functions)\n            .await\n            .expect(\"transaction should commit\");\n\n        let changes = changelog\n            .reader(storage.clone())\n            .scan_changes(&ChangeScanRequest::default())\n            .await\n            .expect(\"changelog should scan\");\n        assert!(\n            changes.iter().any(|change| change\n                .record\n                .entity_id\n                .as_single_string_owned()\n                .as_deref()\n                == Ok(\"tracked-programmatic\")),\n            \"tracked staged row should be appended to changelog\"\n        );\n        assert!(\n            !changes.iter().any(|change| change\n                .record\n                .entity_id\n                .as_single_string_owned()\n                .as_deref()\n                == Ok(\"untracked-programmatic\")),\n            \"untracked staged row must not be appended to changelog\"\n        );\n\n        let head_commit_id = version_ctx\n            .ref_reader(storage.clone())\n            .load_head_commit_id(GLOBAL_VERSION_ID)\n            .await\n            .expect(\"version ref should load\")\n            .expect(\"tracked commit 
should advance the global version ref\");\n\n        let tracked_row = crate::tracked_state::TrackedStateContext::new()\n            .reader(storage.clone())\n            .load_rows_at_commit(\n                &head_commit_id,\n                &[TrackedStateRowRequest {\n                    schema_key: \"lix_key_value\".to_string(),\n                    entity_id: crate::entity_identity::EntityIdentity::single(\n                        \"tracked-programmatic\",\n                    ),\n                    file_id: NullableKeyFilter::Null,\n                }],\n            )\n            .await\n            .expect(\"tracked state should load\")\n            .pop()\n            .flatten()\n            .expect(\"tracked row should be present in tracked state\");\n        assert_eq!(tracked_row.commit_id, head_commit_id);\n        assert_eq!(\n            tracked_row.snapshot_content.as_deref(),\n            Some(r#\"{\"key\":\"tracked-programmatic\",\"value\":\"tracked\"}\"#)\n        );\n\n        let untracked_row = crate::untracked_state::UntrackedStateContext::new()\n            .reader(storage.clone())\n            .load_row(&UntrackedStateRowRequest {\n                schema_key: \"lix_key_value\".to_string(),\n                version_id: GLOBAL_VERSION_ID.to_string(),\n                entity_id: crate::entity_identity::EntityIdentity::single(\"untracked-programmatic\"),\n                file_id: NullableKeyFilter::Null,\n            })\n            .await\n            .expect(\"untracked state should load\")\n            .expect(\"untracked row should be present in untracked state\");\n        assert_eq!(\n            untracked_row.snapshot_content.as_deref(),\n            Some(r#\"{\"key\":\"untracked-programmatic\",\"value\":\"untracked\"}\"#)\n        );\n\n        let live_untracked_row = live_state\n            .reader(storage.clone())\n            .load_row(&crate::live_state::LiveStateRowRequest {\n                schema_key: 
\"lix_key_value\".to_string(),\n                version_id: GLOBAL_VERSION_ID.to_string(),\n                entity_id: crate::entity_identity::EntityIdentity::single(\"untracked-programmatic\"),\n                file_id: NullableKeyFilter::Null,\n            })\n            .await\n            .expect(\"live state should load\")\n            .expect(\"untracked row should be visible through live state\");\n        assert!(live_untracked_row.untracked);\n        assert!(live_untracked_row.global);\n        assert_eq!(live_untracked_row.version_id, GLOBAL_VERSION_ID);\n\n        let tracked_rows = crate::tracked_state::TrackedStateContext::new()\n            .reader(storage.clone())\n            .scan_rows_at_commit(&head_commit_id, &TrackedStateScanRequest::default())\n            .await\n            .expect(\"tracked state should scan\");\n        assert!(\n            tracked_rows\n                .iter()\n                .all(|row| row.entity_id.as_single_string_owned().as_deref()\n                    != Ok(\"untracked-programmatic\")),\n            \"untracked staged rows should not be written into tracked state\"\n        );\n    }\n\n    #[tokio::test]\n    async fn commit_validates_staged_rows_before_persistence() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let live_state = Arc::new(live_state_context());\n        seed_visible_schema_rows(storage.clone()).await;\n        let binary_cas = Arc::new(BinaryCasContext::new());\n        let changelog = Arc::new(CommitStoreContext::new());\n        let commit_store = Arc::new(CommitStoreContext::new());\n        let version_ctx = Arc::new(VersionContext::new(Arc::new(UntrackedStateContext::new())));\n        let catalog_context = Arc::new(CatalogContext::new());\n        let opened = open_transaction(\n            &SessionMode::Pinned {\n                version_id: 
GLOBAL_VERSION_ID.to_string(),\n            },\n            storage.clone(),\n            Arc::clone(&live_state),\n            Arc::new(crate::tracked_state::TrackedStateContext::new()),\n            Arc::clone(&binary_cas),\n            Arc::clone(&commit_store),\n            Arc::clone(&version_ctx),\n            Arc::clone(&catalog_context),\n        )\n        .await\n        .expect(\"transaction should open\");\n        let mut transaction = opened.transaction;\n        let runtime_functions = opened.runtime_functions;\n\n        let mut invalid_row = key_value_stage_row(\"invalid-programmatic\", \"invalid\", false);\n        invalid_row.snapshot = Some(TransactionJson::from_value_for_test(\n            json!({\"key\": \"invalid-programmatic\"}),\n        ));\n        transaction\n            .stage_rows(vec![invalid_row])\n            .await\n            .expect(\"invalid row should still reach commit validation\");\n\n        let error = transaction\n            .commit(&runtime_functions)\n            .await\n            .expect_err(\"validation should reject before persistence\");\n        assert!(\n            error.message.contains(\"snapshot_content validation failed\"),\n            \"validation error should explain the rejected schema data: {error:?}\"\n        );\n\n        let changes = changelog\n            .reader(storage.clone())\n            .scan_changes(&ChangeScanRequest::default())\n            .await\n            .expect(\"changelog should scan after failed commit\");\n        assert!(\n            changes.iter().all(|change| change\n                .record\n                .entity_id\n                .as_single_string_owned()\n                .as_deref()\n                != Ok(\"invalid-programmatic\")),\n            \"validation failure must happen before changelog persistence\"\n        );\n        let head = version_ctx\n            .ref_reader(storage.clone())\n            .load_head_commit_id(GLOBAL_VERSION_ID)\n            
.await\n            .expect(\"version ref should load after failed commit\");\n        assert_eq!(\n            head.as_deref(),\n            Some(SCHEMA_FIXTURE_COMMIT_ID),\n            \"validation failure must not advance the version ref\"\n        );\n    }\n\n    #[tokio::test]\n    async fn commit_rejects_non_object_metadata_without_sql() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(Arc::clone(&backend));\n        let (live_state, _binary_cas, changelog, version_ref, runtime_functions, mut transaction) =\n            open_test_transaction(&backend).await;\n\n        let mut row = key_value_stage_row(\"invalid-metadata\", \"value\", false);\n        row.metadata = Some(TransactionJson::from_value_for_test(json!(\"not-an-object\")));\n        transaction\n            .stage_rows(vec![row])\n            .await\n            .expect(\"row should stage before metadata validation\");\n\n        let error = transaction\n            .commit(&runtime_functions)\n            .await\n            .expect_err(\"non-object metadata should fail commit validation\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION);\n        assert!(\n            error.message.contains(\"metadata\") && error.message.contains(\"JSON object\"),\n            \"error should explain metadata object validation: {error:?}\"\n        );\n        assert_no_persistence_after_validation_failure(\n            storage.clone(),\n            &live_state,\n            &changelog,\n            &version_ref,\n            \"invalid-metadata\",\n        )\n        .await;\n    }\n\n    #[tokio::test]\n    async fn stage_rows_rejects_unknown_schema_key_without_sql() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let (\n            _live_state,\n            _binary_cas,\n            _changelog,\n            _version_ref,\n            _runtime_functions,\n  
          mut transaction,\n        ) = open_test_transaction(&backend).await;\n\n        let mut row = key_value_stage_row(\"unknown-schema\", \"value\", false);\n        row.schema_key = \"missing_schema\".to_string();\n\n        let error = transaction\n            .stage_rows(vec![row])\n            .await\n            .expect_err(\"unknown schema should be rejected while staging\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);\n        assert!(\n            error\n                .message\n                .contains(\"schema 'missing_schema' is not visible\"),\n            \"error should explain missing schema visibility: {error:?}\"\n        );\n    }\n\n    #[tokio::test]\n    async fn stage_rows_rejects_missing_version_without_sql() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let (\n            _live_state,\n            _binary_cas,\n            _changelog,\n            _version_ref,\n            _runtime_functions,\n            mut transaction,\n        ) = open_test_transaction(&backend).await;\n\n        let mut row = key_value_stage_row(\"ghost-version-row\", \"value\", false);\n        row.version_id = \"ghost-version\".to_string();\n        row.global = false;\n\n        let error = transaction\n            .stage_rows(vec![row])\n            .await\n            .expect_err(\"missing version should be rejected before staging\");\n\n        assert_eq!(error.code, LixError::CODE_VERSION_NOT_FOUND);\n        assert!(\n            error\n                .message\n                .contains(\"version 'ghost-version' was not found\"),\n            \"error should explain missing version: {error:?}\"\n        );\n    }\n\n    #[tokio::test]\n    async fn stage_rows_rejects_invalid_storage_scope_without_sql() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let (\n            _live_state,\n            _binary_cas,\n            
_changelog,\n            _version_ref,\n            _runtime_functions,\n            mut transaction,\n        ) = open_test_transaction(&backend).await;\n\n        let mut row = key_value_stage_row(\"invalid-storage-scope\", \"value\", false);\n        row.version_id = GLOBAL_VERSION_ID.to_string();\n        row.global = false;\n\n        let error = transaction\n            .stage_rows(vec![row])\n            .await\n            .expect_err(\"invalid storage scope should be rejected before staging\");\n\n        assert_eq!(error.code, LixError::CODE_INVALID_STORAGE_SCOPE);\n        assert!(\n            error.message.contains(\"version_id='global', global=false\"),\n            \"error should explain invalid storage scope: {error:?}\"\n        );\n    }\n\n    #[tokio::test]\n    async fn stage_rows_rejects_invalid_snapshot_json_without_sql() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let (\n            _live_state,\n            _binary_cas,\n            _changelog,\n            _version_ref,\n            _runtime_functions,\n            mut transaction,\n        ) = open_test_transaction(&backend).await;\n\n        let mut row = key_value_stage_row(\"invalid-json\", \"value\", false);\n        row.snapshot = Some(TransactionJson::from_value_for_test(json!(\"not-an-object\")));\n\n        let error = transaction\n            .stage_rows(vec![row])\n            .await\n            .expect_err(\"non-object snapshot should be rejected while staging\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION);\n        assert!(\n            error.message.contains(\"must be a JSON object\"),\n            \"error should explain invalid snapshot shape: {error:?}\"\n        );\n    }\n\n    #[tokio::test]\n    async fn commit_rejects_snapshot_that_violates_json_schema_without_sql() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let storage = 
StorageContext::new(Arc::clone(&backend));\n        let (live_state, _binary_cas, changelog, version_ref, runtime_functions, mut transaction) =\n            open_test_transaction(&backend).await;\n\n        let mut row = key_value_stage_row(\"schema-mismatch\", \"value\", false);\n        row.snapshot = Some(TransactionJson::from_value_for_test(\n            json!({\"key\": \"schema-mismatch\"}),\n        ));\n        transaction\n            .stage_rows(vec![row])\n            .await\n            .expect(\"row should stage before JSON Schema validation\");\n\n        let error = transaction\n            .commit(&runtime_functions)\n            .await\n            .expect_err(\"JSON Schema mismatch should fail commit validation\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION);\n        assert!(\n            error.message.contains(\"snapshot_content validation failed\"),\n            \"error should explain JSON Schema validation: {error:?}\"\n        );\n        assert_no_persistence_after_validation_failure(\n            storage.clone(),\n            &live_state,\n            &changelog,\n            &version_ref,\n            \"schema-mismatch\",\n        )\n        .await;\n    }\n\n    #[tokio::test]\n    async fn stage_rows_rejects_malformed_registered_schema_without_sql() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let (\n            _live_state,\n            _binary_cas,\n            _changelog,\n            _version_ref,\n            _runtime_functions,\n            mut transaction,\n        ) = open_test_transaction(&backend).await;\n\n        let mut row = key_value_stage_row(\"malformed-registered-schema\", \"value\", false);\n        row.schema_key = \"lix_registered_schema\".to_string();\n        row.snapshot = Some(TransactionJson::from_value_for_test(json!({\n            \"value\": {\n                \"x-lix-key\": \"malformed_registered_schema\",\n                
\"x-lix-primary-key\": [\"id\"],\n                \"type\": \"object\",\n                \"properties\": {\n                    \"id\": { \"type\": \"string\" }\n                },\n                \"required\": [\"id\"],\n                \"additionalProperties\": false\n            }\n        })));\n        row.entity_id = None;\n\n        let error = transaction\n            .stage_rows(vec![row])\n            .await\n            .expect_err(\"malformed registered schema should be rejected while staging\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);\n        assert!(\n            error.message.contains(\"x-lix-primary-key\"),\n            \"error should explain malformed registered schema: {error:?}\"\n        );\n    }\n\n    #[tokio::test]\n    async fn stage_rows_rejects_primary_key_entity_id_mismatch_without_sql() {\n        let backend: Arc<dyn Backend + Send + Sync> = Arc::new(UnitTestBackend::new());\n        let (\n            _live_state,\n            _binary_cas,\n            _changelog,\n            _version_ref,\n            _runtime_functions,\n            mut transaction,\n        ) = open_test_transaction(&backend).await;\n\n        let mut row = key_value_stage_row(\"right-id\", \"value\", false);\n        row.entity_id = Some(crate::entity_identity::EntityIdentity::single(\"wrong-id\"));\n\n        let error = transaction\n            .stage_rows(vec![row])\n            .await\n            .expect_err(\"entity id mismatch should be rejected while staging\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION);\n        assert!(\n            error\n                .message\n                .contains(\"does not match x-lix-primary-key derived entity_id\"),\n            \"error should explain entity id mismatch: {error:?}\"\n        );\n    }\n\n    async fn open_test_transaction(\n        backend: &Arc<dyn Backend + Send + Sync>,\n    ) -> (\n        Arc<LiveStateContext>,\n        Arc<BinaryCasContext>,\n  
      Arc<CommitStoreContext>,\n        Arc<VersionContext>,\n        FunctionContext,\n        Transaction,\n    ) {\n        let storage = StorageContext::new(Arc::clone(backend));\n        let live_state = Arc::new(live_state_context());\n        seed_visible_schema_rows(storage.clone()).await;\n        let binary_cas = Arc::new(BinaryCasContext::new());\n        let changelog = Arc::new(CommitStoreContext::new());\n        let commit_store = Arc::new(CommitStoreContext::new());\n        let version_ctx = Arc::new(VersionContext::new(Arc::new(UntrackedStateContext::new())));\n        let catalog_context = Arc::new(CatalogContext::new());\n        let opened = open_transaction(\n            &SessionMode::Pinned {\n                version_id: GLOBAL_VERSION_ID.to_string(),\n            },\n            storage,\n            Arc::clone(&live_state),\n            Arc::new(crate::tracked_state::TrackedStateContext::new()),\n            Arc::clone(&binary_cas),\n            Arc::clone(&commit_store),\n            Arc::clone(&version_ctx),\n            catalog_context,\n        )\n        .await\n        .expect(\"transaction should open\");\n        let transaction = opened.transaction;\n        let runtime_functions = opened.runtime_functions;\n\n        (\n            live_state,\n            binary_cas,\n            changelog,\n            version_ctx,\n            runtime_functions,\n            transaction,\n        )\n    }\n\n    async fn seed_visible_schema_rows(storage: StorageContext) {\n        let mut writes = StorageWriteSet::new();\n        let rows = crate::schema::seed_schema_definitions()\n            .into_iter()\n            .map(|schema| {\n                let key = crate::schema::schema_key_from_definition(schema)\n                    .expect(\"seed schema key should derive\");\n                let snapshot_content = json!({ \"value\": schema }).to_string();\n                crate::tracked_state::MaterializedTrackedStateRow {\n                    
entity_id: crate::schema::registered_schema_entity_id(&key.schema_key)\n                        .expect(\"registered schema identity should derive\"),\n                    schema_key: \"lix_registered_schema\".to_string(),\n                    file_id: None,\n                    snapshot_content: Some(snapshot_content),\n                    metadata: None,\n                    deleted: false,\n                    created_at: \"1970-01-01T00:00:00.000Z\".to_string(),\n                    updated_at: \"1970-01-01T00:00:00.000Z\".to_string(),\n                    change_id: format!(\"schema-fixture-{}\", key.schema_key),\n                    commit_id: SCHEMA_FIXTURE_COMMIT_ID.to_string(),\n                }\n            })\n            .collect::<Vec<_>>();\n        let version_ref_row = prepare_version_ref_row(\n            GLOBAL_VERSION_ID,\n            SCHEMA_FIXTURE_COMMIT_ID,\n            \"1970-01-01T00:00:00.000Z\",\n        )\n        .expect(\"schema fixture version ref should stage\");\n        let mut storage_transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"schema fixture transaction should open\");\n        crate::test_support::stage_tracked_root_from_materialized(\n            storage_transaction.as_mut(),\n            &crate::tracked_state::TrackedStateContext::new(),\n            SCHEMA_FIXTURE_COMMIT_ID,\n            None,\n            &rows,\n        )\n        .await\n        .expect(\"schema fixture rows should stage\");\n        crate::untracked_state::UntrackedStateContext::new()\n            .writer(&mut writes)\n            .stage_rows([version_ref_row.row.as_ref()])\n            .expect(\"schema fixture version ref should stage\");\n        writes\n            .apply(&mut storage_transaction.as_mut())\n            .await\n            .expect(\"schema fixture rows should apply\");\n        storage_transaction\n            .commit()\n            .await\n            .expect(\"schema fixture 
transaction should commit\");\n    }\n\n    async fn assert_no_persistence_after_validation_failure(\n        storage: StorageContext,\n        live_state: &LiveStateContext,\n        changelog: &CommitStoreContext,\n        version_ctx: &VersionContext,\n        rejected_entity_id: &str,\n    ) {\n        let changes = changelog\n            .reader(storage.clone())\n            .scan_changes(&ChangeScanRequest::default())\n            .await\n            .expect(\"changelog should scan after failed commit\");\n        assert!(\n            changes.iter().all(|change| change\n                .record\n                .entity_id\n                .as_single_string_owned()\n                .as_deref()\n                != Ok(rejected_entity_id)),\n            \"validation failure must happen before changelog persistence\"\n        );\n        let head = version_ctx\n            .ref_reader(storage.clone())\n            .load_head_commit_id(GLOBAL_VERSION_ID)\n            .await\n            .expect(\"version ref should load after failed commit\");\n        assert_eq!(\n            head.as_deref(),\n            Some(SCHEMA_FIXTURE_COMMIT_ID),\n            \"validation failure must not advance the version ref\"\n        );\n        let row = live_state\n            .reader(storage)\n            .load_row(&crate::live_state::LiveStateRowRequest {\n                schema_key: \"lix_key_value\".to_string(),\n                version_id: GLOBAL_VERSION_ID.to_string(),\n                entity_id: crate::entity_identity::EntityIdentity::single(rejected_entity_id),\n                file_id: NullableKeyFilter::Null,\n            })\n            .await\n            .expect(\"live state should load after failed commit\");\n        assert_eq!(\n            row, None,\n            \"validation failure must happen before live-state persistence\"\n        );\n    }\n\n    fn key_value_stage_row(key: &str, value: &str, untracked: bool) -> TransactionWriteRow {\n        
TransactionWriteRow {\n            entity_id: Some(crate::entity_identity::EntityIdentity::single(key)),\n            schema_key: \"lix_key_value\".to_string(),\n            file_id: None,\n            snapshot: Some(TransactionJson::from_value_for_test(json!({\n                \"key\": key,\n                \"value\": value,\n            }))),\n            metadata: None,\n            origin: None,\n            created_at: None,\n            updated_at: None,\n            global: true,\n            change_id: None,\n            commit_id: None,\n            untracked,\n            version_id: GLOBAL_VERSION_ID.to_string(),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/transaction/live_state_overlay.rs",
    "content": "use std::collections::BTreeSet;\n\nuse crate::live_state::MaterializedLiveStateRow;\nuse crate::live_state::{LiveStateReader, LiveStateScanRequest};\nuse crate::transaction::staging::{PreparedStateRowIdentity, PreparedStateRowOverlay};\nuse crate::LixError;\n\npub(crate) async fn overlay_scan_rows(\n    base: &dyn LiveStateReader,\n    staged: &PreparedStateRowOverlay,\n    request: &LiveStateScanRequest,\n) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n    let staged_parts = staged.scan_parts(request)?;\n    let hidden_identities = staged_parts.hidden_identities;\n    let mut rows = staged_parts.rows;\n    let mut visible_identities = rows\n        .iter()\n        .map(PreparedStateRowIdentity::from)\n        .collect::<BTreeSet<_>>();\n\n    for row in base.scan_rows(request).await? {\n        let identity = PreparedStateRowIdentity::from(&row);\n        if hidden_identities.contains(&identity) {\n            continue;\n        }\n        if visible_identities.insert(identity) {\n            rows.push(row);\n        }\n    }\n\n    if let Some(limit) = request.limit {\n        rows.truncate(limit);\n    }\n    Ok(rows)\n}\n"
  },
  {
    "path": "packages/engine/src/transaction/mod.rs",
    "content": "mod commit;\nmod context;\nmod live_state_overlay;\nmod normalization;\nmod prep;\nmod schema_resolver;\nmod staging;\npub(crate) mod types;\nmod validation;\n\npub(crate) use context::open_transaction;\npub(crate) use context::Transaction;\npub(crate) use prep::prepare_version_ref_row;\n"
  },
  {
    "path": "packages/engine/src/transaction/normalization.rs",
    "content": "use std::sync::Arc;\n\nuse serde_json::{Map as JsonMap, Value as JsonValue};\n\nuse crate::catalog::{CatalogSnapshot, SchemaPlan, SchemaPlanId};\nuse crate::common::format_json_pointer;\nuse crate::common::normalize_path_segment;\nuse crate::domain::Domain;\nuse crate::entity_identity::{EntityIdentity, EntityIdentityError};\nuse crate::functions::FunctionProviderHandle;\nuse crate::schema::{\n    is_seed_schema_key, schema_from_registered_snapshot, validate_lix_schema,\n    validate_lix_schema_definition,\n};\nuse crate::transaction::types::{PreparedRowFacts, TransactionJson, TransactionWriteRow};\nuse crate::LixError;\n\npub(crate) const REGISTERED_SCHEMA_KEY: &str = \"lix_registered_schema\";\nconst DIRECTORY_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_directory_descriptor\";\nconst FILE_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_file_descriptor\";\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct NormalizedTransactionWriteRow {\n    pub(crate) row: TransactionWriteRow,\n    pub(crate) snapshot: Option<TransactionJson>,\n    pub(crate) schema_plan_id: SchemaPlanId,\n    pub(crate) facts: PreparedRowFacts,\n}\n\n/// Normalizes one incoming row into a row with final snapshot/entity identity.\n///\n/// This is the canonical schema-semantics boundary for transaction writes. It owns\n/// schema default application, primary-key identity derivation, and explicit\n/// identity mismatch validation. 
SQL providers should not pre-derive primary\n/// keys for schemas that can be normalized here; they should pass decoded\n/// snapshots and let this layer complete them.\n///\n/// This function intentionally does not assign timestamps, change ids, or\n/// commit ids; those are prepared-row fields assigned after semantic\n/// normalization has produced the final identity.\npub(crate) fn normalize_transaction_write_row(\n    mut row: TransactionWriteRow,\n    schema_catalog: &mut CatalogSnapshot,\n    functions: FunctionProviderHandle,\n) -> Result<NormalizedTransactionWriteRow, LixError> {\n    validate_transaction_write_row_schema_identity(&row)?;\n\n    let Some((schema_plan_id, schema_plan)) = schema_catalog.plan_for_key(&row.schema_key) else {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\n                \"schema '{}' is not visible to this transaction\",\n                row.schema_key\n            ),\n        ));\n    };\n\n    let normalized_snapshot = if let Some(snapshot) = row.snapshot.take() {\n        let (mut snapshot, normalized) = snapshot_object_from_transaction_json(snapshot, &row)?;\n        let defaults_changed = apply_defaults(&mut snapshot, schema_plan, &row, functions)?;\n        let descriptor_changed = normalize_filesystem_descriptor_snapshot(&row, &mut snapshot)?;\n        let snapshot = JsonValue::Object(snapshot);\n        row.entity_id = Some(resolve_entity_id(&row, schema_plan, &snapshot)?);\n        if defaults_changed || descriptor_changed {\n            Some(TransactionJson::from_value(\n                snapshot,\n                \"normalized transaction snapshot_content\",\n            )?)\n        } else {\n            Some(TransactionJson::from_parts(Arc::new(snapshot), normalized))\n        }\n    } else if row.entity_id.is_none() {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_VALIDATION,\n            format!(\n                \"tombstone for schema 
'{}' requires entity_id\",\n                row.schema_key\n            ),\n        ));\n    } else {\n        None\n    };\n\n    if row.schema_key == REGISTERED_SCHEMA_KEY {\n        if row.file_id.is_some() {\n            return Err(LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                \"lix_registered_schema rows must not be scoped to a file\",\n            )\n            .with_hint(\"Schema definitions are scoped by version and durability only; write them with null file_id.\"));\n        }\n        let schema_domain =\n            Domain::schema_catalog(row.schema_scope_version_id().to_string(), row.untracked);\n        remember_pending_registered_schema(\n            normalized_snapshot.as_ref().map(TransactionJson::value),\n            schema_domain,\n            schema_catalog,\n        )?;\n    }\n\n    Ok(NormalizedTransactionWriteRow {\n        row,\n        snapshot: normalized_snapshot,\n        schema_plan_id,\n        facts: PreparedRowFacts::default(),\n    })\n}\n\nfn validate_transaction_write_row_schema_identity(\n    row: &TransactionWriteRow,\n) -> Result<(), LixError> {\n    if row.schema_key.is_empty() {\n        return Err(LixError::new(\n            LixError::CODE_UNKNOWN,\n            \"engine transaction staging requires non-empty schema_key\",\n        ));\n    }\n    Ok(())\n}\n\nfn snapshot_object_from_transaction_json(\n    snapshot: TransactionJson,\n    row: &TransactionWriteRow,\n) -> Result<(JsonMap<String, JsonValue>, Arc<str>), LixError> {\n    let (snapshot, normalized) = snapshot.into_parts();\n    let snapshot = match Arc::try_unwrap(snapshot) {\n        Ok(snapshot) => snapshot,\n        Err(snapshot) => snapshot.as_ref().clone(),\n    };\n    match snapshot {\n        JsonValue::Object(snapshot) => Ok((snapshot, normalized)),\n        _ => Err(LixError::new(\n            LixError::CODE_SCHEMA_VALIDATION,\n            format!(\n                \"snapshot_content for schema '{}' must be a JSON 
object\",\n                row.schema_key\n            ),\n        )),\n    }\n}\n\nfn apply_defaults(\n    snapshot: &mut JsonMap<String, JsonValue>,\n    schema_plan: &SchemaPlan,\n    row: &TransactionWriteRow,\n    functions: FunctionProviderHandle,\n) -> Result<bool, LixError> {\n    schema_plan\n        .defaults\n        .apply(snapshot, functions, &row.schema_key)\n}\n\nfn normalize_filesystem_descriptor_snapshot(\n    row: &TransactionWriteRow,\n    snapshot: &mut JsonMap<String, JsonValue>,\n) -> Result<bool, LixError> {\n    match row.schema_key.as_str() {\n        DIRECTORY_DESCRIPTOR_SCHEMA_KEY => normalize_directory_descriptor_snapshot(row, snapshot),\n        FILE_DESCRIPTOR_SCHEMA_KEY => normalize_file_descriptor_snapshot(row, snapshot),\n        _ => Ok(false),\n    }\n}\n\nfn normalize_directory_descriptor_snapshot(\n    row: &TransactionWriteRow,\n    snapshot: &mut JsonMap<String, JsonValue>,\n) -> Result<bool, LixError> {\n    let Some(name) = optional_string_field(snapshot, \"name\", row)? else {\n        return Ok(false);\n    };\n    let normalized_name = normalize_path_segment(name)?;\n    if name == normalized_name {\n        return Ok(false);\n    }\n    snapshot.insert(\"name\".to_string(), JsonValue::String(normalized_name));\n    Ok(true)\n}\n\nfn normalize_file_descriptor_snapshot(\n    row: &TransactionWriteRow,\n    snapshot: &mut JsonMap<String, JsonValue>,\n) -> Result<bool, LixError> {\n    let Some(name) = optional_string_field(snapshot, \"name\", row)? 
else {\n        return Ok(false);\n    };\n    let normalized_name = normalize_path_segment(name)?;\n    if name == normalized_name {\n        return Ok(false);\n    }\n    snapshot.insert(\"name\".to_string(), JsonValue::String(normalized_name));\n    Ok(true)\n}\n\nfn optional_string_field<'a>(\n    snapshot: &'a JsonMap<String, JsonValue>,\n    field: &str,\n    row: &TransactionWriteRow,\n) -> Result<Option<&'a str>, LixError> {\n    let Some(value) = snapshot.get(field) else {\n        return Ok(None);\n    };\n    value.as_str().map(Some).ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_SCHEMA_VALIDATION,\n            format!(\n                \"snapshot_content for schema '{}' field '{}' must be a string\",\n                row.schema_key, field\n            ),\n        )\n    })\n}\n\nfn resolve_entity_id(\n    row: &TransactionWriteRow,\n    schema_plan: &SchemaPlan,\n    snapshot: &JsonValue,\n) -> Result<EntityIdentity, LixError> {\n    let Some(primary_key_paths) = schema_plan.primary_key.as_ref() else {\n        return row.entity_id.clone().ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_SCHEMA_VALIDATION,\n                format!(\n                    \"write for schema '{}' requires entity_id because the schema has no x-lix-primary-key\",\n                    row.schema_key\n                ),\n            )\n        });\n    };\n    let derived = EntityIdentity::from_primary_key_paths(snapshot, primary_key_paths)\n        .map_err(|error| entity_id_derivation_error(row, primary_key_paths, error))?;\n    if let Some(entity_id) = row.entity_id.as_ref() {\n        if entity_id != &derived {\n            return Err(LixError::new(\n                LixError::CODE_SCHEMA_VALIDATION,\n                format!(\n                    \"entity_id '{}' does not match x-lix-primary-key derived entity_id '{}' for schema '{}'\",\n                    entity_id.as_json_array_text()?, derived.as_json_array_text()?, 
row.schema_key\n                ),\n            ));\n        }\n    }\n    Ok(derived)\n}\n\nfn entity_id_derivation_error(\n    row: &TransactionWriteRow,\n    primary_key_paths: &[Vec<String>],\n    error: EntityIdentityError,\n) -> LixError {\n    let detail = match error {\n        EntityIdentityError::EmptyPrimaryKey => \"empty x-lix-primary-key\".to_string(),\n        EntityIdentityError::EmptyPrimaryKeyPath { index } => {\n            format!(\"empty x-lix-primary-key pointer at index {index}\")\n        }\n        EntityIdentityError::EmptyPrimaryKeyValue { index } => {\n            let pointer = primary_key_paths\n                .get(index)\n                .map(|path| format_json_pointer(path))\n                .unwrap_or_else(|| format!(\"index {index}\"));\n            format!(\"empty value at primary-key pointer '{pointer}'\")\n        }\n        EntityIdentityError::MissingPrimaryKeyValue { index } => {\n            let pointer = format_json_pointer(&primary_key_paths[index]);\n            format!(\"missing value at primary-key pointer '{pointer}'\")\n        }\n        EntityIdentityError::UnsupportedPrimaryKeyValue { index } => {\n            let pointer = format_json_pointer(&primary_key_paths[index]);\n            format!(\"non-string value at primary-key pointer '{pointer}'\")\n        }\n        EntityIdentityError::InvalidEncodedEntityIdentity => {\n            \"invalid encoded entity identity\".to_string()\n        }\n    };\n    LixError::new(\n        LixError::CODE_SCHEMA_VALIDATION,\n        format!(\n            \"failed to derive entity_id for schema '{}': {detail}\",\n            row.schema_key\n        ),\n    )\n}\n\npub(crate) fn remember_pending_registered_schema(\n    snapshot: Option<&JsonValue>,\n    domain: Domain,\n    schema_catalog: &mut CatalogSnapshot,\n) -> Result<(), LixError> {\n    let Some(snapshot) = snapshot else {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            
\"lix_registered_schema rows cannot be deleted yet; schema deletion is not supported\",\n        ));\n    };\n    if let Some(schema) = snapshot.get(\"value\") {\n        validate_lix_schema_definition(schema)?;\n    }\n    {\n        let registered_schema_definition = schema_catalog\n            .schema(REGISTERED_SCHEMA_KEY)\n            .ok_or_else(|| {\n                LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    \"lix_registered_schema schema is not visible to this transaction\",\n                )\n            })?;\n        validate_lix_schema(registered_schema_definition, &snapshot)?;\n    }\n    let (key, schema) = schema_from_registered_snapshot(&snapshot)?;\n    if is_seed_schema_key(&key.schema_key) {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\n                \"schema '{}' is a system schema and cannot be registered at runtime\",\n                key.schema_key\n            ),\n        ));\n    }\n    validate_lix_schema_definition(&schema)?;\n    schema_catalog.insert_schema_for_domain(domain, key, schema)?;\n    Ok(())\n}\n\n#[cfg(test)]\nmod tests {\n    use serde_json::json;\n\n    use super::*;\n    use crate::functions::{FunctionProvider, SharedFunctionProvider};\n    use crate::schema::seed_schema_definition;\n\n    #[test]\n    fn normalization_derives_entity_id_from_primary_key() {\n        let mut catalog = catalog_with(vec![schema_with_default_id()]);\n        let row = TransactionWriteRow {\n            entity_id: None,\n            schema_key: \"normalization_schema\".to_string(),\n            snapshot: Some(snapshot_json(\n                r#\"{\"id\":\"entity-from-snapshot\",\"value\":\"hello\"}\"#,\n            )),\n            ..base_stage_row()\n        };\n\n        let row =\n            normalize_transaction_write_row(row, &mut catalog, functions()).expect(\"normalize row\");\n\n        assert_eq!(\n            
row.row.entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\n                \"entity-from-snapshot\"\n            ))\n        );\n    }\n\n    #[test]\n    fn normalization_applies_json_and_cel_defaults_before_identity_derivation() {\n        let mut catalog = catalog_with(vec![schema_with_default_id()]);\n        let row = TransactionWriteRow {\n            entity_id: None,\n            schema_key: \"normalization_schema\".to_string(),\n            snapshot: Some(snapshot_json(r#\"{}\"#)),\n            ..base_stage_row()\n        };\n\n        let row =\n            normalize_transaction_write_row(row, &mut catalog, functions()).expect(\"normalize row\");\n        let snapshot = normalized_snapshot(&row);\n\n        assert_eq!(\n            row.row.entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\n                \"uuid-default\"\n            ))\n        );\n        assert_eq!(snapshot[\"id\"], \"uuid-default\");\n        assert_eq!(snapshot[\"value\"], \"literal-default\");\n    }\n\n    #[test]\n    fn normalization_applies_cel_defaults_from_snapshot_context() {\n        let mut catalog = catalog_with(vec![schema_with_cel_field_default()]);\n        let row = TransactionWriteRow {\n            entity_id: None,\n            schema_key: \"cel_field_default_schema\".to_string(),\n            snapshot: Some(snapshot_json(r#\"{\"id\":\"entity-1\",\"name\":\"Sample\"}\"#)),\n            ..base_stage_row()\n        };\n\n        let row =\n            normalize_transaction_write_row(row, &mut catalog, functions()).expect(\"normalize row\");\n        let snapshot = normalized_snapshot(&row);\n\n        assert_eq!(snapshot[\"slug\"], \"Sample-slug\");\n    }\n\n    #[test]\n    fn normalization_x_lix_default_overrides_json_default() {\n        let mut catalog = catalog_with(vec![schema_with_overridden_default()]);\n        let row = TransactionWriteRow {\n            entity_id: None,\n            
schema_key: \"overridden_default_schema\".to_string(),\n            snapshot: Some(snapshot_json(r#\"{\"id\":\"entity-1\"}\"#)),\n            ..base_stage_row()\n        };\n\n        let row =\n            normalize_transaction_write_row(row, &mut catalog, functions()).expect(\"normalize row\");\n        let snapshot = normalized_snapshot(&row);\n\n        assert_eq!(snapshot[\"status\"], \"computed\");\n    }\n\n    #[test]\n    fn normalization_does_not_overwrite_explicit_null_with_default() {\n        let mut catalog = catalog_with(vec![schema_with_nullable_default()]);\n        let row = TransactionWriteRow {\n            entity_id: None,\n            schema_key: \"nullable_default_schema\".to_string(),\n            snapshot: Some(snapshot_json(r#\"{\"id\":\"entity-1\",\"status\":null}\"#)),\n            ..base_stage_row()\n        };\n\n        let row =\n            normalize_transaction_write_row(row, &mut catalog, functions()).expect(\"normalize row\");\n        let snapshot = normalized_snapshot(&row);\n\n        assert_eq!(snapshot[\"status\"], JsonValue::Null);\n    }\n\n    #[test]\n    fn normalization_applies_timestamp_function_default() {\n        let mut catalog = catalog_with(vec![schema_with_timestamp_default()]);\n        let row = TransactionWriteRow {\n            entity_id: None,\n            schema_key: \"timestamp_default_schema\".to_string(),\n            snapshot: Some(snapshot_json(r#\"{\"id\":\"entity-1\"}\"#)),\n            ..base_stage_row()\n        };\n\n        let row =\n            normalize_transaction_write_row(row, &mut catalog, functions()).expect(\"normalize row\");\n        let snapshot = normalized_snapshot(&row);\n\n        assert_eq!(snapshot[\"created_at\"], \"1970-01-01T00:00:00.000Z\");\n    }\n\n    #[test]\n    fn normalization_surfaces_cel_default_errors() {\n        let mut catalog = catalog_with(vec![schema_with_unknown_cel_default()]);\n        let row = TransactionWriteRow {\n            entity_id: None,\n      
      schema_key: \"unknown_cel_default_schema\".to_string(),\n            snapshot: Some(snapshot_json(r#\"{\"id\":\"entity-1\"}\"#)),\n            ..base_stage_row()\n        };\n\n        let error = normalize_transaction_write_row(row, &mut catalog, functions())\n            .expect_err(\"default should fail\");\n\n        assert!(error.message.contains(\"failed to evaluate x-lix-default\"));\n        assert!(error.message.contains(\"unknown_cel_default_schema.slug\"));\n    }\n\n    #[test]\n    fn normalization_rejects_entity_id_that_disagrees_with_primary_key() {\n        let mut catalog = catalog_with(vec![schema_with_default_id()]);\n        let row = TransactionWriteRow {\n            entity_id: Some(crate::entity_identity::EntityIdentity::single(\"wrong-id\")),\n            schema_key: \"normalization_schema\".to_string(),\n            snapshot: Some(snapshot_json(r#\"{\"id\":\"right-id\",\"value\":\"hello\"}\"#)),\n            ..base_stage_row()\n        };\n\n        let error = normalize_transaction_write_row(row, &mut catalog, functions())\n            .expect_err(\"id mismatch fails\");\n\n        assert!(error\n            .message\n            .contains(\"does not match x-lix-primary-key derived entity_id\"));\n    }\n\n    #[test]\n    fn normalization_derives_json_array_entity_id_for_composite_primary_key() {\n        let mut catalog = catalog_with(vec![composite_key_schema()]);\n        let row = TransactionWriteRow {\n            entity_id: None,\n            schema_key: \"composite_key_schema\".to_string(),\n            snapshot: Some(snapshot_json(r#\"{\"namespace\":\"a~b\",\"key\":\"1\"}\"#)),\n            ..base_stage_row()\n        };\n\n        let row =\n            normalize_transaction_write_row(row, &mut catalog, functions()).expect(\"normalize row\");\n        let entity_id = row.row.entity_id.expect(\"composite entity id\");\n        let projected_entity_id = entity_id\n            .as_json_array_text()\n            
.expect(\"entity id should project\");\n\n        assert_eq!(projected_entity_id, \"[\\\"a~b\\\",\\\"1\\\"]\");\n    }\n\n    #[test]\n    fn normalization_rejects_non_string_primary_key_values() {\n        let mut catalog = catalog_with(vec![composite_key_schema()]);\n        let row = TransactionWriteRow {\n            entity_id: None,\n            schema_key: \"composite_key_schema\".to_string(),\n            snapshot: Some(snapshot_json(r#\"{\"namespace\":\"a~b\",\"key\":1}\"#)),\n            ..base_stage_row()\n        };\n\n        let error = normalize_transaction_write_row(row, &mut catalog, functions())\n            .expect_err(\"non-string primary key values should fail\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION);\n        assert!(error\n            .message\n            .contains(\"non-string value at primary-key pointer '/key'\"));\n    }\n\n    #[test]\n    fn normalization_validates_explicit_composite_entity_id_against_projection() {\n        let mut catalog = catalog_with(vec![composite_key_schema()]);\n        let snapshot = json!({\n            \"namespace\": \"a~b\",\n            \"key\": \"1\",\n        });\n        let derived = EntityIdentity::from_primary_key_paths(\n            &snapshot,\n            &[vec![\"namespace\".to_string()], vec![\"key\".to_string()]],\n        )\n        .expect(\"identity should derive\");\n        let row = TransactionWriteRow {\n            entity_id: Some(derived.clone()),\n            schema_key: \"composite_key_schema\".to_string(),\n            snapshot: Some(transaction_json(snapshot.clone())),\n            ..base_stage_row()\n        };\n\n        let row =\n            normalize_transaction_write_row(row, &mut catalog, functions()).expect(\"normalize row\");\n\n        assert_eq!(row.row.entity_id.as_ref(), Some(&derived));\n    }\n\n    #[test]\n    fn normalization_makes_pending_registered_schema_visible_to_later_rows() {\n        let mut catalog = 
catalog_with(vec![seed_schema_definition(REGISTERED_SCHEMA_KEY)\n            .expect(\"registered schema builtin\")\n            .clone()]);\n        let registered = TransactionWriteRow {\n            entity_id: None,\n            schema_key: REGISTERED_SCHEMA_KEY.to_string(),\n            snapshot: Some(transaction_json(json!({\n                \"value\": dynamic_schema_definition(),\n            }))),\n            ..base_stage_row()\n        };\n\n        normalize_transaction_write_row(registered, &mut catalog, functions())\n            .expect(\"register schema\");\n\n        let dynamic = TransactionWriteRow {\n            entity_id: None,\n            schema_key: \"dynamic_schema\".to_string(),\n            snapshot: Some(snapshot_json(r#\"{\"id\":\"dynamic-1\"}\"#)),\n            ..base_stage_row()\n        };\n        let dynamic = normalize_transaction_write_row(dynamic, &mut catalog, functions())\n            .expect(\"dynamic row\");\n\n        assert_eq!(\n            dynamic.row.entity_id.as_ref(),\n            Some(&crate::entity_identity::EntityIdentity::single(\"dynamic-1\"))\n        );\n    }\n\n    #[test]\n    fn normalization_canonicalizes_filesystem_descriptor_segments() {\n        let mut catalog = catalog_with(vec![\n            builtin_schema(FILE_DESCRIPTOR_SCHEMA_KEY),\n            builtin_schema(DIRECTORY_DESCRIPTOR_SCHEMA_KEY),\n        ]);\n\n        let file = TransactionWriteRow {\n            entity_id: None,\n            schema_key: FILE_DESCRIPTOR_SCHEMA_KEY.to_string(),\n            snapshot: Some(transaction_json(json!({\n                \"id\": \"file-cafe\",\n                \"directory_id\": null,\n                \"name\": \"Cafe\\u{301}.txt\",\n            }))),\n            global: false,\n            ..base_stage_row()\n        };\n        let file = normalize_transaction_write_row(file, &mut catalog, functions())\n            .expect(\"normalize file\");\n        let file_snapshot = normalized_snapshot(&file);\n        
assert_eq!(file_snapshot[\"name\"], \"Café.txt\");\n\n        let directory = TransactionWriteRow {\n            entity_id: None,\n            schema_key: DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(),\n            snapshot: Some(transaction_json(json!({\n                \"id\": \"dir-cafe\",\n                \"parent_id\": null,\n                \"name\": \"Cafe\\u{301}\",\n            }))),\n            global: false,\n            ..base_stage_row()\n        };\n        let directory = normalize_transaction_write_row(directory, &mut catalog, functions())\n            .expect(\"normalize directory\");\n        let directory_snapshot = normalized_snapshot(&directory);\n        assert_eq!(directory_snapshot[\"name\"], \"Café\");\n    }\n\n    #[test]\n    fn normalization_rejects_invalid_filesystem_descriptor_segments() {\n        let mut catalog = catalog_with(vec![\n            builtin_schema(FILE_DESCRIPTOR_SCHEMA_KEY),\n            builtin_schema(DIRECTORY_DESCRIPTOR_SCHEMA_KEY),\n        ]);\n\n        let dot_segment = normalize_transaction_write_row(\n            TransactionWriteRow {\n                entity_id: None,\n                schema_key: FILE_DESCRIPTOR_SCHEMA_KEY.to_string(),\n                snapshot: Some(transaction_json(json!({\n                    \"id\": \"file-dotdot\",\n                    \"directory_id\": null,\n                    \"name\": \"..\",\n                }))),\n                global: false,\n                ..base_stage_row()\n            },\n            &mut catalog,\n            functions(),\n        )\n        .expect_err(\"file descriptor name should reject dot segments\");\n        assert_eq!(dot_segment.code, \"LIX_ERROR_PATH_DOT_SEGMENT\");\n\n        let bidi = normalize_transaction_write_row(\n            TransactionWriteRow {\n                entity_id: None,\n                schema_key: FILE_DESCRIPTOR_SCHEMA_KEY.to_string(),\n                snapshot: Some(transaction_json(json!({\n                    \"id\": 
\"file-bidi\",\n                    \"directory_id\": null,\n                    \"name\": \"safe\\u{202E}txt\",\n                }))),\n                global: false,\n                ..base_stage_row()\n            },\n            &mut catalog,\n            functions(),\n        )\n        .expect_err(\"file descriptor name should reject bidi formatting characters\");\n        assert_eq!(bidi.code, \"LIX_ERROR_PATH_INVALID_SEGMENT_CODE_POINT\");\n\n        let zero_width = normalize_transaction_write_row(\n            TransactionWriteRow {\n                entity_id: None,\n                schema_key: DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(),\n                snapshot: Some(transaction_json(json!({\n                    \"id\": \"dir-zero-width\",\n                    \"parent_id\": null,\n                    \"name\": \"zero\\u{200D}width\",\n                }))),\n                global: false,\n                ..base_stage_row()\n            },\n            &mut catalog,\n            functions(),\n        )\n        .expect_err(\"directory descriptor name should reject zero-width characters\");\n        assert_eq!(zero_width.code, \"LIX_ERROR_PATH_INVALID_SEGMENT_CODE_POINT\");\n    }\n\n    #[test]\n    fn normalization_keeps_file_descriptor_name_opaque() {\n        let mut catalog = catalog_with(vec![builtin_schema(FILE_DESCRIPTOR_SCHEMA_KEY)]);\n\n        let row = normalize_transaction_write_row(\n            TransactionWriteRow {\n                entity_id: None,\n                schema_key: FILE_DESCRIPTOR_SCHEMA_KEY.to_string(),\n                snapshot: Some(transaction_json(json!({\n                    \"id\": \"file-opaque-name\",\n                    \"directory_id\": null,\n                    \"name\": \"foo.bar\",\n                }))),\n                global: false,\n                ..base_stage_row()\n            },\n            &mut catalog,\n            functions(),\n        )\n        .expect(\"file descriptor name should be an opaque 
basename\");\n\n        let snapshot = normalized_snapshot(&row);\n        assert_eq!(snapshot[\"name\"], \"foo.bar\");\n    }\n\n    fn normalized_snapshot(row: &NormalizedTransactionWriteRow) -> &JsonValue {\n        row.snapshot\n            .as_ref()\n            .expect(\"normalized test row should have a snapshot\")\n            .value()\n    }\n\n    fn catalog_with(schemas: Vec<JsonValue>) -> CatalogSnapshot {\n        let mut visible_schemas = schemas;\n        if visible_schemas.iter().any(|schema| {\n            schema.get(\"x-lix-key\").and_then(JsonValue::as_str) == Some(FILE_DESCRIPTOR_SCHEMA_KEY)\n        }) && !visible_schemas.iter().any(|schema| {\n            schema.get(\"x-lix-key\").and_then(JsonValue::as_str)\n                == Some(DIRECTORY_DESCRIPTOR_SCHEMA_KEY)\n        }) {\n            visible_schemas.push(builtin_schema(DIRECTORY_DESCRIPTOR_SCHEMA_KEY));\n        }\n        CatalogSnapshot::from_visible_schemas(&visible_schemas).expect(\"catalog\")\n    }\n\n    fn builtin_schema(schema_key: &str) -> JsonValue {\n        seed_schema_definition(schema_key)\n            .unwrap_or_else(|| panic!(\"{schema_key} builtin schema should exist\"))\n            .clone()\n    }\n\n    fn transaction_json(value: JsonValue) -> TransactionJson {\n        TransactionJson::from_value_for_test(value)\n    }\n\n    fn snapshot_json(value: &str) -> TransactionJson {\n        transaction_json(serde_json::from_str(value).expect(\"test snapshot should parse\"))\n    }\n\n    fn base_stage_row() -> TransactionWriteRow {\n        TransactionWriteRow {\n            entity_id: Some(crate::entity_identity::EntityIdentity::single(\"entity-1\")),\n            schema_key: \"normalization_schema\".to_string(),\n            file_id: None,\n            snapshot: Some(snapshot_json(r#\"{\"id\":\"entity-1\",\"value\":\"hello\"}\"#)),\n            metadata: None,\n            origin: None,\n            created_at: None,\n            updated_at: None,\n            global: 
true,\n            change_id: None,\n            commit_id: None,\n            untracked: false,\n            version_id: crate::GLOBAL_VERSION_ID.to_string(),\n        }\n    }\n\n    fn schema_with_default_id() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"normalization_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\", \"x-lix-default\": \"lix_uuid_v7()\" },\n                \"value\": { \"type\": \"string\", \"default\": \"literal-default\" }\n            },\n            \"required\": [\"id\", \"value\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn schema_with_cel_field_default() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"cel_field_default_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"name\": { \"type\": \"string\" },\n                \"slug\": { \"type\": \"string\", \"x-lix-default\": \"name + '-slug'\" }\n            },\n            \"required\": [\"id\", \"name\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn schema_with_overridden_default() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"overridden_default_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"status\": {\n                    \"type\": \"string\",\n                    \"default\": \"literal\",\n                    \"x-lix-default\": \"'computed'\"\n                }\n            },\n            \"required\": [\"id\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn schema_with_nullable_default() -> JsonValue {\n        json!({\n            \"x-lix-key\": 
\"nullable_default_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"status\": {\n                    \"anyOf\": [{ \"type\": \"string\" }, { \"type\": \"null\" }],\n                    \"x-lix-default\": \"'computed'\"\n                }\n            },\n            \"required\": [\"id\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn schema_with_timestamp_default() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"timestamp_default_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"created_at\": { \"type\": \"string\", \"x-lix-default\": \"lix_timestamp()\" }\n            },\n            \"required\": [\"id\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn schema_with_unknown_cel_default() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"unknown_cel_default_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"slug\": { \"type\": \"string\", \"x-lix-default\": \"missing_var + '-slug'\" }\n            },\n            \"required\": [\"id\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn composite_key_schema() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"composite_key_schema\",\n            \"x-lix-primary-key\": [\"/namespace\", \"/key\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"namespace\": { \"type\": \"string\" },\n                \"key\": { \"type\": \"string\" }\n            },\n            \"required\": [\"namespace\", \"key\"],\n            \"additionalProperties\": false\n  
      })\n    }\n\n    fn dynamic_schema_definition() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"dynamic_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn functions() -> FunctionProviderHandle {\n        SharedFunctionProvider::new(Box::new(FixedFunctions) as Box<dyn FunctionProvider + Send>)\n    }\n\n    struct FixedFunctions;\n\n    impl FunctionProvider for FixedFunctions {\n        fn uuid_v7(&mut self) -> String {\n            \"uuid-default\".to_string()\n        }\n\n        fn timestamp(&mut self) -> String {\n            \"1970-01-01T00:00:00.000Z\".to_string()\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/transaction/prep.rs",
    "content": "use crate::entity_identity::EntityIdentity;\nuse crate::untracked_state::UntrackedStateRow;\nuse crate::version::VERSION_REF_SCHEMA_KEY;\nuse crate::{LixError, GLOBAL_VERSION_ID};\n\npub(crate) struct PreparedVersionRefRow {\n    pub(crate) row: UntrackedStateRow,\n}\n\npub(crate) fn prepare_version_ref_row(\n    version_id: &str,\n    commit_id: &str,\n    timestamp: &str,\n) -> Result<PreparedVersionRefRow, LixError> {\n    let snapshot = serde_json::json!({\n        \"id\": version_id,\n        \"commit_id\": commit_id,\n    });\n    let snapshot = crate::json_store::NormalizedJson::from_value(\n        &snapshot,\n        \"engine version-ref snapshot_content\",\n    )?;\n\n    Ok(PreparedVersionRefRow {\n        row: UntrackedStateRow {\n            entity_id: EntityIdentity::single(version_id),\n            schema_key: VERSION_REF_SCHEMA_KEY.to_string(),\n            file_id: None,\n            snapshot_content: Some(snapshot.as_str().to_string()),\n            metadata: None,\n            created_at: timestamp.to_string(),\n            updated_at: timestamp.to_string(),\n            global: true,\n            version_id: GLOBAL_VERSION_ID.to_string(),\n        },\n    })\n}\n"
  },
  {
    "path": "packages/engine/src/transaction/schema_resolver.rs",
    "content": "use std::collections::BTreeMap;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\n\nuse crate::catalog::{CatalogContext, CatalogSnapshot, SchemaCatalogFact};\nuse crate::domain::Domain;\nuse crate::live_state::{\n    LiveStateReader, LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow,\n};\nuse crate::transaction::live_state_overlay::overlay_scan_rows;\nuse crate::transaction::staging::PreparedStateRowOverlay;\nuse crate::LixError;\n\npub(crate) struct TransactionSchemaResolver {\n    context: Arc<CatalogContext>,\n    catalogs_by_domain: BTreeMap<Domain, CatalogEntry>,\n}\n\nenum CatalogEntry {\n    SchemaFacts(Vec<SchemaCatalogFact>),\n    Catalog(CatalogSnapshot),\n}\n\nimpl TransactionSchemaResolver {\n    pub(crate) fn new(context: Arc<CatalogContext>) -> Self {\n        Self {\n            context,\n            catalogs_by_domain: BTreeMap::new(),\n        }\n    }\n\n    async fn load_catalog_for_domain(\n        &mut self,\n        live_state: &dyn LiveStateReader,\n        staged: Option<&PreparedStateRowOverlay>,\n        domain: &Domain,\n    ) -> Result<(), LixError> {\n        let domain = domain.schema_catalog_domain();\n        let needs_load = !self.catalogs_by_domain.contains_key(&domain);\n        if needs_load {\n            let facts = if let Some(staged) = staged {\n                let reader = TransactionSchemaLiveStateReader {\n                    base: live_state,\n                    staged,\n                };\n                self.context\n                    .schema_facts_for_domain(&reader, &domain)\n                    .await?\n            } else {\n                self.context\n                    .schema_facts_for_domain(live_state, &domain)\n                    .await?\n            };\n            self.catalogs_by_domain\n                .insert(domain.clone(), CatalogEntry::SchemaFacts(facts));\n        }\n\n        let should_materialize = self\n            .catalogs_by_domain\n            
.get(&domain)\n            .is_some_and(|entry| matches!(entry, CatalogEntry::SchemaFacts(_)));\n        if should_materialize {\n            #[cfg(feature = \"storage-benches\")]\n            crate::storage_bench::record_transaction_schema_catalog_load();\n            let entry = self\n                .catalogs_by_domain\n                .remove(&domain)\n                .expect(\"schema catalog entry should exist after load\");\n            let CatalogEntry::SchemaFacts(facts) = entry else {\n                unreachable!(\"catalog entry was checked as schema facts\");\n            };\n            let catalog = CatalogSnapshot::from_schema_facts(&facts)?;\n            self.catalogs_by_domain\n                .insert(domain, CatalogEntry::Catalog(catalog));\n        }\n        Ok(())\n    }\n\n    pub(crate) async fn catalog_for_row_normalization(\n        &mut self,\n        live_state: &dyn LiveStateReader,\n        staged: &PreparedStateRowOverlay,\n        domain: &Domain,\n    ) -> Result<&mut CatalogSnapshot, LixError> {\n        self.load_catalog_for_domain(live_state, Some(staged), domain)\n            .await?;\n        let domain = domain.schema_catalog_domain();\n        match self\n            .catalogs_by_domain\n            .get_mut(&domain)\n            .expect(\"catalog cache should contain requested version\")\n        {\n            CatalogEntry::Catalog(catalog) => Ok(catalog),\n            CatalogEntry::SchemaFacts(_) => {\n                unreachable!(\"schema catalog should be materialized before mutable access\")\n            }\n        }\n    }\n\n    pub(crate) async fn catalog_for_validation(\n        &mut self,\n        live_state: &dyn LiveStateReader,\n        domain: &Domain,\n    ) -> Result<&CatalogSnapshot, LixError> {\n        self.load_catalog_for_domain(live_state, None, domain)\n            .await?;\n        let domain = domain.schema_catalog_domain();\n        match self\n            .catalogs_by_domain\n            
.get(&domain)\n            .expect(\"catalog cache should contain requested version\")\n        {\n            CatalogEntry::Catalog(catalog) => Ok(catalog),\n            CatalogEntry::SchemaFacts(_) => {\n                unreachable!(\"schema catalog should be materialized before validation access\")\n            }\n        }\n    }\n\n    pub(crate) fn remember_schema_facts(&mut self, domain: &Domain, facts: Vec<SchemaCatalogFact>) {\n        self.catalogs_by_domain.insert(\n            domain.schema_catalog_domain(),\n            CatalogEntry::SchemaFacts(facts),\n        );\n    }\n}\n\nstruct TransactionSchemaLiveStateReader<'a> {\n    base: &'a dyn LiveStateReader,\n    staged: &'a PreparedStateRowOverlay,\n}\n\n#[async_trait]\nimpl LiveStateReader for TransactionSchemaLiveStateReader<'_> {\n    async fn scan_rows(\n        &self,\n        request: &LiveStateScanRequest,\n    ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n        overlay_scan_rows(self.base, self.staged, request).await\n    }\n\n    async fn load_row(\n        &self,\n        request: &LiveStateRowRequest,\n    ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n        self.base.load_row(request).await\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/transaction/staging.rs",
    "content": "use std::collections::{BTreeMap, BTreeSet, HashMap};\nuse std::sync::{Arc, Mutex};\n\nuse crate::catalog::SchemaPlanId;\nuse crate::domain::{Domain, DomainRowIdentity};\nuse crate::entity_identity::EntityIdentity;\nuse crate::functions::{FunctionProvider, FunctionProviderHandle};\n#[cfg(test)]\nuse crate::live_state::LiveStateRowRequest;\nuse crate::live_state::{LiveStateScanRequest, MaterializedLiveStateRow};\n#[cfg(test)]\nuse crate::transaction::types::{stage_json_from_value, TransactionJson};\nuse crate::transaction::types::{\n    LogicalPrimaryKey, PreparedTransactionWrite, TransactionFileData, TransactionWriteMode,\n    TransactionWriteOperation, TransactionWriteOrigin, TransactionWriteOutcome,\n};\nuse crate::transaction::types::{PreparedAdoptedStateRow, PreparedStateRow, StagedCommitMembers};\nuse crate::GLOBAL_VERSION_ID;\nuse crate::{LixError, NullableKeyFilter};\n\n/// Transaction-local write buffer after transaction-boundary preparation.\n///\n/// This is the engine seam between SQL execution and transaction ownership:\n/// write frontends pass decoded `TransactionWriteRow`s to `Transaction`, the\n/// transaction prepares them into stable `PreparedStateRow`s, reads build a\n/// `PreparedStateRowOverlay` from those rows, and commit drains the same rows.\npub(crate) struct TransactionWriteBuffer {\n    functions: FunctionProviderHandle,\n    rows: Mutex<Vec<Option<PreparedStateRow>>>,\n    adopted_rows: Mutex<Vec<Option<PreparedAdoptedStateRow>>>,\n    by_identity: Mutex<HashMap<PreparedStateRowIdentity, RowSlot>>,\n    insert_identities: Mutex<BTreeMap<PreparedStateRowIdentity, Option<TransactionWriteOrigin>>>,\n    commit_members_by_version: Mutex<BTreeMap<String, StagedCommitMembers>>,\n    extra_commit_parents_by_version: Mutex<BTreeMap<String, Vec<String>>>,\n    file_data_writes: Mutex<Vec<TransactionFileData>>,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) enum RowSlot {\n    State(usize),\n    
Adopted(usize),\n}\n\n/// Drained prepared transaction writes ready for commit.\npub(crate) struct PreparedWriteSet {\n    pub(crate) state_rows: Vec<PreparedStateRow>,\n    pub(crate) adopted_rows: Vec<PreparedAdoptedStateRow>,\n    pub(crate) insert_identities:\n        BTreeMap<PreparedStateRowIdentity, Option<TransactionWriteOrigin>>,\n    pub(crate) commit_members_by_version: BTreeMap<String, StagedCommitMembers>,\n    pub(crate) extra_commit_parents_by_version: BTreeMap<String, Vec<String>>,\n    pub(crate) file_data_writes: Vec<TransactionFileData>,\n}\n\npub(crate) struct PreparedWriteValidationSet<'a> {\n    rows: Vec<PreparedValidationRow<'a>>,\n    constraint_rows: Vec<PreparedValidationRow<'a>>,\n    insert_identities: Vec<(\n        &'a PreparedStateRowIdentity,\n        Option<&'a TransactionWriteOrigin>,\n    )>,\n}\n\npub(crate) struct PreparedWriteValidationIndex<'a> {\n    rows_by_schema_scope: BTreeMap<Domain, Vec<PreparedValidationRow<'a>>>,\n    insert_identities_by_schema_scope: BTreeMap<\n        Domain,\n        Vec<(\n            &'a PreparedStateRowIdentity,\n            Option<&'a TransactionWriteOrigin>,\n        )>,\n    >,\n}\n\n#[derive(Clone, Copy)]\npub(crate) enum PreparedValidationRow<'a> {\n    State(&'a PreparedStateRow),\n    Adopted(&'a PreparedAdoptedStateRow),\n}\n\nimpl<'a> PreparedValidationRow<'a> {\n    pub(crate) fn entity_id(&self) -> &EntityIdentity {\n        match self {\n            Self::State(row) => &row.entity_id,\n            Self::Adopted(row) => &row.entity_id,\n        }\n    }\n\n    pub(crate) fn schema_plan_id(&self) -> SchemaPlanId {\n        match self {\n            Self::State(row) => row.schema_plan_id,\n            Self::Adopted(row) => row.schema_plan_id,\n        }\n    }\n\n    pub(crate) fn schema_key(&self) -> &str {\n        match self {\n            Self::State(row) => &row.schema_key,\n            Self::Adopted(row) => &row.schema_key,\n        }\n    }\n\n    pub(crate) fn file_id(&self) 
-> &Option<String> {\n        match self {\n            Self::State(row) => &row.file_id,\n            Self::Adopted(row) => &row.file_id,\n        }\n    }\n\n    #[cfg(test)]\n    pub(crate) fn snapshot_content(&self) -> Option<&str> {\n        match self {\n            Self::State(row) => row\n                .snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref()),\n            Self::Adopted(row) => row\n                .snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref()),\n        }\n    }\n\n    pub(crate) fn snapshot_json(self) -> Option<&'a serde_json::Value> {\n        match self {\n            Self::State(row) => row\n                .snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.value.as_ref()),\n            Self::Adopted(row) => row\n                .snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.value.as_ref()),\n        }\n    }\n\n    pub(crate) fn metadata_json(self) -> Option<&'a serde_json::Value> {\n        match self {\n            Self::State(row) => row\n                .metadata\n                .as_ref()\n                .map(|metadata| metadata.value.as_ref()),\n            Self::Adopted(row) => row\n                .metadata\n                .as_ref()\n                .map(|metadata| metadata.value.as_ref()),\n        }\n    }\n\n    pub(crate) fn untracked(&self) -> bool {\n        match self {\n            Self::State(row) => row.untracked,\n            Self::Adopted(_) => false,\n        }\n    }\n\n    pub(crate) fn version_id(&self) -> &str {\n        match self {\n            Self::State(row) => &row.version_id,\n            Self::Adopted(row) => &row.version_id,\n        }\n    }\n\n    pub(crate) fn domain(&self) -> Domain {\n        Domain::exact_file(\n            self.version_id().to_string(),\n            self.untracked(),\n            self.file_id().clone(),\n        )\n    
}\n\n    pub(crate) fn domain_row_identity(&self) -> DomainRowIdentity {\n        DomainRowIdentity::in_domain(\n            self.domain(),\n            self.schema_key().to_string(),\n            self.entity_id().clone(),\n        )\n    }\n}\n\nimpl<'a> PreparedWriteValidationIndex<'a> {\n    pub(crate) fn schema_scopes(&self) -> impl Iterator<Item = &Domain> {\n        self.rows_by_schema_scope.keys()\n    }\n\n    pub(crate) fn validation_set_for_schema_scope(\n        &self,\n        schema_scope: &Domain,\n    ) -> PreparedWriteValidationSet<'a> {\n        let constraint_rows = self\n            .rows_by_schema_scope\n            .iter()\n            .flat_map(|(target_scope, rows)| {\n                rows.iter().copied().filter(move |row| {\n                    schema_scope.validation_scope_contains_constraint_domain(target_scope)\n                        || (row.snapshot_json().is_none()\n                            && target_scope.tombstone_domain_affects_validation_scope(schema_scope))\n                })\n            })\n            .collect();\n        PreparedWriteValidationSet {\n            rows: self\n                .rows_by_schema_scope\n                .get(schema_scope)\n                .cloned()\n                .unwrap_or_default(),\n            constraint_rows,\n            insert_identities: self\n                .insert_identities_by_schema_scope\n                .get(schema_scope)\n                .cloned()\n                .unwrap_or_default(),\n        }\n    }\n}\n\nimpl<'a> PreparedWriteValidationSet<'a> {\n    pub(crate) fn rows(&self) -> impl Iterator<Item = PreparedValidationRow<'a>> + '_ {\n        self.rows.iter().copied()\n    }\n\n    pub(crate) fn constraint_rows(&self) -> impl Iterator<Item = PreparedValidationRow<'a>> + '_ {\n        self.constraint_rows.iter().copied()\n    }\n\n    pub(crate) fn insert_identities(\n        &self,\n    ) -> impl Iterator<Item = (&PreparedStateRowIdentity, Option<&TransactionWriteOrigin>)> 
{\n        self.insert_identities\n            .iter()\n            .map(|(identity, origin)| (*identity, *origin))\n    }\n}\n\nimpl PreparedWriteSet {\n    #[cfg(test)]\n    pub(crate) fn validation_rows(&self) -> impl Iterator<Item = PreparedValidationRow<'_>> + '_ {\n        self.state_rows\n            .iter()\n            .map(PreparedValidationRow::State)\n            .chain(self.adopted_rows.iter().map(PreparedValidationRow::Adopted))\n    }\n\n    pub(crate) fn validation_index(&self) -> PreparedWriteValidationIndex<'_> {\n        let mut rows_by_schema_scope = BTreeMap::<Domain, Vec<PreparedValidationRow<'_>>>::new();\n        for row in &self.state_rows {\n            let row = PreparedValidationRow::State(row);\n            rows_by_schema_scope\n                .entry(row.domain().schema_catalog_domain())\n                .or_default()\n                .push(row);\n        }\n        for row in &self.adopted_rows {\n            let row = PreparedValidationRow::Adopted(row);\n            rows_by_schema_scope\n                .entry(row.domain().schema_catalog_domain())\n                .or_default()\n                .push(row);\n        }\n\n        let mut insert_identities_by_schema_scope = BTreeMap::<\n            Domain,\n            Vec<(&PreparedStateRowIdentity, Option<&TransactionWriteOrigin>)>,\n        >::new();\n        for (identity, origin) in &self.insert_identities {\n            insert_identities_by_schema_scope\n                .entry(identity.domain().schema_catalog_domain())\n                .or_default()\n                .push((identity, origin.as_ref()));\n        }\n\n        PreparedWriteValidationIndex {\n            rows_by_schema_scope,\n            insert_identities_by_schema_scope,\n        }\n    }\n\n    #[cfg(test)]\n    pub(crate) fn validation_set_for_tests(&self) -> PreparedWriteValidationSet<'_> {\n        let rows: Vec<_> = self.validation_rows().collect();\n        let insert_identities = self\n            
.insert_identities\n            .iter()\n            .map(|(identity, origin)| (identity, origin.as_ref()))\n            .collect();\n        PreparedWriteValidationSet {\n            constraint_rows: rows.clone(),\n            rows,\n            insert_identities,\n        }\n    }\n}\n\nimpl TransactionWriteBuffer {\n    pub(crate) fn new(functions: FunctionProviderHandle) -> Self {\n        Self {\n            functions,\n            rows: Mutex::new(Vec::new()),\n            adopted_rows: Mutex::new(Vec::new()),\n            by_identity: Mutex::new(HashMap::new()),\n            insert_identities: Mutex::new(BTreeMap::new()),\n            commit_members_by_version: Mutex::new(BTreeMap::new()),\n            extra_commit_parents_by_version: Mutex::new(BTreeMap::new()),\n            file_data_writes: Mutex::new(Vec::new()),\n        }\n    }\n\n    /// Drains staged writes for commit.\n    pub(crate) fn drain(&self) -> Result<PreparedWriteSet, LixError> {\n        let mut rows_guard = self.rows.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged writes lock\",\n            )\n        })?;\n        let mut adopted_rows_guard = self.adopted_rows.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged adopted writes lock\",\n            )\n        })?;\n        let mut by_identity_guard = self.by_identity.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged identity index lock\",\n            )\n        })?;\n        let mut file_data_guard = self.file_data_writes.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged file data lock\",\n            )\n        })?;\n        let mut insert_identities_guard = 
self.insert_identities.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged insert identity lock\",\n            )\n        })?;\n        let mut commit_members_guard = self.commit_members_by_version.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged commit membership lock\",\n            )\n        })?;\n        let mut extra_parents_guard =\n            self.extra_commit_parents_by_version.lock().map_err(|_| {\n                LixError::new(\n                    \"LIX_ERROR_UNKNOWN\",\n                    \"failed to acquire transaction staged extra commit parents lock\",\n                )\n            })?;\n        let result = Ok(PreparedWriteSet {\n            state_rows: std::mem::take(&mut *rows_guard)\n                .into_iter()\n                .flatten()\n                .collect(),\n            adopted_rows: std::mem::take(&mut *adopted_rows_guard)\n                .into_iter()\n                .flatten()\n                .collect(),\n            insert_identities: std::mem::take(&mut *insert_identities_guard),\n            commit_members_by_version: std::mem::take(&mut *commit_members_guard),\n            extra_commit_parents_by_version: std::mem::take(&mut *extra_parents_guard),\n            file_data_writes: std::mem::take(&mut *file_data_guard),\n        });\n        by_identity_guard.clear();\n        result\n    }\n\n    /// Records an additional parent for the commit generated for `version_id`.\n    ///\n    /// Normal writes parent the new commit to the version's previous head.\n    /// Merges add the source version head as an extra parent so the commit graph\n    /// preserves branch ancestry while tracked-state roots still apply source\n    /// rows onto the target root.\n    pub(crate) fn add_commit_parent(\n        &self,\n        version_id: String,\n      
  parent_commit_id: String,\n    ) -> Result<(), LixError> {\n        let mut guard = self.extra_commit_parents_by_version.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged extra commit parents lock\",\n            )\n        })?;\n        let parents = guard.entry(version_id).or_default();\n        if !parents.contains(&parent_commit_id) {\n            parents.push(parent_commit_id);\n        }\n        Ok(())\n    }\n\n    pub(crate) fn staged_commit_id(&self, version_id: &str) -> Result<Option<String>, LixError> {\n        let guard = self.commit_members_by_version.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged commit membership lock\",\n            )\n        })?;\n        Ok(guard\n            .get(version_id)\n            .map(|members| members.commit_id.clone()))\n    }\n\n    /// Stages a commit for `version_id` even if no tracked state rows changed.\n    ///\n    /// Merge uses this to record graph ancestry for convergent merges where the\n    /// target already has the same final state as the source, but the source\n    /// head is not reachable from the target head.\n    pub(crate) fn stage_empty_commit(&self, version_id: String) -> Result<String, LixError> {\n        let mut functions = self.functions.clone();\n        let mut guard = self.commit_members_by_version.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged commit membership lock\",\n            )\n        })?;\n        let members = guard.entry(version_id).or_insert_with(|| {\n            StagedCommitMembers::new(\n                functions.uuid_v7(),\n                functions.uuid_v7(),\n                functions.timestamp(),\n            )\n        });\n        members.allow_empty();\n        
Ok(members.commit_id.clone())\n    }\n\n    /// Builds the transaction-local read overlay from currently staged writes.\n    pub(crate) fn staging_overlay(self: &Arc<Self>) -> Result<PreparedStateRowOverlay, LixError> {\n        let by_identity_guard = self.by_identity.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged identity index lock\",\n            )\n        })?;\n        let slots = by_identity_guard\n            .iter()\n            .map(|(identity, slot)| (identity.clone(), *slot))\n            .collect();\n        Ok(PreparedStateRowOverlay {\n            staged_writes: Arc::clone(self),\n            slots,\n        })\n    }\n\n    /// Stages one prepared write batch into this transaction.\n    ///\n    /// Frontends hand raw `TransactionWriteRow`s to `Transaction`; normalization prepares\n    /// stable `PreparedStateRow`s before this method indexes them for transaction-\n    /// local reads and commit routing.\n    pub(crate) fn stage_write(\n        &self,\n        write: PreparedTransactionWrite,\n    ) -> Result<TransactionWriteOutcome, LixError> {\n        let (mode, count) = match &write {\n            PreparedTransactionWrite::Rows { mode, rows } => (Some(*mode), rows.len() as u64),\n            PreparedTransactionWrite::RowsWithFileData { mode, count, .. 
} => (Some(*mode), *count),\n            PreparedTransactionWrite::AdoptedChanges { rows } => (None, rows.len() as u64),\n        };\n        let mut functions = self.functions.clone();\n        let (rows, adopted_rows, file_data_writes) = self.state_rows_from_stage_write(write)?;\n        for row in &rows {\n            validate_commit_membership_support(row)?;\n        }\n        for row in &adopted_rows {\n            validate_adopted_commit_membership_support(row)?;\n        }\n        reject_duplicate_present_rows_in_batch(&rows)?;\n        let mut guard = self.rows.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged writes lock\",\n            )\n        })?;\n        let mut adopted_guard = self.adopted_rows.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged adopted writes lock\",\n            )\n        })?;\n        let mut by_identity_guard = self.by_identity.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged identity index lock\",\n            )\n        })?;\n        let mut commit_members_guard = self.commit_members_by_version.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged commit membership lock\",\n            )\n        })?;\n        let mut insert_identities_guard = self.insert_identities.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged insert identity lock\",\n            )\n        })?;\n        for mut row in rows {\n            let identity = PreparedStateRowIdentity::from(&row);\n            if mode == Some(TransactionWriteMode::Insert)\n                && 
by_identity_guard.contains_key(&identity)\n            {\n                return Err(duplicate_insert_identity_error(&row));\n            }\n            if matches!(by_identity_guard.get(&identity), Some(RowSlot::Adopted(_))) {\n                return Err(conflicting_adopted_identity_error(&row));\n            }\n            let existing_slot = by_identity_guard.remove(&identity);\n            if let Some(RowSlot::State(index)) = existing_slot {\n                if let Some(previous) = guard.get_mut(index).and_then(Option::take) {\n                    remove_row_from_commit_members(&mut commit_members_guard, &previous);\n                }\n            }\n            add_row_to_commit_members(&mut commit_members_guard, &mut row, &mut functions);\n            let identity = PreparedStateRowIdentity::from(&row);\n            if mode == Some(TransactionWriteMode::Insert) {\n                insert_identities_guard.insert(identity.clone(), row.origin.clone());\n            }\n            let slot = match existing_slot {\n                Some(RowSlot::State(index)) => {\n                    guard[index] = Some(row);\n                    RowSlot::State(index)\n                }\n                _ => {\n                    let index = guard.len();\n                    guard.push(Some(row));\n                    RowSlot::State(index)\n                }\n            };\n            by_identity_guard.insert(identity, slot);\n        }\n        for mut row in adopted_rows {\n            let identity = PreparedStateRowIdentity::from(&row);\n            if by_identity_guard.contains_key(&identity) {\n                return Err(conflicting_adopted_projection_error(&row));\n            }\n            add_adopted_row_to_commit_members(&mut commit_members_guard, &mut row, &mut functions);\n            let identity = PreparedStateRowIdentity::from(&row);\n            let index = adopted_guard.len();\n            adopted_guard.push(Some(row));\n            
by_identity_guard.insert(identity, RowSlot::Adopted(index));\n        }\n        if !file_data_writes.is_empty() {\n            self.file_data_writes\n                .lock()\n                .map_err(|_| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        \"failed to acquire transaction staged file data lock\",\n                    )\n                })?\n                .extend(file_data_writes);\n        }\n        Ok(TransactionWriteOutcome { count })\n    }\n\n    fn state_rows_from_stage_write(\n        &self,\n        write: PreparedTransactionWrite,\n    ) -> Result<\n        (\n            Vec<PreparedStateRow>,\n            Vec<PreparedAdoptedStateRow>,\n            Vec<TransactionFileData>,\n        ),\n        LixError,\n    > {\n        let mut state_rows = Vec::new();\n        let mut adopted_rows = Vec::new();\n        let mut file_data_writes = Vec::new();\n        match write {\n            PreparedTransactionWrite::Rows { rows, .. 
} => {\n                state_rows.extend(rows);\n            }\n            PreparedTransactionWrite::RowsWithFileData {\n                rows, file_data, ..\n            } => {\n                state_rows.extend(rows);\n                file_data_writes.extend(file_data);\n            }\n            PreparedTransactionWrite::AdoptedChanges { rows } => {\n                adopted_rows.extend(rows);\n            }\n        }\n        Ok((state_rows, adopted_rows, file_data_writes))\n    }\n}\n\n/// Read overlay derived from staged transaction writes.\npub(crate) struct PreparedStateRowOverlay {\n    staged_writes: Arc<TransactionWriteBuffer>,\n    slots: BTreeMap<PreparedStateRowIdentity, RowSlot>,\n}\n\npub(crate) struct StagedScanParts {\n    pub(crate) rows: Vec<MaterializedLiveStateRow>,\n    pub(crate) hidden_identities: BTreeSet<PreparedStateRowIdentity>,\n}\n\nimpl PreparedStateRowOverlay {\n    /// Returns staged rows visible for a scan request.\n    #[cfg(test)]\n    pub(crate) fn scan(\n        &self,\n        request: &LiveStateScanRequest,\n    ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n        Ok(self.scan_parts(request)?.rows)\n    }\n\n    /// Returns staged rows and base-row identities hidden by staged rows in one pass.\n    ///\n    /// Tombstones hide base rows even when the request does not include\n    /// tombstone rows in the visible result set.\n    pub(crate) fn scan_parts(\n        &self,\n        request: &LiveStateScanRequest,\n    ) -> Result<StagedScanParts, LixError> {\n        let rows_guard = self.staged_writes.rows.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged writes lock\",\n            )\n        })?;\n        let adopted_guard = self.staged_writes.adopted_rows.lock().map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                \"failed to acquire transaction staged adopted writes 
lock\",\n            )\n        })?;\n\n        let mut rows = Vec::new();\n        let mut hidden_identities = BTreeSet::new();\n        for (identity, slot) in &self.slots {\n            match *slot {\n                RowSlot::State(index) => {\n                    let Some(row) = rows_guard.get(index).and_then(Option::as_ref) else {\n                        continue;\n                    };\n                    if !staged_row_identity_matches_scan(row, request) {\n                        continue;\n                    }\n                    hidden_identities.insert(identity.clone());\n                    if row.snapshot.is_some() || request.filter.include_tombstones {\n                        rows.push(MaterializedLiveStateRow::from(row));\n                    }\n                }\n                RowSlot::Adopted(index) => {\n                    let Some(row) = adopted_guard.get(index).and_then(Option::as_ref) else {\n                        continue;\n                    };\n                    if !adopted_row_identity_matches_scan(row, request) {\n                        continue;\n                    }\n                    hidden_identities.insert(identity.clone());\n                    if row.snapshot.is_some() || request.filter.include_tombstones {\n                        rows.push(MaterializedLiveStateRow::from(row));\n                    }\n                }\n            }\n        }\n        Ok(StagedScanParts {\n            rows,\n            hidden_identities,\n        })\n    }\n\n    /// Returns a staged exact-row answer, if this transaction has one.\n    #[cfg(test)]\n    pub(crate) fn load_exact(&self, request: &LiveStateRowRequest) -> Option<StagedExactRow> {\n        let untracked_identity = PreparedStateRowIdentity::from_exact_request(request, true)?;\n        if let Some(row) = self.load_state_slot(&untracked_identity) {\n            return Some(if row.snapshot.is_none() {\n                StagedExactRow::Tombstone\n            } else {\n     
           StagedExactRow::Row(MaterializedLiveStateRow::from(&row))\n            });\n        }\n\n        let identity = PreparedStateRowIdentity::from_exact_request(request, false)?;\n        if let Some(row) = self.load_state_slot(&identity) {\n            return Some(if row.snapshot.is_none() {\n                StagedExactRow::Tombstone\n            } else {\n                StagedExactRow::Row(MaterializedLiveStateRow::from(&row))\n            });\n        }\n        self.load_adopted_slot(&identity).map(|row| {\n            if row.snapshot.is_none() {\n                StagedExactRow::Tombstone\n            } else {\n                StagedExactRow::Row(MaterializedLiveStateRow::from(&row))\n            }\n        })\n    }\n\n    #[cfg(test)]\n    fn load_state_slot(&self, identity: &PreparedStateRowIdentity) -> Option<PreparedStateRow> {\n        let Some(RowSlot::State(index)) = self.slots.get(identity).copied() else {\n            return None;\n        };\n        self.staged_writes\n            .rows\n            .lock()\n            .ok()?\n            .get(index)?\n            .as_ref()\n            .cloned()\n    }\n\n    #[cfg(test)]\n    fn load_adopted_slot(\n        &self,\n        identity: &PreparedStateRowIdentity,\n    ) -> Option<PreparedAdoptedStateRow> {\n        let Some(RowSlot::Adopted(index)) = self.slots.get(identity).copied() else {\n            return None;\n        };\n        self.staged_writes\n            .adopted_rows\n            .lock()\n            .ok()?\n            .get(index)?\n            .as_ref()\n            .cloned()\n    }\n}\n\n#[cfg(test)]\npub(crate) enum StagedExactRow {\n    Row(MaterializedLiveStateRow),\n    Tombstone,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]\npub(crate) struct PreparedStateRowIdentity {\n    untracked: bool,\n    schema_key: String,\n    entity_id: crate::entity_identity::EntityIdentity,\n    file_id: Option<String>,\n    version_id: String,\n}\n\nimpl 
PreparedStateRowIdentity {\n    fn from_staged_row(row: &PreparedStateRow) -> Self {\n        Self {\n            untracked: row.untracked,\n            schema_key: row.schema_key.clone(),\n            entity_id: row.entity_id.clone(),\n            file_id: row.file_id.clone(),\n            version_id: row.version_id.clone(),\n        }\n    }\n\n    #[cfg(test)]\n    fn from_exact_request(request: &LiveStateRowRequest, untracked: bool) -> Option<Self> {\n        let file_id = match &request.file_id {\n            NullableKeyFilter::Null => None,\n            NullableKeyFilter::Value(value) => Some(value.clone()),\n            // Exact overlay lookup requires a concrete row identity.\n            NullableKeyFilter::Any => return None,\n        };\n        Some(Self {\n            untracked,\n            schema_key: request.schema_key.clone(),\n            entity_id: request.entity_id.clone(),\n            file_id,\n            version_id: request.version_id.clone(),\n        })\n    }\n\n    pub(crate) fn schema_key(&self) -> &str {\n        &self.schema_key\n    }\n\n    pub(crate) fn entity_id(&self) -> &crate::entity_identity::EntityIdentity {\n        &self.entity_id\n    }\n\n    pub(crate) fn domain(&self) -> Domain {\n        Domain::exact_file(\n            self.version_id.clone(),\n            self.untracked,\n            self.file_id.clone(),\n        )\n    }\n}\n\nimpl From<&PreparedStateRow> for PreparedStateRowIdentity {\n    fn from(row: &PreparedStateRow) -> Self {\n        Self::from_staged_row(row)\n    }\n}\n\nimpl From<&PreparedAdoptedStateRow> for PreparedStateRowIdentity {\n    fn from(row: &PreparedAdoptedStateRow) -> Self {\n        Self {\n            untracked: false,\n            schema_key: row.schema_key.clone(),\n            entity_id: row.entity_id.clone(),\n            file_id: row.file_id.clone(),\n            version_id: row.version_id.clone(),\n        }\n    }\n}\n\nimpl From<&MaterializedLiveStateRow> for 
PreparedStateRowIdentity {\n    fn from(row: &MaterializedLiveStateRow) -> Self {\n        Self {\n            untracked: row.untracked,\n            schema_key: row.schema_key.clone(),\n            entity_id: row.entity_id.clone(),\n            file_id: row.file_id.clone(),\n            version_id: row.version_id.clone(),\n        }\n    }\n}\n\nfn validate_commit_membership_support(row: &PreparedStateRow) -> Result<(), LixError> {\n    if row.global && row.version_id != GLOBAL_VERSION_ID {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"engine global staged rows must use the global version id\",\n        ));\n    }\n    Ok(())\n}\n\nfn validate_adopted_commit_membership_support(\n    row: &PreparedAdoptedStateRow,\n) -> Result<(), LixError> {\n    if row.global && row.version_id != GLOBAL_VERSION_ID {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"engine global adopted rows must use the global version id\",\n        ));\n    }\n    Ok(())\n}\n\nfn reject_duplicate_present_rows_in_batch(rows: &[PreparedStateRow]) -> Result<(), LixError> {\n    let mut pending_present_rows = BTreeMap::<PreparedStateRowIdentity, &PreparedStateRow>::new();\n    for row in rows {\n        let identity = PreparedStateRowIdentity::from(row);\n        if row.snapshot.is_none() {\n            pending_present_rows.remove(&identity);\n            continue;\n        }\n        if let Some(previous) = pending_present_rows.insert(identity, row) {\n            return Err(duplicate_staged_present_row_error(row, previous));\n        }\n    }\n    Ok(())\n}\n\nfn duplicate_staged_present_row_error(\n    row: &PreparedStateRow,\n    previous: &PreparedStateRow,\n) -> LixError {\n    let message = logical_primary_key_violation_message(row.origin.as_ref())\n        .unwrap_or_else(|| {\n            format!(\n                \"primary-key constraint violation on schema '{}': duplicate staged rows for entity_id '{}' in version 
'{}'\",\n                row.schema_key,\n                previous\n                    .entity_id\n                    .as_json_array_text()\n                    .unwrap_or_else(|_| \"<invalid entity_id>\".to_string()),\n                row.version_id\n            )\n        });\n    LixError::new(LixError::CODE_UNIQUE, message)\n}\n\npub(crate) fn duplicate_insert_identity_message(\n    schema_key: &str,\n    entity_id: &crate::entity_identity::EntityIdentity,\n    version_id: Option<&str>,\n    origin: Option<&TransactionWriteOrigin>,\n) -> String {\n    if let Some(message) = logical_primary_key_violation_message(origin) {\n        return message;\n    }\n    let entity_id = entity_id\n        .as_json_array_text()\n        .unwrap_or_else(|_| \"<invalid entity_id>\".to_string());\n    match version_id {\n        Some(version_id) => format!(\n            \"primary-key constraint violation on schema '{schema_key}': INSERT would duplicate entity_id '{entity_id}' in version '{version_id}'\"\n        ),\n        None => format!(\n            \"primary-key constraint violation on schema '{schema_key}': INSERT would duplicate entity_id '{entity_id}'\"\n        ),\n    }\n}\n\nfn duplicate_insert_identity_error(row: &PreparedStateRow) -> LixError {\n    let message = duplicate_insert_identity_message(\n        &row.schema_key,\n        &row.entity_id,\n        Some(&row.version_id),\n        row.origin.as_ref(),\n    );\n    LixError::new(LixError::CODE_UNIQUE, message)\n}\n\nfn logical_primary_key_violation_message(\n    origin: Option<&TransactionWriteOrigin>,\n) -> Option<String> {\n    let origin = origin?;\n    if origin.operation != TransactionWriteOperation::Insert {\n        return None;\n    }\n    let primary_key = origin.primary_key.as_ref()?;\n    Some(format!(\n        \"primary-key constraint violation on table '{}': INSERT would duplicate {}\",\n        origin.surface,\n        format_logical_primary_key(primary_key)\n    ))\n}\n\nfn 
format_logical_primary_key(primary_key: &LogicalPrimaryKey) -> String {\n    primary_key\n        .columns\n        .iter()\n        .enumerate()\n        .map(|(index, column)| {\n            let value = primary_key\n                .values\n                .get(index)\n                .map(String::as_str)\n                .unwrap_or(\"<missing>\");\n            format!(\"{column} '{value}'\")\n        })\n        .collect::<Vec<_>>()\n        .join(\", \")\n}\n\nfn conflicting_adopted_identity_error(row: &PreparedStateRow) -> LixError {\n    LixError::new(\n        LixError::CODE_UNIQUE,\n        format!(\n            \"transaction cannot stage a new row and an adopted projection for schema '{}' entity_id '{}' in version '{}'\",\n            row.schema_key,\n            row.entity_id\n                .as_json_array_text()\n                .unwrap_or_else(|_| \"<invalid entity_id>\".to_string()),\n            row.version_id\n        ),\n    )\n}\n\nfn conflicting_adopted_projection_error(row: &PreparedAdoptedStateRow) -> LixError {\n    LixError::new(\n        LixError::CODE_UNIQUE,\n        format!(\n            \"transaction cannot stage duplicate adopted projections for schema '{}' entity_id '{}' in version '{}'\",\n            row.schema_key,\n            row.entity_id\n                .as_json_array_text()\n                .unwrap_or_else(|_| \"<invalid entity_id>\".to_string()),\n            row.version_id\n        ),\n    )\n}\n\nfn add_row_to_commit_members(\n    members_by_version: &mut BTreeMap<String, StagedCommitMembers>,\n    row: &mut PreparedStateRow,\n    functions: &mut dyn FunctionProvider,\n) {\n    if row.untracked {\n        return;\n    }\n    let change_id = row\n        .change_id\n        .clone()\n        .expect(\"tracked staged rows must carry change_id for commit membership\");\n    let members = members_by_version\n        .entry(row.version_id.clone())\n        .or_insert_with(|| {\n            StagedCommitMembers::new(\n             
   functions.uuid_v7(),\n                functions.uuid_v7(),\n                functions.timestamp(),\n            )\n        });\n    row.commit_id = Some(members.commit_id.clone());\n    members.add_change_id(change_id);\n}\n\nfn add_adopted_row_to_commit_members(\n    members_by_version: &mut BTreeMap<String, StagedCommitMembers>,\n    row: &mut PreparedAdoptedStateRow,\n    functions: &mut dyn FunctionProvider,\n) {\n    let members = members_by_version\n        .entry(row.version_id.clone())\n        .or_insert_with(|| {\n            StagedCommitMembers::new(\n                functions.uuid_v7(),\n                functions.uuid_v7(),\n                functions.timestamp(),\n            )\n        });\n    row.commit_id = members.commit_id.clone();\n    members.add_change_id(row.change_id.clone());\n}\n\nfn remove_row_from_commit_members(\n    members_by_version: &mut BTreeMap<String, StagedCommitMembers>,\n    row: &PreparedStateRow,\n) {\n    if row.untracked {\n        return;\n    }\n    let Some(members) = members_by_version.get_mut(&row.version_id) else {\n        return;\n    };\n    let Some(change_id) = row.change_id.as_deref() else {\n        return;\n    };\n    members.remove_change_id(change_id);\n    if members.is_empty() {\n        members_by_version.remove(&row.version_id);\n    }\n}\n\nfn adopted_row_identity_matches_scan(\n    row: &PreparedAdoptedStateRow,\n    request: &LiveStateScanRequest,\n) -> bool {\n    if !request.filter.schema_keys.is_empty()\n        && !request.filter.schema_keys.contains(&row.schema_key)\n    {\n        return false;\n    }\n    if !request.filter.entity_ids.is_empty() && !request.filter.entity_ids.contains(&row.entity_id)\n    {\n        return false;\n    }\n    if !request.filter.version_ids.is_empty()\n        && !request.filter.version_ids.contains(&row.version_id)\n    {\n        return false;\n    }\n    if request.filter.untracked == Some(true) {\n        return false;\n    }\n    
nullable_key_matches_filters(&row.file_id, &request.filter.file_ids)\n}\n\nfn staged_row_identity_matches_scan(\n    row: &PreparedStateRow,\n    request: &LiveStateScanRequest,\n) -> bool {\n    if !request.filter.schema_keys.is_empty()\n        && !request.filter.schema_keys.contains(&row.schema_key)\n    {\n        return false;\n    }\n    if !request.filter.entity_ids.is_empty() && !request.filter.entity_ids.contains(&row.entity_id)\n    {\n        return false;\n    }\n    if !request.filter.version_ids.is_empty()\n        && !request.filter.version_ids.contains(&row.version_id)\n    {\n        return false;\n    }\n    if request\n        .filter\n        .untracked\n        .is_some_and(|untracked| row.untracked != untracked)\n    {\n        return false;\n    }\n    nullable_key_matches_filters(&row.file_id, &request.filter.file_ids)\n}\n\nfn nullable_key_matches_filters(\n    value: &Option<String>,\n    filters: &[NullableKeyFilter<String>],\n) -> bool {\n    filters.is_empty()\n        || filters\n            .iter()\n            .any(|filter| nullable_key_matches_filter(value, filter))\n}\n\nfn nullable_key_matches_filter(value: &Option<String>, filter: &NullableKeyFilter<String>) -> bool {\n    match filter {\n        NullableKeyFilter::Any => true,\n        NullableKeyFilter::Null => value.is_none(),\n        NullableKeyFilter::Value(expected) => value.as_ref() == Some(expected),\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::functions::SharedFunctionProvider;\n    use crate::live_state::{LiveStateFilter, LiveStateRowRequest};\n\n    #[tokio::test]\n    async fn staging_overlay_uses_last_staged_row_for_exact_load() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"sql2-duplicate-key\", \"first\")],\n            })\n            .expect(\"initial 
row should stage\");\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"sql2-duplicate-key\", \"second\")],\n            })\n            .expect(\"staging rows should succeed\");\n\n        let overlay = staged_writes\n            .staging_overlay()\n            .expect(\"overlay should build from staged rows\");\n        let row = overlay\n            .load_exact(&LiveStateRowRequest {\n                schema_key: \"lix_key_value\".to_string(),\n                version_id: \"global\".to_string(),\n                entity_id: crate::entity_identity::EntityIdentity::single(\"sql2-duplicate-key\"),\n                file_id: NullableKeyFilter::Null,\n            })\n            .expect(\"staged row should be visible\");\n\n        let StagedExactRow::Row(row) = row else {\n            panic!(\"latest staged row should not be a tombstone\");\n        };\n        assert_eq!(\n            row.snapshot_content.as_deref(),\n            Some(\"{\\\"key\\\":\\\"sql2-duplicate-key\\\",\\\"value\\\":\\\"second\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn staging_overlay_scan_returns_only_latest_row_per_identity() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"sql2-duplicate-key\", \"first\")],\n            })\n            .expect(\"initial row should stage\");\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"sql2-duplicate-key\", \"second\")],\n            })\n            .expect(\"staging rows should succeed\");\n\n        let overlay = staged_writes\n            .staging_overlay()\n            .expect(\"overlay should build from 
staged rows\");\n        let rows = overlay\n            .scan(&scan_request_for_key(\"sql2-duplicate-key\", false))\n            .expect(\"overlay scan should succeed\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(\n            rows[0].snapshot_content.as_deref(),\n            Some(\"{\\\"key\\\":\\\"sql2-duplicate-key\\\",\\\"value\\\":\\\"second\\\"}\")\n        );\n    }\n\n    #[tokio::test]\n    async fn staging_overlay_delete_hides_prior_staged_insert() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![\n                    state_row(\"sql2-delete-key\", \"visible\"),\n                    tombstone_row(\"sql2-delete-key\"),\n                ],\n            })\n            .expect(\"staging rows should succeed\");\n\n        let overlay = staged_writes\n            .staging_overlay()\n            .expect(\"overlay should build from staged rows\");\n        let exact = overlay\n            .load_exact(&exact_request_for_key(\"sql2-delete-key\"))\n            .expect(\"staged tombstone should answer exact load\");\n        assert!(matches!(exact, StagedExactRow::Tombstone));\n        assert!(overlay\n            .scan(&scan_request_for_key(\"sql2-delete-key\", false))\n            .expect(\"overlay scan should succeed\")\n            .is_empty());\n\n        let tombstones = overlay\n            .scan(&scan_request_for_key(\"sql2-delete-key\", true))\n            .expect(\"overlay scan should succeed\");\n        assert_eq!(tombstones.len(), 1);\n        assert_eq!(tombstones[0].snapshot_content, None);\n    }\n\n    #[tokio::test]\n    async fn staging_overlay_insert_after_delete_resurrects_row() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: 
TransactionWriteMode::Replace,\n                rows: vec![\n                    tombstone_row(\"sql2-resurrect-key\"),\n                    state_row(\"sql2-resurrect-key\", \"visible-again\"),\n                ],\n            })\n            .expect(\"staging rows should succeed\");\n\n        let overlay = staged_writes\n            .staging_overlay()\n            .expect(\"overlay should build from staged rows\");\n        let exact = overlay\n            .load_exact(&exact_request_for_key(\"sql2-resurrect-key\"))\n            .expect(\"staged row should answer exact load\");\n\n        let StagedExactRow::Row(row) = exact else {\n            panic!(\"latest staged row should be visible\");\n        };\n        assert_eq!(\n            row.snapshot_content.as_deref(),\n            Some(\"{\\\"key\\\":\\\"sql2-resurrect-key\\\",\\\"value\\\":\\\"visible-again\\\"}\")\n        );\n        assert_eq!(\n            overlay\n                .scan(&scan_request_for_key(\"sql2-resurrect-key\", false))\n                .expect(\"overlay scan should succeed\")\n                .len(),\n            1\n        );\n    }\n\n    #[tokio::test]\n    async fn staged_writes_drain_returns_coalesced_latest_rows() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![\n                    state_row(\"sql2-key-a\", \"first\"),\n                    state_row(\"sql2-key-b\", \"only\"),\n                ],\n            })\n            .expect(\"initial rows should stage\");\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"sql2-key-a\", \"second\")],\n            })\n            .expect(\"staging rows should succeed\");\n\n        let drained = staged_writes.drain().expect(\"drain should 
succeed\");\n\n        assert_eq!(drained.state_rows.len(), 2);\n        assert!(drained.state_rows.iter().any(|row| {\n            row.entity_id == crate::entity_identity::EntityIdentity::single(\"sql2-key-a\")\n                && row\n                    .snapshot\n                    .as_ref()\n                    .map(|snapshot| snapshot.normalized.as_ref())\n                    == Some(\"{\\\"key\\\":\\\"sql2-key-a\\\",\\\"value\\\":\\\"second\\\"}\")\n        }));\n        assert!(drained.state_rows.iter().any(|row| {\n            row.entity_id == crate::entity_identity::EntityIdentity::single(\"sql2-key-b\")\n                && row\n                    .snapshot\n                    .as_ref()\n                    .map(|snapshot| snapshot.normalized.as_ref())\n                    == Some(\"{\\\"key\\\":\\\"sql2-key-b\\\",\\\"value\\\":\\\"only\\\"}\")\n        }));\n    }\n\n    #[tokio::test]\n    async fn staged_writes_drain_preserves_file_data_payloads() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::RowsWithFileData {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"file-readme\", \"descriptor\")],\n                file_data: vec![TransactionFileData {\n                    file_id: \"file-readme\".to_string(),\n                    version_id: \"global\".to_string(),\n                    untracked: true,\n                    data: b\"hello\".to_vec(),\n                }],\n                count: 1,\n            })\n            .expect(\"staging rows with file data should succeed\");\n\n        let drained = staged_writes.drain().expect(\"drain should succeed\");\n\n        assert_eq!(drained.state_rows.len(), 1);\n        assert_eq!(drained.file_data_writes.len(), 1);\n        assert_eq!(drained.file_data_writes[0].file_id, \"file-readme\");\n        assert_eq!(drained.file_data_writes[0].data, b\"hello\");\n    }\n\n    
#[tokio::test]\n    async fn staged_writes_track_commit_members_for_tracked_global_rows() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"tracked-key\", \"value\").with_tracked()],\n            })\n            .expect(\"tracked global row should stage\");\n\n        let drained = staged_writes.drain().expect(\"drain should succeed\");\n        let members = drained\n            .commit_members_by_version\n            .get(\"global\")\n            .expect(\"global commit members should exist\");\n        assert_eq!(\n            members.change_ids.iter().cloned().collect::<Vec<_>>(),\n            vec![\"test-change-id\".to_string()]\n        );\n    }\n\n    #[tokio::test]\n    async fn staged_writes_do_not_track_untracked_rows_as_commit_members() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"untracked-key\", \"value\")],\n            })\n            .expect(\"untracked row should stage\");\n\n        let drained = staged_writes.drain().expect(\"drain should succeed\");\n        assert!(drained.commit_members_by_version.is_empty());\n    }\n\n    #[tokio::test]\n    async fn staged_writes_replace_commit_member_on_tracked_overwrite() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"overwrite-key\", \"first\")\n                    .with_tracked()\n                    .with_change_id(\"change-first\")],\n            })\n            .expect(\"initial tracked row should stage\");\n        staged_writes\n            
.stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"overwrite-key\", \"second\")\n                    .with_tracked()\n                    .with_change_id(\"change-second\")],\n            })\n            .expect(\"tracked overwrite should stage\");\n\n        let drained = staged_writes.drain().expect(\"drain should succeed\");\n        let members = drained\n            .commit_members_by_version\n            .get(\"global\")\n            .expect(\"global commit members should exist\");\n        assert_eq!(\n            members.change_ids.iter().cloned().collect::<Vec<_>>(),\n            vec![\"change-second\".to_string()]\n        );\n    }\n\n    #[tokio::test]\n    async fn staged_writes_keep_tracked_and_untracked_domains_separate() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![\n                    state_row(\"tracked-to-untracked-key\", \"tracked\")\n                        .with_tracked()\n                        .with_change_id(\"change-tracked\"),\n                    state_row(\"tracked-to-untracked-key\", \"untracked\")\n                        .with_change_id(\"change-untracked\"),\n                ],\n            })\n            .expect(\"untracked overwrite should stage\");\n\n        let drained = staged_writes.drain().expect(\"drain should succeed\");\n        assert_eq!(drained.state_rows.len(), 2);\n        assert!(drained\n            .state_rows\n            .iter()\n            .any(|row| { row.change_id.as_deref() == Some(\"change-tracked\") && !row.untracked }));\n        assert!(drained\n            .state_rows\n            .iter()\n            .any(|row| { row.change_id.as_deref() == Some(\"change-untracked\") && row.untracked }));\n        let members = drained\n            
.commit_members_by_version\n            .get(\"global\")\n            .expect(\"tracked commit member should remain in tracked domain\");\n        assert_eq!(\n            members.change_ids.iter().cloned().collect::<Vec<_>>(),\n            vec![\"change-tracked\".to_string()]\n        );\n    }\n\n    #[tokio::test]\n    async fn staged_writes_reject_duplicate_present_rows_in_one_batch() {\n        let staged_writes = test_staged_writes();\n\n        let error = staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![\n                    state_row(\"duplicate-present-key\", \"first\"),\n                    state_row(\"duplicate-present-key\", \"second\"),\n                ],\n            })\n            .expect_err(\"same-batch duplicate present rows should fail\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n        assert!(\n            error.message.contains(\"primary-key constraint violation\"),\n            \"error should explain the duplicate primary key: {error:?}\"\n        );\n    }\n\n    #[tokio::test]\n    async fn staged_writes_insert_keeps_tracked_and_untracked_rows_as_distinct_identities() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Insert,\n                rows: vec![\n                    state_row(\"shared-domain-key\", \"tracked\").with_tracked(),\n                    state_row(\"shared-domain-key\", \"untracked\"),\n                ],\n            })\n            .expect(\"tracked and untracked rows are distinct domain identities\");\n\n        let drained = staged_writes.drain().expect(\"drain should succeed\");\n        assert_eq!(drained.state_rows.len(), 2);\n        assert!(drained.state_rows.iter().any(|row| {\n            row.entity_id == 
crate::entity_identity::EntityIdentity::single(\"shared-domain-key\")\n                && !row.untracked\n        }));\n        assert!(drained.state_rows.iter().any(|row| {\n            row.entity_id == crate::entity_identity::EntityIdentity::single(\"shared-domain-key\")\n                && row.untracked\n        }));\n    }\n\n    #[tokio::test]\n    async fn staged_writes_track_active_version_members_separately() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"active-version-key\", \"value\")\n                    .with_tracked()\n                    .with_version(\"version-a\")],\n            })\n            .expect(\"active-version tracked staging should accumulate members\");\n\n        let drained = staged_writes.drain().expect(\"drain should succeed\");\n        let members = drained\n            .commit_members_by_version\n            .get(\"version-a\")\n            .expect(\"active-version commit members should exist\");\n        assert_eq!(\n            members.change_ids.iter().cloned().collect::<Vec<_>>(),\n            vec![\"test-change-id\".to_string()]\n        );\n    }\n\n    #[tokio::test]\n    async fn staged_writes_reject_global_rows_with_non_global_version_id() {\n        let staged_writes = test_staged_writes();\n\n        let error = staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![{\n                    let mut row = state_row(\"invalid-global-key\", \"value\");\n                    row.version_id = \"version-a\".to_string();\n                    row\n                }],\n            })\n            .expect_err(\"global row with non-global version should fail\");\n\n        assert!(error\n            .message\n            .contains(\"global staged rows 
must use the global version id\"));\n    }\n\n    #[tokio::test]\n    async fn staging_overlay_identity_matches_live_state_conflict_key() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"shared-entity\", \"base\")],\n            })\n            .expect(\"initial same-identity row should stage\");\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![\n                    state_row(\"shared-entity\", \"base\"),\n                    state_row(\"shared-entity\", \"other-version\").with_version(\"version-b\"),\n                    state_row(\"shared-entity\", \"other-schema\").with_schema(\"other_schema\"),\n                    state_row(\"shared-entity\", \"other-file\").with_file_id(\"file-a\"),\n                    state_row(\"shared-entity\", \"tracked\").with_tracked(),\n                ],\n            })\n            .expect(\"staging rows should succeed\");\n\n        let overlay = staged_writes\n            .staging_overlay()\n            .expect(\"overlay should build from staged rows\");\n        let rows = overlay\n            .scan(&LiveStateScanRequest {\n                filter: LiveStateFilter {\n                    entity_ids: vec![crate::entity_identity::EntityIdentity::single(\n                        \"shared-entity\",\n                    )],\n                    include_tombstones: true,\n                    ..LiveStateFilter::default()\n                },\n                ..LiveStateScanRequest::default()\n            })\n            .expect(\"overlay scan should succeed\");\n\n        assert_eq!(rows.len(), 5);\n        assert_eq!(\n            rows.iter()\n                .filter(|row| row.entity_id\n                    == 
crate::entity_identity::EntityIdentity::single(\"shared-entity\")\n                    && row.version_id == \"global\"\n                    && row.schema_key == \"lix_key_value\"\n                    && row.file_id.is_none())\n                .count(),\n            2\n        );\n        assert!(rows.iter().any(|row| {\n            row.snapshot_content.as_deref()\n                == Some(\"{\\\"key\\\":\\\"shared-entity\\\",\\\"value\\\":\\\"tracked\\\"}\")\n        }));\n    }\n\n    #[tokio::test]\n    async fn staged_writes_use_injected_function_provider_for_commit_metadata() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"sql2-functions-key\", \"value\").with_tracked()],\n            })\n            .expect(\"staging rows should succeed\");\n\n        let drained = staged_writes.drain().expect(\"drain should succeed\");\n        let members = drained\n            .commit_members_by_version\n            .get(\"global\")\n            .expect(\"global commit members should exist\");\n        assert_eq!(members.commit_id, \"test-uuid-1\");\n        assert_eq!(members.commit_change_id, \"test-uuid-2\");\n        assert_eq!(members.created_at, \"test-timestamp-1\");\n    }\n\n    #[tokio::test]\n    async fn staged_writes_stamp_tracked_rows_with_commit_id_during_staging() {\n        let staged_writes = test_staged_writes();\n\n        staged_writes\n            .stage_write(PreparedTransactionWrite::Rows {\n                mode: TransactionWriteMode::Replace,\n                rows: vec![state_row(\"tracked-commit-key\", \"value\").with_tracked()],\n            })\n            .expect(\"tracked row should stage\");\n\n        let drained = staged_writes.drain().expect(\"drain should succeed\");\n        assert_eq!(drained.state_rows.len(), 1);\n        assert_eq!(\n            
drained.state_rows[0].commit_id.as_deref(),\n            Some(\"test-uuid-1\")\n        );\n        assert_eq!(\n            drained\n                .commit_members_by_version\n                .get(\"global\")\n                .expect(\"global commit members should exist\")\n                .commit_id,\n            \"test-uuid-1\"\n        );\n    }\n\n    fn test_staged_writes() -> Arc<TransactionWriteBuffer> {\n        Arc::new(TransactionWriteBuffer::new(SharedFunctionProvider::new(\n            Box::new(TestFunctionProvider::default()) as Box<dyn FunctionProvider + Send>,\n        )))\n    }\n\n    #[derive(Default)]\n    struct TestFunctionProvider {\n        uuid_count: usize,\n        timestamp_count: usize,\n    }\n\n    impl FunctionProvider for TestFunctionProvider {\n        fn uuid_v7(&mut self) -> String {\n            self.uuid_count += 1;\n            format!(\"test-uuid-{}\", self.uuid_count)\n        }\n\n        fn timestamp(&mut self) -> String {\n            self.timestamp_count += 1;\n            format!(\"test-timestamp-{}\", self.timestamp_count)\n        }\n    }\n\n    fn state_row(key: &str, value: &str) -> PreparedStateRow {\n        let snapshot = stage_json_from_value(\n            TransactionJson::from_value_for_test(serde_json::json!({ \"key\": key, \"value\": value })),\n            \"test staged row snapshot_content\",\n        )\n        .expect(\"test snapshot should prepare\");\n        PreparedStateRow {\n            schema_plan_id: SchemaPlanId::for_test(0),\n            facts: crate::transaction::types::PreparedRowFacts::default(),\n            entity_id: crate::entity_identity::EntityIdentity::single(key),\n            schema_key: \"lix_key_value\".to_string(),\n            file_id: None,\n            snapshot: Some(snapshot),\n            metadata: None,\n            origin: None,\n            created_at: \"test-created-at\".to_string(),\n            updated_at: \"test-updated-at\".to_string(),\n            global: true,\n  
          change_id: None,\n            commit_id: None,\n            untracked: true,\n            version_id: \"global\".to_string(),\n        }\n    }\n\n    fn tombstone_row(key: &str) -> PreparedStateRow {\n        let mut row = state_row(key, \"deleted\");\n        row.snapshot = None;\n        row\n    }\n\n    fn exact_request_for_key(key: &str) -> LiveStateRowRequest {\n        LiveStateRowRequest {\n            schema_key: \"lix_key_value\".to_string(),\n            version_id: \"global\".to_string(),\n            entity_id: crate::entity_identity::EntityIdentity::single(key),\n            file_id: NullableKeyFilter::Null,\n        }\n    }\n\n    fn scan_request_for_key(key: &str, include_tombstones: bool) -> LiveStateScanRequest {\n        LiveStateScanRequest {\n            filter: LiveStateFilter {\n                schema_keys: vec![\"lix_key_value\".to_string()],\n                entity_ids: vec![crate::entity_identity::EntityIdentity::single(key)],\n                version_ids: vec![\"global\".to_string()],\n                file_ids: vec![NullableKeyFilter::Null],\n                include_tombstones,\n                ..LiveStateFilter::default()\n            },\n            ..LiveStateScanRequest::default()\n        }\n    }\n\n    trait StateRowTestExt {\n        fn with_schema(self, schema_key: &str) -> Self;\n        fn with_file_id(self, file_id: &str) -> Self;\n        fn with_tracked(self) -> Self;\n        fn with_version(self, version_id: &str) -> Self;\n        fn with_change_id(self, change_id: &str) -> Self;\n    }\n\n    impl StateRowTestExt for PreparedStateRow {\n        fn with_schema(mut self, schema_key: &str) -> Self {\n            self.schema_key = schema_key.to_string();\n            self\n        }\n\n        fn with_file_id(mut self, file_id: &str) -> Self {\n            self.file_id = Some(file_id.to_string());\n            self\n        }\n\n        fn with_tracked(mut self) -> Self {\n            self.untracked = false;\n    
        if self.change_id.is_none() {\n                self.change_id = Some(\"test-change-id\".to_string());\n            }\n            self\n        }\n\n        fn with_version(mut self, version_id: &str) -> Self {\n            self.version_id = version_id.to_string();\n            self.global = version_id == GLOBAL_VERSION_ID;\n            self\n        }\n\n        fn with_change_id(mut self, change_id: &str) -> Self {\n            self.change_id = Some(change_id.to_string());\n            self\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/transaction/types.rs",
    "content": "use std::{collections::BTreeSet, fmt, ops::Deref, sync::Arc};\n\nuse crate::catalog::SchemaPlanId;\nuse crate::entity_identity::EntityIdentity;\nuse crate::json_store::JsonRef;\nuse crate::live_state::MaterializedLiveStateRow;\nuse crate::tracked_state::MaterializedTrackedStateRow;\nuse crate::untracked_state::MaterializedUntrackedStateRow;\nuse crate::LixError;\nuse serde::{Deserialize, Deserializer, Serialize, Serializer};\nuse serde_json::Value as JsonValue;\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct TransactionJson {\n    value: Arc<JsonValue>,\n    normalized: Arc<str>,\n}\n\nimpl TransactionJson {\n    pub(crate) fn from_value(value: JsonValue, context: &str) -> Result<Self, LixError> {\n        let normalized: Arc<str> = serde_json::to_string(&value)\n            .map_err(|error| {\n                LixError::new(\n                    LixError::CODE_UNKNOWN,\n                    format!(\"{context} failed to serialize as normalized JSON: {error}\"),\n                )\n            })?\n            .into();\n        Ok(Self {\n            value: Arc::new(value),\n            normalized,\n        })\n    }\n\n    pub(crate) fn from_value_unchecked(value: JsonValue) -> Self {\n        Self::from_value(value, \"transaction JSON\")\n            .expect(\"serializing serde_json::Value should not fail\")\n    }\n\n    #[cfg(test)]\n    pub(crate) fn from_value_for_test(value: JsonValue) -> Self {\n        Self::from_value(value, \"test transaction JSON\").expect(\"test JSON should normalize\")\n    }\n\n    pub(crate) fn from_parts(value: Arc<JsonValue>, normalized: Arc<str>) -> Self {\n        Self { value, normalized }\n    }\n\n    pub(crate) fn value(&self) -> &JsonValue {\n        self.value.as_ref()\n    }\n\n    pub(crate) fn normalized(&self) -> &str {\n        self.normalized.as_ref()\n    }\n\n    pub(crate) fn into_parts(self) -> (Arc<JsonValue>, Arc<str>) {\n        (self.value, self.normalized)\n    }\n}\n\nimpl Deref 
for TransactionJson {\n    type Target = JsonValue;\n\n    fn deref(&self) -> &Self::Target {\n        self.value()\n    }\n}\n\nimpl PartialEq<JsonValue> for TransactionJson {\n    fn eq(&self, other: &JsonValue) -> bool {\n        self.value() == other\n    }\n}\n\nimpl PartialEq<TransactionJson> for JsonValue {\n    fn eq(&self, other: &TransactionJson) -> bool {\n        self == other.value()\n    }\n}\n\nimpl fmt::Display for TransactionJson {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        f.write_str(self.normalized())\n    }\n}\n\nimpl Serialize for TransactionJson {\n    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>\n    where\n        S: Serializer,\n    {\n        self.value.serialize(serializer)\n    }\n}\n\nimpl<'de> Deserialize<'de> for TransactionJson {\n    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n    where\n        D: Deserializer<'de>,\n    {\n        let value = JsonValue::deserialize(deserializer)?;\n        Self::from_value(value, \"transaction JSON\").map_err(serde::de::Error::custom)\n    }\n}\n\n/// State row accepted at the transaction write boundary.\n///\n/// External SQL/provider code must parse any textual JSON before constructing\n/// this type. The transaction receives `TransactionJson`, applies schema\n/// defaults and identity derivation, then prepares JSON refs in\n/// `PreparedStateRow` without serializing already-normalized JSON again.\n///\n/// SQL providers stage semantic rows, not final storage rows. INSERT providers\n/// may omit defaulted snapshot fields and leave `entity_id` unset when the\n/// target schema has an `x-lix-primary-key`; transaction normalization applies\n/// schema defaults and derives the final identity. Typed UPDATE providers must\n/// stage full rewritten snapshots after applying column assignments to the\n/// existing row. 
Raw `lix_state` snapshot updates are replacement writes, not\n/// implicit patches.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) struct TransactionWriteRow {\n    pub(crate) entity_id: Option<EntityIdentity>,\n    pub(crate) schema_key: String,\n    pub(crate) file_id: Option<String>,\n    pub(crate) snapshot: Option<TransactionJson>,\n    pub(crate) metadata: Option<TransactionJson>,\n    pub(crate) origin: Option<TransactionWriteOrigin>,\n    pub(crate) created_at: Option<String>,\n    pub(crate) updated_at: Option<String>,\n    pub(crate) global: bool,\n    pub(crate) change_id: Option<String>,\n    pub(crate) commit_id: Option<String>,\n    pub(crate) untracked: bool,\n    pub(crate) version_id: String,\n}\n\nimpl TransactionWriteRow {\n    pub(crate) fn schema_scope_version_id(&self) -> &str {\n        if self.global {\n            crate::GLOBAL_VERSION_ID\n        } else {\n            self.version_id.as_str()\n        }\n    }\n}\n\n/// User-facing write operation that produced one physical staged row.\n///\n/// Composite SQL surfaces such as `lix_file` lower one logical row into\n/// multiple state rows. 
The transaction layer owns final constraint validation,\n/// but error messages should stay in the vocabulary of the logical operation\n/// when the caller did not write the physical state schema directly.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) struct TransactionWriteOrigin {\n    pub(crate) surface: String,\n    pub(crate) operation: TransactionWriteOperation,\n    pub(crate) primary_key: Option<LogicalPrimaryKey>,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) enum TransactionWriteOperation {\n    Insert,\n    Update,\n    Delete,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) struct LogicalPrimaryKey {\n    pub(crate) columns: Vec<String>,\n    pub(crate) values: Vec<String>,\n}\n\n/// Incoming file payload paired with transaction write rows.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct TransactionFileData {\n    pub(crate) file_id: String,\n    pub(crate) version_id: String,\n    pub(crate) untracked: bool,\n    pub(crate) data: Vec<u8>,\n}\n\n/// Existing canonical change adopted into another version's tracked projection.\n///\n/// Merges use this path when the source side already owns the canonical\n/// changelog fact. 
The target commit references that existing change id and\n/// writes a target-version projection row without appending a copied change.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct TransactionAdoptedChange {\n    pub(crate) version_id: String,\n    pub(crate) change_id: String,\n    pub(crate) projected_row: MaterializedTrackedStateRow,\n}\n\n/// One decoded write batch accepted by the transaction boundary.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) enum TransactionWrite {\n    Rows {\n        mode: TransactionWriteMode,\n        rows: Vec<TransactionWriteRow>,\n    },\n    RowsWithFileData {\n        mode: TransactionWriteMode,\n        rows: Vec<TransactionWriteRow>,\n        file_data: Vec<TransactionFileData>,\n        count: u64,\n    },\n    AdoptedChanges {\n        changes: Vec<TransactionAdoptedChange>,\n    },\n}\n\n/// One decoded write batch after semantic normalization and JSON preparation.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) enum PreparedTransactionWrite {\n    Rows {\n        mode: TransactionWriteMode,\n        rows: Vec<PreparedStateRow>,\n    },\n    RowsWithFileData {\n        mode: TransactionWriteMode,\n        rows: Vec<PreparedStateRow>,\n        file_data: Vec<TransactionFileData>,\n        count: u64,\n    },\n    AdoptedChanges {\n        rows: Vec<PreparedAdoptedStateRow>,\n    },\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) enum TransactionWriteMode {\n    Insert,\n    Replace,\n}\n\n/// Result returned after the transaction accepts a write batch.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct TransactionWriteOutcome {\n    pub(crate) count: u64,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct StageJson {\n    pub(crate) value: Arc<serde_json::Value>,\n    pub(crate) normalized: Arc<str>,\n    pub(crate) json_ref: JsonRef,\n}\n\nimpl StageJson {\n    pub(crate) fn materialize(&self) -> String {\n        self.normalized.as_ref().to_string()\n    
}\n}\n\npub(crate) fn stage_json_from_value(\n    value: TransactionJson,\n    _context: &str,\n) -> Result<StageJson, LixError> {\n    let (value, normalized) = value.into_parts();\n    let json_ref = JsonRef::for_content(normalized.as_bytes());\n    Ok(StageJson {\n        value,\n        normalized,\n        json_ref,\n    })\n}\n\n#[derive(Debug, Clone, Default, PartialEq, Eq)]\npub(crate) struct PreparedRowFacts {\n    /// Placeholder for the next cut: row-derived constraint facts will be\n    /// computed once during normalization and consumed by validation.\n    pub(crate) _sealed: (),\n}\n\n/// Prepared state row owned by the transaction write buffer.\n///\n/// This is the first boundary that owns `StageJson`: JSON has been normalized\n/// and assigned a content-addressed `JsonRef`. Durable placement belongs to the\n/// JSON store at batch staging time, not row preparation time.\n/// Storage owners must receive only the ref-backed row forms derived from this\n/// type.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct PreparedStateRow {\n    pub(crate) schema_plan_id: SchemaPlanId,\n    pub(crate) facts: PreparedRowFacts,\n    pub(crate) entity_id: EntityIdentity,\n    pub(crate) schema_key: String,\n    pub(crate) file_id: Option<String>,\n    pub(crate) snapshot: Option<StageJson>,\n    pub(crate) metadata: Option<StageJson>,\n    pub(crate) origin: Option<TransactionWriteOrigin>,\n    pub(crate) created_at: String,\n    pub(crate) updated_at: String,\n    pub(crate) global: bool,\n    pub(crate) change_id: Option<String>,\n    pub(crate) commit_id: Option<String>,\n    pub(crate) untracked: bool,\n    pub(crate) version_id: String,\n}\n\n/// Transaction-hydrated projection for an adopted canonical change.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct PreparedAdoptedStateRow {\n    pub(crate) schema_plan_id: SchemaPlanId,\n    pub(crate) facts: PreparedRowFacts,\n    pub(crate) entity_id: EntityIdentity,\n    pub(crate) schema_key: 
String,\n    pub(crate) file_id: Option<String>,\n    pub(crate) snapshot: Option<StageJson>,\n    pub(crate) metadata: Option<StageJson>,\n    pub(crate) created_at: String,\n    pub(crate) updated_at: String,\n    pub(crate) global: bool,\n    pub(crate) change_id: String,\n    pub(crate) commit_id: String,\n    pub(crate) version_id: String,\n}\n\nimpl From<PreparedStateRow> for MaterializedLiveStateRow {\n    fn from(row: PreparedStateRow) -> Self {\n        let deleted = row.snapshot.is_none();\n        MaterializedLiveStateRow {\n            entity_id: row.entity_id,\n            schema_key: row.schema_key,\n            file_id: row.file_id,\n            snapshot_content: row.snapshot.map(|snapshot| snapshot.materialize()),\n            metadata: row.metadata.map(|metadata| metadata.materialize()),\n            deleted,\n            created_at: row.created_at,\n            updated_at: row.updated_at,\n            global: row.global,\n            change_id: row.change_id,\n            commit_id: row.commit_id,\n            untracked: row.untracked,\n            version_id: row.version_id,\n        }\n    }\n}\n\nimpl From<&PreparedStateRow> for MaterializedLiveStateRow {\n    fn from(row: &PreparedStateRow) -> Self {\n        MaterializedLiveStateRow {\n            entity_id: row.entity_id.clone(),\n            schema_key: row.schema_key.clone(),\n            file_id: row.file_id.clone(),\n            snapshot_content: row.snapshot.as_ref().map(StageJson::materialize),\n            metadata: row.metadata.as_ref().map(StageJson::materialize),\n            deleted: row.snapshot.is_none(),\n            created_at: row.created_at.clone(),\n            updated_at: row.updated_at.clone(),\n            global: row.global,\n            change_id: row.change_id.clone(),\n            commit_id: row.commit_id.clone(),\n            untracked: row.untracked,\n            version_id: row.version_id.clone(),\n        }\n    }\n}\n\nimpl From<PreparedAdoptedStateRow> for 
MaterializedLiveStateRow {\n    fn from(row: PreparedAdoptedStateRow) -> Self {\n        let deleted = row.snapshot.is_none();\n        MaterializedLiveStateRow {\n            entity_id: row.entity_id,\n            schema_key: row.schema_key,\n            file_id: row.file_id,\n            snapshot_content: row.snapshot.map(|snapshot| snapshot.materialize()),\n            metadata: row.metadata.map(|metadata| metadata.materialize()),\n            deleted,\n            created_at: row.created_at,\n            updated_at: row.updated_at,\n            global: row.global,\n            change_id: Some(row.change_id),\n            commit_id: Some(row.commit_id),\n            untracked: false,\n            version_id: row.version_id,\n        }\n    }\n}\n\nimpl From<&PreparedAdoptedStateRow> for MaterializedLiveStateRow {\n    fn from(row: &PreparedAdoptedStateRow) -> Self {\n        MaterializedLiveStateRow {\n            entity_id: row.entity_id.clone(),\n            schema_key: row.schema_key.clone(),\n            file_id: row.file_id.clone(),\n            snapshot_content: row.snapshot.as_ref().map(StageJson::materialize),\n            metadata: row.metadata.as_ref().map(StageJson::materialize),\n            deleted: row.snapshot.is_none(),\n            created_at: row.created_at.clone(),\n            updated_at: row.updated_at.clone(),\n            global: row.global,\n            change_id: Some(row.change_id.clone()),\n            commit_id: Some(row.commit_id.clone()),\n            untracked: false,\n            version_id: row.version_id.clone(),\n        }\n    }\n}\n\nimpl From<PreparedStateRow> for MaterializedUntrackedStateRow {\n    fn from(row: PreparedStateRow) -> Self {\n        let deleted = row.snapshot.is_none();\n        MaterializedUntrackedStateRow {\n            entity_id: row.entity_id,\n            schema_key: row.schema_key,\n            file_id: row.file_id,\n            snapshot_content: row.snapshot.map(|snapshot| snapshot.materialize()),\n  
          metadata: row.metadata.map(|metadata| metadata.materialize()),\n            deleted,\n            created_at: row.created_at,\n            updated_at: row.updated_at,\n            global: row.global,\n            version_id: row.version_id,\n        }\n    }\n}\n\n/// Transaction-local introduced-change membership accumulated while rows are staged.\n///\n/// Final commit row materialization owns commit ids, parent heads, and commit\n/// row timestamps. Staging only tracks which hydrated tracked changes the\n/// future commit introduces for a version.\n#[derive(Debug, Clone, Default, PartialEq, Eq)]\npub(crate) struct StagedCommitMembers {\n    pub(crate) commit_id: String,\n    pub(crate) commit_change_id: String,\n    pub(crate) created_at: String,\n    pub(crate) change_ids: BTreeSet<String>,\n    pub(crate) allow_empty: bool,\n}\n\nimpl StagedCommitMembers {\n    pub(crate) fn new(commit_id: String, commit_change_id: String, created_at: String) -> Self {\n        Self {\n            commit_id,\n            commit_change_id,\n            created_at,\n            change_ids: BTreeSet::new(),\n            allow_empty: false,\n        }\n    }\n\n    pub(crate) fn add_change_id(&mut self, change_id: String) {\n        self.change_ids.insert(change_id);\n    }\n\n    pub(crate) fn remove_change_id(&mut self, change_id: &str) {\n        self.change_ids.remove(change_id);\n    }\n\n    pub(crate) fn is_empty(&self) -> bool {\n        self.change_ids.is_empty()\n    }\n\n    pub(crate) fn allow_empty(&mut self) {\n        self.allow_empty = true;\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/transaction/validation.rs",
    "content": "use std::collections::{BTreeMap, BTreeSet};\n\nuse serde_json::Value as JsonValue;\n\nuse crate::catalog::{\n    CatalogSnapshot, ForeignKeyPlan, SchemaCatalogKey, SchemaPlan, StateDeleteReferencePlan,\n    StateForeignKeyPlan,\n};\nuse crate::common::format_json_pointer;\n#[cfg(test)]\nuse crate::common::parse_json_pointer;\nuse crate::common::{json_pointer_get, validate_row_metadata};\nuse crate::domain::{Domain, DomainFileScope, DomainRowIdentity};\nuse crate::entity_identity::{canonical_json_text, EntityIdentity, EntityIdentityError};\n#[cfg(test)]\nuse crate::live_state::LiveStateRowIdentity;\nuse crate::live_state::{\n    LiveStateFilter, LiveStateReader, LiveStateScanRequest, MaterializedLiveStateRow,\n};\nuse crate::schema::{\n    format_lix_schema_validation_errors, schema_from_registered_snapshot, validate_schema_amendment,\n};\n#[cfg(test)]\nuse crate::schema::{\n    is_seed_schema_key, validate_lix_schema, validate_lix_schema_definition, SchemaKey,\n};\nuse crate::transaction::staging::duplicate_insert_identity_message;\n#[cfg(test)]\nuse crate::transaction::staging::PreparedWriteSet;\nuse crate::transaction::staging::{PreparedValidationRow, PreparedWriteValidationSet};\n#[cfg(test)]\nuse crate::transaction::types::PreparedStateRow;\nuse crate::transaction::types::TransactionWriteOrigin;\nuse crate::version::{VERSION_DESCRIPTOR_SCHEMA_KEY, VERSION_REF_SCHEMA_KEY};\nuse crate::LixError;\n\nconst REGISTERED_SCHEMA_KEY: &str = \"lix_registered_schema\";\nconst DIRECTORY_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_directory_descriptor\";\nconst FILE_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_file_descriptor\";\nconst STATE_SURFACE_SCHEMA_KEY: &str = \"lix_state\";\nconst MAX_DIRECTORY_PARENT_DEPTH: usize = 1024;\n\n/// Immutable view of the final transaction write set before persistence.\n///\n/// Validation intentionally runs after staging has coalesced overwrites and\n/// hydrated generated fields, but before changelog, tracked-state, untracked\n/// 
state, or binary CAS writes are flushed.\npub(crate) struct TransactionValidationInput<'a> {\n    staged_writes: &'a PreparedWriteValidationSet<'a>,\n    schema_catalog: &'a CatalogSnapshot,\n    live_state: &'a dyn LiveStateReader,\n}\n\nimpl<'a> TransactionValidationInput<'a> {\n    pub(crate) fn new(\n        staged_writes: &'a PreparedWriteValidationSet<'a>,\n        schema_catalog: &'a CatalogSnapshot,\n        live_state: &'a dyn LiveStateReader,\n    ) -> Self {\n        Self {\n            staged_writes,\n            schema_catalog,\n            live_state,\n        }\n    }\n\n    #[cfg(test)]\n    fn from_visible_schemas_for_tests(\n        staged_writes: &'a PreparedWriteSet,\n        visible_schemas: &'a [JsonValue],\n        live_state: &'a dyn LiveStateReader,\n    ) -> Self {\n        let catalog = Box::leak(Box::new(\n            CatalogSnapshot::from_visible_schemas(visible_schemas)\n                .expect(\"test schema catalog should build\"),\n        ));\n        let validation_set = Box::leak(Box::new(staged_writes.validation_set_for_tests()));\n        Self::new(validation_set, catalog, live_state)\n    }\n}\n\nasync fn scan_committed_constraint_rows(\n    live_state: &dyn LiveStateReader,\n    domain: &Domain,\n    schema_keys: Vec<String>,\n    entity_ids: Vec<EntityIdentity>,\n    include_tombstones: bool,\n) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n    let rows = live_state\n        .scan_rows(&LiveStateScanRequest {\n            filter: LiveStateFilter {\n                schema_keys: schema_keys.clone(),\n                entity_ids: entity_ids.clone(),\n                version_ids: vec![domain.version_id().to_string()],\n                file_ids: domain.file_filters(),\n                untracked: Some(domain.untracked()),\n                include_tombstones,\n                ..Default::default()\n            },\n            ..Default::default()\n        })\n        .await?;\n    Ok(rows\n        .into_iter()\n        
.filter(|row| {\n            domain.contains(row)\n                && (schema_keys.is_empty() || schema_keys.contains(&row.schema_key))\n                && (entity_ids.is_empty() || entity_ids.contains(&row.entity_id))\n        })\n        .collect())\n}\n\nasync fn load_committed_constraint_row(\n    live_state: &dyn LiveStateReader,\n    domain: &Domain,\n    schema_key: &str,\n    entity_id: EntityIdentity,\n    include_tombstones: bool,\n) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n    Ok(scan_committed_constraint_rows(\n        live_state,\n        domain,\n        vec![schema_key.to_string()],\n        vec![entity_id],\n        include_tombstones,\n    )\n    .await?\n    .into_iter()\n    .next())\n}\n\n/// Validates the final transaction write set before durable persistence.\n///\n/// The validator owns semantic write correctness for every engine write\n/// frontend. It builds one transaction-visible schema catalog, validates pending\n/// schema registrations, checks exact schema existence, and validates each\n/// non-tombstone snapshot against the compiled JSON Schema for its\n/// `schema_key`.\n///\n/// Cross-row constraints such as `x-lix-unique` and foreign keys should also\n/// live here so they can share transaction-local indexes and see the final\n/// coalesced staged write set.\npub(crate) async fn validate_prepared_writes(\n    input: TransactionValidationInput<'_>,\n) -> Result<(), LixError> {\n    validate_foreign_key_definitions(input.schema_catalog)?;\n    let staged_rows = input.staged_writes.rows().collect::<Vec<_>>();\n    let constraint_rows = input.staged_writes.constraint_rows().collect::<Vec<_>>();\n    let pending_file_descriptors = PendingFileDescriptorIndex::from_rows(&constraint_rows);\n    let pending_schema_domains = PendingSchemaDomains::from_staged_rows(&staged_rows)?;\n    validate_registered_schema_identity_is_canonical(&input, &staged_rows).await?;\n    let mut pending_constraints = 
PendingConstraintIndexes::default();\n    let mut staged_snapshots = Vec::new();\n    for row in &constraint_rows {\n        let row = *row;\n        let Some(snapshot) = row.snapshot_json() else {\n            pending_constraints.remember_tombstone(row);\n            continue;\n        };\n        let schema_plan = schema_plan_for_row(input.schema_catalog, &pending_schema_domains, row)?;\n        validate_schema_matches_row(row, schema_plan)?;\n        validate_snapshot_content(row, schema_plan)?;\n        pending_constraints.remember_row(row, schema_plan, snapshot)?;\n    }\n    for row in &staged_rows {\n        let row = *row;\n        validate_staged_row_shape(row)?;\n        validate_staged_row_metadata(row)?;\n        let schema_plan = schema_plan_for_row(input.schema_catalog, &pending_schema_domains, row)?;\n        validate_schema_matches_row(row, schema_plan)?;\n        let snapshot = validate_snapshot_content(row, schema_plan)?;\n        if let Some(snapshot) = snapshot {\n            validate_file_owner_reference(&input, &pending_file_descriptors, row).await?;\n            validate_primary_key_identity(row, schema_plan, snapshot)?;\n            pending_constraints.remember_foreign_key_references(row, schema_plan, snapshot)?;\n            staged_snapshots.push((row, schema_plan, snapshot));\n        } else {\n            pending_constraints.remember_tombstone(row);\n        }\n    }\n    let unresolved_foreign_keys =\n        validate_pending_foreign_keys(&pending_constraints, &staged_snapshots)?;\n    validate_pending_delete_restrictions(input.schema_catalog, &pending_constraints)?;\n    let unresolved_foreign_keys =\n        validate_committed_foreign_keys(&input, &pending_constraints, &unresolved_foreign_keys)\n            .await?;\n    reject_unresolved_foreign_keys(&unresolved_foreign_keys)?;\n    validate_committed_delete_restrictions(&input, input.schema_catalog, &pending_constraints)\n        .await?;\n    
validate_file_descriptor_delete_restrictions(&input, &pending_constraints).await?;\n    validate_version_ref_delete_restrictions(&input, &pending_constraints).await?;\n    validate_committed_insert_identities(&input, &pending_constraints).await?;\n    validate_committed_unique_constraints(&input, &pending_constraints).await?;\n    validate_directory_descriptor_parent_graph(&input, &staged_rows, &constraint_rows).await?;\n    validate_filesystem_namespace(&input, &staged_rows).await?;\n    Ok(())\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nstruct DirectoryDescriptorScope {\n    domain: Domain,\n}\n\n#[derive(Debug, Clone, serde::Deserialize)]\nstruct DirectoryDescriptorSnapshot {\n    id: String,\n    parent_id: Option<String>,\n    name: String,\n}\n\n#[derive(Debug, Clone, serde::Deserialize)]\nstruct FileDescriptorSnapshot {\n    directory_id: Option<String>,\n    name: String,\n}\n\nasync fn validate_directory_descriptor_parent_graph(\n    input: &TransactionValidationInput<'_>,\n    staged_rows: &[PreparedValidationRow<'_>],\n    constraint_rows: &[PreparedValidationRow<'_>],\n) -> Result<(), LixError> {\n    let scopes = staged_directory_descriptor_scopes(staged_rows);\n    for scope in scopes {\n        let mut parents = committed_directory_parent_map(input.live_state, &scope).await?;\n        apply_staged_directory_parent_rows(constraint_rows, &scope, &mut parents)?;\n        validate_directory_parent_map(&scope, &parents)?;\n    }\n    Ok(())\n}\n\nasync fn validate_registered_schema_identity_is_canonical(\n    input: &TransactionValidationInput<'_>,\n    staged_rows: &[PreparedValidationRow<'_>],\n) -> Result<(), LixError> {\n    let pending_schema_rows = staged_rows\n        .iter()\n        .filter(|row| row.schema_key() == REGISTERED_SCHEMA_KEY && row.snapshot_json().is_some())\n        .collect::<Vec<_>>();\n    if pending_schema_rows.is_empty() {\n        return Ok(());\n    }\n\n    for pending_row in pending_schema_rows {\n        
let Some(row) = load_committed_constraint_row(\n            input.live_state,\n            &pending_row.domain().with_exact_file_scope(None),\n            REGISTERED_SCHEMA_KEY,\n            pending_row.entity_id().clone(),\n            false,\n        )\n        .await?\n        else {\n            continue;\n        };\n        let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n            continue;\n        };\n        let snapshot = parse_registered_schema_snapshot(snapshot_content)?;\n        let pending_snapshot = pending_row\n            .snapshot_json()\n            .expect(\"pending registered schema row has snapshot_content\");\n        if &snapshot != pending_snapshot {\n            let (key, pending_schema) = schema_from_registered_snapshot(pending_snapshot)?;\n            let (_, committed_schema) = schema_from_registered_snapshot(&snapshot)?;\n            validate_schema_amendment(&committed_schema, &pending_schema).map_err(|_| {\n                LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\n                        \"schema '{}' is already registered with a different definition; schema identity must be canonical\",\n                        key.schema_key\n                    ),\n                )\n            })?;\n        }\n    }\n\n    Ok(())\n}\n\nfn parse_registered_schema_snapshot(snapshot_content: &str) -> Result<JsonValue, LixError> {\n    serde_json::from_str::<JsonValue>(snapshot_content).map_err(|error| {\n        LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\"registered schema snapshot_content is invalid JSON: {error}\"),\n        )\n    })\n}\n\nfn staged_directory_descriptor_scopes(\n    staged_rows: &[PreparedValidationRow<'_>],\n) -> BTreeSet<DirectoryDescriptorScope> {\n    staged_rows\n        .iter()\n        .filter(|row| row.schema_key() == DIRECTORY_DESCRIPTOR_SCHEMA_KEY)\n        .map(|row| 
DirectoryDescriptorScope {\n            domain: row.domain(),\n        })\n        .collect()\n}\n\nasync fn committed_directory_parent_map(\n    live_state: &dyn LiveStateReader,\n    scope: &DirectoryDescriptorScope,\n) -> Result<BTreeMap<String, Option<String>>, LixError> {\n    let mut parents = BTreeMap::new();\n    for domain in scope.domain.directory_parent_domains() {\n        let rows = scan_committed_constraint_rows(\n            live_state,\n            &domain,\n            vec![DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string()],\n            Vec::new(),\n            false,\n        )\n        .await?;\n        for row in rows {\n            if !committed_directory_row_is_in_domain(&row, scope, &domain) {\n                continue;\n            }\n            let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n                continue;\n            };\n            let snapshot = parse_directory_descriptor_snapshot(snapshot_content)?;\n            parents.insert(snapshot.id, snapshot.parent_id);\n        }\n    }\n    Ok(parents)\n}\n\nfn committed_directory_row_is_in_domain(\n    row: &MaterializedLiveStateRow,\n    _scope: &DirectoryDescriptorScope,\n    domain: &Domain,\n) -> bool {\n    row.schema_key == DIRECTORY_DESCRIPTOR_SCHEMA_KEY && domain.contains(row)\n}\n\nfn apply_staged_directory_parent_rows(\n    staged_rows: &[PreparedValidationRow<'_>],\n    scope: &DirectoryDescriptorScope,\n    parents: &mut BTreeMap<String, Option<String>>,\n) -> Result<(), LixError> {\n    let reachable_domains = scope.domain.directory_parent_domains();\n    for row in staged_rows {\n        if row.schema_key() != DIRECTORY_DESCRIPTOR_SCHEMA_KEY\n            || !reachable_domains.contains(&row.domain())\n        {\n            continue;\n        }\n        let id = row.entity_id().as_single_string_owned()?;\n        let Some(snapshot) = row.snapshot_json() else {\n            parents.remove(&id);\n            continue;\n        };\n        let snapshot = 
directory_descriptor_snapshot_from_value(snapshot)?;\n        parents.insert(snapshot.id, snapshot.parent_id);\n    }\n    Ok(())\n}\n\nfn parse_directory_descriptor_snapshot(\n    snapshot_content: &str,\n) -> Result<DirectoryDescriptorSnapshot, LixError> {\n    serde_json::from_str::<DirectoryDescriptorSnapshot>(snapshot_content).map_err(|error| {\n        LixError::new(\n            LixError::CODE_SCHEMA_VALIDATION,\n            format!(\"lix_directory_descriptor snapshot_content is invalid JSON: {error}\"),\n        )\n    })\n}\n\nfn directory_descriptor_snapshot_from_value(\n    snapshot: &JsonValue,\n) -> Result<DirectoryDescriptorSnapshot, LixError> {\n    Ok(DirectoryDescriptorSnapshot {\n        id: required_snapshot_string(snapshot, \"lix_directory_descriptor\", \"id\")?,\n        parent_id: optional_snapshot_string(snapshot, \"lix_directory_descriptor\", \"parent_id\")?,\n        name: required_snapshot_string(snapshot, \"lix_directory_descriptor\", \"name\")?,\n    })\n}\n\nfn file_descriptor_snapshot_from_value(\n    snapshot: &JsonValue,\n) -> Result<FileDescriptorSnapshot, LixError> {\n    Ok(FileDescriptorSnapshot {\n        directory_id: optional_snapshot_string(snapshot, \"lix_file_descriptor\", \"directory_id\")?,\n        name: required_snapshot_string(snapshot, \"lix_file_descriptor\", \"name\")?,\n    })\n}\n\nfn required_snapshot_string(\n    snapshot: &JsonValue,\n    schema_key: &str,\n    field: &str,\n) -> Result<String, LixError> {\n    let Some(value) = snapshot.get(field) else {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_VALIDATION,\n            format!(\"{schema_key} snapshot_content is missing field '{field}'\"),\n        ));\n    };\n    value.as_str().map(str::to_string).ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_SCHEMA_VALIDATION,\n            format!(\"{schema_key} snapshot_content field '{field}' must be a string\"),\n        )\n    })\n}\n\nfn optional_snapshot_string(\n    
snapshot: &JsonValue,\n    schema_key: &str,\n    field: &str,\n) -> Result<Option<String>, LixError> {\n    let Some(value) = snapshot.get(field) else {\n        return Ok(None);\n    };\n    if value.is_null() {\n        return Ok(None);\n    }\n    value\n        .as_str()\n        .map(|value| Some(value.to_string()))\n        .ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_SCHEMA_VALIDATION,\n                format!(\"{schema_key} snapshot_content field '{field}' must be a string or null\"),\n            )\n        })\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nstruct FilesystemNamespaceIdentity {\n    schema_key: String,\n    entity_id: EntityIdentity,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nenum FilesystemNamespaceOccupant {\n    Directory {\n        entity_id: EntityIdentity,\n        parent_id: Option<String>,\n        name: String,\n    },\n    File {\n        entity_id: EntityIdentity,\n        directory_id: Option<String>,\n        entry_name: String,\n    },\n}\n\nimpl FilesystemNamespaceOccupant {\n    fn entity_id(&self) -> &EntityIdentity {\n        match self {\n            Self::Directory { entity_id, .. } | Self::File { entity_id, .. } => entity_id,\n        }\n    }\n\n    fn kind(&self) -> &'static str {\n        match self {\n            Self::Directory { .. } => \"directory\",\n            Self::File { .. } => \"file\",\n        }\n    }\n\n    fn parent_id(&self) -> &Option<String> {\n        match self {\n            Self::Directory { parent_id, .. } => parent_id,\n            Self::File { directory_id, .. } => directory_id,\n        }\n    }\n\n    fn entry_name(&self) -> &str {\n        match self {\n            Self::Directory { name, .. } => name,\n            Self::File { entry_name, .. 
} => entry_name,\n        }\n    }\n}\n\nasync fn validate_filesystem_namespace(\n    input: &TransactionValidationInput<'_>,\n    staged_rows: &[PreparedValidationRow<'_>],\n) -> Result<(), LixError> {\n    // Filesystem namespace constraints are storage-scope local. Global rows are\n    // validated in the global scope and may be projected into version reads, but\n    // projected globals do not participate in version-local constraint checks.\n    let domains = staged_filesystem_namespace_domains(staged_rows);\n    for domain in domains {\n        let mut occupants =\n            committed_filesystem_namespace_occupants(input.live_state, &domain).await?;\n        apply_staged_filesystem_namespace_rows(staged_rows, &domain, &mut occupants)?;\n        validate_filesystem_namespace_occupants(&domain, occupants)?;\n    }\n    Ok(())\n}\n\nfn staged_filesystem_namespace_domains(\n    staged_rows: &[PreparedValidationRow<'_>],\n) -> BTreeSet<Domain> {\n    staged_rows\n        .iter()\n        .filter(|row| {\n            row.schema_key() == DIRECTORY_DESCRIPTOR_SCHEMA_KEY\n                || row.schema_key() == FILE_DESCRIPTOR_SCHEMA_KEY\n        })\n        .map(|row| row.domain())\n        .collect()\n}\n\nasync fn committed_filesystem_namespace_occupants(\n    live_state: &dyn LiveStateReader,\n    domain: &Domain,\n) -> Result<BTreeMap<FilesystemNamespaceIdentity, FilesystemNamespaceOccupant>, LixError> {\n    let rows = scan_committed_constraint_rows(\n        live_state,\n        domain,\n        vec![\n            DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(),\n            FILE_DESCRIPTOR_SCHEMA_KEY.to_string(),\n        ],\n        Vec::new(),\n        false,\n    )\n    .await?;\n    let mut occupants = BTreeMap::new();\n    for row in rows {\n        if !committed_filesystem_row_is_in_domain(&row, domain) {\n            continue;\n        }\n        if let Some((identity, occupant)) = filesystem_namespace_occupant_from_live_row(&row)? 
{\n            occupants.insert(identity, occupant);\n        }\n    }\n    Ok(occupants)\n}\n\nfn committed_filesystem_row_is_in_domain(row: &MaterializedLiveStateRow, domain: &Domain) -> bool {\n    (row.schema_key == DIRECTORY_DESCRIPTOR_SCHEMA_KEY\n        || row.schema_key == FILE_DESCRIPTOR_SCHEMA_KEY)\n        && domain.contains(row)\n}\n\nfn apply_staged_filesystem_namespace_rows(\n    staged_rows: &[PreparedValidationRow<'_>],\n    domain: &Domain,\n    occupants: &mut BTreeMap<FilesystemNamespaceIdentity, FilesystemNamespaceOccupant>,\n) -> Result<(), LixError> {\n    for row in staged_rows {\n        if (row.schema_key() != DIRECTORY_DESCRIPTOR_SCHEMA_KEY\n            && row.schema_key() != FILE_DESCRIPTOR_SCHEMA_KEY)\n            || row.domain() != *domain\n        {\n            continue;\n        }\n        let identity = FilesystemNamespaceIdentity {\n            schema_key: row.schema_key().to_string(),\n            entity_id: row.entity_id().clone(),\n        };\n        let Some(snapshot) = row.snapshot_json() else {\n            occupants.remove(&identity);\n            continue;\n        };\n        occupants.insert(\n            identity,\n            filesystem_namespace_occupant_from_staged_row(*row, snapshot)?,\n        );\n    }\n    Ok(())\n}\n\nfn filesystem_namespace_occupant_from_live_row(\n    row: &MaterializedLiveStateRow,\n) -> Result<Option<(FilesystemNamespaceIdentity, FilesystemNamespaceOccupant)>, LixError> {\n    let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n        return Ok(None);\n    };\n    let identity = FilesystemNamespaceIdentity {\n        schema_key: row.schema_key.clone(),\n        entity_id: row.entity_id.clone(),\n    };\n    let occupant = match row.schema_key.as_str() {\n        DIRECTORY_DESCRIPTOR_SCHEMA_KEY => {\n            directory_namespace_occupant(&row.entity_id, snapshot_content)?\n        }\n        FILE_DESCRIPTOR_SCHEMA_KEY => file_namespace_occupant(&row.entity_id, 
snapshot_content)?,\n        _ => return Ok(None),\n    };\n    Ok(Some((identity, occupant)))\n}\n\nfn filesystem_namespace_occupant_from_staged_row(\n    row: PreparedValidationRow<'_>,\n    snapshot: &JsonValue,\n) -> Result<FilesystemNamespaceOccupant, LixError> {\n    match row.schema_key() {\n        DIRECTORY_DESCRIPTOR_SCHEMA_KEY => {\n            directory_namespace_occupant_from_value(row.entity_id(), snapshot)\n        }\n        FILE_DESCRIPTOR_SCHEMA_KEY => file_namespace_occupant_from_value(row.entity_id(), snapshot),\n        _ => Err(LixError::new(\n            LixError::CODE_SCHEMA_VALIDATION,\n            format!(\n                \"filesystem namespace validation cannot parse schema '{}'\",\n                row.schema_key()\n            ),\n        )),\n    }\n}\n\nfn directory_namespace_occupant(\n    entity_id: &EntityIdentity,\n    snapshot_content: &str,\n) -> Result<FilesystemNamespaceOccupant, LixError> {\n    let snapshot = parse_directory_descriptor_snapshot(snapshot_content)?;\n    Ok(FilesystemNamespaceOccupant::Directory {\n        entity_id: entity_id.clone(),\n        parent_id: snapshot.parent_id,\n        name: snapshot.name,\n    })\n}\n\nfn directory_namespace_occupant_from_value(\n    entity_id: &EntityIdentity,\n    snapshot: &JsonValue,\n) -> Result<FilesystemNamespaceOccupant, LixError> {\n    let snapshot = directory_descriptor_snapshot_from_value(snapshot)?;\n    Ok(FilesystemNamespaceOccupant::Directory {\n        entity_id: entity_id.clone(),\n        parent_id: snapshot.parent_id,\n        name: snapshot.name,\n    })\n}\n\nfn file_namespace_occupant(\n    entity_id: &EntityIdentity,\n    snapshot_content: &str,\n) -> Result<FilesystemNamespaceOccupant, LixError> {\n    let snapshot =\n        serde_json::from_str::<FileDescriptorSnapshot>(snapshot_content).map_err(|error| {\n            LixError::new(\n                LixError::CODE_SCHEMA_VALIDATION,\n                format!(\"lix_file_descriptor snapshot_content is 
invalid JSON: {error}\"),\n            )\n        })?;\n    Ok(FilesystemNamespaceOccupant::File {\n        entity_id: entity_id.clone(),\n        directory_id: snapshot.directory_id,\n        entry_name: snapshot.name,\n    })\n}\n\nfn file_namespace_occupant_from_value(\n    entity_id: &EntityIdentity,\n    snapshot: &JsonValue,\n) -> Result<FilesystemNamespaceOccupant, LixError> {\n    let snapshot = file_descriptor_snapshot_from_value(snapshot)?;\n    Ok(FilesystemNamespaceOccupant::File {\n        entity_id: entity_id.clone(),\n        directory_id: snapshot.directory_id,\n        entry_name: snapshot.name,\n    })\n}\n\nfn validate_filesystem_namespace_occupants(\n    domain: &Domain,\n    occupants: BTreeMap<FilesystemNamespaceIdentity, FilesystemNamespaceOccupant>,\n) -> Result<(), LixError> {\n    let mut by_parent_and_name =\n        BTreeMap::<(Option<String>, String), FilesystemNamespaceOccupant>::new();\n    for occupant in occupants.into_values() {\n        let key = (\n            occupant.parent_id().clone(),\n            occupant.entry_name().to_string(),\n        );\n        if let Some(existing) = by_parent_and_name.insert(key.clone(), occupant.clone()) {\n            if existing != occupant {\n                return Err(filesystem_namespace_conflict_error(\n                    domain, &key.0, &key.1, &existing, &occupant,\n                ));\n            }\n        }\n    }\n    Ok(())\n}\n\nfn filesystem_namespace_conflict_error(\n    domain: &Domain,\n    parent_id: &Option<String>,\n    entry_name: &str,\n    existing: &FilesystemNamespaceOccupant,\n    conflicting: &FilesystemNamespaceOccupant,\n) -> LixError {\n    let parent = parent_id.as_deref().unwrap_or(\"<root>\");\n    let existing_id = existing\n        .entity_id()\n        .as_single_string_owned()\n        .unwrap_or_else(|_| \"<non-string-entity-id>\".to_string());\n    let conflicting_id = conflicting\n        .entity_id()\n        .as_single_string_owned()\n        
.unwrap_or_else(|_| \"<non-string-entity-id>\".to_string());\n    LixError::new(\n        LixError::CODE_UNIQUE,\n        format!(\n            \"filesystem namespace conflict in version '{}' for parent {parent:?} entry {entry_name:?}: {} '{}' conflicts with {} '{}'\",\n            domain.version_id(),\n            existing.kind(),\n            existing_id,\n            conflicting.kind(),\n            conflicting_id\n        ),\n    )\n}\n\nfn validate_directory_parent_map(\n    scope: &DirectoryDescriptorScope,\n    parents: &BTreeMap<String, Option<String>>,\n) -> Result<(), LixError> {\n    for directory_id in parents.keys() {\n        validate_directory_parent_chain(scope, parents, directory_id)?;\n    }\n    Ok(())\n}\n\nfn validate_directory_parent_chain(\n    scope: &DirectoryDescriptorScope,\n    parents: &BTreeMap<String, Option<String>>,\n    start_id: &str,\n) -> Result<(), LixError> {\n    let mut current_id = start_id;\n    let mut seen = BTreeSet::<String>::new();\n    for depth in 0..=MAX_DIRECTORY_PARENT_DEPTH {\n        if !seen.insert(current_id.to_string()) {\n            return Err(directory_parent_cycle_error(scope, start_id, current_id));\n        }\n        let Some(parent_id) = parents.get(current_id) else {\n            return Err(directory_parent_missing_error(scope, start_id, current_id));\n        };\n        let Some(parent_id) = parent_id.as_deref() else {\n            return Ok(());\n        };\n        current_id = parent_id;\n        if depth == MAX_DIRECTORY_PARENT_DEPTH {\n            return Err(directory_parent_depth_error(scope, start_id));\n        }\n    }\n    Err(directory_parent_depth_error(scope, start_id))\n}\n\nfn directory_parent_cycle_error(\n    scope: &DirectoryDescriptorScope,\n    start_id: &str,\n    repeated_id: &str,\n) -> LixError {\n    LixError::new(\n        LixError::CODE_CONSTRAINT_VIOLATION,\n        format!(\n            \"lix_directory_descriptor parent_id cycle in version '{}': directory '{}' reaches 
ancestor '{}' twice\",\n            scope.domain.version_id(), start_id, repeated_id\n        ),\n    )\n    .with_hint(\"Set parent_id to null or to an existing directory outside the directory's descendants.\")\n}\n\nfn directory_parent_missing_error(\n    scope: &DirectoryDescriptorScope,\n    start_id: &str,\n    missing_id: &str,\n) -> LixError {\n    LixError::new(\n        LixError::CODE_FOREIGN_KEY,\n        format!(\n            \"lix_directory_descriptor parent_id chain in version '{}' for directory '{}' references missing directory '{}'\",\n            scope.domain.version_id(), start_id, missing_id\n        ),\n    )\n}\n\nfn directory_parent_depth_error(scope: &DirectoryDescriptorScope, start_id: &str) -> LixError {\n    LixError::new(\n        LixError::CODE_CONSTRAINT_VIOLATION,\n        format!(\n            \"lix_directory_descriptor parent_id chain in version '{}' for directory '{}' exceeds maximum depth {}\",\n            scope.domain.version_id(), start_id, MAX_DIRECTORY_PARENT_DEPTH\n        ),\n    )\n}\n\nasync fn validate_committed_insert_identities(\n    input: &TransactionValidationInput<'_>,\n    pending_constraints: &PendingConstraintIndexes,\n) -> Result<(), LixError> {\n    let pending_identity_targets = pending_constraints\n        .identity_targets\n        .iter()\n        .map(|target| target.identity.clone())\n        .collect::<BTreeSet<_>>();\n    let mut checks_by_domain_schema =\n        BTreeMap::<(Domain, String), Vec<(EntityIdentity, Option<TransactionWriteOrigin>)>>::new();\n    for (identity, origin) in input.staged_writes.insert_identities() {\n        let pending_identity = DomainRowIdentity::in_domain(\n            identity.domain(),\n            identity.schema_key().to_string(),\n            identity.entity_id().clone(),\n        );\n        if !pending_identity_targets.contains(&pending_identity) {\n            continue;\n        }\n        checks_by_domain_schema\n            .entry((\n                
pending_identity.domain().clone(),\n                pending_identity.schema_key_owned(),\n            ))\n            .or_default()\n            .push((pending_identity.entity_id_owned(), origin.cloned()));\n    }\n\n    for ((domain, schema_key), checks) in checks_by_domain_schema {\n        let entity_ids = checks\n            .iter()\n            .map(|(entity_id, _)| entity_id.clone())\n            .collect::<Vec<_>>();\n        let committed_rows = scan_committed_constraint_rows(\n            input.live_state,\n            &domain,\n            vec![schema_key.clone()],\n            entity_ids,\n            false,\n        )\n        .await?;\n        let committed_rows_by_entity_id = committed_rows\n            .into_iter()\n            .filter(|row| {\n                row.snapshot_content.is_some() && !pending_constraints.tombstones_identity(row)\n            })\n            .map(|row| (row.entity_id.clone(), row))\n            .collect::<BTreeMap<_, _>>();\n        for (entity_id, origin) in checks {\n            if !committed_rows_by_entity_id.contains_key(&entity_id) {\n                continue;\n            }\n            return Err(LixError::new(\n                LixError::CODE_UNIQUE,\n                duplicate_insert_identity_message(&schema_key, &entity_id, None, origin.as_ref()),\n            ));\n        }\n    }\n    Ok(())\n}\n\nasync fn validate_version_ref_delete_restrictions(\n    input: &TransactionValidationInput<'_>,\n    pending_constraints: &PendingConstraintIndexes,\n) -> Result<(), LixError> {\n    for tombstone in &pending_constraints.tombstones {\n        if tombstone.identity.schema_key() != VERSION_REF_SCHEMA_KEY {\n            continue;\n        }\n\n        for source_domain in tombstone\n            .identity\n            .domain()\n            .version_descriptor_domains_for_ref_delete()\n        {\n            let descriptor_identity = DomainRowIdentity::in_domain(\n                source_domain,\n                
VERSION_DESCRIPTOR_SCHEMA_KEY,\n                tombstone.identity.entity_id_owned(),\n            );\n            if pending_constraints.tombstones_target_identity(&descriptor_identity) {\n                continue;\n            }\n            if pending_constraints.has_identity_target(&descriptor_identity) {\n                return Err(version_ref_delete_restriction_error(\n                    &tombstone.identity,\n                    &descriptor_identity,\n                )?);\n            }\n\n            let Some(descriptor_row) = load_committed_constraint_row(\n                input.live_state,\n                descriptor_identity.domain(),\n                descriptor_identity.schema_key(),\n                descriptor_identity.entity_id_owned(),\n                false,\n            )\n            .await?\n            else {\n                continue;\n            };\n            if descriptor_row.snapshot_content.is_some()\n                && !pending_constraints.tombstones_identity(&descriptor_row)\n            {\n                return Err(version_ref_delete_restriction_error(\n                    &tombstone.identity,\n                    &descriptor_identity,\n                )?);\n            }\n        }\n    }\n    Ok(())\n}\n\nfn version_ref_delete_restriction_error(\n    ref_identity: &DomainRowIdentity,\n    descriptor_identity: &DomainRowIdentity,\n) -> Result<LixError, LixError> {\n    Ok(LixError::new(\n        LixError::CODE_FOREIGN_KEY,\n        format!(\n            \"cannot delete '{}' row '{}' in version '{}' because matching '{}' row '{}' would remain without a version ref\",\n            ref_identity.schema_key(),\n            ref_identity.entity_id().as_single_string_owned()?,\n            ref_identity.domain().version_id(),\n            descriptor_identity.schema_key(),\n            descriptor_identity.entity_id().as_single_string_owned()?,\n        ),\n    ))\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\nenum 
PendingFileDescriptorState {\n    Present,\n    Tombstone,\n}\n\n#[derive(Debug, Clone, Default)]\nstruct PendingFileDescriptorIndex {\n    by_identity: BTreeMap<DomainRowIdentity, PendingFileDescriptorState>,\n}\n\nimpl PendingFileDescriptorIndex {\n    fn from_rows(staged_rows: &[PreparedValidationRow<'_>]) -> Self {\n        let mut index = Self::default();\n        for row in staged_rows {\n            if row.schema_key() != FILE_DESCRIPTOR_SCHEMA_KEY || row.file_id().is_some() {\n                continue;\n            }\n            if row.entity_id().as_single_string_owned().is_ok() {\n                let state = if (*row).snapshot_json().is_some() {\n                    PendingFileDescriptorState::Present\n                } else {\n                    PendingFileDescriptorState::Tombstone\n                };\n                index.by_identity.insert(row.domain_row_identity(), state);\n            }\n        }\n        index\n    }\n\n    fn state_in_domain(\n        &self,\n        domain: &Domain,\n        file_id: &str,\n    ) -> Option<PendingFileDescriptorState> {\n        self.by_identity\n            .get(&DomainRowIdentity::in_domain(\n                domain.with_exact_file_scope(None),\n                FILE_DESCRIPTOR_SCHEMA_KEY,\n                EntityIdentity::single(file_id),\n            ))\n            .copied()\n    }\n}\n\nasync fn validate_file_owner_reference(\n    input: &TransactionValidationInput<'_>,\n    pending_file_descriptors: &PendingFileDescriptorIndex,\n    row: PreparedValidationRow<'_>,\n) -> Result<(), LixError> {\n    let Some(file_id) = row.file_id().as_deref() else {\n        return Ok(());\n    };\n\n    let row_domain = row.domain();\n    let target_domains = row_domain\n        .with_untracked(row.untracked())\n        .file_owner_domains();\n\n    for domain in &target_domains {\n        if pending_file_descriptors.state_in_domain(domain, file_id)\n            == Some(PendingFileDescriptorState::Present)\n        {\n     
       return Ok(());\n        }\n    }\n\n    for domain in &target_domains {\n        if pending_file_descriptors.state_in_domain(domain, file_id)\n            == Some(PendingFileDescriptorState::Tombstone)\n        {\n            continue;\n        }\n        if committed_file_descriptor_exists_in_domain(input.live_state, domain, file_id).await? {\n            return Ok(());\n        }\n    }\n\n    Err(missing_file_owner_reference_error(row, file_id)?)\n}\n\nasync fn committed_file_descriptor_exists_in_domain(\n    live_state: &dyn LiveStateReader,\n    domain: &Domain,\n    file_id: &str,\n) -> Result<bool, LixError> {\n    let Some(row) = load_committed_constraint_row(\n        live_state,\n        &domain.with_exact_file_scope(None),\n        FILE_DESCRIPTOR_SCHEMA_KEY,\n        EntityIdentity::single(file_id),\n        false,\n    )\n    .await?\n    else {\n        return Ok(false);\n    };\n    Ok(row.snapshot_content.is_some()\n        && row.schema_key == FILE_DESCRIPTOR_SCHEMA_KEY\n        && row.entity_id == EntityIdentity::single(file_id)\n        && row.file_id.is_none())\n}\n\nfn missing_file_owner_reference_error(\n    row: PreparedValidationRow<'_>,\n    file_id: &str,\n) -> Result<LixError, LixError> {\n    Ok(LixError::new(\n        LixError::CODE_FILE_NOT_FOUND,\n        format!(\n            \"file ownership validation failed for schema '{}': entity '{}' references missing file_id '{}' in effective file scope for version '{}'\",\n            row.schema_key(),\n            row.entity_id().as_json_array_text()?,\n            file_id,\n            row.version_id()\n        ),\n    )\n    .with_hint(\"Insert a row into lix_file with this id first, or use null for a global entity.\"))\n}\n\nfn validate_staged_row_shape(row: PreparedValidationRow<'_>) -> Result<(), LixError> {\n    if row.schema_key().is_empty() {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            \"engine transaction 
validation requires non-empty schema_key\",\n        ));\n    }\n    if row.schema_key() == REGISTERED_SCHEMA_KEY && row.file_id().is_some() {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            \"lix_registered_schema rows must not be scoped to a file\",\n        )\n        .with_hint(\"Schema definitions are scoped by version and durability only; write them with null file_id.\"));\n    }\n    Ok(())\n}\n\nfn validate_staged_row_metadata(row: PreparedValidationRow<'_>) -> Result<(), LixError> {\n    let Some(metadata) = row.metadata_json() else {\n        return Ok(());\n    };\n    validate_row_metadata(\n        metadata,\n        format!(\"metadata for schema '{}'\", row.schema_key()),\n    )?;\n    Ok(())\n}\n\n#[derive(Default)]\nstruct PendingSchemaDomains {\n    domains_by_key: BTreeMap<SchemaCatalogKey, BTreeSet<Domain>>,\n}\n\nimpl PendingSchemaDomains {\n    fn from_staged_rows(staged_rows: &[PreparedValidationRow<'_>]) -> Result<Self, LixError> {\n        let mut domains_by_key = BTreeMap::<SchemaCatalogKey, BTreeSet<Domain>>::new();\n        for row in staged_rows {\n            if row.schema_key() != REGISTERED_SCHEMA_KEY {\n                continue;\n            }\n            let Some(snapshot) = row.snapshot_json() else {\n                continue;\n            };\n            let (key, _) = schema_from_registered_snapshot(snapshot)?;\n            domains_by_key\n                .entry(SchemaCatalogKey::from_schema_key(key))\n                .or_default()\n                .insert(row.domain());\n        }\n        Ok(Self { domains_by_key })\n    }\n\n    fn validate_row_schema_domain(&self, row: PreparedValidationRow<'_>) -> Result<(), LixError> {\n        let key = SchemaCatalogKey {\n            schema_key: row.schema_key().to_string(),\n        };\n        let Some(domains) = self.domains_by_key.get(&key) else {\n            return Ok(());\n        };\n        let row_domain = row.domain();\n        if 
domains.contains(&row_domain) {\n            return Ok(());\n        }\n        Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\n                \"schema '{}' is pending in another validation domain\",\n                row.schema_key()\n            ),\n        ))\n    }\n}\n\nfn schema_plan_for_row<'a>(\n    schema_catalog: &'a CatalogSnapshot,\n    pending_schema_domains: &PendingSchemaDomains,\n    row: PreparedValidationRow<'_>,\n) -> Result<&'a SchemaPlan, LixError> {\n    pending_schema_domains.validate_row_schema_domain(row)?;\n    if let Some(plan) = schema_catalog.plan(row.schema_plan_id()) {\n        if plan.key.schema_key == row.schema_key() {\n            return Ok(plan);\n        }\n    }\n    #[cfg(test)]\n    if let Some((_, plan)) = schema_catalog.plan_for_key(row.schema_key()) {\n        return Ok(plan);\n    }\n    Err(LixError::new(\n        LixError::CODE_SCHEMA_DEFINITION,\n        format!(\n            \"schema plan for schema '{}' is not visible to this transaction\",\n            row.schema_key()\n        ),\n    ))\n}\n\nfn validate_schema_matches_row(\n    row: PreparedValidationRow<'_>,\n    schema_plan: &SchemaPlan,\n) -> Result<(), LixError> {\n    if schema_plan.key.schema_key != row.schema_key() {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\n                \"schema plan mismatch: row targets schema '{}' but plan is schema '{}'\",\n                row.schema_key(),\n                schema_plan.key.schema_key,\n            ),\n        ));\n    }\n    Ok(())\n}\n\nfn validate_snapshot_content<'a>(\n    row: PreparedValidationRow<'a>,\n    schema_plan: &SchemaPlan,\n) -> Result<Option<&'a JsonValue>, LixError> {\n    let Some(snapshot) = row.snapshot_json() else {\n        return Ok(None);\n    };\n    if let Err(errors) = schema_plan.compiled_schema.validate(&snapshot) {\n        let details = format_lix_schema_validation_errors(errors);\n 
       return Err(LixError::new(\n            LixError::CODE_SCHEMA_VALIDATION,\n            format!(\n                \"snapshot_content validation failed for schema '{}': {details}\",\n                row.schema_key()\n            ),\n        ));\n    }\n    Ok(Some(snapshot))\n}\n\nfn validate_primary_key_identity(\n    row: PreparedValidationRow<'_>,\n    schema_plan: &SchemaPlan,\n    snapshot: &JsonValue,\n) -> Result<(), LixError> {\n    let Some(primary_key_paths) = schema_plan.primary_key.as_ref() else {\n        return Ok(());\n    };\n    let derived = EntityIdentity::from_primary_key_paths(snapshot, &primary_key_paths)\n        .map_err(|error| primary_key_identity_error(row, &primary_key_paths, error))?;\n    if row.entity_id() != &derived {\n        return Err(LixError::new(\n            LixError::CODE_UNIQUE,\n            format!(\n                \"primary-key constraint violation on schema '{}': entity_id '{}' does not match derived primary key '{}'\",\n                row.schema_key(),\n                row.entity_id().as_json_array_text()?,\n                derived.as_json_array_text()?\n            ),\n        ));\n    }\n    Ok(())\n}\n\n#[derive(Default)]\nstruct PendingConstraintIndexes {\n    unique_values: BTreeMap<PendingUniqueKey, EntityIdentity>,\n    identity_targets: Vec<PendingIdentityTarget>,\n    fk_targets: BTreeMap<PendingForeignKeyTargetKey, Vec<PendingForeignKeyTarget>>,\n    fk_references: BTreeMap<PendingForeignKeyReferenceTarget, Vec<PendingForeignKeyReference>>,\n    tombstones: Vec<PendingTombstone>,\n}\n\nimpl PendingConstraintIndexes {\n    fn remember_tombstone(&mut self, row: PreparedValidationRow<'_>) {\n        self.tombstones.push(PendingTombstone {\n            identity: row.domain_row_identity(),\n        });\n    }\n\n    fn remember_row(\n        &mut self,\n        row: PreparedValidationRow<'_>,\n        schema_plan: &SchemaPlan,\n        snapshot: &JsonValue,\n    ) -> Result<(), LixError> {\n        
self.remember_identity_target(row);\n        self.remember_primary_key_target(row, schema_plan, snapshot);\n        self.remember_unique_targets(row, schema_plan, snapshot)?;\n        Ok(())\n    }\n\n    fn remember_identity_target(&mut self, row: PreparedValidationRow<'_>) {\n        self.identity_targets.push(PendingIdentityTarget {\n            identity: row.domain_row_identity(),\n        });\n    }\n\n    fn remember_primary_key_target(\n        &mut self,\n        row: PreparedValidationRow<'_>,\n        schema_plan: &SchemaPlan,\n        snapshot: &JsonValue,\n    ) {\n        if let Some(primary_key_paths) = schema_plan.primary_key.as_ref() {\n            self.remember_fk_target(row, &primary_key_paths, snapshot);\n        }\n    }\n\n    fn remember_unique_targets(\n        &mut self,\n        row: PreparedValidationRow<'_>,\n        schema_plan: &SchemaPlan,\n        snapshot: &JsonValue,\n    ) -> Result<(), LixError> {\n        for unique_paths in &schema_plan.uniques {\n            let Some(value) = UniqueConstraintValue::from_snapshot(snapshot, &unique_paths) else {\n                continue;\n            };\n            self.remember_fk_target(row, &unique_paths, snapshot);\n            let key = PendingUniqueKey {\n                schema_key: row.schema_key().to_string(),\n                domain: row.domain(),\n                pointer_group: unique_paths.clone(),\n                value,\n            };\n            if let Some(existing_entity_id) = self\n                .unique_values\n                .insert(key.clone(), row.entity_id().clone())\n            {\n                if existing_entity_id != *row.entity_id() {\n                    return Err(LixError::new(\n                        LixError::CODE_UNIQUE,\n                        format!(\n                            \"unique constraint violation on {}.{} for value {}: rows '{}' and '{}' conflict\",\n                            row.schema_key(),\n                            
format_pointer_group(&key.pointer_group),\n                            key.value.display(),\n                            existing_entity_id.as_json_array_text()?,\n                            row.entity_id().as_json_array_text()?\n                        ),\n                    ));\n                }\n            }\n        }\n        Ok(())\n    }\n\n    fn remember_fk_target(\n        &mut self,\n        row: PreparedValidationRow<'_>,\n        pointer_group: &[Vec<String>],\n        snapshot: &JsonValue,\n    ) {\n        let Some(value) = UniqueConstraintValue::from_snapshot(snapshot, pointer_group) else {\n            return;\n        };\n        self.fk_targets\n            .entry(PendingForeignKeyTargetKey {\n                schema_key: row.schema_key().to_string(),\n                domain: row.domain(),\n                pointer_group: pointer_group.to_vec(),\n                value,\n            })\n            .or_default()\n            .push(PendingForeignKeyTarget {\n                entity_id: row.entity_id().clone(),\n            });\n    }\n\n    fn remember_foreign_key_references(\n        &mut self,\n        row: PreparedValidationRow<'_>,\n        schema_plan: &SchemaPlan,\n        snapshot: &JsonValue,\n    ) -> Result<(), LixError> {\n        for foreign_key in &schema_plan.foreign_keys {\n            let Some(local_value) = UniqueConstraintValue::from_snapshot_non_null(\n                snapshot,\n                &foreign_key.local_properties,\n            ) else {\n                continue;\n            };\n            let target = PendingForeignKeyReferenceTarget::Key(PendingForeignKeyTargetKey {\n                schema_key: foreign_key.referenced_schema.schema_key.clone(),\n                domain: row.domain(),\n                pointer_group: foreign_key.referenced_properties.clone(),\n                value: local_value,\n            });\n            self.fk_references\n                .entry(target)\n                .or_default()\n             
   .push(PendingForeignKeyReference {\n                    identity: row.domain_row_identity(),\n                });\n        }\n\n        for foreign_key in &schema_plan.state_foreign_keys {\n            let target = PendingForeignKeyReferenceTarget::StateSurfaceIdentity(\n                state_surface_target_identity(row.domain(), foreign_key, snapshot)?,\n            );\n            self.fk_references\n                .entry(target)\n                .or_default()\n                .push(PendingForeignKeyReference {\n                    identity: row.domain_row_identity(),\n                });\n        }\n        Ok(())\n    }\n\n    fn tombstones_identity(&self, row: &MaterializedLiveStateRow) -> bool {\n        let identity = DomainRowIdentity::from_live_row(row);\n        self.tombstones\n            .iter()\n            .any(|tombstone| tombstone.identity == identity)\n    }\n\n    fn has_identity_target(&self, identity: &DomainRowIdentity) -> bool {\n        self.identity_targets\n            .iter()\n            .any(|target| target.identity == *identity)\n    }\n\n    fn has_reachable_identity_target(&self, identity: &DomainRowIdentity) -> bool {\n        identity\n            .reachable_target_identities()\n            .iter()\n            .any(|candidate| self.has_identity_target(candidate))\n    }\n\n    fn tombstones_target_identity(&self, identity: &DomainRowIdentity) -> bool {\n        self.tombstones\n            .iter()\n            .any(|tombstone| tombstone.identity == *identity)\n    }\n\n    fn has_fk_target_key(&self, key: &PendingForeignKeyTargetKey) -> bool {\n        self.fk_targets\n            .get(key)\n            .is_some_and(|targets| !targets.is_empty())\n    }\n\n    fn has_reachable_fk_target_key(&self, key: &PendingForeignKeyTargetKey) -> bool {\n        key.domain.fk_target_domains().iter().any(|domain| {\n            self.has_fk_target_key(&PendingForeignKeyTargetKey {\n                domain: domain.clone(),\n                
..key.clone()\n            })\n        })\n    }\n\n    fn active_references_to(\n        &self,\n        target: &PendingForeignKeyReferenceTarget,\n    ) -> Vec<&PendingForeignKeyReference> {\n        self.fk_references\n            .get(target)\n            .into_iter()\n            .flat_map(|references| references.iter())\n            .filter(|reference| !self.tombstones_target_identity(&reference.identity))\n            .collect()\n    }\n\n    fn active_references_to_any(\n        &self,\n        targets: &[PendingForeignKeyReferenceTarget],\n    ) -> Vec<&PendingForeignKeyReference> {\n        let mut references = Vec::new();\n        for target in targets {\n            references.extend(self.active_references_to(target));\n        }\n        references\n    }\n\n    #[cfg(test)]\n    fn has_fk_reference_to_key(\n        &self,\n        schema_key: &str,\n        version_id: &str,\n        file_id: Option<&str>,\n        pointer_group: &[&str],\n        value: UniqueConstraintValue,\n    ) -> Result<bool, LixError> {\n        let pointer_group = pointer_group\n            .iter()\n            .map(|pointer| parse_json_pointer(pointer))\n            .collect::<Result<Vec<_>, _>>()?;\n        let key = PendingForeignKeyReferenceTarget::Key(PendingForeignKeyTargetKey {\n            schema_key: schema_key.to_string(),\n            domain: Domain::exact_file(version_id.to_string(), false, file_id.map(str::to_string)),\n            pointer_group,\n            value,\n        });\n        Ok(self.fk_references.contains_key(&key))\n    }\n\n    #[cfg(test)]\n    fn has_fk_reference_to_identity(&self, identity: DomainRowIdentity) -> bool {\n        self.fk_references\n            .contains_key(&PendingForeignKeyReferenceTarget::StateSurfaceIdentity(\n                identity,\n            ))\n    }\n\n    #[cfg(test)]\n    fn has_fk_target(\n        &self,\n        schema_key: &str,\n        version_id: &str,\n        file_id: Option<&str>,\n        pointer_group: 
&[&str],\n        value: UniqueConstraintValue,\n    ) -> Result<bool, LixError> {\n        let pointer_group = pointer_group\n            .iter()\n            .map(|pointer| parse_json_pointer(pointer))\n            .collect::<Result<Vec<_>, _>>()?;\n        let key = PendingForeignKeyTargetKey {\n            schema_key: schema_key.to_string(),\n            domain: Domain::exact_file(version_id.to_string(), false, file_id.map(str::to_string)),\n            pointer_group,\n            value,\n        };\n        Ok(self.fk_targets.contains_key(&key))\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct PendingTombstone {\n    identity: DomainRowIdentity,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct PendingIdentityTarget {\n    identity: DomainRowIdentity,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct PendingForeignKeyTarget {\n    entity_id: EntityIdentity,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct PendingForeignKeyReference {\n    identity: DomainRowIdentity,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nstruct PendingUniqueKey {\n    schema_key: String,\n    domain: Domain,\n    pointer_group: Vec<Vec<String>>,\n    value: UniqueConstraintValue,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nstruct PendingUniqueConstraintScope {\n    schema_key: String,\n    domain: Domain,\n    pointer_group: Vec<Vec<String>>,\n}\n\nimpl From<&PendingUniqueKey> for PendingUniqueConstraintScope {\n    fn from(key: &PendingUniqueKey) -> Self {\n        Self {\n            schema_key: key.schema_key.clone(),\n            domain: key.domain.clone(),\n            pointer_group: key.pointer_group.clone(),\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nstruct PendingForeignKeyTargetKey {\n    schema_key: String,\n    domain: Domain,\n    pointer_group: Vec<Vec<String>>,\n    value: UniqueConstraintValue,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nenum 
PendingForeignKeyReferenceTarget {\n    Key(PendingForeignKeyTargetKey),\n    StateSurfaceIdentity(DomainRowIdentity),\n}\n\n/// Reject tombstones whose target rows are still referenced by other rows\n/// staged in the same transaction, via either state-surface identities or\n/// primary-key foreign-key values.\nfn validate_pending_delete_restrictions(\n    schema_catalog: &CatalogSnapshot,\n    pending_constraints: &PendingConstraintIndexes,\n) -> Result<(), LixError> {\n    for tombstone in &pending_constraints.tombstones {\n        let identity_targets = tombstone\n            .identity\n            .source_identities_that_can_reach()\n            .into_iter()\n            .map(PendingForeignKeyReferenceTarget::StateSurfaceIdentity)\n            .collect::<Vec<_>>();\n        reject_pending_delete_references(\n            &tombstone.identity,\n            &identity_targets,\n            pending_constraints.active_references_to_any(&identity_targets),\n        )?;\n\n        let Some((_, schema_plan)) = schema_catalog.plan_for_key(tombstone.identity.schema_key())\n        else {\n            continue;\n        };\n        if let Some(primary_key_paths) = schema_plan.primary_key.as_ref() {\n            let targets = tombstone\n                .identity\n                .domain()\n                .fk_source_domains_for_target()\n                .into_iter()\n                .map(|domain| {\n                    PendingForeignKeyReferenceTarget::Key(PendingForeignKeyTargetKey {\n                        schema_key: tombstone.identity.schema_key_owned(),\n                        domain,\n                        pointer_group: primary_key_paths.clone(),\n                        value: UniqueConstraintValue::from_entity_identity(\n                            tombstone.identity.entity_id(),\n                        ),\n                    })\n                })\n                .collect::<Vec<_>>();\n            reject_pending_delete_references(\n                &tombstone.identity,\n                &targets,\n                pending_constraints.active_references_to_any(&targets),\n            )?;\n        }\n    }\n    Ok(())\n}\n\nfn 
reject_pending_delete_references(\n    deleted_identity: &DomainRowIdentity,\n    targets: &[PendingForeignKeyReferenceTarget],\n    references: Vec<&PendingForeignKeyReference>,\n) -> Result<(), LixError> {\n    let Some(reference) = references.first() else {\n        return Ok(());\n    };\n    let target = targets\n        .first()\n        .expect(\"delete restriction callers provide at least one target\");\n    Err(LixError::new(\n        LixError::CODE_FOREIGN_KEY,\n        format!(\n            \"cannot delete '{}' row '{}' in version '{}' because pending row '{}' references it{}\",\n            deleted_identity.schema_key(),\n            deleted_identity.entity_id().as_json_array_text()?,\n            deleted_identity.domain().version_id(),\n            reference.identity.entity_id().as_json_array_text()?,\n            pending_foreign_key_reference_target_description(target)?\n        ),\n    ))\n}\n\nfn pending_foreign_key_reference_target_description(\n    target: &PendingForeignKeyReferenceTarget,\n) -> Result<String, LixError> {\n    match target {\n        PendingForeignKeyReferenceTarget::Key(target) => Ok(format!(\n            \" through '{}.{}' value {}\",\n            target.schema_key,\n            format_pointer_group(&target.pointer_group),\n            target.value.display()\n        )),\n        PendingForeignKeyReferenceTarget::StateSurfaceIdentity(target) => Ok(format!(\n            \" through '{}:{}'\",\n            target.schema_key(),\n            target.entity_id().as_json_array_text()?\n        )),\n    }\n}\n\nasync fn validate_committed_delete_restrictions(\n    input: &TransactionValidationInput<'_>,\n    schema_catalog: &CatalogSnapshot,\n    pending_constraints: &PendingConstraintIndexes,\n) -> Result<(), LixError> {\n    let mut state_batches =\n        BTreeMap::<StateDeleteRestrictionBatchKey, Vec<DomainRowIdentity>>::new();\n    for tombstone in &pending_constraints.tombstones {\n        let delete_plan = 
schema_catalog.delete_plan_for_key(tombstone.identity.schema_key());\n        if !delete_plan.has_committed_checks() {\n            continue;\n        }\n        for reference in delete_plan.foreign_key_references {\n            validate_committed_normal_delete_restriction(\n                input.live_state,\n                pending_constraints,\n                tombstone,\n                &reference.source_key,\n                &reference.foreign_key,\n            )\n            .await?;\n        }\n        for reference in delete_plan.state_foreign_key_references {\n            for source_domain in tombstone.identity.domain().fk_source_domains_for_target() {\n                state_batches\n                    .entry(StateDeleteRestrictionBatchKey {\n                        source_key: reference.source_key.clone(),\n                        source_domain: source_domain.with_file_scope(DomainFileScope::Any),\n                        foreign_key: reference.clone(),\n                    })\n                    .or_default()\n                    .push(tombstone.identity.clone());\n            }\n        }\n    }\n    validate_committed_state_surface_delete_restriction_batches(\n        input.live_state,\n        pending_constraints,\n        state_batches,\n    )\n    .await?;\n    Ok(())\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nstruct StateDeleteRestrictionBatchKey {\n    source_key: SchemaCatalogKey,\n    source_domain: Domain,\n    foreign_key: StateDeleteReferencePlan,\n}\n\nasync fn validate_file_descriptor_delete_restrictions(\n    input: &TransactionValidationInput<'_>,\n    pending_constraints: &PendingConstraintIndexes,\n) -> Result<(), LixError> {\n    for tombstone in &pending_constraints.tombstones {\n        if tombstone.identity.schema_key() != FILE_DESCRIPTOR_SCHEMA_KEY {\n            continue;\n        }\n        if !tombstone.identity.domain().is_exact_file(&None) {\n            continue;\n        }\n        let file_id = 
tombstone.identity.entity_id().as_single_string_owned()?;\n        for source_domain in tombstone\n            .identity\n            .domain()\n            .file_scoped_row_domains_for_file_descriptor_delete()\n        {\n            let rows = scan_committed_constraint_rows(\n                input.live_state,\n                &source_domain.with_exact_file_scope(Some(file_id.clone())),\n                Vec::new(),\n                Vec::new(),\n                false,\n            )\n            .await?;\n\n            for row in rows {\n                if pending_constraints.tombstones_identity(&row) || row.snapshot_content.is_none() {\n                    continue;\n                }\n                return Err(LixError::new(\n                    LixError::CODE_FOREIGN_KEY,\n                    format!(\n                        \"cannot delete file descriptor '{}' in version '{}' because committed row '{}' in schema '{}' is still scoped to that file\",\n                        file_id,\n                        tombstone.identity.domain().version_id(),\n                        row.entity_id.as_json_array_text()?,\n                        row.schema_key,\n                    ),\n                ));\n            }\n        }\n    }\n    Ok(())\n}\n\nasync fn validate_committed_normal_delete_restriction(\n    live_state: &dyn LiveStateReader,\n    pending_constraints: &PendingConstraintIndexes,\n    tombstone: &PendingTombstone,\n    source_key: &SchemaCatalogKey,\n    foreign_key: &ForeignKeyPlan,\n) -> Result<(), LixError> {\n    let Some(deleted_value) =\n        committed_deleted_row_value(live_state, tombstone, &foreign_key.referenced_properties)\n            .await?\n    else {\n        return Ok(());\n    };\n    for source_domain in tombstone.identity.domain().fk_source_domains_for_target() {\n        let rows = scan_committed_constraint_rows(\n            live_state,\n            &source_domain,\n            vec![source_key.schema_key.clone()],\n            
Vec::new(),\n            false,\n        )\n        .await?;\n\n        for row in rows {\n            if pending_constraints.tombstones_identity(&row) {\n                continue;\n            }\n            let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n                continue;\n            };\n            let snapshot = parse_committed_snapshot(&row, snapshot_content)?;\n            if UniqueConstraintValue::from_snapshot_non_null(\n                &snapshot,\n                &foreign_key.local_properties,\n            )\n            .as_ref()\n                == Some(&deleted_value)\n            {\n                return Err(committed_delete_restriction_error(\n                    &tombstone.identity,\n                    &row,\n                    &foreign_key.local_properties,\n                )?);\n            }\n        }\n    }\n    Ok(())\n}\n\nasync fn validate_committed_state_surface_delete_restriction_batches(\n    live_state: &dyn LiveStateReader,\n    pending_constraints: &PendingConstraintIndexes,\n    batches: BTreeMap<StateDeleteRestrictionBatchKey, Vec<DomainRowIdentity>>,\n) -> Result<(), LixError> {\n    for (batch, tombstones) in batches {\n        let rows = scan_committed_constraint_rows(\n            live_state,\n            &batch.source_domain,\n            vec![batch.source_key.schema_key.clone()],\n            Vec::new(),\n            false,\n        )\n        .await?;\n\n        for row in rows {\n            if pending_constraints.tombstones_identity(&row) {\n                continue;\n            }\n            let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n                continue;\n            };\n            let snapshot = parse_committed_snapshot(&row, snapshot_content)?;\n            let target_identity = state_surface_target_identity(\n                Domain::for_live_row(&row),\n                &batch.foreign_key.foreign_key,\n                &snapshot,\n            )?;\n           
 let Some(tombstone) = tombstones.iter().find(|tombstone| {\n                target_identity\n                    .reachable_target_identities()\n                    .contains(*tombstone)\n            }) else {\n                continue;\n            };\n            return Err(committed_delete_restriction_error(\n                tombstone,\n                &row,\n                &batch.foreign_key.foreign_key.local_properties(),\n            )?);\n        }\n    }\n    Ok(())\n}\n\nasync fn committed_deleted_row_value(\n    live_state: &dyn LiveStateReader,\n    tombstone: &PendingTombstone,\n    referenced_properties: &[Vec<String>],\n) -> Result<Option<UniqueConstraintValue>, LixError> {\n    let Some(row) = load_committed_constraint_row(\n        live_state,\n        tombstone.identity.domain(),\n        tombstone.identity.schema_key(),\n        tombstone.identity.entity_id_owned(),\n        true,\n    )\n    .await?\n    else {\n        return Ok(None);\n    };\n    let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n        return Ok(None);\n    };\n    let snapshot = parse_committed_snapshot(&row, snapshot_content)?;\n    Ok(UniqueConstraintValue::from_snapshot(\n        &snapshot,\n        referenced_properties,\n    ))\n}\n\nfn committed_delete_restriction_error(\n    deleted_identity: &DomainRowIdentity,\n    referencing_row: &MaterializedLiveStateRow,\n    local_properties: &[Vec<String>],\n) -> Result<LixError, LixError> {\n    Ok(LixError::new(\n        LixError::CODE_FOREIGN_KEY,\n        format!(\n            \"cannot delete '{}' row '{}' in version '{}' because committed row '{}' references it through {}\",\n            deleted_identity.schema_key(),\n            deleted_identity.entity_id().as_json_array_text()?,\n            deleted_identity.domain().version_id(),\n            referencing_row.entity_id.as_json_array_text()?,\n            format_pointer_group(local_properties)\n        ),\n    ))\n}\n\nfn parse_committed_snapshot(\n 
   row: &MaterializedLiveStateRow,\n    snapshot_content: &str,\n) -> Result<JsonValue, LixError> {\n    serde_json::from_str::<JsonValue>(snapshot_content).map_err(|error| {\n        LixError::new(\n            LixError::CODE_SCHEMA_VALIDATION,\n            format!(\n                \"committed snapshot_content for schema '{}' is invalid JSON: {error}\",\n                row.schema_key\n            ),\n        )\n    })\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct UnresolvedForeignKeyCheck {\n    source_identity: DomainRowIdentity,\n    source_schema_key: String,\n    source_pointer_group: Vec<Vec<String>>,\n    target: UnresolvedForeignKeyTarget,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nenum UnresolvedForeignKeyTarget {\n    Key(PendingForeignKeyTargetKey),\n    StateSurfaceIdentity(DomainRowIdentity),\n}\n\nfn validate_pending_foreign_keys(\n    pending_constraints: &PendingConstraintIndexes,\n    staged_snapshots: &[(PreparedValidationRow<'_>, &SchemaPlan, &JsonValue)],\n) -> Result<Vec<UnresolvedForeignKeyCheck>, LixError> {\n    let mut unresolved = Vec::new();\n    for (row, schema_plan, snapshot) in staged_snapshots {\n        for foreign_key in &schema_plan.foreign_keys {\n            let Some(local_value) = UniqueConstraintValue::from_snapshot_non_null(\n                snapshot,\n                &foreign_key.local_properties,\n            ) else {\n                continue;\n            };\n            if let Some(check) = validate_pending_normal_foreign_key(\n                *row,\n                foreign_key,\n                local_value,\n                pending_constraints,\n            )? {\n                unresolved.push(check);\n            }\n        }\n        for foreign_key in &schema_plan.state_foreign_keys {\n            if let Some(check) = validate_pending_state_surface_foreign_key(\n                *row,\n                foreign_key,\n                snapshot,\n                pending_constraints,\n            )? 
{\n                unresolved.push(check);\n            }\n        }\n    }\n    Ok(unresolved)\n}\n\nfn validate_pending_normal_foreign_key(\n    row: PreparedValidationRow<'_>,\n    foreign_key: &ForeignKeyPlan,\n    local_value: UniqueConstraintValue,\n    pending_constraints: &PendingConstraintIndexes,\n) -> Result<Option<UnresolvedForeignKeyCheck>, LixError> {\n    let key = PendingForeignKeyTargetKey {\n        schema_key: foreign_key.referenced_schema.schema_key.clone(),\n        domain: row.domain(),\n        pointer_group: foreign_key.referenced_properties.clone(),\n        value: local_value,\n    };\n    if pending_constraints.has_reachable_fk_target_key(&key) {\n        return Ok(None);\n    }\n    Ok(Some(UnresolvedForeignKeyCheck {\n        source_identity: row.domain_row_identity(),\n        source_schema_key: row.schema_key().to_string(),\n        source_pointer_group: foreign_key.local_properties.clone(),\n        target: UnresolvedForeignKeyTarget::Key(key),\n    }))\n}\n\nfn validate_pending_state_surface_foreign_key(\n    row: PreparedValidationRow<'_>,\n    foreign_key: &StateForeignKeyPlan,\n    snapshot: &JsonValue,\n    pending_constraints: &PendingConstraintIndexes,\n) -> Result<Option<UnresolvedForeignKeyCheck>, LixError> {\n    let local_properties = foreign_key.local_properties();\n    let target_identity = state_surface_target_identity(row.domain(), foreign_key, snapshot)?;\n    if pending_constraints.has_reachable_identity_target(&target_identity) {\n        return Ok(None);\n    }\n    Ok(Some(UnresolvedForeignKeyCheck {\n        source_identity: row.domain_row_identity(),\n        source_schema_key: row.schema_key().to_string(),\n        source_pointer_group: local_properties,\n        target: UnresolvedForeignKeyTarget::StateSurfaceIdentity(target_identity),\n    }))\n}\n\nasync fn validate_committed_foreign_keys(\n    input: &TransactionValidationInput<'_>,\n    pending_constraints: &PendingConstraintIndexes,\n    
unresolved_checks: &[UnresolvedForeignKeyCheck],\n) -> Result<Vec<UnresolvedForeignKeyCheck>, LixError> {\n    let mut still_unresolved = Vec::new();\n    for check in unresolved_checks {\n        let resolved = match &check.target {\n            UnresolvedForeignKeyTarget::Key(target) => {\n                committed_normal_foreign_key_target_exists(\n                    input.live_state,\n                    pending_constraints,\n                    target,\n                )\n                .await?\n            }\n            UnresolvedForeignKeyTarget::StateSurfaceIdentity(target_identity) => {\n                committed_state_surface_foreign_key_target_exists(\n                    input.live_state,\n                    pending_constraints,\n                    target_identity,\n                )\n                .await?\n            }\n        };\n        if !resolved {\n            still_unresolved.push(check.clone());\n        }\n    }\n    Ok(still_unresolved)\n}\n\nfn reject_unresolved_foreign_keys(\n    unresolved_checks: &[UnresolvedForeignKeyCheck],\n) -> Result<(), LixError> {\n    let Some(check) = unresolved_checks.first() else {\n        return Ok(());\n    };\n    Err(LixError::new(\n        LixError::CODE_FOREIGN_KEY,\n        format!(\n            \"foreign key on schema '{}' row '{}' via {} has no matching target in version '{}'{}\",\n            check.source_schema_key,\n            check.source_identity.entity_id().as_json_array_text()?,\n            format_pointer_group(&check.source_pointer_group),\n            check.source_identity.domain().version_id(),\n            unresolved_foreign_key_target_description(&check.target)?\n        ),\n    ))\n}\n\nfn unresolved_foreign_key_target_description(\n    target: &UnresolvedForeignKeyTarget,\n) -> Result<String, LixError> {\n    match target {\n        UnresolvedForeignKeyTarget::Key(target) => Ok(format!(\n            \" for target '{}.{}' value {}\",\n            target.schema_key,\n            
format_pointer_group(&target.pointer_group),\n            target.value.display()\n        )),\n        UnresolvedForeignKeyTarget::StateSurfaceIdentity(target) => Ok(format!(\n            \" for target '{}:{}'\",\n            target.schema_key(),\n            target.entity_id().as_json_array_text()?\n        )),\n    }\n}\n\nasync fn committed_normal_foreign_key_target_exists(\n    live_state: &dyn LiveStateReader,\n    pending_constraints: &PendingConstraintIndexes,\n    target: &PendingForeignKeyTargetKey,\n) -> Result<bool, LixError> {\n    for domain in target.domain.fk_target_domains() {\n        let rows = scan_committed_constraint_rows(\n            live_state,\n            &domain,\n            vec![target.schema_key.clone()],\n            Vec::new(),\n            false,\n        )\n        .await?;\n\n        for row in rows {\n            if pending_constraints.tombstones_identity(&row) {\n                continue;\n            }\n            if row.schema_key != target.schema_key {\n                continue;\n            }\n            let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n                continue;\n            };\n            let snapshot = parse_committed_snapshot(&row, snapshot_content)?;\n            if UniqueConstraintValue::from_snapshot(&snapshot, &target.pointer_group).as_ref()\n                == Some(&target.value)\n            {\n                return Ok(true);\n            }\n        }\n    }\n    Ok(false)\n}\n\nasync fn committed_state_surface_foreign_key_target_exists(\n    live_state: &dyn LiveStateReader,\n    
pending_constraints: &PendingConstraintIndexes,\n    target_identity: &DomainRowIdentity,\n) -> Result<bool, LixError> {\n    for candidate in target_identity.reachable_target_identities() {\n        let rows = scan_committed_constraint_rows(\n            live_state,\n            candidate.domain(),\n            vec![candidate.schema_key_owned()],\n            vec![candidate.entity_id_owned()],\n            false,\n        )\n        .await?;\n        for row in rows {\n            if pending_constraints.tombstones_identity(&row) {\n                continue;\n            }\n            if candidate.matches_parts(&Domain::for_live_row(&row), &row.schema_key, &row.entity_id)\n            {\n                return Ok(true);\n            }\n        }\n    }\n    Ok(false)\n}\n\nfn state_surface_target_identity(\n    source_domain: Domain,\n    foreign_key: &StateForeignKeyPlan,\n    snapshot: &JsonValue,\n) -> Result<DomainRowIdentity, LixError> {\n    let entity_id =\n        state_surface_local_json_value(snapshot, &foreign_key.entity_id_property, \"entity_id\")?;\n    let schema_key =\n        state_surface_local_value(snapshot, &foreign_key.schema_key_property, \"schema_key\")?;\n    let file_id =\n        state_surface_nullable_local_value(snapshot, &foreign_key.file_id_property, \"file_id\")?;\n    Ok(DomainRowIdentity::in_domain(\n        source_domain.with_exact_file_scope(file_id),\n        schema_key,\n        EntityIdentity::from_json_array_value(entity_id).map_err(|error| {\n            LixError::new(\n                LixError::CODE_FOREIGN_KEY,\n                format!(\"state-surface foreign key entity_id is invalid: {error}\"),\n            )\n        })?,\n    ))\n}\n\nfn state_surface_local_json_value<'a>(\n    snapshot: &'a JsonValue,\n    local_pointer: &[String],\n    state_address_part: &str,\n) -> Result<&'a JsonValue, LixError> {\n    state_surface_optional_local_json_value(snapshot, local_pointer)?.ok_or_else(|| {\n        LixError::new(\n       
     LixError::CODE_FOREIGN_KEY,\n            format!(\n                \"state-surface foreign key {state_address_part} at '{}' is missing\",\n                format_json_pointer(local_pointer)\n            ),\n        )\n    })\n}\n\nfn state_surface_local_value(\n    snapshot: &JsonValue,\n    local_pointer: &[String],\n    state_address_part: &str,\n) -> Result<String, LixError> {\n    state_surface_nullable_local_value(snapshot, local_pointer, state_address_part)?.ok_or_else(\n        || {\n            LixError::new(\n                LixError::CODE_FOREIGN_KEY,\n                format!(\n                    \"state-surface foreign key {state_address_part} at '{}' is missing\",\n                    format_json_pointer(local_pointer)\n                ),\n            )\n        },\n    )\n}\n\nfn state_surface_nullable_local_value(\n    snapshot: &JsonValue,\n    local_pointer: &[String],\n    state_address_part: &str,\n) -> Result<Option<String>, LixError> {\n    let Some(value) = json_pointer_get(snapshot, local_pointer) else {\n        return Err(LixError::new(\n            LixError::CODE_FOREIGN_KEY,\n            format!(\n                \"state-surface foreign key {state_address_part} at '{}' is missing\",\n                format_json_pointer(local_pointer)\n            ),\n        ));\n    };\n    if value.is_null() {\n        return Ok(None);\n    }\n    value\n        .as_str()\n        .map(|value| Some(value.to_string()))\n        .ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_FOREIGN_KEY,\n                format!(\n                    \"state-surface foreign key {state_address_part} at '{}' must be a string or null\",\n                    format_json_pointer(local_pointer)\n                ),\n            )\n        })\n}\n\nfn state_surface_optional_local_json_value<'a>(\n    snapshot: &'a JsonValue,\n    local_pointer: &[String],\n) -> Result<Option<&'a JsonValue>, LixError> {\n    let Some(value) = 
json_pointer_get(snapshot, local_pointer) else {\n        return Ok(None);\n    };\n    if value.is_null() {\n        return Ok(None);\n    }\n    Ok(Some(value))\n}\n\nasync fn validate_committed_unique_constraints(\n    input: &TransactionValidationInput<'_>,\n    pending_constraints: &PendingConstraintIndexes,\n) -> Result<(), LixError> {\n    let mut pending_by_scope = BTreeMap::<\n        PendingUniqueConstraintScope,\n        BTreeMap<UniqueConstraintValue, Vec<&EntityIdentity>>,\n    >::new();\n    for (key, pending_entity_id) in &pending_constraints.unique_values {\n        pending_by_scope\n            .entry(PendingUniqueConstraintScope::from(key))\n            .or_default()\n            .entry(key.value.clone())\n            .or_default()\n            .push(pending_entity_id);\n    }\n\n    for (scope, pending_values) in pending_by_scope {\n        let committed_rows = scan_committed_constraint_rows(\n            input.live_state,\n            &scope.domain,\n            vec![scope.schema_key.clone()],\n            Vec::new(),\n            false,\n        )\n        .await?;\n\n        for committed_row in committed_rows {\n            if !committed_row_is_in_exact_unique_scope(&committed_row, &scope) {\n                continue;\n            }\n            if pending_constraints.tombstones_identity(&committed_row) {\n                continue;\n            }\n            let Some(snapshot_content) = committed_row.snapshot_content.as_deref() else {\n                continue;\n            };\n            let snapshot =\n                serde_json::from_str::<JsonValue>(snapshot_content).map_err(|error| {\n                    LixError::new(\n                        LixError::CODE_SCHEMA_VALIDATION,\n                        format!(\n                            \"committed snapshot_content for schema '{}' is invalid JSON: {error}\",\n                            committed_row.schema_key\n                        ),\n                    )\n                
})?;\n            let Some(committed_value) =\n                UniqueConstraintValue::from_snapshot(&snapshot, &scope.pointer_group)\n            else {\n                continue;\n            };\n            let Some(pending_entity_ids) = pending_values.get(&committed_value) else {\n                continue;\n            };\n            for pending_entity_id in pending_entity_ids {\n                if committed_row.entity_id == **pending_entity_id {\n                    continue;\n                }\n                return Err(LixError::new(\n                    LixError::CODE_UNIQUE,\n                    format!(\n                        \"unique constraint violation on {}.{} for value {}: committed row '{}' conflicts with staged row '{}'\",\n                        scope.schema_key,\n                        format_pointer_group(&scope.pointer_group),\n                        committed_value.display(),\n                        committed_row.entity_id.as_json_array_text()?,\n                        pending_entity_id.as_json_array_text()?\n                    ),\n                ));\n            }\n        }\n    }\n    Ok(())\n}\n\nfn committed_row_is_in_exact_unique_scope(\n    row: &MaterializedLiveStateRow,\n    scope: &PendingUniqueConstraintScope,\n) -> bool {\n    // LiveStateReader may return serving projections such as global rows\n    // projected into a requested version. 
Constraint validation is root-local:
    // only rows authored in the exact version participate.
    scope.domain.contains(row) && row.schema_key == scope.schema_key
}

/// Stable, order-comparable encoding of one unique-constraint tuple.
/// Each component is rendered with `stable_unique_value`, so two snapshots
/// collide exactly when every constrained component encodes to the same
/// string.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct UniqueConstraintValue(Vec<String>);

impl UniqueConstraintValue {
    #[cfg(test)]
    fn string_values<const N: usize>(values: [&str; N]) -> Self {
        Self(
            values
                .into_iter()
                .map(|value| format!("{value:?}"))
                .collect(),
        )
    }

    fn from_entity_identity(identity: &EntityIdentity) -> Self {
        Self(
            identity
                .parts
                .iter()
                .map(|part| format!("{part:?}"))
                .collect(),
        )
    }

    fn from_snapshot(snapshot: &JsonValue, pointers: &[Vec<String>]) -> Option<Self> {
        let mut values = Vec::with_capacity(pointers.len());
        for pointer in pointers {
            let value = json_pointer_get(snapshot, pointer)?;
            values.push(stable_unique_value(value));
        }
        Some(Self(values))
    }

    /// Like `from_snapshot`, but treats JSON null as "does not participate":
    /// a tuple with any null component yields `None` instead of a comparable
    /// value.
    fn from_snapshot_non_null(snapshot: &JsonValue, pointers: &[Vec<String>]) -> Option<Self> {
        let mut values = Vec::with_capacity(pointers.len());
        for pointer in pointers {
            let value = json_pointer_get(snapshot, pointer)?;
            if value.is_null() {
                return None;
            }
            values.push(stable_unique_value(value));
        }
        Some(Self(values))
    }

    fn display(&self) -> String {
        if let [value] = self.0.as_slice() {
            return value.clone();
        }
        format!("({})", self.0.join(", "))
    }
}

/// Encodes a JSON value as a stable comparison string. Strings are quoted via
/// `Debug` formatting, so the string "1" never collides with the number 1;
/// arrays and objects fall back to canonical JSON text.
fn stable_unique_value(value: &JsonValue) -> String {
    match value {
        JsonValue::String(value) => format!("{value:?}"),
        JsonValue::Number(value) => value.to_string(),
        
JsonValue::Bool(value) => value.to_string(),\n        JsonValue::Null => \"null\".to_string(),\n        JsonValue::Array(_) | JsonValue::Object(_) => {\n            canonical_json_text(value).unwrap_or_else(|_| value.to_string())\n        }\n    }\n}\n\nfn format_pointer_group(group: &[Vec<String>]) -> String {\n    let pointers = group\n        .iter()\n        .map(|pointer| format_json_pointer(pointer))\n        .collect::<Vec<_>>();\n    if let [pointer] = pointers.as_slice() {\n        pointer.clone()\n    } else {\n        format!(\"({})\", pointers.join(\", \"))\n    }\n}\n\nfn primary_key_identity_error(\n    row: PreparedValidationRow<'_>,\n    primary_key_paths: &[Vec<String>],\n    error: EntityIdentityError,\n) -> LixError {\n    let reason = match error {\n        EntityIdentityError::EmptyPrimaryKey => \"empty x-lix-primary-key\".to_string(),\n        EntityIdentityError::EmptyPrimaryKeyPath { index } => {\n            format!(\"empty x-lix-primary-key pointer at index {index}\")\n        }\n        EntityIdentityError::EmptyPrimaryKeyValue { index } => {\n            let pointer = primary_key_paths\n                .get(index)\n                .map(|path| format_json_pointer(path))\n                .unwrap_or_else(|| format!(\"index {index}\"));\n            format!(\"empty value at primary-key pointer '{pointer}'\")\n        }\n        EntityIdentityError::MissingPrimaryKeyValue { index } => {\n            let pointer = format_json_pointer(&primary_key_paths[index]);\n            format!(\"missing value at primary-key pointer '{pointer}'\")\n        }\n        EntityIdentityError::UnsupportedPrimaryKeyValue { index } => {\n            let pointer = format_json_pointer(&primary_key_paths[index]);\n            format!(\"non-string value at primary-key pointer '{pointer}'\")\n        }\n        EntityIdentityError::InvalidEncodedEntityIdentity => {\n            \"invalid encoded entity identity\".to_string()\n        }\n    };\n    LixError::new(\n     
   LixError::CODE_UNIQUE,\n        format!(\n            \"primary-key constraint violation on schema '{}': {reason}\",\n            row.schema_key()\n        ),\n    )\n}\n\nfn validate_foreign_key_definition(\n    catalog: &CatalogSnapshot,\n    source_key: &SchemaCatalogKey,\n    source_schema: &JsonValue,\n    foreign_key: &ForeignKeyPlan,\n) -> Result<(), LixError> {\n    for pointer in &foreign_key.local_properties {\n        validate_schema_field_pointer(source_schema, pointer).map_err(|detail| {\n            LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\n                    \"foreign key on schema '{}' references missing local property '{}': {detail}\",\n                    source_key.schema_key,\n                    format_json_pointer(pointer)\n                ),\n            )\n        })?;\n    }\n\n    if foreign_key.referenced_schema.schema_key == STATE_SURFACE_SCHEMA_KEY {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\n                \"foreign key on schema '{}' must not reference schemaKey 'lix_state'; use x-lix-state-foreign-keys with pointers ordered as [entity_id, schema_key, file_id]\",\n                source_key.schema_key\n            ),\n        ));\n    }\n\n    let target_plan = catalog\n        .plan(foreign_key.referenced_plan_id)\n        .ok_or_else(|| {\n            LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\n                    \"foreign key on schema '{}' references missing bound schema plan '{}'\",\n                    source_key.schema_key, foreign_key.referenced_schema.schema_key,\n                ),\n            )\n        })?;\n    let target_schema = target_plan.schema.as_ref();\n    if target_plan.key != foreign_key.referenced_schema {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\n                \"foreign key on 
schema '{}' is bound to schema '{}' but declares schema '{}'\",\n                source_key.schema_key,\n                target_plan.key.schema_key,\n                foreign_key.referenced_schema.schema_key,\n            ),\n        ));\n    }\n\n    for pointer in &foreign_key.referenced_properties {\n        validate_schema_field_pointer(target_schema, pointer).map_err(|detail| {\n            LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\n                    \"foreign key on schema '{}' references missing target property '{}.{}': {detail}\",\n                    source_key.schema_key,\n                    foreign_key.referenced_schema.schema_key,\n                    format_json_pointer(pointer)\n                ),\n            )\n        })?;\n    }\n\n    if !referenced_properties_are_keyed(target_plan, &foreign_key.referenced_properties) {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\n                \"foreign key on schema '{}' references '{}.{}', but referenced properties must match the target primary key or a unique constraint\",\n                source_key.schema_key,\n                foreign_key.referenced_schema.schema_key,\n                format_pointer_group(&foreign_key.referenced_properties)\n            ),\n        ));\n    }\n\n    Ok(())\n}\n\nfn validate_state_foreign_key_definition(\n    source_key: &SchemaCatalogKey,\n    source_schema: &JsonValue,\n    foreign_key: &StateForeignKeyPlan,\n) -> Result<(), LixError> {\n    let local_properties = foreign_key.local_properties();\n    for pointer in &local_properties {\n        validate_schema_field_pointer(source_schema, pointer).map_err(|detail| {\n            LixError::new(\n                LixError::CODE_SCHEMA_DEFINITION,\n                format!(\n                    \"state foreign key on schema '{}' references missing local property '{}': {detail}\",\n                    
source_key.schema_key,\n                    format_json_pointer(pointer)\n                ),\n            )\n        })?;\n    }\n    Ok(())\n}\n\nfn validate_schema_field_pointer(schema: &JsonValue, pointer: &[String]) -> Result<(), String> {\n    if pointer.is_empty() {\n        return Err(\"empty pointer does not name a field\".to_string());\n    }\n    let mut current = schema;\n    for segment in pointer {\n        let properties = current\n            .get(\"properties\")\n            .and_then(JsonValue::as_object)\n            .ok_or_else(|| {\n                format!(\n                    \"schema segment before '{}' has no object properties\",\n                    segment\n                )\n            })?;\n        current = properties\n            .get(segment)\n            .ok_or_else(|| format!(\"property '{}' does not exist\", segment))?;\n    }\n    Ok(())\n}\n\nfn referenced_properties_are_keyed(\n    target_plan: &SchemaPlan,\n    referenced_properties: &[Vec<String>],\n) -> bool {\n    if let Some(primary_key) = target_plan.primary_key.as_ref() {\n        if primary_key == referenced_properties {\n            return true;\n        }\n    }\n    target_plan\n        .uniques\n        .iter()\n        .any(|unique_group| unique_group == referenced_properties)\n}\n\nfn validate_foreign_key_definitions(catalog: &CatalogSnapshot) -> Result<(), LixError> {\n    for plan in catalog.plans() {\n        for foreign_key in &plan.foreign_keys {\n            validate_foreign_key_definition(catalog, &plan.key, plan.schema.as_ref(), foreign_key)?;\n        }\n        for foreign_key in &plan.state_foreign_keys {\n            validate_state_foreign_key_definition(&plan.key, plan.schema.as_ref(), foreign_key)?;\n        }\n    }\n    Ok(())\n}\n\n#[cfg(test)]\nfn validate_pending_registered_schema(\n    row: PreparedValidationRow<'_>,\n    registered_schema_definition: &JsonValue,\n) -> Result<(SchemaKey, JsonValue), LixError> {\n    let snapshot_content = 
row.snapshot_content().ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            \"registered schema write requires snapshot_content\",\n        )\n    })?;\n    let snapshot = serde_json::from_str::<JsonValue>(snapshot_content).map_err(|error| {\n        LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\"pending registered schema snapshot_content is invalid JSON: {error}\"),\n        )\n    })?;\n    if !snapshot.get(\"value\").is_some_and(JsonValue::is_object) {\n        validate_lix_schema(registered_schema_definition, &snapshot)?;\n    }\n    // A registered-schema row stores the schema definition under `value`.\n    // Validate both layers: the outer row must match the builtin\n    // `lix_registered_schema` schema, and the inner definition must be a valid\n    // Lix schema before it can extend the transaction-visible catalog.\n    let (key, schema) = schema_from_registered_snapshot(&snapshot)?;\n    reject_seed_schema_registration(&key)?;\n    validate_lix_schema_definition(&schema)?;\n    validate_lix_schema(registered_schema_definition, &snapshot)?;\n    Ok((key, schema))\n}\n\n#[cfg(test)]\nfn reject_seed_schema_registration(key: &SchemaKey) -> Result<(), LixError> {\n    if is_seed_schema_key(&key.schema_key) {\n        return Err(LixError::new(\n            LixError::CODE_SCHEMA_DEFINITION,\n            format!(\n                \"schema '{}' is a system schema and cannot be registered at runtime\",\n                key.schema_key\n            ),\n        ));\n    }\n    Ok(())\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::atomic::{AtomicUsize, Ordering};\n\n    use async_trait::async_trait;\n    use serde_json::json;\n\n    use super::*;\n    use crate::live_state::{LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow};\n    use crate::schema::{schema_key_from_definition, seed_schema_definition};\n    use crate::transaction::types::{StageJson, 
TransactionJson};\n\n    struct EmptyLiveStateReader;\n\n    fn test_stage_json(value: &str) -> StageJson {\n        let parsed = test_json_text(value).expect(\"test staged JSON should parse\");\n        crate::transaction::types::stage_json_from_value(\n            TransactionJson::from_value_for_test(parsed),\n            \"test staged JSON\",\n        )\n        .expect(\"test staged JSON should prepare\")\n    }\n\n    fn test_json_text(value: &str) -> Result<serde_json::Value, LixError> {\n        serde_json::from_str::<serde_json::Value>(value).map_err(|error| {\n            LixError::new(\n                LixError::CODE_UNKNOWN,\n                format!(\"test staged JSON is invalid JSON: {error}\"),\n            )\n        })\n    }\n\n    fn test_plan_from_schema(schema: JsonValue) -> &'static SchemaPlan {\n        let key = schema_key_from_definition(&schema).expect(\"test schema should have key\");\n        let visible_schemas = match key.schema_key.as_str() {\n            \"fk_child_schema\" => vec![fk_parent_schema(), schema],\n            FILE_DESCRIPTOR_SCHEMA_KEY => vec![directory_descriptor_schema(), schema],\n            DIRECTORY_DESCRIPTOR_SCHEMA_KEY => vec![schema],\n            _ => vec![schema],\n        };\n        let catalog = Box::leak(Box::new(\n            CatalogSnapshot::from_visible_schemas(&visible_schemas)\n                .expect(\"test schema plan catalog should build\"),\n        ));\n        catalog\n            .plan_for_key(&key.schema_key)\n            .expect(\"test schema key should resolve\")\n            .1\n    }\n\n    #[async_trait]\n    impl LiveStateReader for EmptyLiveStateReader {\n        async fn scan_rows(\n            &self,\n            request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(test_file_descriptor_rows()\n                .into_iter()\n                .filter(|row| live_state_row_matches_scan(row, request))\n                .collect())\n    
    }\n\n        async fn load_row(\n            &self,\n            request: &LiveStateRowRequest,\n        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n            Ok(test_file_descriptor_rows()\n                .into_iter()\n                .find(|row| live_state_row_matches_load(row, request)))\n        }\n    }\n\n    fn validation_input<'a>(\n        staged_writes: &'a PreparedWriteSet,\n        visible_schemas: &'a [JsonValue],\n    ) -> TransactionValidationInput<'a> {\n        let catalog = Box::leak(Box::new(\n            catalog_from_transaction_parts_unchecked(staged_writes, visible_schemas)\n                .expect(\"test schema catalog should build\"),\n        ));\n        let validation_set = Box::leak(Box::new(staged_writes.validation_set_for_tests()));\n        TransactionValidationInput::new(validation_set, catalog, &EmptyLiveStateReader)\n    }\n\n    fn catalog_from_transaction_input<'a>(\n        input: &'a TransactionValidationInput<'a>,\n    ) -> Result<&'a CatalogSnapshot, LixError> {\n        validate_foreign_key_definitions(input.schema_catalog)?;\n        Ok(input.schema_catalog)\n    }\n\n    fn catalog_from_transaction_parts(\n        staged_writes: &PreparedWriteSet,\n        visible_schemas: &[JsonValue],\n    ) -> Result<CatalogSnapshot, LixError> {\n        let catalog = catalog_from_transaction_parts_unchecked(staged_writes, visible_schemas)?;\n        let mut pending_keys =\n            BTreeMap::<SchemaCatalogKey, crate::entity_identity::EntityIdentity>::new();\n        for row in staged_writes\n            .validation_rows()\n            .filter(|row| row.schema_key() == REGISTERED_SCHEMA_KEY)\n        {\n            let snapshot_content = row.snapshot_content().ok_or_else(|| {\n                LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    \"registered schema write requires snapshot_content\",\n                )\n            })?;\n            let snapshot =\n            
    serde_json::from_str::<JsonValue>(snapshot_content).map_err(|error| {\n                    LixError::new(\n                        LixError::CODE_SCHEMA_DEFINITION,\n                        format!(\n                            \"pending registered schema snapshot_content is invalid JSON: {error}\"\n                        ),\n                    )\n                })?;\n            let (key, _) = schema_from_registered_snapshot(&snapshot)?;\n            let catalog_key = SchemaCatalogKey::from_schema_key(key);\n            if let Some(existing_entity_id) =\n                pending_keys.insert(catalog_key.clone(), row.entity_id().clone())\n            {\n                return Err(LixError::new(\n                    LixError::CODE_SCHEMA_DEFINITION,\n                    format!(\n                        \"duplicate pending registered schema '{}' in transaction: rows '{}' and '{}'\",\n                        catalog_key.schema_key,\n                        existing_entity_id.as_json_array_text()?,\n                        row.entity_id().as_json_array_text()?\n                    ),\n                ));\n            }\n        }\n        validate_foreign_key_definitions(&catalog)?;\n        Ok(catalog)\n    }\n\n    fn catalog_from_transaction_parts_unchecked(\n        staged_writes: &PreparedWriteSet,\n        visible_schemas: &[JsonValue],\n    ) -> Result<CatalogSnapshot, LixError> {\n        let mut catalog = CatalogSnapshot::from_visible_schemas(visible_schemas)?;\n        for row in staged_writes\n            .validation_rows()\n            .filter(|row| row.schema_key() == REGISTERED_SCHEMA_KEY)\n        {\n            let registered_schema_definition = catalog\n                .schema(REGISTERED_SCHEMA_KEY)\n                .cloned()\n                .ok_or_else(|| {\n                    LixError::new(\n                        LixError::CODE_SCHEMA_DEFINITION,\n                        \"lix_registered_schema schema is not visible to this transaction\",\n 
                   )\n                })?;\n            let (key, schema) =\n                validate_pending_registered_schema(row, &registered_schema_definition)?;\n            catalog.insert_schema_for_domain(row.domain(), key, schema)?;\n        }\n        Ok(catalog)\n    }\n\n    struct StaticLiveStateReader {\n        rows: Vec<MaterializedLiveStateRow>,\n    }\n\n    #[async_trait]\n    impl LiveStateReader for StaticLiveStateReader {\n        async fn scan_rows(\n            &self,\n            request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            Ok(self\n                .rows\n                .iter()\n                .cloned()\n                .chain(test_file_descriptor_rows())\n                .filter(|row| live_state_row_matches_scan(row, request))\n                .collect())\n        }\n\n        async fn load_row(\n            &self,\n            request: &LiveStateRowRequest,\n        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n            Ok(self\n                .rows\n                .iter()\n                .cloned()\n                .chain(test_file_descriptor_rows())\n                .find(|row| {\n                    row.schema_key == request.schema_key\n                        && row.version_id == request.version_id\n                        && row.entity_id == request.entity_id\n                        && request.file_id.matches(row.file_id.as_ref())\n                }))\n        }\n    }\n\n    struct OverlayingStaticLiveStateReader {\n        rows: Vec<MaterializedLiveStateRow>,\n    }\n\n    #[async_trait]\n    impl LiveStateReader for OverlayingStaticLiveStateReader {\n        async fn scan_rows(\n            &self,\n            request: &LiveStateScanRequest,\n        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {\n            let rows = self\n                .rows\n                .iter()\n                .cloned()\n                
.chain(test_file_descriptor_rows())\n                .filter(|row| live_state_row_matches_scan(row, request))\n                .collect::<Vec<_>>();\n            if request.filter.untracked.is_some() {\n                return Ok(rows);\n            }\n            let tracked_rows = rows\n                .iter()\n                .filter(|row| !row.untracked)\n                .cloned()\n                .collect::<Vec<_>>();\n            let untracked_rows = rows\n                .into_iter()\n                .filter(|row| row.untracked)\n                .collect::<Vec<_>>();\n            Ok(overlay_untracked_rows_for_test(\n                tracked_rows,\n                untracked_rows,\n            ))\n        }\n\n        async fn load_row(\n            &self,\n            request: &LiveStateRowRequest,\n        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n            Ok(self\n                .scan_rows(&LiveStateScanRequest {\n                    filter: LiveStateFilter {\n                        schema_keys: vec![request.schema_key.clone()],\n                        entity_ids: vec![request.entity_id.clone()],\n                        version_ids: vec![request.version_id.clone()],\n                        file_ids: vec![request.file_id.clone()],\n                        ..Default::default()\n                    },\n                    ..Default::default()\n                })\n                .await?\n                .into_iter()\n                .next())\n        }\n    }\n\n    fn overlay_untracked_rows_for_test(\n        tracked_rows: Vec<MaterializedLiveStateRow>,\n        untracked_rows: Vec<MaterializedLiveStateRow>,\n    ) -> Vec<MaterializedLiveStateRow> {\n        let mut rows_by_identity = BTreeMap::new();\n        for row in tracked_rows {\n            rows_by_identity.insert(LiveStateRowIdentity::from_row(&row), row);\n        }\n        for row in untracked_rows {\n            rows_by_identity.insert(LiveStateRowIdentity::from_row(&row), 
row);
        }
        rows_by_identity.into_values().collect()
    }

    /// Unlike `EmptyLiveStateReader` and `StaticLiveStateReader` above, the
    /// `Strict*` readers do not mix in the shared `test_file_descriptor_rows()`
    /// fixture: they serve exactly the rows a test provides (or none at all).
    struct StrictEmptyLiveStateReader;

    #[async_trait]
    impl LiveStateReader for StrictEmptyLiveStateReader {
        async fn scan_rows(
            &self,
            _request: &LiveStateScanRequest,
        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {
            Ok(Vec::new())
        }

        async fn load_row(
            &self,
            _request: &LiveStateRowRequest,
        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {
            Ok(None)
        }
    }

    struct StrictStaticLiveStateReader {
        rows: Vec<MaterializedLiveStateRow>,
    }

    #[async_trait]
    impl LiveStateReader for StrictStaticLiveStateReader {
        async fn scan_rows(
            &self,
            request: &LiveStateScanRequest,
        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {
            Ok(self
                .rows
                .iter()
                .filter(|row| live_state_row_matches_scan(row, request))
                .cloned()
                .collect())
        }

        async fn load_row(
            &self,
            request: &LiveStateRowRequest,
        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {
            Ok(self
                .rows
                .iter()
                .find(|row| live_state_row_matches_load(row, request))
                .cloned())
        }
    }

    /// Counts `scan_rows` invocations so tests can assert how often committed
    /// state is consulted during validation.
    struct CountingStaticLiveStateReader {
        rows: Vec<MaterializedLiveStateRow>,
        scan_count: AtomicUsize,
    }

    #[async_trait]
    impl LiveStateReader for CountingStaticLiveStateReader {
        async fn scan_rows(
            &self,
            request: &LiveStateScanRequest,
        ) -> Result<Vec<MaterializedLiveStateRow>, LixError> {
            self.scan_count.fetch_add(1, Ordering::Relaxed);
            Ok(self
                .rows
                .iter()
 
               .cloned()\n                .chain(test_file_descriptor_rows())\n                .filter(|row| live_state_row_matches_scan(row, request))\n                .collect())\n        }\n\n        async fn load_row(\n            &self,\n            request: &LiveStateRowRequest,\n        ) -> Result<Option<MaterializedLiveStateRow>, LixError> {\n            Ok(self\n                .rows\n                .iter()\n                .cloned()\n                .chain(test_file_descriptor_rows())\n                .find(|row| live_state_row_matches_load(row, request)))\n        }\n    }\n\n    #[test]\n    fn schema_catalog_indexes_visible_schemas_by_key_and_version() {\n        let visible_schemas = vec![json!({\n            \"x-lix-key\": \"visible_schema\",\n            \"type\": \"object\",\n        })];\n        let staged_writes = empty_staged_write_set();\n        let input = validation_input(&staged_writes, &visible_schemas);\n\n        let catalog = catalog_from_transaction_input(&input).expect(\"schema catalog should build\");\n\n        assert_eq!(catalog.len(), 1);\n        assert!(catalog.contains(\"visible_schema\"));\n    }\n\n    #[test]\n    fn schema_catalog_includes_pending_registered_schema_rows() {\n        let visible_schemas = vec![\n            registered_schema(),\n            json!({\n                \"x-lix-key\": \"visible_schema\",\n                \"type\": \"object\",\n            }),\n        ];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![pending_registered_schema_row(\"pending_schema\")],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let input = validation_input(&staged_writes, &visible_schemas);\n\n        let catalog = catalog_from_transaction_input(&input).expect(\"schema catalog should build\");\n\n        assert_eq!(catalog.len(), 3);\n        assert!(catalog.contains(\"visible_schema\"));\n        
assert!(catalog.contains("pending_schema"));
    }

    #[test]
    fn schema_catalog_rejects_pending_schema_duplicate_of_visible_identity() {
        let visible_schemas = vec![
            registered_schema(),
            json!({
                "x-lix-key": "same_schema",
                "type": "object",
                "properties": {
                    "old": { "type": "string" }
                }
            }),
        ];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![pending_registered_schema_row("same_schema")],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };

        let error = catalog_from_transaction_parts_unchecked(&staged_writes, &visible_schemas)
            .expect_err("pending schema must not override a visible domain fact");

        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);
        assert!(error.message.contains("more than one schema domain"));
    }

    #[test]
    fn pending_registered_schema_requires_snapshot_content() {
        let mut row = pending_registered_schema_row("missing_snapshot");
        row.snapshot = None;

        let error = validate_pending_registered_schema(
            PreparedValidationRow::State(&row),
            &registered_schema(),
        )
        .expect_err("registered schema writes require snapshot_content");

        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);
    }

    #[test]
    fn pending_registered_schema_rejects_invalid_snapshot_json() {
        let error =
            test_json_text("{not-json").expect_err("invalid JSON should fail before validation");

        assert_eq!(error.code, LixError::CODE_UNKNOWN);
    }

    #[test]
    fn pending_registered_schema_uses_builtin_schema_for_outer_value_shape() {
        let mut row = pending_registered_schema_row("missing_value");
        row.snapshot = Some(test_stage_json(&json!({}).to_string()));

        let error = validate_pending_registered_schema(
            PreparedValidationRow::State(&row),
            &registered_schema(),
        )
        .expect_err("builtin lix_registered_schema validation should fail");

        assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION);
    }

    #[test]
    fn pending_registered_schema_rejects_malformed_nested_lix_schema_definition() {
        let mut row = pending_registered_schema_row("bad_schema_definition");
        row.snapshot = Some(test_stage_json(
            &json!({
                "value": {
                    "x-lix-key": "bad_schema_definition",
                    "x-lix-primary-key": ["id"],
                    "type": "object",
                    "properties": {
                        "id": { "type": "string" }
                    },
                    "required": ["id"],
                    "additionalProperties": false,
                }
            })
            .to_string(),
        ));

        let error = validate_pending_registered_schema(
            PreparedValidationRow::State(&row),
            &registered_schema(),
        )
        .expect_err("nested Lix schema definition should be rejected");

        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);
    }

    #[test]
    fn schema_catalog_rejects_duplicate_pending_registered_schema_identity() {
        let mut duplicate = pending_registered_schema_row("duplicate_schema");
        duplicate.entity_id = registered_schema_entity_id("duplicate_schema_duplicate");
        let staged_writes = PreparedWriteSet {
            state_rows: vec![pending_registered_schema_row("duplicate_schema"), duplicate],
            ..empty_staged_write_set()
        };
        let visible_schemas = vec![registered_schema()];

        let error = catalog_from_transaction_parts(&staged_writes, &visible_schemas)
            .expect_err("duplicate pending schema keys should fail");

        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);
    }

    #[test]
    fn schema_catalog_allows_pending_foreign_key_to_pending_schema() {
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                pending_registered_schema_from_definition(fk_parent_schema()),
                pending_registered_schema_from_definition(fk_child_schema()),
            ],
            ..empty_staged_write_set()
        };
        let visible_schemas = vec![registered_schema()];
        let input = validation_input(&staged_writes, &visible_schemas);

        let catalog = catalog_from_transaction_input(&input)
            .expect("pending parent schema should satisfy pending child foreign key");

        assert!(catalog.contains("fk_parent_schema"));
        assert!(catalog.contains("fk_child_schema"));
    }

    #[test]
    fn schema_catalog_rejects_foreign_key_missing_target_schema() {
        let staged_writes = PreparedWriteSet {
            state_rows: vec![pending_registered_schema_from_definition(fk_child_schema())],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let visible_schemas = vec![registered_schema()];

        let error = catalog_from_transaction_parts(&staged_writes, &visible_schemas)
            .expect_err("missing referenced schema should fail");

        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);
    }

    #[test]
    fn schema_catalog_rejects_foreign_key_missing_local_field() {
        let mut child = fk_child_schema();
        child["x-lix-foreign-keys"][0]["properties"] = json!(["/missing_parent_id"]);
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                pending_registered_schema_from_definition(fk_parent_schema()),
                pending_registered_schema_from_definition(child),
            ],
            ..empty_staged_write_set()
        };
        let visible_schemas = vec![registered_schema()];

        let error = catalog_from_transaction_parts(&staged_writes, &visible_schemas)
            .expect_err("missing local FK field should fail");

        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);
    }

    #[test]
    fn schema_catalog_rejects_foreign_key_missing_referenced_field() {
        let mut child = fk_child_schema();
        child["x-lix-foreign-keys"][0]["references"]["properties"] = json!(["/missing_id"]);
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                pending_registered_schema_from_definition(fk_parent_schema()),
                pending_registered_schema_from_definition(child),
            ],
            ..empty_staged_write_set()
        };
        let visible_schemas = vec![registered_schema()];

        let error = catalog_from_transaction_parts(&staged_writes, &visible_schemas)
            .expect_err("missing referenced FK field should fail");

        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);
    }

    #[test]
    fn schema_catalog_rejects_foreign_key_to_non_unique_target_field() {
        let mut parent = fk_parent_schema();
        parent["properties"]["name"] = json!({ "type": "string" });
        let mut child = fk_child_schema();
        child["x-lix-foreign-keys"][0]["references"]["properties"] = json!(["/name"]);
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                pending_registered_schema_from_definition(parent),
                pending_registered_schema_from_definition(child),
            ],
            ..empty_staged_write_set()
        };
        let visible_schemas = vec![registered_schema()];

        let error = catalog_from_transaction_parts(&staged_writes, &visible_schemas)
            .expect_err("FK target must be primary-key or unique");

        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);
    }

    #[test]
    fn schema_catalog_allows_state_surface_foreign_key_target() {
        let staged_writes = PreparedWriteSet {
            state_rows: vec![pending_registered_schema_from_definition(
                state_surface_ref_schema(),
            )],
            ..empty_staged_write_set()
        };
        let visible_schemas = vec![registered_schema()];
        let input = validation_input(&staged_writes, &visible_schemas);

        let catalog = catalog_from_transaction_input(&input)
            .expect("x-lix-state-foreign-keys should validate as a state-surface FK target");

        assert!(catalog.contains("state_surface_ref_schema"));
    }

    #[test]
    fn schema_catalog_rejects_normal_foreign_key_to_lix_state() {
        let mut schema = fk_child_schema();
        schema["x-lix-foreign-keys"][0]["properties"] = json!(["/parent_id"]);
        schema["x-lix-foreign-keys"][0]["references"] = json!({
            "schemaKey": "lix_state",
            "properties": ["/entity_id"]
        });
        let staged_writes = PreparedWriteSet {
            state_rows: vec![pending_registered_schema_from_definition(schema)],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let visible_schemas = vec![registered_schema()];

        let error = catalog_from_transaction_parts(&staged_writes, &visible_schemas)
            .expect_err("normal FK must not use fake lix_state schema key");

        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);
        assert!(
            error.message.contains("x-lix-state-foreign-keys"),
            "unexpected error: {error:?}"
        );
    }

    #[test]
    fn schema_catalog_rejects_state_surface_foreign_key_without_full_address_tuple() {
        let mut schema = state_surface_ref_schema();
        schema["x-lix-state-foreign-keys"][0] = json!(["/target_entity_id"]);
        let staged_writes = PreparedWriteSet {
            state_rows: vec![pending_registered_schema_from_definition(schema)],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let visible_schemas = vec![registered_schema()];

        let error = catalog_from_transaction_parts_unchecked(&staged_writes, &visible_schemas)
            .expect_err("state FK target must include entity_id, schema_key, and file_id");

        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);
        assert!(
            error.message.contains("[entity_id, schema_key, file_id]"),
            "unexpected error: {error:?}"
        );
    }

    #[tokio::test]
    async fn validation_rejects_unknown_schema_key() {
        let visible_schemas = vec![key_value_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![staged_row("unknown_schema", Some(json!({}).to_string()))],
            ..empty_staged_write_set()
        };

        let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect_err("unknown schema_key should fail");

        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);
    }

    #[tokio::test]
    async fn validation_checks_schema_existence_for_tombstones() {
        let visible_schemas = vec![key_value_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![staged_row("unknown_schema", None)],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };

        let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect_err("tombstone with unknown schema should fail");

        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);
    }

    #[tokio::test]
    async fn validation_allows_pending_registered_schema_to_validate_later_rows() {
        let visible_schemas = vec![key_value_schema(), registered_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                pending_registered_schema_row("pending_schema"),
                staged_row(
                    "pending_schema",
                    Some(json!({ "id": "entity-1" }).to_string()),
                ),
            ],
            ..empty_staged_write_set()
        };

        validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect("pending registered schema should be visible to later staged rows");
    }

    #[tokio::test]
    async fn validation_rejects_tracked_row_using_pending_untracked_schema_definition() {
        let visible_schemas = vec![registered_schema()];
        let mut untracked_schema = pending_registered_schema_row("untracked_only_schema");
        mark_prepared_row_untracked(&mut untracked_schema);
        let mut tracked_row = staged_row(
            "untracked_only_schema",
            Some(json!({ "id": "row-1" }).to_string()),
        );
        tracked_row.entity_id = EntityIdentity::single("row-1");
        let staged_writes = PreparedWriteSet {
            state_rows: vec![untracked_schema, tracked_row],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };

        let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect_err("tracked rows must not validate against untracked schema definitions");

        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);
    }

    #[tokio::test]
    async fn validation_validates_snapshot_content_against_schema() {
        let visible_schemas = vec![key_value_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![staged_row(
                "lix_key_value",
                Some(json!({ "key": "k" }).to_string()),
            )],
            ..empty_staged_write_set()
        };

        let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect_err("missing required snapshot field should fail");

        assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION);
    }

    #[tokio::test]
    async fn validation_rejects_invalid_snapshot_json() {
        let error = test_json_text("{not-json")
            .expect_err("invalid snapshot JSON should fail before validation");

        assert_eq!(error.code, LixError::CODE_UNKNOWN);
    }

    #[tokio::test]
    async fn validation_skips_snapshot_validation_for_tombstones() {
        let visible_schemas = vec![key_value_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![staged_row("lix_key_value", None)],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };

        validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect("tombstone should only require schema existence");
    }

    #[tokio::test]
    async fn validation_rejects_missing_file_owner_reference() {
        let visible_schemas = vec![unique_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![unique_row("post-1", "hello-world", "first")],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };

        let error =
            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
                &staged_writes,
                &visible_schemas,
                &StrictEmptyLiveStateReader,
            ))
            .await
            .expect_err("non-null file_id should require a file descriptor");

        assert_eq!(error.code, LixError::CODE_FILE_NOT_FOUND);
    }

    #[tokio::test]
    async fn validation_allows_pending_file_owner_reference() {
        let visible_schemas = vec![
            unique_schema(),
            file_descriptor_schema(),
            directory_descriptor_schema(),
        ];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                staged_file_descriptor_row("file-a", "version-a"),
                unique_row("post-1", "hello-world", "first"),
            ],
            ..empty_staged_write_set()
        };

        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
            &staged_writes,
            &visible_schemas,
            &StrictEmptyLiveStateReader,
        ))
        .await
        .expect("same-transaction file descriptor should satisfy file ownership");
    }

    #[tokio::test]
    async fn validation_rejects_tracked_file_owner_reference_pending_only_as_untracked() {
        let visible_schemas = vec![
            unique_schema(),
            file_descriptor_schema(),
            directory_descriptor_schema(),
        ];
        let mut untracked_file_descriptor = staged_file_descriptor_row("file-a", "version-a");
        mark_prepared_row_untracked(&mut untracked_file_descriptor);
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                untracked_file_descriptor,
                unique_row("post-1", "hello-world", "first"),
            ],
            ..empty_staged_write_set()
        };

        let error =
            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
                &staged_writes,
                &visible_schemas,
                &StrictEmptyLiveStateReader,
            ))
            .await
            .expect_err("tracked file owner must not resolve through pending untracked descriptor");

        assert_eq!(error.code, LixError::CODE_FILE_NOT_FOUND);
    }

    #[tokio::test]
    async fn validation_allows_untracked_file_owner_reference_pending_as_tracked() {
        let visible_schemas = vec![
            unique_schema(),
            file_descriptor_schema(),
            directory_descriptor_schema(),
        ];
        let mut untracked_row = unique_row("post-1", "hello-world", "first");
        mark_prepared_row_untracked(&mut untracked_row);
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                staged_file_descriptor_row("file-a", "version-a"),
                untracked_row,
            ],
            ..empty_staged_write_set()
        };

        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
            &staged_writes,
            &visible_schemas,
            &StrictEmptyLiveStateReader,
        ))
        .await
        .expect("untracked file owner should resolve through pending tracked descriptor");
    }

    #[tokio::test]
    async fn validation_rejects_file_owner_reference_when_descriptor_tombstoned_in_transaction() {
        let visible_schemas = vec![
            unique_schema(),
            file_descriptor_schema(),
            directory_descriptor_schema(),
        ];
        let mut file_descriptor_delete = staged_file_descriptor_row("file-a", "version-a");
        file_descriptor_delete.snapshot = None;
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                file_descriptor_delete,
                unique_row("post-1", "hello-world", "first"),
            ],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };

        let error = validate_prepared_writes(
            TransactionValidationInput::from_visible_schemas_for_tests(
                &staged_writes,
                &visible_schemas,
                &EmptyLiveStateReader,
            ),
        )
        .await
        .expect_err("same-transaction file descriptor tombstone must hide committed descriptor");

        assert_eq!(error.code, LixError::CODE_FILE_NOT_FOUND);
    }

    #[tokio::test]
    async fn validation_allows_committed_file_owner_reference() {
        let visible_schemas = vec![unique_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![unique_row("post-1", "hello-world", "first")],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let live_state = StaticLiveStateReader {
            rows: vec![committed_file_descriptor_row("file-a", "version-a")],
        };

        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
            &staged_writes,
            &visible_schemas,
            &live_state,
        ))
        .await
        .expect("committed file descriptor should satisfy file ownership");
    }

    #[tokio::test]
    async fn validation_rejects_tracked_file_owner_reference_committed_only_as_untracked() {
        let visible_schemas = vec![unique_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![unique_row("post-1", "hello-world", "first")],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let mut untracked_file_descriptor = committed_file_descriptor_row("file-a", "version-a");
        mark_live_row_untracked(&mut untracked_file_descriptor);
        let live_state = StrictStaticLiveStateReader {
            rows: vec![untracked_file_descriptor],
        };

        let error = validate_prepared_writes(
            TransactionValidationInput::from_visible_schemas_for_tests(
                &staged_writes,
                &visible_schemas,
                &live_state,
            ),
        )
        .await
        .expect_err("tracked file owner must not resolve through committed untracked descriptor");

        assert_eq!(error.code, LixError::CODE_FILE_NOT_FOUND);
    }

    #[tokio::test]
    async fn validation_allows_untracked_file_owner_reference_committed_as_tracked() {
        let visible_schemas = vec![unique_schema()];
        let mut untracked_row = unique_row("post-1", "hello-world", "first");
        mark_prepared_row_untracked(&mut untracked_row);
        let staged_writes = PreparedWriteSet {
            state_rows: vec![untracked_row],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let live_state = StrictStaticLiveStateReader {
            rows: vec![committed_file_descriptor_row("file-a", "version-a")],
        };

        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
            &staged_writes,
            &visible_schemas,
            &live_state,
        ))
        .await
        .expect("untracked file owner should resolve through committed tracked descriptor");
    }

    #[tokio::test]
    async fn validation_allows_tracked_file_owner_reference_committed_behind_untracked_overlay() {
        let visible_schemas = vec![unique_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![unique_row("post-1", "hello-world", "first")],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let tracked_file_descriptor = committed_file_descriptor_row("file-a", "version-a");
        let mut untracked_tombstone = committed_file_descriptor_row("file-a", "version-a");
        untracked_tombstone.snapshot_content = None;
        mark_live_row_untracked(&mut untracked_tombstone);
        let live_state = OverlayingStaticLiveStateReader {
            rows: vec![tracked_file_descriptor, untracked_tombstone],
        };

        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
            &staged_writes,
            &visible_schemas,
            &live_state,
        ))
        .await
        .expect("tracked file owner should resolve against tracked descriptor behind overlay");
    }

    #[tokio::test]
    async fn validation_rejects_deleting_file_descriptor_referenced_by_committed_row() {
        let visible_schemas = vec![
            unique_schema(),
            file_descriptor_schema(),
            directory_descriptor_schema(),
        ];
        let mut file_descriptor_delete = staged_file_descriptor_row("file-a", "version-a");
        file_descriptor_delete.snapshot = None;
        let staged_writes = PreparedWriteSet {
            state_rows: vec![file_descriptor_delete],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let live_state = StaticLiveStateReader {
            rows: vec![committed_unique_row("post-1", "hello-world", "first")],
        };

        let error =
            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
                &staged_writes,
                &visible_schemas,
                &live_state,
            ))
            .await
            .expect_err("file descriptor delete must be blocked by committed file-owned rows");

        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);
    }

    #[tokio::test]
    async fn validation_rejects_deleting_tracked_file_descriptor_referenced_by_committed_untracked_row(
    ) {
        let visible_schemas = vec![
            unique_schema(),
            file_descriptor_schema(),
            directory_descriptor_schema(),
        ];
        let mut file_descriptor_delete = staged_file_descriptor_row("file-a", "version-a");
        file_descriptor_delete.snapshot = None;
        let staged_writes = PreparedWriteSet {
            state_rows: vec![file_descriptor_delete],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let mut untracked_row =
            MaterializedLiveStateRow::from(unique_row("post-1", "hello-world", "first"));
        mark_live_row_untracked(&mut untracked_row);
        let live_state = StrictStaticLiveStateReader {
            rows: vec![
                committed_file_descriptor_row("file-a", "version-a"),
                untracked_row,
            ],
        };

        let error =
            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
                &staged_writes,
                &visible_schemas,
                &live_state,
            ))
            .await
            .expect_err("tracked file descriptor delete must be blocked by untracked rows");

        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);
    }

    #[tokio::test]
    async fn validation_allows_untracked_directory_parent_to_tracked_directory() {
        let visible_schemas = vec![directory_descriptor_schema()];
        let tracked_parent = directory_descriptor_row("dir-parent", None, "parent", "version-a");
        let mut untracked_child =
            directory_descriptor_row("dir-child", Some("dir-parent"), "child", "version-a");
        mark_prepared_row_untracked(&mut untracked_child);
        let staged_writes = PreparedWriteSet {
            state_rows: vec![tracked_parent, untracked_child],
            ..empty_staged_write_set()
        };

        validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect("untracked directory parent_id should resolve through tracked directory");
    }

    #[tokio::test]
    async fn validation_rejects_file_owner_reference_that_exists_only_in_global() {
        let visible_schemas = vec![unique_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![unique_row("post-1", "hello-world", "first")],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let live_state = StrictStaticLiveStateReader {
            rows: vec![committed_file_descriptor_row(
                "file-a",
                crate::GLOBAL_VERSION_ID,
            )],
        };

        let error =
            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
                &staged_writes,
                &visible_schemas,
                &live_state,
            ))
            .await
            .expect_err("global file descriptor should not satisfy a version-local row");

        assert_eq!(error.code, LixError::CODE_FILE_NOT_FOUND);
    }

    #[tokio::test]
    async fn validation_rejects_primary_key_duplicate_with_different_identity() {
        let visible_schemas = vec![unique_schema()];
        let mut conflicting = unique_row("post-1", "hello-world", "first");
        conflicting.entity_id = crate::entity_identity::EntityIdentity::single("post-2");
        let staged_writes = PreparedWriteSet {
            state_rows: vec![unique_row("post-1", "hello-world", "first"), conflicting],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };

        let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect_err("same primary key under different identity should fail");

        assert_eq!(error.code, LixError::CODE_UNIQUE);
    }

    #[tokio::test]
    async fn validation_rejects_pending_unique_value_duplicate() {
        let visible_schemas = vec![unique_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                unique_row("post-1", "hello-world", "first"),
                unique_row("post-2", "hello-world", "second"),
            ],
            ..empty_staged_write_set()
        };

        let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect_err("duplicate pending unique value should fail");

        assert_eq!(error.code, LixError::CODE_UNIQUE);
    }

    #[tokio::test]
    async fn validation_rejects_pending_unique_duplicate_with_null_component() {
        let visible_schemas = vec![nullable_unique_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                nullable_unique_row("row-1", None, "root-name"),
                nullable_unique_row("row-2", None, "root-name"),
            ],
            ..empty_staged_write_set()
        };

        let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect_err("duplicate nullable unique value should fail");

        assert_eq!(error.code, LixError::CODE_UNIQUE);
    }

    #[tokio::test]
    async fn validation_rejects_pending_unique_same_value_in_same_version() {
        let visible_schemas = vec![unique_schema()];
        let mut duplicate = unique_row("post-2", "hello-world", "second");
        duplicate.version_id = "version-a".to_string();
        let staged_writes = PreparedWriteSet {
            state_rows: vec![unique_row("post-1", "hello-world", "first"), duplicate],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };

        let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect_err("same unique value in the same version should fail");

        assert_eq!(error.code, LixError::CODE_UNIQUE);
    }

    #[tokio::test]
    async fn validation_allows_pending_unique_same_value_in_different_versions() {
        let visible_schemas = vec![unique_schema()];
        let mut version_b = unique_row("post-2", "hello-world", "second");
        version_b.version_id = "version-b".to_string();
        let staged_writes = PreparedWriteSet {
            state_rows: vec![unique_row("post-1", "hello-world", "first"), version_b],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };

        validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect("unique values should be scoped to the exact version_id");
    }

    #[tokio::test]
    async fn validation_allows_pending_unique_overwrite_of_same_identity() {
        let visible_schemas = vec![unique_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                unique_row("post-1", "hello-world", "first"),
                unique_row("post-1", "hello-world", "updated"),
            ],
            ..empty_staged_write_set()
        };

        validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect("same identity should be treated as replacement, not duplicate");
    }

    #[tokio::test]
    async fn validation_skips_pending_unique_indexes_for_tombstones() {
        let visible_schemas = vec![unique_schema()];
        let mut tombstone = unique_row("post-1", "hello-world", "deleted");
        tombstone.snapshot = None;
        let staged_writes = PreparedWriteSet {
            state_rows: vec![tombstone, unique_row("post-2", "hello-world", "second")],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };

        validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect("tombstones should not claim pending unique values");
    }

    #[tokio::test]
    async fn validation_scopes_pending_unique_values_by_file_and_version() {
        let visible_schemas = vec![unique_schema()];
        let mut different_file = unique_row("post-2", "hello-world", "second");
        different_file.file_id = Some("file-b".to_string());
        let mut different_version = unique_row("post-3", "hello-world", "third");
        different_version.version_id = "version-b".to_string();
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                unique_row("post-1", "hello-world", "first"),
                different_file,
                different_version,
            ],
            ..empty_staged_write_set()
        };

        validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))
            .await
            .expect("unique values are scoped by file and version");
    }

    #[tokio::test]
    async fn validation_rejects_committed_visible_unique_value_duplicate() {
        let visible_schemas = vec![unique_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![unique_row("post-2", "hello-world", "second")],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let live_state = StaticLiveStateReader {
            rows: vec![committed_unique_row("post-1", "hello-world", "first")],
        };

        let error =
            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
                &staged_writes,
                &visible_schemas,
                &live_state,
            ))
            .await
            .expect_err("committed visible unique value should conflict");

        assert_eq!(error.code, LixError::CODE_UNIQUE);
    }

    #[tokio::test]
    async fn validation_rejects_committed_tracked_unique_duplicate_behind_untracked_overlay() {
        let visible_schemas = vec![unique_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![unique_row("post-2", "hello-world", "second")],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let tracked_duplicate = committed_unique_row("post-1", "hello-world", "first");
        let mut untracked_overlay = committed_unique_row("post-1", "draft-slug", "draft");
        mark_live_row_untracked(&mut untracked_overlay);
        let live_state = OverlayingStaticLiveStateReader {
            rows: vec![tracked_duplicate, untracked_overlay],
        };

        let error =
            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
                &staged_writes,
                &visible_schemas,
                &live_state,
            ))
            .await
            .expect_err("tracked unique duplicate must be detected behind untracked overlay");

        assert_eq!(error.code, LixError::CODE_UNIQUE);
    }

    #[tokio::test]
    async fn validation_rejects_committed_unique_duplicate_when_untracked_tombstone_shadows_owner()
    {
        let visible_schemas = vec![unique_schema()];
        let mut untracked_tombstone = unique_row("post-1", "ignored", "deleted");
        untracked_tombstone.snapshot = None;
        mark_prepared_row_untracked(&mut untracked_tombstone);
        let staged_writes = PreparedWriteSet {
            state_rows: vec![
                untracked_tombstone,
                unique_row("post-2", "hello-world", "second"),
            ],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let live_state = StaticLiveStateReader {
            rows: vec![committed_unique_row("post-1", "hello-world", "first")],
        };

        let error =
            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
                &staged_writes,
                &visible_schemas,
                &live_state,
            ))
            .await
            .expect_err("untracked tombstone must not hide tracked unique owner");

        assert_eq!(error.code, LixError::CODE_UNIQUE);
    }

    #[tokio::test]
    async fn validation_rejects_committed_unique_duplicate_with_null_component() {
        let visible_schemas = vec![nullable_unique_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![nullable_unique_row("row-2", None, "root-name")],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let live_state = StaticLiveStateReader {
            rows: vec![committed_nullable_unique_row("row-1", None, "root-name")],
        };

        let error =
            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
                &staged_writes,
                &visible_schemas,
                &live_state,
            ))
            .await
            .expect_err("committed duplicate nullable unique value should conflict");

        assert_eq!(error.code, LixError::CODE_UNIQUE);
    }

    #[tokio::test]
    async fn validation_rejects_committed_unique_same_value_in_same_version() {
        let visible_schemas = vec![unique_schema()];
        let staged_writes = PreparedWriteSet {
            state_rows: vec![unique_row("post-2", "hello-world", "second")],
            adopted_rows: Vec::new(),
            ..empty_staged_write_set()
        };
        let live_state = StaticLiveStateReader {
            rows: vec![committed_unique_row("post-1", "hello-world", "first")],
        };

        let error =
            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(
              &staged_writes,\n                &visible_schemas,\n                &live_state,\n            ))\n            .await\n            .expect_err(\"same unique value in the same version should conflict\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n    }\n\n    #[tokio::test]\n    async fn validation_allows_committed_unique_same_value_in_different_versions() {\n        let visible_schemas = vec![unique_schema()];\n        let mut version_b = unique_row(\"post-2\", \"hello-world\", \"second\");\n        version_b.version_id = \"version-b\".to_string();\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![version_b],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let live_state = StaticLiveStateReader {\n            rows: vec![committed_unique_row(\"post-1\", \"hello-world\", \"first\")],\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\"committed unique values should be scoped to the exact version_id\");\n    }\n\n    #[tokio::test]\n    async fn validation_ignores_projected_live_state_rows_for_unique_constraints() {\n        let visible_schemas = vec![unique_schema()];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![unique_row(\"post-2\", \"hello-world\", \"second\")],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let mut projected_overlay_row = committed_unique_row(\"post-1\", \"hello-world\", \"first\");\n        projected_overlay_row.version_id = \"version-a\".to_string();\n        projected_overlay_row.global = true;\n        let live_state = StaticLiveStateReader {\n            rows: vec![projected_overlay_row],\n        };\n\n        
validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\"validation should ignore live-state overlay projections\");\n    }\n\n    #[tokio::test]\n    async fn validation_allows_committed_visible_unique_update_of_same_identity() {\n        let visible_schemas = vec![unique_schema()];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![unique_row(\"post-1\", \"hello-world\", \"updated\")],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let live_state = StaticLiveStateReader {\n            rows: vec![committed_unique_row(\"post-1\", \"hello-world\", \"first\")],\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\"same identity should update committed unique owner\");\n    }\n\n    #[tokio::test]\n    async fn validation_batches_committed_unique_scans_by_constraint_group() {\n        let visible_schemas = vec![unique_schema()];\n        let mut staged_one = unique_row(\"post-3\", \"new-slug-3\", \"third\");\n        staged_one.file_id = None;\n        let mut staged_two = unique_row(\"post-4\", \"new-slug-4\", \"fourth\");\n        staged_two.file_id = None;\n        let mut committed_one = committed_unique_row(\"post-1\", \"hello-world\", \"first\");\n        committed_one.file_id = None;\n        let mut committed_two = committed_unique_row(\"post-2\", \"second-slug\", \"second\");\n        committed_two.file_id = None;\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![staged_one, staged_two],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let live_state = 
CountingStaticLiveStateReader {\n            rows: vec![committed_one, committed_two],\n            scan_count: AtomicUsize::new(0),\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\"distinct pending unique values should not conflict\");\n\n        assert_eq!(live_state.scan_count.load(Ordering::Relaxed), 1);\n    }\n\n    #[tokio::test]\n    async fn validation_ignores_committed_unique_owner_tombstoned_by_transaction() {\n        let visible_schemas = vec![unique_schema()];\n        let mut tombstone = unique_row(\"post-1\", \"hello-world\", \"deleted\");\n        tombstone.snapshot = None;\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![tombstone, unique_row(\"post-2\", \"hello-world\", \"second\")],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let live_state = StaticLiveStateReader {\n            rows: vec![committed_unique_row(\"post-1\", \"hello-world\", \"first\")],\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\"tombstoned committed owner should not conflict\");\n    }\n\n    #[tokio::test]\n    async fn validation_allows_committed_unique_same_value_in_different_file_or_version() {\n        let visible_schemas = vec![unique_schema()];\n        let mut different_file = unique_row(\"post-2\", \"hello-world\", \"second\");\n        different_file.file_id = Some(\"file-b\".to_string());\n        let mut different_version = unique_row(\"post-3\", \"hello-world\", \"third\");\n        different_version.version_id = \"version-b\".to_string();\n        let staged_writes = PreparedWriteSet {\n            
state_rows: vec![different_file, different_version],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let live_state = StaticLiveStateReader {\n            rows: vec![committed_unique_row(\"post-1\", \"hello-world\", \"first\")],\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\"committed uniqueness is scoped by file and version\");\n    }\n\n    #[tokio::test]\n    async fn validation_rejects_foreign_key_target_missing_in_same_version() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![fk_child_row(\"child-1\", \"parent-1\", \"version-a\")],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n\n        let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))\n            .await\n            .expect_err(\"foreign key must resolve in the same version\");\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n\n    #[tokio::test]\n    async fn validation_allows_foreign_key_target_in_same_version() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![\n                fk_parent_row(\"parent-1\", \"version-a\"),\n                fk_child_row(\"child-1\", \"parent-1\", \"version-a\"),\n            ],\n            ..empty_staged_write_set()\n        };\n\n        validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))\n            .await\n            .expect(\"foreign key should resolve against pending rows in the same version\");\n    }\n\n    #[tokio::test]\n    async fn 
validation_rejects_tracked_foreign_key_target_pending_only_as_untracked() {\n        let visible_schemas = vec![\n            fk_parent_schema(),\n            fk_child_schema(),\n            file_descriptor_schema(),\n            directory_descriptor_schema(),\n        ];\n        let mut untracked_parent = fk_parent_row(\"parent-1\", \"version-a\");\n        mark_prepared_row_untracked(&mut untracked_parent);\n        let mut untracked_file_descriptor = staged_file_descriptor_row(\"file-a\", \"version-a\");\n        mark_prepared_row_untracked(&mut untracked_file_descriptor);\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![\n                untracked_file_descriptor,\n                untracked_parent,\n                fk_child_row(\"child-1\", \"parent-1\", \"version-a\"),\n            ],\n            ..empty_staged_write_set()\n        };\n\n        let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))\n            .await\n            .expect_err(\"tracked FK must not resolve through a pending untracked target\");\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n\n    #[tokio::test]\n    async fn validation_allows_untracked_foreign_key_target_pending_as_tracked() {\n        let visible_schemas = vec![\n            fk_parent_schema(),\n            fk_child_schema(),\n            file_descriptor_schema(),\n            directory_descriptor_schema(),\n        ];\n        let tracked_file_descriptor = staged_file_descriptor_row(\"file-a\", \"version-a\");\n        let tracked_parent = fk_parent_row(\"parent-1\", \"version-a\");\n        let mut untracked_file_descriptor = staged_file_descriptor_row(\"file-a\", \"version-a\");\n        mark_prepared_row_untracked(&mut untracked_file_descriptor);\n        let mut untracked_child = fk_child_row(\"child-1\", \"parent-1\", \"version-a\");\n        mark_prepared_row_untracked(&mut untracked_child);\n        let staged_writes = 
PreparedWriteSet {\n            state_rows: vec![\n                tracked_file_descriptor,\n                tracked_parent,\n                untracked_file_descriptor,\n                untracked_child,\n            ],\n            ..empty_staged_write_set()\n        };\n\n        validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))\n            .await\n            .expect(\"untracked FK should be allowed to reference a pending tracked target\");\n    }\n\n    #[tokio::test]\n    async fn validation_rejects_foreign_key_target_that_exists_only_in_different_version() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![\n                fk_parent_row(\"parent-1\", \"version-b\"),\n                fk_child_row(\"child-1\", \"parent-1\", \"version-a\"),\n            ],\n            ..empty_staged_write_set()\n        };\n\n        let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))\n            .await\n            .expect_err(\"foreign key target in another version should not satisfy this version\");\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n\n    #[tokio::test]\n    async fn validation_allows_foreign_key_target_committed_in_same_version() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![fk_child_row(\"child-1\", \"parent-1\", \"version-a\")],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let live_state = StaticLiveStateReader {\n            rows: vec![MaterializedLiveStateRow::from(fk_parent_row(\n                \"parent-1\",\n                \"version-a\",\n            ))],\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n          
  &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\"foreign key should resolve against committed rows in the same version\");\n    }\n\n    #[tokio::test]\n    async fn validation_rejects_tracked_foreign_key_target_committed_only_as_untracked() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![fk_child_row(\"child-1\", \"parent-1\", \"version-a\")],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let mut untracked_parent =\n            MaterializedLiveStateRow::from(fk_parent_row(\"parent-1\", \"version-a\"));\n        mark_live_row_untracked(&mut untracked_parent);\n        let live_state = StaticLiveStateReader {\n            rows: vec![untracked_parent],\n        };\n\n        let error =\n            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n                &staged_writes,\n                &visible_schemas,\n                &live_state,\n            ))\n            .await\n            .expect_err(\"tracked FK must not resolve through a committed untracked target\");\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n\n    #[tokio::test]\n    async fn validation_allows_untracked_foreign_key_target_committed_as_tracked() {\n        let visible_schemas = vec![\n            fk_parent_schema(),\n            fk_child_schema(),\n            file_descriptor_schema(),\n            directory_descriptor_schema(),\n        ];\n        let mut untracked_file_descriptor = staged_file_descriptor_row(\"file-a\", \"version-a\");\n        mark_prepared_row_untracked(&mut untracked_file_descriptor);\n        let mut untracked_child = fk_child_row(\"child-1\", \"parent-1\", \"version-a\");\n        mark_prepared_row_untracked(&mut untracked_child);\n        let staged_writes = PreparedWriteSet {\n            state_rows: 
vec![untracked_file_descriptor, untracked_child],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let live_state = StaticLiveStateReader {\n            rows: vec![\n                committed_file_descriptor_row(\"file-a\", \"version-a\"),\n                MaterializedLiveStateRow::from(fk_parent_row(\"parent-1\", \"version-a\")),\n            ],\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\"untracked FK should be allowed to reference a committed tracked target\");\n    }\n\n    #[tokio::test]\n    async fn validation_allows_tracked_foreign_key_target_committed_behind_untracked_overlay() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![fk_child_row(\"child-1\", \"parent-1\", \"version-a\")],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let tracked_parent = MaterializedLiveStateRow::from(fk_parent_row(\"parent-1\", \"version-a\"));\n        let mut untracked_overlay =\n            MaterializedLiveStateRow::from(fk_parent_row(\"parent-1\", \"version-a\"));\n        mark_live_row_untracked(&mut untracked_overlay);\n        let live_state = OverlayingStaticLiveStateReader {\n            rows: vec![tracked_parent, untracked_overlay],\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\n            \"tracked FK should resolve against tracked storage target behind untracked overlay\",\n        );\n    }\n\n    #[tokio::test]\n    async fn 
validation_rejects_deleting_tracked_fk_target_referenced_behind_untracked_overlay() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let mut parent_delete = fk_parent_row(\"parent-1\", \"version-a\");\n        parent_delete.snapshot = None;\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![parent_delete],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let tracked_parent = MaterializedLiveStateRow::from(fk_parent_row(\"parent-1\", \"version-a\"));\n        let tracked_child =\n            MaterializedLiveStateRow::from(fk_child_row(\"child-1\", \"parent-1\", \"version-a\"));\n        let mut untracked_child_overlay =\n            MaterializedLiveStateRow::from(fk_child_row(\"child-1\", \"other-parent\", \"version-a\"));\n        mark_live_row_untracked(&mut untracked_child_overlay);\n        let live_state = OverlayingStaticLiveStateReader {\n            rows: vec![tracked_parent, tracked_child, untracked_child_overlay],\n        };\n\n        let error =\n            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n                &staged_writes,\n                &visible_schemas,\n                &live_state,\n            ))\n            .await\n            .expect_err(\"tracked referencing row behind overlay must block target delete\");\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n\n    #[tokio::test]\n    async fn validation_rejects_deleting_tracked_fk_target_referenced_by_committed_untracked_row() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let mut parent_delete = fk_parent_row(\"parent-1\", \"version-a\");\n        parent_delete.snapshot = None;\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![parent_delete],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let 
tracked_parent = MaterializedLiveStateRow::from(fk_parent_row(\"parent-1\", \"version-a\"));\n        let mut untracked_child =\n            MaterializedLiveStateRow::from(fk_child_row(\"child-1\", \"parent-1\", \"version-a\"));\n        mark_live_row_untracked(&mut untracked_child);\n        let live_state = StaticLiveStateReader {\n            rows: vec![tracked_parent, untracked_child],\n        };\n\n        let error =\n            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n                &staged_writes,\n                &visible_schemas,\n                &live_state,\n            ))\n            .await\n            .expect_err(\"tracked target delete must be blocked by committed untracked references\");\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n\n    #[tokio::test]\n    async fn validation_rejects_foreign_key_target_committed_only_in_different_version() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![fk_child_row(\"child-1\", \"parent-1\", \"version-a\")],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let live_state = StaticLiveStateReader {\n            rows: vec![MaterializedLiveStateRow::from(fk_parent_row(\n                \"parent-1\",\n                \"version-b\",\n            ))],\n        };\n\n        let error =\n            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n                &staged_writes,\n                &visible_schemas,\n                &live_state,\n            ))\n            .await\n            .expect_err(\n                \"foreign key target in another committed version should not satisfy this version\",\n            );\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n\n    #[tokio::test]\n    async fn 
validation_rejects_foreign_key_target_tombstoned_by_transaction() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let mut parent_delete = fk_parent_row(\"parent-1\", \"version-a\");\n        parent_delete.snapshot = None;\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![\n                parent_delete,\n                fk_child_row(\"child-1\", \"parent-1\", \"version-a\"),\n            ],\n            ..empty_staged_write_set()\n        };\n        let live_state = StaticLiveStateReader {\n            rows: vec![MaterializedLiveStateRow::from(fk_parent_row(\n                \"parent-1\",\n                \"version-a\",\n            ))],\n        };\n\n        let error =\n            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n                &staged_writes,\n                &visible_schemas,\n                &live_state,\n            ))\n            .await\n            .expect_err(\"same-transaction tombstone should hide the committed FK target\");\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n\n    #[tokio::test]\n    async fn validation_allows_tracked_fk_target_when_untracked_tombstone_shadows_same_identity() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let mut untracked_parent_delete = fk_parent_row(\"parent-1\", \"version-a\");\n        untracked_parent_delete.snapshot = None;\n        mark_prepared_row_untracked(&mut untracked_parent_delete);\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![\n                untracked_parent_delete,\n                fk_child_row(\"child-1\", \"parent-1\", \"version-a\"),\n            ],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n        let live_state = StaticLiveStateReader {\n            rows: vec![MaterializedLiveStateRow::from(fk_parent_row(\n                
\"parent-1\",\n                \"version-a\",\n            ))],\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\"untracked tombstone must not hide tracked FK target\");\n    }\n\n    #[tokio::test]\n    async fn validation_rejects_pending_reference_to_deleted_identity() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let mut parent_delete = fk_parent_row(\"parent-1\", \"version-a\");\n        parent_delete.snapshot = None;\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![\n                parent_delete,\n                fk_child_row(\"child-1\", \"parent-1\", \"version-a\"),\n            ],\n            ..empty_staged_write_set()\n        };\n        let live_state = StaticLiveStateReader {\n            rows: vec![MaterializedLiveStateRow::from(fk_parent_row(\n                \"parent-1\",\n                \"version-a\",\n            ))],\n        };\n\n        let error =\n            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n                &staged_writes,\n                &visible_schemas,\n                &live_state,\n            ))\n            .await\n            .expect_err(\"pending child reference should block parent delete\");\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n\n    #[tokio::test]\n    async fn validation_allows_delete_with_pending_reference_in_different_version() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let mut parent_delete = fk_parent_row(\"parent-1\", \"version-a\");\n        parent_delete.snapshot = None;\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![\n                parent_delete,\n                fk_parent_row(\"parent-1\", 
\"version-b\"),\n                fk_child_row(\"child-1\", \"parent-1\", \"version-b\"),\n            ],\n            ..empty_staged_write_set()\n        };\n\n        validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))\n            .await\n            .expect(\"pending references in another version should not block this delete\");\n    }\n\n    #[tokio::test]\n    async fn validation_allows_state_surface_fk_target_committed_by_exact_identity() {\n        let visible_schemas = vec![fk_parent_schema(), state_surface_ref_schema()];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![state_surface_ref_row(\n                \"ref-1\",\n                \"target-1\",\n                \"fk_parent_schema\",\n                \"file-a\",\n            )],\n            ..empty_staged_write_set()\n        };\n        let live_state = StaticLiveStateReader {\n            rows: vec![MaterializedLiveStateRow::from(fk_parent_row(\n                \"target-1\",\n                \"version-a\",\n            ))],\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\"state FK should resolve against exact committed identity\");\n    }\n\n    #[tokio::test]\n    async fn validation_rejects_tracked_state_surface_fk_target_pending_only_as_untracked() {\n        let visible_schemas = vec![\n            fk_parent_schema(),\n            state_surface_ref_schema(),\n            file_descriptor_schema(),\n            directory_descriptor_schema(),\n        ];\n        let mut untracked_target = fk_parent_row(\"target-1\", \"version-a\");\n        mark_prepared_row_untracked(&mut untracked_target);\n        let mut untracked_file_descriptor = staged_file_descriptor_row(\"file-a\", \"version-a\");\n        mark_prepared_row_untracked(&mut 
untracked_file_descriptor);\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![\n                untracked_file_descriptor,\n                untracked_target,\n                state_surface_ref_row(\"ref-1\", \"target-1\", \"fk_parent_schema\", \"file-a\"),\n            ],\n            ..empty_staged_write_set()\n        };\n\n        let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas))\n            .await\n            .expect_err(\n                \"tracked state-surface FK must not resolve through a pending untracked target\",\n            );\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n\n    #[tokio::test]\n    async fn validation_rejects_tracked_state_surface_fk_target_committed_only_as_untracked() {\n        let visible_schemas = vec![fk_parent_schema(), state_surface_ref_schema()];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![state_surface_ref_row(\n                \"ref-1\",\n                \"target-1\",\n                \"fk_parent_schema\",\n                \"file-a\",\n            )],\n            ..empty_staged_write_set()\n        };\n        let mut untracked_target =\n            MaterializedLiveStateRow::from(fk_parent_row(\"target-1\", \"version-a\"));\n        mark_live_row_untracked(&mut untracked_target);\n        let live_state = StaticLiveStateReader {\n            rows: vec![untracked_target],\n        };\n\n        let error =\n            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n                &staged_writes,\n                &visible_schemas,\n                &live_state,\n            ))\n            .await\n            .expect_err(\n                \"tracked state-surface FK must not resolve through a committed untracked target\",\n            );\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n\n    #[tokio::test]\n    async fn 
validation_allows_untracked_state_surface_fk_target_committed_as_tracked() {\n        let visible_schemas = vec![\n            fk_parent_schema(),\n            state_surface_ref_schema(),\n            file_descriptor_schema(),\n            directory_descriptor_schema(),\n        ];\n        let mut untracked_file_descriptor = staged_file_descriptor_row(\"file-a\", \"version-a\");\n        mark_prepared_row_untracked(&mut untracked_file_descriptor);\n        let mut untracked_ref =\n            state_surface_ref_row(\"ref-1\", \"target-1\", \"fk_parent_schema\", \"file-a\");\n        mark_prepared_row_untracked(&mut untracked_ref);\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![untracked_file_descriptor, untracked_ref],\n            ..empty_staged_write_set()\n        };\n        let live_state = StaticLiveStateReader {\n            rows: vec![\n                committed_file_descriptor_row(\"file-a\", \"version-a\"),\n                MaterializedLiveStateRow::from(fk_parent_row(\"target-1\", \"version-a\")),\n            ],\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\"untracked state-surface FK should reference committed tracked target\");\n    }\n\n    #[tokio::test]\n    async fn validation_allows_tracked_state_surface_fk_target_committed_behind_untracked_overlay()\n    {\n        let visible_schemas = vec![fk_parent_schema(), state_surface_ref_schema()];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![state_surface_ref_row(\n                \"ref-1\",\n                \"target-1\",\n                \"fk_parent_schema\",\n                \"file-a\",\n            )],\n            ..empty_staged_write_set()\n        };\n        let tracked_target = MaterializedLiveStateRow::from(fk_parent_row(\"target-1\", 
\"version-a\"));\n        let mut untracked_overlay =\n            MaterializedLiveStateRow::from(fk_parent_row(\"target-1\", \"version-a\"));\n        mark_live_row_untracked(&mut untracked_overlay);\n        let live_state = OverlayingStaticLiveStateReader {\n            rows: vec![tracked_target, untracked_overlay],\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\n            \"tracked state-surface FK should resolve against tracked target behind untracked overlay\",\n        );\n    }\n\n    #[tokio::test]\n    async fn validation_allows_state_surface_fk_target_with_composite_entity_id() {\n        let visible_schemas = vec![composite_message_schema(), state_surface_ref_schema()];\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![state_surface_ref_row_with_target_entity_id(\n                \"ref-1\",\n                json!([\"welcome.title\", \"en\"]),\n                \"composite_message_schema\",\n                \"file-a\",\n            )],\n            ..empty_staged_write_set()\n        };\n        let live_state = StaticLiveStateReader {\n            rows: vec![MaterializedLiveStateRow::from(composite_message_row(\n                \"welcome.title\",\n                \"en\",\n                \"version-a\",\n            ))],\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\"state FK should resolve composite JSON-array entity ids\");\n    }\n\n    #[tokio::test]\n    async fn validation_rejects_delete_when_same_version_reference_exists() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let mut parent_delete = 
fk_parent_row(\"parent-1\", \"version-a\");\n        parent_delete.snapshot = None;\n        let live_state = StaticLiveStateReader {\n            rows: vec![\n                MaterializedLiveStateRow::from(fk_parent_row(\"parent-1\", \"version-a\")),\n                MaterializedLiveStateRow::from(fk_child_row(\"child-1\", \"parent-1\", \"version-a\")),\n            ],\n        };\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![parent_delete],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n\n        let error =\n            validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n                &staged_writes,\n                &visible_schemas,\n                &live_state,\n            ))\n            .await\n            .expect_err(\"delete should be restricted by same-version references\");\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n\n    #[tokio::test]\n    async fn validation_allows_delete_when_only_different_version_reference_exists() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let mut parent_delete = fk_parent_row(\"parent-1\", \"version-a\");\n        parent_delete.snapshot = None;\n        let live_state = StaticLiveStateReader {\n            rows: vec![\n                MaterializedLiveStateRow::from(fk_parent_row(\"parent-1\", \"version-a\")),\n                MaterializedLiveStateRow::from(fk_child_row(\"child-1\", \"parent-1\", \"version-b\")),\n            ],\n        };\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![parent_delete],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        
.expect(\"references in another version should not restrict this version\");\n    }\n\n    #[tokio::test]\n    async fn validation_allows_delete_when_committed_reference_is_also_deleted() {\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let mut parent_delete = fk_parent_row(\"parent-1\", \"version-a\");\n        parent_delete.snapshot = None;\n        let mut child_delete = fk_child_row(\"child-1\", \"parent-1\", \"version-a\");\n        child_delete.snapshot = None;\n        let live_state = StaticLiveStateReader {\n            rows: vec![\n                MaterializedLiveStateRow::from(fk_parent_row(\"parent-1\", \"version-a\")),\n                MaterializedLiveStateRow::from(fk_child_row(\"child-1\", \"parent-1\", \"version-a\")),\n            ],\n        };\n        let staged_writes = PreparedWriteSet {\n            state_rows: vec![parent_delete, child_delete],\n            adopted_rows: Vec::new(),\n            ..empty_staged_write_set()\n        };\n\n        validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests(\n            &staged_writes,\n            &visible_schemas,\n            &live_state,\n        ))\n        .await\n        .expect(\"committed references deleted in the same transaction should not restrict delete\");\n    }\n\n    #[test]\n    fn schema_catalog_plans_include_compiled_schema() {\n        let visible_schemas = vec![key_value_schema()];\n        let staged_writes = empty_staged_write_set();\n        let input = validation_input(&staged_writes, &visible_schemas);\n        let catalog = catalog_from_transaction_input(&input).expect(\"schema catalog should build\");\n        let plan = catalog\n            .plan_for_key(\"lix_key_value\")\n            .expect(\"lix_key_value plan should exist\");\n\n        assert!(plan\n            .1\n            .compiled_schema\n            .validate(&json!({ \"key\": \"k\", \"value\": \"v\" }))\n            .is_ok());\n    }\n\n    
#[test]\n    fn pending_indexes_record_primary_key_fk_targets_by_exact_scope() {\n        let mut indexes = PendingConstraintIndexes::default();\n        let row = fk_parent_row(\"parent-1\", \"version-a\");\n        let snapshot = serde_json::from_str::<JsonValue>(\n            row.snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        .expect(\"fixture JSON should parse\");\n\n        indexes\n            .remember_row(\n                PreparedValidationRow::State(&row),\n                test_plan_from_schema(fk_parent_schema()),\n                &snapshot,\n            )\n            .expect(\"parent row should index\");\n\n        assert!(indexes\n            .has_fk_target(\n                \"fk_parent_schema\",\n                \"version-a\",\n                Some(\"file-a\"),\n                &[\"/id\"],\n                UniqueConstraintValue::string_values([\"parent-1\"]),\n            )\n            .expect(\"lookup should build\"));\n        assert!(!indexes\n            .has_fk_target(\n                \"fk_parent_schema\",\n                \"version-b\",\n                Some(\"file-a\"),\n                &[\"/id\"],\n                UniqueConstraintValue::string_values([\"parent-1\"]),\n            )\n            .expect(\"lookup should build\"));\n    }\n\n    #[test]\n    fn pending_indexes_record_unique_fk_targets_by_exact_scope() {\n        let mut indexes = PendingConstraintIndexes::default();\n        let row = unique_row(\"post-1\", \"hello-world\", \"first\");\n        let snapshot = serde_json::from_str::<JsonValue>(\n            row.snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        .expect(\"fixture JSON should parse\");\n\n        indexes\n            .remember_row(\n                
PreparedValidationRow::State(&row),\n                test_plan_from_schema(unique_schema()),\n                &snapshot,\n            )\n            .expect(\"unique row should index\");\n\n        assert!(indexes\n            .has_fk_target(\n                \"unique_schema\",\n                \"version-a\",\n                Some(\"file-a\"),\n                &[\"/slug\"],\n                UniqueConstraintValue::string_values([\"hello-world\"]),\n            )\n            .expect(\"lookup should build\"));\n    }\n\n    #[test]\n    fn pending_indexes_record_normal_fk_references_by_exact_scope() {\n        let mut indexes = PendingConstraintIndexes::default();\n        let row = fk_child_row(\"child-1\", \"parent-1\", \"version-a\");\n        let snapshot = serde_json::from_str::<JsonValue>(\n            row.snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        .expect(\"fixture JSON should parse\");\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = empty_staged_write_set();\n        let input = validation_input(&staged_writes, &visible_schemas);\n        let _catalog = catalog_from_transaction_input(&input).expect(\"catalog should build\");\n\n        indexes\n            .remember_foreign_key_references(\n                PreparedValidationRow::State(&row),\n                test_plan_from_schema(fk_child_schema()),\n                &snapshot,\n            )\n            .expect(\"child row should index FK reference\");\n\n        assert!(indexes\n            .has_fk_reference_to_key(\n                \"fk_parent_schema\",\n                \"version-a\",\n                Some(\"file-a\"),\n                &[\"/id\"],\n                UniqueConstraintValue::string_values([\"parent-1\"]),\n            )\n            .expect(\"lookup should build\"));\n        assert!(!indexes\n            
.has_fk_reference_to_key(\n                \"fk_parent_schema\",\n                \"version-b\",\n                Some(\"file-a\"),\n                &[\"/id\"],\n                UniqueConstraintValue::string_values([\"parent-1\"]),\n            )\n            .expect(\"lookup should build\"));\n    }\n\n    #[test]\n    fn pending_indexes_record_state_surface_fk_references_by_exact_identity() {\n        let mut indexes = PendingConstraintIndexes::default();\n        let row = state_surface_ref_row(\"ref-1\", \"target-1\", \"fk_parent_schema\", \"file-a\");\n        let snapshot = serde_json::from_str::<JsonValue>(\n            row.snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        .expect(\"fixture JSON should parse\");\n        let visible_schemas = vec![state_surface_ref_schema()];\n        let staged_writes = empty_staged_write_set();\n        let input = validation_input(&staged_writes, &visible_schemas);\n        let _catalog = catalog_from_transaction_input(&input).expect(\"catalog should build\");\n\n        indexes\n            .remember_foreign_key_references(\n                PreparedValidationRow::State(&row),\n                test_plan_from_schema(state_surface_ref_schema()),\n                &snapshot,\n            )\n            .expect(\"state-surface row should index FK reference\");\n\n        assert!(\n            indexes.has_fk_reference_to_identity(DomainRowIdentity::exact(\n                \"version-a\",\n                false,\n                Some(\"file-a\".to_string()),\n                \"fk_parent_schema\",\n                EntityIdentity::single(\"target-1\"),\n            ))\n        );\n    }\n\n    #[test]\n    fn pending_delete_restrictions_ignore_tombstoned_referencing_rows() {\n        let mut indexes = PendingConstraintIndexes::default();\n        let mut parent_delete = fk_parent_row(\"parent-1\", 
\"version-a\");\n        parent_delete.snapshot = None;\n        indexes.remember_tombstone(PreparedValidationRow::State(&parent_delete));\n\n        let child = fk_child_row(\"child-1\", \"parent-1\", \"version-a\");\n        let child_snapshot = serde_json::from_str::<JsonValue>(\n            child\n                .snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        .expect(\"fixture JSON should parse\");\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = empty_staged_write_set();\n        let input = validation_input(&staged_writes, &visible_schemas);\n        let catalog = catalog_from_transaction_input(&input).expect(\"catalog should build\");\n        indexes\n            .remember_foreign_key_references(\n                PreparedValidationRow::State(&child),\n                test_plan_from_schema(fk_child_schema()),\n                &child_snapshot,\n            )\n            .expect(\"child row should index FK reference\");\n\n        let mut child_delete = fk_child_row(\"child-1\", \"parent-1\", \"version-a\");\n        child_delete.snapshot = None;\n        indexes.remember_tombstone(PreparedValidationRow::State(&child_delete));\n\n        validate_pending_delete_restrictions(&catalog, &indexes)\n            .expect(\"a row deleted in the same transaction should not block target delete\");\n    }\n\n    #[test]\n    fn pending_fk_validation_collects_unresolved_normal_fk_check() {\n        let indexes = PendingConstraintIndexes::default();\n        let row = fk_child_row(\"child-1\", \"parent-1\", \"version-a\");\n        let snapshot = serde_json::from_str::<JsonValue>(\n            row.snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        
.expect(\"fixture JSON should parse\");\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = empty_staged_write_set();\n        let input = validation_input(&staged_writes, &visible_schemas);\n        let _catalog = catalog_from_transaction_input(&input).expect(\"catalog should build\");\n\n        let unresolved = validate_pending_foreign_keys(\n            &indexes,\n            &[(\n                PreparedValidationRow::State(&row),\n                test_plan_from_schema(fk_child_schema()),\n                &snapshot,\n            )],\n        )\n        .expect(\"FK validation should collect unresolved checks\");\n\n        assert_eq!(unresolved.len(), 1);\n        assert_eq!(\n            unresolved[0].source_identity,\n            DomainRowIdentity::exact(\n                \"version-a\",\n                false,\n                Some(\"file-a\".to_string()),\n                \"fk_child_schema\",\n                EntityIdentity::single(\"child-1\"),\n            )\n        );\n        assert_eq!(unresolved[0].source_schema_key, \"fk_child_schema\");\n        assert_eq!(\n            unresolved[0].source_pointer_group,\n            vec![vec![\"parent_id\".to_string()]]\n        );\n        let UnresolvedForeignKeyTarget::Key(target) = &unresolved[0].target else {\n            panic!(\"normal FK should produce key target\");\n        };\n        assert_eq!(target.schema_key, \"fk_parent_schema\");\n        assert_eq!(target.domain.version_id(), \"version-a\");\n        assert_eq!(\n            target.domain.file_scope(),\n            &DomainFileScope::Exact(Some(\"file-a\".to_string()))\n        );\n        assert_eq!(target.pointer_group, vec![vec![\"id\".to_string()]]);\n        assert_eq!(\n            target.value,\n            UniqueConstraintValue::string_values([\"parent-1\"])\n        );\n    }\n\n    #[test]\n    fn pending_fk_validation_resolves_normal_fk_against_pending_target() {\n        let mut 
indexes = PendingConstraintIndexes::default();\n        let parent = fk_parent_row(\"parent-1\", \"version-a\");\n        let parent_snapshot = serde_json::from_str::<JsonValue>(\n            parent\n                .snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        .expect(\"fixture JSON should parse\");\n        indexes\n            .remember_row(\n                PreparedValidationRow::State(&parent),\n                test_plan_from_schema(fk_parent_schema()),\n                &parent_snapshot,\n            )\n            .expect(\"parent should index as pending FK target\");\n\n        let child = fk_child_row(\"child-1\", \"parent-1\", \"version-a\");\n        let child_snapshot = serde_json::from_str::<JsonValue>(\n            child\n                .snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        .expect(\"fixture JSON should parse\");\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = empty_staged_write_set();\n        let input = validation_input(&staged_writes, &visible_schemas);\n        let _catalog = catalog_from_transaction_input(&input).expect(\"catalog should build\");\n\n        let unresolved = validate_pending_foreign_keys(\n            &indexes,\n            &[(\n                PreparedValidationRow::State(&child),\n                test_plan_from_schema(fk_child_schema()),\n                &child_snapshot,\n            )],\n        )\n        .expect(\"FK validation should inspect pending targets\");\n\n        assert!(\n            unresolved.is_empty(),\n            \"same-version pending parent should satisfy the child FK\"\n        );\n    }\n\n    #[test]\n    fn pending_fk_validation_keeps_normal_fk_unresolved_across_versions() 
{\n        let mut indexes = PendingConstraintIndexes::default();\n        let parent = fk_parent_row(\"parent-1\", \"version-b\");\n        let parent_snapshot = serde_json::from_str::<JsonValue>(\n            parent\n                .snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        .expect(\"fixture JSON should parse\");\n        indexes\n            .remember_row(\n                PreparedValidationRow::State(&parent),\n                test_plan_from_schema(fk_parent_schema()),\n                &parent_snapshot,\n            )\n            .expect(\"parent should index as pending FK target\");\n\n        let child = fk_child_row(\"child-1\", \"parent-1\", \"version-a\");\n        let child_snapshot = serde_json::from_str::<JsonValue>(\n            child\n                .snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        .expect(\"fixture JSON should parse\");\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = empty_staged_write_set();\n        let input = validation_input(&staged_writes, &visible_schemas);\n        let _catalog = catalog_from_transaction_input(&input).expect(\"catalog should build\");\n\n        let unresolved = validate_pending_foreign_keys(\n            &indexes,\n            &[(\n                PreparedValidationRow::State(&child),\n                test_plan_from_schema(fk_child_schema()),\n                &child_snapshot,\n            )],\n        )\n        .expect(\"FK validation should inspect pending targets\");\n\n        assert_eq!(unresolved.len(), 1);\n        let UnresolvedForeignKeyTarget::Key(target) = &unresolved[0].target else {\n            panic!(\"normal FK should produce key target\");\n        };\n        
assert_eq!(\n            target.domain.version_id(),\n            \"version-a\",\n            \"FK checks are exact-version scoped, not overlay scoped\"\n        );\n    }\n\n    #[test]\n    fn pending_fk_validation_collects_unresolved_state_surface_check() {\n        let indexes = PendingConstraintIndexes::default();\n        let row = state_surface_ref_row(\"ref-1\", \"target-1\", \"fk_parent_schema\", \"file-a\");\n        let snapshot = serde_json::from_str::<JsonValue>(\n            row.snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        .expect(\"fixture JSON should parse\");\n        let visible_schemas = vec![state_surface_ref_schema()];\n        let staged_writes = empty_staged_write_set();\n        let input = validation_input(&staged_writes, &visible_schemas);\n        let _catalog = catalog_from_transaction_input(&input).expect(\"catalog should build\");\n\n        let unresolved = validate_pending_foreign_keys(\n            &indexes,\n            &[(\n                PreparedValidationRow::State(&row),\n                test_plan_from_schema(state_surface_ref_schema()),\n                &snapshot,\n            )],\n        )\n        .expect(\"FK validation should collect unresolved checks\");\n\n        assert_eq!(unresolved.len(), 1);\n        assert_eq!(\n            unresolved[0].source_identity,\n            DomainRowIdentity::exact(\n                \"version-a\",\n                false,\n                Some(\"file-a\".to_string()),\n                \"state_surface_ref_schema\",\n                EntityIdentity::single(\"ref-1\"),\n            )\n        );\n        assert_eq!(unresolved[0].source_schema_key, \"state_surface_ref_schema\");\n        assert_eq!(\n            unresolved[0].source_pointer_group,\n            vec![\n                vec![\"target_entity_id\".to_string()],\n                
vec![\"target_schema_key\".to_string()],\n                vec![\"target_file_id\".to_string()],\n            ]\n        );\n        let UnresolvedForeignKeyTarget::StateSurfaceIdentity(target) = &unresolved[0].target else {\n            panic!(\"state FK should produce state-surface identity target\");\n        };\n        assert_eq!(target.domain().version_id(), \"version-a\");\n        assert_eq!(target.schema_key(), \"fk_parent_schema\");\n        assert_eq!(target.entity_id(), &EntityIdentity::single(\"target-1\"));\n        assert_eq!(\n            target.domain().file_scope(),\n            &DomainFileScope::Exact(Some(\"file-a\".to_string()))\n        );\n    }\n\n    #[tokio::test]\n    async fn committed_fk_lookup_resolves_normal_fk_in_exact_scope() {\n        let indexes = PendingConstraintIndexes::default();\n        let child = fk_child_row(\"child-1\", \"parent-1\", \"version-a\");\n        let child_snapshot = serde_json::from_str::<JsonValue>(\n            child\n                .snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        .expect(\"fixture JSON should parse\");\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = empty_staged_write_set();\n        let input = validation_input(&staged_writes, &visible_schemas);\n        let _catalog = catalog_from_transaction_input(&input).expect(\"catalog should build\");\n        let unresolved = validate_pending_foreign_keys(\n            &indexes,\n            &[(\n                PreparedValidationRow::State(&child),\n                test_plan_from_schema(fk_child_schema()),\n                &child_snapshot,\n            )],\n        )\n        .expect(\"pending FK validation should collect unresolved check\");\n        let live_state = StaticLiveStateReader {\n            rows: 
vec![MaterializedLiveStateRow::from(fk_parent_row(\n                \"parent-1\",\n                \"version-a\",\n            ))],\n        };\n\n        let still_unresolved = validate_committed_foreign_keys(\n            &TransactionValidationInput::from_visible_schemas_for_tests(\n                &staged_writes,\n                &visible_schemas,\n                &live_state,\n            ),\n            &indexes,\n            &unresolved,\n        )\n        .await\n        .expect(\"committed FK lookup should scan live state\");\n\n        assert!(\n            still_unresolved.is_empty(),\n            \"same-version committed parent should satisfy unresolved FK\"\n        );\n    }\n\n    #[tokio::test]\n    async fn committed_fk_lookup_keeps_normal_fk_unresolved_across_versions() {\n        let indexes = PendingConstraintIndexes::default();\n        let child = fk_child_row(\"child-1\", \"parent-1\", \"version-a\");\n        let child_snapshot = serde_json::from_str::<JsonValue>(\n            child\n                .snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        .expect(\"fixture JSON should parse\");\n        let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];\n        let staged_writes = empty_staged_write_set();\n        let input = validation_input(&staged_writes, &visible_schemas);\n        let _catalog = catalog_from_transaction_input(&input).expect(\"catalog should build\");\n        let unresolved = validate_pending_foreign_keys(\n            &indexes,\n            &[(\n                PreparedValidationRow::State(&child),\n                test_plan_from_schema(fk_child_schema()),\n                &child_snapshot,\n            )],\n        )\n        .expect(\"pending FK validation should collect unresolved check\");\n        let live_state = StaticLiveStateReader {\n            rows: 
vec![MaterializedLiveStateRow::from(fk_parent_row(\n                \"parent-1\",\n                \"version-b\",\n            ))],\n        };\n\n        let still_unresolved = validate_committed_foreign_keys(\n            &TransactionValidationInput::from_visible_schemas_for_tests(\n                &staged_writes,\n                &visible_schemas,\n                &live_state,\n            ),\n            &indexes,\n            &unresolved,\n        )\n        .await\n        .expect(\"committed FK lookup should scan live state\");\n\n        assert_eq!(\n            still_unresolved.len(),\n            1,\n            \"committed FK lookup is exact-version scoped\"\n        );\n    }\n\n    #[tokio::test]\n    async fn committed_fk_lookup_resolves_state_surface_fk_by_exact_identity() {\n        let indexes = PendingConstraintIndexes::default();\n        let row = state_surface_ref_row(\"ref-1\", \"target-1\", \"fk_parent_schema\", \"file-a\");\n        let snapshot = serde_json::from_str::<JsonValue>(\n            row.snapshot\n                .as_ref()\n                .map(|snapshot| snapshot.normalized.as_ref())\n                .expect(\"fixture should have snapshot\"),\n        )\n        .expect(\"fixture JSON should parse\");\n        let visible_schemas = vec![state_surface_ref_schema()];\n        let staged_writes = empty_staged_write_set();\n        let input = validation_input(&staged_writes, &visible_schemas);\n        let _catalog = catalog_from_transaction_input(&input).expect(\"catalog should build\");\n        let unresolved = validate_pending_foreign_keys(\n            &indexes,\n            &[(\n                PreparedValidationRow::State(&row),\n                test_plan_from_schema(state_surface_ref_schema()),\n                &snapshot,\n            )],\n        )\n        .expect(\"pending FK validation should collect unresolved check\");\n        let live_state = StaticLiveStateReader {\n            rows: 
vec![MaterializedLiveStateRow::from(fk_parent_row(\n                \"target-1\",\n                \"version-a\",\n            ))],\n        };\n\n        let still_unresolved = validate_committed_foreign_keys(\n            &TransactionValidationInput::from_visible_schemas_for_tests(\n                &staged_writes,\n                &visible_schemas,\n                &live_state,\n            ),\n            &indexes,\n            &unresolved,\n        )\n        .await\n        .expect(\"committed FK lookup should load exact live-state row\");\n\n        assert!(\n            still_unresolved.is_empty(),\n            \"committed state-surface target should satisfy unresolved FK\"\n        );\n    }\n\n    fn empty_staged_write_set() -> PreparedWriteSet {\n        PreparedWriteSet {\n            state_rows: Vec::new(),\n            adopted_rows: Vec::new(),\n            insert_identities: BTreeMap::new(),\n            commit_members_by_version: BTreeMap::new(),\n            extra_commit_parents_by_version: BTreeMap::new(),\n            file_data_writes: Vec::new(),\n        }\n    }\n\n    fn live_state_row_matches_scan(\n        row: &MaterializedLiveStateRow,\n        request: &LiveStateScanRequest,\n    ) -> bool {\n        if request\n            .filter\n            .untracked\n            .is_some_and(|untracked| row.untracked != untracked)\n        {\n            return false;\n        }\n        (request.filter.schema_keys.is_empty()\n            || request.filter.schema_keys.contains(&row.schema_key))\n            && (request.filter.version_ids.is_empty()\n                || request.filter.version_ids.contains(&row.version_id))\n            && (request.filter.file_ids.is_empty()\n                || request\n                    .filter\n                    .file_ids\n                    .iter()\n                    .any(|filter| filter.matches(row.file_id.as_ref())))\n    }\n\n    fn live_state_row_matches_load(\n        row: &MaterializedLiveStateRow,\n    
    request: &LiveStateRowRequest,\n    ) -> bool {\n        row.schema_key == request.schema_key\n            && row.version_id == request.version_id\n            && row.entity_id == request.entity_id\n            && request.file_id.matches(row.file_id.as_ref())\n    }\n\n    fn test_file_descriptor_rows() -> Vec<MaterializedLiveStateRow> {\n        vec![\n            committed_file_descriptor_row(\"file-a\", \"version-a\"),\n            committed_file_descriptor_row(\"file-a\", \"version-b\"),\n            committed_file_descriptor_row(\"file-b\", \"version-a\"),\n            committed_file_descriptor_row(\"file-b\", \"version-b\"),\n        ]\n    }\n\n    fn pending_registered_schema_row(schema_key: &str) -> PreparedStateRow {\n        pending_registered_schema_from_definition(json!({\n            \"x-lix-key\": schema_key,\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\"],\n            \"additionalProperties\": false,\n        }))\n    }\n\n    fn pending_registered_schema_from_definition(schema: JsonValue) -> PreparedStateRow {\n        let key = schema_key_from_definition(&schema).expect(\"test schema should have a key\");\n        PreparedStateRow {\n            schema_plan_id: crate::catalog::SchemaPlanId::for_test(0),\n            facts: crate::transaction::types::PreparedRowFacts::default(),\n            entity_id: registered_schema_entity_id(&key.schema_key),\n            schema_key: REGISTERED_SCHEMA_KEY.to_string(),\n            file_id: None,\n            snapshot: Some(test_stage_json(&json!({ \"value\": schema }).to_string())),\n            metadata: None,\n            origin: None,\n            created_at: \"2026-04-29T00:00:00.000Z\".to_string(),\n            updated_at: \"2026-04-29T00:00:00.000Z\".to_string(),\n            global: true,\n            change_id: Some(\"change-registered-schema\".to_string()),\n            
commit_id: Some(\"commit-registered-schema\".to_string()),\n            untracked: false,\n            version_id: crate::GLOBAL_VERSION_ID.to_string(),\n        }\n    }\n\n    fn registered_schema_entity_id(schema_key: &str) -> crate::entity_identity::EntityIdentity {\n        crate::entity_identity::EntityIdentity::from_primary_key_paths(\n            &serde_json::json!({\n                \"value\": {\n                    \"x-lix-key\": schema_key,\n                }\n            }),\n            &[vec![\"value\".to_string(), \"x-lix-key\".to_string()]],\n        )\n        .expect(\"registered schema identity should derive\")\n    }\n\n    fn key_value_schema() -> JsonValue {\n        seed_schema_definition(\"lix_key_value\")\n            .expect(\"lix_key_value builtin schema should exist\")\n            .clone()\n    }\n\n    fn registered_schema() -> JsonValue {\n        seed_schema_definition(REGISTERED_SCHEMA_KEY)\n            .expect(\"lix_registered_schema builtin schema should exist\")\n            .clone()\n    }\n\n    fn file_descriptor_schema() -> JsonValue {\n        seed_schema_definition(FILE_DESCRIPTOR_SCHEMA_KEY)\n            .expect(\"lix_file_descriptor builtin schema should exist\")\n            .clone()\n    }\n\n    fn directory_descriptor_schema() -> JsonValue {\n        seed_schema_definition(DIRECTORY_DESCRIPTOR_SCHEMA_KEY)\n            .expect(\"lix_directory_descriptor builtin schema should exist\")\n            .clone()\n    }\n\n    fn unique_schema() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"unique_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"x-lix-unique\": [[\"/slug\"]],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"slug\": { \"type\": \"string\" },\n                \"title\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\", \"slug\", \"title\"],\n            
\"additionalProperties\": false\n        })\n    }\n\n    fn nullable_unique_schema() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"nullable_unique_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"x-lix-unique\": [[\"/scope\", \"/name\"]],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"scope\": { \"type\": [\"string\", \"null\"] },\n                \"name\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\", \"scope\", \"name\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn fk_parent_schema() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"fk_parent_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn composite_message_schema() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"composite_message_schema\",\n            \"x-lix-primary-key\": [\"/key\", \"/locale\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"key\": { \"type\": \"string\" },\n                \"locale\": { \"type\": \"string\" },\n                \"text\": { \"type\": \"string\" }\n            },\n            \"required\": [\"key\", \"locale\", \"text\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn fk_child_schema() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"fk_child_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"x-lix-foreign-keys\": [{\n                \"properties\": [\"/parent_id\"],\n                \"references\": {\n                    \"schemaKey\": \"fk_parent_schema\",\n                    \"properties\": [\"/id\"]\n                
}\n            }],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"parent_id\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\", \"parent_id\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn state_surface_ref_schema() -> JsonValue {\n        json!({\n            \"x-lix-key\": \"state_surface_ref_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"x-lix-state-foreign-keys\": [\n                [\"/target_entity_id\", \"/target_schema_key\", \"/target_file_id\"]\n            ],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"target_entity_id\": {\n                    \"type\": \"array\",\n                    \"items\": { \"type\": \"string\" },\n                    \"minItems\": 1\n                },\n                \"target_schema_key\": { \"type\": \"string\" },\n                \"target_file_id\": { \"type\": [\"string\", \"null\"] }\n            },\n            \"required\": [\"id\", \"target_entity_id\", \"target_schema_key\", \"target_file_id\"],\n            \"additionalProperties\": false\n        })\n    }\n\n    fn unique_row(entity_id: &str, slug: &str, title: &str) -> PreparedStateRow {\n        let mut row = staged_row(\n            \"unique_schema\",\n            Some(\n                json!({\n                    \"id\": entity_id,\n                    \"slug\": slug,\n                    \"title\": title,\n                })\n                .to_string(),\n            ),\n        );\n        row.entity_id = crate::entity_identity::EntityIdentity::single(entity_id);\n        row.file_id = Some(\"file-a\".to_string());\n        row.version_id = \"version-a\".to_string();\n        row.global = false;\n        row\n    }\n\n    fn nullable_unique_row(entity_id: &str, scope: Option<&str>, name: &str) -> 
PreparedStateRow {\n        let mut row = staged_row(\n            \"nullable_unique_schema\",\n            Some(\n                json!({\n                    \"id\": entity_id,\n                    \"scope\": scope,\n                    \"name\": name,\n                })\n                .to_string(),\n            ),\n        );\n        row.entity_id = crate::entity_identity::EntityIdentity::single(entity_id);\n        row.file_id = Some(\"file-a\".to_string());\n        row.version_id = \"version-a\".to_string();\n        row.global = false;\n        row\n    }\n\n    fn fk_parent_row(entity_id: &str, version_id: &str) -> PreparedStateRow {\n        let mut row = staged_row(\n            \"fk_parent_schema\",\n            Some(json!({ \"id\": entity_id }).to_string()),\n        );\n        row.entity_id = crate::entity_identity::EntityIdentity::single(entity_id);\n        row.file_id = Some(\"file-a\".to_string());\n        row.version_id = version_id.to_string();\n        row.global = false;\n        row\n    }\n\n    fn fk_child_row(entity_id: &str, parent_id: &str, version_id: &str) -> PreparedStateRow {\n        let mut row = staged_row(\n            \"fk_child_schema\",\n            Some(json!({ \"id\": entity_id, \"parent_id\": parent_id }).to_string()),\n        );\n        row.entity_id = crate::entity_identity::EntityIdentity::single(entity_id);\n        row.file_id = Some(\"file-a\".to_string());\n        row.version_id = version_id.to_string();\n        row.global = false;\n        row\n    }\n\n    fn composite_message_row(key: &str, locale: &str, version_id: &str) -> PreparedStateRow {\n        let snapshot = json!({\n            \"key\": key,\n            \"locale\": locale,\n            \"text\": \"Welcome\",\n        });\n        let mut row = staged_row(\"composite_message_schema\", Some(snapshot.to_string()));\n        row.entity_id = EntityIdentity::from_primary_key_paths(\n            &snapshot,\n            &[vec![\"key\".to_string()], 
vec![\"locale\".to_string()]],\n        )\n        .expect(\"composite message identity should derive\");\n        row.file_id = Some(\"file-a\".to_string());\n        row.version_id = version_id.to_string();\n        row.global = false;\n        row\n    }\n\n    fn state_surface_ref_row(\n        entity_id: &str,\n        target_entity_id: &str,\n        target_schema_key: &str,\n        target_file_id: &str,\n    ) -> PreparedStateRow {\n        state_surface_ref_row_with_target_entity_id(\n            entity_id,\n            json!([target_entity_id]),\n            target_schema_key,\n            target_file_id,\n        )\n    }\n\n    fn state_surface_ref_row_with_target_entity_id(\n        entity_id: &str,\n        target_entity_id: JsonValue,\n        target_schema_key: &str,\n        target_file_id: &str,\n    ) -> PreparedStateRow {\n        let mut row = staged_row(\n            \"state_surface_ref_schema\",\n            Some(\n                json!({\n                    \"id\": entity_id,\n                    \"target_entity_id\": target_entity_id,\n                    \"target_schema_key\": target_schema_key,\n                    \"target_file_id\": target_file_id,\n                })\n                .to_string(),\n            ),\n        );\n        row.entity_id = crate::entity_identity::EntityIdentity::single(entity_id);\n        row.file_id = Some(\"file-a\".to_string());\n        row.version_id = \"version-a\".to_string();\n        row.global = false;\n        row\n    }\n\n    fn mark_prepared_row_untracked(row: &mut PreparedStateRow) {\n        row.untracked = true;\n        row.change_id = None;\n        row.commit_id = None;\n    }\n\n    fn mark_live_row_untracked(row: &mut MaterializedLiveStateRow) {\n        row.untracked = true;\n        row.change_id = None;\n        row.commit_id = None;\n    }\n\n    fn staged_file_descriptor_row(file_id: &str, version_id: &str) -> PreparedStateRow {\n        let mut row = staged_row(\n            
FILE_DESCRIPTOR_SCHEMA_KEY,\n            Some(\n                json!({\n                    \"id\": file_id,\n                    \"directory_id\": null,\n                    \"name\": file_id,\n                    \"hidden\": false,\n                })\n                .to_string(),\n            ),\n        );\n        row.entity_id = crate::entity_identity::EntityIdentity::single(file_id);\n        row.file_id = None;\n        row.version_id = version_id.to_string();\n        row.global = version_id == crate::GLOBAL_VERSION_ID;\n        row\n    }\n\n    fn committed_file_descriptor_row(file_id: &str, version_id: &str) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow::from(staged_file_descriptor_row(file_id, version_id))\n    }\n\n    fn directory_descriptor_row(\n        directory_id: &str,\n        parent_id: Option<&str>,\n        name: &str,\n        version_id: &str,\n    ) -> PreparedStateRow {\n        let mut row = staged_row(\n            DIRECTORY_DESCRIPTOR_SCHEMA_KEY,\n            Some(\n                json!({\n                    \"id\": directory_id,\n                    \"parent_id\": parent_id,\n                    \"name\": name,\n                    \"hidden\": false,\n                })\n                .to_string(),\n            ),\n        );\n        row.entity_id = crate::entity_identity::EntityIdentity::single(directory_id);\n        row.file_id = None;\n        row.version_id = version_id.to_string();\n        row.global = version_id == crate::GLOBAL_VERSION_ID;\n        row\n    }\n\n    fn committed_unique_row(entity_id: &str, slug: &str, title: &str) -> MaterializedLiveStateRow {\n        let row = unique_row(entity_id, slug, title);\n        MaterializedLiveStateRow {\n            entity_id: row.entity_id,\n            schema_key: row.schema_key,\n            file_id: row.file_id,\n            snapshot_content: row.snapshot.as_ref().map(|snapshot| snapshot.materialize()),\n            metadata: 
row.metadata.as_ref().map(|metadata| metadata.materialize()),\n            deleted: row.snapshot.is_none(),\n            created_at: row.created_at,\n            updated_at: row.updated_at,\n            global: row.global,\n            change_id: row.change_id,\n            commit_id: row.commit_id,\n            untracked: row.untracked,\n            version_id: row.version_id,\n        }\n    }\n\n    fn committed_nullable_unique_row(\n        entity_id: &str,\n        scope: Option<&str>,\n        name: &str,\n    ) -> MaterializedLiveStateRow {\n        MaterializedLiveStateRow::from(nullable_unique_row(entity_id, scope, name))\n    }\n\n    fn staged_row(schema_key: &str, snapshot_content: Option<String>) -> PreparedStateRow {\n        PreparedStateRow {\n            schema_plan_id: crate::catalog::SchemaPlanId::for_test(0),\n            facts: crate::transaction::types::PreparedRowFacts::default(),\n            entity_id: crate::entity_identity::EntityIdentity::single(\"entity-1\"),\n            schema_key: schema_key.to_string(),\n            file_id: None,\n            snapshot: snapshot_content.as_deref().map(test_stage_json),\n            metadata: None,\n            origin: None,\n            created_at: \"2026-04-29T00:00:00.000Z\".to_string(),\n            updated_at: \"2026-04-29T00:00:00.000Z\".to_string(),\n            global: true,\n            change_id: Some(\"change-1\".to_string()),\n            commit_id: Some(\"commit-1\".to_string()),\n            untracked: false,\n            version_id: crate::GLOBAL_VERSION_ID.to_string(),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/untracked_state/codec.rs",
    "content": "use crate::entity_identity::EntityIdentity;\nuse crate::untracked_state::{UntrackedStateRow, UntrackedStateRowRef};\nuse crate::LixError;\n\nconst UNTRACKED_STATE_FILE_IDENTIFIER: &str = \"LXUS\";\n\npub(crate) fn encode_row_ref(row: UntrackedStateRowRef<'_>) -> Result<Vec<u8>, LixError> {\n    let entity_id = row.entity_id.as_json_array_text().map_err(|error| {\n        LixError::unknown(format!(\n            \"failed to encode untracked-state entity identity: {error}\"\n        ))\n    })?;\n\n    let mut builder = flatbuffers::FlatBufferBuilder::with_capacity(256);\n    let entity_id = builder.create_string(&entity_id);\n    let schema_key = builder.create_string(row.schema_key);\n    let file_id = row.file_id.map(|value| builder.create_string(value));\n    let snapshot_content = row\n        .snapshot_content\n        .map(|value| builder.create_string(value));\n    let metadata = row.metadata.map(|value| builder.create_string(value));\n    let created_at = builder.create_string(row.created_at);\n    let updated_at = builder.create_string(row.updated_at);\n    let version_id = builder.create_string(row.version_id);\n\n    let root = flatbuffer::create_untracked_state_row(\n        &mut builder,\n        &flatbuffer::UntrackedStateRowArgs {\n            entity_id,\n            schema_key,\n            file_id,\n            snapshot_content,\n            metadata,\n            created_at,\n            updated_at,\n            global: row.global,\n            version_id,\n        },\n    );\n    builder.finish(root, Some(UNTRACKED_STATE_FILE_IDENTIFIER));\n    Ok(builder.finished_data().to_vec())\n}\n\npub(crate) fn decode_row(bytes: &[u8]) -> Result<UntrackedStateRow, LixError> {\n    if bytes.len() < flatbuffers::SIZE_UOFFSET + flatbuffers::FILE_IDENTIFIER_LENGTH\n        || !flatbuffers::buffer_has_identifier(bytes, UNTRACKED_STATE_FILE_IDENTIFIER, false)\n    {\n        return Err(LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            
\"failed to decode untracked-state row: invalid FlatBuffers file identifier\",\n        ));\n    }\n\n    let row = flatbuffer::root_as_untracked_state_row(bytes).map_err(|error| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"failed to decode untracked-state row: {error}\"),\n        )\n    })?;\n\n    let entity_id = required_str(row.entity_id(), \"entity_id\")?;\n    let entity_id = EntityIdentity::from_json_array_text(entity_id).map_err(|error| {\n        LixError::unknown(format!(\n            \"failed to decode untracked-state entity identity: {error}\"\n        ))\n    })?;\n\n    Ok(UntrackedStateRow {\n        entity_id,\n        schema_key: required_str(row.schema_key(), \"schema_key\")?.to_string(),\n        file_id: row.file_id().map(ToString::to_string),\n        snapshot_content: row.snapshot_content().map(ToString::to_string),\n        metadata: row.metadata().map(ToString::to_string),\n        created_at: required_str(row.created_at(), \"created_at\")?.to_string(),\n        updated_at: required_str(row.updated_at(), \"updated_at\")?.to_string(),\n        global: row.global(),\n        version_id: required_str(row.version_id(), \"version_id\")?.to_string(),\n    })\n}\n\nfn required_str<'a>(value: Option<&'a str>, field: &str) -> Result<&'a str, LixError> {\n    value.ok_or_else(|| {\n        LixError::new(\n            \"LIX_ERROR_UNKNOWN\",\n            format!(\"failed to decode untracked-state row: missing required field `{field}`\"),\n        )\n    })\n}\n\nmod flatbuffer {\n    #[derive(Copy, Clone, PartialEq)]\n    pub(super) struct UntrackedStateRow<'a> {\n        table: flatbuffers::Table<'a>,\n    }\n\n    impl<'a> flatbuffers::Follow<'a> for UntrackedStateRow<'a> {\n        type Inner = UntrackedStateRow<'a>;\n\n        #[inline]\n        unsafe fn follow(buf: &'a [u8], loc: usize) -> Self::Inner {\n            Self {\n                table: unsafe { flatbuffers::Table::new(buf, loc) },\n            
}\n        }\n    }\n\n    impl<'a> UntrackedStateRow<'a> {\n        const VT_ENTITY_ID: flatbuffers::VOffsetT = 4;\n        const VT_SCHEMA_KEY: flatbuffers::VOffsetT = 6;\n        const VT_FILE_ID: flatbuffers::VOffsetT = 8;\n        const VT_SNAPSHOT_CONTENT: flatbuffers::VOffsetT = 10;\n        const VT_METADATA: flatbuffers::VOffsetT = 12;\n        const VT_CREATED_AT: flatbuffers::VOffsetT = 14;\n        const VT_UPDATED_AT: flatbuffers::VOffsetT = 16;\n        const VT_GLOBAL: flatbuffers::VOffsetT = 18;\n        const VT_VERSION_ID: flatbuffers::VOffsetT = 20;\n\n        #[inline]\n        pub(super) fn entity_id(&self) -> Option<&'a str> {\n            unsafe {\n                self.table\n                    .get::<flatbuffers::ForwardsUOffset<&str>>(Self::VT_ENTITY_ID, None)\n            }\n        }\n\n        #[inline]\n        pub(super) fn schema_key(&self) -> Option<&'a str> {\n            unsafe {\n                self.table\n                    .get::<flatbuffers::ForwardsUOffset<&str>>(Self::VT_SCHEMA_KEY, None)\n            }\n        }\n\n        #[inline]\n        pub(super) fn file_id(&self) -> Option<&'a str> {\n            unsafe {\n                self.table\n                    .get::<flatbuffers::ForwardsUOffset<&str>>(Self::VT_FILE_ID, None)\n            }\n        }\n\n        #[inline]\n        pub(super) fn snapshot_content(&self) -> Option<&'a str> {\n            unsafe {\n                self.table\n                    .get::<flatbuffers::ForwardsUOffset<&str>>(Self::VT_SNAPSHOT_CONTENT, None)\n            }\n        }\n\n        #[inline]\n        pub(super) fn metadata(&self) -> Option<&'a str> {\n            unsafe {\n                self.table\n                    .get::<flatbuffers::ForwardsUOffset<&str>>(Self::VT_METADATA, None)\n            }\n        }\n\n        #[inline]\n        pub(super) fn created_at(&self) -> Option<&'a str> {\n            unsafe {\n                self.table\n                    
.get::<flatbuffers::ForwardsUOffset<&str>>(Self::VT_CREATED_AT, None)\n            }\n        }\n\n        #[inline]\n        pub(super) fn updated_at(&self) -> Option<&'a str> {\n            unsafe {\n                self.table\n                    .get::<flatbuffers::ForwardsUOffset<&str>>(Self::VT_UPDATED_AT, None)\n            }\n        }\n\n        #[inline]\n        pub(super) fn global(&self) -> bool {\n            unsafe { self.table.get::<bool>(Self::VT_GLOBAL, Some(false)) }.unwrap_or(false)\n        }\n\n        #[inline]\n        pub(super) fn version_id(&self) -> Option<&'a str> {\n            unsafe {\n                self.table\n                    .get::<flatbuffers::ForwardsUOffset<&str>>(Self::VT_VERSION_ID, None)\n            }\n        }\n    }\n\n    impl flatbuffers::Verifiable for UntrackedStateRow<'_> {\n        #[inline]\n        fn run_verifier(\n            verifier: &mut flatbuffers::Verifier,\n            position: usize,\n        ) -> Result<(), flatbuffers::InvalidFlatbuffer> {\n            verifier\n                .visit_table(position)?\n                .visit_field::<flatbuffers::ForwardsUOffset<&str>>(\n                    \"entity_id\",\n                    Self::VT_ENTITY_ID,\n                    true,\n                )?\n                .visit_field::<flatbuffers::ForwardsUOffset<&str>>(\n                    \"schema_key\",\n                    Self::VT_SCHEMA_KEY,\n                    true,\n                )?\n                .visit_field::<flatbuffers::ForwardsUOffset<&str>>(\n                    \"file_id\",\n                    Self::VT_FILE_ID,\n                    false,\n                )?\n                .visit_field::<flatbuffers::ForwardsUOffset<&str>>(\n                    \"snapshot_content\",\n                    Self::VT_SNAPSHOT_CONTENT,\n                    false,\n                )?\n                .visit_field::<flatbuffers::ForwardsUOffset<&str>>(\n                    \"metadata\",\n                    
Self::VT_METADATA,\n                    false,\n                )?\n                .visit_field::<flatbuffers::ForwardsUOffset<&str>>(\n                    \"created_at\",\n                    Self::VT_CREATED_AT,\n                    true,\n                )?\n                .visit_field::<flatbuffers::ForwardsUOffset<&str>>(\n                    \"updated_at\",\n                    Self::VT_UPDATED_AT,\n                    true,\n                )?\n                .visit_field::<bool>(\"global\", Self::VT_GLOBAL, false)?\n                .visit_field::<flatbuffers::ForwardsUOffset<&str>>(\n                    \"version_id\",\n                    Self::VT_VERSION_ID,\n                    true,\n                )?\n                .finish();\n            Ok(())\n        }\n    }\n\n    pub(super) struct UntrackedStateRowArgs<'a> {\n        pub(super) entity_id: flatbuffers::WIPOffset<&'a str>,\n        pub(super) schema_key: flatbuffers::WIPOffset<&'a str>,\n        pub(super) file_id: Option<flatbuffers::WIPOffset<&'a str>>,\n        pub(super) snapshot_content: Option<flatbuffers::WIPOffset<&'a str>>,\n        pub(super) metadata: Option<flatbuffers::WIPOffset<&'a str>>,\n        pub(super) created_at: flatbuffers::WIPOffset<&'a str>,\n        pub(super) updated_at: flatbuffers::WIPOffset<&'a str>,\n        pub(super) global: bool,\n        pub(super) version_id: flatbuffers::WIPOffset<&'a str>,\n    }\n\n    pub(super) fn create_untracked_state_row<'bldr: 'args, 'args: 'mut_bldr, 'mut_bldr>(\n        builder: &'mut_bldr mut flatbuffers::FlatBufferBuilder<'bldr>,\n        args: &'args UntrackedStateRowArgs<'args>,\n    ) -> flatbuffers::WIPOffset<UntrackedStateRow<'bldr>> {\n        let start = builder.start_table();\n        builder.push_slot_always::<flatbuffers::WIPOffset<_>>(\n            UntrackedStateRow::VT_VERSION_ID,\n            args.version_id,\n        );\n        builder.push_slot::<bool>(UntrackedStateRow::VT_GLOBAL, args.global, false);\n        
builder.push_slot_always::<flatbuffers::WIPOffset<_>>(\n            UntrackedStateRow::VT_UPDATED_AT,\n            args.updated_at,\n        );\n        builder.push_slot_always::<flatbuffers::WIPOffset<_>>(\n            UntrackedStateRow::VT_CREATED_AT,\n            args.created_at,\n        );\n        if let Some(metadata) = args.metadata {\n            builder.push_slot_always::<flatbuffers::WIPOffset<_>>(\n                UntrackedStateRow::VT_METADATA,\n                metadata,\n            );\n        }\n        if let Some(snapshot_content) = args.snapshot_content {\n            builder.push_slot_always::<flatbuffers::WIPOffset<_>>(\n                UntrackedStateRow::VT_SNAPSHOT_CONTENT,\n                snapshot_content,\n            );\n        }\n        if let Some(file_id) = args.file_id {\n            builder.push_slot_always::<flatbuffers::WIPOffset<_>>(\n                UntrackedStateRow::VT_FILE_ID,\n                file_id,\n            );\n        }\n        builder.push_slot_always::<flatbuffers::WIPOffset<_>>(\n            UntrackedStateRow::VT_SCHEMA_KEY,\n            args.schema_key,\n        );\n        builder.push_slot_always::<flatbuffers::WIPOffset<_>>(\n            UntrackedStateRow::VT_ENTITY_ID,\n            args.entity_id,\n        );\n        let offset = builder.end_table(start);\n        flatbuffers::WIPOffset::new(offset.value())\n    }\n\n    #[inline]\n    pub(super) fn root_as_untracked_state_row(\n        bytes: &[u8],\n    ) -> Result<UntrackedStateRow<'_>, flatbuffers::InvalidFlatbuffer> {\n        flatbuffers::root::<UntrackedStateRow>(bytes)\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/untracked_state/context.rs",
    "content": "use crate::storage::{StorageReader, StorageWriteSet};\nuse crate::untracked_state::{\n    MaterializedUntrackedStateRow, UntrackedStateIdentity, UntrackedStateIdentityRef,\n    UntrackedStateRowRef, UntrackedStateRowRequest, UntrackedStateScanRequest,\n};\nuse crate::LixError;\n\n/// Durable local overlay excluded from changelog and commit membership.\n///\n/// Untracked state is not change-controlled, but it is still durable local\n/// state. It is read alongside tracked live state and can override tracked rows\n/// with the same identity.\n#[derive(Clone, Copy)]\npub(crate) struct UntrackedStateContext;\n\nimpl UntrackedStateContext {\n    pub(crate) fn new() -> Self {\n        Self\n    }\n\n    /// Creates a reader over a caller-provided KV store.\n    ///\n    /// The caller decides which KV store supplies visibility for the read.\n    pub(crate) fn reader<S>(&self, store: S) -> UntrackedStateStoreReader<S>\n    where\n        S: StorageReader,\n    {\n        UntrackedStateStoreReader { store }\n    }\n\n    /// Creates a writer over a transaction-local storage write set.\n    ///\n    /// The context never opens its own transaction; the caller applies the\n    /// write set to choose the durable commit or rollback boundary.\n    pub(crate) fn writer<'a>(&self, writes: &'a mut StorageWriteSet) -> UntrackedStateWriter<'a> {\n        UntrackedStateWriter { writes }\n    }\n}\n\n/// Store-backed untracked-state reader created by `UntrackedStateContext`.\npub(crate) struct UntrackedStateStoreReader<S> {\n    store: S,\n}\n\nimpl<S> UntrackedStateStoreReader<S>\nwhere\n    S: StorageReader,\n{\n    pub(crate) async fn scan_rows(\n        &mut self,\n        request: &UntrackedStateScanRequest,\n    ) -> Result<Vec<MaterializedUntrackedStateRow>, LixError> {\n        crate::untracked_state::storage::scan_rows(&mut self.store, request).await\n    }\n\n    pub(crate) async fn load_row(\n        &mut self,\n        request: &UntrackedStateRowRequest,\n 
   ) -> Result<Option<MaterializedUntrackedStateRow>, LixError> {\n        crate::untracked_state::storage::load_row(&mut self.store, request).await\n    }\n\n    pub(crate) async fn existing_identities<'a, I>(\n        &mut self,\n        identities: I,\n    ) -> Result<Vec<UntrackedStateIdentity>, LixError>\n    where\n        I: IntoIterator<Item = UntrackedStateIdentityRef<'a>>,\n    {\n        crate::untracked_state::storage::existing_identities(&mut self.store, identities).await\n    }\n}\n\n/// Untracked-state writer over a transaction-local storage write set.\npub(crate) struct UntrackedStateWriter<'a> {\n    writes: &'a mut StorageWriteSet,\n}\n\nimpl UntrackedStateWriter<'_> {\n    /// Stages the latest untracked rows for their identities.\n    ///\n    /// A row with `snapshot_content = None` is treated as removal because\n    /// untracked state keeps only the current local value, not tombstones.\n    pub(crate) fn stage_rows<'a, I>(&mut self, rows: I) -> Result<(), LixError>\n    where\n        I: IntoIterator<Item = UntrackedStateRowRef<'a>>,\n    {\n        crate::untracked_state::storage::stage_rows(self.writes, rows)\n    }\n\n    /// Removes untracked rows by exact identity.\n    pub(crate) fn stage_delete_rows<'a, I>(&mut self, identities: I)\n    where\n        I: IntoIterator<Item = UntrackedStateIdentityRef<'a>>,\n    {\n        crate::untracked_state::storage::stage_delete_rows(self.writes, identities)\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/untracked_state/materialization.rs",
    "content": "use crate::untracked_state::{MaterializedUntrackedStateRow, UntrackedStateRow};\nuse crate::{parse_row_metadata, LixError};\n\npub(crate) fn materialize_row(\n    row: UntrackedStateRow,\n    projection: &UntrackedMaterializationProjection,\n) -> Result<MaterializedUntrackedStateRow, LixError> {\n    let deleted = row.snapshot_content.is_none();\n    let snapshot_content = if projection.snapshot_content {\n        row.snapshot_content\n    } else {\n        None\n    };\n    let metadata = if projection.metadata {\n        load_optional_metadata(row.metadata)?\n    } else {\n        None\n    };\n    Ok(MaterializedUntrackedStateRow {\n        entity_id: row.entity_id,\n        schema_key: row.schema_key,\n        file_id: row.file_id,\n        snapshot_content,\n        metadata,\n        deleted,\n        created_at: row.created_at,\n        updated_at: row.updated_at,\n        global: row.global,\n        version_id: row.version_id,\n    })\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) struct UntrackedMaterializationProjection {\n    pub(crate) snapshot_content: bool,\n    pub(crate) metadata: bool,\n}\n\nimpl UntrackedMaterializationProjection {\n    pub(crate) fn full() -> Self {\n        Self {\n            snapshot_content: true,\n            metadata: true,\n        }\n    }\n\n    pub(crate) fn from_columns(columns: &[String]) -> Self {\n        if columns.is_empty() {\n            return Self::full();\n        }\n        Self {\n            snapshot_content: columns.iter().any(|column| column == \"snapshot_content\"),\n            metadata: columns.iter().any(|column| column == \"metadata\"),\n        }\n    }\n}\n\nfn load_optional_metadata(metadata: Option<String>) -> Result<Option<String>, LixError> {\n    let Some(json) = metadata else {\n        return Ok(None);\n    };\n    parse_row_metadata(&json, \"untracked_state metadata\").map(Some)\n}\n"
  },
  {
    "path": "packages/engine/src/untracked_state/mod.rs",
    "content": "mod codec;\nmod context;\nmod materialization;\npub(crate) mod storage;\nmod types;\n\n#[allow(unused_imports)]\npub(crate) use context::{UntrackedStateContext, UntrackedStateStoreReader, UntrackedStateWriter};\npub(crate) use materialization::{materialize_row, UntrackedMaterializationProjection};\n#[allow(unused_imports)]\npub(crate) use types::{\n    MaterializedUntrackedStateRow, UntrackedStateFilter, UntrackedStateIdentity,\n    UntrackedStateIdentityRef, UntrackedStateProjection, UntrackedStateRow, UntrackedStateRowRef,\n    UntrackedStateRowRequest, UntrackedStateScanRequest,\n};\n"
  },
  {
    "path": "packages/engine/src/untracked_state/storage.rs",
    "content": "use crate::storage::KvScanRange;\nuse crate::storage::{KvGetGroup, KvGetRequest, KvScanRequest, StorageReader, StorageWriteSet};\nuse crate::untracked_state::{\n    MaterializedUntrackedStateRow, UntrackedMaterializationProjection, UntrackedStateIdentity,\n    UntrackedStateIdentityRef, UntrackedStateRow, UntrackedStateRowRef, UntrackedStateRowRequest,\n    UntrackedStateScanRequest,\n};\nuse crate::{LixError, NullableKeyFilter};\n\npub(super) const UNTRACKED_STATE_ROW_NAMESPACE: &str = \"untracked_state.row\";\n\npub(crate) async fn scan_rows(\n    store: &mut impl StorageReader,\n    request: &UntrackedStateScanRequest,\n) -> Result<Vec<MaterializedUntrackedStateRow>, LixError> {\n    let mut rows = scan_all_canonical_rows(store).await?;\n    rows.retain(|row| row_matches_scan(row, request));\n    if let Some(limit) = request.limit {\n        rows.truncate(limit);\n    }\n    let projection = UntrackedMaterializationProjection::from_columns(&request.projection.columns);\n    let mut materialized = Vec::with_capacity(rows.len());\n    for row in rows {\n        materialized.push(crate::untracked_state::materialize_row(row, &projection)?);\n    }\n    Ok(materialized)\n}\n\npub(crate) async fn load_row(\n    store: &mut impl StorageReader,\n    request: &UntrackedStateRowRequest,\n) -> Result<Option<MaterializedUntrackedStateRow>, LixError> {\n    let Some(identity) = identity_from_request(request) else {\n        return Ok(None);\n    };\n    let bytes = store\n        .get_values(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: UNTRACKED_STATE_ROW_NAMESPACE.to_string(),\n                keys: vec![encode_untracked_state_row_key(&identity)],\n            }],\n        })\n        .await?\n        .groups\n        .into_iter()\n        .next()\n        .and_then(|group| group.single_value_owned());\n    let Some(bytes) = bytes else {\n        return Ok(None);\n    };\n    let row = 
crate::untracked_state::codec::decode_row(&bytes)?;\n    crate::untracked_state::materialize_row(row, &UntrackedMaterializationProjection::full())\n        .map(Some)\n}\n\npub(super) async fn existing_identities<'a>(\n    store: &mut (impl StorageReader + ?Sized),\n    identities: impl IntoIterator<Item = UntrackedStateIdentityRef<'a>>,\n) -> Result<Vec<UntrackedStateIdentity>, LixError> {\n    let mut candidates = identities\n        .into_iter()\n        .map(|identity| {\n            let owned = UntrackedStateIdentity {\n                version_id: identity.version_id.to_string(),\n                schema_key: identity.schema_key.to_string(),\n                entity_id: identity.entity_id.clone(),\n                file_id: identity.file_id.map(str::to_string),\n            };\n            let key = encode_untracked_state_row_key_ref(owned.as_ref());\n            (key, owned)\n        })\n        .collect::<Vec<_>>();\n    candidates.sort_by(|(left, _), (right, _)| left.cmp(right));\n    candidates.dedup_by(|(left, _), (right, _)| left == right);\n    if candidates.is_empty() {\n        return Ok(Vec::new());\n    }\n    let keys = candidates\n        .iter()\n        .map(|(key, _)| key.clone())\n        .collect::<Vec<_>>();\n\n    let result = store\n        .exists_many(KvGetRequest {\n            groups: vec![KvGetGroup {\n                namespace: UNTRACKED_STATE_ROW_NAMESPACE.to_string(),\n                keys,\n            }],\n        })\n        .await?;\n    let group = result.groups.into_iter().next().ok_or_else(|| {\n        LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            \"untracked identity existence probe returned no result group\",\n        )\n    })?;\n    if group.exists.len() != candidates.len() {\n        return Err(LixError::new(\n            LixError::CODE_INTERNAL_ERROR,\n            format!(\n                \"untracked identity existence probe returned {} results for {} requested keys\",\n                
group.exists.len(),\n                candidates.len()\n            ),\n        ));\n    }\n\n    Ok(candidates\n        .into_iter()\n        .zip(group.exists)\n        .filter_map(|((_, identity), exists)| exists.then_some(identity))\n        .collect())\n}\n\npub(crate) fn stage_rows<'a, I>(writes: &mut StorageWriteSet, rows: I) -> Result<(), LixError>\nwhere\n    I: IntoIterator<Item = UntrackedStateRowRef<'a>>,\n{\n    for row in rows {\n        if row.snapshot_content.is_none() {\n            writes.delete(\n                UNTRACKED_STATE_ROW_NAMESPACE,\n                encode_untracked_state_row_key_ref(row.into()),\n            );\n        } else {\n            writes.put(\n                UNTRACKED_STATE_ROW_NAMESPACE,\n                encode_untracked_state_row_key_ref(row.into()),\n                crate::untracked_state::codec::encode_row_ref(row)?,\n            );\n        }\n    }\n    Ok(())\n}\n\npub(crate) fn stage_delete_rows<'a, I>(writes: &mut StorageWriteSet, identities: I)\nwhere\n    I: IntoIterator<Item = UntrackedStateIdentityRef<'a>>,\n{\n    for identity in identities {\n        writes.delete(\n            UNTRACKED_STATE_ROW_NAMESPACE,\n            encode_untracked_state_row_key_ref(identity),\n        );\n    }\n}\n\nasync fn scan_all_canonical_rows(\n    store: &mut impl StorageReader,\n) -> Result<Vec<UntrackedStateRow>, LixError> {\n    let page = store\n        .scan_values(KvScanRequest {\n            namespace: UNTRACKED_STATE_ROW_NAMESPACE.to_string(),\n            range: KvScanRange::prefix(Vec::new()),\n            after: None,\n            limit: usize::MAX,\n        })\n        .await?;\n    page.values\n        .iter()\n        .map(crate::untracked_state::codec::decode_row)\n        .collect()\n}\n\nfn row_matches_scan(row: &UntrackedStateRow, request: &UntrackedStateScanRequest) -> bool {\n    (request.filter.schema_keys.is_empty() || request.filter.schema_keys.contains(&row.schema_key))\n        && 
(request.filter.entity_ids.is_empty()\n            || request.filter.entity_ids.contains(&row.entity_id))\n        && (request.filter.version_ids.is_empty()\n            || request.filter.version_ids.contains(&row.version_id))\n        && nullable_matches_filters(&row.file_id, &request.filter.file_ids)\n}\n\nfn nullable_matches_filters(value: &Option<String>, filters: &[NullableKeyFilter<String>]) -> bool {\n    filters.is_empty()\n        || filters.iter().any(|filter| match filter {\n            NullableKeyFilter::Any => true,\n            NullableKeyFilter::Null => value.is_none(),\n            NullableKeyFilter::Value(expected) => value.as_ref() == Some(expected),\n        })\n}\n\nfn identity_from_request(request: &UntrackedStateRowRequest) -> Option<UntrackedStateIdentity> {\n    let file_id = match &request.file_id {\n        NullableKeyFilter::Null => None,\n        NullableKeyFilter::Value(value) => Some(value.clone()),\n        NullableKeyFilter::Any => return None,\n    };\n    Some(UntrackedStateIdentity {\n        version_id: request.version_id.clone(),\n        schema_key: request.schema_key.clone(),\n        entity_id: request.entity_id.clone(),\n        file_id,\n    })\n}\n\nfn encode_untracked_state_row_key(identity: &UntrackedStateIdentity) -> Vec<u8> {\n    encode_untracked_state_row_key_ref(identity.as_ref())\n}\n\npub(super) fn encode_untracked_state_row_key_ref(\n    identity: UntrackedStateIdentityRef<'_>,\n) -> Vec<u8> {\n    let mut out = Vec::new();\n    push_component(&mut out, identity.version_id);\n    push_component(&mut out, identity.schema_key);\n    let entity_id = identity\n        .entity_id\n        .as_json_array_text()\n        .expect(\"untracked-state identity should project\");\n    push_component(&mut out, &entity_id);\n    match identity.file_id {\n        Some(file_id) => {\n            out.push(1);\n            push_component(&mut out, file_id);\n        }\n        None => out.push(0),\n    }\n    out\n}\n\nfn 
push_component(out: &mut Vec<u8>, value: &str) {\n    let bytes = value.as_bytes();\n    out.extend_from_slice(&(bytes.len() as u32).to_be_bytes());\n    out.extend_from_slice(bytes);\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use super::*;\n    use crate::backend::testing::UnitTestBackend;\n    use crate::storage::{StorageContext, StorageWriteTransaction};\n    use crate::untracked_state::UntrackedStateContext;\n\n    async fn write_materialized_rows_to_store(\n        context: &UntrackedStateContext,\n        store: &mut (impl StorageWriteTransaction + ?Sized),\n        rows: &[MaterializedUntrackedStateRow],\n    ) {\n        let mut writes = StorageWriteSet::new();\n        let canonical_rows = rows\n            .iter()\n            .map(|row| crate::test_support::untracked_state_row_from_materialized(&mut writes, row))\n            .collect::<Result<Vec<_>, _>>()\n            .expect(\"rows should canonicalize\");\n        context\n            .writer(&mut writes)\n            .stage_rows(canonical_rows.iter().map(|row| row.as_ref()))\n            .expect(\"rows should write\");\n        writes.apply(store).await.expect(\"rows should apply\");\n    }\n\n    #[tokio::test]\n    async fn write_and_load_roundtrips() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let context = UntrackedStateContext::new();\n        let row = untracked_row(\"global\", \"lix_key_value\", \"ui-tab\");\n\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_materialized_rows_to_store(\n            &context,\n            transaction.as_mut(),\n            std::slice::from_ref(&row),\n        )\n        .await;\n        transaction.commit().await.expect(\"commit should succeed\");\n\n        let loaded = {\n            let mut reader = context.reader(storage.clone());\n        
    reader\n                .load_row(&UntrackedStateRowRequest {\n                    schema_key: \"lix_key_value\".to_string(),\n                    version_id: \"global\".to_string(),\n                    entity_id: crate::entity_identity::EntityIdentity::single(\"ui-tab\"),\n                    file_id: NullableKeyFilter::Null,\n                })\n                .await\n        }\n        .expect(\"load should succeed\");\n        assert_eq!(loaded, Some(row));\n    }\n\n    #[tokio::test]\n    async fn scan_filters_by_schema_and_version() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let context = UntrackedStateContext::new();\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        write_materialized_rows_to_store(\n            &context,\n            transaction.as_mut(),\n            &[\n                untracked_row(\"global\", \"lix_key_value\", \"global-ui\"),\n                untracked_row(\"version-a\", \"lix_key_value\", \"version-ui\"),\n                untracked_row(\"version-a\", \"other_schema\", \"other\"),\n            ],\n        )\n        .await;\n        transaction.commit().await.expect(\"commit should succeed\");\n\n        let rows = {\n            let mut reader = context.reader(storage.clone());\n            reader\n                .scan_rows(&UntrackedStateScanRequest {\n                    filter: crate::untracked_state::UntrackedStateFilter {\n                        schema_keys: vec![\"lix_key_value\".to_string()],\n                        version_ids: vec![\"version-a\".to_string()],\n                        ..Default::default()\n                    },\n                    ..Default::default()\n                })\n                .await\n        }\n        .expect(\"scan should succeed\");\n\n        assert_eq!(rows.len(), 1);\n        assert_eq!(\n 
           rows[0].entity_id,\n            crate::entity_identity::EntityIdentity::single(\"version-ui\")\n        );\n    }\n\n    #[tokio::test]\n    async fn delete_removes_row() {\n        let backend = Arc::new(UnitTestBackend::new());\n        let storage = StorageContext::new(backend.clone());\n        let context = UntrackedStateContext::new();\n        let row = untracked_row(\"global\", \"lix_key_value\", \"ui-tab\");\n        let identity = UntrackedStateIdentity {\n            version_id: row.version_id.clone(),\n            schema_key: row.schema_key.clone(),\n            entity_id: row.entity_id.clone(),\n            file_id: row.file_id.clone(),\n        };\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        let mut writes = StorageWriteSet::new();\n        let canonical_row =\n            crate::test_support::untracked_state_row_from_materialized(&mut writes, &row)\n                .expect(\"row should canonicalize\");\n        let mut writer = context.writer(&mut writes);\n        writer\n            .stage_rows(std::iter::once(canonical_row.as_ref()))\n            .expect(\"write should succeed\");\n        writer.stage_delete_rows(std::iter::once(identity.as_ref()));\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"writes should apply\");\n        transaction.commit().await.expect(\"commit should succeed\");\n\n        let loaded = {\n            let mut reader = context.reader(storage.clone());\n            reader\n                .load_row(&UntrackedStateRowRequest {\n                    schema_key: \"lix_key_value\".to_string(),\n                    version_id: \"global\".to_string(),\n                    entity_id: crate::entity_identity::EntityIdentity::single(\"ui-tab\"),\n                    file_id: NullableKeyFilter::Null,\n                })\n                .await\n        
}\n        .expect(\"load should succeed\");\n        assert_eq!(loaded, None);\n    }\n\n    fn untracked_row(\n        version_id: &str,\n        schema_key: &str,\n        entity_id: &str,\n    ) -> MaterializedUntrackedStateRow {\n        MaterializedUntrackedStateRow {\n            entity_id: crate::entity_identity::EntityIdentity::single(entity_id),\n            schema_key: schema_key.to_string(),\n            file_id: None,\n            snapshot_content: Some(format!(\"{{\\\"key\\\":\\\"{}\\\",\\\"value\\\":\\\"value\\\"}}\", entity_id)),\n            metadata: None,\n            deleted: false,\n            created_at: \"2026-01-01T00:00:00Z\".to_string(),\n            updated_at: \"2026-01-01T00:00:00Z\".to_string(),\n            global: version_id == \"global\",\n            version_id: version_id.to_string(),\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/untracked_state/types.rs",
    "content": "use crate::entity_identity::EntityIdentity;\nuse crate::NullableKeyFilter;\n\n/// Durable local row excluded from changelog and commit membership.\n///\n/// This is the canonical physical shape: identity/header fields are stored\n/// directly, and mutable JSON payloads are stored inline in the sidecar row.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct UntrackedStateRow {\n    pub(crate) entity_id: EntityIdentity,\n    pub(crate) schema_key: String,\n    pub(crate) file_id: Option<String>,\n    pub(crate) snapshot_content: Option<String>,\n    pub(crate) metadata: Option<String>,\n    pub(crate) created_at: String,\n    pub(crate) updated_at: String,\n    pub(crate) global: bool,\n    pub(crate) version_id: String,\n}\n\nimpl UntrackedStateRow {\n    pub(crate) fn as_ref(&self) -> UntrackedStateRowRef<'_> {\n        UntrackedStateRowRef {\n            entity_id: &self.entity_id,\n            schema_key: &self.schema_key,\n            file_id: self.file_id.as_deref(),\n            snapshot_content: self.snapshot_content.as_deref(),\n            metadata: self.metadata.as_deref(),\n            created_at: &self.created_at,\n            updated_at: &self.updated_at,\n            global: self.global,\n            version_id: &self.version_id,\n        }\n    }\n}\n\n/// Zero-copy view of untracked-state write row.\n///\n/// Untracked state owns this storage-facing write shape. 
Callers adapt into it\n/// without making untracked_state depend on transaction or live-state types.\n#[derive(Debug, Clone, Copy)]\npub(crate) struct UntrackedStateRowRef<'a> {\n    pub(crate) entity_id: &'a EntityIdentity,\n    pub(crate) schema_key: &'a str,\n    pub(crate) file_id: Option<&'a str>,\n    pub(crate) snapshot_content: Option<&'a str>,\n    pub(crate) metadata: Option<&'a str>,\n    pub(crate) created_at: &'a str,\n    pub(crate) updated_at: &'a str,\n    pub(crate) global: bool,\n    pub(crate) version_id: &'a str,\n}\n\n/// Hydrated boundary shape for callers that still work with JSON payloads.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]\npub(crate) struct MaterializedUntrackedStateRow {\n    pub(crate) entity_id: EntityIdentity,\n    pub(crate) schema_key: String,\n    pub(crate) file_id: Option<String>,\n    pub(crate) snapshot_content: Option<String>,\n    pub(crate) metadata: Option<String>,\n    pub(crate) deleted: bool,\n    pub(crate) created_at: String,\n    pub(crate) updated_at: String,\n    pub(crate) global: bool,\n    pub(crate) version_id: String,\n}\n\n/// Stable identity for one local untracked overlay row.\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\npub(crate) struct UntrackedStateIdentity {\n    pub(crate) version_id: String,\n    pub(crate) schema_key: String,\n    pub(crate) entity_id: EntityIdentity,\n    pub(crate) file_id: Option<String>,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) struct UntrackedStateIdentityRef<'a> {\n    pub(crate) version_id: &'a str,\n    pub(crate) schema_key: &'a str,\n    pub(crate) entity_id: &'a EntityIdentity,\n    pub(crate) file_id: Option<&'a str>,\n}\n\nimpl UntrackedStateIdentity {\n    pub(crate) fn as_ref(&self) -> UntrackedStateIdentityRef<'_> {\n        UntrackedStateIdentityRef {\n            version_id: &self.version_id,\n            schema_key: &self.schema_key,\n            entity_id: &self.entity_id,\n            
file_id: self.file_id.as_deref(),\n        }\n    }\n}\n\nimpl<'a> From<UntrackedStateRowRef<'a>> for UntrackedStateIdentityRef<'a> {\n    fn from(row: UntrackedStateRowRef<'a>) -> Self {\n        Self {\n            version_id: row.version_id,\n            schema_key: row.schema_key,\n            entity_id: row.entity_id,\n            file_id: row.file_id,\n        }\n    }\n}\n\n/// Identity-centered filter for untracked local overlay scans.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)]\npub(crate) struct UntrackedStateFilter {\n    #[serde(default)]\n    pub(crate) schema_keys: Vec<String>,\n    #[serde(default)]\n    pub(crate) entity_ids: Vec<EntityIdentity>,\n    #[serde(default)]\n    pub(crate) version_ids: Vec<String>,\n    #[serde(default)]\n    pub(crate) file_ids: Vec<NullableKeyFilter<String>>,\n}\n\n/// Requested property set for an untracked-state scan.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)]\npub(crate) struct UntrackedStateProjection {\n    #[serde(default)]\n    pub(crate) columns: Vec<String>,\n}\n\n/// Scan request for local untracked overlay rows.\n#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)]\npub(crate) struct UntrackedStateScanRequest {\n    #[serde(default)]\n    pub(crate) filter: UntrackedStateFilter,\n    #[serde(default)]\n    pub(crate) projection: UntrackedStateProjection,\n    #[serde(default)]\n    pub(crate) limit: Option<usize>,\n}\n\n/// Point lookup request for one untracked local overlay row.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct UntrackedStateRowRequest {\n    pub(crate) schema_key: String,\n    pub(crate) version_id: String,\n    pub(crate) entity_id: EntityIdentity,\n    pub(crate) file_id: NullableKeyFilter<String>,\n}\n"
  },
  {
    "path": "packages/engine/src/version/context.rs",
    "content": "use std::sync::Arc;\n\nuse crate::storage::{StorageReader, StorageWriteSet};\nuse crate::untracked_state::{UntrackedStateContext, UntrackedStateRow};\n\nuse super::refs::VersionRefContext;\nuse super::VersionRefReader;\n\n/// Aggregate entrypoint for version-domain services.\n///\n/// Today this owns the moving-ref subsystem. Descriptor helpers are re-exported\n/// by `version`; future version APIs can grow here without making session or\n/// SQL code depend directly on ref storage details.\npub(crate) struct VersionContext {\n    refs: Arc<VersionRefContext>,\n}\n\nimpl VersionContext {\n    pub(crate) fn new(untracked_state: Arc<UntrackedStateContext>) -> Self {\n        Self {\n            refs: Arc::new(VersionRefContext::new(untracked_state)),\n        }\n    }\n\n    /// Creates a version-ref reader over a caller-provided KV store.\n    pub(crate) fn ref_reader<S>(&self, store: S) -> impl VersionRefReader\n    where\n        S: StorageReader + Send,\n    {\n        self.refs.reader(store)\n    }\n\n    pub(crate) fn stage_canonical_ref_rows(\n        &self,\n        writes: &mut StorageWriteSet,\n        rows: &[UntrackedStateRow],\n    ) -> Result<(), crate::LixError> {\n        self.refs.writer(writes).stage_rows(rows)\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/version/lifecycle.rs",
    "content": "use crate::commit_graph::{CommitGraphCommit, CommitGraphReader};\nuse crate::common::validate_non_empty_identity_value;\nuse crate::LixError;\n\nuse super::{VersionHead, VersionRefReader};\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) enum VersionOperation {\n    CreateVersion,\n    SwitchVersion,\n    MergeVersion,\n    MergeVersionPreview,\n    LoadWorkspaceSelector,\n}\n\nimpl VersionOperation {\n    pub(crate) fn label(self) -> &'static str {\n        match self {\n            Self::CreateVersion => \"create_version\",\n            Self::SwitchVersion => \"switch_version\",\n            Self::MergeVersion => \"merge_version\",\n            Self::MergeVersionPreview => \"merge_version_preview\",\n            Self::LoadWorkspaceSelector => \"load_workspace_version_id\",\n        }\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub(crate) enum VersionReferenceRole {\n    Source,\n    Target,\n    WorkspaceSelector,\n    CommitSource,\n}\n\nimpl VersionReferenceRole {\n    pub(crate) fn label(self) -> &'static str {\n        match self {\n            Self::Source => \"source\",\n            Self::Target => \"target\",\n            Self::WorkspaceSelector => \"workspace_selector\",\n            Self::CommitSource => \"commit_source\",\n        }\n    }\n}\n\n/// Shared domain service for resolving public version references.\n///\n/// Built-in version schemas describe row shape. 
This service owns semantic\n/// ref validation: non-empty ids, global sentinel handling, and missing refs.\npub(crate) struct VersionLifecycle<'a> {\n    refs: &'a dyn VersionRefReader,\n}\n\nimpl<'a> VersionLifecycle<'a> {\n    pub(crate) fn new(refs: &'a dyn VersionRefReader) -> Self {\n        Self { refs }\n    }\n\n    pub(crate) fn require_non_empty_id(\n        version_id: &str,\n        operation: VersionOperation,\n        role: VersionReferenceRole,\n    ) -> Result<(), LixError> {\n        require_non_empty_public_id(\"version_id\", version_id, operation, role)\n    }\n\n    pub(crate) async fn require_existing_commit(\n        commit_graph: &mut dyn CommitGraphReader,\n        commit_id: &str,\n        operation: VersionOperation,\n        role: VersionReferenceRole,\n    ) -> Result<CommitGraphCommit, LixError> {\n        require_non_empty_public_id(\"commit_id\", commit_id, operation, role)?;\n        commit_graph\n            .load_commit(commit_id)\n            .await?\n            .ok_or_else(|| LixError::version_not_found(commit_id, operation.label(), role.label()))\n    }\n\n    pub(crate) async fn require_existing_ref(\n        &self,\n        version_id: &str,\n        operation: VersionOperation,\n        role: VersionReferenceRole,\n    ) -> Result<VersionHead, LixError> {\n        Self::require_non_empty_id(version_id, operation, role)?;\n        self.require_existing_stored_ref(version_id, operation, role)\n            .await\n    }\n\n    pub(crate) async fn require_existing_commit_id(\n        &self,\n        version_id: &str,\n        operation: VersionOperation,\n        role: VersionReferenceRole,\n    ) -> Result<String, LixError> {\n        Ok(self\n            .require_existing_ref(version_id, operation, role)\n            .await?\n            .commit_id)\n    }\n\n    async fn require_existing_stored_ref(\n        &self,\n        version_id: &str,\n        operation: VersionOperation,\n        role: VersionReferenceRole,\n    ) -> 
Result<VersionHead, LixError> {\n        self.refs\n            .load_head(version_id)\n            .await?\n            .ok_or_else(|| LixError::version_not_found(version_id, operation.label(), role.label()))\n    }\n}\n\nfn require_non_empty_public_id(\n    label: &str,\n    value: &str,\n    operation: VersionOperation,\n    role: VersionReferenceRole,\n) -> Result<(), LixError> {\n    validate_non_empty_identity_value(label, value)\n        .map(|_| ())\n        .map_err(|_| {\n            LixError::new(\n                LixError::CODE_INVALID_PARAM,\n                format!(\n                    \"{} {} {label} must be non-empty\",\n                    operation.label(),\n                    role.label()\n                ),\n            )\n        })\n}\n\n#[cfg(test)]\nmod tests {\n    use async_trait::async_trait;\n\n    use super::*;\n\n    #[tokio::test]\n    async fn require_existing_ref_returns_head() {\n        let reader = RowsVersionRefReader::new(vec![VersionHead {\n            version_id: \"version-a\".to_string(),\n            commit_id: \"commit-a\".to_string(),\n        }]);\n        let lifecycle = VersionLifecycle::new(&reader);\n\n        let head = lifecycle\n            .require_existing_ref(\n                \"version-a\",\n                VersionOperation::SwitchVersion,\n                VersionReferenceRole::Target,\n            )\n            .await\n            .expect(\"version should resolve\");\n\n        assert_eq!(head.commit_id, \"commit-a\");\n    }\n\n    #[tokio::test]\n    async fn require_existing_ref_rejects_empty_id_as_invalid_param() {\n        let reader = RowsVersionRefReader::new(Vec::new());\n        let lifecycle = VersionLifecycle::new(&reader);\n\n        let error = lifecycle\n            .require_existing_ref(\n                \"\",\n                VersionOperation::SwitchVersion,\n                VersionReferenceRole::Target,\n            )\n            .await\n            .expect_err(\"empty version id should 
be rejected before lookup\");\n\n        assert_eq!(error.code, LixError::CODE_INVALID_PARAM);\n    }\n\n    #[tokio::test]\n    async fn require_existing_ref_reports_missing_version() {\n        let reader = RowsVersionRefReader::new(Vec::new());\n        let lifecycle = VersionLifecycle::new(&reader);\n\n        let error = lifecycle\n            .require_existing_ref(\n                \"missing\",\n                VersionOperation::SwitchVersion,\n                VersionReferenceRole::Target,\n            )\n            .await\n            .expect_err(\"missing version should be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_VERSION_NOT_FOUND);\n    }\n\n    struct RowsVersionRefReader {\n        heads: Vec<VersionHead>,\n    }\n\n    impl RowsVersionRefReader {\n        fn new(heads: Vec<VersionHead>) -> Self {\n            Self { heads }\n        }\n    }\n\n    #[async_trait]\n    impl VersionRefReader for RowsVersionRefReader {\n        async fn load_head(&self, version_id: &str) -> Result<Option<VersionHead>, LixError> {\n            Ok(self\n                .heads\n                .iter()\n                .find(|head| head.version_id == version_id)\n                .cloned())\n        }\n\n        async fn scan_heads(&self) -> Result<Vec<VersionHead>, LixError> {\n            Ok(self.heads.clone())\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/version/mod.rs",
    "content": "mod context;\nmod lifecycle;\nmod refs;\nmod stage_rows;\nmod types;\n\npub(crate) use context::VersionContext;\npub(crate) use lifecycle::{VersionLifecycle, VersionOperation, VersionReferenceRole};\npub(crate) use stage_rows::{\n    version_descriptor_stage_row, version_descriptor_tombstone_row, version_ref_stage_row,\n    version_ref_tombstone_row, VERSION_DESCRIPTOR_SCHEMA_KEY, VERSION_REF_SCHEMA_KEY,\n};\npub(crate) use types::{VersionHead, VersionRefReader};\n"
  },
  {
    "path": "packages/engine/src/version/refs.rs",
    "content": "use std::sync::Arc;\n\nuse tokio::sync::Mutex;\n\nuse crate::entity_identity::EntityIdentity;\nuse crate::storage::{StorageReader, StorageWriteSet};\nuse crate::untracked_state::{\n    MaterializedUntrackedStateRow, UntrackedStateContext, UntrackedStateFilter, UntrackedStateRow,\n    UntrackedStateRowRequest, UntrackedStateScanRequest,\n};\nuse crate::version::VERSION_REF_SCHEMA_KEY;\nuse crate::version::{VersionHead, VersionRefReader};\nuse crate::GLOBAL_VERSION_ID;\nuse crate::{LixError, NullableKeyFilter};\n\n/// Typed access to moving version heads stored in untracked state.\n///\n/// Version refs are one of the inputs used by live_state visibility, so this\n/// context deliberately bypasses live_state and reads the underlying untracked\n/// rows directly. That keeps the dependency acyclic:\n/// untracked_state -> version_ref -> live_state.\npub(super) struct VersionRefContext {\n    untracked_state: Arc<UntrackedStateContext>,\n}\n\nimpl VersionRefContext {\n    pub(super) fn new(untracked_state: Arc<UntrackedStateContext>) -> Self {\n        Self { untracked_state }\n    }\n\n    /// Creates a version-ref reader over a caller-provided KV store.\n    pub(super) fn reader<S>(&self, store: S) -> VersionRefStoreReader<S>\n    where\n        S: StorageReader,\n    {\n        VersionRefStoreReader {\n            untracked_state: Arc::clone(&self.untracked_state),\n            store: Mutex::new(store),\n        }\n    }\n\n    /// Creates a version-ref writer over a transaction-local storage write set.\n    pub(super) fn writer<'a>(&self, writes: &'a mut StorageWriteSet) -> VersionRefWriter<'a> {\n        VersionRefWriter {\n            untracked_state: Arc::clone(&self.untracked_state),\n            writes,\n        }\n    }\n}\n\n/// Read side for version heads.\npub(super) struct VersionRefStoreReader<S>\nwhere\n    S: StorageReader,\n{\n    untracked_state: Arc<UntrackedStateContext>,\n    store: Mutex<S>,\n}\n\nimpl<S> 
VersionRefStoreReader<S>\nwhere\n    S: StorageReader,\n{\n    pub(crate) async fn load_head(\n        &self,\n        version_id: &str,\n    ) -> Result<Option<VersionHead>, LixError> {\n        let mut store = self.store.lock().await;\n        let Some(row) = self\n            .untracked_state\n            .reader(&mut *store as &mut dyn StorageReader)\n            .load_row(&UntrackedStateRowRequest {\n                schema_key: VERSION_REF_SCHEMA_KEY.to_string(),\n                version_id: GLOBAL_VERSION_ID.to_string(),\n                entity_id: EntityIdentity::single(version_id),\n                file_id: NullableKeyFilter::Null,\n            })\n            .await?\n        else {\n            return Ok(None);\n        };\n\n        decode_version_head(version_id, &row)\n    }\n\n    pub(crate) async fn load_head_commit_id(\n        &self,\n        version_id: &str,\n    ) -> Result<Option<String>, LixError> {\n        Ok(self.load_head(version_id).await?.map(|head| head.commit_id))\n    }\n\n    pub(crate) async fn scan_heads(&self) -> Result<Vec<VersionHead>, LixError> {\n        let mut store = self.store.lock().await;\n        let rows = self\n            .untracked_state\n            .reader(&mut *store as &mut dyn StorageReader)\n            .scan_rows(&UntrackedStateScanRequest {\n                filter: UntrackedStateFilter {\n                    schema_keys: vec![VERSION_REF_SCHEMA_KEY.to_string()],\n                    version_ids: vec![GLOBAL_VERSION_ID.to_string()],\n                    ..UntrackedStateFilter::default()\n                },\n                ..UntrackedStateScanRequest::default()\n            })\n            .await?;\n        let mut heads = rows\n            .iter()\n            .map(|row| {\n                let version_id = row.entity_id.as_single_string_owned()?;\n                decode_version_head(&version_id, row)\n            })\n            .collect::<Result<Vec<_>, _>>()?\n            .into_iter()\n            
.flatten()\n            .collect::<Vec<_>>();\n        heads.sort_by(|left, right| left.version_id.cmp(&right.version_id));\n        Ok(heads)\n    }\n}\n\n#[async_trait::async_trait]\nimpl<S> VersionRefReader for VersionRefStoreReader<S>\nwhere\n    S: StorageReader + Send,\n{\n    async fn load_head(&self, version_id: &str) -> Result<Option<VersionHead>, LixError> {\n        VersionRefStoreReader::load_head(self, version_id).await\n    }\n\n    async fn load_head_commit_id(&self, version_id: &str) -> Result<Option<String>, LixError> {\n        VersionRefStoreReader::load_head_commit_id(self, version_id).await\n    }\n\n    async fn scan_heads(&self) -> Result<Vec<VersionHead>, LixError> {\n        VersionRefStoreReader::scan_heads(self).await\n    }\n}\n\n/// Write side for moving version heads.\npub(super) struct VersionRefWriter<'a> {\n    untracked_state: Arc<UntrackedStateContext>,\n    writes: &'a mut StorageWriteSet,\n}\n\nimpl VersionRefWriter<'_> {\n    pub(crate) fn stage_rows(&mut self, rows: &[UntrackedStateRow]) -> Result<(), LixError> {\n        self.untracked_state\n            .writer(self.writes)\n            .stage_rows(rows.iter().map(|row| row.as_ref()))\n    }\n}\n\nfn decode_version_head(\n    requested_version_id: &str,\n    row: &MaterializedUntrackedStateRow,\n) -> Result<Option<VersionHead>, LixError> {\n    let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n        return Ok(None);\n    };\n    let snapshot =\n        serde_json::from_str::<serde_json::Value>(snapshot_content).map_err(|error| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"engine version-ref snapshot parse failed: {error}\"),\n            )\n        })?;\n    let commit_id = snapshot\n        .get(\"commit_id\")\n        .and_then(serde_json::Value::as_str)\n        .ok_or_else(|| {\n            LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"version ref for version 
'{requested_version_id}' is missing commit_id\"),\n            )\n        })?;\n    Ok(Some(VersionHead {\n        version_id: requested_version_id.to_string(),\n        commit_id: commit_id.to_string(),\n    }))\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use crate::backend::testing::UnitTestBackend;\n    use crate::storage::{StorageContext, StorageWriteSet};\n    use crate::transaction::prepare_version_ref_row;\n    use crate::untracked_state::{UntrackedStateContext, UntrackedStateRowRequest};\n\n    use super::*;\n\n    #[tokio::test]\n    async fn load_head_returns_none_when_missing() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let version_ref = test_version_ref();\n\n        let head = version_ref\n            .reader(storage)\n            .load_head(\"missing-version\")\n            .await\n            .expect(\"missing version ref should load cleanly\");\n\n        assert_eq!(head, None);\n    }\n\n    #[tokio::test]\n    async fn advance_head_writes_untracked_global_ref() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let version_ref = VersionRefContext::new(Arc::new(UntrackedStateContext::new()));\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        let mut writes = StorageWriteSet::new();\n        stage_version_head(\n            &version_ref,\n            &mut writes,\n            \"version-a\",\n            \"commit-a\",\n            \"2026-01-01T00:00:00Z\",\n        )\n        .expect(\"version head should advance\");\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"version head should apply\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let head = version_ref\n            .reader(storage.clone())\n   
         .load_head(\"version-a\")\n            .await\n            .expect(\"version head should load\")\n            .expect(\"version head should exist\");\n        assert_eq!(head.version_id, \"version-a\");\n        assert_eq!(head.commit_id, \"commit-a\");\n\n        let mut reader = UntrackedStateContext::new().reader(storage);\n        let row = reader\n            .load_row(&UntrackedStateRowRequest {\n                schema_key: VERSION_REF_SCHEMA_KEY.to_string(),\n                version_id: GLOBAL_VERSION_ID.to_string(),\n                entity_id: crate::entity_identity::EntityIdentity::single(\"version-a\"),\n                file_id: NullableKeyFilter::Null,\n            })\n            .await\n            .expect(\"version-ref row should load\")\n            .expect(\"version-ref row should exist\");\n        assert!(row.global);\n        assert_eq!(row.created_at, \"2026-01-01T00:00:00Z\");\n        assert_eq!(row.updated_at, \"2026-01-01T00:00:00Z\");\n    }\n\n    #[tokio::test]\n    async fn scan_heads_returns_sorted_version_heads() {\n        let storage = StorageContext::new(Arc::new(UnitTestBackend::new()));\n        let version_ref = test_version_ref();\n        let mut transaction = storage\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        let mut writes = StorageWriteSet::new();\n        stage_version_head(\n            &version_ref,\n            &mut writes,\n            \"version-b\",\n            \"commit-b\",\n            \"2026-01-01T00:00:00Z\",\n        )\n        .expect(\"version-b should advance\");\n        stage_version_head(\n            &version_ref,\n            &mut writes,\n            \"version-a\",\n            \"commit-a\",\n            \"2026-01-01T00:00:00Z\",\n        )\n        .expect(\"version-a should advance\");\n        writes\n            .apply(&mut transaction.as_mut())\n            .await\n            .expect(\"version heads should 
apply\");\n        transaction\n            .commit()\n            .await\n            .expect(\"transaction should commit\");\n\n        let heads = version_ref\n            .reader(storage)\n            .scan_heads()\n            .await\n            .expect(\"heads should scan\");\n\n        assert_eq!(\n            heads,\n            vec![\n                VersionHead {\n                    version_id: \"version-a\".to_string(),\n                    commit_id: \"commit-a\".to_string(),\n                },\n                VersionHead {\n                    version_id: \"version-b\".to_string(),\n                    commit_id: \"commit-b\".to_string(),\n                },\n            ]\n        );\n    }\n\n    fn test_version_ref() -> VersionRefContext {\n        VersionRefContext::new(Arc::new(UntrackedStateContext::new()))\n    }\n\n    fn stage_version_head(\n        version_ref: &VersionRefContext,\n        writes: &mut StorageWriteSet,\n        version_id: &str,\n        commit_id: &str,\n        timestamp: &str,\n    ) -> Result<(), LixError> {\n        let canonical_row = prepare_version_ref_row(version_id, commit_id, timestamp)?;\n        version_ref.writer(writes).stage_rows(&[canonical_row.row])\n    }\n}\n"
  },
  {
    "path": "packages/engine/src/version/stage_rows.rs",
    "content": "use serde_json::json;\n\nuse crate::entity_identity::EntityIdentity;\nuse crate::transaction::types::{TransactionJson, TransactionWriteRow};\nuse crate::GLOBAL_VERSION_ID;\n\npub(crate) const VERSION_DESCRIPTOR_SCHEMA_KEY: &str = \"lix_version_descriptor\";\npub(crate) const VERSION_REF_SCHEMA_KEY: &str = \"lix_version_ref\";\n\npub(crate) fn version_descriptor_stage_row(\n    version_id: &str,\n    name: &str,\n    hidden: bool,\n) -> TransactionWriteRow {\n    TransactionWriteRow {\n        entity_id: Some(EntityIdentity::single(version_id)),\n        schema_key: VERSION_DESCRIPTOR_SCHEMA_KEY.to_string(),\n        file_id: None,\n        snapshot: Some(TransactionJson::from_value_unchecked(json!({\n            \"id\": version_id,\n            \"name\": name,\n            \"hidden\": hidden,\n        }))),\n        metadata: None,\n        origin: None,\n        created_at: None,\n        updated_at: None,\n        global: true,\n        change_id: None,\n        commit_id: None,\n        untracked: false,\n        version_id: GLOBAL_VERSION_ID.to_string(),\n    }\n}\n\npub(crate) fn version_ref_stage_row(version_id: &str, commit_id: &str) -> TransactionWriteRow {\n    TransactionWriteRow {\n        entity_id: Some(EntityIdentity::single(version_id)),\n        schema_key: VERSION_REF_SCHEMA_KEY.to_string(),\n        file_id: None,\n        snapshot: Some(TransactionJson::from_value_unchecked(json!({\n            \"id\": version_id,\n            \"commit_id\": commit_id,\n        }))),\n        metadata: None,\n        origin: None,\n        created_at: None,\n        updated_at: None,\n        global: true,\n        change_id: None,\n        commit_id: None,\n        untracked: true,\n        version_id: GLOBAL_VERSION_ID.to_string(),\n    }\n}\n\npub(crate) fn version_descriptor_tombstone_row(version_id: &str) -> TransactionWriteRow {\n    let mut row = version_descriptor_stage_row(version_id, \"\", false);\n    row.snapshot = None;\n    
row\n}\n\npub(crate) fn version_ref_tombstone_row(version_id: &str) -> TransactionWriteRow {\n    let mut row = version_ref_stage_row(version_id, \"\");\n    row.snapshot = None;\n    row\n}\n"
  },
  {
    "path": "packages/engine/src/version/types.rs",
    "content": "/// Current changelog head for a version.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub(crate) struct VersionHead {\n    pub(crate) version_id: String,\n    pub(crate) commit_id: String,\n}\n\n/// Typed reader for moving version heads.\n#[async_trait::async_trait]\npub(crate) trait VersionRefReader: Send + Sync {\n    async fn load_head(&self, version_id: &str) -> Result<Option<VersionHead>, crate::LixError>;\n\n    async fn load_head_commit_id(\n        &self,\n        version_id: &str,\n    ) -> Result<Option<String>, crate::LixError> {\n        Ok(self.load_head(version_id).await?.map(|head| head.commit_id))\n    }\n\n    async fn scan_heads(&self) -> Result<Vec<VersionHead>, crate::LixError>;\n}\n"
  },
  {
    "path": "packages/engine/src/wasm/mod.rs",
    "content": "use std::sync::Arc;\n\nuse async_trait::async_trait;\n\nuse crate::LixError;\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct WasmLimits {\n    pub max_memory_bytes: u64,\n    pub max_fuel: Option<u64>,\n    pub timeout_ms: Option<u64>,\n}\n\nimpl Default for WasmLimits {\n    fn default() -> Self {\n        Self {\n            max_memory_bytes: 64 * 1024 * 1024,\n            max_fuel: None,\n            timeout_ms: None,\n        }\n    }\n}\n\n#[async_trait(?Send)]\npub trait WasmRuntime: Send + Sync {\n    async fn init_component(\n        &self,\n        bytes: Vec<u8>,\n        limits: WasmLimits,\n    ) -> Result<Arc<dyn WasmComponentInstance>, LixError>;\n}\n\n#[async_trait(?Send)]\npub trait WasmComponentInstance: Send + Sync {\n    async fn call(&self, export: &str, input: &[u8]) -> Result<Vec<u8>, LixError>;\n\n    async fn close(&self) -> Result<(), LixError> {\n        Ok(())\n    }\n}\n\n#[derive(Debug, Default, Clone, Copy)]\npub struct NoopWasmRuntime;\n\n#[async_trait(?Send)]\nimpl WasmRuntime for NoopWasmRuntime {\n    async fn init_component(\n        &self,\n        _bytes: Vec<u8>,\n        _limits: WasmLimits,\n    ) -> Result<Arc<dyn WasmComponentInstance>, LixError> {\n        Err(LixError {\n            code: \"LIX_ERROR_UNKNOWN\".to_string(),\n            message: \"wasm runtime is required to execute plugins; provide a non-noop runtime\"\n                .to_string(),\n            hint: None,\n            details: None,\n        })\n    }\n}\n"
  },
  {
    "path": "packages/engine/tests/branching.rs",
    "content": "#[macro_use]\n#[path = \"support/mod.rs\"]\nmod support;\n\nuse lix_engine::Value;\nuse lix_engine::{\n    CreateVersionOptions, Engine, LixError, MergeChangeStats, MergeVersionOptions,\n    MergeVersionOutcome, MergeVersionPreviewOptions, SwitchVersionOptions,\n};\nuse serde_json::Value as JsonValue;\n\nsimulation_test!(create_version_from_main, |sim| async move {\n    let (engine, main, draft) = create_draft_from_main(&sim).await;\n\n    assert_version_descriptor(&main, \"draft-version\", \"Draft\").await;\n    assert_eq!(\n        engine\n            .load_version_head_commit_id(\"draft-version\")\n            .await\n            .expect(\"draft head should load\"),\n        Some(sim.initial_commit_id().to_string())\n    );\n\n    drop(draft);\n    drop(main);\n    drop(engine);\n});\n\nsimulation_test!(create_version_rejects_existing_id, |sim| async move {\n    let (engine, main, draft) = create_draft_from_main(&sim).await;\n\n    let error = main\n        .create_version(CreateVersionOptions {\n            id: Some(\"draft-version\".to_string()),\n            name: \"Overwritten draft\".to_string(),\n            from_commit_id: None,\n        })\n        .await\n        .expect_err(\"creating a version with an existing id should fail\");\n\n    assert_eq!(error.code, \"LIX_ERROR_UNIQUE\");\n    assert!(\n        error\n            .to_string()\n            .contains(\"INSERT would duplicate entity_id\"),\n        \"error should explain the duplicate version id: {error:?}\"\n    );\n    assert_version_descriptor(&main, \"draft-version\", \"Draft\").await;\n\n    drop(draft);\n    drop(main);\n    drop(engine);\n});\n\nsimulation_test!(create_version_rejects_duplicate_name, |sim| async move {\n    let (engine, main, draft) = create_draft_from_main(&sim).await;\n\n    let error = main\n        .create_version(CreateVersionOptions {\n            id: Some(\"duplicate-name-version\".to_string()),\n            name: \"Draft\".to_string(),\n            
from_commit_id: None,\n        })\n        .await\n        .expect_err(\"creating a version with an existing name should fail\");\n\n    assert_eq!(error.code, lix_engine::LixError::CODE_UNIQUE);\n    assert!(\n        error.to_string().contains(\"/name\"),\n        \"error should explain the duplicate version name: {error:?}\"\n    );\n\n    drop(draft);\n    drop(main);\n    drop(engine);\n});\n\nsimulation_test!(\n    version_descriptor_delete_via_entity_surface_is_rejected_when_ref_exists,\n    |sim| async move {\n        let (engine, main, _draft) = create_draft_from_main(&sim).await;\n\n        let error = main\n            .execute(\n                \"DELETE FROM lix_version_descriptor WHERE id = 'draft-version'\",\n                &[],\n            )\n            .await\n            .expect_err(\"descriptor delete through entity surface should fail\");\n        assert_version_pair_delete_restricted(&error);\n\n        assert_eq!(count_version_descriptors(&main, \"draft-version\").await, 1);\n        assert_eq!(count_version_refs(&main, \"draft-version\").await, 1);\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(\"draft-version\")\n                .await\n                .expect(\"version ref head should still load\"),\n            Some(sim.initial_commit_id().to_string())\n        );\n\n        drop(main);\n        drop(engine);\n    }\n);\n\nsimulation_test!(\n    version_descriptor_delete_via_lix_state_is_rejected_when_ref_exists,\n    |sim| async move {\n        let (engine, main, _draft) = create_draft_from_main(&sim).await;\n\n        let error = main\n            .execute(\n                \"DELETE FROM lix_state \\\n                 WHERE schema_key = 'lix_version_descriptor' AND entity_id = lix_json('[\\\"draft-version\\\"]')\",\n                &[],\n            )\n            .await\n            .expect_err(\"descriptor delete through lix_state should fail\");\n        assert_version_pair_delete_restricted(&error);\n\n        
assert_eq!(count_version_descriptors(&main, \"draft-version\").await, 1);\n        assert_eq!(count_version_refs(&main, \"draft-version\").await, 1);\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(\"draft-version\")\n                .await\n                .expect(\"version ref head should still load\"),\n            Some(sim.initial_commit_id().to_string())\n        );\n\n        drop(main);\n        drop(engine);\n    }\n);\n\nsimulation_test!(\n    version_ref_delete_via_entity_surface_is_rejected_when_descriptor_exists,\n    |sim| async move {\n        let (engine, main, _draft) = create_draft_from_main(&sim).await;\n\n        let error = main\n            .execute(\n                \"DELETE FROM lix_version_ref WHERE id = 'draft-version'\",\n                &[],\n            )\n            .await\n            .expect_err(\"ref delete through entity surface should fail\");\n        assert_version_pair_delete_restricted(&error);\n\n        assert_eq!(count_version_descriptors(&main, \"draft-version\").await, 1);\n        assert_eq!(count_version_refs(&main, \"draft-version\").await, 1);\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(\"draft-version\")\n                .await\n                .expect(\"version ref head should still load\"),\n            Some(sim.initial_commit_id().to_string())\n        );\n\n        drop(main);\n        drop(engine);\n    }\n);\n\nsimulation_test!(\n    version_ref_delete_via_lix_state_is_rejected_when_descriptor_exists,\n    |sim| async move {\n        let (engine, main, _draft) = create_draft_from_main(&sim).await;\n\n        let error = main\n            .execute(\n                \"DELETE FROM lix_state \\\n                 WHERE schema_key = 'lix_version_ref' AND entity_id = lix_json('[\\\"draft-version\\\"]')\",\n                &[],\n            )\n            .await\n            .expect_err(\"ref delete through lix_state should fail\");\n        
assert_version_pair_delete_restricted(&error);\n\n        assert_eq!(count_version_descriptors(&main, \"draft-version\").await, 1);\n        assert_eq!(count_version_refs(&main, \"draft-version\").await, 1);\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(\"draft-version\")\n                .await\n                .expect(\"version ref head should still load\"),\n            Some(sim.initial_commit_id().to_string())\n        );\n\n        drop(main);\n        drop(engine);\n    }\n);\n\nsimulation_test!(\n    create_version_can_start_from_explicit_commit,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let main = sim.wrap_session(\n            engine\n                .open_session(sim.main_version_id())\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n        main.execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('main-after-initial', 'main')\",\n            &[],\n        )\n        .await\n        .expect(\"main write should succeed\");\n\n        assert_key_value(&main, \"main-after-initial\", Some(\"\\\"main\\\"\")).await;\n\n        let receipt = main\n            .create_version(CreateVersionOptions {\n                id: Some(\"from-initial\".to_string()),\n                name: \"From initial\".to_string(),\n                from_commit_id: Some(sim.initial_commit_id().to_string()),\n            })\n            .await\n            .expect(\"version should be created from explicit commit\");\n        assert_eq!(receipt.id, \"from-initial\");\n        assert_eq!(receipt.name, \"From initial\");\n        assert!(!receipt.hidden);\n        assert_eq!(receipt.commit_id, sim.initial_commit_id());\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(\"from-initial\")\n                .await\n                .expect(\"version head should load\"),\n            
Some(sim.initial_commit_id().to_string())\n        );\n\n        let from_initial = main.wrap_session(\n            engine\n                .open_session(\"from-initial\")\n                .await\n                .expect(\"explicit commit version session should open\"),\n            &engine,\n        );\n        assert_key_value(&from_initial, \"main-after-initial\", None).await;\n\n        drop(from_initial);\n        drop(main);\n        drop(engine);\n    }\n);\n\nsimulation_test!(created_version_sees_inherited_state, |sim| async move {\n    let (_engine, _main, draft) = create_draft_after_shared_write(&sim).await;\n\n    assert_key_value(&draft, \"shared-before-branch\", Some(\"\\\"shared\\\"\")).await;\n});\n\nsimulation_test!(\n    open_workspace_session_starts_on_seeded_main_version,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let workspace = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n\n        assert_eq!(\n            workspace\n                .active_version_id()\n                .await\n                .expect(\"workspace active version should resolve\"),\n            sim.main_version_id()\n        );\n    }\n);\n\nsimulation_test!(\n    later_main_changes_do_not_appear_in_created_version,\n    |sim| async move {\n        let (_engine, main, draft) = create_draft_from_main(&sim).await;\n\n        main.execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('main-after-branch', 'main')\",\n            &[],\n        )\n        .await\n        .expect(\"main write should succeed\");\n\n        assert_key_value(&main, \"main-after-branch\", Some(\"\\\"main\\\"\")).await;\n        assert_key_value(&draft, \"main-after-branch\", None).await;\n    }\n);\n\nsimulation_test!(\n    later_created_version_changes_do_not_appear_in_main,\n    |sim| async move {\n 
       let (_engine, main, draft) = create_draft_from_main(&sim).await;\n\n        draft\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('draft-after-branch', 'draft')\",\n                &[],\n            )\n            .await\n            .expect(\"draft write should succeed\");\n\n        assert_key_value(&draft, \"draft-after-branch\", Some(\"\\\"draft\\\"\")).await;\n        assert_key_value(&main, \"draft-after-branch\", None).await;\n    }\n);\n\nsimulation_test!(\n    switch_version_returns_session_for_target_version,\n    |sim| async move {\n        let (engine, main, draft) = create_draft_from_main(&sim).await;\n        draft\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('switch-draft-only', 'draft')\",\n                &[],\n            )\n            .await\n            .expect(\"draft write should succeed\");\n\n        let (switched, receipt) = main\n            .switch_version(SwitchVersionOptions {\n                version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect(\"switch should succeed\");\n\n        assert_eq!(receipt.version_id, \"draft-version\");\n        assert_key_value(&switched, \"switch-draft-only\", Some(\"\\\"draft\\\"\")).await;\n        assert_key_value(&main, \"switch-draft-only\", None).await;\n\n        drop(engine);\n    }\n);\n\nsimulation_test!(\n    pinned_switch_version_is_ephemeral_and_does_not_advance_refs,\n    |sim| async move {\n        let (engine, main, _draft) = create_draft_from_main(&sim).await;\n        let main_head_before = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\");\n        let draft_head_before = engine\n            .load_version_head_commit_id(\"draft-version\")\n            .await\n            .expect(\"draft head should load\");\n        let workspace_before = sim.wrap_session(\n   
         engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n        assert_eq!(\n            workspace_before\n                .active_version_id()\n                .await\n                .expect(\"workspace selector should resolve\"),\n            sim.main_version_id(),\n            \"pinned session setup should not have moved the workspace selector\"\n        );\n\n        let (_switched, _receipt) = main\n            .switch_version(SwitchVersionOptions {\n                version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect(\"switch should succeed\");\n\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(sim.main_version_id())\n                .await\n                .expect(\"main head should load\"),\n            main_head_before,\n            \"switching must not mutate the source session version ref\"\n        );\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(\"draft-version\")\n                .await\n                .expect(\"draft head should load\"),\n            draft_head_before,\n            \"switching must not mutate the target version ref\"\n        );\n        let workspace_after = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n        assert_eq!(\n            workspace_after\n                .active_version_id()\n                .await\n                .expect(\"workspace selector should resolve\"),\n            sim.main_version_id(),\n            \"pinned switching must not mutate the shared workspace selector\"\n        );\n    }\n);\n\nsimulation_test!(\n    workspace_switch_version_updates_shared_workspace_selector,\n    |sim| async move {\n        let 
(engine, main, draft) = create_draft_from_main(&sim).await;\n        draft\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('workspace-draft-only', 'draft')\",\n                &[],\n            )\n            .await\n            .expect(\"draft write should succeed\");\n        let main_head_before = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\");\n        let draft_head_before = engine\n            .load_version_head_commit_id(\"draft-version\")\n            .await\n            .expect(\"draft head should load\");\n\n        let workspace_a = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n        let workspace_b = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"second workspace session should open\"),\n            &engine,\n        );\n        assert_eq!(\n            workspace_a\n                .active_version_id()\n                .await\n                .expect(\"workspace selector should resolve\"),\n            sim.main_version_id()\n        );\n\n        let (workspace_switched, receipt) = workspace_a\n            .switch_version(SwitchVersionOptions {\n                version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect(\"workspace switch should succeed\");\n\n        assert_eq!(receipt.version_id, \"draft-version\");\n        assert_eq!(\n            workspace_switched\n                .active_version_id()\n                .await\n                .expect(\"switched workspace selector should resolve\"),\n            \"draft-version\"\n        );\n        assert_eq!(\n            workspace_b\n                .active_version_id()\n                
.await\n                .expect(\"other workspace session should observe selector\"),\n            \"draft-version\",\n            \"workspace sessions resolve the shared selector on use\"\n        );\n        assert_key_value(&workspace_b, \"workspace-draft-only\", Some(\"\\\"draft\\\"\")).await;\n        assert_key_value(&main, \"workspace-draft-only\", None).await;\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(sim.main_version_id())\n                .await\n                .expect(\"main head should load\"),\n            main_head_before,\n            \"workspace switching must not mutate the old version ref\"\n        );\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(\"draft-version\")\n                .await\n                .expect(\"draft head should load\"),\n            draft_head_before,\n            \"workspace switching must not mutate the new version ref\"\n        );\n    }\n);\n\nsimulation_test!(\n    workspace_switch_version_persists_across_reopened_engine,\n    |sim| async move {\n        let (engine, _main, draft) = create_draft_from_main(&sim).await;\n        draft\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('workspace-reopen-draft', 'draft')\",\n                &[],\n            )\n            .await\n            .expect(\"draft write should succeed\");\n\n        let workspace = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n        workspace\n            .switch_version(SwitchVersionOptions {\n                version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect(\"workspace switch should persist\");\n\n        let reopened_engine = sim\n            .reboot_engine_from_current_snapshot()\n            .await\n            
.expect(\"engine should reopen from current snapshot\");\n        let reopened_workspace = sim.wrap_session(\n            reopened_engine\n                .open_workspace_session()\n                .await\n                .expect(\"reopened workspace session should open\"),\n            &reopened_engine,\n        );\n\n        assert_eq!(\n            reopened_workspace\n                .active_version_id()\n                .await\n                .expect(\"workspace selector should resolve after reopen\"),\n            \"draft-version\",\n            \"workspace switch should survive reopening the engine\"\n        );\n        assert_key_value(\n            &reopened_workspace,\n            \"workspace-reopen-draft\",\n            Some(\"\\\"draft\\\"\"),\n        )\n        .await;\n    }\n);\n\nsimulation_test!(\n    switch_version_errors_when_target_ref_is_missing,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let main = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let result = main\n            .switch_version(SwitchVersionOptions {\n                version_id: \"missing-version\".to_string(),\n            })\n            .await;\n        let Err(error) = result else {\n            panic!(\"missing version ref should fail\");\n        };\n\n        assert_eq!(error.code, LixError::CODE_VERSION_NOT_FOUND);\n        assert_eq!(\n            error\n                .details\n                .as_ref()\n                .and_then(|details| details.get(\"version_id\")),\n            Some(&JsonValue::String(\"missing-version\".to_string()))\n        );\n        assert_eq!(\n            error\n                .details\n                .as_ref()\n                .and_then(|details| details.get(\"operation\")),\n            
Some(&JsonValue::String(\"switch_version\".to_string()))\n        );\n        assert_eq!(\n            error\n                .details\n                .as_ref()\n                .and_then(|details| details.get(\"role\")),\n            Some(&JsonValue::String(\"target\".to_string()))\n        );\n    }\n);\n\nsimulation_test!(\n    merge_version_resolves_existing_source_and_target_heads,\n    |sim| async move {\n        let (engine, main, _draft) = create_draft_from_main(&sim).await;\n        let main_head_before = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\")\n            .expect(\"main head should exist\");\n\n        let receipt = main\n            .merge_version(MergeVersionOptions {\n                source_version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect(\"merge head resolution should succeed\");\n\n        assert_eq!(receipt.outcome, MergeVersionOutcome::AlreadyUpToDate);\n        assert_eq!(receipt.change_stats, MergeChangeStats::default());\n        assert_eq!(receipt.created_merge_commit_id, None);\n        assert_eq!(receipt.target_version_id, sim.main_version_id());\n        assert_eq!(receipt.source_version_id, \"draft-version\");\n        assert_eq!(\n            receipt.target_head_before_commit_id, main_head_before,\n            \"receipt should expose the target head before the no-op merge\"\n        );\n        assert_eq!(\n            receipt.target_head_after_commit_id, main_head_before,\n            \"no-op merge should leave target head unchanged\"\n        );\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(sim.main_version_id())\n                .await\n                .expect(\"main head should load\"),\n            Some(main_head_before)\n        );\n    }\n);\n\nsimulation_test!(\n    merge_version_fast_forwards_when_target_is_merge_base,\n    |sim| async move 
{\n        let (engine, main, draft) = create_draft_from_main(&sim).await;\n        draft\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('draft-fast-forward', 'draft')\",\n                &[],\n            )\n            .await\n            .expect(\"draft write should succeed\");\n\n        let target_head_before = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\")\n            .expect(\"main head should exist\");\n        let source_head = engine\n            .load_version_head_commit_id(\"draft-version\")\n            .await\n            .expect(\"draft head should load\")\n            .expect(\"draft head should exist\");\n\n        let preview = main\n            .merge_version_preview(MergeVersionPreviewOptions {\n                source_version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect(\"merge preview should analyze fast-forward\");\n        assert_eq!(preview.outcome, MergeVersionOutcome::FastForward);\n        assert_eq!(preview.target_head_commit_id, target_head_before);\n        assert_eq!(preview.source_head_commit_id, source_head);\n        assert_eq!(\n            preview.change_stats,\n            MergeChangeStats {\n                total: 1,\n                added: 1,\n                modified: 0,\n                removed: 0,\n            }\n        );\n        assert_eq!(preview.conflicts.len(), 0);\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(sim.main_version_id())\n                .await\n                .expect(\"main head should load\")\n                .as_deref(),\n            Some(target_head_before.as_str()),\n            \"preview should not advance the target ref\"\n        );\n\n        let receipt = main\n            .merge_version(MergeVersionOptions {\n                source_version_id: 
\"draft-version\".to_string(),\n            })\n            .await\n            .expect(\"merge should fast-forward target\");\n        assert_eq!(receipt.outcome, MergeVersionOutcome::FastForward);\n        assert_eq!(\n            receipt.change_stats,\n            MergeChangeStats {\n                total: 1,\n                added: 1,\n                modified: 0,\n                removed: 0,\n            }\n        );\n        assert_eq!(receipt.created_merge_commit_id, None);\n        assert_eq!(receipt.base_commit_id, target_head_before);\n        assert_eq!(receipt.target_head_before_commit_id, target_head_before);\n        assert_eq!(receipt.source_head_before_commit_id, source_head);\n        assert_eq!(receipt.target_head_after_commit_id, source_head);\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(sim.main_version_id())\n                .await\n                .expect(\"main head should load\")\n                .as_deref(),\n            Some(source_head.as_str())\n        );\n        assert_key_value(&main, \"draft-fast-forward\", Some(\"\\\"draft\\\"\")).await;\n\n        let global = sim.wrap_session(\n            engine\n                .open_session(\"global\")\n                .await\n                .expect(\"global session should open\"),\n            &engine,\n        );\n        assert_eq!(\n            commit_parent_edges(&global, &source_head).await,\n            vec![(target_head_before, 0)],\n            \"fast-forward should not create a two-parent merge commit\"\n        );\n    }\n);\n\nsimulation_test!(\n    merge_version_advances_target_with_two_parent_commit,\n    |sim| async move {\n        let (engine, main, draft) = create_draft_from_main(&sim).await;\n        main.execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('main-merge-target', 'main')\",\n            &[],\n        )\n        .await\n        .expect(\"main write should succeed\");\n        draft\n            
.execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('draft-merge-source', 'draft')\",\n                &[],\n            )\n            .await\n            .expect(\"draft write should succeed\");\n\n        let target_head_before = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\")\n            .expect(\"main head should exist\");\n        let source_head = engine\n            .load_version_head_commit_id(\"draft-version\")\n            .await\n            .expect(\"draft head should load\")\n            .expect(\"draft head should exist\");\n\n        let receipt = main\n            .merge_version(MergeVersionOptions {\n                source_version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect(\"merge should apply source change\");\n        assert_eq!(receipt.outcome, MergeVersionOutcome::MergeCommitted);\n        assert_eq!(\n            receipt.change_stats,\n            MergeChangeStats {\n                total: 1,\n                added: 1,\n                modified: 0,\n                removed: 0,\n            }\n        );\n        assert_eq!(receipt.target_head_before_commit_id, target_head_before);\n        assert_eq!(receipt.source_head_before_commit_id, source_head);\n\n        let target_head_after = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\")\n            .expect(\"main head should exist\");\n        assert_eq!(\n            receipt.target_head_after_commit_id, target_head_after,\n            \"receipt should expose the post-merge target head\"\n        );\n        assert_eq!(\n            receipt.created_merge_commit_id.as_deref(),\n            Some(target_head_after.as_str()),\n            \"a non-empty merge should report the merge commit it created\"\n        );\n        
assert_ne!(target_head_after, target_head_before);\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(\"draft-version\")\n                .await\n                .expect(\"draft head should load\")\n                .as_deref(),\n            Some(source_head.as_str()),\n            \"merging into main must not move the source version ref\"\n        );\n\n        assert_key_value(&main, \"draft-merge-source\", Some(\"\\\"draft\\\"\")).await;\n        assert_key_value(&main, \"main-merge-target\", Some(\"\\\"main\\\"\")).await;\n\n        let global = sim.wrap_session(\n            engine\n                .open_session(\"global\")\n                .await\n                .expect(\"global session should open\"),\n            &engine,\n        );\n        assert_eq!(\n            commit_parent_edges(&global, &target_head_after).await,\n            vec![(target_head_before, 0), (source_head, 1)],\n            \"merge commit should preserve target as first parent and source as second parent\"\n        );\n    }\n);\n\nsimulation_test!(\n    merge_version_adopts_source_change_without_minting_equivalent_copy,\n    |sim| async move {\n        let (engine, main, draft) = create_draft_from_main(&sim).await;\n        main.execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('merge-adopt-target', 'target')\",\n            &[],\n        )\n        .await\n        .expect(\"main write should succeed\");\n        draft\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('merge-adopt-change', 'source')\",\n                &[],\n            )\n            .await\n            .expect(\"draft write should succeed\");\n\n        let receipt = main\n            .merge_version(MergeVersionOptions {\n                source_version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect(\"merge should apply source change\");\n        assert!(\n            
receipt.created_merge_commit_id.is_some(),\n            \"non-empty merge should create a merge commit\"\n        );\n\n        let global = sim.wrap_session(\n            engine\n                .open_session(\"global\")\n                .await\n                .expect(\"global session should open\"),\n            &engine,\n        );\n        let equivalent_change_count = select_single_integer(\n            &global,\n            \"SELECT count(*) \\\n\t     FROM lix_change \\\n\t     WHERE schema_key = 'lix_key_value' \\\n\t       AND entity_id = lix_json('[\\\"merge-adopt-change\\\"]') \\\n\t       AND snapshot_content = lix_json('{\\\"key\\\":\\\"merge-adopt-change\\\",\\\"value\\\":\\\"source\\\"}')\",\n        )\n        .await;\n        assert_eq!(\n            equivalent_change_count, 1,\n            \"merge must not append a second canonical change with identical effect\"\n        );\n\n        let history = main\n            .execute(\n                \"SELECT snapshot_content \\\n\t             FROM lix_state_history \\\n\t             WHERE start_commit_id = lix_active_version_commit_id() \\\n\t               AND entity_id = lix_json('[\\\"merge-adopt-change\\\"]') \\\n\t             ORDER BY depth\",\n                &[],\n            )\n            .await\n            .expect(\"history query should succeed\");\n        assert_eq!(\n            history.len(),\n            1,\n            \"history should show the adopted canonical change once, not once from the merge commit and once from the source parent\"\n        );\n    }\n);\n\nsimulation_test!(\n    merge_version_adopts_schema_registration_before_schema_rows,\n    |sim| async move {\n        let (engine, main, draft) = create_draft_from_main(&sim).await;\n\n        main.execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('merge-schema-target-change', 'target')\",\n            &[],\n        )\n        .await\n        .expect(\"main write should force a merge commit instead of 
fast-forward\");\n\n        draft\n            .execute(\n                \"INSERT INTO lix_registered_schema (value) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"merge_task_item\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"title\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\",\\\"title\\\"],\\\"additionalProperties\\\":false}')\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"draft schema registration should succeed\");\n\n        draft\n            .execute(\n                \"INSERT INTO merge_task_item (id, title) \\\n                 VALUES ('task-1', 'Adopted schema row')\",\n                &[],\n            )\n            .await\n            .expect(\"draft row using newly registered schema should succeed\");\n\n        main.merge_version(MergeVersionOptions {\n            source_version_id: \"draft-version\".to_string(),\n        })\n        .await\n        .expect(\"merge should adopt schema registration before rows that use it\");\n\n        let reopened_main = sim.wrap_session(\n            engine\n                .open_session(sim.main_version_id())\n                .await\n                .expect(\"main session should reopen after merge\"),\n            &engine,\n        );\n\n        let rows = reopened_main\n            .execute(\n                \"SELECT id, title FROM merge_task_item WHERE id = 'task-1'\",\n                &[],\n            )\n            .await\n            .expect(\"merged schema surface should be queryable\");\n        assert_eq!(\n            rows.rows()[0].values(),\n            &[\n                Value::Text(\"task-1\".to_string()),\n                Value::Text(\"Adopted schema row\".to_string()),\n            ]\n        );\n    }\n);\n\nsimulation_test!(\n    merge_version_errors_on_divergent_same_entity_change,\n    |sim| 
async move {\n        let (engine, main, draft) = create_draft_from_main(&sim).await;\n\n        main.execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('merge-conflict', 'main')\",\n            &[],\n        )\n        .await\n        .expect(\"main write should succeed\");\n        draft\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('merge-conflict', 'draft')\",\n                &[],\n            )\n            .await\n            .expect(\"draft write should succeed\");\n        let main_head_before = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\")\n            .expect(\"main head should exist\");\n\n        let error = main\n            .merge_version(MergeVersionOptions {\n                source_version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect_err(\"divergent same-entity changes should conflict\");\n        assert_merge_conflict_error(&error);\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(sim.main_version_id())\n                .await\n                .expect(\"main head should load\"),\n            Some(main_head_before),\n            \"failed merge should not advance the target version ref\"\n        );\n        assert_key_value(&main, \"merge-conflict\", Some(\"\\\"main\\\"\")).await;\n    }\n);\n\nsimulation_test!(\n    merge_version_fast_forwards_source_delete_when_target_unchanged,\n    |sim| async move {\n        let (engine, main, draft) = create_draft_after_shared_write(&sim).await;\n\n        delete_key_value(&draft, \"shared-before-branch\").await;\n        let source_head = engine\n            .load_version_head_commit_id(\"draft-version\")\n            .await\n            .expect(\"draft head should load\")\n            .expect(\"draft head should exist\");\n\n        let receipt = main\n            
.merge_version(MergeVersionOptions {\n                source_version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect(\"merge should apply source delete\");\n\n        assert_eq!(receipt.outcome, MergeVersionOutcome::FastForward);\n        assert_eq!(\n            receipt.change_stats,\n            MergeChangeStats {\n                total: 1,\n                added: 0,\n                modified: 0,\n                removed: 1,\n            }\n        );\n        assert_eq!(receipt.created_merge_commit_id, None);\n        assert_eq!(receipt.target_head_after_commit_id, source_head);\n        assert_key_value(&main, \"shared-before-branch\", None).await;\n    }\n);\n\nsimulation_test!(\n    merge_version_records_empty_merge_when_both_sides_delete,\n    |sim| async move {\n        let (engine, main, draft) = create_draft_after_shared_write(&sim).await;\n\n        delete_key_value(&main, \"shared-before-branch\").await;\n        delete_key_value(&draft, \"shared-before-branch\").await;\n        let main_head_before = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\")\n            .expect(\"main head should exist\");\n        let source_head = engine\n            .load_version_head_commit_id(\"draft-version\")\n            .await\n            .expect(\"draft head should load\")\n            .expect(\"draft head should exist\");\n\n        let receipt = main\n            .merge_version(MergeVersionOptions {\n                source_version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect(\"convergent delete merge should succeed\");\n\n        assert_eq!(receipt.outcome, MergeVersionOutcome::MergeCommitted);\n        assert_eq!(receipt.change_stats, MergeChangeStats::default());\n        let merge_commit_id = receipt\n            .created_merge_commit_id\n            .clone()\n            
.expect(\"convergent delete should create an empty merge commit\");\n        assert_eq!(receipt.target_head_after_commit_id, merge_commit_id);\n        assert_eq!(receipt.target_head_before_commit_id, main_head_before);\n        assert_eq!(receipt.source_head_before_commit_id, source_head);\n        assert_empty_merge_commit(\n            &engine,\n            &main,\n            &merge_commit_id,\n            &receipt.target_head_before_commit_id,\n            &receipt.source_head_before_commit_id,\n        )\n        .await;\n        assert_key_value(&main, \"shared-before-branch\", None).await;\n    }\n);\n\nsimulation_test!(\n    merge_version_conflicts_when_target_deletes_source_modifies,\n    |sim| async move {\n        let (engine, main, draft) = create_draft_after_shared_write(&sim).await;\n\n        delete_key_value(&main, \"shared-before-branch\").await;\n        draft\n            .execute(\n                \"UPDATE lix_key_value SET value = 'draft' WHERE key = 'shared-before-branch'\",\n                &[],\n            )\n            .await\n            .expect(\"draft update should succeed\");\n        let main_head_before = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\")\n            .expect(\"main head should exist\");\n\n        let error = main\n            .merge_version(MergeVersionOptions {\n                source_version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect_err(\"delete/modify should conflict\");\n        assert_merge_conflict_error(&error);\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(sim.main_version_id())\n                .await\n                .expect(\"main head should load\"),\n            Some(main_head_before),\n            \"failed merge should not advance the target version ref\"\n        );\n        assert_key_value(&main, 
\"shared-before-branch\", None).await;\n    }\n);\n\nsimulation_test!(\n    merge_version_conflicts_when_target_modifies_source_deletes,\n    |sim| async move {\n        let (engine, main, draft) = create_draft_after_shared_write(&sim).await;\n\n        main.execute(\n            \"UPDATE lix_key_value SET value = 'main' WHERE key = 'shared-before-branch'\",\n            &[],\n        )\n        .await\n        .expect(\"main update should succeed\");\n        delete_key_value(&draft, \"shared-before-branch\").await;\n        let main_head_before = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\")\n            .expect(\"main head should exist\");\n\n        let error = main\n            .merge_version(MergeVersionOptions {\n                source_version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect_err(\"modify/delete should conflict\");\n        assert_merge_conflict_error(&error);\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(sim.main_version_id())\n                .await\n                .expect(\"main head should load\"),\n            Some(main_head_before),\n            \"failed merge should not advance the target version ref\"\n        );\n        assert_key_value(&main, \"shared-before-branch\", Some(\"\\\"main\\\"\")).await;\n    }\n);\n\nsimulation_test!(\n    merge_version_records_empty_merge_for_same_payload_convergence,\n    |sim| async move {\n        let (engine, main, draft) = create_draft_after_shared_write(&sim).await;\n\n        main.execute(\n            \"UPDATE lix_key_value SET value = 'same' WHERE key = 'shared-before-branch'\",\n            &[],\n        )\n        .await\n        .expect(\"main update should succeed\");\n        draft\n            .execute(\n                \"UPDATE lix_key_value SET value = 'same' WHERE key = 'shared-before-branch'\",\n             
   &[],\n            )\n            .await\n            .expect(\"draft update should succeed\");\n        let main_head_before = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\")\n            .expect(\"main head should exist\");\n        let source_head = engine\n            .load_version_head_commit_id(\"draft-version\")\n            .await\n            .expect(\"draft head should load\")\n            .expect(\"draft head should exist\");\n\n        let receipt = main\n            .merge_version(MergeVersionOptions {\n                source_version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect(\"convergent update merge should succeed\");\n\n        assert_eq!(receipt.outcome, MergeVersionOutcome::MergeCommitted);\n        assert_eq!(receipt.change_stats, MergeChangeStats::default());\n        let merge_commit_id = receipt\n            .created_merge_commit_id\n            .clone()\n            .expect(\"convergent update should create an empty merge commit\");\n        assert_eq!(receipt.target_head_after_commit_id, merge_commit_id);\n        assert_eq!(receipt.target_head_before_commit_id, main_head_before);\n        assert_eq!(receipt.source_head_before_commit_id, source_head);\n        assert_empty_merge_commit(\n            &engine,\n            &main,\n            &merge_commit_id,\n            &receipt.target_head_before_commit_id,\n            &receipt.source_head_before_commit_id,\n        )\n        .await;\n        assert_key_value(&main, \"shared-before-branch\", Some(\"\\\"same\\\"\")).await;\n    }\n);\n\nsimulation_test!(\n    merge_version_conflicts_on_independent_add_same_identity_different_payload,\n    |sim| async move {\n        let (engine, main, draft) = create_draft_from_main(&sim).await;\n\n        main.execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('merge-independent-add', 
'main')\",\n            &[],\n        )\n        .await\n        .expect(\"main insert should succeed\");\n        draft\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('merge-independent-add', 'draft')\",\n                &[],\n            )\n            .await\n            .expect(\"draft insert should succeed\");\n        let main_head_before = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\")\n            .expect(\"main head should exist\");\n\n        let error = main\n            .merge_version(MergeVersionOptions {\n                source_version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect_err(\"independent adds with different payloads should conflict\");\n        assert_merge_conflict_error(&error);\n        assert_eq!(\n            engine\n                .load_version_head_commit_id(sim.main_version_id())\n                .await\n                .expect(\"main head should load\"),\n            Some(main_head_before),\n            \"failed merge should not advance the target version ref\"\n        );\n        assert_key_value(&main, \"merge-independent-add\", Some(\"\\\"main\\\"\")).await;\n    }\n);\n\nsimulation_test!(\n    merge_version_records_empty_merge_for_same_identity_same_payload_add,\n    |sim| async move {\n        let (engine, main, draft) = create_draft_from_main(&sim).await;\n\n        main.execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('merge-independent-same-add', 'same')\",\n            &[],\n        )\n        .await\n        .expect(\"main insert should succeed\");\n        draft\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('merge-independent-same-add', 'same')\",\n                &[],\n            )\n            .await\n            .expect(\"draft insert should succeed\");\n        let 
main_head_before = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\")\n            .expect(\"main head should exist\");\n        let source_head = engine\n            .load_version_head_commit_id(\"draft-version\")\n            .await\n            .expect(\"draft head should load\")\n            .expect(\"draft head should exist\");\n\n        let receipt = main\n            .merge_version(MergeVersionOptions {\n                source_version_id: \"draft-version\".to_string(),\n            })\n            .await\n            .expect(\"convergent independent add merge should succeed\");\n\n        assert_eq!(receipt.outcome, MergeVersionOutcome::MergeCommitted);\n        assert_eq!(receipt.change_stats, MergeChangeStats::default());\n        let merge_commit_id = receipt\n            .created_merge_commit_id\n            .clone()\n            .expect(\"convergent independent add should create an empty merge commit\");\n        assert_eq!(receipt.target_head_after_commit_id, merge_commit_id);\n        assert_eq!(receipt.target_head_before_commit_id, main_head_before);\n        assert_eq!(receipt.source_head_before_commit_id, source_head);\n        assert_empty_merge_commit(\n            &engine,\n            &main,\n            &merge_commit_id,\n            &receipt.target_head_before_commit_id,\n            &receipt.source_head_before_commit_id,\n        )\n        .await;\n        assert_key_value(&main, \"merge-independent-same-add\", Some(\"\\\"same\\\"\")).await;\n    }\n);\n\nsimulation_test!(\n    merge_version_errors_when_source_version_ref_is_missing,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let main = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = main\n          
  .merge_version(MergeVersionOptions {\n                source_version_id: \"missing-version\".to_string(),\n            })\n            .await\n            .expect_err(\"missing source ref should fail\");\n\n        assert_eq!(error.code, LixError::CODE_VERSION_NOT_FOUND);\n        assert_eq!(\n            error\n                .details\n                .as_ref()\n                .and_then(|details| details.get(\"version_id\")),\n            Some(&JsonValue::String(\"missing-version\".to_string()))\n        );\n        assert_eq!(\n            error\n                .details\n                .as_ref()\n                .and_then(|details| details.get(\"operation\")),\n            Some(&JsonValue::String(\"merge_version\".to_string()))\n        );\n        assert_eq!(\n            error\n                .details\n                .as_ref()\n                .and_then(|details| details.get(\"role\")),\n            Some(&JsonValue::String(\"source\".to_string()))\n        );\n    }\n);\n\nsimulation_test!(merge_version_rejects_self_merge, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let main = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    let error = main\n        .merge_version(MergeVersionOptions {\n            source_version_id: sim.main_version_id().to_string(),\n        })\n        .await\n        .expect_err(\"self-merge should fail\");\n\n    assert_eq!(error.code, LixError::CODE_INVALID_MERGE);\n    assert_eq!(\n        error\n            .details\n            .as_ref()\n            .and_then(|details| details.get(\"operation\")),\n        Some(&JsonValue::String(\"merge_version\".to_string()))\n    );\n    assert_eq!(\n        error\n            .details\n            .as_ref()\n            .and_then(|details| details.get(\"target_version_id\")),\n        
Some(&JsonValue::String(sim.main_version_id().to_string()))\n    );\n    assert_eq!(\n        error\n            .details\n            .as_ref()\n            .and_then(|details| details.get(\"source_version_id\")),\n        Some(&JsonValue::String(sim.main_version_id().to_string()))\n    );\n});\n\nasync fn delete_key_value(\n    session: &crate::support::simulation_test::engine::SimSession,\n    key: &str,\n) {\n    session\n        .execute(\n            &format!(\"DELETE FROM lix_key_value WHERE key = '{key}'\"),\n            &[],\n        )\n        .await\n        .expect(\"key-value delete should succeed\");\n}\n\nasync fn create_draft_after_shared_write(\n    sim: &crate::support::simulation_test::engine::Simulation,\n) -> (\n    Engine,\n    crate::support::simulation_test::engine::SimSession,\n    crate::support::simulation_test::engine::SimSession,\n) {\n    let engine = sim.boot_engine().await;\n    let main = sim.wrap_session(\n        engine\n            .open_session(sim.main_version_id())\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n    main.execute(\n        \"INSERT INTO lix_key_value (key, value) VALUES ('shared-before-branch', 'shared')\",\n        &[],\n    )\n    .await\n    .expect(\"source write should succeed\");\n\n    let draft = create_draft(&engine, &main).await;\n    (engine, main, draft)\n}\n\nasync fn create_draft_from_main(\n    sim: &crate::support::simulation_test::engine::Simulation,\n) -> (\n    Engine,\n    crate::support::simulation_test::engine::SimSession,\n    crate::support::simulation_test::engine::SimSession,\n) {\n    let engine = sim.boot_engine().await;\n    let main = sim.wrap_session(\n        engine\n            .open_session(sim.main_version_id())\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n    let draft = create_draft(&engine, &main).await;\n    (engine, main, draft)\n}\n\nasync fn create_draft(\n    
engine: &Engine,\n    main: &crate::support::simulation_test::engine::SimSession,\n) -> crate::support::simulation_test::engine::SimSession {\n    let receipt = main\n        .create_version(CreateVersionOptions {\n            id: Some(\"draft-version\".to_string()),\n            name: \"Draft\".to_string(),\n            from_commit_id: None,\n        })\n        .await\n        .expect(\"version should be created\");\n    assert_eq!(receipt.id, \"draft-version\");\n    let version_row = main\n        .execute(\n            \"SELECT id, name, hidden, commit_id FROM lix_version WHERE id = 'draft-version'\",\n            &[],\n        )\n        .await\n        .expect(\"created version should be queryable through lix_version\");\n    assert_eq!(version_row.len(), 1);\n    assert_eq!(\n        version_row.rows()[0].values(),\n        &[\n            Value::Text(receipt.id.clone()),\n            Value::Text(receipt.name.clone()),\n            Value::Boolean(receipt.hidden),\n            Value::Text(receipt.commit_id.clone()),\n        ],\n        \"create_version should return the same public shape as lix_version\"\n    );\n    main.wrap_session(\n        engine\n            .open_session(receipt.id)\n            .await\n            .expect(\"draft session should open\"),\n        engine,\n    )\n}\n\nasync fn assert_key_value(\n    session: &crate::support::simulation_test::engine::SimSession,\n    key: &str,\n    expected: Option<&str>,\n) {\n    let result = session\n        .execute(\n            &format!(\"SELECT value FROM lix_key_value WHERE key = '{key}'\"),\n            &[],\n        )\n        .await\n        .expect(\"key-value query should succeed\");\n    let rows = result;\n    match expected {\n        Some(value) => {\n            assert_eq!(rows.len(), 1);\n            let expected_json = serde_json::from_str::<JsonValue>(value)\n                .expect(\"expected key-value should be valid JSON\");\n            assert_eq!(rows.rows()[0].values(), 
&[Value::Json(expected_json)]);\n        }\n        None => assert_eq!(rows.len(), 0),\n    }\n}\n\nasync fn assert_version_descriptor(\n    session: &crate::support::simulation_test::engine::SimSession,\n    version_id: &str,\n    expected_name: &str,\n) {\n    let result = session\n        .execute(\n            &format!(\"SELECT id, name FROM lix_version WHERE id = '{version_id}'\"),\n            &[],\n        )\n        .await\n        .expect(\"version query should succeed\");\n    let rows = result;\n    assert_eq!(rows.len(), 1);\n    assert_eq!(\n        rows.rows()[0].values(),\n        &[\n            Value::Text(version_id.to_string()),\n            Value::Text(expected_name.to_string()),\n        ]\n    );\n}\n\nasync fn count_version_descriptors(\n    session: &crate::support::simulation_test::engine::SimSession,\n    version_id: &str,\n) -> i64 {\n    select_single_integer(\n        session,\n        &format!(\"SELECT COUNT(*) FROM lix_version_descriptor WHERE id = '{version_id}'\"),\n    )\n    .await\n}\n\nasync fn count_version_refs(\n    session: &crate::support::simulation_test::engine::SimSession,\n    version_id: &str,\n) -> i64 {\n    select_single_integer(\n        session,\n        &format!(\n            \"SELECT COUNT(*) FROM lix_state \\\n\t         WHERE schema_key = 'lix_version_ref' AND entity_id = lix_json('[\\\"{version_id}\\\"]')\"\n        ),\n    )\n    .await\n}\n\nfn assert_version_pair_delete_restricted(error: &lix_engine::LixError) {\n    assert_eq!(error.code, lix_engine::LixError::CODE_READ_ONLY);\n    assert!(\n        error.to_string().contains(\"lix_version\"),\n        \"error should explain the version pair restriction: {error:?}\"\n    );\n    assert!(\n        error\n            .hint\n            .as_deref()\n            .is_some_and(|hint| hint.contains(\"lix_version\")),\n        \"error should guide callers to the lix_version surface: {error:?}\"\n    );\n}\n\nfn assert_merge_conflict_error(error: 
&lix_engine::LixError) {\n    assert_eq!(error.code, \"LIX_MERGE_CONFLICT\");\n    assert!(\n        error.message.contains(\"tracked-state conflict\"),\n        \"unexpected merge error: {error:?}\"\n    );\n    let details = error\n        .details\n        .as_ref()\n        .expect(\"merge conflict should include details\");\n    let conflicts = details\n        .get(\"conflicts\")\n        .and_then(JsonValue::as_array)\n        .expect(\"merge conflict details should include conflicts array\");\n    assert_eq!(conflicts.len(), 1);\n    let conflict = &conflicts[0];\n    assert_eq!(\n        conflict.get(\"kind\").and_then(JsonValue::as_str),\n        Some(\"sameEntityChanged\")\n    );\n    assert_eq!(\n        conflict.get(\"schemaKey\").and_then(JsonValue::as_str),\n        Some(\"lix_key_value\")\n    );\n    assert!(\n        conflict\n            .get(\"entityId\")\n            .and_then(JsonValue::as_array)\n            .is_some(),\n        \"conflict should include entityId: {conflict:?}\"\n    );\n    assert!(\n        conflict.get(\"target\").is_some(),\n        \"conflict should include target side: {conflict:?}\"\n    );\n    assert!(\n        conflict.get(\"source\").is_some(),\n        \"conflict should include source side: {conflict:?}\"\n    );\n}\n\nasync fn select_single_integer(\n    session: &crate::support::simulation_test::engine::SimSession,\n    sql: &str,\n) -> i64 {\n    let result = session\n        .execute(sql, &[])\n        .await\n        .expect(\"query should succeed\");\n    assert_eq!(result.len(), 1, \"expected exactly one row for query: {sql}\");\n    let Value::Integer(value) = result.rows()[0].values()[0] else {\n        panic!(\"expected integer value for query: {sql}\");\n    };\n    value\n}\n\nasync fn commit_parent_edges(\n    session: &crate::support::simulation_test::engine::SimSession,\n    commit_id: &str,\n) -> Vec<(String, i64)> {\n    let result = session\n        .execute(\n            &format!(\n             
   \"SELECT parent_id, parent_order \\\n                 FROM lix_commit_edge \\\n                 WHERE child_id = '{commit_id}' \\\n                 ORDER BY parent_order\"\n            ),\n            &[],\n        )\n        .await\n        .expect(\"commit edges should read\");\n    result\n        .rows()\n        .iter()\n        .map(|row| {\n            let Value::Text(value) = &row.values()[0] else {\n                panic!(\"parent_id should be text\");\n            };\n            let Value::Integer(parent_order) = row.values()[1] else {\n                panic!(\"parent_order should be integer\");\n            };\n            (value.clone(), parent_order)\n        })\n        .collect()\n}\n\nasync fn assert_empty_merge_commit(\n    engine: &Engine,\n    session: &crate::support::simulation_test::engine::SimSession,\n    merge_commit_id: &str,\n    target_head_before: &str,\n    source_head: &str,\n) {\n    let active_version_id = session\n        .active_version_id()\n        .await\n        .expect(\"active version should load\");\n    assert_eq!(\n        engine\n            .load_version_head_commit_id(&active_version_id)\n            .await\n            .expect(\"target version head should load\")\n            .as_deref(),\n        Some(merge_commit_id),\n        \"empty merge should advance the target version ref\"\n    );\n\n    let global = session.wrap_session(\n        engine\n            .open_session(\"global\")\n            .await\n            .expect(\"global session should open\"),\n        engine,\n    );\n    assert_eq!(\n        commit_parent_edges(&global, merge_commit_id)\n            .await\n            .into_iter()\n            .map(|(parent_id, _)| parent_id)\n            .collect::<std::collections::BTreeSet<_>>(),\n        [target_head_before.to_string(), source_head.to_string()]\n            .into_iter()\n            .collect::<std::collections::BTreeSet<_>>(),\n        \"empty merge commit should preserve target/source 
ancestry\"\n    );\n}\n"
  },
  {
    "path": "packages/engine/tests/code_structure.rs",
    "content": "#![allow(dead_code)]\n\nuse std::collections::{BTreeMap, BTreeSet, HashMap, HashSet};\nuse std::fmt::Write as _;\nuse std::fs;\nuse std::path::{Path, PathBuf};\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct ForbiddenDependencyRule {\n    from_scope: &'static str,\n    reason: &'static str,\n    forbidden_scopes: &'static [&'static str],\n}\n\nconst FORBIDDEN_DEPENDENCY_RULES: &[ForbiddenDependencyRule] = &[\n    ForbiddenDependencyRule {\n        from_scope: \"catalog\",\n        reason: \"catalog is the semantic owner for public named relations and must not depend on lowering, orchestration, or sidecar owners\",\n        forbidden_scopes: &[\n            \"backend\",\n            \"canonical\",\n            \"api\",\n            \"execution\",\n            \"init\",\n            \"services\",\n            \"session\",\n            \"sql\",\n        ],\n    },\n    ForbiddenDependencyRule {\n        from_scope: \"backend\",\n        reason: \"backend is a lower persistence owner; it owns raw prepared statement DTOs but must not grow dependencies on higher workflow or sidecar roots\",\n        forbidden_scopes: &[\"services\"],\n    },\n    ForbiddenDependencyRule {\n        from_scope: \"services\",\n        reason: \"services are leaf sidecar capabilities and may depend only on neutral foundations like common, not on engine composition or semantic owner roots\",\n        forbidden_scopes: &[\n            \"api\",\n            \"backend\",\n            \"canonical\",\n            \"catalog\",\n            \"diagnostics\",\n            \"execution\",\n            \"init\",\n            \"live_state\",\n            \"schema\",\n            \"session\",\n            \"sql\",\n        ],\n    },\n    ForbiddenDependencyRule {\n        from_scope: \"live_state\",\n        reason: \"live_state is the generic projection engine and must not reacquire services sidecars or write orchestration owners\",\n        forbidden_scopes: &[\"execution\", 
\"services\"],\n    },\n    ForbiddenDependencyRule {\n        from_scope: \"sql2\",\n        reason: \"sql2 is the compiler/runtime provider lane; it must not depend on workflow or higher orchestration roots directly\",\n        forbidden_scopes: &[\"execution\", \"services\", \"session\"],\n    },\n    ForbiddenDependencyRule {\n        from_scope: \"execution\",\n        reason: \"execution is the public SQL runner leaf; it may consume sql-owned prepared artifacts but must not depend on higher orchestration owners or transaction internals\",\n        forbidden_scopes: &[\"canonical\", \"api\", \"init\", \"services\", \"session\", \"transaction\"],\n    },\n    ForbiddenDependencyRule {\n        from_scope: \"session\",\n        reason: \"session owns orchestration and workflow code, but should not couple itself to the root API shell\",\n        forbidden_scopes: &[\"api\"],\n    },\n];\n\nconst TARGET_CORE_MODULES: &[&str] = &[\"backend\", \"live_state\", \"session\", \"sql2\", \"transaction\"];\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct EngineDependencyGraph {\n    module_source: String,\n    modules_analyzed: Vec<String>,\n    edges: Vec<DependencyEdge>,\n    strongly_connected_components: Vec<StronglyConnectedComponent>,\n    adjacency_by_module: BTreeMap<String, ModuleAdjacency>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct DependencyEdge {\n    from: String,\n    to: String,\n    via_files: Vec<String>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct StronglyConnectedComponent {\n    modules: Vec<String>,\n    internal_edges: Vec<DependencyEdge>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct ModuleAdjacency {\n    incoming: Vec<String>,\n    outgoing: Vec<String>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nstruct SealedOwnerViolation {\n    importer_file: String,\n    imported_path: String,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nstruct ImportPathViolation {\n    importer_file: String,\n  
  imported_path: String,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nstruct RawSqlExecutionViolation {\n    file: String,\n    pattern: &'static str,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nstruct RawBackendTypeViolation {\n    file: String,\n    type_name: &'static str,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nstruct TransactionLifecycleViolation {\n    file: String,\n    pattern: &'static str,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]\nstruct SqlRuntimeOwnershipViolation {\n    file: String,\n    pattern: &'static str,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nenum UseToken {\n    DblColon,\n    LBrace,\n    RBrace,\n    Comma,\n    Star,\n    As,\n    Ident(String),\n}\n\nconst ALLOWED_SERVICE_FOUNDATION_ROOTS: &[&str] = &[\"common\"];\n\nfn engine_root() -> PathBuf {\n    PathBuf::from(env!(\"CARGO_MANIFEST_DIR\"))\n}\n\nfn src_root() -> PathBuf {\n    engine_root().join(\"src\")\n}\n\nfn lib_path() -> PathBuf {\n    src_root().join(\"lib.rs\")\n}\n\nfn read_engine_source(relative: &str) -> String {\n    fs::read_to_string(src_root().join(relative)).expect(\"engine source file should be readable\")\n}\n\nfn source_between<'a>(\n    relative: &str,\n    source: &'a str,\n    start_needle: &str,\n    end_needle: &str,\n) -> &'a str {\n    let start = source\n        .find(start_needle)\n        .unwrap_or_else(|| panic!(\"{relative} should contain `{start_needle}`\"));\n    let end = source[start..]\n        .find(end_needle)\n        .map(|end| start + end)\n        .unwrap_or_else(|| {\n            panic!(\"{relative} should contain `{end_needle}` after `{start_needle}`\")\n        });\n    &source[start..end]\n}\n\nfn assert_source_contains_in_order(relative: &str, source: &str, needles: &[&str]) {\n    let mut previous: Option<(&str, usize)> = None;\n    for needle in needles {\n        let index = source\n            .find(needle)\n            .unwrap_or_else(|| 
panic!(\"{relative} should contain `{needle}`\"));\n        if let Some((previous_needle, previous_index)) = previous {\n            assert!(\n                previous_index < index,\n                \"{relative} should keep `{previous_needle}` before `{needle}`\",\n            );\n        }\n        previous = Some((needle, index));\n    }\n}\n\nfn assert_source_contains_all(relative: &str, source: &str, needles: &[&str]) {\n    for needle in needles {\n        assert!(\n            source.contains(needle),\n            \"{relative} should contain `{needle}`\",\n        );\n    }\n}\n\nfn assert_source_contains_none(relative: &str, source: &str, needles: &[&str]) {\n    for needle in needles {\n        assert!(\n            !source.contains(needle),\n            \"{relative} should not contain `{needle}`\",\n        );\n    }\n}\n\nfn analyze_engine_dependency_graph() -> EngineDependencyGraph {\n    let lib_source = fs::read_to_string(lib_path()).expect(\"src/lib.rs should be readable\");\n    let top_level_modules = parse_top_level_modules(&lib_source);\n    let module_set: HashSet<String> = top_level_modules.iter().cloned().collect();\n    let mut graph: BTreeMap<String, BTreeSet<String>> = top_level_modules\n        .iter()\n        .cloned()\n        .map(|module| (module, BTreeSet::new()))\n        .collect();\n    let mut edge_provenance: BTreeMap<(String, String), BTreeSet<String>> = BTreeMap::new();\n\n    for module_name in &top_level_modules {\n        for absolute_path in rust_files_for_top_level_module(module_name) {\n            let relative_path = absolute_path\n                .strip_prefix(src_root())\n                .expect(\"module source file should be inside src/\")\n                .to_string_lossy()\n                .replace('\\\\', \"/\");\n            if is_test_support_relative_path(&relative_path) {\n                continue;\n            }\n            let source =\n                fs::read_to_string(&absolute_path).expect(\"module 
source file should be readable\");\n            let current_module_path = module_path_for_file(&relative_path);\n            let dependencies = collect_dependencies_from_source(\n                &strip_test_code(&source),\n                &current_module_path,\n                &module_set,\n            );\n\n            for dependency in dependencies {\n                if dependency == *module_name {\n                    continue;\n                }\n                graph\n                    .get_mut(module_name)\n                    .expect(\"all top-level modules should have graph entries\")\n                    .insert(dependency.clone());\n                edge_provenance\n                    .entry((module_name.clone(), dependency))\n                    .or_default()\n                    .insert(relative_path.clone());\n            }\n        }\n    }\n\n    let edges: Vec<DependencyEdge> = edge_provenance\n        .into_iter()\n        .map(|((from, to), via_files)| DependencyEdge {\n            from,\n            to,\n            via_files: via_files.into_iter().collect(),\n        })\n        .collect();\n\n    let strongly_connected_components = tarjan(&top_level_modules, &graph)\n        .into_iter()\n        .filter(|component| component.len() > 1)\n        .map(|component| {\n            let members: BTreeSet<String> = component.into_iter().collect();\n            let mut modules: Vec<String> = members.iter().cloned().collect();\n            modules.sort();\n\n            let internal_edges: Vec<DependencyEdge> = edges\n                .iter()\n                .filter(|edge| members.contains(&edge.from) && members.contains(&edge.to))\n                .cloned()\n                .collect();\n\n            StronglyConnectedComponent {\n                modules,\n                internal_edges,\n            }\n        })\n        .collect();\n\n    let adjacency_by_module = build_adjacency_map(&top_level_modules, &edges);\n\n    EngineDependencyGraph {\n     
   module_source: \"src/lib.rs\".to_string(),\n        modules_analyzed: top_level_modules,\n        edges,\n        strongly_connected_components,\n        adjacency_by_module,\n    }\n}\n\nfn build_adjacency_map(\n    modules: &[String],\n    edges: &[DependencyEdge],\n) -> BTreeMap<String, ModuleAdjacency> {\n    let mut incoming: BTreeMap<String, BTreeSet<String>> = modules\n        .iter()\n        .cloned()\n        .map(|module| (module, BTreeSet::new()))\n        .collect();\n    let mut outgoing: BTreeMap<String, BTreeSet<String>> = modules\n        .iter()\n        .cloned()\n        .map(|module| (module, BTreeSet::new()))\n        .collect();\n\n    for edge in edges {\n        incoming\n            .get_mut(&edge.to)\n            .expect(\"all destination modules should exist in adjacency map\")\n            .insert(edge.from.clone());\n        outgoing\n            .get_mut(&edge.from)\n            .expect(\"all source modules should exist in adjacency map\")\n            .insert(edge.to.clone());\n    }\n\n    modules\n        .iter()\n        .cloned()\n        .map(|module| {\n            let incoming = incoming\n                .remove(&module)\n                .expect(\"all modules should have incoming adjacency entries\")\n                .into_iter()\n                .collect();\n            let outgoing = outgoing\n                .remove(&module)\n                .expect(\"all modules should have outgoing adjacency entries\")\n                .into_iter()\n                .collect();\n            (module, ModuleAdjacency { incoming, outgoing })\n        })\n        .collect()\n}\n\nfn parse_top_level_modules(lib_source: &str) -> Vec<String> {\n    let mut modules = Vec::new();\n    let mut pending_attributes = Vec::new();\n\n    for line in lib_source.lines() {\n        let trimmed = line.trim();\n        if trimmed.is_empty() {\n            continue;\n        }\n        if trimmed.starts_with(\"#[\") {\n            
pending_attributes.push(trimmed.to_string());\n            continue;\n        }\n\n        let mut cursor = trimmed;\n        if let Some(rest) = cursor.strip_prefix(\"pub(crate) \") {\n            cursor = rest;\n        } else if let Some(rest) = cursor.strip_prefix(\"pub \") {\n            cursor = rest;\n        } else if cursor.starts_with(\"pub(\") {\n            if let Some(idx) = cursor.find(\") \") {\n                cursor = &cursor[idx + 2..];\n            }\n        }\n\n        if let Some(rest) = cursor.strip_prefix(\"mod \") {\n            if let Some(module_name) = rest.strip_suffix(';') {\n                let is_test_only = pending_attributes\n                    .iter()\n                    .any(|attribute| attribute.contains(\"cfg(test)\"));\n                if !is_test_only {\n                    let name = module_name.trim();\n                    if !name.is_empty() {\n                        modules.push(name.to_string());\n                    }\n                }\n            }\n        }\n\n        pending_attributes.clear();\n    }\n\n    modules\n}\n\nfn rust_files_for_top_level_module(module_name: &str) -> Vec<PathBuf> {\n    let mut files = Vec::new();\n    let module_file = src_root().join(format!(\"{module_name}.rs\"));\n    let module_directory = src_root().join(module_name);\n\n    if module_file.exists() {\n        files.push(module_file);\n    }\n    if module_directory.exists() {\n        walk_rust_files(&module_directory, &mut files);\n    }\n\n    files.sort();\n    files\n}\n\nfn walk_rust_files(directory: &Path, files: &mut Vec<PathBuf>) {\n    for entry in fs::read_dir(directory).expect(\"directory should be readable\") {\n        let entry = entry.expect(\"directory entry should be readable\");\n        let path = entry.path();\n        if path.is_dir() {\n            if path.file_name().is_some_and(|name| name == \"tests\") {\n                continue;\n            }\n            walk_rust_files(&path, files);\n            
continue;\n        }\n        if !path.is_file() {\n            continue;\n        }\n        if path.extension().is_some_and(|ext| ext == \"rs\")\n            && path.file_name().is_none_or(|name| name != \"tests.rs\")\n        {\n            files.push(path);\n        }\n    }\n}\n\nfn module_path_for_file(relative_path: &str) -> Vec<String> {\n    let normalized: Vec<&str> = relative_path.split('/').collect();\n    if normalized.len() == 1 {\n        return vec![normalized[0].trim_end_matches(\".rs\").to_string()];\n    }\n\n    if normalized.last() == Some(&\"mod.rs\") {\n        return normalized[..normalized.len() - 1]\n            .iter()\n            .map(|segment| (*segment).to_string())\n            .collect();\n    }\n\n    let mut parts: Vec<String> = normalized[..normalized.len() - 1]\n        .iter()\n        .map(|segment| (*segment).to_string())\n        .collect();\n    let filename = normalized\n        .last()\n        .expect(\"relative path should contain a file name\")\n        .trim_end_matches(\".rs\");\n    parts.push(filename.to_string());\n    parts\n}\n\nfn collect_dependencies_from_source(\n    source: &str,\n    current_module_path: &[String],\n    module_set: &HashSet<String>,\n) -> BTreeSet<String> {\n    let without_tests = strip_test_code(source);\n    let sanitized = mask_rust_source(&without_tests);\n    let mut dependencies = BTreeSet::new();\n\n    dependencies.extend(collect_use_dependencies(\n        &sanitized,\n        current_module_path,\n        module_set,\n    ));\n    dependencies.extend(collect_explicit_path_dependencies(\n        &sanitized,\n        current_module_path,\n        module_set,\n    ));\n\n    dependencies\n}\n\nfn strip_test_code(source: &str) -> String {\n    let stripped = strip_cfg_test_items(source);\n    let masked = mask_rust_source(&stripped);\n    let mut ranges = Vec::new();\n    let bytes = masked.as_bytes();\n    let mut index = 0usize;\n\n    while index < bytes.len() {\n        if let 
Some((mod_start, after_mod)) = match_keyword(bytes, index, b\"mod\") {\n            let after_whitespace = skip_whitespace(bytes, after_mod);\n            if let Some((ident, after_ident)) = parse_identifier(bytes, after_whitespace) {\n                let ident = normalize_identifier(&ident);\n                let after_name = skip_whitespace(bytes, after_ident);\n                if ident == \"tests\" && bytes.get(after_name) == Some(&b'{') {\n                    if let Some(close_brace_index) = find_matching_brace(bytes, after_name) {\n                        ranges.push((mod_start, close_brace_index + 1));\n                        index = close_brace_index + 1;\n                        continue;\n                    }\n                }\n            }\n        }\n        index += 1;\n    }\n\n    let mut result = stripped;\n    ranges.sort_by(|left, right| right.0.cmp(&left.0));\n    for (start, end) in ranges {\n        result.replace_range(start..end, \"\");\n    }\n    result\n}\n\nfn strip_cfg_test_items(source: &str) -> String {\n    let lines: Vec<&str> = source.lines().collect();\n    let mut output = String::new();\n    let mut index = 0usize;\n\n    while index < lines.len() {\n        let line = lines[index];\n        let trimmed = line.trim_start();\n        if trimmed.starts_with(\"#[\") && trimmed.contains(\"cfg(test)\") {\n            index += 1;\n            while index < lines.len() && lines[index].trim_start().starts_with(\"#[\") {\n                index += 1;\n            }\n            skip_annotated_item(&lines, &mut index);\n            continue;\n        }\n\n        output.push_str(line);\n        output.push('\\n');\n        index += 1;\n    }\n\n    output\n}\n\nfn skip_annotated_item(lines: &[&str], index: &mut usize) {\n    let mut brace_depth = 0i32;\n    let mut saw_item_body = false;\n\n    while *index < lines.len() {\n        let line = lines[*index];\n        brace_depth += brace_delta(line);\n        saw_item_body |= 
line.contains('{') || line.trim_end().ends_with(';');\n        *index += 1;\n\n        if saw_item_body && brace_depth <= 0 {\n            break;\n        }\n    }\n}\n\nfn brace_delta(line: &str) -> i32 {\n    line.chars().fold(0, |count, ch| match ch {\n        '{' => count + 1,\n        '}' => count - 1,\n        _ => count,\n    })\n}\n\nfn mask_rust_source(source: &str) -> String {\n    let bytes = source.as_bytes();\n    let mut result = vec![b' '; bytes.len()];\n    let mut index = 0usize;\n    let mut block_comment_depth = 0usize;\n\n    while index < bytes.len() {\n        let current = bytes[index];\n        let next = bytes.get(index + 1).copied().unwrap_or_default();\n\n        if block_comment_depth > 0 {\n            if current == b'/' && next == b'*' {\n                block_comment_depth += 1;\n                index += 2;\n                continue;\n            }\n            if current == b'*' && next == b'/' {\n                block_comment_depth -= 1;\n                index += 2;\n                continue;\n            }\n            if current == b'\\n' {\n                result[index] = b'\\n';\n            }\n            index += 1;\n            continue;\n        }\n\n        if current == b'/' && next == b'/' {\n            index += 2;\n            while index < bytes.len() && bytes[index] != b'\\n' {\n                index += 1;\n            }\n            continue;\n        }\n\n        if current == b'/' && next == b'*' {\n            block_comment_depth = 1;\n            index += 2;\n            continue;\n        }\n\n        if current == b'\"' {\n            result[index] = b' ';\n            index += 1;\n            while index < bytes.len() {\n                let ch = bytes[index];\n                if ch == b'\\n' {\n                    result[index] = b'\\n';\n                }\n                index += 1;\n                if ch == b'\\\\' {\n                    if index < bytes.len() {\n                        if bytes[index] == 
b'\\n' {\n                            result[index] = b'\\n';\n                        }\n                        index += 1;\n                    }\n                    continue;\n                }\n                if ch == b'\"' {\n                    break;\n                }\n            }\n            continue;\n        }\n\n        if current == b'r' {\n            let mut probe = index + 1;\n            while bytes.get(probe) == Some(&b'#') {\n                probe += 1;\n            }\n            if bytes.get(probe) == Some(&b'\"') {\n                let hash_count = probe - index - 1;\n                let closing_len = hash_count + 1;\n                index = probe + 1;\n                while index < bytes.len() {\n                    if bytes[index] == b'\\n' {\n                        result[index] = b'\\n';\n                    }\n                    if bytes[index] == b'\"'\n                        && bytes\n                            .get(index + 1..index + 1 + hash_count)\n                            .is_some_and(|suffix| suffix.iter().all(|byte| *byte == b'#'))\n                    {\n                        index += closing_len;\n                        break;\n                    }\n                    index += 1;\n                }\n                continue;\n            }\n        }\n\n        result[index] = current;\n        index += 1;\n    }\n\n    String::from_utf8(result).expect(\"masked Rust source should stay valid UTF-8\")\n}\n\nfn collect_use_dependencies(\n    source: &str,\n    current_module_path: &[String],\n    module_set: &HashSet<String>,\n) -> BTreeSet<String> {\n    let bytes = source.as_bytes();\n    let mut dependencies = BTreeSet::new();\n    let mut index = 0usize;\n\n    while index < bytes.len() {\n        if let Some((_, after_use)) = match_keyword(bytes, index, b\"use\") {\n            let mut cursor = after_use;\n            while cursor < bytes.len() && bytes[cursor] != b';' {\n                cursor += 1;\n        
    }\n            if cursor < bytes.len() {\n                let spec = &source[after_use..cursor];\n                dependencies.extend(resolve_use_dependencies(\n                    spec,\n                    current_module_path,\n                    module_set,\n                ));\n                index = cursor + 1;\n                continue;\n            }\n        }\n        index += 1;\n    }\n\n    dependencies\n}\n\nfn resolve_use_dependencies(\n    spec: &str,\n    current_module_path: &[String],\n    module_set: &HashSet<String>,\n) -> BTreeSet<String> {\n    let tokens = tokenize_use_spec(spec);\n    let mut dependencies = BTreeSet::new();\n    let mut index = 0usize;\n\n    while index < tokens.len() {\n        index = parse_use_tree(\n            &tokens,\n            index,\n            current_module_path,\n            None,\n            module_set,\n            &mut dependencies,\n        );\n        if matches!(tokens.get(index), Some(UseToken::Comma)) {\n            index += 1;\n        } else {\n            break;\n        }\n    }\n\n    dependencies\n}\n\nfn tokenize_use_spec(spec: &str) -> Vec<UseToken> {\n    let bytes = spec.as_bytes();\n    let mut tokens = Vec::new();\n    let mut index = 0usize;\n\n    while index < bytes.len() {\n        let current = bytes[index];\n        let next = bytes.get(index + 1).copied().unwrap_or_default();\n\n        if current.is_ascii_whitespace() {\n            index += 1;\n            continue;\n        }\n        if current == b':' && next == b':' {\n            tokens.push(UseToken::DblColon);\n            index += 2;\n            continue;\n        }\n        if current == b'{' {\n            tokens.push(UseToken::LBrace);\n            index += 1;\n            continue;\n        }\n        if current == b'}' {\n            tokens.push(UseToken::RBrace);\n            index += 1;\n            continue;\n        }\n        if current == b',' {\n            tokens.push(UseToken::Comma);\n            
index += 1;\n            continue;\n        }\n        if current == b'*' {\n            tokens.push(UseToken::Star);\n            index += 1;\n            continue;\n        }\n        if let Some((ident, next_index)) = parse_identifier(bytes, index) {\n            let normalized = normalize_identifier(&ident);\n            if normalized == \"as\" {\n                tokens.push(UseToken::As);\n            } else {\n                tokens.push(UseToken::Ident(normalized));\n            }\n            index = next_index;\n            continue;\n        }\n\n        index += 1;\n    }\n\n    tokens\n}\n\nfn parse_use_tree(\n    tokens: &[UseToken],\n    index: usize,\n    current_module_path: &[String],\n    base_context: Option<&[String]>,\n    module_set: &HashSet<String>,\n    dependencies: &mut BTreeSet<String>,\n) -> usize {\n    let (path_parts, next_index) = parse_use_path(tokens, index);\n    if path_parts.is_empty() {\n        return skip_until_boundary(tokens, index);\n    }\n\n    let resolved_path = resolve_use_path(&path_parts, current_module_path, base_context);\n    if let Some(dependency) = resolved_path.first() {\n        if module_set.contains(dependency) {\n            dependencies.insert(dependency.clone());\n        }\n    }\n\n    let mut cursor = next_index;\n    if matches!(tokens.get(cursor), Some(UseToken::DblColon))\n        && matches!(tokens.get(cursor + 1), Some(UseToken::LBrace))\n    {\n        cursor += 2;\n        while cursor < tokens.len() && !matches!(tokens.get(cursor), Some(UseToken::RBrace)) {\n            cursor = parse_use_tree(\n                tokens,\n                cursor,\n                current_module_path,\n                Some(&resolved_path),\n                module_set,\n                dependencies,\n            );\n            if matches!(tokens.get(cursor), Some(UseToken::Comma)) {\n                cursor += 1;\n            }\n        }\n        if matches!(tokens.get(cursor), Some(UseToken::RBrace)) {\n        
    cursor += 1;\n        }\n        return cursor;\n    }\n\n    if matches!(tokens.get(cursor), Some(UseToken::DblColon))\n        && matches!(tokens.get(cursor + 1), Some(UseToken::Star))\n    {\n        return cursor + 2;\n    }\n\n    if matches!(tokens.get(cursor), Some(UseToken::As)) {\n        return cursor\n            + if matches!(tokens.get(cursor + 1), Some(UseToken::Ident(_))) {\n                2\n            } else {\n                1\n            };\n    }\n\n    cursor\n}\n\nfn parse_use_path(tokens: &[UseToken], index: usize) -> (Vec<String>, usize) {\n    let mut path_parts = Vec::new();\n    let mut cursor = index;\n\n    while let Some(UseToken::Ident(value)) = tokens.get(cursor) {\n        path_parts.push(value.clone());\n        if matches!(tokens.get(cursor + 1), Some(UseToken::DblColon))\n            && matches!(tokens.get(cursor + 2), Some(UseToken::Ident(_)))\n        {\n            cursor += 2;\n            continue;\n        }\n        cursor += 1;\n        break;\n    }\n\n    (path_parts, cursor)\n}\n\nfn resolve_use_path(\n    path_parts: &[String],\n    current_module_path: &[String],\n    base_context: Option<&[String]>,\n) -> Vec<String> {\n    if let Some(base_context) = base_context {\n        if path_parts.first().is_some_and(|part| part == \"self\") {\n            let mut result = base_context.to_vec();\n            result.extend(path_parts.iter().skip(1).cloned());\n            return result;\n        }\n        if path_parts\n            .first()\n            .is_some_and(|part| part == \"crate\" || part == \"super\")\n        {\n            return resolve_relative_path(path_parts, current_module_path);\n        }\n        let mut result = base_context.to_vec();\n        result.extend(path_parts.iter().cloned());\n        return result;\n    }\n\n    if path_parts\n        .first()\n        .is_none_or(|part| part != \"crate\" && part != \"self\" && part != \"super\")\n    {\n        return Vec::new();\n    }\n\n    
resolve_relative_path(path_parts, current_module_path)\n}\n\nfn resolve_relative_path(path_parts: &[String], current_module_path: &[String]) -> Vec<String> {\n    if path_parts.first().is_some_and(|part| part == \"crate\") {\n        return path_parts.iter().skip(1).cloned().collect();\n    }\n    if path_parts.first().is_some_and(|part| part == \"self\") {\n        let mut result = current_module_path.to_vec();\n        result.extend(path_parts.iter().skip(1).cloned());\n        return result;\n    }\n\n    let super_count = path_parts\n        .iter()\n        .take_while(|part| *part == \"super\")\n        .count();\n    let mut result: Vec<String> = current_module_path\n        .iter()\n        .take(current_module_path.len().saturating_sub(super_count))\n        .cloned()\n        .collect();\n    result.extend(path_parts.iter().skip(super_count).cloned());\n    result\n}\n\nfn skip_until_boundary(tokens: &[UseToken], index: usize) -> usize {\n    let mut cursor = index;\n    while cursor < tokens.len()\n        && !matches!(\n            tokens.get(cursor),\n            Some(UseToken::Comma) | Some(UseToken::RBrace)\n        )\n    {\n        cursor += 1;\n    }\n    cursor\n}\n\nfn collect_explicit_path_dependencies(\n    source: &str,\n    current_module_path: &[String],\n    module_set: &HashSet<String>,\n) -> BTreeSet<String> {\n    let bytes = source.as_bytes();\n    let mut dependencies = BTreeSet::new();\n    let mut index = 0usize;\n\n    while index < bytes.len() {\n        let Some((prefix, after_prefix)) = parse_explicit_prefix(bytes, index) else {\n            index += 1;\n            continue;\n        };\n\n        let after_separator = skip_whitespace(bytes, after_prefix);\n        if bytes.get(after_separator..after_separator + 2) != Some(&b\"::\"[..]) {\n            index += 1;\n            continue;\n        }\n\n        let after_double_colon = skip_whitespace(bytes, after_separator + 2);\n        let Some((first_segment, 
after_first_segment)) =\n            parse_identifier(bytes, after_double_colon)\n        else {\n            index += 1;\n            continue;\n        };\n\n        let dependency = resolve_explicit_dependency(\n            &prefix,\n            &normalize_identifier(&first_segment),\n            current_module_path,\n        );\n        if let Some(dependency) = dependency {\n            if module_set.contains(&dependency) {\n                dependencies.insert(dependency);\n            }\n        }\n\n        index = after_first_segment;\n    }\n\n    dependencies\n}\n\nfn parse_explicit_prefix(bytes: &[u8], index: usize) -> Option<(Vec<String>, usize)> {\n    let (ident, mut cursor) = parse_identifier(bytes, index)?;\n    let normalized = normalize_identifier(&ident);\n    if normalized != \"crate\" && normalized != \"self\" && normalized != \"super\" {\n        return None;\n    }\n\n    let mut prefix = vec![normalized];\n    loop {\n        let after_whitespace = skip_whitespace(bytes, cursor);\n        if bytes.get(after_whitespace..after_whitespace + 2) != Some(&b\"::\"[..]) {\n            return Some((prefix, cursor));\n        }\n        let after_separator = skip_whitespace(bytes, after_whitespace + 2);\n        let Some((next_ident, next_cursor)) = parse_identifier(bytes, after_separator) else {\n            return Some((prefix, cursor));\n        };\n        let next_ident = normalize_identifier(&next_ident);\n        if next_ident != \"super\" {\n            return Some((prefix, cursor));\n        }\n        prefix.push(next_ident);\n        cursor = next_cursor;\n    }\n}\n\nfn resolve_explicit_dependency(\n    prefix: &[String],\n    first_segment: &str,\n    current_module_path: &[String],\n) -> Option<String> {\n    match prefix.first()?.as_str() {\n        \"crate\" => Some(first_segment.to_string()),\n        \"self\" => current_module_path.first().cloned(),\n        \"super\" => {\n            let super_count = prefix.iter().filter(|segment| 
*segment == \"super\").count();\n            let mut absolute_path: Vec<String> = current_module_path\n                .iter()\n                .take(current_module_path.len().saturating_sub(super_count))\n                .cloned()\n                .collect();\n            absolute_path.push(first_segment.to_string());\n            absolute_path.first().cloned()\n        }\n        _ => None,\n    }\n}\n\nfn parse_identifier(bytes: &[u8], index: usize) -> Option<(String, usize)> {\n    let current = *bytes.get(index)?;\n    if current == b'r' && bytes.get(index + 1) == Some(&b'#') {\n        let mut cursor = index + 2;\n        if !bytes.get(cursor).is_some_and(|byte| is_ident_start(*byte)) {\n            return None;\n        }\n        cursor += 1;\n        while bytes\n            .get(cursor)\n            .is_some_and(|byte| is_ident_continue(*byte))\n        {\n            cursor += 1;\n        }\n        return Some((\n            String::from_utf8(bytes[index..cursor].to_vec())\n                .expect(\"raw identifier should stay valid UTF-8\"),\n            cursor,\n        ));\n    }\n\n    if !is_ident_start(current) {\n        return None;\n    }\n\n    let mut cursor = index + 1;\n    while bytes\n        .get(cursor)\n        .is_some_and(|byte| is_ident_continue(*byte))\n    {\n        cursor += 1;\n    }\n\n    Some((\n        String::from_utf8(bytes[index..cursor].to_vec())\n            .expect(\"identifier should stay valid UTF-8\"),\n        cursor,\n    ))\n}\n\nfn normalize_identifier(identifier: &str) -> String {\n    identifier\n        .strip_prefix(\"r#\")\n        .unwrap_or(identifier)\n        .to_string()\n}\n\nfn is_ident_start(byte: u8) -> bool {\n    byte.is_ascii_alphabetic() || byte == b'_'\n}\n\nfn is_ident_continue(byte: u8) -> bool {\n    byte.is_ascii_alphanumeric() || byte == b'_'\n}\n\nfn skip_whitespace(bytes: &[u8], mut index: usize) -> usize {\n    while bytes\n        .get(index)\n        .is_some_and(|byte| 
byte.is_ascii_whitespace())\n    {\n        index += 1;\n    }\n    index\n}\n\nfn match_keyword(bytes: &[u8], index: usize, keyword: &[u8]) -> Option<(usize, usize)> {\n    let end = index.checked_add(keyword.len())?;\n    if bytes.get(index..end)? != keyword {\n        return None;\n    }\n\n    let boundary_before = index == 0 || !is_ident_continue(bytes[index - 1]);\n    let boundary_after = bytes.get(end).is_none_or(|byte| !is_ident_continue(*byte));\n    if boundary_before && boundary_after {\n        Some((index, end))\n    } else {\n        None\n    }\n}\n\nfn find_matching_brace(bytes: &[u8], open_brace_index: usize) -> Option<usize> {\n    let mut depth = 0i32;\n    for (index, byte) in bytes.iter().copied().enumerate().skip(open_brace_index) {\n        match byte {\n            b'{' => depth += 1,\n            b'}' => {\n                depth -= 1;\n                if depth == 0 {\n                    return Some(index);\n                }\n            }\n            _ => {}\n        }\n    }\n    None\n}\n\nfn tarjan(nodes: &[String], graph: &BTreeMap<String, BTreeSet<String>>) -> Vec<Vec<String>> {\n    fn strong_connect(\n        node: &str,\n        graph: &BTreeMap<String, BTreeSet<String>>,\n        next_index: &mut usize,\n        stack: &mut Vec<String>,\n        on_stack: &mut HashSet<String>,\n        index_by_node: &mut HashMap<String, usize>,\n        low_link_by_node: &mut HashMap<String, usize>,\n        components: &mut Vec<Vec<String>>,\n    ) {\n        index_by_node.insert(node.to_string(), *next_index);\n        low_link_by_node.insert(node.to_string(), *next_index);\n        *next_index += 1;\n        stack.push(node.to_string());\n        on_stack.insert(node.to_string());\n\n        for neighbor in graph\n            .get(node)\n            .into_iter()\n            .flat_map(|neighbors| neighbors.iter())\n        {\n            if !index_by_node.contains_key(neighbor) {\n                strong_connect(\n                    
neighbor,\n                    graph,\n                    next_index,\n                    stack,\n                    on_stack,\n                    index_by_node,\n                    low_link_by_node,\n                    components,\n                );\n                let new_low_link = low_link_by_node[node].min(low_link_by_node[neighbor]);\n                low_link_by_node.insert(node.to_string(), new_low_link);\n            } else if on_stack.contains(neighbor) {\n                let new_low_link = low_link_by_node[node].min(index_by_node[neighbor]);\n                low_link_by_node.insert(node.to_string(), new_low_link);\n            }\n        }\n\n        if low_link_by_node[node] != index_by_node[node] {\n            return;\n        }\n\n        let mut component = Vec::new();\n        while let Some(member) = stack.pop() {\n            on_stack.remove(&member);\n            component.push(member.clone());\n            if member == node {\n                break;\n            }\n        }\n        components.push(component);\n    }\n\n    let mut next_index = 0usize;\n    let mut stack = Vec::new();\n    let mut on_stack = HashSet::new();\n    let mut index_by_node = HashMap::new();\n    let mut low_link_by_node = HashMap::new();\n    let mut components = Vec::new();\n\n    for node in nodes {\n        if !index_by_node.contains_key(node) {\n            strong_connect(\n                node,\n                graph,\n                &mut next_index,\n                &mut stack,\n                &mut on_stack,\n                &mut index_by_node,\n                &mut low_link_by_node,\n                &mut components,\n            );\n        }\n    }\n\n    components\n}\n\nfn module_set(graph: &EngineDependencyGraph) -> BTreeSet<String> {\n    graph.modules_analyzed.iter().cloned().collect()\n}\n\nfn forbidden_dependency_lookup() -> BTreeMap<&'static str, &'static ForbiddenDependencyRule> {\n    let mut lookup = BTreeMap::new();\n    for rule in 
FORBIDDEN_DEPENDENCY_RULES {\n        let replaced = lookup.insert(rule.from_scope, rule);\n        assert!(\n            replaced.is_none(),\n            \"forbidden dependency map must define each source scope only once; duplicate `{}`\",\n            rule.from_scope,\n        );\n    }\n    lookup\n}\n\nfn actual_architecture_violations<'a>(\n    graph: &'a EngineDependencyGraph,\n    forbidden_lookup: &BTreeMap<&'static str, &'static ForbiddenDependencyRule>,\n) -> Vec<&'a DependencyEdge> {\n    graph\n        .edges\n        .iter()\n        .filter(|edge| {\n            forbidden_lookup\n                .get(edge.from.as_str())\n                .is_some_and(|rule| rule.forbidden_scopes.contains(&edge.to.as_str()))\n        })\n        .collect()\n}\n\nfn target_core_graph(graph: &EngineDependencyGraph) -> BTreeMap<String, BTreeSet<String>> {\n    let target_core_modules: BTreeSet<String> = TARGET_CORE_MODULES\n        .iter()\n        .map(|module| (*module).to_string())\n        .collect();\n    let mut filtered: BTreeMap<String, BTreeSet<String>> = target_core_modules\n        .iter()\n        .cloned()\n        .map(|module| (module, BTreeSet::new()))\n        .collect();\n\n    for edge in &graph.edges {\n        if !target_core_modules.contains(&edge.from) || !target_core_modules.contains(&edge.to) {\n            continue;\n        }\n        if target_core_transition_allows_edge(edge) {\n            continue;\n        }\n        filtered\n            .get_mut(&edge.from)\n            .expect(\"target core graph should contain every filtered source\")\n            .insert(edge.to.clone());\n    }\n\n    filtered\n}\n\nfn target_core_transition_allows_edge(edge: &DependencyEdge) -> bool {\n    (edge.from == \"transaction\" && edge.to == \"session\")\n        || (edge.from == \"sql2\" && edge.to == \"transaction\")\n}\n\nfn render_target_core_graph(graph: &BTreeMap<String, BTreeSet<String>>) -> String {\n    let mut rendered = String::new();\n\n    for 
(module, outgoing) in graph {\n        let neighbors = outgoing.iter().cloned().collect::<Vec<_>>().join(\", \");\n        let _ = writeln!(&mut rendered, \"{module} -> [{neighbors}]\");\n    }\n\n    rendered\n}\n\nfn owner_root_cycles(graph: &BTreeMap<String, BTreeSet<String>>) -> Vec<Vec<String>> {\n    let nodes = graph.keys().cloned().collect::<Vec<_>>();\n    let mut cycles = tarjan(&nodes, graph)\n        .into_iter()\n        .filter(|component| {\n            component.len() > 1\n                || component.first().is_some_and(|node| {\n                    graph\n                        .get(node)\n                        .is_some_and(|neighbors| neighbors.contains(node))\n                })\n        })\n        .map(|mut component| {\n            component.sort();\n            component\n        })\n        .collect::<Vec<_>>();\n    cycles.sort();\n    cycles\n}\n\nfn render_owner_root_cycles(cycles: &[Vec<String>]) -> String {\n    let mut rendered = String::new();\n    for cycle in cycles {\n        let _ = writeln!(&mut rendered, \"  - {}\", cycle.join(\" -> \"));\n    }\n    rendered\n}\n\nfn render_forbidden_dependency_violations(\n    violations: &[&DependencyEdge],\n    forbidden_lookup: &BTreeMap<&'static str, &'static ForbiddenDependencyRule>,\n) -> String {\n    let mut grouped: BTreeMap<&str, Vec<&DependencyEdge>> = BTreeMap::new();\n\n    for violation in violations {\n        grouped\n            .entry(violation.from.as_str())\n            .or_default()\n            .push(*violation);\n    }\n\n    let mut rendered = String::new();\n    for (from_scope, edges) in grouped {\n        let rule = forbidden_lookup\n            .get(from_scope)\n            .expect(\"every forbidden violation should have a matching rule\");\n        let _ = writeln!(&mut rendered, \"{from_scope}: {}\", rule.reason);\n        for edge in edges {\n            let _ = writeln!(&mut rendered, \"  - {} -> {}\", edge.from, edge.to);\n            for via_file in 
&edge.via_files {\n                let _ = writeln!(&mut rendered, \"    via {via_file}\");\n            }\n        }\n    }\n\n    rendered\n}\n\nfn production_source_files() -> Vec<(String, String)> {\n    let lib_source = fs::read_to_string(lib_path()).expect(\"src/lib.rs should be readable\");\n    let top_level_modules = parse_top_level_modules(&lib_source);\n    let mut files = Vec::new();\n\n    files.push((\"lib.rs\".to_string(), strip_test_code(&lib_source)));\n\n    for module_name in top_level_modules {\n        for absolute_path in rust_files_for_top_level_module(&module_name) {\n            let relative_path = absolute_path\n                .strip_prefix(src_root())\n                .expect(\"module source file should be inside src/\")\n                .to_string_lossy()\n                .replace('\\\\', \"/\");\n            if is_test_support_relative_path(&relative_path) {\n                continue;\n            }\n            let source =\n                fs::read_to_string(&absolute_path).expect(\"module source file should be readable\");\n            files.push((relative_path, strip_test_code(&source)));\n        }\n    }\n\n    files.sort_by(|left, right| left.0.cmp(&right.0));\n    files\n}\n\nfn source_and_test_rust_files() -> Vec<(String, String)> {\n    let mut files = production_source_files();\n    let mut test_files = Vec::new();\n    let tests_root = engine_root().join(\"tests\");\n    walk_rust_files(&tests_root, &mut test_files);\n\n    for absolute_path in test_files {\n        let relative_path = absolute_path\n            .strip_prefix(engine_root())\n            .expect(\"test source file should be inside the engine root\")\n            .to_string_lossy()\n            .replace('\\\\', \"/\");\n        let source =\n            fs::read_to_string(&absolute_path).expect(\"test source file should be readable\");\n        files.push((relative_path, source));\n    }\n\n    files.sort_by(|left, right| left.0.cmp(&right.0));\n    
files\n}\n\nfn is_test_support_relative_path(relative_path: &str) -> bool {\n    let parts: Vec<&str> = relative_path.split('/').collect();\n    parts.iter().any(|part| {\n        *part == \"tests\"\n            || *part == \"test\"\n            || part\n                .strip_suffix(\".rs\")\n                .is_some_and(|stem| stem.ends_with(\"_tests\"))\n            || part.ends_with(\"_tests\")\n    })\n}\n\nfn root_module_entry_relative_path(module_name: &str) -> Option<String> {\n    let module_file = src_root().join(format!(\"{module_name}.rs\"));\n    if module_file.exists() {\n        return Some(format!(\"{module_name}.rs\"));\n    }\n\n    let module_mod_file = src_root().join(module_name).join(\"mod.rs\");\n    if module_mod_file.exists() {\n        return Some(format!(\"{module_name}/mod.rs\"));\n    }\n\n    None\n}\n\nfn parse_declared_modules(source: &str) -> Vec<String> {\n    let mut modules = Vec::new();\n    let mut pending_attributes = Vec::new();\n\n    for line in source.lines() {\n        let trimmed = line.trim();\n        if trimmed.is_empty() {\n            continue;\n        }\n        if trimmed.starts_with(\"#[\") {\n            pending_attributes.push(trimmed.to_string());\n            continue;\n        }\n\n        let mut cursor = trimmed;\n        if let Some(rest) = cursor.strip_prefix(\"pub(crate) \") {\n            cursor = rest;\n        } else if let Some(rest) = cursor.strip_prefix(\"pub \") {\n            cursor = rest;\n        } else if cursor.starts_with(\"pub(\") {\n            if let Some(idx) = cursor.find(\") \") {\n                cursor = &cursor[idx + 2..];\n            }\n        }\n\n        if let Some(rest) = cursor.strip_prefix(\"mod \") {\n            if let Some(module_name) = rest.strip_suffix(';') {\n                let is_test_only = pending_attributes\n                    .iter()\n                    .any(|attribute| attribute.contains(\"cfg(test)\"));\n                if !is_test_only {\n               
     let name = module_name.trim();\n                    if !name.is_empty() {\n                        modules.push(name.to_string());\n                    }\n                }\n            }\n        }\n\n        pending_attributes.clear();\n    }\n\n    modules\n}\n\nfn sealed_owner_child_modules() -> BTreeMap<String, BTreeSet<String>> {\n    let lib_source = fs::read_to_string(lib_path()).expect(\"src/lib.rs should be readable\");\n    let top_level_modules = parse_top_level_modules(&lib_source);\n    let mut child_modules = BTreeMap::new();\n\n    for module_name in top_level_modules {\n        let Some(relative_path) = root_module_entry_relative_path(&module_name) else {\n            continue;\n        };\n        let source = read_engine_source(&relative_path);\n        let declared_modules = parse_declared_modules(&strip_test_code(&source));\n        child_modules.insert(module_name, declared_modules.into_iter().collect());\n    }\n\n    child_modules\n}\n\nfn collect_module_paths_from_source(\n    source: &str,\n    current_module_path: &[String],\n    module_set: &HashSet<String>,\n) -> BTreeSet<Vec<String>> {\n    let without_tests = strip_test_code(source);\n    let sanitized = mask_rust_source(&without_tests);\n    let mut paths = BTreeSet::new();\n\n    paths.extend(collect_use_paths_from_source(\n        &sanitized,\n        current_module_path,\n        module_set,\n    ));\n    paths.extend(collect_explicit_paths_from_source(\n        &sanitized,\n        current_module_path,\n        module_set,\n    ));\n\n    paths\n}\n\nfn collect_use_paths_from_source(\n    source: &str,\n    current_module_path: &[String],\n    module_set: &HashSet<String>,\n) -> BTreeSet<Vec<String>> {\n    let bytes = source.as_bytes();\n    let mut paths = BTreeSet::new();\n    let mut index = 0usize;\n\n    while index < bytes.len() {\n        if let Some((_, after_use)) = match_keyword(bytes, index, b\"use\") {\n            let mut cursor = after_use;\n            while 
cursor < bytes.len() && bytes[cursor] != b';' {\n                cursor += 1;\n            }\n            if cursor < bytes.len() {\n                let spec = &source[after_use..cursor];\n                paths.extend(resolve_use_paths(spec, current_module_path, module_set));\n                index = cursor + 1;\n                continue;\n            }\n        }\n        index += 1;\n    }\n\n    paths\n}\n\nfn resolve_use_paths(\n    spec: &str,\n    current_module_path: &[String],\n    module_set: &HashSet<String>,\n) -> BTreeSet<Vec<String>> {\n    let tokens = tokenize_use_spec(spec);\n    let mut paths = BTreeSet::new();\n    let mut index = 0usize;\n\n    while index < tokens.len() {\n        index = parse_use_tree_paths(\n            &tokens,\n            index,\n            current_module_path,\n            None,\n            module_set,\n            &mut paths,\n        );\n        if matches!(tokens.get(index), Some(UseToken::Comma)) {\n            index += 1;\n        } else {\n            break;\n        }\n    }\n\n    paths\n}\n\nfn parse_use_tree_paths(\n    tokens: &[UseToken],\n    index: usize,\n    current_module_path: &[String],\n    base_context: Option<&[String]>,\n    module_set: &HashSet<String>,\n    paths: &mut BTreeSet<Vec<String>>,\n) -> usize {\n    let (path_parts, next_index) = parse_use_path(tokens, index);\n    if path_parts.is_empty() {\n        return skip_until_boundary(tokens, index);\n    }\n\n    let resolved_path = resolve_use_path(&path_parts, current_module_path, base_context);\n    if resolved_path\n        .first()\n        .is_some_and(|dependency| module_set.contains(dependency))\n    {\n        paths.insert(resolved_path.clone());\n    }\n\n    let mut cursor = next_index;\n    if matches!(tokens.get(cursor), Some(UseToken::DblColon))\n        && matches!(tokens.get(cursor + 1), Some(UseToken::LBrace))\n    {\n        cursor += 2;\n        while cursor < tokens.len() && !matches!(tokens.get(cursor), 
Some(UseToken::RBrace)) {\n            cursor = parse_use_tree_paths(\n                tokens,\n                cursor,\n                current_module_path,\n                Some(&resolved_path),\n                module_set,\n                paths,\n            );\n            if matches!(tokens.get(cursor), Some(UseToken::Comma)) {\n                cursor += 1;\n            }\n        }\n        if matches!(tokens.get(cursor), Some(UseToken::RBrace)) {\n            cursor += 1;\n        }\n        return cursor;\n    }\n\n    if matches!(tokens.get(cursor), Some(UseToken::DblColon))\n        && matches!(tokens.get(cursor + 1), Some(UseToken::Star))\n    {\n        return cursor + 2;\n    }\n\n    if matches!(tokens.get(cursor), Some(UseToken::As)) {\n        return cursor\n            + if matches!(tokens.get(cursor + 1), Some(UseToken::Ident(_))) {\n                2\n            } else {\n                1\n            };\n    }\n\n    cursor\n}\n\nfn collect_explicit_paths_from_source(\n    source: &str,\n    current_module_path: &[String],\n    module_set: &HashSet<String>,\n) -> BTreeSet<Vec<String>> {\n    let bytes = source.as_bytes();\n    let mut paths = BTreeSet::new();\n    let mut index = 0usize;\n\n    while index < bytes.len() {\n        let Some((prefix, after_prefix)) = parse_explicit_prefix(bytes, index) else {\n            index += 1;\n            continue;\n        };\n\n        let after_separator = skip_whitespace(bytes, after_prefix);\n        if bytes.get(after_separator..after_separator + 2) != Some(&b\"::\"[..]) {\n            index += 1;\n            continue;\n        }\n\n        let mut cursor = skip_whitespace(bytes, after_separator + 2);\n        let mut segments = Vec::new();\n\n        loop {\n            let Some((segment, after_segment)) = parse_identifier(bytes, cursor) else {\n                break;\n            };\n            segments.push(normalize_identifier(&segment));\n            let after_whitespace = 
skip_whitespace(bytes, after_segment);\n            if bytes.get(after_whitespace..after_whitespace + 2) == Some(&b\"::\"[..]) {\n                cursor = skip_whitespace(bytes, after_whitespace + 2);\n                continue;\n            }\n            cursor = after_segment;\n            break;\n        }\n\n        if segments.is_empty() {\n            index += 1;\n            continue;\n        }\n\n        let resolved_path = resolve_explicit_path(&prefix, &segments, current_module_path);\n        if resolved_path\n            .first()\n            .is_some_and(|dependency| module_set.contains(dependency))\n        {\n            paths.insert(resolved_path);\n        }\n\n        index = cursor.max(index + 1);\n    }\n\n    paths\n}\n\nfn resolve_explicit_path(\n    prefix: &[String],\n    segments: &[String],\n    current_module_path: &[String],\n) -> Vec<String> {\n    match prefix.first().map(String::as_str) {\n        Some(\"crate\") => segments.to_vec(),\n        Some(\"self\") => {\n            let mut result = current_module_path.to_vec();\n            result.extend(segments.iter().cloned());\n            result\n        }\n        Some(\"super\") => {\n            let super_count = prefix.iter().filter(|segment| *segment == \"super\").count();\n            let mut result: Vec<String> = current_module_path\n                .iter()\n                .take(current_module_path.len().saturating_sub(super_count))\n                .cloned()\n                .collect();\n            result.extend(segments.iter().cloned());\n            result\n        }\n        _ => Vec::new(),\n    }\n}\n\nfn current_sealed_owner_violations() -> Vec<SealedOwnerViolation> {\n    let lib_source = fs::read_to_string(lib_path()).expect(\"src/lib.rs should be readable\");\n    let top_level_modules = parse_top_level_modules(&lib_source);\n    let module_set: HashSet<String> = top_level_modules.iter().cloned().collect();\n    let child_modules = sealed_owner_child_modules();\n    
let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        let current_module_path = module_path_for_file(&relative_path);\n        let Some(current_root) = current_module_path.first() else {\n            continue;\n        };\n\n        for imported_path in\n            collect_module_paths_from_source(&source, &current_module_path, &module_set)\n        {\n            if imported_path.len() < 2 {\n                continue;\n            }\n            let owner_root = imported_path[0].as_str();\n            if owner_root == current_root {\n                continue;\n            }\n            if sealed_owner_allows_importer(owner_root, &relative_path) {\n                continue;\n            }\n            if sealed_owner_allows_import_path(owner_root, &imported_path) {\n                continue;\n            }\n\n            if !violates_sealed_owner_boundary(owner_root, &imported_path, &child_modules) {\n                continue;\n            }\n\n            violations.insert(SealedOwnerViolation {\n                importer_file: relative_path.clone(),\n                imported_path: imported_path.join(\"::\"),\n            });\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn violates_sealed_owner_boundary(\n    owner_root: &str,\n    imported_path: &[String],\n    child_modules: &BTreeMap<String, BTreeSet<String>>,\n) -> bool {\n    if sealed_owner_root_facade_owners().contains(owner_root) {\n        return true;\n    }\n\n    child_modules\n        .get(owner_root)\n        .is_some_and(|owner_child_modules| owner_child_modules.contains(&imported_path[1]))\n}\n\nfn sealed_owner_root_facade_owners() -> BTreeSet<&'static str> {\n    [\"api\"].into_iter().collect()\n}\n\nfn sealed_owner_allows_importer(owner_root: &str, importer_file: &str) -> bool {\n    (matches!(owner_root, \"api\") && importer_file == \"lib.rs\")\n        || importer_file == \"storage_bench.rs\"\n}\n\nfn 
sealed_owner_allows_import_path(owner_root: &str, imported_path: &[String]) -> bool {\n    owner_root == \"transaction\"\n        && imported_path\n            .get(1)\n            .is_some_and(|segment| segment == \"types\")\n}\n\nfn render_grouped_sealed_owner_violations(violations: &[SealedOwnerViolation]) -> String {\n    let mut grouped: BTreeMap<&str, BTreeMap<&str, Vec<&str>>> = BTreeMap::new();\n\n    for violation in violations {\n        let owner_root = violation\n            .imported_path\n            .split(\"::\")\n            .next()\n            .expect(\"imported path should include an owner root\");\n        grouped\n            .entry(owner_root)\n            .or_default()\n            .entry(violation.importer_file.as_str())\n            .or_default()\n            .push(violation.imported_path.as_str());\n    }\n\n    let mut rendered = String::new();\n    for (owner_root, files) in grouped {\n        let _ = writeln!(&mut rendered, \"{owner_root}:\");\n        for (file, imported_paths) in files {\n            let _ = writeln!(&mut rendered, \"  {file}:\");\n            for imported_path in imported_paths {\n                let _ = writeln!(&mut rendered, \"    - {imported_path}\");\n            }\n        }\n    }\n\n    rendered\n}\n\nfn render_grouped_import_path_violations(violations: &[ImportPathViolation]) -> String {\n    let mut grouped: BTreeMap<&str, Vec<&str>> = BTreeMap::new();\n\n    for violation in violations {\n        grouped\n            .entry(violation.importer_file.as_str())\n            .or_default()\n            .push(violation.imported_path.as_str());\n    }\n\n    let mut rendered = String::new();\n    for (file, imported_paths) in grouped {\n        let _ = writeln!(&mut rendered, \"{file}:\");\n        for imported_path in imported_paths {\n            let _ = writeln!(&mut rendered, \"  - {imported_path}\");\n        }\n    }\n\n    rendered\n}\n\nfn render_grouped_raw_sql_execution_violations(violations: 
&[RawSqlExecutionViolation]) -> String {\n    let mut grouped: BTreeMap<&str, Vec<&str>> = BTreeMap::new();\n\n    for violation in violations {\n        grouped\n            .entry(violation.file.as_str())\n            .or_default()\n            .push(violation.pattern);\n    }\n\n    let mut rendered = String::new();\n    for (file, patterns) in grouped {\n        let _ = writeln!(&mut rendered, \"{file}:\");\n        for pattern in patterns {\n            let _ = writeln!(&mut rendered, \"  - {pattern}\");\n        }\n    }\n\n    rendered\n}\n\nfn render_grouped_raw_backend_type_violations(violations: &[RawBackendTypeViolation]) -> String {\n    let mut grouped: BTreeMap<&str, Vec<&str>> = BTreeMap::new();\n\n    for violation in violations {\n        grouped\n            .entry(violation.file.as_str())\n            .or_default()\n            .push(violation.type_name);\n    }\n\n    let mut rendered = String::new();\n    for (file, type_names) in grouped {\n        let _ = writeln!(&mut rendered, \"{file}:\");\n        for type_name in type_names {\n            let _ = writeln!(&mut rendered, \"  - {type_name}\");\n        }\n    }\n\n    rendered\n}\n\nfn render_grouped_transaction_lifecycle_violations(\n    violations: &[TransactionLifecycleViolation],\n) -> String {\n    let mut grouped: BTreeMap<&str, Vec<&str>> = BTreeMap::new();\n\n    for violation in violations {\n        grouped\n            .entry(violation.file.as_str())\n            .or_default()\n            .push(violation.pattern);\n    }\n\n    let mut rendered = String::new();\n    for (file, patterns) in grouped {\n        let _ = writeln!(&mut rendered, \"{file}:\");\n        for pattern in patterns {\n            let _ = writeln!(&mut rendered, \"  - {pattern}\");\n        }\n    }\n\n    rendered\n}\n\nfn render_grouped_sql_runtime_ownership_violations(\n    violations: &[SqlRuntimeOwnershipViolation],\n) -> String {\n    let mut grouped: BTreeMap<&str, Vec<&str>> = BTreeMap::new();\n\n    
for violation in violations {\n        grouped\n            .entry(violation.file.as_str())\n            .or_default()\n            .push(violation.pattern);\n    }\n\n    let mut rendered = String::new();\n    for (file, patterns) in grouped {\n        let _ = writeln!(&mut rendered, \"{file}:\");\n        for pattern in patterns {\n            let _ = writeln!(&mut rendered, \"  - {pattern}\");\n        }\n    }\n\n    rendered\n}\n\nfn top_level_module_set() -> HashSet<String> {\n    let lib_source = fs::read_to_string(lib_path()).expect(\"src/lib.rs should be readable\");\n    parse_top_level_modules(&lib_source).into_iter().collect()\n}\n\nfn services_child_modules() -> BTreeSet<String> {\n    let Some(relative_path) = root_module_entry_relative_path(\"services\") else {\n        return BTreeSet::new();\n    };\n    let source = read_engine_source(&relative_path);\n    parse_declared_modules(&strip_test_code(&source))\n        .into_iter()\n        .collect()\n}\n\nfn current_services_direct_child_import_violations() -> Vec<ImportPathViolation> {\n    let module_set = top_level_module_set();\n    let service_children = services_child_modules();\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        let current_module_path = module_path_for_file(&relative_path);\n        if current_module_path\n            .first()\n            .is_some_and(|root| root == \"services\")\n        {\n            continue;\n        }\n\n        for imported_path in\n            collect_module_paths_from_source(&source, &current_module_path, &module_set)\n        {\n            if imported_path.first().is_none_or(|root| root != \"services\") {\n                continue;\n            }\n\n            let imported_child = imported_path.get(1);\n            let imports_declared_service_child =\n                imported_child.is_some_and(|child| service_children.contains(child));\n            let 
stays_within_direct_child_surface = imported_path.len() <= 3;\n            if imports_declared_service_child && stays_within_direct_child_surface {\n                continue;\n            }\n\n            violations.insert(ImportPathViolation {\n                importer_file: relative_path.clone(),\n                imported_path: imported_path.join(\"::\"),\n            });\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn current_services_external_dependency_violations() -> Vec<ImportPathViolation> {\n    let module_set = top_level_module_set();\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        let current_module_path = module_path_for_file(&relative_path);\n        if current_module_path\n            .first()\n            .is_none_or(|root| root != \"services\")\n        {\n            continue;\n        }\n\n        for imported_path in\n            collect_module_paths_from_source(&source, &current_module_path, &module_set)\n        {\n            let Some(imported_root) = imported_path.first() else {\n                continue;\n            };\n            if imported_root == \"services\" {\n                continue;\n            }\n            if ALLOWED_SERVICE_FOUNDATION_ROOTS.contains(&imported_root.as_str()) {\n                continue;\n            }\n\n            violations.insert(ImportPathViolation {\n                importer_file: relative_path.clone(),\n                imported_path: imported_path.join(\"::\"),\n            });\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn current_services_sibling_dependency_violations() -> Vec<ImportPathViolation> {\n    let module_set = top_level_module_set();\n    let service_children = services_child_modules();\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        let current_module_path = module_path_for_file(&relative_path);\n        if 
current_module_path\n            .first()\n            .is_none_or(|root| root != \"services\")\n        {\n            continue;\n        }\n        let Some(current_child) = current_module_path.get(1) else {\n            continue;\n        };\n\n        for imported_path in\n            collect_module_paths_from_source(&source, &current_module_path, &module_set)\n        {\n            if imported_path.first().is_none_or(|root| root != \"services\") {\n                continue;\n            }\n            let Some(imported_child) = imported_path.get(1) else {\n                continue;\n            };\n            if !service_children.contains(imported_child) {\n                continue;\n            }\n            if imported_child == current_child {\n                continue;\n            }\n\n            violations.insert(ImportPathViolation {\n                importer_file: relative_path.clone(),\n                imported_path: imported_path.join(\"::\"),\n            });\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn is_engine_owned_persistence_path(relative_path: &str) -> bool {\n    let in_scope_owner_root = relative_path.starts_with(\"live_state/\")\n        || relative_path.starts_with(\"canonical/\")\n        || relative_path.starts_with(\"binary_cas/\")\n        || relative_path.starts_with(\"session/version_ops/\");\n    let is_allowed_adapter_surface = relative_path.ends_with(\"/store.rs\")\n        || relative_path.ends_with(\"/store_sql.rs\")\n        || relative_path.ends_with(\"/storage.rs\");\n\n    in_scope_owner_root && !is_allowed_adapter_surface\n}\n\nfn current_engine_owned_persistence_raw_sql_execution_violations() -> Vec<RawSqlExecutionViolation>\n{\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        if !is_engine_owned_persistence_path(&relative_path) {\n            continue;\n        }\n\n        let masked_source = mask_rust_source(&source);\n        
for pattern in [\".execute(\"] {\n            if masked_source.contains(pattern) {\n                violations.insert(RawSqlExecutionViolation {\n                    file: relative_path.clone(),\n                    pattern,\n                });\n            }\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn contains_identifier(source: &str, identifier: &str) -> bool {\n    let bytes = source.as_bytes();\n    let needle = identifier.as_bytes();\n    let mut index = 0usize;\n\n    while index + needle.len() <= bytes.len() {\n        if &bytes[index..index + needle.len()] != needle {\n            index += 1;\n            continue;\n        }\n\n        let boundary_before = index == 0 || !is_ident_continue(bytes[index - 1]);\n        let boundary_after =\n            index + needle.len() == bytes.len() || !is_ident_continue(bytes[index + needle.len()]);\n        if boundary_before && boundary_after {\n            return true;\n        }\n\n        index += 1;\n    }\n\n    false\n}\n\nfn current_engine_owned_persistence_raw_backend_type_violations() -> Vec<RawBackendTypeViolation> {\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        if !is_engine_owned_persistence_path(&relative_path) {\n            continue;\n        }\n\n        let masked_source = mask_rust_source(&source);\n        for type_name in [\n            \"Backend\",\n            \"BackendReadTransaction\",\n            \"BackendWriteTransaction\",\n        ] {\n            if contains_identifier(&masked_source, type_name) {\n                violations.insert(RawBackendTypeViolation {\n                    file: relative_path.clone(),\n                    type_name,\n                });\n            }\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn is_owner_persistence_root_path(relative_path: &str) -> bool {\n    relative_path.starts_with(\"live_state/\")\n        || 
relative_path.starts_with(\"canonical/\")\n        || relative_path.starts_with(\"binary_cas/\")\n}\n\nfn is_owner_sql_adapter_path(relative_path: &str) -> bool {\n    relative_path.ends_with(\"/store_sql.rs\") || relative_path.ends_with(\"/storage.rs\")\n}\n\nfn current_owner_persistence_backend_root_dependency_violations() -> Vec<ImportPathViolation> {\n    let module_set = top_level_module_set();\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        if !is_owner_persistence_root_path(&relative_path)\n            || is_owner_sql_adapter_path(&relative_path)\n        {\n            continue;\n        }\n\n        let masked_source = mask_rust_source(&source);\n        if !contains_identifier(&masked_source, \"Backend\")\n            && !contains_identifier(&masked_source, \"BackendReadTransaction\")\n            && !contains_identifier(&masked_source, \"BackendWriteTransaction\")\n        {\n            continue;\n        }\n\n        let current_module_path = module_path_for_file(&relative_path);\n        for imported_path in\n            collect_module_paths_from_source(&source, &current_module_path, &module_set)\n        {\n            if imported_path.first().is_none_or(|root| root != \"backend\") {\n                continue;\n            }\n\n            violations.insert(ImportPathViolation {\n                importer_file: relative_path.clone(),\n                imported_path: imported_path.join(\"::\"),\n            });\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn current_backend_import_outside_storage_violations() -> Vec<ImportPathViolation> {\n    let module_set = top_level_module_set();\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        if relative_path.starts_with(\"backend/\") || relative_path.starts_with(\"storage/\") {\n            continue;\n        }\n\n        let current_module_path = 
module_path_for_file(&relative_path);\n        for imported_path in\n            collect_module_paths_from_source(&source, &current_module_path, &module_set)\n        {\n            if imported_path.first().is_none_or(|root| root != \"backend\") {\n                continue;\n            }\n\n            violations.insert(ImportPathViolation {\n                importer_file: relative_path.clone(),\n                imported_path: imported_path.join(\"::\"),\n            });\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn current_store_sql_import_boundary_violations() -> Vec<ImportPathViolation> {\n    let module_set = top_level_module_set();\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        let current_module_path = module_path_for_file(&relative_path);\n        let current_root = current_module_path.first().map(String::as_str);\n\n        for imported_path in\n            collect_module_paths_from_source(&source, &current_module_path, &module_set)\n        {\n            if imported_path\n                .get(1)\n                .is_none_or(|segment| segment != \"store_sql\")\n            {\n                continue;\n            }\n\n            let owner_root = imported_path.first().map(String::as_str);\n            if current_root == owner_root {\n                continue;\n            }\n\n            violations.insert(ImportPathViolation {\n                importer_file: relative_path.clone(),\n                imported_path: imported_path.join(\"::\"),\n            });\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn current_owner_persistence_transaction_lifecycle_violations() -> Vec<TransactionLifecycleViolation>\n{\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        if !is_owner_persistence_root_path(&relative_path)\n            || is_owner_sql_adapter_path(&relative_path)\n        {\n       
     continue;\n        }\n\n        let masked_source = mask_rust_source(&source);\n        for pattern in [\n            \".begin_read_transaction(\",\n            \"begin_write_transaction(\",\n            \".commit().await\",\n            \".rollback().await\",\n        ] {\n            if masked_source.contains(pattern) {\n                violations.insert(TransactionLifecycleViolation {\n                    file: relative_path.clone(),\n                    pattern,\n                });\n            }\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn is_owner_local_storage_path(relative_path: &str) -> bool {\n    relative_path.ends_with(\"/storage.rs\")\n}\n\nfn is_allowed_raw_execute_boundary_path(relative_path: &str) -> bool {\n    is_owner_local_storage_path(relative_path)\n        || relative_path.starts_with(\"sql/\")\n        || relative_path.starts_with(\"execution/\")\n        || relative_path.starts_with(\"backend/\")\n        || relative_path == \"transaction/backend.rs\"\n        || relative_path == \"transaction/buffered_write_transaction.rs\"\n        || relative_path == \"transaction/live_state_write_transaction.rs\"\n}\n\nfn current_raw_execute_outside_owner_storage_or_public_sql_boundary_violations(\n) -> Vec<RawSqlExecutionViolation> {\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        if is_allowed_raw_execute_boundary_path(&relative_path) {\n            continue;\n        }\n\n        let masked_source = mask_rust_source(&source);\n        for pattern in [\n            \"backend.execute(\",\n            \"transaction.execute(\",\n            \"executor.execute(\",\n            \"self.base.execute(\",\n            \"self.backend.execute(\",\n            \"self.backend_transaction.execute(\",\n        ] {\n            if masked_source.contains(pattern) {\n                violations.insert(RawSqlExecutionViolation {\n                    file: 
relative_path.clone(),\n                    pattern,\n                });\n            }\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn is_orchestration_runtime_path(relative_path: &str) -> bool {\n    relative_path.starts_with(\"api/\")\n        || relative_path.starts_with(\"init/\")\n        || relative_path.starts_with(\"session/\")\n        || relative_path.starts_with(\"transaction/\")\n}\n\nfn current_scattered_internal_metadata_crud_outside_owner_storage_violations(\n) -> Vec<RawSqlExecutionViolation> {\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        if !is_orchestration_runtime_path(&relative_path)\n            || is_owner_local_storage_path(&relative_path)\n        {\n            continue;\n        }\n\n        let masked_source = mask_rust_source(&source);\n        for pattern in [\n            \"SELECT value FROM lix_internal_workspace_metadata\",\n            \"INSERT INTO lix_internal_workspace_metadata\",\n            \"CREATE TABLE lix_internal_workspace_metadata\",\n            \"FROM lix_internal_commit_idempotency\",\n            \"INSERT INTO lix_internal_commit_idempotency\",\n            \"CREATE TABLE IF NOT EXISTS lix_internal_commit_idempotency\",\n            \"FROM lix_internal_undo_redo_operation\",\n            \"INSERT INTO lix_internal_undo_redo_operation\",\n            \"CREATE TABLE IF NOT EXISTS lix_internal_undo_redo_operation\",\n        ] {\n            if masked_source.contains(pattern) {\n                violations.insert(RawSqlExecutionViolation {\n                    file: relative_path.clone(),\n                    pattern,\n                });\n            }\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn current_owner_storage_public_sql_shaped_api_violations() -> Vec<RawSqlExecutionViolation> {\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        if 
!is_owner_local_storage_path(&relative_path) {\n            continue;\n        }\n\n        let masked_source = mask_rust_source(&source);\n        for pattern in [\n            \"pub(crate) async fn execute_query_with_\",\n            \"pub(crate) async fn execute_ddl_batch_with_\",\n            \"pub(crate) async fn add_column_if_missing_with_\",\n            \"pub(crate) async fn begin_write_transaction\",\n            \"pub(crate) fn executor_from_transaction\",\n        ] {\n            if masked_source.contains(pattern) {\n                violations.insert(RawSqlExecutionViolation {\n                    file: relative_path.clone(),\n                    pattern,\n                });\n            }\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn current_shared_persistence_root_files() -> Vec<String> {\n    production_source_files()\n        .into_iter()\n        .filter_map(|(relative_path, _)| {\n            relative_path\n                .starts_with(\"persistence/\")\n                .then_some(relative_path)\n        })\n        .collect()\n}\n\nfn is_sql2_runtime_owner_path(relative_path: &str) -> bool {\n    relative_path == \"sql2/runtime.rs\"\n}\n\nfn current_sql2_datafusion_physical_execution_owner_violations() -> Vec<SqlRuntimeOwnershipViolation>\n{\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        if !relative_path.starts_with(\"sql2/\") || is_sql2_runtime_owner_path(&relative_path) {\n            continue;\n        }\n\n        let stripped = strip_test_code(&source);\n        let masked_source = mask_rust_source(&stripped);\n        for pattern in [\n            \".collect().await\",\n            \".create_physical_plan().await\",\n            \".execute(partition,\",\n            \"execute_input_stream(\",\n        ] {\n            if masked_source.contains(pattern) {\n                violations.insert(SqlRuntimeOwnershipViolation {\n                    file: 
relative_path.clone(),\n                    pattern,\n                });\n            }\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn current_sql2_data_sink_exec_violations() -> Vec<SqlRuntimeOwnershipViolation> {\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        if !relative_path.starts_with(\"sql2/\") {\n            continue;\n        }\n\n        let stripped = strip_test_code(&source);\n        let masked_source = mask_rust_source(&stripped);\n        for pattern in [\"DataSinkExec\", \"DataSinkExec::new(\"] {\n            if masked_source.contains(pattern) {\n                violations.insert(SqlRuntimeOwnershipViolation {\n                    file: relative_path.clone(),\n                    pattern,\n                });\n            }\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn current_schema_catalog_dependency_violations() -> Vec<ImportPathViolation> {\n    let module_set = top_level_module_set();\n    let mut violations = BTreeSet::new();\n\n    for (relative_path, source) in production_source_files() {\n        if !relative_path.starts_with(\"schema/\") {\n            continue;\n        }\n\n        let current_module_path = module_path_for_file(&relative_path);\n        for imported_path in\n            collect_module_paths_from_source(&source, &current_module_path, &module_set)\n        {\n            if imported_path\n                .first()\n                .is_some_and(|root| root == \"schema_catalog\")\n            {\n                violations.insert(ImportPathViolation {\n                    importer_file: relative_path.clone(),\n                    imported_path: imported_path.join(\"::\"),\n                });\n            }\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\nfn current_schema_invalid_param_violations() -> Vec<RawSqlExecutionViolation> {\n    let mut violations = BTreeSet::new();\n\n    for 
(relative_path, source) in production_source_files() {\n        if !relative_path.starts_with(\"schema/\") {\n            continue;\n        }\n\n        let masked_source = mask_rust_source(&source);\n        for pattern in [\"CODE_INVALID_PARAM\", \"LIX_INVALID_PARAM\"] {\n            if masked_source.contains(pattern) {\n                violations.insert(RawSqlExecutionViolation {\n                    file: relative_path.clone(),\n                    pattern,\n                });\n            }\n        }\n    }\n\n    violations.into_iter().collect()\n}\n\n#[test]\nfn sealed_owner_violations_are_empty() {\n    let violations = current_sealed_owner_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"sealed-owner violations are present.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_sealed_owner_violations(&violations),\n    );\n}\n\n#[test]\nfn forbidden_dependency_rules_have_no_current_violations() {\n    let graph = analyze_engine_dependency_graph();\n    let graph_modules = module_set(&graph);\n    for module in TARGET_CORE_MODULES {\n        assert!(\n            graph_modules.contains(*module),\n            \"target core graph should include `{module}`\",\n        );\n    }\n\n    let forbidden_lookup = forbidden_dependency_lookup();\n    let violations = actual_architecture_violations(&graph, &forbidden_lookup);\n\n    assert!(\n        violations.is_empty(),\n        \"forbidden owner-root dependencies are present.\\n\\nTarget core graph:\\n{}\\nCurrent violations:\\n{}\",\n        render_target_core_graph(&target_core_graph(&graph)),\n        render_forbidden_dependency_violations(&violations, &forbidden_lookup),\n    );\n}\n\n#[test]\nfn target_core_owner_graph_has_no_cycles() {\n    let graph = analyze_engine_dependency_graph();\n    let core_graph = target_core_graph(&graph);\n    let cycles = owner_root_cycles(&core_graph);\n\n    assert!(\n        cycles.is_empty(),\n        \"target core owner-root graph has 
cycles.\\n\\nTarget core graph:\\n{}\\nCycles:\\n{}\",\n        render_target_core_graph(&core_graph),\n        render_owner_root_cycles(&cycles),\n    );\n}\n\n#[test]\nfn schema_domain_does_not_depend_on_schema_catalog() {\n    let violations = current_schema_catalog_dependency_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"`schema/*` owns schema-document semantics and must not depend on `schema_catalog/*`; transaction/public boundary adapters should compose the two domains.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_import_path_violations(&violations),\n    );\n}\n\n#[test]\nfn schema_domain_does_not_emit_public_invalid_param() {\n    let violations = current_schema_invalid_param_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"`schema/*` must return schema-domain errors only. Public `INVALID_PARAM` classification belongs at transaction/API/SQL public boundaries.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_raw_sql_execution_violations(&violations),\n    );\n}\n\n// `services` intentionally does not get a giant root facade. Outside code may\n// depend on `services::child::*`, but not on deeper implementation paths.\n#[test]\nfn services_imports_are_limited_to_direct_child_namespaces() {\n    if !top_level_module_set().contains(\"services\") {\n        return;\n    }\n    let violations = current_services_direct_child_import_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"outside `services/*`, imports into `services` must target a direct child capability namespace only.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_import_path_violations(&violations),\n    );\n}\n\n// Leaf `services/*` modules are standalone capabilities. 
They may depend on\n// neutral foundations like `common`, but not on engine\n// composition, semantic owners, or other top-level roots.\n#[test]\nfn services_has_no_external_root_dependencies() {\n    if !top_level_module_set().contains(\"services\") {\n        return;\n    }\n    let violations = current_services_external_dependency_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"`services/*` leaf modules may only import neutral foundation roots (`common`) outside `services`.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_import_path_violations(&violations),\n    );\n}\n\n// Direct child `services/*` modules are also standalone relative to each\n// other. If two services need shared pieces, that code should move to neutral\n// ground or the capabilities should be merged.\n#[test]\nfn services_direct_children_do_not_import_sibling_services() {\n    if !top_level_module_set().contains(\"services\") {\n        return;\n    }\n    let violations = current_services_sibling_dependency_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"direct child `services/*` modules must not import sibling service namespaces.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_import_path_violations(&violations),\n    );\n}\n\n// Engine-owned persistence modules should execute through owner-local adapters\n// rather than calling raw backend SQL directly.\n#[test]\nfn engine_owned_persistence_modules_do_not_execute_raw_sql_directly() {\n    let violations = current_engine_owned_persistence_raw_sql_execution_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"engine-owned persistence modules must not execute raw SQL directly outside owner-local adapter files.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_raw_sql_execution_violations(&violations),\n    );\n}\n\n// Engine-owned persistence modules should depend on owner-local store\n// interfaces rather than raw backend handle types.\n#[test]\nfn 
engine_owned_persistence_modules_do_not_import_raw_backend_types() {\n    let violations = current_engine_owned_persistence_raw_backend_type_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"engine-owned persistence modules must not depend on raw backend types outside owner-local adapter files.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_raw_backend_type_violations(&violations),\n    );\n}\n\n// Owner persistence code should speak in owner-local store terms, not import\n// lower `backend/*` helpers directly outside SQL adapter files.\n#[test]\nfn owner_persistence_modules_do_not_depend_on_backend_root_outside_sql_adapters() {\n    let violations = current_owner_persistence_backend_root_dependency_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"owner persistence modules must not depend on `backend/*` outside owner-local SQL adapter files.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_import_path_violations(&violations),\n    );\n}\n\n#[test]\nfn backend_imports_are_limited_to_storage_boundary() {\n    let violations = current_backend_import_outside_storage_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"`backend/*` may only be imported by `storage/*`; other engine modules must depend on storage-facing APIs.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_import_path_violations(&violations),\n    );\n}\n\n// SQL-backed store adapters are owner internals. 
Other roots may import the\n// owner-facing store interfaces, but not the `store_sql` implementations.\n#[test]\nfn store_sql_modules_are_not_imported_outside_their_owning_root() {\n    let violations = current_store_sql_import_boundary_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"`store_sql` modules must not be imported outside their owning root.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_import_path_violations(&violations),\n    );\n}\n\n// Owner persistence modules may perform work inside a caller-owned transaction,\n// but must not decide when transactions begin or end. Transaction lifecycle\n// policy belongs to session/runtime, while owner-local SQL adapters may still\n// contain low-level backend transaction calls during the MVP.\n#[test]\nfn owner_persistence_modules_do_not_own_transaction_lifecycle() {\n    let violations = current_owner_persistence_transaction_lifecycle_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"owner persistence modules must not begin, commit, or roll back transactions outside owner-local SQL adapter files.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_transaction_lifecycle_violations(&violations),\n    );\n}\n\n#[test]\nfn raw_backend_execute_is_only_used_in_owner_storage_or_public_sql_layers() {\n    let violations = current_raw_execute_outside_owner_storage_or_public_sql_boundary_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"raw backend / transaction SQL execution may only appear in owner-local `storage.rs`, `sql/*`, `execution/*`, or backend glue.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_raw_sql_execution_violations(&violations),\n    );\n}\n\n#[test]\nfn internal_metadata_crud_is_centralized_in_owner_storage() {\n    let violations = current_scattered_internal_metadata_crud_outside_owner_storage_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"internal metadata CRUD for workspace 
selectors, commit idempotency, and undo/redo log should live in owner-local `storage.rs` seams, not scattered through `api/*`, `init/*`, `session/*`, or `transaction/*`.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_raw_sql_execution_violations(&violations),\n    );\n}\n\n#[test]\nfn owner_storage_modules_do_not_expose_public_sql_shaped_helpers() {\n    let violations = current_owner_storage_public_sql_shaped_api_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"owner-local `storage.rs` seams should expose operation-shaped APIs rather than public SQL-shaped helpers.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_raw_sql_execution_violations(&violations),\n    );\n}\n\n#[test]\nfn sql2_physical_execution_is_owned_by_runtime_module() {\n    let violations = current_sql2_datafusion_physical_execution_owner_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"DataFusion physical execution must be centralized in `sql2/runtime.rs`; read/write SQL paths should not collect DataFrames or execute physical plans through side doors.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_sql_runtime_ownership_violations(&violations),\n    );\n}\n\n#[test]\nfn sql2_write_providers_do_not_delegate_dml_execution_to_datafusion_sinks() {\n    let violations = current_sql2_data_sink_exec_violations();\n\n    assert!(\n        violations.is_empty(),\n        \"SQL2 write providers must not use DataFusion `DataSinkExec`; DML source batches should be collected through the SQL runtime and staged by transaction-owned write code.\\n\\nCurrent violations:\\n{}\",\n        render_grouped_sql_runtime_ownership_violations(&violations),\n    );\n}\n\n#[test]\nfn sql2_public_boundary_does_not_reintroduce_stringly_validation() {\n    let mut violations = Vec::new();\n\n    for (relative_path, source) in production_source_files() {\n        if !relative_path.starts_with(\"sql2/\") {\n            continue;\n        }\n        let 
stripped = strip_test_code(&source);\n        let masked_source = mask_rust_source(&stripped);\n\n        for pattern in [\n            \"PublicPredicateSpec {\",\n            \"public_input::expect_text_column(\\\"\",\n            \"public_input::expect_bool_column(\\\"\",\n            \"public_input::expect_json_object_metadata(\\\"\",\n            \"public_input::expect_json_text(\\\"\",\n            \"public_input::expect_file_path_public(\\\"\",\n            \"public_input::expect_directory_path_public(\\\"\",\n            \"public_input::expect_entity_identity_public(\\\"\",\n            \"public_input::expect_non_blob_public_id(\\\"\",\n            \"require_write(\\\"\",\n            \"routed_surface(\",\n            \"operation: &str\",\n            \"table: &str\",\n        ] {\n            if masked_source.contains(pattern) {\n                violations.push(format!(\"{relative_path}: {pattern}\"));\n            }\n        }\n    }\n\n    assert!(\n        violations.is_empty(),\n        \"SQL2 public boundary validation must flow through typed PublicBoundaryContext/PublicSurface helpers, not raw operation/table strings.\\n\\nCurrent violations:\\n{}\",\n        violations.join(\"\\n\"),\n    );\n}\n\n#[test]\nfn sql2_read_session_does_not_register_write_surfaces() {\n    let relative = \"sql2/session.rs\";\n    let source = read_engine_source(relative);\n    let read_session = source_between(\n        relative,\n        &source,\n        \"pub(crate) async fn build_read_session\",\n        \"pub(crate) async fn build_write_session\",\n    );\n\n    assert_source_contains_all(\n        relative,\n        read_session,\n        &[\n            \"register_lix_state_providers\",\n            \"register_lix_version_provider\",\n            \"register_lix_change_provider\",\n            \"register_history_providers\",\n            \"register_lix_file_history_provider\",\n            \"register_lix_directory_history_provider\",\n            
\"register_lix_directory_providers\",\n            \"register_lix_file_providers\",\n            \"register_entity_providers\",\n        ],\n    );\n    assert_source_contains_none(\n        relative,\n        read_session,\n        &[\n            \"SqlWriteContext::new\",\n            \"register_lix_state_write_providers\",\n            \"register_lix_version_write_provider\",\n            \"register_lix_directory_write_providers\",\n            \"register_lix_file_write_providers\",\n            \"register_entity_write_providers\",\n        ],\n    );\n}\n\n#[test]\nfn sql2_write_session_does_not_register_history_or_committed_read_surfaces() {\n    let relative = \"sql2/session.rs\";\n    let source = read_engine_source(relative);\n    let write_session = source_between(\n        relative,\n        &source,\n        \"pub(crate) async fn build_write_session\",\n        \"fn new_sql_session_context\",\n    );\n\n    assert_source_contains_all(\n        relative,\n        write_session,\n        &[\n            \"SqlWriteContext::new\",\n            \"register_lix_state_write_providers\",\n            \"register_lix_version_write_provider\",\n            \"register_lix_directory_write_providers\",\n            \"register_lix_file_write_providers\",\n            \"register_entity_write_providers\",\n        ],\n    );\n    assert_source_contains_none(\n        relative,\n        write_session,\n        &[\n            \"ctx.commit_store_query_source\",\n            \"ctx.commit_graph\",\n            \"ctx.live_state()\",\n            \"ctx.version_ref()\",\n            \"register_lix_state_providers\",\n            \"register_lix_version_provider\",\n            \"register_lix_change_provider\",\n            \"register_history_providers\",\n            \"register_lix_file_history_provider\",\n            \"register_lix_directory_history_provider\",\n            \"register_lix_directory_providers\",\n            \"register_lix_file_providers\",\n            
\"register_entity_providers\",\n        ],\n    );\n}\n\n#[test]\nfn sql2_session_context_keeps_wasm_safe_physical_plan_defaults() {\n    let relative = \"sql2/session.rs\";\n    let source = read_engine_source(relative);\n    let session_context = source_between(relative, &source, \"fn new_sql_session_context\", \"\\n}\");\n\n    assert_source_contains_all(\n        relative,\n        session_context,\n        &[\n            \".with_target_partitions(1)\",\n            \"\\\"datafusion.optimizer.repartition_aggregations\\\", false\",\n            \"\\\"datafusion.optimizer.repartition_joins\\\", false\",\n            \"\\\"datafusion.optimizer.repartition_sorts\\\", false\",\n            \"\\\"datafusion.optimizer.repartition_windows\\\", false\",\n            \"\\\"datafusion.optimizer.repartition_file_scans\\\", false\",\n            \"\\\"datafusion.optimizer.enable_round_robin_repartition\\\", false\",\n        ],\n    );\n}\n\n#[test]\nfn shared_persistence_root_is_empty_or_absent() {\n    let remaining_files = current_shared_persistence_root_files();\n\n    assert!(\n        remaining_files.is_empty(),\n        \"the shared `persistence/*` root is transitional and should become empty or disappear as owner-local `storage.rs` seams take over.\\n\\nCurrent files:\\n{}\",\n        remaining_files\n            .into_iter()\n            .map(|file| format!(\"- {file}\"))\n            .collect::<Vec<_>>()\n            .join(\"\\n\"),\n    );\n}\n"
  },
  {
    "path": "packages/engine/tests/commit_graph.rs",
    "content": "#[macro_use]\n#[path = \"support/mod.rs\"]\nmod support;\nuse lix_engine::Value;\n\nsimulation_test!(\n    version_ref_advances_after_tracked_commit,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n        let initial_head = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"version head should load\")\n            .expect(\"version head should exist\");\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('version-ref-advance', 'one')\",\n                &[],\n            )\n            .await\n            .expect(\"tracked write should succeed\");\n        let advanced_head = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"version head should load\")\n            .expect(\"version head should exist\");\n\n        assert_ne!(\n            advanced_head, initial_head,\n            \"tracked commit should advance the touched version ref\"\n        );\n    }\n);\n\nsimulation_test!(\n    tracked_write_creates_one_commit_without_advancing_global_ref,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n        let global_session = sim.wrap_session(\n            engine\n                .open_session(\"global\")\n                .await\n                .expect(\"global session should open\"),\n            &engine,\n        );\n        let global_head_before = engine\n            
.load_version_head_commit_id(\"global\")\n            .await\n            .expect(\"global head should load\")\n            .expect(\"global head should exist\");\n        let main_head_before = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\")\n            .expect(\"main head should exist\");\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('one-commit-model', 'ok')\",\n                &[],\n            )\n            .await\n            .expect(\"tracked write should succeed\");\n\n        let global_head_after = engine\n            .load_version_head_commit_id(\"global\")\n            .await\n            .expect(\"global head should load\")\n            .expect(\"global head should exist\");\n        let main_head_after = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"main head should load\")\n            .expect(\"main head should exist\");\n\n        assert_eq!(\n            global_head_after, global_head_before,\n            \"non-global writes must not advance the global version ref\"\n        );\n        assert_ne!(\n            main_head_after, main_head_before,\n            \"tracked write should advance exactly the touched version ref\"\n        );\n\n        assert_eq!(\n            commit_ids(&global_session, &main_head_after).await,\n            vec![main_head_after.clone()],\n            \"the touched-version commit should still be globally visible through lix_state\"\n        );\n    }\n);\n\nsimulation_test!(\n    second_commit_parents_previous_version_head,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n     
   );\n        let global_session = sim.wrap_session(\n            engine\n                .open_session(\"global\")\n                .await\n                .expect(\"global session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('commit-parent', 'one')\",\n                &[],\n            )\n            .await\n            .expect(\"first tracked write should succeed\");\n        let first_head = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"version head should load\")\n            .expect(\"version head should exist\");\n\n        session\n            .execute(\n                \"UPDATE lix_key_value SET value = 'two' WHERE key = 'commit-parent'\",\n                &[],\n            )\n            .await\n            .expect(\"second tracked write should succeed\");\n        let second_head = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"version head should load\")\n            .expect(\"version head should exist\");\n\n        assert_ne!(second_head, first_head);\n\n        assert_eq!(\n            commit_parent_ids(&global_session, &second_head).await,\n            vec![first_head],\n            \"second commit should parent to the previous version head\"\n        );\n    }\n);\n\nasync fn commit_parent_ids(\n    session: &crate::support::simulation_test::engine::SimSession,\n    commit_id: &str,\n) -> Vec<String> {\n    let result = session\n        .execute(\n            &format!(\n                \"SELECT parent_id \\\n                 FROM lix_commit_edge \\\n                 WHERE child_id = '{commit_id}' \\\n                 ORDER BY parent_id\"\n            ),\n            &[],\n        )\n        .await\n        .expect(\"commit edge rows should read\");\n    result\n        .rows()\n        .iter()\n        
.map(|row| match &row.values()[0] {\n            Value::Text(parent_id) => parent_id.clone(),\n            value => panic!(\"expected parent_id string, got {value:?}\"),\n        })\n        .collect()\n}\n\nasync fn commit_ids(\n    session: &crate::support::simulation_test::engine::SimSession,\n    commit_id: &str,\n) -> Vec<String> {\n    let result = session\n        .execute(\n            &format!(\"SELECT id FROM lix_commit WHERE id = '{commit_id}'\"),\n            &[],\n        )\n        .await\n        .expect(\"commit rows should read\");\n    result\n        .rows()\n        .iter()\n        .map(|row| match &row.values()[0] {\n            Value::Text(commit_id) => commit_id.clone(),\n            value => panic!(\"expected commit id string, got {value:?}\"),\n        })\n        .collect()\n}\n"
  },
  {
    "path": "packages/engine/tests/engine.rs",
    "content": "#[path = \"support/mod.rs\"]\nmod support;\n\nuse lix_engine::ExecuteResult;\nuse lix_engine::{CreateVersionOptions, Engine, MergeVersionOptions, SwitchVersionOptions, Value};\nuse serde_json::json;\n\nsimulation_test!(engine_new_rejects_uninitialized_backend, |sim| async move {\n    match Engine::new(sim.uninitialized_backend()).await {\n        Ok(_) => panic!(\"uninitialized backend should not create an engine\"),\n        Err(error) => assert_eq!(error.code, \"LIX_ERROR_NOT_INITIALIZED\"),\n    }\n});\n\nsimulation_test!(\n    engine_initialize_seeds_repository_bootstrap_state,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_session(\"global\")\n                .await\n                .expect(\"initialized backend should open global session\"),\n            &engine,\n        );\n        let main_session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"initialized backend should open main session\"),\n            &engine,\n        );\n\n        let version_result = session\n            .execute(\n                \"SELECT entity_id, snapshot_content \\\n             FROM lix_state \\\n             WHERE schema_key = 'lix_version_descriptor' \\\n             ORDER BY entity_id\",\n                &[],\n            )\n            .await\n            .expect(\"version descriptors should be readable\");\n        let version_rows = version_result;\n        assert_eq!(version_rows.len(), 2);\n        let version_values = version_rows\n            .rows()\n            .iter()\n            .map(|row| row.values().to_vec())\n            .collect::<Vec<_>>();\n        assert!(version_values.contains(&vec![\n            Value::Json(json!([\"global\"])),\n            Value::Json(json!({\"hidden\": true, \"id\": \"global\", \"name\": \"global\"})),\n        ]));\n        
assert!(version_values.contains(&vec![\n            Value::Json(json!([sim.main_version_id()])),\n            Value::Json(json!({\"hidden\": false, \"id\": sim.main_version_id(), \"name\": \"main\"})),\n        ]));\n\n        let lix_id_result = session\n            .execute(\"SELECT value FROM lix_key_value WHERE key = 'lix_id'\", &[])\n            .await\n            .expect(\"lix_id key value should be readable\");\n        assert_single_json(lix_id_result, &format!(\"\\\"{}\\\"\", sim.lix_id()));\n\n        let refs_result = session\n            .execute(\n                \"SELECT entity_id, snapshot_content, untracked \\\n             FROM lix_state \\\n             WHERE schema_key = 'lix_version_ref' \\\n             ORDER BY entity_id\",\n                &[],\n            )\n            .await\n            .expect(\"version refs should be readable\");\n        let ref_rows = refs_result;\n        assert_eq!(ref_rows.len(), 2);\n        let ref_values = ref_rows\n            .rows()\n            .iter()\n            .map(|row| row.values().to_vec())\n            .collect::<Vec<_>>();\n        assert!(ref_values.contains(&vec![\n            Value::Json(json!([\"global\"])),\n            Value::Json(json!({\"commit_id\": sim.initial_commit_id(), \"id\": \"global\"})),\n            Value::Boolean(true),\n        ]));\n        assert!(ref_values.contains(&vec![\n            Value::Json(json!([sim.main_version_id()])),\n            Value::Json(json!({\"commit_id\": sim.initial_commit_id(), \"id\": sim.main_version_id()})),\n            Value::Boolean(true),\n        ]));\n\n        drop(main_session);\n        drop(session);\n        drop(engine);\n    }\n);\n\nsimulation_test!(\n    session_execute_inserts_key_value_then_reads_it_back,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                
.expect(\"backend should open a session\"),\n            &engine,\n        );\n\n        let uuid_result = session\n            .execute(\"SELECT lix_uuid_v7()\", &[])\n            .await\n            .expect(\"session should expose lix_uuid_v7 UDF\");\n        let uuid_rows = uuid_result;\n        assert_eq!(uuid_rows.len(), 1);\n        let Value::Text(uuid) = &uuid_rows.rows()[0].values()[0] else {\n            panic!(\"lix_uuid_v7 should return text\");\n        };\n        assert!(\n            !uuid.is_empty(),\n            \"lix_uuid_v7 should return a non-empty UUID\"\n        );\n\n        let insert_result = session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('sql2-key', 'sql2-value')\",\n                &[],\n            )\n            .await\n            .expect(\"session insert should succeed\");\n        assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n        let result = session\n            .execute(\n                \"SELECT key, value FROM lix_key_value WHERE key = 'sql2-key'\",\n                &[],\n            )\n            .await\n            .expect(\"session read should succeed\");\n        let row_set = result;\n        assert_eq!(row_set.len(), 1);\n        assert_eq!(\n            row_set.rows()[0].values(),\n            &[\n                Value::Text(\"sql2-key\".to_string()),\n                Value::Json(json!(\"sql2-value\")),\n            ]\n        );\n    }\n);\n\nsimulation_test!(\n    failed_write_validation_does_not_poison_session_transaction,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = engine\n            .open_workspace_session()\n            .await\n            .expect(\"backend should open a session\");\n\n        register_poison_task_schema(&session).await;\n\n        let error = session\n            .execute(\n                \"INSERT INTO poison_task (id, title) VALUES ('bad-task', 'missing meta')\",\n       
         &[],\n            )\n            .await\n            .expect_err(\"schema validation should reject missing required field\");\n        assert_eq!(error.code, \"LIX_ERROR_SCHEMA_VALIDATION\");\n\n        assert_single_integer(\n            session\n                .execute(\"SELECT 1 AS ok\", &[])\n                .await\n                .expect(\"read after failed write should succeed\"),\n            1,\n        );\n\n        let insert_result = session\n            .execute(\n                \"INSERT INTO poison_task (id, title, meta) \\\n                 VALUES ('good-task', 'valid', lix_json('{\\\"priority\\\":\\\"high\\\"}'))\",\n                &[],\n            )\n            .await\n            .expect(\"valid write after failed write should succeed\");\n        assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n    }\n);\n\nsimulation_test!(\n    session_close_is_idempotent_and_rejects_later_operations,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = engine\n            .open_workspace_session()\n            .await\n            .expect(\"backend should open a session\");\n\n        session.close().await.expect(\"first close should succeed\");\n        session.close().await.expect(\"second close should succeed\");\n        assert!(session.is_closed());\n\n        assert_closed(\n            session\n                .execute(\"SELECT value FROM lix_key_value WHERE key = 'lix_id'\", &[])\n                .await\n                .expect_err(\"execute after close should fail\"),\n        );\n        assert_closed(\n            session\n                .active_version_id()\n                .await\n                .expect_err(\"active_version_id after close should fail\"),\n        );\n        assert_closed(\n            session\n                .create_version(CreateVersionOptions {\n                    id: Some(\"closed-version\".to_string()),\n                    name: 
\"Closed\".to_string(),\n                    from_commit_id: None,\n                })\n                .await\n                .expect_err(\"create_version after close should fail\"),\n        );\n        match session\n            .switch_version(SwitchVersionOptions {\n                version_id: sim.main_version_id().to_string(),\n            })\n            .await\n        {\n            Ok(_) => panic!(\"switch_version after close should fail\"),\n            Err(error) => assert_closed(error),\n        }\n        assert_closed(\n            session\n                .merge_version(MergeVersionOptions {\n                    source_version_id: sim.main_version_id().to_string(),\n                })\n                .await\n                .expect_err(\"merge_version after close should fail\"),\n        );\n    }\n);\n\nasync fn register_poison_task_schema(session: &lix_engine::SessionContext) {\n    let schema = json!({\n        \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n        \"x-lix-key\": \"poison_task\",\n        \"x-lix-primary-key\": [\"/id\"],\n        \"type\": \"object\",\n        \"required\": [\"id\", \"title\", \"meta\"],\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"title\": { \"type\": \"string\" },\n            \"meta\": { \"type\": \"object\" }\n        },\n        \"additionalProperties\": false\n    });\n\n    session\n        .execute(\n            \"INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))\",\n            &[Value::Text(schema.to_string())],\n        )\n        .await\n        .expect(\"schema registration should succeed\");\n}\n\nsimulation_test!(\n    session_close_state_is_shared_with_switched_session,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = engine\n            .open_workspace_session()\n            .await\n            .expect(\"backend should open a session\");\n        let 
(switched_session, _) = session\n            .switch_version(SwitchVersionOptions {\n                version_id: sim.main_version_id().to_string(),\n            })\n            .await\n            .expect(\"switch_version should succeed before close\");\n\n        session.close().await.expect(\"close should succeed\");\n\n        assert_closed(\n            switched_session\n                .active_version_id()\n                .await\n                .expect_err(\"derived session should observe closed state\"),\n        );\n    }\n);\n\nsimulation_test!(\n    session_execute_persists_deterministic_function_sequence_across_sessions,\n    options = support::simulation_test::engine::SimulationOptions {\n        deterministic: false,\n    },\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"backend should open first session\"),\n            &engine,\n        );\n\n        let mode_result = session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value, lixcol_global, lixcol_untracked) \\\n                 VALUES ('lix_deterministic_mode', \\\n                 lix_json('{\\\"enabled\\\":true}'), true, true)\",\n                &[],\n            )\n            .await\n            .expect(\"deterministic mode insert should succeed\");\n        assert_eq!(mode_result, ExecuteResult::from_rows_affected(1));\n\n        assert_single_text(\n            session\n                .execute(\"SELECT lix_uuid_v7()\", &[])\n                .await\n                .expect(\"first deterministic uuid should succeed\"),\n            \"01920000-0000-7000-8000-000000000000\",\n        );\n        assert_single_text(\n            session\n                .execute(\"SELECT lix_uuid_v7()\", &[])\n                .await\n                .expect(\"second deterministic uuid should succeed\"),\n   
         \"01920000-0000-7000-8000-000000000001\",\n        );\n\n        let second_session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"backend should open second session\"),\n            &engine,\n        );\n        assert_single_text(\n            second_session\n                .execute(\"SELECT lix_uuid_v7()\", &[])\n                .await\n                .expect(\"third deterministic uuid should succeed\"),\n            \"01920000-0000-7000-8000-000000000002\",\n        );\n        let write_result = second_session\n            .execute(\n                \"INSERT INTO lix_state (\\\n                 entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n                 ) VALUES (\\\n                 lix_json('[\\\"det-write\\\"]'), 'lix_key_value', NULL, lix_json('{\\\"key\\\":\\\"det-write\\\",\\\"value\\\":\\\"ok\\\"}'), false, false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"deterministic write should succeed\");\n        assert_eq!(write_result, ExecuteResult::from_rows_affected(1));\n        assert_single_text(\n            second_session\n                .execute(\"SELECT lix_uuid_v7()\", &[])\n                .await\n                .expect(\"uuid after deterministic write should continue\"),\n            // The tracked write consumes deterministic values for row\n            // metadata and commit metadata.\n            \"01920000-0000-7000-8000-000000000008\",\n        );\n    }\n);\n\nsimulation_test!(\n    session_execute_does_not_persist_deterministic_sequence_after_failed_statement,\n    options = support::simulation_test::engine::SimulationOptions {\n        deterministic: false,\n    },\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"backend 
should open a session\"),\n            &engine,\n        );\n\n        let mode_result = session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value, lixcol_global, lixcol_untracked) \\\n                 VALUES ('lix_deterministic_mode', \\\n                 lix_json('{\\\"enabled\\\":true}'), true, true)\",\n                &[],\n            )\n            .await\n            .expect(\"deterministic mode insert should succeed\");\n        assert_eq!(mode_result, ExecuteResult::from_rows_affected(1));\n\n        let failed_read = session\n            .execute(\"SELECT lix_uuid_v7() FROM missing_engine_table\", &[])\n            .await;\n        assert!(\n            failed_read.is_err(),\n            \"missing table query should fail before persisting deterministic sequence\"\n        );\n        assert_single_text(\n            session\n                .execute(\"SELECT lix_uuid_v7()\", &[])\n                .await\n                .expect(\"first deterministic uuid should still start at zero\"),\n            \"01920000-0000-7000-8000-000000000000\",\n        );\n\n        let failed_write = session\n            .execute(\n                \"INSERT INTO missing_engine_table VALUES (lix_uuid_v7())\",\n                &[],\n            )\n            .await;\n        assert!(\n            failed_write.is_err(),\n            \"failed write should not persist deterministic sequence\"\n        );\n        assert_single_text(\n            session\n                .execute(\"SELECT lix_uuid_v7()\", &[])\n                .await\n                .expect(\"second deterministic uuid should continue after last success\"),\n            \"01920000-0000-7000-8000-000000000001\",\n        );\n    }\n);\n\nfn assert_single_text(result: ExecuteResult, expected: &str) {\n    let row_set = result;\n    assert_eq!(row_set.len(), 1);\n    assert_eq!(\n        row_set.rows()[0].values(),\n        &[Value::Text(expected.to_string())]\n    );\n}\n\nfn 
assert_single_integer(result: ExecuteResult, expected: i64) {\n    let row_set = result;\n    assert_eq!(row_set.len(), 1);\n    assert_eq!(row_set.rows()[0].values(), &[Value::Integer(expected)]);\n}\n\nfn assert_single_json(result: ExecuteResult, expected: &str) {\n    let row_set = result;\n    assert_eq!(row_set.len(), 1);\n    let expected_json = serde_json::from_str::<serde_json::Value>(expected)\n        .expect(\"expected JSON value should parse\");\n    assert_eq!(row_set.rows()[0].values(), &[Value::Json(expected_json)]);\n}\n\nfn assert_closed(error: lix_engine::LixError) {\n    assert_eq!(error.code, lix_engine::LixError::CODE_CLOSED);\n    assert_eq!(error.message, \"Lix handle is closed\");\n    assert_eq!(\n        error.hint.as_deref(),\n        Some(\"Open a new Lix handle before calling this method.\")\n    );\n}\n"
  },
  {
    "path": "packages/engine/tests/json_pointer_crud_storage.rs",
    "content": "#![cfg(feature = \"storage-benches\")]\n\nuse std::fs;\nuse std::path::Path;\n\nuse lix_engine::{\n    CreateVersionOptions, Engine, MergeVersionOptions, MergeVersionOutcome, SessionContext,\n    SwitchVersionOptions,\n};\nuse rusqlite::{params, Connection};\nuse serde_json::Value as JsonValue;\nuse tempfile::TempDir;\n\n#[path = \"../benches/storage/rocksdb_backend.rs\"]\nmod rocksdb_backend;\n#[path = \"../benches/storage/sqlite_backend.rs\"]\nmod sqlite_backend;\n\nuse rocksdb_backend::RocksDbBenchBackend;\nuse sqlite_backend::SqliteBenchBackend;\n\nconst JSON_POINTER_SCHEMA_JSON: &str =\n    include_str!(\"../../plugin-json-v2/schema/json_pointer.json\");\nconst PNPM_LOCK_JSON: &str = include_str!(\"../benches/fixtures/pnpm-lock.fixture.json\");\nconst ROW_COUNTS: [usize; 2] = [100, 1_000];\nconst CHUNK_SIZE: usize = 500;\nconst CHANGE_ROW_DENOMINATOR: usize = 10;\n\n#[derive(Clone)]\nstruct PointerRow {\n    path: String,\n    value_json: String,\n}\n\n#[tokio::test]\n#[ignore = \"prints JSON pointer CRUD storage-size reference rows\"]\nasync fn json_pointer_crud_storage_accounting() {\n    let rows = fixture_rows();\n    println!(\"| backend | rows | bytes on disk | bytes/row |\");\n    println!(\"| ------- | ---: | ------------: | --------: |\");\n    for row_count in ROW_COUNTS {\n        let rows = &rows[..row_count];\n        print_storage_row(\"raw SQLite\", row_count, raw_sqlite_storage_bytes(rows));\n        for row in lix_sqlite_storage_rows(rows).await {\n            print_storage_workflow_row(\"Lix SQLite\", row_count, &row);\n        }\n        for row in lix_rocksdb_storage_rows(rows).await {\n            print_storage_workflow_row(\"Lix RocksDB\", row_count, &row);\n        }\n    }\n}\n\nfn print_storage_row(backend: &str, rows: usize, bytes: u64) {\n    println!(\n        \"| {backend} | {rows} | {bytes} | {:.1} |\",\n        bytes as f64 / rows as f64\n    );\n}\n\nstruct WorkflowStorageRow {\n    workflow: &'static str,\n    
bytes: u64,\n}\n\nfn print_storage_workflow_row(backend: &str, rows: usize, row: &WorkflowStorageRow) {\n    println!(\n        \"| {backend} / {} | {rows} | {} | {:.1} |\",\n        row.workflow,\n        row.bytes,\n        row.bytes as f64 / rows as f64\n    );\n}\n\nfn raw_sqlite_storage_bytes(rows: &[PointerRow]) -> u64 {\n    let dir = TempDir::new().expect(\"create raw sqlite storage tempdir\");\n    let db_path = dir.path().join(\"json-pointer-crud.sqlite\");\n    let conn = Connection::open(&db_path).expect(\"open raw sqlite storage db\");\n    conn.execute_batch(\n        \"\n        PRAGMA journal_mode = WAL;\n        PRAGMA synchronous = NORMAL;\n        PRAGMA temp_store = MEMORY;\n        PRAGMA foreign_keys = ON;\n        CREATE TABLE json_pointer (\n            path TEXT NOT NULL PRIMARY KEY,\n            value TEXT NOT NULL\n        ) WITHOUT ROWID;\n        \",\n    )\n    .expect(\"configure raw sqlite storage db\");\n    {\n        let tx = conn\n            .unchecked_transaction()\n            .expect(\"begin raw sqlite storage transaction\");\n        {\n            let mut statement = tx\n                .prepare_cached(\"INSERT INTO json_pointer (path, value) VALUES (?1, ?2)\")\n                .expect(\"prepare raw sqlite storage insert\");\n            for row in rows {\n                statement\n                    .execute(params![row.path.as_str(), row.value_json.as_str()])\n                    .expect(\"insert raw sqlite storage row\");\n            }\n        }\n        tx.commit().expect(\"commit raw sqlite storage transaction\");\n    }\n    conn.execute_batch(\"PRAGMA wal_checkpoint(FULL)\")\n        .expect(\"checkpoint raw sqlite storage db\");\n    directory_size(dir.path())\n}\n\nfn changed_row_count(rows: usize) -> usize {\n    (rows / CHANGE_ROW_DENOMINATOR).max(1)\n}\n\nasync fn lix_sqlite_storage_rows(rows: &[PointerRow]) -> Vec<WorkflowStorageRow> {\n    let backend = SqliteBenchBackend::tempfile().expect(\"create sqlite 
storage backend\");\n    let dir = backend\n        .path()\n        .and_then(Path::parent)\n        .expect(\"sqlite backend should expose tempfile parent\")\n        .to_path_buf();\n    let engine = initialize_engine(Box::new(backend.clone()), Box::new(backend)).await;\n    let session = prepare_session(&engine).await;\n    lix_workflow_storage_rows(&session, rows, &dir).await\n}\n\nasync fn lix_rocksdb_storage_rows(rows: &[PointerRow]) -> Vec<WorkflowStorageRow> {\n    let backend = RocksDbBenchBackend::new().expect(\"create rocksdb storage backend\");\n    let dir = backend.path().to_path_buf();\n    let engine = initialize_engine(Box::new(backend.clone()), Box::new(backend)).await;\n    let session = prepare_session(&engine).await;\n    lix_workflow_storage_rows(&session, rows, &dir).await\n}\n\nasync fn lix_workflow_storage_rows(\n    session: &SessionContext,\n    rows: &[PointerRow],\n    dir: &Path,\n) -> Vec<WorkflowStorageRow> {\n    let change_rows = changed_row_count(rows.len());\n    let main_id = session\n        .active_version_id()\n        .await\n        .expect(\"load active storage main version id\");\n    insert_lix_rows(session, rows).await;\n    let mut storage_rows = vec![WorkflowStorageRow {\n        workflow: \"inserted\",\n        bytes: directory_size(dir),\n    }];\n\n    create_lix_version(session, \"bench-draft\", \"bench draft\").await;\n    storage_rows.push(WorkflowStorageRow {\n        workflow: \"after create_version\",\n        bytes: directory_size(dir),\n    });\n\n    let (draft_session, _) = session\n        .switch_version(SwitchVersionOptions {\n            version_id: \"bench-draft\".to_string(),\n        })\n        .await\n        .expect(\"switch to storage draft version\");\n    update_lix_rows_by_pk(&draft_session, &rows[..change_rows], \"source\").await;\n    let (main_session, _) = draft_session\n        .switch_version(SwitchVersionOptions {\n            version_id: main_id.clone(),\n        })\n        
.await\n        .expect(\"switch back to storage main version\");\n    let receipt = main_session\n        .merge_version(MergeVersionOptions {\n            source_version_id: \"bench-draft\".to_string(),\n        })\n        .await\n        .expect(\"merge storage fast-forward draft\");\n    assert_eq!(receipt.outcome, MergeVersionOutcome::FastForward);\n    storage_rows.push(WorkflowStorageRow {\n        workflow: \"after fast-forward merge\",\n        bytes: directory_size(dir),\n    });\n\n    create_lix_version(&main_session, \"bench-divergent\", \"bench divergent\").await;\n    let (divergent_session, _) = main_session\n        .switch_version(SwitchVersionOptions {\n            version_id: \"bench-divergent\".to_string(),\n        })\n        .await\n        .expect(\"switch to divergent storage draft version\");\n    update_lix_rows_by_pk(&divergent_session, &rows[..change_rows], \"source-divergent\").await;\n    let (main_session, _) = divergent_session\n        .switch_version(SwitchVersionOptions {\n            version_id: main_id,\n        })\n        .await\n        .expect(\"switch back to storage main version after divergent edits\");\n    update_lix_rows_by_pk(\n        &main_session,\n        &rows[change_rows..change_rows * 2],\n        \"target-divergent\",\n    )\n    .await;\n    let receipt = main_session\n        .merge_version(MergeVersionOptions {\n            source_version_id: \"bench-divergent\".to_string(),\n        })\n        .await\n        .expect(\"merge storage divergent draft\");\n    assert_eq!(receipt.outcome, MergeVersionOutcome::MergeCommitted);\n    storage_rows.push(WorkflowStorageRow {\n        workflow: \"after divergent merge\",\n        bytes: directory_size(dir),\n    });\n\n    storage_rows\n}\n\nasync fn initialize_engine(\n    initializer_backend: Box<dyn lix_engine::Backend + Send + Sync>,\n    engine_backend: Box<dyn lix_engine::Backend + Send + Sync>,\n) -> Engine {\n    Engine::initialize(initializer_backend)\n  
      .await\n        .expect(\"initialize storage benchmark engine\");\n    Engine::new(engine_backend)\n        .await\n        .expect(\"open storage benchmark engine\")\n}\n\nasync fn prepare_session(engine: &Engine) -> SessionContext {\n    let session = engine\n        .open_workspace_session()\n        .await\n        .expect(\"open json pointer storage workspace\");\n    register_json_pointer_schema(&session).await;\n    session\n}\n\nasync fn register_json_pointer_schema(session: &SessionContext) {\n    let sql = format!(\n        \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked)\n         VALUES (lix_json('{}'), false, false)\",\n        sql_string(JSON_POINTER_SCHEMA_JSON)\n    );\n    let affected = session\n        .execute(&sql, &[])\n        .await\n        .expect(\"register json_pointer storage schema\")\n        .rows_affected();\n    assert_eq!(affected, 1);\n}\n\nasync fn insert_lix_rows(session: &SessionContext, rows: &[PointerRow]) {\n    for chunk in rows.chunks(CHUNK_SIZE) {\n        let mut sql = String::from(\"INSERT INTO json_pointer (path, value) VALUES \");\n        for (index, row) in chunk.iter().enumerate() {\n            if index > 0 {\n                sql.push(',');\n            }\n            sql.push_str(&format!(\n                \"('{}', lix_json('{}'))\",\n                sql_string(row.path.as_str()),\n                sql_string(row.value_json.as_str())\n            ));\n        }\n        let affected = session\n            .execute(&sql, &[])\n            .await\n            .expect(\"insert json_pointer storage rows\")\n            .rows_affected();\n        assert_eq!(affected as usize, chunk.len());\n    }\n}\n\nasync fn create_lix_version(session: &SessionContext, id: &str, name: &str) {\n    session\n        .create_version(CreateVersionOptions {\n            id: Some(id.to_string()),\n            name: name.to_string(),\n            from_commit_id: None,\n        })\n        .await\n        
.expect(\"create json_pointer storage version\");\n}\n\nasync fn update_lix_rows_by_pk(session: &SessionContext, rows: &[PointerRow], side: &str) {\n    for row in rows {\n        let value = serde_json::json!({\n            \"updated\": true,\n            \"side\": side,\n            \"path\": row.path,\n        })\n        .to_string();\n        let sql = format!(\n            \"UPDATE json_pointer SET value = lix_json('{}') WHERE path = '{}'\",\n            sql_string(value.as_str()),\n            sql_string(row.path.as_str())\n        );\n        let affected = session\n            .execute(&sql, &[])\n            .await\n            .expect(\"update json_pointer storage row by path\")\n            .rows_affected();\n        assert_eq!(affected, 1);\n    }\n}\n\nfn fixture_rows() -> Vec<PointerRow> {\n    let root: JsonValue = serde_json::from_str(PNPM_LOCK_JSON).expect(\"pnpm lock JSON fixture\");\n    let mut rows = Vec::new();\n    flatten_json(\"\", &root, &mut rows);\n    assert!(rows.len() >= 10_000);\n    rows\n}\n\nfn flatten_json(path: &str, value: &JsonValue, rows: &mut Vec<PointerRow>) {\n    rows.push(PointerRow {\n        path: path.to_string(),\n        value_json: value.to_string(),\n    });\n\n    match value {\n        JsonValue::Array(items) => {\n            for (index, item) in items.iter().enumerate() {\n                flatten_json(&format!(\"{path}/{}\", index), item, rows);\n            }\n        }\n        JsonValue::Object(map) => {\n            for (key, child) in map {\n                flatten_json(\n                    &format!(\"{path}/{}\", key.replace('~', \"~0\").replace('/', \"~1\")),\n                    child,\n                    rows,\n                );\n            }\n        }\n        JsonValue::Null | JsonValue::Bool(_) | JsonValue::Number(_) | JsonValue::String(_) => {}\n    }\n}\n\nfn directory_size(path: &Path) -> u64 {\n    let metadata = fs::metadata(path).expect(\"read storage path metadata\");\n    if 
metadata.is_file() {\n        return metadata.len();\n    }\n\n    let mut bytes = 0;\n    for entry in fs::read_dir(path).expect(\"read storage directory\") {\n        let entry = entry.expect(\"read storage directory entry\");\n        bytes += directory_size(&entry.path());\n    }\n    bytes\n}\n\nfn sql_string(value: &str) -> String {\n    value.replace('\\'', \"''\")\n}\n"
  },
  {
    "path": "packages/engine/tests/sql/entity_history.rs",
    "content": "use lix_engine::Value;\nuse serde_json::json;\n\nuse super::assert_rows_eq;\n\nsimulation_test!(\n    entity_history_reads_typed_rows_from_commit_graph,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_history_schema\\\",\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"count\\\":{\\\"type\\\":\\\"integer\\\"},\\\"active\\\":{\\\"type\\\":\\\"boolean\\\"},\\\"meta\\\":{\\\"type\\\":\\\"object\\\"}},\\\"required\\\":[\\\"id\\\",\\\"count\\\",\\\"active\\\",\\\"meta\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"registered schema insert should succeed\");\n\n        session\n            .execute(\n                \"INSERT INTO engine_history_schema \\\n                 (lixcol_entity_id, id, count, active, meta, lixcol_untracked) \\\n                 VALUES (lix_json('[\\\"history-entity\\\"]'), 'history-entity', 1, true, lix_json('{\\\"source\\\":\\\"insert\\\"}'), false)\",\n                &[],\n            )\n            .await\n            .expect(\"entity insert should succeed\");\n        let first_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"first head should load\")\n            .expect(\"first head should exist\");\n\n        session\n            .execute(\n                \"UPDATE engine_history_schema \\\n   
              SET count = 2, active = false, meta = lix_json('{\\\"source\\\":\\\"update\\\"}') \\\n                 WHERE lixcol_entity_id = lix_json('[\\\"history-entity\\\"]')\",\n                &[],\n            )\n            .await\n            .expect(\"entity update should succeed\");\n        let second_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"second head should load\")\n            .expect(\"second head should exist\");\n        assert_ne!(first_commit_id, second_commit_id);\n\n        let result = session\n            .execute(\n                &format!(\n                    \"SELECT id, count, active, meta, lixcol_entity_id, lixcol_observed_commit_id, lixcol_start_commit_id, lixcol_depth \\\n                     FROM engine_history_schema_history \\\n                     WHERE lixcol_start_commit_id = '{second_commit_id}' \\\n                       AND lixcol_entity_id = lix_json('[\\\"history-entity\\\"]') \\\n                     ORDER BY lixcol_depth\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"entity history read should succeed\");\n\n        assert_rows_eq(\n            result,\n            vec![\n                vec![\n                    Value::Text(\"history-entity\".to_string()),\n                    Value::Integer(2),\n                    Value::Boolean(false),\n                    Value::Json(json!({\"source\": \"update\"})),\n                    Value::Json(json!([\"history-entity\"])),\n                    Value::Text(second_commit_id.clone()),\n                    Value::Text(second_commit_id.clone()),\n                    Value::Integer(0),\n                ],\n                vec![\n                    Value::Text(\"history-entity\".to_string()),\n                    Value::Integer(1),\n                    Value::Boolean(true),\n                    Value::Json(json!({\"source\": \"insert\"})),\n  
                  Value::Json(json!([\"history-entity\"])),\n                    Value::Text(first_commit_id),\n                    Value::Text(second_commit_id),\n                    Value::Integer(1),\n                ],\n            ],\n        );\n    }\n);\n\nsimulation_test!(\n    entity_history_requires_lixcol_start_commit_id,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_history_error_schema\\\",\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"registered schema insert should succeed\");\n\n        let error = session\n            .execute(\"SELECT id FROM engine_history_error_schema_history\", &[])\n            .await\n            .expect_err(\"typed history queries must provide start commit\");\n\n        assert_eq!(\n            error.code,\n            lix_engine::LixError::CODE_HISTORY_FILTER_REQUIRED\n        );\n        assert!(\n            error\n                .to_string()\n                .contains(\"requires a lixcol_start_commit_id filter\"),\n            \"unexpected error: {error}\"\n        );\n        assert!(\n            error\n                .hint()\n                .is_some_and(|hint| hint.contains(\"WHERE lixcol_start_commit_id\")),\n            \"unexpected error: {error}\"\n        );\n    
}\n);\n\nsimulation_test!(\n    entity_history_rejects_bare_start_commit_id_filter,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_history_bare_error_schema\\\",\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"registered schema insert should succeed\");\n\n        let error = session\n            .execute(\n                \"SELECT id \\\n                 FROM engine_history_bare_error_schema_history \\\n                 WHERE start_commit_id = lix_active_version_commit_id()\",\n                &[],\n            )\n            .await\n            .expect_err(\"typed history should only expose lixcol_start_commit_id\");\n\n        assert_eq!(error.code, lix_engine::LixError::CODE_COLUMN_NOT_FOUND);\n        assert!(\n            error.to_string().contains(\"start_commit_id\"),\n            \"unexpected error: {error}\"\n        );\n    }\n);\n"
  },
  {
    "path": "packages/engine/tests/sql/errors.rs",
    "content": "use lix_engine::LixError;\nuse lix_engine::Value;\n\nsimulation_test!(sql_missing_table_has_lix_error_code, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    let error = session\n        .execute(\"SELECT * FROM missing_table\", &[])\n        .await\n        .expect_err(\"missing table should fail\");\n\n    assert_eq!(error.code, LixError::CODE_TABLE_NOT_FOUND);\n    assert!(error.hint().is_some(), \"expected discovery hint: {error}\");\n});\n\nsimulation_test!(sql_missing_column_has_lix_error_code, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    let error = session\n        .execute(\"SELECT missing_column FROM lix_file\", &[])\n        .await\n        .expect_err(\"missing column should fail\");\n\n    assert_eq!(error.code, LixError::CODE_COLUMN_NOT_FOUND);\n});\n\nsimulation_test!(\n    sql_duplicate_projection_name_is_parse_error,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\"SELECT 1 AS x, 2 AS x\", &[])\n            .await\n            .expect_err(\"duplicate projection names should fail during planning\");\n\n        assert_eq!(error.code, LixError::CODE_PARSE_ERROR);\n    }\n);\n\nsimulation_test!(sql_question_mark_placeholder_has_hint, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n 
       engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    let error = session\n        .execute(\"SELECT * FROM lix_file WHERE id = ?\", &[])\n        .await\n        .expect_err(\"question mark placeholders should fail\");\n\n    assert_eq!(error.code, LixError::CODE_PARSE_ERROR);\n    assert!(\n        error.hint().is_some_and(|hint| hint.contains(\"$1\")),\n        \"expected placeholder hint: {error}\"\n    );\n});\n\nsimulation_test!(sql_json_function_miss_has_lix_udf_hint, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    let error = session\n        .execute(\"SELECT json_extract('{\\\"a\\\":1}', '$.a')\", &[])\n        .await\n        .expect_err(\"non-Lix JSON UDF should fail with a targeted hint\");\n\n    assert_eq!(error.code, LixError::CODE_UDF_NOT_FOUND);\n    assert!(\n        error\n            .hint()\n            .is_some_and(|hint| hint.contains(\"lix_json_get\")),\n        \"expected JSON UDF hint: {error}\"\n    );\n});\n\nsimulation_test!(\n    sql_json_arrow_operator_has_dialect_error,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\"SELECT lix_json('{\\\"a\\\":1}') ->> 'a'\", &[])\n            .await\n            .expect_err(\"Postgres JSON arrow operator should fail with a dialect error\");\n\n        assert_eq!(error.code, LixError::CODE_DIALECT_UNSUPPORTED);\n        assert!(\n            error\n                .hint()\n                
.is_some_and(|hint| hint.contains(\"lix_json_get_text\")),\n            \"expected JSON dialect hint: {error}\"\n        );\n    }\n);\n\nsimulation_test!(\n    sql_udf_argument_mismatch_is_public_invalid_param,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\"SELECT lix_uuid_v7('unexpected')\", &[])\n            .await\n            .expect_err(\"wrong UDF arity should fail as public invalid input\");\n\n        assert_eq!(error.code, LixError::CODE_INVALID_PARAM);\n    }\n);\n\nsimulation_test!(\n    sql_non_utf8_blob_parameter_has_targeted_error,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\"SELECT length($1)\", &[Value::Blob(vec![0xff])])\n            .await\n            .expect_err(\"non-UTF-8 blob should fail as text\");\n\n        assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH);\n        assert!(\n            error.message.contains(\"valid UTF-8 text\"),\n            \"expected targeted UTF-8 message: {error}\"\n        );\n        assert!(\n            error\n                .hint()\n                .is_some_and(|hint| hint.contains(\"blob\") && !hint.contains(\"lix_json\")),\n            \"expected blob-specific hint without JSON detour: {error}\"\n        );\n    }\n);\n\nsimulation_test!(\n    sql_blob_insert_into_json_entity_has_targeted_error,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            
engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('blob-value', $1)\",\n                &[Value::Blob(vec![1, 2, 3, 255, 0, 128])],\n            )\n            .await\n            .expect_err(\"blob entity insert should fail cleanly\");\n\n        assert_eq!(error.code, LixError::CODE_INVALID_PARAM);\n        assert!(\n            error.message.contains(\"cannot store blob values directly\"),\n            \"expected targeted blob-to-JSON message: {error}\"\n        );\n        assert!(\n            !error.message.contains(\"Binary(\"),\n            \"error should not expose Rust/DataFusion debug formatting: {error}\"\n        );\n    }\n);\n\nsimulation_test!(sql_create_table_returns_error, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    let error = session\n        .execute(\"CREATE TABLE scratch (id TEXT)\", &[])\n        .await\n        .expect_err(\"CREATE TABLE should return an error, not panic\");\n\n    assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL);\n});\n\nsimulation_test!(\n    sql_recursive_cte_over_commit_views_returns_error,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"WITH RECURSIVE commit_walk(id) AS ( \\\n                 SELECT id FROM lix_commit \\\n                 UNION ALL \\\n             
    SELECT lix_commit_edge.child_id \\\n                 FROM lix_commit_edge \\\n                 JOIN commit_walk ON lix_commit_edge.parent_id = commit_walk.id \\\n                 ) \\\n                 SELECT id FROM commit_walk\",\n                &[],\n            )\n            .await\n            .expect_err(\"recursive CTE should return an error, not panic\");\n\n        assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL, \"{error:?}\");\n    }\n);\n"
  },
  {
    "path": "packages/engine/tests/sql/history_conformance.rs",
    "content": "use lix_engine::Value;\n\nuse super::select_rows;\n\nsimulation_test!(\n    history_surfaces_are_introspected_as_views,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n        .execute(\n            \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n             VALUES (\\\n             lix_json('{\\\"x-lix-key\\\":\\\"engine_history_table_type\\\",\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\"],\\\"additionalProperties\\\":false}'),\\\n             false,\\\n             false\\\n             )\",\n            &[],\n        )\n        .await\n        .expect(\"registered schema insert should succeed\");\n\n        let rows = select_rows(\n            &session,\n            \"SELECT table_name, table_type \\\n         FROM information_schema.tables \\\n         WHERE table_name IN (\\\n           'lix_state_history',\\\n           'lix_file_history',\\\n           'lix_directory_history',\\\n           'engine_history_table_type_history'\\\n         ) \\\n         ORDER BY table_name\",\n        )\n        .await;\n\n        let expected = [\n            \"engine_history_table_type_history\",\n            \"lix_directory_history\",\n            \"lix_file_history\",\n            \"lix_state_history\",\n        ]\n        .into_iter()\n        .map(|table| {\n            vec![\n                Value::Text(table.to_string()),\n                Value::Text(\"VIEW\".to_string()),\n            ]\n        })\n        .collect::<Vec<_>>();\n\n        assert_eq!(rows, expected);\n    }\n);\n\nsimulation_test!(\n    history_view_schemas_expose_tombstone_contract,\n    |sim| async move {\n        let 
engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_history_contract_schema\\\",\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"count\\\":{\\\"type\\\":\\\"integer\\\"},\\\"active\\\":{\\\"type\\\":\\\"boolean\\\"},\\\"meta\\\":{\\\"type\\\":\\\"object\\\"}},\\\"required\\\":[\\\"id\\\",\\\"count\\\",\\\"active\\\",\\\"meta\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"registered schema insert should succeed\");\n\n        let rows = select_rows(\n            &session,\n            \"SELECT table_name, column_name, is_nullable \\\n             FROM information_schema.columns \\\n             WHERE table_name IN (\\\n               'lix_file_history',\\\n               'lix_directory_history',\\\n               'engine_history_contract_schema_history'\\\n             ) \\\n               AND (\\\n                 column_name IN ('path', 'directory_id', 'parent_id', 'name', 'data', 'id', 'count', 'active', 'meta') \\\n                 OR column_name = 'lixcol_snapshot_content'\\\n               ) \\\n             ORDER BY table_name, column_name\",\n        )\n        .await;\n\n        let expected = vec![\n            (\"engine_history_contract_schema_history\", \"active\", \"YES\"),\n            (\"engine_history_contract_schema_history\", \"count\", \"YES\"),\n            (\"engine_history_contract_schema_history\", \"id\", \"YES\"),\n            
(\n                \"engine_history_contract_schema_history\",\n                \"lixcol_snapshot_content\",\n                \"YES\",\n            ),\n            (\"engine_history_contract_schema_history\", \"meta\", \"YES\"),\n            (\"lix_directory_history\", \"id\", \"NO\"),\n            (\"lix_directory_history\", \"lixcol_snapshot_content\", \"YES\"),\n            (\"lix_directory_history\", \"name\", \"YES\"),\n            (\"lix_directory_history\", \"parent_id\", \"YES\"),\n            (\"lix_directory_history\", \"path\", \"YES\"),\n            (\"lix_file_history\", \"data\", \"YES\"),\n            (\"lix_file_history\", \"directory_id\", \"YES\"),\n            (\"lix_file_history\", \"id\", \"NO\"),\n            (\"lix_file_history\", \"lixcol_snapshot_content\", \"YES\"),\n            (\"lix_file_history\", \"name\", \"YES\"),\n            (\"lix_file_history\", \"path\", \"YES\"),\n        ]\n        .into_iter()\n        .map(|(table, column, nullable)| {\n            vec![\n                Value::Text(table.to_string()),\n                Value::Text(column.to_string()),\n                Value::Text(nullable.to_string()),\n            ]\n        })\n        .collect::<Vec<_>>();\n\n        assert_eq!(rows, expected);\n    }\n);\n\nsimulation_test!(\n    typed_entity_history_exposes_tombstones_like_lix_state_history,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 
lix_json('{\\\"x-lix-key\\\":\\\"engine_history_conformance\\\",\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"value\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\",\\\"value\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"registered schema insert should succeed\");\n\n        session\n            .execute(\n                \"INSERT INTO engine_history_conformance \\\n                 (lixcol_entity_id, id, value, lixcol_untracked) \\\n                 VALUES (lix_json('[\\\"history-conformance-entity\\\"]'), 'history-conformance-entity', 'one', false)\",\n                &[],\n            )\n            .await\n            .expect(\"entity insert should succeed\");\n        session\n            .execute(\n                \"UPDATE engine_history_conformance \\\n                 SET value = 'two' \\\n                 WHERE lixcol_entity_id = lix_json('[\\\"history-conformance-entity\\\"]')\",\n                &[],\n            )\n            .await\n            .expect(\"entity update should succeed\");\n        session\n            .execute(\n                \"DELETE FROM engine_history_conformance \\\n                 WHERE lixcol_entity_id = lix_json('[\\\"history-conformance-entity\\\"]')\",\n                &[],\n            )\n            .await\n            .expect(\"entity delete should succeed\");\n\n        let typed_rows = select_rows(\n            &session,\n            \"SELECT id, value, lixcol_entity_id, lixcol_snapshot_content, lixcol_depth \\\n             FROM engine_history_conformance_history \\\n             WHERE lixcol_start_commit_id = lix_active_version_commit_id() \\\n               AND lixcol_entity_id = lix_json('[\\\"history-conformance-entity\\\"]') \\\n             ORDER BY lixcol_depth\",\n        )\n        .await;\n      
  assert_eq!(typed_rows.len(), 3);\n        assert_eq!(\n            typed_rows[0],\n            vec![\n                Value::Null,\n                Value::Null,\n                Value::Json(serde_json::json!([\"history-conformance-entity\"])),\n                Value::Null,\n                Value::Integer(0),\n            ]\n        );\n\n        let state_rows = select_rows(\n            &session,\n            \"SELECT snapshot_content, depth \\\n             FROM lix_state_history \\\n             WHERE start_commit_id = lix_active_version_commit_id() \\\n               AND schema_key = 'engine_history_conformance' \\\n               AND entity_id = lix_json('[\\\"history-conformance-entity\\\"]') \\\n               AND snapshot_content IS NULL\",\n        )\n        .await;\n        assert_eq!(state_rows, vec![vec![Value::Null, Value::Integer(0)]]);\n    }\n);\n\nsimulation_test!(\n    typed_entity_history_backfills_primary_key_columns_on_tombstones,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) \\\n                 VALUES ('history-pk-backfill', 'one')\",\n                &[],\n            )\n            .await\n            .expect(\"key value insert should succeed\");\n        session\n            .execute(\n                \"DELETE FROM lix_key_value WHERE key = 'history-pk-backfill'\",\n                &[],\n            )\n            .await\n            .expect(\"key value delete should succeed\");\n\n        let rows = select_rows(\n            &session,\n            \"SELECT key, value, lixcol_entity_id, lixcol_snapshot_content, lixcol_depth \\\n             FROM lix_key_value_history \\\n             WHERE 
lixcol_start_commit_id = lix_active_version_commit_id() \\\n               AND key = 'history-pk-backfill' \\\n             ORDER BY lixcol_depth\",\n        )\n        .await;\n\n        assert_eq!(\n            rows,\n            vec![\n                vec![\n                    Value::Text(\"history-pk-backfill\".to_string()),\n                    Value::Null,\n                    Value::Json(serde_json::json!([\"history-pk-backfill\"])),\n                    Value::Null,\n                    Value::Integer(0),\n                ],\n                vec![\n                    Value::Text(\"history-pk-backfill\".to_string()),\n                    Value::Json(serde_json::json!(\"one\")),\n                    Value::Json(serde_json::json!([\"history-pk-backfill\"])),\n                    Value::Json(serde_json::json!({\n                        \"key\": \"history-pk-backfill\",\n                        \"value\": \"one\"\n                    })),\n                    Value::Integer(1),\n                ],\n            ]\n        );\n    }\n);\n\nsimulation_test!(\n    typed_entity_history_backfills_composite_primary_key_columns_on_tombstones,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 
lix_json('{\\\"x-lix-key\\\":\\\"engine_history_composite_pk\\\",\\\"x-lix-primary-key\\\":[\\\"/namespace\\\",\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"namespace\\\":{\\\"type\\\":\\\"string\\\"},\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"value\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"namespace\\\",\\\"id\\\",\\\"value\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"registered schema insert should succeed\");\n\n        session\n            .execute(\n                \"INSERT INTO engine_history_composite_pk \\\n                 (namespace, id, value, lixcol_untracked) \\\n                 VALUES ('messages', '7', 'one', false)\",\n                &[],\n            )\n            .await\n            .expect(\"composite entity insert should succeed\");\n        session\n            .execute(\n                \"DELETE FROM engine_history_composite_pk \\\n                 WHERE namespace = 'messages' AND id = '7'\",\n                &[],\n            )\n            .await\n            .expect(\"composite entity delete should succeed\");\n\n        let rows = select_rows(\n            &session,\n            \"SELECT namespace, id, value, lixcol_snapshot_content, lixcol_depth \\\n             FROM engine_history_composite_pk_history \\\n             WHERE lixcol_start_commit_id = lix_active_version_commit_id() \\\n               AND namespace = 'messages' \\\n               AND id = '7' \\\n             ORDER BY lixcol_depth\",\n        )\n        .await;\n\n        assert_eq!(\n            rows,\n            vec![\n                vec![\n                    Value::Text(\"messages\".to_string()),\n                    Value::Text(\"7\".to_string()),\n                    Value::Null,\n                    Value::Null,\n                    Value::Integer(0),\n                ],\n   
             vec![\n                    Value::Text(\"messages\".to_string()),\n                    Value::Text(\"7\".to_string()),\n                    Value::Text(\"one\".to_string()),\n                    Value::Json(serde_json::json!({\n                        \"namespace\": \"messages\",\n                        \"id\": \"7\",\n                        \"value\": \"one\"\n                    })),\n                    Value::Integer(1),\n                ],\n            ]\n        );\n    }\n);\n\nsimulation_test!(\n    lix_file_history_exposes_descriptor_tombstones_like_lix_state_history,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) \\\n                 VALUES ('history-conformance-file', '/docs/conformance.txt', X'6F6E65')\",\n                &[],\n            )\n            .await\n            .expect(\"file insert should succeed\");\n        session\n            .execute(\n                \"UPDATE lix_file SET data = X'74776F' WHERE id = 'history-conformance-file'\",\n                &[],\n            )\n            .await\n            .expect(\"file update should succeed\");\n        session\n            .execute(\n                \"DELETE FROM lix_file WHERE id = 'history-conformance-file'\",\n                &[],\n            )\n            .await\n            .expect(\"file delete should succeed\");\n\n        let file_rows = select_rows(\n            &session,\n            \"SELECT id, path, name, data, lixcol_entity_id, lixcol_file_id, lixcol_snapshot_content, lixcol_depth \\\n             FROM lix_file_history \\\n             WHERE lixcol_start_commit_id = lix_active_version_commit_id() \\\n             
  AND id = 'history-conformance-file' \\\n               AND lixcol_depth = 0\",\n        )\n        .await;\n        assert_eq!(\n            file_rows,\n            vec![vec![\n                Value::Text(\"history-conformance-file\".to_string()),\n                Value::Null,\n                Value::Null,\n                Value::Null,\n                Value::Json(serde_json::json!([\"history-conformance-file\"])),\n                Value::Text(\"history-conformance-file\".to_string()),\n                Value::Null,\n                Value::Integer(0),\n            ]]\n        );\n\n        let state_rows = select_rows(\n            &session,\n            \"SELECT snapshot_content, depth \\\n             FROM lix_state_history \\\n             WHERE start_commit_id = lix_active_version_commit_id() \\\n               AND schema_key = 'lix_file_descriptor' \\\n               AND entity_id = lix_json('[\\\"history-conformance-file\\\"]') \\\n               AND snapshot_content IS NULL\",\n        )\n        .await;\n        assert_eq!(state_rows, vec![vec![Value::Null, Value::Integer(0)]]);\n    }\n);\n\nsimulation_test!(\n    lix_directory_history_exposes_descriptor_tombstones_like_lix_state_history,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) \\\n                 VALUES ('history-conformance-dir', '/conformance/')\",\n                &[],\n            )\n            .await\n            .expect(\"directory insert should succeed\");\n        session\n            .execute(\n                \"UPDATE lix_directory SET name = 'conformance-updated' \\\n                 WHERE id = 'history-conformance-dir'\",\n                &[],\n      
      )\n            .await\n            .expect(\"directory update should succeed\");\n        session\n            .execute(\n                \"DELETE FROM lix_directory WHERE id = 'history-conformance-dir'\",\n                &[],\n            )\n            .await\n            .expect(\"directory delete should succeed\");\n\n        let directory_rows = select_rows(\n            &session,\n            \"SELECT id, path, parent_id, name, lixcol_entity_id, lixcol_snapshot_content, lixcol_depth \\\n             FROM lix_directory_history \\\n             WHERE lixcol_start_commit_id = lix_active_version_commit_id() \\\n               AND id = 'history-conformance-dir' \\\n               AND lixcol_depth = 0\",\n        )\n        .await;\n        assert_eq!(\n            directory_rows,\n            vec![vec![\n                Value::Text(\"history-conformance-dir\".to_string()),\n                Value::Null,\n                Value::Null,\n                Value::Null,\n                Value::Json(serde_json::json!([\"history-conformance-dir\"])),\n                Value::Null,\n                Value::Integer(0),\n            ]]\n        );\n\n        let state_rows = select_rows(\n            &session,\n            \"SELECT snapshot_content, depth \\\n             FROM lix_state_history \\\n             WHERE start_commit_id = lix_active_version_commit_id() \\\n               AND schema_key = 'lix_directory_descriptor' \\\n               AND entity_id = lix_json('[\\\"history-conformance-dir\\\"]') \\\n               AND snapshot_content IS NULL\",\n        )\n        .await;\n        assert_eq!(state_rows, vec![vec![Value::Null, Value::Integer(0)]]);\n    }\n);\n"
  },
  {
    "path": "packages/engine/tests/sql/lix_change.rs",
    "content": "use std::collections::BTreeSet;\n\nuse lix_engine::Value;\nuse serde_json::json;\n\nuse super::select_rows;\n\nsimulation_test!(lix_change_queries_tracked_changes, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    session\n        .execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('change-query', 'one')\",\n            &[],\n        )\n        .await\n        .expect(\"tracked write should succeed\");\n\n    let result = session\n        .execute(\n            \"SELECT entity_id, schema_key, snapshot_content \\\n             FROM lix_change \\\n             WHERE entity_id = lix_json('[\\\"change-query\\\"]')\",\n            &[],\n        )\n        .await\n        .expect(\"lix_change should read\");\n    let rows = result;\n    assert_eq!(rows.len(), 1);\n    assert_eq!(\n        rows.rows()[0].values(),\n        &[\n            Value::Json(json!([\"change-query\"])),\n            Value::Text(\"lix_key_value\".to_string()),\n            Value::Json(json!({\"key\": \"change-query\", \"value\": \"one\"})),\n        ]\n    );\n});\n\nsimulation_test!(\n    lix_change_entity_id_is_json_array_for_composite_primary_keys,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 
lix_json('{\\\"x-lix-key\\\":\\\"engine_composite_message\\\",\\\"x-lix-primary-key\\\":[\\\"/key\\\",\\\"/locale\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"key\\\":{\\\"type\\\":\\\"string\\\"},\\\"locale\\\":{\\\"type\\\":\\\"string\\\"},\\\"text\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"key\\\",\\\"locale\\\",\\\"text\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"composite schema insert should succeed\");\n        session\n            .execute(\n                \"INSERT INTO engine_composite_message (key, locale, text) \\\n                 VALUES ('welcome.title', 'en', 'Welcome')\",\n                &[],\n            )\n            .await\n            .expect(\"composite entity insert should succeed\");\n\n        let result = session\n            .execute(\n                \"SELECT entity_id, \\\n                        lix_json_get_text(entity_id, 0) AS entity_key, \\\n                        lix_json_get_text(entity_id, 1) AS entity_locale \\\n                 FROM lix_change \\\n                 WHERE schema_key = 'engine_composite_message' \\\n                   AND entity_id = lix_json('[\\\"welcome.title\\\",\\\"en\\\"]')\",\n                &[],\n            )\n            .await\n            .expect(\"lix_change should expose composite entity_id as JSON\");\n\n        assert_eq!(result.len(), 1);\n        assert_eq!(\n            result.rows()[0].values(),\n            &[\n                Value::Json(json!([\"welcome.title\", \"en\"])),\n                Value::Text(\"welcome.title\".to_string()),\n                Value::Text(\"en\".to_string()),\n            ]\n        );\n    }\n);\n\nsimulation_test!(\n    lix_change_rejects_non_string_primary_key_schemas,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n  
          engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_numeric_message\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"number\\\"},\\\"text\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\",\\\"text\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect_err(\"numeric primary-key schema should be rejected\");\n\n        assert_eq!(error.code, lix_engine::LixError::CODE_SCHEMA_DEFINITION);\n        assert!(\n            error\n                .message\n                .contains(\"x-lix-primary-key property \\\"/id\\\" must have type \\\"string\\\"\"),\n            \"error should explain non-string primary-key schema: {error:?}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_change_sql_surface_matches_builtin_schema,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        assert_eq!(\n            non_system_column_names(&session, \"lix_change\").await,\n            builtin_schema_property_names(),\n        );\n    }\n);\n\nsimulation_test!(\n    lix_change_count_handles_empty_projection,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                
.open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let rows = select_rows(&session, \"SELECT count(*) FROM lix_change\").await;\n        assert_single_count(rows);\n    }\n);\n\nfn assert_single_count(rows: Vec<Vec<Value>>) {\n    assert_eq!(rows.len(), 1);\n    assert_eq!(rows[0].len(), 1);\n    let Value::Integer(count) = rows[0][0] else {\n        panic!(\"expected integer count, got {:?}\", rows[0][0]);\n    };\n    assert!(count >= 0);\n}\n\nfn builtin_schema_property_names() -> BTreeSet<String> {\n    let schema = serde_json::from_str::<serde_json::Value>(include_str!(\n        \"../../src/schema/builtin/lix_change.json\"\n    ))\n    .expect(\"builtin lix_change schema should parse\");\n    schema\n        .get(\"properties\")\n        .and_then(serde_json::Value::as_object)\n        .expect(\"builtin lix_change schema should define properties\")\n        .keys()\n        .cloned()\n        .collect::<BTreeSet<_>>()\n}\n\nasync fn non_system_column_names(\n    session: &crate::support::simulation_test::engine::SimSession,\n    table_name: &str,\n) -> BTreeSet<String> {\n    let result = session\n        .execute(\n            &format!(\n                \"SELECT column_name \\\n                 FROM information_schema.columns \\\n                 WHERE table_name = '{table_name}'\"\n            ),\n            &[],\n        )\n        .await\n        .expect(\"information_schema.columns should read\");\n    result\n        .rows()\n        .iter()\n        .map(|row| {\n            let Value::Text(column_name) = &row.values()[0] else {\n                panic!(\"expected text column name, got {:?}\", row.values()[0]);\n            };\n            column_name.clone()\n        })\n        .filter(|column_name| !column_name.starts_with(\"lixcol_\"))\n        .collect()\n}\n"
  },
  {
    "path": "packages/engine/tests/sql/lix_commit.rs",
    "content": "use std::collections::BTreeSet;\n\nuse lix_engine::{CreateVersionOptions, Value};\n\nuse super::select_rows;\n\nsimulation_test!(\n    lix_commit_surfaces_expose_commits_and_edges,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let initial_head = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"version head should load\")\n            .expect(\"version head should exist\");\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('commit-surface', 'one')\",\n                &[],\n            )\n            .await\n            .expect(\"first tracked write should succeed\");\n        let first_head = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"version head should load\")\n            .expect(\"version head should exist\");\n\n        session\n            .execute(\n                \"UPDATE lix_key_value SET value = 'two' WHERE key = 'commit-surface'\",\n                &[],\n            )\n            .await\n            .expect(\"second tracked write should succeed\");\n        let second_head = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"version head should load\")\n            .expect(\"version head should exist\");\n\n        let commit_rows = select_rows(\n            &session,\n            &format!(\n                \"SELECT id, lixcol_global, lixcol_untracked \\\n                 FROM lix_commit WHERE id = '{second_head}'\"\n            ),\n        )\n        .await;\n        assert_eq!(\n            commit_rows,\n            vec![vec![\n    
            Value::Text(second_head.clone()),\n                Value::Boolean(true),\n                Value::Boolean(false),\n            ]]\n        );\n\n        let edge_rows = select_rows(\n            &session,\n            &format!(\n                \"SELECT parent_id, child_id, parent_order, lixcol_global, lixcol_untracked \\\n                 FROM lix_commit_edge WHERE child_id = '{second_head}'\"\n            ),\n        )\n        .await;\n        assert_eq!(\n            edge_rows,\n            vec![vec![\n                Value::Text(first_head.clone()),\n                Value::Text(second_head.clone()),\n                Value::Integer(0),\n                Value::Boolean(true),\n                Value::Boolean(false),\n            ]]\n        );\n\n        let by_version_rows = select_rows(\n            &session,\n            &format!(\n                \"SELECT id, lixcol_version_id, lixcol_global, lixcol_untracked \\\n                 FROM lix_commit_by_version \\\n                 WHERE id IN ('{initial_head}', '{first_head}', '{second_head}') \\\n                 ORDER BY id, lixcol_version_id\"\n            ),\n        )\n        .await;\n        assert!(by_version_rows.contains(&vec![\n            Value::Text(initial_head.clone()),\n            Value::Text(sim.main_version_id().to_string()),\n            Value::Boolean(true),\n            Value::Boolean(false),\n        ]));\n        assert!(by_version_rows.contains(&vec![\n            Value::Text(initial_head),\n            Value::Text(\"global\".to_string()),\n            Value::Boolean(true),\n            Value::Boolean(false),\n        ]));\n        assert!(by_version_rows.contains(&vec![\n            Value::Text(first_head.clone()),\n            Value::Text(sim.main_version_id().to_string()),\n            Value::Boolean(true),\n            Value::Boolean(false),\n        ]));\n        assert!(by_version_rows.contains(&vec![\n            Value::Text(first_head.clone()),\n            
Value::Text(\"global\".to_string()),\n            Value::Boolean(true),\n            Value::Boolean(false),\n        ]));\n        assert!(by_version_rows.contains(&vec![\n            Value::Text(second_head.clone()),\n            Value::Text(sim.main_version_id().to_string()),\n            Value::Boolean(true),\n            Value::Boolean(false),\n        ]));\n        assert!(by_version_rows.contains(&vec![\n            Value::Text(second_head.clone()),\n            Value::Text(\"global\".to_string()),\n            Value::Boolean(true),\n            Value::Boolean(false),\n        ]));\n\n        let edge_by_version_rows = select_rows(\n            &session,\n            &format!(\n                \"SELECT parent_id, child_id, parent_order, lixcol_version_id, lixcol_global, lixcol_untracked \\\n                 FROM lix_commit_edge_by_version \\\n                 WHERE child_id = '{second_head}' \\\n                 ORDER BY lixcol_version_id\"\n            ),\n        )\n        .await;\n        assert_eq!(\n            edge_by_version_rows,\n            vec![\n                vec![\n                    Value::Text(first_head.clone()),\n                    Value::Text(second_head.clone()),\n                    Value::Integer(0),\n                    Value::Text(sim.main_version_id().to_string()),\n                    Value::Boolean(true),\n                    Value::Boolean(false),\n                ],\n                vec![\n                    Value::Text(first_head),\n                    Value::Text(second_head),\n                    Value::Integer(0),\n                    Value::Text(\"global\".to_string()),\n                    Value::Boolean(true),\n                    Value::Boolean(false),\n                ],\n            ]\n        );\n    }\n);\n\nsimulation_test!(\n    lix_commit_is_plain_global_entity_not_active_reachability_view,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let main = sim.wrap_session(\n            
engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        main.execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('main-only', 'main')\",\n            &[],\n        )\n        .await\n        .expect(\"main write should succeed\");\n\n        main.create_version(CreateVersionOptions {\n            id: Some(\"commit-branch\".to_string()),\n            name: \"Commit branch\".to_string(),\n            from_commit_id: None,\n        })\n        .await\n        .expect(\"branch version should be created\");\n\n        let branch = sim.wrap_session(\n            engine\n                .open_session(\"commit-branch\")\n                .await\n                .expect(\"branch session should open\"),\n            &engine,\n        );\n        branch\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('branch-only', 'branch')\",\n                &[],\n            )\n            .await\n            .expect(\"branch write should succeed\");\n\n        let branch_head = engine\n            .load_version_head_commit_id(\"commit-branch\")\n            .await\n            .expect(\"branch head should load\")\n            .expect(\"branch head should exist\");\n\n        let main_commit_rows = select_rows(\n            &main,\n            &format!(\"SELECT id FROM lix_commit WHERE id = '{branch_head}'\"),\n        )\n        .await;\n        let branch_commit_rows = select_rows(\n            &branch,\n            &format!(\"SELECT id FROM lix_commit WHERE id = '{branch_head}'\"),\n        )\n        .await;\n        assert_eq!(\n            main_commit_rows, branch_commit_rows,\n            \"lix_commit should not depend on the active version\"\n        );\n        assert_eq!(\n            main_commit_rows,\n            vec![vec![Value::Text(branch_head.clone())]]\n        );\n\n        let main_edge_rows = 
select_rows(\n            &main,\n            &format!(\"SELECT child_id FROM lix_commit_edge WHERE child_id = '{branch_head}'\"),\n        )\n        .await;\n        let branch_edge_rows = select_rows(\n            &branch,\n            &format!(\"SELECT child_id FROM lix_commit_edge WHERE child_id = '{branch_head}'\"),\n        )\n        .await;\n        assert_eq!(\n            main_edge_rows, branch_edge_rows,\n            \"derived commit surfaces should also expose global commit-derived rows\"\n        );\n        assert_eq!(main_edge_rows, vec![vec![Value::Text(branch_head)]]);\n    }\n);\n\nsimulation_test!(\n    lix_commit_derived_by_version_surfaces_match_commit_entity_projection,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let main = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        main.execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('main-edge-probe', 'main')\",\n            &[],\n        )\n        .await\n        .expect(\"main write should succeed\");\n\n        main.create_version(CreateVersionOptions {\n            id: Some(\"edge-probe-a\".to_string()),\n            name: \"Edge Probe A\".to_string(),\n            from_commit_id: Some(sim.initial_commit_id().to_string()),\n        })\n        .await\n        .expect(\"edge-probe-a should be created from the initial commit\");\n        main.create_version(CreateVersionOptions {\n            id: Some(\"edge-probe-b\".to_string()),\n            name: \"Edge Probe B\".to_string(),\n            from_commit_id: Some(sim.initial_commit_id().to_string()),\n        })\n        .await\n        .expect(\"edge-probe-b should be created from the initial commit\");\n\n        let branch_a = sim.wrap_session(\n            engine\n                .open_session(\"edge-probe-a\")\n                
.await\n                .expect(\"edge-probe-a session should open\"),\n            &engine,\n        );\n        branch_a\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('edge-probe-a-only', 'a')\",\n                &[],\n            )\n            .await\n            .expect(\"edge-probe-a write should succeed\");\n\n        let branch_b = sim.wrap_session(\n            engine\n                .open_session(\"edge-probe-b\")\n                .await\n                .expect(\"edge-probe-b session should open\"),\n            &engine,\n        );\n        branch_b\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('edge-probe-b-only', 'b')\",\n                &[],\n            )\n            .await\n            .expect(\"edge-probe-b write should succeed\");\n\n        let global_edges = commit_edges_by_version(&main, \"global\").await;\n        for version_id in [sim.main_version_id(), \"edge-probe-a\", \"edge-probe-b\"] {\n            let actual_edges = commit_edges_by_version(&main, version_id).await;\n            assert_eq!(\n                actual_edges, global_edges,\n                \"lix_commit_edge_by_version should project derived global edges for {version_id}\"\n            );\n        }\n    }\n);\n\nsimulation_test!(\n    lix_commit_surfaces_match_canonical_schema_definitions,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        for (schema_key, tables) in [\n            (\"lix_commit\", vec![\"lix_commit\", \"lix_commit_by_version\"]),\n            (\n                \"lix_commit_edge\",\n                vec![\"lix_commit_edge\", \"lix_commit_edge_by_version\"],\n            ),\n        ] {\n            let schema_properties = 
builtin_schema_property_names(schema_key);\n            for table in tables {\n                let surface_columns = non_system_column_names(&session, table).await;\n                assert_eq!(\n                    surface_columns, schema_properties,\n                    \"{table} data columns should match {schema_key} properties\"\n                );\n            }\n        }\n    }\n);\n\nsimulation_test!(\n    lix_commit_surfaces_count_handle_empty_projection,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        for table in [\n            \"lix_commit\",\n            \"lix_commit_by_version\",\n            \"lix_commit_edge\",\n            \"lix_commit_edge_by_version\",\n        ] {\n            let rows = select_rows(&session, &format!(\"SELECT count(*) FROM {table}\")).await;\n            assert_single_count(rows, table);\n        }\n    }\n);\n\nfn assert_single_count(rows: Vec<Vec<Value>>, table: &str) {\n    assert_eq!(rows.len(), 1, \"{table} should return one count row\");\n    assert_eq!(rows[0].len(), 1, \"{table} should return one count column\");\n    let Value::Integer(count) = rows[0][0] else {\n        panic!(\n            \"{table} should return an integer count, got {:?}\",\n            rows[0][0]\n        );\n    };\n    assert!(count >= 0, \"{table} count should be non-negative\");\n}\n\nfn text_value(value: &Value) -> String {\n    let Value::Text(value) = value else {\n        panic!(\"expected text value, got {value:?}\");\n    };\n    value.clone()\n}\n\nasync fn commit_edges_by_version(\n    session: &crate::support::simulation_test::engine::SimSession,\n    version_id: &str,\n) -> BTreeSet<(String, String)> {\n    select_rows(\n        session,\n        &format!(\n            \"SELECT 
parent_id, child_id \\\n             FROM lix_commit_edge_by_version \\\n             WHERE lixcol_version_id = '{version_id}'\"\n        ),\n    )\n    .await\n    .into_iter()\n    .map(|row| (text_value(&row[0]), text_value(&row[1])))\n    .collect()\n}\n\nfn builtin_schema_property_names(schema_key: &str) -> BTreeSet<String> {\n    let schema = match schema_key {\n        \"lix_commit\" => include_str!(\"../../src/schema/builtin/lix_commit.json\"),\n        \"lix_commit_edge\" => include_str!(\"../../src/schema/builtin/lix_commit_edge.json\"),\n        other => panic!(\"unexpected builtin schema key: {other}\"),\n    };\n    let schema = serde_json::from_str::<serde_json::Value>(schema)\n        .expect(\"builtin schema fixture should parse\");\n    schema\n        .get(\"properties\")\n        .and_then(serde_json::Value::as_object)\n        .expect(\"builtin schema should define properties\")\n        .keys()\n        .cloned()\n        .collect::<BTreeSet<_>>()\n}\n\nasync fn non_system_column_names(\n    session: &crate::support::simulation_test::engine::SimSession,\n    table_name: &str,\n) -> BTreeSet<String> {\n    let rows = select_rows(\n        session,\n        &format!(\n            \"SELECT column_name \\\n             FROM information_schema.columns \\\n             WHERE table_name = '{table_name}'\"\n        ),\n    )\n    .await;\n    rows.into_iter()\n        .map(|row| text_value(&row[0]))\n        .filter(|column_name| !column_name.starts_with(\"lixcol_\"))\n        .collect()\n}\n"
  },
  {
    "path": "packages/engine/tests/sql/lix_directory.rs",
    "content": "use lix_engine::ExecuteResult;\nuse lix_engine::LixError;\nuse lix_engine::Value;\nuse serde_json::json;\n\nuse super::assert_rows_eq;\n\nsimulation_test!(\n    lix_directory_path_insert_rejects_overlong_paths_and_segments,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let long_segment = \"a\".repeat(256);\n        let segment_error = session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) VALUES ('dir-long-segment', $1)\",\n                &[Value::Text(format!(\"/{long_segment}/\"))],\n            )\n            .await\n            .expect_err(\"overlong directory path segment should be rejected\");\n        assert_eq!(segment_error.code, LixError::CODE_INVALID_PARAM);\n\n        let long_path = format!(\"/{}/\", [\"abcd\"; 820].join(\"/\"));\n        let path_error = session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) VALUES ('dir-long-path', $1)\",\n                &[Value::Text(long_path)],\n            )\n            .await\n            .expect_err(\"overlong directory path should be rejected\");\n        assert_eq!(path_error.code, LixError::CODE_INVALID_PARAM);\n\n        let encoded_segment_at_limit = \"%61\".repeat(255);\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) VALUES ('dir-encoded-limit', $1)\",\n                &[Value::Text(format!(\"/{encoded_segment_at_limit}/\"))],\n            )\n            .await\n            .expect(\"percent-encoded segment should be measured after canonicalization\");\n\n        let encoded_segment_over_limit = \"%61\".repeat(256);\n        let encoded_segment_error = session\n            .execute(\n                \"INSERT INTO lix_directory 
(id, path) VALUES ('dir-encoded-over-limit', $1)\",\n                &[Value::Text(format!(\"/{encoded_segment_over_limit}/\"))],\n            )\n            .await\n            .expect_err(\"overlong canonical segment should be rejected\");\n        assert_eq!(encoded_segment_error.code, LixError::CODE_INVALID_PARAM);\n\n        let huge_path = format!(\"/{}/\", \"a\".repeat(1024 * 1024));\n        let huge_error = session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) VALUES ('dir-huge-path', $1)\",\n                &[Value::Text(huge_path)],\n            )\n            .await\n            .expect_err(\"huge path input should be rejected without runtime internals\");\n        assert_eq!(huge_error.code, LixError::CODE_INVALID_PARAM);\n    }\n);\n\nsimulation_test!(\n    lix_directory_path_insert_rejects_percent_encoded_forbidden_code_points,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        for (id, path) in [\n            (\"dir-percent-nul\", \"/docs/%00evil/\"),\n            (\"dir-percent-bidi\", \"/docs/%E2%80%AEevil/\"),\n        ] {\n            let error = session\n                .execute(\n                    &format!(\"INSERT INTO lix_directory (id, path) VALUES ('{id}', '{path}')\"),\n                    &[],\n                )\n                .await\n                .expect_err(\"percent-encoded forbidden path code point should be rejected\");\n            assert_eq!(error.code, LixError::CODE_INVALID_PARAM);\n        }\n    }\n);\n\nsimulation_test!(lix_directory_insert_reads_nested_paths, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n           
 .expect(\"main session should open\"),\n        &engine,\n    );\n\n    let insert_result = session\n        .execute(\n            \"INSERT INTO lix_directory (id, parent_id, name) \\\n             VALUES ('dir-docs', NULL, 'docs')\",\n            &[],\n        )\n        .await\n        .expect(\"directory insert should succeed\");\n    assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n    let nested_insert_result = session\n        .execute(\n            \"INSERT INTO lix_directory (id, path) \\\n             VALUES ('dir-nested', '/docs/nested/')\",\n            &[],\n        )\n        .await\n        .expect(\"nested directory path insert should succeed\");\n    assert_eq!(nested_insert_result, ExecuteResult::from_rows_affected(1));\n\n    let result = session\n        .execute(\n            \"SELECT id, path, parent_id, name \\\n             FROM lix_directory \\\n             WHERE id IN ('dir-docs', 'dir-nested') \\\n             ORDER BY path\",\n            &[],\n        )\n        .await\n        .expect(\"directory read should succeed\");\n    let row_set = result;\n    assert_eq!(row_set.len(), 2);\n    assert_eq!(\n        row_set.rows()[0].values(),\n        &[\n            Value::Text(\"dir-docs\".to_string()),\n            Value::Text(\"/docs/\".to_string()),\n            Value::Null,\n            Value::Text(\"docs\".to_string()),\n        ]\n    );\n    assert_eq!(\n        row_set.rows()[1].values(),\n        &[\n            Value::Text(\"dir-nested\".to_string()),\n            Value::Text(\"/docs/nested/\".to_string()),\n            Value::Text(\"dir-docs\".to_string()),\n            Value::Text(\"nested\".to_string()),\n        ]\n    );\n});\n\nsimulation_test!(\n    lix_directory_insert_applies_defaulted_id,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                
.expect(\"main session should open\"),\n            &engine,\n        );\n\n        let insert_result = session\n            .execute(\n                \"INSERT INTO lix_directory (parent_id, name) \\\n             VALUES (NULL, 'docs')\",\n                &[],\n            )\n            .await\n            .expect(\"directory insert should apply defaulted id\");\n        assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n        let result = session\n            .execute(\n                \"SELECT id, path, parent_id, name \\\n             FROM lix_directory \\\n             WHERE path = '/docs/'\",\n                &[],\n            )\n            .await\n            .expect(\"directory read should succeed\");\n        let row_set = result;\n        assert_eq!(row_set.len(), 1);\n        let values = row_set.rows()[0].values();\n        let [Value::Text(id), Value::Text(path), Value::Null, Value::Text(name)] = values else {\n            panic!(\"expected generated directory row, got {values:?}\");\n        };\n        assert!(!id.is_empty(), \"defaulted directory id should be non-empty\");\n        assert_eq!(path, \"/docs/\");\n        assert_eq!(name, \"docs\");\n    }\n);\n\nsimulation_test!(\n    lix_directory_path_insert_applies_defaulted_id,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let insert_result = session\n            .execute(\"INSERT INTO lix_directory (path) VALUES ('/docs/')\", &[])\n            .await\n            .expect(\"directory path insert should apply defaulted id\");\n        assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n        let result = session\n            .execute(\n                \"SELECT id, path, parent_id, name \\\n             FROM 
lix_directory \\\n             WHERE path = '/docs/'\",\n                &[],\n            )\n            .await\n            .expect(\"directory read should succeed\");\n        let row_set = result;\n        assert_eq!(row_set.len(), 1);\n        let values = row_set.rows()[0].values();\n        let [Value::Text(id), Value::Text(path), Value::Null, Value::Text(name)] = values else {\n            panic!(\"expected generated directory path row, got {values:?}\");\n        };\n        assert!(!id.is_empty(), \"defaulted directory id should be non-empty\");\n        assert_eq!(path, \"/docs/\");\n        assert_eq!(name, \"docs\");\n    }\n);\n\nsimulation_test!(\n    lix_directory_path_insert_rejects_duplicate_root_path,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\"INSERT INTO lix_directory (path) VALUES ('/docs/')\", &[])\n            .await\n            .expect(\"first directory insert should succeed\");\n        let error = session\n            .execute(\"INSERT INTO lix_directory (path) VALUES ('/docs/')\", &[])\n            .await\n            .expect_err(\"duplicate directory path insert should be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n    }\n);\n\nsimulation_test!(\n    lix_directory_insert_duplicate_id_reports_lix_directory,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) VALUES ('same-dir', '/a/')\",\n          
      &[],\n            )\n            .await\n            .expect(\"first directory insert should succeed\");\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) VALUES ('same-dir', '/b/')\",\n                &[],\n            )\n            .await\n            .expect_err(\"duplicate directory id insert should be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n        assert!(\n            error.message.contains(\"table 'lix_directory'\")\n                && error.message.contains(\"id 'same-dir'\")\n                && !error.message.contains(\"lix_directory_descriptor\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_directory_by_version_insert_duplicate_id_reports_lix_directory_by_version,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n        let version_id = sim.main_version_id();\n\n        session\n            .execute(\n                &format!(\n                    \"INSERT INTO lix_directory_by_version \\\n                     (id, path, lixcol_version_id) \\\n                     VALUES ('same-dir', '/a/', '{version_id}')\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"first by-version directory insert should succeed\");\n\n        let error = session\n            .execute(\n                &format!(\n                    \"INSERT INTO lix_directory_by_version \\\n                     (id, path, lixcol_version_id) \\\n                     VALUES ('same-dir', '/b/', '{version_id}')\"\n                ),\n                &[],\n            )\n            .await\n            .expect_err(\"duplicate by-version directory id insert should 
be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n        assert!(\n            error.message.contains(\"table 'lix_directory_by_version'\")\n                && error.message.contains(\"id 'same-dir'\")\n                && !error.message.contains(\"table 'lix_directory':\")\n                && !error.message.contains(\"lix_directory_descriptor\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_directory_path_insert_rejects_existing_file_entry,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\"INSERT INTO lix_file (path) VALUES ('/foo')\", &[])\n            .await\n            .expect(\"file insert should succeed\");\n\n        let error = session\n            .execute(\"INSERT INTO lix_directory (path) VALUES ('/foo/')\", &[])\n            .await\n            .expect_err(\"directory should conflict with file at same entry name\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n    }\n);\n\nsimulation_test!(\n    lix_directory_descriptor_shape_insert_rejects_existing_file_entry,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, directory_id, name) \\\n                 VALUES ('file-foo', NULL, 'foo')\",\n                &[],\n            )\n            .await\n            .expect(\"file insert should succeed\");\n\n        let error = session\n            .execute(\n                
\"INSERT INTO lix_directory (id, parent_id, name) VALUES ('dir-foo', NULL, 'foo')\",\n                &[],\n            )\n            .await\n            .expect_err(\"descriptor-shaped directory insert should conflict with file\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n    }\n);\n\nsimulation_test!(\n    lix_directory_update_rejects_existing_file_entry,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, parent_id, name) VALUES ('dir-bar', NULL, 'bar')\",\n                &[],\n            )\n            .await\n            .expect(\"directory insert should succeed\");\n        session\n            .execute(\"INSERT INTO lix_file (path) VALUES ('/foo')\", &[])\n            .await\n            .expect(\"file insert should succeed\");\n\n        let error = session\n            .execute(\n                \"UPDATE lix_directory SET name = 'foo' WHERE id = 'dir-bar'\",\n                &[],\n            )\n            .await\n            .expect_err(\"directory rename should conflict with file\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n    }\n);\n\nsimulation_test!(\n    lix_directory_path_insert_rejects_dot_segments,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        for path in [\"/a/../b/\", \"/a/%2e%2e/b/\", \"/a/./b/\"] {\n            let error = session\n                .execute(\n                    \"INSERT INTO lix_directory (path) VALUES ($1)\",\n    
                &[Value::Text(path.to_string())],\n                )\n                .await\n                .expect_err(\"directory path insert should reject dot segments\");\n\n            assert_eq!(error.code, LixError::CODE_INVALID_PARAM);\n        }\n\n        let result = session\n            .execute(\"SELECT path FROM lix_directory WHERE path = '/b/'\", &[])\n            .await\n            .expect(\"directory read should succeed\");\n        assert_eq!(result.len(), 0);\n    }\n);\n\nsimulation_test!(\n    lix_directory_update_rejects_parent_cycle,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, parent_id, name) VALUES \\\n                 ('dir-parent', NULL, 'parent'), \\\n                 ('dir-child', 'dir-parent', 'child')\",\n                &[],\n            )\n            .await\n            .expect(\"directory tree insert should succeed\");\n\n        let self_cycle = session\n            .execute(\n                \"UPDATE lix_directory SET parent_id = id WHERE id = 'dir-parent'\",\n                &[],\n            )\n            .await\n            .expect_err(\"self parent must be rejected\");\n        assert_eq!(self_cycle.code, LixError::CODE_CONSTRAINT_VIOLATION);\n\n        let descendant_cycle = session\n            .execute(\n                \"UPDATE lix_directory SET parent_id = 'dir-child' WHERE id = 'dir-parent'\",\n                &[],\n            )\n            .await\n            .expect_err(\"parenting a directory under its descendant must be rejected\");\n        assert_eq!(descendant_cycle.code, LixError::CODE_CONSTRAINT_VIOLATION);\n    }\n);\n\nsimulation_test!(\n    
lix_directory_descriptor_writes_use_canonical_path_segment_validation,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\"INSERT INTO lix_directory (path) VALUES ('/Café/')\", &[])\n            .await\n            .expect(\"canonical directory insert should succeed\");\n\n        let nfc_collision = session\n            .execute(\n                \"INSERT INTO lix_state (\\\n                 entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n                 ) VALUES (lix_json('[\\\"dir-cafe-decomposed\\\"]'), 'lix_directory_descriptor', NULL, $1, false, false)\",\n                &[Value::Json(json!({\n                    \"id\": \"dir-cafe-decomposed\",\n                    \"parent_id\": null,\n                    \"name\": \"Cafe\\u{301}\",\n                }))],\n            )\n            .await\n            .expect_err(\"decomposed descriptor name should normalize before uniqueness\");\n        assert_eq!(nfc_collision.code, LixError::CODE_UNIQUE);\n\n        let zero_width = session\n            .execute(\n                \"INSERT INTO lix_state (\\\n                 entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n                 ) VALUES (lix_json('[\\\"dir-zero-width\\\"]'), 'lix_directory_descriptor', NULL, $1, false, false)\",\n                &[Value::Json(json!({\n                    \"id\": \"dir-zero-width\",\n                    \"parent_id\": null,\n                    \"name\": \"zero\\u{200D}width\",\n                }))],\n            )\n            .await\n            .expect_err(\"descriptor name should reject zero-width characters\");\n        assert_eq!(zero_width.code, \"LIX_ERROR_PATH_INVALID_SEGMENT_CODE_POINT\");\n    }\n);\n\nsimulation_test!(\n    
lix_state_insert_rejects_directory_parent_cycle,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_state (\\\n                 entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n                 ) VALUES \\\n                 (lix_json('[\\\"dir-a\\\"]'), 'lix_directory_descriptor', NULL, lix_json('{\\\"id\\\":\\\"dir-a\\\",\\\"parent_id\\\":\\\"dir-b\\\",\\\"name\\\":\\\"a\\\"}'), false, false), \\\n                 (lix_json('[\\\"dir-b\\\"]'), 'lix_directory_descriptor', NULL, lix_json('{\\\"id\\\":\\\"dir-b\\\",\\\"parent_id\\\":\\\"dir-a\\\",\\\"name\\\":\\\"b\\\"}'), false, false)\",\n                &[],\n            )\n            .await\n            .expect_err(\"descriptor cycles staged through lix_state must be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_CONSTRAINT_VIOLATION);\n    }\n);\n\nsimulation_test!(\n    lix_state_insert_rejects_directory_file_namespace_conflict,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\"INSERT INTO lix_file (path) VALUES ('/foo')\", &[])\n            .await\n            .expect(\"file insert should succeed\");\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_state (\\\n                 entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n                 ) VALUES \\\n                 (lix_json('[\\\"dir-foo\\\"]'), 
'lix_directory_descriptor', NULL, lix_json('{\\\"id\\\":\\\"dir-foo\\\",\\\"parent_id\\\":null,\\\"name\\\":\\\"foo\\\"}'), false, false)\",\n                &[],\n            )\n            .await\n            .expect_err(\"lix_state directory descriptor must not bypass filesystem namespace\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n        assert!(\n            error.message.contains(\"filesystem namespace conflict\"),\n            \"expected namespace conflict error: {error}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_directory_allows_version_local_entry_matching_global_file_entry,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path, lixcol_global) \\\n                 VALUES ('global-file-foo', '/foo', true)\",\n                &[],\n            )\n            .await\n            .expect(\"global file insert should succeed\");\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) VALUES ('version-dir-foo', '/foo/')\",\n                &[],\n            )\n            .await\n            .expect(\"version-local directory should be a distinct storage namespace\");\n\n        let global_file = session\n            .execute(\n                \"SELECT id, path, lixcol_version_id, lixcol_global \\\n                 FROM lix_file_by_version \\\n                 WHERE id = 'global-file-foo' AND lixcol_version_id = 'global'\",\n                &[],\n            )\n            .await\n            .expect(\"global file should query\");\n        let version_directory = session\n            .execute(\n                \"SELECT id, path \\\n                 FROM lix_directory 
\\\n                 WHERE id = 'version-dir-foo'\",\n                &[],\n            )\n            .await\n            .expect(\"version directory should query\");\n\n        assert_eq!(global_file.len(), 1);\n        assert_eq!(version_directory.len(), 1);\n    }\n);\n\nsimulation_test!(\n    lix_directory_delete_recursively_deletes_tree,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let file_result = session\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) \\\n             VALUES ('file-readme', '/docs/guides/readme.md', X'68656C6C6F')\",\n                &[],\n            )\n            .await\n            .expect(\"file insert should succeed\");\n        assert_eq!(file_result, ExecuteResult::from_rows_affected(1));\n\n        let directory_ids_result = session\n            .execute(\n                \"SELECT id \\\n             FROM lix_directory \\\n             WHERE path IN ('/docs/', '/docs/guides/') \\\n             ORDER BY path\",\n                &[],\n            )\n            .await\n            .expect(\"directory id read before delete should succeed\");\n        let directory_id_rows = directory_ids_result;\n        assert_eq!(directory_id_rows.len(), 2);\n        let directory_ids = directory_id_rows\n            .rows()\n            .iter()\n            .map(|row| {\n                let Value::Text(id) = &row.values()[0] else {\n                    panic!(\"directory id should be text\");\n                };\n                id.clone()\n            })\n            .collect::<Vec<_>>();\n\n        let delete_result = session\n            .execute(\"DELETE FROM lix_directory WHERE path = '/docs/'\", &[])\n            .await\n            .expect(\"recursive 
directory delete should succeed\");\n        assert_eq!(delete_result, ExecuteResult::from_rows_affected(3));\n\n        let directories_result = session\n            .execute(\n                \"SELECT id, path \\\n             FROM lix_directory \\\n             WHERE path IN ('/docs/', '/docs/guides/') \\\n             ORDER BY path\",\n                &[],\n            )\n            .await\n            .expect(\"directory read after delete should succeed\");\n        let directory_rows = directories_result;\n        assert_eq!(\n            directory_rows.len(),\n            0,\n            \"recursive directory delete should delete the root and child directories\"\n        );\n\n        let file_result = session\n            .execute(\n                \"SELECT id, path \\\n             FROM lix_file \\\n             WHERE path = '/docs/guides/readme.md'\",\n                &[],\n            )\n            .await\n            .expect(\"file read after delete should succeed\");\n        let file_rows = file_result;\n        assert_eq!(\n            file_rows.len(),\n            0,\n            \"recursive directory delete should delete nested files\"\n        );\n\n        let state_result = session\n            .execute(\n                &format!(\n                    \"SELECT entity_id, schema_key \\\n                 FROM lix_state \\\n                 WHERE entity_id IN (lix_json('[\\\"{}\\\"]'), lix_json('[\\\"{}\\\"]'), lix_json('[\\\"file-readme\\\"]')) \\\n                 ORDER BY schema_key, entity_id\",\n                    directory_ids[0], directory_ids[1]\n                ),\n                &[],\n            )\n            .await\n            .expect(\"state read after delete should succeed\");\n        let state_rows = state_result;\n        assert_eq!(\n            state_rows.len(),\n            0,\n            \"recursive directory delete should make descriptor/blob-ref state rows not visible\"\n        );\n    }\n);\n\nsimulation_test!(\n    
lix_directory_by_version_expands_global_rows,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path, lixcol_global, lixcol_untracked) \\\n                 VALUES ('dir-global-overlay', '/shared/', true, false)\",\n                &[],\n            )\n            .await\n            .expect(\"global directory insert should succeed\");\n\n        let result = session\n            .execute(\n                \"SELECT id, path, lixcol_version_id, lixcol_global, lixcol_untracked \\\n                 FROM lix_directory_by_version \\\n                 WHERE id = 'dir-global-overlay' \\\n                 ORDER BY lixcol_version_id\",\n                &[],\n            )\n            .await\n            .expect(\"directory by-version read should succeed\");\n        assert_rows_eq(\n            result,\n            vec![\n                vec![\n                    Value::Text(\"dir-global-overlay\".to_string()),\n                    Value::Text(\"/shared/\".to_string()),\n                    Value::Text(sim.main_version_id().to_string()),\n                    Value::Boolean(true),\n                    Value::Boolean(false),\n                ],\n                vec![\n                    Value::Text(\"dir-global-overlay\".to_string()),\n                    Value::Text(\"/shared/\".to_string()),\n                    Value::Text(\"global\".to_string()),\n                    Value::Boolean(true),\n                    Value::Boolean(false),\n                ],\n            ],\n        );\n    }\n);\n"
  },
  {
    "path": "packages/engine/tests/sql/lix_directory_history.rs",
    "content": "use lix_engine::Value;\nuse serde_json::json;\n\nuse super::assert_rows_eq;\n\nsimulation_test!(\n    lix_directory_history_reads_paths_from_commit_graph,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) \\\n                 VALUES ('history-dir-docs', '/docs/')\",\n                &[],\n            )\n            .await\n            .expect(\"root directory insert should succeed\");\n        let first_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"first directory commit head should load\")\n            .expect(\"first directory commit head should exist\");\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) \\\n                 VALUES ('history-dir-guides', '/docs/guides/')\",\n                &[],\n            )\n            .await\n            .expect(\"nested directory insert should succeed\");\n        let second_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"second directory commit head should load\")\n            .expect(\"second directory commit head should exist\");\n\n        assert_ne!(first_commit_id, second_commit_id);\n\n        let result = session\n            .execute(\n                &format!(\n                    \"SELECT id, path, parent_id, name, lixcol_start_commit_id, lixcol_depth \\\n                     FROM lix_directory_history \\\n                     WHERE lixcol_start_commit_id = '{second_commit_id}' \\\n                       AND id IN ('history-dir-docs', 'history-dir-guides') \\\n   
                  ORDER BY lixcol_depth, id\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"directory history read should succeed\");\n\n        assert_rows_eq(\n            result,\n            vec![\n                vec![\n                    Value::Text(\"history-dir-guides\".to_string()),\n                    Value::Text(\"/docs/guides/\".to_string()),\n                    Value::Text(\"history-dir-docs\".to_string()),\n                    Value::Text(\"guides\".to_string()),\n                    Value::Text(second_commit_id.clone()),\n                    Value::Integer(0),\n                ],\n                vec![\n                    Value::Text(\"history-dir-docs\".to_string()),\n                    Value::Text(\"/docs/\".to_string()),\n                    Value::Null,\n                    Value::Text(\"docs\".to_string()),\n                    Value::Text(second_commit_id.clone()),\n                    Value::Integer(1),\n                ],\n            ],\n        );\n\n        let snapshot_result = session\n            .execute(\n                &format!(\n                    \"SELECT lixcol_snapshot_content \\\n                     FROM lix_directory_history \\\n                     WHERE lixcol_start_commit_id = '{second_commit_id}' \\\n                       AND id = 'history-dir-guides' \\\n                       AND lixcol_depth = 0\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"directory history descriptor snapshot should be selectable\");\n        let snapshot = snapshot_result.rows()[0]\n            .get::<Value>(\"lixcol_snapshot_content\")\n            .expect(\"snapshot_content should be present\");\n        let Value::Json(snapshot) = snapshot else {\n            panic!(\"snapshot_content should be semantic JSON, got {snapshot:?}\");\n        };\n        assert_eq!(snapshot[\"parent_id\"], json!(\"history-dir-docs\"));\n        
assert_eq!(snapshot[\"name\"], json!(\"guides\"));\n    }\n);\n\nsimulation_test!(\n    lix_directory_history_requires_start_commit_id,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\"SELECT id FROM lix_directory_history\", &[])\n            .await\n            .expect_err(\"directory history queries must provide start commit\");\n\n        assert!(\n            error\n                .to_string()\n                .contains(\"requires a lixcol_start_commit_id filter\"),\n            \"unexpected error: {error}\"\n        );\n        assert!(\n            error\n                .hint()\n                .is_some_and(|hint| hint.contains(\"WHERE lixcol_start_commit_id\")),\n            \"unexpected error: {error}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_directory_history_records_recursive_delete,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) \\\n                 VALUES ('history-delete-docs', '/docs/')\",\n                &[],\n            )\n            .await\n            .expect(\"root directory insert should succeed\");\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) \\\n                 VALUES ('history-delete-guides', '/docs/guides/')\",\n                &[],\n            )\n            .await\n            .expect(\"nested directory insert should succeed\");\n\n        
session\n            .execute(\n                \"DELETE FROM lix_directory WHERE id = 'history-delete-docs'\",\n                &[],\n            )\n            .await\n            .expect(\"recursive directory delete should succeed\");\n        let delete_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"delete commit head should load\")\n            .expect(\"delete commit head should exist\");\n\n        let result = session\n            .execute(\n                &format!(\n                    \"SELECT id, path, name, lixcol_snapshot_content, lixcol_schema_key, lixcol_start_commit_id, lixcol_depth \\\n                     FROM lix_directory_history \\\n                     WHERE lixcol_start_commit_id = '{delete_commit_id}' \\\n                       AND lixcol_entity_id IN (lix_json('[\\\"history-delete-docs\\\"]'), lix_json('[\\\"history-delete-guides\\\"]')) \\\n                       AND lixcol_depth = 0 \\\n                     ORDER BY lixcol_entity_id\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"directory delete history read should succeed\");\n\n        assert_rows_eq(\n            result,\n            vec![\n                vec![\n                    Value::Text(\"history-delete-docs\".to_string()),\n                    Value::Null,\n                    Value::Null,\n                    Value::Null,\n                    Value::Text(\"lix_directory_descriptor\".to_string()),\n                    Value::Text(delete_commit_id.clone()),\n                    Value::Integer(0),\n                ],\n                vec![\n                    Value::Text(\"history-delete-guides\".to_string()),\n                    Value::Null,\n                    Value::Null,\n                    Value::Null,\n                    Value::Text(\"lix_directory_descriptor\".to_string()),\n                    Value::Text(delete_commit_id),\n                    
Value::Integer(0),\n                ],\n            ],\n        );\n    }\n);\n"
  },
  {
    "path": "packages/engine/tests/sql/lix_file.rs",
    "content": "use lix_engine::ExecuteResult;\nuse lix_engine::LixError;\nuse lix_engine::Value;\n\nuse super::assert_rows_eq;\n\nsimulation_test!(\n    lix_file_read_rejects_public_path_inside_scalar_function,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"SELECT id FROM lix_file WHERE lower(path) = '/readme.md'\",\n                &[],\n            )\n            .await\n            .expect_err(\"public path column should not be hidden inside scalar functions\");\n\n        assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL);\n        assert!(error.message.contains(\"public column 'path'\"));\n    }\n);\n\nsimulation_test!(\n    lix_file_by_version_read_rejects_dynamic_version_id_operand,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"SELECT id FROM lix_file_by_version WHERE lixcol_version_id = lower('main')\",\n                &[],\n            )\n            .await\n            .expect_err(\"public version id predicate should only accept literal/param operands\");\n\n        assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL);\n        assert!(error.message.contains(\"public column 'lixcol_version_id'\"));\n    }\n);\n\nsimulation_test!(\n    lix_file_path_insert_rejects_overlong_paths_and_segments,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n       
         .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let long_segment = \"a\".repeat(256);\n        let segment_error = session\n            .execute(\n                \"INSERT INTO lix_file (id, path) VALUES ('file-long-segment', $1)\",\n                &[Value::Text(format!(\"/{long_segment}\"))],\n            )\n            .await\n            .expect_err(\"overlong file path segment should be rejected\");\n        assert_eq!(segment_error.code, LixError::CODE_INVALID_PARAM);\n        assert!(segment_error.message.contains(\"path segment is too long\"));\n\n        let long_path = format!(\"/{}\", [\"abcd\"; 820].join(\"/\"));\n        let path_error = session\n            .execute(\n                \"INSERT INTO lix_file (id, path) VALUES ('file-long-path', $1)\",\n                &[Value::Text(long_path)],\n            )\n            .await\n            .expect_err(\"overlong file path should be rejected\");\n        assert_eq!(path_error.code, LixError::CODE_INVALID_PARAM);\n        assert!(path_error.message.contains(\"path is too long\"));\n\n        let encoded_segment_at_limit = \"%61\".repeat(255);\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path) VALUES ('file-encoded-limit', $1)\",\n                &[Value::Text(format!(\"/{encoded_segment_at_limit}\"))],\n            )\n            .await\n            .expect(\"percent-encoded segment should be measured after canonicalization\");\n\n        let encoded_segment_over_limit = \"%61\".repeat(256);\n        let encoded_segment_error = session\n            .execute(\n                \"INSERT INTO lix_file (id, path) VALUES ('file-encoded-over-limit', $1)\",\n                &[Value::Text(format!(\"/{encoded_segment_over_limit}\"))],\n            )\n            .await\n            .expect_err(\"overlong canonical segment should be rejected\");\n        
assert_eq!(encoded_segment_error.code, LixError::CODE_INVALID_PARAM);\n        assert!(encoded_segment_error\n            .message\n            .contains(\"path segment is too long\"));\n\n        let huge_path = format!(\"/{}\", \"a\".repeat(1024 * 1024));\n        let huge_error = session\n            .execute(\n                \"INSERT INTO lix_file (id, path) VALUES ('file-huge-path', $1)\",\n                &[Value::Text(huge_path)],\n            )\n            .await\n            .expect_err(\"huge path input should be rejected without runtime internals\");\n        assert_eq!(huge_error.code, LixError::CODE_INVALID_PARAM);\n        assert!(huge_error.message.contains(\"path input is too long\"));\n    }\n);\n\nsimulation_test!(\n    lix_file_path_insert_rejects_percent_encoded_forbidden_code_points,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        for (id, path, expected_reason) in [\n            (\n                \"file-percent-nul\",\n                \"/docs/%00evil.txt\",\n                \"path must not contain a NUL byte\",\n            ),\n            (\n                \"file-percent-bidi\",\n                \"/docs/%E2%80%AEevil.txt\",\n                \"path segment contains a character that is not allowed\",\n            ),\n        ] {\n            let error = session\n                .execute(\n                    &format!(\"INSERT INTO lix_file (id, path) VALUES ('{id}', '{path}')\"),\n                    &[],\n                )\n                .await\n                .expect_err(\"percent-encoded forbidden path code point should be rejected\");\n            assert_eq!(error.code, LixError::CODE_INVALID_PARAM);\n            assert!(error.message.contains(expected_reason), \"{error:?}\");\n        
}\n    }\n);\n\nsimulation_test!(\n    lix_file_path_insert_preserves_opaque_file_name_segments,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        for (id, path) in [\n            (\"file-foo-dot\", \"/foo.\"),\n            (\"file-foo-dot-dot\", \"/foo..\"),\n            (\"file-foo-dot-dot-dot\", \"/foo...\"),\n            (\"file-archive\", \"/archive.tar.gz\"),\n            (\"file-dotenv\", \"/.env\"),\n            (\"file-percent-dot\", \"/docs/%2Ehidden\"),\n        ] {\n            session\n                .execute(\n                    &format!(\"INSERT INTO lix_file (id, path) VALUES ('{id}', '{path}')\"),\n                    &[],\n                )\n                .await\n                .expect(\"opaque file name insert should succeed\");\n        }\n\n        let result = session\n            .execute(\n                \"SELECT id, path, name \\\n                 FROM lix_file \\\n                 WHERE id IN (\\\n                   'file-foo-dot',\\\n                   'file-foo-dot-dot',\\\n                   'file-foo-dot-dot-dot',\\\n                   'file-archive',\\\n                   'file-dotenv',\\\n                   'file-percent-dot'\\\n                 ) \\\n                 ORDER BY id\",\n                &[],\n            )\n            .await\n            .expect(\"file read should succeed\");\n\n        assert_rows_eq(\n            result,\n            vec![\n                vec![\n                    Value::Text(\"file-archive\".to_string()),\n                    Value::Text(\"/archive.tar.gz\".to_string()),\n                    Value::Text(\"archive.tar.gz\".to_string()),\n                ],\n                vec![\n                    Value::Text(\"file-dotenv\".to_string()),\n       
             Value::Text(\"/.env\".to_string()),\n                    Value::Text(\".env\".to_string()),\n                ],\n                vec![\n                    Value::Text(\"file-foo-dot\".to_string()),\n                    Value::Text(\"/foo.\".to_string()),\n                    Value::Text(\"foo.\".to_string()),\n                ],\n                vec![\n                    Value::Text(\"file-foo-dot-dot\".to_string()),\n                    Value::Text(\"/foo..\".to_string()),\n                    Value::Text(\"foo..\".to_string()),\n                ],\n                vec![\n                    Value::Text(\"file-foo-dot-dot-dot\".to_string()),\n                    Value::Text(\"/foo...\".to_string()),\n                    Value::Text(\"foo...\".to_string()),\n                ],\n                vec![\n                    Value::Text(\"file-percent-dot\".to_string()),\n                    Value::Text(\"/docs/.hidden\".to_string()),\n                    Value::Text(\".hidden\".to_string()),\n                ],\n            ],\n        );\n    }\n);\n\nsimulation_test!(\n    lix_file_descriptor_shape_insert_uses_name_as_full_basename,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, directory_id, name) \\\n                 VALUES ('file-descriptor-dot', NULL, 'foo.')\",\n                &[],\n            )\n            .await\n            .expect(\"descriptor-shaped insert should accept full opaque basename\");\n\n        let result = session\n            .execute(\n                \"SELECT id, path, name \\\n                 FROM lix_file \\\n                 WHERE id = 'file-descriptor-dot'\",\n                &[],\n            )\n 
           .await\n            .expect(\"file read should succeed\");\n\n        assert_rows_eq(\n            result,\n            vec![vec![\n                Value::Text(\"file-descriptor-dot\".to_string()),\n                Value::Text(\"/foo.\".to_string()),\n                Value::Text(\"foo.\".to_string()),\n            ]],\n        );\n    }\n);\n\nsimulation_test!(\n    lix_file_extension_column_is_not_writable_identity,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, directory_id, name, extension) \\\n                 VALUES ('file-extension-write', NULL, 'readme', 'md')\",\n                &[],\n            )\n            .await\n            .expect_err(\"extension should not be accepted as writable file identity\");\n    }\n);\n\nsimulation_test!(\n    lix_file_namespace_treats_trailing_dot_names_as_distinct,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path) VALUES ('file-foo', '/foo')\",\n                &[],\n            )\n            .await\n            .expect(\"plain file insert should succeed\");\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path) VALUES ('file-foo-dot', '/foo.')\",\n                &[],\n            )\n            .await\n            .expect(\"trailing-dot file insert should be distinct from plain name\");\n\n        let result = session\n       
     .execute(\n                \"SELECT id, path, name \\\n                 FROM lix_file \\\n                 WHERE id IN ('file-foo', 'file-foo-dot') \\\n                 ORDER BY id\",\n                &[],\n            )\n            .await\n            .expect(\"file read should succeed\");\n\n        assert_rows_eq(\n            result,\n            vec![\n                vec![\n                    Value::Text(\"file-foo\".to_string()),\n                    Value::Text(\"/foo\".to_string()),\n                    Value::Text(\"foo\".to_string()),\n                ],\n                vec![\n                    Value::Text(\"file-foo-dot\".to_string()),\n                    Value::Text(\"/foo.\".to_string()),\n                    Value::Text(\"foo.\".to_string()),\n                ],\n            ],\n        );\n    }\n);\n\nsimulation_test!(\n    lix_file_insert_reads_path_data_and_parent_dirs,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let file_result = session\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) \\\n             VALUES ('file-readme', '/docs/guides/readme.md', X'68656C6C6F')\",\n                &[],\n            )\n            .await\n            .expect(\"file insert should succeed\");\n        assert_eq!(file_result, ExecuteResult::from_rows_affected(1));\n\n        let result = session\n            .execute(\n                \"SELECT id, path, data, lixcol_schema_key \\\n             FROM lix_file \\\n             WHERE id = 'file-readme'\",\n                &[],\n            )\n            .await\n            .expect(\"file read should succeed\");\n        let row_set = result;\n        assert_eq!(row_set.len(), 1);\n        assert_eq!(\n            
row_set.rows()[0].values(),\n            &[\n                Value::Text(\"file-readme\".to_string()),\n                Value::Text(\"/docs/guides/readme.md\".to_string()),\n                Value::Blob(b\"hello\".to_vec()),\n                Value::Text(\"lix_file_descriptor\".to_string()),\n            ]\n        );\n\n        let staged_state_result = session\n            .execute(\n                \"SELECT entity_id, schema_key \\\n             FROM lix_state \\\n             WHERE entity_id = lix_json('[\\\"file-readme\\\"]') \\\n             ORDER BY schema_key, entity_id\",\n                &[],\n            )\n            .await\n            .expect(\"filesystem state read should succeed\");\n        let staged_state_rows = staged_state_result;\n        assert_eq!(\n            staged_state_rows.len(),\n            2,\n            \"file path insert should stage one file descriptor and one blob ref for the file\"\n        );\n\n        let directory_result = session\n            .execute(\n                \"SELECT path \\\n             FROM lix_directory \\\n             WHERE path IN ('/docs/', '/docs/guides/') \\\n             ORDER BY path\",\n                &[],\n            )\n            .await\n            .expect(\"directory read after file insert should succeed\");\n        let directory_rows = directory_result;\n        assert_eq!(\n            directory_rows.len(),\n            2,\n            \"file path insert should stage exactly the two missing parent directories\"\n        );\n    }\n);\n\nsimulation_test!(lix_file_insert_applies_defaulted_id, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    session\n        .execute(\n            \"INSERT INTO lix_directory (id, parent_id, name) \\\n             VALUES ('dir-docs', NULL, 'docs')\",\n   
         &[],\n        )\n        .await\n        .expect(\"directory insert should succeed\");\n\n    let insert_result = session\n        .execute(\n            \"INSERT INTO lix_file (directory_id, name) \\\n             VALUES ('dir-docs', 'readme.md')\",\n            &[],\n        )\n        .await\n        .expect(\"file insert should apply defaulted id\");\n    assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n    let result = session\n        .execute(\n            \"SELECT id, path, directory_id, name \\\n             FROM lix_file \\\n             WHERE path = '/docs/readme.md'\",\n            &[],\n        )\n        .await\n        .expect(\"file read should succeed\");\n    let row_set = result;\n    assert_eq!(row_set.len(), 1);\n    let values = row_set.rows()[0].values();\n    let [Value::Text(id), Value::Text(path), Value::Text(directory_id), Value::Text(name)] = values\n    else {\n        panic!(\"expected generated file row, got {values:?}\");\n    };\n    assert!(!id.is_empty(), \"defaulted file id should be non-empty\");\n    assert_eq!(path, \"/docs/readme.md\");\n    assert_eq!(directory_id, \"dir-docs\");\n    assert_eq!(name, \"readme.md\");\n});\n\nsimulation_test!(\n    lix_file_path_insert_applies_defaulted_id,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let insert_result = session\n            .execute(\n                \"INSERT INTO lix_file (path) VALUES ('/docs/readme.md')\",\n                &[],\n            )\n            .await\n            .expect(\"file path insert should apply defaulted id\");\n        assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n        let result = session\n            .execute(\n                \"SELECT id, path, 
name \\\n             FROM lix_file \\\n             WHERE path = '/docs/readme.md'\",\n                &[],\n            )\n            .await\n            .expect(\"file read should succeed\");\n        let row_set = result;\n        assert_eq!(row_set.len(), 1);\n        let values = row_set.rows()[0].values();\n        let [Value::Text(id), Value::Text(path), Value::Text(name)] = values else {\n            panic!(\"expected generated file path row, got {values:?}\");\n        };\n        assert!(!id.is_empty(), \"defaulted file id should be non-empty\");\n        assert_eq!(path, \"/docs/readme.md\");\n        assert_eq!(name, \"readme.md\");\n    }\n);\n\nsimulation_test!(\n    lix_file_path_data_insert_applies_defaulted_id,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let insert_result = session\n            .execute(\n                \"INSERT INTO lix_file (path, data) VALUES ('/docs/readme.md', X'68656C6C6F')\",\n                &[],\n            )\n            .await\n            .expect(\"file path data insert should apply defaulted id\");\n        assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n        let result = session\n            .execute(\n                \"SELECT id, path, data \\\n             FROM lix_file \\\n             WHERE path = '/docs/readme.md'\",\n                &[],\n            )\n            .await\n            .expect(\"file read should succeed\");\n        let row_set = result;\n        assert_eq!(row_set.len(), 1);\n        let values = row_set.rows()[0].values();\n        let [Value::Text(id), Value::Text(path), Value::Blob(data)] = values else {\n            panic!(\"expected generated file data row, got {values:?}\");\n        };\n        
assert!(!id.is_empty(), \"defaulted file id should be non-empty\");\n        assert_eq!(path, \"/docs/readme.md\");\n        assert_eq!(data, b\"hello\");\n    }\n);\n\nsimulation_test!(lix_file_insert_rejects_null_data, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    let error = session\n        .execute(\n            \"INSERT INTO lix_file (id, path, data) \\\n             VALUES ('null-data-file', '/null.bin', NULL)\",\n            &[],\n        )\n        .await\n        .expect_err(\"explicit NULL data should be rejected\");\n\n    assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH);\n\n    let parameter_error = session\n        .execute(\n            \"INSERT INTO lix_file (id, path, data) \\\n             VALUES ('null-param-data-file', '/null-param.bin', $1)\",\n            &[Value::Null],\n        )\n        .await\n        .expect_err(\"parameterized NULL data should be rejected\");\n\n    assert_eq!(parameter_error.code, LixError::CODE_TYPE_MISMATCH);\n\n    let result = session\n        .execute(\n            \"SELECT id FROM lix_file \\\n             WHERE id IN ('null-data-file', 'null-param-data-file')\",\n            &[],\n        )\n        .await\n        .expect(\"file read should succeed\");\n    assert_eq!(result.len(), 0);\n});\n\nsimulation_test!(\n    lix_file_insert_rejects_non_binary_data_literals,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        for (id, sql) in [\n            (\n                \"text-data-file\",\n                \"INSERT INTO lix_file (id, path, data) \\\n          
       VALUES ('text-data-file', '/text.bin', 'hello')\",\n            ),\n            (\n                \"int-data-file\",\n                \"INSERT INTO lix_file (id, path, data) \\\n                 VALUES ('int-data-file', '/int.bin', 12345)\",\n            ),\n            (\n                \"float-data-file\",\n                \"INSERT INTO lix_file (id, path, data) \\\n                 VALUES ('float-data-file', '/float.bin', 1.5)\",\n            ),\n            (\n                \"bool-data-file\",\n                \"INSERT INTO lix_file (id, path, data) \\\n                 VALUES ('bool-data-file', '/bool.bin', true)\",\n            ),\n        ] {\n            let error = session\n                .execute(sql, &[])\n                .await\n                .expect_err(\"non-binary data literal should be rejected\");\n\n            assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH, \"{id}\");\n        }\n\n        let result = session\n            .execute(\n                \"SELECT id FROM lix_file \\\n                 WHERE id IN (\\\n                   'text-data-file',\\\n                   'int-data-file',\\\n                   'float-data-file',\\\n                   'bool-data-file'\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"file read should succeed\");\n        assert_eq!(result.len(), 0);\n    }\n);\n\nsimulation_test!(\n    lix_file_insert_rejects_non_binary_data_from_select,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) \\\n                 SELECT 'select-text-data-file', '/select-text.bin', 'hello'\",\n                &[],\n         
   )\n            .await\n            .expect_err(\"non-binary data from SELECT should be rejected\");\n        assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH);\n\n        let result = session\n            .execute(\n                \"SELECT id FROM lix_file WHERE id = 'select-text-data-file'\",\n                &[],\n            )\n            .await\n            .expect(\"file read should succeed\");\n        assert_eq!(result.len(), 0);\n    }\n);\n\nsimulation_test!(\n    lix_file_insert_rejects_non_binary_data_parameters,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        for (id, value) in [\n            (\"text-param-data-file\", Value::Text(\"hello\".to_string())),\n            (\"int-param-data-file\", Value::Integer(12345)),\n        ] {\n            let error = session\n                .execute(\n                    &format!(\n                        \"INSERT INTO lix_file (id, path, data) \\\n                         VALUES ('{id}', '/{id}.bin', $1)\"\n                    ),\n                    &[value],\n                )\n                .await\n                .expect_err(\"non-binary data parameter should be rejected\");\n            assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH, \"{id}\");\n        }\n    }\n);\n\nsimulation_test!(lix_file_insert_accepts_empty_blob_data, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    let insert_result = session\n        .execute(\n            \"INSERT INTO lix_file (id, path, data) \\\n             VALUES ('empty-data-file', '/empty.bin', X'')\",\n    
        &[],\n        )\n        .await\n        .expect(\"empty blob data should be accepted\");\n    assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n    let result = session\n        .execute(\n            \"SELECT data FROM lix_file WHERE id = 'empty-data-file'\",\n            &[],\n        )\n        .await\n        .expect(\"file read should succeed\");\n    assert_eq!(result.len(), 1);\n    assert_eq!(result.rows()[0].values(), &[Value::Blob(Vec::new())]);\n});\n\nsimulation_test!(\n    lix_file_path_insert_rejects_duplicate_root_path,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (path, data) VALUES ('/x.bin', $1)\",\n                &[Value::Blob(vec![1])],\n            )\n            .await\n            .expect(\"first file path insert should succeed\");\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_file (path, data) VALUES ('/x.bin', $1)\",\n                &[Value::Blob(vec![2])],\n            )\n            .await\n            .expect_err(\"duplicate file path insert should be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n    }\n);\n\nsimulation_test!(\n    lix_file_insert_duplicate_id_with_data_reports_lix_file,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) \\\n                 VALUES ('same-file', '/a.bin', 
X'01')\",\n                &[],\n            )\n            .await\n            .expect(\"first file insert should succeed\");\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) \\\n                 VALUES ('same-file', '/b.bin', X'02')\",\n                &[],\n            )\n            .await\n            .expect_err(\"duplicate file id insert should be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n        assert!(\n            error.message.contains(\"table 'lix_file'\")\n                && error.message.contains(\"id 'same-file'\")\n                && !error.message.contains(\"lix_binary_blob_ref\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_file_insert_duplicate_id_without_data_reports_lix_file,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path) VALUES ('same-file', '/a.bin')\",\n                &[],\n            )\n            .await\n            .expect(\"first file insert should succeed\");\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_file (id, path) VALUES ('same-file', '/b.bin')\",\n                &[],\n            )\n            .await\n            .expect_err(\"duplicate file id insert should be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n        assert!(\n            error.message.contains(\"table 'lix_file'\")\n                && error.message.contains(\"id 'same-file'\")\n                && !error.message.contains(\"lix_file_descriptor\"),\n            \"unexpected error: {error:?}\"\n        );\n    
}\n);\n\nsimulation_test!(\n    lix_file_insert_duplicate_id_in_same_batch_reports_lix_file,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) VALUES \\\n                 ('same-file', '/a.bin', X'01'), \\\n                 ('same-file', '/b.bin', X'02')\",\n                &[],\n            )\n            .await\n            .expect_err(\"same-batch duplicate file id insert should be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n        assert!(\n            error.message.contains(\"table 'lix_file'\")\n                && error.message.contains(\"id 'same-file'\")\n                && !error.message.contains(\"lix_binary_blob_ref\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_file_by_version_insert_duplicate_id_reports_lix_file_by_version,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n        let version_id = sim.main_version_id();\n\n        session\n            .execute(\n                &format!(\n                    \"INSERT INTO lix_file_by_version \\\n                     (id, path, data, lixcol_version_id) \\\n                     VALUES ('same-file', '/a.bin', X'01', '{version_id}')\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"first by-version file insert should succeed\");\n\n        let error = session\n            .execute(\n            
    &format!(\n                    \"INSERT INTO lix_file_by_version \\\n                     (id, path, data, lixcol_version_id) \\\n                     VALUES ('same-file', '/b.bin', X'02', '{version_id}')\"\n                ),\n                &[],\n            )\n            .await\n            .expect_err(\"duplicate by-version file id insert should be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n        assert!(\n            error.message.contains(\"table 'lix_file_by_version'\")\n                && error.message.contains(\"id 'same-file'\")\n                && !error.message.contains(\"table 'lix_file':\")\n                && !error.message.contains(\"lix_binary_blob_ref\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_file_path_insert_rejects_existing_directory_entry,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\"INSERT INTO lix_directory (path) VALUES ('/foo/')\", &[])\n            .await\n            .expect(\"directory insert should succeed\");\n\n        let error = session\n            .execute(\"INSERT INTO lix_file (path) VALUES ('/foo')\", &[])\n            .await\n            .expect_err(\"file should conflict with directory at same entry name\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n        assert!(\n            error.message.contains(\"filesystem namespace conflict\"),\n            \"expected namespace conflict error: {error}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_file_path_insert_allows_extension_distinct_from_directory,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            
engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\"INSERT INTO lix_directory (path) VALUES ('/foo/')\", &[])\n            .await\n            .expect(\"directory insert should succeed\");\n        session\n            .execute(\"INSERT INTO lix_file (path) VALUES ('/foo.txt')\", &[])\n            .await\n            .expect(\"file basename foo.txt should not conflict with directory foo\");\n\n        let file_result = session\n            .execute(\"SELECT path FROM lix_file WHERE path = '/foo.txt'\", &[])\n            .await\n            .expect(\"file path query should succeed\");\n        let directory_result = session\n            .execute(\"SELECT path FROM lix_directory WHERE path = '/foo/'\", &[])\n            .await\n            .expect(\"directory path query should succeed\");\n\n        assert_eq!(file_result.len(), 1);\n        assert_eq!(directory_result.len(), 1);\n    }\n);\n\nsimulation_test!(\n    lix_file_path_insert_rejects_file_as_implicit_ancestor,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\"INSERT INTO lix_file (path) VALUES ('/foo')\", &[])\n            .await\n            .expect(\"file insert should succeed\");\n\n        let error = session\n            .execute(\"INSERT INTO lix_file (path) VALUES ('/foo/bar.txt')\", &[])\n            .await\n            .expect_err(\"implicit ancestor directory should conflict with existing file\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n    }\n);\n\nsimulation_test!(\n    lix_file_descriptor_shape_insert_rejects_existing_directory_entry,\n    |sim| async move {\n   
     let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, parent_id, name) VALUES ('dir-foo', NULL, 'foo')\",\n                &[],\n            )\n            .await\n            .expect(\"directory insert should succeed\");\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_file (id, directory_id, name) \\\n                 VALUES ('file-foo', NULL, 'foo')\",\n                &[],\n            )\n            .await\n            .expect_err(\"descriptor-shaped file insert should conflict with directory\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n    }\n);\n\nsimulation_test!(\n    lix_file_update_rejects_existing_directory_entry,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path) VALUES ('file-foo', '/foo')\",\n                &[],\n            )\n            .await\n            .expect(\"file insert should succeed\");\n        session\n            .execute(\"INSERT INTO lix_directory (path) VALUES ('/bar/')\", &[])\n            .await\n            .expect(\"directory insert should succeed\");\n\n        let error = session\n            .execute(\n                \"UPDATE lix_file SET path = '/bar' WHERE id = 'file-foo'\",\n                &[],\n            )\n            .await\n            .expect_err(\"file path update should conflict with directory\");\n\n        assert_eq!(error.code, 
LixError::CODE_UNIQUE);\n    }\n);\n\nsimulation_test!(\n    lix_file_insert_rejects_missing_directory_id,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_file (directory_id, name) \\\n                 VALUES ('missing-dir', 'readme.md')\",\n                &[],\n            )\n            .await\n            .expect_err(\"file insert should reject missing directory_id\");\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n);\n\nsimulation_test!(\n    lix_file_update_rejects_missing_directory_id_and_preserves_path,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) VALUES ('dir-docs', '/docs/')\",\n                &[],\n            )\n            .await\n            .expect(\"directory insert should succeed\");\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, directory_id, name) \\\n                 VALUES ('file-readme', 'dir-docs', 'readme.md')\",\n                &[],\n            )\n            .await\n            .expect(\"file insert should succeed\");\n\n        let error = session\n            .execute(\n                \"UPDATE lix_file SET directory_id = 'missing-dir' WHERE id = 'file-readme'\",\n                &[],\n            )\n            .await\n            .expect_err(\"file update should reject missing directory_id\");\n\n        
assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n\n        let result = session\n            .execute(\n                \"SELECT path, directory_id FROM lix_file WHERE id = 'file-readme'\",\n                &[],\n            )\n            .await\n            .expect(\"file read should succeed\");\n        assert_eq!(\n            result.rows()[0].values(),\n            &[\n                Value::Text(\"/docs/readme.md\".to_string()),\n                Value::Text(\"dir-docs\".to_string())\n            ]\n        );\n    }\n);\n\nsimulation_test!(\n    lix_file_path_insert_rejects_dot_segments,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        for path in [\"/a/../b/c.txt\", \"/a/%2e%2e/b/c.txt\", \"/a/./b/c.txt\"] {\n            let error = session\n                .execute(\n                    \"INSERT INTO lix_file (path, data) VALUES ($1, $2)\",\n                    &[Value::Text(path.to_string()), Value::Blob(Vec::new())],\n                )\n                .await\n                .expect_err(\"file path insert should reject dot segments\");\n\n            assert_eq!(error.code, LixError::CODE_INVALID_PARAM);\n            assert!(error.message.contains(\"path segment cannot be '.' 
or '..'\"));\n        }\n\n        let result = session\n            .execute(\"SELECT path FROM lix_file WHERE path = '/b/c.txt'\", &[])\n            .await\n            .expect(\"file read should succeed\");\n        assert_eq!(result.len(), 0);\n    }\n);\n\nsimulation_test!(\n    lix_file_data_insert_applies_defaulted_id,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, parent_id, name) \\\n             VALUES ('dir-docs', NULL, 'docs')\",\n                &[],\n            )\n            .await\n            .expect(\"directory insert should succeed\");\n\n        let insert_result = session\n            .execute(\n                \"INSERT INTO lix_file (directory_id, name, data) \\\n             VALUES ('dir-docs', 'readme.md', X'68656C6C6F')\",\n                &[],\n            )\n            .await\n            .expect(\"file data insert should apply defaulted id\");\n        assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n        let result = session\n            .execute(\n                \"SELECT id, path, data \\\n             FROM lix_file \\\n             WHERE path = '/docs/readme.md'\",\n                &[],\n            )\n            .await\n            .expect(\"file read should succeed\");\n        let row_set = result;\n        assert_eq!(row_set.len(), 1);\n        let values = row_set.rows()[0].values();\n        let [Value::Text(id), Value::Text(path), Value::Blob(data)] = values else {\n            panic!(\"expected generated file data row, got {values:?}\");\n        };\n        assert!(!id.is_empty(), \"defaulted file id should be non-empty\");\n        assert_eq!(path, \"/docs/readme.md\");\n 
       assert_eq!(data, b\"hello\");\n    }\n);\n\nsimulation_test!(lix_file_path_update_preserves_data, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    let insert_result = session\n        .execute(\n            \"INSERT INTO lix_file (id, path, data) \\\n             VALUES ('file-readme', '/docs/guides/readme.md', X'68656C6C6F')\",\n            &[],\n        )\n        .await\n        .expect(\"file insert should succeed\");\n    assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n    let update_result = session\n        .execute(\n            \"UPDATE lix_file \\\n             SET path = '/docs/readme-renamed.md' \\\n             WHERE id = 'file-readme'\",\n            &[],\n        )\n        .await\n        .expect(\"file path update should succeed\");\n    assert_eq!(update_result, ExecuteResult::from_rows_affected(1));\n\n    let file_result = session\n        .execute(\n            \"SELECT id, path, data \\\n             FROM lix_file \\\n             WHERE id = 'file-readme'\",\n            &[],\n        )\n        .await\n        .expect(\"file read after path update should succeed\");\n    let file_rows = file_result;\n    assert_eq!(file_rows.len(), 1);\n    assert_eq!(\n        file_rows.rows()[0].values(),\n        &[\n            Value::Text(\"file-readme\".to_string()),\n            Value::Text(\"/docs/readme-renamed.md\".to_string()),\n            Value::Blob(b\"hello\".to_vec()),\n        ]\n    );\n\n    let state_result = session\n        .execute(\n            \"SELECT entity_id, schema_key \\\n             FROM lix_state \\\n             WHERE entity_id = lix_json('[\\\"file-readme\\\"]') \\\n             ORDER BY schema_key, entity_id\",\n            &[],\n        )\n        .await\n        .expect(\"filesystem state 
read after path update should succeed\");\n    let state_rows = state_result;\n    assert_eq!(\n        state_rows.len(),\n        2,\n        \"path update should update one file descriptor and preserve one blob ref\"\n    );\n\n    let directory_result = session\n        .execute(\n            \"SELECT path \\\n             FROM lix_directory \\\n             WHERE path IN ('/docs/', '/docs/guides/') \\\n             ORDER BY path\",\n            &[],\n        )\n        .await\n        .expect(\"directory read after path update should succeed\");\n    let directory_rows = directory_result;\n    assert_eq!(\n        directory_rows.len(),\n        2,\n        \"path update should not stage an extra directory descriptor\"\n    );\n});\n\nsimulation_test!(\n    lix_file_update_rejects_null_data_and_preserves_existing_data,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) \\\n                 VALUES ('update-null-file', '/update-null.bin', X'68656C6C6F')\",\n                &[],\n            )\n            .await\n            .expect(\"file insert should succeed\");\n\n        let error = session\n            .execute(\n                \"UPDATE lix_file SET data = NULL WHERE id = 'update-null-file'\",\n                &[],\n            )\n            .await\n            .expect_err(\"explicit NULL data update should be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH);\n\n        let parameter_error = session\n            .execute(\n                \"UPDATE lix_file SET data = $1 WHERE id = 'update-null-file'\",\n                &[Value::Null],\n            )\n            .await\n            
.expect_err(\"parameterized NULL data update should be rejected\");\n\n        assert_eq!(parameter_error.code, LixError::CODE_TYPE_MISMATCH);\n\n        let result = session\n            .execute(\n                \"SELECT data FROM lix_file WHERE id = 'update-null-file'\",\n                &[],\n            )\n            .await\n            .expect(\"file read should succeed\");\n        assert_eq!(result.len(), 1);\n        assert_eq!(result.rows()[0].values(), &[Value::Blob(b\"hello\".to_vec())]);\n    }\n);\n\nsimulation_test!(\n    lix_file_update_rejects_non_binary_data_literals_and_preserves_existing_data,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        for (id, assignment) in [\n            (\"update-text-file\", \"'hello'\"),\n            (\"update-int-file\", \"12345\"),\n            (\"update-float-file\", \"1.5\"),\n            (\"update-bool-file\", \"true\"),\n        ] {\n            session\n                .execute(\n                    &format!(\n                        \"INSERT INTO lix_file (id, path, data) \\\n                         VALUES ('{id}', '/{id}.bin', X'68656C6C6F')\"\n                    ),\n                    &[],\n                )\n                .await\n                .expect(\"file insert should succeed\");\n\n            let error = session\n                .execute(\n                    &format!(\"UPDATE lix_file SET data = {assignment} WHERE id = '{id}'\"),\n                    &[],\n                )\n                .await\n                .expect_err(\"non-binary data literal update should be rejected\");\n\n            assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH, \"{id}\");\n        }\n\n        let result = session\n            .execute(\n                
\"SELECT id, data FROM lix_file \\\n                 WHERE id IN (\\\n                   'update-text-file',\\\n                   'update-int-file',\\\n                   'update-float-file',\\\n                   'update-bool-file'\\\n                 ) \\\n                 ORDER BY id\",\n                &[],\n            )\n            .await\n            .expect(\"file read should succeed\");\n\n        assert_rows_eq(\n            result,\n            vec![\n                vec![\n                    Value::Text(\"update-bool-file\".to_string()),\n                    Value::Blob(b\"hello\".to_vec()),\n                ],\n                vec![\n                    Value::Text(\"update-float-file\".to_string()),\n                    Value::Blob(b\"hello\".to_vec()),\n                ],\n                vec![\n                    Value::Text(\"update-int-file\".to_string()),\n                    Value::Blob(b\"hello\".to_vec()),\n                ],\n                vec![\n                    Value::Text(\"update-text-file\".to_string()),\n                    Value::Blob(b\"hello\".to_vec()),\n                ],\n            ],\n        );\n    }\n);\n\nsimulation_test!(\n    lix_file_update_rejects_non_binary_data_parameters_and_preserves_existing_data,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        for (id, value) in [\n            (\"update-text-param-file\", Value::Text(\"hello\".to_string())),\n            (\"update-int-param-file\", Value::Integer(12345)),\n        ] {\n            session\n                .execute(\n                    &format!(\n                        \"INSERT INTO lix_file (id, path, data) \\\n                         VALUES ('{id}', '/{id}.bin', X'68656C6C6F')\"\n                    ),\n 
                   &[],\n                )\n                .await\n                .expect(\"file insert should succeed\");\n\n            let error = session\n                .execute(\n                    &format!(\"UPDATE lix_file SET data = $1 WHERE id = '{id}'\"),\n                    &[value],\n                )\n                .await\n                .expect_err(\"non-binary data parameter update should be rejected\");\n            assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH, \"{id}\");\n        }\n\n        let result = session\n            .execute(\n                \"SELECT id, data FROM lix_file \\\n                 WHERE id IN ('update-text-param-file', 'update-int-param-file') \\\n                 ORDER BY id\",\n                &[],\n            )\n            .await\n            .expect(\"file read should succeed\");\n        assert_rows_eq(\n            result,\n            vec![\n                vec![\n                    Value::Text(\"update-int-param-file\".to_string()),\n                    Value::Blob(b\"hello\".to_vec()),\n                ],\n                vec![\n                    Value::Text(\"update-text-param-file\".to_string()),\n                    Value::Blob(b\"hello\".to_vec()),\n                ],\n            ],\n        );\n    }\n);\n\nsimulation_test!(lix_file_update_accepts_empty_blob_data, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    session\n        .execute(\n            \"INSERT INTO lix_file (id, path, data) \\\n             VALUES ('empty-update-file', '/empty-update.bin', X'68656C6C6F')\",\n            &[],\n        )\n        .await\n        .expect(\"file insert should succeed\");\n\n    let update_result = session\n        .execute(\n            \"UPDATE lix_file SET data = X'' WHERE id = 
'empty-update-file'\",\n            &[],\n        )\n        .await\n        .expect(\"empty blob data update should be accepted\");\n    assert_eq!(update_result, ExecuteResult::from_rows_affected(1));\n\n    let result = session\n        .execute(\n            \"SELECT data FROM lix_file WHERE id = 'empty-update-file'\",\n            &[],\n        )\n        .await\n        .expect(\"file read should succeed\");\n    assert_eq!(result.len(), 1);\n    assert_eq!(result.rows()[0].values(), &[Value::Blob(Vec::new())]);\n});\n\nsimulation_test!(lix_file_by_version_expands_global_rows, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    session\n        .execute(\n            \"INSERT INTO lix_file (id, path, data, lixcol_global, lixcol_untracked) \\\n             VALUES ('file-global-overlay', '/global.txt', X'67', true, false)\",\n            &[],\n        )\n        .await\n        .expect(\"global file insert should succeed\");\n\n    let result = session\n        .execute(\n            \"SELECT id, path, lixcol_version_id, lixcol_global, lixcol_untracked \\\n             FROM lix_file_by_version \\\n             WHERE id = 'file-global-overlay' \\\n             ORDER BY lixcol_version_id\",\n            &[],\n        )\n        .await\n        .expect(\"file by-version read should succeed\");\n    assert_rows_eq(\n        result,\n        vec![\n            vec![\n                Value::Text(\"file-global-overlay\".to_string()),\n                Value::Text(\"/global.txt\".to_string()),\n                Value::Text(sim.main_version_id().to_string()),\n                Value::Boolean(true),\n                Value::Boolean(false),\n            ],\n            vec![\n                Value::Text(\"file-global-overlay\".to_string()),\n                
Value::Text(\"/global.txt\".to_string()),\n                Value::Text(\"global\".to_string()),\n                Value::Boolean(true),\n                Value::Boolean(false),\n            ],\n        ],\n    );\n});\n"
  },
  {
    "path": "packages/engine/tests/sql/lix_file_history.rs",
    "content": "use lix_engine::Value;\nuse serde_json::json;\n\nuse super::assert_rows_eq;\n\nsimulation_test!(\n    lix_file_history_reads_path_and_data_from_commit_graph,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) \\\n                 VALUES ('history-file', '/docs/guides/readme.md', X'68656C6C6F')\",\n                &[],\n            )\n            .await\n            .expect(\"file insert should succeed\");\n        let first_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"first file commit head should load\")\n            .expect(\"first file commit head should exist\");\n\n        session\n            .execute(\n                \"UPDATE lix_file \\\n                 SET path = '/docs/readme-renamed.md' \\\n                 WHERE id = 'history-file'\",\n                &[],\n            )\n            .await\n            .expect(\"file path update should succeed\");\n        let second_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"second file commit head should load\")\n            .expect(\"second file commit head should exist\");\n\n        assert_ne!(first_commit_id, second_commit_id);\n\n        let result = session\n            .execute(\n                &format!(\n                    \"SELECT id, path, name, data, lixcol_start_commit_id, lixcol_depth \\\n                     FROM lix_file_history \\\n                     WHERE lixcol_start_commit_id = '{second_commit_id}' \\\n                       AND id = 'history-file' \\\n                       AND path 
LIKE '/docs/%' \\\n                     ORDER BY lixcol_depth\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"file history read should succeed\");\n\n        assert_rows_eq(\n            result,\n            vec![\n                vec![\n                    Value::Text(\"history-file\".to_string()),\n                    Value::Text(\"/docs/readme-renamed.md\".to_string()),\n                    Value::Text(\"readme-renamed.md\".to_string()),\n                    Value::Blob(b\"hello\".to_vec()),\n                    Value::Text(second_commit_id.clone()),\n                    Value::Integer(0),\n                ],\n                vec![\n                    Value::Text(\"history-file\".to_string()),\n                    Value::Text(\"/docs/guides/readme.md\".to_string()),\n                    Value::Text(\"readme.md\".to_string()),\n                    Value::Blob(b\"hello\".to_vec()),\n                    Value::Text(second_commit_id.clone()),\n                    Value::Integer(1),\n                ],\n            ],\n        );\n\n        let snapshot_result = session\n            .execute(\n                &format!(\n                    \"SELECT lixcol_snapshot_content \\\n                     FROM lix_file_history \\\n                     WHERE lixcol_start_commit_id = '{second_commit_id}' \\\n                       AND id = 'history-file' \\\n                       AND lixcol_depth = 0\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"file history descriptor snapshot should be selectable\");\n        let snapshot = snapshot_result.rows()[0]\n            .get::<Value>(\"lixcol_snapshot_content\")\n            .expect(\"snapshot_content should be present\");\n        let Value::Json(snapshot) = snapshot else {\n            panic!(\"snapshot_content should be semantic JSON, got {snapshot:?}\");\n        };\n        assert_eq!(snapshot[\"name\"], 
json!(\"readme-renamed.md\"));\n\n        let result = session\n            .execute(\n                &format!(\n                    \"SELECT id \\\n                     FROM lix_file_history \\\n                     WHERE lixcol_start_commit_id = '{first_commit_id}' \\\n                       AND path LIKE '/missing/%'\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"file history should route start commit and leave path LIKE as residual\");\n        assert_rows_eq(result, Vec::<Vec<Value>>::new());\n    }\n);\n\nsimulation_test!(\n    lix_file_history_requires_start_commit_id,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\"SELECT id FROM lix_file_history\", &[])\n            .await\n            .expect_err(\"file history queries must provide start commit\");\n\n        assert!(\n            error\n                .to_string()\n                .contains(\"requires a lixcol_start_commit_id filter\"),\n            \"unexpected error: {error}\"\n        );\n        assert!(\n            error\n                .hint()\n                .is_some_and(|hint| hint.contains(\"WHERE lixcol_start_commit_id\")),\n            \"unexpected error: {error}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_file_history_exposes_file_descriptor_schema_key,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path, 
data) \\\n                 VALUES ('history-file-blob-filter', '/docs/blob-filter.txt', X'626C6F62')\",\n                &[],\n            )\n            .await\n            .expect(\"file insert should succeed\");\n        session\n            .execute(\n                \"UPDATE lix_file SET data = X'626C6F6232' \\\n                 WHERE id = 'history-file-blob-filter'\",\n                &[],\n            )\n            .await\n            .expect(\"file data update should succeed\");\n        let commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"file commit head should load\")\n            .expect(\"file commit head should exist\");\n\n        let result = session\n            .execute(\n                &format!(\n                    \"SELECT id, path, data, lixcol_schema_key \\\n                     FROM lix_file_history \\\n                     WHERE lixcol_start_commit_id = '{commit_id}' \\\n                       AND lixcol_schema_key = 'lix_file_descriptor' \\\n                       AND id = 'history-file-blob-filter' \\\n                       AND lixcol_depth = 0\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"file-descriptor-filtered file history read should succeed\");\n\n        assert_rows_eq(\n            result,\n            vec![vec![\n                Value::Text(\"history-file-blob-filter\".to_string()),\n                Value::Text(\"/docs/blob-filter.txt\".to_string()),\n                Value::Blob(b\"blob2\".to_vec()),\n                Value::Text(\"lix_file_descriptor\".to_string()),\n            ]],\n        );\n\n        let blob_schema_result = session\n            .execute(\n                &format!(\n                    \"SELECT id \\\n                     FROM lix_file_history \\\n                     WHERE lixcol_start_commit_id = '{commit_id}' \\\n                       AND lixcol_schema_key = 
'lix_binary_blob_ref' \\\n                       AND id = 'history-file-blob-filter'\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"blob-ref-filtered file history read should succeed\");\n        assert_rows_eq(blob_schema_result, Vec::<Vec<Value>>::new());\n    }\n);\n"
  },
  {
    "path": "packages/engine/tests/sql/lix_json.rs",
    "content": "use lix_engine::{LixError, Value};\nuse serde_json::json;\n\nuse super::assert_rows_eq;\n\nsimulation_test!(\n    lix_json_expression_results_are_semantic_json,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let result = session\n            .execute(\n                \"SELECT \\\n                lix_json('{\\\"name\\\":\\\"Ada\\\",\\\"tags\\\":[\\\"db\\\"]}') AS document, \\\n                lix_json(NULL) AS json_null, \\\n                lix_json_get('{\\\"name\\\":\\\"Ada\\\",\\\"tags\\\":[\\\"db\\\"]}', 'tags') AS tags, \\\n                lix_json_get('{\\\"name\\\":\\\"Ada\\\"}', 'missing') AS missing\",\n                &[],\n            )\n            .await\n            .expect(\"select should succeed\");\n\n        assert_rows_eq(\n            result,\n            vec![vec![\n                Value::Json(json!({\"name\": \"Ada\", \"tags\": [\"db\"]})),\n                Value::Json(json!(null)),\n                Value::Json(json!([\"db\"])),\n                Value::Null,\n            ]],\n        );\n    }\n);\n\nsimulation_test!(lix_json_get_uses_variadic_path_segments, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    let result = session\n            .execute(\n                \"SELECT lix_json_get_text('{\\\"user\\\":{\\\"names\\\":[\\\"Ada\\\"]}}', 'user', 'names', 0) AS name\",\n                &[],\n            )\n            .await\n            .expect(\"select should succeed\");\n\n    assert_rows_eq(result, 
vec![vec![Value::Text(\"Ada\".to_string())]]);\n});\n\nsimulation_test!(lix_json_get_rejects_jsonpath_strings, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    let error = session\n        .execute(\n            \"SELECT lix_json_get_text('{\\\"path\\\":\\\"ok\\\"}', '$.path')\",\n            &[],\n        )\n        .await\n        .expect_err(\"JSONPath-looking strings should fail loudly\");\n\n    assert_eq!(error.code, LixError::CODE_INVALID_PARAM);\n    assert!(\n        error.message.contains(\"uses variadic path segments\"),\n        \"expected path segment diagnostic: {error}\"\n    );\n});\n\nsimulation_test!(\n    json_column_predicates_reject_bare_text_literals,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"SELECT entity_id FROM lix_state WHERE entity_id = 'state-latest'\",\n                &[],\n            )\n            .await\n            .expect_err(\"JSON column compared to text should fail loudly\");\n\n        assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH);\n        assert!(\n            error.hint().is_some_and(|hint| hint.contains(\"lix_json\")),\n            \"expected lix_json hint: {error}\"\n        );\n    }\n);\n\nsimulation_test!(\n    json_column_predicates_accept_lix_json_expressions,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main 
session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"SELECT entity_id FROM lix_state WHERE entity_id = lix_json('[\\\"state-latest\\\"]')\",\n                &[],\n            )\n            .await\n            .expect(\"JSON column compared to lix_json expression should succeed\");\n    }\n);\n\nsimulation_test!(\n    typed_json_property_predicates_reject_bare_text_literals,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_json_predicate_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"meta\\\":{\\\"type\\\":\\\"object\\\"}},\\\"required\\\":[\\\"id\\\",\\\"meta\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"schema insert should succeed\");\n\n        session\n            .execute(\n                \"INSERT INTO engine_json_predicate_schema (id, meta, lixcol_untracked) \\\n                 VALUES ('json-predicate-1', lix_json('{\\\"flag\\\":true}'), false)\",\n                &[],\n            )\n            .await\n            .expect(\"typed entity insert should succeed\");\n\n        let error = session\n            .execute(\n                \"SELECT id FROM engine_json_predicate_schema WHERE meta = '{\\\"flag\\\":true}'\",\n                &[],\n            )\n            .await\n            
.expect_err(\"typed JSON property compared to text should fail loudly\");\n\n        assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH);\n\n        let result = session\n            .execute(\n                \"SELECT id FROM engine_json_predicate_schema WHERE meta = lix_json('{\\\"flag\\\":true}')\",\n                &[],\n            )\n            .await\n            .expect(\"typed JSON property compared to lix_json should succeed\");\n\n        assert_rows_eq(\n            result,\n            vec![vec![Value::Text(\"json-predicate-1\".to_string())]],\n        );\n    }\n);\n\nsimulation_test!(\n    registered_schema_dml_rejects_bare_lixcol_entity_id_text,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"UPDATE lix_registered_schema \\\n                 SET value = lix_json('{\\\"x-lix-key\\\":\\\"engine_schema_update_history\\\",\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\"],\\\"additionalProperties\\\":false}') \\\n                 WHERE lixcol_entity_id = 'engine_schema_update_history'\",\n                &[],\n            )\n            .await\n            .expect_err(\"bare text lixcol_entity_id update should fail before matching rows\");\n\n        assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH);\n\n        let error = session\n            .execute(\n                \"DELETE FROM lix_registered_schema \\\n                 WHERE lixcol_entity_id = 'engine_schema_update_history'\",\n                &[],\n            )\n            .await\n            .expect_err(\"bare text lixcol_entity_id delete should fail before matching rows\");\n\n        assert_eq!(error.code, 
LixError::CODE_UNSUPPORTED_SQL);\n    }\n);\n"
  },
  {
    "path": "packages/engine/tests/sql/lix_key_value.rs",
    "content": "use lix_engine::ExecuteResult;\nuse lix_engine::LixError;\nuse lix_engine::Value;\n\nsimulation_test!(lix_key_value_roundtrips_arbitrary_json, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    session\n        .execute(\n            \"INSERT INTO lix_key_value (key, value) \\\n             VALUES ('kv-json', lix_json('{\\\"nested\\\":{\\\"flag\\\":true,\\\"items\\\":[1,\\\"two\\\",null]}}'))\",\n            &[],\n        )\n        .await\n        .expect(\"insert should succeed\");\n\n    let result = session\n        .execute(\"SELECT value FROM lix_key_value WHERE key = 'kv-json'\", &[])\n        .await\n        .expect(\"select should succeed\");\n    assert_single_text(\n        result,\n        \"{\\\"nested\\\":{\\\"flag\\\":true,\\\"items\\\":[1,\\\"two\\\",null]}}\",\n    );\n});\n\nsimulation_test!(lix_key_value_duplicate_insert_rejects, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    session\n        .execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('kv-duplicate', 'first')\",\n            &[],\n        )\n        .await\n        .expect(\"initial insert should succeed\");\n\n    let error = session\n        .execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('kv-duplicate', 'second')\",\n            &[],\n        )\n        .await\n        .expect_err(\"plain INSERT should reject duplicate primary keys\");\n    assert_eq!(error.code, LixError::CODE_UNIQUE);\n\n    session\n        .execute(\n            \"UPDATE lix_key_value SET value = 'second' WHERE key = 'kv-duplicate'\",\n        
    &[],\n        )\n        .await\n        .expect(\"explicit UPDATE should still replace existing state\");\n\n    let result = session\n        .execute(\n            \"SELECT value FROM lix_key_value WHERE key = 'kv-duplicate'\",\n            &[],\n        )\n        .await\n        .expect(\"select should succeed\");\n    assert_single_text(result, \"\\\"second\\\"\");\n});\n\nfn assert_single_text(result: ExecuteResult, expected: &str) {\n    let row_set = result;\n    assert_eq!(row_set.len(), 1);\n    let expected_json = serde_json::from_str::<serde_json::Value>(expected)\n        .expect(\"expected value should be valid JSON\");\n    assert_eq!(row_set.rows()[0].values(), &[Value::Json(expected_json)]);\n}\n"
  },
  {
    "path": "packages/engine/tests/sql/lix_label_assignment.rs",
    "content": "use lix_engine::{LixError, Value};\nuse serde_json::json;\n\nuse super::select_rows;\n\nsimulation_test!(\n    lix_label_assignment_generates_id_and_enforces_mapping_uniqueness,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('label-target', 'one')\",\n                &[],\n            )\n            .await\n            .expect(\"target entity insert should succeed\");\n        session\n            .execute(\n                \"INSERT INTO lix_label (id, name) VALUES ('label-a', 'Needs review')\",\n                &[],\n            )\n            .await\n            .expect(\"label insert should succeed\");\n        session\n            .execute(\n                \"INSERT INTO lix_label_assignment \\\n                 (target_entity_id, target_schema_key, target_file_id, label_id) \\\n                 VALUES (lix_json('[\\\"label-target\\\"]'), 'lix_key_value', NULL, 'label-a')\",\n                &[],\n            )\n            .await\n            .expect(\"label assignment insert should succeed\");\n\n        let rows = select_rows(\n            &session,\n            \"SELECT id, target_entity_id, target_schema_key, target_file_id, label_id \\\n             FROM lix_label_assignment \\\n             WHERE target_entity_id = lix_json('[\\\"label-target\\\"]')\",\n        )\n        .await;\n\n        assert_eq!(rows.len(), 1);\n        let id = match &rows[0][0] {\n            Value::Text(value) => value,\n            other => panic!(\"expected generated string id, got {other:?}\"),\n        };\n        assert!(!id.is_empty());\n        assert_eq!(\n            &rows[0][1..],\n            &[\n             
   Value::Json(json!([\"label-target\"])),\n                Value::Text(\"lix_key_value\".to_string()),\n                Value::Null,\n                Value::Text(\"label-a\".to_string()),\n            ]\n        );\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_label_assignment \\\n                 (target_entity_id, target_schema_key, target_file_id, label_id) \\\n                 VALUES (lix_json('[\\\"label-target\\\"]'), 'lix_key_value', NULL, 'label-a')\",\n                &[],\n            )\n            .await\n            .expect_err(\"duplicate label assignment should be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_label (id, name) VALUES ('label-b', 'Needs review')\",\n                &[],\n            )\n            .await\n            .expect_err(\"duplicate label name should be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n    }\n);\n\nsimulation_test!(\n    lix_label_assignment_rejects_missing_target_state_row,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_label (id, name) VALUES ('label-a', 'Needs review')\",\n                &[],\n            )\n            .await\n            .expect(\"label insert should succeed\");\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_label_assignment \\\n                 (target_entity_id, target_schema_key, target_file_id, label_id) \\\n                 VALUES (lix_json('[\\\"missing-target\\\"]'), 'lix_key_value', NULL, 'label-a')\",\n                &[],\n            )\n           
 .await\n            .expect_err(\"label assignment to missing live state row should be rejected\");\n\n        assert_eq!(error.code, LixError::CODE_FOREIGN_KEY);\n    }\n);\n"
  },
  {
    "path": "packages/engine/tests/sql/lix_registered_schema.rs",
    "content": "use lix_engine::CreateVersionOptions;\nuse lix_engine::ExecuteResult;\nuse lix_engine::LixError;\nuse lix_engine::Value;\nuse serde_json::json;\n\nuse super::assert_rows_eq;\n\nsimulation_test!(\n    lix_registered_schema_insert_makes_schema_visible_to_lix_state,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let register_schema_result = session\n        .execute(\n            \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n             VALUES (\\\n             lix_json('{\\\"x-lix-key\\\":\\\"engine_dummy_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"name\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\",\\\"name\\\"],\\\"additionalProperties\\\":false}'),\\\n             false,\\\n             false\\\n             )\",\n            &[],\n        )\n        .await\n        .expect(\"registered schema insert should succeed\");\n        assert_eq!(register_schema_result, ExecuteResult::from_rows_affected(1));\n\n        let registered_schema_row = session\n            .execute(\n                \"SELECT lixcol_entity_id, value \\\n                 FROM lix_registered_schema\",\n                &[],\n            )\n            .await\n            .expect(\"registered schema read should succeed\");\n        let registered_schema_rows = registered_schema_row;\n        let registered_schema_entity_id = registered_schema_rows\n            .rows()\n            .iter()\n            .find_map(|row| match row.values() {\n                [Value::Json(entity_id), Value::Json(value)]\n                    if value.get(\"x-lix-key\").and_then(serde_json::Value::as_str)\n   
                     == Some(\"engine_dummy_schema\") =>\n                {\n                    Some(entity_id)\n                }\n                [Value::Json(entity_id), Value::Text(value)] => {\n                    let value = serde_json::from_str::<serde_json::Value>(value).ok()?;\n                    (value.get(\"x-lix-key\").and_then(serde_json::Value::as_str)\n                        == Some(\"engine_dummy_schema\"))\n                    .then_some(entity_id)\n                }\n                _ => None,\n            })\n            .expect(\"registered schema row should be visible\");\n        assert_eq!(registered_schema_entity_id, &json!([\"engine_dummy_schema\"]));\n\n        let insert_state_result = session\n        .execute(\n            \"INSERT INTO lix_state (\\\n             entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n             ) VALUES (\\\n             lix_json('[\\\"dummy-1\\\"]'), 'engine_dummy_schema', NULL, lix_json('{\\\"id\\\":\\\"dummy-1\\\",\\\"name\\\":\\\"Dummy\\\"}'), false, true\\\n             )\",\n            &[],\n        )\n        .await\n        .expect(\"lix_state insert for registered schema should succeed\");\n        assert_eq!(insert_state_result, ExecuteResult::from_rows_affected(1));\n\n        let result = session\n            .execute(\n                \"SELECT entity_id, schema_key, snapshot_content \\\n             FROM lix_state \\\n             WHERE schema_key = 'engine_dummy_schema' AND entity_id = lix_json('[\\\"dummy-1\\\"]')\",\n                &[],\n            )\n            .await\n            .expect(\"lix_state read should succeed\");\n        let row_set = result;\n        assert_eq!(row_set.len(), 1);\n        assert_eq!(\n            row_set.rows()[0].values(),\n            &[\n                Value::Json(json!([\"dummy-1\"])),\n                Value::Text(\"engine_dummy_schema\".to_string()),\n                Value::Json(json!({\"id\": \"dummy-1\", \"name\": 
\"Dummy\"})),\n            ]\n        );\n    }\n);\n\nsimulation_test!(\n    untracked_registered_schema_does_not_authorize_tracked_state_write,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_untracked_only_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"name\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\",\\\"name\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 true\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"untracked schema registration should succeed\");\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_state (\\\n                 entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n                 ) VALUES (\\\n                 lix_json('[\\\"tracked-1\\\"]'), 'engine_untracked_only_schema', NULL, lix_json('{\\\"id\\\":\\\"tracked-1\\\",\\\"name\\\":\\\"Tracked\\\"}'), false, false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect_err(\"tracked rows must not validate against committed untracked schemas\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);\n    }\n);\n\nsimulation_test!(\n    lix_registered_schema_insert_rejects_system_schema_key,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n    
        engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"lix_change\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect_err(\"system schema keys should not be user-registerable\");\n\n        assert_eq!(error.code, LixError::CODE_INVALID_PARAM);\n        assert!(\n            error.message.contains(\"system schema\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n);\n\nsimulation_test!(lix_registered_schema_delete_is_rejected, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_delete_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"schema should register before delete 
attempt\");\n\n    let registered_schema_rows = session\n        .execute(\n            \"SELECT lixcol_entity_id, value \\\n                 FROM lix_registered_schema\",\n            &[],\n        )\n        .await\n        .expect(\"registered schema read should succeed\");\n    let delete_schema_entity_id = registered_schema_rows\n        .rows()\n        .iter()\n        .find_map(|row| match row.values() {\n            [Value::Json(entity_id), Value::Json(value)]\n                if value.get(\"x-lix-key\").and_then(serde_json::Value::as_str)\n                    == Some(\"engine_delete_schema\") =>\n            {\n                Some(entity_id.clone())\n            }\n            [Value::Json(entity_id), Value::Text(value)] => {\n                let value = serde_json::from_str::<serde_json::Value>(value).ok()?;\n                (value.get(\"x-lix-key\").and_then(serde_json::Value::as_str)\n                    == Some(\"engine_delete_schema\"))\n                .then_some(entity_id.clone())\n            }\n            _ => None,\n        })\n        .expect(\"registered schema entity id should be discoverable\");\n\n    let error = session\n        .execute(\n            \"DELETE FROM lix_registered_schema \\\n                 WHERE lixcol_entity_id = $1\",\n            &[Value::Json(delete_schema_entity_id)],\n        )\n        .await\n        .expect_err(\"schema deletion is not supported yet\");\n\n    assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL);\n    assert!(\n        error\n            .message\n            .contains(\"delete lix_registered_schema is not supported\"),\n        \"unexpected error: {error:?}\"\n    );\n});\n\nsimulation_test!(\n    tracked_registered_schema_update_allows_compatible_amendment_and_history,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                
.expect(\"main session should open\"),\n            &engine,\n        );\n\n        let initial_schema = json!({\n            \"x-lix-key\": \"engine_schema_update_history\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"title\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\", \"title\"],\n            \"additionalProperties\": false\n        });\n        let amended_schema = json!({\n            \"x-lix-key\": \"engine_schema_update_history\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"description\": \"Compatible tracked schema amendment\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"title\": { \"type\": \"string\" },\n                \"subtitle\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\", \"title\"],\n            \"additionalProperties\": false\n        });\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES ($1, false, false)\",\n                &[Value::Json(initial_schema.clone())],\n            )\n            .await\n            .expect(\"tracked schema insert should succeed\");\n        let first_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"first head should load\")\n            .expect(\"first head should exist\");\n\n        session\n            .execute(\n                \"UPDATE lix_registered_schema \\\n                 SET value = $1 \\\n                 WHERE lixcol_entity_id = lix_json('[\\\"engine_schema_update_history\\\"]')\",\n                &[Value::Json(amended_schema.clone())],\n            )\n            .await\n           
 .expect(\"compatible tracked schema amendment should succeed\");\n        let second_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"second head should load\")\n            .expect(\"second head should exist\");\n        assert_ne!(first_commit_id, second_commit_id);\n\n        let result = session\n            .execute(\n                &format!(\n                    \"SELECT value, lixcol_entity_id, lixcol_observed_commit_id, lixcol_start_commit_id, lixcol_depth \\\n                     FROM lix_registered_schema_history \\\n                     WHERE lixcol_start_commit_id = '{second_commit_id}' \\\n                       AND lixcol_entity_id = lix_json('[\\\"engine_schema_update_history\\\"]') \\\n                     ORDER BY lixcol_depth\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"tracked registered schema history read should succeed\");\n\n        assert_rows_eq(\n            result,\n            vec![\n                vec![\n                    Value::Json(amended_schema),\n                    Value::Json(json!([\"engine_schema_update_history\"])),\n                    Value::Text(second_commit_id.clone()),\n                    Value::Text(second_commit_id.clone()),\n                    Value::Integer(0),\n                ],\n                vec![\n                    Value::Json(initial_schema),\n                    Value::Json(json!([\"engine_schema_update_history\"])),\n                    Value::Text(first_commit_id),\n                    Value::Text(second_commit_id),\n                    Value::Integer(1),\n                ],\n            ],\n        );\n    }\n);\n\nsimulation_test!(\n    lix_registered_schema_insert_rejects_primary_key_without_json_pointer_slash,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                
.open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_bad_pointer_schema\\\",\\\"x-lix-primary-key\\\":[\\\"id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect_err(\"registered schema insert should reject JSON Pointers without leading slash\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);\n        assert!(\n            error.message.contains(\"must begin with '/'\"),\n            \"unexpected message: {}\",\n            error.message\n        );\n        assert!(\n            error\n                .message\n                .contains(\"x-lix-primary-key: \\\"id\\\" → \\\"/id\\\"\"),\n            \"message should show the offending primary key pointer: {}\",\n            error.message\n        );\n        let hint = error.hint.as_deref().expect(\"error should include a hint\");\n        assert!(\n            hint.contains(\"Did you mean [\\\"/id\\\"]?\"),\n            \"hint should suggest the JSON Pointer form: {hint}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_registered_schema_insert_rejects_unprojectable_entity_property,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let error = session\n         
   .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_empty_property_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"kind\\\":{}},\\\"required\\\":[\\\"id\\\",\\\"kind\\\"],\\\"additionalProperties\\\":false}'),\\\n                 true,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect_err(\"registered schema insert should reject properties without a SQL projection type\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);\n        assert!(\n            error.message.contains(\"property '/kind'\"),\n            \"message should identify the unprojectable property: {}\",\n            error.message\n        );\n        assert!(\n            error.message.contains(\"SQL-projectable JSON Schema type\"),\n            \"message should explain the projection requirement: {}\",\n            error.message\n        );\n    }\n);\n\nsimulation_test!(\n    entity_by_version_insert_rejects_target_version_without_schema,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let main = sim.wrap_session(\n            engine\n                .open_session(sim.main_version_id())\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        main.create_version(CreateVersionOptions {\n            id: Some(\"schemaless-target\".to_string()),\n            name: \"Schemaless Target\".to_string(),\n            from_commit_id: None,\n        })\n        .await\n        .expect(\"target version should be created before schema registration\");\n\n        main.execute(\n            \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n             
VALUES (\\\n             lix_json('{\\\"x-lix-key\\\":\\\"engine_poison_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"name\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\",\\\"name\\\"],\\\"additionalProperties\\\":false}'),\\\n             false,\\\n             false\\\n             )\",\n            &[],\n        )\n        .await\n        .expect(\"schema should be visible on active main\");\n\n        let error = main\n            .execute(\n                \"INSERT INTO engine_poison_schema_by_version \\\n                 (id, name, lixcol_version_id, lixcol_untracked) \\\n                 VALUES ('poison-1', 'Poisoned', 'schemaless-target', true)\",\n                &[],\n            )\n            .await\n            .expect_err(\"_by_version write must use the target version schema catalog\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);\n        assert!(\n            error.message.contains(\"engine_poison_schema\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n);\n\nsimulation_test!(\n    registered_schema_identity_is_scoped_per_version,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let main = sim.wrap_session(\n            engine\n                .open_session(sim.main_version_id())\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        main.create_version(CreateVersionOptions {\n            id: Some(\"divergent-target\".to_string()),\n            name: \"Divergent Target\".to_string(),\n            from_commit_id: None,\n        })\n        .await\n        .expect(\"target version should be created before schema divergence\");\n\n        main.execute(\n            \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n             VALUES (\\\n             
lix_json('{\\\"x-lix-key\\\":\\\"engine_divergent_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"name\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\",\\\"name\\\"],\\\"additionalProperties\\\":false}'),\\\n             false,\\\n             false\\\n             )\",\n                &[],\n            )\n            .await\n            .expect(\"main schema should be registered\");\n\n        let main_schema = json!({\n            \"x-lix-key\": \"engine_divergent_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"name\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\", \"name\"],\n            \"additionalProperties\": false\n        });\n        let target_schema = json!({\n            \"x-lix-key\": \"engine_divergent_schema\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"title\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\", \"title\"],\n            \"additionalProperties\": false\n        });\n\n        let target = sim.wrap_session(\n            engine\n                .open_session(\"divergent-target\")\n                .await\n                .expect(\"target session should open\"),\n            &engine,\n        );\n\n        target\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 
lix_json('{\\\"x-lix-key\\\":\\\"engine_divergent_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"title\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\",\\\"title\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"same schema key may have independent version-local definitions\");\n\n        let main_result = main\n            .execute(\n                \"SELECT value \\\n                 FROM lix_registered_schema \\\n                 WHERE lixcol_entity_id = lix_json('[\\\"engine_divergent_schema\\\"]')\",\n                &[],\n            )\n            .await\n            .expect(\"main schema read should succeed\");\n        assert_rows_eq(main_result, vec![vec![Value::Json(main_schema)]]);\n\n        let target_result = target\n            .execute(\n                \"SELECT value \\\n                 FROM lix_registered_schema \\\n                 WHERE lixcol_entity_id = lix_json('[\\\"engine_divergent_schema\\\"]')\",\n                &[],\n            )\n            .await\n            .expect(\"target schema read should succeed\");\n        assert_rows_eq(target_result, vec![vec![Value::Json(target_schema)]]);\n    }\n);\n\nsimulation_test!(\n    independent_schema_amendments_on_two_versions_are_allowed,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let main = sim.wrap_session(\n            engine\n                .open_session(sim.main_version_id())\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        let base_schema = json!({\n            \"x-lix-key\": \"engine_branch_schema_amendment\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            
\"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"title\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\", \"title\"],\n            \"additionalProperties\": false\n        });\n        let main_schema = json!({\n            \"x-lix-key\": \"engine_branch_schema_amendment\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"title\": { \"type\": \"string\" },\n                \"main_note\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\", \"title\"],\n            \"additionalProperties\": false\n        });\n        let draft_schema = json!({\n            \"x-lix-key\": \"engine_branch_schema_amendment\",\n            \"x-lix-primary-key\": [\"/id\"],\n            \"type\": \"object\",\n            \"properties\": {\n                \"id\": { \"type\": \"string\" },\n                \"title\": { \"type\": \"string\" },\n                \"draft_note\": { \"type\": \"string\" }\n            },\n            \"required\": [\"id\", \"title\"],\n            \"additionalProperties\": false\n        });\n\n        main.execute(\n            \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n             VALUES ($1, false, false)\",\n            &[Value::Json(base_schema)],\n        )\n        .await\n        .expect(\"base schema should be registered\");\n\n        main.create_version(CreateVersionOptions {\n            id: Some(\"schema-amendment-draft\".to_string()),\n            name: \"Schema Amendment Draft\".to_string(),\n            from_commit_id: None,\n        })\n        .await\n        .expect(\"draft version should be created from base schema\");\n\n        let draft = sim.wrap_session(\n            engine\n                .open_session(\"schema-amendment-draft\")\n                .await\n                
.expect(\"draft session should open\"),\n            &engine,\n        );\n\n        let main_update = main\n            .execute(\n                \"UPDATE lix_registered_schema \\\n                 SET value = $1 \\\n                 WHERE lixcol_entity_id = lix_json('[\\\"engine_branch_schema_amendment\\\"]')\",\n                &[Value::Json(main_schema.clone())],\n            )\n            .await\n            .expect(\"main additive schema amendment should succeed\");\n        assert_eq!(main_update, ExecuteResult::from_rows_affected(1));\n\n        let draft_update = draft\n            .execute(\n                \"UPDATE lix_registered_schema \\\n                 SET value = $1 \\\n                 WHERE lixcol_entity_id = lix_json('[\\\"engine_branch_schema_amendment\\\"]')\",\n                &[Value::Json(draft_schema.clone())],\n            )\n            .await\n            .expect(\"draft additive schema amendment should succeed\");\n        assert_eq!(draft_update, ExecuteResult::from_rows_affected(1));\n\n        let main_result = main\n            .execute(\n                \"SELECT value \\\n                 FROM lix_registered_schema \\\n                 WHERE lixcol_entity_id = lix_json('[\\\"engine_branch_schema_amendment\\\"]')\",\n                &[],\n            )\n            .await\n            .expect(\"main amended schema read should succeed\");\n        assert_rows_eq(main_result, vec![vec![Value::Json(main_schema)]]);\n\n        let draft_result = draft\n            .execute(\n                \"SELECT value \\\n                 FROM lix_registered_schema \\\n                 WHERE lixcol_entity_id = lix_json('[\\\"engine_branch_schema_amendment\\\"]')\",\n                &[],\n            )\n            .await\n            .expect(\"draft amended schema read should succeed\");\n        assert_rows_eq(draft_result, vec![vec![Value::Json(draft_schema)]]);\n    }\n);\n\nsimulation_test!(\n    
entity_by_version_insert_rejects_fk_graph_when_target_version_lacks_schemas,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let main = sim.wrap_session(\n            engine\n                .open_session(sim.main_version_id())\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        main.create_version(CreateVersionOptions {\n            id: Some(\"fk-schemaless-target\".to_string()),\n            name: \"FK Schemaless Target\".to_string(),\n            from_commit_id: None,\n        })\n        .await\n        .expect(\"target version should be created before FK schemas\");\n\n        main.execute(\n            \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n             VALUES (\\\n             lix_json('{\\\"x-lix-key\\\":\\\"engine_fk_parent_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\"],\\\"additionalProperties\\\":false}'),\\\n             false,\\\n             false\\\n             )\",\n            &[],\n        )\n        .await\n        .expect(\"parent schema should register on active main\");\n\n        main.execute(\n            \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n             VALUES (\\\n             lix_json('{\\\"x-lix-key\\\":\\\"engine_fk_child_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"x-lix-foreign-keys\\\":[{\\\"properties\\\":[\\\"/parent_id\\\"],\\\"references\\\":{\\\"schemaKey\\\":\\\"engine_fk_parent_schema\\\",\\\"properties\\\":[\\\"/id\\\"]}}],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"parent_id\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\",\\\"parent_id\\\"],\\\"additionalProperties\\\":false}'),\\\n             false,\\\n             false\\\n             
)\",\n            &[],\n        )\n        .await\n        .expect(\"child schema should register on active main\");\n\n        let parent_result = main\n            .execute(\n                \"INSERT INTO engine_fk_parent_schema_by_version \\\n                 (id, lixcol_version_id, lixcol_untracked) \\\n                 VALUES ('parent-1', 'fk-schemaless-target', true)\",\n                &[],\n            )\n            .await;\n\n        if let Err(error) = parent_result {\n            assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);\n            assert!(\n                error.message.contains(\"engine_fk_parent_schema\"),\n                \"unexpected error: {error:?}\"\n            );\n            return;\n        }\n\n        let error = main\n            .execute(\n                \"INSERT INTO engine_fk_child_schema_by_version \\\n                 (id, parent_id, lixcol_version_id, lixcol_untracked) \\\n                 VALUES ('child-1', 'parent-1', 'fk-schemaless-target', true)\",\n                &[],\n            )\n            .await\n            .expect_err(\"FK-valid active graph must not be insertable into a schemaless target\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);\n        assert!(\n            error.message.contains(\"engine_fk_child_schema\")\n                || error.message.contains(\"engine_fk_parent_schema\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n);\n\nsimulation_test!(\n    registered_entity_insert_applies_defaulted_primary_key,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 
VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_default_id_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\",\\\"x-lix-default\\\":\\\"lix_uuid_v7()\\\"},\\\"name\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\",\\\"name\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"registered schema insert should succeed\");\n\n        let insert_result = session\n            .execute(\n                \"INSERT INTO engine_default_id_schema (name) VALUES ('Generated')\",\n                &[],\n            )\n            .await\n            .expect(\"entity insert should apply defaulted primary key\");\n        assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n        let result = session\n            .execute(\n                \"SELECT lixcol_entity_id, id, name \\\n                 FROM engine_default_id_schema \\\n                 WHERE name = 'Generated'\",\n                &[],\n            )\n            .await\n            .expect(\"entity read should succeed\");\n        let row_set = result;\n        assert_eq!(row_set.len(), 1);\n        let values = row_set.rows()[0].values();\n        let [Value::Json(entity_id), Value::Text(id), Value::Text(name)] = values else {\n            panic!(\"expected generated id row, got {values:?}\");\n        };\n        assert_eq!(entity_id, &json!([id]));\n        assert!(!id.is_empty(), \"defaulted id should be non-empty\");\n        assert_eq!(name, \"Generated\");\n    }\n);\n\nsimulation_test!(\n    registered_entity_insert_preserves_explicit_null_for_defaulted_column,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                
.open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_nullable_default_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"status\\\":{\\\"type\\\":[\\\"string\\\",\\\"null\\\"],\\\"default\\\":\\\"computed\\\"}},\\\"required\\\":[\\\"id\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"registered schema insert should succeed\");\n\n        session\n            .execute(\n                \"INSERT INTO engine_nullable_default_schema (id, status) \\\n                 VALUES ('explicit-null', NULL)\",\n                &[],\n            )\n            .await\n            .expect(\"entity insert should preserve explicit null\");\n\n        session\n            .execute(\n                \"INSERT INTO engine_nullable_default_schema (id) \\\n                 VALUES ('omitted')\",\n                &[],\n            )\n            .await\n            .expect(\"entity insert should apply default for omitted column\");\n\n        let result = session\n            .execute(\n                \"SELECT id, status \\\n                 FROM engine_nullable_default_schema \\\n                 ORDER BY id\",\n                &[],\n            )\n            .await\n            .expect(\"entity read should succeed\");\n\n        assert_rows_eq(\n            result,\n            vec![\n                vec![Value::Text(\"explicit-null\".to_string()), Value::Null],\n                vec![\n                    
Value::Text(\"omitted\".to_string()),\n                    Value::Text(\"computed\".to_string()),\n                ],\n            ],\n        );\n    }\n);\n\nsimulation_test!(entity_by_version_expands_global_rows, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let global_session = sim.wrap_session(\n        engine\n            .open_session(\"global\")\n            .await\n            .expect(\"global session should open\"),\n        &engine,\n    );\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    global_session\n        .execute(\n            \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n             VALUES (\\\n             lix_json('{\\\"x-lix-key\\\":\\\"engine_overlay_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"name\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\",\\\"name\\\"],\\\"additionalProperties\\\":false}'),\\\n             true,\\\n             false\\\n             )\",\n            &[],\n        )\n        .await\n        .expect(\"global registered schema insert should succeed\");\n\n    session\n        .execute(\n            \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n             VALUES (\\\n             lix_json('{\\\"x-lix-key\\\":\\\"engine_overlay_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"name\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\",\\\"name\\\"],\\\"additionalProperties\\\":false}'),\\\n             false,\\\n             false\\\n             )\",\n            &[],\n        )\n        .await\n        .expect(\"registered schema insert should succeed\");\n\n    
session\n        .execute(\n            \"INSERT INTO engine_overlay_schema \\\n                 (id, name, lixcol_global, lixcol_untracked) \\\n                 VALUES ('entity-global-overlay', 'Global Entity', true, false)\",\n            &[],\n        )\n        .await\n        .expect(\"global entity insert should succeed\");\n\n    let result = session\n        .execute(\n            \"SELECT id, name, lixcol_version_id, lixcol_global, lixcol_untracked \\\n                 FROM engine_overlay_schema_by_version \\\n                 WHERE lixcol_entity_id = lix_json('[\\\"entity-global-overlay\\\"]') \\\n                 ORDER BY lixcol_version_id\",\n            &[],\n        )\n        .await\n        .expect(\"entity by-version read should succeed\");\n    assert_rows_eq(\n        result,\n        vec![\n            vec![\n                Value::Text(\"entity-global-overlay\".to_string()),\n                Value::Text(\"Global Entity\".to_string()),\n                Value::Text(sim.main_version_id().to_string()),\n                Value::Boolean(true),\n                Value::Boolean(false),\n            ],\n            vec![\n                Value::Text(\"entity-global-overlay\".to_string()),\n                Value::Text(\"Global Entity\".to_string()),\n                Value::Text(\"global\".to_string()),\n                Value::Boolean(true),\n                Value::Boolean(false),\n            ],\n        ],\n    );\n});\n\nsimulation_test!(\n    global_entity_insert_rejects_active_only_schema,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n              
   lix_json('{\\\"x-lix-key\\\":\\\"engine_global_poison_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"name\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\",\\\"name\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"main-local schema registration should succeed\");\n\n        let error = session\n            .execute(\n                \"INSERT INTO engine_global_poison_schema \\\n                 (id, name, lixcol_global, lixcol_untracked) \\\n                 VALUES ('global-poison-1', 'Wrong Scope', true, false)\",\n                &[],\n            )\n            .await\n            .expect_err(\"global writes must validate through the global schema catalog\");\n\n        assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION);\n        assert!(\n            error.message.contains(\"engine_global_poison_schema\"),\n            \"unexpected error: {error:?}\"\n        );\n    }\n);\n\nsimulation_test!(\n    registered_typed_entity_surface_uses_primary_key_columns,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 
lix_json('{\\\"x-lix-key\\\":\\\"engine_typed_entity_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"name\\\":{\\\"type\\\":\\\"string\\\"},\\\"count\\\":{\\\"type\\\":\\\"number\\\"}},\\\"required\\\":[\\\"id\\\",\\\"name\\\",\\\"count\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"registered schema insert should succeed\");\n\n        let insert_result = session\n            .execute(\n                \"INSERT INTO engine_typed_entity_schema \\\n                 (id, name, count, lixcol_global, lixcol_untracked) \\\n                 VALUES ('typed-entity-1', 'Typed Entity', 7, false, false)\",\n                &[],\n            )\n            .await\n            .expect(\"typed entity insert should succeed\");\n        assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n        let result = session\n            .execute(\n                \"SELECT id, name, count, lixcol_entity_id \\\n                 FROM engine_typed_entity_schema \\\n                 WHERE id = 'typed-entity-1'\",\n                &[],\n            )\n            .await\n            .expect(\"typed entity query by primary-key column should succeed\");\n        assert_rows_eq(\n            result,\n            vec![vec![\n                Value::Text(\"typed-entity-1\".to_string()),\n                Value::Text(\"Typed Entity\".to_string()),\n                Value::Real(7.0),\n                Value::Json(json!([\"typed-entity-1\"])),\n            ]],\n        );\n    }\n);\n\nsimulation_test!(\n    typed_entity_number_update_accepts_integer_param_like_insert,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n   
             .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_number_update_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"score\\\":{\\\"type\\\":\\\"number\\\"}},\\\"required\\\":[\\\"id\\\",\\\"score\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"registered schema insert should succeed\");\n\n        session\n            .execute(\n                \"INSERT INTO engine_number_update_schema \\\n                 (id, score, lixcol_global, lixcol_untracked) \\\n                 VALUES ('score-1', 1, false, false)\",\n                &[],\n            )\n            .await\n            .expect(\"typed entity insert should accept integer literal for number column\");\n\n        session\n            .execute(\n                \"UPDATE engine_number_update_schema \\\n                 SET score = $1 \\\n                 WHERE id = 'score-1'\",\n                &[Value::Integer(52000)],\n            )\n            .await\n            .expect(\"typed entity update should accept integer param for number column\");\n\n        let result = session\n            .execute(\n                \"SELECT score \\\n                 FROM engine_number_update_schema \\\n                 WHERE id = 'score-1'\",\n                &[],\n            )\n            .await\n            .expect(\"typed entity query should succeed\");\n        assert_rows_eq(result, vec![vec![Value::Real(52000.0)]]);\n    }\n);\n\nsimulation_test!(\n    
typed_entity_update_preserves_absent_optional_non_nullable_fields,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n                 VALUES (\\\n                 lix_json('{\\\"x-lix-key\\\":\\\"engine_optional_update_schema\\\",\\\"x-lix-primary-key\\\":[\\\"/id\\\"],\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"},\\\"title\\\":{\\\"type\\\":\\\"string\\\"},\\\"rank\\\":{\\\"type\\\":\\\"integer\\\"}},\\\"required\\\":[\\\"id\\\",\\\"title\\\"],\\\"additionalProperties\\\":false}'),\\\n                 false,\\\n                 false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"registered schema insert should succeed\");\n\n        session\n            .execute(\n                \"INSERT INTO engine_optional_update_schema \\\n                 (id, title, lixcol_global, lixcol_untracked) \\\n                 VALUES ('row-1', 'before', false, false)\",\n                &[],\n            )\n            .await\n            .expect(\"insert should omit the optional rank field\");\n\n        session\n            .execute(\n                \"UPDATE engine_optional_update_schema \\\n                 SET title = 'after' \\\n                 WHERE id = 'row-1'\",\n                &[],\n            )\n            .await\n            .expect(\"update should preserve absent optional fields\");\n\n        let result = session\n            .execute(\n                \"SELECT title, rank, lixcol_snapshot_content \\\n                 FROM engine_optional_update_schema \\\n                 WHERE id = 
'row-1'\",\n                &[],\n            )\n            .await\n            .expect(\"typed entity query should succeed\");\n        assert_rows_eq(\n            result,\n            vec![vec![\n                Value::Text(\"after\".to_string()),\n                Value::Null,\n                Value::Json(json!({\"id\": \"row-1\", \"title\": \"after\"})),\n            ]],\n        );\n\n        let error = session\n            .execute(\n                \"UPDATE engine_optional_update_schema \\\n                 SET rank = NULL \\\n                 WHERE id = 'row-1'\",\n                &[],\n            )\n            .await\n            .expect_err(\"explicit NULL should still be validated as JSON null\");\n        assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION);\n        assert!(\n            error\n                .message\n                .contains(\"/rank null is not of type \\\"integer\\\"\"),\n            \"expected rank validation error, got {error:?}\"\n        );\n    }\n);\n"
  },
  {
    "path": "packages/engine/tests/sql/lix_state.rs",
    "content": "use lix_engine::ExecuteResult;\nuse lix_engine::Value;\nuse serde_json::json;\n\nuse super::assert_rows_eq;\n\nsimulation_test!(lix_state_latest_update_wins, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    );\n\n    session\n        .execute(\n            \"INSERT INTO lix_state (\\\n             entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n             ) VALUES (\\\n             lix_json('[\\\"state-latest\\\"]'), 'lix_key_value', NULL, lix_json('{\\\"key\\\":\\\"state-latest\\\",\\\"value\\\":\\\"old\\\"}'), false, false\\\n             )\",\n            &[],\n        )\n        .await\n        .expect(\"lix_state insert should succeed\");\n    session\n        .execute(\n            \"UPDATE lix_state \\\n             SET snapshot_content = lix_json('{\\\"key\\\":\\\"state-latest\\\",\\\"value\\\":\\\"new\\\"}') \\\n             WHERE entity_id = lix_json('[\\\"state-latest\\\"]') AND schema_key = 'lix_key_value'\",\n            &[],\n        )\n        .await\n        .expect(\"lix_state update should succeed\");\n\n    let result = session\n        .execute(\n            \"SELECT snapshot_content \\\n             FROM lix_state \\\n             WHERE entity_id = lix_json('[\\\"state-latest\\\"]') AND schema_key = 'lix_key_value'\",\n            &[],\n        )\n        .await\n        .expect(\"lix_state read should succeed\");\n    assert_single_text(result, \"{\\\"key\\\":\\\"state-latest\\\",\\\"value\\\":\\\"new\\\"}\");\n});\n\nsimulation_test!(lix_state_delete_hides_row, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"main session should open\"),\n        &engine,\n    
);\n\n    session\n        .execute(\n            \"INSERT INTO lix_state (\\\n             entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n             ) VALUES (\\\n             lix_json('[\\\"state-delete\\\"]'), 'lix_key_value', NULL, lix_json('{\\\"key\\\":\\\"state-delete\\\",\\\"value\\\":\\\"delete-me\\\"}'), false, false\\\n             )\",\n            &[],\n        )\n        .await\n        .expect(\"lix_state insert should succeed\");\n    session\n        .execute(\n            \"DELETE FROM lix_state \\\n             WHERE entity_id = lix_json('[\\\"state-delete\\\"]') AND schema_key = 'lix_key_value'\",\n            &[],\n        )\n        .await\n        .expect(\"lix_state delete should succeed\");\n\n    let result = session\n        .execute(\n            \"SELECT entity_id \\\n             FROM lix_state \\\n             WHERE entity_id = lix_json('[\\\"state-delete\\\"]') AND schema_key = 'lix_key_value'\",\n            &[],\n        )\n        .await\n        .expect(\"lix_state read should succeed\");\n    let rows = result;\n    assert_eq!(rows.len(), 0);\n});\n\nsimulation_test!(\n    lix_state_global_rows_are_visible_through_version_overlay,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_state (\\\n                 entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n                 ) VALUES (\\\n                 lix_json('[\\\"state-global-overlay\\\"]'), 'lix_key_value', NULL, lix_json('{\\\"key\\\":\\\"state-global-overlay\\\",\\\"value\\\":\\\"global\\\"}'), true, false\\\n                 )\",\n                &[],\n            )\n            .await\n            
.expect(\"global lix_state insert should succeed\");\n\n        let active_result = session\n            .execute(\n                \"SELECT entity_id, global, untracked \\\n                 FROM lix_state \\\n                 WHERE entity_id = lix_json('[\\\"state-global-overlay\\\"]') AND schema_key = 'lix_key_value'\",\n                &[],\n            )\n            .await\n            .expect(\"active lix_state read should succeed\");\n        assert_rows_eq(\n            active_result,\n            vec![vec![\n                Value::Json(json!([\"state-global-overlay\"])),\n                Value::Boolean(true),\n                Value::Boolean(false),\n            ]],\n        );\n\n        let by_version_result = session\n            .execute(\n                &format!(\n                    \"SELECT entity_id, version_id, global, untracked \\\n                 FROM lix_state_by_version \\\n                 WHERE entity_id = lix_json('[\\\"state-global-overlay\\\"]') AND schema_key = 'lix_key_value' \\\n                 AND version_id IN ('{}', 'global') \\\n                 ORDER BY version_id\",\n                    sim.main_version_id()\n                ),\n                &[],\n            )\n            .await\n            .expect(\"by-version lix_state read should succeed\");\n        assert_rows_eq(\n            by_version_result,\n            vec![\n                vec![\n                    Value::Json(json!([\"state-global-overlay\"])),\n                    Value::Text(sim.main_version_id().to_string()),\n                    Value::Boolean(true),\n                    Value::Boolean(false),\n                ],\n                vec![\n                    Value::Json(json!([\"state-global-overlay\"])),\n                    Value::Text(\"global\".to_string()),\n                    Value::Boolean(true),\n                    Value::Boolean(false),\n                ],\n            ],\n        );\n    }\n);\n\nsimulation_test!(\n    
lix_state_version_tombstone_hides_global_row_in_active_and_by_version,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_state (\\\n                 entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n                 ) VALUES (\\\n                 lix_json('[\\\"state-global-tombstone-overlay\\\"]'), 'lix_key_value', NULL, lix_json('{\\\"key\\\":\\\"state-global-tombstone-overlay\\\",\\\"value\\\":\\\"global\\\"}'), true, false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"global lix_state insert should succeed\");\n        session\n            .execute(\n                \"INSERT INTO lix_state (\\\n                 entity_id, schema_key, file_id, snapshot_content, global, untracked\\\n                 ) VALUES (\\\n                 lix_json('[\\\"state-global-tombstone-overlay\\\"]'), 'lix_key_value', NULL, NULL, false, false\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"version-local tombstone insert should succeed\");\n\n        let active_result = session\n            .execute(\n                \"SELECT entity_id \\\n                 FROM lix_state \\\n                 WHERE entity_id = lix_json('[\\\"state-global-tombstone-overlay\\\"]') AND schema_key = 'lix_key_value'\",\n                &[],\n            )\n            .await\n            .expect(\"active lix_state read should succeed\");\n        assert_rows_eq(active_result, Vec::new());\n\n        let by_version_result = session\n            .execute(\n                &format!(\n                    \"SELECT entity_id, version_id, global, untracked \\\n        
             FROM lix_state_by_version \\\n                     WHERE entity_id = lix_json('[\\\"state-global-tombstone-overlay\\\"]') AND schema_key = 'lix_key_value' \\\n                     AND version_id IN ('{}', 'global') \\\n                     ORDER BY version_id\",\n                    sim.main_version_id()\n                ),\n                &[],\n            )\n            .await\n            .expect(\"by-version lix_state read should succeed\");\n        assert_rows_eq(\n            by_version_result,\n            vec![vec![\n                Value::Json(json!([\"state-global-tombstone-overlay\"])),\n                Value::Text(\"global\".to_string()),\n                Value::Boolean(true),\n                Value::Boolean(false),\n            ]],\n        );\n    }\n);\n\nfn assert_single_text(result: ExecuteResult, expected: &str) {\n    let row_set = result;\n    assert_eq!(row_set.len(), 1);\n    let expected_json = serde_json::from_str::<serde_json::Value>(expected)\n        .expect(\"expected snapshot_content should be valid JSON\");\n    assert_eq!(row_set.rows()[0].values(), &[Value::Json(expected_json)]);\n}\n"
  },
  {
    "path": "packages/engine/tests/sql/lix_state_history.rs",
    "content": "use lix_engine::Value;\nuse serde_json::json;\n\nsimulation_test!(\n    lix_state_history_requires_start_commit_id,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('history-start-required', 'one')\",\n                &[],\n            )\n            .await\n            .expect(\"tracked write should succeed\");\n\n        let error = session\n            .execute(\"SELECT entity_id FROM lix_state_history\", &[])\n            .await\n            .expect_err(\"history queries must provide start_commit_id\");\n\n        assert!(error\n            .to_string()\n            .contains(\"requires a start_commit_id filter\"));\n        assert_eq!(\n            error.code,\n            lix_engine::LixError::CODE_HISTORY_FILTER_REQUIRED\n        );\n        assert!(\n            error\n                .hint()\n                .is_some_and(|hint| hint.contains(\"lix_active_version_commit_id()\")),\n            \"expected active-version-head hint: {error}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_state_history_accepts_active_version_commit_id_filter,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('history-active-head', 'one')\",\n                &[],\n            )\n            .await\n            .expect(\"tracked write should succeed\");\n\n        let rows = 
select_history_rows(\n            &session,\n            \"SELECT entity_id FROM lix_state_history WHERE start_commit_id = lix_active_version_commit_id()\",\n        )\n        .await;\n\n        assert!(\n            rows.iter()\n                .any(|row| row.first() == Some(&Value::Json(json!([\"history-active-head\"])))),\n            \"expected active-head history row, got {rows:?}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_state_history_rejects_prefixed_start_commit_id_filter,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('history-prefixed-start', 'one')\",\n                &[],\n            )\n            .await\n            .expect(\"tracked write should succeed\");\n\n        let error = session\n            .execute(\n                \"SELECT entity_id \\\n                 FROM lix_state_history \\\n                 WHERE lixcol_start_commit_id = lix_active_version_commit_id()\",\n                &[],\n            )\n            .await\n            .expect_err(\"lix_state_history should only expose bare start_commit_id\");\n\n        assert_eq!(error.code, lix_engine::LixError::CODE_COLUMN_NOT_FOUND);\n        assert!(\n            error.to_string().contains(\"lixcol_start_commit_id\"),\n            \"unexpected error: {error}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_state_history_reads_from_explicit_historical_commit,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            
&engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('history-explicit', 'one')\",\n                &[],\n            )\n            .await\n            .expect(\"initial tracked write should succeed\");\n        let first_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"first head should load\")\n            .expect(\"first head should exist\");\n\n        session\n            .execute(\n                \"UPDATE lix_key_value SET value = 'two' WHERE key = 'history-explicit'\",\n                &[],\n            )\n            .await\n            .expect(\"second tracked write should succeed\");\n        let second_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"second head should load\")\n            .expect(\"second head should exist\");\n\n        session\n            .execute(\n                \"DELETE FROM lix_key_value WHERE key = 'history-explicit'\",\n                &[],\n            )\n            .await\n            .expect(\"tombstone write should succeed\");\n        let third_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"third head should load\")\n            .expect(\"third head should exist\");\n\n        assert_ne!(first_commit_id, second_commit_id);\n        assert_ne!(second_commit_id, third_commit_id);\n\n        let first_history = select_history_rows(\n            &session,\n            &format!(\n                \"SELECT start_commit_id, depth, snapshot_content, change_id, observed_commit_id, commit_created_at \\\n                 FROM lix_state_history \\\n                 WHERE start_commit_id = '{first_commit_id}' \\\n                   AND entity_id = lix_json('[\\\"history-explicit\\\"]') \\\n                 ORDER BY depth\"\n   
         ),\n        )\n        .await;\n        assert_eq!(\n            &first_history[0][0..3],\n            &[\n                Value::Text(first_commit_id.clone()),\n                Value::Integer(0),\n                Value::Json(json!({\"key\": \"history-explicit\", \"value\": \"one\"})),\n            ],\n            \"historical commit should be queryable after later commits\"\n        );\n        let Value::Text(first_change_id) = &first_history[0][3] else {\n            panic!(\"change_id should be text\");\n        };\n        let Value::Text(first_row_commit_id) = &first_history[0][4] else {\n            panic!(\"observed_commit_id should be text\");\n        };\n        let Value::Text(first_commit_created_at) = &first_history[0][5] else {\n            panic!(\"commit_created_at should be text\");\n        };\n        assert!(!first_change_id.is_empty());\n        assert_eq!(first_row_commit_id, &first_commit_id);\n        assert!(\n            !first_commit_created_at.is_empty(),\n            \"commit_created_at should be populated\"\n        );\n\n        let second_history = select_history_rows(\n            &session,\n            &format!(\n                \"SELECT depth, snapshot_content \\\n                 FROM lix_state_history \\\n                 WHERE start_commit_id = '{second_commit_id}' \\\n                   AND entity_id = lix_json('[\\\"history-explicit\\\"]') \\\n                 ORDER BY depth\"\n            ),\n        )\n        .await;\n        assert_eq!(\n            second_history,\n            vec![\n                vec![\n                    Value::Integer(0),\n                    Value::Json(json!({\"key\": \"history-explicit\", \"value\": \"two\"})),\n                ],\n                vec![\n                    Value::Integer(1),\n                    Value::Json(json!({\"key\": \"history-explicit\", \"value\": \"one\"})),\n                ],\n            ],\n            \"depth 0 is the start commit and parent changes 
appear at depth > 0\"\n        );\n\n        let tombstone_history = select_history_rows(\n            &session,\n            &format!(\n                \"SELECT depth, snapshot_content \\\n                 FROM lix_state_history \\\n                 WHERE start_commit_id = '{third_commit_id}' \\\n                   AND entity_id = lix_json('[\\\"history-explicit\\\"]') \\\n                   AND depth = 0 \\\n                   AND snapshot_content IS NULL\"\n            ),\n        )\n        .await;\n        assert_eq!(\n            tombstone_history,\n            vec![vec![Value::Integer(0), Value::Null]],\n            \"tombstone changes should be visible as NULL snapshot_content\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_state_history_routes_schema_entity_file_and_depth_filters,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path, data) \\\n                 VALUES ('history-file-a', '/history/a.txt', X'61')\",\n                &[],\n            )\n            .await\n            .expect(\"file insert should succeed\");\n        let first_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"first head should load\")\n            .expect(\"first head should exist\");\n\n        session\n            .execute(\n                \"UPDATE lix_file SET data = X'62' WHERE id = 'history-file-a'\",\n                &[],\n            )\n            .await\n            .expect(\"file update should succeed\");\n        let second_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"second head 
should load\")\n            .expect(\"second head should exist\");\n\n        let rows = select_history_rows(\n            &session,\n            &format!(\n                \"SELECT entity_id, schema_key, file_id, depth \\\n                 FROM lix_state_history \\\n                 WHERE start_commit_id = '{second_commit_id}' \\\n                   AND schema_key = 'lix_binary_blob_ref' \\\n                   AND entity_id = lix_json('[\\\"history-file-a\\\"]') \\\n                   AND file_id = 'history-file-a' \\\n                   AND depth >= 0 \\\n                   AND depth <= 1 \\\n                 ORDER BY depth\"\n            ),\n        )\n        .await;\n        assert_eq!(\n            rows,\n            vec![\n                vec![\n                    Value::Json(json!([\"history-file-a\"])),\n                    Value::Text(\"lix_binary_blob_ref\".to_string()),\n                    Value::Text(\"history-file-a\".to_string()),\n                    Value::Integer(0),\n                ],\n                vec![\n                    Value::Json(json!([\"history-file-a\"])),\n                    Value::Text(\"lix_binary_blob_ref\".to_string()),\n                    Value::Text(\"history-file-a\".to_string()),\n                    Value::Integer(1),\n                ],\n            ],\n            \"schema_key, entity_id, file_id, and depth range filters should route through the provider\"\n        );\n\n        let parent_only_rows = select_history_rows(\n            &session,\n            &format!(\n                \"SELECT start_commit_id, depth \\\n                 FROM lix_state_history \\\n                 WHERE start_commit_id = '{second_commit_id}' \\\n                   AND schema_key = 'lix_binary_blob_ref' \\\n                   AND entity_id = lix_json('[\\\"history-file-a\\\"]') \\\n                   AND file_id = 'history-file-a' \\\n                   AND depth > 0 \\\n                   AND depth < 2\"\n            ),\n        )\n    
    .await;\n        assert_eq!(\n            parent_only_rows,\n            vec![vec![Value::Text(second_commit_id), Value::Integer(1)]],\n            \"strict depth ranges should keep only matching parent rows\"\n        );\n\n        let historical_start_rows = select_history_rows(\n            &session,\n            &format!(\n                \"SELECT start_commit_id, depth \\\n                 FROM lix_state_history \\\n                 WHERE start_commit_id = '{first_commit_id}' \\\n                   AND schema_key = 'lix_binary_blob_ref' \\\n                   AND entity_id = lix_json('[\\\"history-file-a\\\"]') \\\n                   AND file_id = 'history-file-a'\"\n            ),\n        )\n        .await;\n        assert_eq!(\n            historical_start_rows,\n            vec![vec![Value::Text(first_commit_id), Value::Integer(0)]],\n            \"file_id filtering should also work for historical non-head starts\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_state_history_shows_tombstone_at_ancestor_depth,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('history-ancestor-tombstone', 'one')\",\n                &[],\n            )\n            .await\n            .expect(\"initial tracked write should succeed\");\n\n        session\n            .execute(\n                \"DELETE FROM lix_key_value WHERE key = 'history-ancestor-tombstone'\",\n                &[],\n            )\n            .await\n            .expect(\"delete should succeed\");\n        let delete_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"delete head 
should load\")\n            .expect(\"delete head should exist\");\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('history-unrelated-after-delete', 'later')\",\n                &[],\n            )\n            .await\n            .expect(\"unrelated later write should succeed\");\n        let later_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"later head should load\")\n            .expect(\"later head should exist\");\n        assert_ne!(delete_commit_id, later_commit_id);\n\n        let tombstone_rows = select_history_rows(\n            &session,\n            &format!(\n                \"SELECT observed_commit_id, depth, snapshot_content \\\n                 FROM lix_state_history \\\n                 WHERE start_commit_id = '{later_commit_id}' \\\n                   AND entity_id = lix_json('[\\\"history-ancestor-tombstone\\\"]') \\\n                   AND snapshot_content IS NULL \\\n                 ORDER BY depth\"\n            ),\n        )\n        .await;\n        assert_eq!(\n            tombstone_rows,\n            vec![vec![\n                Value::Text(delete_commit_id),\n                Value::Integer(1),\n                Value::Null,\n            ]],\n            \"a tombstone from the parent commit should appear at ancestor depth\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_state_history_supports_multiple_start_commit_filters,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('history-multi-start', 'one')\",\n                &[],\n            )\n            
.await\n            .expect(\"first write should succeed\");\n        let first_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"first head should load\")\n            .expect(\"first head should exist\");\n\n        session\n            .execute(\n                \"UPDATE lix_key_value SET value = 'two' WHERE key = 'history-multi-start'\",\n                &[],\n            )\n            .await\n            .expect(\"second write should succeed\");\n        let second_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"second head should load\")\n            .expect(\"second head should exist\");\n\n        let in_rows = select_history_rows(\n            &session,\n            &format!(\n                \"SELECT start_commit_id, depth, snapshot_content \\\n                 FROM lix_state_history \\\n                 WHERE start_commit_id IN ('{first_commit_id}', '{second_commit_id}') \\\n                   AND entity_id = lix_json('[\\\"history-multi-start\\\"]') \\\n                   AND depth = 0 \\\n                 ORDER BY start_commit_id\"\n            ),\n        )\n        .await;\n        assert_eq!(\n            in_rows,\n            vec![\n                vec![\n                    Value::Text(first_commit_id.clone()),\n                    Value::Integer(0),\n                    Value::Json(json!({\"key\": \"history-multi-start\", \"value\": \"one\"})),\n                ],\n                vec![\n                    Value::Text(second_commit_id.clone()),\n                    Value::Integer(0),\n                    Value::Json(json!({\"key\": \"history-multi-start\", \"value\": \"two\"})),\n                ],\n            ],\n            \"IN should allow multiple explicit history starts\"\n        );\n\n        let or_rows = select_history_rows(\n            &session,\n            &format!(\n          
      \"SELECT start_commit_id \\\n                 FROM lix_state_history \\\n                 WHERE (start_commit_id = '{first_commit_id}' \\\n                        OR start_commit_id = '{second_commit_id}') \\\n                   AND entity_id = lix_json('[\\\"history-multi-start\\\"]') \\\n                   AND depth = 0 \\\n                 ORDER BY start_commit_id\"\n            ),\n        )\n        .await;\n        assert_eq!(\n            or_rows,\n            vec![\n                vec![Value::Text(first_commit_id)],\n                vec![Value::Text(second_commit_id)],\n            ],\n            \"OR should also allow multiple explicit history starts\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_state_history_intersects_conjunctive_value_filters,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('history-and-a', 'a')\",\n                &[],\n            )\n            .await\n            .expect(\"first write should succeed\");\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('history-and-b', 'b')\",\n                &[],\n            )\n            .await\n            .expect(\"second write should succeed\");\n        let head_commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"head should load\")\n            .expect(\"head should exist\");\n\n        let narrowed_rows = select_history_rows(\n            &session,\n            &format!(\n                \"SELECT entity_id \\\n                 FROM lix_state_history \\\n                 WHERE start_commit_id = 
'{head_commit_id}' \\\n                   AND entity_id IN (lix_json('[\\\"history-and-a\\\"]'), lix_json('[\\\"history-and-b\\\"]')) \\\n                   AND entity_id = lix_json('[\\\"history-and-a\\\"]')\"\n            ),\n        )\n        .await;\n        assert_eq!(\n            narrowed_rows,\n            vec![vec![Value::Json(json!([\"history-and-a\"]))]],\n            \"AND filters on the same history column should intersect, not union\"\n        );\n\n        let contradictory_rows = select_history_rows(\n            &session,\n            &format!(\n                \"SELECT entity_id \\\n                 FROM lix_state_history \\\n                 WHERE start_commit_id = '{head_commit_id}' \\\n                   AND entity_id = lix_json('[\\\"history-and-a\\\"]') \\\n                   AND entity_id = lix_json('[\\\"history-and-b\\\"]')\"\n            ),\n        )\n        .await;\n        assert_eq!(\n            contradictory_rows,\n            Vec::<Vec<Value>>::new(),\n            \"contradictory AND filters on the same history column should return no rows\"\n        );\n    }\n);\n\nasync fn select_history_rows(\n    session: &crate::support::simulation_test::engine::SimSession,\n    sql: &str,\n) -> Vec<Vec<Value>> {\n    let result = session\n        .execute(sql, &[])\n        .await\n        .expect(\"history SELECT should succeed\");\n    result\n        .rows()\n        .iter()\n        .map(|row| row.values().to_vec())\n        .collect()\n}\n"
  },
  {
    "path": "packages/engine/tests/sql/lix_version.rs",
    "content": "use lix_engine::ExecuteResult;\nuse lix_engine::LixError;\nuse lix_engine::Value;\n\nsimulation_test!(lix_version_lists_descriptors_with_refs, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_session(\"global\")\n            .await\n            .expect(\"global session should open\"),\n        &engine,\n    );\n\n    let rows = session\n        .execute(\n            \"SELECT id, name, hidden, commit_id FROM lix_version ORDER BY id\",\n            &[],\n        )\n        .await\n        .expect(\"lix_version should read\");\n    assert_eq!(rows.len(), 2);\n\n    let values = rows\n        .rows()\n        .iter()\n        .map(|row| row.values().to_vec())\n        .collect::<Vec<_>>();\n    assert!(values.contains(&vec![\n        Value::Text(\"global\".to_string()),\n        Value::Text(\"global\".to_string()),\n        Value::Boolean(true),\n        Value::Text(sim.initial_commit_id().to_string()),\n    ]));\n    assert!(values.contains(&vec![\n        Value::Text(sim.main_version_id().to_string()),\n        Value::Text(\"main\".to_string()),\n        Value::Boolean(false),\n        Value::Text(sim.initial_commit_id().to_string()),\n    ]));\n});\n\nsimulation_test!(\n    lix_version_count_star_handles_empty_projection,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_session(\"global\")\n                .await\n                .expect(\"global session should open\"),\n            &engine,\n        );\n\n        assert_eq!(\n            count_rows(&session, \"SELECT COUNT(*) FROM lix_version\").await,\n            2\n        );\n        assert_eq!(\n            count_rows(\n                &session,\n                \"SELECT COUNT(*) FROM lix_version WHERE name = 'main'\",\n            )\n            .await,\n            1\n        
);\n    }\n);\n\nsimulation_test!(\n    lix_version_insert_creates_descriptor_and_ref,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n\n        let insert_result = session\n            .execute(\n                \"INSERT INTO lix_version (id, name) \\\n                 VALUES ('sql-version-insert', 'SQL Insert')\",\n                &[],\n            )\n            .await\n            .expect(\"lix_version insert should create descriptor and ref\");\n        assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n        assert_single_version_row(\n            &session,\n            \"sql-version-insert\",\n            \"SQL Insert\",\n            false,\n            sim.initial_commit_id(),\n        )\n        .await;\n        assert_eq!(\n            count_rows(\n                &session,\n                \"SELECT COUNT(*) FROM lix_version_descriptor WHERE id = 'sql-version-insert'\",\n            )\n            .await,\n            1\n        );\n        assert_eq!(\n            count_rows(\n                &session,\n                \"SELECT COUNT(*) FROM lix_version_ref WHERE id = 'sql-version-insert'\",\n            )\n            .await,\n            1\n        );\n    }\n);\n\nsimulation_test!(\n    lix_version_insert_accepts_explicit_hidden_and_commit_id,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n\n        let insert_result = session\n            .execute(\n                &format!(\n                    \"INSERT INTO lix_version (id, name, hidden, 
commit_id) \\\n                     VALUES ('sql-version-explicit', 'Explicit', true, '{}')\",\n                    sim.initial_commit_id()\n                ),\n                &[],\n            )\n            .await\n            .expect(\"lix_version insert should accept hidden and commit_id\");\n        assert_eq!(insert_result, ExecuteResult::from_rows_affected(1));\n\n        assert_single_version_row(\n            &session,\n            \"sql-version-explicit\",\n            \"Explicit\",\n            true,\n            sim.initial_commit_id(),\n        )\n        .await;\n    }\n);\n\nsimulation_test!(\n    lix_version_update_splits_descriptor_and_ref_changes,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_version (id, name) \\\n                 VALUES ('sql-version-update', 'Before')\",\n                &[],\n            )\n            .await\n            .expect(\"version insert should succeed\");\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) \\\n                 VALUES ('sql-version-update-head', 'after')\",\n                &[],\n            )\n            .await\n            .expect(\"tracked write should advance active version head\");\n        let new_head = select_single_text(\n            &session,\n            &format!(\n                \"SELECT commit_id FROM lix_version WHERE id = '{}'\",\n                sim.main_version_id()\n            ),\n        )\n        .await;\n\n        let update_result = session\n            .execute(\n                &format!(\n                    \"UPDATE lix_version \\\n                     SET name = 'After', hidden = true, commit_id = 
'{new_head}' \\\n                     WHERE id = 'sql-version-update'\"\n                ),\n                &[],\n            )\n            .await\n            .expect(\"lix_version update should split descriptor and ref changes\");\n        assert_eq!(update_result, ExecuteResult::from_rows_affected(1));\n\n        assert_single_version_row(&session, \"sql-version-update\", \"After\", true, &new_head).await;\n        assert_eq!(\n            select_single_text(\n                &session,\n                \"SELECT commit_id FROM lix_version_ref WHERE id = 'sql-version-update'\",\n            )\n            .await,\n            new_head\n        );\n    }\n);\n\nsimulation_test!(\n    lix_version_delete_removes_descriptor_and_ref_atomically,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_version (id, name) \\\n                 VALUES ('sql-version-delete', 'Delete Me')\",\n                &[],\n            )\n            .await\n            .expect(\"version insert should succeed\");\n\n        let delete_result = session\n            .execute(\n                \"DELETE FROM lix_version WHERE id = 'sql-version-delete'\",\n                &[],\n            )\n            .await\n            .expect(\"lix_version delete should remove descriptor and ref atomically\");\n        assert_eq!(delete_result, ExecuteResult::from_rows_affected(1));\n\n        assert_eq!(\n            count_rows(\n                &session,\n                \"SELECT COUNT(*) FROM lix_version WHERE id = 'sql-version-delete'\",\n            )\n            .await,\n            0\n        );\n        assert_eq!(\n            count_rows(\n                &session,\n                
\"SELECT COUNT(*) FROM lix_version_descriptor WHERE id = 'sql-version-delete'\",\n            )\n            .await,\n            0\n        );\n        assert_eq!(\n            count_rows(\n                &session,\n                \"SELECT COUNT(*) FROM lix_version_ref WHERE id = 'sql-version-delete'\",\n            )\n            .await,\n            0\n        );\n    }\n);\n\nsimulation_test!(\n    lix_version_delete_rejects_active_and_global_versions,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n\n        let active_error = session\n            .execute(\n                &format!(\n                    \"DELETE FROM lix_version WHERE id = '{}'\",\n                    sim.main_version_id()\n                ),\n                &[],\n            )\n            .await\n            .expect_err(\"delete should reject active version\");\n        assert!(\n            active_error.to_string().contains(\"active version\"),\n            \"active delete error should explain the restriction: {active_error:?}\"\n        );\n\n        let global_error = session\n            .execute(\"DELETE FROM lix_version WHERE id = 'global'\", &[])\n            .await\n            .expect_err(\"delete should reject global version\");\n        assert!(\n            global_error.to_string().contains(\"global version\"),\n            \"global delete error should explain the restriction: {global_error:?}\"\n        );\n\n        assert_eq!(\n            count_rows(\n                &session,\n                &format!(\n                    \"SELECT COUNT(*) FROM lix_version WHERE id = '{}'\",\n                    sim.main_version_id()\n                ),\n            )\n            .await,\n            1\n        );\n        assert_eq!(\n        
    count_rows(\n                &session,\n                \"SELECT COUNT(*) FROM lix_version WHERE id = 'global'\"\n            )\n            .await,\n            1\n        );\n    }\n);\n\nsimulation_test!(lix_version_duplicate_insert_rejects, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"workspace session should open\"),\n        &engine,\n    );\n\n    session\n        .execute(\n            \"INSERT INTO lix_version (id, name) \\\n             VALUES ('sql-version-duplicate', 'First')\",\n            &[],\n        )\n        .await\n        .expect(\"initial version insert should succeed\");\n\n    let error = session\n        .execute(\n            \"INSERT INTO lix_version (id, name) \\\n             VALUES ('sql-version-duplicate', 'Second')\",\n            &[],\n        )\n        .await\n        .expect_err(\"duplicate version id should be rejected\");\n    assert_eq!(error.code, LixError::CODE_UNIQUE);\n    assert!(\n        error.message.contains(\"table 'lix_version'\")\n            && error.message.contains(\"id 'sql-version-duplicate'\")\n            && !error.message.contains(\"lix_version_descriptor\")\n            && !error.message.contains(\"lix_version_ref\"),\n        \"unexpected error: {error:?}\"\n    );\n});\n\nsimulation_test!(\n    lix_version_duplicate_name_insert_rejects,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_version (id, name) \\\n             VALUES ('sql-version-name-a', 'Duplicate Name')\",\n                &[],\n            )\n            .await\n        
    .expect(\"initial version insert should succeed\");\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_version (id, name) \\\n             VALUES ('sql-version-name-b', 'Duplicate Name')\",\n                &[],\n            )\n            .await\n            .expect_err(\"duplicate version name should be rejected\");\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n        assert!(\n            error.to_string().contains(\"/name\"),\n            \"error should explain duplicate version name: {error:?}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_version_duplicate_name_update_rejects,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_version (id, name) \\\n             VALUES ('sql-version-name-update-a', 'Name A')\",\n                &[],\n            )\n            .await\n            .expect(\"first version insert should succeed\");\n        session\n            .execute(\n                \"INSERT INTO lix_version (id, name) \\\n             VALUES ('sql-version-name-update-b', 'Name B')\",\n                &[],\n            )\n            .await\n            .expect(\"second version insert should succeed\");\n\n        let error = session\n            .execute(\n                \"UPDATE lix_version \\\n             SET name = 'Name A' \\\n             WHERE id = 'sql-version-name-update-b'\",\n                &[],\n            )\n            .await\n            .expect_err(\"updating to a duplicate version name should fail\");\n        assert_eq!(error.code, LixError::CODE_UNIQUE);\n        assert!(\n            error.to_string().contains(\"/name\"),\n            \"error should explain 
duplicate version name: {error:?}\"\n        );\n    }\n);\n\nsimulation_test!(\n    lix_version_insert_rejects_invalid_commit_id,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n\n        let error = session\n            .execute(\n                \"INSERT INTO lix_version (id, name, commit_id) \\\n                 VALUES ('sql-version-invalid-commit', 'Invalid Commit', 'missing-commit')\",\n                &[],\n            )\n            .await\n            .expect_err(\"version ref commit_id should reference an existing commit\");\n        assert_eq!(error.code, LixError::CODE_VERSION_NOT_FOUND);\n\n        assert_eq!(\n            count_rows(\n                &session,\n                \"SELECT COUNT(*) FROM lix_version WHERE id = 'sql-version-invalid-commit'\",\n            )\n            .await,\n            0\n        );\n    }\n);\n\nsimulation_test!(lix_version_update_rejects_id_change, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"workspace session should open\"),\n        &engine,\n    );\n\n    session\n        .execute(\n            \"INSERT INTO lix_version (id, name) \\\n             VALUES ('sql-version-id-update', 'Before')\",\n            &[],\n        )\n        .await\n        .expect(\"version insert should succeed\");\n\n    let error = session\n        .execute(\n            \"UPDATE lix_version \\\n             SET id = 'sql-version-id-update-renamed' \\\n             WHERE id = 'sql-version-id-update'\",\n            &[],\n        )\n        .await\n        .expect_err(\"version id should be immutable through UPDATE\");\n    assert!(\n        
error.to_string().contains(\"immutable column 'id'\"),\n        \"id update error should explain the restriction: {error:?}\"\n    );\n\n    assert_eq!(\n        count_rows(\n            &session,\n            \"SELECT COUNT(*) FROM lix_version WHERE id = 'sql-version-id-update'\",\n        )\n        .await,\n        1\n    );\n    assert_eq!(\n        count_rows(\n            &session,\n            \"SELECT COUNT(*) FROM lix_version WHERE id = 'sql-version-id-update-renamed'\",\n        )\n        .await,\n        0\n    );\n});\n\nsimulation_test!(\n    lix_version_delete_missing_returns_zero_rows_affected,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n\n        let delete_result = session\n            .execute(\n                \"DELETE FROM lix_version WHERE id = 'sql-version-missing-delete'\",\n                &[],\n            )\n            .await\n            .expect(\"missing version delete should be a no-op\");\n        assert_eq!(delete_result, ExecuteResult::from_rows_affected(0));\n    }\n);\n\nasync fn assert_single_version_row(\n    session: &crate::support::simulation_test::engine::SimSession,\n    version_id: &str,\n    name: &str,\n    hidden: bool,\n    commit_id: &str,\n) {\n    let result = session\n        .execute(\n            &format!(\n                \"SELECT id, name, hidden, commit_id \\\n                 FROM lix_version \\\n                 WHERE id = '{version_id}'\"\n            ),\n            &[],\n        )\n        .await\n        .expect(\"version row should be selectable\");\n    assert_eq!(result.len(), 1);\n    assert_eq!(\n        result.rows()[0].values(),\n        &[\n            Value::Text(version_id.to_string()),\n            Value::Text(name.to_string()),\n            
Value::Boolean(hidden),\n            Value::Text(commit_id.to_string()),\n        ]\n    );\n}\n\nasync fn select_single_text(\n    session: &crate::support::simulation_test::engine::SimSession,\n    sql: &str,\n) -> String {\n    let result = session\n        .execute(sql, &[])\n        .await\n        .expect(\"query should succeed\");\n    assert_eq!(result.len(), 1, \"expected exactly one row for query: {sql}\");\n    match result.rows()[0].values()[0] {\n        Value::Text(ref text) => text.clone(),\n        ref other => panic!(\"expected text for query {sql}, got {other:?}\"),\n    }\n}\n\nasync fn count_rows(\n    session: &crate::support::simulation_test::engine::SimSession,\n    sql: &str,\n) -> i64 {\n    let result = session\n        .execute(sql, &[])\n        .await\n        .expect(\"count should succeed\");\n    assert_eq!(result.len(), 1, \"expected exactly one row for query: {sql}\");\n    match result.rows()[0].values()[0] {\n        Value::Integer(count) => count,\n        ref other => panic!(\"expected integer count for query {sql}, got {other:?}\"),\n    }\n}\n"
  },
  {
    "path": "packages/engine/tests/sql/metadata.rs",
    "content": "use lix_engine::LixError;\nuse lix_engine::Value;\nuse serde_json::json;\n\nsimulation_test!(\n    metadata_rejects_invalid_json_on_lix_file_writes,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        assert_invalid_metadata_error(\n            session\n                .execute(\n                    \"INSERT INTO lix_file (id, path, lixcol_metadata) \\\n                     VALUES ('metadata-file-insert', '/metadata-file-insert.txt', '{bad')\",\n                    &[],\n                )\n                .await\n                .expect_err(\"invalid file metadata should be rejected on INSERT\"),\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_file (id, path) \\\n                 VALUES ('metadata-file-update', '/metadata-file-update.txt')\",\n                &[],\n            )\n            .await\n            .expect(\"file insert should succeed\");\n\n        assert_invalid_metadata_error(\n            session\n                .execute(\n                    \"UPDATE lix_file \\\n                     SET lixcol_metadata = '{bad' \\\n                     WHERE id = 'metadata-file-update'\",\n                    &[],\n                )\n                .await\n                .expect_err(\"invalid file metadata should be rejected on UPDATE\"),\n        );\n    }\n);\n\nsimulation_test!(\n    metadata_rejects_invalid_json_on_lix_directory_writes,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        
assert_invalid_metadata_error(\n            session\n                .execute(\n                    \"INSERT INTO lix_directory (id, path, lixcol_metadata) \\\n                     VALUES ('metadata-dir-insert', '/metadata-dir-insert/', '{bad')\",\n                    &[],\n                )\n                .await\n                .expect_err(\"invalid directory metadata should be rejected on INSERT\"),\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_directory (id, path) \\\n                 VALUES ('metadata-dir-update', '/metadata-dir-update/')\",\n                &[],\n            )\n            .await\n            .expect(\"directory insert should succeed\");\n\n        assert_invalid_metadata_error(\n            session\n                .execute(\n                    \"UPDATE lix_directory \\\n                     SET lixcol_metadata = '{bad' \\\n                     WHERE id = 'metadata-dir-update'\",\n                    &[],\n                )\n                .await\n                .expect_err(\"invalid directory metadata should be rejected on UPDATE\"),\n        );\n    }\n);\n\nsimulation_test!(\n    metadata_rejects_invalid_json_on_typed_entity_writes,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        assert_invalid_metadata_error(\n            session\n                .execute(\n                    \"INSERT INTO lix_key_value (key, value, lixcol_metadata) \\\n                     VALUES ('metadata-entity-insert', 'value', '{bad')\",\n                    &[],\n                )\n                .await\n                .expect_err(\"invalid typed entity metadata should be rejected on INSERT\"),\n        );\n\n        session\n            .execute(\n                
\"INSERT INTO lix_key_value (key, value) \\\n                 VALUES ('metadata-entity-update', 'value')\",\n                &[],\n            )\n            .await\n            .expect(\"typed entity insert should succeed\");\n\n        assert_invalid_metadata_error(\n            session\n                .execute(\n                    \"UPDATE lix_key_value \\\n                     SET lixcol_metadata = '{bad' \\\n                     WHERE key = 'metadata-entity-update'\",\n                    &[],\n                )\n                .await\n                .expect_err(\"invalid typed entity metadata should be rejected on UPDATE\"),\n        );\n    }\n);\n\nsimulation_test!(\n    metadata_rejects_invalid_json_on_lix_state_writes,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        assert_invalid_metadata_error(\n            session\n                .execute(\n                    \"INSERT INTO lix_state (\\\n                     entity_id, schema_key, file_id, snapshot_content, metadata\\\n                     ) VALUES (\\\n                     lix_json('[\\\"metadata-state-insert\\\"]'), 'lix_key_value', NULL, \\\n                     lix_json('{\\\"key\\\":\\\"metadata-state-insert\\\",\\\"value\\\":\\\"value\\\"}'), \\\n                     '{bad'\\\n                     )\",\n                    &[],\n                )\n                .await\n                .expect_err(\"invalid lix_state metadata should be rejected on INSERT\"),\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_state (\\\n                 entity_id, schema_key, file_id, snapshot_content\\\n                 ) VALUES (\\\n                 lix_json('[\\\"metadata-state-update\\\"]'), 'lix_key_value', NULL, \\\n 
                lix_json('{\\\"key\\\":\\\"metadata-state-update\\\",\\\"value\\\":\\\"value\\\"}')\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"lix_state insert should succeed\");\n\n        assert_invalid_metadata_error(\n            session\n                .execute(\n                    \"UPDATE lix_state \\\n                     SET metadata = '{bad' \\\n                     WHERE entity_id = lix_json('[\\\"metadata-state-update\\\"]') \\\n                       AND schema_key = 'lix_key_value'\",\n                    &[],\n                )\n                .await\n                .expect_err(\"invalid lix_state metadata should be rejected on UPDATE\"),\n        );\n    }\n);\n\nsimulation_test!(\n    valid_object_metadata_survives_live_change_and_history_reads,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n        let expected = json!({\n            \"source\": \"metadata-regression\",\n            \"nested\": {\"ok\": true}\n        });\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value, lixcol_metadata) \\\n                 VALUES (\\\n                 'metadata-valid-object', \\\n                 'value', \\\n                 '{\\\"source\\\":\\\"metadata-regression\\\",\\\"nested\\\":{\\\"ok\\\":true}}'\\\n                 )\",\n                &[],\n            )\n            .await\n            .expect(\"valid object metadata should write\");\n        let commit_id = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"head commit should load\")\n            .expect(\"head commit should exist\");\n\n        assert_metadata_value(\n            
session\n                .execute(\n                    \"SELECT lixcol_metadata \\\n                     FROM lix_key_value \\\n                     WHERE key = 'metadata-valid-object'\",\n                    &[],\n                )\n                .await\n                .expect(\"typed entity metadata should read\"),\n            \"lixcol_metadata\",\n            &expected,\n        );\n\n        assert_metadata_value(\n            session\n                .execute(\n                    \"SELECT metadata \\\n                     FROM lix_state \\\n                     WHERE entity_id = lix_json('[\\\"metadata-valid-object\\\"]') \\\n                       AND schema_key = 'lix_key_value'\",\n                    &[],\n                )\n                .await\n                .expect(\"lix_state metadata should read\"),\n            \"metadata\",\n            &expected,\n        );\n\n        assert_metadata_value(\n            session\n                .execute(\n                    \"SELECT metadata \\\n                     FROM lix_change \\\n                     WHERE entity_id = lix_json('[\\\"metadata-valid-object\\\"]') \\\n                       AND schema_key = 'lix_key_value'\",\n                    &[],\n                )\n                .await\n                .expect(\"lix_change metadata should read\"),\n            \"metadata\",\n            &expected,\n        );\n\n        assert_metadata_value(\n            session\n                .execute(\n                    &format!(\n                        \"SELECT metadata \\\n                         FROM lix_state_history \\\n                         WHERE start_commit_id = '{commit_id}' \\\n                           AND entity_id = lix_json('[\\\"metadata-valid-object\\\"]') \\\n                           AND schema_key = 'lix_key_value'\"\n                    ),\n                    &[],\n                )\n                .await\n                .expect(\"lix_state_history metadata should 
read\"),\n            \"metadata\",\n            &expected,\n        );\n    }\n);\n\nfn assert_invalid_metadata_error(error: LixError) {\n    assert!(\n        matches!(\n            error.code.as_str(),\n            \"LIX_ERROR_INVALID_JSON\"\n                | LixError::CODE_SCHEMA_VALIDATION\n                | LixError::CODE_INVALID_PARAM\n        ),\n        \"expected invalid metadata public error, got {error:?}\"\n    );\n    assert!(\n        error.message.contains(\"metadata\") && error.message.contains(\"JSON\"),\n        \"error should identify metadata JSON, got {error:?}\"\n    );\n}\n\nfn assert_metadata_value(\n    result: lix_engine::ExecuteResult,\n    column: &str,\n    expected: &serde_json::Value,\n) {\n    assert_eq!(result.len(), 1, \"expected one metadata row\");\n    let value = result.rows()[0]\n        .get::<Value>(column)\n        .unwrap_or_else(|_| panic!(\"{column} should be present\"));\n    assert_eq!(value, Value::Json(expected.clone()));\n}\n"
  },
  {
    "path": "packages/engine/tests/sql/read_only.rs",
    "content": "use lix_engine::{LixError, Value};\n\nsimulation_test!(\n    read_only_version_components_reject_direct_entity_writes,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n\n        assert_read_only_error(\n            session\n                .execute(\n                    \"INSERT INTO lix_version_descriptor (id, name, hidden) \\\n                     VALUES ('orphan-descriptor', 'Orphan', false)\",\n                    &[],\n                )\n                .await\n                .expect_err(\"descriptor insert should be read-only\"),\n            \"lix_version_descriptor\",\n            \"lix_version\",\n        );\n\n        assert_read_only_error(\n            session\n                .execute(\n                    \"UPDATE lix_version_descriptor SET name = 'Renamed' \\\n                     WHERE id = 'main'\",\n                    &[],\n                )\n                .await\n                .expect_err(\"descriptor update should be read-only\"),\n            \"lix_version_descriptor\",\n            \"lix_version\",\n        );\n\n        assert_read_only_error(\n            session\n                .execute(\"DELETE FROM lix_version_ref WHERE id = 'main'\", &[])\n                .await\n                .expect_err(\"ref delete should be read-only\"),\n            \"lix_version_ref\",\n            \"lix_version\",\n        );\n    }\n);\n\nsimulation_test!(\n    read_only_version_components_reject_lix_state_writes,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n      
  );\n\n        assert_read_only_error(\n            session\n                .execute(\n                    \"INSERT INTO lix_state (entity_id, schema_key, snapshot_content) \\\n                     VALUES (lix_json('[\\\"orphan-descriptor\\\"]'), 'lix_version_descriptor', \\\n                       lix_json('{\\\"id\\\":\\\"orphan-descriptor\\\",\\\"name\\\":\\\"Orphan\\\"}'))\",\n                    &[],\n                )\n                .await\n                .expect_err(\"descriptor insert via lix_state should be read-only\"),\n            \"lix_version_descriptor\",\n            \"lix_version\",\n        );\n\n        let descriptor_count = session\n            .execute(\n                \"SELECT COUNT(*) FROM lix_version_descriptor WHERE id = 'orphan-descriptor'\",\n                &[],\n            )\n            .await\n            .expect(\"descriptor count should query\");\n        assert_eq!(\n            descriptor_count.rows()[0].values(),\n            &[Value::Integer(0)],\n            \"read-only rejection should prevent orphan descriptor persistence\"\n        );\n    }\n);\n\nsimulation_test!(read_only_file_descriptor_rejects_writes, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"workspace session should open\"),\n        &engine,\n    );\n\n    assert_read_only_error(\n        session\n            .execute(\n                \"INSERT INTO lix_file_descriptor (id, directory_id, name) \\\n                 VALUES ('file-direct', NULL, 'direct.txt')\",\n                &[],\n            )\n            .await\n            .expect_err(\"file descriptor insert should be read-only\"),\n        \"lix_file_descriptor\",\n        \"lix_file\",\n    );\n});\n\nsimulation_test!(read_only_binary_blob_ref_rejects_writes, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = 
sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"workspace session should open\"),\n        &engine,\n    );\n\n    session\n        .execute(\n            \"INSERT INTO lix_file (id, path, data) \\\n             VALUES ('file-with-data', '/file.bin', X'4142')\",\n            &[],\n        )\n        .await\n        .expect(\"file insert should create managed blob ref\");\n\n    assert_read_only_error(\n        session\n            .execute(\n                \"INSERT INTO lix_binary_blob_ref (id, blob_hash, size_bytes) \\\n                 VALUES ('file-direct', 'fake-hash', 2)\",\n                &[],\n            )\n            .await\n            .expect_err(\"blob ref insert should be read-only\"),\n        \"lix_binary_blob_ref\",\n        \"lix_file data column\",\n    );\n\n    assert_read_only_error(\n        session\n            .execute(\n                \"UPDATE lix_binary_blob_ref \\\n                 SET blob_hash = 'other-hash' \\\n                 WHERE id = 'file-with-data'\",\n                &[],\n            )\n            .await\n            .expect_err(\"blob ref update should be read-only\"),\n        \"lix_binary_blob_ref\",\n        \"lix_file data column\",\n    );\n\n    assert_read_only_error(\n        session\n            .execute(\n                \"DELETE FROM lix_binary_blob_ref WHERE id = 'file-with-data'\",\n                &[],\n            )\n            .await\n            .expect_err(\"blob ref delete should be read-only\"),\n        \"lix_binary_blob_ref\",\n        \"lix_file data column\",\n    );\n\n    assert_read_only_error(\n        session\n            .execute(\n                \"DELETE FROM lix_state \\\n                 WHERE schema_key = 'lix_binary_blob_ref' \\\n                   AND entity_id = lix_json('[\\\"file-with-data\\\"]')\",\n                &[],\n            )\n            .await\n            .expect_err(\"blob ref delete via lix_state 
should be read-only\"),\n        \"lix_binary_blob_ref\",\n        \"lix_file data column\",\n    );\n\n    let data = session\n        .execute(\"SELECT data FROM lix_file WHERE id = 'file-with-data'\", &[])\n        .await\n        .expect(\"file data should still be readable\");\n    assert_eq!(data.rows()[0].values(), &[Value::Blob(vec![0x41, 0x42])]);\n});\n\nsimulation_test!(\n    read_only_directory_descriptor_rejects_writes,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n\n        assert_read_only_error(\n            session\n                .execute(\n                    \"INSERT INTO lix_directory_descriptor (id, parent_id, name) \\\n                     VALUES ('dir-direct', NULL, 'direct')\",\n                    &[],\n                )\n                .await\n                .expect_err(\"directory descriptor insert should be read-only\"),\n            \"lix_directory_descriptor\",\n            \"lix_directory\",\n        );\n    }\n);\n\nsimulation_test!(\n    read_only_internal_state_rejects_lix_state_writes,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"workspace session should open\"),\n            &engine,\n        );\n\n        assert_read_only_error(\n            session\n                .execute(\n                    \"INSERT INTO lix_state (entity_id, schema_key, snapshot_content, global) \\\n                     VALUES (lix_json('[\\\"fake-change\\\"]'), 'lix_change', \\\n                       lix_json('{\\\"id\\\":\\\"fake-change\\\",\\\"entity_id\\\":\\\"x\\\",\\\"schema_key\\\":\\\"lix_key_value\\\"}'), true)\",\n     
               &[],\n                )\n                .await\n                .expect_err(\"lix_change insert via lix_state should be read-only\"),\n            \"lix_change\",\n            \"transactions commit\",\n        );\n    }\n);\n\nsimulation_test!(read_only_history_views_reject_dml, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"workspace session should open\"),\n        &engine,\n    );\n\n    assert_read_only_error(\n        session\n            .execute(\n                \"INSERT INTO lix_file_history (id, path) VALUES ('history-file', '/x.txt')\",\n                &[],\n            )\n            .await\n            .expect_err(\"history insert should be read-only\"),\n        \"lix_file_history\",\n        \"History views are query-only\",\n    );\n\n    assert_read_only_error(\n        session\n            .execute(\"UPDATE lix_directory_history SET name = 'renamed'\", &[])\n            .await\n            .expect_err(\"history update should be read-only\"),\n        \"lix_directory_history\",\n        \"History views are query-only\",\n    );\n\n    assert_read_only_error(\n        session\n            .execute(\"DELETE FROM lix_state_history\", &[])\n            .await\n            .expect_err(\"history delete should be read-only\"),\n        \"lix_state_history\",\n        \"History views are query-only\",\n    );\n});\n\nsimulation_test!(read_only_typed_history_views_reject_dml, |sim| async move {\n    let engine = sim.boot_engine().await;\n    let session = sim.wrap_session(\n        engine\n            .open_workspace_session()\n            .await\n            .expect(\"workspace session should open\"),\n        &engine,\n    );\n\n    session\n        .execute(\n            \"INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \\\n             VALUES (\\\n             
lix_json('{\\\"x-lix-key\\\":\\\"read_only_history_entity\\\",\\\"type\\\":\\\"object\\\",\\\"properties\\\":{\\\"id\\\":{\\\"type\\\":\\\"string\\\"}},\\\"required\\\":[\\\"id\\\"],\\\"additionalProperties\\\":false}'),\\\n             false,\\\n             true\\\n             )\",\n            &[],\n        )\n        .await\n        .expect(\"registered schema insert should succeed\");\n\n    assert_read_only_error(\n        session\n            .execute(\n                \"INSERT INTO read_only_history_entity_history (id) VALUES ('entity-a')\",\n                &[],\n            )\n            .await\n            .expect_err(\"typed history insert should be read-only\"),\n        \"read_only_history_entity_history\",\n        \"History views are query-only\",\n    );\n});\n\nfn assert_read_only_error(error: LixError, schema_key: &str, hint_fragment: &str) {\n    assert_eq!(error.code, LixError::CODE_READ_ONLY);\n    assert!(\n        error.message.contains(schema_key),\n        \"read-only error should name {schema_key}: {error:?}\"\n    );\n    assert!(\n        error\n            .hint\n            .as_deref()\n            .is_some_and(|hint| hint.contains(hint_fragment)),\n        \"read-only error should guide callers toward {hint_fragment}: {error:?}\"\n    );\n}\n"
  },
  {
    "path": "packages/engine/tests/sql/udfs.rs",
    "content": "simulation_test!(\n    lix_active_version_commit_id_returns_active_head,\n    |sim| async move {\n        let engine = sim.boot_engine().await;\n        let session = sim.wrap_session(\n            engine\n                .open_workspace_session()\n                .await\n                .expect(\"main session should open\"),\n            &engine,\n        );\n\n        session\n            .execute(\n                \"INSERT INTO lix_key_value (key, value) VALUES ('active-head', 'one')\",\n                &[],\n            )\n            .await\n            .expect(\"tracked write should succeed\");\n        let expected = engine\n            .load_version_head_commit_id(sim.main_version_id())\n            .await\n            .expect(\"head should load\")\n            .expect(\"head should exist\");\n\n        let result = session\n            .execute(\"SELECT lix_active_version_commit_id()\", &[])\n            .await\n            .expect(\"active head UDF should execute\");\n\n        assert_eq!(\n            result.rows()[0]\n                .get::<String>(\"lix_active_version_commit_id()\")\n                .unwrap(),\n            expected\n        );\n    }\n);\n"
  },
  {
    "path": "packages/engine/tests/sql.rs",
    "content": "#[macro_use]\n#[path = \"support/mod.rs\"]\nmod support;\n\n#[path = \"sql/entity_history.rs\"]\nmod entity_history;\n#[path = \"sql/errors.rs\"]\nmod errors;\n#[path = \"sql/history_conformance.rs\"]\nmod history_conformance;\n#[path = \"sql/lix_change.rs\"]\nmod lix_change;\n#[path = \"sql/lix_commit.rs\"]\nmod lix_commit;\n#[path = \"sql/lix_directory.rs\"]\nmod lix_directory;\n#[path = \"sql/lix_directory_history.rs\"]\nmod lix_directory_history;\n#[path = \"sql/lix_file.rs\"]\nmod lix_file;\n#[path = \"sql/lix_file_history.rs\"]\nmod lix_file_history;\n#[path = \"sql/lix_json.rs\"]\nmod lix_json;\n#[path = \"sql/lix_key_value.rs\"]\nmod lix_key_value;\n#[path = \"sql/lix_label_assignment.rs\"]\nmod lix_label_assignment;\n#[path = \"sql/lix_registered_schema.rs\"]\nmod lix_registered_schema;\n#[path = \"sql/lix_state.rs\"]\nmod lix_state;\n#[path = \"sql/lix_state_history.rs\"]\nmod lix_state_history;\n#[path = \"sql/lix_version.rs\"]\nmod lix_version;\n#[path = \"sql/metadata.rs\"]\nmod metadata;\n#[path = \"sql/read_only.rs\"]\nmod read_only;\n#[path = \"sql/udfs.rs\"]\nmod udfs;\n\nuse lix_engine::ExecuteResult;\nuse lix_engine::Value;\n\nasync fn select_rows(\n    session: &crate::support::simulation_test::engine::SimSession,\n    sql: &str,\n) -> Vec<Vec<Value>> {\n    let result = session\n        .execute(sql, &[])\n        .await\n        .expect(\"SELECT should succeed\");\n    rows_from_result(result)\n}\n\nfn assert_rows_eq(result: ExecuteResult, expected: Vec<Vec<Value>>) {\n    assert_eq!(rows_from_result(result), expected);\n}\n\nfn rows_from_result(result: ExecuteResult) -> Vec<Vec<Value>> {\n    let row_set = result;\n    row_set\n        .rows()\n        .iter()\n        .map(|row| row.values().to_vec())\n        .collect()\n}\n"
  },
  {
    "path": "packages/engine/tests/storage_accounting.rs",
    "content": "#![cfg(feature = \"storage-benches\")]\n\nuse async_trait::async_trait;\nuse lix_engine::storage_bench::{\n    self, JsonStorePayloadShape, StorageBenchConfig, StorageBenchKeyPattern,\n    StorageBenchSelectivity, StorageBenchUpdateFraction,\n};\nuse lix_engine::{\n    Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest,\n    BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch,\n    BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats,\n    BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, LixError,\n};\nuse std::collections::BTreeMap;\nuse std::sync::{Arc, Mutex};\n\ntype Store = BTreeMap<(String, Vec<u8>), Vec<u8>>;\n\nfn byte_page_from_iter(values: impl IntoIterator<Item = Vec<u8>>) -> lix_engine::BytePage {\n    let values = values.into_iter();\n    let (lower_bound, _) = values.size_hint();\n    let mut page = BytePageBuilder::with_capacity(lower_bound, 0);\n    for value in values {\n        page.push(&value);\n    }\n    page.finish()\n}\n\n#[derive(Clone, Default)]\nstruct AccountingBackend {\n    store: Arc<Mutex<Store>>,\n}\n\n#[derive(Debug, Clone, Copy, Default)]\nstruct AccountingSnapshot {\n    entries: usize,\n    key_bytes: usize,\n    value_bytes: usize,\n    tracked_chunk_entries: usize,\n    tracked_chunk_value_bytes: usize,\n    tracked_snapshot_entries: usize,\n    tracked_snapshot_value_bytes: usize,\n    tracked_root_entries: usize,\n    tracked_by_file_root_entries: usize,\n    json_entries: usize,\n    json_value_bytes: usize,\n    json_chunk_entries: usize,\n    json_chunk_value_bytes: usize,\n    changelog_entries: usize,\n    changelog_value_bytes: usize,\n    untracked_entries: usize,\n    untracked_value_bytes: usize,\n}\n\nimpl AccountingSnapshot {\n    fn total_bytes(self) -> usize {\n        self.key_bytes + self.value_bytes\n    }\n\n    fn bytes_per_row(self, rows: usize) -> usize {\n        if rows == 
0 {\n            0\n        } else {\n            self.total_bytes() / rows\n        }\n    }\n\n    fn saturating_sub(self, before: Self) -> Self {\n        Self {\n            entries: self.entries.saturating_sub(before.entries),\n            key_bytes: self.key_bytes.saturating_sub(before.key_bytes),\n            value_bytes: self.value_bytes.saturating_sub(before.value_bytes),\n            tracked_chunk_entries: self\n                .tracked_chunk_entries\n                .saturating_sub(before.tracked_chunk_entries),\n            tracked_chunk_value_bytes: self\n                .tracked_chunk_value_bytes\n                .saturating_sub(before.tracked_chunk_value_bytes),\n            tracked_snapshot_entries: self\n                .tracked_snapshot_entries\n                .saturating_sub(before.tracked_snapshot_entries),\n            tracked_snapshot_value_bytes: self\n                .tracked_snapshot_value_bytes\n                .saturating_sub(before.tracked_snapshot_value_bytes),\n            tracked_root_entries: self\n                .tracked_root_entries\n                .saturating_sub(before.tracked_root_entries),\n            tracked_by_file_root_entries: self\n                .tracked_by_file_root_entries\n                .saturating_sub(before.tracked_by_file_root_entries),\n            json_entries: self.json_entries.saturating_sub(before.json_entries),\n            json_value_bytes: self\n                .json_value_bytes\n                .saturating_sub(before.json_value_bytes),\n            json_chunk_entries: self\n                .json_chunk_entries\n                .saturating_sub(before.json_chunk_entries),\n            json_chunk_value_bytes: self\n                .json_chunk_value_bytes\n                .saturating_sub(before.json_chunk_value_bytes),\n            changelog_entries: self\n                .changelog_entries\n                .saturating_sub(before.changelog_entries),\n            changelog_value_bytes: self\n               
 .changelog_value_bytes\n                .saturating_sub(before.changelog_value_bytes),\n            untracked_entries: self\n                .untracked_entries\n                .saturating_sub(before.untracked_entries),\n            untracked_value_bytes: self\n                .untracked_value_bytes\n                .saturating_sub(before.untracked_value_bytes),\n        }\n    }\n}\n\n#[derive(Debug, Clone, Copy)]\nenum AccountingWorkload {\n    WriteRoot {\n        label: &'static str,\n        rows: usize,\n        payload_bytes: usize,\n    },\n    UpdateOne {\n        rows: usize,\n    },\n    AppendOne {\n        rows: usize,\n    },\n    Update10Pct {\n        rows: usize,\n    },\n}\n\n#[derive(Debug, Clone, Copy)]\nenum JsonAccountingWorkload {\n    Raw1k { rows: usize },\n    Structured16k { rows: usize },\n    Structured128k { rows: usize },\n    Array128k { rows: usize },\n    DedupeSame16k { rows: usize },\n    BaseUpdateObject1Of1000 { rows: usize },\n    BaseUpdateArray1Of1000 { rows: usize },\n}\n\n#[derive(Debug, Clone, Copy)]\nenum ChangelogAccountingWorkload {\n    AppendSmall { rows: usize },\n    Append1k { rows: usize },\n    Append16k { rows: usize },\n    Tombstones { rows: usize },\n    Metadata1k { rows: usize },\n    CompositeEntityIds { rows: usize },\n}\n\n#[derive(Debug, Clone, Copy)]\nenum UntrackedAccountingWorkload {\n    WriteRows {\n        label: &'static str,\n        rows: usize,\n        payload_bytes: usize,\n    },\n}\n\n#[tokio::test]\n#[ignore = \"prints deterministic storage accounting table\"]\nasync fn storage_accounting() {\n    let workloads = [\n        AccountingWorkload::WriteRoot {\n            label: \"write_root_payload_small\",\n            rows: 10_000,\n            payload_bytes: 0,\n        },\n        AccountingWorkload::WriteRoot {\n            label: \"write_root_payload_1k\",\n            rows: 10_000,\n            payload_bytes: 1024,\n        },\n        AccountingWorkload::WriteRoot {\n            
label: \"write_root_payload_16k\",\n            rows: 1_000,\n            payload_bytes: 16 * 1024,\n        },\n        AccountingWorkload::WriteRoot {\n            label: \"write_root_payload_128k\",\n            rows: 100,\n            payload_bytes: 128 * 1024,\n        },\n        AccountingWorkload::UpdateOne { rows: 100_000 },\n        AccountingWorkload::AppendOne { rows: 100_000 },\n        AccountingWorkload::Update10Pct { rows: 10_000 },\n    ];\n\n    println!(\n        \"{:<31} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>11} {:>11} {:>11} {:>9} {:>13}\",\n        \"workload\",\n        \"rows\",\n        \"entries\",\n        \"value_bytes\",\n        \"total_bytes\",\n        \"bytes/row\",\n        \"chunks\",\n        \"snapshots\",\n        \"roots\",\n        \"file_roots\",\n        \"json\",\n        \"json_bytes\"\n    );\n\n    for workload in workloads {\n        let row = run_workload(workload)\n            .await\n            .expect(\"storage accounting workload should run\");\n        println!(\n            \"{:<31} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>11} {:>11} {:>11} {:>9} {:>13}\",\n            workload_label(workload),\n            row.rows,\n            row.snapshot.entries,\n            row.snapshot.value_bytes,\n            row.snapshot.total_bytes(),\n            row.snapshot.bytes_per_row(row.rows),\n            row.snapshot.tracked_chunk_entries,\n            row.snapshot.tracked_snapshot_entries,\n            row.snapshot.tracked_root_entries,\n            row.snapshot.tracked_by_file_root_entries,\n            row.snapshot.json_entries,\n            row.snapshot.json_value_bytes,\n        );\n    }\n}\n\n#[tokio::test]\n#[ignore = \"prints deterministic json_store storage accounting table\"]\nasync fn json_store_accounting() {\n    let workloads = [\n        JsonAccountingWorkload::Raw1k { rows: 1_000 },\n        JsonAccountingWorkload::Structured16k { rows: 200 },\n        JsonAccountingWorkload::Structured128k { rows: 
50 },\n        JsonAccountingWorkload::Array128k { rows: 50 },\n        JsonAccountingWorkload::DedupeSame16k { rows: 1_000 },\n        JsonAccountingWorkload::BaseUpdateObject1Of1000 { rows: 50 },\n        JsonAccountingWorkload::BaseUpdateArray1Of1000 { rows: 50 },\n    ];\n\n    println!(\n        \"{:<37} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>15}\",\n        \"workload\",\n        \"rows\",\n        \"entries\",\n        \"value_bytes\",\n        \"total_bytes\",\n        \"bytes/row\",\n        \"json_refs\",\n        \"json_chunks\"\n    );\n\n    for workload in workloads {\n        let row = run_json_workload(workload)\n            .await\n            .expect(\"json_store accounting workload should run\");\n        println!(\n            \"{:<37} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>15}\",\n            json_workload_label(workload),\n            row.rows,\n            row.snapshot.entries,\n            row.snapshot.value_bytes,\n            row.snapshot.total_bytes(),\n            row.snapshot.bytes_per_row(row.rows),\n            row.snapshot.json_entries,\n            row.snapshot.json_chunk_entries,\n        );\n    }\n}\n\n#[tokio::test]\n#[ignore = \"prints deterministic changelog storage accounting table\"]\nasync fn changelog_accounting() {\n    let workloads = [\n        ChangelogAccountingWorkload::AppendSmall { rows: 10_000 },\n        ChangelogAccountingWorkload::Append1k { rows: 10_000 },\n        ChangelogAccountingWorkload::Append16k { rows: 1_000 },\n        ChangelogAccountingWorkload::Tombstones { rows: 10_000 },\n        ChangelogAccountingWorkload::Metadata1k { rows: 10_000 },\n        ChangelogAccountingWorkload::CompositeEntityIds { rows: 10_000 },\n    ];\n\n    println!(\n        \"{:<31} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>13}\",\n        \"workload\",\n        \"rows\",\n        \"entries\",\n        \"value_bytes\",\n        \"total_bytes\",\n        \"bytes/row\",\n        \"changes\",\n        \"change_bytes\"\n 
   );\n\n    for workload in workloads {\n        let row = run_changelog_workload(workload)\n            .await\n            .expect(\"changelog accounting workload should run\");\n        println!(\n            \"{:<31} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>13}\",\n            changelog_workload_label(workload),\n            row.rows,\n            row.snapshot.entries,\n            row.snapshot.value_bytes,\n            row.snapshot.total_bytes(),\n            row.snapshot.bytes_per_row(row.rows),\n            row.snapshot.changelog_entries,\n            row.snapshot.changelog_value_bytes,\n        );\n    }\n}\n\n#[tokio::test]\n#[ignore = \"prints deterministic untracked_state storage accounting table\"]\nasync fn untracked_state_accounting() {\n    let workloads = [\n        UntrackedAccountingWorkload::WriteRows {\n            label: \"write_rows_payload_small\",\n            rows: 10_000,\n            payload_bytes: 0,\n        },\n        UntrackedAccountingWorkload::WriteRows {\n            label: \"write_rows_payload_1k\",\n            rows: 10_000,\n            payload_bytes: 1024,\n        },\n        UntrackedAccountingWorkload::WriteRows {\n            label: \"write_rows_payload_16k\",\n            rows: 1_000,\n            payload_bytes: 16 * 1024,\n        },\n        UntrackedAccountingWorkload::WriteRows {\n            label: \"write_rows_payload_128k\",\n            rows: 100,\n            payload_bytes: 128 * 1024,\n        },\n    ];\n\n    println!(\n        \"{:<31} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>13}\",\n        \"workload\",\n        \"rows\",\n        \"entries\",\n        \"value_bytes\",\n        \"total_bytes\",\n        \"bytes/row\",\n        \"rows_ns\",\n        \"row_bytes\"\n    );\n\n    for workload in workloads {\n        let row = run_untracked_workload(workload)\n            .await\n            .expect(\"untracked_state accounting workload should run\");\n        println!(\n            \"{:<31} {:>7} {:>8} 
{:>12} {:>12} {:>10} {:>11} {:>13}\",\n            untracked_workload_label(workload),\n            row.rows,\n            row.snapshot.entries,\n            row.snapshot.value_bytes,\n            row.snapshot.total_bytes(),\n            row.snapshot.bytes_per_row(row.rows),\n            row.snapshot.untracked_entries,\n            row.snapshot.untracked_value_bytes,\n        );\n    }\n}\n\nstruct AccountingRow {\n    rows: usize,\n    snapshot: AccountingSnapshot,\n}\n\nasync fn run_workload(workload: AccountingWorkload) -> Result<AccountingRow, LixError> {\n    let accounting_backend = AccountingBackend::default();\n    let backend: Arc<dyn Backend + Send + Sync> = Arc::new(accounting_backend.clone());\n    let config = config_for(workload);\n    let rows = workload_rows(workload);\n    let snapshot = match workload {\n        AccountingWorkload::WriteRoot { .. } => {\n            let fixture = storage_bench::prepare_tracked_state_write_root(config).await?;\n            storage_bench::tracked_state_write_root_prepared(&backend, &fixture).await?;\n            accounting_backend.accounting()?\n        }\n        AccountingWorkload::UpdateOne { .. } => {\n            let fixture =\n                storage_bench::prepare_tracked_state_update_rows(&backend, config, 1).await?;\n            let before = accounting_backend.accounting()?;\n            storage_bench::tracked_state_update_existing_prepared(&backend, &fixture).await?;\n            accounting_backend.accounting()?.saturating_sub(before)\n        }\n        AccountingWorkload::AppendOne { .. 
} => {\n            let fixture =\n                storage_bench::prepare_tracked_state_append_child_rows(&backend, config, 1).await?;\n            let before = accounting_backend.accounting()?;\n            storage_bench::tracked_state_update_existing_prepared(&backend, &fixture).await?;\n            accounting_backend.accounting()?.saturating_sub(before)\n        }\n        AccountingWorkload::Update10Pct { rows } => {\n            let fixture = storage_bench::prepare_tracked_state_update_rows(\n                &backend,\n                config,\n                rows.div_ceil(10),\n            )\n            .await?;\n            let before = accounting_backend.accounting()?;\n            storage_bench::tracked_state_update_existing_prepared(&backend, &fixture).await?;\n            accounting_backend.accounting()?.saturating_sub(before)\n        }\n    };\n    Ok(AccountingRow { rows, snapshot })\n}\n\nasync fn run_json_workload(workload: JsonAccountingWorkload) -> Result<AccountingRow, LixError> {\n    let accounting_backend = AccountingBackend::default();\n    let backend: Arc<dyn Backend + Send + Sync> = Arc::new(accounting_backend.clone());\n    let rows = json_workload_rows(workload);\n    let snapshot = match workload {\n        JsonAccountingWorkload::Raw1k { rows } => {\n            let fixture =\n                storage_bench::prepare_json_store_write(JsonStorePayloadShape::SmallRaw1k, rows)\n                    .await?;\n            storage_bench::json_store_write_prepared(&backend, &fixture).await?;\n            accounting_backend.accounting()?\n        }\n        JsonAccountingWorkload::Structured16k { rows } => {\n            let fixture = storage_bench::prepare_json_store_write(\n                JsonStorePayloadShape::MediumStructured16k,\n                rows,\n            )\n            .await?;\n            storage_bench::json_store_write_prepared(&backend, &fixture).await?;\n            accounting_backend.accounting()?\n        }\n        
JsonAccountingWorkload::Structured128k { rows } => {\n            let fixture = storage_bench::prepare_json_store_write(\n                JsonStorePayloadShape::LargeStructured128k,\n                rows,\n            )\n            .await?;\n            storage_bench::json_store_write_prepared(&backend, &fixture).await?;\n            accounting_backend.accounting()?\n        }\n        JsonAccountingWorkload::Array128k { rows } => {\n            let fixture = storage_bench::prepare_json_store_write(\n                JsonStorePayloadShape::LargeArray128k,\n                rows,\n            )\n            .await?;\n            storage_bench::json_store_write_prepared(&backend, &fixture).await?;\n            accounting_backend.accounting()?\n        }\n        JsonAccountingWorkload::DedupeSame16k { rows } => {\n            let fixture = storage_bench::prepare_json_store_write_dedupe(\n                JsonStorePayloadShape::MediumStructured16k,\n                rows,\n            )\n            .await?;\n            storage_bench::json_store_write_prepared(&backend, &fixture).await?;\n            accounting_backend.accounting()?\n        }\n        JsonAccountingWorkload::BaseUpdateObject1Of1000 { rows } => {\n            let fixture =\n                storage_bench::prepare_json_store_base_update_object(&backend, rows).await?;\n            let before = accounting_backend.accounting()?;\n            storage_bench::json_store_write_against_base_object_prepared(&backend, &fixture)\n                .await?;\n            accounting_backend.accounting()?.saturating_sub(before)\n        }\n        JsonAccountingWorkload::BaseUpdateArray1Of1000 { rows } => {\n            let fixture =\n                storage_bench::prepare_json_store_base_update_array(&backend, rows).await?;\n            let before = accounting_backend.accounting()?;\n            storage_bench::json_store_write_against_base_array_prepared(&backend, &fixture).await?;\n            
accounting_backend.accounting()?.saturating_sub(before)\n        }\n    };\n    Ok(AccountingRow { rows, snapshot })\n}\n\nasync fn run_changelog_workload(\n    workload: ChangelogAccountingWorkload,\n) -> Result<AccountingRow, LixError> {\n    let accounting_backend = AccountingBackend::default();\n    let backend: Arc<dyn Backend + Send + Sync> = Arc::new(accounting_backend.clone());\n    let rows = changelog_workload_rows(workload);\n    let config = changelog_config_for(workload);\n    let fixture = match workload {\n        ChangelogAccountingWorkload::AppendSmall { .. }\n        | ChangelogAccountingWorkload::Append1k { .. }\n        | ChangelogAccountingWorkload::Append16k { .. } => {\n            storage_bench::prepare_changelog_append_changes(config).await?\n        }\n        ChangelogAccountingWorkload::Tombstones { .. } => {\n            storage_bench::prepare_changelog_append_tombstones(config).await?\n        }\n        ChangelogAccountingWorkload::Metadata1k { .. } => {\n            storage_bench::prepare_changelog_append_metadata(config).await?\n        }\n        ChangelogAccountingWorkload::CompositeEntityIds { .. 
} => {\n            storage_bench::prepare_changelog_append_composite_entity_ids(config).await?\n        }\n    };\n    storage_bench::changelog_append_changes_prepared(&backend, &fixture).await?;\n    Ok(AccountingRow {\n        rows,\n        snapshot: accounting_backend.accounting()?,\n    })\n}\n\nasync fn run_untracked_workload(\n    workload: UntrackedAccountingWorkload,\n) -> Result<AccountingRow, LixError> {\n    let accounting_backend = AccountingBackend::default();\n    let backend: Arc<dyn Backend + Send + Sync> = Arc::new(accounting_backend.clone());\n    let rows = untracked_workload_rows(workload);\n    let fixture =\n        storage_bench::prepare_untracked_state_write_rows(untracked_config_for(workload)).await?;\n    storage_bench::untracked_state_write_rows_prepared(&backend, &fixture).await?;\n    Ok(AccountingRow {\n        rows,\n        snapshot: accounting_backend.accounting()?,\n    })\n}\n\nfn config_for(workload: AccountingWorkload) -> StorageBenchConfig {\n    StorageBenchConfig {\n        rows: workload_rows(workload),\n        blob_bytes: 1024,\n        state_payload_bytes: match workload {\n            AccountingWorkload::WriteRoot { payload_bytes, .. } => payload_bytes,\n            AccountingWorkload::UpdateOne { .. }\n            | AccountingWorkload::AppendOne { .. }\n            | AccountingWorkload::Update10Pct { .. } => 256,\n        },\n        key_pattern: StorageBenchKeyPattern::Sequential,\n        selectivity: StorageBenchSelectivity::Percent100,\n        update_fraction: StorageBenchUpdateFraction::Percent100,\n    }\n}\n\nfn workload_rows(workload: AccountingWorkload) -> usize {\n    match workload {\n        AccountingWorkload::WriteRoot { rows, .. 
}\n        | AccountingWorkload::UpdateOne { rows }\n        | AccountingWorkload::AppendOne { rows }\n        | AccountingWorkload::Update10Pct { rows } => rows,\n    }\n}\n\nfn workload_label(workload: AccountingWorkload) -> String {\n    match workload {\n        AccountingWorkload::WriteRoot { label, rows, .. } => format!(\"{label}/{}\", row_label(rows)),\n        AccountingWorkload::UpdateOne { rows } => format!(\"update_1_existing/{}\", row_label(rows)),\n        AccountingWorkload::AppendOne { rows } => {\n            format!(\"append_1_new_child_commit/{}\", row_label(rows))\n        }\n        AccountingWorkload::Update10Pct { rows } => {\n            format!(\"update_10pct_existing/{}\", row_label(rows))\n        }\n    }\n}\n\nfn json_workload_rows(workload: JsonAccountingWorkload) -> usize {\n    match workload {\n        JsonAccountingWorkload::Raw1k { rows }\n        | JsonAccountingWorkload::Structured16k { rows }\n        | JsonAccountingWorkload::Structured128k { rows }\n        | JsonAccountingWorkload::Array128k { rows }\n        | JsonAccountingWorkload::DedupeSame16k { rows }\n        | JsonAccountingWorkload::BaseUpdateObject1Of1000 { rows }\n        | JsonAccountingWorkload::BaseUpdateArray1Of1000 { rows } => rows,\n    }\n}\n\nfn changelog_config_for(workload: ChangelogAccountingWorkload) -> StorageBenchConfig {\n    StorageBenchConfig {\n        rows: changelog_workload_rows(workload),\n        blob_bytes: 1024,\n        state_payload_bytes: match workload {\n            ChangelogAccountingWorkload::AppendSmall { .. }\n            | ChangelogAccountingWorkload::Tombstones { .. }\n            | ChangelogAccountingWorkload::CompositeEntityIds { .. } => 0,\n            ChangelogAccountingWorkload::Append1k { .. }\n            | ChangelogAccountingWorkload::Metadata1k { .. } => 1024,\n            ChangelogAccountingWorkload::Append16k { .. 
} => 16 * 1024,\n        },\n        key_pattern: StorageBenchKeyPattern::Sequential,\n        selectivity: StorageBenchSelectivity::Percent100,\n        update_fraction: StorageBenchUpdateFraction::Percent100,\n    }\n}\n\nfn changelog_workload_rows(workload: ChangelogAccountingWorkload) -> usize {\n    match workload {\n        ChangelogAccountingWorkload::AppendSmall { rows }\n        | ChangelogAccountingWorkload::Append1k { rows }\n        | ChangelogAccountingWorkload::Append16k { rows }\n        | ChangelogAccountingWorkload::Tombstones { rows }\n        | ChangelogAccountingWorkload::Metadata1k { rows }\n        | ChangelogAccountingWorkload::CompositeEntityIds { rows } => rows,\n    }\n}\n\nfn changelog_workload_label(workload: ChangelogAccountingWorkload) -> String {\n    match workload {\n        ChangelogAccountingWorkload::AppendSmall { rows } => {\n            format!(\"append_small/{}\", row_label(rows))\n        }\n        ChangelogAccountingWorkload::Append1k { rows } => {\n            format!(\"append_1k/{}\", row_label(rows))\n        }\n        ChangelogAccountingWorkload::Append16k { rows } => {\n            format!(\"append_16k/{}\", row_label(rows))\n        }\n        ChangelogAccountingWorkload::Tombstones { rows } => {\n            format!(\"tombstones/{}\", row_label(rows))\n        }\n        ChangelogAccountingWorkload::Metadata1k { rows } => {\n            format!(\"metadata_1k/{}\", row_label(rows))\n        }\n        ChangelogAccountingWorkload::CompositeEntityIds { rows } => {\n            format!(\"composite_entity_ids/{}\", row_label(rows))\n        }\n    }\n}\n\nfn untracked_config_for(workload: UntrackedAccountingWorkload) -> StorageBenchConfig {\n    StorageBenchConfig {\n        rows: untracked_workload_rows(workload),\n        blob_bytes: 1024,\n        state_payload_bytes: match workload {\n            UntrackedAccountingWorkload::WriteRows { payload_bytes, .. 
} => payload_bytes,\n        },\n        key_pattern: StorageBenchKeyPattern::Sequential,\n        selectivity: StorageBenchSelectivity::Percent100,\n        update_fraction: StorageBenchUpdateFraction::Percent100,\n    }\n}\n\nfn untracked_workload_rows(workload: UntrackedAccountingWorkload) -> usize {\n    match workload {\n        UntrackedAccountingWorkload::WriteRows { rows, .. } => rows,\n    }\n}\n\nfn untracked_workload_label(workload: UntrackedAccountingWorkload) -> String {\n    match workload {\n        UntrackedAccountingWorkload::WriteRows { label, rows, .. } => {\n            format!(\"{label}/{}\", row_label(rows))\n        }\n    }\n}\n\nfn json_workload_label(workload: JsonAccountingWorkload) -> String {\n    match workload {\n        JsonAccountingWorkload::Raw1k { rows } => {\n            format!(\"raw_1k/{}\", row_label(rows))\n        }\n        JsonAccountingWorkload::Structured16k { rows } => {\n            format!(\"structured_16k/{}\", row_label(rows))\n        }\n        JsonAccountingWorkload::Structured128k { rows } => {\n            format!(\"structured_128k/{}\", row_label(rows))\n        }\n        JsonAccountingWorkload::Array128k { rows } => {\n            format!(\"array_128k/{}\", row_label(rows))\n        }\n        JsonAccountingWorkload::DedupeSame16k { rows } => {\n            format!(\"dedupe_same_16k/{}\", row_label(rows))\n        }\n        JsonAccountingWorkload::BaseUpdateObject1Of1000 { rows } => {\n            format!(\"base_update_object_1_of_1000/{}\", row_label(rows))\n        }\n        JsonAccountingWorkload::BaseUpdateArray1Of1000 { rows } => {\n            format!(\"base_update_array_1_of_1000/{}\", row_label(rows))\n        }\n    }\n}\n\nfn row_label(rows: usize) -> String {\n    match rows {\n        100_000 => \"100k\".to_string(),\n        10_000 => \"10k\".to_string(),\n        1_000 => \"1k\".to_string(),\n        rows => rows.to_string(),\n    }\n}\n\nimpl AccountingBackend {\n    fn lock_store(&self) -> 
Result<std::sync::MutexGuard<'_, Store>, LixError> {\n        self.store\n            .lock()\n            .map_err(|_| LixError::new(\"LIX_ERROR_UNKNOWN\", \"accounting store mutex poisoned\"))\n    }\n\n    fn accounting(&self) -> Result<AccountingSnapshot, LixError> {\n        let store = self.lock_store()?;\n        let mut snapshot = AccountingSnapshot::default();\n        for ((namespace, key), value) in store.iter() {\n            snapshot.entries += 1;\n            snapshot.key_bytes += key.len();\n            snapshot.value_bytes += value.len();\n            match namespace.as_str() {\n                \"tracked_state.tree.chunk\" => {\n                    snapshot.tracked_chunk_entries += 1;\n                    snapshot.tracked_chunk_value_bytes += value.len();\n                }\n                \"tracked_state.tree.root\" => {\n                    snapshot.tracked_root_entries += 1;\n                }\n                \"tracked_state.tree.root.by_file\" => {\n                    snapshot.tracked_by_file_root_entries += 1;\n                }\n                \"json_store.json\" => {\n                    snapshot.json_entries += 1;\n                    snapshot.json_value_bytes += value.len();\n                }\n                \"json_store.json_chunk\" => {\n                    snapshot.json_chunk_entries += 1;\n                    snapshot.json_chunk_value_bytes += value.len();\n                }\n                \"changelog.change\" => {\n                    snapshot.changelog_entries += 1;\n                    snapshot.changelog_value_bytes += value.len();\n                }\n                \"untracked_state.row\" => {\n                    snapshot.untracked_entries += 1;\n                    snapshot.untracked_value_bytes += value.len();\n                }\n                _ => {}\n            }\n        }\n        Ok(snapshot)\n    }\n}\n\n#[async_trait]\nimpl Backend for AccountingBackend {\n    async fn begin_read_transaction(\n        &self,\n  
  ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        Ok(Box::new(AccountingTransaction {\n            store: Arc::clone(&self.store),\n            finalized: false,\n        }))\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        Ok(Box::new(AccountingTransaction {\n            store: Arc::clone(&self.store),\n            finalized: false,\n        }))\n    }\n}\n\nstruct AccountingTransaction {\n    store: Arc<Mutex<Store>>,\n    finalized: bool,\n}\n\nimpl AccountingTransaction {\n    fn lock_store(&self) -> Result<std::sync::MutexGuard<'_, Store>, LixError> {\n        self.store\n            .lock()\n            .map_err(|_| LixError::new(\"LIX_ERROR_UNKNOWN\", \"accounting store mutex poisoned\"))\n    }\n\n    fn scan_filtered_pairs(\n        &self,\n        request: &BackendKvScanRequest,\n    ) -> Result<Vec<(Vec<u8>, Vec<u8>)>, LixError> {\n        let store = self.lock_store()?;\n        let scan_limit = request\n            .limit\n            .checked_add(1 + usize::from(request.after.is_some()))\n            .unwrap_or(request.limit);\n        let mut pairs = scan_store(&store, &request.namespace, &request.range, Some(scan_limit));\n        pairs.retain(|(key, _)| {\n            request\n                .after\n                .as_deref()\n                .is_none_or(|after| key.as_slice() > after)\n        });\n        Ok(pairs)\n    }\n}\n\n#[async_trait]\nimpl BackendReadTransaction for AccountingTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        let store = self.lock_store()?;\n        let mut groups = Vec::with_capacity(request.groups.len());\n        for group in request.groups {\n            let namespace = group.namespace.clone();\n            let mut values = 
BytePageBuilder::with_capacity(group.keys.len(), 0);\n            let mut present = Vec::with_capacity(group.keys.len());\n            for key in group.keys {\n                if let Some(value) = store.get(&(namespace.clone(), key)) {\n                    values.push(value);\n                    present.push(true);\n                } else {\n                    values.push([]);\n                    present.push(false);\n                }\n            }\n            groups.push(BackendKvValueGroup::new(\n                namespace,\n                values.finish(),\n                present,\n            ));\n        }\n        Ok(BackendKvValueBatch { groups })\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        let store = self.lock_store()?;\n        let mut groups = Vec::with_capacity(request.groups.len());\n        for group in request.groups {\n            let namespace = group.namespace.clone();\n            let mut exists = Vec::with_capacity(group.keys.len());\n            for key in group.keys {\n                exists.push(store.contains_key(&(namespace.clone(), key)));\n            }\n            groups.push(BackendKvExistsGroup { namespace, exists });\n        }\n        Ok(BackendKvExistsBatch { groups })\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        let pairs = self.scan_filtered_pairs(&request)?;\n        let has_more = pairs.len() > request.limit;\n        let resume_after = has_more\n            .then(|| {\n                pairs\n                    .get(request.limit.saturating_sub(1))\n                    .map(|(key, _)| key.clone())\n            })\n            .flatten();\n        Ok(BackendKvKeyPage {\n            keys: byte_page_from_iter(pairs.into_iter().take(request.limit).map(|(key, _)| key)),\n            resume_after,\n        })\n    
}\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        let pairs = self.scan_filtered_pairs(&request)?;\n        let has_more = pairs.len() > request.limit;\n        let resume_after = has_more\n            .then(|| {\n                pairs\n                    .get(request.limit.saturating_sub(1))\n                    .map(|(key, _)| key.clone())\n            })\n            .flatten();\n        Ok(BackendKvValuePage {\n            values: byte_page_from_iter(\n                pairs\n                    .into_iter()\n                    .take(request.limit)\n                    .map(|(_, value)| value),\n            ),\n            resume_after,\n        })\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        let pairs = self.scan_filtered_pairs(&request)?;\n        let has_more = pairs.len() > request.limit;\n        let resume_after = has_more\n            .then(|| {\n                pairs\n                    .get(request.limit.saturating_sub(1))\n                    .map(|(key, _)| key.clone())\n            })\n            .flatten();\n        let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n        let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n        for (key, value) in pairs.into_iter().take(request.limit) {\n            keys.push(&key);\n            values.push(&value);\n        }\n        Ok(BackendKvEntryPage {\n            keys: keys.finish(),\n            values: values.finish(),\n            resume_after,\n        })\n    }\n\n    async fn rollback(mut self: Box<Self>) -> Result<(), LixError> {\n        self.finalized = true;\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl BackendWriteTransaction for AccountingTransaction {\n    async fn write_kv_batch(\n        &mut self,\n        
batch: BackendKvWriteBatch,\n    ) -> Result<BackendKvWriteStats, LixError> {\n        let mut stats = BackendKvWriteStats::default();\n        let mut store = self.lock_store()?;\n        for group in batch.groups {\n            let namespace = group.namespace().to_string();\n            for index in 0..group.put_count() {\n                let key = group.put_key(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put key\")\n                })?;\n                let value = group.put_value(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put value\")\n                })?;\n                stats.puts += 1;\n                stats.bytes_written += key.len() + value.len();\n                store.insert((namespace.clone(), key.to_vec()), value.to_vec());\n            }\n            for index in 0..group.delete_count() {\n                let key = group.delete_key(index).ok_or_else(|| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        \"backend write batch missing delete key\",\n                    )\n                })?;\n                stats.deletes += 1;\n                stats.bytes_written += key.len();\n                store.remove(&(namespace.clone(), key.to_vec()));\n            }\n        }\n        Ok(stats)\n    }\n\n    async fn commit(mut self: Box<Self>) -> Result<(), LixError> {\n        self.finalized = true;\n        Ok(())\n    }\n}\n\nfn scan_store(\n    store: &Store,\n    namespace: &str,\n    range: &BackendKvScanRange,\n    limit: Option<usize>,\n) -> Vec<(Vec<u8>, Vec<u8>)> {\n    let mut pairs = Vec::new();\n    for ((row_namespace, key), value) in store.iter() {\n        if row_namespace != namespace {\n            continue;\n        }\n        let matches = match range {\n            BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix),\n            
BackendKvScanRange::Range { start, end } => key >= start && key < end,\n        };\n        if matches {\n            pairs.push((key.clone(), value.clone()));\n        }\n        if limit.is_some_and(|limit| pairs.len() >= limit) {\n            break;\n        }\n    }\n    pairs\n}\n"
  },
  {
    "path": "packages/engine/tests/support/mod.rs",
    "content": "pub mod simulation_test;\n\n#[macro_export]\nmacro_rules! simulation_test {\n    ($name:ident, |$sim:ident| $body:expr) => {\n        $crate::simulation_test!(\n            $name,\n            options =\n                $crate::support::simulation_test::engine::SimulationOptions::default(),\n            |$sim| $body\n        );\n    };\n    ($name:ident, options = $options:expr, |$sim:ident| $body:expr) => {\n        $crate::simulation_test!(\n            @single $name,\n            base,\n            Base,\n            $options,\n            |$sim| $body\n        );\n        $crate::simulation_test!(\n            @single $name,\n            tracked_state_rebuild,\n            TrackedStateRebuild,\n            $options,\n            |$sim| $body\n        );\n    };\n    (@single $name:ident, $simulation:ident, $mode:ident, $options:expr, |$sim:ident| $body:expr) => {\n        paste::paste! {\n                #[test]\n                fn [<$name _ $simulation>]() {\n                    let simulation_mode =\n                        $crate::support::simulation_test::engine::SimulationMode::$mode;\n                    let simulation_name = stringify!($simulation);\n                    let timeout_secs = std::env::var(\"LIX_SIMULATION_TEST_TIMEOUT_SECS\")\n                        .ok()\n                        .and_then(|raw| raw.parse::<u64>().ok())\n                        .unwrap_or(120);\n                    let case_id = concat!(module_path!(), \"::\", stringify!($name));\n                    let (result_tx, result_rx) = std::sync::mpsc::sync_channel(1);\n                    let thread = std::thread::Builder::new()\n                        .name(format!(\"{}_{}\", stringify!($name), simulation_name))\n                        .stack_size(32 * 1024 * 1024)\n                        .spawn(move || {\n                            let run_result =\n                                std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| {\n                  
                  let runtime = tokio::runtime::Builder::new_current_thread()\n                                        .enable_all()\n                                        .build()\n                                        .expect(\"failed to build tokio runtime\");\n                                    runtime.block_on(async {\n                                        $crate::support::simulation_test::engine::run_single_simulation_test(\n                                            simulation_mode,\n                                            $options,\n                                            case_id,\n                                            |$sim| $body,\n                                        )\n                                        .await;\n                                    });\n                                }));\n                            let _ = result_tx.send(run_result);\n                        })\n                        .expect(concat!(\n                            \"failed to spawn \",\n                            stringify!($name),\n                            \" simulation_test thread\"\n                        ));\n\n                    match result_rx.recv_timeout(std::time::Duration::from_secs(timeout_secs)) {\n                        Ok(Ok(())) => {\n                            thread.join().expect(concat!(\n                                stringify!($name),\n                                \" simulation_test thread panicked\"\n                            ));\n                        }\n                        Ok(Err(payload)) => {\n                            let _ = thread.join();\n                            std::panic::resume_unwind(payload);\n                        }\n                        Err(std::sync::mpsc::RecvTimeoutError::Timeout) => {\n                            panic!(\n                                \"simulation_test timed out after {}s (simulation={}, case={})\",\n                                timeout_secs, 
simulation_name, case_id\n                            );\n                        }\n                        Err(std::sync::mpsc::RecvTimeoutError::Disconnected) => {\n                            if let Err(payload) = thread.join() {\n                                std::panic::resume_unwind(payload);\n                            }\n                            panic!(\n                                \"simulation_test thread exited without reporting result (simulation={}, case={})\",\n                                simulation_name, case_id\n                            );\n                        }\n                    }\n                }\n        }\n    };\n}\n"
  },
  {
    "path": "packages/engine/tests/support/simulation_test/engine/expect_same.rs",
    "content": "use std::collections::HashMap;\nuse std::sync::{Arc, Condvar, Mutex, OnceLock};\nuse std::time::{Duration, Instant};\n\nuse super::mode::SimulationMode;\n\n#[derive(Clone)]\npub(super) struct SimulationAssertions {\n    shared: SharedExpectSameRun,\n}\n\nimpl SimulationAssertions {\n    pub(super) fn shared(run: SharedExpectSameRun) -> Self {\n        Self { shared: run }\n    }\n\n    pub(super) fn start_mode(&self, _mode: SimulationMode) {\n        self.shared.start_mode();\n    }\n\n    pub(super) fn finish_mode(&self, _mode: SimulationMode) {\n        self.shared.finish_mode();\n    }\n}\n\n#[derive(Clone)]\npub(crate) struct SharedExpectSameRun {\n    case_id: String,\n    mode: SimulationMode,\n    call_index: Arc<Mutex<usize>>,\n    case: Arc<SharedExpectSameCase>,\n}\n\nstruct SharedExpectSameCase {\n    state: Mutex<SharedExpectSameState>,\n    condvar: Condvar,\n}\n\n#[derive(Default)]\nstruct SharedExpectSameState {\n    base_finished: bool,\n    base_failed: bool,\n    expected: Vec<(String, String)>,\n}\n\npub(crate) struct SharedExpectSameRunGuard {\n    run: SharedExpectSameRun,\n    finished: bool,\n}\n\nimpl SharedExpectSameRun {\n    pub(crate) fn new(case_id: &str, mode: SimulationMode) -> Self {\n        static CASES: OnceLock<Mutex<HashMap<String, Arc<SharedExpectSameCase>>>> = OnceLock::new();\n        let cases = CASES.get_or_init(|| Mutex::new(HashMap::new()));\n        let case = {\n            let mut guard = cases\n                .lock()\n                .expect(\"engine shared expectation registry lock poisoned\");\n            guard\n                .entry(case_id.to_string())\n                .or_insert_with(|| {\n                    Arc::new(SharedExpectSameCase {\n                        state: Mutex::new(SharedExpectSameState::default()),\n                        condvar: Condvar::new(),\n                    })\n                })\n                .clone()\n        };\n        Self {\n            case_id: 
case_id.to_string(),\n            mode,\n            call_index: Arc::new(Mutex::new(0)),\n            case,\n        }\n    }\n\n    fn start_mode(&self) {}\n\n    fn next_index(&self) -> usize {\n        let mut guard = self\n            .call_index\n            .lock()\n            .expect(\"engine shared expectation call index lock poisoned\");\n        let index = *guard;\n        *guard += 1;\n        index\n    }\n\n    fn call_count(&self) -> usize {\n        *self\n            .call_index\n            .lock()\n            .expect(\"engine shared expectation call index lock poisoned\")\n    }\n\n    fn assert_same(&self, label: &str, actual: String) {\n        let index = self.next_index();\n        match self.mode {\n            SimulationMode::Base => {\n                let mut state = self\n                    .case\n                    .state\n                    .lock()\n                    .expect(\"engine shared expectation lock poisoned\");\n                state.expected.push((label.to_string(), actual));\n                self.case.condvar.notify_all();\n            }\n            SimulationMode::TrackedStateRebuild => {\n                let expected = self.wait_for_expected(index, label);\n                assert_eq!(\n                    expected.0,\n                    label,\n                    \"simulation_test assertion order changed for case `{}` mode `{}` at call #{}\",\n                    self.case_id,\n                    self.mode.name(),\n                    index\n                );\n                assert_eq!(\n                    expected.1,\n                    actual,\n                    \"simulation_test assert_same `{label}` differed for case `{}` mode `{}`\",\n                    self.case_id,\n                    self.mode.name()\n                );\n            }\n        }\n    }\n\n    fn wait_for_expected(&self, index: usize, label: &str) -> (String, String) {\n        let deadline = Instant::now() + 
Duration::from_secs(120);\n        let mut state = self\n            .case\n            .state\n            .lock()\n            .expect(\"engine shared expectation lock poisoned\");\n        loop {\n            if state.base_failed {\n                panic!(\n                    \"simulation_test case `{}` base failed before `{}` could compare call #{}\",\n                    self.case_id, label, index\n                );\n            }\n            if let Some(expected) = state.expected.get(index) {\n                return expected.clone();\n            }\n            if state.base_finished {\n                panic!(\n                    \"simulation_test case `{}` mode `{}` called assert_same one extra time at call #{} ({label})\",\n                    self.case_id,\n                    self.mode.name(),\n                    index\n                );\n            }\n\n            let remaining = deadline.saturating_duration_since(Instant::now());\n            if remaining.is_zero() {\n                panic!(\n                    \"simulation_test timed out waiting for base assert_same call #{} in case `{}`\",\n                    index, self.case_id\n                );\n            }\n            let (next_state, timeout) = self\n                .case\n                .condvar\n                .wait_timeout(state, remaining)\n                .expect(\"engine shared expectation condvar wait poisoned\");\n            state = next_state;\n            if timeout.timed_out() {\n                panic!(\n                    \"simulation_test timed out waiting for base assert_same call #{} in case `{}`\",\n                    index, self.case_id\n                );\n            }\n        }\n    }\n\n    fn finish_mode(&self) {\n        match self.mode {\n            SimulationMode::Base => self.finish_base(std::thread::panicking()),\n            SimulationMode::TrackedStateRebuild => self.finish_compare(),\n        }\n    }\n\n    fn finish_base(&self, failed: bool) 
{\n        let mut state = self\n            .case\n            .state\n            .lock()\n            .expect(\"engine shared expectation lock poisoned\");\n        state.base_finished = true;\n        state.base_failed = failed;\n        self.case.condvar.notify_all();\n    }\n\n    fn finish_compare(&self) {\n        let deadline = Instant::now() + Duration::from_secs(120);\n        let mut state = self\n            .case\n            .state\n            .lock()\n            .expect(\"engine shared expectation lock poisoned\");\n        while !state.base_finished && !state.base_failed {\n            let remaining = deadline.saturating_duration_since(Instant::now());\n            if remaining.is_zero() {\n                panic!(\n                    \"simulation_test timed out waiting for base completion in case `{}`\",\n                    self.case_id\n                );\n            }\n            let (next_state, timeout) = self\n                .case\n                .condvar\n                .wait_timeout(state, remaining)\n                .expect(\"engine shared expectation condvar wait poisoned\");\n            state = next_state;\n            if timeout.timed_out() {\n                panic!(\n                    \"simulation_test timed out waiting for base completion in case `{}`\",\n                    self.case_id\n                );\n            }\n        }\n        if state.base_failed {\n            panic!(\n                \"simulation_test case `{}` base failed before mode `{}` completed\",\n                self.case_id,\n                self.mode.name()\n            );\n        }\n        assert_eq!(\n            self.call_count(),\n            state.expected.len(),\n            \"simulation_test mode `{}` for case `{}` did not execute all assert_same checks\",\n            self.mode.name(),\n            self.case_id\n        );\n    }\n}\n\nimpl SharedExpectSameRunGuard {\n    pub(crate) fn new(run: SharedExpectSameRun) -> Self {\n        
Self {\n            run,\n            finished: false,\n        }\n    }\n}\n\nimpl Drop for SharedExpectSameRunGuard {\n    fn drop(&mut self) {\n        if self.finished || self.run.mode != SimulationMode::Base {\n            return;\n        }\n        self.run.finish_base(std::thread::panicking());\n        self.finished = true;\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn shared_expect_same_compares_against_base_run() {\n        let case_id = \"expect_same_unit_shared\";\n        let base = SharedExpectSameRun::new(case_id, SimulationMode::Base);\n        base.assert_same(\"value\", \"1\".to_string());\n        base.finish_mode();\n\n        let rebuild = SharedExpectSameRun::new(case_id, SimulationMode::TrackedStateRebuild);\n        rebuild.assert_same(\"value\", \"1\".to_string());\n        rebuild.finish_mode();\n    }\n}\n"
  },
  {
    "path": "packages/engine/tests/support/simulation_test/engine/kv_backend.rs",
    "content": "use std::collections::BTreeMap;\nuse std::sync::{Arc, Mutex};\n\nuse async_trait::async_trait;\nuse lix_engine::{\n    Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetGroup,\n    BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest,\n    BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch,\n    BackendKvWriteGroup, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction,\n    BytePageBuilder, LixError,\n};\n\npub(crate) type KvKey = (String, Vec<u8>);\npub(crate) type KvMap = BTreeMap<KvKey, Vec<u8>>;\n\n/// KV-only backend used by simulation tests.\n#[derive(Clone, Default)]\npub(crate) struct InMemoryKvBackend {\n    data: Arc<Mutex<KvMap>>,\n}\n\nimpl InMemoryKvBackend {\n    pub(crate) fn new() -> Self {\n        Self::default()\n    }\n\n    pub(crate) fn from_snapshot(snapshot: KvMap) -> Self {\n        Self {\n            data: Arc::new(Mutex::new(snapshot)),\n        }\n    }\n\n    pub(crate) fn snapshot(&self) -> KvMap {\n        self.data\n            .lock()\n            .expect(\"in-memory backend lock poisoned\")\n            .clone()\n    }\n}\n\n#[async_trait]\nimpl Backend for InMemoryKvBackend {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        Ok(Box::new(InMemoryKvTransaction {\n            data: Arc::clone(&self.data),\n            pending: BTreeMap::new(),\n            closed: false,\n        }))\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        Ok(Box::new(InMemoryKvTransaction {\n            data: Arc::clone(&self.data),\n            pending: BTreeMap::new(),\n            closed: false,\n        }))\n    }\n}\n\nstruct InMemoryKvTransaction {\n    data: Arc<Mutex<KvMap>>,\n    pending: BTreeMap<KvKey, 
Option<Vec<u8>>>,\n    closed: bool,\n}\n\n#[async_trait]\nimpl BackendReadTransaction for InMemoryKvTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        let data = self.data.lock().expect(\"in-memory backend lock poisoned\");\n        let mut groups = Vec::with_capacity(request.groups.len());\n        for group in request.groups {\n            let namespace = group.namespace.clone();\n            let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0);\n            let mut present = Vec::with_capacity(group.keys.len());\n            for key in group.keys {\n                let identity = (namespace.clone(), key.clone());\n                let value = self\n                    .pending\n                    .get(&identity)\n                    .cloned()\n                    .unwrap_or_else(|| data.get(&identity).cloned());\n                if let Some(value) = value {\n                    values.push(value);\n                    present.push(true);\n                } else {\n                    values.push([]);\n                    present.push(false);\n                }\n            }\n            groups.push(BackendKvValueGroup::new(\n                namespace,\n                values.finish(),\n                present,\n            ));\n        }\n        Ok(BackendKvValueBatch { groups })\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        let data = self.data.lock().expect(\"in-memory backend lock poisoned\");\n        let mut groups = Vec::with_capacity(request.groups.len());\n        for group in request.groups {\n            let namespace = group.namespace.clone();\n            let mut exists = Vec::with_capacity(group.keys.len());\n            for key in group.keys {\n                let identity = (namespace.clone(), key.clone());\n         
       let present = self\n                    .pending\n                    .get(&identity)\n                    .map(|value| value.is_some())\n                    .unwrap_or_else(|| data.contains_key(&identity));\n                exists.push(present);\n            }\n            groups.push(BackendKvExistsGroup { namespace, exists });\n        }\n        Ok(BackendKvExistsBatch { groups })\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        let entries = self.scan_visible_entries(request)?;\n        Ok(BackendKvKeyPage {\n            keys: entries.keys,\n            resume_after: entries.resume_after,\n        })\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        let entries = self.scan_visible_entries(request)?;\n        Ok(BackendKvValuePage {\n            values: entries.values,\n            resume_after: entries.resume_after,\n        })\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        self.scan_visible_entries(request)\n    }\n\n    async fn rollback(mut self: Box<Self>) -> Result<(), LixError> {\n        self.pending.clear();\n        self.closed = true;\n        Ok(())\n    }\n}\n\nimpl InMemoryKvTransaction {\n    fn scan_visible_entries(\n        &self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        let mut visible = self\n            .data\n            .lock()\n            .expect(\"in-memory backend lock poisoned\")\n            .clone();\n        for (key, value) in &self.pending {\n            match value {\n                Some(value) => {\n                    visible.insert(key.clone(), value.clone());\n                }\n                None => {\n                    visible.remove(key);\n        
        }\n            }\n        }\n        Ok(scan_map(&visible, &request))\n    }\n}\n\n#[async_trait]\nimpl BackendWriteTransaction for InMemoryKvTransaction {\n    async fn write_kv_batch(\n        &mut self,\n        batch: BackendKvWriteBatch,\n    ) -> Result<BackendKvWriteStats, LixError> {\n        let mut stats = BackendKvWriteStats::default();\n        for group in batch.groups {\n            let namespace = group.namespace().to_string();\n            for index in 0..group.put_count() {\n                let key = group.put_key(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put key\")\n                })?;\n                let value = group.put_value(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put value\")\n                })?;\n                stats.puts += 1;\n                stats.bytes_written += key.len() + value.len();\n                self.pending\n                    .insert((namespace.clone(), key.to_vec()), Some(value.to_vec()));\n            }\n            for index in 0..group.delete_count() {\n                let key = group.delete_key(index).ok_or_else(|| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        \"backend write batch missing delete key\",\n                    )\n                })?;\n                stats.deletes += 1;\n                stats.bytes_written += key.len();\n                self.pending.insert((namespace.clone(), key.to_vec()), None);\n            }\n        }\n        Ok(stats)\n    }\n\n    async fn commit(mut self: Box<Self>) -> Result<(), LixError> {\n        if self.closed {\n            return Ok(());\n        }\n        let mut guard = self.data.lock().expect(\"in-memory backend lock poisoned\");\n        for (key, value) in std::mem::take(&mut self.pending) {\n            match value {\n                Some(value) => {\n    
                guard.insert(key, value);\n                }\n                None => {\n                    guard.remove(&key);\n                }\n            }\n        }\n        self.closed = true;\n        Ok(())\n    }\n}\n\nfn scan_map(map: &KvMap, request: &BackendKvScanRequest) -> BackendKvEntryPage {\n    let mut pairs = map\n        .iter()\n        .filter_map(|((entry_namespace, key), value)| {\n            if entry_namespace != &request.namespace || !key_in_range(key, &request.range) {\n                return None;\n            }\n            if request\n                .after\n                .as_deref()\n                .is_some_and(|after| key.as_slice() <= after)\n            {\n                return None;\n            }\n            Some((key.clone(), value.clone()))\n        })\n        .collect::<Vec<_>>();\n    pairs.sort_by(|left, right| left.0.cmp(&right.0));\n    let has_more = pairs.len() > request.limit;\n    pairs.truncate(request.limit);\n    let resume_after = has_more\n        .then(|| pairs.last().map(|(key, _)| key.clone()))\n        .flatten();\n    let mut keys = BytePageBuilder::with_capacity(pairs.len(), 0);\n    let mut values = BytePageBuilder::with_capacity(pairs.len(), 0);\n    for (key, value) in pairs {\n        keys.push(key);\n        values.push(value);\n    }\n    BackendKvEntryPage {\n        keys: keys.finish(),\n        values: values.finish(),\n        resume_after,\n    }\n}\n\nfn key_in_range(key: &[u8], range: &BackendKvScanRange) -> bool {\n    match range {\n        BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix),\n        BackendKvScanRange::Range { start, end } => key >= start.as_slice() && key < end.as_slice(),\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    async fn put(\n        tx: &mut Box<dyn BackendWriteTransaction + Send + Sync>,\n        namespace: &str,\n        key: &[u8],\n        value: &[u8],\n    ) {\n        tx.write_kv_batch(BackendKvWriteBatch {\n            
groups: {\n                let mut group = BackendKvWriteGroup::new(namespace);\n                group.put(key, value);\n                vec![group]\n            },\n        })\n        .await\n        .expect(\"put should succeed\");\n    }\n\n    async fn delete(\n        tx: &mut Box<dyn BackendWriteTransaction + Send + Sync>,\n        namespace: &str,\n        key: &[u8],\n    ) {\n        tx.write_kv_batch(BackendKvWriteBatch {\n            groups: {\n                let mut group = BackendKvWriteGroup::new(namespace);\n                group.delete(key);\n                vec![group]\n            },\n        })\n        .await\n        .expect(\"delete should succeed\");\n    }\n\n    async fn get(\n        tx: &mut (dyn BackendReadTransaction + Send + Sync),\n        namespace: &str,\n        key: &[u8],\n    ) -> Option<Vec<u8>> {\n        tx.get_values(BackendKvGetRequest {\n            groups: vec![BackendKvGetGroup {\n                namespace: namespace.to_string(),\n                keys: vec![key.to_vec()],\n            }],\n        })\n        .await\n        .expect(\"get should succeed\")\n        .groups\n        .remove(0)\n        .value(0)\n        .flatten()\n        .map(<[u8]>::to_vec)\n    }\n\n    async fn committed_get(\n        backend: &InMemoryKvBackend,\n        namespace: &str,\n        key: &[u8],\n    ) -> Option<Vec<u8>> {\n        let mut tx = backend\n            .begin_read_transaction()\n            .await\n            .expect(\"read transaction should open\");\n        let value = get(tx.as_mut(), namespace, key).await;\n        tx.rollback().await.expect(\"rollback should succeed\");\n        value\n    }\n\n    async fn scan(\n        tx: &mut (dyn BackendReadTransaction + Send + Sync),\n        namespace: &str,\n        range: BackendKvScanRange,\n        limit: Option<usize>,\n    ) -> BackendKvEntryPage {\n        tx.scan_entries(BackendKvScanRequest {\n            namespace: namespace.to_string(),\n            range,\n     
       after: None,\n            limit: limit.unwrap_or(usize::MAX),\n        })\n        .await\n        .expect(\"scan should succeed\")\n    }\n\n    #[tokio::test]\n    async fn transaction_put_commit_makes_value_visible() {\n        let backend = InMemoryKvBackend::new();\n        let mut tx = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        put(&mut tx, \"ns\", b\"a\", b\"one\").await;\n        assert_eq!(get(tx.as_mut(), \"ns\", b\"a\").await, Some(b\"one\".to_vec()));\n        tx.commit().await.expect(\"commit should succeed\");\n\n        assert_eq!(\n            committed_get(&backend, \"ns\", b\"a\").await,\n            Some(b\"one\".to_vec())\n        );\n    }\n\n    #[tokio::test]\n    async fn rollback_discards_pending_values() {\n        let backend = InMemoryKvBackend::new();\n        let mut tx = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n\n        put(&mut tx, \"ns\", b\"a\", b\"one\").await;\n        tx.rollback().await.expect(\"rollback should succeed\");\n\n        assert_eq!(committed_get(&backend, \"ns\", b\"a\").await, None);\n    }\n\n    #[tokio::test]\n    async fn scan_overlays_pending_write_and_delete() {\n        let backend = InMemoryKvBackend::new();\n        let mut seed = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"seed transaction should open\");\n        put(&mut seed, \"ns\", b\"a\", b\"old\").await;\n        put(&mut seed, \"ns\", b\"b\", b\"two\").await;\n        seed.commit().await.unwrap();\n\n        let mut tx = backend\n            .begin_write_transaction()\n            .await\n            .expect(\"transaction should open\");\n        put(&mut tx, \"ns\", b\"a\", b\"new\").await;\n        delete(&mut tx, \"ns\", b\"b\").await;\n        put(&mut tx, \"ns\", b\"c\", b\"three\").await;\n\n        let rows = scan(\n       
     tx.as_mut(),\n            \"ns\",\n            BackendKvScanRange::Prefix(Vec::new()),\n            None,\n        )\n        .await;\n        assert_eq!(rows.key(0).expect(\"key exists\"), b\"a\");\n        assert_eq!(rows.value(0).expect(\"value exists\"), b\"new\");\n        assert_eq!(rows.key(1).expect(\"key exists\"), b\"c\");\n        assert_eq!(rows.value(1).expect(\"value exists\"), b\"three\");\n    }\n}\n"
  },
  {
    "path": "packages/engine/tests/support/simulation_test/engine/macro_runtime.rs",
    "content": "use std::future::Future;\n\nuse lix_engine::LixError;\nuse lix_engine::{Engine, InitReceipt};\n\nuse super::expect_same::{SharedExpectSameRun, SharedExpectSameRunGuard, SimulationAssertions};\nuse super::kv_backend::{InMemoryKvBackend, KvMap};\nuse super::mode::{SimulationMode, SimulationOptions};\nuse super::rebuild_tracked_state::deterministic_timestamp_shuffle_for;\nuse super::simulation::Simulation;\n\n/// Runs one matrix entry for `simulation_test!`.\n///\n/// The macro generates one Rust test per mode. `assert_same` coordinates across\n/// those test functions through shared state keyed by `case_id`.\npub async fn run_single_simulation_test<F, Fut>(\n    mode: SimulationMode,\n    options: SimulationOptions,\n    case_id: &str,\n    test_fn: F,\n) where\n    F: Fn(Simulation) -> Fut,\n    Fut: Future<Output = ()>,\n{\n    let bootstrap = Bootstrap::create()\n        .await\n        .expect(\"simulation bootstrap should initialize\");\n    let expect_same = SharedExpectSameRun::new(case_id, mode);\n    let _guard = SharedExpectSameRunGuard::new(expect_same.clone());\n    let sim = Simulation::from_bootstrap(\n        mode,\n        options,\n        bootstrap.snapshot,\n        bootstrap.receipt,\n        SimulationAssertions::shared(expect_same),\n    )\n    .await\n    .expect(\"simulation mode should boot\");\n    test_fn(sim.clone()).await;\n    sim.finish();\n}\n\n#[derive(Clone)]\nstruct Bootstrap {\n    snapshot: KvMap,\n    receipt: InitReceipt,\n}\n\nimpl Bootstrap {\n    async fn create() -> Result<Self, LixError> {\n        let backend = InMemoryKvBackend::new();\n        let receipt = Engine::initialize(Box::new(backend.clone())).await?;\n        Ok(Self {\n            snapshot: backend.snapshot(),\n            receipt,\n        })\n    }\n}\n\npub(crate) async fn enable_deterministic_mode(\n    engine: &Engine,\n    receipt: &InitReceipt,\n    mode: SimulationMode,\n) -> Result<(), LixError> {\n    let timestamp_shuffle = 
deterministic_timestamp_shuffle_for(mode);\n    let session = engine.open_session(receipt.main_version_id.clone()).await?;\n    session\n        .execute(&deterministic_mode_insert_sql(timestamp_shuffle), &[])\n        .await?;\n    Ok(())\n}\n\nfn deterministic_mode_insert_sql(timestamp_shuffle: bool) -> String {\n    format!(\n        \"INSERT INTO lix_key_value (key, value, lixcol_global, lixcol_untracked) \\\n         VALUES ('lix_deterministic_mode', \\\n         lix_json('{{\\\"enabled\\\":true,\\\"timestamp_shuffle\\\":{timestamp_shuffle}}}'), true, true)\"\n    )\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn deterministic_mode_sql_carries_timestamp_shuffle_flag() {\n        assert!(deterministic_mode_insert_sql(true).contains(\"\\\"timestamp_shuffle\\\":true\"));\n        assert!(deterministic_mode_insert_sql(false).contains(\"\\\"timestamp_shuffle\\\":false\"));\n    }\n}\n"
  },
  {
    "path": "packages/engine/tests/support/simulation_test/engine/mod.rs",
    "content": "mod expect_same;\nmod kv_backend;\nmod macro_runtime;\nmod mode;\nmod rebuild_tracked_state;\nmod simulation;\n\n#[allow(unused_imports)]\npub use macro_runtime::run_single_simulation_test;\n#[allow(unused_imports)]\npub use mode::{SimulationMode, SimulationOptions};\n#[allow(unused_imports)]\npub use simulation::{SimSession, Simulation};\n"
  },
  {
    "path": "packages/engine/tests/support/simulation_test/engine/mode.rs",
    "content": "/// Runtime mode for the simulation harness.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum SimulationMode {\n    Base,\n    TrackedStateRebuild,\n}\n\nimpl SimulationMode {\n    pub fn name(self) -> &'static str {\n        match self {\n            Self::Base => \"base\",\n            Self::TrackedStateRebuild => \"tracked_state_rebuild\",\n        }\n    }\n}\n\n/// Options for `simulation_test!`.\n///\n/// Deterministic mode is enabled by default so the base and rebuild runs can be\n/// compared exactly without per-backend result normalization.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct SimulationOptions {\n    pub deterministic: bool,\n}\n\nimpl Default for SimulationOptions {\n    fn default() -> Self {\n        Self {\n            deterministic: true,\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn mode_names_are_stable_for_generated_test_names() {\n        assert_eq!(SimulationMode::Base.name(), \"base\");\n        assert_eq!(\n            SimulationMode::TrackedStateRebuild.name(),\n            \"tracked_state_rebuild\"\n        );\n    }\n\n    #[test]\n    fn deterministic_mode_is_enabled_by_default() {\n        assert!(SimulationOptions::default().deterministic);\n    }\n}\n"
  },
  {
    "path": "packages/engine/tests/support/simulation_test/engine/rebuild_tracked_state.rs",
    "content": "use std::sync::atomic::{AtomicBool, Ordering};\nuse std::sync::Arc;\n\nuse lix_engine::Engine;\nuse lix_engine::LixError;\n\nuse super::mode::SimulationMode;\n\n/// Returns whether a simulation mode should shuffle deterministic timestamps.\n///\n/// Rebuild mode intentionally shuffles timestamps so tests do not encode\n/// assumptions that tracked-state rebuild order and write-time order match.\npub(crate) fn deterministic_timestamp_shuffle_for(mode: SimulationMode) -> bool {\n    matches!(mode, SimulationMode::TrackedStateRebuild)\n}\n\n/// Mode-specific read/write hook for tracked-state rebuild simulation.\n#[derive(Clone)]\npub(crate) struct RebuildTrackedStateSimulation {\n    mode: SimulationMode,\n    pending: Arc<AtomicBool>,\n}\n\nimpl RebuildTrackedStateSimulation {\n    pub(crate) fn new(mode: SimulationMode) -> Self {\n        Self {\n            mode,\n            pending: Arc::new(AtomicBool::new(false)),\n        }\n    }\n\n    pub(crate) fn after_successful_write(&self) {\n        if self.mode == SimulationMode::TrackedStateRebuild {\n            self.pending.store(true, Ordering::SeqCst);\n        }\n    }\n\n    pub(crate) async fn before_read(\n        &self,\n        engine: &Engine,\n        version_id: &str,\n    ) -> Result<(), LixError> {\n        if self.mode != SimulationMode::TrackedStateRebuild {\n            return Ok(());\n        }\n        if !self.pending.swap(false, Ordering::SeqCst) {\n            return Ok(());\n        }\n        engine.rebuild_tracked_state_for_version(version_id).await\n    }\n\n    #[cfg(test)]\n    fn pending_for_test(&self) -> bool {\n        self.pending.load(Ordering::SeqCst)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn timestamp_shuffle_is_only_enabled_for_rebuild_mode() {\n        assert!(!deterministic_timestamp_shuffle_for(SimulationMode::Base));\n        assert!(deterministic_timestamp_shuffle_for(\n            SimulationMode::TrackedStateRebuild\n  
      ));\n    }\n\n    #[test]\n    fn successful_write_marks_rebuild_pending_only_in_rebuild_mode() {\n        let base = RebuildTrackedStateSimulation::new(SimulationMode::Base);\n        let rebuild = RebuildTrackedStateSimulation::new(SimulationMode::TrackedStateRebuild);\n\n        base.after_successful_write();\n        rebuild.after_successful_write();\n\n        assert!(!base.pending_for_test());\n        assert!(rebuild.pending_for_test());\n    }\n}\n"
  },
  {
    "path": "packages/engine/tests/support/simulation_test/engine/simulation.rs",
    "content": "use lix_engine::{Backend, LixError, Value};\nuse lix_engine::{\n    CreateVersionOptions, CreateVersionReceipt, Engine, ExecuteResult, InitReceipt,\n    MergeVersionOptions, MergeVersionPreview, MergeVersionPreviewOptions, MergeVersionReceipt,\n    SessionContext, SwitchVersionOptions, SwitchVersionReceipt,\n};\n\nuse super::expect_same::SimulationAssertions;\nuse super::kv_backend::InMemoryKvBackend;\nuse super::mode::{SimulationMode, SimulationOptions};\nuse super::rebuild_tracked_state::RebuildTrackedStateSimulation;\n\n/// Per-mode handle exposed to tests using `simulation_test!`.\n#[derive(Clone)]\npub struct Simulation {\n    mode: SimulationMode,\n    #[allow(dead_code)]\n    backend: InMemoryKvBackend,\n    engine: Engine,\n    receipt: InitReceipt,\n    rebuild_tracked_state: RebuildTrackedStateSimulation,\n    assertions: SimulationAssertions,\n}\n\n#[allow(dead_code)]\nimpl Simulation {\n    pub(super) async fn from_bootstrap(\n        mode: SimulationMode,\n        options: SimulationOptions,\n        snapshot: super::kv_backend::KvMap,\n        receipt: InitReceipt,\n        assertions: SimulationAssertions,\n    ) -> Result<Self, LixError> {\n        let backend = InMemoryKvBackend::from_snapshot(snapshot);\n        let engine = Engine::new(Box::new(backend.clone())).await?;\n        if options.deterministic {\n            super::macro_runtime::enable_deterministic_mode(&engine, &receipt, mode).await?;\n        }\n        assertions.start_mode(mode);\n        Ok(Self {\n            mode,\n            backend,\n            engine,\n            receipt,\n            rebuild_tracked_state: RebuildTrackedStateSimulation::new(mode),\n            assertions,\n        })\n    }\n\n    /// Returns the normal engine runtime for this simulation run.\n    pub async fn boot_engine(&self) -> Engine {\n        self.engine.clone()\n    }\n\n    /// Boots a fresh engine from the current backend snapshot.\n    ///\n    /// This is the simulation 
equivalent of closing the app and reopening the\n    /// same repository. It lets tests distinguish persisted workspace state\n    /// from in-memory session state.\n    pub async fn reboot_engine_from_current_snapshot(&self) -> Result<Engine, LixError> {\n        Engine::new(Box::new(InMemoryKvBackend::from_snapshot(\n            self.backend.snapshot(),\n        )))\n        .await\n    }\n\n    /// Wraps a normal engine session with simulation hooks.\n    pub fn wrap_session(&self, session: SessionContext, engine: &Engine) -> SimSession {\n        SimSession {\n            sim: self.clone(),\n            engine: engine.clone(),\n            session,\n        }\n    }\n\n    /// Returns a fresh, empty backend for lifecycle tests.\n    pub fn uninitialized_backend(&self) -> Box<dyn Backend + Send + Sync> {\n        Box::new(InMemoryKvBackend::new())\n    }\n\n    /// Returns the initialized Lix id.\n    pub fn lix_id(&self) -> &str {\n        &self.receipt.lix_id\n    }\n\n    /// Returns the initial commit id.\n    pub fn initial_commit_id(&self) -> &str {\n        &self.receipt.initial_commit_id\n    }\n\n    /// Returns the initialized main version id.\n    pub fn main_version_id(&self) -> &str {\n        &self.receipt.main_version_id\n    }\n\n    pub(crate) fn finish(&self) {\n        self.assertions.finish_mode(self.mode);\n    }\n}\n\n/// Session wrapper that injects simulation behavior around normal execution.\npub struct SimSession {\n    sim: Simulation,\n    engine: Engine,\n    session: SessionContext,\n}\n\n#[allow(dead_code)]\nimpl SimSession {\n    pub fn wrap_session(&self, session: SessionContext, engine: &Engine) -> SimSession {\n        SimSession {\n            sim: self.sim.clone(),\n            engine: engine.clone(),\n            session,\n        }\n    }\n\n    pub async fn active_version_id(&self) -> Result<String, LixError> {\n        self.session.active_version_id().await\n    }\n\n    pub async fn execute(&self, sql: &str, params: 
&[Value]) -> Result<ExecuteResult, LixError> {\n        match classify_statement(sql) {\n            StatementKind::Read => {\n                let active_version_id = self.session.active_version_id().await?;\n                self.sim\n                    .rebuild_tracked_state\n                    .before_read(&self.engine, &active_version_id)\n                    .await?;\n                self.session.execute(sql, params).await\n            }\n            StatementKind::Write => {\n                let result = self.session.execute(sql, params).await;\n                if result.is_ok() {\n                    self.sim.rebuild_tracked_state.after_successful_write();\n                }\n                result\n            }\n            StatementKind::Utility => self.session.execute(sql, params).await,\n        }\n    }\n\n    pub async fn create_version(\n        &self,\n        options: CreateVersionOptions,\n    ) -> Result<CreateVersionReceipt, LixError> {\n        let result = self.session.create_version(options).await;\n        if result.is_ok() {\n            self.sim.rebuild_tracked_state.after_successful_write();\n        }\n        result\n    }\n\n    pub async fn merge_version(\n        &self,\n        options: MergeVersionOptions,\n    ) -> Result<MergeVersionReceipt, LixError> {\n        let result = self.session.merge_version(options).await;\n        if result.is_ok() {\n            self.sim.rebuild_tracked_state.after_successful_write();\n        }\n        result\n    }\n\n    pub async fn merge_version_preview(\n        &self,\n        options: MergeVersionPreviewOptions,\n    ) -> Result<MergeVersionPreview, LixError> {\n        self.session.merge_version_preview(options).await\n    }\n\n    pub async fn switch_version(\n        &self,\n        options: SwitchVersionOptions,\n    ) -> Result<(SimSession, SwitchVersionReceipt), LixError> {\n        let (session, receipt) = self.session.switch_version(options).await?;\n        Ok((\n            
SimSession {\n                sim: self.sim.clone(),\n                engine: self.engine.clone(),\n                session,\n            },\n            receipt,\n        ))\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\nenum StatementKind {\n    Read,\n    Write,\n    Utility,\n}\n\nfn classify_statement(sql: &str) -> StatementKind {\n    let keyword = sql\n        .trim_start()\n        .split(|ch: char| ch.is_whitespace() || ch == '(')\n        .next()\n        .unwrap_or(\"\")\n        .to_ascii_uppercase();\n    match keyword.as_str() {\n        \"SELECT\" | \"WITH\" => StatementKind::Read,\n        \"INSERT\" | \"UPDATE\" | \"DELETE\" => StatementKind::Write,\n        _ => StatementKind::Utility,\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn classify_statement_splits_reads_writes_and_utility() {\n        assert_eq!(classify_statement(\"SELECT 1\"), StatementKind::Read);\n        assert_eq!(\n            classify_statement(\"  WITH x AS (...) SELECT 1\"),\n            StatementKind::Read\n        );\n        assert_eq!(\n            classify_statement(\"INSERT INTO t VALUES (1)\"),\n            StatementKind::Write\n        );\n        assert_eq!(\n            classify_statement(\"UPDATE t SET a = 1\"),\n            StatementKind::Write\n        );\n        assert_eq!(classify_statement(\"DELETE FROM t\"), StatementKind::Write);\n        assert_eq!(\n            classify_statement(\"EXPLAIN SELECT 1\"),\n            StatementKind::Utility\n        );\n    }\n}\n"
  },
  {
    "path": "packages/engine/tests/support/simulation_test/mod.rs",
    "content": "pub mod engine;\n"
  },
  {
    "path": "packages/engine/tests/tmp_lix_key_value_amplification.rs",
    "content": "use std::collections::BTreeMap;\nuse std::sync::{Arc, Mutex};\n\nuse async_trait::async_trait;\nuse lix_engine::{\n    Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvGetRequest, BackendKvKeyPage,\n    BackendKvScanRequest, BackendKvValueBatch, BackendKvValuePage, BackendKvWriteBatch,\n    BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, CreateVersionOptions,\n    Engine, LixError, SessionContext, Value,\n};\n\n#[allow(dead_code)]\n#[path = \"support/simulation_test/engine/kv_backend.rs\"]\nmod kv_backend;\n\nuse kv_backend::{InMemoryKvBackend, KvMap};\n\n#[derive(Debug, Clone, Default)]\nstruct AmplificationCounts {\n    begin_read_transactions: usize,\n    begin_write_transactions: usize,\n    commits: usize,\n    rollbacks: usize,\n    write_kv_batch_calls: usize,\n    puts: usize,\n    deletes: usize,\n    write_bytes: usize,\n    get_values_calls: usize,\n    get_values_keys: usize,\n    exists_many_calls: usize,\n    exists_many_keys: usize,\n    scan_keys_calls: usize,\n    scan_keys_rows: usize,\n    scan_values_calls: usize,\n    scan_values_rows: usize,\n    scan_entries_calls: usize,\n    scan_entries_rows: usize,\n    puts_by_namespace: BTreeMap<String, usize>,\n    deletes_by_namespace: BTreeMap<String, usize>,\n    bytes_by_namespace: BTreeMap<String, usize>,\n}\n\nimpl AmplificationCounts {\n    fn record_write_batch(&mut self, batch: &BackendKvWriteBatch) {\n        self.write_kv_batch_calls += 1;\n        for group in &batch.groups {\n            let namespace = group.namespace().to_string();\n            for index in 0..group.put_count() {\n                let Some(key) = group.put_key(index) else {\n                    continue;\n                };\n                let Some(value) = group.put_value(index) else {\n                    continue;\n                };\n                self.puts += 1;\n                self.write_bytes += key.len() + value.len();\n                
*self.puts_by_namespace.entry(namespace.clone()).or_default() += 1;\n                *self\n                    .bytes_by_namespace\n                    .entry(namespace.clone())\n                    .or_default() += key.len() + value.len();\n            }\n            for index in 0..group.delete_count() {\n                let Some(key) = group.delete_key(index) else {\n                    continue;\n                };\n                self.deletes += 1;\n                self.write_bytes += key.len();\n                *self\n                    .deletes_by_namespace\n                    .entry(namespace.clone())\n                    .or_default() += 1;\n                *self\n                    .bytes_by_namespace\n                    .entry(namespace.clone())\n                    .or_default() += key.len();\n            }\n        }\n    }\n\n    fn read_calls(&self) -> usize {\n        self.get_values_calls\n            + self.exists_many_calls\n            + self.scan_keys_calls\n            + self.scan_values_calls\n            + self.scan_entries_calls\n    }\n\n    fn read_items(&self) -> usize {\n        self.get_values_keys\n            + self.exists_many_keys\n            + self.scan_keys_rows\n            + self.scan_values_rows\n            + self.scan_entries_rows\n    }\n\n    fn write_mutations(&self) -> usize {\n        self.puts + self.deletes\n    }\n\n    fn puts_in(&self, namespace: &str) -> usize {\n        self.puts_by_namespace.get(namespace).copied().unwrap_or(0)\n    }\n\n    fn deletes_in(&self, namespace: &str) -> usize {\n        self.deletes_by_namespace\n            .get(namespace)\n            .copied()\n            .unwrap_or(0)\n    }\n\n    fn bytes_in(&self, namespace: &str) -> usize {\n        self.bytes_by_namespace.get(namespace).copied().unwrap_or(0)\n    }\n}\n\n#[derive(Clone, Default)]\nstruct CountingBackend {\n    inner: InMemoryKvBackend,\n    counts: Arc<Mutex<AmplificationCounts>>,\n}\n\nimpl CountingBackend {\n    fn 
reset_counts(&self) {\n        *self.counts.lock().expect(\"amplification counts lock\") = AmplificationCounts::default();\n    }\n\n    fn counts(&self) -> AmplificationCounts {\n        self.counts\n            .lock()\n            .expect(\"amplification counts lock\")\n            .clone()\n    }\n\n    fn snapshot(&self) -> KvMap {\n        self.inner.snapshot()\n    }\n}\n\n#[derive(Debug, Clone, Default)]\nstruct StorageAmplification {\n    before_entries: usize,\n    after_entries: usize,\n    before_key_value_bytes: usize,\n    after_key_value_bytes: usize,\n    before_namespace_key_value_bytes: usize,\n    after_namespace_key_value_bytes: usize,\n    added_entries: usize,\n    updated_entries: usize,\n    removed_entries: usize,\n    added_key_value_bytes: usize,\n    updated_before_key_value_bytes: usize,\n    updated_after_key_value_bytes: usize,\n    removed_key_value_bytes: usize,\n    added_namespace_key_value_bytes: usize,\n    updated_before_namespace_key_value_bytes: usize,\n    updated_after_namespace_key_value_bytes: usize,\n    removed_namespace_key_value_bytes: usize,\n    by_namespace: BTreeMap<String, StorageNamespaceAmplification>,\n}\n\n#[derive(Debug, Clone, Default)]\nstruct StorageNamespaceAmplification {\n    added_entries: usize,\n    updated_entries: usize,\n    removed_entries: usize,\n    added_key_value_bytes: usize,\n    updated_before_key_value_bytes: usize,\n    updated_after_key_value_bytes: usize,\n    removed_key_value_bytes: usize,\n    added_namespace_key_value_bytes: usize,\n    updated_before_namespace_key_value_bytes: usize,\n    updated_after_namespace_key_value_bytes: usize,\n    removed_namespace_key_value_bytes: usize,\n}\n\nimpl StorageAmplification {\n    fn from_snapshots(before: &KvMap, after: &KvMap) -> Self {\n        let mut result = Self {\n            before_entries: before.len(),\n            after_entries: after.len(),\n            before_key_value_bytes: snapshot_key_value_bytes(before),\n            
after_key_value_bytes: snapshot_key_value_bytes(after),\n            before_namespace_key_value_bytes: snapshot_namespace_key_value_bytes(before),\n            after_namespace_key_value_bytes: snapshot_namespace_key_value_bytes(after),\n            ..Self::default()\n        };\n\n        for (key, after_value) in after {\n            match before.get(key) {\n                None => {\n                    result.added_entries += 1;\n                    result.added_key_value_bytes += key_value_bytes(key, after_value);\n                    result.added_namespace_key_value_bytes +=\n                        namespace_key_value_bytes(key, after_value);\n                    let namespace = result.by_namespace.entry(key.0.clone()).or_default();\n                    namespace.added_entries += 1;\n                    namespace.added_key_value_bytes += key_value_bytes(key, after_value);\n                    namespace.added_namespace_key_value_bytes +=\n                        namespace_key_value_bytes(key, after_value);\n                }\n                Some(before_value) if before_value != after_value => {\n                    result.updated_entries += 1;\n                    result.updated_before_key_value_bytes += key_value_bytes(key, before_value);\n                    result.updated_after_key_value_bytes += key_value_bytes(key, after_value);\n                    result.updated_before_namespace_key_value_bytes +=\n                        namespace_key_value_bytes(key, before_value);\n                    result.updated_after_namespace_key_value_bytes +=\n                        namespace_key_value_bytes(key, after_value);\n                    let namespace = result.by_namespace.entry(key.0.clone()).or_default();\n                    namespace.updated_entries += 1;\n                    namespace.updated_before_key_value_bytes += key_value_bytes(key, before_value);\n                    namespace.updated_after_key_value_bytes += key_value_bytes(key, after_value);\n        
            namespace.updated_before_namespace_key_value_bytes +=\n                        namespace_key_value_bytes(key, before_value);\n                    namespace.updated_after_namespace_key_value_bytes +=\n                        namespace_key_value_bytes(key, after_value);\n                }\n                Some(_) => {}\n            }\n        }\n\n        for (key, before_value) in before {\n            if !after.contains_key(key) {\n                result.removed_entries += 1;\n                result.removed_key_value_bytes += key_value_bytes(key, before_value);\n                result.removed_namespace_key_value_bytes +=\n                    namespace_key_value_bytes(key, before_value);\n                let namespace = result.by_namespace.entry(key.0.clone()).or_default();\n                namespace.removed_entries += 1;\n                namespace.removed_key_value_bytes += key_value_bytes(key, before_value);\n                namespace.removed_namespace_key_value_bytes +=\n                    namespace_key_value_bytes(key, before_value);\n            }\n        }\n\n        result\n    }\n\n    fn touched_entries(&self) -> usize {\n        self.added_entries + self.updated_entries + self.removed_entries\n    }\n\n    fn changed_after_key_value_bytes(&self) -> usize {\n        self.added_key_value_bytes + self.updated_after_key_value_bytes\n    }\n\n    fn changed_after_namespace_key_value_bytes(&self) -> usize {\n        self.added_namespace_key_value_bytes + self.updated_after_namespace_key_value_bytes\n    }\n\n    fn net_key_value_bytes_delta(&self) -> isize {\n        self.after_key_value_bytes as isize - self.before_key_value_bytes as isize\n    }\n\n    fn net_namespace_key_value_bytes_delta(&self) -> isize {\n        self.after_namespace_key_value_bytes as isize\n            - self.before_namespace_key_value_bytes as isize\n    }\n}\n\nimpl StorageNamespaceAmplification {\n    fn touched_entries(&self) -> usize {\n        self.added_entries + 
self.updated_entries + self.removed_entries\n    }\n\n    fn changed_after_key_value_bytes(&self) -> usize {\n        self.added_key_value_bytes + self.updated_after_key_value_bytes\n    }\n\n    fn changed_after_namespace_key_value_bytes(&self) -> usize {\n        self.added_namespace_key_value_bytes + self.updated_after_namespace_key_value_bytes\n    }\n\n    fn net_key_value_bytes_delta(&self) -> isize {\n        (self.added_key_value_bytes + self.updated_after_key_value_bytes) as isize\n            - (self.removed_key_value_bytes + self.updated_before_key_value_bytes) as isize\n    }\n\n    fn net_namespace_key_value_bytes_delta(&self) -> isize {\n        (self.added_namespace_key_value_bytes + self.updated_after_namespace_key_value_bytes)\n            as isize\n            - (self.removed_namespace_key_value_bytes\n                + self.updated_before_namespace_key_value_bytes) as isize\n    }\n}\n\nfn storage_totals_for(\n    storage: &StorageAmplification,\n    namespaces: &[&str],\n) -> StorageNamespaceAmplification {\n    let mut totals = StorageNamespaceAmplification::default();\n    for namespace in namespaces {\n        let Some(item) = storage.by_namespace.get(*namespace) else {\n            continue;\n        };\n        totals.added_entries += item.added_entries;\n        totals.updated_entries += item.updated_entries;\n        totals.removed_entries += item.removed_entries;\n        totals.added_key_value_bytes += item.added_key_value_bytes;\n        totals.updated_before_key_value_bytes += item.updated_before_key_value_bytes;\n        totals.updated_after_key_value_bytes += item.updated_after_key_value_bytes;\n        totals.removed_key_value_bytes += item.removed_key_value_bytes;\n        totals.added_namespace_key_value_bytes += item.added_namespace_key_value_bytes;\n        totals.updated_before_namespace_key_value_bytes +=\n            item.updated_before_namespace_key_value_bytes;\n        totals.updated_after_namespace_key_value_bytes +=\n   
         item.updated_after_namespace_key_value_bytes;\n        totals.removed_namespace_key_value_bytes += item.removed_namespace_key_value_bytes;\n    }\n    totals\n}\n\nfn print_storage_class_row(\n    rows: usize,\n    category: &str,\n    namespaces: &[&str],\n    totals: &StorageNamespaceAmplification,\n) {\n    println!(\n        \"AMPLIFICATION_CATEGORY rows={rows} category={category} namespaces={} \\\n         added_entries={} updated_entries={} removed_entries={} touched_entries={} \\\n         net_key_value_bytes_delta={} changed_after_key_value_bytes={} \\\n         net_namespace_key_value_bytes_delta={} changed_after_namespace_key_value_bytes={} \\\n         touched_entries_per_row={:.3} net_key_value_bytes_delta_per_row={:.1} \\\n         changed_after_key_value_bytes_per_row={:.1} \\\n         net_namespace_key_value_bytes_delta_per_row={:.1} \\\n         changed_after_namespace_key_value_bytes_per_row={:.1}\",\n        namespaces.join(\",\"),\n        totals.added_entries,\n        totals.updated_entries,\n        totals.removed_entries,\n        totals.touched_entries(),\n        totals.net_key_value_bytes_delta(),\n        totals.changed_after_key_value_bytes(),\n        totals.net_namespace_key_value_bytes_delta(),\n        totals.changed_after_namespace_key_value_bytes(),\n        totals.touched_entries() as f64 / rows as f64,\n        totals.net_key_value_bytes_delta() as f64 / rows as f64,\n        totals.changed_after_key_value_bytes() as f64 / rows as f64,\n        totals.net_namespace_key_value_bytes_delta() as f64 / rows as f64,\n        totals.changed_after_namespace_key_value_bytes() as f64 / rows as f64,\n    );\n}\n\n#[derive(Debug, Clone)]\nstruct AmplificationRun {\n    counts: AmplificationCounts,\n    storage: StorageAmplification,\n}\n\nfn snapshot_key_value_bytes(snapshot: &KvMap) -> usize {\n    snapshot\n        .iter()\n        .map(|(key, value)| key_value_bytes(key, value))\n        .sum()\n}\n\nfn 
snapshot_namespace_key_value_bytes(snapshot: &KvMap) -> usize {\n    snapshot\n        .iter()\n        .map(|(key, value)| namespace_key_value_bytes(key, value))\n        .sum()\n}\n\nfn key_value_bytes(key: &(String, Vec<u8>), value: &[u8]) -> usize {\n    key.1.len() + value.len()\n}\n\nfn namespace_key_value_bytes(key: &(String, Vec<u8>), value: &[u8]) -> usize {\n    key.0.len() + key.1.len() + value.len()\n}\n\nasync fn setup_counting_engine() -> (CountingBackend, Engine, String) {\n    let backend = CountingBackend::default();\n    let receipt = Engine::initialize(Box::new(backend.clone()))\n        .await\n        .expect(\"engine should initialize\");\n    backend.reset_counts();\n\n    let engine = Engine::new(Box::new(backend.clone()))\n        .await\n        .expect(\"initialized engine should open\");\n    backend.reset_counts();\n\n    (backend, engine, receipt.main_version_id)\n}\n\nasync fn open_main_session(engine: &Engine, main_version_id: &str) -> SessionContext {\n    engine\n        .open_session(main_version_id.to_string())\n        .await\n        .expect(\"main session should open\")\n}\n\nasync fn create_branch(engine: &Engine, main: &SessionContext, id: &str) -> SessionContext {\n    let receipt = main\n        .create_version(CreateVersionOptions {\n            id: Some(id.to_string()),\n            name: format!(\"Amplification {id}\"),\n            from_commit_id: None,\n        })\n        .await\n        .expect(\"branch version should be created\");\n    engine\n        .open_session(receipt.id)\n        .await\n        .expect(\"branch session should open\")\n}\n\nfn start_measurement(backend: &CountingBackend) -> KvMap {\n    backend.reset_counts();\n    backend.snapshot()\n}\n\nfn finish_measurement(backend: &CountingBackend, before: KvMap) -> AmplificationRun {\n    let after = backend.snapshot();\n    AmplificationRun {\n        counts: backend.counts(),\n        storage: StorageAmplification::from_snapshots(&before, &after),\n 
   }\n}\n\n#[async_trait]\nimpl Backend for CountingBackend {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        self.counts\n            .lock()\n            .expect(\"amplification counts lock\")\n            .begin_read_transactions += 1;\n        Ok(Box::new(CountingReadTransaction {\n            inner: self.inner.begin_read_transaction().await?,\n            counts: Arc::clone(&self.counts),\n        }))\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        self.counts\n            .lock()\n            .expect(\"amplification counts lock\")\n            .begin_write_transactions += 1;\n        Ok(Box::new(CountingWriteTransaction {\n            inner: self.inner.begin_write_transaction().await?,\n            counts: Arc::clone(&self.counts),\n        }))\n    }\n}\n\nstruct CountingReadTransaction {\n    inner: Box<dyn BackendReadTransaction + Send + Sync + 'static>,\n    counts: Arc<Mutex<AmplificationCounts>>,\n}\n\n#[async_trait]\nimpl BackendReadTransaction for CountingReadTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        record_get_values(&self.counts, &request);\n        self.inner.get_values(request).await\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        record_exists_many(&self.counts, &request);\n        self.inner.exists_many(request).await\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        let result = self.inner.scan_keys(request).await?;\n        let mut counts = self.counts.lock().expect(\"amplification counts lock\");\n        
counts.scan_keys_calls += 1;\n        counts.scan_keys_rows += result.keys.len();\n        Ok(result)\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        let result = self.inner.scan_values(request).await?;\n        let mut counts = self.counts.lock().expect(\"amplification counts lock\");\n        counts.scan_values_calls += 1;\n        counts.scan_values_rows += result.values.len();\n        Ok(result)\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        let result = self.inner.scan_entries(request).await?;\n        let mut counts = self.counts.lock().expect(\"amplification counts lock\");\n        counts.scan_entries_calls += 1;\n        counts.scan_entries_rows += result.keys.len();\n        Ok(result)\n    }\n\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        self.counts\n            .lock()\n            .expect(\"amplification counts lock\")\n            .rollbacks += 1;\n        self.inner.rollback().await\n    }\n}\n\nstruct CountingWriteTransaction {\n    inner: Box<dyn BackendWriteTransaction + Send + Sync + 'static>,\n    counts: Arc<Mutex<AmplificationCounts>>,\n}\n\n#[async_trait]\nimpl BackendReadTransaction for CountingWriteTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        record_get_values(&self.counts, &request);\n        self.inner.get_values(request).await\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        record_exists_many(&self.counts, &request);\n        self.inner.exists_many(request).await\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> 
Result<BackendKvKeyPage, LixError> {\n        let result = self.inner.scan_keys(request).await?;\n        let mut counts = self.counts.lock().expect(\"amplification counts lock\");\n        counts.scan_keys_calls += 1;\n        counts.scan_keys_rows += result.keys.len();\n        Ok(result)\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        let result = self.inner.scan_values(request).await?;\n        let mut counts = self.counts.lock().expect(\"amplification counts lock\");\n        counts.scan_values_calls += 1;\n        counts.scan_values_rows += result.values.len();\n        Ok(result)\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        let result = self.inner.scan_entries(request).await?;\n        let mut counts = self.counts.lock().expect(\"amplification counts lock\");\n        counts.scan_entries_calls += 1;\n        counts.scan_entries_rows += result.keys.len();\n        Ok(result)\n    }\n\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        self.counts\n            .lock()\n            .expect(\"amplification counts lock\")\n            .rollbacks += 1;\n        self.inner.rollback().await\n    }\n}\n\n#[async_trait]\nimpl BackendWriteTransaction for CountingWriteTransaction {\n    async fn write_kv_batch(\n        &mut self,\n        batch: BackendKvWriteBatch,\n    ) -> Result<BackendKvWriteStats, LixError> {\n        self.counts\n            .lock()\n            .expect(\"amplification counts lock\")\n            .record_write_batch(&batch);\n        self.inner.write_kv_batch(batch).await\n    }\n\n    async fn commit(self: Box<Self>) -> Result<(), LixError> {\n        self.counts\n            .lock()\n            .expect(\"amplification counts lock\")\n            .commits += 1;\n        self.inner.commit().await\n    
}\n}\n\nfn record_get_values(counts: &Mutex<AmplificationCounts>, request: &BackendKvGetRequest) {\n    let mut counts = counts.lock().expect(\"amplification counts lock\");\n    counts.get_values_calls += 1;\n    counts.get_values_keys += request\n        .groups\n        .iter()\n        .map(|group| group.keys.len())\n        .sum::<usize>();\n}\n\nfn record_exists_many(counts: &Mutex<AmplificationCounts>, request: &BackendKvGetRequest) {\n    let mut counts = counts.lock().expect(\"amplification counts lock\");\n    counts.exists_many_calls += 1;\n    counts.exists_many_keys += request\n        .groups\n        .iter()\n        .map(|group| group.keys.len())\n        .sum::<usize>();\n}\n\nfn insert_sql(rows: usize, value_bytes: usize) -> String {\n    let values = (0..rows)\n        .map(|index| {\n            format!(\n                \"('amplification-key-{index:08}', '{}')\",\n                \"v\".repeat(value_bytes)\n            )\n        })\n        .collect::<Vec<_>>()\n        .join(\", \");\n    format!(\"INSERT INTO lix_key_value (key, value) VALUES {values}\")\n}\n\nfn update_key_value_sql(rows: usize) -> String {\n    let keys = (0..rows)\n        .map(|index| format!(\"'amplification-key-{index:08}'\"))\n        .collect::<Vec<_>>()\n        .join(\", \");\n    format!(\"UPDATE lix_key_value SET value = 'branch-updated' WHERE key IN ({keys})\")\n}\n\nfn insert_lix_file_descriptor_sql(rows: usize) -> String {\n    let values = (0..rows)\n        .map(|index| format!(\"('amplification-file-{index:08}', NULL, 'file-{index:08}.bin')\"))\n        .collect::<Vec<_>>()\n        .join(\", \");\n    format!(\"INSERT INTO lix_file (id, directory_id, name) VALUES {values}\")\n}\n\nfn update_lix_file_hidden_sql(rows: usize) -> String {\n    let ids = (0..rows)\n        .map(|index| format!(\"'amplification-file-{index:08}'\"))\n        .collect::<Vec<_>>()\n        .join(\", \");\n    format!(\"UPDATE lix_file SET hidden = true WHERE id IN 
({ids})\")\n}\n\nasync fn run_insert(rows: usize, value_bytes: usize) -> AmplificationRun {\n    let (backend, engine, main_version_id) = setup_counting_engine().await;\n    let session = open_main_session(&engine, &main_version_id).await;\n    let storage_before = start_measurement(&backend);\n\n    session\n        .execute(&insert_sql(rows, value_bytes), &[])\n        .await\n        .expect(\"lix_key_value insert should succeed\");\n\n    finish_measurement(&backend, storage_before)\n}\n\nasync fn run_lix_file_insert_data(file_bytes: usize) -> AmplificationRun {\n    let (backend, engine, main_version_id) = setup_counting_engine().await;\n    let session = open_main_session(&engine, &main_version_id).await;\n    let storage_before = start_measurement(&backend);\n\n    let params = [Value::Blob(synthetic_file_bytes(file_bytes))];\n    session\n        .execute(\n            \"INSERT INTO lix_file (id, path, data) \\\n             VALUES ('amplification-video-file', '/video.bin', $1)\",\n            &params,\n        )\n        .await\n        .expect(\"lix_file data insert should succeed\");\n\n    finish_measurement(&backend, storage_before)\n}\n\nasync fn run_branch_from_head_only() -> AmplificationRun {\n    let (backend, engine, main_version_id) = setup_counting_engine().await;\n    let main = open_main_session(&engine, &main_version_id).await;\n    let before = start_measurement(&backend);\n    let _branch = create_branch(&engine, &main, \"amplification-branch-only\").await;\n    finish_measurement(&backend, before)\n}\n\nasync fn run_key_value_branch_insert() -> AmplificationRun {\n    let (backend, engine, main_version_id) = setup_counting_engine().await;\n    let main = open_main_session(&engine, &main_version_id).await;\n    let branch = create_branch(&engine, &main, \"amplification-kv-insert\").await;\n    let before = start_measurement(&backend);\n    branch\n        .execute(\n            \"INSERT INTO lix_key_value (key, value) \\\n             
VALUES ('branch-insert-key', 'branch-value')\",\n            &[],\n        )\n        .await\n        .expect(\"branch key-value insert should succeed\");\n    finish_measurement(&backend, before)\n}\n\nasync fn run_key_value_branch_update(base_rows: usize, update_rows: usize) -> AmplificationRun {\n    let (backend, engine, main_version_id) = setup_counting_engine().await;\n    let main = open_main_session(&engine, &main_version_id).await;\n    main.execute(&insert_sql(base_rows, 8), &[])\n        .await\n        .expect(\"base key-values should insert\");\n    let branch = create_branch(\n        &engine,\n        &main,\n        &format!(\"amplification-kv-update-{update_rows}\"),\n    )\n    .await;\n    let before = start_measurement(&backend);\n    branch\n        .execute(&update_key_value_sql(update_rows), &[])\n        .await\n        .expect(\"branch key-value update should succeed\");\n    finish_measurement(&backend, before)\n}\n\nasync fn run_lix_file_branch_insert(file_bytes: usize) -> AmplificationRun {\n    let (backend, engine, main_version_id) = setup_counting_engine().await;\n    let main = open_main_session(&engine, &main_version_id).await;\n    let branch = create_branch(&engine, &main, \"amplification-file-insert\").await;\n    let before = start_measurement(&backend);\n    let params = [Value::Blob(synthetic_file_bytes(file_bytes))];\n    branch\n        .execute(\n            \"INSERT INTO lix_file (id, path, data) \\\n             VALUES ('branch-file', '/branch-file.bin', $1)\",\n            &params,\n        )\n        .await\n        .expect(\"branch lix_file insert should succeed\");\n    finish_measurement(&backend, before)\n}\n\nasync fn run_lix_file_branch_update_data(base_rows: usize, file_bytes: usize) -> AmplificationRun {\n    let (backend, engine, main_version_id) = setup_counting_engine().await;\n    let main = open_main_session(&engine, &main_version_id).await;\n    main.execute(&insert_lix_file_descriptor_sql(base_rows), 
&[])\n        .await\n        .expect(\"base lix_file descriptors should insert\");\n    let branch = create_branch(&engine, &main, \"amplification-file-update-data\").await;\n    let before = start_measurement(&backend);\n    let params = [Value::Blob(synthetic_file_bytes(file_bytes))];\n    branch\n        .execute(\n            \"UPDATE lix_file SET data = $1 \\\n             WHERE id = 'amplification-file-00000000'\",\n            &params,\n        )\n        .await\n        .expect(\"branch lix_file data update should succeed\");\n    finish_measurement(&backend, before)\n}\n\nasync fn run_lix_file_branch_rename(base_rows: usize) -> AmplificationRun {\n    let (backend, engine, main_version_id) = setup_counting_engine().await;\n    let main = open_main_session(&engine, &main_version_id).await;\n    main.execute(&insert_lix_file_descriptor_sql(base_rows), &[])\n        .await\n        .expect(\"base lix_file descriptors should insert\");\n    let branch = create_branch(&engine, &main, \"amplification-file-rename\").await;\n    let before = start_measurement(&backend);\n    branch\n        .execute(\n            \"UPDATE lix_file SET path = '/file-00000000-renamed.bin' \\\n             WHERE id = 'amplification-file-00000000'\",\n            &[],\n        )\n        .await\n        .expect(\"branch lix_file rename should succeed\");\n    finish_measurement(&backend, before)\n}\n\nasync fn run_lix_file_branch_update_hidden(\n    base_rows: usize,\n    update_rows: usize,\n) -> AmplificationRun {\n    let (backend, engine, main_version_id) = setup_counting_engine().await;\n    let main = open_main_session(&engine, &main_version_id).await;\n    main.execute(&insert_lix_file_descriptor_sql(base_rows), &[])\n        .await\n        .expect(\"base lix_file descriptors should insert\");\n    let branch = create_branch(&engine, &main, \"amplification-file-update-hidden\").await;\n    let before = start_measurement(&backend);\n    branch\n        
.execute(&update_lix_file_hidden_sql(update_rows), &[])\n        .await\n        .expect(\"branch lix_file hidden update should succeed\");\n    finish_measurement(&backend, before)\n}\n\nfn synthetic_file_bytes(size: usize) -> Vec<u8> {\n    let mut bytes = vec![0u8; size];\n    let mut state = 0x9e37_79b9_7f4a_7c15u64;\n    for (index, byte) in bytes.iter_mut().enumerate() {\n        state ^= state >> 12;\n        state ^= state << 25;\n        state ^= state >> 27;\n        state = state.wrapping_add(index as u64);\n        *byte = (state.wrapping_mul(0x2545_f491_4f6c_dd1d) >> 56) as u8;\n    }\n    bytes\n}\n\nfn stress_file_bytes_from_env() -> usize {\n    std::env::var(\"LIX_FILE_STRESS_BYTES\")\n        .ok()\n        .and_then(|value| parse_size_bytes(&value))\n        .unwrap_or(100 * 1024 * 1024)\n}\n\nfn parse_size_bytes(value: &str) -> Option<usize> {\n    let trimmed = value.trim();\n    if trimmed.is_empty() {\n        return None;\n    }\n    let lowercase = trimmed.to_ascii_lowercase();\n    let (number, multiplier) = if let Some(number) = lowercase.strip_suffix(\"gib\") {\n        (number, 1024usize * 1024 * 1024)\n    } else if let Some(number) = lowercase.strip_suffix(\"gb\") {\n        (number, 1000usize * 1000 * 1000)\n    } else if let Some(number) = lowercase.strip_suffix(\"mib\") {\n        (number, 1024usize * 1024)\n    } else if let Some(number) = lowercase.strip_suffix(\"mb\") {\n        (number, 1000usize * 1000)\n    } else if let Some(number) = lowercase.strip_suffix(\"kib\") {\n        (number, 1024usize)\n    } else if let Some(number) = lowercase.strip_suffix(\"kb\") {\n        (number, 1000usize)\n    } else {\n        (trimmed, 1usize)\n    };\n    number.trim().parse::<usize>().ok()?.checked_mul(multiplier)\n}\n\nfn print_amplification_row(rows: usize, value_bytes: usize, run: &AmplificationRun) {\n    let counts = &run.counts;\n    print_category_rows(rows, value_bytes, run);\n    println!(\n        \"AMPLIFICATION rows={rows} 
value_bytes={value_bytes} read_calls={} read_items={} \\\n         get_values_calls={} get_values_keys={} exists_many_calls={} exists_many_keys={} \\\n         scan_calls={} scan_rows={} write_batches={} puts={} deletes={} write_mutations={} \\\n         write_bytes={} read_calls_per_row={:.3} read_items_per_row={:.3} \\\n         write_mutations_per_row={:.3} write_bytes_per_row={:.1}\",\n        counts.read_calls(),\n        counts.read_items(),\n        counts.get_values_calls,\n        counts.get_values_keys,\n        counts.exists_many_calls,\n        counts.exists_many_keys,\n        counts.scan_keys_calls + counts.scan_values_calls + counts.scan_entries_calls,\n        counts.scan_keys_rows + counts.scan_values_rows + counts.scan_entries_rows,\n        counts.write_kv_batch_calls,\n        counts.puts,\n        counts.deletes,\n        counts.write_mutations(),\n        counts.write_bytes,\n        counts.read_calls() as f64 / rows as f64,\n        counts.read_items() as f64 / rows as f64,\n        counts.write_mutations() as f64 / rows as f64,\n        counts.write_bytes as f64 / rows as f64,\n    );\n\n    for namespace in counts\n        .puts_by_namespace\n        .keys()\n        .chain(counts.deletes_by_namespace.keys())\n        .chain(counts.bytes_by_namespace.keys())\n        .collect::<std::collections::BTreeSet<_>>()\n    {\n        println!(\n            \"AMPLIFICATION_NAMESPACE rows={rows} namespace={} puts={} deletes={} bytes={}\",\n            namespace,\n            counts\n                .puts_by_namespace\n                .get(namespace)\n                .copied()\n                .unwrap_or(0),\n            counts\n                .deletes_by_namespace\n                .get(namespace)\n                .copied()\n                .unwrap_or(0),\n            counts\n                .bytes_by_namespace\n                .get(namespace)\n                .copied()\n                .unwrap_or(0),\n        );\n    }\n}\n\nfn 
print_category_rows(rows: usize, value_bytes: usize, run: &AmplificationRun) {\n    let counts = &run.counts;\n    let storage = &run.storage;\n    let canonical_changelog_row_namespaces = [\"changelog.change\", \"changelog.change_pack\"];\n    let canonical_commit_pack_namespaces =\n        [\"commit_record\", \"change_record_pack\", \"change_ref_pack\"];\n    let canonical_commit_store_namespaces = [\n        \"commit_store.commit\",\n        \"commit_store.change_pack\",\n        \"commit_store.membership_pack\",\n    ];\n    let canonical_storage_namespaces = [\n        \"changelog.change\",\n        \"changelog.change_pack\",\n        \"commit_record\",\n        \"change_record_pack\",\n        \"change_ref_pack\",\n        \"commit_store.commit\",\n        \"commit_store.change_pack\",\n        \"commit_store.membership_pack\",\n    ];\n    let index_storage_namespaces = [\n        \"tracked_state.tree.chunk\",\n        \"tracked_state.tree.root\",\n        \"tracked_state.tree.root.by_file\",\n        \"tracked_state.delta_pack\",\n        \"change_id_index\",\n    ];\n    let payload_storage_namespaces = [\n        \"json_store.json\",\n        \"json_store.json_chunk\",\n        \"binary_cas.manifest\",\n        \"binary_cas.manifest_chunk\",\n        \"binary_cas.chunk\",\n    ];\n    let sidecar_storage_namespaces = [\"untracked_state.row\"];\n    let canonical_storage = storage_totals_for(storage, &canonical_storage_namespaces);\n    let canonical_changelog_row_storage =\n        storage_totals_for(storage, &canonical_changelog_row_namespaces);\n    let canonical_commit_pack_storage =\n        storage_totals_for(storage, &canonical_commit_pack_namespaces);\n    let canonical_commit_store_storage =\n        storage_totals_for(storage, &canonical_commit_store_namespaces);\n    let index_storage = storage_totals_for(storage, &index_storage_namespaces);\n    let payload_storage = storage_totals_for(storage, &payload_storage_namespaces);\n    let 
sidecar_storage = storage_totals_for(storage, &sidecar_storage_namespaces);\n    let index_puts = counts.puts_in(\"tracked_state.tree.chunk\")\n        + counts.puts_in(\"tracked_state.tree.root\")\n        + counts.puts_in(\"tracked_state.tree.root.by_file\")\n        + counts.puts_in(\"tracked_state.delta_pack\")\n        + counts.puts_in(\"change_id_index\");\n    let index_bytes = counts.bytes_in(\"tracked_state.tree.chunk\")\n        + counts.bytes_in(\"tracked_state.tree.root\")\n        + counts.bytes_in(\"tracked_state.tree.root.by_file\")\n        + counts.bytes_in(\"tracked_state.delta_pack\")\n        + counts.bytes_in(\"change_id_index\");\n    let payload_puts: usize = payload_storage_namespaces\n        .iter()\n        .map(|namespace| counts.puts_in(namespace))\n        .sum();\n    let payload_bytes: usize = payload_storage_namespaces\n        .iter()\n        .map(|namespace| counts.bytes_in(namespace))\n        .sum();\n    let logical_value_bytes = rows.saturating_mul(value_bytes);\n    let scan_calls = counts.scan_keys_calls + counts.scan_values_calls + counts.scan_entries_calls;\n    let scan_rows = counts.scan_keys_rows + counts.scan_values_rows + counts.scan_entries_rows;\n    let changelog_encoded_objects = counts.puts_in(\"changelog.change\")\n        + counts.puts_in(\"commit_record\")\n        + counts.puts_in(\"change_record_pack\")\n        + counts.puts_in(\"change_ref_pack\")\n        + counts.puts_in(\"commit_store.commit\")\n        + counts.puts_in(\"commit_store.change_pack\")\n        + counts.puts_in(\"commit_store.membership_pack\");\n    let tracked_encoded_objects = index_puts;\n    let sidecar_encoded_objects = counts.puts_in(\"untracked_state.row\");\n\n    println!(\n        \"AMPLIFICATION_CATEGORY rows={rows} category=row logical_rows={rows} \\\n         physical_put_rows={} physical_delete_rows={} physical_row_mutations={} \\\n         row_mutations_per_logical_row={:.3}\",\n        counts.puts,\n        
counts.deletes,\n        counts.write_mutations(),\n        counts.write_mutations() as f64 / rows as f64,\n    );\n    println!(\n        \"AMPLIFICATION_CATEGORY rows={rows} category=write write_transactions={} commits={} \\\n         write_batches={} puts={} deletes={} write_mutations={} write_bytes={} \\\n         write_mutations_per_row={:.3} write_bytes_per_row={:.1}\",\n        counts.begin_write_transactions,\n        counts.commits,\n        counts.write_kv_batch_calls,\n        counts.puts,\n        counts.deletes,\n        counts.write_mutations(),\n        counts.write_bytes,\n        counts.write_mutations() as f64 / rows as f64,\n        counts.write_bytes as f64 / rows as f64,\n    );\n    println!(\n        \"AMPLIFICATION_CATEGORY rows={rows} category=storage before_entries={} after_entries={} \\\n         added_entries={} updated_entries={} removed_entries={} touched_entries={} \\\n         before_key_value_bytes={} after_key_value_bytes={} net_key_value_bytes_delta={} \\\n         changed_after_key_value_bytes={} before_namespace_key_value_bytes={} \\\n         after_namespace_key_value_bytes={} net_namespace_key_value_bytes_delta={} \\\n         changed_after_namespace_key_value_bytes={} touched_entries_per_row={:.3} \\\n         net_key_value_bytes_delta_per_row={:.1} changed_after_key_value_bytes_per_row={:.1} \\\n         net_namespace_key_value_bytes_delta_per_row={:.1} \\\n         changed_after_namespace_key_value_bytes_per_row={:.1}\",\n        storage.before_entries,\n        storage.after_entries,\n        storage.added_entries,\n        storage.updated_entries,\n        storage.removed_entries,\n        storage.touched_entries(),\n        storage.before_key_value_bytes,\n        storage.after_key_value_bytes,\n        storage.net_key_value_bytes_delta(),\n        storage.changed_after_key_value_bytes(),\n        storage.before_namespace_key_value_bytes,\n        storage.after_namespace_key_value_bytes,\n        
storage.net_namespace_key_value_bytes_delta(),\n        storage.changed_after_namespace_key_value_bytes(),\n        storage.touched_entries() as f64 / rows as f64,\n        storage.net_key_value_bytes_delta() as f64 / rows as f64,\n        storage.changed_after_key_value_bytes() as f64 / rows as f64,\n        storage.net_namespace_key_value_bytes_delta() as f64 / rows as f64,\n        storage.changed_after_namespace_key_value_bytes() as f64 / rows as f64,\n    );\n    print_storage_class_row(\n        rows,\n        \"storage_canonical\",\n        &canonical_storage_namespaces,\n        &canonical_storage,\n    );\n    print_storage_class_row(\n        rows,\n        \"storage_canonical_changelog_rows\",\n        &canonical_changelog_row_namespaces,\n        &canonical_changelog_row_storage,\n    );\n    print_storage_class_row(\n        rows,\n        \"storage_canonical_commit_packs\",\n        &canonical_commit_pack_namespaces,\n        &canonical_commit_pack_storage,\n    );\n    print_storage_class_row(\n        rows,\n        \"storage_canonical_commit_store\",\n        &canonical_commit_store_namespaces,\n        &canonical_commit_store_storage,\n    );\n    print_storage_class_row(\n        rows,\n        \"storage_index\",\n        &index_storage_namespaces,\n        &index_storage,\n    );\n    print_storage_class_row(\n        rows,\n        \"storage_payload\",\n        &payload_storage_namespaces,\n        &payload_storage,\n    );\n    print_storage_class_row(\n        rows,\n        \"storage_sidecar\",\n        &sidecar_storage_namespaces,\n        &sidecar_storage,\n    );\n    println!(\n        \"AMPLIFICATION_CATEGORY rows={rows} category=read read_transactions={} rollbacks={} \\\n         read_calls={} read_items={} get_values_calls={} get_values_keys={} \\\n         exists_many_calls={} exists_many_keys={} scan_calls={} scan_rows={} \\\n         read_calls_per_row={:.3} read_items_per_row={:.3}\",\n        counts.begin_read_transactions,\n     
   counts.rollbacks,\n        counts.read_calls(),\n        counts.read_items(),\n        counts.get_values_calls,\n        counts.get_values_keys,\n        counts.exists_many_calls,\n        counts.exists_many_keys,\n        scan_calls,\n        scan_rows,\n        counts.read_calls() as f64 / rows as f64,\n        counts.read_items() as f64 / rows as f64,\n    );\n    println!(\n        \"AMPLIFICATION_CATEGORY rows={rows} category=serialization proxy_encoded_put_objects={} \\\n         proxy_changelog_objects={} proxy_json_objects={} proxy_tracked_index_objects={} \\\n         proxy_sidecar_objects={} proxy_encoded_objects_per_row={:.3}\",\n        counts.puts,\n        changelog_encoded_objects,\n        payload_puts,\n        tracked_encoded_objects,\n        sidecar_encoded_objects,\n        counts.puts as f64 / rows as f64,\n    );\n    println!(\n        \"AMPLIFICATION_CATEGORY rows={rows} category=index index_puts={} index_deletes={} \\\n         index_mutations={} index_bytes={} tracked_chunk_puts={} tracked_root_puts={} \\\n         tracked_by_file_root_puts={} index_mutations_per_row={:.3} index_bytes_per_row={:.1}\",\n        index_puts,\n        0,\n        index_puts,\n        index_bytes,\n        counts.puts_in(\"tracked_state.tree.chunk\"),\n        counts.puts_in(\"tracked_state.tree.root\"),\n        counts.puts_in(\"tracked_state.tree.root.by_file\"),\n        index_puts as f64 / rows as f64,\n        index_bytes as f64 / rows as f64,\n    );\n    println!(\n        \"AMPLIFICATION_CATEGORY rows={rows} category=payload logical_value_bytes={} \\\n         external_payload_puts={} external_payload_bytes={} external_payload_puts_per_row={:.3} \\\n         external_payload_bytes_per_row={:.1} external_payload_bytes_per_logical_value_byte={:.3}\",\n        logical_value_bytes,\n        payload_puts,\n        payload_bytes,\n        payload_puts as f64 / rows as f64,\n        payload_bytes as f64 / rows as f64,\n        payload_bytes as f64 / 
logical_value_bytes.max(1) as f64,\n    );\n    println!(\n        \"AMPLIFICATION_CATEGORY rows={rows} category=sidecar_overlay untracked_puts={} \\\n         untracked_deletes={} untracked_bytes={} untracked_mutations_per_row={:.3}\",\n        counts.puts_in(\"untracked_state.row\"),\n        counts.deletes_in(\"untracked_state.row\"),\n        counts.bytes_in(\"untracked_state.row\"),\n        (counts.puts_in(\"untracked_state.row\") + counts.deletes_in(\"untracked_state.row\")) as f64\n            / rows as f64,\n    );\n\n    for (namespace, namespace_storage) in &storage.by_namespace {\n        println!(\n            \"AMPLIFICATION_STORAGE_NAMESPACE rows={rows} namespace={} added_entries={} \\\n             updated_entries={} removed_entries={} touched_entries={} net_key_value_bytes_delta={} \\\n             changed_after_key_value_bytes={} net_namespace_key_value_bytes_delta={} \\\n             changed_after_namespace_key_value_bytes={}\",\n            namespace,\n            namespace_storage.added_entries,\n            namespace_storage.updated_entries,\n            namespace_storage.removed_entries,\n            namespace_storage.touched_entries(),\n            namespace_storage.net_key_value_bytes_delta(),\n            namespace_storage.changed_after_key_value_bytes(),\n            namespace_storage.net_namespace_key_value_bytes_delta(),\n            namespace_storage.changed_after_namespace_key_value_bytes(),\n        );\n    }\n}\n\nfn print_amplification_case(\n    name: &str,\n    base_rows: usize,\n    logical_rows: usize,\n    value_bytes: usize,\n    run: &AmplificationRun,\n) {\n    println!(\n        \"AMPLIFICATION_CASE name={name} base_rows={base_rows} logical_rows={logical_rows} value_bytes={value_bytes}\",\n    );\n    print_amplification_row(logical_rows, value_bytes, run);\n}\n\n#[tokio::test]\n#[ignore = \"prints read/write amplification north-star metrics for lix_key_value inserts\"]\nasync fn 
lix_key_value_insert_amplification_north_star() {\n    let value_bytes = 8;\n    for rows in [1usize, 100, 1_000] {\n        let run = run_insert(rows, value_bytes).await;\n        print_amplification_row(rows, value_bytes, &run);\n    }\n}\n\n#[tokio::test]\n#[ignore = \"stress test for large lix_file.data inserts; defaults to 100MiB\"]\nasync fn lix_file_data_stress_amplification() {\n    let file_bytes = stress_file_bytes_from_env();\n    println!(\n        \"AMPLIFICATION_FILE_STRESS logical_files=1 logical_file_bytes={} env=LIX_FILE_STRESS_BYTES\",\n        file_bytes\n    );\n    let run = run_lix_file_insert_data(file_bytes).await;\n    print_amplification_row(1, file_bytes, &run);\n}\n\n#[tokio::test]\n#[ignore = \"prints branching amplification canaries for lix_key_value\"]\nasync fn lix_key_value_branching_amplification_canaries() {\n    let branch_only = run_branch_from_head_only().await;\n    print_amplification_case(\"kv_branch_from_head_only\", 0, 1, 0, &branch_only);\n\n    let branch_insert = run_key_value_branch_insert().await;\n    print_amplification_case(\"kv_branch_then_insert_1\", 0, 1, 12, &branch_insert);\n\n    let update_one = run_key_value_branch_update(1_000, 1).await;\n    print_amplification_case(\"kv_branch_from_1000_update_1\", 1_000, 1, 14, &update_one);\n\n    let update_hundred = run_key_value_branch_update(1_000, 100).await;\n    print_amplification_case(\n        \"kv_branch_from_1000_update_100\",\n        1_000,\n        100,\n        14,\n        &update_hundred,\n    );\n}\n\n#[tokio::test]\n#[ignore = \"prints branching amplification canaries for lix_file\"]\nasync fn lix_file_branching_amplification_canaries() {\n    let file_bytes = 1024;\n\n    let branch_insert = run_lix_file_branch_insert(file_bytes).await;\n    print_amplification_case(\n        \"file_branch_then_insert_1k_data\",\n        0,\n        1,\n        file_bytes,\n        &branch_insert,\n    );\n\n    let update_data = 
run_lix_file_branch_update_data(1_000, file_bytes).await;\n    print_amplification_case(\n        \"file_branch_from_1000_update_data_1\",\n        1_000,\n        1,\n        file_bytes,\n        &update_data,\n    );\n\n    let rename = run_lix_file_branch_rename(1_000).await;\n    print_amplification_case(\"file_branch_from_1000_rename_1\", 1_000, 1, 0, &rename);\n\n    let update_hidden = run_lix_file_branch_update_hidden(1_000, 100).await;\n    print_amplification_case(\n        \"file_branch_from_1000_update_hidden_100\",\n        1_000,\n        100,\n        0,\n        &update_hidden,\n    );\n}\n"
  },
  {
    "path": "packages/engine/tests/transaction.rs",
    "content": "use std::collections::BTreeMap;\nuse std::sync::atomic::{AtomicUsize, Ordering};\nuse std::sync::{Arc, Mutex};\n\nuse async_trait::async_trait;\nuse lix_engine::{\n    Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest,\n    BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch,\n    BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats,\n    BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, Engine, LixError,\n};\n\ntype KvKey = (String, Vec<u8>);\ntype KvMap = BTreeMap<KvKey, Vec<u8>>;\n\n#[tokio::test]\nasync fn read_sql_rolls_back_read_transaction_when_pre_plan_setup_fails() {\n    let backend = RecordingBackend::new();\n    let _receipt = Engine::initialize(Box::new(backend.clone()))\n        .await\n        .expect(\"backend should initialize\");\n    let engine = Engine::new(Box::new(backend.clone()))\n        .await\n        .expect(\"initialized backend should create an engine\");\n    let session = engine\n        .open_workspace_session()\n        .await\n        .expect(\"workspace session should open\");\n\n    session\n        .execute(\n            \"UPDATE lix_key_value SET value = 'missing-version' \\\n             WHERE key = 'lix_workspace_version_id'\",\n            &[],\n        )\n        .await\n        .expect(\"test should corrupt workspace selector\");\n\n    let before = backend.stats();\n    let error = session\n        .execute(\"SELECT 1\", &[])\n        .await\n        .expect_err(\"missing active version should fail read pre-plan\");\n    assert!(\n        error.message.contains(\"missing-version\"),\n        \"unexpected error: {error:?}\"\n    );\n\n    let delta = backend.stats().delta_since(&before);\n    assert_eq!(delta.read_opened, 1, \"read SQL should open one read tx\");\n    assert_eq!(\n        delta.read_rolled_back, 1,\n        \"read SQL pre-plan errors must roll back the opened read tx\"\n    
);\n}\n\n#[tokio::test]\nasync fn write_transaction_open_rolls_back_when_active_version_resolution_fails() {\n    let backend = RecordingBackend::new();\n    let _receipt = Engine::initialize(Box::new(backend.clone()))\n        .await\n        .expect(\"backend should initialize\");\n    let engine = Engine::new(Box::new(backend.clone()))\n        .await\n        .expect(\"initialized backend should create an engine\");\n    let session = engine\n        .open_workspace_session()\n        .await\n        .expect(\"workspace session should open\");\n\n    session\n        .execute(\n            \"UPDATE lix_key_value SET value = 'missing-version' \\\n             WHERE key = 'lix_workspace_version_id'\",\n            &[],\n        )\n        .await\n        .expect(\"test should corrupt workspace selector\");\n\n    let before = backend.stats();\n    let error = session\n        .execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('after-corrupt-selector', 'value')\",\n            &[],\n        )\n        .await\n        .expect_err(\"missing active version should fail write open\");\n    assert_eq!(error.code, \"LIX_VERSION_NOT_FOUND\");\n\n    let delta = backend.stats().delta_since(&before);\n    assert_eq!(delta.write_opened, 1, \"write path should open one write tx\");\n    assert_eq!(\n        delta.write_rolled_back, 1,\n        \"write open errors must roll back the opened write tx\"\n    );\n    assert_eq!(\n        delta.write_committed, 0,\n        \"failed write open must not commit\"\n    );\n}\n\n#[tokio::test]\nasync fn rebuild_tracked_state_rolls_back_read_and_write_transactions_on_failure() {\n    let backend = RecordingBackend::new();\n    let receipt = Engine::initialize(Box::new(backend.clone()))\n        .await\n        .expect(\"backend should initialize\");\n    let engine = Engine::new(Box::new(backend.clone()))\n        .await\n        .expect(\"initialized backend should create an engine\");\n\n    
backend.fail_read_namespace(\"commit_store.commit\");\n    let before = backend.stats();\n    let error = engine\n        .rebuild_tracked_state_for_version(&receipt.main_version_id)\n        .await\n        .expect_err(\"forced commit-store read failure should fail rebuild\");\n    assert!(\n        error.message.contains(\"forced read failure\"),\n        \"unexpected error: {error:?}\"\n    );\n\n    let delta = backend.stats().delta_since(&before);\n    assert_eq!(\n        delta.read_opened, delta.read_rolled_back,\n        \"every read tx opened during failed rebuild must be rolled back\"\n    );\n    assert_eq!(delta.write_opened, 1, \"rebuild should open one write tx\");\n    assert_eq!(\n        delta.write_rolled_back, 1,\n        \"failed rebuild must roll back the opened write tx\"\n    );\n    assert_eq!(delta.write_committed, 0, \"failed rebuild must not commit\");\n}\n\n#[derive(Clone, Default)]\nstruct RecordingBackend {\n    data: Arc<Mutex<KvMap>>,\n    stats: Arc<TransactionStats>,\n    fail_read_namespace: Arc<Mutex<Option<String>>>,\n}\n\nimpl RecordingBackend {\n    fn new() -> Self {\n        Self::default()\n    }\n\n    fn stats(&self) -> TransactionStatsSnapshot {\n        self.stats.snapshot()\n    }\n\n    fn fail_read_namespace(&self, namespace: &str) {\n        *self\n            .fail_read_namespace\n            .lock()\n            .expect(\"fail namespace lock should not poison\") = Some(namespace.to_string());\n    }\n}\n\n#[async_trait]\nimpl Backend for RecordingBackend {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        self.stats.read_opened.fetch_add(1, Ordering::SeqCst);\n        Ok(Box::new(RecordingTransaction {\n            data: Arc::clone(&self.data),\n            pending: BTreeMap::new(),\n            stats: Arc::clone(&self.stats),\n            fail_read_namespace: Arc::clone(&self.fail_read_namespace),\n            mode: 
RecordingTransactionMode::Read,\n        }))\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        self.stats.write_opened.fetch_add(1, Ordering::SeqCst);\n        Ok(Box::new(RecordingTransaction {\n            data: Arc::clone(&self.data),\n            pending: BTreeMap::new(),\n            stats: Arc::clone(&self.stats),\n            fail_read_namespace: Arc::clone(&self.fail_read_namespace),\n            mode: RecordingTransactionMode::Write,\n        }))\n    }\n}\n\nstruct RecordingTransaction {\n    data: Arc<Mutex<KvMap>>,\n    pending: BTreeMap<KvKey, Option<Vec<u8>>>,\n    stats: Arc<TransactionStats>,\n    fail_read_namespace: Arc<Mutex<Option<String>>>,\n    mode: RecordingTransactionMode,\n}\n\n#[derive(Clone, Copy)]\nenum RecordingTransactionMode {\n    Read,\n    Write,\n}\n\n#[async_trait]\nimpl BackendReadTransaction for RecordingTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        self.fail_if_get_namespace_matches(&request)?;\n        let data = self.data.lock().expect(\"recording backend lock poisoned\");\n        let mut groups = Vec::with_capacity(request.groups.len());\n        for group in request.groups {\n            let namespace = group.namespace.clone();\n            let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0);\n            let mut present = Vec::with_capacity(group.keys.len());\n            for key in group.keys {\n                let identity = (namespace.clone(), key.clone());\n                let value = self\n                    .pending\n                    .get(&identity)\n                    .cloned()\n                    .unwrap_or_else(|| data.get(&identity).cloned());\n                if let Some(value) = value {\n                    values.push(value);\n                    
present.push(true);\n                } else {\n                    values.push([]);\n                    present.push(false);\n                }\n            }\n            groups.push(BackendKvValueGroup::new(\n                namespace,\n                values.finish(),\n                present,\n            ));\n        }\n        Ok(BackendKvValueBatch { groups })\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        self.fail_if_get_namespace_matches(&request)?;\n        let data = self.data.lock().expect(\"recording backend lock poisoned\");\n        let mut groups = Vec::with_capacity(request.groups.len());\n        for group in request.groups {\n            let namespace = group.namespace.clone();\n            let mut exists = Vec::with_capacity(group.keys.len());\n            for key in group.keys {\n                let identity = (namespace.clone(), key.clone());\n                exists.push(\n                    self.pending\n                        .get(&identity)\n                        .map(|value| value.is_some())\n                        .unwrap_or_else(|| data.contains_key(&identity)),\n                );\n            }\n            groups.push(BackendKvExistsGroup { namespace, exists });\n        }\n        Ok(BackendKvExistsBatch { groups })\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        let entries = self.scan_visible_entries(request)?;\n        Ok(BackendKvKeyPage {\n            keys: entries.keys,\n            resume_after: entries.resume_after,\n        })\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        self.fail_if_scan_namespace_matches(&request)?;\n        let entries = self.scan_visible_entries(request)?;\n        
Ok(BackendKvValuePage {\n            values: entries.values,\n            resume_after: entries.resume_after,\n        })\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        self.fail_if_scan_namespace_matches(&request)?;\n        self.scan_visible_entries(request)\n    }\n\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        match self.mode {\n            RecordingTransactionMode::Read => {\n                self.stats.read_rolled_back.fetch_add(1, Ordering::SeqCst);\n            }\n            RecordingTransactionMode::Write => {\n                self.stats.write_rolled_back.fetch_add(1, Ordering::SeqCst);\n            }\n        }\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl BackendWriteTransaction for RecordingTransaction {\n    async fn write_kv_batch(\n        &mut self,\n        batch: BackendKvWriteBatch,\n    ) -> Result<BackendKvWriteStats, LixError> {\n        let mut stats = BackendKvWriteStats::default();\n        for group in batch.groups {\n            let namespace = group.namespace().to_string();\n            for index in 0..group.put_count() {\n                let key = group.put_key(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put key\")\n                })?;\n                let value = group.put_value(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put value\")\n                })?;\n                stats.puts += 1;\n                stats.bytes_written += key.len() + value.len();\n                self.pending\n                    .insert((namespace.clone(), key.to_vec()), Some(value.to_vec()));\n            }\n            for index in 0..group.delete_count() {\n                let key = group.delete_key(index).ok_or_else(|| {\n                    LixError::new(\n                        
\"LIX_ERROR_UNKNOWN\",\n                        \"backend write batch missing delete key\",\n                    )\n                })?;\n                stats.deletes += 1;\n                stats.bytes_written += key.len();\n                self.pending.insert((namespace.clone(), key.to_vec()), None);\n            }\n        }\n        Ok(stats)\n    }\n\n    async fn commit(mut self: Box<Self>) -> Result<(), LixError> {\n        self.stats.write_committed.fetch_add(1, Ordering::SeqCst);\n        let mut guard = self.data.lock().expect(\"recording backend lock poisoned\");\n        for (key, value) in std::mem::take(&mut self.pending) {\n            match value {\n                Some(value) => {\n                    guard.insert(key, value);\n                }\n                None => {\n                    guard.remove(&key);\n                }\n            }\n        }\n        Ok(())\n    }\n}\n\nimpl RecordingTransaction {\n    fn fail_if_get_namespace_matches(&self, request: &BackendKvGetRequest) -> Result<(), LixError> {\n        for group in &request.groups {\n            self.fail_if_namespace_matches(&group.namespace)?;\n        }\n        Ok(())\n    }\n\n    fn fail_if_scan_namespace_matches(\n        &self,\n        request: &BackendKvScanRequest,\n    ) -> Result<(), LixError> {\n        self.fail_if_namespace_matches(&request.namespace)\n    }\n\n    fn fail_if_namespace_matches(&self, namespace: &str) -> Result<(), LixError> {\n        if self\n            .fail_read_namespace\n            .lock()\n            .expect(\"fail namespace lock should not poison\")\n            .as_deref()\n            == Some(namespace)\n        {\n            return Err(LixError::new(\n                \"LIX_ERROR_UNKNOWN\",\n                format!(\"forced read failure for namespace {namespace}\"),\n            ));\n        }\n        Ok(())\n    }\n\n    fn scan_visible_entries(\n        &self,\n        request: BackendKvScanRequest,\n    ) -> 
Result<BackendKvEntryPage, LixError> {\n        let mut visible = self\n            .data\n            .lock()\n            .expect(\"recording backend lock poisoned\")\n            .clone();\n        for (key, value) in &self.pending {\n            match value {\n                Some(value) => {\n                    visible.insert(key.clone(), value.clone());\n                }\n                None => {\n                    visible.remove(key);\n                }\n            }\n        }\n        Ok(scan_map(&visible, &request))\n    }\n}\n\nfn scan_map(map: &KvMap, request: &BackendKvScanRequest) -> BackendKvEntryPage {\n    let mut pairs = map\n        .iter()\n        .filter_map(|((entry_namespace, key), value)| {\n            if entry_namespace != &request.namespace || !key_in_range(key, &request.range) {\n                return None;\n            }\n            if request\n                .after\n                .as_deref()\n                .is_some_and(|after| key.as_slice() <= after)\n            {\n                return None;\n            }\n            Some((key.clone(), value.clone()))\n        })\n        .collect::<Vec<_>>();\n    pairs.sort_by(|left, right| left.0.cmp(&right.0));\n    let has_more = pairs.len() > request.limit;\n    pairs.truncate(request.limit);\n    let resume_after = has_more\n        .then(|| pairs.last().map(|(key, _)| key.clone()))\n        .flatten();\n    let mut keys = BytePageBuilder::with_capacity(pairs.len(), 0);\n    let mut values = BytePageBuilder::with_capacity(pairs.len(), 0);\n    for (key, value) in pairs {\n        keys.push(key);\n        values.push(value);\n    }\n    BackendKvEntryPage {\n        keys: keys.finish(),\n        values: values.finish(),\n        resume_after,\n    }\n}\n\nfn key_in_range(key: &[u8], range: &BackendKvScanRange) -> bool {\n    match range {\n        BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix),\n        BackendKvScanRange::Range { start, end } => key >= 
start.as_slice() && key < end.as_slice(),\n    }\n}\n\n#[derive(Default)]\nstruct TransactionStats {\n    read_opened: AtomicUsize,\n    read_rolled_back: AtomicUsize,\n    write_opened: AtomicUsize,\n    write_committed: AtomicUsize,\n    write_rolled_back: AtomicUsize,\n}\n\nimpl TransactionStats {\n    fn snapshot(&self) -> TransactionStatsSnapshot {\n        TransactionStatsSnapshot {\n            read_opened: self.read_opened.load(Ordering::SeqCst),\n            read_rolled_back: self.read_rolled_back.load(Ordering::SeqCst),\n            write_opened: self.write_opened.load(Ordering::SeqCst),\n            write_committed: self.write_committed.load(Ordering::SeqCst),\n            write_rolled_back: self.write_rolled_back.load(Ordering::SeqCst),\n        }\n    }\n}\n\n#[derive(Clone, Copy)]\nstruct TransactionStatsSnapshot {\n    read_opened: usize,\n    read_rolled_back: usize,\n    write_opened: usize,\n    write_committed: usize,\n    write_rolled_back: usize,\n}\n\nimpl TransactionStatsSnapshot {\n    fn delta_since(self, before: &Self) -> Self {\n        Self {\n            read_opened: self.read_opened - before.read_opened,\n            read_rolled_back: self.read_rolled_back - before.read_rolled_back,\n            write_opened: self.write_opened - before.write_opened,\n            write_committed: self.write_committed - before.write_committed,\n            write_rolled_back: self.write_rolled_back - before.write_rolled_back,\n        }\n    }\n}\n"
  },
  {
    "path": "packages/engine/wit/lix-plugin.wit",
    "content": "package lix:plugin@0.1.0;\n\ninterface api {\n  type canonical-json = string;\n\n  /// Current materialized file payload. Plugins should treat this as an\n  /// implementation detail cache and must not rely on mutation order.\n  record file {\n    id: string,\n    path: string,\n    data: list<u8>,\n  }\n\n  /// Represents the latest visible row for an entity.\n  ///\n  /// `apply-changes` receives an unordered set of these rows for a single file.\n  /// Implementations must be order-independent and produce the same output for\n  /// any ordering of `changes`.\n  ///\n  /// Uniqueness: callers provide at most one row per\n  /// (`schema-key`, `entity-id`) for the same (`file.id`, version).\n  record entity-change {\n    entity-id: string,\n    schema-key: string,\n    /// Deterministically encoded JSON text.\n    snapshot-content: option<canonical-json>,\n  }\n\n  /// Optional active-state row payload passed to detect-changes when requested\n  /// by the plugin manifest. 
Omitted fields are represented as `none`.\n  ///\n  /// Scope is implicit and engine-defined: same plugin + same file + active rows.\n  record active-state-row {\n    entity-id: string,\n    schema-key: option<string>,\n    /// Deterministically encoded JSON text.\n    snapshot-content: option<canonical-json>,\n    file-id: option<string>,\n    plugin-key: option<string>,\n    version-id: option<string>,\n    change-id: option<string>,\n    /// Deterministically encoded JSON text.\n    metadata: option<canonical-json>,\n    created-at: option<string>,\n    updated-at: option<string>,\n  }\n\n  record detect-state-context {\n    active-state: option<list<active-state-row>>,\n  }\n\n  variant plugin-error {\n    invalid-input(string),\n    internal(string),\n  }\n\n  /// Computes row-level state transitions between two file payloads.\n  detect-changes: func(before: option<file>, after: file, state-context: option<detect-state-context>) -> result<list<entity-change>, plugin-error>;\n  /// Rebuilds file bytes from the unordered latest-state row set.\n  apply-changes: func(file: file, changes: list<entity-change>) -> result<list<u8>, plugin-error>;\n}\n\nworld plugin {\n  export api;\n}\n"
  },
  {
    "path": "packages/js-kysely/.gitignore",
    "content": "dist/\nnode_modules/\n"
  },
  {
    "path": "packages/js-kysely/package.json",
    "content": "{\n\t\"name\": \"@lix-js/kysely\",\n\t\"type\": \"module\",\n\t\"private\": true,\n\t\"version\": \"0.1.0\",\n\t\"description\": \"Compile-only Kysely query builder and Lix schema types for JS SDK v0.6 consumers\",\n\t\"license\": \"Apache-2.0\",\n\t\"main\": \"./src/index.ts\",\n\t\"types\": \"./src/index.ts\",\n\t\"exports\": {\n\t\t\".\": {\n\t\t\t\"types\": \"./src/index.ts\",\n\t\t\t\"default\": \"./src/index.ts\"\n\t\t}\n\t},\n\t\"scripts\": {\n\t\t\"build\": \"tsc -p tsconfig.json\",\n\t\t\"typecheck\": \"tsc -p tsconfig.json --noEmit\",\n\t\t\"test:types\": \"tsc -p tsconfig.type-tests.json --noEmit\",\n\t\t\"test\": \"pnpm --filter @lix-js/sdk build && vitest run\"\n\t},\n\t\"dependencies\": {\n\t\t\"json-schema-to-ts\": \"^3.1.1\",\n\t\t\"kysely\": \"^0.28.7\"\n\t},\n\t\"peerDependencies\": {\n\t\t\"@lix-js/sdk\": \"^0.6.0\"\n\t},\n\t\"devDependencies\": {\n\t\t\"@lix-js/sdk\": \"workspace:*\",\n\t\t\"typescript\": \"^5.5.4\",\n\t\t\"vitest\": \"^4.0.18\"\n\t}\n}\n"
  },
  {
    "path": "packages/js-kysely/src/create-lix-kysely.ts",
    "content": "import {\n\tKysely,\n\tSqliteAdapter,\n\tSqliteIntrospector,\n\tSqliteQueryCompiler,\n\ttype CompiledQuery,\n\ttype DatabaseConnection,\n\ttype Driver,\n\ttype QueryCompiler,\n\ttype QueryResult,\n} from \"kysely\";\nimport type { LixDatabaseSchema } from \"./schema.js\";\n\ntype LixQueryResult = {\n\trows?: unknown;\n\tcolumns?: unknown;\n\tstatements?: unknown;\n};\n\nexport type LixExecuteOptions = {\n\twriterKey?: string | null;\n};\n\ntype LixExecuteLike = {\n\texecute(\n\t\tsql: string,\n\t\tparams?: ReadonlyArray<unknown>,\n\t\toptions?: LixExecuteOptions,\n\t): Promise<LixQueryResult>;\n};\n\ntype LixDbLike = {\n\tdb: unknown;\n};\n\ntype LixLike = LixExecuteLike | LixDbLike;\nexport type CreateLixKyselyOptions = {\n\twriterKey?: string | null;\n};\n\nclass LixConnection implements DatabaseConnection {\n\treadonly #executeSql: (\n\t\tsql: string,\n\t\tparams?: ReadonlyArray<unknown>,\n\t) => Promise<LixQueryResult>;\n\n\tconstructor(\n\t\texecuteSql: (\n\t\t\tsql: string,\n\t\t\tparams?: ReadonlyArray<unknown>,\n\t\t) => Promise<LixQueryResult>,\n\t) {\n\t\tthis.#executeSql = executeSql;\n\t}\n\n\tasync executeQuery<R>(compiledQuery: CompiledQuery): Promise<QueryResult<R>> {\n\t\tconst raw = normalizeLixQueryResult(\n\t\t\tawait this.#executeSql(\n\t\t\t\tcompiledQuery.sql,\n\t\t\t\tcompiledQuery.parameters,\n\t\t\t),\n\t\t);\n\t\tconst decodedRows = decodeRows(raw.rows);\n\t\tconst columnNames =\n\t\t\tdecodeColumnNames(raw.columns) ??\n\t\t\t(await this.resolveColumnNames(compiledQuery.query));\n\t\tconst rows =\n\t\t\tcolumnNames &&\n\t\t\tdecodedRows.every((row) => row.length === columnNames.length)\n\t\t\t\t? decodedRows.map((row) => rowToObject(row, columnNames))\n\t\t\t\t: decodedRows;\n\n\t\tconst kind =\n\t\t\tcompiledQuery.query && typeof compiledQuery.query === \"object\"\n\t\t\t\t? 
(compiledQuery.query as { kind?: unknown }).kind\n\t\t\t\t: undefined;\n\n\t\tlet numAffectedRows: bigint | undefined;\n\t\tlet insertId: bigint | undefined;\n\t\tif (kind !== \"SelectQueryNode\") {\n\t\t\tnumAffectedRows = await this.readIntegerResult(\"SELECT changes()\");\n\t\t\tif (kind === \"InsertQueryNode\") {\n\t\t\t\tinsertId = await this.readIntegerResult(\"SELECT last_insert_rowid()\");\n\t\t\t}\n\t\t}\n\n\t\treturn {\n\t\t\trows: rows as R[],\n\t\t\tnumAffectedRows,\n\t\t\tinsertId,\n\t\t};\n\t}\n\n\tasync *streamQuery<R>(\n\t\tcompiledQuery: CompiledQuery,\n\t): AsyncIterableIterator<QueryResult<R>> {\n\t\tyield await this.executeQuery(compiledQuery);\n\t}\n\n\tasync readIntegerResult(sql: string): Promise<bigint | undefined> {\n\t\tconst raw = normalizeLixQueryResult(await this.#executeSql(sql, undefined));\n\t\tconst rows = decodeRows(raw.rows);\n\t\tif (!rows[0] || rows[0].length === 0) {\n\t\t\treturn undefined;\n\t\t}\n\t\treturn extractIntegerValue(rows[0][0]);\n\t}\n\n\tasync resolveColumnNames(queryNode: unknown): Promise<string[] | undefined> {\n\t\tif (!queryNode || typeof queryNode !== \"object\") {\n\t\t\treturn undefined;\n\t\t}\n\n\t\tconst query = queryNode as Record<string, unknown>;\n\t\tconst kind = typeof query.kind === \"string\" ? 
query.kind : \"\";\n\n\t\tif (kind === \"SelectQueryNode\") {\n\t\t\tconst selections = selectSelectionNodes(query);\n\t\t\tif (selections.length > 0) {\n\t\t\t\treturn selections.map(selectionNameFromNode);\n\t\t\t}\n\t\t\treturn undefined;\n\t\t}\n\n\t\tif (\n\t\t\tkind === \"InsertQueryNode\" ||\n\t\t\tkind === \"UpdateQueryNode\" ||\n\t\t\tkind === \"DeleteQueryNode\"\n\t\t) {\n\t\t\tconst returning = query.returning;\n\t\t\tif (returning && typeof returning === \"object\") {\n\t\t\t\tconst selections = selectSelectionNodes(\n\t\t\t\t\treturning as Record<string, unknown>,\n\t\t\t\t);\n\t\t\t\tif (selections.length > 0) {\n\t\t\t\t\treturn selections.map(selectionNameFromNode);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\treturn undefined;\n\t}\n}\n\nclass LixDriver implements Driver {\n\treadonly #lix: LixExecuteLike;\n\treadonly #connection: LixConnection;\n\treadonly #options?: LixExecuteOptions;\n\t#transactionSlotHeld = false;\n\t#transactionActive = false;\n\t#waiters: Array<() => void> = [];\n\n\tconstructor(lix: LixExecuteLike, options?: LixExecuteOptions) {\n\t\tthis.#lix = lix;\n\t\tthis.#options = options;\n\t\tthis.#connection = new LixConnection((sql, params) =>\n\t\t\tthis.#executeSql(sql, params),\n\t\t);\n\t}\n\n\tasync init(): Promise<void> {}\n\n\tasync acquireConnection(): Promise<DatabaseConnection> {\n\t\treturn this.#connection;\n\t}\n\n\tasync beginTransaction(): Promise<void> {\n\t\tawait this.#acquireTransactionSlot();\n\t\ttry {\n\t\t\tawait this.#executeSql(\"BEGIN\", undefined);\n\t\t\tthis.#transactionActive = true;\n\t\t} catch (error) {\n\t\t\tthis.#releaseTransactionSlot();\n\t\t\tthrow error;\n\t\t}\n\t}\n\n\tasync commitTransaction(): Promise<void> {\n\t\tif (!this.#transactionActive) {\n\t\t\tthrow new Error(\"commitTransaction called without active transaction\");\n\t\t}\n\t\ttry {\n\t\t\tawait this.#executeSql(\"COMMIT\", undefined);\n\t\t} finally {\n\t\t\tthis.#transactionActive = 
false;\n\t\t\tthis.#releaseTransactionSlot();\n\t\t}\n\t}\n\n\tasync rollbackTransaction(): Promise<void> {\n\t\tif (!this.#transactionActive) {\n\t\t\tthrow new Error(\"rollbackTransaction called without active transaction\");\n\t\t}\n\t\ttry {\n\t\t\tawait this.#executeSql(\"ROLLBACK\", undefined);\n\t\t} finally {\n\t\t\tthis.#transactionActive = false;\n\t\t\tthis.#releaseTransactionSlot();\n\t\t}\n\t}\n\n\tasync savepoint(\n\t\t_connection: DatabaseConnection,\n\t\t_savepointName: string,\n\t\t_compileQuery: QueryCompiler[\"compileQuery\"],\n\t): Promise<void> {\n\t\tthrow new Error(\n\t\t\t\"Nested transactions are not supported by createLixKysely() yet\",\n\t\t);\n\t}\n\n\tasync rollbackToSavepoint(\n\t\t_connection: DatabaseConnection,\n\t\t_savepointName: string,\n\t\t_compileQuery: QueryCompiler[\"compileQuery\"],\n\t): Promise<void> {\n\t\tthrow new Error(\n\t\t\t\"Nested transactions are not supported by createLixKysely() yet\",\n\t\t);\n\t}\n\n\tasync releaseSavepoint(\n\t\t_connection: DatabaseConnection,\n\t\t_savepointName: string,\n\t\t_compileQuery: QueryCompiler[\"compileQuery\"],\n\t): Promise<void> {\n\t\tthrow new Error(\n\t\t\t\"Nested transactions are not supported by createLixKysely() yet\",\n\t\t);\n\t}\n\n\tasync releaseConnection(): Promise<void> {}\n\n\tasync destroy(): Promise<void> {}\n\n\tasync #executeSql(\n\t\tsql: string,\n\t\tparams?: ReadonlyArray<unknown>,\n\t): Promise<LixQueryResult> {\n\t\treturn this.#lix.execute(sql, params, this.#options);\n\t}\n\n\tasync #acquireTransactionSlot(): Promise<void> {\n\t\twhile (this.#transactionSlotHeld) {\n\t\t\tawait new Promise<void>((resolve) => this.#waiters.push(resolve));\n\t\t}\n\t\tthis.#transactionSlotHeld = true;\n\t}\n\n\t#releaseTransactionSlot(): void {\n\t\tthis.#transactionSlotHeld = false;\n\t\tconst waiter = this.#waiters.shift();\n\t\tif (waiter) {\n\t\t\twaiter();\n\t\t}\n\t}\n}\n\nclass LixQueryCompiler extends SqliteQueryCompiler {\n\tprotected override 
getLeftIdentifierWrapper(): string {\n\t\treturn \"\";\n\t}\n\n\tprotected override getRightIdentifierWrapper(): string {\n\t\treturn \"\";\n\t}\n}\n\nconst cache = new WeakMap<object, Map<string, Kysely<LixDatabaseSchema>>>();\n\nexport function createLixKysely(\n\tlix: LixLike,\n\toptions: CreateLixKyselyOptions = {},\n): Kysely<LixDatabaseSchema> {\n\tconst writerKey = normalizeWriterKey(options.writerKey);\n\tconst cacheKey = writerKeyCacheKey(writerKey);\n\tif (isLixDbLike(lix)) {\n\t\tif (writerKey !== undefined) {\n\t\t\tthrow new TypeError(\n\t\t\t\t\"createLixKysely writerKey option requires lix.execute(sql, params, options)\",\n\t\t\t);\n\t\t}\n\t\treturn lix.db as Kysely<LixDatabaseSchema>;\n\t}\n\tif (!isLixExecuteLike(lix)) {\n\t\tthrow new TypeError(\n\t\t\t\"createLixKysely requires either lix.execute(sql, params) or lix.db\",\n\t\t);\n\t}\n\n\tconst entry = cache.get(lix as object);\n\tconst cached = entry?.get(cacheKey);\n\tif (cached) {\n\t\treturn cached;\n\t}\n\n\tconst dialect = {\n\t\tcreateAdapter: () => new SqliteAdapter(),\n\t\tcreateDriver: () => new LixDriver(lix, { writerKey }),\n\t\tcreateIntrospector: (db: Kysely<any>) => new SqliteIntrospector(db),\n\t\tcreateQueryCompiler: () => new LixQueryCompiler(),\n\t};\n\n\tconst db = new Kysely<LixDatabaseSchema>({ dialect });\n\tif (entry) {\n\t\tentry.set(cacheKey, db);\n\t} else {\n\t\tcache.set(lix as object, new Map([[cacheKey, db]]));\n\t}\n\treturn db;\n}\n\nfunction isLixExecuteLike(value: unknown): value is LixExecuteLike {\n\tif (!value || typeof value !== \"object\") {\n\t\treturn false;\n\t}\n\treturn typeof (value as { execute?: unknown }).execute === \"function\";\n}\n\nfunction normalizeWriterKey(value: unknown): string | null | undefined {\n\tif (value === undefined) {\n\t\treturn undefined;\n\t}\n\tif (value === null) {\n\t\treturn null;\n\t}\n\tif (typeof value === \"string\") {\n\t\treturn value;\n\t}\n\tthrow new TypeError(\"createLixKysely writerKey must be a string or 
null\");\n}\n\nfunction writerKeyCacheKey(writerKey: string | null | undefined): string {\n\tif (writerKey === undefined) {\n\t\treturn \"__default__\";\n\t}\n\tif (writerKey === null) {\n\t\treturn \"__null__\";\n\t}\n\treturn `writer:${writerKey}`;\n}\n\nfunction isLixDbLike(value: unknown): value is LixDbLike {\n\tif (!value || typeof value !== \"object\") {\n\t\treturn false;\n\t}\n\treturn (\n\t\t\"db\" in (value as object) &&\n\t\tBoolean((value as { db?: unknown }).db) &&\n\t\ttypeof (value as { db?: unknown }).db === \"object\"\n\t);\n}\n\nfunction decodeRows(rawRows: unknown): unknown[][] {\n\tif (!Array.isArray(rawRows)) {\n\t\treturn [];\n\t}\n\treturn rawRows.map((row) => {\n\t\tif (!Array.isArray(row)) {\n\t\t\treturn [];\n\t\t}\n\t\treturn [...row];\n\t});\n}\n\nfunction normalizeLixQueryResult(raw: LixQueryResult): {\n\trows?: unknown;\n\tcolumns?: unknown;\n} {\n\tif (Array.isArray(raw.statements)) {\n\t\tconst [statement] = raw.statements;\n\t\tif (statement && typeof statement === \"object\") {\n\t\t\tconst candidate = statement as { rows?: unknown; columns?: unknown };\n\t\t\treturn {\n\t\t\t\trows: candidate.rows,\n\t\t\t\tcolumns: candidate.columns,\n\t\t\t};\n\t\t}\n\t}\n\treturn raw;\n}\n\nfunction decodeColumnNames(rawColumns: unknown): string[] | undefined {\n\tif (!Array.isArray(rawColumns)) {\n\t\treturn undefined;\n\t}\n\n\tconst names = rawColumns.filter(\n\t\t(value): value is string => typeof value === \"string\",\n\t);\n\n\treturn names.length > 0 ? 
names : undefined;\n}\n\nfunction extractIntegerValue(value: unknown): bigint | undefined {\n\tif (typeof value === \"number\" && Number.isInteger(value)) {\n\t\treturn BigInt(value);\n\t}\n\tif (typeof value === \"bigint\") {\n\t\treturn value;\n\t}\n\tif (typeof value === \"string\" && /^-?\\d+$/.test(value)) {\n\t\treturn BigInt(value);\n\t}\n\treturn undefined;\n}\n\nfunction rowToObject(\n\trow: unknown[],\n\tcolumns: string[],\n): Record<string, unknown> {\n\tconst out: Record<string, unknown> = {};\n\tfor (let i = 0; i < columns.length; i++) {\n\t\tconst column = columns[i];\n\t\tif (!column) {\n\t\t\tcontinue;\n\t\t}\n\t\tout[column] = row[i];\n\t}\n\treturn out;\n}\n\nfunction selectSelectionNodes(\n\tnode: Record<string, unknown>,\n): Record<string, unknown>[] {\n\tconst selections = node.selections;\n\tif (!Array.isArray(selections)) {\n\t\treturn [];\n\t}\n\treturn selections.filter(\n\t\t(selection): selection is Record<string, unknown> =>\n\t\t\tBoolean(selection) && typeof selection === \"object\",\n\t);\n}\n\nfunction selectTableNames(node: Record<string, unknown>): string[] {\n\tconst from = node.from;\n\tif (!from || typeof from !== \"object\") {\n\t\treturn [];\n\t}\n\tconst froms = (from as Record<string, unknown>).froms;\n\tif (!Array.isArray(froms)) {\n\t\treturn [];\n\t}\n\tconst names: string[] = [];\n\n\tfor (const fromNode of froms) {\n\t\tif (!fromNode || typeof fromNode !== \"object\") {\n\t\t\tcontinue;\n\t\t}\n\t\tconst table = (fromNode as Record<string, unknown>).table;\n\t\tconst name = identifierNameFromTableNode(table);\n\t\tif (name) {\n\t\t\tnames.push(name);\n\t\t}\n\t}\n\n\treturn names;\n}\n\nfunction selectionNameFromNode(selectionNode: Record<string, unknown>): string {\n\tconst selection = selectionNode.selection;\n\tif (!selection || typeof selection !== \"object\") {\n\t\treturn \"column\";\n\t}\n\treturn (\n\t\tidentifierNameFromSelection(selection as Record<string, unknown>) ??\n\t\t\"column\"\n\t);\n}\n\nfunction 
identifierNameFromSelection(\n\tnode: Record<string, unknown>,\n): string | undefined {\n\tconst kind = typeof node.kind === \"string\" ? node.kind : \"\";\n\tif (kind === \"AliasNode\") {\n\t\tconst alias = node.alias;\n\t\tconst aliasName = identifierName(alias);\n\t\tif (aliasName) return aliasName;\n\t}\n\n\tif (kind === \"ReferenceNode\") {\n\t\tconst column = node.column;\n\t\tif (!column || typeof column !== \"object\") {\n\t\t\treturn undefined;\n\t\t}\n\t\tconst nested = (column as Record<string, unknown>).column;\n\t\tconst name = identifierName(nested);\n\t\tif (name) return name;\n\t}\n\n\tif (kind === \"ColumnNode\") {\n\t\tconst name = identifierName(node.column);\n\t\tif (name) return name;\n\t}\n\n\tif (kind === \"IdentifierNode\") {\n\t\tconst name = identifierName(node);\n\t\tif (name) return name;\n\t}\n\n\treturn undefined;\n}\n\nfunction identifierNameFromTableNode(node: unknown): string | undefined {\n\tif (!node || typeof node !== \"object\") {\n\t\treturn undefined;\n\t}\n\tconst tableNode = node as Record<string, unknown>;\n\tif (tableNode.kind === \"SchemableIdentifierNode\") {\n\t\treturn identifierName(tableNode.identifier);\n\t}\n\treturn undefined;\n}\n\nfunction identifierName(node: unknown): string | undefined {\n\tif (!node || typeof node !== \"object\") {\n\t\treturn undefined;\n\t}\n\tconst name = (node as Record<string, unknown>).name;\n\treturn typeof name === \"string\" ? name : undefined;\n}\n"
  },
  {
    "path": "packages/js-kysely/src/eb-entity.ts",
    "content": "import type { ExpressionBuilder, ExpressionWrapper, SqlBool } from \"kysely\";\nimport type { LixDatabaseSchema } from \"./schema.js\";\n\ntype LixEntityId = string[];\n\ntype LixEntityCanonical = {\n\tschema_key: string;\n\tfile_id: string | null;\n\tentity_id: LixEntityId;\n};\n\ntype LixEntity = {\n\tlixcol_schema_key: string;\n\tlixcol_file_id: string | null;\n\tlixcol_entity_id: LixEntityId;\n};\n\nconst CANONICAL_TABLES = [\n\t\"lix_state\",\n\t\"lix_state_by_version\",\n] as const;\n\nexport function ebEntity<\n\tTB extends keyof LixDatabaseSchema = keyof LixDatabaseSchema,\n>(entityType?: TB) {\n\tconst isCanonicalTable = entityType\n\t\t? CANONICAL_TABLES.includes(entityType as any)\n\t\t: undefined;\n\n\tconst detectColumnType = (\n\t\tentity: LixEntity | LixEntityCanonical,\n\t): boolean => {\n\t\treturn (\n\t\t\t\"entity_id\" in entity && \"schema_key\" in entity && \"file_id\" in entity\n\t\t);\n\t};\n\n\tconst getColumnNames = (entity?: LixEntity | LixEntityCanonical) => {\n\t\tif (entityType !== undefined) {\n\t\t\treturn {\n\t\t\t\tentityIdCol: isCanonicalTable ? \"entity_id\" : \"lixcol_entity_id\",\n\t\t\t\tschemaKeyCol: isCanonicalTable ? \"schema_key\" : \"lixcol_schema_key\",\n\t\t\t\tfileIdCol: isCanonicalTable ? \"file_id\" : \"lixcol_file_id\",\n\t\t\t};\n\t\t}\n\n\t\tif (entity) {\n\t\t\tconst useCanonical = detectColumnType(entity);\n\t\t\treturn {\n\t\t\t\tentityIdCol: useCanonical ? \"entity_id\" : \"lixcol_entity_id\",\n\t\t\t\tschemaKeyCol: useCanonical ? \"schema_key\" : \"lixcol_schema_key\",\n\t\t\t\tfileIdCol: useCanonical ? 
\"file_id\" : \"lixcol_file_id\",\n\t\t\t};\n\t\t}\n\n\t\treturn {\n\t\t\tentityIdCol: \"lixcol_entity_id\",\n\t\t\tschemaKeyCol: \"lixcol_schema_key\",\n\t\t\tfileIdCol: \"lixcol_file_id\",\n\t\t};\n\t};\n\n\tconst getColumnRefs = (entity?: LixEntity | LixEntityCanonical) => {\n\t\tconst { entityIdCol, schemaKeyCol, fileIdCol } = getColumnNames(entity);\n\t\treturn {\n\t\t\tentityIdRef: entityType ? `${entityType}.${entityIdCol}` : entityIdCol,\n\t\t\tschemaKeyRef: entityType ? `${entityType}.${schemaKeyCol}` : schemaKeyCol,\n\t\t\tfileIdRef: entityType ? `${entityType}.${fileIdCol}` : fileIdCol,\n\t\t};\n\t};\n\n\tconst getTargetValues = (entity: LixEntity | LixEntityCanonical) => {\n\t\treturn {\n\t\t\ttargetEntityId:\n\t\t\t\t\"entity_id\" in entity ? entity.entity_id : entity.lixcol_entity_id,\n\t\t\ttargetSchemaKey:\n\t\t\t\t\"schema_key\" in entity ? entity.schema_key : entity.lixcol_schema_key,\n\t\t\ttargetFileId: \"file_id\" in entity ? entity.file_id : entity.lixcol_file_id,\n\t\t};\n\t};\n\n\tconst equalsExpression = (\n\t\teb: ExpressionBuilder<LixDatabaseSchema, TB>,\n\t\tentity: LixEntity | LixEntityCanonical,\n\t): ExpressionWrapper<LixDatabaseSchema, TB, SqlBool> => {\n\t\tconst { targetEntityId, targetSchemaKey, targetFileId } =\n\t\t\tgetTargetValues(entity);\n\t\tconst { entityIdRef, schemaKeyRef, fileIdRef } = getColumnRefs(entity);\n\t\treturn eb.and([\n\t\t\teb(eb.ref(entityIdRef as any), \"=\", targetEntityId),\n\t\t\teb(eb.ref(schemaKeyRef as any), \"=\", targetSchemaKey),\n\t\t\ttargetFileId === null\n\t\t\t\t? 
eb(eb.ref(fileIdRef as any), \"is\", null)\n\t\t\t\t: eb(eb.ref(fileIdRef as any), \"=\", targetFileId),\n\t\t]);\n\t};\n\n\treturn {\n\t\thasLabel(\n\t\t\tlabel: { id: string; name?: string } | { name: string; id?: string },\n\t\t) {\n\t\t\treturn (\n\t\t\t\teb: ExpressionBuilder<LixDatabaseSchema, TB>,\n\t\t\t): ExpressionWrapper<LixDatabaseSchema, TB, SqlBool> => {\n\t\t\t\tconst { entityIdRef, schemaKeyRef, fileIdRef } = getColumnRefs();\n\t\t\t\tconst labelQuery = eb\n\t\t\t\t\t.selectFrom(\"lix_label_assignment\" as any)\n\t\t\t\t\t.innerJoin(\n\t\t\t\t\t\t\"lix_label\" as any,\n\t\t\t\t\t\t\"lix_label.id\" as any,\n\t\t\t\t\t\t\"lix_label_assignment.label_id\" as any,\n\t\t\t\t\t) as any;\n\t\t\t\treturn eb.exists(\n\t\t\t\t\tlabelQuery\n\t\t\t\t\t\t.select(\"lix_label_assignment.target_entity_id\" as any)\n\t\t\t\t\t\t.whereRef(\n\t\t\t\t\t\t\t\"lix_label_assignment.target_entity_id\" as any,\n\t\t\t\t\t\t\t\"=\",\n\t\t\t\t\t\t\tentityIdRef as any,\n\t\t\t\t\t\t)\n\t\t\t\t\t\t.whereRef(\n\t\t\t\t\t\t\t\"lix_label_assignment.target_schema_key\" as any,\n\t\t\t\t\t\t\t\"=\",\n\t\t\t\t\t\t\tschemaKeyRef as any,\n\t\t\t\t\t\t)\n\t\t\t\t\t\t.whereRef(\n\t\t\t\t\t\t\t\"lix_label_assignment.target_file_id\" as any,\n\t\t\t\t\t\t\t\"is\",\n\t\t\t\t\t\t\tfileIdRef as any,\n\t\t\t\t\t\t)\n\t\t\t\t\t\t.$if(\"name\" in label, (qb: any) =>\n\t\t\t\t\t\t\tqb.where(\"lix_label.name\", \"=\", label.name!),\n\t\t\t\t\t\t)\n\t\t\t\t\t\t.$if(\"id\" in label, (qb: any) =>\n\t\t\t\t\t\t\tqb.where(\"lix_label.id\", \"=\", label.id!),\n\t\t\t\t\t\t),\n\t\t\t\t);\n\t\t\t};\n\t\t},\n\t\tequals(entity: LixEntity | LixEntityCanonical) {\n\t\t\treturn (\n\t\t\t\teb: ExpressionBuilder<LixDatabaseSchema, TB>,\n\t\t\t): ExpressionWrapper<LixDatabaseSchema, TB, SqlBool> => {\n\t\t\t\treturn equalsExpression(eb, entity);\n\t\t\t};\n\t\t},\n\t\tin(entities: Array<LixEntityCanonical | LixEntity>) {\n\t\t\treturn (\n\t\t\t\teb: ExpressionBuilder<LixDatabaseSchema, TB>,\n\t\t\t): 
ExpressionWrapper<LixDatabaseSchema, TB, SqlBool> => {\n\t\t\t\tif (entities.length === 0) {\n\t\t\t\t\treturn eb.val(false);\n\t\t\t\t}\n\n\t\t\t\treturn eb.or(entities.map((entity) => equalsExpression(eb, entity)));\n\t\t\t};\n\t\t},\n\t};\n}\n"
  },
  {
    "path": "packages/js-kysely/src/index.ts",
    "content": "export { qb } from \"./qb.js\";\nexport { ebEntity } from \"./eb-entity.js\";\nexport type { LixDatabaseSchema } from \"./schema.js\";\nexport type {\n\tCreateLixKyselyOptions,\n\tLixExecuteOptions,\n} from \"./create-lix-kysely.js\";\nexport { sql } from \"kysely\";\nexport { jsonArrayFrom, jsonObjectFrom } from \"kysely/helpers/sqlite\";\n"
  },
  {
    "path": "packages/js-kysely/src/qb.test-d.ts",
    "content": "import type { Insertable, Selectable } from \"kysely\";\nimport { ebEntity, qb } from \"./index.js\";\nimport type { LixDatabaseSchema } from \"./schema.js\";\n\ntype Equal<A, B> =\n\t(<T>() => T extends A ? 1 : 2) extends <T>() => T extends B ? 1 : 2\n\t\t? true\n\t\t: false;\n\ntype Expect<T extends true> = T;\n\ntype FileRow = Selectable<LixDatabaseSchema[\"lix_file\"]>;\ntype _FilePathIsString = Expect<Equal<FileRow[\"path\"], string>>;\nconst fileHiddenBoolean: FileRow[\"hidden\"] = true;\nconst fileHiddenUndefined: FileRow[\"hidden\"] = undefined;\n// @ts-expect-error wrong hidden type\nconst fileHiddenString: FileRow[\"hidden\"] = \"true\";\nvoid fileHiddenBoolean;\nvoid fileHiddenUndefined;\nvoid fileHiddenString;\n\ntype KeyValueByVersionInsert = Insertable<\n\tLixDatabaseSchema[\"lix_key_value_by_version\"]\n>;\n\ntype _InsertHasKey = Expect<Equal<KeyValueByVersionInsert[\"key\"], string>>;\n\nconst db = qb({\n\texecute: async () => ({ rows: [] }),\n});\n\nconst dbWithWriter = qb(\n\t{\n\t\texecute: async () => ({ rows: [] }),\n\t},\n\t{ writerKey: \"writer-a\" },\n);\ndbWithWriter.selectFrom(\"lix_file\").select(\"id\").compile();\n\ndb.selectFrom(\"lix_file\").select([\"id\", \"path\", \"hidden\"]).compile();\ndb.selectFrom(\"lix_directory\").select([\"id\", \"path\"]).compile();\ndb.selectFrom(\"lix_key_value_by_version\")\n\t.select([\"key\", \"value\", \"lixcol_version_id\"])\n\t.compile();\n\ndb.selectFrom(\"lix_commit\")\n\t.where(ebEntity(\"lix_commit\").hasLabel({ name: \"checkpoint\" }))\n\t.select(\"id\")\n\t.compile();\n\ndb.insertInto(\"lix_key_value_by_version\")\n\t.values({\n\t\tkey: \"flashtype_active_file_id\",\n\t\tvalue: \"file-1\",\n\t\tlixcol_version_id: \"global\",\n\t\tlixcol_untracked: true,\n\t})\n\t.compile();\n\ndb.updateTable(\"lix_key_value_by_version\")\n\t.set({ value: \"file-2\" })\n\t.where(\"key\", \"=\", 
\"flashtype_active_file_id\")\n\t.compile();\n\ndb.deleteFrom(\"lix_key_value_by_version\")\n\t.where(\"key\", \"=\", \"flashtype_active_file_id\")\n\t.compile();\n\nconst withDb = qb({ db });\nwithDb.selectFrom(\"lix_file\").select(\"id\");\n\n// @ts-expect-error unknown table\ndb.selectFrom(\"not_a_table\").selectAll().compile();\n\n// @ts-expect-error unknown column\ndb.selectFrom(\"lix_file\").select([\"not_a_column\"]).compile();\n\nconst badInsert: Insertable<LixDatabaseSchema[\"lix_key_value_by_version\"]> = {\n\tkey: \"x\",\n\tvalue: \"y\",\n\tlixcol_untracked: true,\n};\nvoid badInsert;\n"
  },
  {
    "path": "packages/js-kysely/src/qb.ts",
    "content": "import { createLixKysely } from \"./create-lix-kysely.js\";\nimport type { CreateLixKyselyOptions } from \"./create-lix-kysely.js\";\n\ntype QbInput = Parameters<typeof createLixKysely>[0];\ntype QbOptions = CreateLixKyselyOptions;\n\n/**\n * Kysely entrypoint for Lix.\n *\n * Usage:\n * await qb(lix).selectFrom(\"lix_file\").selectAll().execute()\n */\nexport const qb = (lix: QbInput, options?: QbOptions) =>\n\tcreateLixKysely(lix, options);\n"
  },
  {
    "path": "packages/js-kysely/src/schema.ts",
    "content": "import {\n\tLixAccountSchema,\n\tLixActiveAccountSchema,\n\tLixChangeAuthorSchema,\n\tLixChangeSchema,\n\tLixChangeSetElementSchema,\n\tLixChangeSetSchema,\n\tLixCommitEdgeSchema,\n\tLixCommitSchema,\n\tLixDirectoryDescriptorSchema,\n\tLixFileDescriptorSchema,\n\tLixKeyValueSchema,\n\tLixLabelAssignmentSchema,\n\tLixLabelSchema,\n\tLixRegisteredSchemaSchema,\n\tLixVersionDescriptorSchema,\n} from \"@lix-js/sdk\";\nimport type { JsonValue as LixJsonValue } from \"@lix-js/sdk\";\nimport type { Generated } from \"kysely\";\nimport type { FromSchema, JSONSchema } from \"json-schema-to-ts\";\n\ntype LixPropertySchema = JSONSchema & {\n\t\"x-lix-default\"?: string;\n};\n\ntype LixSchemaDefinition = JSONSchema & {\n\ttype: \"object\";\n\tadditionalProperties: false;\n\tproperties?: Record<string, LixPropertySchema>;\n};\n\ntype LixJsonObject = { [key: string]: LixJsonValue };\ntype LixEntityId = string[];\n\nexport type LixGenerated<T> = T & {\n\treadonly __lixGenerated?: true;\n};\n\ntype IsLixGenerated<T> = T extends { readonly __lixGenerated?: true }\n\t? true\n\t: false;\n\ntype ExtractFromGenerated<T> = T extends LixGenerated<infer U> ? U : T;\n\ntype IsNever<T> = [T] extends [never] ? true : false;\ntype IsAny<T> = 0 extends 1 & T ? true : false;\n\ntype TransformEmptyObject<T> =\n\tIsAny<T> extends true\n\t\t? any\n\t\t: IsNever<T> extends true\n\t\t\t? never\n\t\t\t: T extends object\n\t\t\t\t? keyof T extends never\n\t\t\t\t\t? LixJsonObject\n\t\t\t\t\t: T\n\t\t\t\t: T;\n\ntype IsEmptyObjectSchema<P> = P extends { type: \"object\" }\n\t? P extends { properties: any }\n\t\t? false\n\t\t: true\n\t: false;\n\ntype GetNullablePart<P> = P extends { nullable: true } ? null : never;\n\ntype PropertyHasDefault<P> = P extends { \"x-lix-default\": any }\n\t? true\n\t: P extends { default: any }\n\t\t? true\n\t\t: false;\n\ntype ApplyLixGenerated<TSchema extends LixSchemaDefinition> = TSchema extends {\n\tproperties: infer Props;\n}\n\t? 
{\n\t\t\t[K in keyof FromSchema<TSchema>]: K extends keyof Props\n\t\t\t\t? PropertyHasDefault<Props[K]> extends true\n\t\t\t\t\t? LixGenerated<TransformEmptyObject<FromSchema<TSchema>[K]>>\n\t\t\t\t\t: IsEmptyObjectSchema<Props[K]> extends true\n\t\t\t\t\t\t? LixJsonObject | GetNullablePart<Props[K]>\n\t\t\t\t\t\t: TransformEmptyObject<FromSchema<TSchema>[K]>\n\t\t\t\t: TransformEmptyObject<FromSchema<TSchema>[K]>;\n\t\t}\n\t: never;\n\nexport type FromLixSchemaDefinition<T extends LixSchemaDefinition> =\n\tApplyLixGenerated<T>;\n\ntype ToKysely<T> = {\n\t[K in keyof T]: IsLixGenerated<T[K]> extends true\n\t\t? Generated<ExtractFromGenerated<T[K]>>\n\t\t: T[K];\n};\n\ntype EntityStateColumns = {\n\tlixcol_entity_id: LixGenerated<LixEntityId>;\n\tlixcol_schema_key: LixGenerated<string>;\n\tlixcol_file_id: LixGenerated<string | null>;\n\tlixcol_plugin_key: LixGenerated<string>;\n\tlixcol_inherited_from_version_id: LixGenerated<string | null>;\n\tlixcol_created_at: LixGenerated<string>;\n\tlixcol_updated_at: LixGenerated<string>;\n\tlixcol_change_id: LixGenerated<string>;\n\tlixcol_untracked: LixGenerated<boolean>;\n\tlixcol_commit_id: LixGenerated<string>;\n\tlixcol_writer_key: LixGenerated<string | null>;\n};\n\ntype EntityStateByVersionColumns = EntityStateColumns & {\n\tlixcol_version_id: LixGenerated<string>;\n\tlixcol_metadata: LixGenerated<LixJsonValue | null>;\n};\n\ntype EntityStateHistoryColumns = {\n\tlixcol_entity_id: LixGenerated<LixEntityId>;\n\tlixcol_schema_key: LixGenerated<string>;\n\tlixcol_file_id: LixGenerated<string | null>;\n\tlixcol_plugin_key: LixGenerated<string>;\n\tlixcol_change_id: LixGenerated<string>;\n\tlixcol_commit_id: LixGenerated<string>;\n\tlixcol_root_commit_id: LixGenerated<string>;\n\tlixcol_depth: LixGenerated<number>;\n\tlixcol_metadata: LixGenerated<LixJsonValue | null>;\n};\n\ntype EntityStateView<T> = T & EntityStateColumns;\ntype EntityStateByVersionView<T> = T & EntityStateByVersionColumns;\ntype 
EntityStateHistoryView<T> = T & EntityStateHistoryColumns;\n\ntype EntityViews<\n\tTSchema extends LixSchemaDefinition,\n\tTViewName extends string,\n\tTOverride = object,\n> = {\n\t[K in TViewName]: ToKysely<\n\t\tEntityStateView<FromLixSchemaDefinition<TSchema> & TOverride>\n\t>;\n} & {\n\t[K in `${TViewName}_by_version`]: ToKysely<\n\t\tEntityStateByVersionView<FromLixSchemaDefinition<TSchema> & TOverride>\n\t>;\n} & {\n\t[K in `${TViewName}_history`]: ToKysely<\n\t\tEntityStateHistoryView<FromLixSchemaDefinition<TSchema> & TOverride>\n\t>;\n};\n\ntype StateByVersionView = {\n\tentity_id: LixEntityId;\n\tschema_key: string;\n\tfile_id: string | null;\n\tplugin_key: string;\n\tsnapshot_content: LixJsonValue;\n\tversion_id: string;\n\tcreated_at: Generated<string>;\n\tupdated_at: Generated<string>;\n\tinherited_from_version_id: string | null;\n\tchange_id: Generated<string>;\n\tuntracked: Generated<boolean>;\n\tcommit_id: Generated<string>;\n\twriter_key: string | null;\n\tmetadata: Generated<LixJsonValue | null>;\n};\n\ntype StateView = Omit<StateByVersionView, \"version_id\">;\n\ntype StateWithTombstonesView = {\n\tentity_id: LixEntityId;\n\tschema_key: string;\n\tfile_id: string | null;\n\tplugin_key: string;\n\tsnapshot_content: LixJsonValue | null;\n\tversion_id: string;\n\tcreated_at: Generated<string>;\n\tupdated_at: Generated<string>;\n\tinherited_from_version_id: string | null;\n\tchange_id: Generated<string>;\n\tuntracked: Generated<boolean>;\n\tcommit_id: Generated<string>;\n\twriter_key: string | null;\n\tmetadata: Generated<LixJsonValue | null>;\n};\n\ntype StateHistoryView = {\n\tentity_id: LixEntityId;\n\tschema_key: string;\n\tfile_id: string | null;\n\tplugin_key: string;\n\tsnapshot_content: LixJsonValue;\n\tmetadata: LixJsonValue | null;\n\tchange_id: string;\n\tcommit_id: string;\n\troot_commit_id: string;\n\tdepth: number;\n};\n\ntype WorkingChangesView = {\n\tentity_id: LixEntityId;\n\tschema_key: string;\n\tfile_id: string | 
null;\n\tbefore_change_id: string | null;\n\tafter_change_id: string | null;\n\tbefore_commit_id: string | null;\n\tafter_commit_id: string | null;\n\tstatus: \"added\" | \"modified\" | \"removed\" | \"unchanged\";\n};\n\ntype LixActiveVersion = {\n\tversion_id: string;\n};\ntype LixKeyValue = FromLixSchemaDefinition<typeof LixKeyValueSchema> & {\n\tvalue: LixJsonValue;\n};\n\ntype ChangeView = ToKysely<\n\tFromLixSchemaDefinition<typeof LixChangeSchema> & {\n\t\tentity_id: LixEntityId;\n\t\tmetadata: LixJsonValue | null;\n\t\tsnapshot_content: LixJsonValue | null;\n\t}\n>;\n\ntype DirectoryDescriptorView = ToKysely<\n\tEntityStateView<\n\t\tFromLixSchemaDefinition<typeof LixDirectoryDescriptorSchema> & {\n\t\t\tpath: LixGenerated<string>;\n\t\t}\n\t>\n>;\n\ntype DirectoryDescriptorByVersionView = ToKysely<\n\tEntityStateByVersionView<\n\t\tFromLixSchemaDefinition<typeof LixDirectoryDescriptorSchema> & {\n\t\t\tpath: LixGenerated<string>;\n\t\t}\n\t>\n>;\n\ntype DirectoryDescriptorHistoryView = ToKysely<\n\tEntityStateHistoryView<\n\t\tFromLixSchemaDefinition<typeof LixDirectoryDescriptorSchema> & {\n\t\t\tpath: LixGenerated<string>;\n\t\t}\n\t>\n>;\n\nexport type LixDatabaseSchema = {\n\tlix_active_account: EntityViews<\n\t\ttypeof LixActiveAccountSchema,\n\t\t\"lix_active_account\"\n\t>[\"lix_active_account\"];\n\tlix_active_version: ToKysely<LixActiveVersion>;\n\n\tlix_state: StateView;\n\tlix_state_by_version: StateByVersionView;\n\tlix_state_history: StateHistoryView;\n\tlix_working_changes: WorkingChangesView;\n\n\tlix_change: ChangeView;\n\tlix_directory: DirectoryDescriptorView;\n\tlix_directory_by_version: DirectoryDescriptorByVersionView;\n\tlix_directory_history: DirectoryDescriptorHistoryView;\n} & EntityViews<\n\ttypeof LixKeyValueSchema,\n\t\"lix_key_value\",\n\t{ value: LixKeyValue[\"value\"] }\n\t> &\n\tEntityViews<typeof LixAccountSchema, \"lix_account\"> &\n\tEntityViews<typeof LixChangeSetSchema, \"lix_change_set\"> &\n\tEntityViews<\n\t\ttypeof 
LixChangeSetElementSchema,\n\t\t\"lix_change_set_element\",\n\t\t{ entity_id: LixEntityId }\n\t> &\n\tEntityViews<typeof LixChangeAuthorSchema, \"lix_change_author\"> &\n\tEntityViews<\n\t\ttypeof LixFileDescriptorSchema,\n\t\t\"lix_file\",\n\t\t{\n\t\t\tdata: Uint8Array;\n\t\t\tpath: LixGenerated<string>;\n\t\t\tdirectory_id: LixGenerated<string | null>;\n\t\t\tname: LixGenerated<string>;\n\t\t\textension: LixGenerated<string | null>;\n\t\t}\n\t> &\n\tEntityViews<typeof LixLabelSchema, \"lix_label\"> &\n\tEntityViews<\n\t\ttypeof LixLabelAssignmentSchema,\n\t\t\"lix_label_assignment\",\n\t\t{ target_entity_id: LixEntityId }\n\t> &\n\tEntityViews<\n\t\ttypeof LixRegisteredSchemaSchema,\n\t\t\"lix_registered_schema\",\n\t\t{ value: LixJsonValue }\n\t> &\n\tEntityViews<\n\t\ttypeof LixVersionDescriptorSchema,\n\t\t\"lix_version\",\n\t\t{ commit_id: LixGenerated<string>; working_commit_id: LixGenerated<string> }\n\t> &\n\tEntityViews<typeof LixCommitSchema, \"lix_commit\"> &\n\tEntityViews<typeof LixCommitEdgeSchema, \"lix_commit_edge\">;\n"
  },
  {
    "path": "packages/js-kysely/tests/eb-entity.test.ts",
    "content": "import { expect, test } from \"vitest\";\nimport { ebEntity, qb } from \"../src/index.js\";\n\nconst db = qb({\n\texecute: async () => ({ rows: [] }),\n});\n\ntest(\"hasLabel compiles to the label assignment state-address tuple for entity tables\", () => {\n\tconst compiled = db\n\t\t.selectFrom(\"lix_commit\")\n\t\t.where(ebEntity(\"lix_commit\").hasLabel({ name: \"checkpoint\" }))\n\t\t.select(\"id\")\n\t\t.compile();\n\n\texpect(compiled.sql).toContain(\"from lix_label_assignment\");\n\texpect(compiled.sql).toContain(\n\t\t\"lix_label_assignment.target_entity_id = lix_commit.lixcol_entity_id\",\n\t);\n\texpect(compiled.sql).toContain(\n\t\t\"lix_label_assignment.target_schema_key = lix_commit.lixcol_schema_key\",\n\t);\n\texpect(compiled.sql).toContain(\n\t\t\"lix_label_assignment.target_file_id is lix_commit.lixcol_file_id\",\n\t);\n\texpect(compiled.sql).toContain(\"lix_label.name = ?\");\n\texpect(compiled.sql).not.toContain(\"lix_entity_label\");\n\texpect(compiled.parameters).toEqual([\"checkpoint\"]);\n});\n\ntest(\"hasLabel compiles to the label assignment state-address tuple for canonical state tables\", () => {\n\tconst compiled = db\n\t\t.selectFrom(\"lix_state\")\n\t\t.where(ebEntity(\"lix_state\").hasLabel({ id: \"label-a\" }))\n\t\t.select(\"entity_id\")\n\t\t.compile();\n\n\texpect(compiled.sql).toContain(\n\t\t\"lix_label_assignment.target_entity_id = lix_state.entity_id\",\n\t);\n\texpect(compiled.sql).toContain(\n\t\t\"lix_label_assignment.target_schema_key = lix_state.schema_key\",\n\t);\n\texpect(compiled.sql).toContain(\n\t\t\"lix_label_assignment.target_file_id is lix_state.file_id\",\n\t);\n\texpect(compiled.sql).toContain(\"lix_label.id = ?\");\n\texpect(compiled.parameters).toEqual([\"label-a\"]);\n});\n"
  },
  {
    "path": "packages/js-kysely/tests/transaction.test.ts",
    "content": "import { afterEach, expect, test } from \"vitest\";\nimport { openLix, type Lix } from \"@lix-js/sdk\";\nimport { qb } from \"../src/index.js\";\n\nconst encoder = new TextEncoder();\nlet lix: Lix | undefined;\n\nafterEach(async () => {\n\tif (lix) {\n\t\tawait lix.close();\n\t\tlix = undefined;\n\t}\n});\n\ntest(\"qb(lix).transaction works with openLix()\", async () => {\n\tlix = await openLix();\n\n\tawait qb(lix)\n\t\t.transaction()\n\t\t.execute(async (trx) => {\n\t\t\tawait trx\n\t\t\t\t.insertInto(\"lix_file\")\n\t\t\t\t.values({\n\t\t\t\t\tpath: \"/tx-basic.md\",\n\t\t\t\t\tdata: encoder.encode(\"ok\"),\n\t\t\t\t})\n\t\t\t\t.execute();\n\t\t});\n\n\tconst row = await qb(lix)\n\t\t.selectFrom(\"lix_file\")\n\t\t.where(\"path\", \"=\", \"/tx-basic.md\")\n\t\t.select([\"path\"])\n\t\t.executeTakeFirst();\n\texpect(row?.path).toBe(\"/tx-basic.md\");\n});\n\ntest(\"qb(lix) serializes concurrent transactions on one Lix instance\", async () => {\n\tlix = await openLix();\n\tconst wait = (ms: number) =>\n\t\tnew Promise<void>((resolve) => setTimeout(resolve, ms));\n\n\tconst txA = qb(lix)\n\t\t.transaction()\n\t\t.execute(async (trx) => {\n\t\t\tawait trx\n\t\t\t\t.insertInto(\"lix_file\")\n\t\t\t\t.values({\n\t\t\t\t\tpath: \"/tx-concurrent-a.md\",\n\t\t\t\t\tdata: encoder.encode(\"A\"),\n\t\t\t\t})\n\t\t\t\t.execute();\n\t\t\tawait wait(30);\n\t\t});\n\n\tconst txB = qb(lix)\n\t\t.transaction()\n\t\t.execute(async (trx) => {\n\t\t\tawait trx\n\t\t\t\t.insertInto(\"lix_file\")\n\t\t\t\t.values({\n\t\t\t\t\tpath: \"/tx-concurrent-b.md\",\n\t\t\t\t\tdata: encoder.encode(\"B\"),\n\t\t\t\t})\n\t\t\t\t.execute();\n\t\t});\n\n\tawait Promise.all([txA, txB]);\n\n\tconst rows = await qb(lix)\n\t\t.selectFrom(\"lix_file\")\n\t\t.where(\"path\", \"in\", [\"/tx-concurrent-a.md\", \"/tx-concurrent-b.md\"])\n\t\t.select([\"path\"])\n\t\t.execute();\n\tconst paths = rows.map((row) => row.path).sort();\n\texpect(paths).toEqual([\"/tx-concurrent-a.md\", 
\"/tx-concurrent-b.md\"]);\n});\n"
  },
  {
    "path": "packages/js-kysely/tsconfig.json",
    "content": "{\n\t\"compilerOptions\": {\n\t\t\"target\": \"ES2022\",\n\t\t\"module\": \"NodeNext\",\n\t\t\"moduleResolution\": \"NodeNext\",\n\t\t\"strict\": true,\n\t\t\"declaration\": true,\n\t\t\"outDir\": \"dist\",\n\t\t\"skipLibCheck\": true\n\t},\n\t\"include\": [\"src\"]\n}\n"
  },
  {
    "path": "packages/js-kysely/tsconfig.type-tests.json",
    "content": "{\n\t\"extends\": \"./tsconfig.json\",\n\t\"compilerOptions\": {\n\t\t\"noEmit\": true\n\t},\n\t\"include\": [\"src\", \"tests\"]\n}\n"
  },
  {
    "path": "packages/js-kysely/vitest.config.ts",
    "content": "import { defineConfig } from \"vitest/config\";\n\nexport default defineConfig({\n\ttest: {\n\t\tenvironment: \"node\",\n\t\tinclude: [\"tests/**/*.test.ts\"],\n\t},\n});\n"
  },
  {
    "path": "packages/js-sdk/.gitignore",
    "content": "/dist\n/dist-engine-src\n/engine-src\n\n# wasm-bindgen generated engine outputs\n/src/engine-wasm/wasm\n/src/engine-wasm/engine-wasm-binary.js\n/src/engine-wasm/engine-wasm-binary.d.ts\n\n# legacy embedded wasm wrappers (generated artifacts)\n/src/backend/wasm-sqlite.wasm.ts\n/src/backend/wasm-sqlite.wasm.ts.d.ts\n\n# wasm modules and generated declarations\n*.wasm\n*.wasm.d.ts\n\n# generated from engine builtin schemas\n/src/generated/\n"
  },
  {
    "path": "packages/js-sdk/Cargo.toml",
    "content": "[package]\nname = \"lix_engine_wasm_bindgen\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n[lib]\npath = \"wasm-bindgen.rs\"\ncrate-type = [\"cdylib\"]\n\n[dependencies]\nlix_rs_sdk = { path = \"../rs-sdk\" }\nwasm-bindgen = \"0.2\"\nwasm-bindgen-futures = \"0.4\"\njs-sys = \"0.3\"\nasync-trait = \"0.1\"\ngetrandom = { version = \"0.3\", features = [\"wasm_js\"] }\nserde = \"1\"\nserde_json = \"1\"\nserde-wasm-bindgen = \"0.6\"\nbase64 = \"0.22\"\n"
  },
  {
    "path": "packages/js-sdk/README.md",
    "content": "# @lix-js/sdk\n\nWASM-backed JavaScript SDK for Lix.\n\n## Agent Guidance\n\nIf you are an AI coding agent using this package, read [`SKILL.md`](./SKILL.md) before building examples, demos, tests, or applications with `@lix-js/sdk`.\n\nThe skill documents the current preview API, recommended SQLite backend setup, schema registration flow, entity-table writes, version workflows, merge behavior, and known sharp edges.\n"
  },
  {
    "path": "packages/js-sdk/SKILL.md",
    "content": "---\nname: lix-js-sdk\ndescription: Use this skill when building examples, demos, tests, or applications with @lix-js/sdk: opening a Lix, registering schemas, writing entities through generated SQL tables, creating named versions, merging, and querying change history.\n---\n\n# Lix JS SDK Skill\n\n## What Is Lix\n\nLix is an embeddable version control system for structured application state. It gives apps named versions, merge, and an immutable SQL-queryable change journal without asking the app to build those systems from scratch.\n\nCurrent `@lix-js/sdk` capabilities:\n\n- Register JSON schemas as tracked entity tables.\n- Read and write entities through generated SQL tables.\n- Create named versions of state and write/read across versions.\n- Merge one version into the active version.\n- Query `lix_change` for history, audit, activity feeds, and undo-style features.\n- Store files as bytes with `lix_file` and version them like other entities.\n\nProduct direction:\n\n- Lix is designed to version files of any kind by parsing them into typed entities on write.\n- Parser plugins that turn file contents into app entities are not shipped through the JS SDK yet. Do not promise this behavior in demos. Today, `lix_file` versions bytes, while app entities are modeled directly through registered schemas.\n\nEvery row in every registered schema is a tracked entity. Merge granularity is currently per-entity, not per-field: two versions editing different rows merge cleanly; two versions editing the same row conflict, even if the fields are disjoint. Model collaborative domains as many small entities, such as sections, blocks, paragraphs, message keys, or line items.\n\nUse Lix vocabulary in user-facing copy. 
What Git calls a branch is called a **version** in Lix because that language makes sense to non-developers.\n\n## When To Use This Skill\n\nUse this skill when you need to write or debug consumer code using `@lix-js/sdk`:\n\n- Opening a persistent `.lix` file.\n- Registering schemas.\n- Writing and reading generated SQL entity tables.\n- Reading `execute()` results.\n- Creating, switching, previewing, and merging versions.\n- Querying history through `lix_change`.\n- Building app demos, examples, smoke tests, or product flows around the SDK.\n\nDo not use this skill for raw SQLite access, private engine/wasm internals, SDK publishing, SDK build pipelines, or unreleased file-parser plugin behavior.\n\n## Agent Quick Start\n\n1. Install `@lix-js/sdk` and `better-sqlite3`.\n2. Open with `createBetterSqlite3Backend({ path })`; do not open `.lix` with raw SQLite.\n3. Register a schema with `x-lix-key`, `x-lix-primary-key`, and `additionalProperties: false`.\n4. Write rows through the generated table named by `x-lix-key`.\n5. Use `<schema>_by_version` plus `lixcol_version_id` for side-by-side version reads/writes.\n6. Query `lix_change` for audit/history instead of hand-rolling audit tables.\n7. Wrap `mergeVersion()` in `try/catch` whenever conflicts are possible.\n\n## Core Rules\n\n- Use the public `@lix-js/sdk` API only.\n- Use `createBetterSqlite3Backend()` for persistent apps, demos, and tests.\n- Use numbered SQL placeholders: `$1`, `$2`, `$3`; bare `?` is rejected.\n- Use `lix_json($1)` when inserting JSON text into JSON-typed columns.\n- Use scalar SQL functions `SELECT lix_uuid_v7()` and `SELECT lix_timestamp()` when consumer code needs Lix-generated UUID v7 ids or ISO timestamps. 
Do not call them as table functions with `SELECT * FROM ...`.\n- Use stable, namespaced, lowercase schema keys like `acme_section`, not generic names like `task`.\n- Always include `x-lix-primary-key` and `additionalProperties: false` on app schemas.\n- Use version names from the user's vocabulary, such as `\"Marketing edit\"` or `\"Q3 pricing draft\"`.\n- Model concurrent-edit domains as collections of small rows because merge is per-row today.\n- Prefer `_by_version` tables for demos, sync, agent inspection, and side-by-side diffs.\n- Close handles in scripts and tests with `await lix.close()`.\n\n## Install And Open\n\n```sh\nnpm i @lix-js/sdk better-sqlite3\n```\n\n```ts\nimport { openLix } from \"@lix-js/sdk\";\nimport { createBetterSqlite3Backend } from \"@lix-js/sdk/sqlite\";\n\nconst lix = await openLix({\n  backend: createBetterSqlite3Backend({ path: \"/path/to/app.lix\" }),\n});\n```\n\n`better-sqlite3` is an optional peer dependency. Install it in projects that import `@lix-js/sdk/sqlite`.\n\n`openLix()` without a backend is in-memory and dies with the process. For anything that should persist, pass a real `.lix` path. Reopening the same path picks up existing state.\n\nFor tests and demos, use an isolated temp directory per run:\n\n```ts\nimport { mkdtempSync } from \"node:fs\";\nimport { tmpdir } from \"node:os\";\nimport path from \"node:path\";\nimport { openLix } from \"@lix-js/sdk\";\nimport { createBetterSqlite3Backend } from \"@lix-js/sdk/sqlite\";\n\nconst dir = mkdtempSync(path.join(tmpdir(), \"lix-\"));\nconst lix = await openLix({\n  backend: createBetterSqlite3Backend({ path: path.join(dir, \"demo.lix\") }),\n});\n```\n\nUse the version of this skill that ships with the installed `@lix-js/sdk` package. If behavior is unclear, inspect the installed package before guessing. 
The npm package bundles matching engine source under `node_modules/@lix-js/sdk/dist-engine-src/`.\n\nUseful installed-package references:\n\n- `dist-engine-src/src/sql2/entity_provider.rs` - registered schema SQL surfaces.\n- `dist-engine-src/src/sql2/change_provider.rs` - `lix_change` projection.\n- `dist-engine-src/src/sql2/version_provider.rs` - writable `lix_version` surface.\n- `dist-engine-src/src/transaction/validation.rs` - primary-key, unique, foreign-key, and shape validation.\n- `dist-engine-src/src/schema/definition.json` - Lix schema-definition meta-schema.\n- `dist-engine-src/src/schema/builtin/` - built-in entity table shapes.\n- `dist-engine-src/src/sql2/udfs/` - registered SQL functions.\n\nDo not import from `@lix-js/sdk/engine-wasm`, do not call private wasm helpers, and do not open the `.lix` SQLite file directly.\n\n## Minimal Entity Example\n\nThis is the smallest useful consumer pattern: open, register a schema, write a row, read it back, and close.\n\n```ts\nimport { mkdtempSync } from \"node:fs\";\nimport { tmpdir } from \"node:os\";\nimport path from \"node:path\";\nimport { openLix } from \"@lix-js/sdk\";\nimport { createBetterSqlite3Backend } from \"@lix-js/sdk/sqlite\";\n\nconst dir = mkdtempSync(path.join(tmpdir(), \"lix-\"));\nconst lix = await openLix({\n  backend: createBetterSqlite3Backend({ path: path.join(dir, \"demo.lix\") }),\n});\n\nawait lix.execute(\n  \"INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))\",\n  [\n    JSON.stringify({\n      $schema: \"https://json-schema.org/draft/2020-12/schema\",\n      \"x-lix-key\": \"acme_note\",\n      \"x-lix-primary-key\": [\"/id\"],\n      type: \"object\",\n      required: [\"id\", \"title\", \"done\"],\n      properties: {\n        id: { type: \"string\" },\n        title: { type: \"string\" },\n        done: { type: \"boolean\" },\n      },\n      additionalProperties: false,\n    }),\n  ],\n);\n\nawait lix.execute(\n  \"INSERT INTO acme_note (id, title, done) VALUES 
($1, $2, $3)\",\n  [\"n1\", \"Draft launch copy\", false],\n);\n\nconst result = await lix.execute(\n  \"SELECT title, done FROM acme_note WHERE id = $1\",\n  [\"n1\"],\n);\n\nconst row = result.rows[0]!;\nconsole.log(row.value(\"title\").asText(), row.value(\"done\").asBoolean());\n\nawait lix.close();\n```\n\n## Reading Results\n\n`lix.execute()` returns one shape for every statement:\n\n```ts\ntype ExecuteResult = {\n  columns: string[];\n  rows: Row[];\n  rowsAffected: number;\n  notices: LixNotice[];\n};\n```\n\nThere is no `result.kind`. `SELECT` fills `columns` and `rows`; `INSERT`, `UPDATE`, and `DELETE` usually return `rows: []` and set `rowsAffected`.\n\nEach row is a `Row` object. Use `row.value(\"column\")` or `row.valueAt(index)` to get a `Value`, then call typed accessors:\n\n```ts\nconst r = await lix.execute(\"SELECT id, title, done FROM acme_note\");\nfor (const row of r.rows) {\n  const id = row.value(\"id\").asText();\n  const title = row.value(\"title\").asText();\n  const done = row.value(\"done\").asBoolean();\n}\n```\n\n| Method        | Returns                   | Use for                                   |\n| ------------- | ------------------------- | ----------------------------------------- |\n| `asText()`    | `string \\| undefined`     | strings; note `asText`, not `asString`    |\n| `asBoolean()` | `boolean \\| undefined`    | booleans                                  |\n| `asInteger()` | `number \\| undefined`     | integer fields                            |\n| `asReal()`    | `number \\| undefined`     | decimal/real fields                       |\n| `asJson()`    | `JsonValue \\| undefined`  | objects and arrays                        |\n| `asBlob()`    | `Uint8Array \\| undefined` | binary data                               |\n\nAccessors return `undefined` when the cell kind does not match. Branch on `value.kind` if a column can hold multiple types. 
Public kind strings are `\"null\"`, `\"boolean\"`, `\"integer\"`, `\"real\"`, `\"text\"`, `\"json\"`, and `\"blob\"`.\n\n`Row` also has convenience methods when native JS values are enough: `get(name)`, `tryGet(name)`, `getAt(index)`, `toObject()`, and `toValueMap()`.\n\n## Registering Schemas\n\nRegister app schemas by inserting JSON into `lix_registered_schema.value`:\n\n```ts\nawait lix.execute(\n  \"INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))\",\n  [JSON.stringify(schema)],\n);\n```\n\nSchema basics:\n\n- `x-lix-key` becomes the generated SQL table name.\n- Compatible schema amendments are keyed by `x-lix-key`.\n- `x-lix-primary-key` tells Lix how to derive entity identity.\n- Primary-key entries are JSON Pointers with a leading slash, such as `[\"/id\"]` or `[\"/owner/email\"]`.\n- Use `additionalProperties: false` so accidental fields fail fast.\n\nWithout `x-lix-primary-key`, table-style INSERTs fail with an error like `requires lixcol_entity_id because the schema has no x-lix-primary-key`.\n\nUniqueness is not inferred from ordinary JSON Schema fields. If a non-primary-key field must be unique, declare it explicitly:\n\n```ts\nconst companyDomainSchema = {\n  \"x-lix-key\": \"crm_company_domain\",\n  \"x-lix-primary-key\": [\"/id\"],\n  \"x-lix-unique\": [[\"/domain\"]],\n  type: \"object\",\n  required: [\"id\", \"domain\"],\n  properties: {\n    id: { type: \"string\" },\n    domain: { type: \"string\" },\n  },\n  additionalProperties: false,\n};\n```\n\nDo not add generic `created_at` or `updated_at` fields by default. Lix already records lifecycle history through `lix_change` and `lixcol_*` metadata. 
Add timestamp fields only when they are domain data, such as `due_at`, `published_at`, or `occurred_at`.\n\nDiscover live schemas before guessing:\n\n```ts\nconst schemas = await lix.execute(\n  \"SELECT lixcol_entity_id, value FROM lix_registered_schema ORDER BY lixcol_entity_id\",\n);\n\nfor (const row of schemas.rows) {\n  const schema = row.get(\"value\") as { \"x-lix-key\"?: string };\n  console.log(schema[\"x-lix-key\"]);\n}\n```\n\n## Versions And `_by_version`\n\nCapture the initial active version id instead of hardcoding `\"main\"`:\n\n```ts\nconst published = await lix.activeVersionId();\n```\n\nCreate versions with names from the user's domain:\n\n```ts\nconst marketing = await lix.createVersion({ name: \"Marketing edit\" });\nconst legal = await lix.createVersion({ name: \"Legal review\" });\n```\n\nEvery registered schema `X` gets a sibling table `X_by_version` with `lixcol_version_id`. Use it for side-by-side reads and for writes to non-active versions.\n\n```ts\nawait lix.execute(\n  `UPDATE acme_note_by_version\n      SET title = $1\n    WHERE id = $2 AND lixcol_version_id = $3`,\n  [\"Sharper launch copy\", \"n1\", marketing.id],\n);\n\nconst sideBySide = await lix.execute(\n  `SELECT v.name, n.title\n     FROM acme_note_by_version n\n     JOIN lix_version v ON v.id = n.lixcol_version_id\n    WHERE n.id = $1\n      AND n.lixcol_version_id IN ($2, $3)\n    ORDER BY v.name`,\n  [\"n1\", published, marketing.id],\n);\n```\n\nRules for `_by_version`:\n\n- Reads filter by `lixcol_version_id`, or omit the filter to scan all versions.\n- INSERTs require `lixcol_version_id`.\n- UPDATEs and DELETEs must include `lixcol_version_id` in the WHERE clause.\n- The non-suffixed table is the active-version view.\n\n`switchVersion()` is for app code with a current working version concept. 
`mergeVersion()` always merges into the active version, so switch first if you need a different target.\n\n## Merging\n\n`mergeVersion()` merges the source version into the currently active version:\n\n```ts\ntry {\n  const merge = await lix.mergeVersion({ sourceVersionId: marketing.id });\n  console.log(merge.outcome, merge.changeStats.total);\n} catch (error) {\n  console.error(\"Merge conflict\", error);\n}\n```\n\nCommon outcomes:\n\n- `\"alreadyUpToDate\"` - source has no commits the target lacks.\n- `\"fastForward\"` - target advanced to source without a merge commit.\n- `\"mergeCommitted\"` - a new merge commit was created.\n\n`mergeVersionPreview()` reports the same merge decision without advancing refs, staging changes, or creating commits. Merge conflicts are returned as preview data.\n\nConflicts throw from `mergeVersion()`. If both versions modified the same entity since their merge base, Lix raises a `LixError`. Conflict detection is row-level today, not field-level. To reproduce a conflict in a demo, fork all contending versions from the same base before merging any of them.\n\n## Demo Pattern To Imitate\n\nFor richer demos, show these four things:\n\n1. Isolation: one SELECT against `<schema>_by_version` shows several versions side by side.\n2. Clean parallel merges: two reviewers edit different entities and both land.\n3. Audit history: `lix_change` is queryable SQL.\n4. Conflict handling: two versions edit the same entity and `mergeVersion()` throws.\n\nShape the domain as a collection of small entities:\n\n- Good: brochure sections, document blocks, paragraph rows, message keys, line items.\n- Risky: one huge document row with many editable fields.\n\nDemo recipe:\n\n1. Register a schema such as `acme_section`.\n2. Seed several rows in the published version.\n3. Create all reviewer versions up front from the same base.\n4. Write each reviewer's changes through `acme_section_by_version`.\n5. 
Read side by side by joining `acme_section_by_version` to `lix_version`.\n6. Merge non-overlapping row edits successfully.\n7. Query `lix_change`.\n8. Catch the deliberate same-row conflict.\n\n## Files With `lix_file`\n\n`lix_file` stores files as versioned bytes. Parent directories are created automatically.\n\n```ts\nawait lix.execute(\"INSERT INTO lix_file (id, path, data) VALUES ($1, $2, $3)\", [\n  \"file-readme\",\n  \"/docs/readme.md\",\n  new TextEncoder().encode(\"# Hello\\n\"),\n]);\n\nconst result = await lix.execute(\n  \"SELECT path, data FROM lix_file WHERE id = $1\",\n  [\"file-readme\"],\n);\n\nconst file = result.rows[0]!;\nconsole.log(\n  file.value(\"path\").asText(),\n  new TextDecoder().decode(file.value(\"data\").asBlob()!),\n);\n```\n\nColumns consumers usually need:\n\n| Column     | What it is                                                            |\n| ---------- | --------------------------------------------------------------------- |\n| `id`       | Stable identity for the file.                                         |\n| `path`     | Absolute path like `/docs/readme.md`.                                 |\n| `data`     | File contents as bytes.                                               |\n| `hidden`   | UI hint; does not affect storage.                                     |\n| `lixcol_*` | Version/change metadata, including `lixcol_version_id` where exposed. |\n\n`lix_file_by_version` exists for cross-version file reads and writes. Files-as-parsed-entities are product direction, not current JS SDK behavior.\n\n## The Change Journal\n\n`lix_change` is an immutable SQL table of changes across registered schemas and versions. 
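\n\nIts semantics can be pictured as a fold over ordered changes, where a null snapshot removes the entity; a self-contained sketch with plain objects standing in for journal rows:\n\n```ts\ntype ChangeRow = {\n  entity_id: string;\n  created_at: string;\n  snapshot_content: unknown; // null marks a tombstone/removal\n};\n\n// Replay changes oldest-to-newest into latest state per entity.\nfunction replay(changes: ChangeRow[]): Map<string, unknown> {\n  const ordered = [...changes].sort((a, b) =>\n    a.created_at.localeCompare(b.created_at),\n  );\n  const state = new Map<string, unknown>();\n  for (const change of ordered) {\n    if (change.snapshot_content === null) state.delete(change.entity_id);\n    else state.set(change.entity_id, change.snapshot_content);\n  }\n  return state;\n}\n```\n\n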
Use it for audit logs, blame, history, activity feeds, and undo-style UI.\n\nImportant columns include `id`, `entity_id`, `schema_key`, `snapshot_content`, `created_at`, and `lixcol_*` metadata.\n\n```ts\n// Audit log for one entity, oldest to newest.\nawait lix.execute(\n  `SELECT created_at, snapshot_content\n     FROM lix_change\n    WHERE schema_key = $1 AND entity_id = $2\n    ORDER BY created_at`,\n  [\"acme_note\", \"n1\"],\n);\n\n// Latest activity across a schema.\nawait lix.execute(\n  `SELECT created_at, entity_id, snapshot_content\n     FROM lix_change\n    WHERE schema_key = $1\n    ORDER BY created_at DESC\n    LIMIT 20`,\n  [\"acme_note\"],\n);\n```\n\n`snapshot_content` can be null or absent for tombstones, removals, or rows where content was not materialized. In the JS SDK, read it with `row.value(\"snapshot_content\").asJson()` or `row.get(\"snapshot_content\")`, then handle null. Do not blindly `JSON.parse` it as text.\n\n## Built-In Tables And UDFs\n\nCommon tables:\n\n| Table                   | What it gives consumers                                                                                 |\n| ----------------------- | ------------------------------------------------------------------------------------------------------- |\n| `lix_version`           | Writable version surface: `id`, `name`, `hidden`, `commit_id`.                                          |\n| `lix_change`            | Immutable change journal.                                                                               |\n| `lix_file`              | Versioned byte storage for files.                                                                       |\n| `lix_registered_schema` | Registry of app schemas plus built-ins; also exposes the Lix schema-definition meta-schema at runtime. 
|\n\n`lix_version` can be updated for admin flows:\n\n```ts\nawait lix.execute(\"UPDATE lix_version SET hidden = true WHERE id = $1\", [\n  marketing.id,\n]);\n```\n\nThere is no documented `deleteVersion()` helper in this preview. If the product wants reversible cleanup, hide the version. If it wants removal, `DELETE FROM lix_version WHERE id = $1` is the SQL surface; the engine rejects deleting the global version and active version.\n\nUse `lix_json($1)` to parse JSON text parameters when writing JSON-typed columns:\n\n```ts\nawait lix.execute(\n  \"INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))\",\n  [JSON.stringify(schema)],\n);\n```\n\nOther UDFs, such as `lix_json_get`, `lix_uuid_v7`, `lix_text_encode`, and `lix_empty_blob`, live in `dist-engine-src/src/sql2/udfs/` in the installed package.\n\n## Do And Avoid\n\n| Do | Avoid |\n| --- | --- |\n| Use `createBetterSqlite3Backend({ path })` for persistent state. | Opening `.lix` files with raw SQLite libraries. |\n| Use public imports from `@lix-js/sdk` and `@lix-js/sdk/sqlite`. | Importing `engine-wasm` or private internals. |\n| Use `$1`, `$2`, `$3` placeholders. | Bare `?` placeholders. |\n| Use `lix_json($1)` for JSON parameters. | Inlining stringified JSON directly into SQL. |\n| Use `_by_version` for cross-version reads/writes. | Switching versions just to render a side-by-side view. |\n| Name versions in user vocabulary. | User-facing words like branch, branch-1, or generic Draft. |\n| Model collaborative data as small rows. | One giant row when multiple reviewers edit different parts. |\n| Add `x-lix-unique` for non-primary unique fields. | Assuming JSON Schema property metadata creates uniqueness. |\n| Read `snapshot_content` as JSON/native and handle null. | Blindly `JSON.parse(row.value(...).asText())`. |\n| Wrap `mergeVersion()` in `try/catch`. | Assuming merges cannot conflict. 
|\n\n## Reporting SDK Friction\n\nIf you encounter an SDK bug, missing API, confusing error, documentation gap, or large implementation friction while using this skill, pause and ask the user whether they want you to open a GitHub issue via the `gh` CLI installed on their computer. Do not file an issue without confirmation.\n\nBefore filing, scan existing issues to avoid duplicates. If the user approves a report, include a minimal reproduction, expected behavior, actual behavior, the installed `@lix-js/sdk` version, runtime details, and relevant error output. Do not include private data, customer content, credentials, tokens, local paths, database contents, or proprietary schemas.\n"
  },
  {
    "path": "packages/js-sdk/package.json",
    "content": "{\n\t\"name\": \"@lix-js/sdk\",\n\t\"type\": \"module\",\n\t\"version\": \"0.6.0-preview.2\",\n\t\"main\": \"./dist/index.js\",\n\t\"types\": \"./dist/index.d.ts\",\n\t\"files\": [\n\t\t\"dist\",\n\t\t\"dist-engine-src\",\n\t\t\"SKILL.md\"\n\t],\n\t\"exports\": {\n\t\t\".\": {\n\t\t\t\"types\": \"./dist/index.d.ts\",\n\t\t\t\"default\": \"./dist/index.js\"\n\t\t},\n\t\t\"./sqlite\": {\n\t\t\t\"types\": \"./dist/sqlite/index.d.ts\",\n\t\t\t\"default\": \"./dist/sqlite/index.js\"\n\t\t}\n\t},\n\t\"description\": \"WASM-backed JS SDK wrapper for Lix\",\n\t\"scripts\": {\n\t\t\"build\": \"node ./scripts/build.js\",\n\t\t\"sync:builtin-schemas\": \"node ./scripts/sync-builtin-schemas.js\",\n\t\t\"sync:engine-src\": \"node ./scripts/sync-engine-src.js\",\n\t\t\"prepack\": \"node ./scripts/sync-engine-src.js\",\n\t\t\"typecheck\": \"pnpm run sync:builtin-schemas && tsc -p tsconfig.json --noEmit\",\n\t\t\"test\": \"node ./scripts/build.js && vitest run\",\n\t\t\"test:watch\": \"node ./scripts/build.js && vitest\"\n\t},\n\t\"peerDependencies\": {\n\t\t\"better-sqlite3\": \"^12.9.0\"\n\t},\n\t\"peerDependenciesMeta\": {\n\t\t\"better-sqlite3\": {\n\t\t\t\"optional\": true\n\t\t}\n\t},\n\t\"devDependencies\": {\n\t\t\"better-sqlite3\": \"^12.9.0\",\n\t\t\"typescript\": \"^5.5.4\",\n\t\t\"vitest\": \"^4.0.18\"\n\t},\n\t\"nx\": {\n\t\t\"targets\": {\n\t\t\t\"build\": {\n\t\t\t\t\"inputs\": [\n\t\t\t\t\t\"default\",\n\t\t\t\t\t\"^default\",\n\t\t\t\t\t\"publicEnv\",\n\t\t\t\t\t\"nodeVersion\",\n\t\t\t\t\t\"platform\",\n\t\t\t\t\t\"{workspaceRoot}/Cargo.toml\",\n\t\t\t\t\t\"{workspaceRoot}/Cargo.lock\",\n\t\t\t\t\t\"{workspaceRoot}/packages/engine/**/*\",\n\t\t\t\t\t\"{workspaceRoot}/packages/rs-sdk/**/*\",\n\t\t\t\t\t\"{workspaceRoot}/packages/js-sdk/Cargo.toml\",\n\t\t\t\t\t\"{workspaceRoot}/packages/js-sdk/wasm-bindgen.rs\"\n\t\t\t\t],\n\t\t\t\t\"outputs\": 
[\n\t\t\t\t\t\"{projectRoot}/dist\",\n\t\t\t\t\t\"{projectRoot}/dist-engine-src\",\n\t\t\t\t\t\"{projectRoot}/src/engine-wasm/wasm\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "packages/js-sdk/scripts/build.js",
    "content": "#!/usr/bin/env node\nimport { spawn } from \"node:child_process\";\nimport { dirname, join } from \"node:path\";\nimport { fileURLToPath } from \"node:url\";\nimport { cp, mkdir, readFile, rename, rm, writeFile } from \"node:fs/promises\";\nconst __dirname = dirname(fileURLToPath(import.meta.url));\nconst repoRoot = join(__dirname, \"..\", \"..\", \"..\");\nconst jsSdkDir = join(repoRoot, \"packages\", \"js-sdk\");\nconst wasmProfile = process.env.LIX_WASM_PROFILE ?? \"release\";\nconst useWasmSizeOptimizations =\n\twasmProfile === \"release\" && process.env.LIX_WASM_SIZE_OPT !== \"0\";\nconst targetDir = join(\n\trepoRoot,\n\t\"target\",\n\t\"wasm32-unknown-unknown\",\n\twasmProfile,\n);\nconst engineWasmPath = join(targetDir, \"lix_engine_wasm_bindgen.wasm\");\nconst engineOutDir = join(jsSdkDir, \"src\", \"engine-wasm\", \"wasm\");\nconst engineDistOutDir = join(jsSdkDir, \"dist\", \"engine-wasm\", \"wasm\");\nconst distDir = join(jsSdkDir, \"dist\");\nconst wasmBindgenOutName = \"lix_engine\";\n\nfunction run(cmd, args, opts = {}) {\n\treturn new Promise((resolve, reject) => {\n\t\tconst child = spawn(cmd, args, { stdio: \"inherit\", ...opts });\n\t\tchild.on(\"error\", reject);\n\t\tchild.on(\"exit\", (code) => {\n\t\t\tif (code === 0) resolve();\n\t\t\telse reject(new Error(`${cmd} exited with code ${code ?? 1}`));\n\t\t});\n\t});\n}\n\nasync function buildEngineWasm() {\n\tconst existingRustFlags = process.env.RUSTFLAGS ?? \"\";\n\tconst wasmSizeRustFlags = useWasmSizeOptimizations\n\t\t? 
\" -C opt-level=z -C lto=fat -C embed-bitcode=yes -C codegen-units=1 -C panic=abort\"\n\t\t: \"\";\n\tconst wasmRustFlags =\n\t\t`${existingRustFlags} --cfg getrandom_backend=\"wasm_js\"${wasmSizeRustFlags}`.trim();\n\tconst cargoArgs = [\n\t\t\"build\",\n\t\t\"-p\",\n\t\t\"lix_engine_wasm_bindgen\",\n\t\t\"--target\",\n\t\t\"wasm32-unknown-unknown\",\n\t];\n\tif (wasmProfile === \"release\") {\n\t\tcargoArgs.push(\"--release\");\n\t}\n\tawait run(\"cargo\", cargoArgs, {\n\t\tenv: {\n\t\t\t...process.env,\n\t\t\tRUSTFLAGS: wasmRustFlags,\n\t\t},\n\t});\n\n\tawait rm(engineOutDir, { recursive: true, force: true });\n\tawait run(\"wasm-bindgen\", [\n\t\tengineWasmPath,\n\t\t\"--target\",\n\t\t\"web\",\n\t\t\"--out-dir\",\n\t\tengineOutDir,\n\t\t\"--out-name\",\n\t\twasmBindgenOutName,\n\t]);\n\tawait normalizeWasmBindgenOutput(engineOutDir);\n\tawait stripWasmCustomSections(engineOutDir);\n\tawait mkdir(engineDistOutDir, { recursive: true });\n\tawait cp(engineOutDir, engineDistOutDir, { recursive: true, force: true });\n}\n\nasync function normalizeWasmBindgenOutput(outputDir) {\n\tconst generatedWasm = join(outputDir, `${wasmBindgenOutName}_bg.wasm`);\n\tconst generatedWasmTypes = join(outputDir, `${wasmBindgenOutName}_bg.wasm.d.ts`);\n\tconst normalizedWasm = join(outputDir, `${wasmBindgenOutName}.wasm`);\n\tconst normalizedWasmTypes = join(outputDir, `${wasmBindgenOutName}.wasm.d.ts`);\n\tconst fsmod = await import(\"node:fs\");\n\tif (fsmod.existsSync(generatedWasm)) await rename(generatedWasm, normalizedWasm);\n\tif (fsmod.existsSync(generatedWasmTypes)) await rename(generatedWasmTypes, normalizedWasmTypes);\n\n\tconst jsPath = join(outputDir, `${wasmBindgenOutName}.js`);\n\tconst js = await readFile(jsPath, \"utf8\");\n\tawait writeFile(\n\t\tjsPath,\n\t\tjs.replaceAll(`${wasmBindgenOutName}_bg.wasm`, `${wasmBindgenOutName}.wasm`),\n\t);\n}\n\nasync function stripWasmCustomSections(outputDir) {\n\tconst wasmPath = join(outputDir, 
`${wasmBindgenOutName}.wasm`);\n\tconst strippedWasmPath = join(outputDir, `${wasmBindgenOutName}.stripped.wasm`);\n\tawait run(\"wasm-tools\", [\"strip\", \"--all\", wasmPath, \"-o\", strippedWasmPath]);\n\tawait rename(strippedWasmPath, wasmPath);\n}\n\nasync function syncBuiltinSchemas() {\n\tawait run(\"node\", [\"./scripts/sync-builtin-schemas.js\"], { cwd: jsSdkDir });\n}\n\nasync function syncEngineSource() {\n\tawait run(\"node\", [\"./scripts/sync-engine-src.js\"], { cwd: jsSdkDir });\n}\n\nasync function buildTypescriptDist() {\n\tawait run(\"tsc\", [\"-p\", \"tsconfig.json\"], { cwd: jsSdkDir });\n}\n\nasync function main() {\n\tawait rm(distDir, { recursive: true, force: true });\n\tawait syncBuiltinSchemas();\n\tawait syncEngineSource();\n\tawait buildEngineWasm();\n\tawait buildTypescriptDist();\n}\n\nmain().catch((error) => {\n\tconsole.error(\"[build-wasm] Failed to generate wasm payloads:\\n\", error);\n\tprocess.exit(1);\n});\n"
  },
  {
    "path": "packages/js-sdk/scripts/sync-builtin-schemas.js",
    "content": "#!/usr/bin/env node\nimport { readdir, readFile, writeFile, mkdir } from \"node:fs/promises\";\nimport { dirname, join, extname, basename } from \"node:path\";\nimport { fileURLToPath } from \"node:url\";\n\nconst __dirname = dirname(fileURLToPath(import.meta.url));\nconst repoRoot = join(__dirname, \"..\", \"..\", \"..\");\nconst engineBuiltinDir = join(\n\trepoRoot,\n\t\"packages\",\n\t\"engine\",\n\t\"src\",\n\t\"schema\",\n\t\"builtin\",\n);\nconst outDir = join(repoRoot, \"packages\", \"js-sdk\", \"src\", \"generated\");\nconst outFile = join(outDir, \"builtin-schemas.ts\");\n\nconst toPascal = (value) =>\n\tvalue\n\t\t.split(\"_\")\n\t\t.filter(Boolean)\n\t\t.map((part) => part[0].toUpperCase() + part.slice(1))\n\t\t.join(\"\");\n\nasync function main() {\n\tconst entries = await readdir(engineBuiltinDir, { withFileTypes: true });\n\tconst jsonFiles = entries\n\t\t.filter((entry) => entry.isFile() && extname(entry.name) === \".json\")\n\t\t.map((entry) => entry.name)\n\t\t.sort();\n\n\tconst exportBlocks = [];\n\tfor (const file of jsonFiles) {\n\t\tconst schemaBase = basename(file, \".json\");\n\t\tconst exportName = `${toPascal(schemaBase)}Schema`;\n\t\tconst raw = await readFile(join(engineBuiltinDir, file), \"utf8\");\n\t\tconst parsed = JSON.parse(raw);\n\t\texportBlocks.push(\n\t\t\t`export const ${exportName} = ${JSON.stringify(parsed, null, 2)} as const;`,\n\t\t);\n\t}\n\n\tconst content = `// AUTO-GENERATED by scripts/sync-builtin-schemas.js\\n// Source of truth: packages/engine/src/schema/builtin/*.json\\n\\n${exportBlocks.join(\"\\n\\n\")}\\n`;\n\n\tawait mkdir(outDir, { recursive: true });\n\tawait writeFile(outFile, content);\n}\n\nmain().catch((error) => {\n\tconsole.error(\"[sync-builtin-schemas] failed\", error);\n\tprocess.exit(1);\n});\n"
  },
  {
    "path": "packages/js-sdk/scripts/sync-engine-src.js",
    "content": "#!/usr/bin/env node\nimport { cp, mkdir, rm, writeFile } from \"node:fs/promises\";\nimport { dirname, join } from \"node:path\";\nimport { fileURLToPath } from \"node:url\";\n\nconst __dirname = dirname(fileURLToPath(import.meta.url));\nconst repoRoot = join(__dirname, \"..\", \"..\", \"..\");\nconst jsSdkDir = join(repoRoot, \"packages\", \"js-sdk\");\nconst engineSrcDir = join(repoRoot, \"packages\", \"engine\", \"src\");\nconst bundledDir = join(jsSdkDir, \"dist-engine-src\");\nconst bundledSrcDir = join(bundledDir, \"src\");\n\nasync function main() {\n\tawait rm(bundledDir, { recursive: true, force: true });\n\tawait mkdir(bundledDir, { recursive: true });\n\tawait cp(engineSrcDir, bundledSrcDir, {\n\t\trecursive: true,\n\t\tforce: true,\n\t});\n\tawait writeFile(\n\t\tjoin(bundledDir, \"README.md\"),\n\t\t[\n\t\t\t\"# Bundled Lix Engine Source\",\n\t\t\t\"\",\n\t\t\t\"This directory is a generated snapshot of the Rust engine source that backs this @lix-js/sdk release.\",\n\t\t\t\"\",\n\t\t\t\"Source in the Lix monorepo: `packages/engine/src`\",\n\t\t\t\"\",\n\t\t\t\"Agents should inspect these files when SDK behavior is unclear instead of relying only on SKILL.md prose.\",\n\t\t\t\"\",\n\t\t\t\"Useful entry points:\",\n\t\t\t\"\",\n\t\t\t\"- `src/sql2/entity_provider.rs` - registered schema SQL surfaces\",\n\t\t\t\"- `src/sql2/change_provider.rs` - `lix_change` projection\",\n\t\t\t\"- `src/sql2/version_provider.rs` - writable `lix_version` surface\",\n\t\t\t\"- `src/transaction/validation.rs` - primary-key, unique, foreign-key, and shape validation\",\n\t\t\t\"- `src/schema/definition.json` - Lix schema-definition meta-schema\",\n\t\t\t\"- `src/schema/builtin/` - built-in schema definitions\",\n\t\t\t\"\",\n\t\t\t\"Regenerate with `pnpm --filter @lix-js/sdk sync:engine-src` from the repo root.\",\n\t\t\t\"\",\n\t\t].join(\"\\n\"),\n\t);\n}\n\nmain().catch((error) => {\n\tconsole.error(\"[sync-engine-src] Failed to sync engine source:\\n\", 
error);\n\tprocess.exit(1);\n});\n"
  },
  {
    "path": "packages/js-sdk/src/builtin-schemas.ts",
    "content": "export * from \"./generated/builtin-schemas.js\";\n"
  },
  {
    "path": "packages/js-sdk/src/engine-wasm/index.ts",
    "content": "export { default } from \"./wasm/lix_engine.js\";\nexport * from \"./wasm/lix_engine.js\";\nimport type { InitInput } from \"./wasm/lix_engine.js\";\n\nexport type JsonValue =\n\t| null\n\t| boolean\n\t| number\n\t| string\n\t| JsonValue[]\n\t| { [key: string]: JsonValue };\n\nexport type ValueKind =\n\t| \"null\"\n\t| \"boolean\"\n\t| \"integer\"\n\t| \"real\"\n\t| \"text\"\n\t| \"json\"\n\t| \"blob\";\n\nexport type LixValue =\n\t| { kind: \"null\"; value: null }\n\t| { kind: \"boolean\"; value: boolean }\n\t| { kind: \"integer\"; value: number }\n\t| { kind: \"real\"; value: number }\n\t| { kind: \"text\"; value: string }\n\t| { kind: \"json\"; value: JsonValue }\n\t| { kind: \"blob\"; base64: string };\n\nexport class Value {\n\tkind: ValueKind;\n\tvalue: null | boolean | number | string | JsonValue | undefined;\n\tbase64: string | undefined;\n\n\tconstructor(\n\t\tkind: ValueKind,\n\t\tvalue: null | boolean | number | string | JsonValue | undefined,\n\t\tbase64?: string,\n\t) {\n\t\tthis.kind = kind;\n\t\tthis.value = value;\n\t\tthis.base64 = base64;\n\t}\n\n\tstatic null(): Value {\n\t\treturn new Value(\"null\", null);\n\t}\n\n\tstatic integer(value: number): Value {\n\t\tif (!Number.isFinite(value) || !Number.isInteger(value)) {\n\t\t\tthrow new TypeError(\"Value.integer() requires a finite integer number\");\n\t\t}\n\t\treturn new Value(\"integer\", value);\n\t}\n\n\tstatic boolean(value: boolean): Value {\n\t\treturn new Value(\"boolean\", value);\n\t}\n\n\tstatic real(value: number): Value {\n\t\tif (!Number.isFinite(value)) {\n\t\t\tthrow new TypeError(\"Value.real() requires a finite number\");\n\t\t}\n\t\treturn new Value(\"real\", value);\n\t}\n\n\tstatic text(value: string): Value {\n\t\tif (!isWellFormedUtf16(value)) {\n\t\t\tthrow new TypeError(\"Value.text() requires a well-formed UTF-16 string\");\n\t\t}\n\t\treturn new Value(\"text\", value);\n\t}\n\n\tstatic json(value: JsonValue): Value {\n\t\treturn new Value(\"json\", 
normalizeJsonValue(value));\n\t}\n\n\tstatic blob(value: Uint8Array): Value {\n\t\treturn new Value(\"blob\", undefined, bytesToBase64(value));\n\t}\n\n\tstatic from(raw: unknown): Value {\n\t\tif (raw instanceof Value) return raw;\n\t\tif (isLixValue(raw)) {\n\t\t\tswitch (raw.kind) {\n\t\t\t\tcase \"null\":\n\t\t\t\t\treturn Value.null();\n\t\t\t\tcase \"boolean\":\n\t\t\t\t\treturn Value.boolean(raw.value);\n\t\t\t\tcase \"integer\":\n\t\t\t\t\treturn Value.integer(raw.value);\n\t\t\t\tcase \"real\":\n\t\t\t\t\treturn Value.real(raw.value);\n\t\t\t\tcase \"text\":\n\t\t\t\t\treturn Value.text(raw.value);\n\t\t\t\tcase \"json\":\n\t\t\t\t\treturn Value.json(normalizeJsonValue(raw.value));\n\t\t\t\tcase \"blob\":\n\t\t\t\t\treturn new Value(\"blob\", undefined, raw.base64);\n\t\t\t}\n\t\t}\n\t\tif (raw === null) return Value.null();\n\t\tif (raw === undefined) {\n\t\t\tthrow new TypeError(\"undefined is not a valid SQL parameter\");\n\t\t}\n\t\tif (typeof raw === \"number\") {\n\t\t\treturn Number.isInteger(raw) ? Value.integer(raw) : Value.real(raw);\n\t\t}\n\t\tif (typeof raw === \"boolean\") return Value.boolean(raw);\n\t\tif (typeof raw === \"string\") return Value.text(raw);\n\t\tif (raw instanceof Uint8Array) return Value.blob(raw);\n\t\tif (raw instanceof ArrayBuffer) return Value.blob(new Uint8Array(raw));\n\t\tif (ArrayBuffer.isView(raw)) {\n\t\t\tthrow new TypeError(\n\t\t\t\t\"typed array SQL parameters must be Uint8Array; other ArrayBuffer views are ambiguous\",\n\t\t\t);\n\t\t}\n\t\tif (raw instanceof Date) {\n\t\t\tthrow new TypeError(\n\t\t\t\t\"Date is not a valid SQL parameter; pass date.toISOString() or date.getTime() explicitly\",\n\t\t\t);\n\t\t}\n\t\tif (raw && typeof raw === \"object\") {\n\t\t\treturn Value.json(normalizeJsonValue(raw));\n\t\t}\n\t\tthrow new TypeError(\n\t\t\t\"Value.from() requires a LixValue, JSON value, or binary value\",\n\t\t);\n\t}\n\n\tasInteger(): number | undefined {\n\t\treturn this.kind === \"integer\" ? 
(this.value as number) : undefined;\n\t}\n\n\tasBoolean(): boolean | undefined {\n\t\treturn this.kind === \"boolean\" ? (this.value as boolean) : undefined;\n\t}\n\n\tasReal(): number | undefined {\n\t\treturn this.kind === \"real\" ? (this.value as number) : undefined;\n\t}\n\n\tasText(): string | undefined {\n\t\treturn this.kind === \"text\" ? (this.value as string) : undefined;\n\t}\n\n\tasJson(): JsonValue | undefined {\n\t\treturn this.kind === \"json\" ? normalizeJsonValue(this.value) : undefined;\n\t}\n\n\tasBlob(): Uint8Array | undefined {\n\t\treturn this.kind === \"blob\" && this.base64 !== undefined\n\t\t\t? base64ToBytes(this.base64)\n\t\t\t: undefined;\n\t}\n\n\ttoJSON(): LixValue {\n\t\tswitch (this.kind) {\n\t\t\tcase \"null\":\n\t\t\t\treturn { kind: \"null\", value: null };\n\t\t\tcase \"boolean\":\n\t\t\t\treturn { kind: \"boolean\", value: this.asBoolean() ?? false };\n\t\t\tcase \"integer\":\n\t\t\t\treturn { kind: \"integer\", value: this.asInteger() ?? 0 };\n\t\t\tcase \"real\":\n\t\t\t\treturn { kind: \"real\", value: this.asReal() ?? 0 };\n\t\t\tcase \"text\":\n\t\t\t\treturn { kind: \"text\", value: this.asText() ?? \"\" };\n\t\t\tcase \"json\":\n\t\t\t\treturn { kind: \"json\", value: this.asJson() ?? null };\n\t\t\tcase \"blob\":\n\t\t\t\treturn { kind: \"blob\", base64: this.base64 ?? \"\" };\n\t\t}\n\t}\n}\n\nexport type ExecuteResult = {\n\tcolumns: string[];\n\trows: LixValue[][];\n\trowsAffected: number;\n\tnotices: LixNotice[];\n};\n\nexport type LixNotice = {\n\tcode: string;\n\tmessage: string;\n\thint?: string;\n};\n\n/**\n * Error thrown by the Lix engine. Extends the standard `Error` with a\n * machine-readable `code`, optional `hint`, and optional structured `details`.\n *\n * Hints follow the Postgres/rustc convention: `message` states what went\n * wrong in factual terms; `hint` offers a fix when one is known. Consumers\n * typically render the hint alongside the primary message (e.g. 
as\n * `hint: <text>` in a CLI, secondary text in a UI).\n */\nexport interface LixError extends Error {\n\tcode: string;\n\thint?: string;\n\tdetails?: unknown;\n}\n\ntype Assert<T extends true> = T;\ntype _LixErrorHasDetails = Assert<\n\tLixError extends { details?: unknown } ? true : false\n>;\ntype _LixErrorDoesNotHaveData = Assert<\n\t\"data\" extends keyof LixError ? false : true\n>;\ntype _LixErrorDoesNotHaveDescription = Assert<\n\t\"description\" extends keyof LixError ? false : true\n>;\n\n/**\n * Type guard: returns `true` when `err` is a Lix-produced error carrying a\n * structured `code` field (all engine codes start with `LIX_`).\n */\nexport function isLixError(err: unknown): err is LixError {\n\treturn (\n\t\terr instanceof Error &&\n\t\ttypeof (err as Partial<LixError>).code === \"string\" &&\n\t\t(err as LixError).code.startsWith(\"LIX_\")\n\t);\n}\n\nfunction isLixValue(value: unknown): value is LixValue {\n\tif (!value || typeof value !== \"object\") {\n\t\treturn false;\n\t}\n\tconst kind = (value as { kind?: unknown }).kind;\n\tif (kind === \"null\") {\n\t\treturn (value as { value?: unknown }).value === null;\n\t}\n\tif (kind === \"boolean\") {\n\t\treturn typeof (value as { value?: unknown }).value === \"boolean\";\n\t}\n\tif (kind === \"integer\" || kind === \"real\") {\n\t\tconst raw = (value as { value?: unknown }).value;\n\t\tif (typeof raw !== \"number\" || !Number.isFinite(raw)) {\n\t\t\treturn false;\n\t\t}\n\t\tif (kind === \"integer\" && !Number.isInteger(raw)) {\n\t\t\treturn false;\n\t\t}\n\t\treturn true;\n\t}\n\tif (kind === \"text\") {\n\t\tconst raw = (value as { value?: unknown }).value;\n\t\treturn typeof raw === \"string\" && isWellFormedUtf16(raw);\n\t}\n\tif (kind === \"json\") {\n\t\treturn isJsonValue((value as { value?: unknown }).value);\n\t}\n\tif (kind === \"blob\") {\n\t\treturn typeof (value as { base64?: unknown }).base64 === \"string\";\n\t}\n\treturn false;\n}\n\nfunction isJsonValue(value: unknown): value is 
JsonValue {\n\ttry {\n\t\tnormalizeJsonValue(value);\n\t\treturn true;\n\t} catch {\n\t\treturn false;\n\t}\n}\n\nfunction normalizeJsonValue(value: unknown, seen = new WeakSet<object>()): JsonValue {\n\tif (\n\t\tvalue === null ||\n\t\ttypeof value === \"boolean\"\n\t) {\n\t\treturn value;\n\t}\n\tif (typeof value === \"string\") {\n\t\tif (!isWellFormedUtf16(value)) {\n\t\t\tthrow new TypeError(\"JSON strings must be well-formed UTF-16\");\n\t\t}\n\t\treturn value;\n\t}\n\tif (typeof value === \"number\") {\n\t\tif (!Number.isFinite(value)) {\n\t\t\tthrow new TypeError(\"JSON numbers must be finite\");\n\t\t}\n\t\treturn value;\n\t}\n\tif (Array.isArray(value)) {\n\t\tif (seen.has(value)) {\n\t\t\tthrow new TypeError(\"JSON values must not contain circular references\");\n\t\t}\n\t\tseen.add(value);\n\t\tconst normalized = value.map((item) => normalizeJsonValue(item, seen));\n\t\tseen.delete(value);\n\t\treturn normalized;\n\t}\n\tif (!value || typeof value !== \"object\") {\n\t\tthrow new TypeError(\"expected a JSON-compatible value\");\n\t}\n\n\tif (value instanceof Date) {\n\t\tthrow new TypeError(\"Date is not a JSON value\");\n\t}\n\tconst prototype = Object.getPrototypeOf(value);\n\tif (prototype !== Object.prototype && prototype !== null) {\n\t\tthrow new TypeError(\"JSON objects must be plain objects\");\n\t}\n\tif (seen.has(value)) {\n\t\tthrow new TypeError(\"JSON values must not contain circular references\");\n\t}\n\tseen.add(value);\n\tconst normalized: { [key: string]: JsonValue } = {};\n\tfor (const [key, entry] of Object.entries(value)) {\n\t\tif (!isWellFormedUtf16(key)) {\n\t\t\tthrow new TypeError(\"JSON object keys must be well-formed UTF-16\");\n\t\t}\n\t\tnormalized[key] = normalizeJsonValue(entry, seen);\n\t}\n\tseen.delete(value);\n\treturn normalized;\n}\n\nfunction isWellFormedUtf16(value: string): boolean {\n\tfor (let index = 0; index < value.length; index += 1) {\n\t\tconst code = value.charCodeAt(index);\n\t\tif (code >= 0xd800 && 
code <= 0xdbff) {\n\t\t\tconst next = value.charCodeAt(index + 1);\n\t\t\tif (next < 0xdc00 || next > 0xdfff) {\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tindex += 1;\n\t\t\tcontinue;\n\t\t}\n\t\tif (code >= 0xdc00 && code <= 0xdfff) {\n\t\t\treturn false;\n\t\t}\n\t}\n\treturn true;\n}\n\nfunction bytesToBase64(bytes: Uint8Array): string {\n\tconst maybeBuffer = (\n\t\tglobalThis as {\n\t\t\tBuffer?: {\n\t\t\t\tfrom(value: Uint8Array): { toString(encoding: string): string };\n\t\t\t};\n\t\t}\n\t).Buffer;\n\tif (maybeBuffer) {\n\t\treturn maybeBuffer.from(bytes).toString(\"base64\");\n\t}\n\n\tlet binary = \"\";\n\tconst chunkSize = 0x8000;\n\tfor (let index = 0; index < bytes.length; index += chunkSize) {\n\t\tconst chunk = bytes.subarray(index, index + chunkSize);\n\t\tbinary += String.fromCharCode(...chunk);\n\t}\n\treturn btoa(binary);\n}\n\nfunction base64ToBytes(base64: string): Uint8Array {\n\tconst maybeBuffer = (\n\t\tglobalThis as {\n\t\t\tBuffer?: {\n\t\t\t\tfrom(value: string, encoding: string): Uint8Array;\n\t\t\t};\n\t\t}\n\t).Buffer;\n\tif (maybeBuffer) {\n\t\treturn new Uint8Array(maybeBuffer.from(base64, \"base64\"));\n\t}\n\n\tconst binary = atob(base64);\n\tconst bytes = new Uint8Array(binary.length);\n\tfor (let index = 0; index < binary.length; index += 1) {\n\t\tbytes[index] = binary.charCodeAt(index);\n\t}\n\treturn bytes;\n}\n\nconst engineWasmUrl = new URL(\n\t\"./wasm/lix_engine.wasm\",\n\timport.meta.url,\n);\n\nfunction isNodeRuntime(): boolean {\n\tconst processLike = (\n\t\tglobalThis as { process?: { versions?: { node?: string } } }\n\t).process;\n\treturn (\n\t\t!!processLike &&\n\t\ttypeof processLike.versions === \"object\" &&\n\t\t!!processLike.versions?.node\n\t);\n}\n\nasync function tryReadNodeFileFromViteHttpUrl(\n\turl: URL,\n): Promise<Uint8Array | undefined> {\n\tif (url.protocol !== \"http:\" && url.protocol !== \"https:\") {\n\t\treturn undefined;\n\t}\n\n\t// Vitest/Vite in Node often rewrites module URLs to http://localhost 
with /@fs/.\n\tconst decodedPathname = decodeURIComponent(url.pathname);\n\tlet filePath: string | undefined;\n\tif (decodedPathname.startsWith(\"/@fs/\")) {\n\t\tfilePath = decodedPathname.slice(\"/@fs\".length);\n\t} else if (\n\t\turl.hostname === \"localhost\" ||\n\t\turl.hostname === \"127.0.0.1\" ||\n\t\t// WHATWG URL serializes IPv6 hosts with brackets, so `hostname` is \"[::1]\".\n\t\turl.hostname === \"[::1]\"\n\t) {\n\t\t// Some setups expose absolute filesystem paths directly on localhost.\n\t\tfilePath = decodedPathname;\n\t}\n\n\tif (!filePath) {\n\t\treturn undefined;\n\t}\n\n\tconst fsModuleName = \"node:fs/promises\";\n\tconst { readFile } = await import(fsModuleName);\n\ttry {\n\t\treturn new Uint8Array(await readFile(filePath));\n\t} catch {\n\t\treturn undefined;\n\t}\n}\n\n/**\n * Returns a wasm-bindgen-compatible init input that works in both browser and Node.\n *\n * - Browser: use a URL so the runtime fetches the `.wasm` asset.\n * - Node: read bytes from disk because `fetch(file://...)` is not supported.\n */\nexport async function resolveEngineWasmModuleOrPath(): Promise<InitInput> {\n\tif (!isNodeRuntime()) {\n\t\treturn engineWasmUrl;\n\t}\n\n\tif (engineWasmUrl.protocol === \"file:\") {\n\t\tconst fsModuleName = \"node:fs/promises\";\n\t\tconst urlModuleName = \"node:url\";\n\t\tconst [{ readFile }, { fileURLToPath }] = await Promise.all([\n\t\t\timport(fsModuleName),\n\t\t\timport(urlModuleName),\n\t\t]);\n\t\treturn readFile(fileURLToPath(engineWasmUrl));\n\t}\n\n\tif (\n\t\tengineWasmUrl.protocol === \"http:\" ||\n\t\tengineWasmUrl.protocol === \"https:\"\n\t) {\n\t\tconst localBytes = await tryReadNodeFileFromViteHttpUrl(engineWasmUrl);\n\t\tif (localBytes) {\n\t\t\treturn localBytes;\n\t\t}\n\n\t\tconst response = await fetch(engineWasmUrl);\n\t\tif (!response.ok) {\n\t\t\tthrow new Error(\n\t\t\t\t`failed to fetch wasm module from '${engineWasmUrl.toString()}': ${response.status} ${response.statusText}`,\n\t\t\t);\n\t\t}\n\t\treturn new Uint8Array(await response.arrayBuffer());\n\t}\n\n\treturn engineWasmUrl;\n}\n"
  },
  {
    "path": "packages/js-sdk/src/engine-wasm/value.test.ts",
    "content": "import { expect, test } from \"vitest\";\nimport { Value } from \"./index.js\";\n\ntest(\"Value.asBlob returns empty Uint8Array for canonical empty blob\", () => {\n\tconst decoded = Value.from({ kind: \"blob\", base64: \"\" }).asBlob();\n\texpect(decoded).toBeInstanceOf(Uint8Array);\n\texpect(decoded?.byteLength).toBe(0);\n});\n\ntest(\"Value.asBlob roundtrips non-empty canonical blob\", () => {\n\tconst decoded = Value.from({ kind: \"blob\", base64: \"AQID\" }).asBlob();\n\texpect(decoded).toEqual(new Uint8Array([1, 2, 3]));\n});\n"
  },
  {
    "path": "packages/js-sdk/src/index.ts",
    "content": "export * from \"./open-lix.js\";\nexport * from \"./builtin-schemas.js\";\nexport { Value, isLixError } from \"./engine-wasm/index.js\";\nexport type { LixError, LixValue } from \"./engine-wasm/index.js\";\nexport type { JsonValue, LixRuntimeValue } from \"./types.js\";\n"
  },
  {
    "path": "packages/js-sdk/src/open-lix.test.ts",
    "content": "import { execFile } from \"node:child_process\";\nimport { promisify } from \"node:util\";\nimport { fileURLToPath } from \"node:url\";\nimport { expect, test } from \"vitest\";\nimport {\n\topenLix,\n\tValue,\n\ttype BackendKvEntryPage,\n\ttype BackendKvExistsBatch,\n\ttype BackendKvGetRequest,\n\ttype BackendKvKeyPage,\n\ttype BackendKvScanRange,\n\ttype BackendKvScanRequest,\n\ttype BackendKvValueBatch,\n\ttype BackendKvValuePage,\n\ttype BackendKvWriteBatch,\n\ttype BackendKvWriteStats,\n\ttype ExecuteResult,\n\ttype LixBackend,\n\ttype LixBackendReadTransaction,\n\ttype LixBackendWriteTransaction,\n\ttype LixError,\n\ttype Lix,\n\tisLixError,\n} from \"./index.js\";\n\nconst execFileAsync = promisify(execFile);\nconst jsSdkRoot = fileURLToPath(new URL(\"..\", import.meta.url));\n\ntest(\"openLix exposes the rs-sdk e2e flow\", async () => {\n\tconst lix = await openLix();\n\tconst mainVersionId = await lix.activeVersionId();\n\n\tawait registerCrmTaskSchema(lix);\n\n\tawait lix.execute(\n\t\t\"INSERT INTO crm_task (id, title, done, meta) VALUES ($1, $2, $3, lix_json($4))\",\n\t\t[\n\t\t\t\"task-1\",\n\t\t\t\"Draft JS SDK flow\",\n\t\t\tfalse,\n\t\t\tJSON.stringify({ priority: \"high\", tags: [\"sdk\", \"json\"] }),\n\t\t],\n\t);\n\n\tconst projected = await lix.execute(\n\t\t\"SELECT title, meta FROM crm_task WHERE id = $1\",\n\t\t[\"task-1\"],\n\t);\n\tconst projectedRow = projected.rows[0]!;\n\texpect(projectedRow.get(\"title\")).toBe(\"Draft JS SDK flow\");\n\texpect(projectedRow.value(\"title\")).toBeInstanceOf(Value);\n\texpect(projectedRow.get(\"meta\")).toEqual({\n\t\tpriority: \"high\",\n\t\ttags: [\"sdk\", \"json\"],\n\t});\n\texpect(projectedRow.value(\"meta\").kind).toBe(\"json\");\n\texpect(projectedRow.value(\"meta\").asJson()).toEqual({\n\t\tpriority: \"high\",\n\t\ttags: [\"sdk\", \"json\"],\n\t});\n\texpect(projectedRow.toObject()).toEqual({\n\t\ttitle: \"Draft JS SDK flow\",\n\t\tmeta: { priority: \"high\", tags: [\"sdk\", 
\"json\"] },\n\t});\n\texpect(projectedRow.toValueMap().title).toBeInstanceOf(Value);\n\texpect(() => projectedRow.get(\"missing\")).toThrow(\n\t\t/Available columns: title, meta/,\n\t);\n\n\texpect(await taskDone(lix, \"task-1\")).toBe(false);\n\n\tconst mainHead = await lix.execute(\"SELECT lix_active_version_commit_id()\");\n\tconst mainHeadCommitId = mainHead.rows[0]!.get(\"lix_active_version_commit_id()\");\n\texpect(typeof mainHeadCommitId).toBe(\"string\");\n\n\tconst draft = await lix.createVersion({\n\t\tid: \"draft-version\",\n\t\tname: \"Draft\",\n\t});\n\texpect(draft).toMatchObject({\n\t\tid: \"draft-version\",\n\t\tname: \"Draft\",\n\t\thidden: false,\n\t\tcommitId: mainHeadCommitId,\n\t});\n\n\tawait lix.switchVersion({ versionId: draft.id });\n\n\tawait lix.execute(\"UPDATE crm_task SET done = $1 WHERE id = $2\", [\n\t\ttrue,\n\t\t\"task-1\",\n\t]);\n\n\texpect(await taskDone(lix, \"task-1\")).toBe(true);\n\n\tawait lix.switchVersion({ versionId: mainVersionId });\n\n\texpect(await taskDone(lix, \"task-1\")).toBe(false);\n\n\tconst preview = await lix.mergeVersionPreview({\n\t\tsourceVersionId: draft.id,\n\t});\n\texpect(preview.outcome).toBe(\"fastForward\");\n\texpect(preview.targetVersionId).toBe(mainVersionId);\n\texpect(preview.sourceVersionId).toBe(draft.id);\n\texpect(preview.changeStats).toEqual({\n\t\ttotal: 1,\n\t\tadded: 0,\n\t\tmodified: 1,\n\t\tremoved: 0,\n\t});\n\texpect(preview.conflicts).toEqual([]);\n\texpect(await taskDone(lix, \"task-1\")).toBe(false);\n\n\tconst merge = await lix.mergeVersion({\n\t\tsourceVersionId: draft.id,\n\t});\n\n\texpect(merge.outcome).toBe(\"fastForward\");\n\texpect(merge.targetVersionId).toBe(mainVersionId);\n\texpect(merge.changeStats).toEqual({\n\t\ttotal: 1,\n\t\tadded: 0,\n\t\tmodified: 1,\n\t\tremoved: 0,\n\t});\n\texpect(merge.createdMergeCommitId).toBeNull();\n\texpect(await taskDone(lix, \"task-1\")).toBe(true);\n\n\tawait lix.close();\n\tawait lix.close();\n\tawait 
expect(lix.activeVersionId()).rejects.toMatchObject({\n\t\tcode: \"LIX_ERROR_CLOSED\",\n\t});\n\tawait expect(lix.execute(\"SELECT 1\")).rejects.toMatchObject({\n\t\tcode: \"LIX_ERROR_CLOSED\",\n\t});\n});\n\ntest(\"openLix accepts an explicit backend\", async () => {\n\tconst backend = createMemoryBackend();\n\n\tconst first = await openLix({ backend });\n\tawait registerCrmTaskSchema(first);\n\tawait first.execute(\n\t\t\"INSERT INTO crm_task (id, title, done, meta) VALUES ($1, $2, $3, lix_json($4))\",\n\t\t[\n\t\t\t\"backend-task\",\n\t\t\t\"Stored through explicit backend\",\n\t\t\tfalse,\n\t\t\tJSON.stringify({ priority: \"normal\" }),\n\t\t],\n\t);\n\tawait first.close();\n\n\tconst second = await openLix({ backend });\n\texpect(await taskDone(second, \"backend-task\")).toBe(false);\n\tawait second.close();\n});\n\ntest(\"execute supports UNION ALL without trapping wasm\", async () => {\n\tconst lix = await openLix();\n\n\tconst result = await lix.execute(\"SELECT 1 UNION ALL SELECT 2\");\n\n\texpect(result.rows.map((row) => row.get(\"Int64(1)\"))).toEqual([1, 2]);\n\tawait lix.close();\n});\n\ntest(\"unsupported UNION DISTINCT returns a JS error without trapping wasm\", async () => {\n\tconst { stdout } = await execFileAsync(\n\t\tprocess.execPath,\n\t\t[\n\t\t\t\"--input-type=module\",\n\t\t\t\"-e\",\n\t\t\t`\n\t\t\t\timport { openLix } from './dist/index.js';\n\t\t\t\tconst lix = await openLix();\n\t\t\t\ttry {\n\t\t\t\t\tawait lix.execute('SELECT 1 UNION SELECT 1');\n\t\t\t\t\tconsole.log('unexpected-success');\n\t\t\t\t} catch (error) {\n\t\t\t\t\tconsole.log(error.code, error.message);\n\t\t\t\t} finally {\n\t\t\t\t\tawait lix.close().catch(() => {});\n\t\t\t\t}\n\t\t\t`,\n\t\t],\n\t\t{ cwd: jsSdkRoot },\n\t);\n\n\texpect(stdout).toContain(\"LIX_UNSUPPORTED_SQL_RUNTIME_PLAN\");\n\texpect(stdout).toContain(\"CoalescePartitionsExec\");\n});\n\ntest(\"INSERT SELECT UNION ALL executes without trapping wasm\", async () => {\n\tconst { stdout } = await 
execFileAsync(\n\t\tprocess.execPath,\n\t\t[\n\t\t\t\"--input-type=module\",\n\t\t\t\"-e\",\n\t\t\t`\n\t\t\t\timport { openLix } from './dist/index.js';\n\t\t\t\tconst lix = await openLix();\n\t\t\t\ttry {\n\t\t\t\t\tconst result = await lix.execute(\"INSERT INTO lix_directory (path) SELECT '/u1/' UNION ALL SELECT '/u2/'\");\n\t\t\t\t\tconsole.log(result.rowsAffected);\n\t\t\t\t} finally {\n\t\t\t\t\tawait lix.close().catch(() => {});\n\t\t\t\t}\n\t\t\t`,\n\t\t],\n\t\t{ cwd: jsSdkRoot },\n\t);\n\n\texpect(stdout.trim()).toBe(\"2\");\n});\n\ntest(\"createVersion can start from an explicit commit id\", async () => {\n\tconst lix = await openLix();\n\n\tawait registerCrmTaskSchema(lix);\n\tconst baseHead = await lix.execute(\"SELECT lix_active_version_commit_id()\");\n\tconst fromCommitId = baseHead.rows[0]!.get(\"lix_active_version_commit_id()\");\n\texpect(typeof fromCommitId).toBe(\"string\");\n\n\tawait lix.execute(\n\t\t\"INSERT INTO crm_task (id, title, done, meta) VALUES ($1, $2, $3, lix_json($4))\",\n\t\t[\n\t\t\t\"after-base\",\n\t\t\t\"Written after base\",\n\t\t\tfalse,\n\t\t\tJSON.stringify({ priority: \"normal\" }),\n\t\t],\n\t);\n\n\tconst version = await lix.createVersion({\n\t\tid: \"from-explicit-commit\",\n\t\tname: \"From explicit commit\",\n\t\tfromCommitId: fromCommitId as string,\n\t});\n\texpect(version).toMatchObject({\n\t\tid: \"from-explicit-commit\",\n\t\tname: \"From explicit commit\",\n\t\thidden: false,\n\t\tcommitId: fromCommitId,\n\t});\n\tawait lix.switchVersion({ versionId: version.id });\n\n\tconst projected = await lix.execute(\n\t\t\"SELECT id FROM crm_task WHERE id = $1\",\n\t\t[\"after-base\"],\n\t);\n\texpect(projected.rows).toHaveLength(0);\n\n\tawait lix.close();\n});\n\ntest(\"merge conflicts expose structured details\", async () => {\n\tconst lix = await openLix();\n\tconst mainVersionId = await lix.activeVersionId();\n\tawait registerCrmTaskSchema(lix);\n\tawait lix.execute(\n\t\t\"INSERT INTO crm_task (id, title, done, 
meta) VALUES ($1, $2, $3, lix_json($4))\",\n\t\t[\n\t\t\t\"conflict-task\",\n\t\t\t\"Base\",\n\t\t\tfalse,\n\t\t\tJSON.stringify({ priority: \"normal\" }),\n\t\t],\n\t);\n\tconst draft = await lix.createVersion({\n\t\tid: \"conflict-draft\",\n\t\tname: \"Conflict draft\",\n\t});\n\n\tawait lix.switchVersion({ versionId: draft.id });\n\tawait lix.execute(\"UPDATE crm_task SET title = $1 WHERE id = $2\", [\n\t\t\"Draft\",\n\t\t\"conflict-task\",\n\t]);\n\n\tawait lix.switchVersion({ versionId: mainVersionId });\n\tawait lix.execute(\"UPDATE crm_task SET title = $1 WHERE id = $2\", [\n\t\t\"Main\",\n\t\t\"conflict-task\",\n\t]);\n\n\ttry {\n\t\tawait lix.mergeVersion({ sourceVersionId: draft.id });\n\t\tthrow new Error(\"expected merge conflict\");\n\t} catch (error) {\n\t\texpect(isLixError(error)).toBe(true);\n\t\tif (!isLixError(error)) throw error;\n\t\texpect(error.code).toBe(\"LIX_MERGE_CONFLICT\");\n\t\texpect(error.message).toContain(\"tracked-state conflict\");\n\t\texpect(error.details).toBeDefined();\n\t\texpect((error as LixError & { data?: unknown }).data).toBeUndefined();\n\t\texpect(\n\t\t\t\"description\" in (error as LixError & { description?: unknown }),\n\t\t).toBe(false);\n\t\tconst details = error.details as {\n\t\t\tconflicts?: Array<{\n\t\t\t\tschemaKey?: string;\n\t\t\t\tentityId?: string[];\n\t\t\t\ttarget?: unknown;\n\t\t\t\tsource?: unknown;\n\t\t\t}>;\n\t\t};\n\t\texpect(details.conflicts).toHaveLength(1);\n\t\texpect(details.conflicts?.[0]).toMatchObject({\n\t\t\tschemaKey: \"crm_task\",\n\t\t\tentityId: [\"conflict-task\"],\n\t\t});\n\t\texpect(details.conflicts?.[0]?.target).toBeDefined();\n\t\texpect(details.conflicts?.[0]?.source).toBeDefined();\n\t}\n\n\tawait lix.close();\n});\n\ntest(\"lix.close delegates backend close through the engine bridge\", async () => {\n\tlet closeCount = 0;\n\tconst backend = {\n\t\t...createMemoryBackend(),\n\t\tclose() {\n\t\t\tcloseCount += 1;\n\t\t},\n\t};\n\n\tconst lix = await openLix({ backend 
});\n\tawait lix.close();\n\tawait lix.close();\n\n\texpect(closeCount).toBe(1);\n});\n\ntest(\"engine errors expose structured hints\", async () => {\n\tconst lix = await openLix();\n\n\ttry {\n\t\tawait lix.execute(\"SELECT entity_id FROM lix_state_history\");\n\t\tthrow new Error(\"expected history query to fail\");\n\t} catch (error) {\n\t\texpect(isLixError(error)).toBe(true);\n\t\tif (!isLixError(error)) throw error;\n\t\texpect(error.code).toBe(\"LIX_HISTORY_FILTER_REQUIRED\");\n\t\texpect(error.hint).toContain(\"lix_active_version_commit_id()\");\n\t}\n\n\tawait lix.close();\n});\n\ntest(\"execute rejects invalid runtime arguments before wasm\", async () => {\n\tconst lix = await openLix();\n\tconst unsafeLix = lix as unknown as {\n\t\texecute(sql: unknown, params?: unknown): Promise<ExecuteResult>;\n\t};\n\n\tawait expect(unsafeLix.execute(123, [])).rejects.toMatchObject({\n\t\tname: \"LixError\",\n\t\tcode: \"LIX_INVALID_ARGUMENT\",\n\t\tmessage: \"lix.execute() expected sql to be a string\",\n\t\tdetails: {\n\t\t\toperation: \"execute\",\n\t\t\targument: \"sql\",\n\t\t\texpected: \"string\",\n\t\t\tactual: \"number\",\n\t\t},\n\t});\n\n\tawait expect(unsafeLix.execute(\"SELECT 1\", 123)).rejects.toMatchObject({\n\t\tname: \"LixError\",\n\t\tcode: \"LIX_INVALID_ARGUMENT\",\n\t\tmessage: \"lix.execute() expected params to be an array\",\n\t\tdetails: {\n\t\t\toperation: \"execute\",\n\t\t\targument: \"params\",\n\t\t\texpected: \"array\",\n\t\t\tactual: \"number\",\n\t\t},\n\t});\n\n\tawait lix.close();\n});\n\ntest(\"execute rejects lossy JavaScript parameter coercions\", async () => {\n\tconst lix = await openLix();\n\tconst circular: Record<string, unknown> = {};\n\tcircular.self = circular;\n\n\tconst invalidCases: Array<{\n\t\tname: string;\n\t\tvalue: unknown;\n\t\tmessage: string | RegExp;\n\t\tactual?: string;\n\t}> = [\n\t\t{\n\t\t\tname: \"Date\",\n\t\t\tvalue: new Date(\"2026-01-02T03:04:05.000Z\"),\n\t\t\tmessage: /Date is not a valid SQL 
parameter/,\n\t\t\tactual: \"Date\",\n\t\t},\n\t\t{\n\t\t\tname: \"Int32Array\",\n\t\t\tvalue: new Int32Array([1, 2, 3]),\n\t\t\tmessage: /typed array SQL parameters must be Uint8Array/,\n\t\t\tactual: \"Int32Array\",\n\t\t},\n\t\t{\n\t\t\tname: \"lone surrogate\",\n\t\t\tvalue: \"X\\uD83DY\",\n\t\t\tmessage: /well-formed UTF-16/,\n\t\t\tactual: \"string\",\n\t\t},\n\t\t{\n\t\t\tname: \"undefined\",\n\t\t\tvalue: undefined,\n\t\t\tmessage: /undefined is not a valid SQL parameter/,\n\t\t\tactual: \"undefined\",\n\t\t},\n\t\t{\n\t\t\tname: \"BigInt\",\n\t\t\tvalue: 10n,\n\t\t\tmessage: /requires a LixValue, JSON value, or binary value/,\n\t\t\tactual: \"bigint\",\n\t\t},\n\t\t{\n\t\t\tname: \"NaN\",\n\t\t\tvalue: Number.NaN,\n\t\t\tmessage: /finite number/,\n\t\t\tactual: \"number\",\n\t\t},\n\t\t{\n\t\t\tname: \"Infinity\",\n\t\t\tvalue: Number.POSITIVE_INFINITY,\n\t\t\tmessage: /finite number/,\n\t\t\tactual: \"number\",\n\t\t},\n\t\t{\n\t\t\tname: \"circular object\",\n\t\t\tvalue: circular,\n\t\t\tmessage: /circular references/,\n\t\t\tactual: \"object\",\n\t\t},\n\t\t{\n\t\t\tname: \"Symbol\",\n\t\t\tvalue: Symbol(\"x\"),\n\t\t\tmessage: /requires a LixValue, JSON value, or binary value/,\n\t\t\tactual: \"symbol\",\n\t\t},\n\t\t{\n\t\t\tname: \"function\",\n\t\t\tvalue: () => undefined,\n\t\t\tmessage: /requires a LixValue, JSON value, or binary value/,\n\t\t\tactual: \"function\",\n\t\t},\n\t];\n\n\tfor (const testCase of invalidCases) {\n\t\ttry {\n\t\t\tawait lix.execute(\"SELECT $1 AS v\", [testCase.value as never]);\n\t\t\tthrow new Error(`expected ${testCase.name} to fail`);\n\t\t} catch (error) {\n\t\t\texpect(error, testCase.name).toMatchObject({\n\t\t\t\tname: \"LixError\",\n\t\t\t\tcode: \"LIX_INVALID_PARAM\",\n\t\t\t\tdetails: {\n\t\t\t\t\toperation: \"execute\",\n\t\t\t\t\tparameter_index: 1,\n\t\t\t\t\targument: \"params[0]\",\n\t\t\t\t\tactual: testCase.actual,\n\t\t\t\t},\n\t\t\t});\n\t\t\tif (!(error instanceof Error)) throw 
error;\n\t\t\texpect(error.message, testCase.name).toMatch(testCase.message);\n\t\t}\n\t}\n\n\tawait lix.close();\n});\n\ntest(\"execute rejects extra SQL parameters\", async () => {\n\tconst lix = await openLix();\n\n\ttry {\n\t\tawait lix.execute(\"SELECT $1 AS v\", [1, 2]);\n\t\tthrow new Error(\"expected extra params to fail\");\n\t} catch (error) {\n\t\texpect(error).toMatchObject({\n\t\t\tcode: \"LIX_INVALID_PARAM\",\n\t\t\tdetails: {\n\t\t\t\toperation: \"execute\",\n\t\t\t\texpected_param_count: 1,\n\t\t\t\tprovided_param_count: 2,\n\t\t\t\tplaceholders: [\"$1\"],\n\t\t\t},\n\t\t});\n\t\tif (!(error instanceof Error)) throw error;\n\t\texpect(error.message).toBe(\n\t\t\t\"SQL expected 1 parameter(s), but 2 parameter(s) were provided\",\n\t\t);\n\t}\n\n\tawait lix.close();\n});\n\ntest(\"lix_state_history snapshot_content preserves JSON null for binary file rows\", async () => {\n\tconst lix = await openLix();\n\n\tawait lix.execute(\n\t\t\"INSERT INTO lix_file (id, path, data, hidden) VALUES ($1, $2, $3, false)\",\n\t\t[\n\t\t\t\"history-binary-js-repro\",\n\t\t\t\"/history/repro.bin\",\n\t\t\tnew Uint8Array([0x80, 0xff, 0x00]),\n\t\t],\n\t);\n\n\tconst result = await lix.execute(\n\t\t\"SELECT schema_key, snapshot_content \\\n\t\t FROM lix_state_history \\\n\t\t WHERE start_commit_id = lix_active_version_commit_id()\",\n\t);\n\tconst directoryRow = result.rows.find(\n\t\t(row) => row.get(\"schema_key\") === \"lix_directory_descriptor\",\n\t);\n\n\texpect(directoryRow?.get(\"snapshot_content\")).toMatchObject({\n\t\tparent_id: null,\n\t});\n\n\tawait lix.close();\n});\n\nasync function registerCrmTaskSchema(lix: Lix) {\n\tconst schema = {\n\t\t$schema: \"https://json-schema.org/draft/2020-12/schema\",\n\t\t\"x-lix-key\": \"crm_task\",\n\t\t\"x-lix-primary-key\": [\"/id\"],\n\t\ttype: \"object\",\n\t\trequired: [\"id\", \"title\", \"done\", \"meta\"],\n\t\tproperties: {\n\t\t\tid: { type: \"string\" },\n\t\t\ttitle: { type: \"string\" },\n\t\t\tdone: { type: 
\"boolean\" },\n\t\t\tmeta: { type: \"object\" },\n\t\t},\n\t\tadditionalProperties: false,\n\t} as const;\n\n\tawait lix.execute(\n\t\t\"INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))\",\n\t\t[JSON.stringify(schema)],\n\t);\n}\n\nasync function taskDone(lix: Lix, taskId: string): Promise<boolean> {\n\tconst result = await lix.execute(\n\t\t\"SELECT done FROM crm_task WHERE id = $1\",\n\t\t[taskId],\n\t);\n\tconst rows = expectRows(result);\n\texpect(rows.rows).toHaveLength(1);\n\tconst done = rows.rows[0]?.get(\"done\");\n\texpect(typeof done).toBe(\"boolean\");\n\treturn done as boolean;\n}\n\nfunction expectRows(result: ExecuteResult) {\n\treturn result;\n}\n\ntype StoredKvPair = {\n\tnamespace: string;\n\tkey: Uint8Array;\n\tvalue: Uint8Array;\n};\n\nfunction createMemoryBackend(): LixBackend {\n\tlet rows: StoredKvPair[] = [];\n\n\tfunction createTransaction(): LixBackendWriteTransaction {\n\t\t\tlet transactionRows = rows.map(cloneStoredPair);\n\t\t\tlet closed = false;\n\n\t\t\tconst ensureOpen = () => {\n\t\t\t\tif (closed) {\n\t\t\t\t\tthrow new Error(\"transaction is closed\");\n\t\t\t\t}\n\t\t\t};\n\n\t\t\treturn {\n\t\t\t\tgetValues(request): BackendKvValueBatch {\n\t\t\t\t\tensureOpen();\n\t\t\t\t\treturn {\n\t\t\t\t\t\tgroups: request.groups.map((group) => ({\n\t\t\t\t\t\t\tnamespace: group.namespace,\n\t\t\t\t\t\t\tvalues: group.keys.map((key) => {\n\t\t\t\t\t\t\t\tconst row = transactionRows.find(\n\t\t\t\t\t\t\t\t\t(row) =>\n\t\t\t\t\t\t\t\t\t\trow.namespace === group.namespace &&\n\t\t\t\t\t\t\t\t\t\tcompareBytes(row.key, key) === 0,\n\t\t\t\t\t\t\t\t);\n\t\t\t\t\t\t\t\treturn row ? 
new Uint8Array(row.value) : null;\n\t\t\t\t\t\t\t}),\n\t\t\t\t\t\t})),\n\t\t\t\t\t};\n\t\t\t\t},\n\t\t\t\texistsMany(request): BackendKvExistsBatch {\n\t\t\t\t\tensureOpen();\n\t\t\t\t\treturn {\n\t\t\t\t\t\tgroups: request.groups.map((group) => ({\n\t\t\t\t\t\t\tnamespace: group.namespace,\n\t\t\t\t\t\t\texists: group.keys.map((key) =>\n\t\t\t\t\t\t\t\ttransactionRows.some(\n\t\t\t\t\t\t\t\t\t(row) =>\n\t\t\t\t\t\t\t\t\t\trow.namespace === group.namespace &&\n\t\t\t\t\t\t\t\t\t\tcompareBytes(row.key, key) === 0,\n\t\t\t\t\t\t\t\t),\n\t\t\t\t\t\t\t),\n\t\t\t\t\t\t})),\n\t\t\t\t\t};\n\t\t\t\t},\n\t\t\t\tscanKeys(request): BackendKvKeyPage {\n\t\t\t\t\tensureOpen();\n\t\t\t\t\tconst { pairs, resumeAfter } = scanPage(transactionRows, request);\n\t\t\t\t\treturn {\n\t\t\t\t\t\tkeys: pairs.map((row) => new Uint8Array(row.key)),\n\t\t\t\t\t\tresumeAfter,\n\t\t\t\t\t};\n\t\t\t\t},\n\t\t\t\tscanValues(request): BackendKvValuePage {\n\t\t\t\t\tensureOpen();\n\t\t\t\t\tconst { pairs, resumeAfter } = scanPage(transactionRows, request);\n\t\t\t\t\treturn {\n\t\t\t\t\t\tvalues: pairs.map((row) => new Uint8Array(row.value)),\n\t\t\t\t\t\tresumeAfter,\n\t\t\t\t\t};\n\t\t\t\t},\n\t\t\t\tscanEntries(request): BackendKvEntryPage {\n\t\t\t\t\tensureOpen();\n\t\t\t\t\tconst { pairs, resumeAfter } = scanPage(transactionRows, request);\n\t\t\t\t\treturn {\n\t\t\t\t\t\tkeys: pairs.map((row) => new Uint8Array(row.key)),\n\t\t\t\t\t\tvalues: pairs.map((row) => new Uint8Array(row.value)),\n\t\t\t\t\t\tresumeAfter,\n\t\t\t\t\t};\n\t\t\t\t},\n\t\t\t\twriteKvBatch(batch): BackendKvWriteStats {\n\t\t\t\t\tensureOpen();\n\t\t\t\t\tconst stats: BackendKvWriteStats = {\n\t\t\t\t\t\tputs: 0,\n\t\t\t\t\t\tdeletes: 0,\n\t\t\t\t\t\tbytesWritten: 0,\n\t\t\t\t\t};\n\t\t\t\t\tfor (const group of batch.groups) {\n\t\t\t\t\t\tfor (const put of group.puts) {\n\t\t\t\t\t\t\tstats.puts += 1;\n\t\t\t\t\t\t\tstats.bytesWritten += put.key.length + put.value.length;\n\t\t\t\t\t\t\ttransactionRows = 
transactionRows.filter(\n\t\t\t\t\t\t\t\t(row) =>\n\t\t\t\t\t\t\t\t\trow.namespace !== group.namespace ||\n\t\t\t\t\t\t\t\t\tcompareBytes(row.key, put.key) !== 0,\n\t\t\t\t\t\t\t);\n\t\t\t\t\t\t\ttransactionRows.push({\n\t\t\t\t\t\t\t\tnamespace: group.namespace,\n\t\t\t\t\t\t\t\tkey: new Uint8Array(put.key),\n\t\t\t\t\t\t\t\tvalue: new Uint8Array(put.value),\n\t\t\t\t\t\t\t});\n\t\t\t\t\t\t}\n\t\t\t\t\t\tfor (const key of group.deletes) {\n\t\t\t\t\t\t\tstats.deletes += 1;\n\t\t\t\t\t\t\tstats.bytesWritten += key.length;\n\t\t\t\t\t\t\ttransactionRows = transactionRows.filter(\n\t\t\t\t\t\t\t\t(row) =>\n\t\t\t\t\t\t\t\t\trow.namespace !== group.namespace ||\n\t\t\t\t\t\t\t\t\tcompareBytes(row.key, key) !== 0,\n\t\t\t\t\t\t\t);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\treturn stats;\n\t\t\t\t},\n\t\t\t\tcommit() {\n\t\t\t\t\tensureOpen();\n\t\t\t\t\trows = transactionRows.map(cloneStoredPair);\n\t\t\t\t\tclosed = true;\n\t\t\t\t},\n\t\t\t\trollback() {\n\t\t\t\t\tensureOpen();\n\t\t\t\t\tclosed = true;\n\t\t\t\t},\n\t\t\t};\n\t}\n\n\treturn {\n\t\tbeginReadTransaction(): LixBackendReadTransaction {\n\t\t\treturn createTransaction();\n\t\t},\n\t\tbeginWriteTransaction(): LixBackendWriteTransaction {\n\t\t\treturn createTransaction();\n\t\t},\n\t};\n}\n\nfunction cloneStoredPair(row: StoredKvPair): StoredKvPair {\n\treturn {\n\t\tnamespace: row.namespace,\n\t\tkey: new Uint8Array(row.key),\n\t\tvalue: new Uint8Array(row.value),\n\t};\n}\n\nfunction scanPage(\n\trows: StoredKvPair[],\n\trequest: BackendKvScanRequest,\n): { pairs: StoredKvPair[]; resumeAfter: Uint8Array | null } {\n\tconst matches = rows\n\t\t.filter(\n\t\t\t(row) =>\n\t\t\t\trow.namespace === request.namespace &&\n\t\t\t\tkeyMatchesRange(row.key, request.range) &&\n\t\t\t\t(!request.after || compareBytes(row.key, request.after) > 0),\n\t\t)\n\t\t.sort((left, right) => compareBytes(left.key, right.key));\n\tconst hasMore = matches.length > request.limit;\n\tconst pairs = matches.slice(0, 
request.limit);\n\treturn {\n\t\tpairs,\n\t\tresumeAfter: hasMore ? (pairs.at(-1)?.key ?? null) : null,\n\t};\n}\n\nfunction keyMatchesRange(key: Uint8Array, range: BackendKvScanRange): boolean {\n\tif (range.kind === \"prefix\") {\n\t\tif (key.length < range.prefix.length) return false;\n\t\treturn range.prefix.every((byte, index) => key[index] === byte);\n\t}\n\treturn (\n\t\tcompareBytes(key, range.start) >= 0 && compareBytes(key, range.end) < 0\n\t);\n}\n\nfunction compareBytes(left: Uint8Array, right: Uint8Array): number {\n\tconst length = Math.min(left.length, right.length);\n\tfor (let index = 0; index < length; index++) {\n\t\tconst delta = left[index]! - right[index]!;\n\t\tif (delta !== 0) return delta;\n\t}\n\treturn left.length - right.length;\n}\n"
  },
  {
    "path": "packages/js-sdk/src/open-lix.ts",
    "content": "import init, {\n\tresolveEngineWasmModuleOrPath,\n\tValue,\n\ttype LixError,\n} from \"./engine-wasm/index.js\";\nimport * as wasmModule from \"./engine-wasm/index.js\";\n\nexport type JsonValue =\n\t| null\n\t| boolean\n\t| number\n\t| string\n\t| JsonValue[]\n\t| { [key: string]: JsonValue };\n\nexport type LixRuntimeValue = JsonValue | Uint8Array | ArrayBuffer | Value;\nexport type LixNativeValue = JsonValue | Uint8Array;\n\nexport type ExecuteResult = {\n\tcolumns: string[];\n\trows: Row[];\n\trowsAffected: number;\n\tnotices: LixNotice[];\n};\n\nexport type LixNotice = {\n\tcode: string;\n\tmessage: string;\n\thint?: string;\n};\n\nexport class Row {\n\treadonly columns: string[];\n\tprivate readonly valuesByIndex: Value[];\n\n\tconstructor(columns: string[], values: Value[]) {\n\t\tthis.columns = columns;\n\t\tthis.valuesByIndex = values;\n\t}\n\n\tget(columnName: string): LixNativeValue {\n\t\treturn valueToNative(this.value(columnName));\n\t}\n\n\ttryGet(columnName: string): LixNativeValue | undefined {\n\t\tconst value = this.tryValue(columnName);\n\t\treturn value === undefined ? undefined : valueToNative(value);\n\t}\n\n\tvalue(columnName: string): Value {\n\t\tconst index = this.columns.indexOf(columnName);\n\t\tif (index === -1) {\n\t\t\tthrow createLixError(\n\t\t\t\t\"LIX_COLUMN_NOT_FOUND\",\n\t\t\t\t`Column \"${columnName}\" does not exist. Available columns: ${this.availableColumns()}`,\n\t\t\t);\n\t\t}\n\t\tconst value = this.valuesByIndex[index];\n\t\tif (value === undefined) {\n\t\t\tthrow createLixError(\n\t\t\t\t\"LIX_COLUMN_NOT_FOUND\",\n\t\t\t\t`Column \"${columnName}\" is outside row width ${this.valuesByIndex.length}.`,\n\t\t\t);\n\t\t}\n\t\treturn value;\n\t}\n\n\ttryValue(columnName: string): Value | undefined {\n\t\tconst index = this.columns.indexOf(columnName);\n\t\treturn index === -1 ? 
undefined : this.valuesByIndex[index];\n\t}\n\n\tgetAt(index: number): LixNativeValue {\n\t\treturn valueToNative(this.valueAt(index));\n\t}\n\n\tvalueAt(index: number): Value {\n\t\tconst value = this.valuesByIndex[index];\n\t\tif (value === undefined) {\n\t\t\tthrow createLixError(\n\t\t\t\t\"LIX_COLUMN_NOT_FOUND\",\n\t\t\t\t`Column index ${index} is outside row width ${this.valuesByIndex.length}.`,\n\t\t\t);\n\t\t}\n\t\treturn value;\n\t}\n\n\tvalues(): Value[] {\n\t\treturn [...this.valuesByIndex];\n\t}\n\n\ttoObject(): Record<string, LixNativeValue> {\n\t\treturn Object.fromEntries(\n\t\t\tthis.columns.map((column, index) => [\n\t\t\t\tcolumn,\n\t\t\t\tvalueToNative(this.valueAt(index)),\n\t\t\t]),\n\t\t);\n\t}\n\n\ttoValueMap(): Record<string, Value> {\n\t\treturn Object.fromEntries(\n\t\t\tthis.columns.map((column, index) => [column, this.valueAt(index)]),\n\t\t);\n\t}\n\n\tprivate availableColumns(): string {\n\t\treturn this.columns.length === 0 ? \"<none>\" : this.columns.join(\", \");\n\t}\n}\n\nfunction valueToNative(value: Value): LixNativeValue {\n\tswitch (value.kind) {\n\t\tcase \"null\":\n\t\t\treturn null;\n\t\tcase \"boolean\":\n\t\tcase \"integer\":\n\t\tcase \"real\":\n\t\tcase \"text\":\n\t\tcase \"json\":\n\t\t\treturn value.value as JsonValue;\n\t\tcase \"blob\":\n\t\t\treturn value.asBlob() ?? 
new Uint8Array();\n\t}\n}\n\nexport type BackendKvScanRange =\n\t| { kind: \"prefix\"; prefix: Uint8Array }\n\t| { kind: \"range\"; start: Uint8Array; end: Uint8Array };\n\nexport type BackendKvGetRequest = {\n\tgroups: BackendKvGetGroup[];\n};\n\nexport type BackendKvGetGroup = {\n\tnamespace: string;\n\tkeys: Uint8Array[];\n};\n\nexport type BackendKvValueBatch = {\n\tgroups: BackendKvValueGroup[];\n};\n\nexport type BackendKvValueGroup = {\n\tnamespace: string;\n\tvalues: Array<Uint8Array | null>;\n};\n\nexport type BackendKvExistsBatch = {\n\tgroups: BackendKvExistsGroup[];\n};\n\nexport type BackendKvExistsGroup = {\n\tnamespace: string;\n\texists: boolean[];\n};\n\nexport type BackendKvScanRequest = {\n\tnamespace: string;\n\trange: BackendKvScanRange;\n\tafter?: Uint8Array | null;\n\tlimit: number;\n};\n\nexport type BackendKvKeyPage = {\n\tkeys: Uint8Array[];\n\tresumeAfter?: Uint8Array | null;\n};\n\nexport type BackendKvValuePage = {\n\tvalues: Uint8Array[];\n\tresumeAfter?: Uint8Array | null;\n};\n\nexport type BackendKvEntryPage = {\n\tkeys: Uint8Array[];\n\tvalues: Uint8Array[];\n\tresumeAfter?: Uint8Array | null;\n};\n\nexport type BackendKvPut = {\n\tkey: Uint8Array;\n\tvalue: Uint8Array;\n};\n\nexport type BackendKvWriteBatch = {\n\tgroups: BackendKvWriteGroup[];\n};\n\nexport type BackendKvWriteGroup = {\n\tnamespace: string;\n\tputs: BackendKvPut[];\n\tdeletes: Uint8Array[];\n};\n\nexport type BackendKvWriteStats = {\n\tputs: number;\n\tdeletes: number;\n\tbytesWritten: number;\n};\n\nexport type LixBackendReadTransaction = {\n\tgetValues(request: BackendKvGetRequest): BackendKvValueBatch;\n\texistsMany(request: BackendKvGetRequest): BackendKvExistsBatch;\n\tscanKeys(request: BackendKvScanRequest): BackendKvKeyPage;\n\tscanValues(request: BackendKvScanRequest): BackendKvValuePage;\n\tscanEntries(request: BackendKvScanRequest): BackendKvEntryPage;\n\trollback(): void;\n};\n\nexport type LixBackendWriteTransaction = LixBackendReadTransaction & 
{\n\twriteKvBatch(batch: BackendKvWriteBatch): BackendKvWriteStats;\n\tcommit(): void;\n};\n\nexport type LixBackend = {\n\tbeginReadTransaction(): LixBackendReadTransaction;\n\tbeginWriteTransaction(): LixBackendWriteTransaction;\n\tclose?(): void;\n};\n\nexport type OpenLixOptions = {\n\tbackend?: LixBackend;\n};\n\nexport type CreateVersionOptions = {\n\tid?: string;\n\tname: string;\n\tfromCommitId?: string;\n};\n\nexport type CreateVersionResult = {\n\tid: string;\n\tname: string;\n\thidden: boolean;\n\tcommitId: string;\n};\n\nexport type SwitchVersionOptions = {\n\tversionId: string;\n};\n\nexport type SwitchVersionResult = {\n\tversionId: string;\n};\n\nexport type MergeVersionOptions = {\n\tsourceVersionId: string;\n};\n\nexport type MergeVersionOutcome =\n\t| \"alreadyUpToDate\"\n\t| \"fastForward\"\n\t| \"mergeCommitted\";\n\nexport type MergeVersionResult = {\n\t/**\n\t * How the merge was applied. `fastForward` advances the target ref without\n\t * creating a merge commit, but can still make source changes visible.\n\t */\n\toutcome: MergeVersionOutcome;\n\ttargetVersionId: string;\n\tsourceVersionId: string;\n\tbaseCommitId: string;\n\ttargetHeadBeforeCommitId: string;\n\tsourceHeadBeforeCommitId: string;\n\ttargetHeadAfterCommitId: string;\n\tcreatedMergeCommitId: string | null;\n\tchangeStats: MergeChangeStats;\n};\n\nexport type MergeVersionPreviewResult = {\n\toutcome: MergeVersionOutcome;\n\ttargetVersionId: string;\n\tsourceVersionId: string;\n\tbaseCommitId: string;\n\ttargetHeadCommitId: string;\n\tsourceHeadCommitId: string;\n\tchangeStats: MergeChangeStats;\n\tconflicts: MergeConflict[];\n};\n\nexport type MergeChangeStats = {\n\ttotal: number;\n\tadded: number;\n\tmodified: number;\n\tremoved: number;\n};\n\nexport type MergeConflict = {\n\tkind: \"sameEntityChanged\";\n\tschemaKey: string;\n\tentityId: string[];\n\tfileId: string | null;\n\ttarget: MergeConflictSide;\n\tsource: MergeConflictSide;\n};\n\nexport type MergeConflictSide = 
{\n\tkind: \"added\" | \"modified\" | \"removed\";\n\tbeforeChangeId: string | null;\n\tafterChangeId: string | null;\n};\n\nexport type Lix = {\n\t/**\n\t * Executes one DataFusion SQL statement against this Lix session.\n\t *\n\t * This is not SQLite SQL. Use the DataFusion SQL dialect; positional\n\t * placeholders are `$1`, `$2`, and so on. SQLite-specific catalog tables and\n\t * transaction statements such as `sqlite_master`, `BEGIN`, and `COMMIT` are\n\t * not available. Use `information_schema` for catalog inspection.\n\t */\n\texecute(\n\t\tsql: string,\n\t\tparams?: ReadonlyArray<LixRuntimeValue>,\n\t): Promise<ExecuteResult>;\n\tactiveVersionId(): Promise<string>;\n\tcreateVersion(options: CreateVersionOptions): Promise<CreateVersionResult>;\n\tswitchVersion(options: SwitchVersionOptions): Promise<SwitchVersionResult>;\n\tmergeVersionPreview(\n\t\toptions: MergeVersionOptions,\n\t): Promise<MergeVersionPreviewResult>;\n\tmergeVersion(options: MergeVersionOptions): Promise<MergeVersionResult>;\n\tclose(): Promise<void>;\n};\n\nlet wasmReady: Promise<void> | null = null;\n\ntype WasmExecuteResult = {\n\tcolumns: string[];\n\trows: unknown[][];\n\trowsAffected: number;\n\tnotices?: LixNotice[];\n};\n\ntype WasmLix = {\n\t/**\n\t * Executes one DataFusion SQL statement. 
See `Lix.execute` for the public\n\t * SQL contract.\n\t */\n\texecute(sql: string, params: unknown[]): Promise<WasmExecuteResult>;\n\tactiveVersionId(): Promise<string>;\n\tcreateVersion(options: CreateVersionOptions): Promise<CreateVersionResult>;\n\tswitchVersion(options: SwitchVersionOptions): Promise<SwitchVersionResult>;\n\tmergeVersionPreview(\n\t\toptions: MergeVersionOptions,\n\t): Promise<MergeVersionPreviewResult>;\n\tmergeVersion(options: MergeVersionOptions): Promise<MergeVersionResult>;\n\tclose(): Promise<void>;\n};\n\nasync function ensureWasmReady(): Promise<void> {\n\tif (!wasmReady) {\n\t\twasmReady = resolveEngineWasmModuleOrPath()\n\t\t\t.then((module_or_path) => init({ module_or_path }))\n\t\t\t.then(() => undefined);\n\t}\n\tawait wasmReady;\n}\n\nexport async function openLix(\n\toptions: OpenLixOptions = {},\n): Promise<Lix> {\n\tawait ensureWasmReady();\n\ttry {\n\t\tconst wasmLix = (await (wasmModule as unknown as {\n\t\t\topenLix(options: OpenLixOptions): Promise<WasmLix>;\n\t\t}).openLix(options)) as WasmLix;\n\t\treturn createLixHandle(wasmLix);\n\t} catch (error) {\n\t\ttry {\n\t\t\toptions.backend?.close?.();\n\t\t} catch {\n\t\t\t// Preserve the original open failure.\n\t\t}\n\t\tthrow normalizeThrownError(error);\n\t}\n}\n\nfunction createLixHandle(wasmLix: WasmLix): Lix {\n\tlet operationQueue: Promise<void> = Promise.resolve();\n\n\tconst acquireOperationSlot = async (): Promise<() => void> => {\n\t\tconst previous = operationQueue;\n\t\tlet releaseCurrent: (() => void) | undefined;\n\t\tconst current = new Promise<void>((resolve) => {\n\t\t\treleaseCurrent = resolve;\n\t\t});\n\t\toperationQueue = previous.then(() => current);\n\t\tawait previous;\n\t\treturn () => releaseCurrent?.();\n\t};\n\n\tconst runQueued = async <T>(operation: () => Promise<T>): Promise<T> => {\n\t\tconst release = await acquireOperationSlot();\n\t\ttry {\n\t\t\treturn await operation();\n\t\t} catch (error) {\n\t\t\tthrow 
normalizeThrownError(error);\n\t\t} finally {\n\t\t\trelease();\n\t\t}\n\t};\n\n\treturn {\n\t\tasync execute(\n\t\t\tsql: string,\n\t\t\tparams: ReadonlyArray<LixRuntimeValue> = [],\n\t\t): Promise<ExecuteResult> {\n\t\t\tvalidateExecuteArguments(sql, params);\n\t\t\tconst values = params.map((param, index) =>\n\t\t\t\tvalueFromExecuteParam(param, index),\n\t\t\t);\n\t\t\tconst result = await runQueued(() =>\n\t\t\t\twasmLix.execute(sql, values),\n\t\t\t);\n\t\t\treturn normalizeExecuteResult(result);\n\t\t},\n\n\t\tasync activeVersionId(): Promise<string> {\n\t\t\treturn await runQueued(() => wasmLix.activeVersionId());\n\t\t},\n\n\t\tasync createVersion(\n\t\t\toptions: CreateVersionOptions,\n\t\t): Promise<CreateVersionResult> {\n\t\t\treturn await runQueued(() => wasmLix.createVersion(options));\n\t\t},\n\n\t\tasync switchVersion(\n\t\t\toptions: SwitchVersionOptions,\n\t\t): Promise<SwitchVersionResult> {\n\t\t\treturn await runQueued(() => wasmLix.switchVersion(options));\n\t\t},\n\n\t\tasync mergeVersionPreview(\n\t\t\toptions: MergeVersionOptions,\n\t\t): Promise<MergeVersionPreviewResult> {\n\t\t\treturn await runQueued(() => wasmLix.mergeVersionPreview(options));\n\t\t},\n\n\t\tasync mergeVersion(options: MergeVersionOptions): Promise<MergeVersionResult> {\n\t\t\treturn await runQueued(() => wasmLix.mergeVersion(options));\n\t\t},\n\n\t\tasync close(): Promise<void> {\n\t\t\tawait runQueued(() => wasmLix.close());\n\t\t},\n\t};\n}\n\nfunction validateExecuteArguments(\n\tsql: unknown,\n\tparams: unknown,\n): asserts sql is string {\n\tif (typeof sql !== \"string\") {\n\t\tthrow invalidArgumentError(\"execute\", \"sql\", \"string\", sql);\n\t}\n\tif (!Array.isArray(params)) {\n\t\tthrow invalidArgumentError(\"execute\", \"params\", \"array\", params);\n\t}\n}\n\nfunction invalidArgumentError(\n\toperation: string,\n\targument: string,\n\texpected: string,\n\tactualValue: unknown,\n): LixError {\n\treturn 
createLixError(\n\t\t\"LIX_INVALID_ARGUMENT\",\n\t\t`lix.${operation}() expected ${argument} to be ${expectedArticle(expected)} ${expected}`,\n\t\t{\n\t\t\tdetails: {\n\t\t\t\toperation,\n\t\t\t\targument,\n\t\t\t\texpected,\n\t\t\t\tactual: runtimeTypeName(actualValue),\n\t\t\t},\n\t\t},\n\t);\n}\n\nfunction valueFromExecuteParam(param: LixRuntimeValue, index: number): Value {\n\ttry {\n\t\treturn Value.from(param);\n\t} catch (error) {\n\t\tthrow invalidParamError(index, param, error);\n\t}\n}\n\nfunction invalidParamError(\n\tindex: number,\n\tactualValue: unknown,\n\tcause: unknown,\n): LixError {\n\tconst message =\n\t\tcause instanceof Error && cause.message\n\t\t\t? cause.message\n\t\t\t: \"parameter is not a valid Lix SQL value\";\n\treturn createLixError(\n\t\t\"LIX_INVALID_PARAM\",\n\t\t`lix.execute() invalid parameter $${index + 1}: ${message}`,\n\t\t{\n\t\t\tdetails: {\n\t\t\t\toperation: \"execute\",\n\t\t\t\tparameter_index: index + 1,\n\t\t\t\targument: `params[${index}]`,\n\t\t\t\tactual: runtimeTypeName(actualValue),\n\t\t\t},\n\t\t\tcause,\n\t\t},\n\t);\n}\n\nfunction expectedArticle(expected: string): \"a\" | \"an\" {\n\treturn /^[aeiou]/i.test(expected) ? \"an\" : \"a\";\n}\n\nfunction runtimeTypeName(value: unknown): string {\n\tif (value === null) return \"null\";\n\tif (Array.isArray(value)) return \"array\";\n\tif (value instanceof Date) return \"Date\";\n\tif (value instanceof ArrayBuffer) return \"ArrayBuffer\";\n\tif (ArrayBuffer.isView(value)) return value.constructor.name;\n\treturn typeof value;\n}\n\nfunction normalizeExecuteResult(result: WasmExecuteResult): ExecuteResult {\n\tconst columns = [...result.columns];\n\treturn {\n\t\tcolumns,\n\t\trows: result.rows.map(\n\t\t\t(row) => new Row(columns, row.map((value) => Value.from(value))),\n\t\t),\n\t\trowsAffected: result.rowsAffected,\n\t\tnotices: result.notices ?? 
[],\n\t};\n}\n\nfunction createLixError(\n\tcode: string,\n\tmessage: string,\n\toptions: { hint?: string; details?: unknown; cause?: unknown } = {},\n): LixError {\n\tconst error = new Error(message) as LixError;\n\terror.name = \"LixError\";\n\terror.code = code;\n\tif (options.hint !== undefined) {\n\t\terror.hint = options.hint;\n\t}\n\tif (options.details !== undefined) {\n\t\terror.details = options.details;\n\t}\n\tif (options.cause !== undefined) {\n\t\t(error as Error & { cause?: unknown }).cause = options.cause;\n\t}\n\treturn error;\n}\n\nfunction normalizeThrownError(error: unknown): LixError {\n\tif (isLixErrorLike(error)) {\n\t\tconst hint =\n\t\t\ttypeof error.hint === \"string\"\n\t\t\t\t? error.hint\n\t\t\t\t: extractHintFromMessage(error.message);\n\t\tconst details = \"details\" in error ? error.details : undefined;\n\t\tif (error instanceof Error) {\n\t\t\tif (hint !== undefined && error.hint === undefined) {\n\t\t\t\terror.hint = hint;\n\t\t\t}\n\t\t\tif (details !== undefined && error.details === undefined) {\n\t\t\t\terror.details = details;\n\t\t\t}\n\t\t\treturn error;\n\t\t}\n\t\tconst message = typeof error.message === \"string\" ? error.message : error.code;\n\t\treturn createLixError(error.code, message, { hint, details });\n\t}\n\n\tif (error instanceof WebAssembly.RuntimeError) {\n\t\treturn createLixError(\"LIX_WASM_RUNTIME_ERROR\", error.message, {\n\t\t\thint: \"The Lix engine encountered a WebAssembly runtime trap. 
Please report this as an engine bug with the SQL statement or API call that triggered it.\",\n\t\t\tcause: error,\n\t\t});\n\t}\n\n\tif (error instanceof Error) {\n\t\treturn createLixError(\"LIX_ERROR_UNKNOWN\", error.message, { cause: error });\n\t}\n\n\treturn createLixError(\"LIX_ERROR_UNKNOWN\", String(error));\n}\n\nfunction extractHintFromMessage(message: unknown): string | undefined {\n\tif (typeof message !== \"string\") return undefined;\n\tconst match = message.match(/(?:^|\\n)hint:\\s*(.+)$/s);\n\treturn match?.[1]?.trim();\n}\n\nfunction isLixErrorLike(error: unknown): error is {\n\tcode: string;\n\tmessage?: string;\n\thint?: string;\n\tdetails?: unknown;\n} {\n\treturn (\n\t\ttypeof error === \"object\" &&\n\t\terror !== null &&\n\t\ttypeof (error as { code?: unknown }).code === \"string\" &&\n\t\t(error as { code: string }).code.startsWith(\"LIX_\")\n\t);\n}\n"
  },
  {
    "path": "packages/js-sdk/src/sqlite/better-sqlite3.d.ts",
    "content": "declare module \"better-sqlite3\" {\n\texport type DatabaseOptions = {\n\t\treadonly?: boolean;\n\t\tfileMustExist?: boolean;\n\t\ttimeout?: number;\n\t\tverbose?: (message?: unknown, ...additional: unknown[]) => void;\n\t};\n\n\texport type Statement = {\n\t\tget(...params: unknown[]): unknown;\n\t\tall(...params: unknown[]): unknown[];\n\t\trun(...params: unknown[]): unknown;\n\t};\n\n\texport type Database = {\n\t\treadonly inTransaction: boolean;\n\t\texec(sql: string): Database;\n\t\tprepare(sql: string): Statement;\n\t\tpragma(source: string, options?: unknown): unknown;\n\t\tclose(): void;\n\t};\n\n\ttype DatabaseConstructor = {\n\t\tnew (filename: string, options?: DatabaseOptions): Database;\n\t\t(filename: string, options?: DatabaseOptions): Database;\n\t};\n\n\tconst Database: DatabaseConstructor;\n\texport default Database;\n}\n"
  },
  {
    "path": "packages/js-sdk/src/sqlite/index.test.ts",
    "content": "import { expect, test } from \"vitest\";\nimport { openLix, Value, type ExecuteResult, type Lix } from \"../index.js\";\n\nconst hasBetterSqlite3 = await import(\"better-sqlite3\").then(\n\t() => true,\n\t() => false,\n);\n\ntest.runIf(hasBetterSqlite3)(\n\t\"createBetterSqlite3Backend can back a Lix session\",\n\tasync () => {\n\t\tconst { createBetterSqlite3Backend } = await import(\"./index.js\");\n\t\tconst backend = createBetterSqlite3Backend({ path: \":memory:\" });\n\t\tconst lix = await openLix({ backend });\n\n\t\tawait registerCrmTaskSchema(lix);\n\t\tawait lix.execute(\n\t\t\t\"INSERT INTO crm_task (id, title, done) VALUES ($1, $2, $3)\",\n\t\t\t[\"sqlite-task\", \"Ship better-sqlite3 backend\", false],\n\t\t);\n\n\t\texpect(await taskTitle(lix, \"sqlite-task\")).toBe(\n\t\t\t\"Ship better-sqlite3 backend\",\n\t\t);\n\t\tawait lix.close();\n\t},\n);\n\ntest.runIf(hasBetterSqlite3)(\n\t\"committed writes survive close and reopen\",\n\tasync () => {\n\t\tconst { createBetterSqlite3Backend } = await import(\"./index.js\");\n\t\tconst file = tempLixPath();\n\t\tconst first = await openLix({\n\t\t\tbackend: createBetterSqlite3Backend({ path: file }),\n\t\t});\n\n\t\tawait registerCrmTaskSchema(first);\n\t\tawait first.execute(\n\t\t\t\"INSERT INTO crm_task (id, title, done) VALUES ($1, $2, $3)\",\n\t\t\t[\"persistent-task\", \"Persist before close\", false],\n\t\t);\n\t\tawait first.close();\n\n\t\tconst second = await openLix({\n\t\t\tbackend: createBetterSqlite3Backend({ path: file }),\n\t\t});\n\n\t\texpect(await taskTitle(second, \"persistent-task\")).toBe(\n\t\t\t\"Persist before close\",\n\t\t);\n\t\tawait second.close();\n\t},\n);\n\ntest.runIf(hasBetterSqlite3)(\n\t\"createBetterSqlite3Backend rejects a second handle for the same file\",\n\tasync () => {\n\t\tconst { createBetterSqlite3Backend } = await import(\"./index.js\");\n\t\tconst file = tempLixPath();\n\t\tconst firstBackend = createBetterSqlite3Backend({ path: file 
});\n\t\tconst first = await openLix({ backend: firstBackend });\n\n\t\texpect(() => createBetterSqlite3Backend({ path: file })).toThrow(\n\t\t\t/already has an open handle/,\n\t\t);\n\n\t\tawait first.close();\n\t\tconst second = await openLix({\n\t\t\tbackend: createBetterSqlite3Backend({ path: file }),\n\t\t});\n\t\tawait second.close();\n\t},\n);\n\nasync function registerCrmTaskSchema(lix: Lix) {\n\tconst schema = {\n\t\t$schema: \"https://json-schema.org/draft/2020-12/schema\",\n\t\t\"x-lix-key\": \"crm_task\",\n\t\t\"x-lix-primary-key\": [\"/id\"],\n\t\ttype: \"object\",\n\t\trequired: [\"id\", \"title\", \"done\"],\n\t\tproperties: {\n\t\t\tid: { type: \"string\" },\n\t\t\ttitle: { type: \"string\" },\n\t\t\tdone: { type: \"boolean\" },\n\t\t},\n\t\tadditionalProperties: false,\n\t} as const;\n\n\tawait lix.execute(\n\t\t\"INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))\",\n\t\t[JSON.stringify(schema)],\n\t);\n}\n\nasync function taskTitle(lix: Lix, taskId: string): Promise<string> {\n\tconst result = await lix.execute(\n\t\t\"SELECT title FROM crm_task WHERE id = $1\",\n\t\t[taskId],\n\t);\n\tconst rows = expectRows(result);\n\texpect(rows.rows).toHaveLength(1);\n\tconst title = rows.rows[0]?.get(\"title\");\n\texpect(typeof title).toBe(\"string\");\n\treturn title as string;\n}\n\nfunction tempLixPath(): string {\n\treturn `/tmp/lix-sqlite-test-${Date.now()}-${Math.random()\n\t\t.toString(16)\n\t\t.slice(2)}.lix`;\n}\n\nfunction expectRows(result: ExecuteResult) {\n\treturn result;\n}\n"
  },
  {
    "path": "packages/js-sdk/src/sqlite/index.ts",
    "content": "import DatabaseConstructor, { type Database } from \"better-sqlite3\";\nimport type {\n\tBackendKvEntryPage,\n\tBackendKvExistsBatch,\n\tBackendKvGetRequest,\n\tBackendKvKeyPage,\n\tBackendKvScanRange,\n\tBackendKvScanRequest,\n\tBackendKvValueBatch,\n\tBackendKvValuePage,\n\tBackendKvWriteBatch,\n\tBackendKvWriteStats,\n\tLixBackend,\n\tLixBackendReadTransaction,\n\tLixBackendWriteTransaction,\n} from \"../open-lix.js\";\n\nexport type BetterSqlite3BackendOptions = {\n\tpath: string;\n\tdatabaseOptions?: BetterSqlite3DatabaseOptions;\n};\n\nexport type BetterSqlite3DatabaseOptions = {\n\treadonly?: boolean;\n\tfileMustExist?: boolean;\n\ttimeout?: number;\n\tverbose?: (message?: unknown, ...additional: unknown[]) => void;\n};\n\nconst openFileHandles = new Set<string>();\n\nexport function createBetterSqlite3Backend(\n\toptions: BetterSqlite3BackendOptions,\n): LixBackend {\n\tif (!options.path) {\n\t\tthrow new Error(\"createBetterSqlite3Backend() requires a non-empty path\");\n\t}\n\tconst registryKey = registryKeyForPath(options.path);\n\tif (registryKey && openFileHandles.has(registryKey)) {\n\t\tthrow doubleOpenError(options.path);\n\t}\n\tlet activeRegistryKey: string | null = registryKey;\n\tlet db: Database | undefined;\n\tif (activeRegistryKey) {\n\t\topenFileHandles.add(activeRegistryKey);\n\t}\n\ttry {\n\t\tdb = new DatabaseConstructor(options.path, options.databaseOptions);\n\t\tinitializeDatabase(db);\n\t\treturn new BetterSqlite3Backend(db, activeRegistryKey);\n\t} catch (error) {\n\t\tif (activeRegistryKey) {\n\t\t\topenFileHandles.delete(activeRegistryKey);\n\t\t}\n\t\tif (db) {\n\t\t\ttry {\n\t\t\t\tdb.close();\n\t\t\t} catch {\n\t\t\t\t// Ignore close errors while preserving the original open failure.\n\t\t\t}\n\t\t}\n\t\tthrow error;\n\t}\n}\n\nfunction initializeDatabase(db: Database): void {\n\tdb.exec(`\n\t\tCREATE TABLE IF NOT EXISTS lix_kv (\n\t\t\tnamespace TEXT NOT NULL,\n\t\t\tkey BLOB NOT NULL,\n\t\t\tvalue BLOB NOT 
NULL,\n\t\t\tPRIMARY KEY (namespace, key)\n\t\t) WITHOUT ROWID\n\t`);\n}\n\nclass BetterSqlite3Backend implements LixBackend {\n\treadonly #db: Database;\n\treadonly #registryKey: string | null;\n\t#closed = false;\n\n\tconstructor(db: Database, registryKey: string | null) {\n\t\tthis.#db = db;\n\t\tthis.#registryKey = registryKey;\n\t}\n\n\tbeginReadTransaction(): LixBackendReadTransaction {\n\t\tthis.#ensureOpen();\n\t\tif (this.#db.inTransaction) {\n\t\t\tthrow new Error(\"cannot open nested Lix backend transaction\");\n\t\t}\n\t\tthis.#db.exec(\"BEGIN DEFERRED\");\n\t\treturn new BetterSqlite3Transaction(this.#db);\n\t}\n\n\tbeginWriteTransaction(): LixBackendWriteTransaction {\n\t\tthis.#ensureOpen();\n\t\tif (this.#db.inTransaction) {\n\t\t\tthrow new Error(\"cannot open nested Lix backend transaction\");\n\t\t}\n\t\tthis.#db.exec(\"BEGIN IMMEDIATE\");\n\t\treturn new BetterSqlite3Transaction(this.#db);\n\t}\n\n\tclose(): void {\n\t\tif (this.#closed) return;\n\t\ttry {\n\t\t\tthis.#db.close();\n\t\t} finally {\n\t\t\tthis.#closed = true;\n\t\t\tif (this.#registryKey) {\n\t\t\t\topenFileHandles.delete(this.#registryKey);\n\t\t\t}\n\t\t}\n\t}\n\n\t#ensureOpen(): void {\n\t\tif (this.#closed) {\n\t\t\tthrow new Error(\"better-sqlite3 Lix backend is closed\");\n\t\t}\n\t}\n}\n\nclass BetterSqlite3Transaction implements LixBackendWriteTransaction {\n\treadonly #db: Database;\n\t#closed = false;\n\n\tconstructor(db: Database) {\n\t\tthis.#db = db;\n\t}\n\n\tgetValues(request: BackendKvGetRequest): BackendKvValueBatch {\n\t\tthis.#ensureOpen();\n\t\treturn getValues(this.#db, request);\n\t}\n\n\texistsMany(request: BackendKvGetRequest): BackendKvExistsBatch {\n\t\tthis.#ensureOpen();\n\t\treturn existsMany(this.#db, request);\n\t}\n\n\tscanKeys(request: BackendKvScanRequest): BackendKvKeyPage {\n\t\tthis.#ensureOpen();\n\t\tconst { pairs, resumeAfter } = scanPage(this.#db, request);\n\t\treturn {\n\t\t\tkeys: pairs.map(({ key }) => 
key),\n\t\t\tresumeAfter,\n\t\t};\n\t}\n\n\tscanValues(request: BackendKvScanRequest): BackendKvValuePage {\n\t\tthis.#ensureOpen();\n\t\tconst { pairs, resumeAfter } = scanPage(this.#db, request);\n\t\treturn {\n\t\t\tvalues: pairs.map(({ value }) => value),\n\t\t\tresumeAfter,\n\t\t};\n\t}\n\n\tscanEntries(request: BackendKvScanRequest): BackendKvEntryPage {\n\t\tthis.#ensureOpen();\n\t\tconst { pairs, resumeAfter } = scanPage(this.#db, request);\n\t\treturn {\n\t\t\tkeys: pairs.map(({ key }) => key),\n\t\t\tvalues: pairs.map(({ value }) => value),\n\t\t\tresumeAfter,\n\t\t};\n\t}\n\n\twriteKvBatch(batch: BackendKvWriteBatch): BackendKvWriteStats {\n\t\tthis.#ensureOpen();\n\t\tconst stats: BackendKvWriteStats = {\n\t\t\tputs: 0,\n\t\t\tdeletes: 0,\n\t\t\tbytesWritten: 0,\n\t\t};\n\t\tfor (const group of batch.groups) {\n\t\t\tfor (const put of group.puts) {\n\t\t\t\tstats.puts += 1;\n\t\t\t\tstats.bytesWritten += put.key.length + put.value.length;\n\t\t\t\tkvPut(this.#db, group.namespace, put.key, put.value);\n\t\t\t}\n\t\t\tfor (const key of group.deletes) {\n\t\t\t\tstats.deletes += 1;\n\t\t\t\tstats.bytesWritten += key.length;\n\t\t\t\tkvDelete(this.#db, group.namespace, key);\n\t\t\t}\n\t\t}\n\t\treturn stats;\n\t}\n\n\tcommit(): void {\n\t\tthis.#ensureOpen();\n\t\tthis.#db.exec(\"COMMIT\");\n\t\tthis.#closed = true;\n\t}\n\n\trollback(): void {\n\t\tthis.#ensureOpen();\n\t\tthis.#db.exec(\"ROLLBACK\");\n\t\tthis.#closed = true;\n\t}\n\n\t#ensureOpen(): void {\n\t\tif (this.#closed) {\n\t\t\tthrow new Error(\"Lix backend transaction is closed\");\n\t\t}\n\t}\n}\n\ntype KvPair = {\n\tkey: Uint8Array;\n\tvalue: Uint8Array;\n};\n\nfunction getValues(\n\tdb: Database,\n\trequest: BackendKvGetRequest,\n): BackendKvValueBatch {\n\treturn {\n\t\tgroups: request.groups.map((group) => ({\n\t\t\tnamespace: group.namespace,\n\t\t\tvalues: group.keys.map((key) => kvGet(db, group.namespace, key)),\n\t\t})),\n\t};\n}\n\nfunction existsMany(\n\tdb: Database,\n\trequest: 
BackendKvGetRequest,\n): BackendKvExistsBatch {\n\treturn {\n\t\tgroups: request.groups.map((group) => ({\n\t\t\tnamespace: group.namespace,\n\t\t\texists: group.keys.map(\n\t\t\t\t(key) => kvGet(db, group.namespace, key) !== null,\n\t\t\t),\n\t\t})),\n\t};\n}\n\nfunction scanPage(\n\tdb: Database,\n\trequest: BackendKvScanRequest,\n): { pairs: KvPair[]; resumeAfter: Uint8Array | null } {\n\tconst after = request.after ?? null;\n\t// Fetch one row beyond the page size so the caller can tell whether more\n\t// rows remain. Rows at or before the resume key are filtered out, so\n\t// widen the scan until the filtered result fills the page (plus the\n\t// look-ahead row) or the underlying scan is exhausted. A fixed over-fetch\n\t// would under-report pages once the resume key sits deep in the range.\n\tlet scanLimit = request.limit + 1;\n\tfor (;;) {\n\t\tconst scanned = kvScan(db, request.namespace, request.range, scanLimit);\n\t\tconst exhausted = scanned.length < scanLimit;\n\t\tconst pairs = after\n\t\t\t? scanned.filter((pair) => compareBytes(pair.key, after) > 0)\n\t\t\t: scanned;\n\t\tif (exhausted || pairs.length > request.limit) {\n\t\t\tconst hasMore = pairs.length > request.limit;\n\t\t\tconst pagePairs = pairs.slice(0, request.limit);\n\t\t\treturn {\n\t\t\t\tpairs: pagePairs,\n\t\t\t\tresumeAfter: hasMore ? (pagePairs.at(-1)?.key ?? null) : null,\n\t\t\t};\n\t\t}\n\t\tscanLimit *= 2;\n\t}\n}\n\nfunction kvGet(\n\tdb: Database,\n\tnamespace: string,\n\tkey: Uint8Array,\n): Uint8Array | null {\n\tconst row = db\n\t\t.prepare(\"SELECT value FROM lix_kv WHERE namespace = ? AND key = ?\")\n\t\t.get(namespace, sqliteBytes(key));\n\tif (!isObject(row) || !(\"value\" in row)) {\n\t\treturn null;\n\t}\n\treturn bytesFromUnknown(row.value, \"lix_kv.value\");\n}\n\nfunction kvPut(\n\tdb: Database,\n\tnamespace: string,\n\tkey: Uint8Array,\n\tvalue: Uint8Array,\n): void {\n\tdb.prepare(\n\t\t`INSERT INTO lix_kv (namespace, key, value)\n\t\t VALUES (?, ?, ?)\n\t\t ON CONFLICT(namespace, key) DO UPDATE SET value = excluded.value`,\n\t).run(namespace, sqliteBytes(key), sqliteBytes(value));\n}\n\nfunction kvDelete(db: Database, namespace: string, key: Uint8Array): void {\n\tdb.prepare(\"DELETE FROM lix_kv WHERE namespace = ? 
AND key = ?\").run(\n\t\tnamespace,\n\t\tsqliteBytes(key),\n\t);\n}\n\nfunction kvScan(\n\tdb: Database,\n\tnamespace: string,\n\trange: BackendKvScanRange,\n\tlimit?: number | null,\n): KvPair[] {\n\tconst { sql, params } = scanQuery(namespace, range, limit);\n\treturn db.prepare(sql).all(...params).map((row) => {\n\t\tif (!isObject(row) || !(\"key\" in row) || !(\"value\" in row)) {\n\t\t\tthrow new Error(\"invalid lix_kv scan row\");\n\t\t}\n\t\treturn {\n\t\t\tkey: bytesFromUnknown(row.key, \"lix_kv.key\"),\n\t\t\tvalue: bytesFromUnknown(row.value, \"lix_kv.value\"),\n\t\t};\n\t});\n}\n\nfunction scanQuery(\n\tnamespace: string,\n\trange: BackendKvScanRange,\n\tlimit?: number | null,\n): { sql: string; params: unknown[] } {\n\tconst params: unknown[] = [namespace];\n\tconst clauses = [\"namespace = ?\"];\n\n\tif (range.kind === \"prefix\") {\n\t\tclauses.push(\"key >= ?\");\n\t\tparams.push(sqliteBytes(range.prefix));\n\t\tconst end = prefixUpperBound(range.prefix);\n\t\tif (end) {\n\t\t\tclauses.push(\"key < ?\");\n\t\t\tparams.push(sqliteBytes(end));\n\t\t}\n\t} else {\n\t\tclauses.push(\"key >= ?\", \"key < ?\");\n\t\tparams.push(sqliteBytes(range.start), sqliteBytes(range.end));\n\t}\n\n\tlet sql = `SELECT key, value FROM lix_kv WHERE ${clauses.join(\n\t\t\" AND \",\n\t)} ORDER BY key`;\n\tif (limit != null) {\n\t\tsql += \" LIMIT ?\";\n\t\tparams.push(limit);\n\t}\n\treturn { sql, params };\n}\n\nfunction compareBytes(left: Uint8Array, right: Uint8Array): number {\n\tconst length = Math.min(left.length, right.length);\n\tfor (let index = 0; index < length; index++) {\n\t\tconst delta = left[index]! - right[index]!;\n\t\tif (delta !== 0) return delta;\n\t}\n\treturn left.length - right.length;\n}\n\nfunction prefixUpperBound(prefix: Uint8Array): Uint8Array | null {\n\tconst end = new Uint8Array(prefix);\n\tfor (let index = end.length - 1; index >= 0; index--) {\n\t\tif (end[index] !== 0xff) {\n\t\t\tend[index]! 
+= 1;\n\t\t\treturn end.slice(0, index + 1);\n\t\t}\n\t}\n\treturn null;\n}\n\nfunction bytesFromUnknown(value: unknown, context: string): Uint8Array {\n\tif (value instanceof Uint8Array) {\n\t\treturn new Uint8Array(value);\n\t}\n\tthrow new Error(`${context} must be bytes`);\n}\n\nfunction sqliteBytes(bytes: Uint8Array): Uint8Array {\n\tconst buffer = (\n\t\tglobalThis as typeof globalThis & {\n\t\t\tBuffer?: { from(bytes: Uint8Array): Uint8Array };\n\t\t}\n\t).Buffer;\n\treturn buffer ? buffer.from(bytes) : bytes;\n}\n\nfunction registryKeyForPath(filename: string): string | null {\n\tif (filename === \":memory:\") {\n\t\treturn null;\n\t}\n\tif (filename.startsWith(\"/\")) {\n\t\treturn normalizeAbsolutePath(filename);\n\t}\n\tconst cwd =\n\t\t(\n\t\t\tglobalThis as typeof globalThis & {\n\t\t\t\tprocess?: { cwd?: () => string };\n\t\t\t}\n\t\t).process?.cwd?.() ?? \"/\";\n\treturn normalizeAbsolutePath(`${cwd}/${filename}`);\n}\n\nfunction normalizeAbsolutePath(filename: string): string {\n\tconst segments: string[] = [];\n\tfor (const segment of filename.split(\"/\")) {\n\t\tif (!segment || segment === \".\") {\n\t\t\tcontinue;\n\t\t}\n\t\tif (segment === \"..\") {\n\t\t\tsegments.pop();\n\t\t\tcontinue;\n\t\t}\n\t\tsegments.push(segment);\n\t}\n\treturn `/${segments.join(\"/\")}`;\n}\n\nfunction doubleOpenError(filename: string): Error {\n\treturn new Error(\n\t\t`createBetterSqlite3Backend() already has an open handle for ${filename}; close the existing Lix handle before opening this file again`,\n\t);\n}\n\nfunction isObject(value: unknown): value is Record<string, unknown> {\n\treturn typeof value === \"object\" && value !== null;\n}\n"
  },
  {
    "path": "packages/js-sdk/src/types.ts",
    "content": "export type { JsonValue, LixRuntimeValue } from \"./open-lix.js\";\n"
  },
  {
    "path": "packages/js-sdk/tsconfig.json",
    "content": "{\n\t\"compilerOptions\": {\n\t\t\"target\": \"ES2022\",\n\t\t\"module\": \"NodeNext\",\n\t\t\"moduleResolution\": \"NodeNext\",\n\t\t\"allowJs\": true,\n\t\t\"strict\": true,\n\t\t\"declaration\": true,\n\t\t\"outDir\": \"dist\",\n\t\t\"skipLibCheck\": true\n\t},\n\t\"include\": [\"src\"],\n\t\"exclude\": [\n\t\t\"src/**/*.test.ts\",\n\t\t\"src/engine-wasm/wasm/lix_engine_wasm_bindgen_bg.wasm.d.ts\"\n\t]\n}\n"
  },
  {
    "path": "packages/js-sdk/vitest.config.ts",
    "content": "import { defineConfig } from \"vitest/config\";\n\nexport default defineConfig({\n\ttest: {\n\t\tenvironment: \"node\",\n\t\tinclude: [\"src/**/*.test.ts\"],\n\t\texclude: [\"dist/**\"],\n\t},\n});\n"
  },
  {
    "path": "packages/js-sdk/wasm-bindgen.rs",
    "content": "#[cfg(target_arch = \"wasm32\")]\nmod wasm {\n    use async_trait::async_trait;\n    use js_sys::{Array, Object, Reflect};\n    use lix_rs_sdk::{\n        open_lix as open_lix_rs, Backend, BackendKvEntryPage, BackendKvExistsBatch,\n        BackendKvExistsGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange,\n        BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage,\n        BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction,\n        BytePageBuilder, CreateVersionOptions, ExecuteResult, Lix as RsLix, LixError,\n        MergeVersionOptions, MergeVersionPreviewOptions, OpenLixOptions, SwitchVersionOptions,\n        Value,\n    };\n    use serde::Serialize;\n    use serde_json::json;\n    use wasm_bindgen::prelude::*;\n    use wasm_bindgen::JsCast;\n\n    #[wasm_bindgen(typescript_custom_section)]\n    const LIX_TYPES: &str = r#\"\nexport type JsonValue =\n  | null\n  | boolean\n  | number\n  | string\n  | JsonValue[]\n  | { [key: string]: JsonValue };\n\nexport type LixValue =\n  | { kind: \"null\"; value: null }\n  | { kind: \"boolean\"; value: boolean }\n  | { kind: \"integer\"; value: number }\n  | { kind: \"real\"; value: number }\n  | { kind: \"text\"; value: string }\n  | { kind: \"json\"; value: JsonValue }\n  | { kind: \"blob\"; base64: string };\n\nexport type ExecuteResult = {\n  columns: string[];\n  rows: LixValue[][];\n  rowsAffected: number;\n  notices: LixNotice[];\n};\n\nexport type LixNotice = {\n  code: string;\n  message: string;\n  hint?: string;\n};\n\nexport type BackendKvScanRange =\n  | { kind: \"prefix\"; prefix: Uint8Array }\n  | { kind: \"range\"; start: Uint8Array; end: Uint8Array };\n\nexport type BackendKvGetRequest = {\n  groups: BackendKvGetGroup[];\n};\n\nexport type BackendKvGetGroup = {\n  namespace: string;\n  keys: Uint8Array[];\n};\n\nexport type BackendKvValueBatch = {\n  groups: BackendKvValueGroup[];\n};\n\nexport type 
BackendKvValueGroup = {\n  namespace: string;\n  values: Array<Uint8Array | null>;\n};\n\nexport type BackendKvExistsBatch = {\n  groups: BackendKvExistsGroup[];\n};\n\nexport type BackendKvExistsGroup = {\n  namespace: string;\n  exists: boolean[];\n};\n\nexport type BackendKvScanRequest = {\n  namespace: string;\n  range: BackendKvScanRange;\n  after?: Uint8Array | null;\n  limit: number;\n};\n\nexport type BackendKvKeyPage = {\n  keys: Uint8Array[];\n  resumeAfter?: Uint8Array | null;\n};\n\nexport type BackendKvValuePage = {\n  values: Uint8Array[];\n  resumeAfter?: Uint8Array | null;\n};\n\nexport type BackendKvEntryPage = {\n  keys: Uint8Array[];\n  values: Uint8Array[];\n  resumeAfter?: Uint8Array | null;\n};\n\nexport type BackendKvPut = {\n  key: Uint8Array;\n  value: Uint8Array;\n};\n\nexport type BackendKvWriteBatch = {\n  groups: BackendKvWriteGroup[];\n};\n\nexport type BackendKvWriteGroup = {\n  namespace: string;\n  puts: BackendKvPut[];\n  deletes: Uint8Array[];\n};\n\nexport type BackendKvWriteStats = {\n  puts: number;\n  deletes: number;\n  bytesWritten: number;\n};\n\nexport type BackendReadTransaction = {\n  getValues(request: BackendKvGetRequest): BackendKvValueBatch;\n  existsMany(request: BackendKvGetRequest): BackendKvExistsBatch;\n  scanKeys(request: BackendKvScanRequest): BackendKvKeyPage;\n  scanValues(request: BackendKvScanRequest): BackendKvValuePage;\n  scanEntries(request: BackendKvScanRequest): BackendKvEntryPage;\n  rollback(): void;\n};\n\nexport type BackendWriteTransaction = BackendReadTransaction & {\n  writeKvBatch(batch: BackendKvWriteBatch): BackendKvWriteStats;\n  commit(): void;\n};\n\nexport type Backend = {\n  beginReadTransaction(): BackendReadTransaction;\n  beginWriteTransaction(): BackendWriteTransaction;\n  close?(): void;\n};\n\nexport type OpenLixOptions = {\n  backend?: Backend;\n};\n\nexport type CreateVersionOptions = {\n  id?: string;\n  name: string;\n  fromCommitId?: string;\n};\n\nexport type 
CreateVersionResult = {\n  id: string;\n  name: string;\n  hidden: boolean;\n  commitId: string;\n};\n\nexport type SwitchVersionOptions = {\n  versionId: string;\n};\n\nexport type SwitchVersionResult = {\n  versionId: string;\n};\n\nexport type MergeVersionOptions = {\n  sourceVersionId: string;\n};\n\nexport type MergeVersionOutcome =\n  | \"alreadyUpToDate\"\n  | \"fastForward\"\n  | \"mergeCommitted\";\n\nexport type MergeVersionResult = {\n  outcome: MergeVersionOutcome;\n  targetVersionId: string;\n  sourceVersionId: string;\n  baseCommitId: string;\n  targetHeadBeforeCommitId: string;\n  sourceHeadBeforeCommitId: string;\n  targetHeadAfterCommitId: string;\n  createdMergeCommitId: string | null;\n  changeStats: MergeChangeStats;\n};\n\nexport type MergeVersionPreviewResult = {\n  outcome: MergeVersionOutcome;\n  targetVersionId: string;\n  sourceVersionId: string;\n  baseCommitId: string;\n  targetHeadCommitId: string;\n  sourceHeadCommitId: string;\n  changeStats: MergeChangeStats;\n  conflicts: MergeConflict[];\n};\n\nexport type MergeChangeStats = {\n  total: number;\n  added: number;\n  modified: number;\n  removed: number;\n};\n\nexport type MergeConflict = {\n  kind: \"sameEntityChanged\";\n  schemaKey: string;\n  entityId: string[];\n  fileId: string | null;\n  target: MergeConflictSide;\n  source: MergeConflictSide;\n};\n\nexport type MergeConflictSide = {\n  kind: \"added\" | \"modified\" | \"removed\";\n  beforeChangeId: string | null;\n  afterChangeId: string | null;\n};\n\"#;\n\n    #[wasm_bindgen]\n    pub struct Lix {\n        inner: RsLix,\n    }\n\n    #[wasm_bindgen]\n    impl Lix {\n        /// Executes one DataFusion SQL statement against this Lix session.\n        ///\n        /// The SQL dialect is DataFusion SQL, not SQLite SQL. Positional\n        /// placeholders use `$1`, `$2`, and so on. 
SQLite-specific catalog\n        /// tables and transaction statements such as `sqlite_master`, `BEGIN`,\n        /// and `COMMIT` are not part of this contract; use\n        /// `information_schema` for catalog inspection.\n        #[wasm_bindgen(js_name = execute)]\n        pub async fn execute(&self, sql: JsValue, params: JsValue) -> Result<JsValue, JsValue> {\n            let sql = sql\n                .as_string()\n                .ok_or_else(|| invalid_argument_error(\"execute\", \"sql\", \"string\", &sql))\n                .map_err(js_error)?;\n            if !Array::is_array(&params) {\n                return Err(js_error(invalid_argument_error(\n                    \"execute\", \"params\", \"array\", &params,\n                )));\n            }\n            let params = Array::from(&params);\n            let values = params\n                .iter()\n                .map(value_from_js)\n                .collect::<Result<Vec<_>, _>>()\n                .map_err(js_error)?;\n            let result = self.inner.execute(&sql, &values).await.map_err(js_error)?;\n            execute_result_to_js(result).map_err(js_error)\n        }\n\n        #[wasm_bindgen(js_name = activeVersionId)]\n        pub async fn active_version_id(&self) -> Result<String, JsValue> {\n            self.inner.active_version_id().await.map_err(js_error)\n        }\n\n        #[wasm_bindgen(js_name = createVersion)]\n        pub async fn create_version(&self, args: JsValue) -> Result<JsValue, JsValue> {\n            let options = parse_create_version_options(args).map_err(js_error)?;\n            let result = self.inner.create_version(options).await.map_err(js_error)?;\n            let object = Object::new();\n            set_string(&object, \"id\", &result.id).map_err(js_error)?;\n            set_string(&object, \"name\", &result.name).map_err(js_error)?;\n            Reflect::set(\n                &object,\n                &JsValue::from_str(\"hidden\"),\n                
&JsValue::from_bool(result.hidden),\n            )\n            .map_err(|_| js_error(js_sdk_error(\"could not set hidden\")))?;\n            set_string(&object, \"commitId\", &result.commit_id).map_err(js_error)?;\n            Ok(object.into())\n        }\n\n        #[wasm_bindgen(js_name = switchVersion)]\n        pub async fn switch_version(&self, args: JsValue) -> Result<JsValue, JsValue> {\n            let options = parse_switch_version_options(args).map_err(js_error)?;\n            let result = self.inner.switch_version(options).await.map_err(js_error)?;\n            let object = Object::new();\n            set_string(&object, \"versionId\", &result.version_id).map_err(js_error)?;\n            Ok(object.into())\n        }\n\n        #[wasm_bindgen(js_name = mergeVersionPreview)]\n        pub async fn merge_version_preview(&self, args: JsValue) -> Result<JsValue, JsValue> {\n            let options = parse_merge_version_preview_options(args).map_err(js_error)?;\n            let result = self\n                .inner\n                .merge_version_preview(options)\n                .await\n                .map_err(js_error)?;\n            merge_version_preview_to_js(result).map_err(js_error)\n        }\n\n        #[wasm_bindgen(js_name = mergeVersion)]\n        pub async fn merge_version(&self, args: JsValue) -> Result<JsValue, JsValue> {\n            let options = parse_merge_version_options(args).map_err(js_error)?;\n            let result = self.inner.merge_version(options).await.map_err(js_error)?;\n            let object = Object::new();\n            let outcome = match result.outcome {\n                lix_rs_sdk::MergeVersionOutcome::AlreadyUpToDate => \"alreadyUpToDate\",\n                lix_rs_sdk::MergeVersionOutcome::FastForward => \"fastForward\",\n                lix_rs_sdk::MergeVersionOutcome::MergeCommitted => \"mergeCommitted\",\n            };\n            set_string(&object, \"outcome\", outcome).map_err(js_error)?;\n            
set_string(&object, \"targetVersionId\", &result.target_version_id).map_err(js_error)?;\n            set_string(&object, \"sourceVersionId\", &result.source_version_id).map_err(js_error)?;\n            set_string(&object, \"baseCommitId\", &result.base_commit_id).map_err(js_error)?;\n            set_string(\n                &object,\n                \"targetHeadBeforeCommitId\",\n                &result.target_head_before_commit_id,\n            )\n            .map_err(js_error)?;\n            set_string(\n                &object,\n                \"sourceHeadBeforeCommitId\",\n                &result.source_head_before_commit_id,\n            )\n            .map_err(js_error)?;\n            set_string(\n                &object,\n                \"targetHeadAfterCommitId\",\n                &result.target_head_after_commit_id,\n            )\n            .map_err(js_error)?;\n            set_optional_string(\n                &object,\n                \"createdMergeCommitId\",\n                result.created_merge_commit_id.as_deref(),\n            )\n            .map_err(js_error)?;\n            Reflect::set(\n                &object,\n                &JsValue::from_str(\"changeStats\"),\n                &merge_change_stats_to_js(&result.change_stats).map_err(js_error)?,\n            )\n            .map_err(|_| js_error(js_sdk_error(\"could not set changeStats\")))?;\n            Ok(object.into())\n        }\n\n        #[wasm_bindgen(js_name = close)]\n        pub async fn close(&self) -> Result<(), JsValue> {\n            self.inner.close().await.map_err(js_error)\n        }\n    }\n\n    #[wasm_bindgen(js_name = openLix)]\n    pub async fn open_lix(args: Option<JsValue>) -> Result<Lix, JsValue> {\n        let options = parse_open_lix_options(args).map_err(js_error)?;\n        let inner = open_lix_rs(options).await.map_err(js_error)?;\n        Ok(Lix { inner })\n    }\n\n    fn parse_open_lix_options(args: Option<JsValue>) -> Result<OpenLixOptions, LixError> {\n   
     let Some(value) = args else {\n            return Ok(OpenLixOptions::default());\n        };\n        if value.is_undefined() || value.is_null() {\n            return Ok(OpenLixOptions::default());\n        }\n        if !value.is_object() {\n            return Err(LixError::new(\n                \"LIX_ERROR_JS_SDK\",\n                \"openLix() options must be an object\",\n            ));\n        }\n        let backend = Reflect::get(&value, &JsValue::from_str(\"backend\"))\n            .map_err(|_| js_sdk_error(\"openLix() could not read backend\"))?;\n        if backend.is_undefined() || backend.is_null() {\n            return Ok(OpenLixOptions::default());\n        }\n        if !backend.is_object() {\n            return Err(LixError::new(\n                \"LIX_ERROR_JS_SDK\",\n                \"openLix() backend must be an object\",\n            ));\n        }\n        Ok(OpenLixOptions {\n            backend: Some(Box::new(JsBackend::new(backend))),\n        })\n    }\n\n    struct JsBackend {\n        inner: JsValue,\n    }\n\n    impl JsBackend {\n        fn new(inner: JsValue) -> Self {\n            Self { inner }\n        }\n    }\n\n    // SAFETY: this module targets single-threaded wasm32, so the wrapped JsValue\n    // never crosses a thread boundary.\n    unsafe impl Send for JsBackend {}\n    unsafe impl Sync for JsBackend {}\n\n    #[async_trait]\n    impl Backend for JsBackend {\n        async fn begin_read_transaction(\n            &self,\n        ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n            let transaction = call_method0(&self.inner, \"beginReadTransaction\")?;\n            if transaction.is_null() || transaction.is_undefined() || !transaction.is_object() {\n                return Err(js_sdk_error(\n                    \"backend.beginReadTransaction() must return a transaction object\",\n                ));\n            }\n            Ok(Box::new(JsBackendTransaction { inner: transaction }))\n        }\n\n        async fn begin_write_transaction(\n            &self,\n        ) -> Result<Box<dyn 
BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n            let transaction = call_method0(&self.inner, \"beginWriteTransaction\")?;\n            if transaction.is_null() || transaction.is_undefined() || !transaction.is_object() {\n                return Err(js_sdk_error(\n                    \"backend.beginWriteTransaction() must return a transaction object\",\n                ));\n            }\n            Ok(Box::new(JsBackendTransaction { inner: transaction }))\n        }\n\n        async fn close(&self) -> Result<(), LixError> {\n            let method = Reflect::get(&self.inner, &JsValue::from_str(\"close\"))\n                .map_err(|_| js_sdk_error(\"backend.close could not be read\"))?;\n            if method.is_undefined() || method.is_null() {\n                return Ok(());\n            }\n            call_function0(&method, &self.inner)?;\n            Ok(())\n        }\n    }\n\n    struct JsBackendTransaction {\n        inner: JsValue,\n    }\n\n    // SAFETY: single-threaded wasm32; the JsValue never crosses a thread boundary.\n    unsafe impl Send for JsBackendTransaction {}\n    unsafe impl Sync for JsBackendTransaction {}\n\n    #[async_trait]\n    impl BackendReadTransaction for JsBackendTransaction {\n        async fn get_values(\n            &mut self,\n            request: BackendKvGetRequest,\n        ) -> Result<BackendKvValueBatch, LixError> {\n            js_value_to_value_batch(\n                call_method1(&self.inner, \"getValues\", &kv_get_request_to_js(&request)?)?,\n                \"transaction.getValues\",\n            )\n        }\n\n        async fn exists_many(\n            &mut self,\n            request: BackendKvGetRequest,\n        ) -> Result<BackendKvExistsBatch, LixError> {\n            js_value_to_exists_batch(\n                call_method1(&self.inner, \"existsMany\", &kv_get_request_to_js(&request)?)?,\n                \"transaction.existsMany\",\n            )\n        }\n\n        async fn scan_keys(\n            &mut self,\n            request: BackendKvScanRequest,\n        ) -> 
Result<BackendKvKeyPage, LixError> {\n            js_value_to_key_page(\n                call_method1(&self.inner, \"scanKeys\", &kv_scan_request_to_js(&request)?)?,\n                \"transaction.scanKeys\",\n            )\n        }\n\n        async fn scan_values(\n            &mut self,\n            request: BackendKvScanRequest,\n        ) -> Result<BackendKvValuePage, LixError> {\n            js_value_to_value_page(\n                call_method1(&self.inner, \"scanValues\", &kv_scan_request_to_js(&request)?)?,\n                \"transaction.scanValues\",\n            )\n        }\n\n        async fn scan_entries(\n            &mut self,\n            request: BackendKvScanRequest,\n        ) -> Result<BackendKvEntryPage, LixError> {\n            js_value_to_entry_page(\n                call_method1(\n                    &self.inner,\n                    \"scanEntries\",\n                    &kv_scan_request_to_js(&request)?,\n                )?,\n                \"transaction.scanEntries\",\n            )\n        }\n\n        async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n            call_method0(&self.inner, \"rollback\")?;\n            Ok(())\n        }\n    }\n\n    #[async_trait]\n    impl BackendWriteTransaction for JsBackendTransaction {\n        async fn write_kv_batch(\n            &mut self,\n            batch: BackendKvWriteBatch,\n        ) -> Result<BackendKvWriteStats, LixError> {\n            js_value_to_write_stats(\n                call_method1(&self.inner, \"writeKvBatch\", &kv_write_batch_to_js(&batch)?)?,\n                \"transaction.writeKvBatch\",\n            )\n        }\n\n        async fn commit(self: Box<Self>) -> Result<(), LixError> {\n            call_method0(&self.inner, \"commit\")?;\n            Ok(())\n        }\n    }\n\n    fn call_method0(receiver: &JsValue, method_name: &str) -> Result<JsValue, LixError> {\n        let method = Reflect::get(receiver, &JsValue::from_str(method_name))\n            
.map_err(|_| js_sdk_error(format!(\"{method_name} could not be read\")))?;\n        call_function0(&method, receiver)\n    }\n\n    fn call_method1(\n        receiver: &JsValue,\n        method_name: &str,\n        arg1: &JsValue,\n    ) -> Result<JsValue, LixError> {\n        let method = Reflect::get(receiver, &JsValue::from_str(method_name))\n            .map_err(|_| js_sdk_error(format!(\"{method_name} could not be read\")))?;\n        call_function1(&method, receiver, arg1)\n    }\n\n    fn call_function0(function: &JsValue, receiver: &JsValue) -> Result<JsValue, LixError> {\n        let function = function\n            .dyn_ref::<js_sys::Function>()\n            .ok_or_else(|| js_sdk_error(\"backend method must be a function\"))?;\n        reject_promise(function.call0(receiver).map_err(js_to_lix_error)?)\n    }\n\n    fn call_function1(\n        function: &JsValue,\n        receiver: &JsValue,\n        arg1: &JsValue,\n    ) -> Result<JsValue, LixError> {\n        let function = function\n            .dyn_ref::<js_sys::Function>()\n            .ok_or_else(|| js_sdk_error(\"backend method must be a function\"))?;\n        reject_promise(function.call1(receiver, arg1).map_err(js_to_lix_error)?)\n    }\n\n    /// Errors when a backend method returns a Promise: the JavaScript `Backend`\n    /// contract is synchronous.\n    fn reject_promise(value: JsValue) -> Result<JsValue, LixError> {\n        if value.is_instance_of::<js_sys::Promise>() {\n            return Err(js_sdk_error(\n                \"JavaScript backend methods must return synchronously, got a Promise\",\n            ));\n        }\n        Ok(value)\n    }\n\n    fn bytes_to_js(bytes: &[u8]) -> JsValue {\n        js_sys::Uint8Array::from(bytes).into()\n    }\n\n    fn js_value_to_bytes(value: JsValue, context: &str) -> Result<Vec<u8>, LixError> {\n        if !value.is_instance_of::<js_sys::Uint8Array>() {\n            return Err(js_sdk_error(format!(\"{context} must be a Uint8Array\")));\n        }\n        Ok(js_sys::Uint8Array::from(value).to_vec())\n    }\n\n    fn usize_to_js(value: usize) -> JsValue {\n        
JsValue::from_f64(value as f64)\n    }\n\n    fn kv_get_request_to_js(request: &BackendKvGetRequest) -> Result<JsValue, LixError> {\n        let object = Object::new();\n        let groups = Array::new();\n        for group in &request.groups {\n            let group_object = Object::new();\n            set_string(&group_object, \"namespace\", &group.namespace)?;\n            let keys = Array::new();\n            for key in &group.keys {\n                keys.push(&bytes_to_js(key));\n            }\n            Reflect::set(&group_object, &JsValue::from_str(\"keys\"), &keys)\n                .map_err(|_| js_sdk_error(\"could not set get request keys\"))?;\n            groups.push(&group_object);\n        }\n        Reflect::set(&object, &JsValue::from_str(\"groups\"), &groups)\n            .map_err(|_| js_sdk_error(\"could not set get request groups\"))?;\n        Ok(object.into())\n    }\n\n    fn kv_scan_range_to_js(range: &BackendKvScanRange) -> Result<JsValue, LixError> {\n        let object = Object::new();\n        match range {\n            BackendKvScanRange::Prefix(prefix) => {\n                set_string(&object, \"kind\", \"prefix\")?;\n                Reflect::set(&object, &JsValue::from_str(\"prefix\"), &bytes_to_js(prefix))\n                    .map_err(|_| js_sdk_error(\"could not set range.prefix\"))?;\n            }\n            BackendKvScanRange::Range { start, end } => {\n                set_string(&object, \"kind\", \"range\")?;\n                Reflect::set(&object, &JsValue::from_str(\"start\"), &bytes_to_js(start))\n                    .map_err(|_| js_sdk_error(\"could not set range.start\"))?;\n                Reflect::set(&object, &JsValue::from_str(\"end\"), &bytes_to_js(end))\n                    .map_err(|_| js_sdk_error(\"could not set range.end\"))?;\n            }\n        }\n        Ok(object.into())\n    }\n\n    fn kv_scan_request_to_js(request: &BackendKvScanRequest) -> Result<JsValue, LixError> {\n        let object = 
Object::new();\n        set_string(&object, \"namespace\", &request.namespace)?;\n        Reflect::set(\n            &object,\n            &JsValue::from_str(\"range\"),\n            &kv_scan_range_to_js(&request.range)?,\n        )\n        .map_err(|_| js_sdk_error(\"could not set scan request range\"))?;\n        let after = request\n            .after\n            .as_deref()\n            .map(bytes_to_js)\n            .unwrap_or(JsValue::NULL);\n        Reflect::set(&object, &JsValue::from_str(\"after\"), &after)\n            .map_err(|_| js_sdk_error(\"could not set scan request after\"))?;\n        Reflect::set(\n            &object,\n            &JsValue::from_str(\"limit\"),\n            &usize_to_js(request.limit),\n        )\n        .map_err(|_| js_sdk_error(\"could not set scan request limit\"))?;\n        Ok(object.into())\n    }\n\n    fn kv_write_batch_to_js(batch: &BackendKvWriteBatch) -> Result<JsValue, LixError> {\n        let object = Object::new();\n        let groups = Array::new();\n        for group in &batch.groups {\n            let group_object = Object::new();\n            set_string(&group_object, \"namespace\", group.namespace())?;\n\n            let puts = Array::new();\n            for index in 0..group.put_count() {\n                let key = group.put_key(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put key\")\n                })?;\n                let value = group.put_value(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put value\")\n                })?;\n                let put = Object::new();\n                Reflect::set(&put, &JsValue::from_str(\"key\"), &bytes_to_js(key))\n                    .map_err(|_| js_sdk_error(\"could not set write put key\"))?;\n                Reflect::set(&put, &JsValue::from_str(\"value\"), &bytes_to_js(value))\n                    .map_err(|_| js_sdk_error(\"could 
not set write put value\"))?;\n                puts.push(&put);\n            }\n            Reflect::set(&group_object, &JsValue::from_str(\"puts\"), &puts)\n                .map_err(|_| js_sdk_error(\"could not set write puts\"))?;\n\n            let deletes = Array::new();\n            for index in 0..group.delete_count() {\n                let key = group.delete_key(index).ok_or_else(|| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        \"backend write batch missing delete key\",\n                    )\n                })?;\n                deletes.push(&bytes_to_js(key));\n            }\n            Reflect::set(&group_object, &JsValue::from_str(\"deletes\"), &deletes)\n                .map_err(|_| js_sdk_error(\"could not set write deletes\"))?;\n            groups.push(&group_object);\n        }\n        Reflect::set(&object, &JsValue::from_str(\"groups\"), &groups)\n            .map_err(|_| js_sdk_error(\"could not set write groups\"))?;\n        Ok(object.into())\n    }\n\n    fn js_value_to_value_batch(\n        value: JsValue,\n        context: &str,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        let object = expect_backend_object(value, context)?;\n        let groups = required_array(&object, \"groups\", context)?;\n        let groups = groups\n            .iter()\n            .enumerate()\n            .map(|(index, group)| {\n                let group_context = format!(\"{context}.groups[{index}]\");\n                let group = expect_backend_object(group, &group_context)?;\n                let namespace = required_string(&group, \"namespace\", &group_context)?;\n                let values = required_array(&group, \"values\", &group_context)?;\n                let mut bytes = BytePageBuilder::with_capacity(values.length() as usize, 0);\n                let mut present = Vec::with_capacity(values.length() as usize);\n                for value in values.iter() {\n                    
if value.is_null() || value.is_undefined() {\n                        bytes.push([]);\n                        present.push(false);\n                    } else {\n                        bytes.push(js_value_to_bytes(\n                            value,\n                            &format!(\"{group_context}.values\"),\n                        )?);\n                        present.push(true);\n                    }\n                }\n                Ok(BackendKvValueGroup::new(namespace, bytes.finish(), present))\n            })\n            .collect::<Result<Vec<_>, LixError>>()?;\n        Ok(BackendKvValueBatch { groups })\n    }\n\n    fn js_value_to_exists_batch(\n        value: JsValue,\n        context: &str,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        let object = expect_backend_object(value, context)?;\n        let groups = required_array(&object, \"groups\", context)?;\n        let groups = groups\n            .iter()\n            .enumerate()\n            .map(|(index, group)| {\n                let group_context = format!(\"{context}.groups[{index}]\");\n                let group = expect_backend_object(group, &group_context)?;\n                let namespace = required_string(&group, \"namespace\", &group_context)?;\n                let exists = required_array(&group, \"exists\", &group_context)?\n                    .iter()\n                    .map(|value| {\n                        value.as_bool().ok_or_else(|| {\n                            js_sdk_error(format!(\"{group_context}.exists must contain booleans\"))\n                        })\n                    })\n                    .collect::<Result<Vec<_>, LixError>>()?;\n                Ok(BackendKvExistsGroup { namespace, exists })\n            })\n            .collect::<Result<Vec<_>, LixError>>()?;\n        Ok(BackendKvExistsBatch { groups })\n    }\n\n    fn js_value_to_key_page(value: JsValue, context: &str) -> Result<BackendKvKeyPage, LixError> {\n        let object = 
expect_backend_object(value, context)?;\n        Ok(BackendKvKeyPage {\n            keys: byte_array_property(&object, \"keys\", context)?.finish(),\n            resume_after: optional_bytes_property(&object, \"resumeAfter\", context)?,\n        })\n    }\n\n    fn js_value_to_value_page(\n        value: JsValue,\n        context: &str,\n    ) -> Result<BackendKvValuePage, LixError> {\n        let object = expect_backend_object(value, context)?;\n        Ok(BackendKvValuePage {\n            values: byte_array_property(&object, \"values\", context)?.finish(),\n            resume_after: optional_bytes_property(&object, \"resumeAfter\", context)?,\n        })\n    }\n\n    fn js_value_to_entry_page(\n        value: JsValue,\n        context: &str,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        let object = expect_backend_object(value, context)?;\n        Ok(BackendKvEntryPage {\n            keys: byte_array_property(&object, \"keys\", context)?.finish(),\n            values: byte_array_property(&object, \"values\", context)?.finish(),\n            resume_after: optional_bytes_property(&object, \"resumeAfter\", context)?,\n        })\n    }\n\n    fn js_value_to_write_stats(\n        value: JsValue,\n        context: &str,\n    ) -> Result<BackendKvWriteStats, LixError> {\n        let object = expect_backend_object(value, context)?;\n        Ok(BackendKvWriteStats {\n            puts: required_usize(&object, \"puts\", context)?,\n            deletes: required_usize(&object, \"deletes\", context)?,\n            bytes_written: required_usize(&object, \"bytesWritten\", context)?,\n        })\n    }\n\n    fn expect_backend_object(value: JsValue, context: &str) -> Result<Object, LixError> {\n        if value.is_null() || value.is_undefined() || !value.is_object() {\n            return Err(js_sdk_error(format!(\"{context} must be an object\")));\n        }\n        Ok(Object::from(value))\n    }\n\n    fn required_array(object: &Object, key: &str, context: 
&str) -> Result<Array, LixError> {\n        let value = Reflect::get(object, &JsValue::from_str(key))\n            .map_err(|_| js_sdk_error(format!(\"{context}.{key} could not be read\")))?;\n        if !Array::is_array(&value) {\n            return Err(js_sdk_error(format!(\"{context}.{key} must be an array\")));\n        }\n        Ok(Array::from(&value))\n    }\n\n    fn byte_array_property(\n        object: &Object,\n        key: &str,\n        context: &str,\n    ) -> Result<BytePageBuilder, LixError> {\n        let array = required_array(object, key, context)?;\n        let mut page = BytePageBuilder::with_capacity(array.length() as usize, 0);\n        for value in array.iter() {\n            page.push(js_value_to_bytes(value, &format!(\"{context}.{key}\"))?);\n        }\n        Ok(page)\n    }\n\n    fn optional_bytes_property(\n        object: &Object,\n        key: &str,\n        context: &str,\n    ) -> Result<Option<Vec<u8>>, LixError> {\n        let value = Reflect::get(object, &JsValue::from_str(key))\n            .map_err(|_| js_sdk_error(format!(\"{context}.{key} could not be read\")))?;\n        if value.is_undefined() || value.is_null() {\n            return Ok(None);\n        }\n        Ok(Some(js_value_to_bytes(value, &format!(\"{context}.{key}\"))?))\n    }\n\n    fn required_usize(object: &Object, key: &str, context: &str) -> Result<usize, LixError> {\n        let value = Reflect::get(object, &JsValue::from_str(key))\n            .map_err(|_| js_sdk_error(format!(\"{context}.{key} could not be read\")))?;\n        let number = value\n            .as_f64()\n            .ok_or_else(|| js_sdk_error(format!(\"{context}.{key} must be a number\")))?;\n        if !number.is_finite() || number < 0.0 || number.fract() != 0.0 {\n            return Err(js_sdk_error(format!(\n                \"{context}.{key} must be a non-negative integer\"\n            )));\n        }\n        Ok(number as usize)\n    }\n\n    fn js_to_lix_error(value: JsValue) -> 
LixError {\n        if let Some(message) = value.as_string() {\n            return js_sdk_error(message);\n        }\n        let code = Reflect::get(&value, &JsValue::from_str(\"code\"))\n            .ok()\n            .and_then(|code| code.as_string());\n        let message = Reflect::get(&value, &JsValue::from_str(\"message\"))\n            .ok()\n            .and_then(|message| message.as_string())\n            .unwrap_or_else(|| \"JavaScript backend error\".to_string());\n        let hint = Reflect::get(&value, &JsValue::from_str(\"hint\"))\n            .ok()\n            .and_then(|hint| hint.as_string());\n        let details = Reflect::get(&value, &JsValue::from_str(\"details\"))\n            .ok()\n            .and_then(|details| {\n                if details.is_undefined() || details.is_null() {\n                    None\n                } else {\n                    serde_wasm_bindgen::from_value(details).ok()\n                }\n            });\n        let mut error = LixError::new(\n            code.unwrap_or_else(|| \"LIX_ERROR_JS_SDK\".to_string()),\n            message,\n        );\n        if let Some(hint) = hint {\n            error = error.with_hint(hint);\n        }\n        if let Some(details) = details {\n            error = error.with_details(details);\n        }\n        error\n    }\n\n    fn parse_create_version_options(value: JsValue) -> Result<CreateVersionOptions, LixError> {\n        let object = expect_object(value, \"createVersion\")?;\n        let id = optional_string(&object, \"id\", \"createVersion\")?;\n        let name = required_string(&object, \"name\", \"createVersion\")?;\n        let from_commit_id = optional_string(&object, \"fromCommitId\", \"createVersion\")?;\n        Ok(CreateVersionOptions {\n            id,\n            name,\n            from_commit_id,\n        })\n    }\n\n    fn parse_switch_version_options(value: JsValue) -> Result<SwitchVersionOptions, LixError> {\n        let object = expect_object(value, 
\"switchVersion\")?;\n        let version_id = required_string(&object, \"versionId\", \"switchVersion\")?;\n        Ok(SwitchVersionOptions { version_id })\n    }\n\n    fn parse_merge_version_options(value: JsValue) -> Result<MergeVersionOptions, LixError> {\n        let object = expect_object(value, \"mergeVersion\")?;\n        let source_version_id = required_string(&object, \"sourceVersionId\", \"mergeVersion\")?;\n        Ok(MergeVersionOptions { source_version_id })\n    }\n\n    fn parse_merge_version_preview_options(\n        value: JsValue,\n    ) -> Result<MergeVersionPreviewOptions, LixError> {\n        let object = expect_object(value, \"mergeVersionPreview\")?;\n        let source_version_id = required_string(&object, \"sourceVersionId\", \"mergeVersionPreview\")?;\n        Ok(MergeVersionPreviewOptions { source_version_id })\n    }\n\n    fn expect_object(value: JsValue, method: &str) -> Result<Object, LixError> {\n        if value.is_null() || value.is_undefined() || !value.is_object() {\n            return Err(LixError::new(\n                \"LIX_ERROR_JS_SDK\",\n                format!(\"{method}() options must be an object\"),\n            ));\n        }\n        Ok(Object::from(value))\n    }\n\n    fn invalid_argument_error(\n        operation: &str,\n        argument: &str,\n        expected: &str,\n        actual_value: &JsValue,\n    ) -> LixError {\n        LixError::new(\n            \"LIX_INVALID_ARGUMENT\",\n            format!(\n                \"lix.{operation}() expected {argument} to be {} {expected}\",\n                expected_article(expected)\n            ),\n        )\n        .with_details(json!({\n            \"operation\": operation,\n            \"argument\": argument,\n            \"expected\": expected,\n            \"actual\": js_type_name(actual_value),\n        }))\n    }\n\n    fn expected_article(expected: &str) -> &'static str {\n        match expected.chars().next().map(|c| c.to_ascii_lowercase()) {\n            
Some('a' | 'e' | 'i' | 'o' | 'u') => \"an\",\n            _ => \"a\",\n        }\n    }\n\n    fn js_type_name(value: &JsValue) -> &'static str {\n        if value.is_null() {\n            \"null\"\n        } else if Array::is_array(value) {\n            \"array\"\n        } else if value.is_undefined() {\n            \"undefined\"\n        } else if value.is_string() {\n            \"string\"\n        } else if value.as_bool().is_some() {\n            \"boolean\"\n        } else if value.as_f64().is_some() {\n            \"number\"\n        } else if value.is_function() {\n            \"function\"\n        } else if value.is_object() {\n            \"object\"\n        } else {\n            \"unknown\"\n        }\n    }\n\n    fn required_string(object: &Object, key: &str, method: &str) -> Result<String, LixError> {\n        let value = Reflect::get(object, &JsValue::from_str(key)).map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_JS_SDK\",\n                format!(\"{method}() could not read {key}\"),\n            )\n        })?;\n        if let Some(value) = value.as_string() {\n            if !value.is_empty() {\n                return Ok(value);\n            }\n        }\n        Err(LixError::new(\n            \"LIX_ERROR_JS_SDK\",\n            format!(\"{method}() requires non-empty string {key}\"),\n        ))\n    }\n\n    fn optional_string(\n        object: &Object,\n        key: &str,\n        method: &str,\n    ) -> Result<Option<String>, LixError> {\n        let value = Reflect::get(object, &JsValue::from_str(key)).map_err(|_| {\n            LixError::new(\n                \"LIX_ERROR_JS_SDK\",\n                format!(\"{method}() could not read {key}\"),\n            )\n        })?;\n        if value.is_undefined() || value.is_null() {\n            return Ok(None);\n        }\n        if let Some(value) = value.as_string() {\n            if !value.is_empty() {\n                return Ok(Some(value));\n            }\n        }\n   
     Err(LixError::new(\n            \"LIX_ERROR_JS_SDK\",\n            format!(\"{method}() requires {key} to be a non-empty string when provided\"),\n        ))\n    }\n\n    fn value_from_js(value: JsValue) -> Result<Value, LixError> {\n        if value.is_null() || value.is_undefined() || !value.is_object() {\n            return Err(invalid_param(\n                \"parameter must be an explicit Lix value object\",\n                &value,\n            ));\n        }\n\n        let object = Object::from(value.clone());\n        let kind = Reflect::get(&object, &JsValue::from_str(\"kind\"))\n            .ok()\n            .and_then(|value| value.as_string());\n        match kind.as_deref() {\n            Some(\"null\") => Ok(Value::Null),\n            Some(\"boolean\") => Ok(Value::Boolean(\n                Reflect::get(&object, &JsValue::from_str(\"value\"))\n                    .ok()\n                    .and_then(|value| value.as_bool())\n                    .ok_or_else(|| invalid_param(\"boolean value must be boolean\", &value))?,\n            )),\n            Some(\"integer\") => {\n                let value = Reflect::get(&object, &JsValue::from_str(\"value\"))\n                    .ok()\n                    .and_then(|value| value.as_f64())\n                    .ok_or_else(|| invalid_param(\"integer value must be number\", &value))?;\n                if !value.is_finite() || value.fract() != 0.0 {\n                    return Err(invalid_param_message(\n                        \"integer value must be a finite integer\",\n                    ));\n                }\n                Ok(Value::Integer(value as i64))\n            }\n            Some(\"real\") => {\n                let value = Reflect::get(&object, &JsValue::from_str(\"value\"))\n                    .ok()\n                    .and_then(|value| value.as_f64())\n                    .ok_or_else(|| invalid_param(\"real value must be number\", &value))?;\n                if !value.is_finite() {\n     
               return Err(invalid_param_message(\"real value must be a finite number\"));\n                }\n                Ok(Value::Real(value))\n            }\n            Some(\"text\") => Ok(Value::Text(\n                Reflect::get(&object, &JsValue::from_str(\"value\"))\n                    .ok()\n                    .and_then(|value| value.as_string())\n                    .ok_or_else(|| invalid_param(\"text value must be string\", &value))?,\n            )),\n            Some(\"json\") => {\n                let value = Reflect::get(&object, &JsValue::from_str(\"value\"))\n                    .map_err(|_| invalid_param(\"json value is missing\", &value))?;\n                let json = serde_wasm_bindgen::from_value(value).map_err(|error| {\n                    LixError::new(\n                        LixError::CODE_INVALID_PARAM,\n                        format!(\"json value must be JSON-serializable: {error}\"),\n                    )\n                })?;\n                Ok(Value::Json(json))\n            }\n            Some(\"blob\") => {\n                let base64 = Reflect::get(&object, &JsValue::from_str(\"base64\"))\n                    .ok()\n                    .and_then(|value| value.as_string())\n                    .ok_or_else(|| invalid_param(\"blob base64 must be string\", &value))?;\n                let bytes =\n                    base64::Engine::decode(&base64::engine::general_purpose::STANDARD, base64)\n                        .map_err(|error| {\n                            LixError::new(\n                                LixError::CODE_INVALID_PARAM,\n                                format!(\"blob base64 must be valid base64: {error}\"),\n                            )\n                        })?;\n                Ok(Value::Blob(bytes))\n            }\n            _ => Err(invalid_param(\n                \"parameter must be an explicit Lix value object\",\n                &value,\n            )),\n        }\n    }\n\n    fn 
execute_result_to_js(result: ExecuteResult) -> Result<JsValue, LixError> {\n        let object = Object::new();\n        let columns = Array::new();\n        for column in result.columns() {\n            columns.push(&JsValue::from_str(column));\n        }\n        Reflect::set(&object, &JsValue::from_str(\"columns\"), &columns)\n            .map_err(|_| js_sdk_error(\"could not set columns\"))?;\n        let values = Array::new();\n        for row in result.rows() {\n            let row_values = Array::new();\n            for value in row.values() {\n                row_values.push(&value_to_js(value)?);\n            }\n            values.push(&row_values);\n        }\n        Reflect::set(&object, &JsValue::from_str(\"rows\"), &values)\n            .map_err(|_| js_sdk_error(\"could not set rows\"))?;\n        set_number(&object, \"rowsAffected\", result.rows_affected() as f64)?;\n        let notices = Array::new();\n        for notice in result.notices() {\n            let notice_object = Object::new();\n            set_string(&notice_object, \"code\", &notice.code)?;\n            set_string(&notice_object, \"message\", &notice.message)?;\n            if let Some(hint) = &notice.hint {\n                set_string(&notice_object, \"hint\", hint)?;\n            }\n            notices.push(&notice_object);\n        }\n        Reflect::set(&object, &JsValue::from_str(\"notices\"), &notices)\n            .map_err(|_| js_sdk_error(\"could not set notices\"))?;\n        Ok(object.into())\n    }\n\n    fn merge_version_preview_to_js(\n        result: lix_rs_sdk::MergeVersionPreview,\n    ) -> Result<JsValue, LixError> {\n        let object = Object::new();\n        let outcome = match result.outcome {\n            lix_rs_sdk::MergeVersionOutcome::AlreadyUpToDate => \"alreadyUpToDate\",\n            lix_rs_sdk::MergeVersionOutcome::FastForward => \"fastForward\",\n            lix_rs_sdk::MergeVersionOutcome::MergeCommitted => \"mergeCommitted\",\n        };\n        
set_string(&object, \"outcome\", outcome)?;\n        set_string(&object, \"targetVersionId\", &result.target_version_id)?;\n        set_string(&object, \"sourceVersionId\", &result.source_version_id)?;\n        set_string(&object, \"baseCommitId\", &result.base_commit_id)?;\n        set_string(&object, \"targetHeadCommitId\", &result.target_head_commit_id)?;\n        set_string(&object, \"sourceHeadCommitId\", &result.source_head_commit_id)?;\n        Reflect::set(\n            &object,\n            &JsValue::from_str(\"changeStats\"),\n            &merge_change_stats_to_js(&result.change_stats)?,\n        )\n        .map_err(|_| js_sdk_error(\"could not set changeStats\"))?;\n        let conflicts = Array::new();\n        for conflict in result.conflicts {\n            conflicts.push(&merge_conflict_to_js(&conflict)?);\n        }\n        Reflect::set(&object, &JsValue::from_str(\"conflicts\"), &conflicts)\n            .map_err(|_| js_sdk_error(\"could not set conflicts\"))?;\n        Ok(object.into())\n    }\n\n    fn merge_change_stats_to_js(stats: &lix_rs_sdk::MergeChangeStats) -> Result<JsValue, LixError> {\n        let object = Object::new();\n        set_number(&object, \"total\", stats.total as f64)?;\n        set_number(&object, \"added\", stats.added as f64)?;\n        set_number(&object, \"modified\", stats.modified as f64)?;\n        set_number(&object, \"removed\", stats.removed as f64)?;\n        Ok(object.into())\n    }\n\n    fn merge_conflict_to_js(conflict: &lix_rs_sdk::MergeConflict) -> Result<JsValue, LixError> {\n        let object = Object::new();\n        let kind = match conflict.kind {\n            lix_rs_sdk::MergeConflictKind::SameEntityChanged => \"sameEntityChanged\",\n        };\n        set_string(&object, \"kind\", kind)?;\n        set_string(&object, \"schemaKey\", &conflict.schema_key)?;\n        set_json(&object, \"entityId\", &conflict.entity_id)?;\n        set_optional_string(&object, \"fileId\", conflict.file_id.as_deref())?;\n 
       Reflect::set(\n            &object,\n            &JsValue::from_str(\"target\"),\n            &merge_conflict_side_to_js(&conflict.target)?,\n        )\n        .map_err(|_| js_sdk_error(\"could not set target conflict side\"))?;\n        Reflect::set(\n            &object,\n            &JsValue::from_str(\"source\"),\n            &merge_conflict_side_to_js(&conflict.source)?,\n        )\n        .map_err(|_| js_sdk_error(\"could not set source conflict side\"))?;\n        Ok(object.into())\n    }\n\n    fn merge_conflict_side_to_js(\n        side: &lix_rs_sdk::MergeConflictSide,\n    ) -> Result<JsValue, LixError> {\n        let object = Object::new();\n        let kind = match side.kind {\n            lix_rs_sdk::MergeConflictChangeKind::Added => \"added\",\n            lix_rs_sdk::MergeConflictChangeKind::Modified => \"modified\",\n            lix_rs_sdk::MergeConflictChangeKind::Removed => \"removed\",\n        };\n        set_string(&object, \"kind\", kind)?;\n        set_optional_string(&object, \"beforeChangeId\", side.before_change_id.as_deref())?;\n        set_optional_string(&object, \"afterChangeId\", side.after_change_id.as_deref())?;\n        Ok(object.into())\n    }\n\n    fn value_to_js(value: &Value) -> Result<JsValue, LixError> {\n        let object = Object::new();\n        match value {\n            Value::Null => {\n                set_string(&object, \"kind\", \"null\")?;\n                Reflect::set(&object, &JsValue::from_str(\"value\"), &JsValue::NULL)\n                    .map_err(|_| js_sdk_error(\"could not set null value\"))?;\n            }\n            Value::Boolean(value) => {\n                set_string(&object, \"kind\", \"boolean\")?;\n                Reflect::set(\n                    &object,\n                    &JsValue::from_str(\"value\"),\n                    &JsValue::from_bool(*value),\n                )\n                .map_err(|_| js_sdk_error(\"could not set boolean value\"))?;\n            }\n            
Value::Integer(value) => {\n                set_string(&object, \"kind\", \"integer\")?;\n                set_number(&object, \"value\", *value as f64)?;\n            }\n            Value::Real(value) => {\n                set_string(&object, \"kind\", \"real\")?;\n                set_number(&object, \"value\", *value)?;\n            }\n            Value::Text(value) => {\n                set_string(&object, \"kind\", \"text\")?;\n                set_string(&object, \"value\", value)?;\n            }\n            Value::Json(value) => {\n                set_string(&object, \"kind\", \"json\")?;\n                let serializer = serde_wasm_bindgen::Serializer::json_compatible();\n                let value = value.serialize(&serializer).map_err(|error| {\n                    LixError::new(\n                        \"LIX_ERROR_JS_SDK\",\n                        format!(\"could not serialize JSON value: {error}\"),\n                    )\n                })?;\n                Reflect::set(&object, &JsValue::from_str(\"value\"), &value)\n                    .map_err(|_| js_sdk_error(\"could not set json value\"))?;\n            }\n            Value::Blob(value) => {\n                set_string(&object, \"kind\", \"blob\")?;\n                set_string(\n                    &object,\n                    \"base64\",\n                    &base64::Engine::encode(&base64::engine::general_purpose::STANDARD, value),\n                )?;\n            }\n        }\n        Ok(object.into())\n    }\n\n    fn set_string(object: &Object, key: &str, value: &str) -> Result<(), LixError> {\n        Reflect::set(object, &JsValue::from_str(key), &JsValue::from_str(value))\n            .map(|_| ())\n            .map_err(|_| js_sdk_error(format!(\"could not set {key}\")))\n    }\n\n    fn set_optional_string(\n        object: &Object,\n        key: &str,\n        value: Option<&str>,\n    ) -> Result<(), LixError> {\n        let value = 
value.map(JsValue::from_str).unwrap_or(JsValue::NULL);\n        Reflect::set(object, &JsValue::from_str(key), &value)\n            .map(|_| ())\n            .map_err(|_| js_sdk_error(format!(\"could not set {key}\")))\n    }\n\n    fn set_number(object: &Object, key: &str, value: f64) -> Result<(), LixError> {\n        Reflect::set(object, &JsValue::from_str(key), &JsValue::from_f64(value))\n            .map(|_| ())\n            .map_err(|_| js_sdk_error(format!(\"could not set {key}\")))\n    }\n\n    fn set_json(object: &Object, key: &str, value: &serde_json::Value) -> Result<(), LixError> {\n        let serializer = serde_wasm_bindgen::Serializer::json_compatible();\n        let value = value.serialize(&serializer).map_err(|error| {\n            LixError::new(\n                \"LIX_ERROR_JS_SDK\",\n                format!(\"could not serialize JSON value for {key}: {error}\"),\n            )\n        })?;\n        Reflect::set(object, &JsValue::from_str(key), &value)\n            .map(|_| ())\n            .map_err(|_| js_sdk_error(format!(\"could not set {key}\")))\n    }\n\n    fn invalid_param(message: impl Into<String>, value: &JsValue) -> LixError {\n        LixError::new(LixError::CODE_INVALID_PARAM, message.into()).with_details(json!({\n            \"operation\": \"execute\",\n            \"actual\": js_type_name(value),\n        }))\n    }\n\n    fn invalid_param_message(message: impl Into<String>) -> LixError {\n        LixError::new(LixError::CODE_INVALID_PARAM, message.into()).with_details(json!({\n            \"operation\": \"execute\",\n        }))\n    }\n\n    fn js_sdk_error(message: impl Into<String>) -> LixError {\n        LixError::new(\"LIX_ERROR_JS_SDK\", message.into())\n    }\n\n    fn js_error(error: LixError) -> JsValue {\n        let js_error = js_sys::Error::new(&error.message);\n        let object: &Object = js_error.as_ref();\n        let _ = Reflect::set(\n            object,\n            &JsValue::from_str(\"code\"),\n            
&JsValue::from_str(&error.code),\n        );\n        if let Some(hint) = error.hint {\n            let _ = Reflect::set(\n                object,\n                &JsValue::from_str(\"hint\"),\n                &JsValue::from_str(&hint),\n            );\n        }\n        if let Some(details) = error.details {\n            let serializer = serde_wasm_bindgen::Serializer::json_compatible();\n            if let Ok(value) = details.serialize(&serializer) {\n                let _ = Reflect::set(object, &JsValue::from_str(\"details\"), &value);\n            }\n        }\n        js_error.into()\n    }\n}\n"
  },
  {
    "path": "packages/plugin-json-v2/.gitignore",
    "content": "/target\n/Cargo.lock\n"
  },
  {
    "path": "packages/plugin-json-v2/Cargo.toml",
    "content": "[package]\nname = \"plugin_json_v2\"\nversion = \"0.1.0\"\nedition = \"2021\"\npublish = false\n\n[lib]\ncrate-type = [\"cdylib\", \"rlib\"]\n\n[dependencies]\nserde = { version = \"1\", features = [\"derive\"] }\nserde_json = \"1\"\nwit-bindgen = \"0.40\"\n\n[dev-dependencies]\ncriterion = \"0.5\"\n\n[[bench]]\nname = \"detect_changes\"\nharness = false\n\n[[bench]]\nname = \"apply_changes\"\nharness = false\n\n[[bench]]\nname = \"roundtrip\"\nharness = false\n"
  },
  {
    "path": "packages/plugin-json-v2/README.md",
    "content": "# plugin-json-v2\n\nA Rust/WASM component JSON plugin for the Lix engine.\n\n- Uses `packages/engine/wit/lix-plugin.wit` as the API contract.\n- Implements JSON-pointer-based `detect-changes` and `apply-changes`.\n- Intended to be installed through `Engine::install_plugin(manifest_json, wasm_bytes)`.\n- `apply-changes` treats input as an unordered latest-state projection and\n  reconstructs JSON deterministically from upsert rows.\n"
  },
  {
    "path": "packages/plugin-json-v2/benches/apply_changes.rs",
    "content": "mod common;\n\nuse criterion::{criterion_group, criterion_main, BatchSize, Criterion};\nuse plugin_json_v2::apply_changes;\n\nfn bench_apply_changes(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"apply_changes\");\n    group.sample_size(30);\n\n    for (name, (before, after)) in [\n        (\"small\", common::dataset_small()),\n        (\"medium\", common::dataset_medium()),\n        (\"large\", common::dataset_large()),\n    ] {\n        let projection = common::projection_for_transition(&before, &after);\n        let seed = common::file_from_bytes(\"f1\", \"/x.json\", br#\"{\"stale\":\"cache\"}\"#);\n\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || (seed.clone(), projection.clone()),\n                |(seed_file, rows)| {\n                    apply_changes(seed_file, rows).expect(\"apply_changes benchmark should succeed\")\n                },\n                BatchSize::SmallInput,\n            );\n        });\n    }\n\n    group.finish();\n}\n\ncriterion_group!(benches, bench_apply_changes);\ncriterion_main!(benches);\n"
  },
  {
    "path": "packages/plugin-json-v2/benches/common/mod.rs",
    "content": "#![allow(dead_code)]\n\nuse plugin_json_v2::{detect_changes, PluginEntityChange, PluginFile, SCHEMA_KEY};\nuse serde_json::{Map, Value};\nuse std::collections::BTreeMap;\n\nfn make_document(scale: usize, mutate: bool) -> Value {\n    let mut root = Map::new();\n\n    for i in 0..scale {\n        if mutate && i % 11 == 0 {\n            continue;\n        }\n\n        let mut entry = Map::new();\n        let value = if mutate && i % 3 == 0 {\n            (i as i64) * 2\n        } else {\n            i as i64\n        };\n        entry.insert(\"value\".to_string(), Value::Number(value.into()));\n        entry.insert(\"enabled\".to_string(), Value::Bool(i % 2 == 0));\n\n        let mut tags = Vec::new();\n        tags.push(Value::String(format!(\"tag-{i}\")));\n        tags.push(Value::Number((i as i64 + 1).into()));\n        if mutate && i % 5 == 0 {\n            tags.push(Value::String(\"new\".to_string()));\n        }\n        entry.insert(\"tags\".to_string(), Value::Array(tags));\n\n        root.insert(format!(\"item-{i}\"), Value::Object(entry));\n    }\n\n    if mutate {\n        let extra = scale / 10 + 1;\n        for i in 0..extra {\n            let mut entry = Map::new();\n            entry.insert(\n                \"value\".to_string(),\n                Value::Number((10_000 + i as i64).into()),\n            );\n            entry.insert(\"enabled\".to_string(), Value::Bool(true));\n            entry.insert(\n                \"tags\".to_string(),\n                Value::Array(vec![Value::String(\"added\".to_string())]),\n            );\n            root.insert(format!(\"added-{i}\"), Value::Object(entry));\n        }\n    }\n\n    root.insert(\n        \"meta\".to_string(),\n        serde_json::json!({\n            \"version\": if mutate { 2 } else { 1 },\n            \"name\": if mutate { \"after\" } else { \"before\" },\n        }),\n    );\n\n    Value::Object(root)\n}\n\npub fn dataset_small() -> (Vec<u8>, Vec<u8>) {\n    let before = 
make_document(20, false);\n    let after = make_document(20, true);\n    (\n        serde_json::to_vec(&before).expect(\"before JSON should serialize\"),\n        serde_json::to_vec(&after).expect(\"after JSON should serialize\"),\n    )\n}\n\npub fn dataset_medium() -> (Vec<u8>, Vec<u8>) {\n    let before = make_document(200, false);\n    let after = make_document(200, true);\n    (\n        serde_json::to_vec(&before).expect(\"before JSON should serialize\"),\n        serde_json::to_vec(&after).expect(\"after JSON should serialize\"),\n    )\n}\n\npub fn dataset_large() -> (Vec<u8>, Vec<u8>) {\n    let before = make_document(1000, false);\n    let after = make_document(1000, true);\n    (\n        serde_json::to_vec(&before).expect(\"before JSON should serialize\"),\n        serde_json::to_vec(&after).expect(\"after JSON should serialize\"),\n    )\n}\n\npub fn file_from_bytes(id: &str, path: &str, data: &[u8]) -> PluginFile {\n    PluginFile {\n        id: id.to_string(),\n        path: path.to_string(),\n        data: data.to_vec(),\n    }\n}\n\npub fn merge_latest_state_rows(\n    changesets: Vec<Vec<PluginEntityChange>>,\n) -> Vec<PluginEntityChange> {\n    let mut latest = BTreeMap::new();\n    for changes in changesets {\n        for change in changes {\n            if change.schema_key != SCHEMA_KEY {\n                continue;\n            }\n            latest.insert(\n                (change.schema_key.clone(), change.entity_id.clone()),\n                change,\n            );\n        }\n    }\n    latest.into_values().collect()\n}\n\npub fn projection_for_transition(before: &[u8], after: &[u8]) -> Vec<PluginEntityChange> {\n    let before_file = file_from_bytes(\"f1\", \"/x.json\", before);\n    let after_file = file_from_bytes(\"f1\", \"/x.json\", after);\n    let baseline =\n        detect_changes(None, before_file.clone()).expect(\"baseline detect_changes should work\");\n    let delta =\n        detect_changes(Some(before_file), 
after_file).expect(\"delta detect_changes should work\");\n    merge_latest_state_rows(vec![baseline, delta])\n}\n"
  },
  {
    "path": "packages/plugin-json-v2/benches/detect_changes.rs",
    "content": "mod common;\n\nuse criterion::{criterion_group, criterion_main, BatchSize, Criterion};\nuse plugin_json_v2::detect_changes;\n\nfn bench_detect_changes(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"detect_changes\");\n    group.sample_size(30);\n\n    for (name, (before, after)) in [\n        (\"small\", common::dataset_small()),\n        (\"medium\", common::dataset_medium()),\n        (\"large\", common::dataset_large()),\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    (\n                        common::file_from_bytes(\"f1\", \"/x.json\", &before),\n                        common::file_from_bytes(\"f1\", \"/x.json\", &after),\n                    )\n                },\n                |(before_file, after_file)| {\n                    detect_changes(Some(before_file), after_file)\n                        .expect(\"detect_changes benchmark should succeed\")\n                },\n                BatchSize::SmallInput,\n            );\n        });\n    }\n\n    group.finish();\n}\n\ncriterion_group!(benches, bench_detect_changes);\ncriterion_main!(benches);\n"
  },
  {
    "path": "packages/plugin-json-v2/benches/roundtrip.rs",
    "content": "mod common;\n\nuse criterion::{criterion_group, criterion_main, BatchSize, Criterion};\nuse plugin_json_v2::{apply_changes, detect_changes};\n\nfn bench_roundtrip_projection(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"roundtrip_projection\");\n    group.sample_size(20);\n\n    for (name, (before, after)) in [\n        (\"small\", common::dataset_small()),\n        (\"medium\", common::dataset_medium()),\n        (\"large\", common::dataset_large()),\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    (\n                        common::file_from_bytes(\"f1\", \"/x.json\", &before),\n                        common::file_from_bytes(\"f1\", \"/x.json\", &after),\n                    )\n                },\n                |(before_file, after_file)| {\n                    let baseline = detect_changes(None, before_file.clone())\n                        .expect(\"baseline detect_changes should succeed\");\n                    let delta = detect_changes(Some(before_file), after_file)\n                        .expect(\"delta detect_changes should succeed\");\n                    let projection = common::merge_latest_state_rows(vec![baseline, delta]);\n                    let seed = common::file_from_bytes(\"f1\", \"/x.json\", br#\"{\"stale\":\"cache\"}\"#);\n                    apply_changes(seed, projection).expect(\"apply_changes should succeed\")\n                },\n                BatchSize::SmallInput,\n            );\n        });\n    }\n\n    group.finish();\n}\n\ncriterion_group!(benches, bench_roundtrip_projection);\ncriterion_main!(benches);\n"
  },
  {
    "path": "packages/plugin-json-v2/schema/json_pointer.json",
    "content": "{\n  \"x-lix-key\": \"json_pointer\",\n  \"x-lix-primary-key\": [\n    \"/path\"\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"path\": {\n      \"type\": \"string\",\n      \"description\": \"RFC 6901 JSON Pointer path (empty string for root).\"\n    },\n    \"value\": {\n      \"anyOf\": [\n        {\n          \"type\": \"object\"\n        },\n        {\n          \"type\": \"array\"\n        },\n        {\n          \"type\": \"string\"\n        },\n        {\n          \"type\": \"number\"\n        },\n        {\n          \"type\": \"boolean\"\n        },\n        {\n          \"type\": \"null\"\n        }\n      ]\n    }\n  },\n  \"required\": [\n    \"path\",\n    \"value\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/plugin-json-v2/src/lib.rs",
    "content": "use crate::exports::lix::plugin::api::{EntityChange, File, Guest, PluginError};\nuse serde_json::{Map, Value};\nuse std::collections::{BTreeMap, BTreeSet, HashMap};\nuse std::sync::OnceLock;\n\nwit_bindgen::generate!({\n    path: \"../engine/wit\",\n    world: \"plugin\",\n});\n\npub const SCHEMA_KEY: &str = \"json_pointer\";\nconst MAX_ARRAY_INDEX: usize = 100_000;\nconst JSON_POINTER_SCHEMA_JSON: &str = include_str!(\"../schema/json_pointer.json\");\n\nstatic JSON_POINTER_SCHEMA: OnceLock<Value> = OnceLock::new();\n\npub use crate::exports::lix::plugin::api::{\n    EntityChange as PluginEntityChange, File as PluginFile, PluginError as PluginApiError,\n};\n\nstruct JsonPlugin;\n\n#[derive(Debug, serde::Serialize)]\nstruct SnapshotContentRef<'a> {\n    path: &'a str,\n    value: &'a Value,\n}\n\n#[derive(Debug, serde::Deserialize)]\n#[serde(deny_unknown_fields)]\nstruct SnapshotContentWithPath {\n    path: String,\n    value: Value,\n}\n\n#[derive(Debug, Clone)]\nstruct ProjectionUpsert {\n    pointer: String,\n    tokens: Vec<String>,\n    terminal_token: Option<TypedPathToken>,\n    value: Value,\n}\n\n#[derive(Debug, Clone)]\nstruct ProjectionTombstone {\n    pointer: String,\n    tokens: Vec<String>,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\nenum ProjectionNodeKind {\n    Object,\n    Array,\n    Scalar,\n}\n\nimpl ProjectionNodeKind {\n    fn from_value(value: &Value) -> Self {\n        if value.is_object() {\n            Self::Object\n        } else if value.is_array() {\n            Self::Array\n        } else {\n            Self::Scalar\n        }\n    }\n}\n\n#[derive(Debug, Clone)]\nenum TypedPathToken {\n    ObjectKey(String),\n    ArrayIndex(usize),\n}\n\n#[derive(Debug)]\nstruct ProjectionTreeNode {\n    value: Option<Value>,\n    terminal_token: Option<TypedPathToken>,\n    object_children: Vec<(String, usize)>,\n    array_children: Vec<(usize, usize)>,\n}\n\nimpl Guest for JsonPlugin {\n    fn detect_changes(\n        
before: Option<File>,\n        after: File,\n        _state_context: Option<crate::exports::lix::plugin::api::DetectStateContext>,\n    ) -> Result<Vec<EntityChange>, PluginError> {\n        let before_json = before\n            .as_ref()\n            .map(|file| parse_json_bytes(&file.data))\n            .transpose()?;\n        let after_json = parse_json_bytes(&after.data)?;\n\n        let mut changes = Vec::new();\n        diff_json(\n            before_json.as_ref(),\n            Some(&after_json),\n            &mut Vec::new(),\n            &mut changes,\n        )?;\n\n        Ok(changes)\n    }\n\n    fn apply_changes(_file: File, changes: Vec<EntityChange>) -> Result<Vec<u8>, PluginError> {\n        let mut seen_entity_ids = BTreeSet::new();\n        let mut upserts = Vec::new();\n        let mut tombstones = Vec::new();\n\n        for change in changes {\n            if change.schema_key != SCHEMA_KEY {\n                continue;\n            }\n\n            let pointer = change.entity_id;\n            if !seen_entity_ids.insert(pointer.clone()) {\n                return Err(PluginError::InvalidInput(format!(\n                    \"duplicate entity_id '{pointer}' for schema_key '{SCHEMA_KEY}'\"\n                )));\n            }\n\n            let tokens = pointer_tokens(&pointer)?;\n            match change.snapshot_content {\n                Some(snapshot_content) => {\n                    let value = parse_snapshot_value(&snapshot_content, &pointer)?;\n                    upserts.push(ProjectionUpsert {\n                        pointer,\n                        tokens,\n                        terminal_token: None,\n                        value,\n                    });\n                }\n                None => {\n                    tombstones.push(ProjectionTombstone { pointer, tokens });\n                }\n            }\n        }\n\n        let has_root_tombstone = tombstones.iter().any(|entry| entry.tokens.is_empty());\n        if 
has_root_tombstone\n            && (upserts.iter().any(|entry| !entry.tokens.is_empty())\n                || tombstones.iter().any(|entry| !entry.tokens.is_empty()))\n        {\n            return Err(PluginError::InvalidInput(\n                \"root tombstone cannot coexist with non-root projection rows\".to_string(),\n            ));\n        }\n        let has_root_upsert = upserts.iter().any(|entry| entry.pointer.is_empty());\n        let has_non_root_rows = upserts.iter().any(|entry| !entry.pointer.is_empty())\n            || tombstones.iter().any(|entry| !entry.tokens.is_empty());\n        if has_non_root_rows && !has_root_upsert {\n            return Err(PluginError::InvalidInput(\n                \"non-root projection rows require a root row with entity_id ''\".to_string(),\n            ));\n        }\n\n        let upsert_pointers = upserts\n            .iter()\n            .map(|entry| entry.pointer.clone())\n            .collect::<BTreeSet<_>>();\n        let tombstone_pointers = tombstones\n            .iter()\n            .map(|entry| entry.pointer.clone())\n            .collect::<BTreeSet<_>>();\n        let upsert_node_kinds = upserts\n            .iter()\n            .map(|entry| {\n                (\n                    entry.pointer.clone(),\n                    ProjectionNodeKind::from_value(&entry.value),\n                )\n            })\n            .collect::<BTreeMap<_, _>>();\n        let mut array_child_indices: BTreeMap<String, BTreeSet<usize>> = BTreeMap::new();\n        let mut canonical_upsert_pointers = BTreeSet::new();\n\n        for upsert in &mut upserts {\n            let mut ancestor = String::new();\n            let mut canonical_pointer = String::new();\n            let raw_tokens = std::mem::take(&mut upsert.tokens);\n            let mut terminal_token = None;\n            for token in raw_tokens {\n                if tombstone_pointers.contains(&ancestor) {\n                    return 
Err(PluginError::InvalidInput(format!(\n                        \"entity_id '{}' conflicts with tombstoned ancestor '{ancestor}'\",\n                        upsert.pointer\n                    )));\n                }\n                if !upsert_pointers.contains(&ancestor) {\n                    return Err(PluginError::InvalidInput(format!(\n                        \"missing ancestor container row '{ancestor}' for entity_id '{}'\",\n                        upsert.pointer\n                    )));\n                }\n                let ancestor_kind = *upsert_node_kinds\n                    .get(&ancestor)\n                    .expect(\"ancestor pointer existence checked above\");\n                let validated = validate_child_token_for_ancestor(\n                    ancestor_kind,\n                    &token,\n                    &ancestor,\n                    &upsert.pointer,\n                )?;\n                let canonical_token = validated.canonical_token;\n                let parent_ancestor = ancestor.clone();\n                push_pointer_segment(&mut ancestor, &token);\n                push_pointer_segment(&mut canonical_pointer, &canonical_token);\n\n                if let Some(index) = validated.array_index {\n                    array_child_indices\n                        .entry(parent_ancestor)\n                        .or_default()\n                        .insert(index);\n                    terminal_token = Some(TypedPathToken::ArrayIndex(index));\n                } else {\n                    terminal_token = Some(TypedPathToken::ObjectKey(token));\n                }\n            }\n            upsert.terminal_token = terminal_token;\n\n            if !canonical_upsert_pointers.insert(canonical_pointer.clone()) {\n                return Err(PluginError::InvalidInput(format!(\n                    \"logical duplicate pointer '{canonical_pointer}' in projection rows\"\n                )));\n            }\n        }\n        let mut 
canonical_tombstone_pointers = BTreeSet::new();\n        for tombstone in &tombstones {\n            let mut ancestor = String::new();\n            let mut canonical_pointer = String::new();\n            for token in &tombstone.tokens {\n                if array_child_indices.contains_key(&ancestor) {\n                    if token == \"-\"\n                        || (!token.is_empty() && token.chars().all(|ch| ch.is_ascii_digit()))\n                    {\n                        let index =\n                            parse_projection_array_index(token, &ancestor, &tombstone.pointer)?;\n                        push_pointer_segment(&mut canonical_pointer, &index.to_string());\n                    } else {\n                        push_pointer_segment(&mut canonical_pointer, token);\n                    }\n                } else {\n                    push_pointer_segment(&mut canonical_pointer, token);\n                }\n                push_pointer_segment(&mut ancestor, token);\n            }\n\n            if canonical_upsert_pointers.contains(&canonical_pointer) {\n                return Err(PluginError::InvalidInput(format!(\n                    \"tombstone '{}' conflicts with live projection row '{}'\",\n                    tombstone.pointer, canonical_pointer\n                )));\n            }\n            if !canonical_tombstone_pointers.insert(canonical_pointer.clone()) {\n                return Err(PluginError::InvalidInput(format!(\n                    \"logical duplicate tombstone pointer '{canonical_pointer}' in projection rows\"\n                )));\n            }\n        }\n        validate_sparse_array_children(&array_child_indices)?;\n        let document = build_document_from_projection(upserts, has_root_tombstone)?;\n\n        serde_json::to_vec(&document).map_err(|error| {\n            PluginError::Internal(format!(\"failed to serialize reconstructed JSON: {error}\"))\n        })\n    }\n}\n\nfn parse_json_bytes(data: &[u8]) -> 
Result<Value, PluginError> {\n    if data.is_empty() {\n        return Ok(Value::Object(Map::new()));\n    }\n\n    serde_json::from_slice::<Value>(data).map_err(|error| {\n        PluginError::InvalidInput(format!(\"file.data must be valid JSON UTF-8 bytes: {error}\"))\n    })\n}\n\nfn parse_snapshot_value(raw: &str, pointer: &str) -> Result<Value, PluginError> {\n    if let Ok(parsed) = serde_json::from_str::<SnapshotContentWithPath>(raw) {\n        if parsed.path != pointer {\n            return Err(PluginError::InvalidInput(format!(\n                \"snapshot path '{}' does not match entity_id '{}'\",\n                parsed.path, pointer\n            )));\n        }\n        return Ok(parsed.value);\n    }\n\n    parse_snapshot_value_slow(raw, pointer)\n}\n\nfn parse_snapshot_value_slow(raw: &str, pointer: &str) -> Result<Value, PluginError> {\n    let parsed = serde_json::from_str::<Value>(raw).map_err(|error| {\n        PluginError::InvalidInput(format!(\n            \"invalid snapshot_content for pointer '{pointer}': {error}\"\n        ))\n    })?;\n\n    let Value::Object(mut object) = parsed else {\n        return Err(PluginError::InvalidInput(format!(\n            \"snapshot_content for pointer '{pointer}' must be an object with 'value'\"\n        )));\n    };\n\n    let raw_path = object.remove(\"path\");\n    let raw_value = object.remove(\"value\");\n    if !object.is_empty() {\n        return Err(PluginError::InvalidInput(format!(\n            \"snapshot_content for pointer '{pointer}' contains unsupported properties\"\n        )));\n    }\n\n    match (raw_path, raw_value) {\n        (Some(path), Some(value)) => {\n            let Some(path_string) = path.as_str() else {\n                return Err(PluginError::InvalidInput(format!(\n                    \"snapshot path for entity_id '{pointer}' must be a string\"\n                )));\n            };\n            if path_string != pointer {\n                return 
Err(PluginError::InvalidInput(format!(\n                    \"snapshot path '{path_string}' does not match entity_id '{pointer}'\"\n                )));\n            }\n            Ok(value)\n        }\n        (None, Some(_)) => Err(PluginError::InvalidInput(format!(\n            \"snapshot_content for pointer '{pointer}' must contain 'path'\"\n        ))),\n        (_, None) => Err(PluginError::InvalidInput(format!(\n            \"snapshot_content for pointer '{pointer}' must contain 'value'\"\n        ))),\n    }\n}\n\nfn diff_json(\n    before: Option<&Value>,\n    after: Option<&Value>,\n    path: &mut Vec<String>,\n    changes: &mut Vec<EntityChange>,\n) -> Result<(), PluginError> {\n    if before.is_none() && after.is_none() {\n        return Ok(());\n    }\n\n    if after.is_none() {\n        collect_deletions(\n            before.expect(\"after is none implies before exists\"),\n            path,\n            changes,\n            true,\n        );\n        return Ok(());\n    }\n\n    if before.is_none() {\n        collect_leaves(after.expect(\"checked above\"), path, changes)?;\n        return Ok(());\n    }\n\n    let before_value = before.expect(\"checked above\");\n    let after_value = after.expect(\"checked above\");\n\n    if before_value == after_value {\n        return Ok(());\n    }\n\n    let before_is_container = is_container(before_value);\n    let after_is_container = is_container(after_value);\n\n    if before_is_container && after_is_container {\n        if let (Some(before_items), Some(after_items)) =\n            (before_value.as_array(), after_value.as_array())\n        {\n            let shared = before_items.len().min(after_items.len());\n            for index in 0..shared {\n                path.push(index.to_string());\n                diff_json(\n                    before_items.get(index),\n                    after_items.get(index),\n                    path,\n                    changes,\n                )?;\n                
path.pop();\n            }\n\n            if before_items.len() > after_items.len() {\n                for index in (after_items.len()..before_items.len()).rev() {\n                    path.push(index.to_string());\n                    diff_json(before_items.get(index), None, path, changes)?;\n                    path.pop();\n                }\n            } else {\n                for index in before_items.len()..after_items.len() {\n                    path.push(index.to_string());\n                    diff_json(None, after_items.get(index), path, changes)?;\n                    path.pop();\n                }\n            }\n            return Ok(());\n        }\n\n        if let (Some(before_object), Some(after_object)) =\n            (before_value.as_object(), after_value.as_object())\n        {\n            let mut keys = before_object.keys().cloned().collect::<Vec<_>>();\n            for key in after_object.keys() {\n                if !before_object.contains_key(key) {\n                    keys.push(key.clone());\n                }\n            }\n\n            for key in keys {\n                path.push(key.clone());\n                diff_json(\n                    before_object.get(&key),\n                    after_object.get(&key),\n                    path,\n                    changes,\n                )?;\n                path.pop();\n            }\n            return Ok(());\n        }\n    }\n\n    if before_is_container || after_is_container {\n        collect_deletions(before_value, path, changes, false);\n        collect_leaves(after_value, path, changes)?;\n        return Ok(());\n    }\n\n    if before_value != after_value {\n        push_upsert(changes, pointer_from_segments(path), after_value.clone())?;\n    }\n\n    Ok(())\n}\n\nfn collect_deletions(\n    value: &Value,\n    path: &mut Vec<String>,\n    changes: &mut Vec<EntityChange>,\n    include_current: bool,\n) {\n    match value {\n        Value::Array(items) => {\n            if 
include_current {\n                push_deletion(changes, pointer_from_segments(path));\n            }\n            for index in (0..items.len()).rev() {\n                path.push(index.to_string());\n                collect_deletions(&items[index], path, changes, true);\n                path.pop();\n            }\n        }\n        Value::Object(object) => {\n            if include_current {\n                push_deletion(changes, pointer_from_segments(path));\n            }\n            for (key, item) in object {\n                path.push(key.clone());\n                collect_deletions(item, path, changes, true);\n                path.pop();\n            }\n        }\n        _ => {\n            if include_current {\n                push_deletion(changes, pointer_from_segments(path));\n            }\n        }\n    }\n}\n\nfn collect_leaves(\n    value: &Value,\n    path: &mut Vec<String>,\n    changes: &mut Vec<EntityChange>,\n) -> Result<(), PluginError> {\n    match value {\n        Value::Array(items) => {\n            push_upsert(\n                changes,\n                pointer_from_segments(path),\n                Value::Array(Vec::new()),\n            )?;\n            for (index, item) in items.iter().enumerate() {\n                path.push(index.to_string());\n                collect_leaves(item, path, changes)?;\n                path.pop();\n            }\n            Ok(())\n        }\n        Value::Object(object) => {\n            push_upsert(\n                changes,\n                pointer_from_segments(path),\n                Value::Object(Map::new()),\n            )?;\n            for (key, item) in object {\n                path.push(key.clone());\n                collect_leaves(item, path, changes)?;\n                path.pop();\n            }\n            Ok(())\n        }\n        _ => push_upsert(changes, pointer_from_segments(path), value.clone()),\n    }\n}\n\nfn push_deletion(changes: &mut Vec<EntityChange>, pointer: String) {\n 
   changes.push(EntityChange {\n        entity_id: pointer,\n        schema_key: SCHEMA_KEY.to_string(),\n        snapshot_content: None,\n    });\n}\n\nfn push_upsert(\n    changes: &mut Vec<EntityChange>,\n    pointer: String,\n    value: Value,\n) -> Result<(), PluginError> {\n    let snapshot_content = serde_json::to_string(&SnapshotContentRef {\n        path: &pointer,\n        value: &value,\n    })\n    .map_err(|error| {\n        PluginError::Internal(format!(\n            \"failed to serialize snapshot content for '{pointer}': {error}\"\n        ))\n    })?;\n\n    changes.push(EntityChange {\n        entity_id: pointer,\n        schema_key: SCHEMA_KEY.to_string(),\n        snapshot_content: Some(snapshot_content),\n    });\n\n    Ok(())\n}\n\nfn is_container(value: &Value) -> bool {\n    value.is_array() || value.is_object()\n}\n\nfn pointer_from_segments(segments: &[String]) -> String {\n    if segments.is_empty() {\n        return String::new();\n    }\n\n    let mut pointer = String::new();\n    for segment in segments {\n        push_pointer_segment(&mut pointer, segment);\n    }\n    pointer\n}\n\nfn push_pointer_segment(pointer: &mut String, token: &str) {\n    pointer.push('/');\n    for ch in token.chars() {\n        match ch {\n            '~' => pointer.push_str(\"~0\"),\n            '/' => pointer.push_str(\"~1\"),\n            _ => pointer.push(ch),\n        }\n    }\n}\n\nfn unescape_pointer_token(token: &str) -> Result<String, PluginError> {\n    let mut output = String::with_capacity(token.len());\n    let mut chars = token.chars();\n\n    while let Some(ch) = chars.next() {\n        if ch != '~' {\n            output.push(ch);\n            continue;\n        }\n\n        match chars.next() {\n            Some('0') => output.push('~'),\n            Some('1') => output.push('/'),\n            Some(other) => {\n                return Err(PluginError::InvalidInput(format!(\n                    \"invalid JSON pointer escape '~{other}' in token 
'{token}'\"\n                )));\n            }\n            None => {\n                return Err(PluginError::InvalidInput(format!(\n                    \"invalid JSON pointer escape '~' in token '{token}'\"\n                )));\n            }\n        }\n    }\n\n    Ok(output)\n}\n\nfn pointer_tokens(pointer: &str) -> Result<Vec<String>, PluginError> {\n    if pointer.is_empty() {\n        return Ok(Vec::new());\n    }\n\n    if !pointer.starts_with('/') {\n        return Err(PluginError::InvalidInput(format!(\n            \"entity_id '{pointer}' must be a JSON pointer\"\n        )));\n    }\n\n    pointer\n        .split('/')\n        .skip(1)\n        .map(unescape_pointer_token)\n        .collect()\n}\n\nstruct ValidatedChildToken {\n    canonical_token: String,\n    array_index: Option<usize>,\n}\n\nfn validate_child_token_for_ancestor(\n    ancestor_kind: ProjectionNodeKind,\n    child_token: &str,\n    ancestor_pointer: &str,\n    entity_id: &str,\n) -> Result<ValidatedChildToken, PluginError> {\n    match ancestor_kind {\n        ProjectionNodeKind::Object => Ok(ValidatedChildToken {\n            canonical_token: child_token.to_string(),\n            array_index: None,\n        }),\n        ProjectionNodeKind::Array => {\n            let index = parse_projection_array_index(child_token, ancestor_pointer, entity_id)?;\n            Ok(ValidatedChildToken {\n                canonical_token: index.to_string(),\n                array_index: Some(index),\n            })\n        }\n        ProjectionNodeKind::Scalar => Err(PluginError::InvalidInput(format!(\n            \"ancestor '{ancestor_pointer}' for entity_id '{entity_id}' is not a container\"\n        ))),\n    }\n}\n\nfn validate_sparse_array_children(\n    indices_by_ancestor: &BTreeMap<String, BTreeSet<usize>>,\n) -> Result<(), PluginError> {\n    for (ancestor, indices) in indices_by_ancestor {\n        let Some(max_index) = indices.iter().next_back() else {\n            continue;\n        };\n\n  
      for expected in 0..=*max_index {\n            if !indices.contains(&expected) {\n                return Err(PluginError::InvalidInput(format!(\n                    \"sparse array projection under '{ancestor}': missing index {expected}\"\n                )));\n            }\n        }\n    }\n    Ok(())\n}\n\nfn parse_projection_array_index(\n    token: &str,\n    ancestor_pointer: &str,\n    entity_id: &str,\n) -> Result<usize, PluginError> {\n    if token == \"-\" {\n        return Err(PluginError::InvalidInput(format!(\n            \"entity_id '{entity_id}' uses non-canonical '-' array token under '{ancestor_pointer}'\"\n        )));\n    }\n    if token.is_empty() || !token.chars().all(|ch| ch.is_ascii_digit()) {\n        return Err(PluginError::InvalidInput(format!(\n            \"invalid array index token '{token}' under '{ancestor_pointer}'\"\n        )));\n    }\n    if token.len() > 1 && token.starts_with('0') {\n        return Err(PluginError::InvalidInput(format!(\n            \"entity_id '{entity_id}' uses non-canonical array index token '{token}' under '{ancestor_pointer}'\"\n        )));\n    }\n\n    let index = token.parse::<usize>().map_err(|error| {\n        PluginError::InvalidInput(format!(\n            \"invalid array index token '{token}' under '{ancestor_pointer}': {error}\"\n        ))\n    })?;\n    if index > MAX_ARRAY_INDEX {\n        return Err(PluginError::InvalidInput(format!(\n            \"array index {index} exceeds max supported index {MAX_ARRAY_INDEX}\"\n        )));\n    }\n    Ok(index)\n}\n\nfn build_document_from_projection(\n    upserts: Vec<ProjectionUpsert>,\n    has_root_tombstone: bool,\n) -> Result<Value, PluginError> {\n    if upserts.is_empty() {\n        return Ok(if has_root_tombstone {\n            Value::Null\n        } else {\n            Value::Object(Map::new())\n        });\n    }\n\n    let mut index_by_pointer = HashMap::with_capacity(upserts.len());\n    let mut pointers = 
Vec::with_capacity(upserts.len());\n    let mut nodes = Vec::with_capacity(upserts.len());\n    for (index, upsert) in upserts.into_iter().enumerate() {\n        index_by_pointer.insert(upsert.pointer.clone(), index);\n        pointers.push(upsert.pointer);\n        nodes.push(ProjectionTreeNode {\n            value: Some(upsert.value),\n            terminal_token: upsert.terminal_token,\n            object_children: Vec::new(),\n            array_children: Vec::new(),\n        });\n    }\n\n    let root_index = index_by_pointer.get(\"\").copied().ok_or_else(|| {\n        PluginError::InvalidInput(\n            \"non-root projection rows require a root row with entity_id ''\".to_string(),\n        )\n    })?;\n\n    for index in 0..pointers.len() {\n        let pointer = &pointers[index];\n        if pointer.is_empty() {\n            continue;\n        }\n        let parent_pointer = parent_pointer(pointer);\n        let parent_index = index_by_pointer\n            .get(parent_pointer)\n            .copied()\n            .ok_or_else(|| {\n                PluginError::InvalidInput(format!(\n                    \"missing ancestor container row '{parent_pointer}' for entity_id '{pointer}'\"\n                ))\n            })?;\n        let terminal_token = nodes[index].terminal_token.take().ok_or_else(|| {\n            PluginError::Internal(format!(\n                \"missing terminal token for non-root projection row '{pointer}'\"\n            ))\n        })?;\n\n        match terminal_token {\n            TypedPathToken::ObjectKey(key) => {\n                nodes[parent_index].object_children.push((key, index));\n            }\n            TypedPathToken::ArrayIndex(array_index) => {\n                nodes[parent_index]\n                    .array_children\n                    .push((array_index, index));\n            }\n        }\n    }\n\n    materialize_projection_node(&mut nodes, root_index)\n}\n\nfn parent_pointer(pointer: &str) -> &str {\n    pointer\n        
.rsplit_once('/')\n        .map(|(parent, _)| parent)\n        .unwrap_or(\"\")\n}\n\nfn materialize_projection_node(\n    nodes: &mut [ProjectionTreeNode],\n    index: usize,\n) -> Result<Value, PluginError> {\n    let (mut value, object_children, array_children) = {\n        let node = nodes.get_mut(index).ok_or_else(|| {\n            PluginError::Internal(format!(\"projection node index {index} out of bounds\"))\n        })?;\n        (\n            node.value.take().ok_or_else(|| {\n                PluginError::Internal(format!(\"projection node {index} was materialized twice\"))\n            })?,\n            std::mem::take(&mut node.object_children),\n            std::mem::take(&mut node.array_children),\n        )\n    };\n\n    match &mut value {\n        Value::Object(object) => {\n            if !array_children.is_empty() {\n                return Err(PluginError::InvalidInput(\n                    \"object projection node cannot have array-index children\".to_string(),\n                ));\n            }\n            for (key, child_index) in object_children {\n                let child_value = materialize_projection_node(nodes, child_index)?;\n                object.insert(key, child_value);\n            }\n        }\n        Value::Array(items) => {\n            if !object_children.is_empty() {\n                return Err(PluginError::InvalidInput(\n                    \"array projection node cannot have object-key children\".to_string(),\n                ));\n            }\n            for (array_index, child_index) in array_children {\n                while items.len() <= array_index {\n                    items.push(Value::Null);\n                }\n                items[array_index] = materialize_projection_node(nodes, child_index)?;\n            }\n        }\n        _ => {\n            if !object_children.is_empty() || !array_children.is_empty() {\n                return Err(PluginError::InvalidInput(\n                    \"scalar projection node 
cannot have children\".to_string(),\n                ));\n            }\n        }\n    }\n\n    Ok(value)\n}\n\npub fn detect_changes(before: Option<File>, after: File) -> Result<Vec<EntityChange>, PluginError> {\n    <JsonPlugin as Guest>::detect_changes(before, after, None)\n}\n\npub fn detect_changes_with_state_context(\n    before: Option<File>,\n    after: File,\n    state_context: Option<crate::exports::lix::plugin::api::DetectStateContext>,\n) -> Result<Vec<EntityChange>, PluginError> {\n    <JsonPlugin as Guest>::detect_changes(before, after, state_context)\n}\n\npub fn apply_changes(file: File, changes: Vec<EntityChange>) -> Result<Vec<u8>, PluginError> {\n    <JsonPlugin as Guest>::apply_changes(file, changes)\n}\n\npub fn schema_json() -> &'static str {\n    JSON_POINTER_SCHEMA_JSON\n}\n\npub fn schema_definition() -> &'static Value {\n    JSON_POINTER_SCHEMA.get_or_init(|| {\n        serde_json::from_str(JSON_POINTER_SCHEMA_JSON).expect(\"json pointer schema must be valid\")\n    })\n}\n\nexport!(JsonPlugin);\n"
  },
  {
    "path": "packages/plugin-json-v2/tests/apply_changes.rs",
    "content": "mod common;\n\nuse common::{file_from_json, snapshot_content};\nuse plugin_json_v2::{apply_changes, PluginApiError, PluginEntityChange, SCHEMA_KEY};\nuse serde_json::Value;\n\nfn with_root_object(mut changes: Vec<PluginEntityChange>) -> Vec<PluginEntityChange> {\n    if changes.iter().any(|change| change.entity_id.is_empty()) {\n        return changes;\n    }\n\n    let mut with_root = vec![PluginEntityChange {\n        entity_id: \"\".to_string(),\n        schema_key: SCHEMA_KEY.to_string(),\n        snapshot_content: Some(snapshot_content(\"\", Value::Object(serde_json::Map::new()))),\n    }];\n    with_root.append(&mut changes);\n    with_root\n}\n\n#[test]\nfn applies_insert_update_delete() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{\"stale\":\"cache\"}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/Name\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\n                \"/Name\",\n                Value::String(\"Samuel\".to_string()),\n            )),\n        },\n        PluginEntityChange {\n            entity_id: \"/Age\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/Age\", Value::Number(20.into()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/City\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: None,\n        },\n    ];\n\n    let output =\n        apply_changes(file, with_root_object(changes)).expect(\"apply_changes should succeed\");\n\n    let parsed: Value = serde_json::from_slice(&output).expect(\"output should be valid JSON\");\n    assert_eq!(parsed, serde_json::json!({\"Name\":\"Samuel\",\"Age\":20}));\n}\n\n#[test]\nfn applies_array_changes_with_indexes() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{\"stale\":\"cache\"}\"#);\n    let changes = vec![\n    
    PluginEntityChange {\n            entity_id: \"/list\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/list\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/list/0\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/list/0\", Value::String(\"a\".to_string()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/list/1\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/list/1\", Value::String(\"x\".to_string()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/list/2\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/list/2\", Value::String(\"c\".to_string()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/list/3\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/list/3\", Value::String(\"d\".to_string()))),\n        },\n    ];\n\n    let output =\n        apply_changes(file, with_root_object(changes)).expect(\"apply_changes should succeed\");\n\n    let parsed: Value = serde_json::from_slice(&output).expect(\"output should be valid JSON\");\n    assert_eq!(parsed, serde_json::json!({\"list\":[\"a\",\"x\",\"c\",\"d\"]}));\n}\n\n#[test]\nfn rejects_snapshot_missing_path() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{\"foo\":1}\"#);\n    let changes = vec![PluginEntityChange {\n        entity_id: \"/foo\".to_string(),\n        schema_key: SCHEMA_KEY.to_string(),\n        snapshot_content: Some(r#\"{\"value\":2}\"#.to_string()),\n    }];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        
PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"must contain 'path'\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn infers_array_parent_for_numeric_pointer_segment() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/team\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/team\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/team/0\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\n                \"/team/0\",\n                Value::Object(serde_json::Map::new()),\n            )),\n        },\n        PluginEntityChange {\n            entity_id: \"/team/0/name\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\n                \"/team/0/name\",\n                Value::String(\"Ada\".to_string()),\n            )),\n        },\n    ];\n\n    let output =\n        apply_changes(file, with_root_object(changes)).expect(\"apply_changes should succeed\");\n    let parsed: Value = serde_json::from_slice(&output).expect(\"output should parse\");\n    assert_eq!(parsed, serde_json::json!({\"team\":[{\"name\":\"Ada\"}]}));\n}\n\n#[test]\nfn removing_root_sets_null() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{\"foo\":1}\"#);\n    let changes = vec![PluginEntityChange {\n        entity_id: \"\".to_string(),\n        schema_key: SCHEMA_KEY.to_string(),\n        snapshot_content: None,\n    }];\n\n    let output =\n        apply_changes(file, with_root_object(changes)).expect(\"apply_changes should succeed\");\n    let parsed: Value = 
serde_json::from_slice(&output).expect(\"output should parse\");\n    assert_eq!(parsed, Value::Null);\n}\n\n#[test]\nfn rejects_duplicate_entity_ids_in_projection_set() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/foo\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/foo\", Value::Number(1.into()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/foo\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/foo\", Value::Number(2.into()))),\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"duplicate entity_id\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_mismatched_snapshot_path() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{\"foo\":1}\"#);\n    let changes = vec![PluginEntityChange {\n        entity_id: \"/foo\".to_string(),\n        schema_key: SCHEMA_KEY.to_string(),\n        snapshot_content: Some(r#\"{\"path\":\"/bar\",\"value\":2}\"#.to_string()),\n    }];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"snapshot path '/bar'\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_invalid_json_pointer_escape() {\n    let file = file_from_json(\"f1\", \"/x.json\", 
r#\"{\"foo\":1}\"#);\n    let changes = vec![PluginEntityChange {\n        entity_id: \"/foo/~2bar\".to_string(),\n        schema_key: SCHEMA_KEY.to_string(),\n        snapshot_content: Some(snapshot_content(\"/foo/~2bar\", Value::Number(2.into()))),\n    }];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"invalid JSON pointer escape\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_invalid_dash_placement() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{\"list\":[{\"x\":\"a\"}]}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/list\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/list\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/list/-/x\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\n                \"/list/-/x\",\n                Value::String(\"b\".to_string()),\n            )),\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"non-canonical '-' array token\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn allows_proto_like_keys_when_projection_rows_are_consistent() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            
entity_id: \"/__proto__\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\n                \"/__proto__\",\n                Value::Object(serde_json::Map::new()),\n            )),\n        },\n        PluginEntityChange {\n            entity_id: \"/__proto__/x\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\n                \"/__proto__/x\",\n                Value::String(\"pwn\".to_string()),\n            )),\n        },\n    ];\n\n    let output =\n        apply_changes(file, with_root_object(changes)).expect(\"apply_changes should succeed\");\n    let parsed: Value = serde_json::from_slice(&output).expect(\"output should parse\");\n    assert_eq!(parsed, serde_json::json!({\"__proto__\":{\"x\":\"pwn\"}}));\n}\n\n#[test]\nfn rejects_descendant_upsert_under_tombstoned_ancestor() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/a\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: None,\n        },\n        PluginEntityChange {\n            entity_id: \"/a/b\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/a/b\", Value::Number(1.into()))),\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"conflicts with tombstoned ancestor\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_root_tombstone_with_non_root_rows() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = 
vec![\n        PluginEntityChange {\n            entity_id: \"\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: None,\n        },\n        PluginEntityChange {\n            entity_id: \"/a\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/a\", Value::Number(1.into()))),\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"root tombstone cannot coexist\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_snapshot_path_non_string() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![PluginEntityChange {\n        entity_id: \"/safe\".to_string(),\n        schema_key: SCHEMA_KEY.to_string(),\n        snapshot_content: Some(r#\"{\"path\":123,\"value\":1}\"#.to_string()),\n    }];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"must be a string\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_snapshot_with_additional_properties_or_missing_value() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n\n    let with_extra = vec![PluginEntityChange {\n        entity_id: \"/safe\".to_string(),\n        schema_key: SCHEMA_KEY.to_string(),\n        snapshot_content: Some(r#\"{\"path\":\"/safe\",\"value\":1,\"extra\":true}\"#.to_string()),\n    }];\n    let error = 
apply_changes(file.clone(), with_root_object(with_extra))\n        .expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"unsupported properties\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n\n    let missing_value = vec![PluginEntityChange {\n        entity_id: \"/safe\".to_string(),\n        schema_key: SCHEMA_KEY.to_string(),\n        snapshot_content: Some(r#\"{\"path\":\"/safe\"}\"#.to_string()),\n    }];\n    let error = apply_changes(file, with_root_object(missing_value))\n        .expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"must contain 'value'\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_numeric_child_without_parent_container_row() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![PluginEntityChange {\n        entity_id: \"/foo/0\".to_string(),\n        schema_key: SCHEMA_KEY.to_string(),\n        snapshot_content: Some(snapshot_content(\"/foo/0\", Value::String(\"x\".to_string()))),\n    }];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"missing ancestor container row\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_huge_array_index_growth() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange 
{\n            entity_id: \"/arr\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/100001\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\n                \"/arr/100001\",\n                Value::String(\"x\".to_string()),\n            )),\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"exceeds max supported index\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_leading_zero_array_indices_under_array_ancestor() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/arr\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/01\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr/01\", Value::String(\"A\".to_string()))),\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"non-canonical array index token\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn 
accepts_canonical_zero_array_index() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/arr\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/0\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr/0\", Value::String(\"A\".to_string()))),\n        },\n    ];\n\n    let output =\n        apply_changes(file, with_root_object(changes)).expect(\"apply_changes should succeed\");\n    let parsed: Value = serde_json::from_slice(&output).expect(\"output should parse\");\n    assert_eq!(parsed, serde_json::json!({\"arr\":[\"A\"]}));\n}\n\n#[test]\nfn rejects_sparse_array_projection_rows() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/arr\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/5\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr/5\", Value::String(\"x\".to_string()))),\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"sparse array projection\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_aliasing_array_indices_via_non_canonical_form() {\n    let file = 
file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/arr\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/1\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr/1\", Value::String(\"A\".to_string()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/01\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr/01\", Value::String(\"B\".to_string()))),\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"non-canonical array index token\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_tombstone_with_leading_zero_token_under_live_array_context() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/arr\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/0\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr/0\", Value::String(\"A\".to_string()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/01\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            
snapshot_content: None,\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"non-canonical array index token\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_tombstone_with_dash_token_under_live_array_context() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/arr\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/0\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr/0\", Value::String(\"A\".to_string()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/-\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: None,\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"non-canonical '-' array token\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn allows_tombstone_with_leading_zero_token_with_only_live_array_container() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/arr\".to_string(),\n            schema_key: 
SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/00\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: None,\n        },\n    ];\n\n    let output =\n        apply_changes(file, with_root_object(changes)).expect(\"apply_changes should succeed\");\n    let parsed: Value = serde_json::from_slice(&output).expect(\"output should parse\");\n    assert_eq!(parsed, serde_json::json!({\"arr\":[]}));\n}\n\n#[test]\nfn allows_tombstone_with_dash_token_with_only_live_array_container() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/arr\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/-\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: None,\n        },\n    ];\n\n    let output =\n        apply_changes(file, with_root_object(changes)).expect(\"apply_changes should succeed\");\n    let parsed: Value = serde_json::from_slice(&output).expect(\"output should parse\");\n    assert_eq!(parsed, serde_json::json!({\"arr\":[]}));\n}\n\n#[test]\nfn rejects_live_array_row_with_non_canonical_tombstone_alias() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/arr\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/0\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: 
Some(snapshot_content(\"/arr/0\", Value::Null)),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/1\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr/1\", Value::String(\"B\".to_string()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/01\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: None,\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"non-canonical array index token\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn allows_tombstone_non_numeric_token_under_live_array_context() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/arr\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/0\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr/0\", Value::Null)),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/foo\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: None,\n        },\n    ];\n\n    let output =\n        apply_changes(file, with_root_object(changes)).expect(\"apply_changes should succeed\");\n    let parsed: Value = serde_json::from_slice(&output).expect(\"output should parse\");\n    assert_eq!(parsed, serde_json::json!({\"arr\":[null]}));\n}\n\n#[test]\nfn 
rejects_root_scalar_with_non_root_descendants() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"\", Value::Number(7.into()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/a\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/a\", Value::Number(1.into()))),\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"is not a container\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_scalar_ancestor_with_descendant_row() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/a\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/a\", Value::Number(1.into()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/a/b\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/a/b\", Value::Number(2.into()))),\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"is not a container\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        
}\n    }\n}\n\n#[test]\nfn rejects_final_dash_token_in_projection_rows() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![\n        PluginEntityChange {\n            entity_id: \"/arr\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr\", Value::Array(Vec::new()))),\n        },\n        PluginEntityChange {\n            entity_id: \"/arr/-\".to_string(),\n            schema_key: SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content(\"/arr/-\", Value::String(\"x\".to_string()))),\n        },\n    ];\n\n    let error =\n        apply_changes(file, with_root_object(changes)).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"non-canonical '-' array token\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn rejects_non_root_rows_when_root_row_is_missing() {\n    let file = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n    let changes = vec![PluginEntityChange {\n        entity_id: \"/0\".to_string(),\n        schema_key: SCHEMA_KEY.to_string(),\n        snapshot_content: Some(snapshot_content(\"/0\", Value::String(\"x\".to_string()))),\n    }];\n\n    let error = apply_changes(file, changes).expect_err(\"apply_changes should fail\");\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"non-root projection rows require a root row\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n"
  },
  {
    "path": "packages/plugin-json-v2/tests/common/mod.rs",
    "content": "#![allow(dead_code)]\n\nuse plugin_json_v2::{PluginEntityChange, PluginFile};\nuse serde::Deserialize;\nuse serde_json::Value;\n\n#[derive(Debug, Deserialize)]\nstruct SnapshotContent {\n    path: String,\n    value: Value,\n}\n\npub fn file_from_json(id: &str, path: &str, json: &str) -> PluginFile {\n    PluginFile {\n        id: id.to_string(),\n        path: path.to_string(),\n        data: json.as_bytes().to_vec(),\n    }\n}\n\npub fn parse_snapshot_value_from_change(change: &PluginEntityChange) -> Value {\n    let Some(snapshot_content) = change.snapshot_content.as_ref() else {\n        panic!(\"change should have snapshot_content\");\n    };\n\n    let parsed: SnapshotContent =\n        serde_json::from_str(snapshot_content).expect(\"snapshot content should parse\");\n    assert_eq!(parsed.path, change.entity_id);\n    parsed.value\n}\n\npub fn snapshot_content(path: &str, value: Value) -> String {\n    serde_json::json!({\n        \"path\": path,\n        \"value\": value,\n    })\n    .to_string()\n}\n"
  },
  {
    "path": "packages/plugin-json-v2/tests/detect_changes.rs",
    "content": "mod common;\n\nuse common::{file_from_json, parse_snapshot_value_from_change};\nuse plugin_json_v2::{detect_changes, SCHEMA_KEY};\nuse serde_json::Value;\n\n#[test]\nfn returns_empty_when_documents_are_equal() {\n    let before = file_from_json(\"f1\", \"/x.json\", r#\"{\"Name\":\"Anna\",\"Age\":20}\"#);\n    let after = file_from_json(\"f1\", \"/x.json\", r#\"{\"Name\":\"Anna\",\"Age\":20}\"#);\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert!(changes.is_empty());\n}\n\n#[test]\nfn detects_root_insert() {\n    let before = file_from_json(\"f1\", \"/x.json\", r#\"{\"Name\":\"Anna\",\"Age\":20}\"#);\n    let after = file_from_json(\n        \"f1\",\n        \"/x.json\",\n        r#\"{\"Name\":\"Anna\",\"Age\":20,\"City\":\"New York\"}\"#,\n    );\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert_eq!(changes.len(), 1);\n    assert_eq!(changes[0].entity_id, \"/City\");\n    assert_eq!(changes[0].schema_key, SCHEMA_KEY);\n    assert_eq!(\n        parse_snapshot_value_from_change(&changes[0]),\n        Value::String(\"New York\".to_string())\n    );\n}\n\n#[test]\nfn detects_nested_array_updates_and_deletions() {\n    let before = file_from_json(\"f1\", \"/x.json\", r#\"{\"list\":[\"a\",\"b\",\"c\"]}\"#);\n    let after = file_from_json(\"f1\", \"/x.json\", r#\"{\"list\":[\"a\",\"x\"]}\"#);\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert_eq!(changes.len(), 2);\n    assert_eq!(changes[0].entity_id, \"/list/1\");\n    assert_eq!(\n        parse_snapshot_value_from_change(&changes[0]),\n        Value::String(\"x\".to_string())\n    );\n    assert_eq!(changes[1].entity_id, \"/list/2\");\n    assert_eq!(changes[1].snapshot_content, None);\n}\n\n#[test]\nfn detects_container_replacement() {\n    let before = file_from_json(\"f1\", \"/x.json\", 
r#\"{\"a\":{\"x\":1}}\"#);\n    let after = file_from_json(\"f1\", \"/x.json\", r#\"{\"a\":2}\"#);\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert_eq!(changes.len(), 2);\n    assert_eq!(changes[0].entity_id, \"/a/x\");\n    assert_eq!(changes[0].snapshot_content, None);\n    assert_eq!(changes[1].entity_id, \"/a\");\n    assert_eq!(\n        parse_snapshot_value_from_change(&changes[1]),\n        Value::Number(2.into())\n    );\n}\n\n#[test]\nfn handles_file_creation_without_synthetic_root_deletion() {\n    let after = file_from_json(\"f1\", \"/x.json\", r#\"{\"Name\":\"Anna\"}\"#);\n\n    let changes = detect_changes(None, after).expect(\"detect_changes should succeed\");\n\n    assert_eq!(changes.len(), 2);\n    assert_eq!(changes[0].entity_id, \"\");\n    assert_eq!(\n        parse_snapshot_value_from_change(&changes[0]),\n        Value::Object(serde_json::Map::new())\n    );\n    assert_eq!(changes[1].entity_id, \"/Name\");\n    assert_eq!(\n        parse_snapshot_value_from_change(&changes[1]),\n        Value::String(\"Anna\".to_string())\n    );\n}\n\n#[test]\nfn detects_multi_delete_array_in_descending_order() {\n    let before = file_from_json(\"f1\", \"/x.json\", r#\"{\"list\":[\"a\",\"b\",\"c\",\"d\"]}\"#);\n    let after = file_from_json(\"f1\", \"/x.json\", r#\"{\"list\":[\"a\"]}\"#);\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert_eq!(changes.len(), 3);\n    assert_eq!(changes[0].entity_id, \"/list/3\");\n    assert_eq!(changes[0].snapshot_content, None);\n    assert_eq!(changes[1].entity_id, \"/list/2\");\n    assert_eq!(changes[1].snapshot_content, None);\n    assert_eq!(changes[2].entity_id, \"/list/1\");\n    assert_eq!(changes[2].snapshot_content, None);\n}\n\n#[test]\nfn deleting_non_empty_container_emits_subtree_tombstones() {\n    let before = file_from_json(\"f1\", \"/x.json\", r#\"{\"a\":{\"b\":1}}\"#);\n    let 
after = file_from_json(\"f1\", \"/x.json\", r#\"{}\"#);\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert_eq!(changes.len(), 2);\n    assert_eq!(changes[0].entity_id, \"/a\");\n    assert_eq!(changes[0].snapshot_content, None);\n    assert_eq!(changes[1].entity_id, \"/a/b\");\n    assert_eq!(changes[1].snapshot_content, None);\n}\n\n#[test]\nfn replacing_non_empty_container_with_scalar_tombstones_subtree() {\n    let before = file_from_json(\"f1\", \"/x.json\", r#\"{\"a\":{\"b\":1}}\"#);\n    let after = file_from_json(\"f1\", \"/x.json\", r#\"2\"#);\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert_eq!(changes.len(), 3);\n    assert_eq!(changes[0].entity_id, \"/a\");\n    assert_eq!(changes[0].snapshot_content, None);\n    assert_eq!(changes[1].entity_id, \"/a/b\");\n    assert_eq!(changes[1].snapshot_content, None);\n    assert_eq!(changes[2].entity_id, \"\");\n    assert_eq!(\n        parse_snapshot_value_from_change(&changes[2]),\n        Value::Number(2.into())\n    );\n}\n\n#[test]\nfn deleting_whole_object_property_emits_subtree_tombstones() {\n    let before = file_from_json(\n        \"f1\",\n        \"/x.json\",\n        r#\"{\"keep\":1,\"obj\":{\"k\":1,\"nested\":{\"z\":2}}}\"#,\n    );\n    let after = file_from_json(\"f1\", \"/x.json\", r#\"{\"keep\":1}\"#);\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n    let mut entity_ids = changes\n        .iter()\n        .map(|change| change.entity_id.as_str())\n        .collect::<Vec<_>>();\n    entity_ids.sort_unstable();\n\n    assert_eq!(\n        entity_ids,\n        vec![\"/obj\", \"/obj/k\", \"/obj/nested\", \"/obj/nested/z\"]\n    );\n    assert!(changes\n        .iter()\n        .all(|change| change.snapshot_content.is_none()));\n}\n\n#[test]\nfn deleting_whole_array_property_emits_subtree_tombstones() {\n    let before = 
file_from_json(\"f1\", \"/x.json\", r#\"{\"keep\":1,\"arr\":[{\"x\":1},2,3]}\"#);\n    let after = file_from_json(\"f1\", \"/x.json\", r#\"{\"keep\":1}\"#);\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n    let mut entity_ids = changes\n        .iter()\n        .map(|change| change.entity_id.as_str())\n        .collect::<Vec<_>>();\n    entity_ids.sort_unstable();\n\n    assert_eq!(\n        entity_ids,\n        vec![\"/arr\", \"/arr/0\", \"/arr/0/x\", \"/arr/1\", \"/arr/2\"]\n    );\n    assert!(changes\n        .iter()\n        .all(|change| change.snapshot_content.is_none()));\n}\n\n#[test]\nfn deleting_nested_subtree_emits_all_descendant_tombstones() {\n    let before = file_from_json(\"f1\", \"/x.json\", r#\"{\"a\":{\"b\":{\"c\":1,\"d\":2},\"e\":3},\"x\":0}\"#);\n    let after = file_from_json(\"f1\", \"/x.json\", r#\"{\"a\":{\"e\":3},\"x\":0}\"#);\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n    let mut entity_ids = changes\n        .iter()\n        .map(|change| change.entity_id.as_str())\n        .collect::<Vec<_>>();\n    entity_ids.sort_unstable();\n\n    assert_eq!(entity_ids, vec![\"/a/b\", \"/a/b/c\", \"/a/b/d\"]);\n    assert!(changes\n        .iter()\n        .all(|change| change.snapshot_content.is_none()));\n}\n"
  },
  {
    "path": "packages/plugin-json-v2/tests/roundtrip.rs",
    "content": "mod common;\n\nuse std::collections::BTreeMap;\n\nuse common::file_from_json;\nuse plugin_json_v2::{apply_changes, detect_changes, PluginEntityChange, SCHEMA_KEY};\nuse serde_json::Value;\n\nfn merge_latest_state_rows(changesets: Vec<Vec<PluginEntityChange>>) -> Vec<PluginEntityChange> {\n    let mut latest = BTreeMap::new();\n    for changes in changesets {\n        for change in changes {\n            if change.schema_key != SCHEMA_KEY {\n                continue;\n            }\n            latest.insert(\n                (change.schema_key.clone(), change.entity_id.clone()),\n                change,\n            );\n        }\n    }\n    latest.into_values().collect()\n}\n\nfn projected_changes_for_transition(\n    before_json: &str,\n    after_json: &str,\n) -> Vec<PluginEntityChange> {\n    let baseline = detect_changes(None, file_from_json(\"f1\", \"/x.json\", before_json))\n        .expect(\"baseline detect_changes should succeed\");\n    let delta = detect_changes(\n        Some(file_from_json(\"f1\", \"/x.json\", before_json)),\n        file_from_json(\"f1\", \"/x.json\", after_json),\n    )\n    .expect(\"delta detect_changes should succeed\");\n    merge_latest_state_rows(vec![baseline, delta])\n}\n\nfn apply_projection(changes: Vec<PluginEntityChange>) -> Value {\n    let seed = file_from_json(\"f1\", \"/x.json\", r#\"{\"stale\":\"cache\"}\"#);\n    let reconstructed = apply_changes(seed, changes).expect(\"apply_changes should succeed\");\n    serde_json::from_slice(&reconstructed).expect(\"reconstructed bytes should parse\")\n}\n\nfn assert_projection_roundtrip(before_json: &str, after_json: &str) {\n    let reconstructed_json =\n        apply_projection(projected_changes_for_transition(before_json, after_json));\n    let expected_json: Value =\n        serde_json::from_str(after_json).expect(\"expected JSON should parse\");\n    assert_eq!(reconstructed_json, expected_json);\n}\n\n#[test]\nfn roundtrip_reconstructs_after_document() 
{\n    assert_projection_roundtrip(\n        r#\"{\"Name\":\"Samuel\",\"address\":{\"city\":\"Berlin\",\"zip\":\"10115\"},\"tags\":[\"a\",\"b\",\"c\"]}\"#,\n        r#\"{\"Name\":\"Sam\",\"address\":{\"city\":\"Berlin\"},\"tags\":[\"a\",\"x\"],\"active\":true}\"#,\n    );\n}\n\n#[test]\nfn roundtrip_file_creation_from_empty_seed() {\n    assert_projection_roundtrip(\n        r#\"{}\"#,\n        r#\"{\"profile\":{\"name\":\"Anna\"},\"roles\":[\"admin\",\"editor\"]}\"#,\n    );\n}\n\n#[test]\nfn roundtrip_handles_numeric_object_keys() {\n    assert_projection_roundtrip(r#\"{}\"#, r#\"{\"foo\":{\"0\":\"x\",\"1\":\"y\"}}\"#);\n}\n\n#[test]\nfn roundtrip_handles_multi_delete_arrays() {\n    assert_projection_roundtrip(r#\"{\"list\":[\"a\",\"b\",\"c\",\"d\"]}\"#, r#\"{\"list\":[\"a\"]}\"#);\n}\n\n#[test]\nfn roundtrip_preserves_pointer_escaped_keys() {\n    assert_projection_roundtrip(\n        r#\"{\"a/b\":\"old\",\"tilde~key\":\"x\"}\"#,\n        r#\"{\"a/b\":\"new\",\"tilde~key\":\"y\"}\"#,\n    );\n}\n\n#[test]\nfn roundtrip_replacing_empty_object_in_array_index_keeps_neighbors() {\n    assert_projection_roundtrip(r#\"{\"arr\":[{}, \"x\"]}\"#, r#\"{\"arr\":[1, \"x\"]}\"#);\n}\n\n#[test]\nfn roundtrip_replacing_empty_array_with_empty_object_in_array_index_keeps_neighbors() {\n    assert_projection_roundtrip(r#\"{\"arr\":[[], \"x\"]}\"#, r#\"{\"arr\":[{}, \"x\"]}\"#);\n}\n\n#[test]\nfn roundtrip_deleting_non_empty_container_removes_descendants() {\n    assert_projection_roundtrip(r#\"{\"a\":{\"b\":1}}\"#, r#\"{}\"#);\n}\n\n#[test]\nfn roundtrip_replacing_non_empty_container_with_scalar_removes_descendants() {\n    assert_projection_roundtrip(r#\"{\"a\":{\"b\":1}}\"#, r#\"2\"#);\n}\n\n#[test]\nfn roundtrip_deleting_whole_object_property_removes_subtree_rows() {\n    assert_projection_roundtrip(\n        r#\"{\"keep\":1,\"obj\":{\"k\":1,\"nested\":{\"z\":2}}}\"#,\n        r#\"{\"keep\":1}\"#,\n    );\n}\n\n#[test]\nfn 
roundtrip_deleting_whole_array_property_removes_subtree_rows() {\n    assert_projection_roundtrip(r#\"{\"keep\":1,\"arr\":[{\"x\":1},2,3]}\"#, r#\"{\"keep\":1}\"#);\n}\n\n#[test]\nfn roundtrip_deleting_nested_subtree_removes_descendants() {\n    assert_projection_roundtrip(\n        r#\"{\"a\":{\"b\":{\"c\":1,\"d\":2},\"e\":3},\"x\":0}\"#,\n        r#\"{\"a\":{\"e\":3},\"x\":0}\"#,\n    );\n}\n\n#[test]\nfn roundtrip_replacing_root_array_with_scalar_removes_descendants() {\n    assert_projection_roundtrip(r#\"[{\"a\":1},{\"b\":2},3]\"#, r#\"7\"#);\n}\n\n#[test]\nfn roundtrip_with_proto_like_keys_is_supported() {\n    assert_projection_roundtrip(\n        r#\"{\"__proto__\":{\"ok\":true},\"prototype\":[1],\"constructor\":{\"x\":1}}\"#,\n        r#\"{\"__proto__\":{\"ok\":false},\"prototype\":[1,2],\"constructor\":{\"x\":2}}\"#,\n    );\n}\n\n#[test]\nfn roundtrip_handles_object_key_dash() {\n    assert_projection_roundtrip(r#\"{}\"#, r#\"{\"obj\":{\"-\":{\"x\":1}}}\"#);\n}\n\n#[test]\nfn roundtrip_handles_pointer_escape_edge_keys() {\n    assert_projection_roundtrip(r#\"{}\"#, r#\"{\"\":{\"/\":1,\"~\":2,\"~1\":3,\"~0\":4}}\"#);\n}\n\n#[test]\nfn roundtrip_replacing_root_object_with_array_allows_non_numeric_old_keys() {\n    assert_projection_roundtrip(r#\"{\"~1\":\"x\"}\"#, r#\"[]\"#);\n}\n\n#[test]\nfn roundtrip_replacing_nested_object_with_array_allows_non_numeric_old_keys() {\n    assert_projection_roundtrip(r#\"{\"x\":{\"~1\":\"v\"}}\"#, r#\"{\"x\":[]}\"#);\n}\n\n#[test]\nfn roundtrip_replacing_object_with_array_allows_dash_and_leading_zero_keys() {\n    assert_projection_roundtrip(r#\"{\"-\":\"dash\",\"01\":\"lead\",\"foo\":\"bar\"}\"#, r#\"[]\"#);\n}\n\n#[derive(Clone)]\nstruct Lcg {\n    state: u64,\n}\n\nimpl Lcg {\n    fn new(seed: u64) -> Self {\n        Self { state: seed }\n    }\n\n    fn next_u32(&mut self) -> u32 {\n        self.state = self.state.wrapping_mul(6364136223846793005).wrapping_add(1);\n        (self.state >> 32) as u32\n    }\n\n    fn 
next_usize(&mut self, max_exclusive: usize) -> usize {\n        if max_exclusive == 0 {\n            return 0;\n        }\n        (self.next_u32() as usize) % max_exclusive\n    }\n\n    fn next_bool(&mut self) -> bool {\n        (self.next_u32() & 1) == 0\n    }\n}\n\nfn random_scalar(rng: &mut Lcg) -> Value {\n    match rng.next_usize(5) {\n        0 => Value::Null,\n        1 => Value::Bool(rng.next_bool()),\n        2 => Value::Number(((rng.next_u32() % 100) as i64).into()),\n        3 => Value::String(format!(\"s{}\", rng.next_u32() % 10)),\n        _ => Value::String(String::new()),\n    }\n}\n\nfn random_json(rng: &mut Lcg, depth: usize) -> Value {\n    if depth == 0 {\n        return random_scalar(rng);\n    }\n\n    match rng.next_usize(5) {\n        0 => random_scalar(rng),\n        1 => {\n            let len = rng.next_usize(3);\n            let mut values = Vec::new();\n            for _ in 0..len {\n                values.push(random_json(rng, depth - 1));\n            }\n            Value::Array(values)\n        }\n        _ => {\n            let candidate_keys = [\"\", \"a\", \"b\", \"x\", \"~\", \"~0\", \"~1\", \"/\", \"a/b\"];\n            let count = rng.next_usize(4);\n            let mut object = serde_json::Map::new();\n            for _ in 0..count {\n                let key = candidate_keys[rng.next_usize(candidate_keys.len())].to_string();\n                object\n                    .entry(key)\n                    .or_insert_with(|| random_json(rng, depth - 1));\n            }\n            Value::Object(object)\n        }\n    }\n}\n\n#[test]\nfn roundtrip_randomized_transition_invariant() {\n    let mut rng = Lcg::new(0xA11CE5EEDu64);\n\n    for _ in 0..300 {\n        let before = random_json(&mut rng, 3);\n        let after = random_json(&mut rng, 3);\n        let before_json = serde_json::to_string(&before).expect(\"before should serialize\");\n        let after_json = serde_json::to_string(&after).expect(\"after should 
serialize\");\n        assert_projection_roundtrip(&before_json, &after_json);\n    }\n}\n\n#[test]\nfn roundtrip_is_invariant_to_change_order_permutations() {\n    let before_json = r#\"{\"list\":[\"a\",\"b\",\"c\",\"d\"],\"flags\":{\"active\":false},\"old\":\"x\"}\"#;\n    let after_json = r#\"{\"list\":[\"a\"],\"flags\":{\"active\":true},\"team\":[{\"name\":\"Ada\"}]}\"#;\n    let projected = projected_changes_for_transition(before_json, after_json);\n    let expected: Value = serde_json::from_str(after_json).expect(\"expected JSON should parse\");\n\n    let mut permutations = Vec::new();\n    permutations.push(projected.clone());\n\n    let mut reversed = projected.clone();\n    reversed.reverse();\n    permutations.push(reversed);\n\n    let mut rotated = projected.clone();\n    if !rotated.is_empty() {\n        rotated.rotate_left(1);\n    }\n    permutations.push(rotated);\n\n    let mut lexicographic = projected.clone();\n    lexicographic.sort_by(|a, b| a.entity_id.cmp(&b.entity_id));\n    permutations.push(lexicographic);\n\n    let mut reverse_lexicographic = projected.clone();\n    reverse_lexicographic.sort_by(|a, b| b.entity_id.cmp(&a.entity_id));\n    permutations.push(reverse_lexicographic);\n\n    for changes in permutations {\n        let reconstructed = apply_projection(changes);\n        assert_eq!(reconstructed, expected);\n    }\n}\n\n#[test]\nfn roundtrip_reconstructs_with_lexicographic_entity_id_order() {\n    let before_json = r#\"{\"list\":[\"a\",\"b\",\"c\",\"d\"]}\"#;\n    let after_json = r#\"{\"list\":[\"a\"]}\"#;\n    let mut projected = projected_changes_for_transition(before_json, after_json);\n    projected.sort_by(|a, b| a.entity_id.cmp(&b.entity_id));\n\n    let reconstructed = apply_projection(projected);\n    let expected: Value = serde_json::from_str(after_json).expect(\"expected JSON should parse\");\n    assert_eq!(reconstructed, expected);\n}\n"
  },
  {
    "path": "packages/plugin-json-v2/tests/schema.rs",
    "content": "use plugin_json_v2::{schema_definition, schema_json, SCHEMA_KEY};\n\n#[test]\nfn schema_json_is_valid_and_matches_constants() {\n    let schema = schema_definition();\n\n    let key = schema\n        .get(\"x-lix-key\")\n        .and_then(serde_json::Value::as_str)\n        .expect(\"schema must define string x-lix-key\");\n    assert_eq!(key, SCHEMA_KEY);\n\n    let primary_key = schema\n        .get(\"x-lix-primary-key\")\n        .and_then(serde_json::Value::as_array)\n        .expect(\"schema must define x-lix-primary-key array\");\n    assert_eq!(primary_key.len(), 1);\n    assert_eq!(primary_key[0].as_str(), Some(\"/path\"));\n}\n\n#[test]\nfn schema_json_accessor_returns_expected_text() {\n    let raw = schema_json();\n    assert!(raw.contains(\"\\\"x-lix-key\\\": \\\"json_pointer\\\"\"));\n}\n"
  },
  {
    "path": "packages/plugin-md-v2/.gitignore",
    "content": "/target\n"
  },
  {
    "path": "packages/plugin-md-v2/Cargo.toml",
    "content": "[package]\nname = \"plugin_md_v2\"\nversion = \"0.1.0\"\nedition = \"2021\"\npublish = false\n\n[lib]\ncrate-type = [\"cdylib\", \"rlib\"]\n\n[dependencies]\nmarkdown = { version = \"1\", features = [\"serde\"] }\n# Temporary workspace unblock: local dependency is missing in this checkout.\n# markdown_wc = { path = \"../../../markdown-wc\" }\nserde = { version = \"1\", features = [\"derive\"] }\nserde_json = \"1\"\nstrsim = \"0.11\"\nunicode-normalization = \"0.1\"\nwit-bindgen = \"0.40\"\n\n[dev-dependencies]\ncriterion = \"0.5\"\n\n[[bench]]\nname = \"detect_changes\"\nharness = false\n"
  },
  {
    "path": "packages/plugin-md-v2/README.md",
    "content": "# plugin-md-v2\n\nRust/WASM component Markdown plugin for the Lix engine.\n\n## Current scope\n\n- `detect-changes` parses markdown with `markdown-rs` using GFM + MDX + math + frontmatter options.\n- Emits block-level rows (`markdown_v2_block`) plus a document order row (`markdown_v2_document`).\n- `apply-changes` materializes markdown from the latest block snapshots and document order.\n\nThis establishes a deterministic block-level projection baseline with unit tests and benchmarks.\n\n## Identity Model (v2)\n\n`plugin-md-v2` detect expects active state context for top-level block IDs:\n\n- With `detect_changes.state_context.include_active_state: true`: existing IDs are reused from active state rows whenever blocks can be matched (exact + fuzzy matching).\n- Fingerprint normalization includes:\n  - line ending normalization (`CRLF`/`CR` -> `LF`)\n  - Unicode NFC normalization for all string fields\n\nPractical behavior (with active state context):\n\n- Pure reorder of unchanged blocks keeps IDs stable and only updates the document `order`.\n- With active state context, content edits can keep existing IDs and emit only an upsert.\n- Cross-type edits (e.g. paragraph -> heading) also produce tombstone + upsert + document update.\n\n## Expected Change Shapes\n\nCommon detect scenarios:\n\n- No-op: `[]`\n- New file with `N` top-level blocks: `N` block upserts + `1` document row\n- Pure reorder: `1` document row only\n- Insert one block: `1` block upsert + `1` document row\n- Delete one block: `1` block tombstone + `1` document row\n- Edit one block: `1` block tombstone + `1` block upsert + `1` document row\n\nThis is intentionally different from v1 nested-node identity. v2 tracks identity at top-level block granularity.\n"
  },
  {
    "path": "packages/plugin-md-v2/benches/common/mod.rs",
    "content": "use plugin_md_v2::PluginFile;\n\npub fn file_from_markdown(id: &str, path: &str, markdown: &str) -> PluginFile {\n    PluginFile {\n        id: id.to_string(),\n        path: path.to_string(),\n        data: markdown.as_bytes().to_vec(),\n    }\n}\n\npub fn dataset_small() -> (String, String) {\n    let before = \"# Title\\n\\nA short paragraph.\\n\".to_string();\n    let after = \"# Title\\n\\nA short paragraph with update.\\n\".to_string();\n    (before, after)\n}\n\npub fn dataset_medium() -> (String, String) {\n    let mut before = String::new();\n    let mut after = String::new();\n    before.push_str(\"---\\ntitle: Medium\\n---\\n\\n\");\n    after.push_str(\"---\\ntitle: Medium\\n---\\n\\n\");\n\n    for idx in 0..120 {\n        before.push_str(&format!(\"## Section {idx}\\n\\nParagraph {idx}.\\n\\n\"));\n        after.push_str(&format!(\n            \"## Section {idx}\\n\\nParagraph {idx} changed with value {}.\\n\\n\",\n            idx * 3\n        ));\n    }\n\n    (before, after)\n}\n\npub fn dataset_large() -> (String, String) {\n    let mut before = String::new();\n    let mut after = String::new();\n    before.push_str(\"---\\ntitle: Large\\n---\\n\\n\");\n    after.push_str(\"---\\ntitle: Large\\n---\\n\\n\");\n\n    for idx in 0..450 {\n        before.push_str(&format!(\n            \"### Item {idx}\\n\\n- [x] done\\n- [ ] pending\\n\\nInline math $a_{} + b_{}$\\n\\n<Component value={{ {} }} />\\n\\n\",\n            idx,\n            idx,\n            idx\n        ));\n        after.push_str(&format!(\n            \"### Item {idx}\\n\\n- [x] done\\n- [x] pending\\n\\nInline math $a_{} + b_{} + c_{}$\\n\\n<Component value={{ {} }} flag />\\n\\n\",\n            idx,\n            idx,\n            idx,\n            idx\n        ));\n    }\n\n    (before, after)\n}\n"
  },
  {
    "path": "packages/plugin-md-v2/benches/detect_changes.rs",
    "content": "mod common;\n\nuse criterion::{criterion_group, criterion_main, BatchSize, Criterion};\nuse plugin_md_v2::{\n    detect_changes, detect_changes_with_state_context, PluginActiveStateRow,\n    PluginDetectStateContext, PluginEntityChange,\n};\n\nfn to_state_context(rows: &[PluginEntityChange]) -> PluginDetectStateContext {\n    PluginDetectStateContext {\n        active_state: Some(\n            rows.iter()\n                .map(|row| PluginActiveStateRow {\n                    entity_id: row.entity_id.clone(),\n                    schema_key: Some(row.schema_key.clone()),\n                    snapshot_content: row.snapshot_content.clone(),\n                    file_id: None,\n                    plugin_key: None,\n                    version_id: None,\n                    change_id: None,\n                    metadata: None,\n                    created_at: None,\n                    updated_at: None,\n                })\n                .collect::<Vec<_>>(),\n        ),\n    }\n}\n\nfn bench_detect_changes(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"detect_changes\");\n    group.sample_size(20);\n\n    for (name, (before, after)) in [\n        (\"small\", common::dataset_small()),\n        (\"medium\", common::dataset_medium()),\n        (\"large\", common::dataset_large()),\n    ] {\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    (\n                        common::file_from_markdown(\"f1\", \"/doc.mdx\", &before),\n                        common::file_from_markdown(\"f1\", \"/doc.mdx\", &after),\n                    )\n                },\n                |(before_file, after_file)| {\n                    detect_changes(Some(before_file), after_file)\n                        .expect(\"detect_changes benchmark should succeed\")\n                },\n                BatchSize::SmallInput,\n            );\n        });\n    }\n\n    group.finish();\n}\n\nfn 
bench_detect_changes_with_state_context(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"detect_changes_with_state_context\");\n    group.sample_size(20);\n\n    for (name, (before, after)) in [\n        (\"medium\", common::dataset_medium()),\n        (\"large\", common::dataset_large()),\n    ] {\n        let before_file = common::file_from_markdown(\"f1\", \"/doc.mdx\", &before);\n        let after_file = common::file_from_markdown(\"f1\", \"/doc.mdx\", &after);\n        let bootstrap = detect_changes(None, before_file.clone())\n            .expect(\"bootstrap detect_changes benchmark should succeed\");\n        let state_context = to_state_context(&bootstrap);\n\n        group.bench_function(name, |b| {\n            b.iter_batched(\n                || {\n                    (\n                        before_file.clone(),\n                        after_file.clone(),\n                        state_context.clone(),\n                    )\n                },\n                |(before_file, after_file, state_context)| {\n                    detect_changes_with_state_context(\n                        Some(before_file),\n                        after_file,\n                        Some(state_context),\n                    )\n                    .expect(\"detect_changes_with_state_context benchmark should succeed\")\n                },\n                BatchSize::SmallInput,\n            );\n        });\n    }\n\n    group.finish();\n}\n\ncriterion_group!(\n    benches,\n    bench_detect_changes,\n    bench_detect_changes_with_state_context\n);\ncriterion_main!(benches);\n"
  },
  {
    "path": "packages/plugin-md-v2/manifest.json",
    "content": "{\n  \"key\": \"plugin_md_v2\",\n  \"runtime\": \"wasm-component-v1\",\n  \"api_version\": \"0.1.0\",\n  \"match\": {\n    \"path_glob\": \"*.{md,mdx}\",\n    \"content_type\": \"text\"\n  },\n  \"detect_changes\": {\n    \"state_context\": {\n      \"include_active_state\": true,\n      \"columns\": [\n        \"entity_id\",\n        \"schema_key\",\n        \"snapshot_content\"\n      ]\n    }\n  },\n  \"entry\": \"plugin.wasm\",\n  \"schemas\": [\n    \"schema/markdown_document.json\",\n    \"schema/markdown_block.json\"\n  ]\n}\n"
  },
  {
    "path": "packages/plugin-md-v2/schema/markdown_block.json",
    "content": "{\n  \"x-lix-key\": \"markdown_v2_block\",\n  \"x-lix-primary-key\": [\n    \"/id\"\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"id\": {\n      \"type\": \"string\",\n      \"minLength\": 1\n    },\n    \"type\": {\n      \"type\": \"string\",\n      \"minLength\": 1\n    },\n    \"node\": {\n      \"type\": \"object\"\n    },\n    \"markdown\": {\n      \"type\": \"string\"\n    }\n  },\n  \"required\": [\n    \"id\",\n    \"type\",\n    \"node\",\n    \"markdown\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/plugin-md-v2/schema/markdown_document.json",
    "content": "{\n  \"x-lix-key\": \"markdown_v2_document\",\n  \"x-lix-primary-key\": [\n    \"/id\"\n  ],\n  \"type\": \"object\",\n  \"properties\": {\n    \"id\": {\n      \"type\": \"string\",\n      \"const\": \"root\"\n    },\n    \"order\": {\n      \"type\": \"array\",\n      \"items\": {\n        \"type\": \"string\",\n        \"minLength\": 1\n      }\n    }\n  },\n  \"required\": [\n    \"id\",\n    \"order\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/plugin-md-v2/src/apply_changes.rs",
    "content": "use crate::common::{BlockSnapshotContent, DocumentSnapshotContent};\nuse crate::exports::lix::plugin::api::{EntityChange, File, PluginError};\nuse crate::schemas::{BLOCK_SCHEMA_KEY, DOCUMENT_SCHEMA_KEY};\nuse crate::ROOT_ENTITY_ID;\nuse std::collections::{BTreeMap, BTreeSet};\n\npub(crate) fn apply_changes(\n    file: File,\n    changes: Vec<EntityChange>,\n) -> Result<Vec<u8>, PluginError> {\n    let mut document: Option<DocumentSnapshotContent> = None;\n    let mut blocks_by_id: BTreeMap<String, BlockSnapshotContent> = BTreeMap::new();\n    let mut seen_block_ids = BTreeSet::new();\n\n    for change in changes {\n        if change.schema_key != DOCUMENT_SCHEMA_KEY && change.schema_key != BLOCK_SCHEMA_KEY {\n            continue;\n        }\n\n        if change.schema_key == DOCUMENT_SCHEMA_KEY {\n            if change.entity_id != ROOT_ENTITY_ID {\n                return Err(PluginError::InvalidInput(format!(\n                    \"unsupported entity_id '{}' for schema_key '{}', expected '{}'\",\n                    change.entity_id, DOCUMENT_SCHEMA_KEY, ROOT_ENTITY_ID\n                )));\n            }\n            if document.is_some() {\n                return Err(PluginError::InvalidInput(format!(\n                    \"duplicate entity_id '{}' for schema_key '{}'\",\n                    ROOT_ENTITY_ID, DOCUMENT_SCHEMA_KEY\n                )));\n            }\n\n            let snapshot = match change.snapshot_content {\n                Some(raw) => {\n                    let parsed: DocumentSnapshotContent =\n                        serde_json::from_str(&raw).map_err(|error| {\n                            PluginError::InvalidInput(format!(\n                                \"invalid snapshot_content for entity_id '{}': {error}\",\n                                ROOT_ENTITY_ID\n                            ))\n                        })?;\n                    if parsed.id != ROOT_ENTITY_ID {\n                        return 
Err(PluginError::InvalidInput(format!(\n                            \"document snapshot id '{}' does not match expected '{}'\",\n                            parsed.id, ROOT_ENTITY_ID\n                        )));\n                    }\n                    parsed\n                }\n                None => DocumentSnapshotContent {\n                    id: ROOT_ENTITY_ID.to_string(),\n                    order: Vec::new(),\n                },\n            };\n\n            document = Some(snapshot);\n            continue;\n        }\n\n        // BLOCK_SCHEMA_KEY\n        if !seen_block_ids.insert(change.entity_id.clone()) {\n            return Err(PluginError::InvalidInput(format!(\n                \"duplicate entity_id '{}' for schema_key '{}'\",\n                change.entity_id, BLOCK_SCHEMA_KEY\n            )));\n        }\n\n        let Some(snapshot_content) = change.snapshot_content else {\n            continue;\n        };\n\n        let snapshot: BlockSnapshotContent =\n            serde_json::from_str(&snapshot_content).map_err(|error| {\n                PluginError::InvalidInput(format!(\n                    \"invalid snapshot_content for entity_id '{}': {error}\",\n                    change.entity_id\n                ))\n            })?;\n\n        if snapshot.id != change.entity_id {\n            return Err(PluginError::InvalidInput(format!(\n                \"block snapshot id '{}' does not match entity_id '{}'\",\n                snapshot.id, change.entity_id\n            )));\n        }\n\n        blocks_by_id.insert(change.entity_id, snapshot);\n    }\n\n    if document.is_none() && blocks_by_id.is_empty() {\n        return Ok(file.data);\n    }\n\n    let mut ordered_ids = document\n        .as_ref()\n        .map(|doc| doc.order.clone())\n        .unwrap_or_else(|| blocks_by_id.keys().cloned().collect());\n\n    // Include orphaned blocks not referenced by document order to avoid data loss.\n    for id in blocks_by_id.keys() {\n        if 
!ordered_ids.contains(id) {\n            ordered_ids.push(id.clone());\n        }\n    }\n\n    let mut parts = Vec::new();\n    for id in ordered_ids {\n        let Some(block) = blocks_by_id.get(&id) else {\n            continue;\n        };\n        let normalized = block.markdown.trim_matches('\\n').to_string();\n        if normalized.is_empty() {\n            continue;\n        }\n        parts.push(normalized);\n    }\n\n    let mut markdown = parts.join(\"\\n\\n\");\n    if !markdown.is_empty() {\n        markdown.push('\\n');\n    }\n\n    Ok(markdown.into_bytes())\n}\n"
  },
  {
    "path": "packages/plugin-md-v2/src/common.rs",
    "content": "#[derive(Debug, serde::Serialize, serde::Deserialize, PartialEq, Eq)]\n#[serde(deny_unknown_fields)]\npub(crate) struct DocumentSnapshotContent {\n    pub(crate) id: String,\n    pub(crate) order: Vec<String>,\n}\n\n#[derive(Debug, serde::Serialize, serde::Deserialize, PartialEq)]\n#[serde(deny_unknown_fields)]\npub(crate) struct BlockSnapshotContent {\n    pub(crate) id: String,\n    #[serde(rename = \"type\")]\n    pub(crate) node_type: String,\n    pub(crate) node: serde_json::Value,\n    pub(crate) markdown: String,\n}\n"
  },
  {
    "path": "packages/plugin-md-v2/src/detect_changes.rs",
    "content": "use crate::common::{BlockSnapshotContent, DocumentSnapshotContent};\nuse crate::exports::lix::plugin::api::{DetectStateContext, EntityChange, File, PluginError};\nuse crate::schemas::{BLOCK_SCHEMA_KEY, DOCUMENT_SCHEMA_KEY};\nuse crate::ROOT_ENTITY_ID;\nuse markdown::mdast::{Node, Root};\nuse markdown::{to_mdast, ParseOptions};\nuse serde_json::Value;\nuse std::collections::{BTreeMap, HashMap, HashSet};\nuse strsim::normalized_levenshtein;\nuse unicode_normalization::{is_nfc, UnicodeNormalization};\n\n#[derive(Debug, Clone)]\nstruct ParsedBlock {\n    id: String,\n    schema_key: String,\n    node_type: String,\n    node_json: Value,\n    markdown: String,\n    fingerprint: String,\n}\n\n#[derive(Debug, Clone)]\nstruct ParsedBlockCandidate {\n    node_type: String,\n    node_json: Value,\n    markdown: String,\n    fingerprint: String,\n}\n\n#[derive(Debug, Clone)]\nstruct BeforeProjection {\n    order: Vec<String>,\n    blocks_by_id: BTreeMap<String, ParsedBlock>,\n}\n\npub(crate) fn detect_changes(\n    _before: Option<File>,\n    after: File,\n    state_context: Option<DetectStateContext>,\n) -> Result<Vec<EntityChange>, PluginError> {\n    if !is_markdown_path(&after.path) {\n        return Ok(Vec::new());\n    }\n\n    let after_markdown = decode_markdown_bytes(&after.data)?;\n    let after_candidates = parse_top_level_block_candidates(&after_markdown)?;\n\n    let before_projection = parse_state_context_projection(state_context.as_ref())?;\n\n    let BeforeProjection {\n        order: before_order,\n        blocks_by_id: before_by_id,\n    } = before_projection;\n\n    let after_blocks =\n        assign_ids_with_existing_state(after_candidates, &before_order, &before_by_id);\n    let after_order = after_blocks\n        .iter()\n        .map(|block| block.id.clone())\n        .collect::<Vec<_>>();\n    let after_by_id = to_block_map(after_blocks)?;\n\n    let mut changes = Vec::new();\n\n    for id in before_by_id.keys() {\n        if 
!after_by_id.contains_key(id) {\n            let before_block = before_by_id\n                .get(id)\n                .expect(\"key came from before_by_id.keys() iterator\");\n            changes.push(EntityChange {\n                entity_id: id.clone(),\n                schema_key: before_block.schema_key.clone(),\n                snapshot_content: None,\n            });\n        }\n    }\n\n    for (id, after_block) in &after_by_id {\n        match before_by_id.get(id) {\n            Some(before_block) if blocks_equal_for_change_detection(before_block, after_block)? => {\n            }\n            _ => changes.push(block_upsert_change(after_block)?),\n        }\n    }\n\n    if before_order != after_order {\n        let snapshot_content = serde_json::to_string(&DocumentSnapshotContent {\n            id: ROOT_ENTITY_ID.to_string(),\n            order: after_order,\n        })\n        .map_err(|error| {\n            PluginError::Internal(format!(\n                \"failed to serialize markdown document snapshot: {error}\"\n            ))\n        })?;\n\n        changes.push(EntityChange {\n            entity_id: ROOT_ENTITY_ID.to_string(),\n            schema_key: DOCUMENT_SCHEMA_KEY.to_string(),\n            snapshot_content: Some(snapshot_content),\n        });\n    }\n\n    Ok(changes)\n}\n\nfn parse_state_context_projection(\n    state_context: Option<&DetectStateContext>,\n) -> Result<BeforeProjection, PluginError> {\n    let Some(state_context) = state_context else {\n        return Ok(BeforeProjection {\n            order: Vec::new(),\n            blocks_by_id: BTreeMap::new(),\n        });\n    };\n    let Some(rows) = state_context.active_state.as_ref() else {\n        return Ok(BeforeProjection {\n            order: Vec::new(),\n            blocks_by_id: BTreeMap::new(),\n        });\n    };\n\n    let mut document_order = None::<Vec<String>>;\n    let mut blocks_by_id = BTreeMap::<String, ParsedBlock>::new();\n\n    for row in rows {\n        let 
Some(schema_key) = row.schema_key.as_deref() else {\n            continue;\n        };\n        let Some(snapshot_content) = row.snapshot_content.as_deref() else {\n            continue;\n        };\n\n        if schema_key == DOCUMENT_SCHEMA_KEY {\n            let snapshot: DocumentSnapshotContent = serde_json::from_str(snapshot_content)\n                .map_err(|error| {\n                    PluginError::Internal(format!(\n                        \"invalid markdown document row in detect state context: {error}\"\n                    ))\n                })?;\n            document_order = Some(snapshot.order);\n            continue;\n        }\n\n        if schema_key != BLOCK_SCHEMA_KEY {\n            continue;\n        }\n\n        let snapshot: BlockSnapshotContent =\n            serde_json::from_str(snapshot_content).map_err(|error| {\n                PluginError::Internal(format!(\n                    \"invalid markdown block row in detect state context: {error}\"\n                ))\n            })?;\n        let fingerprint = normalize_text_for_fingerprint(&snapshot.markdown);\n        let block = ParsedBlock {\n            id: row.entity_id.clone(),\n            schema_key: BLOCK_SCHEMA_KEY.to_string(),\n            node_type: snapshot.node_type,\n            node_json: snapshot.node,\n            markdown: snapshot.markdown,\n            fingerprint,\n        };\n        blocks_by_id.insert(block.id.clone(), block);\n    }\n\n    let mut order = document_order.unwrap_or_default();\n    order.retain(|id| blocks_by_id.contains_key(id));\n\n    if order.len() != blocks_by_id.len() {\n        let order_set = order.iter().cloned().collect::<HashSet<_>>();\n        let remaining = blocks_by_id\n            .keys()\n            .filter(|id| !order_set.contains(*id))\n            .cloned()\n            .collect::<Vec<_>>();\n        order.extend(remaining);\n    }\n\n    Ok(BeforeProjection {\n        order,\n        blocks_by_id,\n    })\n}\n\nfn 
assign_ids_with_existing_state(\n    candidates: Vec<ParsedBlockCandidate>,\n    before_order: &[String],\n    before_by_id: &BTreeMap<String, ParsedBlock>,\n) -> Vec<ParsedBlock> {\n    if candidates.is_empty() {\n        return Vec::new();\n    }\n\n    let mut ordered_before_ids = before_order\n        .iter()\n        .filter(|id| before_by_id.contains_key(*id))\n        .cloned()\n        .collect::<Vec<_>>();\n    let mut ordered_before_id_set = ordered_before_ids.iter().cloned().collect::<HashSet<_>>();\n    for id in before_by_id.keys() {\n        if !ordered_before_id_set.contains(id) {\n            ordered_before_ids.push(id.clone());\n            ordered_before_id_set.insert(id.clone());\n        }\n    }\n\n    let mut assigned_ids = vec![None::<String>; candidates.len()];\n    let mut matched_before_ids = HashSet::<String>::new();\n\n    let mut before_exact = BTreeMap::<(String, String), Vec<String>>::new();\n    for id in &ordered_before_ids {\n        let before = before_by_id\n            .get(id)\n            .expect(\"ordered_before_ids are sourced from before_by_id\");\n        before_exact\n            .entry((before.node_type.clone(), before.fingerprint.clone()))\n            .or_default()\n            .push(id.clone());\n    }\n\n    let mut after_exact = BTreeMap::<(String, String), Vec<usize>>::new();\n    for (idx, after) in candidates.iter().enumerate() {\n        after_exact\n            .entry((after.node_type.clone(), after.fingerprint.clone()))\n            .or_default()\n            .push(idx);\n    }\n\n    for (key, after_indexes) in after_exact {\n        let Some(before_ids) = before_exact.get(&key) else {\n            continue;\n        };\n        let pair_count = before_ids.len().min(after_indexes.len());\n        let before_positions = if before_ids.len() > after_indexes.len() {\n            sampled_positions(before_ids.len(), pair_count)\n        } else {\n            (0..pair_count).collect::<Vec<_>>()\n        };\n        
let after_positions = if after_indexes.len() > before_ids.len() {\n            sampled_positions(after_indexes.len(), pair_count)\n        } else {\n            (0..pair_count).collect::<Vec<_>>()\n        };\n\n        for offset in 0..pair_count {\n            let before_id = before_ids[before_positions[offset]].clone();\n            let after_idx = after_indexes[after_positions[offset]];\n            if assigned_ids[after_idx].is_none() {\n                assigned_ids[after_idx] = Some(before_id.clone());\n                matched_before_ids.insert(before_id);\n            }\n        }\n    }\n\n    // Fast-path: if lengths are equal, reuse same-index IDs for unmatched candidates\n    // when node types align. This avoids O(n^2) fuzzy scoring for in-place edits.\n    if candidates.len() == ordered_before_ids.len() {\n        for (after_idx, after) in candidates.iter().enumerate() {\n            if assigned_ids[after_idx].is_some() {\n                continue;\n            }\n            let Some(before_id) = ordered_before_ids.get(after_idx) else {\n                continue;\n            };\n            if matched_before_ids.contains(before_id) {\n                continue;\n            }\n            let Some(before_block) = before_by_id.get(before_id) else {\n                continue;\n            };\n            if before_block.node_type == after.node_type {\n                assigned_ids[after_idx] = Some(before_id.clone());\n                matched_before_ids.insert(before_id.clone());\n            }\n        }\n    }\n\n    let before_positions = ordered_before_ids\n        .iter()\n        .enumerate()\n        .map(|(idx, id)| (id.clone(), idx))\n        .collect::<HashMap<_, _>>();\n\n    let before_normalized_text = ordered_before_ids\n        .iter()\n        .filter_map(|id| {\n            before_by_id\n                .get(id)\n                .map(|before| (id.clone(), normalize_text_for_fingerprint(&before.markdown)))\n        })\n        
.collect::<HashMap<_, _>>();\n    let after_normalized_text = candidates\n        .iter()\n        .map(|after| normalize_text_for_fingerprint(&after.markdown))\n        .collect::<Vec<_>>();\n\n    let mut before_ids_by_type = HashMap::<String, Vec<String>>::new();\n    for id in &ordered_before_ids {\n        let before = before_by_id\n            .get(id)\n            .expect(\"ordered_before_ids are sourced from before_by_id\");\n        before_ids_by_type\n            .entry(before.node_type.clone())\n            .or_default()\n            .push(id.clone());\n    }\n\n    for (after_idx, after) in candidates.iter().enumerate() {\n        if assigned_ids[after_idx].is_some() {\n            continue;\n        }\n\n        let mut pool = before_ids_by_type\n            .get(&after.node_type)\n            .into_iter()\n            .flat_map(|ids| ids.iter())\n            .filter_map(|id| {\n                if matched_before_ids.contains(id) {\n                    return None;\n                }\n                let before = before_by_id.get(id)?;\n                let before_idx = *before_positions.get(id).unwrap_or(&0);\n                Some((id.clone(), before, before_idx))\n            })\n            .collect::<Vec<_>>();\n\n        if pool.is_empty() {\n            continue;\n        }\n\n        let chosen = if pool.len() == 1 {\n            Some(pool.swap_remove(0).0)\n        } else {\n            let after_text = &after_normalized_text[after_idx];\n            let total = candidates.len().max(ordered_before_ids.len()).max(1) as f64;\n            let mut scored = pool\n                .iter()\n                .map(|(id, before, before_idx)| {\n                    let before_text = before_normalized_text\n                        .get(id)\n                        .map(String::as_str)\n                        .unwrap_or(&before.markdown);\n                    let similarity = normalized_levenshtein(&before_text, &after_text);\n                    let position 
= 1.0 - ((after_idx as f64 - *before_idx as f64).abs() / total);\n                    let score = similarity * 0.75 + position * 0.25;\n                    (id.clone(), similarity, score)\n                })\n                .collect::<Vec<_>>();\n\n            scored.sort_by(|a, b| b.2.total_cmp(&a.2).then_with(|| b.1.total_cmp(&a.1)));\n\n            let top = scored[0].clone();\n            let second = scored.get(1).cloned();\n            let accept = match second {\n                None => true,\n                Some((_, second_similarity, second_score)) => {\n                    top.1 >= 0.55\n                        || top.2 >= 0.60\n                        || (top.1 >= 0.35\n                            && (top.1 - second_similarity) >= 0.15\n                            && (top.2 - second_score) >= 0.08)\n                }\n            };\n\n            if accept {\n                Some(top.0)\n            } else {\n                None\n            }\n        };\n\n        if let Some(id) = chosen {\n            matched_before_ids.insert(id.clone());\n            assigned_ids[after_idx] = Some(id);\n        }\n    }\n\n    assign_missing_ids(candidates, assigned_ids)\n}\n\nfn sampled_positions(total: usize, picks: usize) -> Vec<usize> {\n    if picks == 0 || total == 0 {\n        return Vec::new();\n    }\n    if picks == 1 {\n        return vec![0];\n    }\n\n    let mut positions = Vec::with_capacity(picks);\n    for index in 0..picks {\n        let ratio = index as f64 / (picks - 1) as f64;\n        let target = (ratio * (total - 1) as f64).round() as usize;\n        let min_allowed = positions.last().copied().unwrap_or(0);\n        let max_allowed = total - (picks - index);\n        positions.push(target.clamp(min_allowed, max_allowed));\n    }\n\n    positions\n}\n\nfn assign_missing_ids(\n    candidates: Vec<ParsedBlockCandidate>,\n    assigned_ids: Vec<Option<String>>,\n) -> Vec<ParsedBlock> {\n    let mut occurrence_counter: HashMap<(String, 
String), u32> = HashMap::new();\n    let mut used_ids = assigned_ids\n        .iter()\n        .filter_map(|id| id.clone())\n        .collect::<HashSet<_>>();\n\n    candidates\n        .into_iter()\n        .enumerate()\n        .map(|(idx, candidate)| {\n            let occurrence_key = (candidate.node_type.clone(), candidate.fingerprint.clone());\n            let occurrence = occurrence_counter\n                .entry(occurrence_key)\n                .and_modify(|count| *count += 1)\n                .or_insert(1);\n\n            let id = if let Some(existing) = assigned_ids[idx].clone() {\n                existing\n            } else {\n                let base = block_id(&candidate.node_type, &candidate.fingerprint, *occurrence);\n                if !used_ids.contains(&base) {\n                    base\n                } else {\n                    let mut suffix = 2u32;\n                    let mut candidate_id = format!(\"{base}_{suffix}\");\n                    while used_ids.contains(&candidate_id) {\n                        suffix += 1;\n                        candidate_id = format!(\"{base}_{suffix}\");\n                    }\n                    candidate_id\n                }\n            };\n\n            used_ids.insert(id.clone());\n\n            ParsedBlock {\n                id,\n                schema_key: BLOCK_SCHEMA_KEY.to_string(),\n                node_type: candidate.node_type,\n                node_json: candidate.node_json,\n                markdown: candidate.markdown,\n                fingerprint: candidate.fingerprint,\n            }\n        })\n        .collect()\n}\n\nfn block_upsert_change(block: &ParsedBlock) -> Result<EntityChange, PluginError> {\n    let snapshot_content = serde_json::to_string(&BlockSnapshotContent {\n        id: block.id.clone(),\n        node_type: block.node_type.clone(),\n        node: block.node_json.clone(),\n        markdown: block.markdown.clone(),\n    })\n    .map_err(|error| {\n        
PluginError::Internal(format!(\n            \"failed to serialize markdown block snapshot: {error}\"\n        ))\n    })?;\n\n    Ok(EntityChange {\n        entity_id: block.id.clone(),\n        schema_key: block.schema_key.clone(),\n        snapshot_content: Some(snapshot_content),\n    })\n}\n\nfn blocks_equal_for_change_detection(\n    before: &ParsedBlock,\n    after: &ParsedBlock,\n) -> Result<bool, PluginError> {\n    if before.schema_key != after.schema_key || before.node_type != after.node_type {\n        return Ok(false);\n    }\n    if before.fingerprint == after.fingerprint {\n        return Ok(true);\n    }\n    if !needs_semantic_ast_compare(&before.node_type) {\n        return Ok(false);\n    }\n\n    Ok(stable_json_string(&before.node_json)? == stable_json_string(&after.node_json)?)\n}\n\nfn needs_semantic_ast_compare(node_type: &str) -> bool {\n    matches!(node_type, \"paragraph\" | \"code\")\n}\n\nfn to_block_map(blocks: Vec<ParsedBlock>) -> Result<BTreeMap<String, ParsedBlock>, PluginError> {\n    let mut map = BTreeMap::new();\n    for block in blocks {\n        if map.insert(block.id.clone(), block).is_some() {\n            return Err(PluginError::Internal(\n                \"generated duplicate markdown block id\".to_string(),\n            ));\n        }\n    }\n    Ok(map)\n}\n\nfn parse_top_level_block_candidates(\n    markdown: &str,\n) -> Result<Vec<ParsedBlockCandidate>, PluginError> {\n    let root = parse_markdown_to_root(markdown)?;\n    let mut blocks = Vec::new();\n\n    for node in root.children {\n        let node_type = node_type_name(&node).to_string();\n        let node_json = node_json_without_position(&node)?;\n        let markdown_fragment = extract_block_markdown(markdown, &node)?;\n        let fingerprint = normalize_text_for_fingerprint(&markdown_fragment);\n        blocks.push(ParsedBlockCandidate {\n            node_type,\n            node_json,\n            markdown: markdown_fragment,\n            fingerprint,\n        
});\n    }\n\n    Ok(blocks)\n}\n\nfn parse_markdown_to_root(markdown: &str) -> Result<Root, PluginError> {\n    let tree = to_mdast(markdown, &parse_options_all_extensions()).map_err(|error| {\n        PluginError::InvalidInput(format!(\n            \"markdown parse failed with configured extensions: {}\",\n            error\n        ))\n    })?;\n\n    match tree {\n        Node::Root(root) => Ok(root),\n        _ => Err(PluginError::Internal(\n            \"markdown parser returned non-root AST node\".to_string(),\n        )),\n    }\n}\n\nfn node_json_without_position(node: &Node) -> Result<Value, PluginError> {\n    let mut value = serde_json::to_value(node).map_err(|error| {\n        PluginError::Internal(format!(\"failed to serialize mdast node: {error}\"))\n    })?;\n    strip_position_recursively(&mut value);\n    Ok(value)\n}\n\nfn strip_position_recursively(value: &mut Value) {\n    match value {\n        Value::Object(object) => {\n            object.remove(\"position\");\n            for child in object.values_mut() {\n                strip_position_recursively(child);\n            }\n        }\n        Value::Array(items) => {\n            for item in items {\n                strip_position_recursively(item);\n            }\n        }\n        _ => {}\n    }\n}\n\nfn stable_json_string(value: &Value) -> Result<String, PluginError> {\n    let mut normalized = value.clone();\n    normalize_json_for_fingerprint(&mut normalized);\n    serde_json::to_string(&normalized).map_err(|error| {\n        PluginError::Internal(format!(\"failed to serialize node fingerprint: {error}\"))\n    })\n}\n\nfn normalize_json_for_fingerprint(value: &mut Value) {\n    match value {\n        Value::Object(object) => {\n            for child in object.values_mut() {\n                normalize_json_for_fingerprint(child);\n            }\n        }\n        Value::Array(items) => {\n            for item in items {\n                normalize_json_for_fingerprint(item);\n          
  }\n        }\n        Value::String(text) => {\n            *text = normalize_text_for_fingerprint(text);\n        }\n        _ => {}\n    }\n}\n\nfn normalize_text_for_fingerprint(input: &str) -> String {\n    let has_carriage_return = input.as_bytes().contains(&b'\\r');\n    if !has_carriage_return {\n        if input.is_ascii() || is_nfc(input) {\n            return input.to_string();\n        }\n        return input.nfc().collect();\n    }\n\n    let normalized_newlines = input.replace(\"\\r\\n\", \"\\n\").replace('\\r', \"\\n\");\n    if normalized_newlines.is_ascii() || is_nfc(&normalized_newlines) {\n        return normalized_newlines;\n    }\n    normalized_newlines.nfc().collect()\n}\n\nfn extract_block_markdown(markdown: &str, node: &Node) -> Result<String, PluginError> {\n    let Some(position) = node.position() else {\n        return Err(PluginError::Internal(\n            \"top-level markdown node is missing position metadata\".to_string(),\n        ));\n    };\n\n    let start = position.start.offset;\n    let end = position.end.offset;\n    if start > end || end > markdown.len() {\n        return Err(PluginError::Internal(\n            \"markdown node position offsets are out of bounds\".to_string(),\n        ));\n    }\n    if !markdown.is_char_boundary(start) || !markdown.is_char_boundary(end) {\n        return Err(PluginError::Internal(\n            \"markdown node position offsets are not valid UTF-8 boundaries\".to_string(),\n        ));\n    }\n\n    Ok(markdown[start..end].to_string())\n}\n\nfn block_id(node_type: &str, fingerprint: &str, occurrence: u32) -> String {\n    let node_type_sanitized = node_type\n        .chars()\n        .map(|ch| if ch.is_ascii_alphanumeric() { ch } else { '_' })\n        .collect::<String>()\n        .to_ascii_lowercase();\n    let hash = fnv1a64(fingerprint.as_bytes());\n    format!(\"b_{node_type_sanitized}_{hash:016x}_{occurrence}\")\n}\n\nfn fnv1a64(input: &[u8]) -> u64 {\n    let mut hash = 
0xcbf29ce484222325u64;\n    for byte in input {\n        hash ^= *byte as u64;\n        hash = hash.wrapping_mul(0x100000001b3);\n    }\n    hash\n}\n\nfn decode_markdown_bytes(bytes: &[u8]) -> Result<String, PluginError> {\n    std::str::from_utf8(bytes)\n        .map(|markdown| markdown.to_owned())\n        .map_err(|error| {\n            PluginError::InvalidInput(format!(\n                \"file.data must be valid UTF-8 markdown bytes: {error}\"\n            ))\n        })\n}\n\nfn is_markdown_path(path: &str) -> bool {\n    let path = path.to_ascii_lowercase();\n    path.ends_with(\".md\") || path.ends_with(\".mdx\")\n}\n\nfn parse_options_all_extensions() -> ParseOptions {\n    let mut options = ParseOptions::gfm();\n    let constructs = &mut options.constructs;\n\n    constructs.frontmatter = true;\n    constructs.gfm_autolink_literal = true;\n    constructs.gfm_footnote_definition = true;\n    constructs.gfm_label_start_footnote = true;\n    constructs.gfm_strikethrough = true;\n    constructs.gfm_table = true;\n    constructs.gfm_task_list_item = true;\n    constructs.math_flow = true;\n    constructs.math_text = true;\n    options\n}\n\nfn node_type_name(node: &Node) -> &'static str {\n    match node {\n        Node::Root(_) => \"root\",\n        Node::Blockquote(_) => \"blockquote\",\n        Node::FootnoteDefinition(_) => \"footnoteDefinition\",\n        Node::MdxJsxFlowElement(_) => \"mdxJsxFlowElement\",\n        Node::List(_) => \"list\",\n        Node::MdxjsEsm(_) => \"mdxjsEsm\",\n        Node::Toml(_) => \"toml\",\n        Node::Yaml(_) => \"yaml\",\n        Node::Break(_) => \"break\",\n        Node::InlineCode(_) => \"inlineCode\",\n        Node::InlineMath(_) => \"inlineMath\",\n        Node::Delete(_) => \"delete\",\n        Node::Emphasis(_) => \"emphasis\",\n        Node::MdxTextExpression(_) => \"mdxTextExpression\",\n        Node::FootnoteReference(_) => \"footnoteReference\",\n        Node::Html(_) => \"html\",\n        Node::Image(_) => 
\"image\",\n        Node::ImageReference(_) => \"imageReference\",\n        Node::MdxJsxTextElement(_) => \"mdxJsxTextElement\",\n        Node::Link(_) => \"link\",\n        Node::LinkReference(_) => \"linkReference\",\n        Node::Strong(_) => \"strong\",\n        Node::Text(_) => \"text\",\n        Node::Code(_) => \"code\",\n        Node::Math(_) => \"math\",\n        Node::MdxFlowExpression(_) => \"mdxFlowExpression\",\n        Node::Heading(_) => \"heading\",\n        Node::Table(_) => \"table\",\n        Node::ThematicBreak(_) => \"thematicBreak\",\n        Node::TableRow(_) => \"tableRow\",\n        Node::TableCell(_) => \"tableCell\",\n        Node::ListItem(_) => \"listItem\",\n        Node::Definition(_) => \"definition\",\n        Node::Paragraph(_) => \"paragraph\",\n    }\n}\n"
  },
  {
    "path": "packages/plugin-md-v2/src/lib.rs",
    "content": "use crate::exports::lix::plugin::api::{EntityChange, File, Guest, PluginError};\n\nwit_bindgen::generate!({\n    path: \"../engine/wit\",\n    world: \"plugin\",\n});\n\nmod apply_changes;\nmod common;\nmod detect_changes;\npub mod schemas;\n\npub const ROOT_ENTITY_ID: &str = \"root\";\npub const DOCUMENT_SCHEMA_KEY: &str = schemas::DOCUMENT_SCHEMA_KEY;\npub const BLOCK_SCHEMA_KEY: &str = schemas::BLOCK_SCHEMA_KEY;\n\npub use crate::exports::lix::plugin::api::{\n    ActiveStateRow as PluginActiveStateRow, DetectStateContext as PluginDetectStateContext,\n    EntityChange as PluginEntityChange, File as PluginFile, PluginError as PluginApiError,\n};\n\nstruct MarkdownPlugin;\n\nimpl Guest for MarkdownPlugin {\n    fn detect_changes(\n        before: Option<File>,\n        after: File,\n        state_context: Option<crate::exports::lix::plugin::api::DetectStateContext>,\n    ) -> Result<Vec<EntityChange>, PluginError> {\n        detect_changes::detect_changes(before, after, state_context)\n    }\n\n    fn apply_changes(file: File, changes: Vec<EntityChange>) -> Result<Vec<u8>, PluginError> {\n        apply_changes::apply_changes(file, changes)\n    }\n}\n\npub fn detect_changes(before: Option<File>, after: File) -> Result<Vec<EntityChange>, PluginError> {\n    let state_context = project_state_context_from_before(before)?;\n    <MarkdownPlugin as Guest>::detect_changes(None, after, Some(state_context))\n}\n\npub fn detect_changes_with_state_context(\n    before: Option<File>,\n    after: File,\n    state_context: Option<PluginDetectStateContext>,\n) -> Result<Vec<EntityChange>, PluginError> {\n    <MarkdownPlugin as Guest>::detect_changes(before, after, state_context)\n}\n\npub fn apply_changes(file: File, changes: Vec<EntityChange>) -> Result<Vec<u8>, PluginError> {\n    <MarkdownPlugin as Guest>::apply_changes(file, changes)\n}\n\nfn empty_state_context() -> PluginDetectStateContext {\n    PluginDetectStateContext {\n        active_state: 
Some(Vec::new()),\n    }\n}\n\nfn project_state_context_from_before(\n    before: Option<File>,\n) -> Result<PluginDetectStateContext, PluginError> {\n    let Some(before_file) = before else {\n        return Ok(empty_state_context());\n    };\n\n    // Compatibility helper for tests/callers using detect_changes(before, after):\n    // bootstrap a projected active-state from `before`.\n    let bootstrap =\n        <MarkdownPlugin as Guest>::detect_changes(None, before_file, Some(empty_state_context()))?;\n\n    Ok(PluginDetectStateContext {\n        active_state: Some(\n            bootstrap\n                .into_iter()\n                .map(|row| PluginActiveStateRow {\n                    entity_id: row.entity_id,\n                    schema_key: Some(row.schema_key),\n                    snapshot_content: row.snapshot_content,\n                    file_id: None,\n                    plugin_key: None,\n                    version_id: None,\n                    change_id: None,\n                    metadata: None,\n                    created_at: None,\n                    updated_at: None,\n                })\n                .collect(),\n        ),\n    })\n}\n\nexport!(MarkdownPlugin);\n"
  },
  {
    "path": "packages/plugin-md-v2/src/schemas.rs",
    "content": "use serde_json::Value;\nuse std::sync::OnceLock;\n\npub const DOCUMENT_SCHEMA_KEY: &str = \"markdown_v2_document\";\npub const BLOCK_SCHEMA_KEY: &str = \"markdown_v2_block\";\n\nconst DOCUMENT_SCHEMA_JSON: &str = include_str!(\"../schema/markdown_document.json\");\nconst BLOCK_SCHEMA_JSON: &str = include_str!(\"../schema/markdown_block.json\");\n\nconst SCHEMA_JSONS: [&str; 2] = [DOCUMENT_SCHEMA_JSON, BLOCK_SCHEMA_JSON];\n\nstatic SCHEMA_DEFINITIONS: OnceLock<Vec<Value>> = OnceLock::new();\n\npub fn schema_jsons() -> &'static [&'static str] {\n    &SCHEMA_JSONS\n}\n\npub fn schema_definitions() -> &'static Vec<Value> {\n    SCHEMA_DEFINITIONS.get_or_init(|| {\n        SCHEMA_JSONS\n            .iter()\n            .map(|raw| serde_json::from_str(raw).expect(\"markdown schema JSON must be valid\"))\n            .collect()\n    })\n}\n"
  },
  {
    "path": "packages/plugin-md-v2/tests/apply_changes.rs",
    "content": "mod common;\n\nuse common::{\n    assert_invalid_input, block_change, decode_utf8, document_change, empty_file,\n    file_from_markdown,\n};\nuse plugin_md_v2::{apply_changes, BLOCK_SCHEMA_KEY, DOCUMENT_SCHEMA_KEY};\n\n#[test]\nfn materializes_markdown_from_document_order_and_blocks() {\n    let file = empty_file(\"f1\", \"/notes.md\");\n    let changes = vec![\n        block_change(\"b2\", \"paragraph\", \"Second paragraph.\"),\n        document_change(vec![\"b1\".to_string(), \"b2\".to_string()]),\n        block_change(\"b1\", \"heading\", \"# Title\"),\n    ];\n\n    let data = apply_changes(file, changes).expect(\"apply_changes should succeed\");\n\n    assert_eq!(decode_utf8(data), \"# Title\\n\\nSecond paragraph.\\n\");\n}\n\n#[test]\nfn document_tombstone_results_in_empty_file() {\n    let file = file_from_markdown(\"f1\", \"/notes.md\", \"before\");\n    let changes = vec![plugin_md_v2::PluginEntityChange {\n        entity_id: plugin_md_v2::ROOT_ENTITY_ID.to_string(),\n        schema_key: DOCUMENT_SCHEMA_KEY.to_string(),\n        snapshot_content: None,\n    }];\n\n    let data = apply_changes(file, changes).expect(\"apply_changes should succeed\");\n\n    assert!(data.is_empty());\n}\n\n#[test]\nfn passes_through_when_no_markdown_rows_are_present() {\n    let file = file_from_markdown(\"f1\", \"/notes.md\", \"keep me\");\n\n    let data = apply_changes(file, Vec::new()).expect(\"apply_changes should succeed\");\n\n    assert_eq!(decode_utf8(data), \"keep me\");\n}\n\n#[test]\nfn rejects_duplicate_document_rows() {\n    let file = empty_file(\"f1\", \"/notes.md\");\n    let changes = vec![\n        document_change(vec![\"b1\".to_string()]),\n        document_change(vec![\"b2\".to_string()]),\n    ];\n\n    let error = apply_changes(file, changes).expect_err(\"apply_changes should fail\");\n\n    assert_invalid_input(error);\n}\n\n#[test]\nfn rejects_duplicate_block_rows() {\n    let file = empty_file(\"f1\", \"/notes.md\");\n    let changes 
= vec![\n        block_change(\"b1\", \"paragraph\", \"a\"),\n        block_change(\"b1\", \"paragraph\", \"b\"),\n    ];\n\n    let error = apply_changes(file, changes).expect_err(\"apply_changes should fail\");\n\n    assert_invalid_input(error);\n}\n\n#[test]\nfn rejects_unknown_document_entity_id() {\n    let file = empty_file(\"f1\", \"/notes.md\");\n    let changes = vec![plugin_md_v2::PluginEntityChange {\n        entity_id: \"other\".to_string(),\n        schema_key: DOCUMENT_SCHEMA_KEY.to_string(),\n        snapshot_content: Some(\n            serde_json::json!({\n                \"id\": \"other\",\n                \"order\": [\"b1\"],\n            })\n            .to_string(),\n        ),\n    }];\n\n    let error = apply_changes(file, changes).expect_err(\"apply_changes should fail\");\n\n    assert_invalid_input(error);\n}\n\n#[test]\nfn rejects_invalid_block_snapshot_json() {\n    let file = empty_file(\"f1\", \"/notes.md\");\n    let changes = vec![plugin_md_v2::PluginEntityChange {\n        entity_id: \"b1\".to_string(),\n        schema_key: BLOCK_SCHEMA_KEY.to_string(),\n        snapshot_content: Some(\"{\".to_string()),\n    }];\n\n    let error = apply_changes(file, changes).expect_err(\"apply_changes should fail\");\n\n    assert_invalid_input(error);\n}\n\n#[test]\nfn rejects_invalid_document_snapshot_json() {\n    let file = empty_file(\"f1\", \"/notes.md\");\n    let changes = vec![plugin_md_v2::PluginEntityChange {\n        entity_id: plugin_md_v2::ROOT_ENTITY_ID.to_string(),\n        schema_key: DOCUMENT_SCHEMA_KEY.to_string(),\n        snapshot_content: Some(\"{\".to_string()),\n    }];\n\n    let error = apply_changes(file, changes).expect_err(\"apply_changes should fail\");\n\n    assert_invalid_input(error);\n}\n\n#[test]\nfn rejects_block_snapshot_id_mismatch_with_entity_id() {\n    let file = empty_file(\"f1\", \"/notes.md\");\n    let changes = vec![plugin_md_v2::PluginEntityChange {\n        entity_id: \"b1\".to_string(),\n        
schema_key: BLOCK_SCHEMA_KEY.to_string(),\n        snapshot_content: Some(\n            serde_json::json!({\n                \"id\": \"b2\",\n                \"type\": \"paragraph\",\n                \"node\": {},\n                \"markdown\": \"hello\",\n            })\n            .to_string(),\n        ),\n    }];\n\n    let error = apply_changes(file, changes).expect_err(\"apply_changes should fail\");\n\n    assert_invalid_input(error);\n}\n\n#[test]\nfn rejects_document_snapshot_id_mismatch_with_root() {\n    let file = empty_file(\"f1\", \"/notes.md\");\n    let changes = vec![plugin_md_v2::PluginEntityChange {\n        entity_id: plugin_md_v2::ROOT_ENTITY_ID.to_string(),\n        schema_key: DOCUMENT_SCHEMA_KEY.to_string(),\n        snapshot_content: Some(\n            serde_json::json!({\n                \"id\": \"other\",\n                \"order\": [\"b1\"],\n            })\n            .to_string(),\n        ),\n    }];\n\n    let error = apply_changes(file, changes).expect_err(\"apply_changes should fail\");\n\n    assert_invalid_input(error);\n}\n\n#[test]\nfn ignores_unknown_schema_rows() {\n    let file = file_from_markdown(\"f1\", \"/notes.md\", \"keep me\");\n    let changes = vec![\n        plugin_md_v2::PluginEntityChange {\n            entity_id: \"unknown1\".to_string(),\n            schema_key: \"other_schema\".to_string(),\n            snapshot_content: Some(\"{\\\"x\\\":1}\".to_string()),\n        },\n        plugin_md_v2::PluginEntityChange {\n            entity_id: \"unknown2\".to_string(),\n            schema_key: \"other_schema\".to_string(),\n            snapshot_content: None,\n        },\n    ];\n\n    let data = apply_changes(file, changes).expect(\"apply_changes should succeed\");\n\n    assert_eq!(decode_utf8(data), \"keep me\");\n}\n\n#[test]\nfn skips_missing_block_ids_referenced_in_document_order() {\n    let file = empty_file(\"f1\", \"/notes.md\");\n    let changes = vec![\n        document_change(vec![\"b1\".to_string(), 
\"b2\".to_string()]),\n        block_change(\"b1\", \"paragraph\", \"Only this exists.\"),\n    ];\n\n    let data = apply_changes(file, changes).expect(\"apply_changes should succeed\");\n\n    assert_eq!(decode_utf8(data), \"Only this exists.\\n\");\n}\n\n#[test]\nfn appends_orphan_blocks_not_in_document_order() {\n    let file = empty_file(\"f1\", \"/notes.md\");\n    let changes = vec![\n        document_change(vec![\"b1\".to_string()]),\n        block_change(\"b2\", \"paragraph\", \"Second\"),\n        block_change(\"b1\", \"paragraph\", \"First\"),\n    ];\n\n    let data = apply_changes(file, changes).expect(\"apply_changes should succeed\");\n\n    assert_eq!(decode_utf8(data), \"First\\n\\nSecond\\n\");\n}\n\n#[test]\nfn materializes_deterministically_without_document_row() {\n    let file = empty_file(\"f1\", \"/notes.md\");\n    let changes = vec![\n        block_change(\"b2\", \"paragraph\", \"Second\"),\n        block_change(\"b1\", \"paragraph\", \"First\"),\n    ];\n\n    let data = apply_changes(file, changes).expect(\"apply_changes should succeed\");\n\n    // BTreeMap key ordering makes this deterministic.\n    assert_eq!(decode_utf8(data), \"First\\n\\nSecond\\n\");\n}\n\n#[test]\nfn normalizes_block_markdown_whitespace_and_trailing_newline() {\n    let file = empty_file(\"f1\", \"/notes.md\");\n    let changes = vec![\n        document_change(vec![\"b1\".to_string(), \"b2\".to_string()]),\n        block_change(\"b1\", \"heading\", \"\\n# Title\\n\"),\n        block_change(\"b2\", \"paragraph\", \"\\n\\nParagraph\\n\\n\"),\n    ];\n\n    let data = apply_changes(file, changes).expect(\"apply_changes should succeed\");\n\n    assert_eq!(decode_utf8(data), \"# Title\\n\\nParagraph\\n\");\n}\n\n#[test]\nfn tombstoned_block_is_not_rendered_even_if_order_mentions_it() {\n    let file = empty_file(\"f1\", \"/notes.md\");\n    let changes = vec![\n        document_change(vec![\"b1\".to_string(), \"b2\".to_string()]),\n        block_change(\"b1\", 
\"paragraph\", \"Alive\"),\n        plugin_md_v2::PluginEntityChange {\n            entity_id: \"b2\".to_string(),\n            schema_key: BLOCK_SCHEMA_KEY.to_string(),\n            snapshot_content: None,\n        },\n    ];\n\n    let data = apply_changes(file, changes).expect(\"apply_changes should succeed\");\n\n    assert_eq!(decode_utf8(data), \"Alive\\n\");\n}\n"
  },
  {
    "path": "packages/plugin-md-v2/tests/common/mod.rs",
    "content": "#![allow(dead_code)]\n\nuse plugin_md_v2::{\n    PluginApiError, PluginEntityChange, PluginFile, BLOCK_SCHEMA_KEY, DOCUMENT_SCHEMA_KEY,\n    ROOT_ENTITY_ID,\n};\nuse std::collections::BTreeMap;\n\npub type StateKey = (String, String);\npub type StateRows = BTreeMap<StateKey, PluginEntityChange>;\n\npub fn file_from_markdown(id: &str, path: &str, markdown: &str) -> PluginFile {\n    PluginFile {\n        id: id.to_string(),\n        path: path.to_string(),\n        data: markdown.as_bytes().to_vec(),\n    }\n}\n\npub fn empty_file(id: &str, path: &str) -> PluginFile {\n    PluginFile {\n        id: id.to_string(),\n        path: path.to_string(),\n        data: Vec::new(),\n    }\n}\n\npub fn decode_utf8(bytes: Vec<u8>) -> String {\n    String::from_utf8(bytes).expect(\"materialized markdown should be valid UTF-8\")\n}\n\npub fn is_document_change(change: &PluginEntityChange) -> bool {\n    change.schema_key == DOCUMENT_SCHEMA_KEY\n}\n\npub fn is_block_change(change: &PluginEntityChange) -> bool {\n    change.schema_key == BLOCK_SCHEMA_KEY\n}\n\npub fn parse_document_order(change: &PluginEntityChange) -> Vec<String> {\n    assert!(is_document_change(change));\n    let raw = change\n        .snapshot_content\n        .as_ref()\n        .expect(\"document snapshot should be present\");\n    let parsed: serde_json::Value =\n        serde_json::from_str(raw).expect(\"document snapshot should be valid JSON\");\n    assert_eq!(\n        parsed.get(\"id\").and_then(serde_json::Value::as_str),\n        Some(ROOT_ENTITY_ID)\n    );\n    parsed\n        .get(\"order\")\n        .and_then(serde_json::Value::as_array)\n        .expect(\"document snapshot should contain order array\")\n        .iter()\n        .map(|entry| {\n            entry\n                .as_str()\n                .expect(\"order entries should be strings\")\n                .to_string()\n        })\n        .collect()\n}\n\npub fn parse_block_markdown(change: &PluginEntityChange) -> String 
{\n    assert!(is_block_change(change));\n    let raw = change\n        .snapshot_content\n        .as_ref()\n        .expect(\"block snapshot should be present\");\n    let parsed: serde_json::Value =\n        serde_json::from_str(raw).expect(\"block snapshot should be valid JSON\");\n    parsed\n        .get(\"markdown\")\n        .and_then(serde_json::Value::as_str)\n        .expect(\"block snapshot should contain markdown\")\n        .to_string()\n}\n\npub fn assert_invalid_input(error: PluginApiError) {\n    match error {\n        PluginApiError::InvalidInput(_) => {}\n        PluginApiError::Internal(message) => {\n            panic!(\"expected invalid-input error, got internal error: {message}\")\n        }\n    }\n}\n\npub fn apply_delta(state: &mut StateRows, delta: Vec<PluginEntityChange>) {\n    for change in delta {\n        let key = (change.schema_key.clone(), change.entity_id.clone());\n        if change.snapshot_content.is_some() {\n            state.insert(key, change);\n        } else {\n            state.remove(&key);\n        }\n    }\n}\n\npub fn collect_state_rows(state: &StateRows) -> Vec<PluginEntityChange> {\n    state.values().cloned().collect()\n}\n\npub fn document_change(order: Vec<String>) -> PluginEntityChange {\n    PluginEntityChange {\n        entity_id: ROOT_ENTITY_ID.to_string(),\n        schema_key: DOCUMENT_SCHEMA_KEY.to_string(),\n        snapshot_content: Some(\n            serde_json::json!({\n                \"id\": ROOT_ENTITY_ID,\n                \"order\": order,\n            })\n            .to_string(),\n        ),\n    }\n}\n\npub fn block_change(id: &str, node_type: &str, markdown: &str) -> PluginEntityChange {\n    PluginEntityChange {\n        entity_id: id.to_string(),\n        schema_key: BLOCK_SCHEMA_KEY.to_string(),\n        snapshot_content: Some(\n            serde_json::json!({\n                \"id\": id,\n                \"type\": node_type,\n                \"node\": {},\n                \"markdown\": 
markdown,\n            })\n            .to_string(),\n        ),\n    }\n}\n"
  },
  {
    "path": "packages/plugin-md-v2/tests/detect_changes.rs",
    "content": "mod common;\n\nuse common::{\n    assert_invalid_input, file_from_markdown, is_block_change, is_document_change,\n    parse_document_order,\n};\nuse plugin_md_v2::{\n    detect_changes, detect_changes_with_state_context, PluginDetectStateContext, BLOCK_SCHEMA_KEY,\n    DOCUMENT_SCHEMA_KEY,\n};\nuse std::collections::BTreeSet;\n\nfn count_tombstones(changes: &[plugin_md_v2::PluginEntityChange]) -> usize {\n    changes\n        .iter()\n        .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_none())\n        .count()\n}\n\nfn count_upserts(changes: &[plugin_md_v2::PluginEntityChange]) -> usize {\n    changes\n        .iter()\n        .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_some())\n        .count()\n}\n\nfn count_document_rows(changes: &[plugin_md_v2::PluginEntityChange]) -> usize {\n    changes\n        .iter()\n        .filter(|change| change.schema_key == DOCUMENT_SCHEMA_KEY)\n        .count()\n}\n\nfn upsert_types(changes: &[plugin_md_v2::PluginEntityChange]) -> Vec<String> {\n    changes\n        .iter()\n        .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY)\n        .filter_map(|change| change.snapshot_content.as_ref())\n        .map(|raw| {\n            let parsed: serde_json::Value =\n                serde_json::from_str(raw).expect(\"block snapshot should be valid JSON\");\n            parsed\n                .get(\"type\")\n                .and_then(serde_json::Value::as_str)\n                .expect(\"block snapshot should contain type\")\n                .to_string()\n        })\n        .collect()\n}\n\nfn upsert_markdowns(changes: &[plugin_md_v2::PluginEntityChange]) -> Vec<String> {\n    changes\n        .iter()\n        .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY)\n        .filter_map(|change| change.snapshot_content.as_ref())\n        .map(|raw| {\n            let parsed: serde_json::Value =\n                
serde_json::from_str(raw).expect(\"block snapshot should be valid JSON\");\n            parsed\n                .get(\"markdown\")\n                .and_then(serde_json::Value::as_str)\n                .expect(\"block snapshot should contain markdown\")\n                .to_string()\n        })\n        .collect()\n}\n\nfn bootstrap_order(markdown: &str) -> Vec<String> {\n    let bootstrap = detect_changes(None, file_from_markdown(\"bootstrap\", \"/notes.md\", markdown))\n        .expect(\"bootstrap detect_changes should succeed\");\n    let document = bootstrap\n        .iter()\n        .find(|change| is_document_change(change))\n        .expect(\"bootstrap should include document row\");\n    parse_document_order(document)\n}\n\nfn document_order_from_changes(\n    changes: &[plugin_md_v2::PluginEntityChange],\n) -> Option<Vec<String>> {\n    changes\n        .iter()\n        .find(|change| is_document_change(change))\n        .map(parse_document_order)\n}\n\nfn tombstone_ids(changes: &[plugin_md_v2::PluginEntityChange]) -> Vec<String> {\n    changes\n        .iter()\n        .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_none())\n        .map(|change| change.entity_id.clone())\n        .collect()\n}\n\nfn upsert_ids(changes: &[plugin_md_v2::PluginEntityChange]) -> Vec<String> {\n    changes\n        .iter()\n        .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_some())\n        .map(|change| change.entity_id.clone())\n        .collect()\n}\n\nfn state_context_from_rows(rows: &[plugin_md_v2::PluginEntityChange]) -> PluginDetectStateContext {\n    PluginDetectStateContext {\n        active_state: Some(\n            rows.iter()\n                .map(|row| plugin_md_v2::PluginActiveStateRow {\n                    entity_id: row.entity_id.clone(),\n                    schema_key: Some(row.schema_key.clone()),\n                    snapshot_content: row.snapshot_content.clone(),\n              
      file_id: None,\n                    plugin_key: None,\n                    version_id: None,\n                    change_id: None,\n                    metadata: None,\n                    created_at: None,\n                    updated_at: None,\n                })\n                .collect::<Vec<_>>(),\n        ),\n    }\n}\n\nfn bootstrap_state(\n    markdown: &str,\n) -> (\n    plugin_md_v2::PluginFile,\n    Vec<String>,\n    PluginDetectStateContext,\n) {\n    let before = file_from_markdown(\"f1\", \"/notes.md\", markdown);\n    let bootstrap =\n        detect_changes(None, before.clone()).expect(\"bootstrap detect_changes should succeed\");\n    let before_order =\n        document_order_from_changes(&bootstrap).expect(\"bootstrap should include document row\");\n    let state_context = state_context_from_rows(&bootstrap);\n    (before, before_order, state_context)\n}\n\nfn make_large_markdown_paragraphs(count: usize) -> Vec<String> {\n    (1..=count).map(|idx| format!(\"P{idx}\")).collect::<Vec<_>>()\n}\n\n#[test]\nfn no_changes_when_documents_are_equal() {\n    let before = file_from_markdown(\"f1\", \"/notes.md\", \"# Title\\n\\nSame paragraph.\\n\");\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"# Title\\n\\nSame paragraph.\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert!(changes.is_empty());\n}\n\n#[test]\nfn emits_document_and_block_rows_for_new_file() {\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"# Title\\n\\nParagraph.\\n\");\n\n    let changes = detect_changes(None, after).expect(\"detect_changes should succeed\");\n\n    let document_rows = changes\n        .iter()\n        .filter(|change| is_document_change(change))\n        .collect::<Vec<_>>();\n    let block_rows = changes\n        .iter()\n        .filter(|change| is_block_change(change))\n        .collect::<Vec<_>>();\n\n    assert_eq!(document_rows.len(), 1);\n    
assert_eq!(block_rows.len(), 2);\n\n    for row in block_rows {\n        assert_eq!(row.schema_key, BLOCK_SCHEMA_KEY);\n        assert!(row.snapshot_content.is_some());\n    }\n}\n\n#[test]\nfn handles_empty_documents() {\n    let before = file_from_markdown(\"f1\", \"/notes.md\", \"\");\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert!(changes.is_empty());\n}\n\n#[test]\nfn rejects_non_utf8_input() {\n    let after = plugin_md_v2::PluginFile {\n        id: \"f1\".to_string(),\n        path: \"/notes.md\".to_string(),\n        data: vec![0xFF, 0xFE, 0xFD],\n    };\n\n    let error = detect_changes(None, after).expect_err(\"detect_changes should fail\");\n\n    assert_invalid_input(error);\n}\n\n#[test]\nfn inline_html_br_does_not_drop_changes() {\n    let after = file_from_markdown(\n        \"f1\",\n        \"/notes.md\",\n        \"SSH auth: `git clone git@github.com:microsoft/vscode-docs.git`<br>HTTPS auth: `git clone https://github.com/microsoft/vscode-docs.git`\\n\",\n    );\n\n    let changes = detect_changes(None, after).expect(\"detect_changes should succeed\");\n\n    assert!(\n        !changes.is_empty(),\n        \"inline html <br> in .md should not produce an empty change set\"\n    );\n    assert!(changes.iter().any(is_document_change));\n    assert!(changes.iter().any(is_block_change));\n}\n\n#[test]\nfn move_only_emits_document_row() {\n    let before = file_from_markdown(\"f1\", \"/notes.md\", \"First paragraph.\\n\\nSecond paragraph.\\n\");\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"Second paragraph.\\n\\nFirst paragraph.\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert_eq!(changes.len(), 1);\n    assert!(changes.iter().all(is_document_change));\n}\n\n#[test]\nfn move_section_emits_document_row_only() {\n    let before = 
file_from_markdown(\"f1\", \"/notes.md\", \"# A\\n\\npara a\\n\\n# B\\n\\npara b\\n\");\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"# B\\n\\npara b\\n\\n# A\\n\\npara a\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert_eq!(changes.len(), 1);\n    assert_eq!(changes[0].schema_key, DOCUMENT_SCHEMA_KEY);\n}\n\n#[test]\nfn cross_type_paragraph_to_heading_emits_delete_add_and_document_update() {\n    let before = file_from_markdown(\"f1\", \"/notes.md\", \"Hello\\n\");\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"# Hello\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 1);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 1);\n}\n\n#[test]\nfn cross_type_code_to_paragraph_emits_delete_add_and_document_update() {\n    let before = file_from_markdown(\"f1\", \"/notes.md\", \"```js\\nconsole.log(1)\\n```\\n\");\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"console.log(1)\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 1);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 1);\n}\n\n#[test]\nfn duplicate_paragraphs_with_no_text_change_emit_no_changes() {\n    let before = file_from_markdown(\"f1\", \"/notes.md\", \"Same\\n\\nSame\\n\");\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"Same\\n\\nSame\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert!(changes.is_empty());\n}\n\n#[test]\nfn insert_duplicate_paragraph_emits_new_block_and_document_update() {\n    let before = file_from_markdown(\"f1\", \"/notes.md\", \"Same\\n\\nOther\\n\");\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"Same\\n\\nSame\\n\\nOther\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 1);\n}\n\n#[test]\nfn crlf_vs_lf_normalization_emits_no_changes() {\n    let before = file_from_markdown(\"f1\", \"/notes.md\", \"Line A\\r\\n\\r\\nLine B\\r\\n\");\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"Line A\\n\\nLine B\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert!(changes.is_empty());\n}\n\n#[test]\nfn unicode_nfc_vs_nfd_emits_no_changes() {\n    let before = file_from_markdown(\"f1\", \"/notes.md\", \"caf\\u{00E9}\\n\");\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"caf\\u{0065}\\u{0301}\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert!(changes.is_empty());\n}\n\n#[test]\nfn large_doc_pure_shuffle_emits_document_row_only() {\n    let paragraphs = make_large_markdown_paragraphs(140);\n    let before_markdown = paragraphs.join(\"\\n\\n\") + \"\\n\";\n\n    let mut after = paragraphs.clone();\n    after.rotate_left(37);\n    let after_markdown = after.join(\"\\n\\n\") + \"\\n\";\n\n    let changes = detect_changes(\n        Some(file_from_markdown(\"f1\", \"/notes.md\", &before_markdown)),\n        file_from_markdown(\"f1\", \"/notes.md\", &after_markdown),\n    )\n    .expect(\"detect_changes should succeed\");\n\n    assert_eq!(changes.len(), 1);\n    assert_eq!(changes[0].schema_key, DOCUMENT_SCHEMA_KEY);\n}\n\n#[test]\nfn paragraph_to_blockquote_emits_delete_add_and_document_update() {\n    let before = file_from_markdown(\"f1\", \"/notes.md\", \"Hello\\n\");\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"> Hello\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 1);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 1);\n    assert_eq!(upsert_types(&changes), vec![\"blockquote\".to_string()]);\n}\n\n#[test]\nfn hard_break_variant_does_not_introduce_extra_blocks() {\n    let before = file_from_markdown(\"f1\", \"/notes.md\", \"line  \\r\\nbreak\\r\\n\");\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"line\\\\\\r\\nbreak\\r\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    if changes.is_empty() {\n        return;\n    
}\n\n    assert_eq!(count_tombstones(&changes), 1);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 1);\n    assert_eq!(upsert_types(&changes), vec![\"paragraph\".to_string()]);\n}\n\n#[test]\nfn code_fence_length_variation_does_not_introduce_new_id() {\n    let before = file_from_markdown(\"f1\", \"/notes.md\", \"```js\\nconsole.log(1)\\n```\\n\");\n    let after = file_from_markdown(\"f1\", \"/notes.md\", \"````js\\nconsole.log(1)\\n````\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    if changes.is_empty() {\n        return;\n    }\n\n    assert_eq!(count_tombstones(&changes), 1);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 1);\n    assert_eq!(upsert_types(&changes), vec![\"code\".to_string()]);\n}\n\n#[test]\nfn id_stability_pure_reorder_preserves_existing_ids() {\n    let before_markdown = \"First\\n\\nSecond\\n\";\n    let before_order = bootstrap_order(before_markdown);\n    assert_eq!(before_order.len(), 2);\n\n    let changes = detect_changes(\n        Some(file_from_markdown(\"f1\", \"/notes.md\", before_markdown)),\n        file_from_markdown(\"f1\", \"/notes.md\", \"Second\\n\\nFirst\\n\"),\n    )\n    .expect(\"detect_changes should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 0);\n    assert_eq!(count_document_rows(&changes), 1);\n\n    let after_order =\n        document_order_from_changes(&changes).expect(\"reorder should include document row\");\n    assert_eq!(\n        after_order,\n        vec![before_order[1].clone(), before_order[0].clone()]\n    );\n}\n\n#[test]\nfn id_stability_insert_between_keeps_neighbors_and_mints_new_id() {\n    let before_markdown = \"A\\n\\nC\\n\";\n    let before_order = bootstrap_order(before_markdown);\n    assert_eq!(before_order.len(), 2);\n\n    let changes = detect_changes(\n        
Some(file_from_markdown(\"f1\", \"/notes.md\", before_markdown)),\n        file_from_markdown(\"f1\", \"/notes.md\", \"A\\n\\nB\\n\\nC\\n\"),\n    )\n    .expect(\"detect_changes should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 1);\n\n    let after_order =\n        document_order_from_changes(&changes).expect(\"insert should include document row\");\n    assert_eq!(after_order[0], before_order[0]);\n    assert_eq!(after_order[2], before_order[1]);\n    assert_ne!(after_order[1], before_order[0]);\n    assert_ne!(after_order[1], before_order[1]);\n    assert_eq!(upsert_ids(&changes), vec![after_order[1].clone()]);\n}\n\n#[test]\nfn id_stability_delete_keeps_survivor_id_and_tombstones_deleted() {\n    let before_markdown = \"Keep me\\n\\nDelete me\\n\";\n    let before_order = bootstrap_order(before_markdown);\n    assert_eq!(before_order.len(), 2);\n\n    let changes = detect_changes(\n        Some(file_from_markdown(\"f1\", \"/notes.md\", before_markdown)),\n        file_from_markdown(\"f1\", \"/notes.md\", \"Keep me\\n\"),\n    )\n    .expect(\"detect_changes should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 1);\n    assert_eq!(count_upserts(&changes), 0);\n    assert_eq!(count_document_rows(&changes), 1);\n    assert_eq!(tombstone_ids(&changes), vec![before_order[1].clone()]);\n    assert_eq!(\n        document_order_from_changes(&changes).expect(\"delete should include document row\"),\n        vec![before_order[0].clone()]\n    );\n}\n\n#[test]\nfn id_stability_cross_type_does_not_reuse_old_id() {\n    let before_markdown = \"Hello\\n\";\n    let before_order = bootstrap_order(before_markdown);\n    assert_eq!(before_order.len(), 1);\n\n    let changes = detect_changes(\n        Some(file_from_markdown(\"f1\", \"/notes.md\", before_markdown)),\n        file_from_markdown(\"f1\", \"/notes.md\", \"# Hello\\n\"),\n    )\n    
.expect(\"detect_changes should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 1);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 1);\n    assert_eq!(tombstone_ids(&changes), vec![before_order[0].clone()]);\n\n    let upserts = upsert_ids(&changes);\n    assert_eq!(upserts.len(), 1);\n    assert_ne!(upserts[0], before_order[0]);\n\n    let after_order = document_order_from_changes(&changes).expect(\"should include doc row\");\n    assert_eq!(after_order, upserts);\n}\n\n#[test]\nfn id_stability_large_pure_shuffle_preserves_id_set() {\n    let paragraphs = make_large_markdown_paragraphs(500);\n    let before_markdown = paragraphs.join(\"\\n\\n\") + \"\\n\";\n    let before_order = bootstrap_order(&before_markdown);\n    assert_eq!(before_order.len(), 500);\n\n    let mut after = paragraphs.clone();\n    after.rotate_left(123);\n    let after_markdown = after.join(\"\\n\\n\") + \"\\n\";\n\n    let changes = detect_changes(\n        Some(file_from_markdown(\"f1\", \"/notes.md\", &before_markdown)),\n        file_from_markdown(\"f1\", \"/notes.md\", &after_markdown),\n    )\n    .expect(\"detect_changes should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 0);\n    assert_eq!(count_document_rows(&changes), 1);\n\n    let after_order = document_order_from_changes(&changes).expect(\"shuffle should include doc\");\n    let before_set = before_order.into_iter().collect::<BTreeSet<_>>();\n    let after_set = after_order.into_iter().collect::<BTreeSet<_>>();\n    assert_eq!(before_set, after_set);\n}\n\n#[test]\nfn with_state_context_paragraph_edit_reuses_existing_id_without_tombstone() {\n    let (before, before_order, state_context) = bootstrap_state(\"Hello\\n\\nWorld\\n\");\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"Hello updated\\n\\nWorld\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 0);\n    assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]);\n}\n\n#[test]\nfn with_state_context_move_and_edit_reuses_existing_id_and_updates_order() {\n    let (before, before_order, state_context) = bootstrap_state(\"Alpha\\n\\nBeta\\n\");\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"Beta plus\\n\\nAlpha\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 1);\n    assert_eq!(upsert_ids(&changes), vec![before_order[1].clone()]);\n    assert_eq!(\n        document_order_from_changes(&changes).expect(\"document row should be present\"),\n        vec![before_order[1].clone(), before_order[0].clone()]\n    );\n}\n\n#[test]\nfn with_state_context_insert_between_preserves_neighbor_ids_and_mints_new_id() {\n    let (before, before_order, state_context) = bootstrap_state(\"A\\n\\nC\\n\");\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"A\\n\\nB\\n\\nC\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 1);\n\n    let order = document_order_from_changes(&changes).expect(\"document row should be present\");\n    assert_eq!(order[0], before_order[0]);\n    assert_eq!(order[2], before_order[1]);\n    assert_ne!(order[1], before_order[0]);\n    assert_ne!(order[1], before_order[1]);\n    assert_eq!(upsert_ids(&changes), vec![order[1].clone()]);\n}\n\n#[test]\nfn with_state_context_pure_reorder_emits_only_document_row() {\n    let (before, before_order, state_context) = bootstrap_state(\"First\\n\\nSecond\\n\");\n    assert_eq!(before_order.len(), 2);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"Second\\n\\nFirst\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 0);\n    assert_eq!(count_document_rows(&changes), 1);\n    assert_eq!(\n        document_order_from_changes(&changes).expect(\"document row should be present\"),\n        vec![before_order[1].clone(), before_order[0].clone()]\n    );\n}\n\n#[test]\nfn with_state_context_move_section_emits_only_document_row() {\n    let (before, before_order, state_context) = bootstrap_state(\"# A\\n\\nPara A\\n\\n# B\\n\\nPara B\\n\");\n    assert_eq!(before_order.len(), 4);\n\n    let changes = 
detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"# B\\n\\nPara B\\n\\n# A\\n\\nPara A\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 0);\n    assert_eq!(count_document_rows(&changes), 1);\n    assert_eq!(\n        document_order_from_changes(&changes).expect(\"document row should be present\"),\n        vec![\n            before_order[2].clone(),\n            before_order[3].clone(),\n            before_order[0].clone(),\n            before_order[1].clone(),\n        ]\n    );\n}\n\n#[test]\nfn with_state_context_large_shuffle_500_emits_only_document_row() {\n    let paragraphs = (1..=500).map(|idx| format!(\"P{idx}\")).collect::<Vec<_>>();\n    let before_markdown = paragraphs.join(\"\\n\\n\") + \"\\n\";\n    let (before, before_order, state_context) = bootstrap_state(&before_markdown);\n    assert_eq!(before_order.len(), 500);\n\n    let mut after = paragraphs;\n    after.rotate_left(123);\n    let after_markdown = after.join(\"\\n\\n\") + \"\\n\";\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", &after_markdown),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 0);\n    assert_eq!(count_document_rows(&changes), 1);\n\n    let after_order =\n        document_order_from_changes(&changes).expect(\"document row should be present\");\n    let before_set = before_order.into_iter().collect::<BTreeSet<_>>();\n    let after_set = after_order.into_iter().collect::<BTreeSet<_>>();\n    assert_eq!(before_set, after_set);\n}\n\n#[test]\nfn with_state_context_duplicate_edit_second_preserves_first_id_without_document_noise() {\n    let 
(before, before_order, state_context) = bootstrap_state(\"Same\\n\\nSame\\n\");\n    assert_eq!(before_order.len(), 2);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"Same\\n\\nSame updated\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 0);\n    assert_eq!(upsert_ids(&changes), vec![before_order[1].clone()]);\n}\n\n#[test]\nfn with_state_context_duplicate_middle_edit_targets_only_middle_entity() {\n    let (before, before_order, state_context) = bootstrap_state(\"Same\\n\\nSame\\n\\nSame\\n\");\n    assert_eq!(before_order.len(), 3);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"Same\\n\\nSame updated\\n\\nSame\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 0);\n    assert_eq!(upsert_ids(&changes), vec![before_order[1].clone()]);\n}\n\n#[test]\nfn with_state_context_list_reorder_emits_single_list_upsert_without_document_row() {\n    let (before, before_order, state_context) = bootstrap_state(\"- one\\n- two\\n- three\\n\");\n    assert_eq!(before_order.len(), 1);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"- three\\n- one\\n- two\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 0);\n    
assert_eq!(upsert_types(&changes), vec![\"list\".to_string()]);\n    assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]);\n}\n\n#[test]\nfn with_state_context_list_add_item_emits_single_list_upsert_without_document_row() {\n    let (before, before_order, state_context) = bootstrap_state(\"- one\\n- two\\n\");\n    assert_eq!(before_order.len(), 1);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"- one\\n- two\\n- three\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 0);\n    assert_eq!(upsert_types(&changes), vec![\"list\".to_string()]);\n    assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]);\n}\n\n#[test]\nfn with_state_context_list_remove_item_emits_single_list_upsert_without_document_row() {\n    let (before, before_order, state_context) = bootstrap_state(\"- one\\n- two\\n- three\\n\");\n    assert_eq!(before_order.len(), 1);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"- one\\n- three\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 0);\n    assert_eq!(upsert_types(&changes), vec![\"list\".to_string()]);\n    assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]);\n}\n\n#[test]\nfn with_state_context_table_reorder_rows_emits_single_table_upsert_without_document_row() {\n    let (before, before_order, state_context) =\n        bootstrap_state(\"| a | b |\\n| - | - |\\n| 1 | 2 |\\n| 3 | 4 |\\n| 5 | 6 |\\n\");\n    assert_eq!(before_order.len(), 1);\n\n    
let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\n            \"f1\",\n            \"/notes.md\",\n            \"| a | b |\\n| - | - |\\n| 3 | 4 |\\n| 5 | 6 |\\n| 1 | 2 |\\n\",\n        ),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 0);\n    assert_eq!(upsert_types(&changes), vec![\"table\".to_string()]);\n    assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]);\n}\n\n#[test]\nfn with_state_context_table_add_row_emits_single_table_upsert_without_document_row() {\n    let (before, before_order, state_context) =\n        bootstrap_state(\"| a | b |\\n| - | - |\\n| 1 | 2 |\\n\");\n    assert_eq!(before_order.len(), 1);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\n            \"f1\",\n            \"/notes.md\",\n            \"| a | b |\\n| - | - |\\n| 1 | 2 |\\n| 3 | 4 |\\n\",\n        ),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 0);\n    assert_eq!(upsert_types(&changes), vec![\"table\".to_string()]);\n    assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]);\n}\n\n#[test]\nfn with_state_context_table_remove_row_emits_single_table_upsert_without_document_row() {\n    let (before, before_order, state_context) =\n        bootstrap_state(\"| a | b |\\n| - | - |\\n| 1 | 2 |\\n| 3 | 4 |\\n\");\n    assert_eq!(before_order.len(), 1);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"| a | b |\\n| - | - |\\n| 1 | 2 |\\n\"),\n        Some(state_context),\n    
)\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 0);\n    assert_eq!(upsert_types(&changes), vec![\"table\".to_string()]);\n    assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]);\n}\n\n#[test]\nfn with_state_context_heading_edit_reuses_existing_id_without_document_row() {\n    let (before, before_order, state_context) = bootstrap_state(\"# Hello\\n\\nBody\\n\");\n    assert_eq!(before_order.len(), 2);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"# Hello World\\n\\nBody\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 0);\n    assert_eq!(upsert_types(&changes), vec![\"heading\".to_string()]);\n    assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]);\n}\n\n#[test]\nfn with_state_context_code_edit_reuses_existing_id_without_document_row() {\n    let (before, before_order, state_context) = bootstrap_state(\"```js\\nconsole.log(1)\\n```\\n\");\n    assert_eq!(before_order.len(), 1);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"```js\\nconsole.log(2)\\n```\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 0);\n    assert_eq!(upsert_types(&changes), vec![\"code\".to_string()]);\n    assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]);\n}\n\n#[test]\nfn 
with_state_context_link_text_edit_reuses_existing_id_without_document_row() {\n    let (before, before_order, state_context) = bootstrap_state(\"[text](https://example.com)\\n\");\n    assert_eq!(before_order.len(), 1);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"[new](https://example.com)\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 0);\n    assert_eq!(upsert_types(&changes), vec![\"paragraph\".to_string()]);\n    assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]);\n}\n\n#[test]\nfn with_state_context_link_url_edit_reuses_existing_id_without_document_row() {\n    let (before, before_order, state_context) = bootstrap_state(\"[text](https://example.com)\\n\");\n    assert_eq!(before_order.len(), 1);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"[text](https://example.org)\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 0);\n    assert_eq!(upsert_types(&changes), vec![\"paragraph\".to_string()]);\n    assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]);\n}\n\n#[test]\nfn with_state_context_paragraph_split_reuses_first_id_and_mints_one_new() {\n    let (before, before_order, state_context) = bootstrap_state(\"AB\\n\");\n    assert_eq!(before_order.len(), 1);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"A\\n\\nB\\n\"),\n        Some(state_context),\n    )\n    
.expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 2);\n    assert_eq!(count_document_rows(&changes), 1);\n\n    let upserts = upsert_ids(&changes);\n    assert!(upserts.contains(&before_order[0]));\n    assert_eq!(\n        upserts.iter().filter(|id| **id != before_order[0]).count(),\n        1\n    );\n\n    let order = document_order_from_changes(&changes).expect(\"document row should be present\");\n    assert_eq!(order.len(), 2);\n    assert_eq!(order[0], before_order[0]);\n    assert_ne!(order[1], before_order[0]);\n}\n\n#[test]\nfn with_state_context_paragraph_merge_reuses_first_id_and_tombstones_second() {\n    let (before, before_order, state_context) = bootstrap_state(\"A\\n\\nB\\n\");\n    assert_eq!(before_order.len(), 2);\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", \"AB\\n\"),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 1);\n    assert_eq!(count_upserts(&changes), 1);\n    assert_eq!(count_document_rows(&changes), 1);\n    assert_eq!(tombstone_ids(&changes), vec![before_order[1].clone()]);\n    assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]);\n    assert_eq!(\n        document_order_from_changes(&changes).expect(\"document row should be present\"),\n        vec![before_order[0].clone()]\n    );\n}\n\n#[test]\nfn with_state_context_large_500_tiny_edits_emit_only_targeted_upserts() {\n    let paragraphs = make_large_markdown_paragraphs(500);\n    let before_markdown = paragraphs.join(\"\\n\\n\") + \"\\n\";\n    let (before, before_order, state_context) = bootstrap_state(&before_markdown);\n    assert_eq!(before_order.len(), 500);\n\n    let mut after = paragraphs;\n    let edited_indexes = [10usize, 111, 222, 333, 444];\n    for index in 
edited_indexes {\n        after[index] = format!(\"{} x\", after[index]);\n    }\n    let after_markdown = after.join(\"\\n\\n\") + \"\\n\";\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", &after_markdown),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 5);\n    assert_eq!(count_document_rows(&changes), 0);\n\n    let expected_ids = edited_indexes\n        .iter()\n        .map(|idx| before_order[*idx].clone())\n        .collect::<BTreeSet<_>>();\n    let actual_ids = upsert_ids(&changes).into_iter().collect::<BTreeSet<_>>();\n    assert_eq!(actual_ids, expected_ids);\n}\n\n#[test]\nfn with_state_context_large_500_delete_insert_move_emits_minimal_noise() {\n    let paragraphs = make_large_markdown_paragraphs(500);\n    let before_markdown = paragraphs.join(\"\\n\\n\") + \"\\n\";\n    let (before, before_order, state_context) = bootstrap_state(&before_markdown);\n    assert_eq!(before_order.len(), 500);\n\n    let moved = paragraphs[450..460].to_vec();\n    let mut remaining = paragraphs[..450].to_vec();\n    remaining.extend_from_slice(&paragraphs[460..]);\n    remaining.retain(|entry| entry != \"P500\");\n\n    let idx_p300 = remaining\n        .iter()\n        .position(|entry| entry == \"P300\")\n        .expect(\"P300 should exist\");\n\n    let mut after = Vec::new();\n    after.extend(moved);\n    after.extend_from_slice(&remaining[..=idx_p300]);\n    after.push(\"PX\".to_string());\n    after.extend_from_slice(&remaining[idx_p300 + 1..]);\n    let after_markdown = after.join(\"\\n\\n\") + \"\\n\";\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        file_from_markdown(\"f1\", \"/notes.md\", &after_markdown),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context 
should succeed\");\n\n    let tombstones = count_tombstones(&changes);\n    let upserts = count_upserts(&changes);\n    assert!(tombstones <= 1);\n    assert_eq!(upserts, 1);\n    assert_eq!(count_document_rows(&changes), 1);\n    assert!(tombstones + upserts <= 2);\n\n    let deleted_id = before_order[499].clone();\n    if tombstones == 1 {\n        assert_eq!(tombstone_ids(&changes), vec![deleted_id.clone()]);\n    }\n\n    let inserted_id = upsert_ids(&changes)\n        .into_iter()\n        .next()\n        .expect(\"insert should create one upsert\");\n    if tombstones == 1 {\n        assert!(!before_order.contains(&inserted_id));\n    } else {\n        assert_eq!(inserted_id, deleted_id);\n    }\n    assert!(upsert_markdowns(&changes)\n        .iter()\n        .any(|markdown| markdown.contains(\"PX\")));\n\n    let order = document_order_from_changes(&changes).expect(\"document row should be present\");\n    assert_eq!(order.len(), 500);\n    assert_eq!(order[0..10], before_order[450..460]);\n    if tombstones == 1 {\n        assert!(!order.contains(&deleted_id));\n    } else {\n        assert!(order.contains(&deleted_id));\n    }\n    let idx_300_in_after = order\n        .iter()\n        .position(|id| id == &before_order[299])\n        .expect(\"P300 id should remain in order\");\n    assert_eq!(order[idx_300_in_after + 1], inserted_id);\n}\n\n#[test]\nfn with_state_context_large_duplicates_edit_350_targets_only_matching_id() {\n    let before_paragraphs = (0..500).map(|_| \"Same\".to_string()).collect::<Vec<_>>();\n    let before_markdown = before_paragraphs.join(\"\\n\\n\") + \"\\n\";\n    let (before, before_order, state_context) = bootstrap_state(&before_markdown);\n    assert_eq!(before_order.len(), 500);\n\n    let mut after = before_paragraphs;\n    after[349] = \"Same updated\".to_string();\n    let after_markdown = after.join(\"\\n\\n\") + \"\\n\";\n\n    let changes = detect_changes_with_state_context(\n        Some(before),\n        
file_from_markdown(\"f1\", \"/notes.md\", &after_markdown),\n        Some(state_context),\n    )\n    .expect(\"detect_changes_with_state_context should succeed\");\n\n    assert_eq!(count_tombstones(&changes), 0);\n    assert_eq!(count_upserts(&changes), 1);\n    assert!(count_document_rows(&changes) <= 1);\n    assert!(before_order.contains(&upsert_ids(&changes)[0]));\n\n    if let Some(order) = document_order_from_changes(&changes) {\n        let before_set = before_order.into_iter().collect::<BTreeSet<_>>();\n        let after_set = order.into_iter().collect::<BTreeSet<_>>();\n        assert_eq!(before_set, after_set);\n    }\n}\n"
  },
  {
    "path": "packages/plugin-md-v2/tests/roundtrip.rs",
    "content": "mod common;\n\nuse common::{\n    apply_delta, collect_state_rows, decode_utf8, empty_file, file_from_markdown,\n    is_document_change, StateRows,\n};\nuse plugin_md_v2::{\n    apply_changes, detect_changes, detect_changes_with_state_context, PluginActiveStateRow,\n    PluginDetectStateContext, PluginEntityChange, BLOCK_SCHEMA_KEY, DOCUMENT_SCHEMA_KEY,\n};\n\nfn to_state_context(rows: &[PluginEntityChange]) -> PluginDetectStateContext {\n    PluginDetectStateContext {\n        active_state: Some(\n            rows.iter()\n                .map(|row| PluginActiveStateRow {\n                    entity_id: row.entity_id.clone(),\n                    schema_key: Some(row.schema_key.clone()),\n                    snapshot_content: row.snapshot_content.clone(),\n                    file_id: None,\n                    plugin_key: None,\n                    version_id: None,\n                    change_id: None,\n                    metadata: None,\n                    created_at: None,\n                    updated_at: None,\n                })\n                .collect::<Vec<_>>(),\n        ),\n    }\n}\n\nfn detect_with_state_context(\n    state: &StateRows,\n    before: plugin_md_v2::PluginFile,\n    after: plugin_md_v2::PluginFile,\n) -> Vec<PluginEntityChange> {\n    let rows = collect_state_rows(state);\n    let ctx = to_state_context(&rows);\n    detect_changes_with_state_context(Some(before), after, Some(ctx))\n        .expect(\"detect_changes_with_state_context should succeed\")\n}\n\nfn count_tombstones(changes: &[PluginEntityChange]) -> usize {\n    changes\n        .iter()\n        .filter(|c| c.schema_key == BLOCK_SCHEMA_KEY && c.snapshot_content.is_none())\n        .count()\n}\n\nfn count_upserts(changes: &[PluginEntityChange]) -> usize {\n    changes\n        .iter()\n        .filter(|c| c.schema_key == BLOCK_SCHEMA_KEY && c.snapshot_content.is_some())\n        .count()\n}\n\nfn count_document_rows(changes: &[PluginEntityChange]) -> usize {\n 
   changes\n        .iter()\n        .filter(|c| c.schema_key == DOCUMENT_SCHEMA_KEY)\n        .count()\n}\n\nfn upsert_block_types(changes: &[PluginEntityChange]) -> Vec<String> {\n    changes\n        .iter()\n        .filter(|c| c.schema_key == BLOCK_SCHEMA_KEY && c.snapshot_content.is_some())\n        .map(|c| {\n            let raw = c\n                .snapshot_content\n                .as_ref()\n                .expect(\"upsert should have snapshot\");\n            let parsed: serde_json::Value =\n                serde_json::from_str(raw).expect(\"block snapshot should be valid JSON\");\n            parsed\n                .get(\"type\")\n                .and_then(serde_json::Value::as_str)\n                .expect(\"block snapshot should contain type\")\n                .to_string()\n        })\n        .collect()\n}\n\n#[test]\nfn roundtrip_file_detect_state_apply_markdown() {\n    let markdown = \"# Title\\n\\nParagraph one.\\n\\nParagraph two.\\n\";\n    let file = file_from_markdown(\"f1\", \"/notes.md\", markdown);\n\n    let delta = detect_changes(None, file).expect(\"detect_changes should succeed\");\n\n    let mut state = StateRows::new();\n    apply_delta(&mut state, delta);\n\n    let materialized = apply_changes(empty_file(\"f1\", \"/notes.md\"), collect_state_rows(&state))\n        .expect(\"apply_changes should succeed\");\n\n    assert_eq!(decode_utf8(materialized), markdown);\n}\n\n#[test]\nfn roundtrip_edit_move_delete_across_block_rows() {\n    let before_markdown = \"Alpha.\\n\\nBravo.\\n\\nCharlie.\\n\";\n    let after_markdown = \"Charlie.\\n\\nAlpha updated.\\n\";\n\n    let before_file = file_from_markdown(\"f1\", \"/notes.md\", before_markdown);\n\n    let mut state = StateRows::new();\n    let bootstrap =\n        detect_changes(None, before_file.clone()).expect(\"bootstrap detect should succeed\");\n    apply_delta(&mut state, bootstrap);\n\n    let delta = detect_changes(\n        Some(before_file),\n        
file_from_markdown(\"f1\", \"/notes.md\", after_markdown),\n    )\n    .expect(\"delta detect should succeed\");\n\n    assert!(delta\n        .iter()\n        .any(|change| change.schema_key == DOCUMENT_SCHEMA_KEY));\n    assert!(delta.iter().any(|change| {\n        change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_none()\n    }));\n    assert!(delta.iter().any(|change| {\n        change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_some()\n    }));\n\n    apply_delta(&mut state, delta);\n\n    let materialized = apply_changes(empty_file(\"f1\", \"/notes.md\"), collect_state_rows(&state))\n        .expect(\"apply_changes should succeed\");\n\n    assert_eq!(decode_utf8(materialized), after_markdown);\n}\n\n#[test]\nfn roundtrip_move_only_updates_document_order() {\n    let before_markdown = \"First block.\\n\\nSecond block.\\n\";\n    let after_markdown = \"Second block.\\n\\nFirst block.\\n\";\n\n    let delta = detect_changes(\n        Some(file_from_markdown(\"f1\", \"/notes.md\", before_markdown)),\n        file_from_markdown(\"f1\", \"/notes.md\", after_markdown),\n    )\n    .expect(\"detect_changes should succeed\");\n\n    assert_eq!(delta.len(), 1);\n    assert!(delta.iter().all(is_document_change));\n}\n\n#[test]\nfn roundtrip_multi_step_evolution() {\n    let a = \"# Title\\n\\nOne.\\n\";\n    let b = \"# Title v2\\n\\nOne.\\n\\nTwo.\\n\";\n    let c = \"Two.\\n\\n# Title v3\\n\";\n\n    let a_file = file_from_markdown(\"f1\", \"/notes.md\", a);\n    let b_file = file_from_markdown(\"f1\", \"/notes.md\", b);\n    let c_file = file_from_markdown(\"f1\", \"/notes.md\", c);\n\n    let mut state = StateRows::new();\n\n    let delta_a = detect_changes(None, a_file.clone()).expect(\"detect_changes should succeed\");\n    apply_delta(&mut state, delta_a);\n\n    let delta_b = detect_with_state_context(&state, a_file, b_file.clone());\n    apply_delta(&mut state, delta_b);\n\n    let delta_c = detect_with_state_context(&state, 
b_file, c_file);\n    apply_delta(&mut state, delta_c);\n\n    let materialized = apply_changes(empty_file(\"f1\", \"/notes.md\"), collect_state_rows(&state))\n        .expect(\"apply_changes should succeed\");\n\n    assert_eq!(decode_utf8(materialized), c);\n}\n\n#[test]\nfn roundtrip_delete_all_blocks_to_empty_document() {\n    let before = \"A\\n\\nB\\n\";\n    let before_file = file_from_markdown(\"f1\", \"/notes.md\", before);\n\n    let mut state = StateRows::new();\n    let bootstrap =\n        detect_changes(None, before_file.clone()).expect(\"bootstrap detect should succeed\");\n    apply_delta(&mut state, bootstrap);\n\n    let delta = detect_changes(Some(before_file), file_from_markdown(\"f1\", \"/notes.md\", \"\"))\n        .expect(\"detect_changes should succeed\");\n    apply_delta(&mut state, delta);\n\n    let materialized = apply_changes(empty_file(\"f1\", \"/notes.md\"), collect_state_rows(&state))\n        .expect(\"apply_changes should succeed\");\n\n    assert_eq!(decode_utf8(materialized), \"\");\n}\n\n#[test]\nfn roundtrip_list_internal_edit_keeps_top_level_block_model() {\n    let before = \"- one\\n- two\\n\";\n    let after = \"- one\\n- two changed\\n\";\n    let before_file = file_from_markdown(\"f1\", \"/notes.md\", before);\n\n    let mut state = StateRows::new();\n    let bootstrap =\n        detect_changes(None, before_file.clone()).expect(\"bootstrap detect should succeed\");\n    apply_delta(&mut state, bootstrap);\n\n    let delta = detect_with_state_context(\n        &state,\n        before_file,\n        file_from_markdown(\"f1\", \"/notes.md\", after),\n    );\n\n    assert_eq!(count_tombstones(&delta), 0);\n    assert_eq!(count_upserts(&delta), 1);\n    assert_eq!(count_document_rows(&delta), 0);\n    assert_eq!(upsert_block_types(&delta), vec![\"list\".to_string()]);\n\n    apply_delta(&mut state, delta);\n\n    let materialized = apply_changes(empty_file(\"f1\", \"/notes.md\"), collect_state_rows(&state))\n        
.expect(\"apply_changes should succeed\");\n    assert_eq!(decode_utf8(materialized), after);\n}\n\n#[test]\nfn roundtrip_table_row_add_remove_reorder() {\n    let initial = \"| a | b |\\n| - | - |\\n| 1 | 2 |\\n\";\n    let add = \"| a | b |\\n| - | - |\\n| 1 | 2 |\\n| 3 | 4 |\\n\";\n    let reorder = \"| a | b |\\n| - | - |\\n| 3 | 4 |\\n| 1 | 2 |\\n\";\n    let remove = \"| a | b |\\n| - | - |\\n| 3 | 4 |\\n\";\n\n    let mut state = StateRows::new();\n    let initial_file = file_from_markdown(\"f1\", \"/notes.md\", initial);\n    let bootstrap =\n        detect_changes(None, initial_file.clone()).expect(\"bootstrap detect should succeed\");\n    apply_delta(&mut state, bootstrap);\n\n    let delta_add = detect_with_state_context(\n        &state,\n        initial_file,\n        file_from_markdown(\"f1\", \"/notes.md\", add),\n    );\n    assert_eq!(count_tombstones(&delta_add), 0);\n    assert_eq!(count_upserts(&delta_add), 1);\n    assert_eq!(count_document_rows(&delta_add), 0);\n    assert_eq!(upsert_block_types(&delta_add), vec![\"table\".to_string()]);\n    apply_delta(&mut state, delta_add);\n\n    let delta_reorder = detect_with_state_context(\n        &state,\n        file_from_markdown(\"f1\", \"/notes.md\", add),\n        file_from_markdown(\"f1\", \"/notes.md\", reorder),\n    );\n    assert_eq!(count_tombstones(&delta_reorder), 0);\n    assert_eq!(count_upserts(&delta_reorder), 1);\n    assert_eq!(count_document_rows(&delta_reorder), 0);\n    assert_eq!(\n        upsert_block_types(&delta_reorder),\n        vec![\"table\".to_string()]\n    );\n    apply_delta(&mut state, delta_reorder);\n\n    let delta_remove = detect_with_state_context(\n        &state,\n        file_from_markdown(\"f1\", \"/notes.md\", reorder),\n        file_from_markdown(\"f1\", \"/notes.md\", remove),\n    );\n    assert_eq!(count_tombstones(&delta_remove), 0);\n    assert_eq!(count_upserts(&delta_remove), 1);\n    assert_eq!(count_document_rows(&delta_remove), 0);\n    
assert_eq!(upsert_block_types(&delta_remove), vec![\"table\".to_string()]);\n    apply_delta(&mut state, delta_remove);\n\n    let materialized = apply_changes(empty_file(\"f1\", \"/notes.md\"), collect_state_rows(&state))\n        .expect(\"apply_changes should succeed\");\n    assert_eq!(decode_utf8(materialized), remove);\n}\n\n#[test]\nfn roundtrip_large_shuffle_500_with_state_context_low_noise() {\n    let paragraphs = (1..=500).map(|idx| format!(\"P{idx}\")).collect::<Vec<_>>();\n    let before_markdown = paragraphs.join(\"\\n\\n\") + \"\\n\";\n    let before_file = file_from_markdown(\"f1\", \"/notes.md\", &before_markdown);\n\n    let mut state = StateRows::new();\n    let bootstrap =\n        detect_changes(None, before_file.clone()).expect(\"bootstrap detect should succeed\");\n    apply_delta(&mut state, bootstrap);\n\n    let mut after = paragraphs;\n    after.rotate_left(123);\n    let after_markdown = after.join(\"\\n\\n\") + \"\\n\";\n\n    let delta = detect_with_state_context(\n        &state,\n        before_file,\n        file_from_markdown(\"f1\", \"/notes.md\", &after_markdown),\n    );\n    assert_eq!(count_tombstones(&delta), 0);\n    assert_eq!(count_upserts(&delta), 0);\n    assert_eq!(count_document_rows(&delta), 1);\n    apply_delta(&mut state, delta);\n\n    let materialized = apply_changes(empty_file(\"f1\", \"/notes.md\"), collect_state_rows(&state))\n        .expect(\"apply_changes should succeed\");\n    assert_eq!(decode_utf8(materialized), after_markdown);\n}\n\n#[test]\nfn roundtrip_large_tiny_edits_500_with_state_context_low_noise() {\n    let paragraphs = (1..=500).map(|idx| format!(\"P{idx}\")).collect::<Vec<_>>();\n    let before_markdown = paragraphs.join(\"\\n\\n\") + \"\\n\";\n    let before_file = file_from_markdown(\"f1\", \"/notes.md\", &before_markdown);\n\n    let mut state = StateRows::new();\n    let bootstrap =\n        detect_changes(None, before_file.clone()).expect(\"bootstrap detect should succeed\");\n    
apply_delta(&mut state, bootstrap);\n\n    let mut after = paragraphs;\n    for idx in [10usize, 111, 222, 333, 444] {\n        after[idx] = format!(\"{} x\", after[idx]);\n    }\n    let after_markdown = after.join(\"\\n\\n\") + \"\\n\";\n\n    let delta = detect_with_state_context(\n        &state,\n        before_file,\n        file_from_markdown(\"f1\", \"/notes.md\", &after_markdown),\n    );\n    assert_eq!(count_tombstones(&delta), 0);\n    assert_eq!(count_upserts(&delta), 5);\n    assert_eq!(count_document_rows(&delta), 0);\n    apply_delta(&mut state, delta);\n\n    let materialized = apply_changes(empty_file(\"f1\", \"/notes.md\"), collect_state_rows(&state))\n        .expect(\"apply_changes should succeed\");\n    assert_eq!(decode_utf8(materialized), after_markdown);\n}\n\n#[test]\nfn roundtrip_large_duplicate_edit_with_state_context_low_noise() {\n    let before_paragraphs = (0..500).map(|_| \"Same\".to_string()).collect::<Vec<_>>();\n    let before_markdown = before_paragraphs.join(\"\\n\\n\") + \"\\n\";\n    let before_file = file_from_markdown(\"f1\", \"/notes.md\", &before_markdown);\n\n    let mut state = StateRows::new();\n    let bootstrap =\n        detect_changes(None, before_file.clone()).expect(\"bootstrap detect should succeed\");\n    apply_delta(&mut state, bootstrap);\n\n    let mut after = before_paragraphs;\n    after[349] = \"Same updated\".to_string();\n    let after_markdown = after.join(\"\\n\\n\") + \"\\n\";\n\n    let delta = detect_with_state_context(\n        &state,\n        before_file,\n        file_from_markdown(\"f1\", \"/notes.md\", &after_markdown),\n    );\n    assert_eq!(count_tombstones(&delta), 0);\n    assert_eq!(count_upserts(&delta), 1);\n    assert!(count_document_rows(&delta) <= 1);\n    apply_delta(&mut state, delta);\n\n    let materialized = apply_changes(empty_file(\"f1\", \"/notes.md\"), collect_state_rows(&state))\n        .expect(\"apply_changes should succeed\");\n    
assert_eq!(decode_utf8(materialized), after_markdown);\n}\n\n#[test]\nfn roundtrip_move_insert_delete_large_with_state_context_low_noise() {\n    let paragraphs = (1..=500).map(|idx| format!(\"P{idx}\")).collect::<Vec<_>>();\n    let before_markdown = paragraphs.join(\"\\n\\n\") + \"\\n\";\n    let before_file = file_from_markdown(\"f1\", \"/notes.md\", &before_markdown);\n\n    let mut state = StateRows::new();\n    let bootstrap =\n        detect_changes(None, before_file.clone()).expect(\"bootstrap detect should succeed\");\n    apply_delta(&mut state, bootstrap);\n\n    let moved = paragraphs[450..460].to_vec();\n    let mut remaining = paragraphs[..450].to_vec();\n    remaining.extend_from_slice(&paragraphs[460..]);\n    remaining.retain(|entry| entry != \"P500\");\n    let idx_p300 = remaining\n        .iter()\n        .position(|entry| entry == \"P300\")\n        .expect(\"P300 should exist\");\n    let mut after = Vec::new();\n    after.extend(moved);\n    after.extend_from_slice(&remaining[..=idx_p300]);\n    after.push(\"PX\".to_string());\n    after.extend_from_slice(&remaining[idx_p300 + 1..]);\n    let after_markdown = after.join(\"\\n\\n\") + \"\\n\";\n\n    let delta = detect_with_state_context(\n        &state,\n        before_file,\n        file_from_markdown(\"f1\", \"/notes.md\", &after_markdown),\n    );\n    let tombstones = count_tombstones(&delta);\n    let upserts = count_upserts(&delta);\n    let docs = count_document_rows(&delta);\n    assert!(tombstones <= 1);\n    assert_eq!(upserts, 1);\n    assert_eq!(docs, 1);\n    assert!(tombstones + upserts <= 2);\n    apply_delta(&mut state, delta);\n\n    let materialized = apply_changes(empty_file(\"f1\", \"/notes.md\"), collect_state_rows(&state))\n        .expect(\"apply_changes should succeed\");\n    assert_eq!(decode_utf8(materialized), after_markdown);\n}\n"
  },
  {
    "path": "packages/plugin-md-v2/tests/schema.rs",
    "content": "use plugin_md_v2::schemas::{\n    schema_definitions, schema_jsons, BLOCK_SCHEMA_KEY, DOCUMENT_SCHEMA_KEY,\n};\nuse std::collections::BTreeSet;\n\n#[test]\nfn schema_definitions_have_expected_keys() {\n    let schemas = schema_definitions();\n\n    assert_eq!(schemas.len(), 2);\n\n    let expected_keys = BTreeSet::from([DOCUMENT_SCHEMA_KEY, BLOCK_SCHEMA_KEY]);\n\n    let mut actual_keys = BTreeSet::new();\n    for schema in schemas {\n        let key = schema\n            .get(\"x-lix-key\")\n            .and_then(serde_json::Value::as_str)\n            .expect(\"schema must define string x-lix-key\");\n        let primary_key = schema\n            .get(\"x-lix-primary-key\")\n            .and_then(serde_json::Value::as_array)\n            .expect(\"schema must define x-lix-primary-key array\");\n\n        actual_keys.insert(key);\n        assert_eq!(primary_key.len(), 1);\n        assert_eq!(primary_key[0].as_str(), Some(\"/id\"));\n    }\n\n    assert_eq!(actual_keys, expected_keys);\n}\n\n#[test]\nfn schema_json_accessors_return_expected_text() {\n    let raw = schema_jsons().join(\"\\n\");\n    assert!(raw.contains(\"\\\"x-lix-key\\\": \\\"markdown_v2_document\\\"\"));\n    assert!(raw.contains(\"\\\"x-lix-key\\\": \\\"markdown_v2_block\\\"\"));\n}\n"
  },
  {
    "path": "packages/react-utils/.oxlintrc.json",
    "content": "{\n\t\"plugins\": [\"typescript\"],\n\t\"categories\": {\n\t\t\"correctness\": \"error\",\n\t\t\"suspicious\": \"warn\"\n\t},\n\t\"env\": {\n\t\t\"es2022\": true,\n\t\t\"node\": true\n\t},\n\t\"ignorePatterns\": [\"dist\", \"coverage\", \"**/*.d.ts\"],\n\t\"rules\": {\n\t\t\"typescript/no-explicit-any\": \"off\"\n\t}\n}\n"
  },
  {
    "path": "packages/react-utils/.prettierrc.json",
    "content": "{\n\t\"useTabs\": true\n}\n"
  },
  {
    "path": "packages/react-utils/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2025 Opral US Inc.\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE."
  },
  {
    "path": "packages/react-utils/README.md",
    "content": "# @lix-js/react-utils\n\nReact 19 hooks and helpers for building reactive UIs on top of the Lix SDK. These utilities wire Kysely queries to React Suspense and subscribe to live database updates.\n\n- React 19 Suspense-first data fetching\n- Live updates via Lix.observe(query)\n- Minimal API surface: `LixProvider`, `useLix`, `useQuery`, `useQueryTakeFirst`, `useQueryTakeFirstOrThrow`\n\n## Installation\n\n```bash\nnpm i @lix-js/react-utils\n```\n\n## Requirements\n\n- React 19 (these hooks use `use()` and Suspense)\n- Lix SDK instance provided via context\n\n## Quick start\n\nWrap your app with `LixProvider` and pass a Lix instance.\n\n```tsx\nimport { createRoot } from \"react-dom/client\";\nimport { LixProvider } from \"@lix-js/react-utils\";\nimport { openLix } from \"@lix-js/sdk\";\n\nasync function bootstrap() {\n\tconst lix = await openLix({});\n\tconst root = createRoot(document.getElementById(\"root\")!);\n\troot.render(\n\t\t<LixProvider lix={lix}>\n\t\t\t<App />\n\t\t</LixProvider>,\n\t);\n}\n\nbootstrap();\n```\n\n## useQuery\n\nSubscribe to a live query using React Suspense. 
The callback receives `lix` and must return a compilable/executable query (for example `qb(lix).selectFrom(...)`).\n\n```tsx\nimport { Suspense } from \"react\";\nimport { ErrorBoundary } from \"react-error-boundary\";\nimport { useQuery } from \"@lix-js/react-utils\";\nimport { qb } from \"@lix-js/kysely\";\n\nfunction KeyValueList() {\n\tconst rows = useQuery((lix) =>\n\t\tqb(lix).selectFrom(\"key_value\").where(\"key\", \"like\", \"demo_%\").selectAll(),\n\t);\n\n\treturn (\n\t\t<ul>\n\t\t\t{rows.map((r) => (\n\t\t\t\t<li key={r.key}>\n\t\t\t\t\t{r.key}: {r.value}\n\t\t\t\t</li>\n\t\t\t))}\n\t\t</ul>\n\t);\n}\n\nexport function Page() {\n\treturn (\n\t\t<Suspense fallback={<div>Loading…</div>}>\n\t\t\t<ErrorBoundary fallbackRender={() => <div>Failed to load.</div>}>\n\t\t\t\t<KeyValueList />\n\t\t\t</ErrorBoundary>\n\t\t</Suspense>\n\t);\n}\n```\n\nOptions\n\n```tsx\n// One-time execution (no live updates)\nconst rows = useQuery((lix) => qb(lix).selectFrom(\"config\").selectAll(), {\n\tsubscribe: false,\n});\n```\n\n### Behavior\n\n- Suspends on first render until the underlying query resolves.\n- Re-suspends if the compiled SQL or params of the query change.\n- Subscribes to live updates when `subscribe !== false` and updates state on emissions.\n- On subscription error, clears the cached promise and throws to the nearest ErrorBoundary.\n\n## Single-row helpers\n\nWhen you want just one row:\n\n```tsx\nimport {\n\tuseQueryTakeFirst,\n\tuseQueryTakeFirstOrThrow,\n} from \"@lix-js/react-utils\";\nimport { qb } from \"@lix-js/kysely\";\n\n// First row or undefined\nconst file = useQueryTakeFirst((lix) =>\n\tqb(lix).selectFrom(\"file\").select([\"id\", \"path\"]).where(\"id\", \"=\", fileId),\n);\n\n// First row or throw (suspends, then throws to ErrorBoundary if not found)\nconst activeVersion = useQueryTakeFirstOrThrow((lix) =>\n\tqb(lix)\n\t\t.selectFrom(\"active_version\")\n\t\t.innerJoin(\"version\", \"version.id\", 
\"active_version.version_id\")\n\t\t.selectAll(\"version\"),\n);\n```\n\n## Query Builder Integration\n\n`react-utils` does not construct query builders for you. Pass any query object that implements `compile()` and `execute()`. In practice, most apps use `qb(lix)` from `@lix-js/kysely`.\n\n## Synchronizing external state updates (rich text editors, etc.)\n\nWhen building experiences like rich text editors, dashboards, or collaborative views, you often need to synchronize external changes while avoiding feedback loops from your own writes. Lix provides a simple pattern for this using a “writer key” and commit events.\n\nSee the guide for the pattern, pitfalls, and a decision matrix:\n\n- https://lix.dev/guide/writer-key\n\n## Provider and context\n\n```tsx\nimport { LixProvider, useLix } from \"@lix-js/react-utils\";\n\nfunction NeedsLix() {\n\tconst lix = useLix(); // same instance passed to LixProvider\n\t// …\n}\n```\n\n## FAQ\n\n- Why does the callback receive `lix` directly?\n  - The hook is query-builder agnostic. You can wrap `lix` however you want (for example `qb(lix)`), and react-utils only needs the compiled SQL + execute behavior.\n\n- Can I do imperative fetching?\n  - Yes, you can call `qb(lix)` directly in event handlers. `useQuery` is for declarative, Suspense-friendly reads.\n\n## TypeScript tips\n\n- `useQuery<TRow>(...)` infers the row shape from your Kysely selection. You can also provide an explicit generic to guide inference if needed.\n\n## License\n\nApache-2.0\n"
  },
  {
    "path": "packages/react-utils/package.json",
    "content": "{\n\t\"name\": \"@lix-js/react-utils\",\n\t\"type\": \"module\",\n\t\"publishConfig\": {\n\t\t\"access\": \"public\"\n\t},\n\t\"version\": \"0.1.0\",\n\t\"license\": \"Apache-2.0\",\n\t\"types\": \"./dist/index.d.ts\",\n\t\"exports\": {\n\t\t\".\": \"./dist/index.js\"\n\t},\n\t\"scripts\": {\n\t\t\"build\": \"tsc --build\",\n\t\t\"test\": \"tsc --noEmit && vitest run\",\n\t\t\"test:watch\": \"vitest\",\n\t\t\"lint\": \"oxlint --config .oxlintrc.json --tsconfig ./tsconfig.json --format stylish src\",\n\t\t\"dev\": \"tsc --watch\",\n\t\t\"format\": \"prettier ./src --write\"\n\t},\n\t\"_comment\": \"Required for tree-shaking https://webpack.js.org/guides/tree-shaking/#mark-the-file-as-side-effect-free\",\n\t\"sideEffects\": false,\n\t\"peerDependencies\": {\n\t\t\"@lix-js/sdk\": \"*\",\n\t\t\"react\": \">=19.0.0\"\n\t},\n\t\"devDependencies\": {\n\t\t\"@lix-js/kysely\": \"workspace:*\",\n\t\t\"@lix-js/sdk\": \"workspace:*\",\n\t\t\"@testing-library/react\": \"^16.3.0\",\n\t\t\"@types/react\": \"^19.1.8\",\n\t\t\"@vitest/coverage-v8\": \"^3.2.4\",\n\t\t\"https-proxy-agent\": \"7.0.2\",\n\t\t\"jsdom\": \"^26.1.0\",\n\t\t\"oxlint\": \"^1.14.0\",\n\t\t\"prettier\": \"^3.3.3\",\n\t\t\"react\": \"19.2.0\",\n\t\t\"react-dom\": \"19.2.0\",\n\t\t\"typescript\": \"^5.5.4\",\n\t\t\"vitest\": \"^3.2.4\"\n\t}\n}\n"
  },
  {
    "path": "packages/react-utils/src/hooks/use-lix.test.tsx",
    "content": "import { test, expect } from \"vitest\";\nimport { renderHook } from \"@testing-library/react\";\nimport React from \"react\";\nimport { useLix } from \"./use-lix.js\";\nimport { LixProvider } from \"../provider.js\";\nimport { openLix } from \"@lix-js/sdk\";\n\ntest(\"useLix throws error when used outside LixProvider\", () => {\n\texpect(() => {\n\t\trenderHook(() => useLix());\n\t}).toThrow(\"useLix must be used inside <LixProvider>.\");\n});\n\ntest(\"useLix returns the Lix instance when used inside LixProvider\", async () => {\n\tconst lix = await openLix({});\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>{children}</LixProvider>\n\t);\n\n\tconst { result } = renderHook(() => useLix(), { wrapper });\n\n\texpect(result.current).toBe(lix);\n\texpect(result.current.execute).toBeDefined();\n\texpect(result.current.observe).toBeDefined();\n\texpect(result.current.close).toBeDefined();\n\n\tawait lix.close();\n});\n"
  },
  {
    "path": "packages/react-utils/src/hooks/use-lix.ts",
    "content": "import { useContext } from \"react\";\nimport { LixContext } from \"../provider.js\";\n\n/**\n * Hook to access the Lix instance from the context.\n * Must be used within a LixProvider.\n *\n * @example\n * ```tsx\n * function CreateAccountButton() {\n *   const lix = useLix();\n *\n *   const handleClick = async () => {\n *     await qb(lix)\n *       .insertInto('account')\n *       .values({\n *         name: 'John Doe',\n *       })\n *       .execute();\n *   };\n *\n *   return (\n *     <button onClick={handleClick}>\n *       Create Account\n *     </button>\n *   );\n * }\n * ```\n */\nexport function useLix() {\n\tconst lix = useContext(LixContext);\n\tif (!lix) {\n\t\tthrow new Error(\"useLix must be used inside <LixProvider>.\");\n\t}\n\treturn lix;\n}\n"
  },
  {
    "path": "packages/react-utils/src/hooks/use-query.test.tsx",
    "content": "import { test, expect } from \"vitest\";\nimport { renderHook, waitFor, act } from \"@testing-library/react\";\nimport React, { Suspense } from \"react\";\nimport {\n\tuseQuery,\n\tuseQueryTakeFirst,\n\tuseQueryTakeFirstOrThrow,\n} from \"./use-query.js\";\nimport { LixProvider } from \"../provider.js\";\nimport { openLix } from \"@lix-js/sdk\";\nimport { qb, sql } from \"@lix-js/kysely\";\n\ntype KeyValueRow = {\n\tkey: string;\n\tvalue: unknown;\n\treadonly [key: string]: unknown;\n};\n\n// React Error Boundaries require class components - no functional equivalent exists\nclass MockErrorBoundary extends React.Component<\n\t{ children: React.ReactNode; onError?: (error: Error) => void },\n\t{ hasError: boolean; error?: Error }\n> {\n\toverride state = { hasError: false, error: undefined };\n\n\t// @ts-expect-error - type error\n\tstatic override getDerivedStateFromError(error: Error) {\n\t\treturn { hasError: true, error };\n\t}\n\n\toverride componentDidCatch(error: Error) {\n\t\tthis.props.onError?.(error);\n\t}\n\n\toverride render() {\n\t\treturn this.state.hasError ? 
(\n\t\t\t<div>Error occurred</div>\n\t\t) : (\n\t\t\tthis.props.children\n\t\t);\n\t}\n}\n\ntest(\"useQuery throws error when used outside LixProvider\", () => {\n\t// We need to catch the error since it's thrown during render\n\texpect(() => {\n\t\trenderHook(() =>\n\t\t\tuseQuery((lix) => qb(lix).selectFrom(\"lix_key_value\").selectAll()),\n\t\t);\n\t}).toThrow(\"useQuery must be used inside <LixProvider>.\");\n});\n\ntest(\"returns array with data using new API\", async () => {\n\tconst lix = await openLix({});\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tlet hookResult: { current: KeyValueRow[] };\n\n\tawait act(async () => {\n\t\tconst { result } = renderHook(\n\t\t\t() => {\n\t\t\t\tconst data = useQuery((lix) =>\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t.where(\"key\", \"like\", \"test_%\"),\n\t\t\t\t);\n\t\t\t\treturn data;\n\t\t\t},\n\t\t\t{ wrapper },\n\t\t);\n\t\thookResult = result;\n\t});\n\n\t// Wait for suspense to resolve and data to be available\n\tawait waitFor(() => {\n\t\texpect(Array.isArray(hookResult.current)).toBe(true);\n\t\texpect(hookResult.current).toEqual([]); // No test keys initially\n\t});\n\n\tawait lix.close();\n});\n\ntest(\"updates when data changes\", async () => {\n\tconst lix = await openLix({});\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tlet hookResult: { current: KeyValueRow[] };\n\n\tawait act(async () => {\n\t\tconst { result } = renderHook(\n\t\t\t() => {\n\t\t\t\tconst data = useQuery((lix) 
=>\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t.where(\"key\", \"like\", \"react_test_%\"),\n\t\t\t\t);\n\t\t\t\treturn data;\n\t\t\t},\n\t\t\t{ wrapper },\n\t\t);\n\t\thookResult = result;\n\t});\n\n\t// Wait for initial empty data\n\tawait waitFor(() => {\n\t\texpect(hookResult.current).toEqual([]);\n\t});\n\n\t// Insert a test key-value pair\n\tawait act(async () => {\n\t\tawait qb(lix)\n\t\t\t.insertInto(\"lix_key_value\")\n\t\t\t.values({ key: \"react_test_key\", value: \"test_value\" })\n\t\t\t.execute();\n\t});\n\n\t// Check updated data\n\tawait waitFor(() => {\n\t\texpect(hookResult.current).toHaveLength(1);\n\t\texpect(hookResult.current[0]).toMatchObject({\n\t\t\tkey: \"react_test_key\",\n\t\t\tvalue: \"test_value\",\n\t\t});\n\t});\n\n\tawait lix.close();\n});\n\ntest(\"useQueryTakeFirst returns the first row or undefined\", async () => {\n\tconst lix = await openLix({});\n\n\t// Insert test data\n\tawait qb(lix)\n\t\t.insertInto(\"lix_key_value\")\n\t\t.values([\n\t\t\t{ key: \"first_test_1\", value: \"first\" },\n\t\t\t{ key: \"first_test_2\", value: \"second\" },\n\t\t])\n\t\t.execute();\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tlet hookResult: { current: KeyValueRow | undefined } | undefined;\n\n\tawait act(async () => {\n\t\tconst { result } = renderHook(\n\t\t\t() => {\n\t\t\t\tconst data = useQueryTakeFirst((lix) =>\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t.where(\"key\", \"like\", \"first_test_%\")\n\t\t\t\t\t\t.orderBy(\"key\", \"asc\"),\n\t\t\t\t);\n\t\t\t\treturn data;\n\t\t\t},\n\t\t\t{ wrapper },\n\t\t);\n\t\thookResult = result;\n\t});\n\n\tawait waitFor(() => 
{\n\t\texpect(hookResult!.current).toMatchObject({\n\t\t\tkey: \"first_test_1\",\n\t\t\tvalue: \"first\",\n\t\t});\n\t});\n\n\tawait lix.close();\n});\n\ntest(\"useQueryTakeFirst returns undefined for empty results\", async () => {\n\tconst lix = await openLix({});\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tlet hookResult: { current: KeyValueRow | undefined };\n\n\tawait act(async () => {\n\t\tconst { result } = renderHook(\n\t\t\t() => {\n\t\t\t\tconst data = useQueryTakeFirst((lix) =>\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t.where(\"key\", \"=\", \"non_existent\"),\n\t\t\t\t);\n\t\t\t\treturn data;\n\t\t\t},\n\t\t\t{ wrapper },\n\t\t);\n\t\thookResult = result;\n\t});\n\n\tawait waitFor(() => {\n\t\texpect(hookResult.current).toBeUndefined();\n\t});\n\n\tawait lix.close();\n});\n\ntest(\"useQueryTakeFirst (subscribe:false) returns fresh data on rerender\", async () => {\n\tconst lix = await openLix({});\n\tawait qb(lix)\n\t\t.insertInto(\"lix_key_value\")\n\t\t.values([\n\t\t\t{ key: \"memo_a\", value: \"value_a\" },\n\t\t\t{ key: \"memo_b\", value: \"value_b\" },\n\t\t])\n\t\t.execute();\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading…</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tconst seenKeys: Array<string | undefined> = [];\n\tconst hook = await act(async () =>\n\t\trenderHook(\n\t\t\t({ lookup = \"memo_a\" }: { lookup?: string } = {}) => {\n\t\t\t\tconst row = useQueryTakeFirst(\n\t\t\t\t\t(lix) =>\n\t\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t\t.where(\"key\", \"=\", 
lookup),\n\t\t\t\t\t{ subscribe: false },\n\t\t\t\t);\n\t\t\t\tif (row?.key) seenKeys.push(row.key);\n\t\t\t\treturn row;\n\t\t\t},\n\t\t\t{ wrapper },\n\t\t),\n\t);\n\tconst { rerender, unmount } = hook;\n\n\tawait waitFor(() => {\n\t\texpect(seenKeys.length).toBeGreaterThan(0);\n\t});\n\tseenKeys.length = 0;\n\tawait act(async () => {\n\t\trerender({ lookup: \"memo_b\" });\n\t});\n\n\tawait waitFor(() => {\n\t\texpect(seenKeys.length).toBeGreaterThan(0);\n\t});\n\texpect(seenKeys[0]).toBe(\"memo_b\");\n\n\tunmount();\n\tawait lix.close();\n});\n\ntest(\"useQueryTakeFirst (subscribe:false) does not reuse previous rows\", async () => {\n\tconst lix = await openLix({});\n\tawait qb(lix)\n\t\t.insertInto(\"lix_key_value\")\n\t\t.values([\n\t\t\t{ key: \"no_subscribe_a\", value: \"value_a\" },\n\t\t\t{ key: \"no_subscribe_b\", value: \"value_b\" },\n\t\t])\n\t\t.execute();\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tconst emissions: Array<string | null | undefined> = [];\n\tlet rerender: (props?: { key: string }) => void;\n\tawait act(async () => {\n\t\tconst { rerender: rerenderFn } = renderHook(\n\t\t\t({ key = \"no_subscribe_a\" }: { key?: string } = {}) => {\n\t\t\t\tconst row = useQueryTakeFirst(\n\t\t\t\t\t(lix) =>\n\t\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t\t.where(\"key\", \"=\", key),\n\t\t\t\t\t{ subscribe: false },\n\t\t\t\t);\n\t\t\t\temissions.push(row?.key);\n\t\t\t\treturn row;\n\t\t\t},\n\t\t\t{ wrapper },\n\t\t);\n\t\trerender = rerenderFn;\n\t});\n\n\tawait waitFor(() => {\n\t\texpect(emissions).toContain(\"no_subscribe_a\");\n\t});\n\n\temissions.length = 0;\n\n\tawait act(async () => {\n\t\trerender({ key: \"no_subscribe_b\" });\n\t});\n\n\tawait waitFor(() => 
{\n\t\texpect(emissions).toContain(\"no_subscribe_b\");\n\t});\n\n\t// The first emission after switching should be the new key, not the previous one.\n\texpect(emissions[0]).toBe(\"no_subscribe_b\");\n\n\tawait lix.close();\n});\n\ntest(\"useQueryTakeFirst updates reference when underlying row changes\", async () => {\n\tconst lix = await openLix({});\n\tconst rowKey = \"react_first_ref\";\n\n\tawait qb(lix)\n\t\t.insertInto(\"lix_key_value\")\n\t\t.values({ key: rowKey, value: \"initial\" 
})\n\t\t.execute();\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tlet hookResult: { current: KeyValueRow | undefined };\n\n\tawait act(async () => {\n\t\tconst { result } = renderHook(\n\t\t\t() =>\n\t\t\t\tuseQueryTakeFirst((lix) =>\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t.where(\"key\", \"=\", rowKey),\n\t\t\t\t),\n\t\t\t{ wrapper },\n\t\t);\n\t\thookResult = result;\n\t});\n\n\tawait waitFor(() => {\n\t\texpect(hookResult!.current?.value).toBe(\"initial\");\n\t});\n\n\tconst initialRef = hookResult!.current;\n\n\tawait act(async () => {\n\t\tawait lix.execute(\"DELETE FROM lix_key_value WHERE key = ?1\", [rowKey]);\n\t\tawait qb(lix)\n\t\t\t.insertInto(\"lix_key_value\")\n\t\t\t.values({ key: rowKey, value: \"updated\" })\n\t\t\t.execute();\n\t});\n\n\tawait waitFor(() => {\n\t\texpect(hookResult!.current?.value).toBe(\"updated\");\n\t\texpect(hookResult!.current).not.toBe(initialRef);\n\t});\n\n\tawait lix.close();\n});\n\ntest(\"useQuery key includes lix instance (no cross-instance reuse)\", async () => {\n\tconst lix1 = await openLix({});\n\tconst lix2 = await openLix({});\n\n\tawait qb(lix1)\n\t\t.insertInto(\"lix_key_value\")\n\t\t.values({ key: \"shared_key\", value: \"instance_one\" })\n\t\t.execute();\n\tawait qb(lix2)\n\t\t.insertInto(\"lix_key_value\")\n\t\t.values({ key: \"shared_key\", value: \"instance_two\" })\n\t\t.execute();\n\n\tlet current = lix1;\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={current}>\n\t\t\t<Suspense fallback={<div>Loading…</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tlet hookResult: { current: KeyValueRow[] };\n\tlet rerender: () => 
void;\n\n\tawait act(async () => {\n\t\tconst { result, rerender: rerenderFn } = renderHook(\n\t\t\t() =>\n\t\t\t\tuseQuery((lix) =>\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t.where(\"key\", \"=\", \"shared_key\"),\n\t\t\t\t),\n\t\t\t{ wrapper },\n\t\t);\n\t\thookResult = result;\n\t\trerender = rerenderFn;\n\t});\n\n\tawait waitFor(() => {\n\t\texpect(hookResult.current[0]?.value).toBe(\"instance_one\");\n\t});\n\n\tawait act(async () => {\n\t\tcurrent = lix2;\n\t\trerender();\n\t});\n\n\tawait waitFor(() => {\n\t\texpect(hookResult.current[0]?.value).toBe(\"instance_two\");\n\t});\n\n\tawait lix1.close();\n\tawait lix2.close();\n});\n\ntest(\"useQuery re-emits when the result returns to its initial value\", async () => {\n\tconst lix = await openLix({});\n\tconst key = \"agg_count_test\";\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tlet hookResult: { current: KeyValueRow[] } | undefined;\n\n\tawait act(async () => {\n\t\tconst { result } = renderHook(\n\t\t\t() =>\n\t\t\t\tuseQuery((lix) =>\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t.where(\"key\", \"=\", key),\n\t\t\t\t),\n\t\t\t{ wrapper },\n\t\t);\n\t\thookResult = result;\n\t});\n\n\tawait waitFor(() => {\n\t\texpect(hookResult!.current).toHaveLength(0);\n\t});\n\n\tawait act(async () => {\n\t\tawait qb(lix)\n\t\t\t.insertInto(\"lix_key_value\")\n\t\t\t.values({ key, value: \"v1\" })\n\t\t\t.execute();\n\t});\n\n\tawait waitFor(() => {\n\t\texpect(hookResult!.current).toHaveLength(1);\n\t});\n\n\tawait act(async () => {\n\t\tawait lix.execute(\"DELETE FROM lix_key_value WHERE key = ?1\", [key]);\n\t});\n\n\tawait waitFor(() => 
{\n\t\texpect(hookResult!.current).toHaveLength(0);\n\t});\n\n\tawait lix.close();\n});\n\ntest(\"return type is properly typed\", async () => {\n\tconst lix = await openLix({});\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tlet hookResult: { current: KeyValueRow[] } | undefined;\n\n\tawait act(async () => {\n\t\tconst { result } = renderHook(\n\t\t\t() => {\n\t\t\t\tconst data = useQuery((lix) =>\n\t\t\t\t\tqb(lix).selectFrom(\"lix_key_value\").selectAll(),\n\t\t\t\t);\n\t\t\t\treturn data;\n\t\t\t},\n\t\t\t{ wrapper },\n\t\t);\n\t\thookResult = result;\n\t});\n\n\t// Wait for data to be available\n\tawait waitFor(() => {\n\t\texpect(hookResult).toBeDefined();\n\t\texpect(Array.isArray(hookResult!.current)).toBe(true);\n\t});\n\n\t// Type test: data should be properly typed as an array of KeyValue\n\t// This should pass without any type errors if the types are working correctly\n\thookResult!.current satisfies KeyValueRow[];\n\n\tawait lix.close();\n});\n\ntest(\"error handling with ErrorBoundary\", async () => {\n\tconst lix = await openLix({});\n\tlet caught: Error | undefined;\n\n\t// Suppress console errors for this test\n\tconst originalError = console.error;\n\tconsole.error = () => {};\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading…</div>}>\n\t\t\t\t<MockErrorBoundary onError={(e) => (caught = e)}>\n\t\t\t\t\t{children}\n\t\t\t\t</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tawait act(async () => {\n\t\trenderHook(\n\t\t\t() =>\n\t\t\t\tuseQuery((lix) =>\n\t\t\t\t\t// invalid table: will reject then throw\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"non_existent_table\" as never)\n\t\t\t\t\t\t.selectAll(),\n\t\t\t\t),\n\t\t\t{ 
wrapper },\n\t\t);\n\t});\n\n\tawait waitFor(() => {\n\t\texpect(caught).toBeDefined();\n\t});\n\n\tconst caughtMessage =\n\t\tcaught instanceof Error ? caught.message : caught ? String(caught) : \"\";\n\texpect(caughtMessage).toMatch(/no such table|non_existent_table/i);\n\n\t// Restore console.error\n\tconsole.error = originalError;\n\tawait lix.close();\n});\n\ntest(\"useQueryTakeFirstOrThrow returns data when result exists\", async () => {\n\tconst lix = await openLix({});\n\n\t// Insert test data\n\tawait qb(lix)\n\t\t.insertInto(\"lix_key_value\")\n\t\t.values({ key: \"throw_test\", value: \"exists\" })\n\t\t.execute();\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tlet hookResult: { current: KeyValueRow };\n\n\tawait act(async () => {\n\t\tconst { result } = renderHook(\n\t\t\t() => {\n\t\t\t\tconst data = useQueryTakeFirstOrThrow((lix) =>\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t.where(\"key\", \"=\", \"throw_test\"),\n\t\t\t\t);\n\t\t\t\treturn data;\n\t\t\t},\n\t\t\t{ wrapper },\n\t\t);\n\t\thookResult = result;\n\t});\n\n\tawait waitFor(() => {\n\t\\texpect(hookResult.current).toMatchObject({\n\t\t\tkey: \"throw_test\",\n\t\t\tvalue: \"exists\",\n\t\t});\n\t});\n\n\tawait lix.close();\n});\n\ntest(\"useQueryTakeFirstOrThrow throws when no result found\", async () => {\n\tconst lix = await openLix({});\n\tlet caught: Error | undefined;\n\n\t// Suppress console errors for this test\n\tconst originalError = console.error;\n\tconsole.error = () => {};\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary onError={(e) => (caught = 
e)}>\n\t\t\t\t\t{children}\n\t\t\t\t</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tawait act(async () => {\n\t\trenderHook(\n\t\t\t() => {\n\t\t\t\tconst data = useQueryTakeFirstOrThrow((lix) =>\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t.where(\"key\", \"=\", \"does_not_exist\"),\n\t\t\t\t);\n\t\t\t\treturn data;\n\t\t\t},\n\t\t\t{ wrapper },\n\t\t);\n\t});\n\n\tawait waitFor(() => {\n\t\texpect(caught).toBeDefined();\n\t\texpect(caught!.message).toBe(\"No result found\");\n\t});\n\n\t// Restore console.error\n\tconsole.error = originalError;\n\tawait lix.close();\n});\n\ntest(\"re-executes when query function changes (dependency array fix)\", async () => {\n\tconst lix = await openLix({});\n\n\t// Insert test data with different prefixes\n\tawait qb(lix)\n\t\t.insertInto(\"lix_key_value\")\n\t\t.values([\n\t\t\t{ key: \"prefix_a_1\", value: \"value_a_1\" },\n\t\t\t{ key: \"prefix_a_2\", value: \"value_a_2\" },\n\t\t\t{ key: \"prefix_b_1\", value: \"value_b_1\" },\n\t\t\t{ key: \"prefix_b_2\", value: \"value_b_2\" },\n\t\t])\n\t\t.execute();\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\t// State to control which prefix to query\n\tlet hookResult: { current: KeyValueRow[] };\n\tlet rerender: (props?: { prefix: string }) => void;\n\n\tawait act(async () => {\n\t\tconst { result, rerender: rerenderFn } = renderHook(\n\t\t\t({ prefix = \"prefix_a\" }: { prefix?: string } = {}) => {\n\t\t\t\t// Create a new query function each time prefix changes\n\t\t\t\tconst data = useQuery((lix) =>\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t.where(\"key\", \"like\", `${prefix}_%`)\n\t\t\t\t\t\t.orderBy(\"key\", 
\"asc\"),\n\t\t\t\t);\n\t\t\t\treturn data;\n\t\t\t},\n\t\t\t{\n\t\t\t\twrapper,\n\t\t\t\tinitialProps: { prefix: \"prefix_a\" },\n\t\t\t},\n\t\t);\n\t\thookResult = result;\n\t\trerender = rerenderFn;\n\t});\n\n\t// Wait for initial data (prefix_a results)\n\tawait waitFor(() => {\n\t\texpect(hookResult.current).toHaveLength(2);\n\t\texpect(hookResult.current[0]).toMatchObject({\n\t\t\tkey: \"prefix_a_1\",\n\t\t\tvalue: \"value_a_1\",\n\t\t});\n\t\texpect(hookResult.current[1]).toMatchObject({\n\t\t\tkey: \"prefix_a_2\",\n\t\t\tvalue: \"value_a_2\",\n\t\t});\n\t});\n\n\t// Change the query prefix - this should trigger a re-execution\n\tawait act(async () => {\n\t\trerender({ prefix: \"prefix_b\" });\n\t});\n\n\t// Wait for the query to re-execute with new prefix\n\tawait waitFor(() => {\n\t\texpect(hookResult.current).toHaveLength(2);\n\t\texpect(hookResult.current[0]).toMatchObject({\n\t\t\tkey: \"prefix_b_1\",\n\t\t\tvalue: \"value_b_1\",\n\t\t});\n\t\texpect(hookResult.current[1]).toMatchObject({\n\t\t\tkey: \"prefix_b_2\",\n\t\t\tvalue: \"value_b_2\",\n\t\t});\n\t});\n\n\tawait lix.close();\n});\n\ntest(\"useQuery with subscribe: false executes once without live updates\", async () => {\n\tconst lix = await openLix({});\n\n\t// Insert initial test data\n\tawait qb(lix)\n\t\t.insertInto(\"lix_key_value\")\n\t\t.values({ key: \"once_test\", value: \"initial\" })\n\t\t.execute();\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tlet hookResult: { current: KeyValueRow[] };\n\n\tawait act(async () => {\n\t\tconst { result } = renderHook(\n\t\t\t() => {\n\t\t\t\tconst data = useQuery(\n\t\t\t\t\t(lix) =>\n\t\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t\t.where(\"key\", \"=\", 
\"once_test\"),\n\t\t\t\t\t{ subscribe: false },\n\t\t\t\t);\n\t\t\t\treturn data;\n\t\t\t},\n\t\t\t{ wrapper },\n\t\t);\n\t\thookResult = result;\n\t});\n\n\t// Wait for initial data\n\tawait waitFor(() => {\n\t\texpect(hookResult.current).toHaveLength(1);\n\t\texpect(hookResult.current[0]).toMatchObject({\n\t\t\tkey: \"once_test\",\n\t\t\tvalue: \"initial\",\n\t\t});\n\t});\n\n\t// Update the data in the database\n\tawait act(async () => {\n\t\tawait qb(lix)\n\t\t\t.updateTable(\"lix_key_value\")\n\t\t\t.set({ value: \"updated\" })\n\t\t\t.where(\"key\", \"=\", \"once_test\")\n\t\t\t.execute();\n\t});\n\n\t// Give some time for potential updates (there shouldn't be any)\n\tawait new Promise((resolve) => setTimeout(resolve, 100));\n\n\t// Data should NOT have updated because subscribe: false\n\texpect(hookResult!.current).toHaveLength(1);\n\texpect(hookResult!.current[0]).toMatchObject({\n\t\tkey: \"once_test\",\n\t\tvalue: \"initial\", // Still the initial value\n\t});\n\n\tawait lix.close();\n});\n\ntest(\"useQuery subscription updates when query dependencies change\", async () => {\n\tconst lix = await openLix({});\n\n\t// Insert initial test data\n\tawait qb(lix)\n\t\t.insertInto(\"lix_key_value\")\n\t\t.values([\n\t\t\t{ key: \"sub_test_a_1\", value: \"initial_a\" },\n\t\t\t{ key: \"sub_test_b_1\", value: \"initial_b\" },\n\t\t])\n\t\t.execute();\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tlet hookResult: { current: KeyValueRow[] };\n\tlet rerender: (props?: { filter: string }) => void;\n\n\tawait act(async () => {\n\t\tconst { result, rerender: rerenderFn } = renderHook(\n\t\t\t({ filter = \"sub_test_a\" }: { filter?: string } = {}) => {\n\t\t\t\tconst data = useQuery((lix) 
=>\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t.selectAll()\n\t\t\t\t\t\t.where(\"key\", \"like\", `${filter}_%`),\n\t\t\t\t);\n\t\t\t\treturn data;\n\t\t\t},\n\t\t\t{\n\t\t\t\twrapper,\n\t\t\t\tinitialProps: { filter: \"sub_test_a\" },\n\t\t\t},\n\t\t);\n\t\thookResult = result;\n\t\trerender = rerenderFn;\n\t});\n\n\t// Verify initial subscription works\n\tawait waitFor(() => {\n\t\texpect(hookResult.current).toHaveLength(1);\n\t\texpect(hookResult.current[0]?.key).toBe(\"sub_test_a_1\");\n\t});\n\n\t// Switch to different filter - new subscription should be created\n\tawait act(async () => {\n\t\trerender({ filter: \"sub_test_b\" });\n\t});\n\n\t// Verify new subscription works\n\tawait waitFor(() => {\n\t\texpect(hookResult.current).toHaveLength(1);\n\t\texpect(hookResult.current[0]?.key).toBe(\"sub_test_b_1\");\n\t});\n\n\t// Insert new data that matches the current filter\n\tawait act(async () => {\n\t\tawait qb(lix)\n\t\t\t.insertInto(\"lix_key_value\")\n\t\t\t.values({ key: \"sub_test_b_2\", value: \"new_b\" })\n\t\t\t.execute();\n\t});\n\n\t// The subscription should pick up the new data\n\tawait waitFor(() => {\n\t\texpect(hookResult.current).toHaveLength(2);\n\t\texpect(hookResult.current.some((item) => item.key === \"sub_test_b_2\")).toBe(\n\t\t\ttrue,\n\t\t);\n\t});\n\n\tawait lix.close();\n});\n\ntest(\"identical useQuery subscriptions share observe event payloads\", async () => {\n\tconst lix = await openLix({});\n\tconst key = \"shared_subscriptions_engine_key\";\n\n\tawait qb(lix)\n\t\t.insertInto(\"lix_key_value\")\n\t\t.values({ key, value: \"before\" })\n\t\t.execute();\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={lix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tlet hookResult:\n\t\t| {\n\t\t\t\tcurrent: {\n\t\t\t\t\tleft: 
KeyValueRow[];\n\t\t\t\t\tright: KeyValueRow[];\n\t\t\t\t};\n\t\t  }\n\t\t| undefined;\n\n\tawait act(async () => {\n\t\tconst { result } = renderHook(\n\t\t\t() => {\n\t\t\t\tconst left = useQuery((lix) =>\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t.select([\n\t\t\t\t\t\t\t\"key\",\n\t\t\t\t\t\t\t\"value\",\n\t\t\t\t\t\t\tsql<string>`CAST(random() AS TEXT)`.as(\"nonce\"),\n\t\t\t\t\t\t])\n\t\t\t\t\t\t.where(\"key\", \"=\", key),\n\t\t\t\t);\n\t\t\t\tconst right = useQuery((lix) =>\n\t\t\t\t\tqb(lix)\n\t\t\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t\t\t.select([\n\t\t\t\t\t\t\t\"key\",\n\t\t\t\t\t\t\t\"value\",\n\t\t\t\t\t\t\tsql<string>`CAST(random() AS TEXT)`.as(\"nonce\"),\n\t\t\t\t\t\t])\n\t\t\t\t\t\t.where(\"key\", \"=\", key),\n\t\t\t\t);\n\t\t\t\treturn { left, right };\n\t\t\t},\n\t\t\t{ wrapper },\n\t\t);\n\t\thookResult = result;\n\t});\n\n\tawait waitFor(() => {\n\t\texpect(hookResult!.current.left).toHaveLength(1);\n\t\texpect(hookResult!.current.right).toHaveLength(1);\n\t});\n\n\tawait act(async () => {\n\t\tawait qb(lix)\n\t\t\t.updateTable(\"lix_key_value\")\n\t\t\t.set({ value: \"after\" })\n\t\t\t.where(\"key\", \"=\", key)\n\t\t\t.execute();\n\t});\n\n\tawait waitFor(() => {\n\t\texpect(hookResult!.current.left[0]?.value).toBe(\"after\");\n\t\texpect(hookResult!.current.right[0]?.value).toBe(\"after\");\n\t});\n\n\texpect(String(hookResult!.current.left[0]?.nonce)).toBe(\n\t\tString(hookResult!.current.right[0]?.nonce),\n\t);\n\n\tawait lix.close();\n});\n\ntest(\"useQuery refreshes when lix instance is switched\", async () => {\n\tconst switchKey = \"switch_instance_value\";\n\t// Create two separate lix instances\n\tconst lix1 = await openLix({});\n\tconst lix2 = await openLix({});\n\tawait qb(lix1)\n\t\t.insertInto(\"lix_key_value\")\n\t\t.values({ key: switchKey, value: \"instance_one\" })\n\t\t.execute();\n\tawait qb(lix2)\n\t\t.insertInto(\"lix_key_value\")\n\t\t.values({ key: switchKey, value: 
\"instance_two\" })\n\t\t.execute();\n\n\t// Check that they have different values for the same key\n\tconst lix1IdDirect = await qb(lix1)\n\t\t.selectFrom(\"lix_key_value\")\n\t\t.selectAll()\n\t\t.where(\"key\", \"=\", switchKey)\n\t\t.executeTakeFirst();\n\tconst lix2IdDirect = await qb(lix2)\n\t\t.selectFrom(\"lix_key_value\")\n\t\t.selectAll()\n\t\t.where(\"key\", \"=\", switchKey)\n\t\t.executeTakeFirst();\n\n\t// Ensure the test is valid - the two instances should have different values\n\texpect(lix1IdDirect?.value).not.toBe(lix2IdDirect?.value);\n\n\t// Use a state variable to control which lix instance is used\n\tlet currentLix = lix1;\n\n\t// Wrapper function that uses the current lix\n\tconst TestComponent = () => {\n\t\tconst data = useQuery((lix) =>\n\t\t\tqb(lix)\n\t\t\t\t.selectFrom(\"lix_key_value\")\n\t\t\t\t.selectAll()\n\t\t\t\t.where(\"key\", \"=\", switchKey),\n\t\t);\n\t\treturn data;\n\t};\n\n\tconst wrapper = ({ children }: { children: React.ReactNode }) => (\n\t\t<LixProvider lix={currentLix}>\n\t\t\t<Suspense fallback={<div>Loading...</div>}>\n\t\t\t\t<MockErrorBoundary>{children}</MockErrorBoundary>\n\t\t\t</Suspense>\n\t\t</LixProvider>\n\t);\n\n\tlet hookResult: { current: KeyValueRow[] };\n\tlet rerender: () => void;\n\n\tawait act(async () => {\n\t\tconst { result, rerender: rerenderFn } = renderHook(() => TestComponent(), {\n\t\t\twrapper,\n\t\t});\n\t\thookResult = result;\n\t\trerender = rerenderFn;\n\t});\n\n\t// Verify we get data from lix1\n\tawait waitFor(() => {\n\t\texpect(hookResult.current).toHaveLength(1);\n\t\texpect(hookResult.current[0]?.key).toBe(switchKey);\n\t});\n\n\t// Store the initial lix_id value\n\tconst lix1Id = hookResult!.current[0]?.value;\n\n\t// Switch to lix2 by changing the current lix and rerendering\n\tawait act(async () => {\n\t\tcurrentLix = lix2;\n\t\trerender();\n\t});\n\n\t// Verify the query refreshes and we now get data from lix2\n\tawait waitFor(() => 
{\n\t\texpect(hookResult.current).toHaveLength(1);\n\t\texpect(hookResult.current[0]?.key).toBe(switchKey);\n\t\t// The lix_id value should be different from lix1\n\t\texpect(hookResult.current[0]?.value).not.toBe(lix1Id);\n\t});\n\n\tawait lix1.close();\n\tawait lix2.close();\n});\n"
  },
  {
    "path": "packages/react-utils/src/hooks/use-query.ts",
    "content": "import { useContext, useEffect, useState, use } from \"react\";\nimport type { Lix } from \"@lix-js/sdk\";\nimport { LixContext } from \"../provider.js\";\n\n// Map to cache promises by query key\nconst queryPromiseCache = new Map<string, Promise<any>>();\nconst lixInstanceIds = new WeakMap<object, number>();\nlet nextLixInstanceId = 1;\n\ninterface UseQueryOptions {\n\tsubscribe?: boolean;\n}\n\n// Query factory receives a lix instance and returns a compilable+executable query.\ninterface QueryLike<TRow> {\n\tcompile(): {\n\t\tsql: string;\n\t\tparameters: ReadonlyArray<unknown>;\n\t};\n\texecute(): Promise<TRow[]>;\n}\n\ntype QueryFactory<TRow> = (lix: Lix) => QueryLike<TRow>;\n\n/**\n * Subscribe to a live query using React 19 Suspense.\n *\n * The hook suspends on first render and re-suspends whenever its SQL changes,\n * so wrap consuming components with React Suspense and an ErrorBoundary.\n *\n * @param query - Factory function that creates a compiled+executable query object. 
Preferred shape: `(lix) => qb(lix).selectFrom(...)`.\n * @param options - Optional configuration\n * @param options.subscribe - Whether to subscribe to live updates (default: true)\n *\n * @example\n * // Basic list\n * function KeyValueList() {\n *   const keyValues = useQuery((lix) =>\n *     qb(lix).selectFrom('lix_key_value')\n *       .where('key', 'like', 'example_%')\n *       .selectAll()\n *   );\n *   return (\n *     <ul>\n *       {keyValues.map(item => (\n *         <li key={item.key}>{item.key}: {item.value}</li>\n *       ))}\n *     </ul>\n *   );\n * }\n *\n * @example\n * // With Suspense + ErrorBoundary\n * import { Suspense } from 'react';\n * import { ErrorBoundary } from 'react-error-boundary';\n *\n * function App() {\n *   return (\n *     <Suspense fallback={<div>Loading…</div>}>\n *       <ErrorBoundary fallbackRender={() => <div>Failed to load.</div>}>\n *         <KeyValueList />\n *       </ErrorBoundary>\n *     </Suspense>\n *   );\n * }\n *\n * @example\n * // One-time query without live updates\n * const config = useQuery(\n *   (lix) => qb(lix).selectFrom('config').selectAll(),\n *   { subscribe: false }\n * );\n */\nexport function useQuery<TRow>(\n\tquery: QueryFactory<TRow>,\n\toptions: UseQueryOptions = {},\n): TRow[] {\n\tconst lix = useContext(LixContext);\n\tif (!lix) throw new Error(\"useQuery must be used inside <LixProvider>.\");\n\n\tconst { subscribe = true } = options;\n\tconst builder = query(lix);\n\tconst compiled = builder.compile();\n\tconst observeQuery = {\n\t\tsql: compiled.sql,\n\t\tparams: [...compiled.parameters] as any,\n\t};\n\tconst cacheKey =\n\t\t`${getLixInstanceId(lix)}:${subscribe ? \"sub\" : \"once\"}:` +\n\t\t`${compiled.sql}:${JSON.stringify(compiled.parameters)}`;\n\n\t// Get or create promise. 
Cache key includes parameters so different queries\n\t// resolve independently while reuse avoids duplicating in-flight requests.\n\tconst cached = queryPromiseCache.get(cacheKey) as Promise<TRow[]> | undefined;\n\tconst promise: Promise<TRow[]> =\n\t\tcached ??\n\t\t(() => {\n\t\t\tconst p = builder.execute() as Promise<TRow[]>;\n\t\t\tqueryPromiseCache.set(cacheKey, p);\n\t\t\treturn p;\n\t\t})();\n\n\t// Use the promise (suspends on first render)\n\tconst initialRows = use(promise);\n\n\t// Local state for updates\n\tconst [rows, setRows] = useState(initialRows);\n\tuseEffect(() => {\n\t\tsetRows(initialRows);\n\t}, [cacheKey]);\n\n\t// Subscribe for ongoing updates (only if subscribe is true)\n\tuseEffect(() => {\n\t\tif (!subscribe) return;\n\t\tlet closed = false;\n\t\tconst events = lix.observe(observeQuery);\n\n\t\tvoid (async () => {\n\t\t\ttry {\n\t\t\t\twhile (!closed) {\n\t\t\t\t\tconst event = await events.next();\n\t\t\t\t\tif (closed || event === undefined) {\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tconst nextRows = queryResultToRows<TRow>(event.rows);\n\t\t\t\t\tsetRows(nextRows);\n\t\t\t\t}\n\t\t\t} catch (err) {\n\t\t\t\tif (closed) {\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\t// Clear promise to allow retry\n\t\t\t\tqueryPromiseCache.delete(cacheKey);\n\t\t\t\t// Surface error to ErrorBoundary\n\t\t\t\tsetRows(() => {\n\t\t\t\t\tthrow err instanceof Error ? err : new Error(String(err));\n\t\t\t\t});\n\t\t\t}\n\t\t})();\n\n\t\treturn () => {\n\t\t\tclosed = true;\n\t\t\tevents.close();\n\t\t};\n\t}, [cacheKey, subscribe, lix]);\n\n\tif (!subscribe) {\n\t\treturn initialRows;\n\t}\n\n\treturn rows;\n}\n\nfunction queryResultToRows<TRow>(result: {\n\trows?: ReadonlyArray<ReadonlyArray<unknown>>;\n\tcolumns?: ReadonlyArray<string>;\n}): TRow[] {\n\tconst columns = Array.isArray(result?.columns) ? result.columns : [];\n\tconst rows = Array.isArray(result?.rows) ? 
result.rows : [];\n\treturn rows.map((row) => {\n\t\tconst output: Record<string, unknown> = {};\n\t\tfor (let index = 0; index < columns.length; index += 1) {\n\t\t\tconst column = columns[index];\n\t\t\tif (typeof column !== \"string\") {\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\toutput[column] = row[index];\n\t\t}\n\t\treturn output as TRow;\n\t});\n}\n\nfunction getLixInstanceId(lix: Lix): number {\n\tconst asObject = lix as object;\n\tconst cached = lixInstanceIds.get(asObject);\n\tif (cached !== undefined) {\n\t\treturn cached;\n\t}\n\tconst next = nextLixInstanceId++;\n\tlixInstanceIds.set(asObject, next);\n\treturn next;\n}\n\n/* ------------------------------------------------------------------------- */\n/* Optional single-row helper                                                */\n/* ------------------------------------------------------------------------- */\n\n/**\n * Subscribe to a live query and return only the first result inside React.\n * Equivalent to calling `.executeTakeFirst()` on a Kysely query.\n *\n * @example\n * ```tsx\n * function ExampleComponent({ itemId }: { itemId: string }) {\n *   const item = useQueryTakeFirst((lix) =>\n *     qb(lix).selectFrom('lix_key_value')\n *       .where('key', '=', `example_${itemId}`)\n *       .selectAll()\n *   );\n *\n *   // No loading/error states needed - Suspense and ErrorBoundary handle them\n *   if (!item) {\n *     return <div>Item not found</div>;\n *   }\n *\n *   return <div>Value: {item.value}</div>;\n * }\n *\n * // Wrap with Suspense and ErrorBoundary:\n * <Suspense fallback={<div>Loading...</div>}>\n *   <ErrorBoundary fallback={<div>Error occurred</div>}>\n *     <ExampleComponent itemId=\"123\" />\n *   </ErrorBoundary>\n * </Suspense>\n * ```\n */\nexport const useQueryTakeFirst = <TResult>(\n\tquery: QueryFactory<TResult>,\n\toptions: UseQueryOptions = {},\n): TResult | undefined => {\n\tconst rows = useQuery<TResult>(query, options);\n\treturn rows[0] as TResult | 
undefined;\n};\n\n/**\n * Subscribe to a live query and return only the first result inside React.\n * Throws an error if no result is found.\n *\n * @param query - Factory function that creates a Kysely SelectQueryBuilder\n *\n * @throws Error if no result is found\n *\n * @example\n * ```tsx\n * function ExampleDetail({ itemId }: { itemId: string }) {\n *   const item = useQueryTakeFirstOrThrow((lix) =>\n *     qb(lix).selectFrom('lix_key_value')\n *       .where('key', '=', `example_${itemId}`)\n *       .selectAll()\n *   );\n *\n *   // No need to check for undefined - will throw to ErrorBoundary if not found\n *   return <div>Value: {item.value}</div>;\n * }\n *\n * // Wrap with Suspense and ErrorBoundary:\n * <Suspense fallback={<div>Loading...</div>}>\n *   <ErrorBoundary fallback={<div>Item not found</div>}>\n *     <ExampleDetail itemId=\"123\" />\n *   </ErrorBoundary>\n * </Suspense>\n * ```\n */\nexport const useQueryTakeFirstOrThrow = <TResult>(\n\tquery: QueryFactory<TResult>,\n\toptions: UseQueryOptions = {},\n): TResult => {\n\tconst data = useQueryTakeFirst(query, options);\n\tif (data === undefined) throw new Error(\"No result found\");\n\treturn data;\n};\n"
  },
  {
    "path": "packages/react-utils/src/index.ts",
    "content": "export { LixProvider, LixContext } from \"./provider.js\";\nexport {\n\tuseQuery,\n\tuseQueryTakeFirst,\n\tuseQueryTakeFirstOrThrow,\n} from \"./hooks/use-query.js\";\nexport { useLix } from \"./hooks/use-lix.js\";\n"
  },
  {
    "path": "packages/react-utils/src/provider.tsx",
    "content": "import { createContext, type ReactNode } from \"react\";\nimport type { Lix } from \"@lix-js/sdk\";\n\nexport const LixContext = createContext<Lix | null>(null);\n\nexport function LixProvider(props: { lix: Lix; children: ReactNode }) {\n\treturn (\n\t\t<LixContext.Provider value={props.lix}>\n\t\t\t{props.children}\n\t\t</LixContext.Provider>\n\t);\n}\n"
  },
  {
    "path": "packages/react-utils/test-setup.ts",
    "content": "import { Blob as BlobPolyfill } from \"node:buffer\";\n\n// https://github.com/jsdom/jsdom/issues/2555#issuecomment-1864762292\nglobal.Blob = BlobPolyfill as any;\n"
  },
  {
    "path": "packages/react-utils/tsconfig.json",
    "content": "{\n\t\"include\": [\n\t\t\"src/**/*\"\n\t],\n\t\"compilerOptions\": {\n\t\t\"skipDefaultLibCheck\": true,\n\t\t\"emitDeclarationOnly\": false,\n\t\t\"experimentalDecorators\": true,\n\t\t\"emitDecoratorMetadata\": true,\n\t\t\"useDefineForClassFields\": false,\n\t\t\"lib\": [\n\t\t\t\"ESNext\",\n\t\t\t\"DOM\"\n\t\t],\n\t\t\"outDir\": \"./dist\",\n\t\t\"rootDir\": \"./src\",\n\t\t\"esModuleInterop\": true,\n\t\t\"skipLibCheck\": true,\n\t\t\"forceConsistentCasingInFileNames\": true,\n\t\t\"jsx\": \"react-jsx\",\n\t\t\"sourceMap\": true,\n\t\t\"module\": \"Node16\",\n\t\t\"moduleResolution\": \"Node16\",\n\t\t\"target\": \"ES2022\",\n\t\t\"allowSyntheticDefaultImports\": true,\n\t\t\"resolveJsonModule\": false,\n\t\t\"declaration\": true,\n\t\t\"strict\": true,\n\t\t\"checkJs\": true,\n\t\t\"verbatimModuleSyntax\": true,\n\t\t\"noUncheckedIndexedAccess\": true,\n\t\t\"declarationMap\": true,\n\t\t\"noImplicitAny\": true,\n\t\t\"noImplicitReturns\": true,\n\t\t\"noFallthroughCasesInSwitch\": true,\n\t\t\"noImplicitOverride\": true,\n\t\t\"allowUnreachableCode\": false\n\t}\n}\n"
  },
  {
    "path": "packages/react-utils/vitest.config.ts",
    "content": "import { defineConfig } from 'vitest/config';\n\nexport default defineConfig({\n  test: {\n    environment: 'jsdom',\n    setupFiles: ['./test-setup.ts'],\n  },\n});"
  },
  {
    "path": "packages/rs-sdk/Cargo.toml",
    "content": "[package]\nname = \"lix_rs_sdk\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n[dependencies]\nlix_engine = { path = \"../engine\" }\nasync-trait = \"0.1\"\n\n[dev-dependencies]\ntokio = { version = \"1\", features = [\"rt\", \"macros\"] }\n"
  },
  {
    "path": "packages/rs-sdk/src/in_memory_backend.rs",
    "content": "use std::collections::BTreeMap;\nuse std::sync::{Arc, Mutex};\n\nuse async_trait::async_trait;\nuse lix_engine::{\n    Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest,\n    BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch,\n    BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats,\n    BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, LixError,\n};\n\ntype KvKey = (String, Vec<u8>);\ntype KvMap = BTreeMap<KvKey, Vec<u8>>;\n\n#[derive(Debug, Clone, Default)]\npub(crate) struct InMemoryBackend {\n    kv: Arc<Mutex<KvMap>>,\n}\n\nimpl InMemoryBackend {\n    pub(crate) fn new() -> Self {\n        Self::default()\n    }\n}\n\n#[async_trait]\nimpl Backend for InMemoryBackend {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        let snapshot = self\n            .kv\n            .lock()\n            .map_err(|_| lock_error(\"rs-sdk in-memory backend kv\"))?\n            .clone();\n        Ok(Box::new(InMemoryReadTransaction { kv: snapshot }))\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        let snapshot = self\n            .kv\n            .lock()\n            .map_err(|_| lock_error(\"rs-sdk in-memory backend kv\"))?\n            .clone();\n        Ok(Box::new(InMemoryWriteTransaction {\n            parent: Arc::clone(&self.kv),\n            kv: snapshot,\n        }))\n    }\n}\n\nstruct InMemoryReadTransaction {\n    kv: KvMap,\n}\n\n#[async_trait]\nimpl BackendReadTransaction for InMemoryReadTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        Ok(get_values_from_map(&self.kv, request))\n    }\n\n    async fn exists_many(\n       
 &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        Ok(exists_many_from_map(&self.kv, request))\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        Ok(scan_map_keys(&self.kv, request))\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        Ok(scan_map_values(&self.kv, request))\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        Ok(scan_map_entries(&self.kv, request))\n    }\n\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        Ok(())\n    }\n}\n\nstruct InMemoryWriteTransaction {\n    parent: Arc<Mutex<KvMap>>,\n    kv: KvMap,\n}\n\n#[async_trait]\nimpl BackendReadTransaction for InMemoryWriteTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        Ok(get_values_from_map(&self.kv, request))\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        Ok(exists_many_from_map(&self.kv, request))\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        Ok(scan_map_keys(&self.kv, request))\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        Ok(scan_map_values(&self.kv, request))\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        Ok(scan_map_entries(&self.kv, request))\n    }\n\n    async fn rollback(self: 
Box<Self>) -> Result<(), LixError> {\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl BackendWriteTransaction for InMemoryWriteTransaction {\n    async fn write_kv_batch(\n        &mut self,\n        batch: BackendKvWriteBatch,\n    ) -> Result<BackendKvWriteStats, LixError> {\n        let mut stats = BackendKvWriteStats::default();\n        for group in batch.groups {\n            let namespace = group.namespace().to_string();\n            for index in 0..group.put_count() {\n                let key = group.put_key(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put key\")\n                })?;\n                let value = group.put_value(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put value\")\n                })?;\n                stats.puts += 1;\n                stats.bytes_written += key.len() + value.len();\n                self.kv\n                    .insert((namespace.clone(), key.to_vec()), value.to_vec());\n            }\n            for index in 0..group.delete_count() {\n                let key = group.delete_key(index).ok_or_else(|| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        \"backend write batch missing delete key\",\n                    )\n                })?;\n                stats.deletes += 1;\n                stats.bytes_written += key.len();\n                self.kv.remove(&(namespace.clone(), key.to_vec()));\n            }\n        }\n        Ok(stats)\n    }\n\n    async fn commit(self: Box<Self>) -> Result<(), LixError> {\n        *self\n            .parent\n            .lock()\n            .map_err(|_| lock_error(\"rs-sdk in-memory backend kv\"))? 
= self.kv;\n        Ok(())\n    }\n}\n\nfn get_values_from_map(kv: &KvMap, request: BackendKvGetRequest) -> BackendKvValueBatch {\n    let mut groups = Vec::with_capacity(request.groups.len());\n    for group in request.groups {\n        let namespace = group.namespace.clone();\n        let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0);\n        let mut present = Vec::with_capacity(group.keys.len());\n        for key in group.keys {\n            if let Some(value) = kv.get(&(namespace.clone(), key)) {\n                values.push(value);\n                present.push(true);\n            } else {\n                values.push([]);\n                present.push(false);\n            }\n        }\n        groups.push(BackendKvValueGroup::new(\n            namespace,\n            values.finish(),\n            present,\n        ));\n    }\n    BackendKvValueBatch { groups }\n}\n\nfn exists_many_from_map(kv: &KvMap, request: BackendKvGetRequest) -> BackendKvExistsBatch {\n    let mut groups = Vec::with_capacity(request.groups.len());\n    for group in request.groups {\n        let namespace = group.namespace.clone();\n        let exists = group\n            .keys\n            .into_iter()\n            .map(|key| kv.contains_key(&(namespace.clone(), key)))\n            .collect();\n        groups.push(BackendKvExistsGroup { namespace, exists });\n    }\n    BackendKvExistsBatch { groups }\n}\n\nfn scan_map_keys(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvKeyPage {\n    let pairs = scan_filtered_pairs(kv, &request);\n    let has_more = pairs.len() > request.limit;\n    let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n    let mut resume_after = None;\n    for (index, (key, _)) in pairs.into_iter().enumerate() {\n        if index >= request.limit {\n            break;\n        }\n        resume_after = Some(key.clone());\n        keys.push(key);\n    }\n    let resume_after = 
has_more.then_some(resume_after).flatten();\n    BackendKvKeyPage {\n        keys: keys.finish(),\n        resume_after,\n    }\n}\n\nfn scan_map_values(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvValuePage {\n    let pairs = scan_filtered_pairs(kv, &request);\n    let has_more = pairs.len() > request.limit;\n    let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n    let mut resume_after = None;\n    for (index, (key, value)) in pairs.into_iter().enumerate() {\n        if index >= request.limit {\n            break;\n        }\n        resume_after = Some(key.clone());\n        values.push(value);\n    }\n    let resume_after = has_more.then_some(resume_after).flatten();\n    BackendKvValuePage {\n        values: values.finish(),\n        resume_after,\n    }\n}\n\nfn scan_map_entries(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvEntryPage {\n    let pairs = scan_filtered_pairs(kv, &request);\n    let has_more = pairs.len() > request.limit;\n    let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n    let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n    let mut resume_after = None;\n    for (index, (key, value)) in pairs.into_iter().enumerate() {\n        if index >= request.limit {\n            break;\n        }\n        resume_after = Some(key.clone());\n        keys.push(key);\n        values.push(value);\n    }\n    let resume_after = has_more.then_some(resume_after).flatten();\n    BackendKvEntryPage {\n        keys: keys.finish(),\n        values: values.finish(),\n        resume_after,\n    }\n}\n\nfn scan_filtered_pairs<'a>(\n    kv: &'a KvMap,\n    request: &BackendKvScanRequest,\n) -> Vec<(&'a Vec<u8>, &'a Vec<u8>)> {\n    let scan_limit = request\n        .limit\n        .checked_add(1 + usize::from(request.after.is_some()))\n        .unwrap_or(request.limit);\n    let mut pairs = kv\n        .iter()\n        
.filter(|((candidate_namespace, key), _)| {\n            candidate_namespace == &request.namespace && key_matches_range(key, &request.range)\n        })\n        .filter(|((_, key), _)| {\n            request\n                .after\n                .as_deref()\n                .is_none_or(|after| key.as_slice() > after)\n        })\n        .collect::<Vec<_>>();\n    pairs.sort_by(|left, right| left.0 .1.cmp(&right.0 .1));\n    pairs.truncate(scan_limit);\n    pairs\n        .into_iter()\n        .filter(|((_, key), _)| {\n            request\n                .after\n                .as_deref()\n                .is_none_or(|after| key.as_slice() > after)\n        })\n        .map(|((_, key), value)| (key, value))\n        .collect()\n}\n\nfn key_matches_range(key: &[u8], range: &BackendKvScanRange) -> bool {\n    match range {\n        BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix),\n        BackendKvScanRange::Range { start, end } => start.as_slice() <= key && key < end.as_slice(),\n    }\n}\n\nfn lock_error(name: &str) -> LixError {\n    LixError::new(\"LIX_ERROR_UNKNOWN\", format!(\"{name} mutex was poisoned\"))\n}\n"
  },
  {
    "path": "packages/rs-sdk/src/lib.rs",
    "content": "//! Rust SDK for Lix.\n//!\n//! The public API mirrors `@lix-js/sdk`: `open_lix()` opens the workspace\n//! session, and the returned [`Lix`] handle owns the small application-facing\n//! surface.\n\nmod in_memory_backend;\nmod lix;\n\npub use lix::{open_lix, Lix, OpenLixOptions};\npub use lix_engine::{\n    Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetGroup,\n    BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest,\n    BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch,\n    BackendKvWriteGroup, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction,\n    BytePage, BytePageBuilder, CreateVersionOptions, CreateVersionReceipt as CreateVersionResult,\n    ExecuteResult, LixError, LixNotice, MergeChangeStats, MergeConflict, MergeConflictChangeKind,\n    MergeConflictKind, MergeConflictSide, MergeVersionOptions, MergeVersionOutcome,\n    MergeVersionPreview, MergeVersionPreviewOptions, MergeVersionReceipt as MergeVersionResult,\n    Row, SqlQueryResult, SwitchVersionOptions, SwitchVersionReceipt as SwitchVersionResult,\n    TryFromValue, Value,\n};\n"
  },
  {
    "path": "packages/rs-sdk/src/lix.rs",
    "content": "use std::sync::atomic::{AtomicBool, Ordering};\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse lix_engine::{\n    Backend, BackendReadTransaction, BackendWriteTransaction, CreateVersionOptions,\n    CreateVersionReceipt as CreateVersionResult, Engine, ExecuteResult, LixError,\n    MergeVersionOptions, MergeVersionPreview, MergeVersionPreviewOptions,\n    MergeVersionReceipt as MergeVersionResult, SessionContext, SwitchVersionOptions,\n    SwitchVersionReceipt as SwitchVersionResult, Value,\n};\n\nuse crate::in_memory_backend::InMemoryBackend;\n\n/// Options for opening a Lix workspace session.\n#[derive(Default)]\npub struct OpenLixOptions {\n    pub backend: Option<Box<dyn Backend + Send + Sync>>,\n}\n\n/// Workspace-session handle for a Lix repository.\npub struct Lix {\n    _engine: Engine,\n    session: SessionContext,\n    backend: SharedBackend,\n    backend_closed: AtomicBool,\n}\n\n/// Opens a Lix workspace session.\n///\n/// If `options.backend` is omitted, a fresh in-memory backend is used. If a\n/// backend is supplied, it is opened when already initialized and initialized\n/// first when empty.\npub async fn open_lix(options: OpenLixOptions) -> Result<Lix, LixError> {\n    let backend: Box<dyn Backend + Send + Sync> = options\n        .backend\n        .unwrap_or_else(|| Box::new(InMemoryBackend::new()));\n    let backend = SharedBackend::new(backend);\n    let engine = open_or_initialize_engine(&backend).await?;\n    let session = engine.open_workspace_session().await?;\n    Ok(Lix {\n        _engine: engine,\n        session,\n        backend,\n        backend_closed: AtomicBool::new(false),\n    })\n}\n\nimpl Lix {\n    /// Executes one DataFusion SQL statement against this Lix session.\n    ///\n    /// The SQL dialect is DataFusion SQL, not SQLite SQL. Positional\n    /// placeholders use `$1`, `$2`, and so on. 
SQLite-specific catalog tables\n    /// and transaction statements such as `sqlite_master`, `BEGIN`, and\n    /// `COMMIT` are not part of this contract; use `information_schema` for\n    /// catalog inspection. Lix owns transaction boundaries for each statement.\n    pub async fn execute(&self, sql: &str, params: &[Value]) -> Result<ExecuteResult, LixError> {\n        self.session.execute(sql, params).await\n    }\n\n    pub async fn active_version_id(&self) -> Result<String, LixError> {\n        self.session.active_version_id().await\n    }\n\n    pub async fn create_version(\n        &self,\n        options: CreateVersionOptions,\n    ) -> Result<CreateVersionResult, LixError> {\n        self.session.create_version(options).await\n    }\n\n    pub async fn switch_version(\n        &self,\n        options: SwitchVersionOptions,\n    ) -> Result<SwitchVersionResult, LixError> {\n        let (_session, receipt) = self.session.switch_version(options).await?;\n        Ok(receipt)\n    }\n\n    pub async fn merge_version(\n        &self,\n        options: MergeVersionOptions,\n    ) -> Result<MergeVersionResult, LixError> {\n        self.session.merge_version(options).await\n    }\n\n    pub async fn merge_version_preview(\n        &self,\n        options: MergeVersionPreviewOptions,\n    ) -> Result<MergeVersionPreview, LixError> {\n        self.session.merge_version_preview(options).await\n    }\n\n    pub async fn close(&self) -> Result<(), LixError> {\n        self.session.close().await?;\n        if !self.backend_closed.swap(true, Ordering::SeqCst) {\n            self.backend.close().await?;\n        }\n        Ok(())\n    }\n}\n\nasync fn open_or_initialize_engine(backend: &SharedBackend) -> Result<Engine, LixError> {\n    match Engine::new(Box::new(backend.clone())).await {\n        Ok(engine) => Ok(engine),\n        Err(error) if error.code == \"LIX_ERROR_NOT_INITIALIZED\" => {\n            Engine::initialize(Box::new(backend.clone())).await?;\n            
Engine::new(Box::new(backend.clone())).await\n        }\n        Err(error) => Err(error),\n    }\n}\n\n#[derive(Clone)]\nstruct SharedBackend {\n    inner: Arc<dyn Backend + Send + Sync>,\n}\n\nimpl SharedBackend {\n    fn new(backend: Box<dyn Backend + Send + Sync>) -> Self {\n        Self {\n            inner: Arc::from(backend),\n        }\n    }\n}\n\n#[async_trait]\nimpl Backend for SharedBackend {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        self.inner.begin_read_transaction().await\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        self.inner.begin_write_transaction().await\n    }\n\n    async fn destroy(&self) -> Result<(), LixError> {\n        self.inner.destroy().await\n    }\n\n    async fn close(&self) -> Result<(), LixError> {\n        self.inner.close().await\n    }\n}\n"
  },
  {
    "path": "packages/rs-sdk/tests/e2e.rs",
    "content": "use std::collections::BTreeMap;\nuse std::sync::{Arc, Mutex};\n\nuse async_trait::async_trait;\nuse lix_rs_sdk::{\n    open_lix, Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup,\n    BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest,\n    BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch,\n    BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePageBuilder,\n    CreateVersionOptions, LixError, MergeVersionOptions, MergeVersionOutcome, OpenLixOptions,\n    SwitchVersionOptions, Value,\n};\n\n#[tokio::test]\nasync fn rs_sdk_open_register_write_query_version_and_merge_flow() {\n    let lix = open_lix(OpenLixOptions::default()).await.unwrap();\n    let main_version_id = lix.active_version_id().await.unwrap();\n\n    register_crm_task_schema(&lix).await;\n\n    lix.execute(\n        \"INSERT INTO crm_task (id, title, done, meta) VALUES ($1, $2, $3, lix_json($4))\",\n        &[\n            Value::Text(\"task-1\".to_string()),\n            Value::Text(\"Draft RS SDK flow\".to_string()),\n            Value::Boolean(false),\n            Value::Text(r#\"{\"priority\":\"high\",\"tags\":[\"sdk\",\"json\"]}\"#.to_string()),\n        ],\n    )\n    .await\n    .unwrap();\n\n    let projected = lix\n        .execute(\n            \"SELECT title, done, meta, lixcol_snapshot_content FROM crm_task WHERE id = $1\",\n            &[Value::Text(\"task-1\".to_string())],\n        )\n        .await\n        .unwrap();\n    assert_crm_task_projection(&projected);\n\n    assert_eq!(task_done(&lix, \"task-1\").await, false);\n\n    let draft = lix\n        .create_version(CreateVersionOptions {\n            id: Some(\"draft-version\".to_string()),\n            name: \"Draft\".to_string(),\n            from_commit_id: None,\n        })\n        .await\n        .unwrap();\n    assert_eq!(draft.id, \"draft-version\");\n    assert_eq!(draft.name, \"Draft\");\n    
assert!(!draft.hidden);\n\n    lix.switch_version(SwitchVersionOptions {\n        version_id: draft.id.clone(),\n    })\n    .await\n    .unwrap();\n\n    lix.execute(\n        \"UPDATE crm_task SET done = $1 WHERE id = $2\",\n        &[Value::Boolean(true), Value::Text(\"task-1\".to_string())],\n    )\n    .await\n    .unwrap();\n\n    assert_eq!(task_done(&lix, \"task-1\").await, true);\n\n    lix.switch_version(SwitchVersionOptions {\n        version_id: main_version_id.clone(),\n    })\n    .await\n    .unwrap();\n\n    assert_eq!(task_done(&lix, \"task-1\").await, false);\n\n    let merge = lix\n        .merge_version(MergeVersionOptions {\n            source_version_id: draft.id,\n        })\n        .await\n        .unwrap();\n\n    assert_eq!(merge.outcome, MergeVersionOutcome::FastForward);\n    assert_eq!(merge.target_version_id, main_version_id);\n    assert_eq!(merge.change_stats.total, 1);\n    assert_eq!(merge.change_stats.modified, 1);\n    assert_eq!(merge.created_merge_commit_id, None);\n    assert_eq!(task_done(&lix, \"task-1\").await, true);\n\n    lix.close().await.unwrap();\n}\n\n#[tokio::test]\nasync fn rs_sdk_close_is_idempotent_and_rejects_later_operations() {\n    let backend = SharedTestBackend::new();\n    let close_count = backend.close_count();\n    let lix = open_lix(OpenLixOptions {\n        backend: Some(Box::new(backend)),\n    })\n    .await\n    .unwrap();\n\n    lix.close().await.unwrap();\n    lix.close().await.unwrap();\n    assert_eq!(\n        close_count\n            .lock()\n            .map(|count| *count)\n            .expect(\"close count lock should be available\"),\n        1\n    );\n\n    let error = lix\n        .execute(\"SELECT value FROM lix_key_value WHERE key = 'lix_id'\", &[])\n        .await\n        .expect_err(\"execute after close should fail\");\n    assert_closed(error);\n\n    let error = lix\n        .active_version_id()\n        .await\n        .expect_err(\"active_version_id after close should 
fail\");\n    assert_closed(error);\n}\n\n#[tokio::test]\nasync fn rs_sdk_close_does_not_destroy_committed_data() {\n    let backend = SharedTestBackend::new();\n    let first = open_lix(OpenLixOptions {\n        backend: Some(Box::new(backend.clone())),\n    })\n    .await\n    .unwrap();\n\n    first\n        .execute(\n            \"INSERT INTO lix_key_value (key, value) VALUES ('close-key', 'close-value')\",\n            &[],\n        )\n        .await\n        .unwrap();\n    first.close().await.unwrap();\n\n    let error = first\n        .execute(\n            \"SELECT value FROM lix_key_value WHERE key = 'close-key'\",\n            &[],\n        )\n        .await\n        .expect_err(\"closed handle should not be usable\");\n    assert_closed(error);\n\n    let second = open_lix(OpenLixOptions {\n        backend: Some(Box::new(backend)),\n    })\n    .await\n    .unwrap();\n    let result = second\n        .execute(\n            \"SELECT key FROM lix_key_value WHERE key = 'close-key' AND value = lix_json('\\\"close-value\\\"')\",\n            &[],\n        )\n        .await\n        .unwrap();\n    assert_eq!(result.len(), 1);\n    assert_eq!(\n        result.rows()[0].values(),\n        &[Value::Text(\"close-key\".to_string())]\n    );\n    second.close().await.unwrap();\n}\n\n#[tokio::test]\nasync fn failed_write_validation_does_not_poison_backend_transaction() {\n    let backend = SharedTestBackend::rejecting_nested_transactions();\n    let rollback_count = backend.rollback_count();\n    let lix = open_lix(OpenLixOptions {\n        backend: Some(Box::new(backend)),\n    })\n    .await\n    .unwrap();\n\n    register_poison_task_schema(&lix).await;\n\n    let error = lix\n        .execute(\n            \"INSERT INTO poison_task (id, title) VALUES ($1, $2)\",\n            &[\n                Value::Text(\"bad-task\".to_string()),\n                Value::Text(\"missing meta\".to_string()),\n            ],\n        )\n        .await\n        
.expect_err(\"schema validation should reject missing required field\");\n    assert_eq!(error.code, \"LIX_ERROR_SCHEMA_VALIDATION\");\n\n    let result = lix.execute(\"SELECT 1 AS ok\", &[]).await.unwrap();\n    assert_eq!(result.len(), 1);\n    assert_eq!(result.rows()[0].values(), &[Value::Integer(1)]);\n    assert!(\n        *rollback_count\n            .lock()\n            .expect(\"rollback count lock should be available\")\n            > 0,\n        \"failed commit validation should rollback the backend transaction\"\n    );\n\n    lix.execute(\n        \"INSERT INTO poison_task (id, title, meta) VALUES ($1, $2, lix_json($3))\",\n        &[\n            Value::Text(\"good-task\".to_string()),\n            Value::Text(\"valid\".to_string()),\n            Value::Text(r#\"{\"priority\":\"high\"}\"#.to_string()),\n        ],\n    )\n    .await\n    .expect(\"valid write after failed write should succeed\");\n\n    lix.close().await.unwrap();\n}\n\nasync fn register_crm_task_schema(lix: &lix_rs_sdk::Lix) {\n    let schema = r#\"{\n        \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n        \"x-lix-key\": \"crm_task\",\n        \"x-lix-primary-key\": [\"/id\"],\n        \"type\": \"object\",\n        \"required\": [\"id\", \"title\", \"done\", \"meta\"],\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"title\": { \"type\": \"string\" },\n            \"done\": { \"type\": \"boolean\" },\n            \"meta\": { \"type\": \"object\" }\n        },\n        \"additionalProperties\": false\n    }\"#;\n\n    lix.execute(\n        \"INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))\",\n        &[Value::Text(schema.to_string())],\n    )\n    .await\n    .unwrap();\n}\n\nfn assert_crm_task_projection(result: &lix_rs_sdk::ExecuteResult) {\n    assert_eq!(result.len(), 1);\n    let row = &result.rows()[0];\n    assert_eq!(\n        row.get::<String>(\"title\").unwrap(),\n        \"Draft RS SDK 
flow\".to_string()\n    );\n    assert_eq!(row.get::<bool>(\"done\").unwrap(), false);\n\n    let meta = row.get::<Value>(\"meta\").unwrap();\n    let Value::Json(meta) = meta else {\n        panic!(\"expected meta JSON value, got {meta:?}\");\n    };\n    assert_eq!(\n        meta.get(\"priority\").and_then(|value| value.as_str()),\n        Some(\"high\")\n    );\n    assert_eq!(\n        meta.get(\"tags\")\n            .and_then(|value| value.as_array())\n            .map(|tags| tags.len()),\n        Some(2)\n    );\n\n    let snapshot = row.get::<Value>(\"lixcol_snapshot_content\").unwrap();\n    let Value::Json(snapshot) = snapshot else {\n        panic!(\"expected snapshot JSON value, got {snapshot:?}\");\n    };\n    assert_eq!(\n        snapshot.get(\"id\").and_then(|value| value.as_str()),\n        Some(\"task-1\")\n    );\n    assert_eq!(\n        snapshot\n            .get(\"meta\")\n            .and_then(|value| value.get(\"priority\"))\n            .and_then(|value| value.as_str()),\n        Some(\"high\")\n    );\n\n    let missing = row\n        .value(\"missing\")\n        .expect_err(\"missing column should return a structured error\");\n    assert_eq!(missing.code, \"LIX_COLUMN_NOT_FOUND\");\n}\n\nasync fn register_poison_task_schema(lix: &lix_rs_sdk::Lix) {\n    let schema = r#\"{\n        \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n        \"x-lix-key\": \"poison_task\",\n        \"x-lix-primary-key\": [\"/id\"],\n        \"type\": \"object\",\n        \"required\": [\"id\", \"title\", \"meta\"],\n        \"properties\": {\n            \"id\": { \"type\": \"string\" },\n            \"title\": { \"type\": \"string\" },\n            \"meta\": { \"type\": \"object\" }\n        },\n        \"additionalProperties\": false\n    }\"#;\n\n    lix.execute(\n        \"INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))\",\n        &[Value::Text(schema.to_string())],\n    )\n    .await\n    .unwrap();\n}\n\nasync fn 
task_done(lix: &lix_rs_sdk::Lix, task_id: &str) -> bool {\n    let result = lix\n        .execute(\n            \"SELECT done FROM crm_task WHERE id = $1\",\n            &[Value::Text(task_id.to_string())],\n        )\n        .await\n        .unwrap();\n\n    assert_eq!(result.len(), 1);\n\n    match result.rows()[0].values().first() {\n        Some(Value::Boolean(done)) => *done,\n        value => panic!(\"expected boolean done value, got {value:?}\"),\n    }\n}\n\nfn assert_closed(error: LixError) {\n    assert_eq!(error.code, LixError::CODE_CLOSED);\n}\n\ntype KvMap = BTreeMap<(String, Vec<u8>), Vec<u8>>;\n\n#[derive(Clone, Default)]\nstruct SharedTestBackend {\n    kv: Arc<Mutex<KvMap>>,\n    close_count: Arc<Mutex<usize>>,\n    rollback_count: Arc<Mutex<usize>>,\n    active_transaction: Arc<Mutex<bool>>,\n    reject_nested_transactions: bool,\n}\n\nimpl SharedTestBackend {\n    fn new() -> Self {\n        Self::default()\n    }\n\n    fn rejecting_nested_transactions() -> Self {\n        Self {\n            reject_nested_transactions: true,\n            ..Self::default()\n        }\n    }\n\n    fn close_count(&self) -> Arc<Mutex<usize>> {\n        Arc::clone(&self.close_count)\n    }\n\n    fn rollback_count(&self) -> Arc<Mutex<usize>> {\n        Arc::clone(&self.rollback_count)\n    }\n\n    fn begin_test_transaction(&self) -> Result<SharedTestTransaction, LixError> {\n        let mut active_transaction = self\n            .active_transaction\n            .lock()\n            .map_err(|_| LixError::unknown(\"test backend active transaction lock poisoned\"))?;\n        if *active_transaction && self.reject_nested_transactions {\n            return Err(LixError::unknown(\n                \"cannot open nested Lix backend transaction\",\n            ));\n        }\n        *active_transaction = true;\n        drop(active_transaction);\n\n        let snapshot = self\n            .kv\n            .lock()\n            .map_err(|_| 
LixError::unknown(\"test backend lock poisoned\"))?\n            .clone();\n        Ok(SharedTestTransaction {\n            parent: Arc::clone(&self.kv),\n            kv: snapshot,\n            active_transaction: Arc::clone(&self.active_transaction),\n            rollback_count: Arc::clone(&self.rollback_count),\n        })\n    }\n}\n\n#[async_trait]\nimpl Backend for SharedTestBackend {\n    async fn begin_read_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendReadTransaction + Send + Sync + 'static>, LixError> {\n        Ok(Box::new(self.begin_test_transaction()?))\n    }\n\n    async fn begin_write_transaction(\n        &self,\n    ) -> Result<Box<dyn BackendWriteTransaction + Send + Sync + 'static>, LixError> {\n        Ok(Box::new(self.begin_test_transaction()?))\n    }\n\n    async fn close(&self) -> Result<(), LixError> {\n        *self\n            .close_count\n            .lock()\n            .map_err(|_| LixError::unknown(\"test backend close count lock poisoned\"))? 
+= 1;\n        Ok(())\n    }\n}\n\nstruct SharedTestTransaction {\n    parent: Arc<Mutex<KvMap>>,\n    kv: KvMap,\n    active_transaction: Arc<Mutex<bool>>,\n    rollback_count: Arc<Mutex<usize>>,\n}\n\n#[async_trait]\nimpl BackendReadTransaction for SharedTestTransaction {\n    async fn get_values(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvValueBatch, LixError> {\n        Ok(get_values_from_map(&self.kv, request))\n    }\n\n    async fn exists_many(\n        &mut self,\n        request: BackendKvGetRequest,\n    ) -> Result<BackendKvExistsBatch, LixError> {\n        Ok(exists_many_from_map(&self.kv, request))\n    }\n\n    async fn scan_keys(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvKeyPage, LixError> {\n        Ok(scan_map_keys(&self.kv, request))\n    }\n\n    async fn scan_values(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvValuePage, LixError> {\n        Ok(scan_map_values(&self.kv, request))\n    }\n\n    async fn scan_entries(\n        &mut self,\n        request: BackendKvScanRequest,\n    ) -> Result<BackendKvEntryPage, LixError> {\n        Ok(scan_map_entries(&self.kv, request))\n    }\n\n    async fn rollback(self: Box<Self>) -> Result<(), LixError> {\n        *self\n            .rollback_count\n            .lock()\n            .map_err(|_| LixError::unknown(\"test backend rollback count lock poisoned\"))? += 1;\n        *self\n            .active_transaction\n            .lock()\n            .map_err(|_| LixError::unknown(\"test backend active transaction lock poisoned\"))? 
=\n            false;\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl BackendWriteTransaction for SharedTestTransaction {\n    async fn write_kv_batch(\n        &mut self,\n        batch: BackendKvWriteBatch,\n    ) -> Result<BackendKvWriteStats, LixError> {\n        let mut stats = BackendKvWriteStats::default();\n        for group in batch.groups {\n            let namespace = group.namespace().to_string();\n            for index in 0..group.put_count() {\n                let key = group.put_key(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put key\")\n                })?;\n                let value = group.put_value(index).ok_or_else(|| {\n                    LixError::new(\"LIX_ERROR_UNKNOWN\", \"backend write batch missing put value\")\n                })?;\n                stats.puts += 1;\n                stats.bytes_written += key.len() + value.len();\n                self.kv\n                    .insert((namespace.clone(), key.to_vec()), value.to_vec());\n            }\n            for index in 0..group.delete_count() {\n                let key = group.delete_key(index).ok_or_else(|| {\n                    LixError::new(\n                        \"LIX_ERROR_UNKNOWN\",\n                        \"backend write batch missing delete key\",\n                    )\n                })?;\n                stats.deletes += 1;\n                stats.bytes_written += key.len();\n                self.kv.remove(&(namespace.clone(), key.to_vec()));\n            }\n        }\n        Ok(stats)\n    }\n\n    async fn commit(self: Box<Self>) -> Result<(), LixError> {\n        *self\n            .parent\n            .lock()\n            .map_err(|_| LixError::unknown(\"test backend lock poisoned\"))? = self.kv;\n        *self\n            .active_transaction\n            .lock()\n            .map_err(|_| LixError::unknown(\"test backend active transaction lock poisoned\"))? 
=\n            false;\n        Ok(())\n    }\n}\n\nfn get_values_from_map(kv: &KvMap, request: BackendKvGetRequest) -> BackendKvValueBatch {\n    let mut groups = Vec::with_capacity(request.groups.len());\n    for group in request.groups {\n        let namespace = group.namespace.clone();\n        let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0);\n        let mut present = Vec::with_capacity(group.keys.len());\n        for key in group.keys {\n            if let Some(value) = kv.get(&(namespace.clone(), key)) {\n                values.push(value);\n                present.push(true);\n            } else {\n                values.push([]);\n                present.push(false);\n            }\n        }\n        groups.push(BackendKvValueGroup::new(\n            namespace,\n            values.finish(),\n            present,\n        ));\n    }\n    BackendKvValueBatch { groups }\n}\n\nfn exists_many_from_map(kv: &KvMap, request: BackendKvGetRequest) -> BackendKvExistsBatch {\n    let mut groups = Vec::with_capacity(request.groups.len());\n    for group in request.groups {\n        let namespace = group.namespace.clone();\n        let exists = group\n            .keys\n            .into_iter()\n            .map(|key| kv.contains_key(&(namespace.clone(), key)))\n            .collect();\n        groups.push(BackendKvExistsGroup { namespace, exists });\n    }\n    BackendKvExistsBatch { groups }\n}\n\nfn scan_map_keys(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvKeyPage {\n    let pairs = scan_filtered_pairs(kv, &request);\n    let has_more = pairs.len() > request.limit;\n    let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n    let mut resume_after = None;\n    for (index, (key, _)) in pairs.into_iter().enumerate() {\n        if index >= request.limit {\n            break;\n        }\n        resume_after = Some(key.clone());\n        keys.push(key);\n    }\n    let resume_after = 
has_more.then_some(resume_after).flatten();\n    BackendKvKeyPage {\n        keys: keys.finish(),\n        resume_after,\n    }\n}\n\nfn scan_map_values(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvValuePage {\n    let pairs = scan_filtered_pairs(kv, &request);\n    let has_more = pairs.len() > request.limit;\n    let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n    let mut resume_after = None;\n    for (index, (key, value)) in pairs.into_iter().enumerate() {\n        if index >= request.limit {\n            break;\n        }\n        resume_after = Some(key.clone());\n        values.push(value);\n    }\n    let resume_after = has_more.then_some(resume_after).flatten();\n    BackendKvValuePage {\n        values: values.finish(),\n        resume_after,\n    }\n}\n\nfn scan_map_entries(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvEntryPage {\n    let pairs = scan_filtered_pairs(kv, &request);\n    let has_more = pairs.len() > request.limit;\n    let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n    let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0);\n    let mut resume_after = None;\n    for (index, (key, value)) in pairs.into_iter().enumerate() {\n        if index >= request.limit {\n            break;\n        }\n        resume_after = Some(key.clone());\n        keys.push(key);\n        values.push(value);\n    }\n    let resume_after = has_more.then_some(resume_after).flatten();\n    BackendKvEntryPage {\n        keys: keys.finish(),\n        values: values.finish(),\n        resume_after,\n    }\n}\n\nfn scan_filtered_pairs<'a>(\n    kv: &'a KvMap,\n    request: &BackendKvScanRequest,\n) -> Vec<(&'a Vec<u8>, &'a Vec<u8>)> {\n    let scan_limit = request\n        .limit\n        .checked_add(1 + usize::from(request.after.is_some()))\n        .unwrap_or(request.limit);\n    let mut pairs = kv\n        .iter()\n        
.filter(|((candidate_namespace, key), _)| {\n            candidate_namespace == &request.namespace && key_matches_range(key, &request.range)\n        })\n        .collect::<Vec<_>>();\n    pairs.sort_by(|left, right| left.0 .1.cmp(&right.0 .1));\n    pairs.truncate(scan_limit);\n    pairs\n        .into_iter()\n        .filter(|((_, key), _)| {\n            request\n                .after\n                .as_deref()\n                .is_none_or(|after| key.as_slice() > after)\n        })\n        .map(|((_, key), value)| (key, value))\n        .collect()\n}\n\nfn key_matches_range(key: &[u8], range: &BackendKvScanRange) -> bool {\n    match range {\n        BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix),\n        BackendKvScanRange::Range { start, end } => start.as_slice() <= key && key < end.as_slice(),\n    }\n}\n"
  },
  {
    "path": "packages/text-plugin/Cargo.toml",
    "content": "[package]\nname = \"text_plugin\"\nversion = \"0.1.0\"\nedition = \"2021\"\npublish = false\n\n[lib]\ncrate-type = [\"cdylib\", \"rlib\"]\n\n[dependencies]\nbase64 = \"0.22\"\nimara-diff = \"0.2\"\nserde = { version = \"1\", features = [\"derive\"] }\nserde_json = \"1\"\nsha1 = \"0.10\"\nwit-bindgen = \"0.40\"\n\n[dev-dependencies]\ncriterion = \"0.5\"\n\n[[bench]]\nname = \"detect_changes\"\nharness = false\n\n[[bench]]\nname = \"apply_changes\"\nharness = false\n"
  },
  {
    "path": "packages/text-plugin/README.md",
    "content": "# text-plugin\n\nRust/WASM component plugin that models files as line entities for the Lix engine.\n\n- Uses `packages/engine/wit/lix-plugin.wit`.\n- Provides `manifest.json` for install metadata (`text_plugin`).\n- Provides Lix schema docs:\n  - `schema/text_line.json`\n  - `schema/text_document.json`\n- `detect-changes` emits:\n  - `text_line` rows for inserted/deleted lines (order-preserving line matching, Git-style)\n  - one `text_document` row with ordered `line_ids`\n- `apply-changes` rebuilds exact bytes from the latest projection.\n\nThis plugin is byte-safe (works with non-UTF-8 files) by storing line content as base64 in\nsnapshot payloads.\n\n## Benchmarks\n\nRun plugin micro-benchmarks:\n\n```bash\ncargo bench -p text_plugin --bench detect_changes\ncargo bench -p text_plugin --bench apply_changes\n```\n"
  },
  {
    "path": "packages/text-plugin/benches/apply_changes.rs",
    "content": "mod common;\n\nuse common::{apply_scenarios, file_from_bytes};\nuse criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion};\nuse std::time::Duration;\nuse text_plugin::apply_changes;\n\nfn bench_apply_changes(c: &mut Criterion) {\n    let scenarios = apply_scenarios();\n\n    let mut group = c.benchmark_group(\"apply_changes\");\n    group.sample_size(20);\n    group.measurement_time(Duration::from_secs(15));\n\n    for scenario in scenarios {\n        group.bench_function(scenario.name, |b| {\n            b.iter_batched(\n                || {\n                    (\n                        file_from_bytes(\"f1\", \"/yarn.lock\", &scenario.base),\n                        scenario.changes.clone(),\n                    )\n                },\n                |(base, changes)| {\n                    let reconstructed = apply_changes(base, changes)\n                        .expect(\"apply_changes benchmark should succeed\");\n                    black_box(reconstructed);\n                },\n                BatchSize::SmallInput,\n            );\n        });\n    }\n\n    group.finish();\n}\n\ncriterion_group!(benches, bench_apply_changes);\ncriterion_main!(benches);\n"
  },
  {
    "path": "packages/text-plugin/benches/common/mod.rs",
    "content": "#![allow(dead_code)]\n\nuse text_plugin::{detect_changes, PluginEntityChange, PluginFile};\n\npub struct DetectScenario {\n    pub name: &'static str,\n    pub before: Option<Vec<u8>>,\n    pub after: Vec<u8>,\n}\n\npub struct ApplyScenario {\n    pub name: &'static str,\n    pub base: Vec<u8>,\n    pub changes: Vec<PluginEntityChange>,\n}\n\npub fn file_from_bytes(id: &str, path: &str, data: &[u8]) -> PluginFile {\n    PluginFile {\n        id: id.to_string(),\n        path: path.to_string(),\n        data: data.to_vec(),\n    }\n}\n\npub fn detect_scenarios() -> Vec<DetectScenario> {\n    vec![\n        DetectScenario {\n            name: \"small_single_line_edit\",\n            before: Some(build_small_before()),\n            after: build_small_after(),\n        },\n        DetectScenario {\n            name: \"lockfile_large_create\",\n            before: None,\n            after: build_lockfile(1200),\n        },\n        DetectScenario {\n            name: \"lockfile_large_patch\",\n            before: Some(build_lockfile(1800)),\n            after: build_lockfile_with_patch(1800),\n        },\n        DetectScenario {\n            name: \"lockfile_large_block_move_and_patch\",\n            before: Some(build_lockfile(2200)),\n            after: build_lockfile_with_block_move_and_patch(2200),\n        },\n    ]\n}\n\npub fn apply_scenarios() -> Vec<ApplyScenario> {\n    let small_before = build_small_before();\n    let small_after = build_small_after();\n    let lockfile_base_1800 = build_lockfile(1800);\n    let lockfile_patch_1800 = build_lockfile_with_patch(1800);\n    let lockfile_base_2200 = build_lockfile(2200);\n    let lockfile_move_patch_2200 = build_lockfile_with_block_move_and_patch(2200);\n\n    vec![\n        ApplyScenario {\n            name: \"small_projection_from_empty\",\n            base: Vec::new(),\n            changes: detect_changes(None, file_from_bytes(\"f1\", \"/doc.txt\", &small_after))\n                
.expect(\"small projection should be constructible for apply bench\"),\n        },\n        ApplyScenario {\n            name: \"small_delta_on_base\",\n            base: small_before.clone(),\n            changes: detect_changes(\n                Some(file_from_bytes(\"f1\", \"/doc.txt\", &small_before)),\n                file_from_bytes(\"f1\", \"/doc.txt\", &small_after),\n            )\n            .expect(\"small delta should be constructible for apply bench\"),\n        },\n        ApplyScenario {\n            name: \"lockfile_projection_from_empty\",\n            base: Vec::new(),\n            changes: detect_changes(\n                None,\n                file_from_bytes(\"f1\", \"/yarn.lock\", &lockfile_patch_1800),\n            )\n            .expect(\"lockfile projection should be constructible for apply bench\"),\n        },\n        ApplyScenario {\n            name: \"lockfile_delta_patch_on_base\",\n            base: lockfile_base_1800.clone(),\n            changes: detect_changes(\n                Some(file_from_bytes(\"f1\", \"/yarn.lock\", &lockfile_base_1800)),\n                file_from_bytes(\"f1\", \"/yarn.lock\", &lockfile_patch_1800),\n            )\n            .expect(\"lockfile delta should be constructible for apply bench\"),\n        },\n        ApplyScenario {\n            name: \"lockfile_delta_move_patch_on_base\",\n            base: lockfile_base_2200.clone(),\n            changes: detect_changes(\n                Some(file_from_bytes(\"f1\", \"/yarn.lock\", &lockfile_base_2200)),\n                file_from_bytes(\"f1\", \"/yarn.lock\", &lockfile_move_patch_2200),\n            )\n            .expect(\"lockfile move+patch delta should be constructible for apply bench\"),\n        },\n    ]\n}\n\nfn build_small_before() -> Vec<u8> {\n    b\"const a = 1;\\nconst b = 2;\\nconst c = a + b;\\n\".to_vec()\n}\n\nfn build_small_after() -> Vec<u8> {\n    b\"const a = 1;\\nconst b = 3;\\nconst c = a + b;\\n\".to_vec()\n}\n\nfn 
build_lockfile(pkg_count: usize) -> Vec<u8> {\n    let mut out = String::with_capacity(pkg_count * 170);\n    for idx in 0..pkg_count {\n        out.push_str(&package_block(idx));\n    }\n    out.into_bytes()\n}\n\nfn build_lockfile_with_patch(pkg_count: usize) -> Vec<u8> {\n    let mut blocks = (0..pkg_count).map(package_block).collect::<Vec<_>>();\n\n    let patch_index = pkg_count / 2;\n    blocks[patch_index] = patched_package_block(patch_index);\n\n    let insert_at = pkg_count / 3;\n    let inserted = (0..120)\n        .map(|offset| package_block(pkg_count + offset + 10_000))\n        .collect::<Vec<_>>();\n    blocks.splice(insert_at..insert_at, inserted);\n\n    blocks.join(\"\").into_bytes()\n}\n\nfn build_lockfile_with_block_move_and_patch(pkg_count: usize) -> Vec<u8> {\n    let mut blocks = (0..pkg_count).map(package_block).collect::<Vec<_>>();\n\n    let move_start = pkg_count / 5;\n    let move_end = move_start + (pkg_count / 8);\n    let moved = blocks.drain(move_start..move_end).collect::<Vec<_>>();\n\n    let insert_at = pkg_count / 2;\n    blocks.splice(insert_at..insert_at, moved);\n\n    for idx in (pkg_count / 3)..(pkg_count / 3 + 64) {\n        let clamped = idx.min(blocks.len().saturating_sub(1));\n        blocks[clamped] = patched_package_block(90_000 + idx);\n    }\n\n    blocks.join(\"\").into_bytes()\n}\n\nfn package_block(idx: usize) -> String {\n    let major = (idx % 9) + 1;\n    let minor = (idx * 7) % 40;\n    let patch = (idx * 13) % 70;\n    let integrity_a = idx.wrapping_mul(31).wrapping_add(17);\n    let integrity_b = idx.wrapping_mul(53).wrapping_add(29);\n\n    format!(\n        \"\\\"pkg-{idx}@^1.0.0\\\":\\n  version \\\"{major}.{minor}.{patch}\\\"\\n  resolved \\\"https://registry.yarnpkg.com/pkg-{idx}/-/pkg-{idx}-{major}.{minor}.{patch}.tgz\\\"\\n  integrity sha512-{integrity_a:016x}{integrity_b:016x}\\n\\n\"\n    )\n}\n\nfn patched_package_block(idx: usize) -> String {\n    let major = (idx % 9) + 2;\n    let minor = (idx * 
11) % 50;\n    let patch = (idx * 17) % 80;\n    let integrity_a = idx.wrapping_mul(67).wrapping_add(23);\n    let integrity_b = idx.wrapping_mul(79).wrapping_add(31);\n\n    format!(\n        \"\\\"pkg-{idx}@^1.0.0\\\":\\n  version \\\"{major}.{minor}.{patch}\\\"\\n  resolved \\\"https://registry.yarnpkg.com/pkg-{idx}/-/pkg-{idx}-{major}.{minor}.{patch}.tgz\\\"\\n  integrity sha512-{integrity_a:016x}{integrity_b:016x}\\n\\n\"\n    )\n}\n"
  },
  {
    "path": "packages/text-plugin/benches/detect_changes.rs",
    "content": "mod common;\n\nuse common::{detect_scenarios, file_from_bytes};\nuse criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion};\nuse std::time::Duration;\nuse text_plugin::detect_changes;\n\nfn bench_detect_changes(c: &mut Criterion) {\n    let scenarios = detect_scenarios();\n\n    let mut group = c.benchmark_group(\"detect_changes\");\n    group.sample_size(20);\n    group.measurement_time(Duration::from_secs(15));\n\n    for scenario in scenarios {\n        group.bench_function(scenario.name, |b| {\n            b.iter_batched(\n                || {\n                    let before = scenario\n                        .before\n                        .as_ref()\n                        .map(|bytes| file_from_bytes(\"f1\", \"/yarn.lock\", bytes));\n                    let after = file_from_bytes(\"f1\", \"/yarn.lock\", &scenario.after);\n                    (before, after)\n                },\n                |(before, after)| {\n                    let changes = detect_changes(before, after)\n                        .expect(\"detect_changes benchmark should succeed\");\n                    black_box(changes);\n                },\n                BatchSize::SmallInput,\n            );\n        });\n    }\n\n    group.finish();\n}\n\ncriterion_group!(benches, bench_detect_changes);\ncriterion_main!(benches);\n"
  },
  {
    "path": "packages/text-plugin/manifest.json",
    "content": "{\n  \"key\": \"text_plugin\",\n  \"runtime\": \"wasm-component-v1\",\n  \"api_version\": \"0.1.0\",\n  \"match\": {\n    \"path_glob\": \"*\",\n    \"content_type\": \"text\"\n  },\n  \"entry\": \"plugin.wasm\"\n}\n"
  },
  {
    "path": "packages/text-plugin/schema/text_document.json",
    "content": "{\n  \"x-lix-key\": \"text_document\",\n  \"x-lix-override-lixcols\": {\n    \"lixcol_plugin_key\": \"'text_plugin'\"\n  },\n  \"type\": \"object\",\n  \"properties\": {\n    \"line_ids\": {\n      \"type\": \"array\",\n      \"items\": {\n        \"type\": \"string\",\n        \"minLength\": 1\n      },\n      \"uniqueItems\": true,\n      \"description\": \"Ordered line entity ids for the projected document.\"\n    }\n  },\n  \"required\": [\n    \"line_ids\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/text-plugin/schema/text_line.json",
    "content": "{\n  \"x-lix-key\": \"text_line\",\n  \"x-lix-override-lixcols\": {\n    \"lixcol_plugin_key\": \"'text_plugin'\"\n  },\n  \"type\": \"object\",\n  \"properties\": {\n    \"content_base64\": {\n      \"type\": \"string\",\n      \"contentEncoding\": \"base64\",\n      \"contentMediaType\": \"application/octet-stream\",\n      \"description\": \"Base64-encoded line bytes. Empty string represents an empty line body.\"\n    },\n    \"ending\": {\n      \"type\": \"string\",\n      \"enum\": [\n        \"\",\n        \"\\n\",\n        \"\\r\\n\"\n      ],\n      \"description\": \"Original line ending bytes.\"\n    }\n  },\n  \"required\": [\n    \"content_base64\",\n    \"ending\"\n  ],\n  \"additionalProperties\": false\n}\n"
  },
  {
    "path": "packages/text-plugin/src/lib.rs",
    "content": "use crate::exports::lix::plugin::api::{EntityChange, File, Guest, PluginError};\nuse base64::engine::general_purpose::STANDARD as BASE64_STANDARD;\nuse base64::Engine as _;\nuse imara_diff::{Algorithm, Diff, InternedInput};\nuse serde::{Deserialize, Serialize};\nuse serde_json::Value;\nuse sha1::{Digest, Sha1};\nuse std::collections::{HashMap, HashSet};\nuse std::sync::OnceLock;\n\nwit_bindgen::generate!({\n    path: \"../engine/wit\",\n    world: \"plugin\",\n});\n\npub const LINE_SCHEMA_KEY: &str = \"text_line\";\npub const DOCUMENT_SCHEMA_KEY: &str = \"text_document\";\npub const DOCUMENT_ENTITY_ID: &str = \"__document__\";\nconst MANIFEST_JSON: &str = include_str!(\"../manifest.json\");\nconst LINE_SCHEMA_JSON: &str = include_str!(\"../schema/text_line.json\");\nconst DOCUMENT_SCHEMA_JSON: &str = include_str!(\"../schema/text_document.json\");\n\nstatic LINE_SCHEMA: OnceLock<Value> = OnceLock::new();\nstatic DOCUMENT_SCHEMA: OnceLock<Value> = OnceLock::new();\n\npub use crate::exports::lix::plugin::api::{\n    EntityChange as PluginEntityChange, File as PluginFile, PluginError as PluginApiError,\n};\n\nstruct TextLinesPlugin;\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]\nenum LineEnding {\n    None,\n    Lf,\n    Crlf,\n}\n\nimpl LineEnding {\n    fn as_str(self) -> &'static str {\n        match self {\n            Self::None => \"\",\n            Self::Lf => \"\\n\",\n            Self::Crlf => \"\\r\\n\",\n        }\n    }\n\n    fn marker_byte(self) -> u8 {\n        match self {\n            Self::None => 0,\n            Self::Lf => 1,\n            Self::Crlf => 2,\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\nstruct ParsedLine {\n    entity_id: String,\n    content: Vec<u8>,\n    ending: LineEnding,\n}\n\n#[derive(Debug, Serialize)]\nstruct DocumentSnapshot<'a> {\n    line_ids: &'a [String],\n}\n\n#[derive(Debug, Serialize, Deserialize)]\n#[serde(deny_unknown_fields)]\nstruct DocumentSnapshotOwned {\n    line_ids: 
Vec<String>,\n}\n\nimpl Guest for TextLinesPlugin {\n    fn detect_changes(\n        before: Option<File>,\n        after: File,\n        _state_context: Option<crate::exports::lix::plugin::api::DetectStateContext>,\n    ) -> Result<Vec<EntityChange>, PluginError> {\n        if let Some(previous) = before.as_ref() {\n            if previous.data == after.data {\n                return Ok(Vec::new());\n            }\n        }\n\n        let before_lines = before\n            .as_ref()\n            .map(|file| parse_lines_with_ids(&file.data))\n            .unwrap_or_default();\n        let after_lines = if let Some(before_file) = before.as_ref() {\n            parse_after_lines_with_histogram_matching(&before_lines, &before_file.data, &after.data)\n        } else {\n            parse_lines_with_ids(&after.data)\n        };\n\n        let before_ids = before_lines\n            .iter()\n            .map(|line| line.entity_id.clone())\n            .collect::<Vec<_>>();\n        let after_ids = after_lines\n            .iter()\n            .map(|line| line.entity_id.clone())\n            .collect::<Vec<_>>();\n\n        let before_id_set = before_ids.iter().cloned().collect::<HashSet<_>>();\n        let after_id_set = after_ids.iter().cloned().collect::<HashSet<_>>();\n        let mut changes = Vec::new();\n\n        if before.is_some() {\n            let mut removed_ids = HashSet::<String>::with_capacity(before_lines.len());\n            for line in &before_lines {\n                if after_id_set.contains(&line.entity_id) {\n                    continue;\n                }\n                if removed_ids.insert(line.entity_id.clone()) {\n                    changes.push(EntityChange {\n                        entity_id: line.entity_id.clone(),\n                        schema_key: LINE_SCHEMA_KEY.to_string(),\n                        snapshot_content: None,\n                    });\n                }\n            }\n        }\n\n        for line in &after_lines {\n    
        if before_id_set.contains(&line.entity_id) {\n                continue;\n            }\n            changes.push(EntityChange {\n                entity_id: line.entity_id.clone(),\n                schema_key: LINE_SCHEMA_KEY.to_string(),\n                snapshot_content: Some(serialize_line_snapshot(line)?),\n            });\n        }\n\n        if before.is_none() || before_ids != after_ids {\n            let snapshot = serde_json::to_string(&DocumentSnapshot {\n                line_ids: &after_ids,\n            })\n            .map_err(|error| {\n                PluginError::Internal(format!(\"failed to encode document snapshot: {error}\"))\n            })?;\n            changes.push(EntityChange {\n                entity_id: DOCUMENT_ENTITY_ID.to_string(),\n                schema_key: DOCUMENT_SCHEMA_KEY.to_string(),\n                snapshot_content: Some(snapshot),\n            });\n        }\n\n        Ok(changes)\n    }\n\n    fn apply_changes(file: File, changes: Vec<EntityChange>) -> Result<Vec<u8>, PluginError> {\n        let expected_line_changes = changes\n            .iter()\n            .filter(|change| change.schema_key == LINE_SCHEMA_KEY)\n            .count();\n        let mut document_snapshot: Option<DocumentSnapshotOwned> = None;\n        let mut document_tombstoned = false;\n        let mut line_by_id = parse_lines_with_ids(&file.data)\n            .into_iter()\n            .map(|line| (line.entity_id.clone(), line))\n            .collect::<HashMap<_, _>>();\n        line_by_id.reserve(expected_line_changes);\n        let mut seen_line_change_ids = HashSet::<String>::with_capacity(expected_line_changes);\n\n        for change in changes {\n            if change.schema_key == LINE_SCHEMA_KEY {\n                if !seen_line_change_ids.insert(change.entity_id.clone()) {\n                    return Err(PluginError::InvalidInput(\n                        \"duplicate text_line snapshot in apply_changes input\".to_string(),\n                
    ));\n                }\n\n                match change.snapshot_content {\n                    Some(snapshot_raw) => {\n                        let snapshot = parse_line_snapshot(&snapshot_raw, &change.entity_id)?;\n                        line_by_id.insert(\n                            change.entity_id.clone(),\n                            ParsedLine {\n                                entity_id: change.entity_id,\n                                content: snapshot.content,\n                                ending: snapshot.ending,\n                            },\n                        );\n                    }\n                    None => {\n                        line_by_id.remove(&change.entity_id);\n                    }\n                }\n                continue;\n            }\n\n            if change.schema_key == DOCUMENT_SCHEMA_KEY {\n                if change.entity_id != DOCUMENT_ENTITY_ID {\n                    return Err(PluginError::InvalidInput(format!(\n                        \"document snapshot entity_id must be '{DOCUMENT_ENTITY_ID}', got '{}'\",\n                        change.entity_id\n                    )));\n                }\n\n                match change.snapshot_content {\n                    Some(snapshot_raw) => {\n                        if document_snapshot.is_some() || document_tombstoned {\n                            return Err(PluginError::InvalidInput(\n                                \"duplicate text_document snapshot in apply_changes input\"\n                                    .to_string(),\n                            ));\n                        }\n                        let parsed = parse_document_snapshot(&snapshot_raw)?;\n                        document_snapshot = Some(parsed);\n                    }\n                    None => {\n                        if document_snapshot.is_some() || document_tombstoned {\n                            return Err(PluginError::InvalidInput(\n                                
\"duplicate text_document snapshot in apply_changes input\"\n                                    .to_string(),\n                            ));\n                        }\n                        document_tombstoned = true;\n                    }\n                }\n            }\n        }\n\n        if document_tombstoned {\n            return Ok(Vec::new());\n        }\n\n        let document_snapshot = document_snapshot.ok_or_else(|| {\n            PluginError::InvalidInput(\n                \"missing text_document snapshot; apply_changes requires full latest projection\"\n                    .to_string(),\n            )\n        })?;\n\n        let mut output = Vec::new();\n        for line_id in document_snapshot.line_ids {\n            let Some(line) = line_by_id.get(&line_id) else {\n                return Err(PluginError::InvalidInput(format!(\n                    \"document references missing text_line entity_id '{line_id}'\"\n                )));\n            };\n            output.extend_from_slice(&line.content);\n            output.extend_from_slice(line.ending.as_str().as_bytes());\n        }\n\n        Ok(output)\n    }\n}\n\nfn parse_document_snapshot(raw: &str) -> Result<DocumentSnapshotOwned, PluginError> {\n    let parsed: DocumentSnapshotOwned = serde_json::from_str(raw).map_err(|error| {\n        PluginError::InvalidInput(format!(\"invalid text_document snapshot_content: {error}\"))\n    })?;\n\n    let mut seen = HashSet::new();\n    for line_id in &parsed.line_ids {\n        if line_id.is_empty() {\n            return Err(PluginError::InvalidInput(\n                \"text_document.line_ids must not contain empty ids\".to_string(),\n            ));\n        }\n        if !seen.insert(line_id.clone()) {\n            return Err(PluginError::InvalidInput(format!(\n                \"text_document.line_ids contains duplicate id '{line_id}'\"\n            )));\n        }\n    }\n\n    Ok(parsed)\n}\n\nfn parse_line_snapshot(raw: &str, entity_id: 
&str) -> Result<ParsedLine, PluginError> {\n    let (content_base64, ending) = parse_line_snapshot_fields(raw).map_err(|error| {\n        PluginError::InvalidInput(format!(\n            \"invalid text_line snapshot_content for entity_id '{entity_id}': {error}\"\n        ))\n    })?;\n\n    let content = base64_to_bytes(content_base64).map_err(|error| {\n        PluginError::InvalidInput(format!(\n            \"invalid text_line.content_base64 for entity_id '{entity_id}': {error}\"\n        ))\n    })?;\n    let ending = parse_line_ending_literal(ending).map_err(|error| {\n        PluginError::InvalidInput(format!(\n            \"invalid text_line.ending for entity_id '{entity_id}': {error}\"\n        ))\n    })?;\n\n    Ok(ParsedLine {\n        entity_id: entity_id.to_string(),\n        content,\n        ending,\n    })\n}\n\nfn serialize_line_snapshot(line: &ParsedLine) -> Result<String, PluginError> {\n    let content_base64 = bytes_to_base64(&line.content);\n    let ending = line_ending_json_literal(line.ending);\n    let mut encoded = String::with_capacity(\n        LINE_SNAPSHOT_PREFIX.len()\n            + content_base64.len()\n            + LINE_SNAPSHOT_SEPARATOR.len()\n            + ending.len()\n            + LINE_SNAPSHOT_SUFFIX.len(),\n    );\n    encoded.push_str(LINE_SNAPSHOT_PREFIX);\n    encoded.push_str(&content_base64);\n    encoded.push_str(LINE_SNAPSHOT_SEPARATOR);\n    encoded.push_str(ending);\n    encoded.push_str(LINE_SNAPSHOT_SUFFIX);\n    Ok(encoded)\n}\n\nfn parse_lines_with_ids(data: &[u8]) -> Vec<ParsedLine> {\n    parse_lines_with_ids_from_split(split_lines(data))\n}\n\nfn parse_lines_with_ids_from_split(split: Vec<(Vec<u8>, LineEnding)>) -> Vec<ParsedLine> {\n    let mut occurrence_by_key = HashMap::<[u8; 20], u32>::new();\n    let mut lines = Vec::with_capacity(split.len());\n\n    for (content, ending) in split {\n        let fingerprint = line_fingerprint(&content, ending);\n        let occurrence = 
occurrence_by_key.entry(fingerprint).or_insert(0);\n        let entity_id = format!(\"line:{}:{}\", bytes_to_hex(&fingerprint), occurrence);\n        *occurrence += 1;\n\n        lines.push(ParsedLine {\n            entity_id,\n            content,\n            ending,\n        });\n    }\n\n    lines\n}\n\nfn parse_after_lines_with_histogram_matching(\n    before_lines: &[ParsedLine],\n    before_data: &[u8],\n    after_data: &[u8],\n) -> Vec<ParsedLine> {\n    let after_split = split_lines(after_data);\n\n    let matching_pairs = compute_histogram_line_matching_pairs(before_data, after_data);\n\n    let mut matched_after_to_before = HashMap::<usize, usize>::new();\n    for (before_index, after_index) in matching_pairs {\n        matched_after_to_before.insert(after_index, before_index);\n    }\n\n    let mut used_ids = before_lines\n        .iter()\n        .map(|line| line.entity_id.clone())\n        .collect::<HashSet<_>>();\n    let mut occurrence_by_key = HashMap::<[u8; 20], u32>::new();\n    let mut after_lines = Vec::with_capacity(after_split.len());\n\n    for (after_index, (content, ending)) in after_split.into_iter().enumerate() {\n        let fingerprint = line_fingerprint(&content, ending);\n        let occurrence = occurrence_by_key.entry(fingerprint).or_insert(0);\n        let canonical_occurrence = *occurrence;\n        *occurrence += 1;\n\n        let entity_id = if let Some(before_index) = matched_after_to_before.get(&after_index) {\n            before_lines[*before_index].entity_id.clone()\n        } else {\n            let canonical_entity_id = format!(\n                \"line:{}:{}\",\n                bytes_to_hex(&fingerprint),\n                canonical_occurrence\n            );\n            allocate_inserted_line_id(&canonical_entity_id, &used_ids)\n        };\n        used_ids.insert(entity_id.clone());\n\n        after_lines.push(ParsedLine {\n            entity_id,\n            content,\n            ending,\n        });\n    }\n\n    
after_lines\n}\n\nfn compute_histogram_line_matching_pairs(\n    before_data: &[u8],\n    after_data: &[u8],\n) -> Vec<(usize, usize)> {\n    let input = InternedInput::new(before_data, after_data);\n    let mut diff = Diff::compute(Algorithm::Histogram, &input);\n    diff.postprocess_lines(&input);\n\n    let mut pairs = Vec::new();\n    let mut before_pos = 0usize;\n    let mut after_pos = 0usize;\n\n    for hunk in diff.hunks() {\n        let hunk_before_start = hunk.before.start as usize;\n        let hunk_after_start = hunk.after.start as usize;\n        let unchanged_before_len = hunk_before_start.saturating_sub(before_pos);\n        let unchanged_after_len = hunk_after_start.saturating_sub(after_pos);\n        let unchanged_len = unchanged_before_len.min(unchanged_after_len);\n\n        for offset in 0..unchanged_len {\n            pairs.push((before_pos + offset, after_pos + offset));\n        }\n\n        before_pos = hunk.before.end as usize;\n        after_pos = hunk.after.end as usize;\n    }\n\n    let before_tail = input.before.len().saturating_sub(before_pos);\n    let after_tail = input.after.len().saturating_sub(after_pos);\n    let tail_len = before_tail.min(after_tail);\n    for offset in 0..tail_len {\n        pairs.push((before_pos + offset, after_pos + offset));\n    }\n\n    pairs\n}\n\nfn allocate_inserted_line_id(base: &str, used_ids: &HashSet<String>) -> String {\n    if !used_ids.contains(base) {\n        return base.to_string();\n    }\n\n    let mut suffix = 0u32;\n    loop {\n        let candidate = format!(\"{base}:ins:{suffix}\");\n        if !used_ids.contains(&candidate) {\n            return candidate;\n        }\n        suffix += 1;\n    }\n}\n\nfn split_lines(data: &[u8]) -> Vec<(Vec<u8>, LineEnding)> {\n    if data.is_empty() {\n        return Vec::new();\n    }\n\n    let mut lines = Vec::new();\n    let mut start = 0usize;\n\n    for index in 0..data.len() {\n        if data[index] != b'\\n' {\n            continue;\n        
}\n\n        if index > start && data[index - 1] == b'\\r' {\n            lines.push((data[start..index - 1].to_vec(), LineEnding::Crlf));\n        } else {\n            lines.push((data[start..index].to_vec(), LineEnding::Lf));\n        }\n        start = index + 1;\n    }\n\n    if start < data.len() {\n        lines.push((data[start..].to_vec(), LineEnding::None));\n    }\n\n    lines\n}\n\nfn line_fingerprint(content: &[u8], ending: LineEnding) -> [u8; 20] {\n    let mut hasher = Sha1::new();\n    hasher.update(content);\n    hasher.update([0xff, ending.marker_byte()]);\n    let digest = hasher.finalize();\n    let mut fingerprint = [0u8; 20];\n    fingerprint.copy_from_slice(&digest);\n    fingerprint\n}\n\nconst LINE_SNAPSHOT_PREFIX: &str = \"{\\\"content_base64\\\":\\\"\";\nconst LINE_SNAPSHOT_SEPARATOR: &str = \"\\\",\\\"ending\\\":\\\"\";\nconst LINE_SNAPSHOT_SUFFIX: &str = \"\\\"}\";\n\nfn parse_line_snapshot_fields(raw: &str) -> Result<(&str, &str), String> {\n    let inner = raw\n        .strip_prefix(LINE_SNAPSHOT_PREFIX)\n        .and_then(|value| value.strip_suffix(LINE_SNAPSHOT_SUFFIX))\n        .ok_or_else(|| \"expected {\\\"content_base64\\\":\\\"...\\\",\\\"ending\\\":\\\"...\\\"}\".to_string())?;\n    inner\n        .split_once(LINE_SNAPSHOT_SEPARATOR)\n        .ok_or_else(|| \"missing content_base64 or ending field\".to_string())\n}\n\nfn line_ending_json_literal(ending: LineEnding) -> &'static str {\n    match ending {\n        LineEnding::None => \"\",\n        LineEnding::Lf => \"\\\\n\",\n        LineEnding::Crlf => \"\\\\r\\\\n\",\n    }\n}\n\nfn parse_line_ending_literal(value: &str) -> Result<LineEnding, String> {\n    match value {\n        \"\" => Ok(LineEnding::None),\n        \"\\\\n\" => Ok(LineEnding::Lf),\n        \"\\\\r\\\\n\" => Ok(LineEnding::Crlf),\n        _ => Err(\n            \"unsupported ending literal; expected \\\"\\\", \\\"\\\\\\\\n\\\", or \\\"\\\\\\\\r\\\\\\\\n\\\"\".to_string(),\n        ),\n    }\n}\n\nfn 
bytes_to_hex(bytes: &[u8]) -> String {\n    let mut output = String::with_capacity(bytes.len() * 2);\n    for byte in bytes {\n        output.push(hex_char(byte >> 4));\n        output.push(hex_char(byte & 0x0f));\n    }\n    output\n}\n\nfn hex_char(value: u8) -> char {\n    match value {\n        0..=9 => (b'0' + value) as char,\n        10..=15 => (b'a' + (value - 10)) as char,\n        _ => '?',\n    }\n}\n\nfn bytes_to_base64(bytes: &[u8]) -> String {\n    BASE64_STANDARD.encode(bytes)\n}\n\nfn base64_to_bytes(raw: &str) -> Result<Vec<u8>, String> {\n    BASE64_STANDARD\n        .decode(raw)\n        .map_err(|error| format!(\"invalid base64: {error}\"))\n}\n\npub fn detect_changes(before: Option<File>, after: File) -> Result<Vec<EntityChange>, PluginError> {\n    <TextLinesPlugin as Guest>::detect_changes(before, after, None)\n}\n\npub fn detect_changes_with_state_context(\n    before: Option<File>,\n    after: File,\n    state_context: Option<crate::exports::lix::plugin::api::DetectStateContext>,\n) -> Result<Vec<EntityChange>, PluginError> {\n    <TextLinesPlugin as Guest>::detect_changes(before, after, state_context)\n}\n\npub fn apply_changes(file: File, changes: Vec<EntityChange>) -> Result<Vec<u8>, PluginError> {\n    <TextLinesPlugin as Guest>::apply_changes(file, changes)\n}\n\npub fn manifest_json() -> &'static str {\n    MANIFEST_JSON\n}\n\npub fn line_schema_json() -> &'static str {\n    LINE_SCHEMA_JSON\n}\n\npub fn line_schema_definition() -> &'static Value {\n    LINE_SCHEMA.get_or_init(|| {\n        serde_json::from_str(LINE_SCHEMA_JSON).expect(\"text line schema must parse\")\n    })\n}\n\npub fn document_schema_json() -> &'static str {\n    DOCUMENT_SCHEMA_JSON\n}\n\npub fn document_schema_definition() -> &'static Value {\n    DOCUMENT_SCHEMA.get_or_init(|| {\n        serde_json::from_str(DOCUMENT_SCHEMA_JSON).expect(\"text document schema must parse\")\n    })\n}\n\n#[cfg(target_arch = \"wasm32\")]\nexport!(TextLinesPlugin);\n"
  },
  {
    "path": "packages/text-plugin/tests/apply_changes.rs",
    "content": "mod common;\n\nuse common::{file_from_bytes, parse_document_snapshot};\nuse text_plugin::{\n    apply_changes, detect_changes, PluginApiError, PluginEntityChange, DOCUMENT_SCHEMA_KEY,\n    LINE_SCHEMA_KEY,\n};\n\n#[test]\nfn applies_full_projection_and_reconstructs_bytes() {\n    let expected = b\"line 1\\nline 2\\r\\nline 3\";\n    let after = file_from_bytes(\"f1\", \"/doc.txt\", expected);\n\n    let changes = detect_changes(None, after).expect(\"detect_changes should succeed\");\n    let output = apply_changes(file_from_bytes(\"f1\", \"/doc.txt\", b\"\"), changes)\n        .expect(\"apply_changes should succeed\");\n\n    assert_eq!(output, expected);\n}\n\n#[test]\nfn supports_binary_bytes() {\n    let expected = vec![0xff, b'\\n', 0x00, b'\\r', b'\\n', 0x7f];\n    let after = file_from_bytes(\"f1\", \"/bin.dat\", &expected);\n\n    let changes = detect_changes(None, after).expect(\"detect_changes should succeed\");\n    let output = apply_changes(file_from_bytes(\"f1\", \"/bin.dat\", b\"\"), changes)\n        .expect(\"apply_changes should succeed\");\n\n    assert_eq!(output, expected);\n}\n\n#[test]\nfn rejects_missing_document_snapshot() {\n    let changes = vec![PluginEntityChange {\n        entity_id: \"line:abc:0\".to_string(),\n        schema_key: LINE_SCHEMA_KEY.to_string(),\n        snapshot_content: Some(r#\"{\"content_base64\":\"YQ==\",\"ending\":\"\\n\"}\"#.to_string()),\n    }];\n\n    let error = apply_changes(file_from_bytes(\"f1\", \"/doc.txt\", b\"\"), changes)\n        .expect_err(\"apply_changes should fail\");\n\n    match error {\n        PluginApiError::InvalidInput(message) => {\n            assert!(message.contains(\"missing text_document snapshot\"));\n        }\n        PluginApiError::Internal(message) => {\n            panic!(\"expected InvalidInput, got Internal({message})\");\n        }\n    }\n}\n\n#[test]\nfn document_order_drives_output_order() {\n    let after = file_from_bytes(\"f1\", \"/doc.txt\", 
b\"a\\nb\\n\");\n    let mut changes = detect_changes(None, after).expect(\"detect_changes should succeed\");\n\n    let document_index = changes\n        .iter()\n        .position(|change| change.schema_key == DOCUMENT_SCHEMA_KEY)\n        .expect(\"document row should exist\");\n    let mut doc = parse_document_snapshot(&changes[document_index]);\n    doc.line_ids.reverse();\n    changes[document_index].snapshot_content = Some(\n        serde_json::json!({\n            \"line_ids\": doc.line_ids,\n        })\n        .to_string(),\n    );\n\n    let output = apply_changes(file_from_bytes(\"f1\", \"/doc.txt\", b\"\"), changes)\n        .expect(\"apply_changes should succeed\");\n\n    assert_eq!(output, b\"b\\na\\n\");\n}\n"
  },
  {
    "path": "packages/text-plugin/tests/common/mod.rs",
    "content": "#![allow(dead_code)]\n\nuse serde::Deserialize;\nuse text_plugin::{PluginEntityChange, PluginFile};\n\n#[derive(Debug, Deserialize)]\npub struct LineSnapshot {\n    pub content_base64: String,\n    pub ending: String,\n}\n\n#[derive(Debug, Deserialize)]\npub struct DocumentSnapshot {\n    pub line_ids: Vec<String>,\n}\n\npub fn file_from_bytes(id: &str, path: &str, data: &[u8]) -> PluginFile {\n    PluginFile {\n        id: id.to_string(),\n        path: path.to_string(),\n        data: data.to_vec(),\n    }\n}\n\npub fn parse_line_snapshot(change: &PluginEntityChange) -> LineSnapshot {\n    let raw = change\n        .snapshot_content\n        .as_ref()\n        .expect(\"line snapshot should exist\");\n    serde_json::from_str(raw).expect(\"line snapshot should parse\")\n}\n\npub fn parse_document_snapshot(change: &PluginEntityChange) -> DocumentSnapshot {\n    let raw = change\n        .snapshot_content\n        .as_ref()\n        .expect(\"document snapshot should exist\");\n    serde_json::from_str(raw).expect(\"document snapshot should parse\")\n}\n"
  },
  {
    "path": "packages/text-plugin/tests/detect_changes.rs",
    "content": "mod common;\n\nuse common::{file_from_bytes, parse_document_snapshot};\nuse text_plugin::{detect_changes, DOCUMENT_ENTITY_ID, DOCUMENT_SCHEMA_KEY, LINE_SCHEMA_KEY};\n\n#[test]\nfn creation_returns_full_projection() {\n    let after = file_from_bytes(\"f1\", \"/doc.txt\", b\"a\\nb\\n\");\n\n    let changes = detect_changes(None, after).expect(\"detect_changes should succeed\");\n\n    let line_changes = changes\n        .iter()\n        .filter(|change| change.schema_key == LINE_SCHEMA_KEY)\n        .collect::<Vec<_>>();\n    assert_eq!(line_changes.len(), 2);\n    assert!(line_changes\n        .iter()\n        .all(|change| change.snapshot_content.is_some()));\n\n    let document_change = changes\n        .iter()\n        .find(|change| change.schema_key == DOCUMENT_SCHEMA_KEY)\n        .expect(\"document snapshot should exist\");\n    assert_eq!(document_change.entity_id, DOCUMENT_ENTITY_ID);\n    let doc = parse_document_snapshot(document_change);\n    assert_eq!(doc.line_ids.len(), 2);\n}\n\n#[test]\nfn insertion_in_middle_emits_inserted_line_and_document_change() {\n    let before = file_from_bytes(\"f1\", \"/doc.txt\", b\"a\\nb\\n\");\n    let after = file_from_bytes(\"f1\", \"/doc.txt\", b\"a\\nx\\nb\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    let line_inserts = changes\n        .iter()\n        .filter(|change| change.schema_key == LINE_SCHEMA_KEY)\n        .filter(|change| change.snapshot_content.is_some())\n        .collect::<Vec<_>>();\n    let line_tombstones = changes\n        .iter()\n        .filter(|change| change.schema_key == LINE_SCHEMA_KEY)\n        .filter(|change| change.snapshot_content.is_none())\n        .collect::<Vec<_>>();\n\n    assert_eq!(line_inserts.len(), 1);\n    assert_eq!(line_tombstones.len(), 0);\n    assert!(changes\n        .iter()\n        .any(|change| change.schema_key == DOCUMENT_SCHEMA_KEY));\n}\n\n#[test]\nfn 
deletion_emits_line_tombstone_and_document_change() {\n    let before = file_from_bytes(\"f1\", \"/doc.txt\", b\"a\\nb\\n\");\n    let after = file_from_bytes(\"f1\", \"/doc.txt\", b\"a\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    let line_tombstones = changes\n        .iter()\n        .filter(|change| change.schema_key == LINE_SCHEMA_KEY)\n        .filter(|change| change.snapshot_content.is_none())\n        .collect::<Vec<_>>();\n\n    assert_eq!(line_tombstones.len(), 1);\n    assert!(changes\n        .iter()\n        .any(|change| change.schema_key == DOCUMENT_SCHEMA_KEY));\n}\n\n#[test]\nfn unchanged_file_returns_no_changes() {\n    let before = file_from_bytes(\"f1\", \"/doc.txt\", b\"unchanged\\n\");\n    let after = file_from_bytes(\"f1\", \"/doc.txt\", b\"unchanged\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    assert!(changes.is_empty());\n}\n\n#[test]\nfn line_reorder_emits_delete_and_insert() {\n    let before = file_from_bytes(\"f1\", \"/doc.txt\", b\"a\\nb\\n\");\n    let after = file_from_bytes(\"f1\", \"/doc.txt\", b\"b\\na\\n\");\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n\n    let line_inserts = changes\n        .iter()\n        .filter(|change| change.schema_key == LINE_SCHEMA_KEY)\n        .filter(|change| change.snapshot_content.is_some())\n        .collect::<Vec<_>>();\n    let line_tombstones = changes\n        .iter()\n        .filter(|change| change.schema_key == LINE_SCHEMA_KEY)\n        .filter(|change| change.snapshot_content.is_none())\n        .collect::<Vec<_>>();\n\n    assert_eq!(line_inserts.len(), 1);\n    assert_eq!(line_tombstones.len(), 1);\n    assert_ne!(line_inserts[0].entity_id, line_tombstones[0].entity_id);\n    assert!(changes\n        .iter()\n        .any(|change| change.schema_key == DOCUMENT_SCHEMA_KEY));\n}\n"
  },
  {
    "path": "packages/text-plugin/tests/roundtrip.rs",
    "content": "mod common;\n\nuse common::file_from_bytes;\nuse std::collections::BTreeMap;\nuse text_plugin::{apply_changes, detect_changes, PluginEntityChange};\n\n#[test]\nfn detect_then_apply_roundtrips_exact_bytes() {\n    let payload = b\"first line\\nsecond line\\r\\nthird line\\n\";\n    let file = file_from_bytes(\"f1\", \"/doc.txt\", payload);\n\n    let changes = detect_changes(None, file).expect(\"detect_changes should succeed\");\n    let reconstructed = apply_changes(file_from_bytes(\"f1\", \"/doc.txt\", b\"\"), changes)\n        .expect(\"apply_changes should succeed\");\n\n    assert_eq!(reconstructed, payload);\n}\n\n#[test]\nfn update_roundtrip_preserves_exact_target_bytes() {\n    let before_payload = b\"a\\nb\\nc\\n\";\n    let before = file_from_bytes(\"f1\", \"/doc.txt\", before_payload);\n    let after_payload = b\"a\\nx\\nc\\n\";\n    let after = file_from_bytes(\"f1\", \"/doc.txt\", after_payload);\n\n    let changes = detect_changes(Some(before), after).expect(\"detect_changes should succeed\");\n    let reconstructed = apply_changes(file_from_bytes(\"f1\", \"/doc.txt\", before_payload), changes)\n        .expect(\"apply_changes should succeed\");\n\n    assert_eq!(reconstructed, after_payload);\n}\n\n#[test]\nfn projected_change_log_reconstructs_from_empty_base() {\n    let before_payload = b\"a\\nb\\nc\\n\";\n    let before_for_initial = file_from_bytes(\"f1\", \"/doc.txt\", before_payload);\n    let before_for_delta = file_from_bytes(\"f1\", \"/doc.txt\", before_payload);\n    let after_payload = b\"a\\nx\\nc\\n\";\n    let after = file_from_bytes(\"f1\", \"/doc.txt\", after_payload);\n\n    let initial_changes =\n        detect_changes(None, before_for_initial).expect(\"initial detect_changes should succeed\");\n    let delta_changes =\n        detect_changes(Some(before_for_delta), after).expect(\"delta detect_changes should succeed\");\n\n    let projected_changes = collapse_to_latest_projection([initial_changes, delta_changes]);\n  
  let reconstructed = apply_changes(file_from_bytes(\"f1\", \"/doc.txt\", b\"\"), projected_changes)\n        .expect(\"apply_changes should succeed for projected changes\");\n\n    assert_eq!(reconstructed, after_payload);\n}\n\nfn collapse_to_latest_projection(batches: [Vec<PluginEntityChange>; 2]) -> Vec<PluginEntityChange> {\n    let mut latest = BTreeMap::<(String, String), PluginEntityChange>::new();\n    for batch in batches {\n        for change in batch {\n            latest.insert(\n                (change.schema_key.clone(), change.entity_id.clone()),\n                change,\n            );\n        }\n    }\n    latest.into_values().collect()\n}\n"
  },
  {
    "path": "packages/text-plugin/tests/schema.rs",
    "content": "use text_plugin::{\n    document_schema_definition, document_schema_json, line_schema_definition, line_schema_json,\n    manifest_json, DOCUMENT_SCHEMA_KEY, LINE_SCHEMA_KEY,\n};\n\n#[test]\nfn line_schema_matches_constants() {\n    let schema = line_schema_definition();\n    assert_eq!(\n        schema\n            .get(\"x-lix-key\")\n            .and_then(serde_json::Value::as_str)\n            .expect(\"x-lix-key must be string\"),\n        LINE_SCHEMA_KEY\n    );\n}\n\n#[test]\nfn document_schema_matches_constants() {\n    let schema = document_schema_definition();\n    assert_eq!(\n        schema\n            .get(\"x-lix-key\")\n            .and_then(serde_json::Value::as_str)\n            .expect(\"x-lix-key must be string\"),\n        DOCUMENT_SCHEMA_KEY\n    );\n}\n\n#[test]\nfn schema_json_accessors_return_expected_text() {\n    let line = line_schema_json();\n    let document = document_schema_json();\n    assert!(line.contains(\"\\\"x-lix-key\\\": \\\"text_line\\\"\"));\n    assert!(document.contains(\"\\\"x-lix-key\\\": \\\"text_document\\\"\"));\n}\n\n#[test]\nfn manifest_json_has_expected_plugin_identity() {\n    let manifest: serde_json::Value =\n        serde_json::from_str(manifest_json()).expect(\"manifest must be valid JSON\");\n    assert_eq!(\n        manifest\n            .get(\"key\")\n            .and_then(serde_json::Value::as_str)\n            .expect(\"manifest.key must be string\"),\n        \"text_plugin\"\n    );\n    assert_eq!(\n        manifest\n            .get(\"runtime\")\n            .and_then(serde_json::Value::as_str)\n            .expect(\"manifest.runtime must be string\"),\n        \"wasm-component-v1\"\n    );\n}\n"
  },
  {
    "path": "packages/website/.gitignore",
    "content": "node_modules\n.DS_Store\ndist\ndist-ssr\n*.local\nsrc/routeTree.gen.ts\ncount.txt\n.env\n.nitro\n.tanstack\n.wrangler\n.output\n.vinxi\ntodos.json\ncontent/plugins/*.md\n!content/plugins/index.md\n*.gen.*"
  },
  {
    "path": "packages/website/.vscode/settings.json",
    "content": "{\n  \"files.watcherExclude\": {\n    \"**/routeTree.gen.ts\": true\n  },\n  \"search.exclude\": {\n    \"**/routeTree.gen.ts\": true\n  },\n  \"files.readonlyInclude\": {\n    \"**/routeTree.gen.ts\": true\n  }\n}\n"
  },
  {
    "path": "packages/website/HTML_DIFF_LIX_DEV_SEO_FOLLOWUP.md",
    "content": "# html-diff.lix.dev SEO Follow-up\n\nThis checklist is for the separate `html-diff.lix.dev` deployment and codebase. It is not implemented in this workspace.\n\n- Replace internal `.html` navigation links with canonical extensionless URLs.\n- Add canonical, Open Graph, and X/Twitter metadata to the home, guide, example, and playground/test pages.\n- Generate a sitemap that includes every indexable page and excludes redirects or test-only routes.\n- Update internal links so they point directly to the final destination URL instead of relying on redirects.\n"
  },
  {
    "path": "packages/website/README.md",
    "content": "# Lix Website\n\nSource for the Lix website (https://lix.dev).\n\n- `pnpm dev` – start the Vite dev server on port 3000\n- `pnpm build` – production build, followed by the post-build SEO script\n- `pnpm test` – run Vitest and type-check with `tsc --noEmit`\n"
  },
  {
    "path": "packages/website/content/plugins/index.md",
    "content": "# Plugins\n\nPlugins are coming soon.\n\nWe are rewriting this section as part of the website cleanup.\n"
  },
  {
    "path": "packages/website/package.json",
    "content": "{\n  \"name\": \"@lix-js/website\",\n  \"private\": true,\n  \"type\": \"module\",\n  \"scripts\": {\n    \"dev\": \"vite dev --port 3000\",\n    \"build\": \"vite build\",\n    \"postbuild\": \"node ./scripts/post-build-seo.js\",\n    \"preview\": \"vite preview\",\n    \"test\": \"vitest run && tsc --noEmit\",\n    \"format\": \"prettier --write .\"\n  },\n  \"dependencies\": {\n    \"@cloudflare/vite-plugin\": \"^1.36.0\",\n    \"@lix-js/plugin-json\": \"1.0.1\",\n    \"@lix-js/sdk\": \"workspace:*\",\n    \"@opral/markdown-wc\": \"0.9.0\",\n    \"@tailwindcss/vite\": \"^4.2.4\",\n    \"@tanstack/react-router\": \"^1.169.2\",\n    \"@tanstack/react-start\": \"^1.167.64\",\n    \"@tanstack/router-plugin\": \"^1.167.34\",\n    \"lucide-react\": \"^0.544.0\",\n    \"posthog-js\": \"^1.321.2\",\n    \"react\": \"^19.2.0\",\n    \"react-dom\": \"^19.2.0\",\n    \"shiki\": \"^3.2.2\",\n    \"tailwindcss\": \"^4.2.4\"\n  },\n  \"devDependencies\": {\n    \"@testing-library/dom\": \"^10.4.0\",\n    \"@testing-library/react\": \"^16.2.0\",\n    \"@types/node\": \"^22.10.2\",\n    \"@types/react\": \"^19.2.0\",\n    \"@types/react-dom\": \"^19.2.0\",\n    \"@vitejs/plugin-react\": \"^6.0.1\",\n    \"@vitest/browser\": \"^4.1.5\",\n    \"@vitest/coverage-v8\": \"^4.1.5\",\n    \"jsdom\": \"^27.0.0\",\n    \"prettier\": \"^3.6.0\",\n    \"typescript\": \"^5.7.2\",\n    \"vite\": \"^8.0.10\",\n    \"vite-plugin-static-copy\": \"^4.1.0\",\n    \"vitest\": \"^4.1.5\",\n    \"web-vitals\": \"^5.1.0\",\n    \"wrangler\": \"^4.88.0\"\n  }\n}\n"
  },
  {
    "path": "packages/website/public/_redirects",
    "content": "/docs /docs/what-is-lix 308\n/guide /docs/what-is-lix 308\n/guide/* /docs/:splat 308\n"
  },
  {
    "path": "packages/website/public/manifest.json",
    "content": "{\n  \"short_name\": \"Lix\",\n  \"name\": \"Lix - Change Control System\",\n  \"icons\": [\n    {\n      \"src\": \"favicon.svg\",\n      \"type\": \"image/svg+xml\",\n      \"sizes\": \"any\"\n    }\n  ],\n  \"start_url\": \".\",\n  \"display\": \"standalone\",\n  \"theme_color\": \"#07B6D4\",\n  \"background_color\": \"#ffffff\"\n}\n"
  },
  {
    "path": "packages/website/public/robots.txt",
    "content": "# https://www.robotstxt.org/robotstxt.html\nUser-agent: *\nDisallow:\n"
  },
  {
    "path": "packages/website/scripts/plugin-readme-sync.test.ts",
    "content": "import { describe, expect, test } from \"vitest\";\nimport { buildSeoFrontmatter } from \"./plugin-readme-sync\";\n\ndescribe(\"buildSeoFrontmatter\", () => {\n  test(\"does not add a second period when the base description already ends with one\", () => {\n    const frontmatter = buildSeoFrontmatter({\n      key: \"plugin_json\",\n      name: \"JSON Plugin\",\n      description: \"Tracks JSON changes.\",\n      readme: \"https://example.com/README.md\",\n    });\n\n    expect(frontmatter).toContain(\n      'description: \"Tracks JSON changes. Learn how to install it, supported file types, and how it fits into Lix workflows.\"',\n    );\n    expect(frontmatter).not.toContain(\".. Learn\");\n  });\n\n  test(\"adds sentence punctuation when the base description is missing it\", () => {\n    const frontmatter = buildSeoFrontmatter({\n      key: \"plugin_json\",\n      name: \"JSON Plugin\",\n      description: \"Tracks JSON changes\",\n      readme: \"https://example.com/README.md\",\n    });\n\n    expect(frontmatter).toContain(\n      'description: \"Tracks JSON changes. Learn how to install it, supported file types, and how it fits into Lix workflows.\"',\n    );\n  });\n});\n"
  },
  {
    "path": "packages/website/scripts/plugin-readme-sync.ts",
    "content": "import { mkdir, readFile, writeFile } from \"node:fs/promises\";\nimport { existsSync } from \"node:fs\";\nimport path from \"node:path\";\nimport type { Plugin } from \"vite\";\n\ntype PluginRegistry = {\n  plugins?: Array<{\n    key: string;\n    name?: string;\n    description?: string;\n    readme?: string;\n  }>;\n};\n\n/**\n * Rewrites relative image links to absolute GitHub raw URLs.\n *\n * @example\n * rewriteRelativeImages(\"![Alt](./assets/img.png)\", \"https://raw.githubusercontent.com/opral/lix/main/packages/plugin-md/README.md\")\n */\nfunction rewriteRelativeImages(markdown: string, readmeUrl: string) {\n  const base = readmeUrl.replace(/\\/README\\.md$/, \"/\");\n  return markdown.replace(\n    /!\\[([^\\]]*)\\]\\((?!https?:\\/\\/)([^)]+)\\)/g,\n    (match, alt, url) => {\n      void match;\n      const normalized = url.replace(/^\\.?\\//, \"\");\n      return `![${alt}](${base}${normalized})`;\n    },\n  );\n}\n\n/**\n * Rewrites relative links to GitHub tree URLs.\n *\n * @example\n * rewriteRelativeLinks(\"[Example](./example)\", \"https://raw.githubusercontent.com/opral/lix/main/packages/plugin-md/README.md\")\n */\nfunction rewriteRelativeLinks(markdown: string, readmeUrl: string) {\n  const repoBase = readmeUrl\n    .replace(\"https://raw.githubusercontent.com/\", \"https://github.com/\")\n    .replace(/\\/README\\.md$/, \"\");\n  return markdown.replace(\n    /\\[([^\\]]+)\\]\\((?!https?:\\/\\/)([^)]+)\\)/g,\n    (match, text, url) => {\n      if (url.startsWith(\"#\")) {\n        return match;\n      }\n      const normalized = url.replace(/^\\.?\\//, \"\");\n      return `[${text}](${repoBase}/${normalized})`;\n    },\n  );\n}\n\n/**\n * Loads the plugin registry from disk.\n *\n * @example\n * const registry = await loadRegistry(\"/path/to/plugin.registry.json\");\n */\nasync function loadRegistry(registryPath: string): Promise<PluginRegistry> {\n  const raw = await readFile(registryPath, \"utf8\");\n  return 
JSON.parse(raw) as PluginRegistry;\n}\n\nfunction ensureTrailingSentence(value: string) {\n  return /[.!?]$/.test(value) ? value : `${value}.`;\n}\n\nexport function buildSeoFrontmatter(\n  plugin: NonNullable<PluginRegistry[\"plugins\"]>[number],\n) {\n  const title = plugin.name ?? plugin.key;\n  const description = plugin.description\n    ? `${ensureTrailingSentence(plugin.description.trim())} Learn how to install it, supported file types, and how it fits into Lix workflows.`\n    : `Learn how to install ${title}, supported file types, and how it fits into Lix workflows.`;\n\n  return [\n    \"---\",\n    `title: ${JSON.stringify(title)}`,\n    `description: ${JSON.stringify(description)}`,\n    \"---\",\n    \"\",\n  ].join(\"\\n\");\n}\n\n/**\n * Downloads plugin readmes and writes them to the content directory.\n *\n * @example\n * await syncPluginReadmes(registry, \"/content/plugins\");\n */\nasync function syncPluginReadmes(registry: PluginRegistry, contentDir: string) {\n  const plugins = Array.isArray(registry.plugins) ? registry.plugins : [];\n  await mkdir(contentDir, { recursive: true });\n\n  await Promise.all(\n    plugins.map(async (plugin) => {\n      if (!plugin?.key || !plugin?.readme) {\n        throw new Error(`Missing readme entry for plugin ${plugin?.key ?? 
\"\"}`);\n      }\n\n      const destination = path.join(contentDir, `${plugin.key}.md`);\n      let response: Response;\n      try {\n        response = await fetch(plugin.readme);\n      } catch (error) {\n        if (existsSync(destination)) {\n          console.warn(\n            `Failed to fetch ${plugin.readme}; using cached ${destination}`,\n          );\n          return;\n        }\n        throw error;\n      }\n\n      if (!response.ok) {\n        if (existsSync(destination)) {\n          console.warn(\n            `Failed to fetch ${plugin.readme} (${response.status} ${response.statusText}); using cached ${destination}`,\n          );\n          return;\n        }\n        throw new Error(\n          `Failed to fetch ${plugin.readme} (${response.status} ${response.statusText})`,\n        );\n      }\n\n      const markdown = rewriteRelativeLinks(\n        rewriteRelativeImages(await response.text(), plugin.readme),\n        plugin.readme,\n      );\n      const content = `${buildSeoFrontmatter(plugin)}${markdown}`;\n      await writeFile(destination, content);\n    }),\n  );\n}\n\n/**\n * Vite plugin that syncs plugin READMEs into local content.\n *\n * @example\n * pluginReadmeSync()\n */\nexport function pluginReadmeSync(): Plugin {\n  return {\n    name: \"plugin-readme-sync\",\n    async buildStart() {\n      const root = process.cwd();\n      const registryPath = path.join(\n        root,\n        \"src/routes/plugins/plugin.registry.json\",\n      );\n      const contentDir = path.join(root, \"content/plugins\");\n      const registry = await loadRegistry(registryPath);\n      await syncPluginReadmes(registry, contentDir);\n      console.log(\"copied plugin readmes\");\n    },\n  };\n}\n"
  },
  {
    "path": "packages/website/scripts/post-build-seo.js",
    "content": "import fs from \"node:fs\";\nimport path from \"node:path\";\n\nconst SITE_URL = \"https://lix.dev\";\nconst SITEMAP_PATH = path.resolve(\"dist/client/sitemap.xml\");\nconst ALIAS_URLS = new Set([`${SITE_URL}/docs`, `${SITE_URL}/guide`]);\n\nfunction isAliasUrl(url) {\n  return ALIAS_URLS.has(url) || url.startsWith(`${SITE_URL}/guide/`);\n}\n\nif (fs.existsSync(SITEMAP_PATH)) {\n  const sitemap = fs.readFileSync(SITEMAP_PATH, \"utf8\");\n  const filtered = sitemap.replace(\n    /<url>\\s*<loc>([^<]+)<\\/loc>[\\s\\S]*?<\\/url>/g,\n    (match, loc) => (isAliasUrl(loc) ? \"\" : match),\n  );\n  fs.writeFileSync(SITEMAP_PATH, filtered.trimEnd().concat(\"\\n\"));\n}\n"
  },
  {
    "path": "packages/website/src/blog/blogMetadata.ts",
    "content": "import { getMarkdownDescription, getMarkdownTitle } from \"../lib/seo\";\n\ntype BlogMetadataInput = {\n  rawMarkdown: string;\n  frontmatter?: Record<string, unknown>;\n};\n\nexport function getBlogTitle({ rawMarkdown, frontmatter }: BlogMetadataInput) {\n  return getMarkdownTitle({ rawMarkdown, frontmatter });\n}\n\nexport function getBlogDescription({\n  rawMarkdown,\n  frontmatter,\n}: BlogMetadataInput) {\n  return getMarkdownDescription({ rawMarkdown, frontmatter });\n}\n"
  },
  {
    "path": "packages/website/src/blog/og-image.ts",
    "content": "import { buildCanonicalUrl } from \"../lib/seo\";\n\nexport function resolveOgImageUrl(value: string, folderName: string): string {\n  if (isAbsoluteUrl(value)) return value;\n  const base = `${buildCanonicalUrl(`/blog/${folderName}`)}/`;\n  return new URL(value, base).toString();\n}\n\nexport function resolveBlogAssetPath(\n  value: string,\n  folderName: string,\n): string {\n  if (isAbsoluteUrl(value)) return value;\n  if (value.startsWith(\"/\")) return value;\n  const normalized = value.replace(/^\\.\\//, \"\");\n  return `/blog/${folderName}/${normalized}`;\n}\n\nfunction isAbsoluteUrl(value: string): boolean {\n  return /^[a-z][a-z0-9+.-]*:/.test(value);\n}\n"
  },
  {
    "path": "packages/website/src/components/code-snippet.tsx",
    "content": "import { useEffect, useState } from \"react\";\nimport { bundledLanguages, createHighlighter, type Highlighter } from \"shiki\";\n\n// Global highlighter instance.\nlet highlighterPromise: Promise<Highlighter> | null = null;\n\nasync function getHighlighter(): Promise<Highlighter> {\n  if (!highlighterPromise) {\n    highlighterPromise = createHighlighter({\n      themes: [\"github-light\", \"github-dark\"],\n      langs: Object.keys(bundledLanguages),\n    });\n  }\n  return highlighterPromise;\n}\n\ninterface CodeBlockProps {\n  code: string;\n  language?: string;\n  showLineNumbers?: boolean;\n}\n\nfunction CodeBlock({\n  code,\n  language = \"typescript\",\n  showLineNumbers = false,\n}: CodeBlockProps) {\n  const [isCopied, setIsCopied] = useState(false);\n  const [highlightedHtml, setHighlightedHtml] = useState<string>(\"\");\n\n  useEffect(() => {\n    let cancelled = false;\n\n    const highlight = async () => {\n      try {\n        const highlighter = await getHighlighter();\n\n        if (cancelled) return;\n\n        const html = highlighter.codeToHtml(code, {\n          lang: language,\n          theme: \"github-light\",\n          transformers: showLineNumbers\n            ? 
[\n                {\n                  line(node: any, line: number) {\n                    if (node.properties) {\n                      node.properties[\"data-line\"] = String(line);\n                    }\n                    return node;\n                  },\n                },\n              ]\n            : [],\n        });\n\n        setHighlightedHtml(html);\n      } catch (error) {\n        console.error(\"Failed to highlight code:\", error);\n        setHighlightedHtml(`<pre><code>${escapeHtml(code)}</code></pre>`);\n      }\n    };\n\n    highlight();\n\n    return () => {\n      cancelled = true;\n    };\n  }, [code, language, showLineNumbers]);\n\n  const displayHtml =\n    highlightedHtml || `<pre><code>${escapeHtml(code)}</code></pre>`;\n\n  const handleCopy = async () => {\n    try {\n      await navigator.clipboard.writeText(code);\n      setIsCopied(true);\n      setTimeout(() => setIsCopied(false), 2000);\n    } catch (err) {\n      console.error(\"Failed to copy:\", err);\n    }\n  };\n\n  return (\n    <div className=\"relative group rounded-lg overflow-hidden border border-gray-200 bg-gray-50\">\n      <div\n        className=\"overflow-x-auto p-4 [&_pre]:!bg-transparent [&_pre]:!p-0\"\n        dangerouslySetInnerHTML={{ __html: displayHtml }}\n        style={{\n          whiteSpace: \"pre\",\n          wordWrap: \"normal\",\n          fontSize: \"14px\",\n        }}\n      />\n      <div className=\"absolute top-2 right-2 opacity-0 group-hover:opacity-100 transition-opacity\">\n        <button\n          title=\"Copy code\"\n          onClick={handleCopy}\n          className=\"p-1.5 rounded bg-gray-200 hover:bg-gray-300 transition-colors\"\n        >\n          {!isCopied ? 
(\n            <svg\n              xmlns=\"http://www.w3.org/2000/svg\"\n              width=\"20\"\n              height=\"20\"\n              viewBox=\"0 0 24 24\"\n              aria-hidden=\"true\"\n            >\n              <path\n                fill=\"currentColor\"\n                d=\"M19 3h-4.18C14.4 1.84 13.3 1 12 1s-2.4.84-2.82 2H5c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2m-7 0c.55 0 1 .45 1 1s-.45 1-1 1s-1-.45-1-1s.45-1 1-1m7 16H5V5h2v3h10V5h2z\"\n              />\n            </svg>\n          ) : (\n            <svg width=\"20\" height=\"20\" viewBox=\"0 0 24 24\" aria-hidden=\"true\">\n              <path\n                fill=\"#22c55e\"\n                d=\"m9 16.17l-4.17-4.17l-1.42 1.41L9 19L21 7l-1.41-1.41z\"\n              />\n            </svg>\n          )}\n        </button>\n      </div>\n    </div>\n  );\n}\n\nfunction escapeHtml(code: string): string {\n  return code\n    .replace(/&/g, \"&amp;\")\n    .replace(/</g, \"&lt;\")\n    .replace(/>/g, \"&gt;\")\n    .replace(/\"/g, \"&quot;\")\n    .replace(/'/g, \"&#039;\");\n}\n\nfunction formatConsoleOutput(\n  outputs: Array<{\n    level: string;\n    args: Array<{ type: string; content: string }>;\n    timestamp: string;\n    section?: string;\n  }>,\n): string {\n  return outputs\n    .map((entry) => {\n      const prefix =\n        entry.level !== \"log\" ? 
`// ${entry.level.toUpperCase()}: ` : \"\";\n      const content = entry.args.map((arg) => arg.content).join(\" \");\n      return prefix + content;\n    })\n    .join(\"\\n\\n\");\n}\n\ninterface CodeSnippetProps {\n  module: any;\n  srcCode: string;\n  sections?: string[];\n}\n\nfunction dedentCode(code: string): string {\n  const lines = code.split(\"\\n\");\n\n  let startIndex = 0;\n  let endIndex = lines.length - 1;\n\n  while (startIndex < lines.length && lines[startIndex].trim() === \"\") {\n    startIndex++;\n  }\n\n  while (endIndex > startIndex && lines[endIndex].trim() === \"\") {\n    endIndex--;\n  }\n\n  const trimmedLines = lines.slice(startIndex, endIndex + 1);\n\n  const minIndent = trimmedLines\n    .filter((line) => line.trim().length > 0)\n    .reduce((min, line) => {\n      const match = line.match(/^(\\s*)/);\n      const indent = match ? match[1].length : 0;\n      return Math.min(min, indent);\n    }, Infinity);\n\n  if (minIndent === Infinity || minIndent === 0) return trimmedLines.join(\"\\n\");\n\n  return trimmedLines.map((line) => line.slice(minIndent)).join(\"\\n\");\n}\n\nfunction parseSections(code: string): {\n  sections: Record<string, string>;\n  imports: string;\n  fullCode: string;\n  sectionRanges: Record<string, { start: number; end: number }>;\n} {\n  const sections: Record<string, string> = {};\n  const sectionRanges: Record<string, { start: number; end: number }> = {};\n  const lines = code.split(\"\\n\");\n  const importLines: string[] = [];\n\n  let currentSection: string | null = null;\n  let sectionContent: string[] = [];\n  let sectionStartLine = 0;\n  let inSection = false;\n\n  for (let i = 0; i < lines.length; i++) {\n    const line = lines[i];\n\n    const sectionStartMatch = line.match(/SECTION\\s+START\\s+['\"]([^'\"]+)['\"]/);\n    if (sectionStartMatch) {\n      currentSection = sectionStartMatch[1];\n      sectionContent = [];\n      sectionStartLine = i + 1;\n      inSection = true;\n      continue;\n    
}\n\n    const sectionEndMatch = line.match(/SECTION\\s+END\\s+['\"]([^'\"]+)['\"]/);\n    if (sectionEndMatch) {\n      if (currentSection && sectionContent.length > 0) {\n        sections[currentSection] = dedentCode(sectionContent.join(\"\\n\"));\n        sectionRanges[currentSection] = { start: sectionStartLine, end: i - 1 };\n      }\n      currentSection = null;\n      inSection = false;\n      continue;\n    }\n\n    if (!inSection && line.match(/^import\\s+/)) {\n      importLines.push(line);\n    }\n\n    if (\n      inSection &&\n      currentSection &&\n      !line.includes(\"export default async function\") &&\n      !line.match(/^}$/)\n    ) {\n      sectionContent.push(line);\n    }\n  }\n\n  const fullCode = lines\n    .filter(\n      (line) =>\n        !line.match(/SECTION\\s+(START|END)\\s+['\"]([^'\"]+)['\"]/) &&\n        !line.includes(\"SECTION\"),\n    )\n    .filter((line) => !line.includes(\"export default async function\"))\n    .filter((line) => !line.match(/^}$/))\n    .join(\"\\n\")\n    .trim();\n\n  return { sections, imports: importLines.join(\"\\n\"), fullCode, sectionRanges };\n}\n\nfunction transformDynamicImports(code: string): string {\n  let transformedCode = code;\n\n  transformedCode = transformedCode.replace(\n    /const\\s*{\\s*(\\w+)\\s*:\\s*(\\w+)\\s*}\\s*=\\s*await\\s+import\\s*\\(\\s*[\"']([^\"']+)[\"']\\s*\\)\\s*;/g,\n    'import { $1 as $2 } from \"$3\";',\n  );\n\n  transformedCode = transformedCode.replace(\n    /const\\s*{\\s*([^}]+)\\s*}\\s*=\\s*await\\s+import\\s*\\(\\s*[\"']([^\"']+)[\"']\\s*\\)\\s*;/g,\n    'import { $1 } from \"$2\";',\n  );\n\n  transformedCode = transformedCode.replace(\n    /const\\s+(\\w+)\\s*=\\s*await\\s+import\\s*\\(\\s*[\"']([^\"']+)[\"']\\s*\\)\\s*;/g,\n    'import $1 from \"$2\";',\n  );\n\n  return transformedCode;\n}\n\nfunction combineSections(\n  allSections: Record<string, string>,\n  selectedSections?: string[],\n): string {\n  if (!selectedSections || selectedSections.length === 
0) {\n    return Object.values(allSections).join(\"\\n\\n\");\n  }\n\n  return selectedSections\n    .map((sectionName) => allSections[sectionName])\n    .filter(Boolean)\n    .join(\"\\n\\n\")\n    .replace(/console1\\.log/g, \"console.log\");\n}\n\nfunction getPrerequisiteCode(\n  allSections: Record<string, string>,\n  selectedSections: string[],\n  imports: string,\n): string {\n  const sectionNames = Object.keys(allSections);\n  const firstSelectedIndex = Math.min(\n    ...selectedSections.map((s) => sectionNames.indexOf(s)),\n  );\n\n  const prerequisiteSections = sectionNames\n    .slice(0, firstSelectedIndex)\n    .map((name) => allSections[name])\n    .filter(Boolean)\n    .join(\"\\n\\n\");\n\n  return [imports, prerequisiteSections].filter(Boolean).join(\"\\n\\n\");\n}\n\n/**\n * Normalizes raw source input from bundlers into usable code text.\n *\n * @example\n * decodeSource('import { foo } from \"./bar\";');\n */\nfunction decodeSource(source: string): string {\n  const trimmed = source.trim().replace(/;$/, \"\");\n  if (isJsonStringLiteral(trimmed)) {\n    return JSON.parse(trimmed) as string;\n  }\n  return trimmed;\n}\n\n/**\n * Checks if a string is a valid JSON string literal.\n *\n * @example\n * isJsonStringLiteral('\"hello\"');\n */\nfunction isJsonStringLiteral(value: string): boolean {\n  if (value.length < 2 || value[0] !== '\"' || value[value.length - 1] !== '\"') {\n    return false;\n  }\n  return /^\"(?:\\\\[\"\\\\/bfnrt]|\\\\u[0-9a-fA-F]{4}|[^\"\\\\])*\"$/.test(value);\n}\n\n/**\n * Interactive code example for docs, showing selected SECTIONs and output.\n *\n * The `module` must default-export an async function that accepts a mock console.\n *\n * @example\n * <CodeSnippet module={example} srcCode={raw} sections={[\"my-section\"]} />\n */\nexport default function CodeSnippet({\n  module,\n  srcCode,\n  sections,\n}: CodeSnippetProps) {\n  const [setupExpanded, setSetupExpanded] = useState(false);\n  const [outputExpanded, 
setOutputExpanded] = useState(false);\n  const [hasExecuted, setHasExecuted] = useState(false);\n  const [isExecuting, setIsExecuting] = useState(false);\n  const [consoleOutput, setConsoleOutput] = useState<\n    Array<{\n      level: string;\n      args: Array<{ type: string; content: string }>;\n      timestamp: string;\n      section?: string;\n    }>\n  >([]);\n\n  const decodedSrcCode = decodeSource(srcCode);\n\n  const { sections: allSections, imports } = parseSections(\n    decodedSrcCode.replace(/console1\\.log/g, \"console.log\"),\n  );\n  const currentCode = transformDynamicImports(\n    combineSections(allSections, sections),\n  );\n  const prerequisiteCode = sections\n    ? transformDynamicImports(\n        getPrerequisiteCode(allSections, sections, imports),\n      )\n    : \"\";\n\n  const executeCode = async () => {\n    if (isExecuting) return;\n\n    setIsExecuting(true);\n    setConsoleOutput([]);\n    setHasExecuted(true);\n\n    try {\n      const outputs: Array<{\n        level: string;\n        args: Array<{ type: string; content: string }>;\n        timestamp: string;\n        section?: string;\n      }> = [];\n\n      const logOutput = (\n        level: string,\n        section: string | undefined,\n        ...args: any[]\n      ) => {\n        const formattedArgs = args.map((arg: any) => {\n          if (typeof arg === \"object\" && arg !== null) {\n            return {\n              type: \"object\",\n              content: JSON.stringify(arg, null, 2),\n            };\n          }\n          return {\n            type: \"primitive\",\n            content: String(arg),\n          };\n        });\n\n        outputs.push({\n          level,\n          args: formattedArgs,\n          timestamp: new Date().toLocaleTimeString(),\n          section,\n        });\n      };\n\n      try {\n        if (module.default && typeof module.default === \"function\") {\n          let currentSection: string | undefined = undefined;\n\n          const 
mockConsole = {\n            log: (...args: any[]) => {\n              const firstArg = String(args[0]);\n              const sectionStartMatch = firstArg.match(\n                /SECTION\\s+START\\s+['\"]([^'\"]+)['\"]/,\n              );\n              if (sectionStartMatch) {\n                currentSection = sectionStartMatch[1];\n                return;\n              }\n              const sectionEndMatch = firstArg.match(\n                /SECTION\\s+END\\s+['\"]([^'\"]+)['\"]/,\n              );\n              if (sectionEndMatch) {\n                return;\n              }\n              logOutput(\"log\", currentSection, ...args);\n            },\n            warn: (...args: any[]) => {\n              logOutput(\"warn\", currentSection, ...args);\n            },\n            error: (...args: any[]) => {\n              logOutput(\"error\", currentSection, ...args);\n            },\n            info: (...args: any[]) => {\n              logOutput(\"info\", currentSection, ...args);\n            },\n          };\n\n          await module.default(mockConsole);\n        } else {\n          logOutput(\n            \"error\",\n            undefined,\n            \"Module doesn't export default function\",\n          );\n        }\n      } catch (error) {\n        console.error(\"Error executing code:\", error);\n        logOutput(\"error\", undefined, \"Error executing code:\", error);\n      }\n\n      setConsoleOutput(outputs);\n    } finally {\n      setIsExecuting(false);\n    }\n  };\n\n  return (\n    <div className=\"my-4\">\n      {prerequisiteCode && (\n        <div className=\"mb-2\">\n          <button\n            onClick={() => setSetupExpanded(!setupExpanded)}\n            className=\"text-blue-600 text-sm font-medium flex items-center gap-1 opacity-70 hover:opacity-100 transition-opacity\"\n          >\n            <svg\n              className={`w-3 h-3 transition-transform ${\n                setupExpanded ? 
\"rotate-90\" : \"\"\n              }`}\n              fill=\"none\"\n              stroke=\"currentColor\"\n              viewBox=\"0 0 24 24\"\n              aria-hidden=\"true\"\n            >\n              <path\n                strokeLinecap=\"round\"\n                strokeLinejoin=\"round\"\n                strokeWidth={2}\n                d=\"M9 5l7 7-7 7\"\n              />\n            </svg>\n            Show hidden code\n          </button>\n\n          <div\n            className={`overflow-hidden transition-all duration-300 ease-in-out ${\n              setupExpanded ? \"max-h-[800px] mt-2\" : \"max-h-0\"\n            }`}\n          >\n            <CodeBlock\n              code={prerequisiteCode}\n              language=\"typescript\"\n              showLineNumbers={true}\n            />\n          </div>\n        </div>\n      )}\n\n      <CodeBlock\n        code={currentCode}\n        language=\"typescript\"\n        showLineNumbers={false}\n      />\n\n      <div className=\"mt-4\">\n        <button\n          onClick={async () => {\n            if (!hasExecuted) {\n              await executeCode();\n            }\n            setOutputExpanded(!outputExpanded);\n          }}\n          disabled={isExecuting}\n          className=\"text-blue-600 hover:text-blue-700 text-sm font-medium flex items-center gap-1 mb-2 disabled:opacity-50\"\n        >\n          <svg\n            className={`w-3 h-3 transition-transform ${\n              outputExpanded ? \"rotate-90\" : \"\"\n            }`}\n            fill=\"none\"\n            stroke=\"currentColor\"\n            viewBox=\"0 0 24 24\"\n            aria-hidden=\"true\"\n          >\n            <path\n              strokeLinecap=\"round\"\n              strokeLinejoin=\"round\"\n              strokeWidth={2}\n              d=\"M9 5l7 7-7 7\"\n            />\n          </svg>\n          {isExecuting ? 
\"Running...\" : \"Show output\"}\n        </button>\n\n        <div\n          className={`overflow-hidden transition-all duration-300 ease-in-out ${\n            outputExpanded && hasExecuted ? \"max-h-[800px]\" : \"max-h-0\"\n          }`}\n        >\n          {hasExecuted && (\n            <CodeBlock\n              code={(() => {\n                const filteredOutput = sections\n                  ? consoleOutput.filter(\n                      (output) =>\n                        !output.section || sections.includes(output.section),\n                    )\n                  : consoleOutput;\n\n                if (filteredOutput.length === 0) {\n                  return \"// No output\";\n                }\n\n                return formatConsoleOutput(filteredOutput);\n              })()}\n              language=\"javascript\"\n              showLineNumbers={false}\n            />\n          )}\n        </div>\n      </div>\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/website/src/components/doc-code-snippet-element.tsx",
    "content": "import { createRoot, type Root } from \"react-dom/client\";\nimport CodeSnippet from \"./code-snippet\";\n\nconst exampleModules = import.meta.glob<any>(\"../docs-examples/*.ts\");\n\nconst exampleSources = import.meta.glob<string>(\"../docs-examples/*.ts\", {\n  eager: true,\n  import: \"default\",\n  query: \"?raw\",\n});\n\nfunction fileBasename(path: string): string {\n  const last = path.split(\"/\").pop() ?? path;\n  return last.replace(/\\.ts$/, \"\");\n}\n\nconst modulesByName = new Map<string, any>();\nconst sourcesByName = new Map<string, string>();\n\nfor (const [path, loader] of Object.entries(exampleModules)) {\n  modulesByName.set(fileBasename(path), loader);\n}\nfor (const [path, src] of Object.entries(exampleSources)) {\n  sourcesByName.set(fileBasename(path), src);\n}\n\nfunction parseSectionsAttribute(value: string | null): string[] | undefined {\n  if (!value) return undefined;\n  const trimmed = value.trim();\n  if (!trimmed) return undefined;\n  if (trimmed.startsWith(\"[\")) {\n    return JSON.parse(trimmed) as string[];\n  }\n  return trimmed\n    .split(\",\")\n    .map((s) => s.trim())\n    .filter(Boolean);\n}\n\nclass DocCodeSnippetElement extends HTMLElement {\n  private reactRoot: Root | null = null;\n  private mountEl: HTMLDivElement | null = null;\n  private renderSeq = 0;\n\n  static get observedAttributes() {\n    return [\"example\", \"sections\"];\n  }\n\n  connectedCallback() {\n    if (!this.mountEl) {\n      this.mountEl = document.createElement(\"div\");\n      this.appendChild(this.mountEl);\n    }\n    this.renderReact();\n  }\n\n  attributeChangedCallback() {\n    this.renderReact();\n  }\n\n  private async renderReact() {\n    if (!this.mountEl) return;\n\n    const exampleName = this.getAttribute(\"example\")?.trim();\n    if (!exampleName) {\n      throw new Error(\"<doc-code-snippet> requires an example attribute.\");\n    }\n\n    const loader = modulesByName.get(exampleName) as\n      | (() => 
Promise<any>)\n      | undefined;\n    const src = sourcesByName.get(exampleName);\n    if (!loader || !src) {\n      this.replaceChildren();\n      return;\n    }\n\n    const seq = ++this.renderSeq;\n    const mod = await loader();\n    if (seq !== this.renderSeq) return;\n\n    const sections = parseSectionsAttribute(this.getAttribute(\"sections\"));\n\n    if (!this.reactRoot) {\n      this.reactRoot = createRoot(this.mountEl);\n    }\n\n    this.reactRoot.render(\n      <CodeSnippet module={mod} srcCode={src} sections={sections} />,\n    );\n  }\n}\n\nif (typeof window !== \"undefined\" && !customElements.get(\"doc-code-snippet\")) {\n  customElements.define(\"doc-code-snippet\", DocCodeSnippetElement);\n}\n"
  },
  {
    "path": "packages/website/src/components/docs-layout.tsx",
    "content": "import { Link } from \"@tanstack/react-router\";\nimport { useEffect, useState } from \"react\";\nimport { Footer } from \"./footer\";\nimport { Header, MenuIcon } from \"./header\";\n\nexport type SidebarSection = {\n  label: string;\n  items: Array<{\n    label: string;\n    href: string;\n    relativePath: string;\n  }>;\n};\n\nexport type PageTocItem = {\n  id: string;\n  label: string;\n  level: number;\n};\n\n/**\n * VitePress-style documentation shell with header, left sidebar, and main content.\n *\n * The sidebar is driven from the docs table of contents and highlights the\n * active entry based on the current doc relative path.\n *\n * @example\n * <DocsLayout\n *   toc={toc}\n *   sidebarSections={[\n *     { label: \"Overview\", items: [{ label: \"What is Lix?\", href: \"/docs/what-is-lix\", relativePath: \"./what-is-lix.md\" }] },\n *   ]}\n *   activeRelativePath=\"./what-is-lix.md\"\n *   pageToc={[{ id: \"intro\", label: \"Intro\", level: 2 }]}\n * >\n *   <MarkdownPage html=\"<h1>Hello</h1>\" markdown=\"# Hello\" />\n * </DocsLayout>\n */\nexport function DocsLayout({\n  sidebarSections,\n  activeRelativePath,\n  pageToc,\n  children,\n}: {\n  sidebarSections: SidebarSection[];\n  activeRelativePath?: string;\n  pageToc?: PageTocItem[];\n  children: React.ReactNode;\n}) {\n  const [isMobileMenuOpen, setIsMobileMenuOpen] = useState(false);\n  const hasPageToc = Boolean(pageToc && pageToc.length > 0);\n  const [activeTocId, setActiveTocId] = useState<string | null>(null);\n\n  useEffect(() => {\n    if (!pageToc || pageToc.length === 0) return;\n\n    const headings = pageToc\n      .map((item) => document.getElementById(item.id))\n      .filter((node): node is HTMLElement => Boolean(node));\n\n    if (headings.length === 0) return;\n\n    const updateActiveHeading = () => {\n      const activationOffset = 96;\n      let activeHeading = headings[0];\n\n      for (const heading of headings) {\n        if 
(heading.getBoundingClientRect().top <= activationOffset) {\n          activeHeading = heading;\n        } else {\n          break;\n        }\n      }\n\n      setActiveTocId((current) =>\n        current === activeHeading.id ? current : activeHeading.id,\n      );\n    };\n\n    updateActiveHeading();\n    window.addEventListener(\"scroll\", updateActiveHeading, { passive: true });\n    window.addEventListener(\"resize\", updateActiveHeading);\n\n    return () => {\n      window.removeEventListener(\"scroll\", updateActiveHeading);\n      window.removeEventListener(\"resize\", updateActiveHeading);\n    };\n  }, [pageToc]);\n\n  const SidebarContent = () => (\n    <nav\n      aria-label=\"Documentation sidebar\"\n      className=\"px-6 pt-6 pb-8 space-y-6\"\n    >\n      {sidebarSections.map((section) => (\n        <section key={section.label} className=\"space-y-3\">\n          <h2 className=\"text-sm font-semibold text-slate-900\">\n            {section.label}\n          </h2>\n          <ul>\n            {section.items.map((item) => {\n              const isActive = item.relativePath === activeRelativePath;\n              return (\n                <li key={item.href}>\n                  <Link\n                    to={item.href}\n                    onClick={() => setIsMobileMenuOpen(false)}\n                    className={[\n                      \"block py-1 text-sm leading-6 transition-colors\",\n                      isActive\n                        ? 
\"font-medium text-[#0891B2]\"\n                        : \"text-slate-600 hover:text-slate-900\",\n                    ].join(\" \")}\n                  >\n                    {item.label}\n                  </Link>\n                </li>\n              );\n            })}\n          </ul>\n        </section>\n      ))}\n    </nav>\n  );\n\n  return (\n    <div className=\"min-h-screen bg-white text-slate-900\">\n      <div className=\"sticky top-0 z-50\">\n        <Header />\n        {/* Mobile menu bar - below header, above content */}\n        <div className=\"border-b border-gray-200 bg-white lg:hidden\">\n          <div className=\"mx-auto flex w-full max-w-[1440px] items-center px-6 py-2\">\n            <button\n              onClick={() => setIsMobileMenuOpen(true)}\n              className=\"flex items-center gap-2 text-sm font-medium text-gray-700\"\n              aria-label=\"Open menu\"\n            >\n              <MenuIcon className=\"h-5 w-5\" />\n              <span>Menu</span>\n            </button>\n          </div>\n        </div>\n      </div>\n      {/* Mobile sidebar overlay */}\n      {isMobileMenuOpen && (\n        <>\n          <div\n            className=\"fixed inset-0 bg-black/50 z-40 lg:hidden\"\n            onClick={() => setIsMobileMenuOpen(false)}\n            aria-hidden=\"true\"\n          />\n          <aside className=\"fixed inset-y-0 left-0 w-full bg-slate-50 border-r border-slate-200 overflow-y-auto z-50 lg:hidden\">\n            <div className=\"sticky top-0 bg-slate-50 px-6 py-3 flex justify-end items-center\">\n              <button\n                onClick={() => setIsMobileMenuOpen(false)}\n                className=\"text-slate-600 hover:text-slate-900\"\n                aria-label=\"Close menu\"\n              >\n                <svg\n                  xmlns=\"http://www.w3.org/2000/svg\"\n                  viewBox=\"0 0 24 24\"\n                  fill=\"none\"\n                  stroke=\"currentColor\"\n               
   strokeWidth=\"2\"\n                  strokeLinecap=\"round\"\n                  strokeLinejoin=\"round\"\n                  className=\"h-5 w-5\"\n                  aria-hidden=\"true\"\n                >\n                  <line x1=\"18\" y1=\"6\" x2=\"6\" y2=\"18\" />\n                  <line x1=\"6\" y1=\"6\" x2=\"18\" y2=\"18\" />\n                </svg>\n              </button>\n            </div>\n            <SidebarContent />\n          </aside>\n        </>\n      )}\n      <div className=\"relative\">\n        <div className=\"absolute left-0 top-14 hidden h-[calc(100vh-3.5rem)] w-64 bg-slate-50 -z-10 lg:block\" />\n        <div className=\"mx-auto flex w-full max-w-[1440px]\">\n          <aside className=\"sticky top-14 hidden h-[calc(100vh-3.5rem)] w-64 shrink-0 overflow-y-auto border-r border-slate-200 lg:block\">\n            <SidebarContent />\n          </aside>\n\n          <main id=\"VPContent\" className=\"min-w-0 flex-1\">\n            <div className=\"px-6 py-8 lg:px-8\">\n              <div className=\"mx-auto w-full max-w-3xl\">{children}</div>\n            </div>\n          </main>\n\n          {hasPageToc && (\n            <aside className=\"sticky top-14 hidden h-[calc(100vh-3.5rem)] w-64 shrink-0 xl:block\">\n              <nav\n                aria-label=\"On this page\"\n                className=\"px-6 py-8 text-sm text-slate-600\"\n              >\n                <div className=\"text-sm font-semibold text-slate-900\">\n                  On this page\n                </div>\n                <ul className=\"mt-3 space-y-2 border-l border-slate-200 pl-4\">\n                  {pageToc?.map((item) => {\n                    const isActive = item.id === activeTocId;\n                    return (\n                      <li key={item.id} className=\"relative\">\n                        {isActive && (\n                          <span\n                            className=\"absolute -left-4 top-1/2 h-5 w-0.5 -translate-y-1/2 
bg-[#0891B2]\"\n                            aria-hidden=\"true\"\n                          />\n                        )}\n                        <a\n                          href={`#${item.id}`}\n                          className={[\n                            \"block transition-colors\",\n                            item.level > 2 ? \"pl-3\" : \"\",\n                            isActive\n                              ? \"font-medium text-[#0891B2]\"\n                              : \"text-slate-600 hover:text-slate-900\",\n                          ].join(\" \")}\n                        >\n                          {item.label}\n                        </a>\n                      </li>\n                    );\n                  })}\n                </ul>\n              </nav>\n            </aside>\n          )}\n        </div>\n      </div>\n      <Footer />\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/website/src/components/docs-prev-next.tsx",
    "content": "import { PrevNextNav } from \"./prev-next-nav\";\n\ntype DocRoute = {\n  slug: string;\n  title?: string;\n};\n\nconst navTitleOverrides: Record<string, string> = {\n  \"next-js\": \"Next.js\",\n  \"api-reference\": \"API Reference\",\n};\n\nfunction formatNavTitle(input: string) {\n  const normalized = input.toLowerCase();\n  if (normalized in navTitleOverrides) {\n    return navTitleOverrides[normalized];\n  }\n  return normalized\n    .split(\"-\")\n    .filter(Boolean)\n    .map((word) => word[0]?.toUpperCase() + word.slice(1))\n    .join(\" \");\n}\n\nexport function DocsPrevNext({\n  currentSlug,\n  routes,\n}: {\n  currentSlug: string;\n  routes: DocRoute[];\n}) {\n  const currentIndex = routes.findIndex((item) => item.slug === currentSlug);\n  if (currentIndex === -1 || routes.length <= 1) return null;\n\n  const prevRoute = currentIndex > 0 ? routes[currentIndex - 1] : null;\n  const nextRoute =\n    currentIndex < routes.length - 1 ? routes[currentIndex + 1] : null;\n\n  const prev = prevRoute\n    ? {\n        slug: prevRoute.slug,\n        title: prevRoute.title ?? formatNavTitle(prevRoute.slug),\n      }\n    : null;\n  const next = nextRoute\n    ? {\n        slug: nextRoute.slug,\n        title: nextRoute.title ?? formatNavTitle(nextRoute.slug),\n      }\n    : null;\n\n  return (\n    <PrevNextNav\n      prev={prev}\n      next={next}\n      basePath=\"/docs\"\n      paramName=\"slugId\"\n      prevLabel=\"Previous\"\n      nextLabel=\"Next\"\n      className=\"mt-8\"\n    />\n  );\n}\n"
  },
  {
    "path": "packages/website/src/components/footer.tsx",
    "content": "import { getGithubStars } from \"../github-stars-cache\";\n\nconst footerLinks = [\n  { href: \"/docs\", label: \"Docs\", emoji: \"📘\" },\n  { href: \"/blog\", label: \"Blog\", emoji: \"📝\" },\n  { href: \"/rfc\", label: \"RFCs\", emoji: \"📄\" },\n];\n\nexport function Footer() {\n  const githubStars = getGithubStars(\"opral/lix\");\n\n  const formatStars = (count: number) => {\n    if (count >= 1000) {\n      return `${(count / 1000).toFixed(1).replace(/\\.0$/, \"\")}k`;\n    }\n    return count.toString();\n  };\n\n  return (\n    <footer className=\"bg-white\">\n      <div className=\"border-t border-gray-200\">\n        <div className=\"flex flex-col gap-3 px-6 py-10 sm:flex-row sm:justify-center sm:gap-8\">\n          {footerLinks.map((link) => (\n            <a\n              key={link.href}\n              href={link.href}\n              className=\"inline-flex items-center justify-center gap-2 text-sm font-medium text-gray-500 transition-colors hover:text-gray-900\"\n            >\n              <span aria-hidden>{link.emoji}</span>\n              {link.label}\n            </a>\n          ))}\n          <a\n            href=\"https://discord.gg/gdMPPWy57R\"\n            className=\"inline-flex items-center justify-center gap-2 text-sm font-medium text-gray-500 transition-colors hover:text-gray-900\"\n          >\n            <span aria-hidden>💬</span>\n            Discord\n            <img\n              src=\"https://img.shields.io/discord/897438559458430986?label=%20&color=f3f4f6&style=flat-square\"\n              alt=\"Discord online members\"\n              className=\"h-4\"\n            />\n          </a>\n          <a\n            href=\"https://github.com/opral/lix\"\n            className=\"inline-flex items-center justify-center gap-2 text-sm font-medium text-gray-500 transition-colors hover:text-gray-900\"\n            title={\n              githubStars\n                ? 
`${githubStars.toLocaleString()} GitHub stars`\n                : \"Star us on GitHub\"\n            }\n          >\n            <span aria-hidden>⭐</span>\n            Star on GitHub\n            {githubStars !== null && (\n              <span className=\"rounded-full bg-gray-100 px-2 py-0.5 text-xs font-semibold text-gray-600\">\n                {formatStars(githubStars)}\n              </span>\n            )}\n          </a>\n        </div>\n      </div>\n    </footer>\n  );\n}\n"
  },
  {
    "path": "packages/website/src/components/header.tsx",
    "content": "import { Link, useRouterState } from \"@tanstack/react-router\";\nimport { getGithubStars } from \"../github-stars-cache\";\n\n/**\n * Lix logo used across the site.\n *\n * @example\n * <LixLogo className=\"h-6 w-6\" />\n */\nexport const LixLogo = ({ className = \"\" }) => (\n  <svg\n    width=\"30\"\n    height=\"22\"\n    viewBox=\"0 0 26 18\"\n    fill=\"currentColor\"\n    xmlns=\"http://www.w3.org/2000/svg\"\n    className={className}\n  >\n    <g id=\"Group 162\">\n      <path\n        id=\"Vector\"\n        d=\"M14.7618 5.74842L16.9208 9.85984L22.3675 0.358398H25.7133L19.0723 11.6284L22.5712 17.5085H19.2407L16.9208 13.443L14.6393 17.5085H11.2705L14.7618 11.6284L11.393 5.74842H14.7618Z\"\n        fill=\"currentColor\"\n      />\n      <path\n        id=\"Vector_2\"\n        d=\"M6.16211 17.5081V5.74805H9.42368V17.5081H6.16211Z\"\n        fill=\"currentColor\"\n      />\n      <path\n        id=\"Vector_3\"\n        d=\"M3.52112 0.393555V17.6416H0.287109V0.393555H3.52112Z\"\n        fill=\"currentColor\"\n      />\n      <path\n        id=\"Rectangle 391\"\n        d=\"M6.21582 0.393555H14.8399V3.08856H6.21582V0.393555Z\"\n        fill=\"currentColor\"\n      />\n    </g>\n  </svg>\n);\n\n/**\n * GitHub mark icon used in the site header.\n *\n * @example\n * <GitHubIcon className=\"h-5 w-5\" />\n */\nexport const GitHubIcon = ({ className = \"\" }) => (\n  <svg\n    xmlns=\"http://www.w3.org/2000/svg\"\n    viewBox=\"0 0 24 24\"\n    fill=\"currentColor\"\n    className={className}\n    aria-hidden=\"true\"\n  >\n    <path d=\"M12 2a10 10 0 00-3.16 19.49c.5.09.68-.21.68-.47v-1.69c-2.78.6-3.37-1.34-3.37-1.34a2.64 2.64 0 00-1.1-1.46c-.9-.62.07-.6.07-.6a2.08 2.08 0 011.52 1 2.1 2.1 0 002.87.82 2.11 2.11 0 01.63-1.32c-2.22-.25-4.56-1.11-4.56-4.95a3.88 3.88 0 011-2.7 3.6 3.6 0 01.1-2.67s.84-.27 2.75 1a9.5 9.5 0 015 0c1.91-1.29 2.75-1 2.75-1a3.6 3.6 0 01.1 2.67 3.87 3.87 0 011 2.7c0 3.85-2.34 4.7-4.57 4.95a2.37 2.37 0 01.68 1.84v2.72c0 
.27.18.57.69.47A10 10 0 0012 2z\" />\n  </svg>\n);\n\n/**\n * Discord icon used in the site header.\n *\n * @example\n * <DiscordIcon className=\"h-5 w-5\" />\n */\nexport const DiscordIcon = ({ className = \"\" }) => (\n  <svg\n    xmlns=\"http://www.w3.org/2000/svg\"\n    viewBox=\"0 0 71 55\"\n    fill=\"currentColor\"\n    className={className}\n    aria-hidden=\"true\"\n  >\n    <path d=\"M60.1045 4.8978C55.5792 2.8214 50.7265 1.2916 45.6527 0.41542C45.5603 0.39851 45.468 0.440769 45.4204 0.525289C44.7963 1.6353 44.105 3.0834 43.6209 4.2216C38.1637 3.4046 32.7345 3.4046 27.3892 4.2216C26.905 3.0581 26.1886 1.6353 25.5617 0.525289C25.5141 0.443589 25.4218 0.40133 25.3294 0.41542C20.2584 1.2888 15.4057 2.8186 10.8776 4.8978C10.8384 4.9147 10.8048 4.9429 10.7825 4.9793C1.57795 18.7309 -0.943561 32.1443 0.293408 45.3914C0.299005 45.4562 0.335386 45.5182 0.385761 45.5574C6.45866 50.0174 12.3413 52.7249 18.1147 54.5195C18.2071 54.5477 18.3052 54.5131 18.363 54.4376C19.7295 52.5728 20.9469 50.6063 21.9907 48.5383C22.0527 48.4172 21.9931 48.2735 21.8674 48.2259C19.9366 47.4931 18.0979 46.6 16.3292 45.5858C16.1893 45.5033 16.1789 45.3039 16.3116 45.2082C16.679 44.9293 17.0464 44.6391 17.4034 44.346C17.4654 44.2947 17.5534 44.2843 17.6228 44.3189C29.2558 49.8743 41.8354 49.8743 53.3179 44.3189C53.3873 44.2817 53.4753 44.292 53.5401 44.3433C53.8971 44.6364 54.2645 44.9293 54.6346 45.2082C54.7673 45.3039 54.7594 45.5033 54.6195 45.5858C52.8508 46.6197 51.0121 47.4931 49.0775 48.223C48.9518 48.2706 48.894 48.4172 48.956 48.5383C50.0198 50.6034 51.2372 52.5699 52.5872 54.4347C52.6414 54.5131 52.7423 54.5477 52.8347 54.5195C58.6464 52.7249 64.529 50.0174 70.6019 45.5574C70.6559 45.5182 70.6894 45.459 70.695 45.3942C72.1747 30.0791 68.2147 16.7757 60.1968 4.9821C60.1772 4.9429 60.1436 4.9147 60.1045 4.8978ZM23.7259 37.3253C20.2276 37.3253 17.3451 34.1136 17.3451 30.1693C17.3451 26.225 20.1717 23.0133 23.7259 23.0133C27.308 23.0133 30.1626 26.2532 30.1066 30.1693C30.1066 
34.1136 27.28 37.3253 23.7259 37.3253ZM47.2012 37.3253C43.7029 37.3253 40.8203 34.1136 40.8203 30.1693C40.8203 26.225 43.6469 23.0133 47.2012 23.0133C50.7833 23.0133 53.6379 26.2532 53.5819 30.1693C53.5819 34.1136 50.7833 37.3253 47.2012 37.3253Z\" />\n  </svg>\n);\n\n/**\n * X (formerly Twitter) icon used in the site header.\n *\n * @example\n * <XIcon className=\"h-5 w-5\" />\n */\nexport const XIcon = ({ className = \"\" }) => (\n  <svg\n    xmlns=\"http://www.w3.org/2000/svg\"\n    viewBox=\"0 0 1200 1227\"\n    fill=\"currentColor\"\n    className={className}\n    aria-hidden=\"true\"\n  >\n    <path d=\"M714.163 519.284 1160.89 0h-105.86L667.137 450.887 357.328 0H0l468.492 681.821L0 1226.37h105.866l409.625-476.152 327.181 476.152H1200L714.137 519.284h.026ZM569.165 687.828l-47.468-67.894-377.686-540.24h162.604l304.797 435.991 47.468 67.894 396.2 566.721H892.476L569.165 687.854v-.026Z\" />\n  </svg>\n);\n\n/**\n * Hamburger menu icon for mobile navigation.\n *\n * @example\n * <MenuIcon className=\"h-5 w-5\" />\n */\nexport const MenuIcon = ({ className = \"\" }) => (\n  <svg\n    xmlns=\"http://www.w3.org/2000/svg\"\n    viewBox=\"0 0 24 24\"\n    fill=\"none\"\n    stroke=\"currentColor\"\n    strokeWidth=\"2\"\n    strokeLinecap=\"round\"\n    strokeLinejoin=\"round\"\n    className={className}\n    aria-hidden=\"true\"\n  >\n    <line x1=\"3\" y1=\"6\" x2=\"21\" y2=\"6\" />\n    <line x1=\"3\" y1=\"12\" x2=\"21\" y2=\"12\" />\n    <line x1=\"3\" y1=\"18\" x2=\"21\" y2=\"18\" />\n  </svg>\n);\n\nconst navLinks = [\n  { href: \"/docs/what-is-lix\", label: \"Docs\", activePrefix: \"/docs\" },\n  { href: \"/plugins\", label: \"Plugins\", activePrefix: \"/plugins\" },\n  { href: \"/blog\", label: \"Blog\", activePrefix: \"/blog\" },\n];\n\nconst socialLinks = [\n  {\n    href: \"https://discord.gg/gdMPPWy57R\",\n    label: \"Discord\",\n    Icon: DiscordIcon,\n    sizeClass: \"h-5 w-5\",\n  },\n  {\n    href: \"https://x.com/lixCCS\",\n    label: \"X\",\n    
Icon: XIcon,\n    sizeClass: \"h-4 w-4\",\n  },\n];\n\n/**\n * Site header with logo, navigation, and social links.\n *\n * @example\n * <Header />\n */\nexport function Header() {\n  const pathname = useRouterState({\n    select: (state) => state.location.pathname,\n  });\n  const githubStars = getGithubStars(\"opral/lix\");\n\n  const formatStars = (count: number) => {\n    if (count >= 1000) {\n      return `${(count / 1000).toFixed(1).replace(/\\.0$/, \"\")}k`;\n    }\n    return count.toString();\n  };\n\n  const isActive = (href: string, activePrefix?: string) => {\n    const candidate = activePrefix ?? href;\n    const normalized = candidate === \"/\" ? \"/\" : candidate.replace(/\\/$/, \"\");\n    if (normalized === \"/\") return pathname === \"/\";\n    return pathname === normalized || pathname.startsWith(`${normalized}/`);\n  };\n\n  return (\n    <header className=\"sticky top-0 z-50 border-b border-gray-200 bg-white/80 backdrop-blur\">\n      <div className=\"mx-auto flex w-full max-w-[1440px] items-center justify-between pl-6 pr-6 py-3\">\n        <Link\n          to=\"/\"\n          className=\"flex items-center text-[#0891B2]\"\n          aria-label=\"lix home\"\n        >\n          <LixLogo className=\"h-7 w-7\" />\n          <span className=\"sr-only\">lix</span>\n        </Link>\n        <div className=\"flex items-center gap-6\">\n          <nav className=\"hidden items-center gap-4 text-sm font-medium text-gray-700 sm:flex\">\n            {navLinks.map(({ href, label, activePrefix }) => (\n              <Link\n                key={href + label}\n                to={href}\n                className={\n                  isActive(href, activePrefix)\n                    ? href.startsWith(\"/plugins\")\n                      ? 
\"px-2 py-1 text-[#0891B2] hover:text-[#0692B6]\"\n                      : \"px-2 py-1 text-[#0891B2]\"\n                    : \"px-2 py-1 transition-colors hover:text-[#0692B6]\"\n                }\n                aria-current={isActive(href, activePrefix) ? \"page\" : undefined}\n              >\n                {label}\n              </Link>\n            ))}\n          </nav>\n          <div\n            className=\"hidden h-4 w-px bg-gray-200 sm:block\"\n            aria-hidden=\"true\"\n          />\n          <div className=\"flex items-center gap-3\">\n            {socialLinks.map(({ href, label, Icon, sizeClass }) => (\n              <a\n                key={label}\n                href={href}\n                target=\"_blank\"\n                rel=\"noopener noreferrer\"\n                className=\"text-gray-900 transition-colors hover:text-gray-900\"\n                aria-label={label}\n              >\n                <Icon className={sizeClass ?? \"h-5 w-5\"} />\n              </a>\n            ))}\n            <div className=\"h-4 w-px bg-gray-200\" aria-hidden=\"true\" />\n            <a\n              href=\"https://github.com/opral/lix\"\n              target=\"_blank\"\n              rel=\"noopener noreferrer\"\n              className=\"group inline-flex items-center gap-1.5 text-sm font-medium text-gray-700 transition-colors hover:text-gray-700\"\n            >\n              <GitHubIcon className=\"h-5 w-5\" />\n              GitHub\n              {githubStars !== null && (\n                <span\n                  className=\"inline-flex items-center gap-1 text-gray-500 transition-colors group-hover:text-gray-500\"\n                  title={`${githubStars.toLocaleString()} GitHub stars`}\n                  aria-label={`${githubStars.toLocaleString()} GitHub stars`}\n                >\n                  <span className=\"relative h-3.5 w-3.5\" aria-hidden=\"true\">\n                    <svg\n                      className=\"absolute inset-0 
h-3.5 w-3.5 text-gray-400 group-hover:opacity-0 transition-opacity\"\n                      viewBox=\"0 0 24 24\"\n                      fill=\"none\"\n                      stroke=\"currentColor\"\n                      strokeWidth=\"2\"\n                      strokeLinecap=\"round\"\n                      strokeLinejoin=\"round\"\n                    >\n                      <polygon points=\"12 2 15.09 8.26 22 9.27 17 14.14 18.18 21.02 12 17.77 5.82 21.02 7 14.14 2 9.27 8.91 8.26 12 2\" />\n                    </svg>\n                    <svg\n                      className=\"absolute inset-0 h-3.5 w-3.5 text-yellow-500 opacity-0 group-hover:opacity-100 transition-opacity\"\n                      viewBox=\"0 0 16 16\"\n                      fill=\"currentColor\"\n                    >\n                      <path d=\"M8 .25a.75.75 0 0 1 .673.418l1.882 3.815 4.21.612a.75.75 0 0 1 .416 1.279l-3.046 2.97.719 4.192a.75.75 0 0 1-1.088.791L8 12.347l-3.766 1.98a.75.75 0 0 1-1.088-.79l.72-4.194L.818 6.374a.75.75 0 0 1 .416-1.28l4.21-.611L7.327.668A.75.75 0 0 1 8 .25z\" />\n                    </svg>\n                  </span>\n                  <span>{formatStars(githubStars)}</span>\n                </span>\n              )}\n            </a>\n          </div>\n        </div>\n      </div>\n    </header>\n  );\n}\n"
  },
  {
    "path": "packages/website/src/components/landing-page.tsx",
    "content": "import { useRouterState } from \"@tanstack/react-router\";\nimport { getGithubStars } from \"../github-stars-cache\";\nimport { Footer } from \"./footer\";\n/**\n * Lix logo used across the landing page.\n *\n * @example\n * <LixLogo className=\"h-6 w-6\" />\n */\nconst LixLogo = ({ className = \"\" }) => (\n  <svg\n    width=\"30\"\n    height=\"22\"\n    viewBox=\"0 0 26 18\"\n    fill=\"currentColor\"\n    xmlns=\"http://www.w3.org/2000/svg\"\n    className={className}\n  >\n    <g id=\"Group 162\">\n      <path\n        id=\"Vector\"\n        d=\"M14.7618 5.74842L16.9208 9.85984L22.3675 0.358398H25.7133L19.0723 11.6284L22.5712 17.5085H19.2407L16.9208 13.443L14.6393 17.5085H11.2705L14.7618 11.6284L11.393 5.74842H14.7618Z\"\n        fill=\"currentColor\"\n      />\n      <path\n        id=\"Vector_2\"\n        d=\"M6.16211 17.5081V5.74805H9.42368V17.5081H6.16211Z\"\n        fill=\"currentColor\"\n      />\n      <path\n        id=\"Vector_3\"\n        d=\"M3.52112 0.393555V17.6416H0.287109V0.393555H3.52112Z\"\n        fill=\"currentColor\"\n      />\n      <path\n        id=\"Rectangle 391\"\n        d=\"M6.21582 0.393555H14.8399V3.08856H6.21582V0.393555Z\"\n        fill=\"currentColor\"\n      />\n    </g>\n  </svg>\n);\n\n/**\n * GitHub mark icon used in the site header.\n *\n * @example\n * <GitHubIcon className=\"h-5 w-5\" />\n */\nconst GitHubIcon = ({ className = \"\" }) => (\n  <svg\n    xmlns=\"http://www.w3.org/2000/svg\"\n    viewBox=\"0 0 24 24\"\n    fill=\"currentColor\"\n    className={className}\n    aria-hidden=\"true\"\n  >\n    <path d=\"M12 2a10 10 0 00-3.16 19.49c.5.09.68-.21.68-.47v-1.69c-2.78.6-3.37-1.34-3.37-1.34a2.64 2.64 0 00-1.1-1.46c-.9-.62.07-.6.07-.6a2.08 2.08 0 011.52 1 2.1 2.1 0 002.87.82 2.11 2.11 0 01.63-1.32c-2.22-.25-4.56-1.11-4.56-4.95a3.88 3.88 0 011-2.7 3.6 3.6 0 01.1-2.67s.84-.27 2.75 1a9.5 9.5 0 015 0c1.91-1.29 2.75-1 2.75-1a3.6 3.6 0 01.1 2.67 3.87 3.87 0 011 2.7c0 3.85-2.34 4.7-4.57 4.95a2.37 2.37 0 01.68 
1.84v2.72c0 .27.18.57.69.47A10 10 0 0012 2z\" />\n  </svg>\n);\n\n/**\n * Discord icon used in the site header.\n *\n * @example\n * <DiscordIcon className=\"h-5 w-5\" />\n */\nconst DiscordIcon = ({ className = \"\" }) => (\n  <svg\n    xmlns=\"http://www.w3.org/2000/svg\"\n    viewBox=\"0 0 71 55\"\n    fill=\"currentColor\"\n    className={className}\n    aria-hidden=\"true\"\n  >\n    <path d=\"M60.1045 4.8978C55.5792 2.8214 50.7265 1.2916 45.6527 0.41542C45.5603 0.39851 45.468 0.440769 45.4204 0.525289C44.7963 1.6353 44.105 3.0834 43.6209 4.2216C38.1637 3.4046 32.7345 3.4046 27.3892 4.2216C26.905 3.0581 26.1886 1.6353 25.5617 0.525289C25.5141 0.443589 25.4218 0.40133 25.3294 0.41542C20.2584 1.2888 15.4057 2.8186 10.8776 4.8978C10.8384 4.9147 10.8048 4.9429 10.7825 4.9793C1.57795 18.7309 -0.943561 32.1443 0.293408 45.3914C0.299005 45.4562 0.335386 45.5182 0.385761 45.5574C6.45866 50.0174 12.3413 52.7249 18.1147 54.5195C18.2071 54.5477 18.3052 54.5131 18.363 54.4376C19.7295 52.5728 20.9469 50.6063 21.9907 48.5383C22.0527 48.4172 21.9931 48.2735 21.8674 48.2259C19.9366 47.4931 18.0979 46.6 16.3292 45.5858C16.1893 45.5033 16.1789 45.3039 16.3116 45.2082C16.679 44.9293 17.0464 44.6391 17.4034 44.346C17.4654 44.2947 17.5534 44.2843 17.6228 44.3189C29.2558 49.8743 41.8354 49.8743 53.3179 44.3189C53.3873 44.2817 53.4753 44.292 53.5401 44.3433C53.8971 44.6364 54.2645 44.9293 54.6346 45.2082C54.7673 45.3039 54.7594 45.5033 54.6195 45.5858C52.8508 46.6197 51.0121 47.4931 49.0775 48.223C48.9518 48.2706 48.894 48.4172 48.956 48.5383C50.0198 50.6034 51.2372 52.5699 52.5872 54.4347C52.6414 54.5131 52.7423 54.5477 52.8347 54.5195C58.6464 52.7249 64.529 50.0174 70.6019 45.5574C70.6559 45.5182 70.6894 45.459 70.695 45.3942C72.1747 30.0791 68.2147 16.7757 60.1968 4.9821C60.1772 4.9429 60.1436 4.9147 60.1045 4.8978ZM23.7259 37.3253C20.2276 37.3253 17.3451 34.1136 17.3451 30.1693C17.3451 26.225 20.1717 23.0133 23.7259 23.0133C27.308 23.0133 30.1626 26.2532 30.1066 30.1693C30.1066 
34.1136 27.28 37.3253 23.7259 37.3253ZM47.2012 37.3253C43.7029 37.3253 40.8203 34.1136 40.8203 30.1693C40.8203 26.225 43.6469 23.0133 47.2012 23.0133C50.7833 23.0133 53.6379 26.2532 53.5819 30.1693C53.5819 34.1136 50.7833 37.3253 47.2012 37.3253Z\" />\n  </svg>\n);\n\n/**\n * X (formerly Twitter) icon used in the site header.\n *\n * @example\n * <XIcon className=\"h-5 w-5\" />\n */\nconst XIcon = ({ className = \"\" }) => (\n  <svg\n    xmlns=\"http://www.w3.org/2000/svg\"\n    viewBox=\"0 0 1200 1227\"\n    fill=\"currentColor\"\n    className={className}\n    aria-hidden=\"true\"\n  >\n    <path d=\"M714.163 519.284 1160.89 0h-105.86L667.137 450.887 357.328 0H0l468.492 681.821L0 1226.37h105.866l409.625-476.152 327.181 476.152H1200L714.137 519.284h.026ZM569.165 687.828l-47.468-67.894-377.686-540.24h162.604l304.797 435.991 47.468 67.894 396.2 566.721H892.476L569.165 687.854v-.026Z\" />\n  </svg>\n);\n\n/**\n * JavaScript icon for code tabs.\n *\n * @example\n * <JsIcon className=\"h-4 w-4\" />\n */\nconst JsIcon = ({ className = \"\" }) => (\n  <svg\n    viewBox=\"0 0 24 24\"\n    className={className}\n    fill=\"none\"\n    xmlns=\"http://www.w3.org/2000/svg\"\n  >\n    <rect width=\"24\" height=\"24\" fill=\"#F7DF1E\" rx=\"2\" />\n    <text\n      x=\"12\"\n      y=\"17\"\n      textAnchor=\"middle\"\n      fontSize=\"11\"\n      fontWeight=\"bold\"\n      fill=\"black\"\n      fontFamily=\"sans-serif\"\n    >\n      JS\n    </text>\n  </svg>\n);\n\n/**\n * Python icon for code tabs.\n *\n * @example\n * <PythonIcon className=\"h-4 w-4\" />\n */\nconst PythonIcon = ({ className = \"\" }) => (\n  <svg\n    viewBox=\"16 16 32 32\"\n    className={className}\n    fill=\"none\"\n    xmlns=\"http://www.w3.org/2000/svg\"\n  >\n    <path\n      fill=\"url(#python__a)\"\n      d=\"M31.885 16c-8.124 0-7.617 3.523-7.617 3.523l.01 3.65h7.752v1.095H21.197S16 23.678 16 31.876c0 8.196 4.537 7.906 4.537 
7.906h2.708v-3.804s-.146-4.537 4.465-4.537h7.688s4.32.07 4.32-4.175v-7.019S40.374 16 31.885 16zm-4.275 2.454a1.394 1.394 0 1 1 0 2.79 1.393 1.393 0 0 1-1.395-1.395c0-.771.624-1.395 1.395-1.395z\"\n    />\n    <path\n      fill=\"url(#python__b)\"\n      d=\"M32.115 47.833c8.124 0 7.617-3.523 7.617-3.523l-.01-3.65H31.97v-1.095h10.832S48 40.155 48 31.958c0-8.197-4.537-7.906-4.537-7.906h-2.708v3.803s.146 4.537-4.465 4.537h-7.688s-4.32-.07-4.32 4.175v7.019s-.656 4.247 7.833 4.247zm4.275-2.454a1.393 1.393 0 0 1-1.395-1.395 1.394 1.394 0 1 1 1.395 1.395z\"\n    />\n    <defs>\n      <linearGradient\n        id=\"python__a\"\n        x1=\"19.075\"\n        x2=\"34.898\"\n        y1=\"18.782\"\n        y2=\"34.658\"\n        gradientUnits=\"userSpaceOnUse\"\n      >\n        <stop stopColor=\"#387EB8\" />\n        <stop offset=\"1\" stopColor=\"#366994\" />\n      </linearGradient>\n      <linearGradient\n        id=\"python__b\"\n        x1=\"28.809\"\n        x2=\"45.803\"\n        y1=\"28.882\"\n        y2=\"45.163\"\n        gradientUnits=\"userSpaceOnUse\"\n      >\n        <stop stopColor=\"#FFE052\" />\n        <stop offset=\"1\" stopColor=\"#FFC331\" />\n      </linearGradient>\n    </defs>\n  </svg>\n);\n\n/**\n * Rust icon for code tabs.\n */\nconst RustIcon = ({ className = \"\" }) => (\n  <svg\n    viewBox=\"0 0 224 224\"\n    className={className}\n    fill=\"currentColor\"\n    xmlns=\"http://www.w3.org/2000/svg\"\n  >\n    <path\n      fill=\"#000\"\n      d=\"M218.46 109.358l-9.062-5.614c-.076-.882-.162-1.762-.258-2.642l7.803-7.265a3.107 3.107 0 00.933-2.89 3.093 3.093 0 00-1.967-2.312l-9.97-3.715c-.25-.863-.512-1.72-.781-2.58l6.214-8.628a3.114 3.114 0 00-.592-4.263 3.134 3.134 0 00-1.431-.637l-10.507-1.709a80.869 80.869 0 00-1.263-2.353l4.417-9.7a3.12 3.12 0 00-.243-3.035 3.106 3.106 0 00-2.705-1.385l-10.671.372a85.152 85.152 0 00-1.685-2.044l2.456-10.381a3.125 3.125 0 00-3.762-3.763l-10.384 2.456a88.996 88.996 0 00-2.047-1.684l.373-10.671a3.11 3.11 0 
00-1.385-2.704 3.127 3.127 0 00-3.034-.246l-9.681 4.417c-.782-.429-1.567-.854-2.353-1.265l-1.713-10.506a3.098 3.098 0 00-1.887-2.373 3.108 3.108 0 00-3.014.35l-8.628 6.213c-.85-.27-1.703-.53-2.56-.778l-3.716-9.97a3.111 3.111 0 00-2.311-1.97 3.134 3.134 0 00-2.89.933l-7.266 7.802a93.746 93.746 0 00-2.643-.258l-5.614-9.082A3.125 3.125 0 00111.97 4c-1.09 0-2.085.56-2.642 1.478l-5.615 9.081a93.32 93.32 0 00-2.642.259l-7.266-7.802a3.13 3.13 0 00-2.89-.933 3.106 3.106 0 00-2.312 1.97l-3.715 9.97c-.857.247-1.71.506-2.56.778L73.7 12.588a3.101 3.101 0 00-3.014-.35A3.127 3.127 0 0068.8 14.61l-1.713 10.506c-.79.41-1.575.832-2.353 1.265l-9.681-4.417a3.125 3.125 0 00-4.42 2.95l.372 10.67c-.69.553-1.373 1.115-2.048 1.685l-10.383-2.456a3.143 3.143 0 00-2.93.832 3.124 3.124 0 00-.833 2.93l2.436 10.383a93.897 93.897 0 00-1.68 2.043l-10.672-.372a3.138 3.138 0 00-2.704 1.385 3.126 3.126 0 00-.246 3.035l4.418 9.7c-.43.779-.855 1.563-1.266 2.353l-10.507 1.71a3.097 3.097 0 00-2.373 1.886 3.117 3.117 0 00.35 3.013l6.214 8.628a89.12 89.12 0 00-.78 2.58l-9.97 3.715a3.117 3.117 0 00-1.035 5.202l7.803 7.265c-.098.879-.184 1.76-.258 2.642l-9.062 5.614A3.122 3.122 0 004 112.021c0 1.092.56 2.084 1.478 2.642l9.062 5.614c.074.882.16 1.762.258 2.642l-7.803 7.265a3.117 3.117 0 001.034 5.201l9.97 3.716a110 110 0 00.78 2.58l-6.212 8.627a3.112 3.112 0 00.6 4.27c.419.33.916.547 1.443.63l10.507 1.709c.407.792.83 1.576 1.265 2.353l-4.417 9.68a3.126 3.126 0 002.95 4.42l10.65-.374c.553.69 1.115 1.372 1.685 2.047l-2.435 10.383a3.09 3.09 0 00.831 2.91 3.117 3.117 0 002.931.83l10.384-2.436a82.268 82.268 0 002.047 1.68l-.371 10.671a3.11 3.11 0 001.385 2.704 3.125 3.125 0 003.034.241l9.681-4.416c.779.432 1.563.854 2.353 1.265l1.713 10.505a3.147 3.147 0 001.887 2.395 3.111 3.111 0 003.014-.349l8.628-6.213c.853.271 1.71.535 2.58.783l3.716 9.969a3.112 3.112 0 002.312 1.967 3.112 3.112 0 002.89-.933l7.266-7.802c.877.101 1.761.186 2.642.264l5.615 9.061a3.12 3.12 0 002.642 1.478 3.165 3.165 0 
002.663-1.478l5.614-9.061c.884-.078 1.765-.163 2.643-.264l7.265 7.802a3.106 3.106 0 002.89.933 3.105 3.105 0 002.312-1.967l3.716-9.969c.863-.248 1.719-.512 2.58-.783l8.629 6.213a3.12 3.12 0 004.9-2.045l1.713-10.506c.793-.411 1.577-.838 2.353-1.265l9.681 4.416a3.13 3.13 0 003.035-.241 3.126 3.126 0 001.385-2.704l-.372-10.671a81.794 81.794 0 002.046-1.68l10.383 2.436a3.123 3.123 0 003.763-3.74l-2.436-10.382a84.588 84.588 0 001.68-2.048l10.672.374a3.104 3.104 0 002.704-1.385 3.118 3.118 0 00.244-3.035l-4.417-9.68c.43-.779.852-1.563 1.263-2.353l10.507-1.709a3.08 3.08 0 002.373-1.886 3.11 3.11 0 00-.35-3.014l-6.214-8.627c.272-.857.532-1.717.781-2.58l9.97-3.716a3.109 3.109 0 001.967-2.311 3.107 3.107 0 00-.933-2.89l-7.803-7.265c.096-.88.182-1.761.258-2.642l9.062-5.614a3.11 3.11 0 001.478-2.642 3.157 3.157 0 00-1.476-2.663h-.064zm-60.687 75.337c-3.468-.747-5.656-4.169-4.913-7.637a6.412 6.412 0 017.617-4.933c3.468.741 5.676 4.169 4.933 7.637a6.414 6.414 0 01-7.617 4.933h-.02zm-3.076-20.847c-3.158-.677-6.275 1.334-6.936 4.5l-3.22 15.026c-9.929 4.5-21.055 7.018-32.614 7.018-11.89 0-23.12-2.622-33.234-7.328l-3.22-15.026c-.677-3.158-3.778-5.18-6.936-4.499l-13.273 2.848a80.222 80.222 0 01-6.853-8.091h64.61c.731 0 1.218-.132 1.218-.797v-22.91c0-.665-.487-.797-1.218-.797H94.133v-14.469h20.415c1.864 0 9.97.533 12.551 10.898.811 3.179 2.601 13.54 3.818 16.863 1.214 3.715 6.152 11.146 11.415 11.146h32.202c.365 0 .755-.041 1.166-.116a80.56 80.56 0 01-7.307 8.587l-13.583-2.911-.113.058zm-89.38 20.537a6.407 6.407 0 01-7.617-4.933c-.74-3.467 1.462-6.894 4.934-7.637a6.417 6.417 0 017.617 4.933c.74 3.468-1.464 6.894-4.934 7.637zm-24.564-99.28a6.438 6.438 0 01-3.261 8.484c-3.241 1.438-7.019-.025-8.464-3.261-1.445-3.237.025-7.039 3.262-8.483a6.416 6.416 0 018.463 3.26zM33.22 102.94l13.83-6.15c2.952-1.311 4.294-4.769 2.972-7.72l-2.848-6.44H58.36v50.362h-22.5a79.158 79.158 0 01-3.014-21.672c0-2.869.155-5.697.452-8.483l-.08.103zm60.687-4.892v-14.86h26.629c1.376 0 9.722 1.59 9.722 7.822 0 
5.18-6.399 7.038-11.663 7.038h-24.77.082zm96.811 13.375c0 1.973-.072 3.922-.216 5.862h-8.113c-.811 0-1.137.532-1.137 1.327v3.715c0 8.752-4.934 10.671-9.268 11.146-4.129.464-8.691-1.726-9.248-4.252-2.436-13.684-6.482-16.595-12.881-21.672 7.948-5.036 16.204-12.487 16.204-22.498 0-10.753-7.369-17.523-12.385-20.847-7.059-4.644-14.862-5.572-16.968-5.572H52.899c11.374-12.673 26.835-21.673 44.174-24.975l9.887 10.361a5.849 5.849 0 008.278.19l11.064-10.568c23.119 4.314 42.729 18.721 54.082 38.598l-7.576 17.09c-1.306 2.951.027 6.419 2.973 7.72l14.573 6.48c.255 2.607.383 5.224.384 7.843l-.021.052zM106.912 24.94a6.398 6.398 0 019.062.209 6.437 6.437 0 01-.213 9.082 6.396 6.396 0 01-9.062-.21 6.436 6.436 0 01.213-9.083v.002zm75.137 60.476a6.402 6.402 0 018.463-3.26 6.425 6.425 0 013.261 8.482 6.402 6.402 0 01-8.463 3.261 6.425 6.425 0 01-3.261-8.483z\"\n    />\n  </svg>\n);\n\n/**\n * Go icon for code tabs.\n */\nconst GoIcon = ({ className = \"\" }) => (\n  <svg\n    viewBox=\"0 0 207 180\"\n    className={className}\n    xmlns=\"http://www.w3.org/2000/svg\"\n  >\n    <g fill=\"#00ADD8\" fillRule=\"evenodd\" transform=\"translate(0, 51)\">\n      <path d=\"m16.2 24.1c-.4 0-.5-.2-.3-.5l2.1-2.7c.2-.3.7-.5 1.1-.5h35.7c.4 0 .5.3.3.6l-1.7 2.6c-.2.3-.7.6-1 .6z\" />\n      <path d=\"m1.1 33.3c-.4 0-.5-.2-.3-.5l2.1-2.7c.2-.3.7-.5 1.1-.5h45.6c.4 0 .6.3.5.6l-.8 2.4c-.1.4-.5.6-.9.6z\" />\n      <path d=\"m25.3 42.5c-.4 0-.5-.3-.3-.6l1.4-2.5c.2-.3.6-.6 1-.6h20c.4 0 .6.3.6.7l-.2 2.4c0 .4-.4.7-.7.7z\" />\n      <g transform=\"translate(55)\">\n        <path d=\"m74.1 22.3c-6.3 1.6-10.6 2.8-16.8 4.4-1.5.4-1.6.5-2.9-1-1.5-1.7-2.6-2.8-4.7-3.8-6.3-3.1-12.4-2.2-18.1 1.5-6.8 4.4-10.3 10.9-10.2 19 .1 8 5.6 14.6 13.5 15.7 6.8.9 12.5-1.5 17-6.6.9-1.1 1.7-2.3 2.7-3.7-3.6 0-8.1 0-19.3 0-2.1 0-2.6-1.3-1.9-3 1.3-3.1 3.7-8.3 5.1-10.9.3-.6 1-1.6 2.5-1.6h36.4c-.2 2.7-.2 5.4-.6 8.1-1.1 7.2-3.8 13.8-8.2 19.6-7.2 9.5-16.6 15.4-28.5 17-9.8 1.3-18.9-.6-26.9-6.6-7.4-5.6-11.6-13-12.7-22.2-1.3-10.9 1.9-20.7 
8.5-29.3 7.1-9.3 16.5-15.2 28-17.3 9.4-1.7 18.4-.6 26.5 4.9 5.3 3.5 9.1 8.3 11.6 14.1.6.9.2 1.4-1 1.7z\" />\n        <path\n          d=\"m107.2 77.6c-9.1-.2-17.4-2.8-24.4-8.8-5.9-5.1-9.6-11.6-10.8-19.3-1.8-11.3 1.3-21.3 8.1-30.2 7.3-9.6 16.1-14.6 28-16.7 10.2-1.8 19.8-.8 28.5 5.1 7.9 5.4 12.8 12.7 14.1 22.3 1.7 13.5-2.2 24.5-11.5 33.9-6.6 6.7-14.7 10.9-24 12.8-2.7.5-5.4.6-8 .9zm23.8-40.4c-.1-1.3-.1-2.3-.3-3.3-1.8-9.9-10.9-15.5-20.4-13.3-9.3 2.1-15.3 8-17.5 17.4-1.8 7.8 2 15.7 9.2 18.9 5.5 2.4 11 2.1 16.3-.6 7.9-4.1 12.2-10.5 12.7-19.1z\"\n          fillRule=\"nonzero\"\n        />\n      </g>\n    </g>\n  </svg>\n);\n\n/**\n * Landing page for the Lix documentation site.\n *\n * @example\n * <LandingPage />\n */\nfunction LandingPage({ readmeHtml }: { readmeHtml?: string }) {\n  const docsPath = \"/docs/what-is-lix\";\n  const pathname = useRouterState({\n    select: (state) => state.location.pathname,\n  });\n  const githubStars = getGithubStars(\"opral/lix\");\n\n  const formatStars = (count: number) => {\n    if (count >= 1000) {\n      return `${(count / 1000).toFixed(1).replace(/\\.0$/, \"\")}k`;\n    }\n    return count.toString();\n  };\n\n  const navLinks = [\n    { href: docsPath, label: \"Docs\", activePrefix: \"/docs\" },\n    { href: \"/plugins\", label: \"Plugins\", activePrefix: \"/plugins\" },\n    { href: \"/blog\", label: \"Blog\", activePrefix: \"/blog\" },\n  ];\n\n  const isActive = (href: string, activePrefix?: string) => {\n    const candidate = activePrefix ?? href;\n    const normalized = candidate === \"/\" ? 
\"/\" : candidate.replace(/\\/$/, \"\");\n    if (normalized === \"/\") return pathname === \"/\";\n    return pathname === normalized || pathname.startsWith(`${normalized}/`);\n  };\n\n  const socialLinks = [\n    {\n      href: \"https://discord.gg/gdMPPWy57R\",\n      label: \"Discord\",\n      Icon: DiscordIcon,\n      sizeClass: \"h-5 w-5\",\n    },\n    {\n      href: \"https://x.com/lixCCS\",\n      label: \"X\",\n      Icon: XIcon,\n      sizeClass: \"h-4 w-4\",\n    },\n  ];\n\n  return (\n    <div className=\"font-sans text-gray-900 bg-white\">\n      <header className=\"sticky top-0 z-50 border-b border-gray-200 bg-white/80 backdrop-blur\">\n        <div className=\"mx-auto flex w-full max-w-[1440px] items-center justify-between pl-6 pr-6 py-3\">\n          <a\n            href=\"/\"\n            className=\"flex items-center text-[#0891B2]\"\n            aria-label=\"lix home\"\n          >\n            <LixLogo className=\"h-7 w-7\" />\n            <span className=\"sr-only\">lix</span>\n          </a>\n          <div className=\"flex items-center gap-6\">\n            <nav className=\"hidden items-center gap-4 text-sm font-medium text-gray-700 sm:flex\">\n              {navLinks.map(({ href, label, activePrefix }) => (\n                <a\n                  key={href}\n                  href={href}\n                  className={\n                    isActive(href, activePrefix)\n                      ? href.startsWith(\"/plugins\")\n                        ? \"px-2 py-1 text-[#0891B2] hover:text-[#0692B6]\"\n                        : \"px-2 py-1 text-[#0891B2]\"\n                      : \"px-2 py-1 transition-colors hover:text-[#0692B6]\"\n                  }\n                  aria-current={\n                    isActive(href, activePrefix) ? 
\"page\" : undefined\n                  }\n                >\n                  {label}\n                </a>\n              ))}\n            </nav>\n            <div\n              className=\"hidden h-4 w-px bg-gray-200 sm:block\"\n              aria-hidden=\"true\"\n            />\n            <div className=\"flex items-center gap-3\">\n              {socialLinks.map(({ href, label, Icon, sizeClass }) => (\n                <a\n                  key={label}\n                  href={href}\n                  target=\"_blank\"\n                  rel=\"noopener noreferrer\"\n                  className=\"text-gray-900 transition-colors hover:text-gray-900\"\n                  aria-label={label}\n                >\n                  <Icon className={sizeClass ?? \"h-5 w-5\"} />\n                </a>\n              ))}\n              <div className=\"h-4 w-px bg-gray-200\" aria-hidden=\"true\" />\n              <a\n                href=\"https://github.com/opral/lix\"\n                target=\"_blank\"\n                rel=\"noopener noreferrer\"\n                className=\"group inline-flex items-center gap-1.5 text-sm font-medium text-gray-700 transition-colors hover:text-gray-700\"\n              >\n                <GitHubIcon className=\"h-5 w-5\" />\n                GitHub\n                {githubStars !== null && (\n                  <span\n                    className=\"inline-flex items-center gap-1 text-gray-500 transition-colors group-hover:text-gray-500\"\n                    title={`${githubStars.toLocaleString()} GitHub stars`}\n                    aria-label={`${githubStars.toLocaleString()} GitHub stars`}\n                  >\n                    <span className=\"relative h-3.5 w-3.5\" aria-hidden=\"true\">\n                      <svg\n                        className=\"absolute inset-0 h-3.5 w-3.5 text-gray-400 group-hover:opacity-0 transition-opacity\"\n                        viewBox=\"0 0 24 24\"\n                        fill=\"none\"\n          
              stroke=\"currentColor\"\n                        strokeWidth=\"2\"\n                        strokeLinecap=\"round\"\n                        strokeLinejoin=\"round\"\n                      >\n                        <polygon points=\"12 2 15.09 8.26 22 9.27 17 14.14 18.18 21.02 12 17.77 5.82 21.02 7 14.14 2 9.27 8.91 8.26 12 2\" />\n                      </svg>\n                      <svg\n                        className=\"absolute inset-0 h-3.5 w-3.5 text-yellow-500 opacity-0 group-hover:opacity-100 transition-opacity\"\n                        viewBox=\"0 0 16 16\"\n                        fill=\"currentColor\"\n                      >\n                        <path d=\"M8 .25a.75.75 0 0 1 .673.418l1.882 3.815 4.21.612a.75.75 0 0 1 .416 1.279l-3.046 2.97.719 4.192a.75.75 0 0 1-1.088.791L8 12.347l-3.766 1.98a.75.75 0 0 1-1.088-.79l.72-4.194L.818 6.374a.75.75 0 0 1 .416-1.28l4.21-.611L7.327.668A.75.75 0 0 1 8 .25z\" />\n                      </svg>\n                    </span>\n                    <span>{formatStars(githubStars)}</span>\n                  </span>\n                )}\n              </a>\n            </div>\n          </div>\n        </div>\n      </header>\n      {/* Main content */}\n      <main className=\"relative px-4 sm:px-6\">\n        {/* Hero Section - Simplified */}\n        <section className=\"relative pt-20 pb-12 px-4 sm:px-6\">\n          <div className=\"relative max-w-4xl mx-auto text-center\">\n            {/* Alpha Chip */}\n            <a\n              href=\"https://github.com/opral/lix/issues/374\"\n              target=\"_blank\"\n              rel=\"noopener noreferrer\"\n              className=\"inline-flex items-center gap-2 mb-6 px-3 py-1.5 rounded-full bg-amber-50 border border-amber-200 text-sm text-amber-800 hover:bg-amber-100 transition-colors\"\n            >\n              <span className=\"inline-block w-2 h-2 rounded-full bg-amber-400\" />\n              <span>\n                <span 
className=\"font-medium\">Lix is in alpha</span> · Follow\n                progress to v1.0\n              </span>\n              <svg\n                className=\"w-3.5 h-3.5\"\n                fill=\"none\"\n                viewBox=\"0 0 24 24\"\n                stroke=\"currentColor\"\n                strokeWidth={2}\n              >\n                <path\n                  strokeLinecap=\"round\"\n                  strokeLinejoin=\"round\"\n                  d=\"M9 5l7 7-7 7\"\n                />\n              </svg>\n            </a>\n            <h1 className=\"text-gray-900 font-bold leading-[1.1] text-4xl sm:text-5xl md:text-6xl tracking-tight\">\n              Embeddable version control system for AI agents\n            </h1>\n\n            <p className=\"text-gray-500 text-lg sm:text-xl max-w-4xl mx-auto mt-8\">\n              Lix is a version control system that can be imported as a library.\n              Use it to, for example, enable human-in-the-loop workflows for AI\n              agents like diffs and reviews.\n            </p>\n\n            {/* Trust signals */}\n            <div className=\"flex items-center justify-center gap-8 sm:gap-12 mt-12\">\n              <div className=\"flex flex-col items-center\">\n                <svg\n                  className=\"w-5 h-5 text-gray-400 mb-1.5\"\n                  fill=\"none\"\n                  viewBox=\"0 0 24 24\"\n                  stroke=\"currentColor\"\n                  strokeWidth={1.5}\n                >\n                  <path\n                    strokeLinecap=\"round\"\n                    strokeLinejoin=\"round\"\n                    d=\"M3 16.5v2.25A2.25 2.25 0 005.25 21h13.5A2.25 2.25 0 0021 18.75V16.5M16.5 12L12 16.5m0 0L7.5 12m4.5 4.5V3\"\n                  />\n                </svg>\n                <div className=\"text-2xl font-bold text-gray-900\">90k+</div>\n                <div className=\"text-sm text-gray-500 mt-1\">\n                  Weekly downloads\n                
</div>\n              </div>\n              <div className=\"w-px h-14 bg-gray-200\" />\n              <div className=\"flex flex-col items-center\">\n                <svg\n                  className=\"w-5 h-5 text-gray-400 mb-1.5\"\n                  fill=\"none\"\n                  viewBox=\"0 0 24 24\"\n                  stroke=\"currentColor\"\n                  strokeWidth={1.5}\n                >\n                  <path\n                    strokeLinecap=\"round\"\n                    strokeLinejoin=\"round\"\n                    d=\"M3 6l3 1m0 0l-3 9a5.002 5.002 0 006.001 0M6 7l3 9M6 7l6-2m6 2l3-1m-3 1l-3 9a5.002 5.002 0 006.001 0M18 7l3 9m-3-9l-6-2m0-2v2m0 16V5m0 16H9m3 0h3\"\n                  />\n                </svg>\n                <div className=\"text-2xl font-bold text-gray-900\">MIT</div>\n                <div className=\"text-sm text-gray-500 mt-1\">Open Source</div>\n              </div>\n            </div>\n\n            <div className=\"flex flex-col sm:flex-row items-center justify-center gap-3 mt-10\">\n              <a\n                href={docsPath}\n                className=\"inline-flex items-center justify-center h-11 px-6 rounded-lg text-sm font-medium bg-[#0891b2] text-white hover:bg-[#0e7490] transition-colors\"\n              >\n                Read the docs\n                <svg\n                  className=\"h-4 w-4 ml-2\"\n                  fill=\"none\"\n                  viewBox=\"0 0 24 24\"\n                  stroke=\"currentColor\"\n                  strokeWidth={2}\n                >\n                  <path\n                    strokeLinecap=\"round\"\n                    strokeLinejoin=\"round\"\n                    d=\"M14 5l7 7m0 0l-7 7m7-7H3\"\n                  />\n                </svg>\n              </a>\n              <a\n                href=\"https://github.com/opral/lix\"\n                target=\"_blank\"\n                rel=\"noopener noreferrer\"\n                className=\"inline-flex h-11 
items-center justify-center gap-2 px-5 rounded-lg border border-gray-300 bg-white text-sm text-gray-800 transition-colors duration-200 hover:bg-gray-50\"\n                title={\n                  githubStars\n                    ? `${githubStars.toLocaleString()} GitHub stars`\n                    : \"Star us on GitHub\"\n                }\n              >\n                <svg\n                  className=\"w-5 h-5\"\n                  viewBox=\"0 0 24 24\"\n                  fill=\"currentColor\"\n                >\n                  <path d=\"M12 2C6.477 2 2 6.477 2 12c0 4.42 2.865 8.166 6.839 9.489.5.092.682-.217.682-.48 0-.237-.008-.866-.013-1.7-2.782.603-3.369-1.34-3.369-1.34-.454-1.156-1.11-1.464-1.11-1.464-.908-.62.069-.608.069-.608 1.003.07 1.531 1.03 1.531 1.03.892 1.529 2.341 1.087 2.91.831.092-.645.35-1.087.636-1.337-2.22-.253-4.555-1.11-4.555-4.943 0-1.091.39-1.984 1.029-2.683-.103-.253-.446-1.27.098-2.647 0 0 .84-.268 2.75 1.026A9.578 9.578 0 0112 6.836c.85.004 1.705.114 2.504.336 1.909-1.294 2.747-1.026 2.747-1.026.546 1.377.203 2.394.1 2.647.64.699 1.028 1.592 1.028 2.683 0 3.842-2.339 4.687-4.566 4.935.359.309.678.919.678 1.852 0 1.336-.012 2.415-.012 2.743 0 .267.18.578.688.48C19.138 20.161 22 16.416 22 12c0-5.523-4.477-10-10-10z\" />\n                </svg>\n                Star on GitHub\n                {githubStars !== null && (\n                  <span className=\"rounded-full bg-gray-100 px-2 py-0.5 text-xs font-semibold text-gray-600\">\n                    {githubStars >= 1000\n                      ? 
`${(githubStars / 1000).toFixed(1)}k`\n                      : githubStars}\n                  </span>\n                )}\n              </a>\n            </div>\n\n            {/* Hero code snippet with language tabs */}\n            <div className=\"mt-12 w-full max-w-2xl mx-auto\">\n              <div className=\"rounded-xl border border-gray-200 bg-white overflow-hidden\">\n                <div className=\"flex items-center px-4 border-b border-gray-200 bg-gray-50\">\n                  <div className=\"flex gap-6 text-sm\">\n                    <button className=\"flex items-center gap-2 text-gray-900 font-medium border-b-2 border-gray-900 py-3 px-1 cursor-pointer\">\n                      <JsIcon className=\"h-4 w-4\" />\n                      JavaScript\n                    </button>\n                    <a\n                      href=\"https://github.com/opral/lix/issues/370\"\n                      target=\"_blank\"\n                      rel=\"noopener noreferrer\"\n                      className=\"flex items-center gap-2 text-gray-400 py-3 px-1 hover:text-gray-600 transition-colors cursor-pointer\"\n                    >\n                      <PythonIcon className=\"h-4 w-4\" />\n                      Python\n                    </a>\n                    <a\n                      href=\"https://github.com/opral/lix/issues/371\"\n                      target=\"_blank\"\n                      rel=\"noopener noreferrer\"\n                      className=\"flex items-center gap-2 text-gray-400 py-3 px-1 hover:text-gray-600 transition-colors cursor-pointer\"\n                    >\n                      <RustIcon className=\"h-4 w-4\" />\n                      Rust\n                    </a>\n                    <a\n                      href=\"https://github.com/opral/lix/issues/373\"\n                      target=\"_blank\"\n                      rel=\"noopener noreferrer\"\n                      className=\"flex items-center gap-2 text-gray-400 py-3 px-1 
hover:text-gray-600 transition-colors cursor-pointer\"\n                    >\n                      <GoIcon className=\"h-4 w-4\" />\n                      Go\n                    </a>\n                  </div>\n                </div>\n                <div className=\"p-5 text-sm leading-relaxed font-mono text-left overflow-x-auto whitespace-pre-wrap\">\n                  <span className=\"text-indigo-600\">import</span>{\" \"}\n                  <span className=\"text-gray-900\">{\"{ openLix }\"}</span>{\" \"}\n                  <span className=\"text-indigo-600\">from</span>{\" \"}\n                  <span className=\"text-amber-600\">\"@lix-js/sdk\"</span>\n                  <span className=\"text-gray-900\">;</span>\n                  <br />\n                  <span className=\"text-indigo-600\">const</span>{\" \"}\n                  <span className=\"text-gray-900\">lix</span>{\" \"}\n                  <span className=\"text-gray-900\">= </span>\n                  <span className=\"text-indigo-600\">await</span>{\" \"}\n                  <span className=\"text-gray-900\">openLix</span>\n                  <span className=\"text-gray-900\">{\"()\"}</span>\n                </div>\n              </div>\n            </div>\n          </div>\n        </section>\n\n        {/* Value Props - Lightweight */}\n        <section className=\"py-12 px-6 sm:px-12 md:px-16 bg-white\">\n          <div className=\"max-w-6xl mx-auto\">\n            <div className=\"grid grid-cols-1 sm:grid-cols-3 gap-8 sm:gap-12\">\n              <div className=\"flex flex-col items-center sm:items-start gap-4\">\n                {/* Library/dependency illustration */}\n                <div className=\"w-full max-w-[220px] h-32 rounded-lg border border-gray-200 bg-white p-4\">\n                  <div className=\"text-xs text-gray-400 mb-3\">dependencies</div>\n                  <div className=\"space-y-2 font-mono text-xs\">\n                    <div className=\"flex items-center 
justify-between\">\n                      <div className=\"flex items-center gap-2\">\n                        <div className=\"w-2.5 h-2.5 rounded bg-gray-200\"></div>\n                        <span className=\"text-gray-400\">http</span>\n                      </div>\n                      <span className=\"text-gray-300\">2.1</span>\n                    </div>\n                    <div className=\"flex items-center justify-between\">\n                      <div className=\"flex items-center gap-2\">\n                        <div className=\"w-2.5 h-2.5 rounded bg-gray-200\"></div>\n                        <span className=\"text-gray-400\">db</span>\n                      </div>\n                      <span className=\"text-gray-300\">3.0</span>\n                    </div>\n                    <div className=\"flex items-center justify-between\">\n                      <div className=\"flex items-center gap-2\">\n                        <div className=\"w-2.5 h-2.5 rounded bg-cyan-200 border border-cyan-400\"></div>\n                        <span className=\"text-gray-700 font-medium\">lix</span>\n                      </div>\n                      <span className=\"text-gray-400\">1.0</span>\n                    </div>\n                  </div>\n                </div>\n                <div className=\"text-center sm:text-left\">\n                  <h3 className=\"text-lg font-semibold text-gray-900\">\n                    Fits into your tech stack\n                  </h3>\n                  <p className=\"text-gray-600 text-base mt-2\">\n                    Import Lix and get branching, diff, and rollback without\n                    changing your architecture.\n                  </p>\n                </div>\n              </div>\n              <div className=\"flex flex-col items-center sm:items-start gap-4\">\n                {/* Diff illustration - semantic/field-level */}\n                <div className=\"w-full max-w-[220px] h-32 rounded-lg border 
border-gray-200 bg-white p-4\">\n                  <div className=\"text-xs text-gray-400 mb-3\">config.json</div>\n                  <div className=\"space-y-3 text-sm\">\n                    <div className=\"flex items-center justify-between\">\n                      <span className=\"text-gray-600\">title</span>\n                      <div className=\"flex items-center gap-1.5\">\n                        <span className=\"bg-red-50 text-red-700 px-1 rounded line-through\">\n                          Draft\n                        </span>\n                        <span className=\"text-gray-300\">→</span>\n                        <span className=\"bg-green-50 text-green-700 px-1 rounded\">\n                          Final\n                        </span>\n                      </div>\n                    </div>\n                    <div className=\"flex items-center justify-between\">\n                      <span className=\"text-gray-600\">price</span>\n                      <div className=\"flex items-center gap-1.5\">\n                        <span className=\"bg-red-50 text-red-700 px-1 rounded line-through\">\n                          10\n                        </span>\n                        <span className=\"text-gray-300\">→</span>\n                        <span className=\"bg-green-50 text-green-700 px-1 rounded\">\n                          12\n                        </span>\n                      </div>\n                    </div>\n                  </div>\n                </div>\n                <div className=\"text-center sm:text-left\">\n                  <h3 className=\"text-lg font-semibold text-gray-900\">\n                    Tracks semantic changes\n                  </h3>\n                  <p className=\"text-gray-600 text-base mt-2\">\n                    Lix stores semantic changes via plugins. 
Diffs, blame, and\n                    history are queryable via SQL.\n                  </p>\n                </div>\n              </div>\n              <div className=\"flex flex-col items-center sm:items-start gap-4\">\n                {/* Trace illustration */}\n                <div className=\"w-full max-w-[220px] h-32 rounded-lg border border-gray-200 bg-white p-4 font-mono text-xs\">\n                  <div className=\"flex items-center gap-2 text-gray-400\">\n                    <span>12:03</span>\n                    <svg\n                      className=\"w-3 h-3\"\n                      viewBox=\"0 0 24 24\"\n                      fill=\"none\"\n                      stroke=\"currentColor\"\n                      strokeWidth=\"2\"\n                    >\n                      <path d=\"M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z\" />\n                      <polyline points=\"14 2 14 8 20 8\" />\n                      <line x1=\"16\" y1=\"13\" x2=\"8\" y2=\"13\" />\n                      <line x1=\"16\" y1=\"17\" x2=\"8\" y2=\"17\" />\n                    </svg>\n                    <span className=\"text-gray-600\">edit config.json</span>\n                  </div>\n                  <div className=\"flex items-center gap-2 text-gray-400 mt-1.5\">\n                    <span>12:04</span>\n                    <svg\n                      className=\"w-3 h-3\"\n                      viewBox=\"0 0 24 24\"\n                      fill=\"none\"\n                      stroke=\"currentColor\"\n                      strokeWidth=\"2\"\n                    >\n                      <path d=\"M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z\" />\n                      <polyline points=\"14 2 14 8 20 8\" />\n                      <line x1=\"16\" y1=\"13\" x2=\"8\" y2=\"13\" />\n                      <line x1=\"16\" y1=\"17\" x2=\"8\" y2=\"17\" />\n                    </svg>\n                    <span className=\"text-gray-600\">update 
data.xlsx</span>\n                  </div>\n                  <div className=\"flex items-center gap-2 mt-1.5\">\n                    <span className=\"text-gray-400\">12:05</span>\n                    <div className=\"w-3 h-3 rounded-full bg-green-500 flex items-center justify-center\">\n                      <svg\n                        className=\"w-2 h-2 text-white\"\n                        viewBox=\"0 0 24 24\"\n                        fill=\"none\"\n                        stroke=\"currentColor\"\n                        strokeWidth=\"3\"\n                      >\n                        <polyline points=\"20 6 9 17 4 12\" />\n                      </svg>\n                    </div>\n                    <span className=\"text-gray-900 font-medium\">approved</span>\n                  </div>\n                  <div className=\"flex items-center gap-2 text-gray-400 mt-1.5\">\n                    <span>12:06</span>\n                    <svg\n                      className=\"w-3 h-3\"\n                      viewBox=\"0 0 24 24\"\n                      fill=\"none\"\n                      stroke=\"currentColor\"\n                      strokeWidth=\"2\"\n                    >\n                      <path d=\"M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z\" />\n                      <polyline points=\"14 2 14 8 20 8\" />\n                      <line x1=\"16\" y1=\"13\" x2=\"8\" y2=\"13\" />\n                      <line x1=\"16\" y1=\"17\" x2=\"8\" y2=\"17\" />\n                    </svg>\n                    <span className=\"text-gray-600\">edit report.pdf</span>\n                  </div>\n                </div>\n                <div className=\"text-center sm:text-left\">\n                  <h3 className=\"text-lg font-semibold text-gray-900\">\n                    Human in the loop for agents\n                  </h3>\n                  <p className=\"text-gray-600 text-base mt-2\">\n                    Agents propose changes in isolated versions. 
Humans review,\n                    approve, and merge.\n                  </p>\n                </div>\n              </div>\n            </div>\n          </div>\n        </section>\n\n        {/* README Content */}\n        {readmeHtml && (\n          <section className=\"py-16 px-6 sm:px-12 md:px-16 bg-white border-t border-gray-200\">\n            <div className=\"max-w-4xl mx-auto\">\n              {/* GitHub README banner */}\n              <a\n                href=\"https://github.com/opral/lix\"\n                target=\"_blank\"\n                rel=\"noopener noreferrer\"\n                className=\"flex items-center justify-between mb-10 px-4 py-3 rounded-lg border border-gray-200 bg-gray-50 hover:bg-gray-100 transition-colors group\"\n              >\n                <div className=\"flex items-center gap-3\">\n                  <svg\n                    className=\"w-5 h-5 text-gray-700\"\n                    viewBox=\"0 0 24 24\"\n                    fill=\"currentColor\"\n                  >\n                    <path d=\"M12 2C6.477 2 2 6.477 2 12c0 4.42 2.865 8.166 6.839 9.489.5.092.682-.217.682-.48 0-.237-.008-.866-.013-1.7-2.782.603-3.369-1.34-3.369-1.34-.454-1.156-1.11-1.464-1.11-1.464-.908-.62.069-.608.069-.608 1.003.07 1.531 1.03 1.531 1.03.892 1.529 2.341 1.087 2.91.831.092-.645.35-1.087.636-1.337-2.22-.253-4.555-1.11-4.555-4.943 0-1.091.39-1.984 1.029-2.683-.103-.253-.446-1.27.098-2.647 0 0 .84-.268 2.75 1.026A9.578 9.578 0 0112 6.836c.85.004 1.705.114 2.504.336 1.909-1.294 2.747-1.026 2.747-1.026.546 1.377.203 2.394.1 2.647.64.699 1.028 1.592 1.028 2.683 0 3.842-2.339 4.687-4.566 4.935.359.309.678.919.678 1.852 0 1.336-.012 2.415-.012 2.743 0 .267.18.578.688.48C19.138 20.161 22 16.416 22 12c0-5.523-4.477-10-10-10z\" />\n                  </svg>\n                  <div>\n                    <span className=\"text-sm font-medium text-gray-900\">\n                      README.md\n                    </span>\n                    <span 
className=\"text-sm text-gray-500 ml-2\">\n                      from opral/lix\n                    </span>\n                  </div>\n                </div>\n                <div className=\"flex items-center gap-1.5 text-sm text-gray-600 group-hover:text-gray-900\">\n                  View on GitHub\n                  <svg\n                    className=\"w-4 h-4\"\n                    fill=\"none\"\n                    viewBox=\"0 0 24 24\"\n                    stroke=\"currentColor\"\n                    strokeWidth={2}\n                  >\n                    <path\n                      strokeLinecap=\"round\"\n                      strokeLinejoin=\"round\"\n                      d=\"M14 5l7 7m0 0l-7 7m7-7H3\"\n                    />\n                  </svg>\n                </div>\n              </a>\n              <article\n                className=\"markdown-wc-body\"\n                dangerouslySetInnerHTML={{ __html: readmeHtml }}\n              />\n            </div>\n          </section>\n        )}\n\n        <Footer />\n      </main>\n    </div>\n  );\n}\n\nexport default LandingPage;\n"
  },
  {
    "path": "packages/website/src/components/markdown-page.interactive.js",
    "content": "import \"./doc-code-snippet-element\";\n\nconst COPY_BUTTON_ATTR = \"data-mwc-copy-button\";\n\nfunction ensureCopyButtons(root = document) {\n  const blocks = root.querySelectorAll(\"pre[data-mwc-codeblock]\");\n  for (const pre of blocks) {\n    if (pre.querySelector(`[${COPY_BUTTON_ATTR}]`)) continue;\n\n    const button = document.createElement(\"button\");\n    button.type = \"button\";\n    button.setAttribute(COPY_BUTTON_ATTR, \"\");\n    button.className = \"mwc-copy-button\";\n    button.textContent = \"Copy\";\n    pre.appendChild(button);\n  }\n}\n\nfunction handleCopyClick(event) {\n  const target = event.target;\n  if (!(target instanceof HTMLElement)) return;\n  const button = target.closest(`[${COPY_BUTTON_ATTR}]`);\n  if (!button) return;\n\n  const pre = button.closest(\"pre[data-mwc-codeblock]\");\n  const code = pre?.querySelector(\"code\")?.textContent ?? \"\";\n  navigator.clipboard.writeText(code);\n\n  const previous = button.textContent;\n  button.textContent = \"Copied\";\n  window.setTimeout(() => {\n    button.textContent = previous || \"Copy\";\n  }, 1500);\n}\n\nfunction initCopyButtons() {\n  if (window.__lixDocsCopyButtonsInitialized) return;\n  window.__lixDocsCopyButtonsInitialized = true;\n\n  ensureCopyButtons();\n  document.addEventListener(\"click\", handleCopyClick);\n\n  const observer = new MutationObserver(() => ensureCopyButtons());\n  observer.observe(document.body, { childList: true, subtree: true });\n}\n\nif (typeof window !== \"undefined\") {\n  initCopyButtons();\n}\n"
  },
  {
    "path": "packages/website/src/components/markdown-page.style.css",
    "content": "/**\n * VitePress-style markdown content styles\n * Based on VitePress's vp-doc styling\n */\n\n/* CSS Custom Properties matching VitePress */\n:root {\n  --vp-c-brand-1: #3451b2;\n  --vp-c-brand-2: #3a5ccc;\n  --vp-c-brand-soft: rgba(100, 108, 255, 0.14);\n  --vp-c-text-1: rgba(60, 60, 67);\n  --vp-c-text-2: rgba(60, 60, 67, 0.78);\n  --vp-c-text-3: rgba(60, 60, 67, 0.56);\n  --vp-c-divider: rgba(60, 60, 67, 0.12);\n  --vp-c-bg: #ffffff;\n  --vp-c-bg-soft: #f6f6f7;\n  --vp-c-bg-alt: #f6f6f7;\n  --vp-c-border: rgba(60, 60, 67, 0.12);\n  --vp-c-gutter: rgba(60, 60, 67, 0.05);\n\n  /* Tip colors - Green */\n  --vp-c-tip-1: #059669;\n  --vp-c-tip-2: rgba(5, 150, 105, 0.16);\n  --vp-c-tip-3: rgba(5, 150, 105, 0.08);\n\n  /* Warning colors - Orange */\n  --vp-c-warning-1: #ea580c;\n  --vp-c-warning-2: rgba(234, 88, 12, 0.16);\n  --vp-c-warning-3: rgba(234, 88, 12, 0.08);\n\n  /* Danger colors */\n  --vp-c-danger-1: #b8272c;\n  --vp-c-danger-2: rgba(244, 63, 94, 0.16);\n  --vp-c-danger-3: rgba(244, 63, 94, 0.08);\n\n  /* Note/Info colors */\n  --vp-c-note-1: #3451b2;\n  --vp-c-note-2: rgba(100, 108, 255, 0.16);\n  --vp-c-note-3: rgba(100, 108, 255, 0.08);\n\n  /* Important colors */\n  --vp-c-important-1: #8250df;\n  --vp-c-important-2: rgba(130, 80, 223, 0.16);\n  --vp-c-important-3: rgba(130, 80, 223, 0.08);\n\n  /* Caution colors */\n  --vp-c-caution-1: #b8272c;\n  --vp-c-caution-2: rgba(244, 63, 94, 0.16);\n  --vp-c-caution-3: rgba(244, 63, 94, 0.08);\n\n  --vp-code-font-size: 0.9375em;\n  --vp-code-line-height: 1.6;\n}\n\n/* Base markdown body styles */\n.markdown-wc-body {\n  color: var(--vp-c-text-1);\n  line-height: 1.7;\n  font-size: 16px;\n}\n\n/* Headings */\n.markdown-wc-body h1,\n.markdown-wc-body h2,\n.markdown-wc-body h3,\n.markdown-wc-body h4,\n.markdown-wc-body h5,\n.markdown-wc-body h6 {\n  position: relative;\n  font-weight: 600;\n  outline: none;\n  color: var(--vp-c-text-1);\n}\n\n.markdown-wc-body h1 {\n  letter-spacing: -0.02em;\n  
line-height: 40px;\n  font-size: 28px;\n  margin: 0 0 16px 0;\n}\n\n.markdown-wc-body h2 {\n  margin: 48px 0 16px;\n  border-top: 1px solid var(--vp-c-divider);\n  padding-top: 24px;\n  letter-spacing: -0.02em;\n  line-height: 32px;\n  font-size: 24px;\n}\n\n.markdown-wc-body h3 {\n  margin: 32px 0 0;\n  letter-spacing: -0.01em;\n  line-height: 28px;\n  font-size: 20px;\n}\n\n.markdown-wc-body h4 {\n  margin: 24px 0 0;\n  letter-spacing: -0.01em;\n  line-height: 24px;\n  font-size: 18px;\n}\n\n/* Anchor scroll offset so hash links don't stick to viewport top */\n.markdown-wc-body h2[id],\n.markdown-wc-body h3[id],\n.markdown-wc-body h4[id],\n.markdown-wc-body h5[id],\n.markdown-wc-body h6[id] {\n  scroll-margin-top: 64px;\n}\n\n/* Heading anchor hash on hover (VitePress-style) */\n.markdown-wc-body h2:has(> a),\n.markdown-wc-body h3:has(> a),\n.markdown-wc-body h4:has(> a),\n.markdown-wc-body h5:has(> a),\n.markdown-wc-body h6:has(> a) {\n  position: relative;\n}\n\n.markdown-wc-body h2:has(> a)::before,\n.markdown-wc-body h3:has(> a)::before,\n.markdown-wc-body h4:has(> a)::before,\n.markdown-wc-body h5:has(> a)::before,\n.markdown-wc-body h6:has(> a)::before {\n  content: \"#\";\n  position: absolute;\n  left: 0;\n  top: 0;\n  transform: translateX(-1.1em);\n  color: var(--vp-c-brand-1);\n  opacity: 0;\n  font-weight: 600;\n  line-height: inherit;\n  transition: opacity 0.25s;\n}\n\n/* h2 has extra padding-top for the divider; align hash with text */\n.markdown-wc-body h2:has(> a)::before {\n  top: 24px;\n}\n\n/* First h2 after h1 has no divider/padding */\n.markdown-wc-body h1 + h2:has(> a)::before {\n  top: 0;\n}\n\n.markdown-wc-body h2:has(> a):hover::before,\n.markdown-wc-body h3:has(> a):hover::before,\n.markdown-wc-body h4:has(> a):hover::before,\n.markdown-wc-body h5:has(> a):hover::before,\n.markdown-wc-body h6:has(> a):hover::before,\n.markdown-wc-body h2:has(> a):focus-within::before,\n.markdown-wc-body h3:has(> 
a):focus-within::before,\n.markdown-wc-body h4:has(> a):focus-within::before,\n.markdown-wc-body h5:has(> a):focus-within::before,\n.markdown-wc-body h6:has(> a):focus-within::before {\n  opacity: 1;\n}\n\n/* First h2 should not have border-top */\n.markdown-wc-body h1 + h2 {\n  margin-top: 24px;\n  border-top: none;\n  padding-top: 0;\n}\n\n/* Paragraphs */\n.markdown-wc-body p {\n  margin: 16px 0;\n  line-height: 28px;\n}\n\n/* First paragraph after heading */\n.markdown-wc-body h1 + p,\n.markdown-wc-body h2 + p,\n.markdown-wc-body h3 + p,\n.markdown-wc-body h4 + p {\n  margin-top: 8px;\n}\n\n/* Links */\n.markdown-wc-body a {\n  font-weight: 500;\n  color: var(--vp-c-brand-1);\n  text-decoration: underline;\n  text-underline-offset: 2px;\n  transition:\n    color 0.25s,\n    opacity 0.25s;\n}\n\n.markdown-wc-body a:hover {\n  color: var(--vp-c-brand-2);\n}\n\n/* Links inside headings should inherit heading styles */\n.markdown-wc-body h1 a,\n.markdown-wc-body h2 a,\n.markdown-wc-body h3 a,\n.markdown-wc-body h4 a,\n.markdown-wc-body h5 a,\n.markdown-wc-body h6 a {\n  color: inherit;\n  font-weight: inherit;\n  text-decoration: none;\n}\n\n.markdown-wc-body h1 a:hover,\n.markdown-wc-body h2 a:hover,\n.markdown-wc-body h3 a:hover,\n.markdown-wc-body h4 a:hover,\n.markdown-wc-body h5 a:hover,\n.markdown-wc-body h6 a:hover {\n  color: inherit;\n}\n\n/* Lists */\n.markdown-wc-body ul,\n.markdown-wc-body ol {\n  padding-left: 1.25rem;\n  margin: 16px 0;\n}\n\n.markdown-wc-body ul {\n  list-style: disc;\n}\n\n.markdown-wc-body ol {\n  list-style: decimal;\n}\n\n.markdown-wc-body li + li {\n  margin-top: 8px;\n}\n\n.markdown-wc-body li > ol,\n.markdown-wc-body li > ul {\n  margin: 8px 0 0;\n}\n\n.markdown-wc-body ul.contains-task-list {\n  padding-left: 0;\n  list-style: none;\n}\n\n.markdown-wc-body li.task-list-item {\n  display: flex;\n  align-items: flex-start;\n  gap: 8px;\n}\n\n.markdown-wc-body li.task-list-item + li.task-list-item {\n  margin-top: 
8px;\n}\n\n.markdown-wc-body li.task-list-item > input[type=\"checkbox\"] {\n  appearance: none;\n  display: inline-grid;\n  place-content: center;\n  flex: 0 0 auto;\n  width: 14px;\n  height: 14px;\n  margin: 5px 0 0;\n  border: 1px solid var(--vp-c-divider);\n  border-radius: 3px;\n  background: var(--vp-c-bg);\n  opacity: 1;\n}\n\n.markdown-wc-body li.task-list-item > input[type=\"checkbox\"]:checked {\n  border-color: var(--vp-c-brand-1);\n  background: var(--vp-c-brand-1);\n}\n\n.markdown-wc-body li.task-list-item > input[type=\"checkbox\"]:checked::before {\n  content: \"\";\n  width: 7px;\n  height: 4px;\n  border: solid white;\n  border-width: 0 0 2px 2px;\n  transform: translateY(-1px) rotate(-45deg);\n}\n\n/* Inline code */\n.markdown-wc-body :not(pre) > code {\n  font-size: var(--vp-code-font-size);\n  color: var(--vp-c-text-1);\n  background-color: var(--vp-c-bg-soft);\n  border-radius: 4px;\n  padding: 2px 6px;\n  font-weight: 500;\n  transition:\n    color 0.25s,\n    background-color 0.5s;\n}\n\n/* Ensure code inside pre doesn't have nested background */\n.markdown-wc-body pre code {\n  background: transparent !important;\n}\n\n/* Syntax highlighting color overrides for better contrast */\n.markdown-wc-body pre .hljs {\n  color: var(--vp-c-text-1);\n}\n\n/* Keywords (export, default, const, etc.) 
*/\n.markdown-wc-body pre .hljs-keyword,\n.markdown-wc-body pre .hljs-selector-tag,\n.markdown-wc-body pre .hljs-built_in {\n  color: #d73a49;\n}\n\n/* Strings */\n.markdown-wc-body pre .hljs-string,\n.markdown-wc-body pre .hljs-attr {\n  color: #032f62;\n}\n\n/* Object property names / attributes */\n.markdown-wc-body pre .hljs-attr {\n  color: #005cc5;\n}\n\n/* Comments */\n.markdown-wc-body pre .hljs-comment,\n.markdown-wc-body pre .hljs-quote {\n  color: #6a737d;\n  font-style: italic;\n}\n\n/* Numbers */\n.markdown-wc-body pre .hljs-number {\n  color: #005cc5;\n}\n\n/* Functions */\n.markdown-wc-body pre .hljs-title,\n.markdown-wc-body pre .hljs-title.function_ {\n  color: #6f42c1;\n}\n\n/* Types */\n.markdown-wc-body pre .hljs-type,\n.markdown-wc-body pre .hljs-class {\n  color: #d73a49;\n}\n\n/* Diff styling for code blocks */\n.markdown-wc-body pre code .hljs-deletion {\n  background-color: #ffebee;\n  color: #b71c1c;\n  padding: 0 2px;\n}\n\n.markdown-wc-body pre code .hljs-addition {\n  background-color: #e8f5e9;\n  color: #1b5e20;\n  padding: 0 2px;\n}\n\n.markdown-wc-body pre code .hljs-deletion * {\n  color: #b71c1c !important;\n}\n\n.markdown-wc-body pre code .hljs-addition * {\n  color: #1b5e20 !important;\n}\n\n.markdown-wc-body pre code .hljs-meta {\n  color: inherit;\n  font-weight: 600;\n}\n\n.markdown-wc-body a > code {\n  color: var(--vp-c-brand-1);\n}\n\n/* Code blocks */\n.markdown-wc-body pre {\n  margin: 16px 0;\n  padding: 8px 12px;\n  background-color: var(--vp-c-bg-soft);\n  border-radius: 8px;\n  overflow-x: auto;\n  position: relative;\n}\n\n/* Copy button for code blocks */\n.markdown-wc-body pre {\n  position: relative;\n}\n\n.markdown-wc-body pre > button.mwc-copy-button {\n  position: absolute;\n  top: 8px;\n  right: 8px;\n  padding: 4px 8px;\n  font-size: 12px;\n  font-weight: 500;\n  border-radius: 6px;\n  background: var(--vp-c-bg);\n  color: var(--vp-c-text-2);\n  border: 1px solid var(--vp-c-divider);\n  opacity: 0;\n  cursor: 
pointer;\n  transition:\n    opacity 0.2s,\n    color 0.2s,\n    border-color 0.2s,\n    background-color 0.2s;\n}\n\n.markdown-wc-body pre:hover > button.mwc-copy-button {\n  opacity: 1;\n}\n\n.markdown-wc-body pre > button.mwc-copy-button:hover {\n  color: var(--vp-c-text-1);\n  border-color: var(--vp-c-border);\n  background: var(--vp-c-bg-soft);\n}\n\n.markdown-wc-body pre code {\n  display: block;\n  padding: 0;\n  width: fit-content;\n  min-width: 100%;\n  line-height: var(--vp-code-line-height);\n  font-size: var(--vp-code-font-size);\n  color: var(--vp-c-text-1);\n  background: transparent;\n  border-radius: 0;\n  font-weight: 400;\n  transition: color 0.25s;\n}\n\n/* Blockquotes */\n.markdown-wc-body blockquote {\n  margin: 16px 0;\n  border-left: 2px solid var(--vp-c-divider);\n  padding-left: 16px;\n  color: var(--vp-c-text-2);\n  transition:\n    border-color 0.5s,\n    color 0.5s;\n}\n\n.markdown-wc-body blockquote > p {\n  margin: 0;\n  font-size: 16px;\n  line-height: 28px;\n}\n\n.markdown-wc-body blockquote[data-mwc-alert] > p {\n  font-size: 0.875rem;\n  line-height: 1.5rem;\n}\n\n/* GitHub-style alerts/callouts */\n.markdown-wc-body blockquote[data-mwc-alert] {\n  border-left: none;\n  border-radius: 8px;\n  padding: 16px;\n  margin: 16px 0;\n  color: var(--vp-c-text-1);\n  font-size: 0.875rem;\n  line-height: 1.5rem;\n}\n\n.markdown-wc-body blockquote[data-mwc-alert] [data-mwc-alert-marker] {\n  display: none;\n}\n\n.markdown-wc-body blockquote[data-mwc-alert]::before {\n  display: block;\n  font-weight: 600;\n  margin-bottom: 8px;\n}\n\n.markdown-wc-body blockquote[data-mwc-alert=\"note\"] {\n  background-color: var(--vp-c-note-3);\n  border: 1px solid var(--vp-c-note-2);\n}\n.markdown-wc-body blockquote[data-mwc-alert=\"note\"]::before {\n  content: \"Note\";\n  color: var(--vp-c-note-1);\n}\n\n.markdown-wc-body blockquote[data-mwc-alert=\"tip\"] {\n  background-color: var(--vp-c-tip-3);\n  border: 1px solid 
var(--vp-c-tip-2);\n}\n.markdown-wc-body blockquote[data-mwc-alert=\"tip\"]::before {\n  content: \"Tip\";\n  color: var(--vp-c-tip-1);\n}\n\n.markdown-wc-body blockquote[data-mwc-alert=\"important\"] {\n  background-color: var(--vp-c-important-3);\n  border: 1px solid var(--vp-c-important-2);\n}\n.markdown-wc-body blockquote[data-mwc-alert=\"important\"]::before {\n  content: \"Important\";\n  color: var(--vp-c-important-1);\n}\n\n.markdown-wc-body blockquote[data-mwc-alert=\"warning\"] {\n  background-color: var(--vp-c-warning-3);\n  border: 1px solid var(--vp-c-warning-2);\n}\n.markdown-wc-body blockquote[data-mwc-alert=\"warning\"]::before {\n  content: \"Warning\";\n  color: var(--vp-c-warning-1);\n}\n\n.markdown-wc-body blockquote[data-mwc-alert=\"caution\"] {\n  background-color: var(--vp-c-caution-3);\n  border: 1px solid var(--vp-c-caution-2);\n}\n.markdown-wc-body blockquote[data-mwc-alert=\"caution\"]::before {\n  content: \"Caution\";\n  color: var(--vp-c-caution-1);\n}\n\n/* Custom callout styling */\n.markdown-wc-body .callout,\n.markdown-wc-body .custom-block {\n  margin: 16px 0;\n  border-radius: 8px;\n  padding: 16px 16px 8px;\n  line-height: 24px;\n  font-size: 14px;\n  color: var(--vp-c-text-1);\n}\n\n.markdown-wc-body .callout p,\n.markdown-wc-body .custom-block p {\n  margin: 8px 0;\n  line-height: 24px;\n}\n\n.markdown-wc-body .callout-title,\n.markdown-wc-body .custom-block-title {\n  display: flex;\n  align-items: center;\n  gap: 8px;\n  font-weight: 600;\n  margin-bottom: 8px;\n}\n\n/* TIP callout */\n.markdown-wc-body .callout.tip,\n.markdown-wc-body .custom-block.tip {\n  background-color: var(--vp-c-tip-3);\n  border: 1px solid var(--vp-c-tip-2);\n}\n\n.markdown-wc-body .callout.tip .callout-title,\n.markdown-wc-body .custom-block.tip .custom-block-title {\n  color: var(--vp-c-tip-1);\n}\n\n/* NOTE/INFO callout */\n.markdown-wc-body .callout.note,\n.markdown-wc-body .callout.info,\n.markdown-wc-body .custom-block.info {\n  
background-color: var(--vp-c-note-3);\n  border: 1px solid var(--vp-c-note-2);\n}\n\n.markdown-wc-body .callout.note .callout-title,\n.markdown-wc-body .callout.info .callout-title,\n.markdown-wc-body .custom-block.info .custom-block-title {\n  color: var(--vp-c-note-1);\n}\n\n/* WARNING callout */\n.markdown-wc-body .callout.warning,\n.markdown-wc-body .custom-block.warning {\n  background-color: var(--vp-c-warning-3);\n  border: 1px solid var(--vp-c-warning-2);\n}\n\n.markdown-wc-body .callout.warning .callout-title,\n.markdown-wc-body .custom-block.warning .custom-block-title {\n  color: var(--vp-c-warning-1);\n}\n\n/* DANGER/CAUTION callout */\n.markdown-wc-body .callout.danger,\n.markdown-wc-body .callout.caution,\n.markdown-wc-body .custom-block.danger {\n  background-color: var(--vp-c-danger-3);\n  border: 1px solid var(--vp-c-danger-2);\n}\n\n.markdown-wc-body .callout.danger .callout-title,\n.markdown-wc-body .callout.caution .callout-title,\n.markdown-wc-body .custom-block.danger .custom-block-title {\n  color: var(--vp-c-danger-1);\n}\n\n/* IMPORTANT callout */\n.markdown-wc-body .callout.important {\n  background-color: var(--vp-c-important-3);\n  border: 1px solid var(--vp-c-important-2);\n}\n\n.markdown-wc-body .callout.important .callout-title {\n  color: var(--vp-c-important-1);\n}\n\n/* Tables */\n.markdown-wc-body table {\n  display: block;\n  border-collapse: collapse;\n  margin: 20px 0;\n  overflow-x: auto;\n}\n\n.markdown-wc-body tr {\n  background-color: var(--vp-c-bg);\n  border-top: 1px solid var(--vp-c-divider);\n  transition: background-color 0.5s;\n}\n\n.markdown-wc-body tr:nth-child(2n) {\n  background-color: var(--vp-c-bg-soft);\n}\n\n.markdown-wc-body th,\n.markdown-wc-body td {\n  border: 1px solid var(--vp-c-divider);\n  padding: 8px 16px;\n}\n\n.markdown-wc-body th {\n  font-size: 14px;\n  font-weight: 600;\n  color: var(--vp-c-text-1);\n  background-color: var(--vp-c-bg-soft);\n}\n\n.markdown-wc-body td {\n  font-size: 
14px;\n}\n\n/* Horizontal rule */\n.markdown-wc-body hr {\n  margin: 16px 0;\n  border: none;\n  border-top: 1px solid var(--vp-c-divider);\n}\n\n/* Images - General */\n.markdown-wc-body img {\n  max-width: 100%;\n  border-radius: 8px;\n}\n\n/* Respect explicit height/width attributes on images */\n.markdown-wc-body img[height] {\n  height: attr(height px);\n}\n\n.markdown-wc-body img[width] {\n  width: attr(width px);\n}\n\n/* Fallback for browsers that don't support attr() for height/width */\n.markdown-wc-body img[height=\"14\"] {\n  height: 14px;\n}\n.markdown-wc-body img[height=\"18\"] {\n  height: 18px;\n}\n.markdown-wc-body img[height=\"20\"] {\n  height: 20px;\n}\n.markdown-wc-body img[height=\"24\"] {\n  height: 24px;\n}\n.markdown-wc-body img[height=\"32\"] {\n  height: 32px;\n}\n.markdown-wc-body img[width=\"18\"] {\n  width: 18px;\n}\n.markdown-wc-body img[width=\"20\"] {\n  width: 20px;\n}\n.markdown-wc-body img[width=\"24\"] {\n  width: 24px;\n}\n.markdown-wc-body img[width=\"32\"] {\n  width: 32px;\n}\n\n/* Images - Standalone content images */\n.markdown-wc-body p > img:only-child {\n  display: block;\n  margin: 16px auto;\n}\n\n/* Images inside links - Always inline (badges, logos, icons) */\n.markdown-wc-body a > img {\n  display: inline-block;\n  vertical-align: middle;\n  margin: 0;\n  border: none;\n  border-radius: 3px;\n  background-color: transparent;\n}\n\n/* Links containing images - inline with spacing */\n.markdown-wc-body a:has(> img) {\n  display: inline-block;\n  text-decoration: none;\n}\n\n/* Badge-only paragraphs (no br, sub, or other block content) - flex layout */\n.markdown-wc-body p:has(a > img):not(:has(br)):not(:has(sub)):not(:has(sup)) {\n  display: flex;\n  flex-wrap: wrap;\n  gap: 4px;\n  align-items: center;\n}\n\n/* Paragraphs with mixed content (br, sub, etc.) 
- keep normal flow */\n.markdown-wc-body p:has(a > img):has(br),\n.markdown-wc-body p:has(a > img):has(sub) {\n  display: block;\n}\n\n.markdown-wc-body p:has(a > img):has(br) a:has(> img),\n.markdown-wc-body p:has(a > img):has(sub) a:has(> img) {\n  margin-right: 8px;\n}\n\n/* Centered content via align attribute */\n.markdown-wc-body [align=\"center\"],\n.markdown-wc-body p[align=\"center\"],\n.markdown-wc-body div[align=\"center\"] {\n  text-align: center;\n}\n\n/* Badge-only centered paragraphs need justify-content */\n.markdown-wc-body\n  p[align=\"center\"]:has(a > img):not(:has(br)):not(:has(sub)):not(:has(sup)) {\n  justify-content: center;\n}\n\n/* Strong/Bold */\n.markdown-wc-body strong {\n  font-weight: 600;\n}\n\n/* Definition lists */\n.markdown-wc-body dt {\n  font-weight: 600;\n  margin-top: 16px;\n}\n\n.markdown-wc-body dd {\n  margin-left: 1.25rem;\n  margin-top: 4px;\n}\n\n/* Keyboard shortcuts */\n.markdown-wc-body kbd {\n  display: inline-block;\n  padding: 0 6px;\n  font-size: 12px;\n  font-weight: 500;\n  line-height: 20px;\n  color: var(--vp-c-text-1);\n  background-color: var(--vp-c-bg-soft);\n  border: 1px solid var(--vp-c-border);\n  border-radius: 4px;\n  box-shadow: 0 1px 1px rgba(0, 0, 0, 0.04);\n}\n"
  },
  {
    "path": "packages/website/src/components/markdown-page.tsx",
    "content": "import { useEffect, useState } from \"react\";\nimport { splitTitleFromHtml } from \"../lib/seo\";\n\ntype CopyStatus = \"idle\" | \"copied\";\n\n/**\n * Copy icon used for the markdown copy button.\n *\n * @example\n * <CopyMarkdownIcon className=\"h-4 w-4\" />\n */\nconst CopyMarkdownIcon = ({ className = \"\" }: { className?: string }) => (\n  <svg\n    xmlns=\"http://www.w3.org/2000/svg\"\n    viewBox=\"0 0 24 24\"\n    fill=\"none\"\n    stroke=\"currentColor\"\n    strokeWidth=\"2\"\n    strokeLinecap=\"round\"\n    strokeLinejoin=\"round\"\n    className={className}\n    aria-hidden=\"true\"\n  >\n    <rect x=\"9\" y=\"9\" width=\"13\" height=\"13\" rx=\"2\" ry=\"2\" />\n    <path d=\"M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1\" />\n  </svg>\n);\n\n/**\n * Check icon shown on copy success.\n *\n * @example\n * <CopyCheckIcon className=\"h-4 w-4\" />\n */\nconst CopyCheckIcon = ({ className = \"\" }: { className?: string }) => (\n  <svg\n    xmlns=\"http://www.w3.org/2000/svg\"\n    width=\"24\"\n    height=\"24\"\n    viewBox=\"0 0 24 24\"\n    fill=\"none\"\n    stroke=\"currentColor\"\n    strokeWidth=\"2\"\n    strokeLinecap=\"round\"\n    strokeLinejoin=\"round\"\n    className={className}\n    aria-hidden=\"true\"\n  >\n    <path d=\"M20 6 9 17l-5-5\" />\n  </svg>\n);\n\n/**\n * Renders pre-parsed markdown HTML inside the docs layout.\n *\n * @example\n * <MarkdownPage html=\"<h1>Hi from Lix</h1>\" markdown=\"# Hi from Lix\" />\n */\nexport function MarkdownPage({\n  html,\n  markdown,\n  imports,\n}: {\n  html: string;\n  markdown?: string;\n  imports?: string[];\n}) {\n  const [copyStatus, setCopyStatus] = useState<CopyStatus>(\"idle\");\n\n  useEffect(() => {\n    // @ts-expect-error - JS-only module\n    import(\"./markdown-page.interactive.js\");\n  }, [html]);\n\n  useEffect(() => {\n    if (!imports || imports.length === 0) return;\n\n    for (const url of imports) {\n      if (!url) continue;\n      const existing = 
document.querySelector(\n        `script[data-mdwc-import=\"${url}\"]`,\n      );\n      if (existing) continue;\n\n      const script = document.createElement(\"script\");\n      script.type = \"module\";\n      script.src = url;\n      script.setAttribute(\"data-mdwc-import\", url);\n      document.head.appendChild(script);\n    }\n  }, [imports]);\n\n  const { title, body } = splitTitleFromHtml(html);\n\n  const handleCopy = () => {\n    if (!markdown) return;\n    const clipboard = navigator?.clipboard;\n    if (!clipboard?.writeText) return;\n\n    clipboard.writeText(markdown).then(() => {\n      setCopyStatus(\"copied\");\n      window.setTimeout(() => setCopyStatus(\"idle\"), 2000);\n    });\n  };\n\n  return (\n    <article className=\"markdown-wc-body\">\n      {title && (\n        <div className=\"mb-6 flex flex-col gap-4 sm:flex-row sm:items-center sm:justify-between\">\n          <h1 className=\"text-[28px] font-semibold leading-10 tracking-[-0.02em] text-slate-900\">\n            {title}\n          </h1>\n          <button\n            type=\"button\"\n            onClick={handleCopy}\n            className=\"inline-flex items-center gap-2 rounded-lg border border-slate-200 px-4 py-2 text-sm font-medium text-slate-700 transition hover:border-slate-300 hover:text-slate-900\"\n            aria-label=\"Copy markdown\"\n          >\n            {copyStatus === \"copied\" ? (\n              <CopyCheckIcon className=\"h-4 w-4 animate-[copy-arrow_0.4s_ease-out]\" />\n            ) : (\n              <CopyMarkdownIcon className=\"h-4 w-4\" />\n            )}\n            {copyStatus === \"copied\" ? \"Copied\" : \"Copy Markdown\"}\n          </button>\n        </div>\n      )}\n      <div dangerouslySetInnerHTML={{ __html: body }} />\n    </article>\n  );\n}\n"
  },
  {
    "path": "packages/website/src/components/prev-next-nav.tsx",
    "content": "import { Link } from \"@tanstack/react-router\";\n\ntype PrevNextItem = {\n  slug: string;\n  title: string;\n} | null;\n\ntype PrevNextNavProps = {\n  prev: PrevNextItem;\n  next: PrevNextItem;\n  basePath: string;\n  paramName?: string;\n  prevLabel?: string;\n  nextLabel?: string;\n  className?: string;\n};\n\n/**\n * Reusable previous/next navigation component for docs, blog, and RFCs.\n *\n * @example\n * <PrevNextNav\n *   prev={{ slug: \"intro\", title: \"Introduction\" }}\n *   next={{ slug: \"advanced\", title: \"Advanced Topics\" }}\n *   basePath=\"/docs\"\n *   paramName=\"slugId\"\n *   prevLabel=\"Previous\"\n *   nextLabel=\"Next\"\n * />\n */\nexport function PrevNextNav({\n  prev,\n  next,\n  basePath,\n  paramName = \"slug\",\n  prevLabel = \"Previous\",\n  nextLabel = \"Next\",\n  className = \"\",\n}: PrevNextNavProps) {\n  if (!prev && !next) return null;\n\n  return (\n    <nav\n      className={`grid grid-cols-1 gap-4 border-t border-slate-200 pt-8 sm:grid-cols-2 ${className}`}\n    >\n      <div className=\"min-w-0\">\n        {prev && (\n          <Link\n            to={`${basePath}/$${paramName}` as string}\n            params={{ [paramName]: prev.slug } as Record<string, string>}\n            className=\"group block w-full rounded-xl border border-slate-200 p-4 transition-colors hover:border-slate-300\"\n          >\n            <span className=\"flex items-center gap-1.5 text-sm text-slate-400\">\n              <svg\n                className=\"h-3 w-3\"\n                viewBox=\"0 0 24 24\"\n                fill=\"none\"\n                stroke=\"currentColor\"\n                strokeWidth=\"2\"\n                strokeLinecap=\"round\"\n                strokeLinejoin=\"round\"\n              >\n                <path d=\"M19 12H5M12 19l-7-7 7-7\" />\n              </svg>\n              {prevLabel}\n            </span>\n            <span className=\"mt-1 block font-medium text-[#3451b2] group-hover:text-[#3a5ccc]\">\n     
         {prev.title}\n            </span>\n          </Link>\n        )}\n      </div>\n\n      <div className=\"min-w-0\">\n        {next && (\n          <Link\n            to={`${basePath}/$${paramName}` as string}\n            params={{ [paramName]: next.slug } as Record<string, string>}\n            className=\"group block w-full rounded-xl border border-slate-200 p-4 transition-colors hover:border-slate-300\"\n          >\n            <span className=\"flex items-center justify-end gap-1.5 text-sm text-slate-400\">\n              {nextLabel}\n              <svg\n                className=\"h-3 w-3\"\n                viewBox=\"0 0 24 24\"\n                fill=\"none\"\n                stroke=\"currentColor\"\n                strokeWidth=\"2\"\n                strokeLinecap=\"round\"\n                strokeLinejoin=\"round\"\n              >\n                <path d=\"M5 12h14M12 5l7 7-7 7\" />\n              </svg>\n            </span>\n            <span className=\"mt-1 block font-medium text-right text-[#3451b2] group-hover:text-[#3a5ccc]\">\n              {next.title}\n            </span>\n          </Link>\n        )}\n      </div>\n    </nav>\n  );\n}\n"
  },
  {
    "path": "packages/website/src/github-stars-cache.ts",
    "content": "import githubCache from \"./github_repo_data.gen.json\";\n\nexport type GithubRepoMetrics = {\n  stars: number;\n  forks: number;\n  openIssues: number;\n  closedIssues: number;\n  contributorCount: number;\n};\n\ntype GithubCache = {\n  generatedAt: string;\n  data: Record<string, GithubRepoMetrics | null>;\n};\n\nconst cache = githubCache as GithubCache;\n\nfunction normalizeRepo(input: string): string | null {\n  const trimmed = input.trim();\n  if (!trimmed) return null;\n\n  const urlMatch = trimmed.match(/github\\.com\\/([^/]+\\/[^/]+)/i);\n  const repo = urlMatch ? urlMatch[1] : trimmed;\n  const normalized = repo.replace(/\\.git$/i, \"\");\n\n  return /^[^/]+\\/[^/]+$/.test(normalized) ? normalized : null;\n}\n\nexport function getGithubRepoMetrics(repo: string): GithubRepoMetrics | null {\n  const normalized = normalizeRepo(repo);\n  if (!normalized) return null;\n\n  const key = normalized.toLowerCase();\n  if (!(key in cache.data)) {\n    return null;\n  }\n  return cache.data[key] ?? null;\n}\n\nexport function getGithubStars(repo: string): number | null {\n  const metrics = getGithubRepoMetrics(repo);\n  return metrics?.stars ?? null;\n}\n"
  },
  {
    "path": "packages/website/src/lib/build-doc-map.test.ts",
    "content": "import { describe, expect, test } from \"vitest\";\nimport {\n  buildDocMaps,\n  buildTocMap,\n  normalizeRelativePath,\n  resolveDocsMarkdownHref,\n  slugifyFileName,\n  slugifyRelativePath,\n} from \"./build-doc-map\";\n\ndescribe(\"buildDocMaps\", () => {\n  test(\"creates slug records from markdown frontmatter\", () => {\n    const { bySlug } = buildDocMaps({\n      \"/docs/guide/hello-world.md\": `---\nslug: hello-doc\ntitle: Hello World\ndescription: Sample doc\n---\n\n# Hello world`,\n      \"/docs/reference/api.md\": `---\ntitle: API\n---\n\nAPI contents`,\n    });\n\n    expect(bySlug[\"hello-doc\"].relativePath).toBe(\"./guide/hello-world.md\");\n    expect(bySlug[\"api\"].relativePath).toBe(\"./reference/api.md\");\n  });\n});\n\ndescribe(\"buildTocMap\", () => {\n  test(\"normalizes relative file paths\", () => {\n    const tocMap = buildTocMap({\n      Overview: [\n        { path: \"./what-is-lix.md\", label: \"What is Lix?\" },\n        { path: \"/docs/guide/setup.md\", label: \"Setup\" },\n      ],\n    });\n\n    expect(tocMap.get(\"./what-is-lix.md\")?.label).toBe(\"What is Lix?\");\n    expect(tocMap.get(\"./guide/setup.md\")?.label).toBe(\"Setup\");\n  });\n});\n\ndescribe(\"path helpers\", () => {\n  test(\"normalizeRelativePath removes docs prefix\", () => {\n    expect(normalizeRelativePath(\"/docs/guide/setup.md\")).toBe(\n      \"./guide/setup.md\",\n    );\n  });\n\n  test(\"normalizeRelativePath handles website-local legacy docs paths\", () => {\n    expect(normalizeRelativePath(\"/content/docs/guide/setup.md\")).toBe(\n      \"./guide/setup.md\",\n    );\n  });\n\n  test(\"slugifyRelativePath flattens path into url safe slug\", () => {\n    expect(slugifyRelativePath(\"./guide/hello-world.md\")).toBe(\n      \"guide-hello-world\",\n    );\n  });\n\n  test(\"slugifyFileName uses the filename without extension\", () => {\n    expect(slugifyFileName(\"./guide/hello-world.md\")).toBe(\"hello-world\");\n  
});\n});\n\ndescribe(\"resolveDocsMarkdownHref\", () => {\n  const currentDoc = {\n    slug: \"persistence\",\n    content: \"\",\n    relativePath: \"./persistence.md\",\n  };\n  const docsByRelativePath = {\n    \"./backend.md\": {\n      slug: \"backend\",\n      content: \"\",\n      relativePath: \"./backend.md\",\n    },\n    \"./versions.md\": {\n      slug: \"versions\",\n      content: \"\",\n      relativePath: \"./versions.md\",\n    },\n  };\n\n  test(\"resolves portable markdown links to clean docs routes\", () => {\n    expect(\n      resolveDocsMarkdownHref(\"./backend.md\", currentDoc, docsByRelativePath),\n    ).toBe(\"/docs/backend\");\n  });\n\n  test(\"resolves page-url based markdown links to clean docs routes\", () => {\n    expect(\n      resolveDocsMarkdownHref(\n        \"/docs/persistence/backend.md\",\n        currentDoc,\n        docsByRelativePath,\n      ),\n    ).toBe(\"/docs/backend\");\n  });\n\n  test(\"preserves heading hashes\", () => {\n    expect(\n      resolveDocsMarkdownHref(\n        \"./versions.md#merge\",\n        currentDoc,\n        docsByRelativePath,\n      ),\n    ).toBe(\"/docs/versions#merge\");\n  });\n});\n"
  },
  {
    "path": "packages/website/src/lib/build-doc-map.ts",
    "content": "export type TocItem = {\n  path: string;\n  label: string;\n};\n\nexport type Toc = Record<string, TocItem[]>;\n\nexport type DocRecord = {\n  slug: string;\n  /**\n   * Raw markdown including frontmatter.\n   */\n  content: string;\n  relativePath: string;\n};\n\nexport type DocsByRelativePath = Record<string, DocRecord>;\n\n/**\n * Converts file path entries in the table of contents into a quick lookup map.\n *\n * @example\n * buildTocMap({ Overview: [{ path: \"./what-is-lix.md\", label: \"What is Lix?\" }] });\n */\nexport function buildTocMap(toc: Toc): Map<string, TocItem> {\n  const map = new Map<string, TocItem>();\n\n  for (const items of Object.values(toc)) {\n    for (const item of items) {\n      const normalized = normalizeRelativePath(item.path);\n      map.set(normalized, item);\n    }\n  }\n\n  return map;\n}\n\n/**\n * Builds doc lookup maps keyed by slug.\n *\n * @example\n * buildDocMaps({ \"/docs/what-is-lix.md\": rawMarkdown });\n */\nexport function buildDocMaps(entries: Record<string, string>) {\n  return Object.entries(entries).reduce(\n    (acc, [filePath, raw]) => {\n      const relativePath = normalizeRelativePath(filePath);\n      const frontmatter = extractFrontmatter(raw);\n      const frontmatterSlug = frontmatter?.slug?.trim() ?? \"\";\n      const normalizedSlug = frontmatterSlug\n        ? 
slugifyValue(frontmatterSlug)\n        : \"\";\n      const slug = normalizedSlug || slugifyFileName(relativePath);\n\n      const record: DocRecord = {\n        slug,\n        content: raw,\n        relativePath,\n      };\n\n      acc.bySlug[slug] = record;\n\n      return acc;\n    },\n    {\n      bySlug: {} as Record<string, DocRecord>,\n    },\n  );\n}\n\n/**\n * Resolves portable markdown file links to clean docs routes.\n *\n * Markdown files stay portable with links like `./backend.md`, while the site\n * renders them as `/docs/backend`.\n *\n * @example\n * resolveDocsMarkdownHref(\"./backend.md\", { slug: \"persistence\", content: \"\", relativePath: \"./persistence.md\" }, { \"./backend.md\": { slug: \"backend\", content: \"\", relativePath: \"./backend.md\" } })\n */\nexport function resolveDocsMarkdownHref(\n  href: string,\n  currentDoc: DocRecord,\n  docsByRelativePath: DocsByRelativePath,\n) {\n  if (\n    href.startsWith(\"#\") ||\n    /^[a-z][a-z0-9+.-]*:/i.test(href) ||\n    !href.replace(/[?#].*$/, \"\").endsWith(\".md\")\n  ) {\n    return undefined;\n  }\n\n  const hashIndex = href.indexOf(\"#\");\n  const hash = hashIndex === -1 ? \"\" : href.slice(hashIndex);\n  const withoutHash = hashIndex === -1 ? href : href.slice(0, hashIndex);\n  const queryIndex = withoutHash.indexOf(\"?\");\n  const query = queryIndex === -1 ? \"\" : withoutHash.slice(queryIndex);\n  const pathOnly =\n    queryIndex === -1 ? 
withoutHash : withoutHash.slice(0, queryIndex);\n\n  const candidates = buildDocsLinkCandidates(pathOnly, currentDoc);\n\n  for (const candidate of candidates) {\n    const doc = docsByRelativePath[candidate];\n    if (doc) {\n      return `/docs/${doc.slug}${query}${hash}`;\n    }\n  }\n\n  return undefined;\n}\n\nfunction buildDocsLinkCandidates(pathOnly: string, currentDoc: DocRecord) {\n  const candidates = new Set<string>();\n  const currentSlugPrefix = `/docs/${currentDoc.slug}/`;\n\n  if (pathOnly.startsWith(currentSlugPrefix)) {\n    candidates.add(\n      resolveRelativeDocPath(\n        currentDoc.relativePath,\n        pathOnly.slice(currentSlugPrefix.length),\n      ),\n    );\n  }\n\n  if (pathOnly.startsWith(\"/docs/\")) {\n    candidates.add(normalizeRelativePath(pathOnly));\n  } else {\n    candidates.add(resolveRelativeDocPath(currentDoc.relativePath, pathOnly));\n  }\n\n  const fileName = pathOnly.split(\"/\").pop();\n  if (fileName?.endsWith(\".md\")) {\n    candidates.add(resolveRelativeDocPath(currentDoc.relativePath, fileName));\n  }\n\n  return [...candidates];\n}\n\nfunction resolveRelativeDocPath(currentRelativePath: string, hrefPath: string) {\n  const currentPath = currentRelativePath.replace(/^\\.\\//, \"\");\n  const currentDirectory = currentPath.includes(\"/\")\n    ? currentPath.slice(0, currentPath.lastIndexOf(\"/\"))\n    : \".\";\n  const normalized = posixNormalize(`${currentDirectory}/${hrefPath}`);\n  return normalized.startsWith(\".\") ? 
normalized : `./${normalized}`;\n}\n\nfunction posixNormalize(value: string) {\n  const parts: string[] = [];\n  for (const part of value.replace(/\\\\/g, \"/\").split(\"/\")) {\n    if (!part || part === \".\") continue;\n    if (part === \"..\") {\n      parts.pop();\n      continue;\n    }\n    parts.push(part);\n  }\n  return parts.join(\"/\");\n}\n\n/**\n * Normalizes a doc file path to a relative form rooted at docs.\n *\n * @example\n * normalizeRelativePath(\"/docs/guide/setup.md\") // \"./guide/setup.md\"\n */\nexport function normalizeRelativePath(filePath: string) {\n  return filePath\n    .replace(/\\\\/g, \"/\")\n    .replace(/^.*\\/docs\\//, \"./\")\n    .replace(/^docs\\//, \"./\");\n}\n\n/**\n * Produces a URL-safe slug base from a relative file path.\n *\n * @example\n * slugifyRelativePath(\"./guide/hello-world.md\") // \"guide-hello-world\"\n */\nexport function slugifyRelativePath(relativePath: string) {\n  const withoutExt = relativePath.replace(/\\.md$/, \"\");\n  return withoutExt\n    .replace(/^\\.\\//, \"\")\n    .replace(/[\\/\\\\]+/g, \"-\")\n    .toLowerCase()\n    .replace(/[^a-z0-9-]+/g, \"-\")\n    .replace(/^-+|-+$/g, \"\");\n}\n\n/**\n * Produces a URL-safe slug from a single filename.\n *\n * @example\n * slugifyFileName(\"./guide/hello-world.md\") // \"hello-world\"\n */\nexport function slugifyFileName(relativePath: string) {\n  const fileName = relativePath.split(/[\\\\/]/).pop() ?? 
relativePath;\n  const withoutExt = fileName.replace(/\\.md$/, \"\");\n  return slugifyValue(withoutExt);\n}\n\n/**\n * Produces a URL-safe slug from a string value.\n *\n * @example\n * slugifyValue(\"Hello World\") // \"hello-world\"\n */\nexport function slugifyValue(value: string) {\n  return value\n    .toLowerCase()\n    .replace(/[^a-z0-9-]+/g, \"-\")\n    .replace(/^-+|-+$/g, \"\");\n}\n\n/**\n * Extracts a minimal YAML frontmatter object from markdown.\n *\n * Only supports simple `key: value` pairs.\n *\n * @example\n * extractFrontmatter(\"---\\\\ntitle: Hello\\\\n---\\\\n# Title\") // { title: \"Hello\" }\n */\nfunction extractFrontmatter(markdown: string): Record<string, string> | null {\n  const match = markdown.match(/^---\\s*\\n([\\s\\S]*?)\\n---\\s*\\n?/);\n  if (!match) {\n    return null;\n  }\n\n  const lines = match[1].split(\"\\n\");\n  const data: Record<string, string> = {};\n\n  for (const line of lines) {\n    const trimmed = line.trim();\n    if (!trimmed || trimmed.startsWith(\"#\")) {\n      continue;\n    }\n\n    const separatorIndex = trimmed.indexOf(\":\");\n    if (separatorIndex === -1) {\n      continue;\n    }\n\n    const key = trimmed.slice(0, separatorIndex).trim();\n    const value = trimmed.slice(separatorIndex + 1).trim();\n    if (!key) {\n      continue;\n    }\n\n    data[key] = value.replace(/^['\"]|['\"]$/g, \"\");\n  }\n\n  return data;\n}\n"
  },
  {
    "path": "packages/website/src/lib/plugin-sidebar.ts",
    "content": "import type { SidebarSection } from \"../components/docs-layout\";\n\ntype PluginRegistry = {\n  plugins?: Array<{\n    key: string;\n    name?: string;\n  }>;\n};\n\n/**\n * Builds sidebar sections for plugin pages.\n *\n * @example\n * buildPluginSidebarSections(registry)\n */\nexport function buildPluginSidebarSections(\n  registry: PluginRegistry,\n): SidebarSection[] {\n  const plugins = Array.isArray(registry.plugins) ? registry.plugins : [];\n  const items = plugins.map((plugin) => ({\n    label: plugin.name ?? plugin.key,\n    href: `/plugins/${plugin.key}`,\n    relativePath: plugin.key,\n  }));\n\n  return items.length > 0\n    ? [\n        {\n          label: \"Plugins\",\n          items,\n        },\n      ]\n    : [];\n}\n"
  },
  {
    "path": "packages/website/src/lib/seo.test.ts",
    "content": "import { describe, expect, test } from \"vitest\";\nimport { resolveBlogAssetPath, resolveOgImageUrl } from \"../blog/og-image\";\nimport {\n  buildCanonicalUrl,\n  getMarkdownDescription,\n  getMarkdownTitle,\n  splitTitleFromHtml,\n} from \"./seo\";\n\ndescribe(\"buildCanonicalUrl\", () => {\n  test(\"keeps the site root canonical without changing it\", () => {\n    expect(buildCanonicalUrl(\"/\")).toBe(\"https://lix.dev\");\n  });\n\n  test(\"normalizes route paths to no-trailing-slash canonicals\", () => {\n    expect(buildCanonicalUrl(\"/blog\")).toBe(\"https://lix.dev/blog\");\n    expect(buildCanonicalUrl(\"docs/what-is-lix\")).toBe(\n      \"https://lix.dev/docs/what-is-lix\",\n    );\n    expect(buildCanonicalUrl(\"/rfc/002-rewrite-in-rust/\")).toBe(\n      \"https://lix.dev/rfc/002-rewrite-in-rust\",\n    );\n  });\n\n  test(\"keeps file-like paths canonicalized without extra slash\", () => {\n    expect(buildCanonicalUrl(\"/lix-features.svg\")).toBe(\n      \"https://lix.dev/lix-features.svg\",\n    );\n  });\n});\n\ndescribe(\"getMarkdownTitle\", () => {\n  test(\"prefers explicit frontmatter title over og title and markdown h1\", () => {\n    expect(\n      getMarkdownTitle({\n        rawMarkdown: \"# Markdown Title\",\n        frontmatter: {\n          title: \"Frontmatter Title\",\n          \"og:title\": \"OG Title\",\n        },\n      }),\n    ).toBe(\"Frontmatter Title\");\n  });\n\n  test(\"falls back to og title when explicit title is missing\", () => {\n    expect(\n      getMarkdownTitle({\n        rawMarkdown: \"# Markdown Title\",\n        frontmatter: {\n          \"og:title\": \"OG Title\",\n        },\n      }),\n    ).toBe(\"OG Title\");\n  });\n});\n\ndescribe(\"getMarkdownDescription\", () => {\n  test(\"prefers explicit frontmatter description over og description and prose\", () => {\n    expect(\n      getMarkdownDescription({\n        rawMarkdown: \"# Title\\n\\nMarkdown description.\",\n        frontmatter: {\n     
     description: \"Frontmatter description.\",\n          \"og:description\": \"OG description.\",\n        },\n      }),\n    ).toBe(\"Frontmatter description.\");\n  });\n\n  test(\"falls back to og description when explicit description is missing\", () => {\n    expect(\n      getMarkdownDescription({\n        rawMarkdown: \"# Title\\n\\nMarkdown description.\",\n        frontmatter: {\n          \"og:description\": \"OG description.\",\n        },\n      }),\n    ).toBe(\"OG description.\");\n  });\n\n  test(\"extracts clean prose and skips admonitions, code, lists, tables, and images\", () => {\n    const markdown = `# Validation Rules\n\n> [!NOTE]\n> Proposed feature.\n\n\\`\\`\\`ts\nconst nope = true;\n\\`\\`\\`\n\n- list item\n| name | value |\n| --- | --- |\n![Diagram](/example.png)\n\nValidation rules catch **mistakes** before [changes](/docs/change-proposals) ship and keep \\`agents\\` and humans aligned.\n\n## Next\n\nMore content here.\n`;\n\n    expect(getMarkdownDescription({ rawMarkdown: markdown })).toBe(\n      \"Validation rules catch mistakes before changes ship and keep agents and humans aligned.\",\n    );\n  });\n\n  test(\"clamps long fallback descriptions at a safe boundary\", () => {\n    const markdown = `# Long Form\n\nLix tracks semantic changes across structured files so teams can review AI-generated edits, audit what changed, and restore safe states without relying on brittle line-based diffs or app-specific APIs alone.\n`;\n\n    const description = getMarkdownDescription({ rawMarkdown: markdown });\n    expect(description).toBe(\n      \"Lix tracks semantic changes across structured files so teams can review AI-generated edits, audit what changed, and restore safe states without relying on...\",\n    );\n    expect(description?.length).toBeLessThanOrEqual(160);\n  });\n});\n\ndescribe(\"splitTitleFromHtml\", () => {\n  test(\"removes the first h1 from rendered html\", () => {\n    expect(\n      splitTitleFromHtml(\"<h1>RFC &amp; 
Notes</h1><p>Body copy</p>\"),\n    ).toEqual({\n      title: \"RFC & Notes\",\n      body: \"<p>Body copy</p>\",\n    });\n  });\n});\n\ndescribe(\"resolveOgImageUrl\", () => {\n  test(\"resolves blog-local images within the post folder\", () => {\n    expect(\n      resolveOgImageUrl(\n        \"./cover.jpg\",\n        \"002-modeling-a-company-as-a-repository\",\n      ),\n    ).toBe(\n      \"https://lix.dev/blog/002-modeling-a-company-as-a-repository/cover.jpg\",\n    );\n  });\n});\n\ndescribe(\"resolveBlogAssetPath\", () => {\n  test(\"keeps visible blog card images deployment-relative\", () => {\n    expect(\n      resolveBlogAssetPath(\n        \"./cover.jpg\",\n        \"002-modeling-a-company-as-a-repository\",\n      ),\n    ).toBe(\"/blog/002-modeling-a-company-as-a-repository/cover.jpg\");\n  });\n});\n"
  },
  {
    "path": "packages/website/src/lib/seo.ts",
    "content": "const SITE_URL = \"https://lix.dev\";\nconst DEFAULT_OG_IMAGE_PATH = \"/lix-features.svg\";\nconst DEFAULT_OG_IMAGE_ALT = \"Lix\";\nconst DESCRIPTION_MAX_LENGTH = 160;\nconst DESCRIPTION_SENTENCE_MIN_LENGTH = 120;\n\ntype MarkdownMetaInput = {\n  rawMarkdown: string;\n  frontmatter?: Record<string, unknown>;\n};\n\ntype MetaEntry =\n  | { name: string; content: string }\n  | { property: string; content: string };\n\nexport function buildCanonicalUrl(pathname: string): string {\n  if (!pathname || pathname === \"/\") return SITE_URL;\n  const normalized = pathname.startsWith(\"/\") ? pathname : `/${pathname}`;\n  const withoutTrailingSlash =\n    normalized.endsWith(\"/\") && normalized.length > 1\n      ? normalized.slice(0, -1)\n      : normalized;\n  return `${SITE_URL}${withoutTrailingSlash}`;\n}\n\nexport function resolveOgImage(frontmatter?: Record<string, unknown>) {\n  const ogImage =\n    (typeof frontmatter?.[\"og:image\"] === \"string\"\n      ? frontmatter[\"og:image\"]\n      : undefined) ??\n    (typeof frontmatter?.[\"twitter:image\"] === \"string\"\n      ? frontmatter[\"twitter:image\"]\n      : undefined) ??\n    DEFAULT_OG_IMAGE_PATH;\n  const ogImageAlt =\n    (typeof frontmatter?.[\"og:image:alt\"] === \"string\"\n      ? frontmatter[\"og:image:alt\"]\n      : undefined) ??\n    (typeof frontmatter?.[\"twitter:image:alt\"] === \"string\"\n      ? frontmatter[\"twitter:image:alt\"]\n      : undefined) ??\n    DEFAULT_OG_IMAGE_ALT;\n\n  const url = normalizeAssetUrl(ogImage);\n  return { url, alt: ogImageAlt };\n}\n\nexport function getMarkdownTitle(input: MarkdownMetaInput) {\n  const title =\n    typeof input.frontmatter?.title === \"string\"\n      ? input.frontmatter.title\n      : undefined;\n  if (title) {\n    return title.trim() || undefined;\n  }\n\n  const ogTitle =\n    typeof input.frontmatter?.[\"og:title\"] === \"string\"\n      ? 
input.frontmatter[\"og:title\"]\n      : undefined;\n  if (ogTitle) {\n    return ogTitle.trim() || undefined;\n  }\n\n  return extractMarkdownH1(input.rawMarkdown);\n}\n\nexport function getMarkdownDescription(input: MarkdownMetaInput) {\n  const description =\n    typeof input.frontmatter?.description === \"string\"\n      ? input.frontmatter.description\n      : undefined;\n  if (description) {\n    return normalizeDescriptionText(description);\n  }\n\n  const ogDescription =\n    typeof input.frontmatter?.[\"og:description\"] === \"string\"\n      ? input.frontmatter[\"og:description\"]\n      : undefined;\n  if (ogDescription) {\n    return normalizeDescriptionText(ogDescription);\n  }\n\n  return extractMarkdownDescription(input.rawMarkdown);\n}\n\nexport function extractOgMeta(\n  frontmatter?: Record<string, unknown>,\n): MetaEntry[] {\n  if (!frontmatter) return [];\n  return Object.entries(frontmatter)\n    .filter(\n      ([key, value]) =>\n        key.startsWith(\"og:\") &&\n        typeof value === \"string\" &&\n        key !== \"og:image\" &&\n        key !== \"og:image:alt\",\n    )\n    .map(([key, value]) => ({\n      property: key,\n      content: value as string,\n    }));\n}\n\nexport function extractTwitterMeta(\n  frontmatter?: Record<string, unknown>,\n): MetaEntry[] {\n  if (!frontmatter) return [];\n  return Object.entries(frontmatter)\n    .filter(\n      ([key, value]) =>\n        key.startsWith(\"twitter:\") &&\n        typeof value === \"string\" &&\n        key !== \"twitter:image\" &&\n        key !== \"twitter:image:alt\",\n    )\n    .map(([key, value]) => ({\n      name: key,\n      content: value as string,\n    }));\n}\n\nexport function extractMarkdownH1(markdown: string) {\n  if (!markdown) return undefined;\n  const sanitized = stripFrontmatter(markdown);\n  const lines = sanitized.split(/\\r?\\n/);\n  for (const line of lines) {\n    if (line.startsWith(\"# \")) {\n      return line.slice(2).trim() || undefined;\n    }\n  
}\n  return undefined;\n}\n\nexport function extractMarkdownDescription(markdown: string) {\n  if (!markdown) return undefined;\n  const sanitized = stripFrontmatter(markdown);\n  const lines = sanitized.split(/\\r?\\n/);\n  let inCodeFence = false;\n  let collecting = false;\n  const paragraph: string[] = [];\n\n  for (const line of lines) {\n    const trimmed = line.trim();\n\n    if (trimmed.startsWith(\"```\") || trimmed.startsWith(\"~~~\")) {\n      inCodeFence = !inCodeFence;\n      if (collecting) break;\n      continue;\n    }\n    if (inCodeFence) continue;\n\n    if (!trimmed) {\n      if (collecting) break;\n      continue;\n    }\n    if (trimmed.startsWith(\"#\")) continue;\n    if (trimmed.startsWith(\">\")) continue;\n    if (trimmed.startsWith(\"![\")) continue;\n    if (trimmed.startsWith(\"<\")) continue;\n    if (isMarkdownTableLine(trimmed)) continue;\n    if (isMarkdownListLine(trimmed)) {\n      continue;\n    }\n\n    const normalized = normalizeDescriptionText(trimmed);\n    if (!normalized) {\n      if (collecting) break;\n      continue;\n    }\n\n    collecting = true;\n    paragraph.push(normalized);\n  }\n\n  if (!paragraph.length) return undefined;\n  return clampDescription(paragraph.join(\" \"));\n}\n\nexport function splitTitleFromHtml(html: string): {\n  title?: string;\n  body: string;\n} {\n  const match = html.match(/<h1\\b[^>]*>([\\s\\S]*?)<\\/h1>/i);\n  if (!match) {\n    return { body: html };\n  }\n\n  const title = decodeHtmlEntities(stripHtml(match[1])).trim();\n  const body = html.replace(match[0], \"\").trimStart();\n  return { title: title || undefined, body };\n}\n\ntype WebPageJsonLdInput = {\n  title: string;\n  description?: string;\n  canonicalUrl: string;\n  image?: string;\n};\n\nexport function buildWebPageJsonLd(input: WebPageJsonLdInput) {\n  return {\n    \"@context\": \"https://schema.org\",\n    \"@type\": \"WebPage\",\n    name: input.title,\n    description: input.description,\n    url: 
input.canonicalUrl,\n    ...(input.image ? { image: input.image } : {}),\n  };\n}\n\ntype WebSiteJsonLdInput = {\n  title: string;\n  description?: string;\n  canonicalUrl: string;\n};\n\nexport function buildWebSiteJsonLd(input: WebSiteJsonLdInput) {\n  return {\n    \"@context\": \"https://schema.org\",\n    \"@type\": \"WebSite\",\n    name: input.title,\n    description: input.description,\n    url: input.canonicalUrl,\n  };\n}\n\ntype BreadcrumbItem = {\n  name: string;\n  item: string;\n};\n\nexport function buildBreadcrumbJsonLd(items: BreadcrumbItem[]) {\n  return {\n    \"@context\": \"https://schema.org\",\n    \"@type\": \"BreadcrumbList\",\n    itemListElement: items.map((entry, index) => ({\n      \"@type\": \"ListItem\",\n      position: index + 1,\n      name: entry.name,\n      item: entry.item,\n    })),\n  };\n}\n\nfunction normalizeAssetUrl(value: string) {\n  if (value.startsWith(\"http://\") || value.startsWith(\"https://\")) {\n    return value;\n  }\n  if (value.startsWith(\"/\")) {\n    return `${SITE_URL}${value}`;\n  }\n  return `${SITE_URL}/${value}`;\n}\n\nfunction stripFrontmatter(markdown: string) {\n  if (!markdown.startsWith(\"---\")) return markdown;\n  const end = markdown.indexOf(\"\\n---\", 3);\n  if (end === -1) return markdown;\n  return markdown.slice(end + 4).trimStart();\n}\n\nfunction normalizeDescriptionText(value: string) {\n  return value\n    .replace(/!\\[([^\\]]*)\\]\\(([^)]+)\\)/g, \"$1\")\n    .replace(/\\[([^\\]]+)\\]\\(([^)]+)\\)/g, \"$1\")\n    .replace(/`([^`]+)`/g, \"$1\")\n    .replace(/\\*\\*([^*]+)\\*\\*/g, \"$1\")\n    .replace(/__([^_]+)__/g, \"$1\")\n    .replace(/\\*([^*]+)\\*/g, \"$1\")\n    .replace(/_([^_]+)_/g, \"$1\")\n    .replace(/~~([^~]+)~~/g, \"$1\")\n    .replace(/<[^>]+>/g, \" \")\n    .replace(/\\s+/g, \" \")\n    .trim();\n}\n\nfunction clampDescription(value: string) {\n  if (value.length <= DESCRIPTION_MAX_LENGTH) {\n    return value;\n  }\n\n  const withinLimit = value.slice(0, 
DESCRIPTION_MAX_LENGTH);\n  const sentenceBoundary = Math.max(\n    withinLimit.lastIndexOf(\". \"),\n    withinLimit.lastIndexOf(\"! \"),\n    withinLimit.lastIndexOf(\"? \"),\n  );\n  if (sentenceBoundary >= DESCRIPTION_SENTENCE_MIN_LENGTH - 1) {\n    return withinLimit.slice(0, sentenceBoundary + 1).trim();\n  }\n\n  const wordBoundary = withinLimit.lastIndexOf(\" \");\n  if (wordBoundary > 0) {\n    return `${withinLimit.slice(0, wordBoundary).trim()}...`;\n  }\n\n  return `${withinLimit.trim()}...`;\n}\n\nfunction isMarkdownListLine(value: string) {\n  return (\n    value.startsWith(\"- \") ||\n    value.startsWith(\"* \") ||\n    value.startsWith(\"+ \") ||\n    /^\\d+\\.\\s/.test(value)\n  );\n}\n\nfunction isMarkdownTableLine(value: string) {\n  return (\n    value.startsWith(\"|\") ||\n    /^\\|?[\\s:-]+\\|[\\s|:-]*$/.test(value) ||\n    value.includes(\"| ---\")\n  );\n}\n\nfunction stripHtml(input: string): string {\n  return input.replace(/<[^>]*>/g, \"\");\n}\n\nfunction decodeHtmlEntities(input: string): string {\n  return input\n    .replace(/&amp;/g, \"&\")\n    .replace(/&lt;/g, \"<\")\n    .replace(/&gt;/g, \">\")\n    .replace(/&quot;/g, '\"')\n    .replace(/&#39;/g, \"'\");\n}\n"
  },
  {
    "path": "packages/website/src/router.tsx",
    "content": "import { createRouter } from \"@tanstack/react-router\";\n\n// Import the generated route tree\nimport { routeTree } from \"./routeTree.gen\";\n\n// Create a new router instance\nexport const getRouter = () => {\n  const router = createRouter({\n    routeTree,\n    scrollRestoration: true,\n    defaultPreloadStaleTime: 0,\n    trailingSlash: \"never\",\n  });\n\n  return router;\n};\n"
  },
  {
    "path": "packages/website/src/routes/-seo-smoke.test.ts",
    "content": "import { readFileSync } from \"node:fs\";\nimport { parse } from \"@opral/markdown-wc\";\nimport { describe, expect, test } from \"vitest\";\nimport { getBlogDescription, getBlogTitle } from \"../blog/blogMetadata\";\nimport { resolveOgImageUrl } from \"../blog/og-image\";\nimport {\n  getMarkdownDescription,\n  getMarkdownTitle,\n  splitTitleFromHtml,\n} from \"../lib/seo\";\nimport { buildBlogPostHead } from \"./blog/$slug\";\nimport { buildDocsPageHead } from \"./docs/$slugId\";\nimport { buildRfcHead } from \"./rfc/$slug\";\n\nfunction findLink(\n  links: Array<{ rel: string; href: string }> | undefined,\n  rel: string,\n) {\n  return links?.find((entry) => entry.rel === rel)?.href;\n}\n\nfunction findMetaContent(\n  meta:\n    | Array<\n        | { title: string }\n        | { name: string; content: string }\n        | { property: string; content: string }\n      >\n    | undefined,\n  key: string,\n) {\n  const entry = meta?.find(\n    (item) =>\n      (\"name\" in item && item.name === key) ||\n      (\"property\" in item && item.property === key),\n  );\n  if (!entry || !(\"content\" in entry)) {\n    return undefined;\n  }\n  return entry.content;\n}\n\ndescribe(\"SEO route smoke tests\", () => {\n  test(\"docs head stays canonical and strips the rendered h1 once\", async () => {\n    const rawMarkdown = readFileSync(\n      new URL(\"../../../../docs/comparison-to-git.md\", import.meta.url),\n      \"utf8\",\n    );\n    const parsed = await parse(rawMarkdown, {\n      externalLinks: true,\n      assetBaseUrl: \"/docs/comparison-to-git/\",\n    });\n    const rendered = splitTitleFromHtml(parsed.html);\n    const head = buildDocsPageHead({\n      doc: {\n        slug: \"comparison-to-git\",\n        content: rawMarkdown,\n      },\n      frontmatter: parsed.frontmatter,\n      html: rendered.body,\n      pageToc: [],\n      sidebarSections: [],\n      tocEntry: undefined,\n    } as any);\n\n    expect(findLink(head.links, 
\"canonical\")).toBe(\n      \"https://lix.dev/docs/comparison-to-git\",\n    );\n    expect(findMetaContent(head.meta, \"og:title\")).toBe(\n      \"Comparison to Git | Lix Documentation\",\n    );\n    expect(findMetaContent(head.meta, \"twitter:description\")).toBe(\n      \"Git versions text files line-by-line. Lix versions any file format (DOCX, XLSX, CAD, etc.) semantically per entity.\",\n    );\n    expect(rendered.title).toBe(\"Comparison to Git\");\n    expect(rendered.body).not.toContain(\"<h1\");\n  });\n\n  test(\"blog head includes social metadata and keeps cover assets in the post folder\", async () => {\n    const slug = \"002-modeling-a-company-as-a-repository\";\n    const rawMarkdown = readFileSync(\n      new URL(`../../../../blog/${slug}/index.md`, import.meta.url),\n      \"utf8\",\n    );\n    const parsed = await parse(rawMarkdown, {\n      assetBaseUrl: `/blog/${slug}/`,\n    });\n    const rendered = splitTitleFromHtml(parsed.html);\n    const title = getBlogTitle({\n      rawMarkdown,\n      frontmatter: parsed.frontmatter,\n    });\n    const description = getBlogDescription({\n      rawMarkdown,\n      frontmatter: parsed.frontmatter,\n    });\n    const ogImage = resolveOgImageUrl(\n      parsed.frontmatter?.[\"og:image\"] as string,\n      slug,\n    );\n    const head = buildBlogPostHead({\n      post: {\n        slug,\n        title,\n        description,\n        date: parsed.frontmatter?.date as string | undefined,\n        authors: undefined,\n        readingTime: 4,\n        ogImage,\n        ogImageAlt: parsed.frontmatter?.[\"og:image:alt\"] as string | undefined,\n        imports: undefined,\n      },\n      html: rendered.body,\n      rawMarkdown,\n      prevPost: null,\n      nextPost: null,\n    });\n\n    expect(findLink(head.links, \"canonical\")).toBe(\n      `https://lix.dev/blog/${slug}`,\n    );\n    expect(findMetaContent(head.meta, \"og:title\")).toBe(\n      \"Your Company should be a Repository for AI agents | Lix 
Blog\",\n    );\n    expect(findMetaContent(head.meta, \"twitter:image\")).toBe(\n      \"https://lix.dev/blog/002-modeling-a-company-as-a-repository/cover.jpg\",\n    );\n    expect(rendered.title).toBe(\n      \"Your Company should be a Repository for AI agents\",\n    );\n    expect(rendered.body).not.toContain(\"<h1\");\n  });\n\n  test(\"rfc head includes canonical and social metadata with summary-based descriptions\", async () => {\n    const slug = \"001-preprocess-writes\";\n    const rawMarkdown = readFileSync(\n      new URL(`../../../../rfcs/${slug}/index.md`, import.meta.url),\n      \"utf8\",\n    );\n    const parsed = await parse(rawMarkdown, {\n      assetBaseUrl: `/rfc/${slug}/`,\n    });\n    const rendered = splitTitleFromHtml(parsed.html);\n    const title = getMarkdownTitle({\n      rawMarkdown,\n      frontmatter: parsed.frontmatter,\n    });\n    const description = getMarkdownDescription({\n      rawMarkdown,\n      frontmatter: parsed.frontmatter,\n    });\n    const head = buildRfcHead({\n      slug,\n      title: title ?? slug,\n      description: description ?? `Design proposal for ${title ?? slug}.`,\n      date: parsed.frontmatter?.date as string | undefined,\n      html: rendered.body,\n      frontmatter: parsed.frontmatter,\n      prevRfc: null,\n      nextRfc: null,\n    });\n\n    expect(findLink(head.links, \"canonical\")).toBe(\n      `https://lix.dev/rfc/${slug}`,\n    );\n    expect(findMetaContent(head.meta, \"og:title\")).toBe(\n      \"Preprocess writes to avoid vtable overhead | Lix RFCs\",\n    );\n    expect(findMetaContent(head.meta, \"twitter:description\")).toBe(\n      \"Write operations in Lix are slow due to the vtable mechanism crossing the JS ↔ SQLite WASM boundary multiple times per row.\",\n    );\n    expect(rendered.title).toBe(\"Preprocess writes to avoid vtable overhead\");\n    expect(rendered.body).not.toContain(\"<h1\");\n  });\n});\n"
  },
  {
    "path": "packages/website/src/routes/__root.tsx",
    "content": "import {\n  HeadContent,\n  Scripts,\n  createRootRoute,\n  useRouter,\n} from \"@tanstack/react-router\";\nimport React from \"react\";\nimport { PostHogProvider } from \"posthog-js/react\";\nimport appCss from \"../styles.css?url\";\n\nconst GA_MEASUREMENT_ID = \"G-3GEP4W5688\";\nconst posthogOptions = {\n  api_host: import.meta.env.VITE_PUBLIC_POSTHOG_HOST,\n  defaults: \"2025-11-30\",\n} as const;\n\nexport const Route = createRootRoute({\n  head: () => ({\n    meta: [\n      {\n        charSet: \"utf-8\",\n      },\n      {\n        name: \"viewport\",\n        content: \"width=device-width, initial-scale=1\",\n      },\n      {\n        title: \"Lix\",\n      },\n      {\n        name: \"theme-color\",\n        content: \"#ffffff\",\n      },\n      {\n        name: \"robots\",\n        content: \"index, follow\",\n      },\n    ],\n    links: [\n      {\n        rel: \"stylesheet\",\n        href: appCss,\n      },\n      {\n        rel: \"icon\",\n        type: \"image/svg+xml\",\n        href: \"/favicon.svg\",\n      },\n      {\n        rel: \"manifest\",\n        href: \"/manifest.json\",\n      },\n    ],\n    scripts: [\n      {\n        type: \"application/ld+json\",\n        children: JSON.stringify({\n          \"@context\": \"https://schema.org\",\n          \"@type\": \"Organization\",\n          name: \"Lix\",\n          url: \"https://lix.dev\",\n          logo: \"https://lix.dev/icon.svg\",\n          sameAs: [\n            \"https://github.com/opral/lix\",\n            \"https://x.com/lixCCS\",\n            \"https://discord.gg/gdMPPWy57R\",\n          ],\n        }),\n      },\n    ],\n  }),\n\n  notFoundComponent: NotFoundPage,\n  shellComponent: RootDocument,\n});\n\nfunction GoogleAnalytics() {\n  const router = useRouter();\n\n  React.useEffect(() => {\n    if (!import.meta.env.PROD) return;\n    if ((window as any).__gaInitialized) return;\n    (window as any).__gaInitialized = true;\n\n    (window as any).dataLayer = 
(window as any).dataLayer || [];\n    function gtag(...args: unknown[]) {\n      (window as any).dataLayer.push(args);\n    }\n    (window as any).gtag = gtag;\n\n    const script = document.createElement(\"script\");\n    script.async = true;\n    script.src = `https://www.googletagmanager.com/gtag/js?id=${GA_MEASUREMENT_ID}`;\n    document.head.appendChild(script);\n\n    gtag(\"js\", new Date());\n    gtag(\"config\", GA_MEASUREMENT_ID, { send_page_view: false });\n\n    const sendPageView = (location: {\n      href: string;\n      pathname: string;\n      search: string;\n      hash: string;\n    }) => {\n      gtag(\"event\", \"page_view\", {\n        page_location: location.href,\n        page_path: `${location.pathname}${location.search}${location.hash}`,\n        page_title: document.title,\n      });\n    };\n\n    sendPageView(router.history.location);\n    const unsubscribe = router.history.subscribe(({ location }) => {\n      sendPageView(location);\n    });\n\n    return () => {\n      unsubscribe();\n    };\n  }, []);\n\n  return null;\n}\n\nfunction RootDocument({ children }: { children: React.ReactNode }) {\n  // Only render PostHogProvider on the client side to avoid hydration mismatches.\n  // PostHog is a client-side only library and will cause React error #418 if\n  // rendered during SSR.\n  const [isMounted, setIsMounted] = React.useState(false);\n\n  React.useEffect(() => {\n    setIsMounted(true);\n  }, []);\n\n  const appContent =\n    import.meta.env.PROD &&\n    isMounted &&\n    import.meta.env.VITE_PUBLIC_POSTHOG_KEY ? 
(\n      <PostHogProvider\n        apiKey={import.meta.env.VITE_PUBLIC_POSTHOG_KEY}\n        options={posthogOptions}\n      >\n        {children}\n      </PostHogProvider>\n    ) : (\n      children\n    );\n\n  return (\n    <html lang=\"en\">\n      <head>\n        <HeadContent />\n      </head>\n      <body>\n        <GoogleAnalytics />\n        {appContent}\n        <Scripts />\n      </body>\n    </html>\n  );\n}\n\n/**\n * Fallback UI for unmatched routes.\n *\n * @example\n * <NotFoundPage />\n */\nfunction NotFoundPage() {\n  return (\n    <div className=\"mx-auto flex min-h-[60vh] max-w-3xl flex-col justify-center px-6 py-16 text-slate-900\">\n      <p className=\"text-xs font-semibold uppercase tracking-[0.35em] text-slate-500\">\n        404\n      </p>\n      <h1 className=\"mt-4 text-3xl font-semibold leading-tight sm:text-4xl\">\n        Page not found\n      </h1>\n      <p className=\"mt-3 text-base text-slate-600\">\n        The page you are looking for does not exist.\n      </p>\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/website/src/routes/blog/$slug.tsx",
    "content": "import { createFileRoute, Link, redirect } from \"@tanstack/react-router\";\nimport { parse } from \"@opral/markdown-wc\";\nimport { useEffect, useState } from \"react\";\nimport markdownPageCss from \"../../components/markdown-page.style.css?url\";\nimport { getBlogDescription, getBlogTitle } from \"../../blog/blogMetadata\";\nimport { Footer } from \"../../components/footer\";\nimport { Header } from \"../../components/header\";\nimport { PrevNextNav } from \"../../components/prev-next-nav\";\nimport { resolveOgImageUrl } from \"../../blog/og-image\";\nimport {\n  buildCanonicalUrl,\n  buildWebPageJsonLd,\n  resolveOgImage,\n  splitTitleFromHtml,\n} from \"../../lib/seo\";\n\nconst blogMarkdownFiles = import.meta.glob<string>(\n  \"../../../../../blog/**/*.md\",\n  {\n    query: \"?raw\",\n    import: \"default\",\n  },\n);\nconst blogJsonFiles = import.meta.glob<string>(\"../../../../../blog/*.json\", {\n  query: \"?raw\",\n  import: \"default\",\n});\nconst blogRootPrefix = \"../../../../../blog/\";\n\nconst ogImageWidth = 1200;\nconst ogImageHeight = 630;\n\ntype Author = {\n  name: string;\n  role?: string;\n  avatar?: string | null;\n  twitter?: string;\n  github?: string;\n};\n\ntype BlogPrevNext = {\n  slug: string;\n  title: string;\n} | null;\n\nfunction calculateReadingTime(text: string): number {\n  const wordsPerMinute = 200;\n  const words = text.trim().split(/\\s+/).length;\n  return Math.max(1, Math.ceil(words / wordsPerMinute));\n}\n\nasync function loadBlogPost(slug: string) {\n  if (!slug) {\n    throw new Error(\"Missing blog slug\");\n  }\n\n  const authorsContent = await getBlogJson(\"authors.json\");\n  const authorsMap = JSON.parse(authorsContent) as Record<string, Author>;\n\n  const tocContent = await getBlogJson(\"table_of_contents.json\");\n  const toc = JSON.parse(tocContent) as Array<{\n    path: string;\n    slug: string;\n    authors?: string[];\n  }>;\n\n  // Load all posts to get dates from frontmatter for 
sorting\n  const postsWithDates = await Promise.all(\n    toc.map(async (item) => {\n      const relPath = item.path.startsWith(\"./\")\n        ? item.path.slice(2)\n        : item.path;\n      const md = await getBlogMarkdown(relPath);\n      const parsedMd = await parse(md);\n      const date = parsedMd.frontmatter?.date as string | undefined;\n      const title =\n        getBlogTitle({ rawMarkdown: md, frontmatter: parsedMd.frontmatter }) ??\n        item.slug;\n      return { ...item, date, title };\n    }),\n  );\n\n  const sortedToc = [...postsWithDates].sort((a, b) => {\n    if (!a.date && !b.date) return 0;\n    if (!a.date) return 1;\n    if (!b.date) return -1;\n    return new Date(b.date).getTime() - new Date(a.date).getTime();\n  });\n\n  const currentIndex = sortedToc.findIndex((item) => item.slug === slug);\n  const entry = sortedToc[currentIndex];\n  if (!entry) {\n    throw new Error(`Blog post not found: ${slug}`);\n  }\n\n  const prevEntry = currentIndex > 0 ? sortedToc[currentIndex - 1] : null;\n  const nextEntry =\n    currentIndex < sortedToc.length - 1 ? sortedToc[currentIndex + 1] : null;\n\n  const prevPost: BlogPrevNext = prevEntry\n    ? { slug: prevEntry.slug, title: prevEntry.title }\n    : null;\n  const nextPost: BlogPrevNext = nextEntry\n    ? { slug: nextEntry.slug, title: nextEntry.title }\n    : null;\n\n  const authors = entry.authors\n    ?.map((authorId) => authorsMap[authorId])\n    .filter(Boolean);\n\n  const relativePath = entry.path.startsWith(\"./\")\n    ? 
entry.path.slice(2)\n    : entry.path;\n  // Extract folder name from path (e.g., \"001-introducing-lix\" from \"001-introducing-lix/index.md\")\n  const folderName = relativePath.replace(/\\/index\\.md$/, \"\");\n  const rawMarkdown = await getBlogMarkdown(relativePath);\n  const parsed = await parse(rawMarkdown, {\n    assetBaseUrl: `/blog/${folderName}/`,\n  });\n  const rendered = splitTitleFromHtml(parsed.html);\n  const title =\n    getBlogTitle({\n      rawMarkdown,\n      frontmatter: parsed.frontmatter,\n    }) ?? rendered.title;\n  const description = getBlogDescription({\n    rawMarkdown,\n    frontmatter: parsed.frontmatter,\n  });\n\n  // Get date from frontmatter\n  const date = parsed.frontmatter?.date as string | undefined;\n\n  const ogImageOverrideRaw =\n    typeof parsed.frontmatter?.[\"og:image\"] === \"string\"\n      ? parsed.frontmatter[\"og:image\"]\n      : undefined;\n  const ogImageOverride = ogImageOverrideRaw\n    ? resolveOgImageUrl(ogImageOverrideRaw, folderName)\n    : undefined;\n  const ogImageAlt =\n    typeof parsed.frontmatter?.[\"og:image:alt\"] === \"string\"\n      ? parsed.frontmatter[\"og:image:alt\"]\n      : undefined;\n\n  const readingTime = calculateReadingTime(rawMarkdown);\n  const imports = parsed.frontmatter?.imports as string[] | undefined;\n\n  return {\n    post: {\n      slug: entry.slug,\n      title,\n      description,\n      date,\n      authors,\n      readingTime,\n      ogImage: ogImageOverride,\n      ogImageAlt,\n      imports,\n    },\n    html: rendered.body,\n    rawMarkdown,\n    prevPost,\n    nextPost,\n  };\n}\n\ntype BlogPostLoaderData = Awaited<ReturnType<typeof loadBlogPost>>;\n\nexport function buildBlogPostHead(loaderData?: BlogPostLoaderData) {\n  const title = loaderData?.post.title;\n  const description = loaderData?.post.description;\n  const slug = loaderData?.post.slug;\n  const defaultOg = resolveOgImage();\n  const ogImageUrl = loaderData?.post.ogImage ?? 
defaultOg.url;\n  const ogImageAlt =\n    loaderData?.post.ogImageAlt ?? (title ? `${title} cover` : \"Lix blog post\");\n  const canonicalUrl = slug\n    ? buildCanonicalUrl(`/blog/${slug}`)\n    : buildCanonicalUrl(\"/blog\");\n  const meta: Array<\n    | { title: string }\n    | { name: string; content: string }\n    | { property: string; content: string }\n  > = [\n    { title: title ? `${title} | Lix Blog` : \"Lix Blog\" },\n    { property: \"og:url\", content: canonicalUrl },\n    { property: \"og:type\", content: \"article\" },\n    { property: \"og:site_name\", content: \"Lix\" },\n    { property: \"og:locale\", content: \"en_US\" },\n    { property: \"og:image\", content: ogImageUrl },\n    { property: \"og:image:width\", content: String(ogImageWidth) },\n    { property: \"og:image:height\", content: String(ogImageHeight) },\n    { property: \"og:image:alt\", content: ogImageAlt },\n    { name: \"twitter:card\", content: \"summary_large_image\" },\n    { name: \"twitter:image\", content: ogImageUrl },\n    { name: \"twitter:image:alt\", content: ogImageAlt },\n  ];\n\n  if (description) {\n    meta.push(\n      { name: \"description\", content: description },\n      { property: \"og:description\", content: description },\n      { name: \"twitter:description\", content: description },\n    );\n  }\n\n  if (title) {\n    const pageTitle = `${title} | Lix Blog`;\n    meta.push(\n      { property: \"og:title\", content: pageTitle },\n      { name: \"twitter:title\", content: pageTitle },\n    );\n  }\n\n  if (loaderData?.post.date) {\n    meta.push({\n      property: \"article:published_time\",\n      content: loaderData.post.date,\n    });\n  }\n\n  if (loaderData?.post.authors) {\n    loaderData.post.authors.forEach((author) => {\n      meta.push({\n        property: \"article:author\",\n        content: author.name,\n      });\n    });\n  }\n\n  const links = [\n    { rel: \"stylesheet\", href: markdownPageCss },\n    { rel: \"canonical\", href: 
canonicalUrl },\n  ];\n  if (loaderData?.prevPost?.slug) {\n    links.push({\n      rel: \"prev\",\n      href: buildCanonicalUrl(`/blog/${loaderData.prevPost.slug}`),\n    });\n  }\n  if (loaderData?.nextPost?.slug) {\n    links.push({\n      rel: \"next\",\n      href: buildCanonicalUrl(`/blog/${loaderData.nextPost.slug}`),\n    });\n  }\n\n  const pageTitle = title ? `${title} | Lix Blog` : \"Lix Blog\";\n  const jsonLd = buildWebPageJsonLd({\n    title: pageTitle,\n    description,\n    canonicalUrl,\n    image: ogImageUrl,\n  });\n\n  return {\n    meta,\n    links,\n    scripts: slug\n      ? [\n          {\n            type: \"application/ld+json\",\n            children: JSON.stringify({\n              \"@context\": \"https://schema.org\",\n              \"@type\": \"BlogPosting\",\n              headline: title ?? slug,\n              description,\n              url: canonicalUrl,\n              image: ogImageUrl,\n              ...(loaderData?.post.date\n                ? { datePublished: loaderData.post.date }\n                : {}),\n              ...(loaderData?.post.authors\n                ? {\n                    author: loaderData.post.authors.map((author) => ({\n                      \"@type\": \"Person\",\n                      name: author.name,\n                      ...(author.avatar ? { image: author.avatar } : {}),\n                      ...(author.twitter || author.github\n                        ? {\n                            sameAs: [author.twitter, author.github].filter(\n                              (value): value is string => Boolean(value),\n                            ),\n                          }\n                        : {}),\n                    })),\n                  }\n                : {}),\n            }),\n          },\n          {\n            type: \"application/ld+json\",\n            children: JSON.stringify(jsonLd),\n          },\n          ...(loaderData?.post.authors\n            ? 
loaderData.post.authors.map((author) => ({\n                type: \"application/ld+json\",\n                children: JSON.stringify({\n                  \"@context\": \"https://schema.org\",\n                  \"@type\": \"Person\",\n                  name: author.name,\n                  ...(author.avatar ? { image: author.avatar } : {}),\n                  ...(author.twitter || author.github\n                    ? {\n                        sameAs: [author.twitter, author.github].filter(\n                          (value): value is string => Boolean(value),\n                        ),\n                      }\n                    : {}),\n                }),\n              }))\n            : []),\n        ]\n      : [],\n  };\n}\n\nexport const Route = createFileRoute(\"/blog/$slug\")({\n  loader: async ({ params }) => {\n    try {\n      return await loadBlogPost(params.slug);\n    } catch {\n      throw redirect({ to: \"/blog\" });\n    }\n  },\n  head: ({ loaderData }) => buildBlogPostHead(loaderData),\n  component: BlogPostPage,\n});\n\nfunction BlogPostPage() {\n  const { post, html, prevPost, nextPost } = Route.useLoaderData();\n  const [copied, setCopied] = useState(false);\n\n  useEffect(() => {\n    if (!post.imports || post.imports.length === 0) return;\n    post.imports.forEach((url) => {\n      import(/* @vite-ignore */ url).catch((err) => {\n        console.error(`Failed to load web component from ${url}:`, err);\n      });\n    });\n  }, [post.imports]);\n\n  useEffect(() => {\n    // @ts-expect-error - JS-only module\n    import(\"../../components/markdown-page.interactive.js\");\n  }, [html]);\n\n  const copyUrl = async () => {\n    try {\n      await navigator.clipboard.writeText(window.location.href);\n      setCopied(true);\n      setTimeout(() => setCopied(false), 2000);\n    } catch (err) {\n      console.error(\"Failed to copy URL:\", err);\n    }\n  };\n\n  return (\n    <div className=\"flex min-h-screen flex-col bg-white 
text-slate-900\">\n      <Header />\n      <main className=\"flex-1\">\n        <header className=\"bg-white\">\n          <div className=\"mx-auto max-w-4xl px-6 pt-12 pb-8\">\n            <nav className=\"mb-8 flex justify-center\">\n              <Link\n                to=\"/blog\"\n                className=\"inline-flex items-center gap-1.5 text-sm text-slate-500 hover:text-slate-700 transition-colors\"\n              >\n                <svg\n                  className=\"h-4 w-4\"\n                  viewBox=\"0 0 24 24\"\n                  fill=\"none\"\n                  stroke=\"currentColor\"\n                  strokeWidth=\"2\"\n                  strokeLinecap=\"round\"\n                  strokeLinejoin=\"round\"\n                >\n                  <path d=\"M19 12H5M12 19l-7-7 7-7\" />\n                </svg>\n                Blog\n              </Link>\n            </nav>\n\n            <h1 className=\"text-2xl md:text-3xl lg:text-4xl font-bold text-slate-900 text-center mb-8\">\n              {post.title}\n            </h1>\n\n            {post.authors && post.authors.length > 0 && (\n              <div className=\"flex justify-center gap-6 mb-8\">\n                {post.authors.map((author, index) => (\n                  <div key={index} className=\"flex items-center gap-3\">\n                    {author.avatar ? 
(\n                      <img\n                        src={author.avatar}\n                        alt={author.name}\n                        className=\"w-10 h-10 rounded-full object-cover\"\n                      />\n                    ) : (\n                      <div className=\"w-10 h-10 rounded-full bg-slate-300 flex items-center justify-center text-slate-600 font-medium\">\n                        {author.name.charAt(0)}\n                      </div>\n                    )}\n                    <span className=\"font-medium text-slate-900\">\n                      {author.name}\n                    </span>\n                    {author.twitter && (\n                      <a\n                        href={author.twitter}\n                        target=\"_blank\"\n                        rel=\"noopener noreferrer\"\n                        className=\"text-slate-400 hover:text-slate-600 transition-colors\"\n                        aria-label={`${author.name} on X`}\n                      >\n                        <svg\n                          className=\"w-4 h-4\"\n                          viewBox=\"0 0 24 24\"\n                          fill=\"currentColor\"\n                        >\n                          <path d=\"M18.244 2.25h3.308l-7.227 8.26 8.502 11.24H16.17l-5.214-6.817L4.99 21.75H1.68l7.73-8.835L1.254 2.25H8.08l4.713 6.231zm-1.161 17.52h1.833L7.084 4.126H5.117z\" />\n                        </svg>\n                      </a>\n                    )}\n                    {author.github && (\n                      <a\n                        href={author.github}\n                        target=\"_blank\"\n                        rel=\"noopener noreferrer\"\n                        className=\"text-slate-400 hover:text-slate-600 transition-colors\"\n                        aria-label={`${author.name} on GitHub`}\n                      >\n                        <svg\n                          className=\"w-4 h-4\"\n                          
viewBox=\"0 0 24 24\"\n                          fill=\"currentColor\"\n                        >\n                          <path d=\"M12 0c-6.626 0-12 5.373-12 12 0 5.302 3.438 9.8 8.207 11.387.599.111.793-.261.793-.577v-2.234c-3.338.726-4.033-1.416-4.033-1.416-.546-1.387-1.333-1.756-1.333-1.756-1.089-.745.083-.729.083-.729 1.205.084 1.839 1.237 1.839 1.237 1.07 1.834 2.807 1.304 3.492.997.107-.775.418-1.305.762-1.604-2.665-.305-5.467-1.334-5.467-5.931 0-1.311.469-2.381 1.236-3.221-.124-.303-.535-1.524.117-3.176 0 0 1.008-.322 3.301 1.23.957-.266 1.983-.399 3.003-.404 1.02.005 2.047.138 3.006.404 2.291-1.552 3.297-1.23 3.297-1.23.653 1.653.242 2.874.118 3.176.77.84 1.235 1.911 1.235 3.221 0 4.609-2.807 5.624-5.479 5.921.43.372.823 1.102.823 2.222v3.293c0 .319.192.694.801.576 4.765-1.589 8.199-6.086 8.199-11.386 0-6.627-5.373-12-12-12z\" />\n                        </svg>\n                      </a>\n                    )}\n                  </div>\n                ))}\n              </div>\n            )}\n\n            <div className=\"flex items-center justify-between text-sm text-slate-500 pt-6 border-t border-slate-200\">\n              <div className=\"flex items-center gap-4\">\n                <span className=\"flex items-center gap-1.5\">\n                  <svg\n                    className=\"h-4 w-4\"\n                    viewBox=\"0 0 24 24\"\n                    fill=\"none\"\n                    stroke=\"currentColor\"\n                    strokeWidth=\"2\"\n                    strokeLinecap=\"round\"\n                    strokeLinejoin=\"round\"\n                  >\n                    <circle cx=\"12\" cy=\"12\" r=\"10\" />\n                    <polyline points=\"12 6 12 12 16 14\" />\n                  </svg>\n                  {post.readingTime} min read\n                </span>\n                <button\n                  onClick={copyUrl}\n                  className=\"flex items-center gap-1.5 text-cyan-600 hover:text-cyan-700 
transition-colors\"\n                >\n                  <svg\n                    className=\"h-4 w-4\"\n                    viewBox=\"0 0 24 24\"\n                    fill=\"none\"\n                    stroke=\"currentColor\"\n                    strokeWidth=\"2\"\n                    strokeLinecap=\"round\"\n                    strokeLinejoin=\"round\"\n                  >\n                    <path d=\"M10 13a5 5 0 0 0 7.54.54l3-3a5 5 0 0 0-7.07-7.07l-1.72 1.71\" />\n                    <path d=\"M14 11a5 5 0 0 0-7.54-.54l-3 3a5 5 0 0 0 7.07 7.07l1.71-1.71\" />\n                  </svg>\n                  {copied ? \"Copied!\" : \"Copy URL\"}\n                </button>\n              </div>\n              {post.date && (\n                <time className=\"text-slate-500\">{formatDate(post.date)}</time>\n              )}\n            </div>\n          </div>\n        </header>\n\n        <div className=\"mx-auto max-w-4xl px-6 py-12\">\n          <article\n            className=\"markdown-wc-body\"\n            dangerouslySetInnerHTML={{ __html: html }}\n          />\n\n          <form\n            action=\"https://buttondown.com/api/emails/embed-subscribe/lix-blog\"\n            method=\"post\"\n            target=\"_blank\"\n            className=\"embeddable-buttondown-form mt-16 border-t border-slate-200 pt-8\"\n          >\n            <p className=\"mb-3 text-sm text-slate-500\">\n              Get notified about new blog posts\n            </p>\n            <div className=\"flex gap-2\">\n              <label htmlFor=\"bd-email\" className=\"sr-only\">\n                Enter your email\n              </label>\n              <input\n                type=\"email\"\n                name=\"email\"\n                id=\"bd-email\"\n                placeholder=\"your@email.com\"\n                required\n                className=\"flex-1 rounded-md border border-slate-300 px-4 py-2 text-sm focus:border-transparent focus:outline-none focus:ring-2 
focus:ring-slate-900\"\n              />\n              <input\n                type=\"submit\"\n                value=\"Subscribe\"\n                className=\"rounded-md border border-slate-300 px-4 py-2 text-sm font-medium text-slate-900 transition-colors hover:bg-slate-50\"\n              />\n            </div>\n          </form>\n\n          <PrevNextNav\n            prev={prevPost}\n            next={nextPost}\n            basePath=\"/blog\"\n            prevLabel=\"Previous post\"\n            nextLabel=\"Next post\"\n            className=\"mt-8\"\n          />\n        </div>\n      </main>\n      <Footer />\n    </div>\n  );\n}\n\nfunction getBlogJson(filename: string): Promise<string> {\n  const loader = blogJsonFiles[`${blogRootPrefix}${filename}`];\n  if (!loader) {\n    throw new Error(`Missing blog file: ${filename}`);\n  }\n  return loader();\n}\n\nfunction getBlogMarkdown(relativePath: string): Promise<string> {\n  const normalized = relativePath.replace(/^[./]+/, \"\");\n  const loader = blogMarkdownFiles[`${blogRootPrefix}${normalized}`];\n  if (!loader) {\n    throw new Error(`Missing blog markdown: ${relativePath}`);\n  }\n  return loader();\n}\n\nfunction formatDate(dateString: string): string {\n  try {\n    const date = new Date(dateString);\n    return date.toLocaleDateString(\"en-US\", {\n      year: \"numeric\",\n      month: \"long\",\n      day: \"numeric\",\n    });\n  } catch {\n    return dateString;\n  }\n}\n"
  },
  {
    "path": "packages/website/src/routes/blog/index.tsx",
    "content": "import { createFileRoute, Link } from \"@tanstack/react-router\";\nimport { parse } from \"@opral/markdown-wc\";\nimport { getBlogDescription, getBlogTitle } from \"../../blog/blogMetadata\";\nimport { resolveBlogAssetPath } from \"../../blog/og-image\";\nimport { Footer } from \"../../components/footer\";\nimport { Header } from \"../../components/header\";\nimport { buildCanonicalUrl, resolveOgImage } from \"../../lib/seo\";\n\ntype Author = {\n  name: string;\n  avatar?: string | null;\n};\n\nconst blogMarkdownFiles = import.meta.glob<string>(\n  \"../../../../../blog/**/*.md\",\n  {\n    query: \"?raw\",\n    import: \"default\",\n  },\n);\nconst blogJsonFiles = import.meta.glob<string>(\"../../../../../blog/*.json\", {\n  query: \"?raw\",\n  import: \"default\",\n});\nconst blogRootPrefix = \"../../../../../blog/\";\n\nasync function loadBlogIndex() {\n  const authorsContent = await getBlogJson(\"authors.json\");\n  const authorsMap = JSON.parse(authorsContent) as Record<\n    string,\n    { name: string; avatar?: string | null }\n  >;\n\n  const tocContent = await getBlogJson(\"table_of_contents.json\");\n  const toc = JSON.parse(tocContent) as Array<{\n    path: string;\n    slug: string;\n    authors?: string[];\n  }>;\n\n  const posts = await Promise.all(\n    toc.map(async (item) => {\n      const relativePath = item.path.startsWith(\"./\")\n        ? 
item.path.slice(2)\n        : item.path;\n      const rawMarkdown = await getBlogMarkdown(relativePath);\n      const parsed = await parse(rawMarkdown);\n      const title = getBlogTitle({\n        rawMarkdown,\n        frontmatter: parsed.frontmatter,\n      });\n      const description = getBlogDescription({\n        rawMarkdown,\n        frontmatter: parsed.frontmatter,\n      });\n\n      const authors = item.authors\n        ?.map((authorId) => authorsMap[authorId])\n        .filter(Boolean) as Author[] | undefined;\n\n      // Extract folder name from path (e.g., \"001-introducing-lix\" from \"001-introducing-lix/index.md\")\n      const folderName = relativePath.replace(/\\/index\\.md$/, \"\");\n      const ogImageRaw =\n        typeof parsed.frontmatter?.[\"og:image\"] === \"string\"\n          ? parsed.frontmatter[\"og:image\"]\n          : undefined;\n      const ogImage = ogImageRaw\n        ? resolveBlogAssetPath(ogImageRaw, folderName)\n        : undefined;\n      const ogImageAlt =\n        (typeof parsed.frontmatter?.[\"og:image:alt\"] === \"string\"\n          ? parsed.frontmatter[\"og:image:alt\"]\n          : undefined) ??\n        (typeof parsed.frontmatter?.[\"twitter:image:alt\"] === \"string\"\n          ? parsed.frontmatter[\"twitter:image:alt\"]\n          : undefined) ??\n        (title ? 
`${title} cover image` : undefined);\n\n      // Get date from frontmatter\n      const date = parsed.frontmatter?.date as string | undefined;\n\n      return {\n        slug: item.slug,\n        title,\n        description,\n        date,\n        authors,\n        ogImage,\n        ogImageAlt,\n      };\n    }),\n  );\n\n  posts.sort((a, b) => {\n    if (!a.date && !b.date) return 0;\n    if (!a.date) return 1;\n    if (!b.date) return -1;\n    return new Date(b.date).getTime() - new Date(a.date).getTime();\n  });\n\n  return { posts };\n}\n\nexport const Route = createFileRoute(\"/blog/\")({\n  loader: async () => {\n    return await loadBlogIndex();\n  },\n  head: () => {\n    const canonicalUrl = buildCanonicalUrl(\"/blog\");\n    const description =\n      \"Product updates, architecture notes, and experiments from building Lix for AI agents and structured file workflows.\";\n    const ogImage = resolveOgImage();\n    const title =\n      \"Lix Blog | Product updates, architecture notes, and AI workflow ideas\";\n\n    return {\n      links: [{ rel: \"canonical\", href: canonicalUrl }],\n      scripts: [\n        {\n          type: \"application/ld+json\",\n          children: JSON.stringify({\n            \"@context\": \"https://schema.org\",\n            \"@type\": \"Blog\",\n            name: \"Blog | Lix\",\n            description,\n            url: canonicalUrl,\n          }),\n        },\n      ],\n      meta: [\n        { title },\n        { name: \"description\", content: description },\n        { property: \"og:title\", content: title },\n        { property: \"og:description\", content: description },\n        { property: \"og:url\", content: canonicalUrl },\n        { property: \"og:type\", content: \"website\" },\n        { property: \"og:site_name\", content: \"Lix\" },\n        { property: \"og:locale\", content: \"en_US\" },\n        { property: \"og:image\", content: ogImage.url },\n        { property: \"og:image:alt\", content: ogImage.alt 
},\n        { name: \"twitter:card\", content: \"summary_large_image\" },\n        { name: \"twitter:image\", content: ogImage.url },\n        { name: \"twitter:image:alt\", content: ogImage.alt },\n        { name: \"twitter:title\", content: title },\n        { name: \"twitter:description\", content: description },\n      ],\n    };\n  },\n  component: BlogIndexPage,\n});\n\nfunction BlogIndexPage() {\n  const { posts } = Route.useLoaderData();\n\n  return (\n    <div className=\"flex min-h-screen flex-col bg-white text-slate-900\">\n      <Header />\n      <main className=\"flex-1\">\n        <div className=\"mx-auto max-w-4xl px-6 py-16\">\n          <h1 className=\"mb-6 text-4xl font-bold tracking-tight text-slate-900\">\n            Blog\n          </h1>\n\n          <form\n            action=\"https://buttondown.com/api/emails/embed-subscribe/lix-blog\"\n            method=\"post\"\n            target=\"_blank\"\n            className=\"embeddable-buttondown-form mb-12\"\n          >\n            <p className=\"mb-3 text-sm text-slate-500\">\n              Get notified about new blog posts\n            </p>\n            <div className=\"flex gap-2\">\n              <label htmlFor=\"bd-email\" className=\"sr-only\">\n                Enter your email\n              </label>\n              <input\n                type=\"email\"\n                name=\"email\"\n                id=\"bd-email\"\n                placeholder=\"your@email.com\"\n                required\n                className=\"flex-1 rounded-md border border-slate-300 px-4 py-2 text-sm focus:border-transparent focus:outline-none focus:ring-2 focus:ring-slate-900\"\n              />\n              <input\n                type=\"submit\"\n                value=\"Subscribe\"\n                className=\"rounded-md border border-slate-300 px-4 py-2 text-sm font-medium text-slate-900 transition-colors hover:bg-slate-50\"\n              />\n            </div>\n          </form>\n\n          <div 
className=\"flex flex-col gap-6\">\n            {posts.map((post) => (\n              <Link\n                key={post.slug}\n                to=\"/blog/$slug\"\n                params={{ slug: post.slug }}\n                className=\"group -mx-6 block rounded-xl p-6 transition-colors hover:bg-slate-50\"\n              >\n                <article className=\"flex gap-6\">\n                  {post.ogImage && (\n                    <div className=\"h-24 w-40 flex-shrink-0 overflow-hidden rounded-lg bg-slate-100\">\n                      <img\n                        src={post.ogImage}\n                        alt={\n                          post.ogImageAlt ??\n                          `${post.title ?? post.slug} cover image`\n                        }\n                        className=\"h-full w-full object-cover\"\n                      />\n                    </div>\n                  )}\n                  <div className=\"min-w-0 flex-1\">\n                    <h2 className=\"text-xl font-semibold text-slate-900 transition-colors group-hover:text-slate-700\">\n                      {post.title ?? post.slug}\n                    </h2>\n                    {post.description && (\n                      <p className=\"mt-2 line-clamp-2 text-sm text-slate-600\">\n                        {post.description}\n                      </p>\n                    )}\n                    <div className=\"mt-3 flex items-center gap-2 text-sm text-slate-500\">\n                      {post.authors && post.authors.length > 0 && (\n                        <>\n                          {post.authors.map((author, index) => (\n                            <div\n                              key={index}\n                              className=\"flex items-center gap-2\"\n                            >\n                              {author.avatar ? 
(\n                                <img\n                                  src={author.avatar}\n                                  alt={author.name}\n                                  className=\"h-5 w-5 rounded-full object-cover\"\n                                />\n                              ) : (\n                                <div className=\"flex h-5 w-5 items-center justify-center rounded-full bg-slate-300 text-xs font-medium text-slate-600\">\n                                  {author.name.charAt(0)}\n                                </div>\n                              )}\n                              <span>{author.name}</span>\n                            </div>\n                          ))}\n                          {post.date && (\n                            <span className=\"text-slate-300\">·</span>\n                          )}\n                        </>\n                      )}\n                      {post.date && <time>{formatDate(post.date)}</time>}\n                    </div>\n                  </div>\n                </article>\n              </Link>\n            ))}\n          </div>\n        </div>\n      </main>\n      <Footer />\n    </div>\n  );\n}\n\nfunction formatDate(dateString: string): string {\n  try {\n    const date = new Date(dateString);\n    return date.toLocaleDateString(\"en-US\", {\n      year: \"numeric\",\n      month: \"long\",\n      day: \"numeric\",\n    });\n  } catch {\n    return dateString;\n  }\n}\n\nfunction getBlogJson(filename: string): Promise<string> {\n  const loader = blogJsonFiles[`${blogRootPrefix}${filename}`];\n  if (!loader) {\n    throw new Error(`Missing blog file: ${filename}`);\n  }\n  return loader();\n}\n\nfunction getBlogMarkdown(relativePath: string): Promise<string> {\n  const normalized = relativePath.replace(/^[./]+/, \"\");\n  const loader = blogMarkdownFiles[`${blogRootPrefix}${normalized}`];\n  if (!loader) {\n    throw new Error(`Missing blog markdown: ${relativePath}`);\n  }\n 
 return loader();\n}\n"
  },
  {
    "path": "packages/website/src/routes/docs/$slugId.tsx",
    "content": "import { createFileRoute, notFound } from \"@tanstack/react-router\";\nimport {\n  DocsLayout,\n  type PageTocItem,\n  type SidebarSection,\n} from \"../../components/docs-layout\";\nimport { MarkdownPage } from \"../../components/markdown-page\";\nimport tableOfContents from \"../../../../../docs/table_of_contents.json\";\nimport { DocsPrevNext } from \"../../components/docs-prev-next\";\nimport {\n  buildDocMaps,\n  buildTocMap,\n  normalizeRelativePath,\n  resolveDocsMarkdownHref,\n  type Toc,\n  type TocItem,\n} from \"../../lib/build-doc-map\";\nimport {\n  buildCanonicalUrl,\n  buildBreadcrumbJsonLd,\n  buildWebPageJsonLd,\n  extractOgMeta,\n  extractTwitterMeta,\n  getMarkdownDescription,\n  getMarkdownTitle,\n  resolveOgImage,\n} from \"../../lib/seo\";\nimport { parse } from \"@opral/markdown-wc\";\nimport markdownPageCss from \"../../components/markdown-page.style.css?url\";\n\nconst docs = import.meta.glob<string>(\"../../../../../docs/**/*.md\", {\n  eager: true,\n  import: \"default\",\n  query: \"?raw\",\n});\n\nconst tocMap = buildTocMap(tableOfContents as Toc);\nconst { bySlug: docsBySlug } = buildDocMaps(docs);\nconst docsByRelativePath = Object.values(docsBySlug).reduce(\n  (acc, doc) => {\n    acc[doc.relativePath] = doc;\n    return acc;\n  },\n  {} as Record<string, (typeof docsBySlug)[string]>,\n);\n\n/**\n * Builds a list of heading links from rendered HTML for the \"On this page\" TOC.\n *\n * @example\n * buildPageToc('<h2 id=\"intro\">Intro</h2>') // [{ id: \"intro\", label: \"Intro\", level: 2 }]\n */\nfunction buildPageToc(html: string): PageTocItem[] {\n  const headings: PageTocItem[] = [];\n  const regex = /<h2\\b[^>]*id=\"([^\"]+)\"[^>]*>([\\s\\S]*?)<\\/h2>/g;\n  let match: RegExpExecArray | null;\n\n  while ((match = regex.exec(html)) !== null) {\n    const id = match[1];\n    const label = decodeHtmlEntities(stripHtml(match[2])).trim();\n    if (!id || !label) continue;\n    headings.push({ id, label, level: 2 });\n  
}\n\n  return headings;\n}\n\n/**\n * Removes HTML tags from a string.\n *\n * @example\n * stripHtml(\"<strong>Title</strong>\") // \"Title\"\n */\nfunction stripHtml(input: string): string {\n  return input.replace(/<[^>]*>/g, \"\");\n}\n\n/**\n * Decodes a minimal set of HTML entities for heading labels.\n *\n * @example\n * decodeHtmlEntities(\"Foo &amp; Bar\") // \"Foo & Bar\"\n */\nfunction decodeHtmlEntities(input: string): string {\n  return input\n    .replace(/&amp;/g, \"&\")\n    .replace(/&lt;/g, \"<\")\n    .replace(/&gt;/g, \">\")\n    .replace(/&quot;/g, '\"')\n    .replace(/&#39;/g, \"'\");\n}\n\nfunction buildSidebarSections(toc: Toc): SidebarSection[] {\n  return Object.entries(toc)\n    .map(([label, sectionItems]) => {\n      const items = sectionItems\n        .map((item) => {\n          const relativePath = normalizeRelativePath(item.path);\n          const doc = docsByRelativePath[relativePath];\n          if (!doc) {\n            return null;\n          }\n\n          return {\n            label: item.label,\n            href: `/docs/${doc.slug}`,\n            relativePath,\n          };\n        })\n        .filter((value): value is NonNullable<typeof value> => Boolean(value));\n\n      return { label, items };\n    })\n    .filter((section) => section.items.length > 0);\n}\n\nfunction buildDocsNavRoutes(toc: Toc) {\n  return Object.values(toc)\n    .flatMap((items) =>\n      items.map((item) => {\n        const relativePath = normalizeRelativePath(item.path);\n        const doc = docsByRelativePath[relativePath];\n        return {\n          slug: doc?.slug ?? 
\"\",\n          title: item.label,\n        };\n      }),\n    )\n    .filter((item) => item.slug);\n}\n\ntype DocsLoaderData = {\n  doc: (typeof docsBySlug)[string];\n  tocEntry: TocItem | undefined;\n  sidebarSections: SidebarSection[];\n  html: string;\n  frontmatter: Record<string, unknown> & { imports?: string[] };\n  pageToc: PageTocItem[];\n};\n\nexport function buildDocsPageHead(loaderData?: DocsLoaderData) {\n  const data = loaderData as DocsLoaderData | undefined;\n  const frontmatter = data?.frontmatter;\n  const rawMarkdown = data?.doc?.content ?? \"\";\n  const title = getMarkdownTitle({ rawMarkdown, frontmatter });\n  const description = getMarkdownDescription({ rawMarkdown, frontmatter });\n  const canonicalUrl = data?.doc?.slug\n    ? buildCanonicalUrl(`/docs/${data.doc.slug}`)\n    : buildCanonicalUrl(\"/docs/what-is-lix\");\n  const ogImage = resolveOgImage(frontmatter);\n  const ogMeta = extractOgMeta(frontmatter);\n  const twitterMeta = extractTwitterMeta(frontmatter);\n  const pageTitle = title\n    ? `${title} | Lix Documentation`\n    : \"Lix Documentation\";\n  const jsonLd = buildWebPageJsonLd({\n    title: pageTitle,\n    description,\n    canonicalUrl,\n    image: ogImage.url,\n  });\n  const breadcrumbJsonLd = buildBreadcrumbJsonLd(\n    [\n      { name: \"Lix\", item: buildCanonicalUrl(\"/\") },\n      { name: \"Documentation\", item: buildCanonicalUrl(\"/docs/what-is-lix\") },\n      title ? 
{ name: title, item: canonicalUrl } : undefined,\n    ].filter(Boolean) as Array<{ name: string; item: string }>,\n  );\n  const meta: Array<\n    | { title: string }\n    | { name: string; content: string }\n    | { property: string; content: string }\n  > = [\n    {\n      title: pageTitle,\n    },\n    { property: \"og:url\", content: canonicalUrl },\n    { property: \"og:type\", content: \"article\" },\n    { property: \"og:site_name\", content: \"Lix\" },\n    { property: \"og:locale\", content: \"en_US\" },\n    { property: \"og:image\", content: ogImage.url },\n    { property: \"og:image:alt\", content: ogImage.alt },\n    { name: \"twitter:card\", content: \"summary_large_image\" },\n    { name: \"twitter:image\", content: ogImage.url },\n    { name: \"twitter:image:alt\", content: ogImage.alt },\n  ];\n\n  if (description) {\n    meta.push(\n      { name: \"description\", content: description },\n      { property: \"og:description\", content: description },\n      { name: \"twitter:description\", content: description },\n    );\n  }\n\n  if (title) {\n    meta.push(\n      { property: \"og:title\", content: pageTitle },\n      { name: \"twitter:title\", content: pageTitle },\n    );\n  }\n\n  return {\n    meta: [...meta, ...ogMeta, ...twitterMeta],\n    links: [\n      {\n        rel: \"stylesheet\",\n        href: markdownPageCss,\n      },\n      {\n        rel: \"canonical\",\n        href: canonicalUrl,\n      },\n    ],\n    scripts: [\n      {\n        type: \"application/ld+json\",\n        children: JSON.stringify(jsonLd),\n      },\n      {\n        type: \"application/ld+json\",\n        children: JSON.stringify(breadcrumbJsonLd),\n      },\n    ],\n  };\n}\n\nexport const Route = createFileRoute(\"/docs/$slugId\")({\n  head: ({ loaderData }) => buildDocsPageHead(loaderData),\n  loader: (async ({ params }: { params: { slugId: string } }) => {\n    const doc = docsBySlug[params.slugId];\n\n    if (!doc) {\n      throw notFound();\n    }\n\n    
const tocEntry = tocMap.get(doc.relativePath);\n    const parsedMarkdown = await parse(doc.content, {\n      externalLinks: true,\n      assetBaseUrl: `/docs/${doc.slug}/`,\n      resolveHref: (href) =>\n        resolveDocsMarkdownHref(href, doc, docsByRelativePath),\n    });\n    const html = parsedMarkdown.html;\n    const pageToc = buildPageToc(html);\n\n    return {\n      doc,\n      tocEntry,\n      sidebarSections: buildSidebarSections(tableOfContents as Toc),\n      html,\n      frontmatter: parsedMarkdown.frontmatter,\n      pageToc,\n    };\n  }) as any,\n  component: DocsPage,\n});\n\nfunction DocsPage() {\n  const { doc, sidebarSections, html, frontmatter, pageToc } =\n    Route.useLoaderData() as DocsLoaderData;\n  const navRoutes = buildDocsNavRoutes(tableOfContents as Toc);\n  const editUrl = `https://github.com/opral/lix/blob/main/docs/${doc.relativePath.replace(\n    /^\\.\\//,\n    \"\",\n  )}`;\n\n  return (\n    <DocsLayout\n      sidebarSections={sidebarSections}\n      activeRelativePath={doc.relativePath}\n      pageToc={pageToc}\n    >\n      <MarkdownPage\n        html={html}\n        markdown={doc.content}\n        imports={(frontmatter.imports as string[] | undefined) ?? 
undefined}\n      />\n      <div className=\"mt-12\">\n        <a\n          href={editUrl}\n          target=\"_blank\"\n          rel=\"noreferrer\"\n          className=\"inline-flex items-center gap-2 text-sm text-slate-500 hover:text-slate-700\"\n        >\n          <svg\n            className=\"h-4 w-4\"\n            fill=\"none\"\n            viewBox=\"0 0 24 24\"\n            stroke=\"currentColor\"\n            strokeWidth={2}\n            aria-hidden=\"true\"\n          >\n            <path\n              strokeLinecap=\"round\"\n              strokeLinejoin=\"round\"\n              d=\"M11 5H6a2 2 0 00-2 2v11a2 2 0 002 2h11a2 2 0 002-2v-5m-1.414-9.414a2 2 0 112.828 2.828L11.828 15H9v-2.828l8.586-8.586z\"\n            />\n          </svg>\n          Edit this page on GitHub\n        </a>\n      </div>\n      <DocsPrevNext currentSlug={doc.slug} routes={navRoutes} />\n    </DocsLayout>\n  );\n}\n"
  },
  {
    "path": "packages/website/src/routes/docs/index.tsx",
    "content": "import { createFileRoute, notFound, redirect } from \"@tanstack/react-router\";\nimport tableOfContents from \"../../../../../docs/table_of_contents.json\";\nimport {\n  buildDocMaps,\n  normalizeRelativePath,\n  type Toc,\n} from \"../../lib/build-doc-map\";\nimport redirects from \"./redirects.json\";\n\n/**\n * Resolves a redirect destination from the docs redirect map.\n *\n * @example\n * resolveDocsRedirect(\"/docs\") // \"/docs/what-is-lix\"\n */\nfunction resolveDocsRedirect(pathname: string): string | undefined {\n  const normalized = pathname.endsWith(\"/\") ? pathname.slice(0, -1) : pathname;\n  const redirectMap = redirects as Record<string, string>;\n  return redirectMap[normalized] ?? redirectMap[pathname];\n}\n\nconst docs = import.meta.glob<string>(\"../../../../../docs/**/*.md\", {\n  eager: true,\n  import: \"default\",\n  query: \"?raw\",\n});\n\nconst { bySlug: docsBySlug } = buildDocMaps(docs);\nconst docsByRelativePath = Object.values(docsBySlug).reduce(\n  (acc, doc) => {\n    acc[doc.relativePath] = doc;\n    return acc;\n  },\n  {} as Record<string, (typeof docsBySlug)[string]>,\n);\n\nexport const Route = createFileRoute(\"/docs/\")({\n  loader: () => {\n    const redirected = resolveDocsRedirect(\"/docs\");\n    if (redirected) {\n      throw redirect({\n        to: redirected,\n      });\n    }\n\n    const toc = tableOfContents as Toc;\n    const firstPath = Object.values(toc)[0]?.[0]?.path;\n    const firstRelative = firstPath\n      ? normalizeRelativePath(firstPath)\n      : undefined;\n    const firstDoc =\n      (firstRelative && docsByRelativePath[firstRelative]) ||\n      Object.values(docsBySlug)[0];\n\n    if (!firstDoc) {\n      throw notFound();\n    }\n\n    throw redirect({\n      // @ts-ignore\n      to: `/docs/${firstDoc.slug}`,\n    });\n  },\n});\n"
  },
  {
    "path": "packages/website/src/routes/docs/redirects.json",
    "content": "{\n  \"/docs\": \"/docs/what-is-lix\"\n}\n"
  },
  {
    "path": "packages/website/src/routes/guide/$slugId.tsx",
    "content": "import { createFileRoute, redirect } from \"@tanstack/react-router\";\n\nexport const Route = createFileRoute(\"/guide/$slugId\")({\n  loader: ({ params }) => {\n    throw redirect({\n      to: \"/docs/$slugId\",\n      params: { slugId: params.slugId },\n    });\n  },\n});\n"
  },
  {
    "path": "packages/website/src/routes/guide/index.tsx",
    "content": "import { createFileRoute, redirect } from \"@tanstack/react-router\";\n\nexport const Route = createFileRoute(\"/guide/\")({\n  loader: () => {\n    throw redirect({\n      to: \"/docs/$slugId\",\n      params: { slugId: \"what-is-lix\" },\n    });\n  },\n});\n"
  },
  {
    "path": "packages/website/src/routes/index.tsx",
    "content": "import { createFileRoute } from \"@tanstack/react-router\";\nimport { parse } from \"@opral/markdown-wc\";\nimport LandingPage from \"../components/landing-page\";\nimport {\n  buildCanonicalUrl,\n  buildWebSiteJsonLd,\n  resolveOgImage,\n} from \"../lib/seo\";\nimport markdownPageCss from \"../components/markdown-page.style.css?url\";\nimport readmeMarkdown from \"../../../../README.md?raw\";\n\nasync function loadReadmeContent() {\n  const parsed = await parse(readmeMarkdown);\n  return { html: parsed.html };\n}\n\nexport const Route = createFileRoute(\"/\")({\n  loader: async () => {\n    return await loadReadmeContent();\n  },\n  head: () => {\n    const title =\n      \"Lix | Version control as a library for AI agents and structured data\";\n    const description =\n      \"Lix gives AI agents and applications branchable, reviewable change control for structured files, binary formats, and SQL-backed workflows.\";\n    const canonicalUrl = buildCanonicalUrl(\"/\");\n    const ogImage = resolveOgImage();\n    const jsonLd = buildWebSiteJsonLd({\n      title,\n      description,\n      canonicalUrl,\n    });\n\n    return {\n      meta: [\n        { title },\n        { name: \"description\", content: description },\n        { property: \"og:title\", content: title },\n        { property: \"og:description\", content: description },\n        { property: \"og:url\", content: canonicalUrl },\n        { property: \"og:type\", content: \"website\" },\n        { property: \"og:site_name\", content: \"Lix\" },\n        { property: \"og:locale\", content: \"en_US\" },\n        { property: \"og:image\", content: ogImage.url },\n        { property: \"og:image:alt\", content: ogImage.alt },\n        { name: \"twitter:card\", content: \"summary_large_image\" },\n        { name: \"twitter:title\", content: title },\n        { name: \"twitter:description\", content: description },\n        { name: \"twitter:image\", content: ogImage.url },\n        { name: 
\"twitter:image:alt\", content: ogImage.alt },\n      ],\n      links: [\n        { rel: \"canonical\", href: canonicalUrl },\n        { rel: \"stylesheet\", href: markdownPageCss },\n      ],\n      scripts: [\n        {\n          type: \"application/ld+json\",\n          children: JSON.stringify(jsonLd),\n        },\n      ],\n    };\n  },\n  component: LandingPageWrapper,\n});\n\nfunction LandingPageWrapper() {\n  const { html } = Route.useLoaderData();\n  return <LandingPage readmeHtml={html} />;\n}\n"
  },
  {
    "path": "packages/website/src/routes/plugins/$pluginKey.tsx",
    "content": "import { createFileRoute } from \"@tanstack/react-router\";\nimport { Header } from \"../../components/header\";\nimport { Footer } from \"../../components/footer\";\nimport {\n  buildBreadcrumbJsonLd,\n  buildCanonicalUrl,\n  buildWebPageJsonLd,\n} from \"../../lib/seo\";\n\nconst title = \"Lix Plugins\";\nconst description = \"Plugins are coming soon.\";\n\nexport const Route = createFileRoute(\"/plugins/$pluginKey\")({\n  head: () => {\n    const canonicalUrl = buildCanonicalUrl(\"/plugins\");\n    const jsonLd = buildWebPageJsonLd({\n      title,\n      description,\n      canonicalUrl,\n    });\n    const breadcrumbJsonLd = buildBreadcrumbJsonLd([\n      { name: \"Lix\", item: buildCanonicalUrl(\"/\") },\n      { name: \"Plugins\", item: canonicalUrl },\n    ]);\n\n    return {\n      meta: [\n        { title },\n        { name: \"description\", content: description },\n        { property: \"og:title\", content: title },\n        { property: \"og:description\", content: description },\n        { property: \"og:url\", content: canonicalUrl },\n        { property: \"og:type\", content: \"website\" },\n        { property: \"og:site_name\", content: \"Lix\" },\n        { name: \"twitter:card\", content: \"summary\" },\n        { name: \"twitter:title\", content: title },\n        { name: \"twitter:description\", content: description },\n      ],\n      links: [{ rel: \"canonical\", href: canonicalUrl }],\n      scripts: [\n        {\n          type: \"application/ld+json\",\n          children: JSON.stringify(jsonLd),\n        },\n        {\n          type: \"application/ld+json\",\n          children: JSON.stringify(breadcrumbJsonLd),\n        },\n      ],\n    };\n  },\n  component: PluginsComingSoonPage,\n});\n\nfunction PluginsComingSoonPage() {\n  return (\n    <div className=\"min-h-screen bg-white text-slate-900\">\n      <Header />\n      <main className=\"mx-auto flex min-h-[60vh] w-full max-w-3xl flex-col justify-center px-6 py-24 
text-center\">\n        <p className=\"text-sm font-medium uppercase tracking-wide text-[#0891B2]\">\n          Plugins\n        </p>\n        <h1 className=\"mt-4 text-4xl font-semibold tracking-tight sm:text-5xl\">\n          Plugins are coming soon\n        </h1>\n        <p className=\"mt-6 text-lg leading-8 text-slate-600\">\n          We are rewriting this section as part of the website cleanup.\n        </p>\n      </main>\n      <Footer />\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/website/src/routes/plugins/index.tsx",
    "content": "import { createFileRoute } from \"@tanstack/react-router\";\nimport { Header } from \"../../components/header\";\nimport { Footer } from \"../../components/footer\";\nimport {\n  buildBreadcrumbJsonLd,\n  buildCanonicalUrl,\n  buildWebPageJsonLd,\n} from \"../../lib/seo\";\n\nconst title = \"Lix Plugins\";\nconst description = \"Plugins are coming soon.\";\n\nexport const Route = createFileRoute(\"/plugins/\")({\n  head: () => {\n    const canonicalUrl = buildCanonicalUrl(\"/plugins\");\n    const jsonLd = buildWebPageJsonLd({\n      title,\n      description,\n      canonicalUrl,\n    });\n    const breadcrumbJsonLd = buildBreadcrumbJsonLd([\n      { name: \"Lix\", item: buildCanonicalUrl(\"/\") },\n      { name: \"Plugins\", item: canonicalUrl },\n    ]);\n\n    return {\n      meta: [\n        { title },\n        { name: \"description\", content: description },\n        { property: \"og:title\", content: title },\n        { property: \"og:description\", content: description },\n        { property: \"og:url\", content: canonicalUrl },\n        { property: \"og:type\", content: \"website\" },\n        { property: \"og:site_name\", content: \"Lix\" },\n        { name: \"twitter:card\", content: \"summary\" },\n        { name: \"twitter:title\", content: title },\n        { name: \"twitter:description\", content: description },\n      ],\n      links: [{ rel: \"canonical\", href: canonicalUrl }],\n      scripts: [\n        {\n          type: \"application/ld+json\",\n          children: JSON.stringify(jsonLd),\n        },\n        {\n          type: \"application/ld+json\",\n          children: JSON.stringify(breadcrumbJsonLd),\n        },\n      ],\n    };\n  },\n  component: PluginsComingSoonPage,\n});\n\nfunction PluginsComingSoonPage() {\n  return (\n    <div className=\"min-h-screen bg-white text-slate-900\">\n      <Header />\n      <main className=\"mx-auto flex min-h-[60vh] w-full max-w-3xl flex-col justify-center px-6 py-24 text-center\">\n  
      <p className=\"text-sm font-medium uppercase tracking-wide text-[#0891B2]\">\n          Plugins\n        </p>\n        <h1 className=\"mt-4 text-4xl font-semibold tracking-tight sm:text-5xl\">\n          Plugins are coming soon\n        </h1>\n        <p className=\"mt-6 text-lg leading-8 text-slate-600\">\n          We are rewriting this section as part of the website cleanup.\n        </p>\n      </main>\n      <Footer />\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/website/src/routes/plugins/plugin.registry.json",
    "content": "{\n  \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n  \"plugins\": [\n    {\n      \"key\": \"plugin_json\",\n      \"name\": \"JSON Plugin\",\n      \"package\": \"@lix-js/plugin-json\",\n      \"description\": \"Tracks JSON files with JSON Pointer entities\",\n      \"file_types\": [\"*.json\"],\n      \"readme\": \"https://raw.githubusercontent.com/opral/lix/main/packages/plugin-json/README.md\",\n      \"links\": {\n        \"npm\": \"https://www.npmjs.com/package/@lix-js/plugin-json\",\n        \"github\": \"https://github.com/opral/lix/tree/main/packages/plugin-json\",\n        \"docs\": \"/plugins/plugin_json\"\n      }\n    },\n    {\n      \"key\": \"lix_plugin_csv\",\n      \"name\": \"CSV Plugin\",\n      \"package\": \"@lix-js/plugin-csv\",\n      \"description\": \"Tracks CSV files with row-level entities\",\n      \"file_types\": [\"*.csv\"],\n      \"readme\": \"https://raw.githubusercontent.com/opral/lix/main/packages/plugin-csv/README.md\",\n      \"links\": {\n        \"npm\": \"https://www.npmjs.com/package/@lix-js/plugin-csv\",\n        \"github\": \"https://github.com/opral/lix/tree/main/packages/plugin-csv\",\n        \"docs\": \"/plugins/lix_plugin_csv\"\n      }\n    },\n    {\n      \"key\": \"plugin_md\",\n      \"name\": \"Markdown Plugin\",\n      \"package\": \"@lix-js/plugin-md\",\n      \"description\": \"Tracks Markdown files using markdown-wc AST\",\n      \"file_types\": [\"*.md\"],\n      \"readme\": \"https://raw.githubusercontent.com/opral/lix/main/packages/plugin-md/README.md\",\n      \"links\": {\n        \"npm\": \"https://www.npmjs.com/package/@lix-js/plugin-md\",\n        \"github\": \"https://github.com/opral/lix/tree/main/packages/plugin-md\",\n        \"docs\": \"/plugins/plugin_md\"\n      }\n    },\n    {\n      \"key\": \"plugin_prosemirror\",\n      \"name\": \"ProseMirror Plugin\",\n      \"package\": \"@lix-js/plugin-prosemirror\",\n      \"description\": \"Tracks rich text edits 
in ProseMirror documents\",\n      \"file_types\": [\"/prosemirror.json\"],\n      \"readme\": \"https://raw.githubusercontent.com/opral/lix/main/packages/plugin-prosemirror/README.md\",\n      \"links\": {\n        \"npm\": \"https://www.npmjs.com/package/@lix-js/plugin-prosemirror\",\n        \"github\": \"https://github.com/opral/lix/tree/main/packages/plugin-prosemirror\",\n        \"docs\": \"/plugins/plugin_prosemirror\",\n        \"example\": \"https://prosemirror-example.onrender.com/\"\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "packages/website/src/routes/rfc/$slug.tsx",
    "content": "import { createFileRoute, Link, redirect } from \"@tanstack/react-router\";\nimport { parse } from \"@opral/markdown-wc\";\nimport { useEffect } from \"react\";\nimport markdownPageCss from \"../../components/markdown-page.style.css?url\";\nimport { Footer } from \"../../components/footer\";\nimport { Header } from \"../../components/header\";\nimport { PrevNextNav } from \"../../components/prev-next-nav\";\nimport {\n  buildBreadcrumbJsonLd,\n  buildCanonicalUrl,\n  buildWebPageJsonLd,\n  extractOgMeta,\n  extractTwitterMeta,\n  getMarkdownDescription,\n  getMarkdownTitle,\n  resolveOgImage,\n  splitTitleFromHtml,\n} from \"../../lib/seo\";\n\nconst rfcMarkdownFiles = import.meta.glob<string>(\n  \"../../../../../rfcs/**/index.md\",\n  {\n    query: \"?raw\",\n    import: \"default\",\n  },\n);\n\nconst rfcRootPrefix = \"../../../../../rfcs/\";\n\ntype RfcPrevNext = {\n  slug: string;\n  title: string;\n} | null;\n\nasync function getTitleForSlug(slug: string): Promise<string> {\n  const path = `${rfcRootPrefix}${slug}/index.md`;\n  const loader = rfcMarkdownFiles[path];\n  if (!loader) return slug;\n\n  const rawMarkdown = await loader();\n  const parsed = await parse(rawMarkdown);\n\n  return (\n    getMarkdownTitle({\n      rawMarkdown,\n      frontmatter: parsed.frontmatter,\n    }) ?? 
slug\n  );\n}\n\n/**\n * Rewrite RFC links to remove index.md suffix\n * Handles both relative paths (../001-slug/index.md) and absolute paths (/rfc/001-slug/index.md)\n */\nfunction rewriteRfcLinks(html: string): string {\n  return (\n    html\n      // Handle relative paths: ../001-slug/index.md or ./001-slug/index.md\n      .replace(/href=\"\\.\\.?\\/([\\d]+-[^/]+)\\/index\\.md\"/g, 'href=\"/rfc/$1\"')\n      // Handle absolute paths that were resolved by assetBaseUrl: /rfc/001-slug/index.md\n      .replace(/href=\"\\/rfc\\/([\\d]+-[^/]+)\\/index\\.md\"/g, 'href=\"/rfc/$1\"')\n  );\n}\n\nasync function loadRfc(slug: string) {\n  if (!slug) {\n    throw new Error(\"Missing RFC slug\");\n  }\n\n  const path = `${rfcRootPrefix}${slug}/index.md`;\n  const loader = rfcMarkdownFiles[path];\n\n  if (!loader) {\n    throw new Error(`RFC not found: ${slug}`);\n  }\n\n  // Auto-discover all RFCs for prev/next navigation\n  const rfcPaths = Object.keys(rfcMarkdownFiles);\n  const allSlugs = rfcPaths\n    .map((p) => p.replace(rfcRootPrefix, \"\").replace(\"/index.md\", \"\"))\n    .sort((a, b) => b.localeCompare(a)); // Sort Z-A\n\n  const currentIndex = allSlugs.findIndex((s) => s === slug);\n  const prevSlug = currentIndex > 0 ? allSlugs[currentIndex - 1] : null;\n  const nextSlug =\n    currentIndex < allSlugs.length - 1 ? allSlugs[currentIndex + 1] : null;\n\n  const prevRfc: RfcPrevNext = prevSlug\n    ? { slug: prevSlug, title: await getTitleForSlug(prevSlug) }\n    : null;\n  const nextRfc: RfcPrevNext = nextSlug\n    ? 
{ slug: nextSlug, title: await getTitleForSlug(nextSlug) }\n    : null;\n\n  const rawMarkdown = await loader();\n  const parsed = await parse(rawMarkdown, {\n    assetBaseUrl: `/rfc/${slug}/`,\n  });\n\n  const rendered = splitTitleFromHtml(rewriteRfcLinks(parsed.html));\n  const title =\n    getMarkdownTitle({\n      rawMarkdown,\n      frontmatter: parsed.frontmatter,\n    }) ??\n    rendered.title ??\n    slug;\n  const description =\n    getMarkdownDescription({\n      rawMarkdown,\n      frontmatter: parsed.frontmatter,\n    }) ?? `Design proposal for ${title}.`;\n  const date = parsed.frontmatter?.date as string | undefined;\n\n  return {\n    slug,\n    title,\n    description,\n    date,\n    html: rendered.body,\n    frontmatter: parsed.frontmatter,\n    prevRfc,\n    nextRfc,\n  };\n}\n\ntype RfcLoaderData = Awaited<ReturnType<typeof loadRfc>>;\n\nexport function buildRfcHead(loaderData?: RfcLoaderData) {\n  const title = loaderData?.title;\n  const description = loaderData?.description;\n  const slug = loaderData?.slug;\n  const canonicalUrl = slug\n    ? buildCanonicalUrl(`/rfc/${slug}`)\n    : buildCanonicalUrl(\"/rfc\");\n  const ogImage = resolveOgImage(loaderData?.frontmatter);\n  const ogMeta = extractOgMeta(loaderData?.frontmatter);\n  const twitterMeta = extractTwitterMeta(loaderData?.frontmatter);\n  const pageTitle = title ? 
`${title} | Lix RFCs` : \"Lix RFCs\";\n\n  const links: Array<{ rel: string; href: string }> = [\n    { rel: \"stylesheet\", href: markdownPageCss },\n    { rel: \"canonical\", href: canonicalUrl },\n  ];\n\n  if (loaderData?.prevRfc?.slug) {\n    links.push({\n      rel: \"prev\",\n      href: buildCanonicalUrl(`/rfc/${loaderData.prevRfc.slug}`),\n    });\n  }\n\n  if (loaderData?.nextRfc?.slug) {\n    links.push({\n      rel: \"next\",\n      href: buildCanonicalUrl(`/rfc/${loaderData.nextRfc.slug}`),\n    });\n  }\n\n  const meta: Array<\n    | { title: string }\n    | { name: string; content: string }\n    | { property: string; content: string }\n  > = [\n    { title: pageTitle },\n    { property: \"og:title\", content: pageTitle },\n    { property: \"og:description\", content: description ?? \"Lix RFC\" },\n    { property: \"og:url\", content: canonicalUrl },\n    { property: \"og:type\", content: \"article\" },\n    { property: \"og:site_name\", content: \"Lix\" },\n    { property: \"og:locale\", content: \"en_US\" },\n    { property: \"og:image\", content: ogImage.url },\n    { property: \"og:image:alt\", content: ogImage.alt },\n    { name: \"twitter:card\", content: \"summary_large_image\" },\n    { name: \"twitter:title\", content: pageTitle },\n    { name: \"twitter:description\", content: description ?? 
\"Lix RFC\" },\n    { name: \"twitter:image\", content: ogImage.url },\n    { name: \"twitter:image:alt\", content: ogImage.alt },\n  ];\n\n  if (description) {\n    meta.push({ name: \"description\", content: description });\n  }\n\n  if (loaderData?.date) {\n    meta.push({\n      property: \"article:published_time\",\n      content: loaderData.date,\n    });\n  }\n\n  const webPageJsonLd = buildWebPageJsonLd({\n    title: pageTitle,\n    description,\n    canonicalUrl,\n    image: ogImage.url,\n  });\n  const breadcrumbJsonLd = buildBreadcrumbJsonLd([\n    { name: \"Lix\", item: buildCanonicalUrl(\"/\") },\n    { name: \"RFCs\", item: buildCanonicalUrl(\"/rfc\") },\n    ...(title ? [{ name: title, item: canonicalUrl }] : []),\n  ]);\n\n  const scripts = [\n    {\n      type: \"application/ld+json\",\n      children: JSON.stringify({\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"TechArticle\",\n        headline: title ?? \"Lix RFC\",\n        description,\n        url: canonicalUrl,\n        image: ogImage.url,\n        ...(loaderData?.date ? 
{ datePublished: loaderData.date } : {}),\n      }),\n    },\n    {\n      type: \"application/ld+json\",\n      children: JSON.stringify(webPageJsonLd),\n    },\n    {\n      type: \"application/ld+json\",\n      children: JSON.stringify(breadcrumbJsonLd),\n    },\n  ];\n\n  return {\n    meta: [...meta, ...ogMeta, ...twitterMeta],\n    links,\n    scripts,\n  };\n}\n\nexport const Route = createFileRoute(\"/rfc/$slug\")({\n  loader: async ({ params }) => {\n    try {\n      return await loadRfc(params.slug);\n    } catch {\n      throw redirect({ to: \"/rfc\" });\n    }\n  },\n  head: ({ loaderData }) => buildRfcHead(loaderData),\n  component: RfcPage,\n});\n\nfunction RfcPage() {\n  const { title, html, prevRfc, nextRfc } = Route.useLoaderData();\n\n  useEffect(() => {\n    // @ts-expect-error - JS-only module\n    import(\"../../components/markdown-page.interactive.js\");\n  }, [html]);\n\n  return (\n    <div className=\"flex min-h-screen flex-col bg-white text-slate-900\">\n      <Header />\n      <main className=\"flex-1\">\n        <div className=\"mx-auto max-w-4xl px-6 py-12\">\n          <nav className=\"mb-8\">\n            <Link\n              to=\"/rfc\"\n              className=\"inline-flex items-center gap-1.5 text-sm text-slate-500 hover:text-slate-700 transition-colors\"\n            >\n              <svg\n                className=\"h-4 w-4\"\n                viewBox=\"0 0 24 24\"\n                fill=\"none\"\n                stroke=\"currentColor\"\n                strokeWidth=\"2\"\n                strokeLinecap=\"round\"\n                strokeLinejoin=\"round\"\n              >\n                <path d=\"M19 12H5M12 19l-7-7 7-7\" />\n              </svg>\n              All RFCs\n            </Link>\n          </nav>\n\n          <h1 className=\"text-2xl md:text-3xl lg:text-4xl font-bold text-slate-900 mb-8\">\n            {title}\n          </h1>\n\n          <article\n            className=\"markdown-wc-body\"\n            
dangerouslySetInnerHTML={{ __html: html }}\n          />\n\n          <PrevNextNav\n            prev={prevRfc}\n            next={nextRfc}\n            basePath=\"/rfc\"\n            prevLabel=\"Previous RFC\"\n            nextLabel=\"Next RFC\"\n            className=\"mt-16\"\n          />\n        </div>\n      </main>\n      <Footer />\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/website/src/routes/rfc/index.tsx",
    "content": "import { createFileRoute, Link } from \"@tanstack/react-router\";\nimport { parse } from \"@opral/markdown-wc\";\nimport { Footer } from \"../../components/footer\";\nimport { Header } from \"../../components/header\";\nimport {\n  buildCanonicalUrl,\n  buildWebPageJsonLd,\n  resolveOgImage,\n} from \"../../lib/seo\";\n\nconst rfcMarkdownFiles = import.meta.glob<string>(\n  \"../../../../../rfcs/**/index.md\",\n  {\n    query: \"?raw\",\n    import: \"default\",\n  },\n);\n\nconst rfcRootPrefix = \"../../../../../rfcs/\";\n\ntype RfcEntry = {\n  slug: string;\n  title: string;\n  date?: string;\n};\n\nasync function loadRfcIndex(): Promise<{ rfcs: RfcEntry[] }> {\n  const rfcPaths = Object.keys(rfcMarkdownFiles);\n\n  const rfcs = await Promise.all(\n    rfcPaths.map(async (path) => {\n      // Extract slug from path like \"../../../../../rfcs/001-preprocess-writes/index.md\"\n      const slug = path.replace(rfcRootPrefix, \"\").replace(\"/index.md\", \"\");\n      const rawMarkdown = await rfcMarkdownFiles[path]();\n      const parsed = await parse(rawMarkdown);\n\n      // Extract title from frontmatter or first h1\n      let title = slug;\n      if (parsed.frontmatter?.title) {\n        title = parsed.frontmatter.title as string;\n      } else {\n        const h1Match = rawMarkdown.match(/^#\\s+(.+)$/m);\n        if (h1Match) {\n          title = h1Match[1];\n        }\n      }\n\n      // Extract date from frontmatter\n      const date = parsed.frontmatter?.date as string | undefined;\n\n      return { slug, title, date };\n    }),\n  );\n\n  // Sort Z-A (descending by slug, so 002 comes before 001)\n  rfcs.sort((a, b) => b.slug.localeCompare(a.slug));\n\n  return { rfcs };\n}\n\nexport function buildRfcIndexHead() {\n  const title =\n    \"Lix RFCs | Design proposals, architecture decisions, and roadmap notes\";\n  const description =\n    \"Read Lix RFCs covering architecture decisions, engine design, and upcoming changes before they land in 
the product.\";\n  const canonicalUrl = buildCanonicalUrl(\"/rfc\");\n  const ogImage = resolveOgImage();\n  const jsonLd = buildWebPageJsonLd({\n    title,\n    description,\n    canonicalUrl,\n    image: ogImage.url,\n  });\n\n  return {\n    links: [{ rel: \"canonical\", href: canonicalUrl }],\n    scripts: [\n      {\n        type: \"application/ld+json\",\n        children: JSON.stringify(jsonLd),\n      },\n    ],\n    meta: [\n      { title },\n      { name: \"description\", content: description },\n      { property: \"og:title\", content: title },\n      { property: \"og:description\", content: description },\n      { property: \"og:url\", content: canonicalUrl },\n      { property: \"og:type\", content: \"website\" },\n      { property: \"og:site_name\", content: \"Lix\" },\n      { property: \"og:locale\", content: \"en_US\" },\n      { property: \"og:image\", content: ogImage.url },\n      { property: \"og:image:alt\", content: ogImage.alt },\n      { name: \"twitter:card\", content: \"summary_large_image\" },\n      { name: \"twitter:title\", content: title },\n      { name: \"twitter:description\", content: description },\n      { name: \"twitter:image\", content: ogImage.url },\n      { name: \"twitter:image:alt\", content: ogImage.alt },\n    ],\n  };\n}\n\nexport const Route = createFileRoute(\"/rfc/\")({\n  loader: async () => {\n    return await loadRfcIndex();\n  },\n  head: () => buildRfcIndexHead(),\n  component: RfcIndexPage,\n});\n\nfunction formatDate(dateString: string): string {\n  try {\n    const date = new Date(dateString);\n    return date.toLocaleDateString(\"en-US\", {\n      year: \"numeric\",\n      month: \"long\",\n      day: \"numeric\",\n    });\n  } catch {\n    return dateString;\n  }\n}\n\nfunction RfcIndexPage() {\n  const { rfcs } = Route.useLoaderData();\n\n  return (\n    <div className=\"flex min-h-screen flex-col bg-white text-slate-900\">\n      <Header />\n      <main className=\"flex-1\">\n        <div 
className=\"mx-auto max-w-4xl px-6 py-16\">\n          <h1 className=\"mb-12 text-4xl font-bold tracking-tight text-slate-900\">\n            RFCs\n          </h1>\n\n          <p className=\"mb-10 max-w-3xl text-base leading-7 text-slate-600\">\n            Requests for Comments capture the design proposals, architectural\n            tradeoffs, and implementation plans behind major Lix changes.\n          </p>\n\n          <div className=\"flex flex-col gap-8\">\n            {rfcs.map((rfc) => {\n              const rfcNumber = rfc.slug.match(/^(\\d+)/)?.[1] ?? \"\";\n              return (\n                <Link\n                  key={rfc.slug}\n                  to=\"/rfc/$slug\"\n                  params={{ slug: rfc.slug }}\n                  className=\"group block transition-colors hover:text-cyan-700\"\n                >\n                  <div className=\"flex items-baseline justify-between mb-1\">\n                    <span className=\"text-sm text-slate-400 font-mono\">\n                      RFC {rfcNumber}\n                    </span>\n                    {rfc.date && (\n                      <span className=\"text-sm text-slate-400\">\n                        {formatDate(rfc.date)}\n                      </span>\n                    )}\n                  </div>\n                  <span className=\"text-base font-medium underline decoration-slate-300 underline-offset-4 group-hover:decoration-cyan-500\">\n                    {rfc.title}\n                  </span>\n                </Link>\n              );\n            })}\n          </div>\n        </div>\n      </main>\n      <Footer />\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/website/src/ssg/github-stars-plugin.ts",
    "content": "import fs from \"node:fs/promises\";\nimport { fileURLToPath } from \"node:url\";\n\nexport type GithubRepoMetrics = {\n  stars: number;\n  forks: number;\n  openIssues: number;\n  closedIssues: number;\n  contributorCount: number;\n};\n\ntype GithubCache = {\n  generatedAt: string;\n  data: Record<string, GithubRepoMetrics | null>;\n};\n\nconst GITHUB_CACHE_TTL_MINUTES = 60;\nconst githubCachePath = fileURLToPath(\n  new URL(\"../github_repo_data.gen.json\", import.meta.url),\n);\nlet didLogGithubToken = false;\n\nexport function githubStarsPlugin({ token }: { token?: string }) {\n  return {\n    name: \"lix:github-data\",\n    async buildStart() {\n      await ensureGithubCache(token);\n    },\n    async configureServer() {\n      await ensureGithubCache(token);\n    },\n  };\n}\n\nasync function ensureGithubCache(token?: string) {\n  if (token && !didLogGithubToken) {\n    console.info(\"Using LIX_WEBSITE_GITHUB_TOKEN for GitHub API requests.\");\n    didLogGithubToken = true;\n  }\n  const cached = await readGithubCache();\n  if (cached && !isCacheExpired(cached)) return;\n\n  const repos = new Set<string>([\"opral/lix\"]);\n\n  const data: Record<string, GithubRepoMetrics | null> = {};\n  for (const repo of repos) {\n    const metrics = await fetchGithubRepoMetrics(repo, token);\n    data[repo.toLowerCase()] = metrics;\n  }\n\n  const payload: GithubCache = {\n    generatedAt: new Date().toISOString(),\n    data,\n  };\n\n  await fs.writeFile(githubCachePath, JSON.stringify(payload, null, 2) + \"\\n\");\n}\n\nasync function readGithubCache(): Promise<GithubCache | null> {\n  try {\n    const raw = await fs.readFile(githubCachePath, \"utf8\");\n    return JSON.parse(raw) as GithubCache;\n  } catch {\n    return null;\n  }\n}\n\nfunction isCacheExpired(cache: GithubCache) {\n  const generatedAt = Date.parse(cache.generatedAt);\n  if (Number.isNaN(generatedAt)) return true;\n  const ttlMs = GITHUB_CACHE_TTL_MINUTES * 60 * 1000;\n  return 
Date.now() - generatedAt > ttlMs;\n}\n\nfunction getHeaders(token?: string) {\n  return {\n    Accept: \"application/vnd.github+json\",\n    \"User-Agent\": \"lix-website\",\n    ...(token ? { Authorization: `Bearer ${token}` } : {}),\n  };\n}\n\nasync function fetchGithubRepoMetrics(\n  repo: string,\n  token?: string,\n): Promise<GithubRepoMetrics | null> {\n  try {\n    const repoRes = await fetch(`https://api.github.com/repos/${repo}`, {\n      headers: getHeaders(token),\n    });\n\n    if (!repoRes.ok) {\n      console.warn(`GitHub repo fetch failed for ${repo}: ${repoRes.status}`);\n      return null;\n    }\n\n    const repoData = (await repoRes.json()) as {\n      stargazers_count?: number;\n      forks_count?: number;\n      open_issues_count?: number;\n    };\n\n    const openIssuesRes = await fetch(\n      `https://api.github.com/search/issues?q=repo:${repo}+is:issue+is:open&per_page=1`,\n      { headers: getHeaders(token) },\n    );\n    const closedIssuesRes = await fetch(\n      `https://api.github.com/search/issues?q=repo:${repo}+is:issue+is:closed&per_page=1`,\n      { headers: getHeaders(token) },\n    );\n\n    let openIssues = 0;\n    if (openIssuesRes.ok) {\n      const openData = (await openIssuesRes.json()) as {\n        total_count?: number;\n      };\n      openIssues = openData.total_count ?? 0;\n    }\n\n    let closedIssues = 0;\n    if (closedIssuesRes.ok) {\n      const closedData = (await closedIssuesRes.json()) as {\n        total_count?: number;\n      };\n      closedIssues = closedData.total_count ?? 
0;\n    }\n\n    const contributorsRes = await fetch(\n      `https://api.github.com/repos/${repo}/contributors?per_page=1&anon=1`,\n      { headers: getHeaders(token) },\n    );\n\n    let contributorCount = 0;\n    if (contributorsRes.ok) {\n      const linkHeader = contributorsRes.headers.get(\"Link\");\n      if (linkHeader) {\n        const lastMatch = linkHeader.match(/page=(\\d+)>; rel=\"last\"/);\n        if (lastMatch) {\n          contributorCount = parseInt(lastMatch[1], 10);\n        }\n      } else {\n        const data = (await contributorsRes.json()) as unknown[];\n        contributorCount = data.length;\n      }\n    }\n\n    return {\n      stars: repoData.stargazers_count ?? 0,\n      forks: repoData.forks_count ?? 0,\n      openIssues,\n      closedIssues,\n      contributorCount,\n    };\n  } catch (error) {\n    console.warn(`GitHub fetch failed for ${repo}`, error);\n    return null;\n  }\n}\n"
  },
  {
    "path": "packages/website/src/styles.css",
    "content": "@import \"tailwindcss\";\n\nbody {\n  @apply m-0;\n  font-family:\n    Inter,\n    ui-sans-serif,\n    system-ui,\n    -apple-system,\n    BlinkMacSystemFont,\n    \"Segoe UI\",\n    sans-serif;\n  -webkit-font-smoothing: antialiased;\n  -moz-osx-font-smoothing: grayscale;\n  color: #213547;\n  background: #ffffff;\n}\n\ncode {\n  font-family:\n    ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, \"Liberation Mono\",\n    \"Courier New\", monospace;\n}\n"
  },
  {
    "path": "packages/website/src/types/lix-js-plugin-json.d.ts",
    "content": "declare module \"@lix-js/plugin-json\";\n"
  },
  {
    "path": "packages/website/tsconfig.json",
    "content": "{\n  \"include\": [\"**/*.ts\", \"**/*.tsx\", \"**/*.d.ts\"],\n  \"compilerOptions\": {\n    \"target\": \"ES2022\",\n    \"jsx\": \"react-jsx\",\n    \"module\": \"ESNext\",\n    \"lib\": [\"ES2022\", \"DOM\", \"DOM.Iterable\"],\n    \"types\": [\"vite/client\"],\n\n    /* Bundler mode */\n    \"moduleResolution\": \"bundler\",\n    \"allowImportingTsExtensions\": true,\n    \"verbatimModuleSyntax\": false,\n    \"noEmit\": true,\n    \"resolveJsonModule\": true,\n\n    /* Linting */\n    \"skipLibCheck\": true,\n    \"strict\": true,\n    \"noUnusedLocals\": true,\n    \"noUnusedParameters\": true,\n    \"noFallthroughCasesInSwitch\": true,\n    \"noUncheckedSideEffectImports\": true,\n    \"baseUrl\": \".\",\n    \"paths\": {\n      \"@/*\": [\"./src/*\"]\n    }\n  }\n}\n"
  },
  {
    "path": "packages/website/vite.config.ts",
    "content": "import { defineConfig, loadEnv, type Plugin } from \"vite\";\nimport { tanstackStart } from \"@tanstack/react-start/plugin/vite\";\nimport viteReact from \"@vitejs/plugin-react\";\nimport tailwindcss from \"@tailwindcss/vite\";\nimport { pluginReadmeSync } from \"./scripts/plugin-readme-sync\";\nimport { githubStarsPlugin } from \"./src/ssg/github-stars-plugin\";\nimport { viteStaticCopy } from \"vite-plugin-static-copy\";\nimport path from \"path\";\nimport fs from \"fs\";\nimport type { ViteDevServer } from \"vite\";\n\nconst mimeTypes: Record<string, string> = {\n  \".svg\": \"image/svg+xml\",\n  \".png\": \"image/png\",\n  \".jpg\": \"image/jpeg\",\n  \".jpeg\": \"image/jpeg\",\n  \".gif\": \"image/gif\",\n  \".webp\": \"image/webp\",\n  \".ico\": \"image/x-icon\",\n};\n\n/**\n * Serves blog assets from the blog directory in dev mode.\n */\nfunction blogAssetsPlugin(): Plugin {\n  return {\n    name: \"blog-assets\",\n    configureServer(server) {\n      server.middlewares.use((req, res, next) => {\n        if (req.url?.startsWith(\"/blog/\") && !req.url.endsWith(\"/\")) {\n          const assetPath = req.url.replace(\"/blog/\", \"\");\n          const filePath = path.resolve(__dirname, \"../../blog\", assetPath);\n          if (fs.existsSync(filePath) && fs.statSync(filePath).isFile()) {\n            const ext = path.extname(filePath).toLowerCase();\n            const contentType = mimeTypes[ext] || \"application/octet-stream\";\n            res.setHeader(\"Content-Type\", contentType);\n            return res.end(fs.readFileSync(filePath));\n          }\n        }\n        next();\n      });\n    },\n  };\n}\n\n/**\n * Keeps the docs route module graph in sync when root docs files are added or\n * removed while the dev server is already running.\n */\nfunction docsContentWatchPlugin(): Plugin {\n  const docsDir = path.resolve(__dirname, \"../../docs\");\n  const docsRouteFiles = [\n    path.resolve(__dirname, 
\"src/routes/docs/$slugId.tsx\"),\n    path.resolve(__dirname, \"src/routes/docs/index.tsx\"),\n  ];\n\n  const invalidateDocsRoutes = (server: ViteDevServer) => {\n    for (const routeFile of docsRouteFiles) {\n      const modules = server.moduleGraph.getModulesByFile(routeFile);\n      if (!modules) continue;\n      for (const module of modules) {\n        server.moduleGraph.invalidateModule(module);\n      }\n    }\n    server.ws.send({ type: \"full-reload\" });\n  };\n\n  const isDocsFile = (file: string) => {\n    const normalizedFile = path.normalize(file);\n    return normalizedFile.startsWith(docsDir + path.sep);\n  };\n\n  return {\n    name: \"docs-content-watch\",\n    configureServer(server) {\n      server.watcher.add(docsDir);\n      server.watcher.on(\"add\", (file) => {\n        if (isDocsFile(file)) invalidateDocsRoutes(server);\n      });\n      server.watcher.on(\"unlink\", (file) => {\n        if (isDocsFile(file)) invalidateDocsRoutes(server);\n      });\n    },\n  };\n}\n\nconst config = defineConfig(({ mode, command }) => {\n  const isTest = process.env.VITEST === \"true\" || mode === \"test\";\n  const env = loadEnv(mode, process.cwd(), \"\");\n  const githubToken =\n    process.env.LIX_WEBSITE_GITHUB_TOKEN ?? env.LIX_WEBSITE_GITHUB_TOKEN;\n\n  return {\n    server: {\n      fs: {\n        allow: [\"../..\", \".\"],\n      },\n    },\n    resolve: {\n      tsconfigPaths: true,\n    },\n    plugins: [\n      command === \"serve\" && blogAssetsPlugin(),\n      command === \"serve\" && docsContentWatchPlugin(),\n      pluginReadmeSync(),\n      githubStarsPlugin({\n        token: githubToken,\n      }),\n      tailwindcss(),\n      !isTest &&\n        viteStaticCopy({\n          targets: [\n            {\n              src: \"../../blog/**\",\n              dest: \"../client/blog\",\n            },\n          ],\n          watch: command === \"serve\" ? 
{ reloadPageOnChange: true } : undefined,\n        }),\n      tanstackStart({\n        prerender: {\n          enabled: true,\n          autoSubfolderIndex: true,\n          autoStaticPathsDiscovery: true,\n          crawlLinks: true,\n          concurrency: 8,\n          retryCount: 2,\n          retryDelay: 1000,\n          maxRedirects: 5,\n          failOnError: true,\n        },\n        sitemap: {\n          enabled: true,\n          host: \"https://lix.dev\",\n        },\n      }),\n      viteReact(),\n    ].filter(Boolean),\n  };\n});\n\nexport default config;\n"
  },
  {
    "path": "packages/website/wrangler.json",
    "content": "{\n  \"$schema\": \"https://unpkg.com/wrangler@latest/config-schema.json\",\n  \"name\": \"lix-website\",\n  \"compatibility_date\": \"2025-11-23\",\n  \"assets\": {\n    \"directory\": \"./dist/client\",\n    \"html_handling\": \"drop-trailing-slash\"\n  }\n}\n"
  },
  {
    "path": "pnpm-workspace.yaml",
    "content": "packages:\n  - packages/**/*\n  - '!packages/**/dist'\nonlyBuiltDependencies:\n  - '@tailwindcss/oxide'\n  - 'better-sqlite3'\n"
  },
  {
    "path": "rfcs/001-preprocess-writes/index.md",
    "content": "---\ndate: \"2025-11-24\"\n---\n\n# Preprocess writes to avoid vtable overhead\n\n## Summary\n\nWrite operations in Lix are slow due to the vtable mechanism crossing the JS ↔ SQLite WASM boundary multiple times per row. This RFC proposes extending the existing SQL preprocessor to handle writes, bypassing [SQLite's Vtable mechanism](https://www.sqlite.org/vtab.html) entirely.\n\n## Background & Current Architecture\n\n### How We Got Here\n\nLix evolved organically from application requirements:\n\n1. **Git era**: Initially built on git, which [proved unsuited despite the ecosystem appeal](https://opral.substack.com/p/building-on-git-was-our-failure-mode).\n\n2. **SQLite migration**: Rewrote on top of SQLite to gain ACID guarantees, a storage format, and a query engine.\n\n3. **DML triggers**: Early prototypes used triggers on regular tables to track changes.\n\n4. **VTable adoption**: The requirement to control transaction and commit semantics led to [SQLite's vtable mechanism](https://www.sqlite.org/vtab.html) to intercept reads and writes.\n\n5. **Read performance fix**: VTables can't be optimized by SQLite (no filter pushdown for `json_extract`, etc.). A preprocessor was built ([#3723](https://github.com/opral/monorepo/pull/3723)) that rewrites SELECT queries to target real tables, achieving native read performance.\n\n6. **Current state**: Reads are fast. Writes remain slow because they still hit the vtable.\n\n### Current Data Model\n\nLix has a unified read/write interface via the virtual table `lix_internal_state_vtable`.\n\nUnderneath the vtable, the state is spread across four groups of physical tables:\n\n1. **Change History** – `lix_internal_change`\n\n   - Stores the history of changes which are used to materialize the committed state.\n   - The foundation of the system.\n\n2. **Transaction state** – `lix_internal_transaction_state`\n\n   - Uncommitted changes (“staging area”) visible via the vtable before commit.\n\n3. 
**Untracked state** – `lix_internal_state_all_untracked`\n\n   - Local-only changes; not synced; coexist with transaction/committed rows.\n\n4. **Committed state** – `lix_internal_state_cache_v1_*`\n   - Schema-partitioned cache tables representing immutable history, optimized for reads. Materialized from `lix_internal_change`.\n\nConceptually:\n\n```\n┌─────────────────────────────────────────────────────────────────┐\n│                    lix_internal_state_vtable                    │\n│                      (unified read/write interface)             │\n└─────────────────────────────────────────────────────────────────┘\n                                 │\n                    ┌────────────┼────────────────────────┐\n                    ▼            ▼                        ▼\n        ┌───────────────┐ ┌───────────┐ ┌─────────────────────────────┐\n        │  Transaction  │ │ Untracked │ │      Committed State        │\n        │    State      │ │   State   │ │      (cache tables)         │\n        │   (staging)   │ │  (local)  │ │                             │\n        └───────────────┘ └───────────┘ └──────────────▲──────────────┘\n              │                │                       │\n              │                │            ┌──────────┴──────────┐\n              │                │            │ lix_internal_change │\n              │                │            │   (change history)  │\n              │                │            └─────────────────────┘\n              │                │                       │\n              └────────────────┴───────────────────────┘\n                               │\n                        Prioritized UNION\n                      (transaction > untracked > committed)\n```\n\n### Current Read Path (Fast)\n\n```\nApp Query                    Preprocessor                    SQLite\n    │                            │                             │\n    │  SELECT * FROM vtable      │                             │\n    │ 
─────────────────────────► │                             │\n    │                            │  Rewrite to UNION of        │\n    │                            │  physical tables            │\n    │                            │ ──────────────────────────► │\n    │                            │                             │\n    │  ◄─────────────────────────────────────────────────────  │\n    │         Results (native speed)                           │\n```\n\nThe preprocessor intercepts SELECT queries and rewrites them into a `UNION` query combining the three physical tables, using `ROW_NUMBER()` to prioritize uncommitted/untracked changes.\n\n_Example rewritten query:_\n\n```sql\n-- User writes:\nSELECT * FROM lix_internal_state_vtable\nWHERE schema_key = 'lix_key_value'\n\n-- Preprocessor rewrites to (pseudocode):\nSELECT * FROM (\n  SELECT *, ROW_NUMBER() OVER (\n    PARTITION BY entity_id\n    ORDER BY priority\n  ) AS rn\n  FROM (\n    -- Priority 1: Uncommitted transaction state\n    SELECT *, 1 AS priority\n    FROM lix_internal_transaction_state\n    WHERE schema_key = 'lix_key_value'\n\n    UNION ALL\n\n    -- Priority 2: Untracked state\n    SELECT *, 2 AS priority\n    FROM lix_internal_state_all_untracked\n    WHERE schema_key = 'lix_key_value'\n\n    UNION ALL\n\n    -- Priority 3: Committed state from schema-specific cache table\n    SELECT *, 3 AS priority\n    FROM lix_internal_state_cache_v1_lix_key_value\n  )\n) WHERE rn = 1\n```\n\n### Current Write Path (Slow)\n\n```\nApp Query                    SQLite                      JavaScript\n    │                          │                             │\n    │  INSERT INTO vtable      │                             │\n    │ ───────────────────────► │                             │\n    │                          │  xUpdate() callback         │\n    │                          │ ──────────────────────────► │\n    │                          │                             │  ┌─────────────────┐\n    │         
                 │  SELECT (validation)        │  │ Per-row loop:   │\n    │                          │ ◄────────────────────────── │  │  • 1 timestamp  │\n    │                          │                             │  │  • 3-5 schema   │\n    │                          │  SELECT (FK check)          │  │  • N FK checks  │\n    │                          │ ◄────────────────────────── │  │  • N unique     │\n    │                          │                             │  │  • 1 insert     │\n    │                          │  INSERT (transaction state) │  └─────────────────┘\n    │                          │ ◄────────────────────────── │\n    │                          │         ...repeat...        │\n```\n\nEach write triggers `xUpdate` in JavaScript, which executes multiple synchronous queries back into SQLite for validation.\n\n### Query Breakdown Per Write\n\nFrom [`validate-state-mutation.ts`](https://github.com/opral/monorepo/blob/bbcb3b551f4d5cbf47f52eb8bc2846c3a5c0c411/packages/lix/sdk/src/state/vtable/validate-state-mutation.ts) and [`vtable.ts`](https://github.com/opral/monorepo/blob/bbcb3b551f4d5cbf47f52eb8bc2846c3a5c0c411/packages/lix/sdk/src/state/vtable/vtable.ts):\n\n| Phase                    | Queries                  |\n| ------------------------ | ------------------------ |\n| Timestamp                | 1                        |\n| Version existence check  | 1                        |\n| Schema retrieval         | 1-2                      |\n| JSON Schema validation   | (in-memory via AJV)      |\n| Primary key uniqueness   | 1                        |\n| Unique constraints       | 1 per constraint         |\n| Foreign key constraints  | 2-3 per FK               |\n| Transaction state insert | 1                        |\n| File cache update        | 0-2 (if file_descriptor) |\n\n**Total: ~10-25 queries per row**, depending on schema complexity.\n\n## Problem\n\n**Write performance is poor.** A single logical write from the application results 
in:\n\n1. One JS ↔ WASM boundary crossing to enter `xUpdate`\n2. 10-25 internal SQL queries inside `xUpdate` for validation and bookkeeping\n3. Each internal query crosses the JS ↔ WASM boundary again\n\nFor bulk operations, this scales linearly: inserting 1,000 rows triggers 1,000 `xUpdate` calls and 10,000-25,000 boundary crossings.\n\n### Quantifying the Problem\n\nBased on the existing benchmark suite (`vtable.insert.bench.ts`, `commit.bench.ts`):\n\n| Operation            | Current Behavior (bench.base.json)      |\n| -------------------- | --------------------------------------- |\n| Single row insert    | ~15.3ms (state_by_version insert)       |\n| 10-row chunk insert  | ~39ms (state_by_version 10-row chunk)   |\n| 100-row chunk insert | ~344ms (state_by_version 100-row chunk) |\n\n**Target**: A single mutation that writes ~100 rows should complete in <50ms.\n\nWhy this target:\n\n- 50ms leaves another ~50ms in the 100ms UI budget for other work (rendering, effects).\n- 100 rows matches typical document transactions and keeps bulk edits responsive.\n\n## Proposal\n\nExtend the preprocessor to handle `INSERT`, `UPDATE`, and `DELETE` statements, bypassing the vtable entirely.\n\n### Write Path\n\n```\nApp Query                    Preprocessor                    SQLite\n    │                            │                             │\n    │  INSERT INTO vtable        │                             │\n    │ ─────────────────────────► │                             │\n    │                            │  1. Parse SQL               │\n    │                            │  2. Extract mutation rows   │\n    │                            │  3. JSON Schema validate    │\n    │                            │     (in-memory)             │\n    │                            │  4. File change detection   │\n    │                            │     (plugin callbacks)      │\n    │                            │  5. 
Build bulk SQL with     │\n    │                            │     constraint checks       │\n    │                            │ ──────────────────────────► │\n    │                            │    Single optimized query   │\n    │  ◄─────────────────────────────────────────────────────  │\n    │         Done (single boundary crossing)                  │\n```\n\n### Pseudocode Flow\n\n```typescript\nasync function execute(sql: string): Promise<string> {\n  // 1. Parse the incoming SQL\n  const ast = parse(sql);\n  if (!isMutation(ast)) return sql; // Pass through\n\n  // 2. Extract target table and mutation type\n  const { table, operation, rows } = extractMutation(ast);\n\n  // 3. Resolve values (handle subqueries, defaults, etc.)\n  const resolvedRows = await resolveRowValues(ast, rows);\n\n  // 4. In-memory JSON Schema validation\n  for (const row of resolvedRows) {\n    const schema = getStoredSchema(row.schema_key);\n    validateJsonSchema(row.snapshot_content, schema); // throws on error\n  }\n\n  // 5. File change detection (for file mutations)\n  const detectedChanges: MutationRow[] = [];\n  for (const row of resolvedRows) {\n    if (row.schema_key === \"lix_file\") {\n      const plugin = getMatchingPlugin(row);\n      const changes = plugin.detectChanges({\n        after: row.snapshot_content,\n        // Provide a query function that reads from pending state\n        querySync: createPendingStateQuery(resolvedRows),\n      });\n      detectedChanges.push(...changes);\n    }\n  }\n  const allRows = [...resolvedRows, ...detectedChanges];\n\n  // 6. 
Build optimized SQL with constraint validation\n  const targetTable = determineTargetTable(allRows); // transaction_state or untracked\n\n  const optimizedSql = `\n    -- Constraint validation (fails entire transaction on violation)\n    SELECT CASE\n      WHEN EXISTS (${buildForeignKeyValidation(allRows)})\n      THEN RAISE(ABORT, 'Foreign key constraint failed')\n    END;\n\n    SELECT CASE\n      WHEN EXISTS (${buildUniqueConstraintValidation(allRows)})\n      THEN RAISE(ABORT, 'Unique constraint failed')\n    END;\n\n    -- Bulk insert into physical table\n    INSERT INTO ${targetTable} (entity_id, schema_key, file_id, ...)\n    VALUES ${allRows.map(formatRow).join(\", \")}\n    ON CONFLICT (entity_id, schema_key, file_id, version_id)\n    DO UPDATE SET snapshot_content = excluded.snapshot_content, ...;\n  `;\n\n  const result = sqlite.exec(optimizedSql);\n\n  emit(\"onStateCommit\", allRows);\n  return result;\n}\n```\n\n### Benefits\n\n1.  **No VTable Overhead**: By bypassing `xUpdate` and `xCommit`, we eliminate the costly JS ↔ WASM boundary crossings for every row.\n2.  **Elimination of `lix_internal_transaction_state`**: Since we write directly to the physical tables within the user's transaction, we no longer need a separate table to stage uncommitted changes. The underlying SQL database handles the transaction isolation for us.\n3.  **Bulk Performance**: Batch inserts (e.g., `INSERT INTO ... VALUES (...), (...)`) are handled as a single efficient SQL operation. In the vtable approach, SQLite loops and calls `xUpdate` for _each row_ individually, preventing bulk optimizations.\n\n### Downsides / Risks\n\n1.  **Complexity of SQL Rewriting**: The preprocessor must correctly parse and rewrite potentially complex SQL statements, including handling edge cases.\n\n### Bonus: Using Postgres or any other SQL database as a backend\n\nRelying purely on preprocessing, rather than on SQLite (WASM build) specific APIs, enables Lix to use any SQL database as a backend, e.g. 
PostgreSQL, Turso, or MySQL.\n\n```ts\nconst lix = await openLix({\n  environment: new PostgreSQL({ ... }),\n});\n```\n\nFor Node.js we wouldn't need to build a SQLite WASM <-> FS bridge.\n\nInstead, we can use `better-sqlite3` or `node-postgres` directly. Their performance is significantly better than that of the WASM build, which, for example, lacks WAL mode.\n\n```ts\nconst lix = await openLix({\n  environment: new BetterSqlite3({ ... }),\n});\n```\n"
  },
  {
    "path": "rfcs/002-rewrite-in-rust/index.md",
    "content": "---\ndate: \"2025-11-30\"\n---\n\n# Implement the Lix Engine in Rust\n\n## Summary\n\n[RFC 001](../001-preprocess-writes/index.md) proposes extending the SQL preprocessor to handle writes. This RFC proposes implementing the **Lix Engine** - the core layer responsible for SQL preprocessing, validation, and rewriting - in Rust.\n\n## Goals\n\n1. **Leverage existing Rust libraries** - Rust has production-grade SQL parsers (`sqlparser-rs`), CEL evaluators (`cel-rust`), and JSON Schema validators that don't exist in JS. Our custom JS SQL parser is fragile and limited.\n\n2. **Portable engine for multi-language bindings** - A Rust engine can be exposed to JS (NAPI-RS, WASM), Python (PyO3), and other languages. Implementing in Rust now means the core is written once.\n\n## Non-Goals\n\n- **Deviate from SQLite dialect** - SQLite is the target for now. While `sqlparser-rs` supports multiple dialects, the initial implementation strictly targets SQLite.\n\n## Context\n\nRFC 001 establishes that:\n\n1. Lix is moving to a preprocessor-driven architecture that rewrites SQL against virtual tables into SQL against physical tables.\n2. The preprocessor must handle both reads and writes, including parsing SQL, extracting mutations, validating schemas/constraints, and emitting optimized SQL.\n3. A custom JS SQL parser is a source of fragility.\n\nThe question is: **implement this in JavaScript or Rust?**\n\n## Proposal\n\nImplement the Lix Engine in Rust.\n\n### Architecture\n\n```\n┌─────────────────────────────────────────────────────────────┐\n│                     SDK (JS/TS, Python, etc.)               │\n│  - High-level API (openLix, lix.db.select, etc.)            
│\n│  - Owns SQLite connection (WASM or native)                  │\n│  - Provides execute callback to engine                      │\n└─────────────────────────────┬───────────────────────────────┘\n                              │ engine.execute(sql, params)\n                              ▼\n┌─────────────────────────────────────────────────────────────┐\n│                     Engine (Rust)                           │\n│  - SQL parsing & rewriting (sqlparser-rs)                   │\n│  - Schema validation (JSON Schema, CEL)                     │\n│  - Calls host.execute(sql) via callback                     │\n│  - Calls host.detectChanges() for plugins                   │\n└─────────────────────────────┬───────────────────────────────┘\n                              │ callback: host.execute(sql)\n                              ▼\n┌─────────────────────────────────────────────────────────────┐\n│                 SQL Database (SQLite)                       │\n│  - Physical storage                                         │\n│  - Transaction management                                   │\n│  - Index/query execution                                    │\n└─────────────────────────────────────────────────────────────┘\n```\n\n**Key design decisions:**\n\n- SDK owns SQLite - no bundling concerns in the engine\n- Engine controls flow - can run multiple queries internally via host callback\n- Preprocessing is internal - SDK never sees intermediate SQL\n\n### Engine API\n\nFrom the SDK's perspective:\n\n```typescript\nconst engine = createEngine({\n  execute: (sql: string, params: unknown[]) => sqlite.exec(sql, params),\n  detectChanges: (pluginId: string, before: Uint8Array, after: Uint8Array) =>\n    plugin.detectChanges({ before, after }),\n});\n\nconst result = engine.execute(\"INSERT INTO messages ...\", [params]);\n```\n\nThe Rust engine exposes bindings via:\n\n- **NAPI-RS** for Node.js (native addon)\n- **WASM** for browser environments\n- **C FFI** for other languages (Python via PyO3, etc.)\n\n### 
Implementation\n\nThe engine uses these Rust libraries:\n\n#### 1. SQL Parsing - `sqlparser-rs`\n\n```rust\nuse sqlparser::dialect::SQLiteDialect;\nuse sqlparser::parser::Parser;\n\nlet dialect = SQLiteDialect {};\nlet statements = Parser::parse_sql(&dialect, sql)?;\n\nfor statement in statements {\n    match statement {\n        Statement::Query(query) => { /* rewrite SELECT */ }\n        Statement::Insert(_) => { /* rewrite INSERT */ }\n        Statement::Update(_) => { /* rewrite UPDATE */ }\n        Statement::Delete(_) => { /* rewrite DELETE */ }\n        other => { /* passthrough PRAGMA, etc. */ }\n    }\n}\n```\n\n#### 2. CEL Validation - `cel-rust`\n\n```rust\nuse cel_interpreter::{Context, Program};\n\nlet program = Program::compile(\"data.amount > 0 && data.amount < 1000\")?;\nlet mut context = Context::default();\ncontext.add_variable(\"data\", row_data);\nlet result = program.execute(&context)?;\n```\n\n#### 3. JSON Schema Validation - `jsonschema`\n\n```rust\nuse jsonschema::JSONSchema;\n\nlet schema = serde_json::from_str(schema_json)?;\nlet compiled = JSONSchema::compile(&schema)?;\ncompiled.validate(&row_data)?;\n```\n\n#### 4. Host Plugin Callbacks\n\n```rust\npub trait HostBindings {\n    fn execute(&self, sql: &str, params: &[Value]) -> Result<Vec<Row>>;\n    fn detect_changes(&self, plugin_id: &str, before: &[u8], after: &[u8]) -> Result<Vec<Change>>;\n}\n\n// During file mutations, call back to host for plugin logic\nfn detect_file_changes(rows: &[MutationRow], host: &impl HostBindings) -> Result<Vec<Change>> {\n    for row in rows.iter().filter(|r| r.schema_key == \"lix_file\") {\n        let changes = host.detect_changes(&row.plugin_id, &row.before, &row.after)?;\n        // ... 
collect changes\n    }\n}\n```\n\n### Pseudocode: Full Pipeline\n\n```rust\npub fn execute(sql: &str, params: &[Value], host: &impl HostBindings) -> Result<Vec<Row>> {\n    let statements = Parser::parse_sql(&SQLiteDialect {}, sql)?;\n\n    for statement in statements {\n        match statement {\n            Statement::Insert(_) | Statement::Update(_) | Statement::Delete(_) => {\n                // 1. Extract mutation details\n                let mutation = extract_mutation(statement)?;\n\n                // 2. Materialize rows (resolve subqueries via host.execute)\n                let rows = materialize_rows(&mutation, host)?;\n\n                // 3. Validate in-memory (JSON Schema + CEL)\n                validate_rows(&rows, &schemas, &cel_env)?;\n\n                // 4. Detect file changes via host plugin callback\n                let plugin_changes = detect_file_changes(&rows, host)?;\n\n                // 5. Rewrite to physical tables and execute\n                let rewritten_sql = build_write_sql(&rows, &plugin_changes)?;\n                host.execute(&rewritten_sql, &[])?;\n            }\n            Statement::Query(query) => {\n                // Rewrite vtable references to physical tables\n                let rewritten = rewrite_select(query)?;\n                return host.execute(&rewritten.to_string(), params);\n            }\n            other => {\n                // Passthrough (PRAGMA, etc.)\n                return host.execute(&other.to_string(), params);\n            }\n        }\n    }\n\n    // Only mutation statements fall through the loop; they produce no rows\n    Ok(Vec::new())\n}\n```\n"
  },
  {
    "path": "rfcs/003-canonical-lix-value/index.md",
    "content": "# RFC 003: Two-Layer Value Model With Canonical Boundaries\n\n## Status\nAccepted\n\n## Context\nValue-shape drift across JS/wasm/backend/CLI boundaries introduced brittle decode logic and inconsistent handling of `lix_file.data`.\n\nObserved drift patterns included:\n- Mixed object wrappers and raw primitives.\n- Binary values represented as `Uint8Array` in some APIs and `0x...` hex strings in CLI JSON.\n- Adapter-level implicit unwrapping that masked inconsistencies.\n\n## Decision\nAdopt a two-layer value model:\n\n1. Runtime contract for app-facing SQL APIs (`LixRuntimeValue`).\n2. Canonical contract for boundary/wire/JSON surfaces (`LixCanonicalValue`).\n\nRuntime contract:\n\n```ts\nexport type LixRuntimeValue =\n  | null\n  | boolean\n  | number\n  | string\n  | Uint8Array;\n\nexport type LixRuntimeQueryResult = {\n  columns: string[];\n  rows: LixRuntimeValue[][];\n};\n```\n\nCanonical contract:\n\n```ts\nexport type LixCanonicalValue =\n  | { kind: \"null\"; value: null }\n  | { kind: \"bool\"; value: boolean }\n  | { kind: \"int\"; value: number }\n  | { kind: \"float\"; value: number }\n  | { kind: \"text\"; value: string }\n  | { kind: \"blob\"; base64: string };\n\nexport type LixCanonicalQueryResult = {\n  columns: string[];\n  rows: LixCanonicalValue[][];\n};\n```\n\n## Invariants\n- Runtime SQL APIs accept/return `LixRuntimeValue`.\n- Boundary/wire/JSON APIs accept/return `LixCanonicalValue`.\n- `LixCanonicalQueryResult.columns` is always present and always a string array.\n- `LixCanonicalQueryResult.rows` is always present and always a 2D array.\n- `int.value` must be a finite integer and fit in signed 64-bit range.\n- `float.value` must be a finite number.\n- `blob.base64` uses RFC 4648 standard base64.\n- `lix_file.data` is representation-stable:\n  - runtime: `Uint8Array`\n  - canonical: `{ kind: \"blob\", base64: string }`\n\n## Consequences\n- Runtime SQL APIs stay ergonomic and efficient (raw values, no base64 overhead in hot 
paths).\n- Canonical boundary format remains deterministic for CLI/IPC/JSON transports.\n- Legacy mixed forms are rejected at strict boundaries.\n- Conversion is explicit at boundaries via dedicated runtime<->canonical codecs.\n"
  },
  {
    "path": "skills/cli/SKILL.md",
    "content": "---\nname: lix\ndescription: Use this skill when working with .lix repositories via the lix CLI.\n---\n\n# Lix Skill\n\nUse this skill when working with `.lix` repositories.\n\n## Goal\n\nRead and write data in a Lix repo safely through the Lix CLI.\n\n## Concepts\n\n- Files:\n  - Files are exposed through `lix_file` (active version) and `lix_file_by_version` (explicit version).\n  - `data` is bytes; use `lix_text_encode('...')` for text payloads.\n- Entities:\n  - Entities are schema-scoped records in `lix_state` / `lix_state_by_version`.\n  - They are keyed by `schema_key` + `entity_id` + `file_id`, with schemas discoverable via `lix_registered_schema`.\n- Checkpoints:\n  - A checkpoint is a committed history boundary (a saved change set) used to anchor history/diffs.\n  - History views (`*_history`) are read-only projections over these committed changes.\n- Working changes:\n  - Uncheckpointed changes are exposed via `lix_working_changes`.\n  - This is the primary surface for “what changed since last checkpoint”.\n- Versions:\n  - Versions are the name for \"branches\". Version is used because non technical users dont know what a branch is.\n  - `lix_active_version` selects the current one; `lix_version` lists available versions.\n\n## Rules (non-negotiable)\n\n1. Never use `sqlite3` (or any direct SQLite client) on `.lix` files.\n2. Always use the `lix` CLI.\n3. Always pass `--path` to avoid operating on the wrong repo.\n4. 
For `lix_file.data`, write bytes only:\n   - text: `lix_text_encode('...')`\n   - hex blob: `X'...'`\n   - blob parameter\n\n## CLI quickstart\n\nBuild/run from source:\n\n```sh\ncd /Users/samuel/git-repos/flashtype/submodule/lix/packages/cli\ncargo run --bin lix -- --help\n```\n\n## Canonical commands\n\nRead:\n\n```sh\nlix --path /path/to/repo.lix sql execute \"SELECT id, path FROM lix_file ORDER BY path;\"\n```\n\nWrite text file data:\n\n```sh\nlix --path /path/to/repo.lix sql execute \\\n  \"INSERT INTO lix_file (path, data) VALUES ('/hello.md', lix_text_encode('hello'));\"\n```\n\nRead query via stdin:\n\n```sh\ncat <<'SQL' | lix --path /path/to/repo.lix sql execute -\nSELECT path, hidden\nFROM lix_file\nORDER BY path;\nSQL\n```\n\n## Common gotchas\n\n- `.lix` is the repository. There is no checked-out working directory.\n- `lix_file` uses `id` (not `file_id`).\n- Some views are read-only (`*_history`).\n- Unknown table/column errors should be fixed by checking `lix_*` table/column names first.\n"
  }
]